| Column | Type | Min length | Max length |
| --- | --- | --- | --- |
| url | string | 14 | 2.42k |
| text | string | 100 | 1.02M |
| date | string | 19 | 19 |
| metadata | string | 1.06k | 1.1k |
https://www.aimsciences.org/article/doi/10.3934/era.2020106?viewType=html
# American Institute of Mathematical Sciences

doi: 10.3934/era.2020106

## The global supersonic flow with vacuum state in a 2D convex duct

School of Mathematical Sciences and Mathematical Institute, Nanjing Normal University, Nanjing 210023, China

* Corresponding author: Gang Xu

Received June 2020. Revised August 2020. Published September 2020.

Fund Project: The third author is supported by NSFC grants No. 11571141 and No. 11971237 and by the Natural Science Foundation of the Jiangsu Higher Education Institutions of China (No. 19KJA320001).

This paper concerns the motion of a supersonic potential flow in a two-dimensional expanding duct. In the case that the two Riemann invariants are both monotonically increasing along the inlet, which means the gas spreads at the inlet, we obtain the global solution by solving the problem in the inner and border regions divided by two characteristics in the $(x, y)$-plane, and a vacuum appears at some finite location adjacent to the boundary of the duct. In addition, we point out that the vacuum here is not the so-called physical vacuum. On the other hand, for the case that at least one Riemann invariant is strictly monotonically decreasing along some part of the inlet, which means the gas is locally squeezed at the inlet, we show that the $C^1$ solution to the problem blows up at some finite location in the non-convex duct.

Citation: Jintao Li, Jindou Shen, Gang Xu. The global supersonic flow with vacuum state in a 2D convex duct. Electronic Research Archive. doi: 10.3934/era.2020106

Figures: Supersonic flow in 2D convex duct; A global smooth solution with vacuum in 2D convex duct; Inner regions and border regions; Goursat problem in inner region; The case that $y(x_{N-1}) = f(x_{N-1})$; The case that $y(x_{N-1}) = -f(x_{N-1})$; Solution in border region; Blowup in 2D straight duct; Solution in ${\Omega}_{vac}$; The regularity near vacuum boundary; The case without vacuum; The image in the $(u, v)$-plane
2020-10-25 03:10:42
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5611388087272644, "perplexity": 6285.529704029784}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107885126.36/warc/CC-MAIN-20201025012538-20201025042538-00700.warc.gz"}
http://qpolr.com/4-text.html
# Chapter 4 Manipulating text

We introduced text in the previous chapter. In this chapter, we will show how to manipulate text as strings and factors. We will use the states dataset from the poliscidata package. For more on this data, see Chapter 5.

library("poliscidata")
states <- states

You can use View(states) to get a sense of the 50 observations and the 135 variables.

## 4.1 Strings

We will use the package stringr. It is part of the tidyverse but can also be loaded individually. As you should already have installed the tidyverse by now, it is not necessary to install the package again.

library("stringr")

Some of the functions have relatively simple purposes, such as str_to_upper() (which converts all characters in a string to upper case) and str_to_lower() (which converts all characters in a string to lower case).

str_to_upper("Quantitative Politics with R")
[1] "QUANTITATIVE POLITICS WITH R"

str_to_lower("Quantitative Politics with R")
[1] "quantitative politics with r"

Here, we are going to look at cigarette taxes, namely whether the cigarette taxes are in the low, middle or high category. To look at this we will use the cig_tax12_3 variable in the states data frame.

table(states$cig_tax12_3)

We can see that the names for these categories are LoTax, MidTax and HiTax. With the code below we use str_replace_all() to replace the characters with new characters, e.g. HiTax becomes High taxes.

states$cig_taxes <- str_replace_all(states$cig_tax12_3, c("HiTax" = "High taxes", "MidTax" = "Middle taxes", "LoTax" = "Low taxes"))

table(states$cig_taxes)

  High taxes    Low taxes Middle taxes
          15           17           18

For examples of more of the functions available in the stringr package, see this introduction.

## 4.2 Factors

The cigarette taxes we have worked with above are categorical data that we can order. To work with ordered and unordered categories, R provides the factor class, which makes these categories easy to work with. For factors, we are going to use the package forcats. This package is also part of the tidyverse.

library("forcats")

We create a new variable, cig_taxes_cat, as a factor variable and then see what levels we have (and the order of these).

states$cig_taxes_cat <- factor(states$cig_taxes)
levels(states$cig_taxes_cat)
[1] "High taxes"   "Low taxes"    "Middle taxes"

As we can see, these levels are in the wrong order (sorted alphabetically). We can use fct_relevel() to specify the order of the categories (from low to high).

states$cig_taxes_cat <- fct_relevel(states$cig_taxes_cat, "Low taxes", "Middle taxes", "High taxes")
levels(states$cig_taxes_cat)
[1] "Low taxes"    "Middle taxes" "High taxes"

This will become useful later on when we want to make sure that the categories in a data visualisation have the correct order. For additional guidance on the functions available in the forcats package, see https://forcats.tidyverse.org/.
2019-08-18 23:11:32
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.39928388595581055, "perplexity": 1856.9393314446888}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027314353.10/warc/CC-MAIN-20190818231019-20190819013019-00204.warc.gz"}
https://www.gradesaver.com/textbooks/math/trigonometry/CLONE-68cac39a-c5ec-4c26-8565-a44738e90952/chapter-4-quiz-sections-4-1-4-2-page-164/1
## Trigonometry (11th Edition) Clone

The amplitude is $4$, the period is $\pi$, the vertical translation is $3$ units up since $c$ is greater than zero, and the phase shift is $\frac{\pi}{4}$ units to the left since $d$ is less than zero. We first write the equation in the form $y=c+a \sin [b(x-d)]$. Therefore, $y=3-4\sin(2x+\frac{\pi}{2})$ becomes $y=3-4\sin [2(x+\frac{\pi}{4})]$. Comparing the two equations, $a=-4, b=2, c=3$ and $d=-\frac{\pi}{4}$. The amplitude is $|a|=|-4|=4$. The period is $\frac{2\pi}{b}=\frac{2\pi}{2}=\pi$. The vertical translation is $c=3$. The phase shift is $|d|=|-\frac{\pi}{4}|=\frac{\pi}{4}$. Therefore, the amplitude is $4$, the period is $\pi$, the vertical translation is $3$ units up since $c$ is greater than zero, and the phase shift is $\frac{\pi}{4}$ units to the left since $d$ is less than zero.
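As a quick sanity check (a minimal sketch, not part of the original solution), SymPy confirms that the rewritten form agrees with the original equation and that the amplitude and period follow from $|a|$ and $2\pi/b$:

```python
from sympy import symbols, sin, pi, simplify, Abs

x = symbols('x')
original = 3 - 4*sin(2*x + pi/2)
rewritten = 3 - 4*sin(2*(x + pi/4))

print(simplify(original - rewritten))  # 0, so the two forms are identical
a, b = -4, 2
print(Abs(a))      # amplitude: 4
print(2*pi / b)    # period: pi
```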
2020-12-01 00:10:20
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9601081609725952, "perplexity": 134.45724525756256}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141515751.74/warc/CC-MAIN-20201130222609-20201201012609-00693.warc.gz"}
http://planetcalc.com/938/
# Altitude Pressure

##### This calculator uses the barometric formula to find the pressure at a given altitude

The equation for the variation of barometric pressure with height, called the barometric formula, has the form $P=P_0e^{\frac{-\mu gh}{RT}}$, where

$\mu$ - molar mass of Earth's air, 0.0289644 kg/mol
$g$ - gravitational acceleration, 9.80665 m/s²
$h$ - height difference, meters
$R$ - universal gas constant for air, 8.31432 N·m/(mol·K)
$T$ - air temperature, K

The default value for $P_0$ is 760 mmHg, i.e. standard barometric pressure, 101.325 kPa or 1 atmosphere. It corresponds to 0 meters above sea level and can be left as is. Change the height (altitude above sea level) and temperature to get the result. Note that this is a theoretical value; in reality, atmospheric pressure depends not only on altitude and temperature but also on other factors such as humidity and weather conditions.
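A minimal Python sketch of the same formula, using the constants listed above; the example altitude and temperature are arbitrary choices, not values from the calculator:

```python
import math

def pressure_at_altitude(h, T, p0=760.0):
    """Barometric formula P = P0 * exp(-mu*g*h / (R*T)).

    h in meters, T in kelvin, p0 in mmHg (default: standard pressure).
    """
    mu = 0.0289644   # molar mass of Earth's air, kg/mol
    g = 9.80665      # gravitational acceleration, m/s^2
    R = 8.31432      # universal gas constant for air, N*m/(mol*K)
    return p0 * math.exp(-mu * g * h / (R * T))

# e.g. 1000 m above sea level at 15 degC (288.15 K): roughly 675 mmHg
print(round(pressure_at_altitude(1000, 288.15), 1))
```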
2015-04-01 16:31:04
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 7, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8835360407829285, "perplexity": 3468.311704021079}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-14/segments/1427131305143.64/warc/CC-MAIN-20150323172145-00263-ip-10-168-14-71.ec2.internal.warc.gz"}
https://www.gradesaver.com/textbooks/math/prealgebra/prealgebra-7th-edition/chapter-5-section-5-5-fractions-decimals-and-order-of-operations-exercise-set-page-383/87
## Prealgebra (7th Edition) If $y=0.3$ and $z=-2.4$ then by substituting we get: $4y-z=4\times0.3-(-2.4)=1.2+2.4=3.6$
2018-07-19 18:03:04
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9959856867790222, "perplexity": 2872.3184389439725}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676591150.71/warc/CC-MAIN-20180719164439-20180719184439-00398.warc.gz"}
https://socratic.org/questions/how-do-you-find-all-the-critical-points-to-graph-4x-2-9y-2-36-0-including-vertic
# How do you find all the critical points to graph -4x^2 + 9y^2 - 36 = 0 including vertices, foci and asymptotes?

Feb 16, 2018

The center is located at the origin, the vertices are at $\left(0 , 2\right)$ and $\left(0 , - 2\right)$, and the foci are at $\left(0 , 3.6\right)$ and $\left(0 , - 3.6\right)$; the asymptotes have slopes $\pm \frac{2}{3}$, i.e. $y = \pm \frac{2}{3} x$.

Graph: graph{y^2/4-x^2/9=1 [-10, 10, -5, 5]}

#### Explanation:

We can rearrange this equation into $9 {y}^{2} - 4 {x}^{2} = 36$ to get the correct format. I added $36$ to both sides, as this will become necessary down the road. Next, I will divide both sides by $36$: $\frac{9 {y}^{2}}{36} - \frac{4 {x}^{2}}{36} = 1$. Simplifying the remaining equation gives ${y}^{2} / 4 - {x}^{2} / 9 = 1$.

The next step is to find $a$, $b$, and $c$ (excluding the center; it is at the origin). In order to do so, we have to define where they are: ${y}^{2} / {a}^{2} - {x}^{2} / {b}^{2} = 1$. $a$ is always the square root of the first denominator in hyperbolic equations, so it would be $2$. $b$ would be $3$ ($\sqrt{9} = 3$). $c$ can be found, in hyperbolic equations, through the relation ${a}^{2} + {b}^{2} = {c}^{2}$, which gives $c = \sqrt{13} \approx 3.6$.

To find the foci, add and subtract the value of $c$ along the axis of the variable divided by ${a}^{2}$ (which is $y$); the foci lie directly above and below the center. The foci are $\left(0 , 3.6\right)$ and $\left(0 , - 3.6\right)$.

To find the vertices, add and subtract your $a$ value along the same axis ($y$, again). Vertices: $\left(0 , 2\right)$ and $\left(0 , - 2\right)$.

The slope of the asymptotes depends on the format: if ${y}^{2} / {a}^{2} - {x}^{2} / {b}^{2} = 1$, the slopes of the asymptotes are $\pm \frac{a}{b}$. (If your equation looks like this: ${x}^{2} / {a}^{2} - {y}^{2} / {b}^{2} = 1$, the slope will be $\pm \frac{b}{a}$.)

The slopes of the asymptotes are $\pm \frac{2}{3}$.
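A small Python check of the numbers derived above (plain arithmetic, not part of the original answer):

```python
import math

# y^2/4 - x^2/9 = 1  ->  a^2 = 4, b^2 = 9
a = math.sqrt(4)               # 2.0
b = math.sqrt(9)               # 3.0
c = math.sqrt(a**2 + b**2)     # sqrt(13), about 3.61

print((0, a), (0, -a))                         # vertices (0, 2) and (0, -2)
print((0, round(c, 2)), (0, -round(c, 2)))     # foci, approximately (0, 3.61) and (0, -3.61)
print(a / b)                                   # asymptote slopes are +/- a/b = +/- 2/3
```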
2019-05-20 11:22:46
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 37, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8569062352180481, "perplexity": 224.11173594558696}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232255943.0/warc/CC-MAIN-20190520101929-20190520123929-00109.warc.gz"}
https://nbviewer.jupyter.org/github/nborwankar/LearnDataScience/blob/master/notebooks/A2.%20Linear%20Regression%20-%20Data%20Exploration%20-%20Lending%20Club.ipynb
# Linear Regression Data Exploration: Lending Club

### How can I predict interest rates based on borrower and loan attributes?

The Lending Club is a peer-to-peer lending site where members make loans to each other. The site makes anonymized data on loans and borrowers publicly available. We're going to use these data to explore how the interest rate charged on loans depends on various factors.

We want to explore these data, try to gain some insights into what might be useful in creating a linear regression model, and separate out "the noise".

We follow these steps, something we will do in future for other data sets as well.

1. Browse the data
2. Data cleanup
3. Visual exploration
4. Model derivation

## I. Browse Data

The data have the following variables (with data type and explanation of meaning):

• Amount.Requested - numeric. The amount (in dollars) requested in the loan application.
• Amount.Funded.By.Investors - numeric. The amount (in dollars) loaned to the individual.
• Interest.Rate - character. The lending interest rate charged to the borrower.
• Loan.Length - character. The length of time (in months) of the loan.
• Loan.Purpose - categorical variable. The purpose of the loan as stated by the applicant.
• Debt.to.Income.Ratio - character. The % of the consumer's gross income going toward paying debts.
• State - character. The abbreviation for the U.S. state of residence of the loan applicant.
• Home.ownership - character. Indicates whether the applicant owns, rents, or has a mortgage.
• Monthly.income - categorical. The monthly income of the applicant (in dollars).
• FICO.Range - categorical (expressed as a string label, e.g. "650-655"). A range indicating the applicant's FICO score.
• Open.CREDIT.Lines - numeric. The number of open lines of credit at the time of application.
• Revolving.CREDIT.Balance - numeric. The total amount outstanding on all lines of credit.
• Inquiries.in.the.Last.6.Months - numeric. Number of credit inquiries in the previous 6 months.
• Employment.Length - character. Length of time employed at the current job.

## II. Data Cleanup

We find the data are "messy", i.e. they aren't cleanly prepared for import - for instance, numeric columns might have some strings in them. This is very common in raw data, especially that obtained from web sites. Let's take a look. We're going to look at the first five rows of some specific columns that show the data dirtiness issues.

In [1]:
%matplotlib inline
# first we ingest the data from the source on the web
# this contains a reduced version of the data set from Lending Club
import pandas as pd

In [3]:
loansData['Interest.Rate'][0:5] # first five rows of Interest.Rate
Out[3]:
81174     8.90%
99592    12.12%
80059    21.98%
15825     9.99%
33182    11.71%
Name: Interest.Rate, dtype: object

In [4]:
loansData['Loan.Length'][0:5] # first five rows of Loan.Length
Out[4]:
81174    36 months
99592    36 months
80059    60 months
15825    36 months
33182    36 months
Name: Loan.Length, dtype: object

We see here that:

• the interest rate information has "%" symbols in it.
• loan length has " months" in it

Other than that we can also see (exploration exercise):

• there are a couple of values that are so large they must be typos
• some values are missing "NA" values, i.e. not available.
• the FICO Range is really a numeric entity but is represented as a categorical variable in the data.
In [5]:
loansData['FICO.Range'][0:5] # first five rows of FICO.Range
Out[5]:
81174    735-739
99592    715-719
80059    690-694
15825    695-699
33182    695-699
Name: FICO.Range, dtype: object

FICO Range is represented as a categorical variable in the data. We need to change the categorical variable for FICO Range into something numeric so that we can use it in our calculations. As it stands, the values are merely labels, and while they convey meaning to humans, our software can't interpret them as the numbers they really represent.

So as a first step, we convert them from categorical variables to strings. So the abstract entity 735-739 becomes a string "735-739". Then we parse the strings so that a range such as "735-739" gets split into two numbers (735, 739). Finally we pick a single number to represent this range. We could choose the midpoint, but since the ranges are narrow we can get away with choosing one of the endpoints as a representative. Here we arbitrarily pick the lower limit and, with some imperious hand waving, assert that it is not going to make a major difference to the outcome. In a further flourish of imperiousness we could declare that "the proof is left as an exercise to the reader". But in reality there is really no such formal "proof" other than trying it out in different ways and convincing oneself. If we wanted to be mathematically conservative we could take the midpoint of the range as a representative, and this would satisfy most pointy-haired mathematician bosses that "Data Science Dilbert" might encounter.

To summarize - cleaning our data involves:

• removing % signs from rates
• removing the word " months" from loan length
• managing outliers - remove such rows in this case
• managing NA - remove such rows in this case

There is one especially high outlier with monthly income greater than $100K. This is likely to be a typo and is removed as a data item. There is also one data item with all N/A - this is also removed.

## Exercise

Actually perform each of the above steps on the dataset, i.e.:

• import the data
• remove the '%' suffix from each row
• remove the ' months' suffix from each row
• remove the outlier rows
• remove rows with NA

Save your code in a reusable manner - these are steps you'll be doing repeatedly. (A hedged sketch of these steps appears just after the histogram code below.)

## Visual Exploration

Now we are going to follow a standard set of steps in exploring data. We apply the following simple visualizations. This is something we will typically also do for other data sets we encounter in other explorations.

### Histogram

A histogram shows us the shape of the distribution of values for a single variable. On the x-axis we have the variable under question, divided into buckets or bins. This is a key feature of a histogram. The bin size is adjustable and different bin sizes give different information. A large bin size gives us an idea of the coarser-grained structure of the distribution while a smaller bin size will shine light on the finer details of the distribution. In either case we can compare distributions, or quickly identify some key hints that tell us how best to proceed. With the distribution of FICO scores we see the histogram below.

In [6]:
import matplotlib.pyplot as plt
import pandas as pd
plt.figure()
fico = loansmin['FICO.Score']
p = fico.hist()

Why do we look at FICO score? Because we know from domain knowledge that this is the primary determinant of interest rate.
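Before interpreting the histogram, here is a minimal pandas sketch of the cleanup exercise above, producing the loansmin frame used in the plots from here on. The file path, the capitalization of the monthly-income column, and the exact outlier threshold are assumptions, not taken from the original notebook:

```python
import pandas as pd

# path is an assumption -- point it at the reduced Lending Club CSV used above
loansData = pd.read_csv('loansData.csv')

# remove the '%' suffix and convert interest rates to floats
loansData['Interest.Rate'] = loansData['Interest.Rate'].str.rstrip('%').astype(float)

# remove the ' months' suffix and convert loan lengths to integers
loansData['Loan.Length'] = loansData['Loan.Length'].str.replace(' months', '', regex=False).astype(int)

# represent each FICO range by its lower endpoint, e.g. "735-739" -> 735
loansData['FICO.Score'] = loansData['FICO.Range'].str.split('-').str[0].astype(int)

# drop the monthly-income outlier (threshold assumed) and rows with missing values;
# the column name capitalization is assumed -- the data dictionary lists it as Monthly.income
loansmin = loansData[loansData['Monthly.Income'] <= 100000].dropna()
```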
The histogram shows us that the distribution is not a normal or Gaussian distribution, but that there are some other factors that might be affecting or distorting the shape of the distribution away from the bell curve. We want to dig a little deeper.

### Box Plot

Next we take a box plot, which allows us to quickly look at the distribution of interest rates based on each FICO score range.

In [7]:
import matplotlib.pyplot as plt
import pandas as pd
plt.figure()
p = loansmin.boxplot('Interest.Rate','FICO.Score')
q = p.set_xticklabels(['640','','','','660','','','','680','','','','700', '720','','','','740','','','','760','','','','780','','','','800','','','','820','','','','840'])
q0 = p.set_xlabel('FICO Score')
q1 = p.set_ylabel('Interest Rate %')
q2 = p.set_title(' ')

First of all this tells us that there is a general downward trend in interest rate for higher FICO scores. But, given the same range of FICO scores, we see a range of interest rates rather than a single value - so it appears there are other factors determining interest rate, given the same FICO score range. We want to investigate the impact of these other drivers and quantify this impact.

What might these be? Let's use a little domain knowledge again. We know the interest rate is based on the risk posed by the borrower: the greater the risk, the greater the interest rate charged to compensate for it. Another factor that might affect risk is the size of the loan - the larger the amount, the greater the risk of non-payment and also the greater the negative impact of actual default.

We want to look at multiple factors and how they might affect the interest rate. A great way to look at multiple factors simultaneously is the scatterplot matrix. We are going to use this as the next step in visual exploration.

### Scatterplot Matrix

But first, what is it? The scatterplot matrix is a grid of plots of multiple variables against each other. It shows the relationship of each variable to the others. The ones on the diagonal don't fit this pattern. Why not? What does it mean to find the relationship of something to itself, in this context? Not much, since we are trying to determine the impact of some variable on another variable.

We're going to look at a scatterplot matrix of the five variables in our data.

In [8]:
## TRY THIS!
import pandas as pd
a = pd.scatter_matrix(loansmin, alpha=0.05, figsize=(10,10), diagonal='hist')
## Click on the line above
## Change 'hist' to 'kde' then hit shift-enter, with the cursor still in this box
## The plot will redraw - it takes a while. While it is recomputing you will see a
## message-box that says 'Kernel Busy' near the top right corner
## You can change the code and hit shift-enter to re-execute the code
## Try changing the (10,10) to (8,8) and (12,12)
## Try changing the alpha value from 0.05 to 0.5
## How does this change in alpha change your ability to interpret the data?
## Feel free to try other variations.
## If at any time you scramble the code and forget the syntax
## a copy of the original code is below. Copy and paste it in place.
## Remember to remove the hashmarks.
## a = pd.scatter_matrix(loansmin, alpha=0.05, figsize=(10,10), diagonal='hist')

In this diagram, the boxes on the diagonal contain histogram plots of the respective variable itself. So if the 3rd variable is Loan Amount then the third row and third column are the Loan Amount column and row. And the third element down the diagonal is the histogram of the Loan Amount.
To see how Loan Amount (3rd) affects Interest Rate (1st), we look for the intersection of the 3rd row and the 1st column. We also notice that we could have looked for the intersection of the 3rd column and 1st row. They have the same plot. The scatterplot matrix plot is visually symmetric about the diagonal.

Where there is some significant, useful effect we will see a noticeable trend in the scatterplot at the intersection. Where there is none we will see no noticeable trend.

What do the last two sentences mean in practice? Let's compare two plots: the first one at the intersection of the 1st row and 2nd column, and the second at the intersection of the 1st row and 4th column. In the first, FICO score shows an approximate but unmistakable linear trend. In the second, Monthly Income shows no impact as we move along the x-axis. All the dots are bunched up near one end but show no clear, linear trend like the first one. Similarly there is no obvious variation in the plot for Loan Length, while there is a distinct, increasing trend in the plot for Loan Amount.

So what does this suggest? It suggests that we should use FICO and Loan Amount in our model as independent variables, while Monthly Income and Loan Length don't seem to be too useful as independent variables.

## Conclusion

So at the end of this data alchemy exercise we have distilled our variables into two beakers - one has what we believe is relevant, the data nuggets, and the other the data dross: the variables that have no visible impact on our dependent variable. We're going to refine our output even further into a model in the next step - the analysis.
2019-12-09 01:47:22
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4555322527885437, "perplexity": 1522.5893081746485}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540517156.63/warc/CC-MAIN-20191209013904-20191209041904-00170.warc.gz"}
https://goprep.co/an-aircraft-traveling-at-600-km-h-accelerates-steadily-at-10-i-1njhpl
# An aircraft traveling at 600 km/h accelerates steadily at 10 km/h per second. Taking the speed of sound as 1100 km/h at the aircraft's altitude, how long will it take to reach the 'sound barrier'?

According to the question, we have:

Initial velocity, u = 600 km/h (given)
Final velocity, v = 1100 km/h (given)
Acceleration, a = 10 km/h per second (given)

From the relation v = u + at, on rearranging the terms we get $t=\frac{v-u}{a}$, so t = (1100 − 600)/10 = 50 seconds.

Hence it will take the aircraft 50 seconds to reach the "sound barrier".
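A one-line check of the arithmetic (a minimal sketch; the variable names are only illustrative):

```python
u, v, a = 600.0, 1100.0, 10.0  # initial speed (km/h), speed of sound (km/h), acceleration (km/h per second)
t = (v - u) / a                # time in seconds, since the acceleration is given per second
print(t)                       # 50.0
```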
2020-10-21 02:25:29
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 1, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.30835971236228943, "perplexity": 11689.80417878415}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107874637.23/warc/CC-MAIN-20201021010156-20201021040156-00497.warc.gz"}
https://news.schoolsdo.org/2016/03/algebra-1-parcc-question-graph-of-function/
# Algebra 1 PARCC question: graph of function

#### The following multiple-choice question, explained here in hopes of helping algebra students in Maryland and Illinois prepare for the PARCC test near the end of this school year, appears on the released version of PARCC's Spring 2015 test in algebra 1, here:

Which is the graph of the function $y = (x-1)^2 - 2$?

• (A)
• (B)
• (C)
• (D)

PARCC evidence statement(s) tested: A-REI.10: Understand that the graph of an equation in two variables is the set of all its solutions plotted in the coordinate plane, often forming a curve (which could be a line).

The evidence statement above references Math Practice 7: Mathematically proficient students look closely to discern a pattern or structure. Young students, for example, might notice that three and seven more is the same amount as seven and three more, or they may sort a collection of shapes according to how many sides the shapes have. Later, students will see 7 × 8 equals the well remembered 7 × 5 + 7 × 3, in preparation for learning about the distributive property. In the expression x^2 + 9x + 14, older students can see the 14 as 2 × 7 and the 9 as 2 + 7. They recognize the significance of an existing line in a geometric figure and can use the strategy of drawing an auxiliary line for solving problems. They also can step back for an overview and shift perspective. They can see complicated things, such as some algebraic expressions, as single objects or as being composed of several objects. For example, they can see 5 – 3(x – y)^2 as 5 minus a positive number times a square and use that to realize that its value cannot be more than 5 for any real numbers x and y.

The question tests students' understanding of the Common Core high school algebra standard HSA.REI.D.10 (under Reasoning with Equations and Inequalities and Represent and solve equations and inequalities graphically), which states that students should "understand that the graph of an equation in two variables is the set of all its solutions plotted in the coordinate plane, often forming a curve (which could be a line)."

Solution strategy (there are others)

Recognize the domain of the function and pick the correct representation.

On inspection, the four options to this multiple-choice question differ in two important aspects: (A) and (B) are scatter plots, and (C) and (D) are curves. Since x could equal 1.5 in the function given, (A) and (B) are ruled out: there's no point above x = 1.5 on either (A) or (B). That leaves (C) and (D). The second difference between these two is that (C) only goes between x = –1 and x = 3. Can x be 10 in this function?

$y = ((10)-1)^2 - 2$
$= 9^2 - 2$
$= 81 - 2 = 79$

Sure, x can be 10, and the function would still be defined. Since the graph for (C) does not allow for x = 10 and the graph for (D) does, given the arrows, option (D) is the only possible correct answer.

## Resources for further study

Purple Math, developed by Elizabeth Stapel, a math teacher from the St Louis area, has a four-part series on graphing quadratic functions on a coordinate plane. She explains that, while you may start your graph by plotting points at integer x values, you have to finish the graph by drawing a smooth curve in between the points so that all points in the function's domain, including those between the integers, are represented. The series starts here.
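To illustrate Stapel's point numerically, here is a minimal matplotlib sketch (the plotting window is an arbitrary assumption) that first marks the integer points of $y = (x-1)^2 - 2$ and then draws the smooth curve through every x value in between:

```python
import numpy as np
import matplotlib.pyplot as plt

f = lambda x: (x - 1)**2 - 2

x_int = np.arange(-4, 7)          # integer x values inside an arbitrary window
x_all = np.linspace(-4, 6, 400)   # the continuous domain over that same window

plt.scatter(x_int, f(x_int), zorder=3, label='integer points')
plt.plot(x_all, f(x_all), label='y = (x - 1)^2 - 2')
plt.axhline(0, color='gray', linewidth=0.5)
plt.axvline(0, color='gray', linewidth=0.5)
plt.legend()
plt.show()
```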
The Khan Academy, developed by Sal Khan, an engineer who has created a library of thousands of video lessons, has a few that demonstrate how to graph quadratic functions (parabolas) on the coordinate plane. The series starts here, with "Intro to Parabolas."

Chapter 5, Section 5-2 of Paul A Foerster's book Algebra and Trigonometry deals with quadratic function graphs. He advises, "the vertex is the most important point on the graph of a quadratic function. If you know where the vertex is, you can sketch a reasonably good parabola with very little other information."

Complete reference: Foerster, Paul A. Algebra and Trigonometry: Functions and Applications, revised edition. Addison-Wesley, 1980, 1984. The book is used in several algebra classes taught in middle and high schools in both Illinois and Maryland.

## Analysis of this question and online accessibility

The question measures knowledge of the Common Core standard it purports to measure and tests students' ability to recognize that the graph of a function in the coordinate plane includes all points in the domain, even those that fall in between the grid lines on the graph. It is considered to have a low cognitive demand.

The question can be tested online and should yield results that are as valid and reliable as those obtained on paper. The multiple-choice format may promote guessing, which casts doubt on the validity of the question.

No special accommodation challenges can be identified with this question, so the question is considered fair.

## Challenge

How would you define a function that has a domain that looks like (A), (B), or (C)? Can you identify a real-world situation that might have such a graph?

## Purpose of this series of posts

Voxitatis is developing blog posts that address every algebra 1 question released to the public by the Partnership for Assessment of Readiness for College and Careers, or PARCC, in order to help students prepare to take the test this spring. Our total release will run from February 27 through March 15, with one or two questions discussed per day. Then we'll move to geometry at the end of March, algebra 2 during the first half of April, and eighth grade during the last half of April.

Paul Katula is the executive editor of the Voxitatis Research Foundation, which publishes this blog. For more information, see the About page.
2020-09-27 15:56:39
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 4, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.33722665905952454, "perplexity": 1370.422996692412}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400283990.75/warc/CC-MAIN-20200927152349-20200927182349-00096.warc.gz"}
http://zneg.mpl-bauen.de/100-lbs-5-ft-3.html
They typically hold up to 10 lb. Drum holds 100 ft. 5 Additional Information Adjustable Clicker w/Micrometer. I lost 100 pounds and this year makes 3 years of maintaining the weight loss. Ton (short). Click the CALCULATE button and this equals 61. Spud Can Diameter 52. 785 To go to Multiply from by cm → in 0. Nominal Average Approx. Weight For 12' Bar 5/16 x 3/8 0. Equipped with the stock truck intake and. com have lost weight with various programs ranging from Weight Watchers to Tops to NutriSystem – but the bottom line isn’t about ‘the diet’ - it’s about finding what works for you!. I'm 19 years old, and live in Mexico :D But I'm American. 48051948 US Gallons. Your niece is on the larger sideI think when my DD was 5 she was about 75-80 lbs. Nominal Pipe Size Outside Diameter Sch 5s Lbs/ft Sch 10s Lbs/Ft Sch 40s Lbs/Ft Sch 80s Lbs/Ft Sch 160 Lbs/Ft XX. ? A 40 ft long rope weighs 0. Torque Range 3 Torque Range 2 TORQUE Measuring & Testing Change between ft-lbs, in-lbs and N•m with the touch of a button High strength sealed reversible ratchet Rugged, shock resistant housing. 9), you need to weigh at least 105 lbs, and 140 lbs at the most. proto Stanley-proto Industrial Tools J6018AB Proto 3/4" Fixed Micrometer Torque Wrench, 32-3/4"L, 60 to 300 ft. 640 FT 860 FT. Limestone boulders and granite boulders in most cases weigh more. material 3 inches and smaller could be counted and the weight estimated. The density of typical soil is 100 lb/ft 3 (1600 kg/m 3). Health and Nutrition 135 U. sex with a skinny woman makes me afraid i'm gonna break them or puncture. 4 oz 800 1 lb 13 oz 300 11. ANSI Hardware Design Data Torque Design and Applications. Force of a Falling Object Date: 12/27/2002 at 20:12:35 From: Rusty Subject: Force I am in 10th grade and I would like to figure out the force of a falling object. No, the THC will probably still show if you're drug tested for up to 30 days. To lose 15 pounds in 2 months: you need to save 26250 kcals per month or 6562 per week or 937 calories per day. If you’re 5-foot-4 and weigh 126 pounds, reducing your intake by 500 to 1,000 calories daily will help you shed 1 to 2 pounds per week. He walks at a constant speed v{eq}_0{/eq} = 5 ft/s along a straight radial line. Dion Almaer is a technologist, engineer, and human dev aggregator. I've always had a very muscular build though and used to be a size 2/4 at 125 pounds. By planting your potatoes in layers within a tall box, as seen in the diagram here. Water weighs 62. SOLUTION Press = P/A. A tire is manufactured from several separate components, such as tread, innerliner, beads, belts, etc. It's eco-friendly, fully biodegradable, and 100 % natural hemp. How many cubic feet in a. One foot pound is the work done by a force of one pounl acting through a distance of one foot, in the direction of the force. The recommended Phosphorous ammendment for the bed is 100 lbs of P per acre. torque range. Apply using the suggested setting. For every additional inch above 5 feet, add five pounds. (water weighs 3~62. Note: All values are ft. 5 lb is fired at a speed of 60 ft/sec into the block and becomes embedded in the block. Boned Nylon Thread #B-69/42 Dark Yellow Mustard Cone 1/2 lb. 3048 km → mi 0. COMMON GASES CONVERSION TABLE To Use This Worksheet: 1. Convert 100 Newton Meters to Foot Pounds. ? A heavy rope, 60 ft long, weighs 0. Diferent flow rate units conversion from cubic foot per hour to pounds (water mass) per hour. The tank has a diameter of 20 feet at the top and is 15 feet deep. 
All Refrigerator Reviews : If you're looking for 5. Did you find us useful? Please consider supporting the site with a small donation. 56 kN)Slip with Safety Factor of 3 • Use with B22 and B24 Channel • Setscrews to be torqued to 19 ft. Omega Lift Equipment 92100 100 LB Wheel Arm - Omega Lift Equipment Wheel Arm for Post Lifts The OMEGA Wheel Arm™ is a great accessory to efficiently hold tires on a vehicle lift arm. Potted Plants (Containers): New Plants: When preparing new soil for plants, mix 2 cups of Holly-tone per cubic foot of soil (1. In this video series you will see Kyle and family from Growing Up Garden attempted to grow 100 Pounds of Potatoes In a 4 square foot box. 5 lb of phosphate, and 0. QR Code Link to This Post. 1571 150 lb/ft 3 * 0. The impact on a human body can be difficult to determine since it depends on how the body hits the ground - which part of the body, the angle of the body and/or if hands are used to protect the body and so on. depending on the application. Troy Weight (Precious Metals) 24 grains = 1 pennyweight 20 pennyweights = 480 grains = 1 ounce 12 ounces = 5760 = grains = 1 pound Apothecaries' Weight 20 grains = 1 scruple 3 scruples = 1 dram. 64 cm 5 feet 7 inches 170. 100 nm is equal to how many lb ft? Online calculator for NM to LB FT Conversion. Conversely one pound-foot is the moment about an axis that applies one pound-force at a radius of one foot. 785x10-3 m-3 For comparison, a US nickel has a mass of about 5. The modular wheelchair ramp components make it easy to move the ramp to a new location or change the configuration. ) If this conden-. The pound [lbs] to kilogram [kg] conversion table and conversion steps are also listed. 05793 pounds [lbs] Lead weighs 11. They recommend that you keep it at –5 to –10° F to keep the meat really fresh. 2 lb/ft3 to kg/m3 = 32. 355 817 948 331 4004 joules. But underwater, it is being pushed to the surface by 64 lbs of force. The U-Haul Moving & Storage of Waterfalls has propane currently priced at $3. 8266 10 104 Both feet - female NOTE: Calculations are based upon 50th percentile males and females. Drive 10 to 80 ft. Convert 100 kg to stones and pounds. 4 lb/ft3 to kg/m3 = 64. Online calculator to convert cubic feet to cubic meters (ft 3 to m 3) with formulas, examples, and tables. Domestic wool trading on a clean basis was inactive this week. Place this amount in the spreader 5. Manufactured Stone. (2800 yds total approx) • This is heavy duty sewing thread. 8 out of 5 by 6. Converting 100 lb to kg is easy. Broadcast 40 lbs per acre, February-May or August-October * Broadcasting seeding, which is usually done with a lime spreader is a quick way to plant large acreage. The paper calculator tool helps with common price and quantity conversions. Nominal Pipe Size Outside Diameter Sch 5s Lbs/ft Sch 10s Lbs/Ft Sch 40s Lbs/Ft Sch 80s Lbs/Ft Sch 160 Lbs/Ft XX. Online calculator to convert cubic feet to cubic meters (ft 3 to m 3) with formulas, examples, and tables. Revis, double board certified South Florida plastic surgeon. 1 kg for each additional cm. /100 sq ft. 1/3 Horsepower, 3 gal. Southern Rebar provides quality construction products and exceptional service at competitive pricing. figure brick 7 per sq. QR Code Link to This Post. Cumulative Percent Distribution of Population by Height and Sex:. This carbon dioxide emission calculator will help you gain an approximate idea of how many tons of carbon dioxide some of your activities generate and how many trees it would take to offset those emissions - free to use! 
Webmasters - get this script for your site. There were no confirmed trades reported. 50 5 Clover seed Clean and bag 13. 64 cm 5 feet 7 inches 170. 2832 lb-ft/revolution = 5252 RPM. 3048 km → mi 0. A tank in the shape of an inverted cone is full of water. Ice buildup adds weight rapidly, he adds. 3937 in → cm 2. Reversible ratcheting head allows you to torque left and right-hand fasteners. 07385 kg/m3. Product Description. (size 6-12 ft. 3-year warranty. A force of 200 lbs pushes against a rectangular plate that is 1 ft. 6" ÷ 12"/foot = 0. Following 3 steps helped this woman lose 100 pounds In three years, she lost 100 pounds and she has maintained her weight for three years. For a male the ideal weight is 106 pounds plus 6 pounds for each inch in height over 5 feet. フィート、フート(計量法上の表記)又はフット(複: feet, 単: foot)は、ヤード・ポンド法における長さの単位である。 様々な定義が存在したが、現在では「国際フィート」が最もよく用いられており、正確に 0. Alaska's Best listing of general fun facts about Alaska. I was just wondering if anyone has any pictures (wether it be thinspo or selfies, whatever) of women who are 55 and 110 lbs?. Liters 10 3 Cubic centimeters. 5 00 ft 20 3 100 170 10 5 1 5 3 3 50 5 4 30 50 58 3 lb 30 lb 0 447 ft 700 400 1 5 200 1 5 45 100 1 5 30 700 lb into the plate 400 200 100 700. A weight of 100 lbs. All weights are approximate, based on a metal density of. How many feet of stainless steel wire are in a pound? Wire mills often inventory and sell stainless steel wire by pound. 220 - V= 0 V=220 lb. 80 MESH- 25-26 lbs/ft 3: 5. Finally, the total work, 𝑊𝑊𝑇𝑇 done in raising the leaky bucket of sand using the chain is 𝑊𝑊𝑇𝑇= (𝑊𝑊𝐵𝐵+ 𝑊𝑊𝐶𝐶) + 𝑊𝑊𝑆𝑆= 2500 ft⋅lbs+3000 ft⋅lbs=5500 ft⋅lbs. 5 ft 3 ft Ballast weight WB c. That is nearly half your weight, so it may take a while to get down to that. In this video series you will see Kyle and family from Growing Up Garden attempted to grow 100 Pounds of Potatoes In a 4 square foot box. I'm aiming for 120 ideally. 3 ft 6 x 6 5 ft. I've always had a very muscular build though and used to be a size 2/4 at 125 pounds. Area = A = 2' x 1' = 2 ft2. bag provides 0. The whole vehicle -- shuttle, external tank, solid rocket booster casings and all the fuel -- has a total weight of 4. Can a 5 ft 100lb woman take down a 6'3 250 lb guy? Yes! See me do it at Kazan Dojo, Deer Park, NY. The 28 inch dining table legs are an industry standard height. SOLUTION Press = P/A. The U-Haul Moving & Storage of Waterfalls has propane currently priced at$3. Each suit of full plate must be individually fitted to its owner by a master armorsmith, although a captured suit can be resized to fit a new owner at a cost of 200. com Customer Service: 1-800-800-TOOL WARNING: Work Safely With Tools Wear Safety Goggles. The ideal weight ranges for a 5-foot-tall woman is 90 to 110 pounds, 5-foot-1 is 95 to 116 pounds, 5-foot-2 is 99 to 121 pounds, 5-foot-3 is 104 to 127 pounds and 5-foot-4 is 108 to 132 pounds, according to Rush University Medical Center. The average from that might be 115, after a survey of so many girls of 5"3. 109-121 Indoor clothing weighing 5 pounds for men and 3 pounds for women. 5, which is within a healthy range. Simply use our calculator above, or apply the formula to change the length 100 lbs to kg. 25 pounds of product per thousand square feet. How many feet of stainless steel wire are in a pound? Wire mills often inventory and sell stainless steel wire by pound. 4 lb (it's mostly water). If the fertilizer bag weighs 25 lbs. 
Metric to Pound Conversion On the other hand, if you have pounds and need to get kilograms, you can divide by 2. M=(220x) lb ft +©MNA=0. I'm 19 years old, and live in Mexico :D But I'm American. Im only 5 feet tall but I weigh. Weight For 12' Bar 5/16 x 3/8 0. capacity plastic hopper with screen that helps prevent materials from jamming. Your current BMI is greater than the recommended range of 18. ? Exchange values and measures from one water volume vs. 016 ft 3 / cu ft Definition of cubic foots of water provided by WikiPedia The cubic foot is an imperial and US customary (non-metric) unit of volume, used in the United States and the United Kingdom. Right now you're pretty much in the middle of those, so in my opinion you don't need to lose any weight at all, not even an ounce. 14 x r (feet) x r (feet) x T. (See Column 4 in Steam Table. 4 lbs per cubic foot would float on water (and I've never seen coal that floats unless it is close to being lignite or peat). Volume = Area × Thickness = (17,257 sq. 655 x 10 6 Foot-lbs. capacity track and every 5 ft. 3kg) of neat gypsum shall be applied over an approved liquid bonding agent. 274 ounces 1 kilogram = 2205 pounds 1 ounce (avoirdupois) = 28. The bucket has a weight of 80 lb and is being hoisted using three springs, each having an unstretched length of and stiffness of. Step 4: One hour after waking, drink one cup of coffee or tea (no sugar, very light cream/butter if you want). Way to Convert Newton Meters to Foot Pounds Online. First, I will change 5. Torque production was up by 100 lb-ft over the stock 5. Stone lbs 4' 10" 4' 11" 5' 0" 5' 1" 5' 2" 5' 3" 5' 4" 5' 5" 5' 6" 5' 7" 5' 8" 5' 9" 5' 10" 5' 11" 6' 0" 6' 1" 6' 2" 6' 3" kgs 7St 2lbs 100 20. I didn't want to go on meds and knew I could control my BP if I lost weight. "I never thought I would be squatting 100 pounds or running a 5K. Shows as a top view of a 100-lb person walking on a large horizontal disk, which rotates with constant angular velocity {eq}\omega_0=0. 07385 kg/m3. The unique convertible hitch allows changing the cart from tow-behind mode to push mode without tools. Drive 10 to 80 ft. 4,000 LBS 10,000 LBS. Yard (of ale) This is a drinking glass about 3 feet long, hence the name. 1 barrels of oil. Draw shear force diagram 5. Here is the answer to the question: 5 ft 3 inches 100 lbs. The cubic inch and the cubic foot are still used as units of volume in the United States, although the common SI units of volume, the liter, milliliter, and cubic meter, are also used, especially in manufacturing and high technology. 3—5" standard elbows at 11 = 33. 2 288 2 1 12 1 12 2 in ft in x ft. If you trying to find special discount you may need to searching when special time come or holidays. Natural Stone. divide the rate by 43. Source of basic data Build Study. – Total shipment weight of 200 pounds or more for each UPS 3 Day Select or UPS Ground shipment. I'm 19 years old, and live in Mexico :D But I'm American. weight from cubic feet of water to pounds of water, ft 3 - cu ft to lb wt. Solution to Problem 504 | Flexure Formula Problem 504 A simply supported beam, 2 in wide by 4 in high and 12 ft long is subjected to a concentrated load of 2000 lb at a point 3 ft from one of the supports. F = 100 lb for the first 5 ft + 5 lb for each additional inch. To be within the right range for your height, your ideal weight should be between 104. Then I had turning point. The English system equivalent of a watt is horsepower, and 1 hp is defined as being equal to 550 ft-lb/s. 
1 oz 900 2 lb 1 oz 400 14. How to Calculate Pounds per Square Foot Concrete. Product Width 14-3/4 in. 80 MESH- 25-26 lbs/ft 3: 5. 004 448 221 6 kilonewton 1 cubic foot = 0. Cumulative Percent Distribution of Population by Height and Sex:. Domestic wool trading on a clean basis was inactive this week. The DWA-1002 Digitool Solutions 3/8" Drive 36 tooth Electronic Torque Angle Wrench has an English Range of 5 - 100 Ft Lbs and 60 - 1200 In Lbs and a Metric range of 6. bag provides 0. For example, to find out how many gallons in a cubic foot and a half, multiply 1. 3 24 20 19 15 34 Feet per Pound. Take on everything from small home repairs to engine work with an ergonomic design that ensures you never lose your grip. You'll love the ability to tow an RV with your small pickup or large SUV!. Alaska's Best listing of general fun facts about Alaska. This calculator can be used to calculate the amount of sand, soil, gravel or mulch needed for your project. and covers 5,000 square feet: 100 sq. BMI Calculator - Feet, inches, lbs Enter your weight and height then click on the buttom "Compute BMI" to get your BMI value. Cummins N855 Big Cam Connecting Rod Bolts Step 1 = 100 Nm, 75 lb. BAAM 3D Printer Gets Major Upgrade — Prints 100 lbs of Material Per Hour & More. Picture of Rattlesnake-More than 9 Feet Long and Weighing Nearly 100 Pounds-Unproven!Summary of eRumor: A picture of what appears to be a large rattlesnake said to have been found near Medicine Lodge, Kansas. Simply use our calculator above, or apply the formula to change the length 100 lbs to kg. Pumps (2) 733029102386 2-5/8” x 4-3/8”. 56 kN)Slip with Safety Factor of 3 • Use with B22 and B24 Channel • Setscrews to be torqued to 19 ft. Concrete is a composite material of cement, aggregate materials (rocks, gravel, or similar objects), and water. Hope this helps. Most inexpensive fertilizers are readily soluble, and a typical recommendation for these formulations is to use only 1 lb of N per. 5 141 10 or an 8 in Anne Taylor jeans. bolt tightening torque (lb-ft) bolt sizetorque bolt sizetorque 3/8" 7/16" 1/2" 3/4" 25 lb ft 50 lb ft 60 lb ft 120 lb ft 8mm 10mm 12mm 14mm 20 lb ft 38 lb ft 68 lb ft 100 lb ft 3301 west burnsville parkway burnsville, mn 55337 note: install parts loosely until all attachments have been made to vehicle class 3 trailer hitch hardware package:. at National Tool Warehouse Cart Wishlist Account Login. (121 mm) 380,000 1bf-ft (1,690,300 N) 17,000 1bf-ft (23,000. 36 inches (or 3 feet) to the yard, 1760 yards to the statute mile. 3/8' Drive Split-Beam Flex Ratchet Click Wrench, 20-100 lb. Use our Body Mass Index chart (BMI) to find your BMI. 2 Shear and Bending-Moment Diagrams: Equation Form Example 1, page 4 of 6 x 9 kip R A = 10 kip A 6 kip R B = 5 kip B Pass a section through the beam at a point between the 6-kip force and the right end of the beam. per cubic foot. (See Column 4 in Steam Table. If you are in the yellow zone you are a healthy weight. This new 5-100 ft-lb 3/8-in drive torque wrench operates in both digital and angle modes, allowing the user to fulfill the specifications designated on. 5, which is within a healthy range. I feel better. Notes: E = MAY CONTAIN MORE THAN 1 PIECE. at Hayneedle, where you can buy online while you explore our room designs and curated looks for tips, ideas & inspiration to help you along the way. load rating per pair; 3 stage, full extension, with over travel; Equipped with a quick release mechanism to separate for installation and removal of the drawer or sliding shelf. 
com Customer Service: 1-800-800-TOOL WARNING: Work Safely With Tools Wear Safety Goggles. Methods for Calculating Corn Yield than 100 feet. Get quick answers when you enter math equations or conversions in the Google Search box. Please take to your local U-Haul store to have this tank purged before filling; Looking for a different size? Check out our propane tank size chart to find the best fit for your needs. We offer a complete line of concrete and masonry accessories; forming and shoring systems; reinforcing and structural steel fabrication; decorative concrete; industrial maintenance and repair products; and products for use by departments of transportation. Troy Weight (Precious Metals) 24 grains = 1 pennyweight 20 pennyweights = 480 grains = 1 ounce 12 ounces = 5760 = grains = 1 pound Apothecaries' Weight 20 grains = 1 scruple 3 scruples = 1 dram. How to Calculate Grass Seed Per Acre. Here's the question: If a 10-lb. Water weighs 62. 1 day ago · The project’s Measured and Indicated NI 43-101 resources consist of 5. 8 Natural Aggregates 2. He also could benefit from getting into the gym and adding 10 pounds this winter, because he’s just too easily knocked off the ball by bigger opponents and he’s not going to get those easy whistles from refs unless he turns into Zlatan Ibrahimovic overnight. Knowing that 1 lb/ft 3 = 5. 5 cubic foot makes the calculation easy -- and one most smartphone calculators can complete. The Institute of Food and Agricultural Sciences (IFAS) is an Equal Opportunity Institution authorized to provide research, educational information and other services only to individuals and institutions that function with non-discrimination with respect to race, creed, color, religion, age, disability, sex, sexual orientation, marital status, national origin, political opinions or affiliations. Under the BMI classification, 100 lbs is classed as being Underweight. The Dalmatian pelican (Pelecanus crispus) is among world's heaviest flying bird species. The branch line is 50 feet long and has three standard elbows plus a gate valve. Men- 106 pounds/ 5 feet and every additional inch +6 lbs are allowed. All weights are approximate, based on a metal density of. Inner drum cage and Self-aligning Flexitube™ spring distributor tube reduces cable tangling. If a 200 pound cable is 100 feet long and hangs vertically from the top of a tall building, how How mush work is done in lifting a 40 kilogram weight to a height of 1. Sophie & Luis both lost over 50 pounds with this plan. 5 feet long Most wallstone pallets weigh a minimum of 1. At the time, she weighed 300 pounds. But to at least be in the normal range maybe get down to around 130-135. The torque sequence and values also apply to the Front Axle U-Bolts and Anchor Plates as well. 05539 kg/m3. Tools Required: 1. In the United States, 5′10″ is above average for an man. 01846 kg/m3. Show transcribed image text. A bullet weighing 0. TV Weight Capacity: Max. drive and +/-3% clockwise With a 3/8 in. Volume = Area × Thickness = (17,257 sq. QM Enlisted MOS Qualifications DA PAM 611-21 Prior to 2 Jan 02 Between 2 Jan 02 and 1 Jul 04 After 1 Jul 04 92A Very Heavy Occasionally lifts 100 pounds 5 feet. 5 (2) Divide 2. Grass seed calculations are often based on square footage or square meters for small residential lawns. Health and Nutrition 135 U. Notice: More than 11 inches is not possible, because 12 inches is 1 foot. Therefore, Ideal weight = 135 pounds; ideal range = 121. Your current BMI is lower than the recommended range of 18. 
This chart is suitable for people who are normal weight, overweight and obese. 359237 kilograms (100lbs = 45. 0 ft long and 3. Many Other Conversions. Under the BMI classification, 100 lbs is classed as being Underweight. EPA Beverage container—8 oz 1 bottle 0. 07385 kg/m3. Industrial Torque Wrench With a 3/8 in. All weights are approximate, based on a metal density of. 1 pound per cubic foot ( lb/ft3 ) = 6. 8 Cubic Yards. Performance Rated Capacity 8,000 lb 3,629 kg Maximum Lift Height 41 ft 11 in. Foot-pounds. At 5'5'', you should be around 125 pounds. Reversible ratcheting head allows you to torque left and right-hand fasteners. Converting to pounds (since the question was in English units I assume the answer is desired in English units) we get 0. Related documents Mechanics of Materials 7th Edition Solut Statics Final Exam Fall 2009, questions and answers Statics Spring Final Exam 2010, questions and answers Final exam Fall Statics 2008, questions and answers Statics Test 1 Spring 2007, questions and answers Statics Test 1 Fall 2008, questions and answers. A 100-lb person walks on a large disk that rotates with constant angular velocity {eq}\omega_0{/eq} = 0. 4 is 10 lb and the spring stiffness is 100 lb/in. I've gone from a 4XLT shirt and size 48 pants to wearing XL and size 38 pants. 3kg) of neat gypsum shall be applied over an approved liquid bonding agent. I always keep thinking to myself “How could have I let myself get to this point” I never thought that i would get to this weight. 3b – Engineering Problem Solving Answer Key. com has made it easier than ever to convert between hundreds of common and not-so-common oilfield measurements using our powerful yet simple-to-use online oil and gas conversion calculator. We understand that we are working with feet and inches. If the tank is lled with kerosene weighing 51. Depending on particle size/moisture/ etc, sand is about 100 lbs per cubic foot, so a 50 lb bag is about 0. If you have any questions, contact our sales team, and we will be glad to assist you. 5 feet long Most wallstone pallets weigh a minimum of 1. Find the force exerted by the worker. At his heaviest, in 2005, Kurtz weighed 278 pounds. 94 cm 5 feet 2 inches 157. For women, 5′10″ is far above average. 19 feet 4 inches 17 miles 13 yards 2 feet 5 pounds 7 ounces 3 tons 5 hundredweights 2 stones etc In the SI or metric system mixed measures are "not allowed", and they are certainly not necessary. I no longer. 18 cm 5 feet 8 inches 172. 34 Tire Size, Rear 5. About Armor About Magic Armor. Clean and bag 1. LIQUEFIED GAS CONVERSION CHART Product Name Cubic Feet / Pound Pounds / Gallon Cubic Feet / Gallon Acetylene UN/NA: 1001 CAS: 514-86-2 14. Use a sizing chart to determine your size. from the wheel to the effort is 5 ft. At 200 yards that bullet hits with 1,455 ft. Census Bureau, Statistical Abstract of the United States: 2011 Table 205. (406 mm) semi-pneumatic rubber wheels Application SENTRY Carbon Dioxide Wheeled Fire Extinguishers are designed to protect areas where Class B (flammable liquids and gases) or Class C (energized electrical equipment) fires. How much does a yard 3 of. Boned Nylon Thread #B-69/42 Dark Yellow Mustard Cone 1/2 lb. I'm 5'0" as well, and a lot of people my height can be healthy and comfortable at 100 lbs. One pound-foot is the torque created by one pound of force acting at a perpendicular distance of one foot from a pivot point. To check rock in place, the cumulative weight of the rocks larger than 9 inches must not exceed. 
Micro-Adjustable Torque Wrench - Metal Ha. B613 Column Support for B22 • Design Load 800 Lbs. Volume refers to the available area within the basket or cylinder of the washer. How I Experimented My Way to Losing 100 Pounds. 2 Shear and Bending-Moment Diagrams: Equation Form Example 1, page 4 of 6 x 9 kip R A = 10 kip A 6 kip R B = 5 kip B Pass a section through the beam at a point between the 6-kip force and the right end of the beam. Quick release button and directional switch make this torque wrench easy to use. Please enter either the temperature or the pressure, and click on the "Go" button to proceed. 88 / bushel 1. 62 lb in a metric tonne).
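Several of the snippets above quote the same two rules of thumb: the imperial BMI formula, and the "106 lb plus 6 lb per inch over 5 feet" (men) / "100 lb plus 5 lb per inch over 5 feet" (women) ideal-weight estimate. A small Python sketch of both calculations (the function names are mine, not from any of the quoted pages):

```python
# Sketch of the BMI and ideal-weight rules of thumb quoted above.
# BMI (imperial): 703 * weight_lb / height_in**2
# Ideal weight:   men 106 lb + 6 lb per inch over 5 ft,
#                 women 100 lb + 5 lb per inch over 5 ft

def bmi(weight_lb, height_in):
    return 703.0 * weight_lb / height_in ** 2

def ideal_weight_lb(height_in, male=True):
    base, per_inch = (106, 6) if male else (100, 5)
    return base + per_inch * max(0, height_in - 60)   # 60 in = 5 ft

# Example: 5 ft 3 in, 100 lb
print(round(bmi(100, 63), 1))            # about 17.7, below the usual 18.5 cutoff
print(ideal_weight_lb(63, male=False))   # 115 lb
```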
2019-12-08 00:42:06
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3320047855377197, "perplexity": 3872.299655712716}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540503656.42/warc/CC-MAIN-20191207233943-20191208021943-00077.warc.gz"}
https://www.cigna.com/individuals-families/health-wellness/hw/systolic-blood-pressure-sts15384
Systolic Blood Pressure # Systolic Blood Pressure Systolic pressure is the pressure of blood against the artery walls when the heart has just finished contracting or pumping out blood. (Diastolic pressure is the pressure of blood against the artery walls between heartbeats, when the heart is relaxed and filling with blood.) Systolic blood pressure is the upper number of a blood pressure reading. For example, if a person's systolic pressure is 120 millimeters of mercury (mm Hg) and the diastolic pressure is 80 mm Hg, blood pressure is recorded as 120/80 and read as "120 over 80."
2020-08-12 10:02:45
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8183984756469727, "perplexity": 2998.2981596757736}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439738888.13/warc/CC-MAIN-20200812083025-20200812113025-00259.warc.gz"}
https://read.dukeupress.edu/differences/article-abstract/21/2/73/60624/The-Archaeology-of-Sound-Derek-Jarman-s-Blue-and
This article concentrates on the substance of audition in Derek Jarman's Blue. In his last feature film, Jarman makes a decisive ethical and aesthetic break: he shifts value away from the overdetermined cultural premiums associated with the visual spectacle'' and onto the indeterminate event of aurality that reconceptualizes queer belonging in terms of the erotics of the ear. Tracing the impact of Jarman's audiovisual project, the essay begins with the argument that the relationship of sound to image in Blue is defined by an entropic or unvisualized audition. This relationship in turn corresponds to a technical nonproductivity inscribing certain constructions of the aural spectacle in philosophies and theories of film sound. Blue's model of aural spectacularization is then linked to Michel Foucault's remarks on speakability/unspeakability, voice, and listening in The History of Sexuality and The Hermeneutics of the Subject. What lies unthought in these texts, this article contends, is something Foucault can only imply: a groundwork for a mode of audition that poses a moving counterweight to the ocular- and logocentric assumptions that often underwrite queer theory. This content is only available as a PDF.
2021-01-22 23:15:51
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3046165108680725, "perplexity": 6393.295320798114}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703531429.49/warc/CC-MAIN-20210122210653-20210123000653-00303.warc.gz"}
https://chemistry.stackexchange.com/questions/78728/thought-experiment-for-evaluating-total-energy
# Thought experiment for evaluating total energy Atomization Energy ΔHat is the energy required to disintegrate a molecule into isolated atoms that are infinitely away from each other. The question is about atomization energy and its relation to the total energy (E) in Schrodinger equation (Eψ=Hψ). If we disintegrate a molecule one step further to isolated electrons and nuclei, will the overall energy spent be equal to negative of the total energy of Schrodinger equation? I am assuming an energy reference will come into play as well as thermal energy. So let's say we are conducting the thought experiment of disintegration at 0 K (zero Kelvin) and we correct the values by assuming a reference energy point. Then, is the total Energy E equal to disintegration energy of the molecule into isolated electrons and nuclei? • You can easily test that when looking at a H atom. Calculate the energy on different distances between proton and electron. You can then extrapolate your findings to bigger atoms and molecules. Of course, you need to use an approximated Hamiltonian, but eh, for a basic discussion it's alright. – Fl.pf. Jul 13 '17 at 5:16 • The interaction between the isolated particles would vanish, therefore $\langle \hat V_{ij}\rangle=0$, but you still have the contributions from the kinetic energy $\langle \hat T_i\rangle$ of each individual particle. – Feodoran Jul 13 '17 at 8:33 • @Feodoran Makes sense. I also think that all identical nuclei when isolated have the same kinetic energy and all electrons when isolated have the same kinetic energy as well. So at least for two isomers it should be taken care of, should it not? In other words, the total disintegration energy difference between two isomers is equal to total (SE) energy difference. Is that about right? – Kinformationist Jul 13 '17 at 14:08 • @Fl.pf. Right this can be tested through actual calculations. I was trying to get an answer through conceptual thinking without doing the actual calculations. – Kinformationist Jul 13 '17 at 14:10 • @Kinformationist proof of concept assisted by calculation? – Fl.pf. Jul 13 '17 at 14:52 If we disintegrate a molecule one step further to isolated electrons and nuclei, will the overall energy spent be equal to negative of total energy of schrodinger equation? By convention, yes. Energy as a absolute quantity is not well-defined, energy must be defined with respect to an energy zero. By convention, when we perform quantum chemical calculations our energy zero is free electrons and nuclei infinitely far apart, which we define to have zero potential energy as they cannot interact. Also, quantum mechanically, we consider only the case at $0$ K. We can however add on corrections to the free energy due to entropy and internal motion at finite temperatures, as well as vibration zero point energies. How is this different from the atomisation energy? We can see that to get from the products of atomisation to the products of "disintegration" (as you refered to it), is to remove all the electrons from the neutral atoms. That energy is given by the sums of all the ionisation energies for each atom in the system, removing each electron one by one. This allows us to construct a Hess cycle. Finite temperature corrections are added to the quantum mechanical energies as the potential energy and free energy are different at finite temperature. • What do you mean by finite T corrections. Also E (SE) plus Nuclei and electrons leads to the Molecule. 
E(SE) includes individual kinetic contributions of Nuclei and electrons. What you wrote on the scheme is probably only the potential energy part of the E (SE). – Kinformationist Jul 27 '17 at 22:14 • When the electrons are bound to nuclei they have finite kinetic energy that is accounted for within the Electronic Schrodinger equation, and presuming you include nuclear kinetic energy it will include the vibrational zero point energy of your nuclei. So that kinetic energy is included in an ordinary Schrodinger equation energy. However that is all calculated to $0$ K. – user213305 Jul 28 '17 at 0:15 • As such that doesn't include the effects of entropy. At finite temperature, multiple vibrational states will be accessible to the molecules, some of which will be occupied. This introduces vibrational entropy; rotational, translational and in extreme cases electronic entropy can also be introduced. This changes the free energy of the molecule as $G = U + PV - TS = H - TS$. However as you can see above my diagram used enthalpy $H$ rather than free energy $G$ so this corrections aren't required to complete the cycle as you point out. – user213305 Jul 28 '17 at 0:21 • Finally, free energy due to entropy and (temperature in general) is not relevant to the case of a single molecule whose vibrational and rotational state we can measure/know. It is only relevant to and ensemble of many molecules at a given temperature where different numbers are in different rovibrational states. I'll try and clear up the main answer when I have a bit more time. – user213305 Jul 28 '17 at 0:24
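A tiny numerical sketch of the Hess cycle described in the answer: the energy to take a molecule apart into bare nuclei and free electrons (relative to the usual quantum-chemical energy zero) is the atomization energy plus the sum of all ionization energies of the resulting atoms. The numbers below are made-up placeholders, not real data:

```python
# Hess-cycle sketch: molecule -> isolated atoms -> bare nuclei + free electrons.
# E_disintegration = Delta_H_atomization + sum of all ionization energies.
# All values are hypothetical placeholders (per molecule, arbitrary units).

def disintegration_energy(atomization_energy, ionization_energies_per_atom):
    """Energy to go from the molecule to free nuclei and electrons at 0 K."""
    total_ionization = sum(sum(ies) for ies in ionization_energies_per_atom)
    return atomization_energy + total_ionization

# Hypothetical diatomic A-B: atomization energy 5.0; atom A has two electrons
# with ionization energies (0.9, 1.8), atom B has one electron (1.2).
print(disintegration_energy(5.0, [(0.9, 1.8), (1.2,)]))   # 8.9
```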
2020-11-29 20:14:11
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7178151607513428, "perplexity": 464.28878338480337}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141202590.44/warc/CC-MAIN-20201129184455-20201129214455-00376.warc.gz"}
https://physics.stackexchange.com/questions/276602/if-free-space-impedance-is-real-why-is-the-electric-field-not-attenuated
# If free space impedance is real, why is the electric field not attenuated? Why does vacuum have a nonzero characteristic impedance towards electromagnetic radiation? Intrinsic impedance given by $\eta=\sqrt{j\omega\mu / (\sigma +j \omega \epsilon)}$. It gives slope of transformation of $\mathbf E$ to $\mathbf H$ and vice versa. Here $\eta$ is complex. And in this expression real part is the cause of attenuation and imaginary part is the cause of phase shift. In case of free space since $\sigma = 0$, we have $\eta = \sqrt{j\omega\mu / j\omega\epsilon} = \sqrt{\mu / \epsilon}$, which is real. This suggests presence of resistive part in intrinsic impedance which means there should be attenuation. Also curiosity is how free space can offer resistance and however, the expression for electric field in plane wave $\mathbf E = E_0 \exp(wt-\beta z)$ where $\beta =2 {\pi}/{\lambda}$ and $\lambda$: wavelength suggests constant electric field. How can we reconcile the real impedance of space with the expression for electric field, which has no attenuation? • Welcome to Physics.SE! I suggest the following: 1) As you receive help, try to give it too, by answering questions in your area of expertise. 2) Take the tour! 3) When you see good questions and answers, vote them up by clicking the gray triangles, because the credibility of the system is based on the reputation gained by users sharing their knowledge. 4) If you get a satisfactory answer, remember to accept it by clicking on the green checkmark. 5) It is not very clear what you are asking; please elaborate. Thanks! – karatechop Aug 26 '16 at 22:51 • This question is very poorly presented. Please consider editing to make it clear what you are asking. It should not be necessary to examine links in order to make sense of your question. – sammy gerbil Aug 27 '16 at 0:25 • I suppose now this question is clear so you could remove it from HOLD – Nikhil Upadhyay Aug 28 '16 at 18:00 • How's that, @ACuriousMind? – DanielSank Aug 29 '16 at 1:53 • I know what you're trying to ask. The post was closed (and I guess downvoted, although that wasn't me) because it's just not written well. You say "the expression for electric field $\mathbf{E} = E_0 \exp(\omega t - \beta z)$ but you don't even define the symbols. For example, what is $\beta$? Also, that expression is not "the expression for electric field", it's just one possible expression, in particular giving a plane wave. These confusions make the question very hard to understand and answer because even though I can tell what you're probably really asking, I can't be 100% sure. – DanielSank Aug 29 '16 at 18:21 It's important to make the following distinction: it's not that vacuum "has" an intrinsic impedance. It's that electromagnetic waves IN a vacuum have an intrinsic ratio between their electric field (E) and magnetic field (H), which we call impedance. That impedance is given by Z = E/H, and it is a fundamental constant; it's only when EM waves travel through some medium other than a vacuum that the impedance gets altered. The units are Ohms because E is measured in Volts/meter and H is measured in Amperes/meter, and 1 Volt/Ampere is defined as an Ohm. This does not imply that vacuum "resists" electromagnetic waves and dissipates them like a resistor would. The specific value of Z(in free space) is related to the speed of light, and to the way we define the Volt and the Ampere. You could think of the "impedance" as being what limits the speed of propagation of the wave, if that is helpful. 
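A quick numerical check of the ratio discussed in this answer (a sketch using scipy.constants; not part of the original post):

```python
# Free-space wave impedance: Z0 = E/H = sqrt(mu_0 / epsilon_0) ~ 376.73 ohm.
# The value is real, yet nothing is dissipated: E and H are in phase in vacuum.
from math import sqrt
from scipy.constants import mu_0, epsilon_0, c

Z0 = sqrt(mu_0 / epsilon_0)
print(Z0)            # ~376.73 ohm
print(mu_0 * c)      # the same number, since Z0 = mu_0 * c
```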
• This answer makes a common mistake in that it conflates specific units such as Volt, meter, and Amp with the more general notion of dimensions such as electric potential, length, and current. Impedance doesn't have an intrinsic unit. Ohm is just one particular choice. – DanielSank Aug 27 '16 at 20:17 • I disagree with the statement "it's not that vacuum 'has' an intrinsic impedance". I also disagree with the statement that the vacuum impedance is imaginary. It is not. Similarly to how a transmission line has a real impedance despite the fact that it is non-dissipative, the vacuum has real impedance. – DanielSank Aug 27 '16 at 20:18 • Okay, you're right about the impedance being real - if it were imaginary it would be either capacitive or inductive. Editing to remove that. – BrightBlueJim Aug 28 '16 at 22:03 • I stand by the statement that it's not the vacuum that has impedance. To say that it does implies that "nothing" has propcerties, which is nonsense, and which I believe is the source of the OP's confusion. Free space acts LIKE a transmission line with a certain impedance. It is not itself a transmission medium. There is no Ether. – BrightBlueJim Aug 28 '16 at 22:10 • Note that the energy in an EM wave in a vacuum does not alternate between E and H. There is zero phase shift between E and H. The impedance is real. – garyp Aug 29 '16 at 17:09 The so called free space impedance is a fictitious resistance offered by the free space for electromagnetic radiation. It has a meaning when an EM wave passes through free space. Otherwise you cannot measure such a resistance. But for a material, it has an intrinsic electrical or thermal resistivity and it exists all time. But here, the free space impedance doesn't mean such a resistance. If you look for such a resistance in the absence of an electromagnetic wave you cannot find one. Obviously, it is clear from that equation. The impedance is the property of a medium due to the passage of electromagnetic waves through it. Now, why the electric field does not attenuate? The impedance is caused by the passage of an electromagnetic wave. For the passage of an electromagnetic wave through a wave guide, there should not be an electric field component parallel to the conducting boundary of the wave guide. This creates losses as under such a condition a current is generated on the plates. In free space, there is nothing out there to conduct an hence offers no loss. $$E=cB$$ or $$\frac{E}{H}=\mu_0 c=\sqrt{\frac{\mu_0}{\epsilon_0}}$$ Permeability is the degree of magnetization that a material obtains in response to an applied magnetic field. Permittivity is a measure of how an electric field affects, and is affected by, a dielectric medium. It can also be seen as the resistance encountered on forming an electric field in a medium (Source: Wikipedia). Hence this resistance exist because there is a speed limit and it is the speed of light in vacuum. One can say that this limit is guaranteed by the permittivity and permeability of the medium. Otherwise vacuum should have offered infinite speed for propagation of electromagnetic waves. • I agree with your opinion about intrinsic impedance, but in case of a wave travelling in dielectric medium although there are particles to hinder its motion E/H do not attenuate so this cannot be the possible explanation. – Nikhil Upadhyay Aug 30 '16 at 19:29 • You are not referring to the intrinsic impedance of a dielectric medium. If you prefer that please edit tour question. 
The intrinsic impedance is a fictitious resistance. It's not actually there, but effects by an EM wave. Then how could EM wave attenuate? – UKH Aug 31 '16 at 13:10 • Yeah thats what i am suggesting that this explanation is not fitting for all cases thus a better explanation is needed – Nikhil Upadhyay Aug 31 '16 at 15:44
2021-08-03 18:32:47
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6583219766616821, "perplexity": 422.32195727819135}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046154466.61/warc/CC-MAIN-20210803155731-20210803185731-00645.warc.gz"}
http://www.citeulike.org/user/AndreiZh/article/11930670
# Scanning tunneling microscopy of defect states in the semiconductor $\mathrm{Bi}_2\mathrm{Se}_3$

Physical Review B, Vol. 66 (Oct 2002), 161306, doi:10.1103/physrevb.66.161306  Key: citeulike:11930670

### Abstract

Scanning tunneling spectroscopy images of Bi2Se3 doped with excess Bi reveal electronic defect states with a striking shape resembling clover leaves. With a simple tight-binding model, we show that the geometry of the defect states in Bi2Se3 can be directly related to the position of the originating impurities. Only the Bi defects at the Se sites five atomic layers below the surface are experimentally observed. We show that this effect can be explained by the interplay of defect and surface electronic structure.
2013-05-24 02:33:15
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.48602816462516785, "perplexity": 2505.8110342605587}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368704132729/warc/CC-MAIN-20130516113532-00020-ip-10-60-113-184.ec2.internal.warc.gz"}
https://math.stackexchange.com/questions/3068430/possible-orders-of-trace-k-elements-in-sl-2-mathbb-f-q
# Possible orders of trace k elements in $SL_2(\mathbb F_q)$ As a continuation of this question I would like to ask about possible orders of trace $$k$$ elements in $$SL_2(q)$$. Here are examples which I know. When trace is zero then we have $$x^2=-1$$ so it means that order of $$x$$ is $$2$$ in characteristic $$2$$ and it is $$4$$ in odd characteristic. When trace is $$-1$$ then we have $$x^2=-x+1$$, so $$x^3=-x^2+x=-x+1+x=1$$. It means that order of $$x$$ is $$3$$. When trace is $$t$$ which is order $$q-1$$ element in case $$q=2^n$$. Then order of $$x$$ is sometimes $$q+1$$ and $$x$$ generate $$\mathbb F_{q^2}$$ subalgebra. This is just guess, I have only checked this for $$q=2,4,8,16$$. Here is small test in GAP showing order of trace $$t$$ elements for $$q=2^n$$ for $$n=1..10$$, where $$t$$ is generator of the field multiplicative group: gap> List([1..10],k->Order([[Z(2^k),1],[1,0]]*Z(2^k)^0)); [ 3, 5, 9, 17, 31, 65, 43, 51, 511, 25 ] Let's call element imaginary when it is of trace $$0$$ in $$SL_2(q)$$. From above we know that order of such element is either $$2$$ or $$4$$. The next question we can ask is what order can have product of two imaginary elements. According to tests in GAP in characteristic two we obtain orders $$q-1$$, $$q+1$$ and divisors which are all orders in the group (tested only for few small q). In case of odd characteristic I do not have theory ready yet. Anyway set $$\{x^2=-1\}$$ seems to be interesting. Here is some experimental data from GAP for answering this question. I do not have full picture yet. Let $$q=p^n$$, $$p$$ is prime number. There are three types of subalgebras with one generated by one element: $$\mathbb F_q+\mathbb F_q$$, $$\mathbb F_q+\mathbb F_q\pmb i$$, $$\mathbb F_{q^2}$$ where $$\pmb i^p=1$$. The groups contained with invertible elements are $$C_{q-1}\times C_{q-1}$$, $$C_{2(q-1)}\times \underbrace{C_p\times...\times C_p}_{n-1}$$, $$C_{q^2-1}$$ with sizes $$(q-1)^2,q^2-q,q^2-1$$ respectively. The three cases are distinguished by order of generator $$u$$ of the subalgebra. When it is divisor of $$q-1$$ then we are in case 1. When it is divisor of $$q$$ then we are in case 2. When it is divisor of $$q+1$$ then we are in case 3. Common divisor of $$q-1$$ and $$q+1$$ is $$2$$ and it happens in odd characteristic. In this case element of order $$2$$ generate case 1. In characteristic 2 zero divisor $$p^2=p$$ generate subalgebra of type 1. It is element of $$Q_{01}$$ (see this question for notation) with trace equal $$1$$. Element of order $$p$$ in odd characteristic is belonging to $$Q_{12}$$ i.e. it is element of determinant $$1$$ and trace $$2$$ (and not belonging to $$\mathbb F_q1)$$.
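For readers without GAP at hand, here is a small Python analogue of the experiment above, restricted to prime fields (the GAP session handles the fields of order $$2^n$$). For a given trace $$t$$ it uses the companion matrix [[0, -1], [1, t]], which has determinant 1 and trace $$t$$, and finds its order by repeated multiplication. The function names are mine, not from the question.

```python
# Order of a trace-t element of SL_2(p), p prime, via the companion matrix
# of x^2 - t x + 1.

def mat_mul(a, b, p):
    """Multiply two 2x2 matrices over F_p."""
    return [[(a[0][0]*b[0][0] + a[0][1]*b[1][0]) % p,
             (a[0][0]*b[0][1] + a[0][1]*b[1][1]) % p],
            [(a[1][0]*b[0][0] + a[1][1]*b[1][0]) % p,
             (a[1][0]*b[0][1] + a[1][1]*b[1][1]) % p]]

def order_of_trace(t, p):
    """Order of [[0,-1],[1,t]] (determinant 1, trace t) in SL_2(p)."""
    identity = [[1, 0], [0, 1]]
    m = [[0, (-1) % p], [1, t % p]]
    x, n = m, 1
    while x != identity:
        x = mat_mul(x, m, p)
        n += 1
    return n

print(order_of_trace(0, 7))    # 4: trace 0 gives order 4 in odd characteristic
print(order_of_trace(-1, 7))   # 3: trace -1 gives order 3
print([order_of_trace(t, 7) for t in range(7)])
```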
2019-01-18 05:49:35
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 57, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8500667214393616, "perplexity": 92.76295091394603}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547583659890.6/warc/CC-MAIN-20190118045835-20190118071835-00052.warc.gz"}
https://math.stackexchange.com/questions/1869797/number-of-true-relations-of-the-form-a-subseteq-b-where-a-b-in-mathcalp-1
# Number of true relations of the form $A\subseteq B$ where $A,B\in\mathcal{P}(\{1,2,\ldots,n\})$ I just started "Introduction to Topology and Modern Analysis" by G.F. Simmons and came across this problem in the exercises. Q. Let $U=\{1,2,\ldots,n\}$ for an arbitrary positive integer $n$. If $A$ and $B$ are arbitrary subsets of $U$, how many relations of the form $A\subseteq B$ are there? How many of them are true? Some previous problems for the cases $n=1,2,3$ suggested that by "relations of the form $A\subseteq B$", they mean even the ones where $A\subseteq B$ isn't true. It would be obvious that there are $2^{2n}=4^n$ such relations since $U$ has $2^n$ subsets, $A,B$ can each be chosen in $2^n$ ways, so there are $2^n\times 2^n=4^n$ such relations. Now, I think $3^n$ of these relations are true. Here's my idea/proof: Let us consider $x,y\in\mathcal{P}(U)$ where $\mathcal{P}(U)$ denotes the power set of $U$. Let $x$ have $k$ elements from $U$ (where $0\leq k\leq n$) with $k=0$ corresponding to $x=\emptyset$. For $x\subseteq y$ to be true, $y$ must have all the elements of $x$ and may/may not have elements from $U\setminus x$. We can construct $y$ by taking $m$ additional elements from $U\setminus x$ where $0\leq m\leq n-k$ which can be done in $\binom{n-k}m$ ways for each value of $m$. So, for each $x$, we can construct $y$ in $\sum\limits_{m=0}^{n-k}\binom{n-k}m=2^{n-k}$ ways. Now, we can take $x$ in $\binom nk$ ways for each value of $k$ and hence the number of true relations is $\sum\limits_{k=0}^n\binom nk 2^{n-k}=3^n$. Is the above correct/rigorous enough? Also, if the above solution is correct, doesn't it work for any arbitrary set $U$ of cardinality $n$ ? • what do you mean? You want to know with how many relations you can endow the set $\mathcal P\{1,2,3\dots n\}$ ?? And then how many of these are subsets of the relation $\subseteq$ ? – Jorge Fernández Hidalgo Jul 24 '16 at 18:18 • @CarryonSmiling, it's a problem in the book by Simmons that how many of the set inclusion relations $A\subseteq B$ where $A,B\in\mathcal{P}(\{1,2,\ldots,n\})$ are true? I'm looking for proofreading by the community on my work. – analysis123 Jul 24 '16 at 18:28 • Yeah, your solution looks good. In fact your solution was clearer to me that the actual question. – Jorge Fernández Hidalgo Jul 24 '16 at 18:29 ## 2 Answers I finally got it. The question seems to be: How many pairs $(A,B)$ of subsets of $\{1,2\dots n\}$ exist so that $A\subseteq B$. For each element $x\in \{1,2\dots n\}$ we have three choices: • $x$ is only in $B$ • $x$ is in $A$ and $B$ • $x$ is not in $A$ and not in $B$. Therefore there are $3^n$ possible pairs. Yes, this is correct, and yes, only the cardinality of $U$ matters. You could slightly simplify the proof by noting that there are $2^{n-k}$ subsets of $U\setminus x$, so you don't have to sum over binomial coefficients. I find the combinatorial proof in Carry on Smiling's answer simpler and more elegant.
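The count is easy to confirm by brute force for small $n$; a short Python sketch (not part of the original thread):

```python
# Brute-force check of the 3^n count of true inclusions A ⊆ B among
# subsets of {1, ..., n}.
from itertools import combinations

def subsets(s):
    """All subsets of s, as a list of sets."""
    return [set(c) for r in range(len(s) + 1) for c in combinations(s, r)]

def count_true_inclusions(n):
    power = subsets(range(1, n + 1))
    return sum(1 for a in power for b in power if a <= b)   # a <= b means A ⊆ B

for n in range(1, 6):
    print(n, count_true_inclusions(n), 3 ** n)   # the two counts agree
```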
2019-12-15 15:55:48
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9229318499565125, "perplexity": 134.83662357878316}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575541308604.91/warc/CC-MAIN-20191215145836-20191215173836-00168.warc.gz"}
https://www.techscience.com/mcb/v6n2/28481
Open Access ARTICLE

A Theoretical Model for Simulating Effect of Parathyroid Hormone on Bone Metabolism at Cellular Level

Yanan Wang, Qing-Hua Qin, Shankar Kalyanasundaram * Department of Engineering, Australian National University, Canberra, ACT 0200, Australia

Molecular & Cellular Biomechanics 2009, 6(2), 101-112. https://doi.org/10.3970/mcb.2009.006.101

Abstract

A mathematical model is developed for simulating anabolic behaviour of bone affected by Parathyroid Hormone (PTH) in this paper. The model incorporates a new understanding on the interaction of PTH and other factors with the RANK-RANKL-OPG pathway into bone remodelling, which is able to simulate anabolic actions of bone induced by PTH at cellular level. The RANK-RANKL-OPG pathway together with the dual action of TGF-$\beta$, which represent the core of coupling behaviour between osteoblasts and osteoclasts which are two cell types specialising in the maintenance of bone integrity, are widely considered essential for the regulation of bone remodelling at cellular level. Moreover, the anabolic effect of PTH on bone remodelling (mainly causing bone gain) is significant for therapies of bone disease such as osteoporosis. Although the Food and Drug Administration of United States has approved PTH as an anabolic treatment for osteoporosis, the corresponding underlying mechanism of bone anabolism remains elusive. The proposed mathematical model provides a detailed biological description of bone remodelling using the latest experimental findings and can explain the mechanism of bone anabolic action by PTH that is administered intermittently as well as catabolic effect when applied continuously. The development of such a model provides a rational basis for developing more biologically extensive models that may support the design of optimal dosing strategies for different therapies such as PTH-based anti-osteoporosis treatments.

Wang, Y., Qin, Q., Kalyanasundaram, S. (2009). A Theoretical Model for Simulating Effect of Parathyroid Hormone on Bone Metabolism at Cellular Level. Molecular & Cellular Biomechanics, 6(2), 101–112.

This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
2023-03-21 02:18:23
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.24238957464694977, "perplexity": 6416.053597865384}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296943589.10/warc/CC-MAIN-20230321002050-20230321032050-00244.warc.gz"}
https://www.gamedev.net/blogs/entry/754424-welcome-to-gdnet/
# Welcome to GDNet+

So here's my first journal entry... it will probably be one of very few (I'm not much of a blogger). I finally joined GDNet+ a few days ago, mainly to support GameDev, but the yellow name and custom avatars are cool too [grin]. I haven't had much time between work and various side-projects to work on anything else, but maybe when I do get a chance to work on my own projects again I'll post some info/screenshots here and make my Developer's Journal useful. Thanks GameDev for being so awesome.

Welcome to teh journal landz!!! Woohoo!...Welcome to Journal land!+ Welcome to journal land In before trader jack's copy-paste intro! Woohoo!...Welcome to Journal land!++ Woohoo!...Welcome to Journal land!+++
2018-03-20 13:54:11
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.18543557822704315, "perplexity": 11546.643881480613}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257647475.66/warc/CC-MAIN-20180320130952-20180320150952-00232.warc.gz"}
http://mesh.brown.edu/optimization/index.html
### Introduction to Geometry Processing Through Optimization by Gabriel Taubin IEEE Computer Graphics and Applications Volume 32, Issue 4, pages 88 - 94, July-August 2012 Computer representations of piece-wise smooth surfaces have become vital technologies in areas ranging from interactive games and feature film production to aircraft design and medical diagnosis. One of the dominant surface representations is polygon meshes. Simple and efficient geometry processing algorithms to operate on the very large polygon meshes used today are required in most computer graphics applications. In general, developing these algorithms involves fundamental concepts from pure mathematics, algorithms and data structures, numerical methods, and software engineering. As an introduction to the field, in this article we show how to formulate a number of geometry processing operations as the solution of systems of equations in the least squares sense.'' The equations are derived from local geometric relations using elementary concepts from analytic geometry, such as points, lines, planes, vectors, and polygons. Simple and useful tools for interactive polygon mesh editing result from the most basic descent strategies to solve these optimization problems. We develop the mathematical formulations incrementally, keeping in mind that the objective is to implement simple software for interactive editing applications that works well in practice. Higher performance versions of these algorithms can be implemented by replacing the simple solvers proposed here by more advanced ones. ## Representing Polygon Meshes There are many ways of representing polygon meshes. Here we adopt an array-based Indexed Face Set'' representation, where a polygon mesh $$P=(V,X,F)$$ is composed of: a finite set $$V$$ of vertex indices, a table of three dimensional vertex coordinates $$X=\left\{x_i:i\in V\right\}$$ indexed by vertex index, and a set $$F$$ of polygon faces, where a face $$f=\left(i_1,\dots ,i_{n_f}\right)$$ is a sequence of vertex indices without repetition. The number $$n_f$$ of vertex indices in a face may vary from face to face, and of course every face must have a minimum of $$n_f\ge 3$$ vertices. Two cyclical permutations of the same sequence of vertex indices are regarded as the same face, such as $$(0,1,2)$$ and $$(1,2,0)$$, but when a sequence of vertex indices results from another one by inverting the order, such as $$(0,1,2)$$ and $$(2,1,0)$$, we regard the two sequences as different faces, and say that the two faces have opposite orientations. This representation does not support faces with holes, which is perfectly acceptable for most applications. For each face $$f=\left(i_1,\dots ,i_{n_f}\right)$$, $$V\left(f\right)=\{i_1,\dots ,i_{n_f}\}$$ is the set of vertex indices of the face considered as a set. In terms of data structures, if $$N_V$$ is the total number of vertices of the mesh, we will always assume that the set of vertex indices is $$\{0,\dots ,N_V-1\}$$, composed of consecutive integers starting at $$0$$. The vertex coordinates are represented as a linear array of $$3N_V$$ floating point numbers, and the set of faces $$F$$ is represented as a linear array of integers resulting from concatenating the faces. To be able to handle polygon meshes with faces of different sizes (i.e., different number of vertex indices per face) we also append a special marker at the end of each face, such as the integer -1 which is never used as a vertex index, to indicate the end of the face. 
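To make the layout concrete, here is a minimal Python sketch (mine, not code from the article) that stores a mesh with one quadrilateral and one triangular face in this format and recovers the individual faces from the flat array:

```python
# Array-based Indexed Face Set sketch: flat vertex coordinates plus a face
# array in which -1 marks the end of each face, so faces of different sizes
# (here one quad and one triangle) share a single linear array.

coord = [0.0, 0.0, 0.0,    # vertex 0
         1.0, 0.0, 0.0,    # vertex 1
         1.0, 1.0, 0.0,    # vertex 2
         0.0, 1.0, 0.0,    # vertex 3
         2.0, 0.5, 0.0]    # vertex 4
faces = [0, 1, 2, 3, -1,   # quad  f0 = (0,1,2,3)
         1, 4, 2, -1]      # tri   f1 = (1,4,2)

def iter_faces(face_array):
    """Yield each face as a list of vertex indices."""
    face = []
    for i in face_array:
        if i < 0:              # end-of-face marker
            yield face
            face = []
        else:
            face.append(i)

print(list(iter_faces(faces)))  # [[0, 1, 2, 3], [1, 4, 2]]
```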
For example, $$[0,1,2,-1,0,2,3,-1]$$ would be the representation for the set of faces $$F$$ of a polygon mesh composed of two triangular faces, $$f_0=(0,1,2)$$ and $$f_1=(0,2,3)$$, and four vertices $$V=\{0,1,2,3\}$$. The particular order of the faces within the linear array is not important, but once a particular order is chosen we will assume that it does not change. We refer to the relative location of a face within the array as its face index. If $$N_F$$ is the total number of faces, then the set of face indices is $$\{0,\dots ,N_F-1\}$$. ## Polygon Mesh Smoothing Large polygon meshes are usually generated by measurement processes, such as laser scanning or structured lighting, which result in measurement errors or noise in the vertex coordinates. In some cases systematic errors are generated by algorithms which generate polygon meshes, such as isosurface algorithms. In general the noise must be removed to reveal the hidden signal, but without distorting it. Algorithms which attempt to solve this problem are referred to as smoothing or denoising algorithms. It is probably fair to say that the whole field of Digital Geometry Processing grew out of early solutions to this problem. Our goal in this article is to develop a simple and intuitive methodology to attack this problem in various ways. Similar approaches can be used later to formulate other more complex problems, such as large scale deformations for interactive shape design. In these smoothing algorithms removing noise is constrained to changes to the values of the vertex coordinates $$X$$. Neither the set of vertex indices $$V$$ nor the faces $$F$$ of the polygon mesh are allowed to change. Perhaps the simplest and oldest method to remove noise from a polygon mesh is Laplacian smoothing. In classical signal processing noise is removed from signals sampled over regular grids by convolution, i.e., by averaging neighboring values. Laplacian smoothing is based on the same idea: each vertex coordinate $$x_i$$ is replaced by a weighted average of itself and its first order neighbors. But to properly describe this method we first need to formalize a few things. ## The Primal Graph of a Polygon Mesh The graph, or more precisely the primal graph (we will introduce the dual graph later), $$G=(V,E)$$ of a polygon mesh $$P=\left(V,X,F\right)$$ is composed of the set of polygon mesh vertex indices $$V$$ as the graph vertices, and the set $$E$$ of mesh edges as the graph edges. A mesh edge is an unordered pair of vertex indices $$e=\left(i,j\right)=\left(j,i\right)$$ which appear consecutive to each other, irrespective of the order, in one or more faces of the polygon mesh. In that case we say that the face and the edge are incident to each other. The set of edges incident to a face $$f$$ is $$E\left(f\right)$$, and $$n_f$$ is also the number of edges in this set (equal to the number of vertices in the face); for example, for the face $$f=(0,1,2)$$ it is $$E(f)=\{\left(0,1\right),\left(1,2\right),\left(2,0\right)\}$$. Note that one or more faces may be incident to a common edge. The set of faces incident to a given edge $$e=(i,j)$$ is $$F\left(e\right)$$, which contains $$n_e$$ incident faces. A boundary edge has exactly one incident face, a regular edge has exactly two incident faces, and a singular edge has three or more incident faces. We say that two vertices $$i$$ and $$j$$ are first order neighbors if the pair $$(i,j)$$ of vertex indices is an edge.
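The edge set and the incidence counts just defined are easy to extract from the face array; below is a small dictionary-based Python sketch (mine, not the article's code). The hash-table-based structure described next does the same job incrementally and also assigns edge indices.

```python
# Sketch: build the primal graph's edge set from the face array, together with
# the number n_e of faces incident to each edge.  Each edge is keyed by its
# unordered pair of vertex indices, stored with the smaller index first.

def build_edges(face_array):
    edges = {}                                   # (i, j) with i < j  ->  n_e
    face = []
    for v in face_array:
        if v >= 0:
            face.append(v)
            continue
        for k in range(len(face)):               # face complete: visit its edges
            i, j = face[k], face[(k + 1) % len(face)]
            key = (i, j) if i < j else (j, i)
            edges[key] = edges.get(key, 0) + 1
        face = []
    return edges

faces = [0, 1, 2, -1, 0, 2, 3, -1]
print(build_edges(faces))
# {(0, 1): 1, (1, 2): 1, (0, 2): 2, (2, 3): 1, (0, 3): 1}
```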
For each vertex index $$i$$, the set $$V(i)=\{j:(i,j)\in E\}$$ is the set of first order neighbors of $$i$$, and $$n_i$$ is the number of elements in this set. In terms of data structures, the mesh edges $$\left(i,j\right)$$ can be represented as a linear array of $${2N}_E$$ vertex indices, where $$N_E$$ is the total number of polygon mesh edges. To make the representation of each edge unique, in this array we store either the pair $$\left(i,j\right)$$ if $$i<j$$, or the pair $$\left(j,i\right)$$ if $$j<i$$, so that the smaller vertex index always comes first. To efficiently construct the array of edges from the array of faces we use an additional data structure to represent a graph over the set of vertex indices. This graph data structure is initialized with the set of vertex indices, and an empty set of edges. The graph data structure supports two efficient operations: $$get\left(i,j\right)$$ and $$insert(i,j)$$. The operation $$get(i,j)$$ returns the edge index assigned to the edge $$\left(i,j\right)$$, if such an edge exists, and a unique identifier such as -1 which is not used as an edge index, if the edge $$\left(i,j\right)$$ does not yet belong to the set of edges. If the edge $$(i,j)$$ does not yet exist, the operation $$insert(i,j)$$ appends the pair of indices to the array of edges and assigns its location in the array to the edge as the unique edge index. In this way, the index $$0$$ is assigned to the first edge created, and consecutive indices are assigned to edges created afterwards. An efficient implementation of this graph data structure can be based on a hash table. For some algorithms it is useful to have an efficient method to determine the number $$n_e$$ of incident faces per edge, as well as to access the indices of those faces. The graph data structure can be extended to support this functionality. The number $$n_e$$ can be represented as an additional field in the record used to represent the edge $$e$$ in the graph data structure, or as an external variable length integer array. Each value is initialized to 1 by the $$insert(i,j)$$ operation, during the construction of the graph data structure, and incremented during the traversal of the array of faces, every time the $$get\left(i,j\right)$$ operation returns a valid edge index. The sets of faces $$F(e)$$ incident to the edges can be represented as an array of variable length arrays indexed by edge index, and can be constructed as well during the construction of the graph data structure. ## Vertex Evolution Algorithms A large family of polygon mesh editing algorithms comprises three steps: 1) for each vertex index $$i$$ of the polygon mesh, compute a vertex displacement vector $$\triangle x_i$$ (in general as a function of the original vertex coordinates $$X$$, as well as of some external constraints or user input); 2) after all the vertex displacement vectors are computed, apply the vertex displacement vectors to the vertex coordinates $$x'_i=x_i+\lambda \triangle x_i$$, where $$\lambda$$ is a fixed scale parameter (either user-defined or also computed from the polygon mesh data); and 3) replace the original vertex coordinates $$X$$ by the new vertex coordinates $$X'$$. The three steps are repeated for a certain number of times specified in advance by the user, or until a certain stopping criterion is met. All the algorithms discussed in this article are members of this family. In terms of storage, these algorithms require an additional linear array of $$3N_V$$ floating point numbers to represent the vertex displacement.
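The update loop shared by this family of algorithms is only a few lines; here is a Python sketch (mine, not the article's) with the displacement rule passed in as a function, for instance the Laplacian rule described next:

```python
# Generic vertex-evolution skeleton: compute displacements, apply them with a
# global scale factor lam (0 < lam < 1), repeat for a fixed number of steps.
# compute_displacements(coord) is whatever rule the particular algorithm uses,
# e.g. lambda c: laplacian_displacements(c, edges) from the sketch below.

def evolve(coord, compute_displacements, lam=0.5, steps=10):
    coord = list(coord)                          # 3*N_V floats, copied
    for _ in range(steps):
        dx = compute_displacements(coord)        # 3*N_V floats
        for k in range(len(coord)):
            coord[k] += lam * dx[k]
        # (a stopping criterion could be tested here instead of a fixed count)
    return coord
```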
## Vertex Evolution Algorithms

A large family of polygon mesh editing algorithms comprises three steps: 1) for each vertex index $$i$$ of the polygon mesh, compute a vertex displacement vector $$\triangle x_i$$ (in general as a function of the original vertex coordinates $$X$$, as well as of some external constraints or user input); 2) after all the vertex displacement vectors are computed, apply them to the vertex coordinates, $$x'_i=x_i+\lambda \triangle x_i$$, where $$\lambda$$ is a fixed scale parameter (either user-defined or also computed from the polygon mesh data); and 3) replace the original vertex coordinates $$X$$ by the new vertex coordinates $$X'$$. The three steps are repeated a certain number of times specified in advance by the user, or until a certain stopping criterion is met. All the algorithms discussed in this article are members of this family. In terms of storage, these algorithms require an additional linear array of $$3N_V$$ floating point numbers to represent the vertex displacements. The vertex coordinates are updated using this procedure in linear time as a function of the number of vertices. Of course, the time and storage complexity of evaluating the vertex displacements, of determining the scale parameter, and of checking whether the stopping criterion is met (when a stopping criterion is used) have to be added to the overall complexity of the algorithm. In general, algorithms with overall linear time and storage complexity as a function of the polygon mesh size are the only ones which scale well enough to be of practical use with very large polygon meshes.

## Laplacian Smoothing

As mentioned above, in Laplacian smoothing each vertex coordinate $$x_i$$ is replaced by a weighted average of itself and its first order neighbors. More precisely, for each vertex index $$i$$, a vertex displacement vector $\triangle x_i=\frac{1}{n_i}\sum_{j\in V(i)}{(x_j-x_i)}$ is computed as the average, over the first order neighbors $$j$$ of vertex $$i$$, of the vectors $$x_j-x_i$$. After all these displacement vectors are computed as functions of the original vertex coordinates $$X$$, we apply the vertex displacements to the vertex coordinates with a scale parameter in the range $$0<\lambda <1$$ ($$\lambda =1/2$$ is usually a good choice). To compute the vertex displacement vectors, it looks as though an efficient way of finding all the first order neighbors of each vertex index is needed, and in particular the number of elements in each set of first order neighbors. Unfortunately the data structures introduced so far do not provide such methods. However, since each edge $$\left(i,j\right)$$ contributes a term to the sums defining both displacement vectors $$\triangle x_i$$ and $$\triangle x_j$$, all the displacement vectors can be accumulated together while linearly traversing the array of edges. During the same traversal we also have to accumulate the number of first order neighbors of each vertex, so that the vertex displacement vectors can be normalized. In summary, the algorithm comprises the following steps: 1) for every vertex index $$i$$, set $$\triangle x_i=0$$ and $$n_i=0$$; 2) for each edge $$(i,j)$$, add $$x_i-x_j$$ to $$\triangle x_j$$, add $$x_j-x_i$$ to $$\triangle x_i$$, and increment $$n_i$$ and $$n_j$$ by 1; and 3) for each vertex index $$i$$ such that $$n_i\ne 0$$, divide $$\triangle x_i$$ by $$n_i$$.
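A direct translation of these three steps into Python might look as follows; this is a sketch assuming the vertex coordinates are stored in a NumPy array of shape (N_V, 3) and the edges in the array built earlier, with illustrative names:

```python
import numpy as np

def laplacian_smoothing_step(X, edges, lam=0.5):
    """One iteration of Laplacian smoothing: x'_i = x_i + lam * dx_i."""
    n_vertices = X.shape[0]
    dx = np.zeros_like(X)            # 1) accumulated displacement per vertex
    n = np.zeros(n_vertices)         #    number of first order neighbors per vertex
    for i, j in edges:               # 2) accumulate over the edge array
        dx[i] += X[j] - X[i]
        dx[j] += X[i] - X[j]
        n[i] += 1
        n[j] += 1
    nonzero = n > 0                  # 3) normalize by the neighbor counts
    dx[nonzero] /= n[nonzero, None]
    return X + lam * dx
```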
## How to fix Laplacian Smoothing

Laplacian smoothing is a very simple algorithm, and it is quite easy to implement. It does produce smoothing, but when too many iterations are applied, the shape of the polygon mesh undergoes significant and undesirable deformations. As mentioned above, this is due to the fact that the function $$E(x)$$ being minimized has a global minimum (actually infinitely many, but unique modulo a three dimensional translation) which does not correspond to the result of removing noise from the original vertex coordinates. Any converging descent algorithm will approach that minimum, which is why we observe significant deformations in practice. In our case, in Laplacian smoothing all the vertex coordinates of the polygon mesh converge to their centroid $\frac{1}{N_V}\sum^{N_V}_{i=1}{x_i}$ In the literature this problem is referred to as "shrinkage". Many algorithms, based on different mathematical formulations ranging from signal processing to partial differential equations, have been proposed over the last fifteen or more years to deal with, and solve, the shrinkage problem. But we are not going to survey these algorithms here.

For the sake of simplicity we take the point of view that the shrinkage problem is a direct result of the "wrong" performance function being minimized. As a result, we address the shrinkage problem by modifying the performance function being minimized. However, after constructing each new performance function, we follow the same simple steps described above: minimize the function with respect to each variable independently to obtain a properly scaled descent vector, and then update the variables as in Laplacian smoothing by displacing the vertex coordinates in the direction of this descent vector. Finally, we repeat the process for a predetermined number of steps, or until convergence based on an error tolerance stopping criterion.

## Vertex Position Constraints

The most obvious way to prevent shrinkage is not to update all the vertex coordinates. More formally, we partition the set of vertex indices $$V$$ into two disjoint sets, a set $$V_C$$ of constrained vertex indices, and a set $$V_U$$ of unconstrained vertex indices. We also partition the vector of vertex coordinates $$x$$ into a vector of constrained vertex coordinates $$x_C$$ and a vector of unconstrained vertex coordinates $$x_U$$. We keep the same sum of squares of edge lengths function $$E\left(x\right)=E(x_U,x_C)$$, but we regard it as a function of only the unconstrained vertex coordinates $$x_U$$, and the constrained vertex coordinates $$x_C$$ are regarded as constants. As such, this function is still quadratic and non-negative definite, but it is no longer homogeneous. In general, this function has a unique minimum which has a closed form expression, and the minimum does not correspond to placing all the vertices at a single point in space. If we apply the same approach described above to compute a descent direction by minimizing $$E\left(x_U\right)$$ with respect to each unconstrained variable independently, we end up with the same descent vectors as in Laplacian smoothing, and the same descent algorithm, but here only the unconstrained vertex coordinates are updated. The computational cost of this algorithm is therefore about the same as that of Laplacian smoothing. Unfortunately, we still observe "shrinkage". In general it is not clear which vertices should be constrained and which ones should be free to move, but within an interactive modeling system which allows for interactive selection of vertices, this is an effective way of smoothing out selected portions of a polygon mesh, which we have found to be useful in practice. Rather than keeping the constrained vertices at their original positions, they can be assigned new "target" positions, in which case the constrained vertices can be updated first, and then kept fixed during the iterations of the algorithm. Unfortunately, if the constrained vertex displacements are large compared with the average edge length, this algorithm may result in noticeable shape artifacts during the iterations.
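A sketch of this hard-constraint variant, reusing the Laplacian step from the earlier sketch and simply skipping the update of the constrained vertices (the set of constrained indices is assumed to be given, for example from an interactive selection):

```python
import numpy as np

def constrained_laplacian_step(X, edges, constrained, lam=0.5):
    """Laplacian smoothing that keeps the vertices listed in `constrained` fixed."""
    X_new = laplacian_smoothing_step(X, edges, lam)   # defined in the sketch above
    X_new[list(constrained)] = X[list(constrained)]   # hard constraints: no update
    return X_new
```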
An alternative is to switch from this "hard" constraints strategy to a "soft" constraints strategy. In a "soft" constraints strategy all the variables are free to move again, and the constraints are satisfied in the "least squares" sense by adding one or more additional terms to the function being minimized. For example, in our case we consider the function $E\left(x\right)=\sum_{\left(i,j\right)\in E}{{\left\|x_j-x_i\right\|}^2}+\mu \sum_{i\in V_C}{{\left\|x_i-x^0_i\right\|}^2}$ where $$\mu$$ is a positive constant, the second sum is over the constrained vertices, and $$x^0_i$$ is a target constrained vertex position provided as input data to the algorithm. By applying the strategy of minimizing with respect to each variable independently, we obtain the same expression as in Laplacian smoothing for the displacements $$\triangle x_i$$ corresponding to the unconstrained vertices, and $\triangle x_i=\frac{1}{n_i+\mu }\left(\sum_{j\in V(i)}{\left(x_j-x_i\right)+\mu (x^0_i-x_i)}\right)$ for the displacements corresponding to the constrained vertices.

## Face Centroid Constraints

It turns out that, to produce acceptable results with the vertex position constraints strategy, a large proportion of the vertices must be constrained, and in that case it is not clear in general where the target constrained vertex positions should be placed. Rather than imposing constraints on vertex positions, we impose similar constraints on some or all of the face centroids. The intuition here is that the face centroids, being weighted averages of the face vertex coordinates, can be regarded as the result of a smoothing process, and the problem is how to transfer that smooth shape information back from the face centroids to the vertex coordinates. Continuing with the soft constraints approach, we consider the following performance function, which looks very similar to the one used to impose soft vertex constraints $E\left(x\right)=\sum_{\left(i,j\right)\in E}{{\left\|x_j-x_i\right\|}^2}+\mu \sum_{f\in F_C}{{\left\|x_f-x^0_f\right\|}^2}$ where $$F_C$$ is the subset of constrained faces (it could be all the faces), and for each face $$f=\left(i_1,\dots ,i_{n_f}\right)$$ we express the centroid $$x_f$$ as the average of the face vertex coordinates $x_f=\frac{1}{n_f}(x_{i_1}+\dots +x_{i_{n_f}})$ so that the overall function can be regarded as a function of only the vertex coordinates, and $$x^0_f$$ is the target three dimensional point value for the face centroid. For example, $$x^0_f$$ could be the initial value of the face centroid before any smoothing is applied. Even though we would then start the algorithm with the term of the performance function corresponding to the face centroid constraints identically equal to zero, it may become nonzero after one or more iterations while the overall function decreases. By applying the generalized Jacobi strategy of minimizing with respect to each variable independently, we obtain the following expression for each displacement $$\triangle x_i$$ $\triangle x_i=\frac{1}{n_i+\mu \sum_{f\in F_C(i)}{\frac{1}{n_f}}}\left(\sum_{j\in V(i)}{(x_j-x_i)}+\mu \sum_{f\in F_C\left(i\right)}{\frac{1}{n_f}}(x^0_f-x_f)\right)$ where $$F_C(i)$$ is the subset of constrained faces $$f$$ which contain the vertex index $$i$$. These displacements and normalization factors can be accumulated as in the previous algorithms by initializing to zero, traversing the array of edges, traversing the array of constrained faces, and then normalizing. Once the displacements are computed, the vertex coordinates are updated as in Laplacian smoothing.
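A sketch of how these displacements might be accumulated in Python, following the three traversals just described (edge array, constrained-face array, normalization); the data layout matches the earlier sketches and is an assumption, not a prescription:

```python
import numpy as np

def centroid_constrained_step(X, edges, constrained_faces, target_centroids, mu, lam=0.5):
    """One smoothing step with soft face-centroid constraints.

    constrained_faces : list of faces (tuples of vertex indices)
    target_centroids  : array of target centroids x_f^0, one per constrained face
    """
    dx = np.zeros_like(X)
    w = np.zeros(X.shape[0])                   # normalization factors per vertex
    for i, j in edges:                         # Laplacian part
        dx[i] += X[j] - X[i]
        dx[j] += X[i] - X[j]
        w[i] += 1.0
        w[j] += 1.0
    for face, x0_f in zip(constrained_faces, target_centroids):
        n_f = len(face)
        x_f = X[list(face)].mean(axis=0)       # current centroid of face f
        for i in face:                         # constraint part, weight mu / n_f
            dx[i] += (mu / n_f) * (x0_f - x_f)
            w[i] += mu / n_f
    nonzero = w > 0
    dx[nonzero] /= w[nonzero, None]
    return X + lam * dx
```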
## Face Normal Constraints

None of the constraints discussed so far allow for direct control of local surface orientation. A smoothing algorithm able to selectively control local surface orientation is a useful tool within an interactive polygon mesh editing system, and yet another possible way to prevent the shrinkage problem of Laplacian smoothing. To be able to control surface orientation, we need to introduce surface normal vectors into the performance function to be minimized. As we have done for the face centroids, one possibility is to derive an expression for a face normal vector as a function of the face vertex coordinates, and then add an error term to the performance function for all or some of the faces. Since doing so results in nonlinear equations to be solved for the displacement vectors, we propose a simpler alternative approach. We consider the following performance function $E\left(x\right)=\sum_{(i,j)\in E}{{\left\|x_j-x_i\right\|}^2}+\mu \sum_{f\in F_N}{\sum_{(i,j)\in E(f)}{{(u^t_f(x_j-x_i))}^2}}$ where the first term is the sum of square edge lengths as in all the previous performance functions, and the second term is a sum over a subset $$F_N$$ of faces where we want to impose the face normal constraint, and over the edges $$\left(i,j\right)$$ incident to each face $$f$$ of this set. For each such face-edge pair we impose as a soft constraint that the face normal vector $$u_f$$ be orthogonal to the face boundary vector $$x_j-x_i$$. The face normal vectors $$u_f$$ are provided by the user as additional inputs to the algorithm. Although in our polygon mesh representation the faces are not forced to be planar, for a face to be planar this condition must be satisfied for all of its face boundary vectors. Note that, with the constrained face normal vectors regarded as constants, this performance function is also quadratic and homogeneous in the vertex coordinates. In this case the displacement vectors satisfy the following linear equations $\left(n_iI+\mu \sum_{f\in F_N(i)}{\sum_{j\in V(f,i)}{u_fu^t_f}}\right)\triangle x_i=\sum_{j\in V(i)}{(x_j-x_i)}+\mu \sum_{f\in F_N(i)}{\sum_{j\in V(f,i)}{u_fu^t_f}}(x_j-x_i)$ where $$I$$ is the $$3\times 3$$ identity matrix, and $$V\left(f,i\right)=V(f)\cap V(i)$$ is the set of vertices which belong to the face $$f$$ and are first order neighbors of vertex $$i$$ (there are exactly two such vertices when the face contains vertex $$i$$). The $$3\times 3$$ matrix on the left hand side multiplying $$\triangle x_i$$ is symmetric and positive definite, can be accumulated during the mesh traversal along with the other sums, and can be easily inverted using its Cholesky decomposition.
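The per-vertex systems are only $$3\times 3$$, so they can be assembled and solved directly. The following sketch uses NumPy's generic solver instead of an explicit Cholesky factorization, which would work equally well since the matrix is symmetric positive definite; the data layout and names are assumptions carried over from the earlier sketches:

```python
import numpy as np

def normal_constrained_displacements(X, edges, constrained, mu):
    """Assemble and solve the 3x3 systems for the face-normal-constrained step.

    constrained : list of (face, u_f) pairs, u_f a unit face normal given by the user
    """
    n_vertices = X.shape[0]
    A = np.zeros((n_vertices, 3, 3))             # left-hand side matrices
    b = np.zeros_like(X)                         # right-hand side vectors
    for i, j in edges:                           # Laplacian terms
        A[i] += np.eye(3)
        A[j] += np.eye(3)
        b[i] += X[j] - X[i]
        b[j] += X[i] - X[j]
    for face, u_f in constrained:
        P = mu * np.outer(u_f, u_f)              # mu * u_f u_f^t
        n_f = len(face)
        for k in range(n_f):
            i, j = face[k], face[(k + 1) % n_f]  # boundary edge (i, j) of face f
            # each vertex of f receives contributions from its two neighbors in f,
            # i.e. from the set V(f, i) described above
            A[i] += P
            A[j] += P
            b[i] += P @ (X[j] - X[i])
            b[j] += P @ (X[i] - X[j])
    return np.array([np.linalg.solve(A[i], b[i]) if A[i].any() else np.zeros(3)
                     for i in range(n_vertices)])
```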
## Smoothing Face Normal Vectors

Now let's assume that we have a face normal vector $$u_f$$ for every face of the polygon mesh. We concatenate these face normal vectors to form a vector $$u$$ of dimension $$3N_F$$, which we consider not a constant provided by the user, but a new variable. The variables $$u$$ and $$x$$ are of course not independent, but, rather than imposing their relations as hard constraints, we regard them as independent variables and represent their relations as soft constraints, as in the case of face normal constraints. To remove noise from the face normal vectors we consider the performance function $E\left(u\right)=\sum_{(f,g)\in E^*}{{\left\|u_f-u_g\right\|}^2}$ This is the sum, over the dual mesh edges, of the square differences of face normal vectors. The set of dual mesh edges $$E^*$$ is composed of pairs $$(f,g)$$ of faces which share a regular edge (one which has exactly two incident faces). Formally, the dual graph of a mesh has the faces of the mesh as dual graph vertices, and the dual mesh edges as dual graph edges. It is important to note the similarity between this performance function and the sum of square edge lengths. If we initialize the face normal vectors from the vertex coordinates, this performance function can be used to first remove noise from the face normal vectors, and then use the smoothed face normals as face normal constraints in a second smoothing process applied to the vertex coordinates. This second process of smoothing the vertex coordinates with face normal constraints can be regarded as the integration of the smoothed face normals. Applying the generalized Jacobi strategy to this performance function, we can compute a displacement vector $$\triangle u_f$$ for each face normal vector. Since, after these face normal displacements are applied, the face normal vectors may no longer be of unit length, we normalize the updated face normal vectors to unit length, and then perform the face normal integration step. Another alternative is to consider the following performance function $E\left(x,u\right)=\sum_{(i,j)\in E}{{\left\|x_j-x_i\right\|}^2}+\mu \sum_{f\in F}{\sum_{(i,j)\in E(f)}{{(u^t_f(x_j-x_i))}^2}}+\gamma \sum_{(f,g)\in E^*}{{\left\|u_f-u_g\right\|}^2}$ and apply the Jacobi minimization strategy to the variables $$x$$ and $$u$$ together. In this way we can determine displacement vectors $$\triangle x_i$$ and $$\triangle u_f$$ as functions of the variables $$x$$ and $$u$$, and then update both variables at once, followed by normalization of the face normal vectors to unit length. We leave the details of these derivations to the reader. Note that the two approaches discussed here allow for hard constraints to be applied to a subset of the face normal vectors. As in the case of hard vertex coordinate constraints, the constrained values are simply not updated. Also, soft face normal constraints can be imposed by adding yet another term to the last performance function, composed of the sum, over a subset of faces, of the square errors between face normal vectors and target face normal vectors.

## Combining all the Smoothing Strategies

We have considered a number of ways to add constraints to Laplacian smoothing. All these strategies can be combined into a single general polygon mesh smoothing algorithm which allows us to impose hard and soft vertex constraints on disjoint subsets of vertex indices, soft face centroid constraints on a subset of faces, and hard and soft face normal constraints on disjoint subsets of faces. And all these constraints can be applied simultaneously. Following the same strategy, constraints on higher order properties such as curvature can be imposed as well, but at this point we will let the reader figure out how to do so.
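To close, here is a sketch of the face-normal smoothing step described in the previous section: a Laplacian step on the dual graph followed by renormalization. The dual edges are assumed to be given as pairs of face indices, which can be read off the $$F(e)$$ sets built earlier; all names are illustrative:

```python
import numpy as np

def smooth_face_normals(U, dual_edges, lam=0.5):
    """One Laplacian step on the face normals U (shape (N_F, 3)) over dual edges."""
    dU = np.zeros_like(U)
    n = np.zeros(U.shape[0])
    for f, g in dual_edges:
        dU[f] += U[g] - U[f]
        dU[g] += U[f] - U[g]
        n[f] += 1
        n[g] += 1
    nonzero = n > 0
    dU[nonzero] /= n[nonzero, None]
    U_new = U + lam * dU
    norms = np.linalg.norm(U_new, axis=1, keepdims=True)
    return U_new / np.where(norms > 0, norms, 1.0)   # renormalize to unit length
```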
https://aitopics.org/mlt?cdid=arxivorg%3A26D988B8&dimension=pagetext
to ### SEED: Self-supervised Distillation For Visual Representation This paper is concerned with self-supervised learning for small models. The problem is motivated by our empirical studies that while the widely used contrastive self-supervised learning method has shown great progress on large model training, it does not work well for small models. To address this problem, we propose a new learning paradigm, named SElf-SupErvised Distillation (SEED), where we leverage a larger network (as Teacher) to transfer its representational knowledge into a smaller architecture (as Student) in a self-supervised fashion. Instead of directly learning from unlabeled data, we train a student encoder to mimic the similarity score distribution inferred by a teacher over a set of instances. We show that SEED dramatically boosts the performance of small networks on downstream tasks. Compared with self-supervised baselines, SEED improves the top-1 accuracy from 42.2% to 67.6% on EfficientNet-B0 and from 36.3% to 68.2% on MobileNet-v3-Large on the ImageNet-1k dataset. ### Distill on the Go: Online knowledge distillation in self-supervised learning Self-supervised learning solves pretext prediction tasks that do not require annotations to learn feature representations. For vision tasks, pretext tasks such as predicting rotation, solving jigsaw are solely created from the input data. Yet, predicting this known information helps in learning representations useful for downstream tasks. However, recent works have shown that wider and deeper models benefit more from self-supervised learning than smaller models. To address the issue of self-supervised pre-training of smaller models, we propose Distill-on-the-Go (DoGo), a self-supervised learning paradigm using single-stage online knowledge distillation to improve the representation quality of the smaller models. We employ deep mutual learning strategy in which two models collaboratively learn from each other to improve one another. Specifically, each model is trained using self-supervised learning along with distillation that aligns each model's softmax probabilities of similarity scores with that of the peer model. We conduct extensive experiments on multiple benchmark datasets, learning objectives, and architectures to demonstrate the potential of our proposed method. Our results show significant performance gain in the presence of noisy and limited labels and generalization to out-of-distribution data. ### Big Self-Supervised Models are Strong Semi-Supervised Learners One paradigm for learning from few labeled examples while making best use of a large amount of unlabeled data is unsupervised pretraining followed by supervised fine-tuning. Although this paradigm uses unlabeled data in a task-agnostic way, in contrast to common approaches to semi-supervised learning for computer vision, we show that it is surprisingly effective for semi-supervised learning on ImageNet. A key ingredient of our approach is the use of big (deep and wide) networks during pretraining and fine-tuning. We find that, the fewer the labels, the more this approach (task-agnostic use of unlabeled data) benefits from a bigger network. After fine-tuning, the big network can be further improved and distilled into a much smaller one with little loss in classification accuracy by using the unlabeled examples for a second time, but in a task-specific way. 
The proposed semi-supervised learning algorithm can be summarized in three steps: unsupervised pretraining of a big ResNet model using SimCLRv2, supervised fine-tuning on a few labeled examples, and distillation with unlabeled examples for refining and transferring the task-specific knowledge. This procedure achieves 73.9% ImageNet top-1 accuracy with just 1% of the labels ($\le$13 labeled images per class) using ResNet-50, a $10\times$ improvement in label efficiency over the previous state-of-the-art. With 10% of labels, ResNet-50 trained with our method achieves 77.5% top-1 accuracy, outperforming standard supervised training with all of the labels. ### Efficient Self-supervised Vision Transformers for Representation Learning This paper investigates two techniques for developing efficient self-supervised vision transformers (EsViT) for visual representation learning. First, we show through a comprehensive empirical study that multi-stage architectures with sparse self-attentions can significantly reduce modeling complexity but with a cost of losing the ability to capture fine-grained correspondences between image regions. Second, we propose a new pre-training task of region matching which allows the model to capture fine-grained region dependencies and as a result significantly improves the quality of the learned vision representations. Our results show that combining the two techniques, EsViT achieves 81.3% top-1 on the ImageNet linear probe evaluation, outperforming prior arts with around an order magnitude of higher throughput. When transferring to downstream linear classification tasks, EsViT outperforms its supervised counterpart on 17 out of 18 datasets. The code and models will be publicly available. ### Contrastive Representation Distillation Often we wish to transfer representational knowledge from one neural network to another. Examples include distilling a large network into a smaller one, transferring knowledge from one sensory modality to a second, or ensembling a collection of models into a single estimator. Knowledge distillation, the standard approach to these problems, minimizes the KL divergence between the probabilistic outputs of a teacher and student network. We demonstrate that this objective ignores important structural knowledge of the teacher network. This motivates an alternative objective by which we train a student to capture significantly more information in the teacher's representation of the data. We formulate this objective as contrastive learning. Experiments demonstrate that our resulting new objective outperforms knowledge distillation and other cutting-edge distillers on a variety of knowledge transfer tasks, including single model compression, ensemble distillation, and cross-modal transfer. Our method sets a new state-of-the-art in many transfer tasks, and sometimes even outperforms the teacher network when combined with knowledge distillation. Code: http://github.com/HobbitLong/RepDistiller.
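As context for the knowledge-distillation objective mentioned in the last abstract (minimizing the KL divergence between the probabilistic outputs of a teacher and a student network), here is a minimal PyTorch-style sketch; the temperature value and the T*T gradient rescaling convention are common practice but are assumptions here, not details taken from any of the papers above:

```python
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, T=4.0):
    """Standard KD loss: KL(teacher || student) on temperature-softened outputs."""
    log_p_student = F.log_softmax(student_logits / T, dim=1)
    p_teacher = F.softmax(teacher_logits / T, dim=1)
    # batchmean gives the mean KL per example; T*T rescales the gradients
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * (T * T)
```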
http://simple.wikipedia.org/wiki/Newton's_law_of_universal_gravitation
Newton's law of universal gravitation

Statue of Isaac Newton in the chapel of Trinity College, Cambridge

Newton's universal law of gravitation is a physical law that describes the attraction between two objects with mass. It is stated in Isaac Newton's Philosophiae Naturalis Principia Mathematica.[1][2] The law is part of classical mechanics. The formula is

$F_{g} = G \frac{m_1 m_2}{r^2},$

In this equation:
• "Fg" is the total gravitational force between the two objects.
• "G" is the gravitational constant.
• "m1" is the mass of the first object.
• "m2" is the mass of the second object.
• "r" is the distance between the centre of the first object and the centre of the second object.

References
1. "Sir Isaac Newton: The Universal Law of Gravitation". Astronomy 161. Retrieved 2009124.
2. Cox, Brian; Forshaw, Jeff (2011). The Quantum Universe: Everything That Can Happen Does Happen. Allen Lane. p. 14.
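As a quick numerical illustration of the formula above, here is a short Python sketch; the Earth and Moon values are rounded approximations chosen for the example, not figures taken from the article:

```python
G = 6.674e-11          # gravitational constant, N m^2 / kg^2

def gravitational_force(m1, m2, r):
    """Newton's law of universal gravitation: F = G * m1 * m2 / r**2."""
    return G * m1 * m2 / r**2

# Approximate Earth-Moon attraction
m_earth = 5.97e24      # kg
m_moon = 7.35e22       # kg
r = 3.84e8             # metres, centre to centre
print(gravitational_force(m_earth, m_moon, r))   # roughly 2e20 newtons
```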
https://math.stackexchange.com/questions/956869/mutual-independence-definition-clarification
# Mutual Independence Definition Clarification Let $Y_1, Y_2, ..., Y_n$ be iid random variables and $B_1, B_2, ..., B_n$ be Borel sets. It follows that $P(\bigcap_{i=1}^{n} (Y_i \in B_i)) = \Pi_{i=1}^{n} P(Y_i \in B_i)$...I think? If so, does the converse hold true? My Stochastic Calculus professor says it does (or maybe misinterpreted him somehow?), but I was under the impression that independence of the n random variables was equivalent to saying for any indices $i_1, i_2, ..., i_k$ $P(\bigcap_{j=i_1}^{i_k} (Y_j \in B_j)) = \Pi_{j=i_1}^{i_k} P(Y_j \in B_j)$. So, if the RVs are independent, then we can choose $i_j=j$ and k=n to get $P(\bigcap_{i=1}^{n} (Y_i \in B_i)) = \Pi_{i=1}^{n} P(Y_i \in B_i)$, but given $P(\bigcap_{i=1}^{n} (Y_i \in B_i)) = \Pi_{i=1}^{n} P(Y_i \in B_i)$, I don't know how to conclude that for any indices $i_1, i_2, ..., i_n$ $P(\bigcap_{j=i_1}^{i_k} (Y_j \in B_j)) = \Pi_{j=i_1}^{i_k} P(Y_j \in B_j)$, if that's even the right definition. p.17 here seems to suggest otherwise. idk Also this: or So, this answer is to use the Omega part to establish pairwise independence and ultimately conclude independence. Without that assumption, we cannot conclude independence. Is that right? Why does that not contradict the definition of independence: $P(\bigcap_{i=1}^{n} (Y_i \in B_i)) = \Pi_{i=1}^{n} P(Y_i \in B_i)$ ? • How does p. 17 in your link seem to suggest otherwise? – Stefan Hansen Oct 6 '14 at 7:15 • @stefan hansen sorry unclear I meant it goes against me and supports you guys – BCLC Oct 6 '14 at 8:24 • @StefanHansen added pictures. – BCLC Oct 7 '14 at 18:07 • math.stackexchange.com/questions/924865/… – BCLC Jul 12 '15 at 13:48 I was [under][2] the [impression][3] that independence of the n random variables was equivalent to saying for any indices $i_1, i_2, ..., i_n$ $P(\bigcap_{j=i_1}^{i_n} (Y_j \in B_j)) = \Pi_{j=i_1}^{i_n} P(Y_j \in B_j)$. You misread: independence of $n$ random variables $(Y_1,\ldots,Y_n)$ is equivalent to the following condition: (C) For every distinct indices $i_1, i_2, ..., i_k$ and every $B_j$, $P(\bigcap\limits_{j=i_1}^{i_k} (Y_j \in B_j)) = \prod\limits_{j=i_1}^{i_k} P(Y_j \in B_j)$. Indeed, choosing $k=n$ and $i_j=j$, (C) implies condition (C'): (C') For every $B_j$, $P(\bigcap\limits_{i=1}^{n} (Y_i \in B_i)) = \prod\limits_{i=1}^{n} P(Y_i \in B_i)$. In the other direction, if (C') holds, then, for every distinct indices $i_1, i_2, ..., i_k$ and every $B_j$, one can complete the collection of events by $(Y_s\in\mathbb R)$ for the $n-k$ missing indices $s$, then (C) follows. • OMG SORRY. (C) is totally what I meant. Thanks Did. Anyway, what? Why does C' imply C? This seems to suggest otherwise. engr.mun.ca/~ggeorge/MathGaz04.pdf I mean, isn't the example what my prof was asserting? – BCLC Oct 3 '14 at 17:35 • As I said, (C) and (C') are equivalent. Note that (C) and (C') assert that some property holds for every Borel subsets B_i, not for some specific collection (B_i). – Did Oct 3 '14 at 20:05 • My prof did say something about for every $B_i$ but how is that relevant? I honestly don't get how C follows from your splitting up of k and n-k... 
– BCLC Oct 3 '14 at 23:31 • If $P(Y_1\in B_1,Y_2\in B_2,Y_3\in B_3,Y_4\in B_4,Y_5\in B_5)$ is what it should be for every $(B_1,B_2,B_3,B_4,B_5)$ then $P(Y_1\in B_1,Y_3\in B_3,Y_4\in B_4)$ is what it should be for every $(B_1,B_3,B_4)$ since $$P(Y_1\in B_1,Y_3\in B_3,Y_4\in B_4)=P(Y_1\in B_1,Y_2\in\mathbb R,Y_3\in B_3,Y_4\in B_4,Y_5\in\mathbb R).$$ – Did Oct 3 '14 at 23:48 • READ MY COMMENT and THINK about it. – Did Oct 7 '14 at 18:22
https://scalability.org/2007/10/source-of-amusement-for-a-monday-evening/
# Source of amusement for a monday evening Update 24-Dev-2007: One of the site owner listed below contacted me and asked me to remove their personal information which was contained in the site registration. I complied. I have not checked whether or not their system is still an attack host. It is very important that people with good intentions protect their systems before placing them on the net. It is generally very hard to do this for windows, and fairly easy to do this for linux. For linux, look at the Firestarter package to make setting up a firewall fairly trivial. [end of update] Alrighty. I am sitting here fighting with a now mostly functional diskless SuSE 10.2 installation, when an email arrives. In my spam box. I check that about 4 times a day. Clean it out once a month or so. Usually with 8-9000 spam. Going to have to stop looking at it … Ok, back to the story. So I get this email. It did something no other spam has done in a while. It got my attention. Here is a snapshot. Nice huh? Wakes you up for a second. Note the spelling errors. One would not expect that official US government email would come complete with spelling and grammatical errors. Ok, so the grammar may not be in error, but it is not what one might expect out of a native American english speaker. Nor would one assume that US government email would be used as a vehicle for notification of a legal issue. The US government is wedded to paper. A real issue would arrive via snail mail. Well, for maybe more than a second, I didn’t know what I was looking at. Racked my brain for all of 10 seconds trying to remember a customer by the name of George Hanson. Then I thought, well, its likely in my spam box for a reason. Lets go look at the links. No, not clicking the links, look at them. Before we do, its worth defining a useful operation from mathematics. This operation is called projection. Think of it as the shadow one vector makes on another. A vector could be pencil in this case. A value close to 1 for a set of unit length (e.g. length equals 1) vectors is probably a close match, and they point to very nearly the same thing. A value close to 0 indicates that the vectors point in different directions. So why am I telling you this? Simple. You can tell whether or not something is a phishing scam by inspection (e.g. looking at it), if you can see if the link in the href aligns with and is the same as or nearly the same as the link in the text. Both are vectors. Both point you somewhere. In this case, the critical link was indicating in text that it pointed to http://ftc.gov/fraud/complaints/24_oct_2007_george_hanson.doc, but really it pointed to modhgil.com/1maverick//media/… Hmmm… modhgil.com. Not ftc.gov. They have a projection of about 0. landman@lightning:~/Desktop\$ whois modhgil.com Whois Server Version 2.0 Domain names in the .com and .net domains can now be registered with many different competing registrars. Go to http://www.internic.net for detailed information. Domain Name: MODHGIL.COM Registrar: ENOM, INC. Whois Server: whois.enom.com Referral URL: http://www.enom.com Name Server: NS0.PROVIDER-ONE.NET Name Server: NS1.PROVIDER-ONE.NET Name Server: NS2.PROVIDER-ONE.NET Name Server: NS3.PROVIDER-ONE.NET Status: ok Updated Date: 04-apr-2007 Creation Date: 01-apr-2005 Expiration Date: 01-apr-2009 >>> Last update of whois database: Mon, 29 Oct 2007 22:27:00 UTC < << ... Domain name: modhgil.com Registrant Contact: MODHGIL.COM [deleted at request of site owner] GB Ok, I am snickering now. 
I ran over to the FTC's site to see if they had any news of this, and sure enough, on the first page Don't Open Bogus Email that Claims to Come From the FTC Email That States It's From the FTC's 'Fraud Department' Has Virus Attached A bogus email is circulating that says it is from the Federal Trade Commission, referencing a 'complaint' filed with the FTC against the email's recipient. The email includes links and an attachment that download a virus. As with any suspicious email, the FTC warns recipients not to click on links within the email and not to open any attachments. The spoof email includes a phony sender's address, making it appear the email is from 'frauddep@ftc.gov' and also spoofs the return-path and reply-to fields to hide the email's true origin. While the email includes the FTC seal, it has grammatical errors, misspellings, and incorrect syntax. Recipients should forward the email to spam@uce.gov and then delete it. Emails sent to that address are kept in the FTC's spam database to assist with investigations. For laughs, lets see if this is a compromised machine. nmap modhgil.com Starting Nmap 4.20 ( http://insecure.org ) at 2007-10-29 18:29 EDT Interesting ports on p1host5-shared.provider-noc.net (87.236.89.24): Not shown: 1684 filtered ports PORT STATE SERVICE 21/tcp open ftp 22/tcp open ssh 25/tcp open smtp 53/tcp open domain 80/tcp open http 106/tcp open pop3pw 110/tcp open pop3 143/tcp open imap 443/tcp open https 465/tcp open smtps 993/tcp open imaps 995/tcp open pop3s 8443/tcp open https-alt Well, it looks like someone has a few too many open ports, but it could just be part of the ruse. The email itself seems to have come from s20.80code.com. This could be forged though. Remember when whois used to give you range information for IPs? Ahhh the good old days. Now we just see this address. Well, for laughs, lets ping the s20.80code.com. Yup, the IP address maps back into is what is in the mail header. There is a real web site there. It looks like a real business. The problem is that the headers may have been forged. So we know the email is a fraud, took about 10 seconds to figure that out. What did my automated tagging pipeline say (all mails traverse this): X-Spam-Report: * 5.0 BOGOFILTER Bogosity: bogofilter thinks this mail is crap * 1.5 HTML_IMAGE_ONLY_20 BODY: HTML: images with 1600-2000 bytes of words * 0.0 HTML_MESSAGE BODY: HTML included in message * 0.0 BAYES_50 BODY: Bayesian spam probability is 40 to 60% * [score: 0.5000] * 1.9 MIME_HTML_ONLY BODY: Message only has text/html MIME parts * 1.2 MIME_HEADER_CTYPE_ONLY 'Content-Type' found without required MIME Yup, I think Bogofilter summed it up nicely. ### 2 thoughts on “Source of amusement for a monday evening” 1. This site is being listed under a search on my name. can you please do me a favor? if you still find that modhgil.com is causing grief online and is compromised, please do let me know. if you think has been sorted out (which I think I have), may i request you to close this post as this shows my personal details (personally identifiable information). regards, punit 2. I removed the personal details. I think the post is certainly still relevant, it shows an attack vector. Given the nature of the attempt to defraud us, I am all too happy to expose both the attack and method of attack so that anyone googling for it may in fact find it. I do sincerely hope that you were merely an innocent victim, and that your servers were somehow unfortunately compromised. 
If so, I hope you have locked them down, hard, and they are only serving/transporting content you personally approve of. The defrauding attempt was easy to see through, took all of a few seconds of my time to notice most of the problems, and further note that this is not how the government actually works here. I will continue to expose all fraud attempts I see against us (ignoring the hundreds of emails I get each day about ebay and related).
https://www.physicsforums.com/threads/conceptual-issue-with-average-acceleration-problem-in-giancoli.762744/
# Conceptual issue with average acceleration problem in Giancoli

1. Jul 22, 2014

### knyaz

Hi all, I recently came upon the following problem in Giancoli's Physics Principles With Applications, 6th ed. textbook that I am having some issues understanding. I am new here, so I do apologize if I have made any mistakes in formatting or anything else.

1. The problem statement, all variables and given/known data

A 0.140-kg baseball traveling 35.0 m/s strikes the catcher's mitt, which, in bringing the ball to rest, recoils backward 11.0 cm. What was the average force applied by the ball on the glove?

2. Relevant equations

1.) F_avg = m * a_avg

This is where I am struggling: the solutions manual to this book states that the following equation is also to be used:

2.) a_avg = (v² - v₀²) / (2(x - x₀))

3. The attempt at a solution

I am familiar with that second equation up there when used to find constant acceleration, but I do not see how it can be used to find average acceleration. The derivation of this equation from the basic definitions of average velocity and acceleration involves using the following:

v_avg = (v + v₀)/2

As far as I can tell, this wouldn't work for average acceleration: at non-constant acceleration, the average velocity would not necessarily be halfway between the initial and final velocities. Thus, it seems to me that the equation the solutions manual uses, no. 2 above, cannot be used in solving the problem. Is the solutions manual wrong, then, and is the problem unsolvable (to anyone this far in the textbook, at any rate) as it is stated? Please do tell me if I am missing something in my reasoning on this, or if I am otherwise incorrect.

I also do want to mention that I saw a similar post on here, involving this exact conceptual issue with a very similar problem. However, there the problem had asked for an estimate, which meant that eq (2) above was alright to use as an estimate of average acceleration. Here, the problem seems to ask for an exact answer; I do not know if it was meant to ask for an estimate, instead.

Thanks for hearing me out on this, and thanks in advance to anyone who replies!

-Alex

2. Jul 22, 2014

### Staff: Mentor

Hi knyaz, Welcome to Physics Forums.

I agree with you that there is some ambiguity here. If you combine equations 1 and 2 to eliminate the average acceleration, you get the correct result for the average force, if, by the average force, you mean the force averaged over the distance. And, you get the average acceleration by dividing the average force by the mass. So, the average acceleration here is the acceleration averaged over the distance. But, if, by the average force, you mean the force averaged over the time, then no.

Chet

3. Jul 23, 2014

### haruspex

Hi knyaz, I agree with you completely. My position is a little different from Chet's. I don't see "average force" as ambiguous. It means an average over time. If the question wants you to take an average over distance then it should specify that. This issue comes up regularly on this forum.

4. Jul 23, 2014

### Staff: Mentor

Hi guys, I don't have as much of an issue working with two different (and conflicting) definitions of average force. One nice feature of the force averaged over the displacement is that it is related to the amount of work done. But, if someone refers to the average force (without any qualification), I'm automatically thinking "force averaged over time."
If the problem statement wants you to be working with the force averaged over the displacement, in my opinion, they must specify this explicitly. So, in short, I agree with you guys. Chet 5. Jul 23, 2014 ### BiGyElLoWhAt I don't know if you're this far yet, but it would be really easy to solve this using energies. (Hinted at by chet) 6. Jul 23, 2014 ### vela Staff Emeritus I concur with you and the others. The given solution at best yields an estimate of the average force. The average acceleration is defined as $a_\text{avg} = \frac{\Delta v}{\Delta t}$. You can think of it as the constant acceleration that produces the same $\Delta v$ in the same $\Delta t$. So far so good. You're given $\Delta x$ and $\Delta v$. You need $\Delta t$. The question is, can you infer what $\Delta t$ is from $\Delta x$? If you assume the acceleration is constant, like the solutions did, you can use the familiar kinematic equations to essentially find $\Delta t$ and the resulting average acceleration. The problem is that that assumption is bad. For different accelerations, the ball will travel different distances coming to rest. You can see this in the plots of velocity vs. time I've attached. For all curves, $\Delta v$ and $\Delta t$ are the same so the average accelerations are the same, but the displacement, $\Delta x$, which is represented by the area under the curves, is different for each curve. So the same average acceleration can correspond to different displacements. If you require that the displacement be 11.0 cm, then something else has to give, namely $\Delta t$. The time the ball takes to come to rest will depend on how the acceleration varies over time, resulting in different average accelerations. #### Attached Files: • ###### vt.png File size: 5.3 KB Views: 51 Last edited: Jul 23, 2014 7. Jul 23, 2014 ### knyaz Thanks everyone for the replies, I really do appreciate it. It was mainly the means of obtaining the average acceleration given in the solution that I was having issues with, because as vela pointed out there are any number of ways in which the ball could come to rest (thanks vela, I actually understand this a lot better now). It was particularly confusing that the author had explicitly stated that that equation could only be used under the assumption that acceleration is constant, which is not necessarily the case here. 8. Jul 24, 2014 ### haruspex In many cases, SHM would be a better guess for an approximation of the impact profile. (It's interesting to work out the ratio between average over time and average over distance in that case.) For the ball catcher scenario, I would think there is an initial very large force as the hand/arm is accelerated rapidly up to the speed of the ball, then a more-or-less constant force as the muscles bring the arm back to rest.
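For reference, the number the solutions manual presumably intends (the constant-acceleration, distance-averaged estimate discussed in this thread) can be computed in a couple of lines; this is a sketch of that estimate only, not an endorsement of it as the time-averaged force:

```python
m = 0.140      # kg
v0 = 35.0      # m/s
d = 0.110      # m, distance over which the mitt recoils

a_avg = v0**2 / (2 * d)    # magnitude, assuming constant deceleration
F_avg = m * a_avg
print(a_avg, F_avg)        # about 5.57e3 m/s^2 and 7.8e2 N
```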
https://tepost.icu/article/ncert-solutions-for-mathematics-education-11-chapter-2-relations-and-functions-ex-2-2-exercise-2-2
# NCERT Solutions for Class 11 Mathematics Chapter 2 Relations and Functions (Ex. 2.2) Exercise 2.2

### Exercise 2.2

1. Let $A = \{1,2,3,\dots,14\}$. Define a relation $R$ from $A$ to $A$ by $R = \{(x,y): 3x - y = 0\}$, where $x,y \in A$. Write down its domain, codomain and range.

Answer: The relation $R$ from $A$ to $A$ is given as $R = \{(x,y): 3x - y = 0\}$, that is, $R = \{(x,y): 3x = y\}$ where $x,y \in A$. Therefore $R = \{(1,3),(2,6),(3,9),(4,12)\}$. The domain of $R$ is the set of all first elements of the ordered pairs in the relation, so the domain of $R$ is $\{1,2,3,4\}$. The whole set $A$ is the codomain of the relation $R$, so the codomain of $R$ is $\{1,2,3,\dots,14\}$. The range of $R$ is the set of all second elements of the ordered pairs in the relation, so the range of $R$ is $\{3,6,9,12\}$.

2. Define a relation $R$ on the set $N$ of natural numbers by $R = \{(x,y): y = x + 5,\ x \text{ is a natural number less than } 4;\ x,y \in N\}$. Depict this relationship in roster form. Write down the domain and the range.

Answer: The natural numbers less than 4 are 1, 2 and 3. Therefore $R = \{(1,6),(2,7),(3,8)\}$. The domain of $R$ is $\{1,2,3\}$ and the range of $R$ is $\{6,7,8\}$.

3. Let $A = \{1,2,3,5\}$ and $B = \{4,6,9\}$. Define a relation $R$ from $A$ to $B$ by $R = \{(x,y): \text{the difference between } x \text{ and } y \text{ is odd};\ x \in A,\ y \in B\}$. Write $R$ in roster form.

Answer: $R = \{(1,4),(1,6),(2,9),(3,4),(3,6),(5,4),(5,6)\}$.

4. The given figure shows a relationship between the sets $P$ and $Q$. Write this relation (i) in set-builder form, (ii) in roster form. What is its domain and range?

Answer: (i) According to the given figure, $P = \{5,6,7\}$ and $Q = \{3,4,5\}$, so in set-builder form $R = \{(x,y): y = x - 2;\ x \in P\}$, or equivalently $R = \{(x,y): y = x - 2 \text{ for } x = 5,6,7\}$. (ii) In roster form $R = \{(5,3),(6,4),(7,5)\}$. The domain of $R$ is $\{5,6,7\}$ and the range of $R$ is $\{3,4,5\}$.

5. Let $A = \{1,2,3,4,6\}$. Let $R$ be the relation on $A$ defined by $\{(a,b): a,b \in A,\ b \text{ is exactly divisible by } a\}$. (i) Write $R$ in roster form. (ii) Find the domain of $R$. (iii) Find the range of $R$.

Answer: (i) $R = \{(1,1),(1,2),(1,3),(1,4),(1,6),(2,2),(2,4),(2,6),(3,3),(3,6),(4,4),(6,6)\}$. (ii) The domain of $R$ is $\{1,2,3,4,6\}$. (iii) The range of $R$ is $\{1,2,3,4,6\}$.

6. Determine the domain and range of the relation $R$ defined by $R = \{(x, x+5): x \in \{0,1,2,3,4,5\}\}$.

Answer: $R = \{(0,5),(1,6),(2,7),(3,8),(4,9),(5,10)\}$. The domain of $R$ is $\{0,1,2,3,4,5\}$ and the range of $R$ is $\{5,6,7,8,9,10\}$.

7. Write the relation $R = \{(x, x^3): x \text{ is a prime number less than } 10\}$ in roster form.

Answer: The prime numbers less than 10 are 2, 3, 5 and 7. Therefore the roster form is $R = \{(2,8),(3,27),(5,125),(7,343)\}$.

8. Let $A = \{x,y,z\}$ and $B = \{1,2\}$. Find the number of relations from $A$ to $B$.

Answer: $A \times B = \{(x,1),(x,2),(y,1),(y,2),(z,1),(z,2)\}$. Since $n(A \times B) = 6$, the number of subsets of $A \times B$ is $2^6 = 64$, so there are 64 relations from $A$ to $B$.

9. Let $R$ be the relation on $Z$ defined by $R = \{(a,b): a,b \in Z,\ a - b \text{ is an integer}\}$. Find the domain and range of $R$.

Answer: The difference of two integers is always an integer. Hence the domain of $R$ is $Z$ and the range of $R$ is $Z$.
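As an aside, these roster-form answers are easy to check programmatically; a small Python sketch for Question 1 (illustrative only):

```python
A = range(1, 15)                                   # A = {1, 2, ..., 14}
R = [(x, y) for x in A for y in A if 3 * x - y == 0]
domain = sorted({x for x, _ in R})
rng = sorted({y for _, y in R})
print(R)        # [(1, 3), (2, 6), (3, 9), (4, 12)]
print(domain)   # [1, 2, 3, 4]
print(rng)      # [3, 6, 9, 12]
```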
2023-03-24 16:30:25
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5036492943763733, "perplexity": 1559.6311257430425}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296945287.43/warc/CC-MAIN-20230324144746-20230324174746-00020.warc.gz"}
https://www.physicsforums.com/threads/normal-force-acting-on-a-block-on-an-accelerating-wedge.991769/
# Normal force acting on a block on an accelerating wedge Gold Member Homework Statement: I created this stupid question to help me better understand Newtonian mechanics. Relevant Equations: N/A We have a wedge whose surface is ##\theta## from the horizontal surface. After a block is placed on its frictionless slant surface, the wedge starts to accelerate due to a force F. What is the normal force acting upon the block? I have been trying to solve it but I have no clue. Could someone give me a hint? haruspex Homework Helper Gold Member 2020 Award First step: draw separate FBDs for the two components, showing forces. Next, create variables for the accelerations. Third, write the ΣF=ma equations and any appropriate kinematic equations that express relationships between the accelerations. For that last, you need to represent the fact that the block stays on the wedge surface. Leo Liu Gold Member First step: draw separate FBDs for the two components, showing forces. Next, create variables for the accelerations. Third, write the ΣF=ma equations and any appropriate kinematic equations that express relationships between the accelerations. For that last, you need to represent the fact that the block stays on the wedge surface. Hi. The constraints in this problem give these four distinct equations. Could you tell me if they are correct? $$\begin{pmatrix} a_{sx}\\a_{sy}\\a_t\\N \end{pmatrix}=\begin{pmatrix} 0\\0\\-mg\\F \end{pmatrix}\begin{pmatrix} \tan \theta & 1 & 0 & 0\\ m & 0 & m & -\sin \theta\\ 0 & m & 0 & -\cos \theta\\ 0 & 0 & M & \sin \theta \end{pmatrix}^{-1}$$ Last edited: haruspex Homework Helper Gold Member 2020 Award Is ##a_{sx}## the horizontal acceleration of the block in the lab frame or in the frame of the wedge? If you define the height and leg of the wedge as ##h## and ##\ell##, then the net force on the block in the wedge's reference frame is in the direction of ##(\ell, -h)##. Since we know that the net force's norm is ##g\sin\theta##, we can describe it by ##\vec f_{b|w} = \frac{g\sin\theta}{\sqrt{\ell^2+h^2}}(\ell, -h)##. I think that this might work (you can find the block's absolute acceleration from this since you know the wedge's acceleration).$$\vec a_{b|w}=\vec a_b-\vec a_w$$The block starts falling from the top, so I think it is okay that we are considering the whole wedge to be only the topmost point (so that ##\vec x_b(0)-\vec x_w(0)=\vec y_b(0)-\vec y_w(0)=\vec 0##). Last edited: Gold Member Is ##a_{sx}## the horizontal acceleration of the block in the lab frame or in the frame of the wedge? The frame of the wedge; since it is a non-inertial frame, I added the acceleration of the frame to get the real acceleration. haruspex Homework Helper Gold Member 2020 Award The frame of the wedge; since it is a non-inertial frame, I added the acceleration of the frame to get the real acceleration. Then it all looks fine. Leo Liu Gold Member the net force's norm is ##g\sin\theta## Can you please tell me why this isn't ##\mu N##? I don't think you can use the component of the weight vector to calculate the sliding friction force. Can you please tell me why this isn't ##\mu N##? I don't think you can use the component of the weight vector to calculate the sliding friction force. Oh, that isn't the frictional force. You mentioned that your surface is frictionless, right? I meant by ##\vec f## the net force on the block.
The normal force would be in the direction of ##(\sin\theta,\cos\theta)## and with norm ##mg\cos\theta##, so:$$\vec N=mg\cos\theta(\sin\theta,\cos\theta)$$The force I have written in my other post has a missing ##m## factor. It should be:$$\vec f_{b|w}=mg\sin\theta(\cos\theta,-\sin\theta)$$ Gold Member The normal force would be in the direction of ##(\sin\theta,\cos\theta)## and with norm ##mg\cos\theta##, so:$$\vec N=mg\cos\theta(\sin\theta,\cos\theta)$$The force I have written in my other post has a missing ##m## factor. It should be:$$\vec f_{b|w}=mg\sin\theta(\cos\theta,-\sin\theta)$$ I don't think this is the case since the wedge is accelerating. I don't think this is the case since the wedge is accelerating. Those are in the wedge's frame of reference. But yeah, I'm not sure if that relative acceleration formula in my first post is usable in this problem. Please ignore my posts; the wedge is not an inertial frame of reference.
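For readers who want to see haruspex's recipe carried through numerically, here is a small sketch of the kind of linear system it produces. The geometry and sign conventions are my own assumptions (incline rising to the right, outward normal ##n = (-\sin\theta, \cos\theta)##, horizontal force F applied to the wedge, everything frictionless), so the matrix below is not the one posted in the thread; the unknowns are the block acceleration, the wedge acceleration and the normal force.

```python
import numpy as np

# Unknowns: block acceleration (abx, aby), wedge acceleration A, normal force N.
# Assumed geometry: the slant face rises to the right, outward normal
# n = (-sin t, cos t); F acts on the wedge along +x; all surfaces frictionless.
m, M, g, theta, F = 1.0, 4.0, 9.81, np.radians(30), 0.0
s, c = np.sin(theta), np.cos(theta)

# Rows: block x, block y, wedge x, and the contact constraint
# (a_block - a_wedge) . n = 0, i.e. the block stays on the slant surface.
A_mat = np.array([
    [m,   0.0, 0.0,  s],   # m*abx + N*sin(t) = 0
    [0.0, m,   0.0, -c],   # m*aby - N*cos(t) = -m*g
    [0.0, 0.0, M,   -s],   # M*A   - N*sin(t) = F
    [-s,  c,   s,  0.0],   # -s*(abx - A) + c*aby = 0
])
b = np.array([0.0, -m * g, F, 0.0])
abx, aby, A, N = np.linalg.solve(A_mat, b)

# Sanity check for F = 0: the textbook result N = m*M*g*cos(t)/(M + m*sin(t)^2).
print(N, m * M * g * c / (M + m * s**2))
```

Varying F then shows how the normal force responds to the wedge's acceleration.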
2021-06-24 03:43:31
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7664111256599426, "perplexity": 803.285416844062}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623488550571.96/warc/CC-MAIN-20210624015641-20210624045641-00416.warc.gz"}
https://math.stackexchange.com/questions/2167205/integral-equation-with-non-convergent-eigenfunction-expansion
Integral Equation with Non-convergent Eigenfunction Expansion Let $K(x,y)=K(y,x)$ be a continuous symmetric function. The integral equation $$\varphi(x)=\lambda\int_a^b K(x,y)\varphi(y)dy$$ has eigenvalues $\lambda_n$ and eigenfunctions $\varphi_n(x)$. It is known that if $$\sum\limits_{n=1}^{\infty}\frac{\varphi_n(x)\varphi_n(y)}{\lambda_n}$$ converges uniformly then it converges to $K(x,y)$. What is an example of $K(x,y)$ where the series does not converge to $K(x,y)$? The interval $[a,b]$ is finite. Perhaps an example with $$\sum\frac{\sin(nx)\sin(ny)}{n}\,?$$
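Not an answer, but a quick numerical illustration of the expansion in the convergent case may help frame the question. The sketch below is my own; it assumes the kernel $K(x,y)=\min(x,y)$ on $[0,1]$, discretizes the integral operator on a grid (Nyström method), and checks that the truncated bilinear series approaches $K$ as more eigenpairs are included.

```python
import numpy as np

# Nystrom discretization of the integral operator with kernel K(x,y) = min(x,y)
# on [0,1]; the eigenvalues lambda_n of the integral equation are 1/mu_n, where
# mu_n are the eigenvalues of the matrix w*K.
N = 400
x = (np.arange(N) + 0.5) / N          # midpoint grid on [0, 1]
w = 1.0 / N                           # uniform quadrature weight
K = np.minimum.outer(x, x)

mu, U = np.linalg.eigh(w * K)         # (w K) u_n = mu_n u_n
phi = U / np.sqrt(w)                  # phi_n(x_i) ~ U[i, n]/sqrt(w), L2-normalized
order = np.argsort(-np.abs(mu))       # largest mu_n (smallest lambda_n) first

# Truncated series sum_{n<=m} phi_n(x) phi_n(y) / lambda_n, compared with K.
for m in (5, 25, N):
    idx = order[:m]
    K_m = (phi[:, idx] * mu[idx]) @ phi[:, idx].T
    print(m, np.max(np.abs(K_m - K)))  # the error shrinks as m grows
```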
2019-08-20 02:52:52
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9963410496711731, "perplexity": 100.0573525711808}, "config": {"markdown_headings": false, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027315222.14/warc/CC-MAIN-20190820024110-20190820050110-00388.warc.gz"}
https://www.nature.com/articles/ncomms13022?error=cookies_not_supported&code=9f7bc073-90ae-44b7-8682-8721409725ec
Thank you for visiting nature.com. You are using a browser version with limited support for CSS. To obtain the best experience, we recommend you use a more up to date browser (or turn off compatibility mode in Internet Explorer). In the meantime, to ensure continued support, we are displaying the site without styles and JavaScript. # A universal test for gravitational decoherence ## Abstract Quantum mechanics and the theory of gravity are presently not compatible. A particular question is whether gravity causes decoherence. Several models for gravitational decoherence have been proposed, not all of which can be described quantum mechanically. Since quantum mechanics may need to be modified, one may question the use of quantum mechanics as a calculational tool to draw conclusions from the data of experiments concerning gravity. Here we propose a general method to estimate gravitational decoherence in an experiment that allows us to draw conclusions in any physical theory where the no-signalling principle holds, even if quantum mechanics needs to be modified. As an example, we propose a concrete experiment using optomechanics. Our work raises the interesting question whether other properties of nature could similarly be established from experimental observations alone—that is, without already having a rather well-formed theory of nature to make sense of experimental data. ## Introduction Experiments1,2,3,4 aiming at testing the presence—and amount—of gravitational decoherence generally go beyond established theory. Many theoretical models for gravitational decoherence have been proposed5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25, and it is wide open if one of these proposals is correct. As such, experiments are of a highly exploratory nature, aiming to establish data points that constrain rival theoretical proposals. This task is made even more difficult by the fact that quantum mechanics and gravity do not go hand in hand, and indeed quantum mechanics may need to be modified in a yet unknown way in order to account for gravitational effects such as decoherence. We are thus compelled to design an experiment that provides a guiding light for the search for the right theoretical model—or indeed new physical theory—whose conclusions do not rely on quantum mechanics. Here we propose an experimental procedure to estimate gravitational decoherence whose conclusions hold even if quantum mechanics would need to be modified. We first establish a general information-theoretic notion of decoherence which reduces to the standard measure within quantum mechanics. Second, drawing on ideas from quantum information, we propose a very general experiment that allows us to obtain a quantitative estimate of decoherence of any physical process for any physical theory satisfying only very mild conditions. Our method is fully general and could in principle be used to supplement many existing experimental proposals in a way that would allow us to draw conclusions from data even if quantum mechanics would need to be modified. Concretely, if a process (supposedly) causing gravitational decoherence can be probed experimentally, then our general method allows us to measure a parameter β that translates into an upper bound on decoherence, where Dec(A|E) is the amount of decoherence of a system A with respect to its environment E (we will define this below). The function h is plotted in the Discussion section for quantum mechanics, but also very general physical theories. 
As an example, we propose a concrete experiment using optomechanics to estimate gravitational decoherence in any such theory, including quantum mechanics as a special case. We note that our procedure could be used to probe any form of decoherence, but only in the case gravitational decoherence is there a pressing motivation for considering theories beyond quantum mechanics. ## Results ### Decoherence in quantum mechanics Before we turn to our general approach (see Fig. 1), let us first focus on the concept of decoherence within quantum mechanics as an easy warm-up. This demonstrates some principles that we will generalize to a broad framework of theories in the following section. Here we first show how the protocol given in Fig. 2 allows us to estimate quantum mechanical decoherence without knowing the decoherence process, and without doing quantum tomography to determine it. Traditionally, the presence of decoherence within quantum mechanics is related to the change of state due to measurement and the ‘collapse of the wavefunction’. Decoherence is thereby often seen as a decay of the off-diagonal terms in the density operator ρ, corresponding to a (weak) measurement of the state. It is clear that this way of thinking about decoherence is entirely tied to the quantum mechanical matrix formalism, and also offers little in the way of quantifying the amount of decoherence in an operationally meaningful way. The modern way of understanding decoherence in quantum mechanics in a quantitative way is provided by quantum information theory. One thereby thinks of a decoherence process as an interaction of a system A′ with an environment as described in Fig. 2, resulting in a quantum channel ΓA′→B. The amount of decoherence can now be quantified by the channel’s ability to transmit quantum information, that is, its quantum capacity (see Supplementary Note 1 for further background). For a finite number of channels, the relevant quantity is the single-shot capacity as determined by the so-called min-entropy Hmin(A|E)26,27. Apart from its information-theoretic significance, the min-entropy has a beautiful operational interpretation that also makes its role as a decoherence measure intuitively apparent. Very roughly, the amount of decoherence can be understood as a measure of how correlated E becomes with A. Suppose we start with a maximally entangled test state ΦAA where the decoherence process is applied to A′. This results in a state (see Fig. 2). If no decoherence occurs, the output state will be of the form where A′=B. That is, A and B are maximally entangled, but A and E are completely uncorrelated. The strongest decoherence, however, produces an output state of the form where A′=E1 and where E is subdivided into subsystems E=E1E2. That is, A is now maximally entangled with E1, whereas A and B are completely uncorrelated. What about the intermediary regime? The min-entropy can be written as where dA is the dimension of A, and (ref. 28) and where F denotes the fidelity The maximization above is taken over all quantum operations on the system E, which aim to bring the state ρAE as close as possible to the maximally entangled state ΦAA (see Fig. 3). Intuitively, Dec(A|E) can thus be understood as a measure of how far the output ρAE is from the setting of maximum decoherence (where ρAEAE is the maximally entangled state). If there is no decoherence, we have ρAE=/dAρE giving and Hmin(A|E)=log dA. 
If there is maximum decoherence, we have giving Dec(A|E)=1 and Hmin(A|E)=−log dA where is simply the operation that discards the remainder of the environment E2. A larger value of Dec(A|E) thus corresponds to a larger amount of decoherence. In the quantum case, Dec(A|E) can be computed using any semi-definite programming solver29,30. We remark that Dec(A|E) does itself not depend on the dimension of the system A. Furthermore, we note that Dec(A|E) does not depend on the particular physical realization of the system A, but merely the amount of information that it can hold. We point out that this entanglement-preservation picture is equivalent to the picture in which the quantum state of a single system decoheres31 (see Fig. 4). We hence see that in quantum mechanics, the relevant measure of decoherence is simply Dec(A|E) (see Fig. 5 for some examples). How can we estimate it in an experiment? Our goal in deriving this estimate will be to rely on concepts that we can later extend beyond the realm of quantum theory, deriving a universally valid test. It is clear that to estimate Dec(A|E) we need to make a statement about the entanglement between A and E—yet E is inaccessible to our experiment. A property of quantum mechanics known as the monogamy of entanglement32 nevertheless allows such an estimate: if ρAB is highly entangled, then ρAE is necessarily far from highly entangled. Since low entanglement in ρAE means that Dec(A|E) is low, a test that is able to detect entanglement between A and B should help us bound Dec(A|E) from above. ### Beyond quantum mechanics The real challenge is to show that the conclusions of our test remain valid even outside of quantum mechanics. Since we want to make as few assumptions as possible, we consider the most general probabilistic theory, in which we are only given a set of possible states Ω and measurements on these states. Every measurement is thereby a collection M={ea}a of effects ea:Ω→[0, 1] satisfying and for all ωΩ. The label a corresponds to a measurement outcome ‘a’. The notion of separated systems A, B and E is in general difficult to define uniquely. We thus again make the most minimal assumption possible in which we identify ‘systems’ A, B and E with sets of measurements that can be performed. In a nutshell, we make the following assumptions: there is a notion of states and measurements, we can observe measurement outcomes that occur with some probability, we identify subsystems by sets of possible measurements, and the no-signalling principle holds (see Supplementary Notes 3 and 4 for details). The first obstacle consists of defining a general notion of decoherence. We saw that quantumly decoherence can be quantified by how well correlations between A and A′ are preserved, and this can be measured by how well the decoherence process preserves the maximally correlated (that is, entangled) state. Indeed, we can also quantify classical noise in terms of how well it preserves correlations, where the maximally correlated state takes on the form for some classical symbols a. We hence start by defining the set of maximally correlated states, by observing a crucial and indeed defining property of the maximally correlated state in quantum mechanics. Concretely, A and A′ are maximally entangled if and only if for any von Neumann measurement on A, there exists a corresponding measurement on A′ giving the same outcome. Again, the same is also true classically but made trivial by the fact that there is only one measurement. 
In analogy, we thus define the set of maximally correlated states as This set coincides with the set of maximally entangled states in quantum mechanics, where A′ can potentially contain an additional component in which is irrelevant to our discussion. We thus define where ωAE is the state shared between A and E according to the general physical theory. The fidelity between two states ω1 and ω2 is thereby defined in full analogy to the quantum case33 as where the minimization is taken over all possible measurements M, and M(ω) denotes the probability distribution over the measurement outcomes of M. Here, the fidelity F(M(ω1), M(ω2)) can be written as33 where the sum ranges over all effects ei of the measurement M (see Supplementary Note 3 for further details). That is, the fidelity can be expressed as the minimum fidelity between probability distributions of classical measurement outcomes. We will not need to make explicit in order to bound Dec(A|E). Equation (6) gives us the familiar quantity within quantum mechanics, but provides us with a very intuitive way to quantify decoherence in any physical theory that admits maximally correlated states. We emphasize that with our general techniques the latter demand could be weakened to allow all theories, even those which only have (weak) approximations of maximally correlated states. The second challenge is to prove that our test actually provides a bound on Dec(A|E)ω. Note that without quantum mechanics to guide us, all that we could reasonably establish by performing measurements on A and B are the probabilities of outcomes a and b given measurement settings x and y. That is, the probability where and . Yet, given the system E is entirely inaccessible to us we have no hope of measuring Pr[a, b, c|x, y, z]ω directly, where z denotes a measurement setting on E with outcome c. Nevertheless, similar to quantum entanglement, it is known that non-signalling distributions are again monogamous34—and it is this fact that allows us to draw conclusions about E by measuring only A and B. We will therefore make a non-trivial assumption about the physical theory, namely that no-signalling holds between A, B and E. We emphasize that weaker constraints on the amount of signalling could also lead to a bound—but we are not aware of any other concrete example to consider. Mathematically, no-signalling means that the marginal distributions obey that is, the choice of measurement settings y, y′ and z, z′ does not influence the probability distribution over the outcomes a. A set of distributions is non-signalling if such conditions hold for all marginal distributions. ## Discussion What have we actually learned when performing such an experiment? We first observe that the measured β always gives an upper bound on the amount of decoherence observed—for any non-signalling theory. This means that even if quantum mechanics would indeed need to be modified we can still draw conclusions from the data we obtain. As such, the observations made in such an experiment establish a fundamental limit on decoherence no matter what the theory might actually look like in detail. It is clear, however, that the bound thus obtained is much weaker than if we had assumed quantum mechanics. No-signalling is but one of many principles obeyed by quantum mechanics, and these other features put stronger bounds on the values that Dec(A|E) can take. 
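As an aside (not from the paper itself), both ingredients used so far, the CHSH parameter β estimated from the observed statistics and the no-signalling conditions on the marginals, are simple linear functions of the table Pr[a, b | x, y], so they are easy to evaluate numerically. The sketch below assumes binary settings and outcomes and the standard CHSH measurement angles for a singlet; it checks no-signalling and returns β close to 2√2.

```python
import numpy as np

def chsh_and_no_signalling(P, tol=1e-9):
    """P[a, b, x, y] = Pr[a, b | x, y] for binary outcomes and settings."""
    # No-signalling: Alice's marginal must not depend on y, Bob's not on x.
    ns_alice = all(np.allclose(P[:, :, x, 0].sum(axis=1),
                               P[:, :, x, 1].sum(axis=1), atol=tol) for x in range(2))
    ns_bob = all(np.allclose(P[:, :, 0, y].sum(axis=0),
                             P[:, :, 1, y].sum(axis=0), atol=tol) for y in range(2))
    # Correlators E(x, y) = sum_{a,b} (-1)^(a+b) Pr[a, b | x, y].
    E = np.array([[sum((-1) ** (a + b) * P[a, b, x, y]
                       for a in range(2) for b in range(2)) for y in range(2)]
                  for x in range(2)])
    beta = abs(E[0, 0] + E[0, 1] + E[1, 0] - E[1, 1])
    return beta, ns_alice and ns_bob

# Singlet statistics: E(x, y) = -cos(angle_A - angle_B) at the usual settings.
angles_A, angles_B = (0.0, np.pi / 2), (np.pi / 4, -np.pi / 4)
P = np.zeros((2, 2, 2, 2))
for x, ta in enumerate(angles_A):
    for y, tb in enumerate(angles_B):
        corr = -np.cos(ta - tb)
        for a in range(2):
            for b in range(2):
                P[a, b, x, y] = (1 + (-1) ** (a + b) * corr) / 4
print(chsh_and_no_signalling(P))   # (~2.828, True)
```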
Our motivation for considering theories which are only constrained by no-signalling is to demonstrate even such weak demands still allow us to draw meaningful conclusions from such an experiment. One can easily adapt our approach by introducing further constraints on the probabilities Pr[a, b, c|x, y, z]—but not all of quantum mechanics—in order to get stronger bounds. Also in a fully quantum mechanical world, our approach yields a bound (see Fig. 6). If we assume quantum mechanics, we may of course also try and perform process tomography in order to determine the decoherence process, and indeed any experiment should try and perform such a tomographic analysis whenever possible. The appeal of our approach is rather that we can draw conclusions from the experimental data while making only very minimal assumptions about the underlying physical theory. One may wonder why we only upper bound Dec(A|E). Note that from our experimental statistics we can only make statements about the overall decoherence observed in the experiment, namely the gravitational decoherence (if it exists) as well as any other decoherence introduced due to experimental imperfections. Finding that the Bell violation is low (and thus maybe Dec(A|E) might be large) can thus not be attributed conclusively to the gravitational decoherence process, making a lower bound on Dec(A|E) meaningless if our desire is to make statements about a particular decoherence process such as gravity. Second, we observe that our approach can rule out models of gravitational decoherence but not verify a particular one. It is important to note that a model for gravitational decoherence does not stand on its own, but is always part of a theory on what states, evolutions and measurements behave like. Given such a physical theory and a model for gravitational decoherence, we know enough to compute Dec(A|E), such as for example in equations (15, 16, 17). In addition, we can compute an upper bound ftheory(β) on Dec(A|E) specific to that theory, which may give a much stronger bound than no-signalling alone. Indeed, we see from Fig. 6 that this is the case for quantum mechanics. Given the calculated Dec(A|E) and the experimentally observed value for ftheory(β), we can then compare: If Dec(A|E)>ftheory(β), then the model (or indeed theory) we assumed must be wrong. However, if , then we know that the model and theory would be consistent without experimental observations. Note that while our framework allows for theories with super-quantum correlations (that is, with (ref. 35)), it is also perfectly valid in the regime where . The bound shown in Fig. 6 is non-trivial for all β>2, and therefore conclusions can be drawn for all such β. Hence, the various arguments brought forward in the literature for why super-quantum correlations should not be observed36,37,38,39,40,41,42 do not contradict our work. The numeric value of the red bound in Fig. 6 may seem weak. However, recall from above that this is a bound for the most general class of theories that can be considered in our framework, while additional assumptions about the theory in question increase the strength of the bound. Our approach thus provides a guiding light in the search for gravitational decoherence models. It is very general, and could in principle be used in conjunction with other proposed experimental setups and decoherence models. 
In particular, it could also be used to probe decoherence models conjectured to arise from decoherence affecting macroscopic objects, where there exist proposals to bring such objects into superposition3. Clearly, however, probing such models using entanglement is extremely challenging. It is a very interesting open question to improve our analysis and to apply it to other physical theories that are more constrained than by no-signalling, but yet do not quite yield quantum mechanics. Candidates for this may come from the study of generalized probabilistic theories where the authors (e.g., refs 43, 44, 45, 46, 47, 48) introduced further constraints in order to recover quantum mechanics, but also from suggested ways to modify the Schrödinger equation in order to account for non-quantum mechanical noise. Since our approach could also be applied to higher dimensional systems, and other Bell inequalities, it is a very interesting open question whether other Bell inequalities could be used to obtain stronger bounds on Dec(A|E) from the resulting experimental observations. ## Methods ### In quantum mechanics Figure 2 illustrates the general experimental procedure. As an easy warm-up, let us first again consider what happens in quantum mechanics. For now, we assume that the measurement devices have no memory. That is, the experiment behaves the same in each round, independent on the previous measurements. It is relatively straightforward to obtain an upper bound on Dec(A|E) by extending techniques from quantum key distribution49. In essence, we maximize Dec(A|E) over all states that are consistent with the observed CHSH correlator β (see Fig. 2). This maximization problem is simplified by the inherent symmetries of the CHSH inequality, allowing us to reduce this optimization problem to consider only states that are diagonal in the Bell basis. We proceed to establish properties of min and max entropies for Bell diagonal states, leading to an upper bound. Concretely, we show in Supplementary Note 2 that where h(β) is an easy optimization problem that can be solved using Lagrange multipliers. We have chosen not to weaken this bound by an analytical bound that is strictly larger, as it is indeed easily evaluated (see Fig. 6). If the devices are allowed memory, then a variant of this test and some more sophisticated techniques from quantum key distribution can nevertheless be shown to give a bound. ### Beyond quantum mechanics Let us first give a very loose intuition why performing a Bell experiment on A and B may allow us to bound Dec(A|E)ω. It is well known34 that non-signalling correlations are also monogamous. That is, if we observe a violation of the CHSH inequality as captured by the measured parameter β, then we know that the violation between A and E and also between E and B must be low. Note that the expectation values Tr[ρAB(AxBy)] in terms of quantum observables Ax and By can be expressed in terms of probabilities as where we have again used ωAB in place of ρAB to remind ourselves that we may be outside of QM. Let us now assume by contradiction that the state ωAE shared between A and E would be close to maximally correlated. Then by definition of the maximally correlated state, for every measurement on A, there exists some measurement on E which yields the same outcome with high probability. 
Hence, if ωAE would be close to maximally correlated, then we would expect that E and B can achieve a similar CHSH violation as A and B—because E can make measurements that reproduce the same correlations that A can achieve with B. Yet, we know that this cannot be since CHSH correlations are monogamous. Note that a map (as in Fig. 3), followed by a measurement in fact constitutes another measurement. Hence, considering all possible measurements that Eve can perform, we cover all such possible maps that Eve might want to apply. While we do not follow the exact steps suggested by this intuition, we employ a technique in Supplementary Note 3 that has also been used for studying monogamy of CHSH correlations34. Specifically, we use linear programming as a technique to obtain bounds. We thereby first relate the fidelity to the statistical distance, which is a linear functional. We are then able to optimize this linear functional over probability distributions Pr[a, b, c|x, y, z]ω satisfying linear constraints. The first such constraint is given by the fact that we consider only non-signalling distributions. The second is the fact that the marginal distribution Pr[a, b|x, y]ω leads to the observed Bell violation β. The last one stems from the fact that maximal correlations can also be expressed using a linear constraint. Solving this linear program for an observed violation β leads to Fig. 6. ### Optomechanical experiment To gain insights into the significance of gravitational decoherence, we examine Diosi’s theory of gravitational decoherence6 as an example. This is equivalent to the decoherence model introduced in Kafri et al.10. We show in Supplementary Note 6 how Dec(A|E) can be evaluated for many other decoherence processes, opening the door for applying our method to many other possible experiments. Diosi’s model can be applied to an optomechanical cavity in which one mirror is free to move in a harmonic potential with frequency ωm as in Fig. 7. The master equation for a massive particle moving in a harmonic potential, including gravitational decoherence is where with the usual canonical position and momentum operators for the moving mirror. We have that where the gravitational decoherence rate Λgrav is given by with G the Newton gravitational constant and Δ the density of the moving mirror. As one might expect Λgrav is quite small, of the order of 10−8 s−1 for suspended mirrors with ωm1. The term with Q=ω/γm corresponds to mechanical heating. To see the effect of the gravitational term stand out next to the mechanical heating we thus need to make the temperature T low. A calculation shows that this model leads to a dephasing channel Γ(ρ)=+(1−p)ZρZ where p is a function of the density Δ, and the other parameters. In Supplementary Note 6, we show that for this model where G is the Newton gravitational constant, kB is the Boltzmann constant, and ħ the Planck constant (see Fig. 8 for the other parameters). ### Code availability The source code of the semidefinite program and the linear program used to derive the plots in Fig. 6 are available from the authors on request. ### Data availability Data sharing not applicable to this article as no data sets were generated or analysed during the current study. How to cite this article: Pfister C. et al. A universal test for gravitational decoherence. Nat. Commun. 7, 13022 doi: 10.1038/ncomms13022 (2016). ## References 1. 1 Pepper, B., Ghobadi, R., Jeffrey, E., Simon, C. & Bouwmeester, D. Optomechanical superpositions via nested interferometry. New J. 
Phys. 14, 115025 (2012). 2. 2 Marshall, W., Simon, C., Penrose, R. & Bouwmeester, D. Towards quantum superpositions of a mirror. Phys. Rev. Lett. 91, 130401 (2003). 3. 3 Romero-Isart, O. et al. Large quantum superpositions and interference of massive nanometer-sized objects. Phys. Rev. Lett. 107, 020405 (2011). 4. 4 Pikovski, I., Vanner, M. R., Aspelmeyer, M., Kim, M. & Brukner, C. Probing planck-scale physics with quantum optics. Nat. Phys. 8, 393–397 (2012). 5. 5 Penrose, R. On gravity’s role in quantum state reduction. Gen. Relat. Gravit. 28, 581 (1996). 6. 6 Diósi, L. Models for universal reduction of macroscopic quantum fluctuations. Phys. Rev. A 40, 1165–1174 (1989). 7. 7 Diosi, L. The gravity-related decoherence master equation from hybrid dynamics. J. Phys.: Conf. Ser. 306, 012006 (2011). 8. 8 Diosi, L. Gravitation and quantummechanical localization of macroobjects. Phys. Lett. A 105, 199 (1984). 9. 9 Diosi, L. A universal master equation for the gravitational violation of quantum mechanics. Phys. Lett. 120, 377–381 (1987). 10. 10 Kafri, D., Taylor, J. M. & Milburn, G. J. A classical channel model for gravitational decoherence. New J. Phys. 16, 065020 (2014). 11. 11 Stamp, P. C. E. Environmental decoherence versus intrinsic decoherence. Phil. Trans. Roy. Soc. A 370, 4429 (2012). 12. 12 Anastopoulous, C. & Hu, B. L. A master equation for gravitational decoherence: probing the textures of spacetime. Class. Quant. Grav. 30, 165007 (2013). 13. 13 Hu, B. L. Gravitational decoherence, alternative theories, and semiclassical gravity. J. Phys.: Conf. Ser. 504, 012021 (2014). 14. 14 Anastopoulous, C. & Hu, B. L. Decoherence in quantum gravity: issues and critiques. Journal of Physics. J. Phys.: Conf. Ser. 67, 012012 (2007). 15. 15 Kay, B. Decoherence of macroscopic closed systems within newtonian quantum gravity. Class. Quant. Grav. 15, L89 (1998). 16. 16 Breuer, H. P., Göklü, E. & Lämmerzah, C. Metric fluctuations and decoherence. Class. Quant. Grav. 26, 105012 (2007). 17. 17 Wang, C., Bingham, R. & Mendoca, J. T. Quantum gravitational decoherence of matter waves. Class. Quant. Grav. 23, L59 (2006). 18. 18 Ghirardi, G. C., Rimini, A. & Weber, T. Unified dynamics for microscopic and macroscopic systems. Phys. Rev. D 34, 470–491 (1986). 19. 19 Pearle, P. Combining stochastic dynamical state-vector reduction with spontaneous localization. Phys. Rev. A 39, 2277–2289 (1989). 20. 20 Ghirardi, G. C., Pearle, P. & Rimini, A. Markov processes in hilbert space and continuous spontaneous localization of systems of identical particles. Phys. Rev. A 42, 78–89 (1990). 21. 21 Pearle, P. Ways to describe dynamical state-vector reduction. Phys. Rev. A 48, 913–923 (1993). 22. 22 Pearle, P. Completely quantized collapse and consequences. Phys. Rev. A 72, 022112 (2005). 23. 23 Pearle, P. How stands collapse i. J. Phys. A 40, 3189 (2007). 24. 24 Pearle, P. Stress tensor for quantized random field and wave-function collapse. Phys. Rev. A 78, 022107 (2008). 25. 25 Pikovski, I., Zych, M., Costa, F. & Brukner, C. Stress tensor for quantized random field and wave-function collapse. Nat. Phys. 11, 668–672 (2008). 26. 26 Dupuis, F., Berta, M., Wullschleger, J. & Renner, R. One-shot decoupling. Commun. Math. Phys. 328, 251–284 (2014). 27. 27 Buscemi, F. & Datta, N. The quantum capacity of channels with arbitrarily correlated noise. IEEE Trans. Inform. Theory 56, 1447–1460 (2010). 28. 28 König, R., Renner, R. & Schaffner, C. The operational meaning of min- and max-entropy. IEEE Trans. Inf. Theory 55, 4337–4347 (2009). 
29. 29 Renner, R. Security of Quantum Key Distribution PhD thesis (ETH Zürich (2005). 30. 30 Sturm, J. Using SeDuMi 1.02, a MATLAB toolbox for optimization over symmetric cones. Optim. Method. Softw. 11-12, 625–653 (1999). 31. 31 Barnum, H., Knill, E. & Nielsen, M. A. On quantum fidelities and channel capacities. IEEE Trans. Inform. Theory 46, 1317–1329 (2000). 32. 32 Terhal, B. Is entanglement monogamous? IBM J. Res. Dev. 48, 71–78 (2004). 33. 33 Nielsen, M. A. & Chuang, I. L. Quantum Computation and Quantum Information Cambridge University Press (2000). 34. 34 Toner, B. Monogamy of non-local quantum correlations. Proc. R. Soc. A 465, 59–69 (2009). 35. 35 Tsirelson, B. S. Quantum generalizations of bell’s inequality. Lett. Math. Phys. 4, 93–100 (1980). 36. 36 Rohrlich, D. Stronger-than-quantum bipartite correlations violate relativistic causality in the classical limit. Preprint at http://arxiv.org/abs/1408.3125 (2014). 37. 37 Pawlowski, M. et al. Information causality as a physical principle. Nature 461, 1101–1104 (2009). 38. 38 Oppenheim, J. & Wehner, S. The uncertainty principle determines the nonlocality of quantum mechanics. Science 330, 1072–1074 (2010). 39. 39 Popescu, S. Nonlocality beyond quantum mechanics. Nat. Phys. 10, 264–270 (2014). 40. 40 Brunner, N., Cavalcanti, D., Pironio, S., Scarani, V. & Wehner, S. Bell nonlocality. Rev. Mod. Phys. 86, 419–478 (2014). 41. 41 van Dam, W. Implausible consequences of superstrong nonlocality. Natur. Comput. 12, 9–12 (2013). 42. 42 Dahlsten, O. C. O., Lercher, D. & Renner, R. Tsirelson’s bound from a generalized data processing inequality. New J. Phys. 14, 063024 (2012). 43. 43 Masanes, L. & Müller, M. P. A derivation of quantum theory from physical requirements. New J. Phys. 13, 063001 (2011). 44. 44 Masanes, L., Müller, M. P., Augusiak, R. & Pérez-García, D. Existence of an information unit as a postulate of quantum theory. Proc. Natl Acad. Sci. USA 110, 16373–16377 (2013). 45. 45 Chiribella, G., D’Ariano, G. M. & Perinotti, P. Informational derivation of quantum theory. Phys. Rev. A 84, 012311 (2011). 46. 46 Dakic, B. & Brukner, C. in Deep Beauty: Understanding the Quantum World through Mathematical Innovation ed. Halvorson H. 365–392Cambridge University Press (2011). 47. 47 Ududec, C. Perspectives on the Formalism of Quantum Theory PhD thesis (Univ. Waterloo (2012). 48. 48 Pfister, C. & Wehner, S. An information-theoretic principle implies that any discrete physical theory is classical. Nat. Commun. 4, 1851 (2013). 49. 49 Acín, A. et al. Device-independent security of quantum cryptography against collective attacks. Phys. Rev. Lett. 98, 230501 (2007). 50. 50 Horodecki, R., Horodecki, P. & Horodecki, M. Violating Bell inequality by mixed spin-1/2 states: necessary and sufficient condition. Phys. Lett. A 200, 340–344 (1995). 51. 51 Popescu, S. & Rohrlich, D. Generic quantum nonlocality. Phys. Lett. A 166, 293–297 (1992). 52. 52 Popescu, S. & Rohrlich, D. Which states violate bell’s inequality maximally? Phys. Lett. A 169, 411–414 (1992). 53. 53 Popescu, S. Bell’s inequalities and density matrices: revealing ‘hidden’ nonlocality. Phys. Rev. Lett. 74, 2619 (1995). 54. 54 Popescu, S. Bell’s inequalities versus teleportation: what is nonlocality? Phys. Rev. Lett. 72, 797–799 (1994). 55. 55 Nisbet-Jones, B. R., Dilley, J., Ljunggren, D. & Kuhn, A. Highly efficient source for indistinguishable single photons of controlled shape. New J. Phys. 13, 103036 (2011). 56. 56 Kessler, T. et al. 
A sub-40-mhz-linewidth laser based on a silicon single-crystal optical cavity. Nat. Photonics 6, 687 (2012). ## Acknowledgements We thank Markus P. Müller, Matthew Pusey, Tobias Fritz, Gary Steele, Jonas Helsen and Thinh Le Phuc for insightful discussions. C.P., J.K., M.T., A.M., R.S. and S.W. were supported by MOE Tier 3A grant ‘Randomness from quantum processes’, NRF CRP ‘Space-based QKD’. S.W. was also supported by STW, Netherlands, an NWO VIDI, and an ERC Starting Grant. N.M. and G.M. were supported by ARC Centre of Excellence for Engineered Quantum Systems, CE110001013. ## Author information Authors ### Contributions S.W. devised the project, the main conceptual ideas and proof outline. C.P. worked out almost all of the technical details, and performed the numerical calculations for the suggested experiment. J.K. worked out the bound for quantum mechanics, with help from M.T. and A.M. R.S. verified the numerical results of the linear program by an independent implementation. N.M. and G.M. proposed the optomechanical experiment in discussions with S.W. C.P., J.K., G.M. and S.W. wrote the manuscript. ### Corresponding author Correspondence to S. Wehner. ## Ethics declarations ### Competing interests The authors declare no competing financial interests. ## Supplementary information ### Supplementary Information Supplementary Figures 1-15, Supplementary Notes 1-6 and Supplementary References. (PDF 3373 kb) ## Rights and permissions Reprints and Permissions Pfister, C., Kaniewski, J., Tomamichel, M. et al. A universal test for gravitational decoherence. Nat Commun 7, 13022 (2016). https://doi.org/10.1038/ncomms13022 • Accepted: • Published: • ### Information Scrambling versus Decoherence—Two Competing Sinks for Entropy • Akram Touil •  & Sebastian Deffner PRX Quantum (2021) • ### Polarization gradient cooling and trapping of charged and neutral microspheres • Ziqiang He •  & Guangjiong Dong Journal of the Optical Society of America B (2021) • ### Einstein-Podolsky-Rosen entanglement and asymmetric steering between distant macroscopic mechanical and magnonic systems • Huatang Tan •  & Jie Li Physical Review Research (2021) • ### Strong mechanical squeezing for a levitated particle by coherent scattering • Ondřej Černotík Physical Review Research (2020) • ### Optimal estimation with quantum optomechanical systems in the nonlinear regime • Fabienne Schneiter • , Sofia Qvarfort • , Alessio Serafini • , André Xuereb • , Daniel Braun • , Dennis Rätzel •  & David Edward Bruschi Physical Review A (2020)
2021-06-21 05:35:18
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8175802230834961, "perplexity": 1167.0518847032467}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623488262046.80/warc/CC-MAIN-20210621025359-20210621055359-00565.warc.gz"}
https://en.wikiversity.org/wiki/Electricity/Introduction
# Electricity/Introduction Subject classification: this is an engineering resource. Educational level: this is a secondary education resource. Educational level: this is a tertiary (university) resource. Let us begin with a few fundamental ideas about electricity to anchor all of your future lessons... ## Fundamental Background First of all, you might ask yourself, 'what exactly is electricity?' At the level of atomic physics, electricity is one of the fundamental forces of nature, arising from an "electric charge" possessed by subatomic particles (notably protons and electrons). ## Electricity in Practice The preceding description in terms of fundamental particles and forces is perhaps too theoretical for practical use. What is electricity in practice? ### Electric charge The tiny imbalance of positive and negative charges discussed above is caused by electrons moving (in enormous numbers at the atomic scale but very tiny numbers percentage-wise) away from the atomic nuclei to which they belong. The unit of quantity of electric charge is the coulomb, abbreviated "C", and named after Charles-Augustin Coulomb. It is 6.241 × 10^18 elementary charges. Or equivalently, the elementary charge (charge on a single electron or proton) is 1.602 × 10^−19 coulombs. ### Electric current Flow of charge—what we call current—could be thought of as flow of positive charge in one direction, or of negative charge in the other direction. By convention, current flow is always considered to be the flow of positive charge, even though that is contrary to the actual flow of (negative) electrons. Hence the current is considered to flow out of the positive terminal of a battery, through whatever is being powered, and back into the negative terminal, even though electron flow through the wires is in the opposite direction. The unit of current flow is the ampere, abbreviated "A", which is a flow rate of one coulomb per second. It is named after André-Marie Ampère. In the hydraulic analogy, it is analogous to the amount of flow through a pipe, or over a dam, etc., which might be measured in liters per second. ### Kirchhoff's Laws There are two famous "laws" of electrical engineering, known as Kirchhoff's Voltage Law and Kirchhoff's Current Law. They are really just straightforward applications of what has been discussed above. Kirchhoff's Voltage Law (sometimes abbreviated KVL): The sum of the voltage drops around any closed path in a circuit is zero. This is just conservation of energy—a quantity of electric charge could pick up energy for free by going around a closed loop with a nonzero total voltage drop. Remember that "voltage drop" between two points can be either positive or negative. Kirchhoff's Current Law (sometimes abbreviated KCL): The sum of the currents flowing into a point in a circuit is zero. This is just conservation of charge. Remember that current flow "into" a point is negative if the current is flowing out. ### Ohm's Law There is a similar principle for electricity: Ohm's law, discovered by Georg Simon Ohm in the 1820's. Ohm's law states that the current through a resistive medium is proportional to the applied voltage: ${\displaystyle V=IR\,}$   Voltage equals current times resistance.
Also stated, of course, as: ${\displaystyle I={\frac {V}{R}}}$ ${\displaystyle R={\frac {V}{I}}}$ Here are a few approximate conductivities, in siemens per meter: • Superconductors: infinity (because of complicated quantum-mechanical phenomena) • Silver: 6 × 10^7 • Silicon: 2 × 10^−3 (but, in semiconductor materials, it is "doped" with impurities, giving it a much higher conductivity) ### Watt's Law Watt's law, named after James Watt, states that the power through a resistive medium is proportional to the applied voltage ${\displaystyle P=IV\,}$   Power equals current times voltage. 
Also stated, of course, as: ${\displaystyle I={\frac {P}{V}}}$ ${\displaystyle V={\frac {P}{I}}}$ ### "I-squared-R" and Joule's law One sometimes encounters phrases like "I-squared-R losses" in a wire. This is a combination of the power formula and Ohm's law. ${\displaystyle P=VI=(IR)I=I^{2}R\,}$ Notice that the amount of power lost in a wire is proportional to the square of the current. The phenomenon of heat being evolved when electricity passes through a conductor is sometimes called Joule's law. It's really just a consequence of conservation of energy and the equivalence of heat to other forms of energy. ## Symbols and Abbreviations The symbols used for quantities in electrical engineering can be confusing, since the symbol for a quantity may be different from the symbol for the units in which it is measured. We list the common symbols here, even though we have not yet defined all of the concepts involved. ### Electrical charge is typically denoted ${\displaystyle Q}$, measured in coulombs, abbreviated ${\displaystyle C}$. ### Current is typically denoted ${\displaystyle I}$, measured in amperes, abbreviated ${\displaystyle A}$. An ampere is, among other things, a coulomb per second. Example: ${\displaystyle V=IR\,}$   Ohm's law; voltage equals current times resistance. Example: "${\displaystyle I=28mA\,}$"   "The current is 28 milliamperes." ### Voltage is typically denoted ${\displaystyle V}$ (or sometimes ${\displaystyle E}$), and measured in volts, abbreviated ${\displaystyle V}$. The use of "${\displaystyle E}$" stands for "emf" (electro-motive force). A volt is, among other things, a joule per coulomb. Example: "${\displaystyle V=28mV\,}$"   "The voltage is 28 millivolts." ### Resistance is typically denoted ${\displaystyle R}$, measured in ohms, abbreviated with the capital Greek omega: ${\displaystyle \Omega }$. An ohm is, among other things, a volt per ampere. Example: "${\displaystyle R=2.7K\Omega \,}$"   "The resistance is 2.7 kilohms." ### Conductance It is sometimes useful to speak of the reciprocal of resistance. This is called conductance, and is typically denoted ${\displaystyle G}$, traditionally measured in "mhos" ("mho" is "ohm" spelled backwards), abbreviated with an upside-down omega: ${\displaystyle \mho }$. A less flippant term than "mho" has been adopted: the siemens, abbreviated ${\displaystyle S}$. A mho/siemens is, among other things, an ampere per volt. Example: ${\displaystyle I=VG\,}$   Ohm's law rewritten in terms of conductance. Example: "${\displaystyle G=65m\mho \,}$"   "The conductance is 65 millimhos." Example: "${\displaystyle G=65mS\,}$"   "The conductance is 65 millisiemens." ### Capacitance is typically denoted ${\displaystyle C}$, measured in farads, abbreviated ${\displaystyle f}$. A farad is, among other things, a second per ohm, or a coulomb per volt. Example: ${\displaystyle t=RC\,}$   The time constant is the resistance times the capacitance.
Example: "${\displaystyle C=75pf\,}$"   "The capacitance is 75 picofarads." ### Inductance is typically denoted ${\displaystyle L}$, measured in henries, abbreviated ${\displaystyle h}$. A henry is, among other things, an ohm-second. Example: ${\displaystyle f={\frac {1}{2\ \pi \ {\sqrt {L\ C}}}}}$   is the formula for the frequency of a resonant circuit. Example: "${\displaystyle L=120nh\,}$"   "The inductance is 120 nanohenries." ### Power is typically denoted ${\displaystyle P}$, measured in watts, abbreviated ${\displaystyle W}$. A watt is, among other things, a joule per second, or a volt-ampere. Example: ${\displaystyle P=VI\,}$   The power is the voltage times the current. Example: "${\displaystyle P=75W\,}$"   "The power is 75 watts." ### The frequency of an oscillation or signal is typically denoted ${\displaystyle f}$, measured in hertz, abbreviated ${\displaystyle Hz}$. A hertz is really just a reciprocal second. In fact, the unit of frequency used to be just "cycles per second" or simply "cycles". Example: "${\displaystyle f=102.5Mc\,}$"   "102.5 megacycles on the FM dial" (old way.) Example: "${\displaystyle f=102.5MHz\,}$"   "102.5 megahertz on the FM dial" (new way.) It happens that, in a lot of the mathematical formulas the unit of radians per second is superior. A frequency in radians per second is ${\displaystyle 2\ \pi }$ times the frequency in hertz. When measured this way, the symbol ${\displaystyle f}$ is replaced with the lower-case Greek omega: ${\displaystyle \omega }$. Many occurrences of ${\displaystyle 2\ \pi }$ disappear from various formulas when radians are used. Example: ${\displaystyle \omega ={\frac {1}{\sqrt {L\ C}}}}$   is the formula for the frequency of a resonant circuit, in radians per second. Example: ${\displaystyle X_{C}={\frac {1}{2\ \pi \ f\ C}}}$   is the formula for capacitive reactance, calibrated in hertz. Example: ${\displaystyle X_{C}={\frac {1}{\omega \ C}}}$   is the formula for capacitive reactance, calibrated in radians per second. Example: "${\displaystyle \omega =644.0265mrad/s\,}$"   "644.0265 megaradians per second on the FM dial" (not known to have ever been announced.) ## Static Electricity Electricity manifests itself in two seemingly different ways. They are different manifestations of the same thing. "Static" electricity was known to the ancients. It involves very high voltages and very low currents—the currents are so low that one doesn't always realize how high the voltage is. In order for such high voltages to persist somewhere, the insulation must be extremely good, that is, the resistances must be extremely high. Fortunately for the history of science, materials like glass, amber, and some types of rubber and other materials, have the necessary high resistance. (The word "electricity" comes from the Greek word for amber.) The voltages involved with static electricity are high enough to make the leaves of an electroscope move, but the currents are so low that we would normally not notice any effect of the current. ## "Current" Electricity "Current" electricity involves lower voltages and currents large enough to power light bulbs, motors, and such. Rubbing a glass rod with a piece of silk can't come anywhere near to providing the required level of sustained current. To get the required sustained levels of current requires either ongoing chemical reactions (as in a battery) or electromechanical devices (as in a generator.) 
The ability to do this was discovered in the 18th and 19th centuries by Luigi Galvani, Alessandro Volta and Michael Faraday. The study of electronics and electrical engineering involve "current" electricity almost exclusively. ## Series and Parallel Connections The terms "series" and "parallel" are actually used quite loosely by electrical engineers, to describe aspects of circuit topology. • Series connection • Parallel connection Schematic diagram showing series and parallel connections. • 2 port network ## How Are the Units Defined? As one can tell from the connections among the electrical units and the physical units such as newtons, joules, and watts, a lot of care went into the design of the system of units. But how is the coulomb defined? Why is it equal to 6.241 x 1018 elementary charges? The coulomb and volt are defined in terms of the ampere, so that a volt times an ampere equals a watt, that is, a newton-meter per second. And a coulomb is an ampere-second. The ampere was determined experimentally, as follows: it is the amount of current flowing in each of two infinitely long and infinitely thin parallel wires separated by a distance of one meter that causes an attractive magnetic force of ${\displaystyle 2\times 10^{-7}}$ newtons per meter along the length of the wires. Why the factor of ${\displaystyle 2\times 10^{-7}}$? This was chosen to make the volt a reasonable quantity relative to the voltage coming out of batteries. This definition of the units in terms of the magnetic force has the effect of defining the fundamental constant of the magnetic force, labeled ${\displaystyle \mu \,}$, to be ${\displaystyle 4\pi \times 10^{-7}\,}$. The fundamental constant of the electric force, labeled ${\displaystyle \epsilon \,}$, is related to this, according to the formula ${\displaystyle c={\frac {1}{\sqrt {\epsilon \mu }}}\,}$ where ${\displaystyle c\,}$, the speed of light, was traditionally determined experimentally. [1] ## The Next Lecture For the next lecture, see Introduction to Electricity II. ## Footnotes and References 1. The constant ${\displaystyle c\,}$, the speed of light, is now defined to be 299,792,458 meters per second. A second is officially defined in terms of a cesium clock and the meter is derived from that.
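To tie the formulas in this lesson together, here is a small Python sketch. The component values are purely illustrative assumptions, not part of the lesson; the relationships (Ohm's law, power, the "I-squared-R" form, the RC time constant and the resonant frequency) are the ones given above.

```python
import math

# Illustrative (assumed) values: a 9 V source, a 2.7 kilohm resistor,
# a 75 pF capacitor and a 120 nH inductor.
V, R, C, L = 9.0, 2.7e3, 75e-12, 120e-9

I = V / R                                       # Ohm's law: I = V / R
P = V * I                                       # power: P = V * I
P_loss = I ** 2 * R                             # the same power, "I-squared-R" form
tau = R * C                                     # RC time constant, in seconds
f_res = 1 / (2 * math.pi * math.sqrt(L * C))    # resonant frequency, in hertz
w_res = 1 / math.sqrt(L * C)                    # the same, in radians per second

print(f"I = {I * 1e3:.2f} mA, P = {P * 1e3:.0f} mW, tau = {tau * 1e9:.1f} ns")
print(f"f_res = {f_res / 1e6:.1f} MHz = {w_res / 1e6:.0f} Mrad/s")
```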
2019-03-21 16:13:01
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 59, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8875671625137329, "perplexity": 1165.3157321067638}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912202526.24/warc/CC-MAIN-20190321152638-20190321174638-00007.warc.gz"}
https://proofwiki.org/wiki/Definition:Bounded_Mapping_to_Metric_Space
# Definition:Bounded Mapping/Metric Space

## Definition

Let $M$ be a metric space. Let $f: X \to M$ be a mapping from any set $X$ into $M$.

Then $f$ is a bounded mapping if and only if $f \left({X}\right)$ is bounded in $M$.

## Also see

From Real Number Line is Metric Space, we can in theory consider defining boundedness on a real-valued function in terms of boundedness of a mapping into a metric space. However, as a metric space is itself defined in terms of a real-valued function in the first place, this concept can be criticised as being a circular definition.
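As a concrete illustration of the definition (this example is added here and is not on the original page): take $X = \mathbb{R}$ and $M = \mathbb{R}$ with the usual metric. Then

$$f(x) = \sin x \quad \text{gives} \quad f(\mathbb{R}) \subseteq [-1, 1],$$

which is bounded in $M$, so $f$ is a bounded mapping; whereas $g(x) = x$ has $g(\mathbb{R}) = \mathbb{R}$, which is not bounded, so $g$ is not a bounded mapping.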
2019-08-22 11:40:26
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9540705680847168, "perplexity": 206.6557151725551}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027317113.27/warc/CC-MAIN-20190822110215-20190822132215-00331.warc.gz"}
https://chemistry.stackexchange.com/questions/152388/why-is-the-trans-effect-of-nitrile-ion-cyanide-weaker-than-acetonitrile-methy
# Why is the trans-effect of nitrile ion (cyanide) weaker than acetonitrile (methyl cyanide) in octahedral chromium complexes?

I was reading a paper which built a series of trans-philicity (a term they coined to indicate both kinetic trans-effect and thermodynamic trans-influence) from extensive calculations. And I found that in the series, $$\ce{:\!CN-}$$ is shown to have a weaker trans effect than $$\ce{MeC#N \!:}$$ [1] (Note: this is for octahedral complexes)

However, this trend seems odd to me, because I cannot explain it in terms of electron donating and withdrawing capacities. I had heard that the trans-effect is strongest in ligands which are either strong $$\sigma$$-donors or strong $$\pi$$-acceptors, or a combination of both. This is why $$\ce{CN-}$$, $$\ce{CO}$$ etc. have very strong trans-effects. Now, in the experimental trans-effect series that is commonly found in inorganic textbooks, we have: $$\ce{H2O < NH3 < ... < CO < CN-}$$

$$\ce{NH3}$$ is near the very beginning of the series. $$\ce{MeC#N}$$ should be a worse $$\sigma$$-donor than $$\ce{NH3}$$ (as the lone pair is in an $$\mathrm{sp}$$-orbital). It also seems unlikely that $$\ce{MeCN}$$ will be a better $$\pi$$-acceptor than $$\ce{CN-}$$ because $$\ce{MeCN\!:}$$ binds via the more electronegative $$\ce{N}$$ while $$\ce{:\!CN-}$$ binds with the less electronegative $$\ce{C}$$ (which would mean the tendency to accept electron density in the $$\pi$$-orbitals should be lower for $$\ce{MeCN}$$). So, I don't understand how it is possible to have the trans-effect order $$\ce{CN- < MeCN}$$. Is there any explanation for this?

Reference:

[1]. A. C. Tsipis, "Building trans-philicity (trans-effect/trans-influence) ladders for octahedral complexes by using an NMR probe", Dalton Trans. 2019, 48, 1814-1822 (DOI: 10.1039/C8DT04562C).

According to the reference mentioned in the question (Ref.1):

The term ‘trans-influence’, being a long-established concept of broad relevance in the realm of inorganic chemistry, was defined first in 1966 by Pidcock et al. as the ability of ligand L in a complex to weaken the metal–ligand bond trans to itself. This ground-state phenomenon should be distinguished from the kinetic phenomenon called the ‘trans-effect’, which is the effect of coordinated ligand L upon the rate of substitution reactions of the ligand in trans-position to L.

Note: Pidcock et al. 1966: Ref.2

Considering the high sensitivity of the $$\ce{^{13}C}$$-$$\mathrm{NMR}$$ isotropic shielding tensor elements to small structural/electronic changes, the authors of Ref.1 have published a trans-philicity ladder for octahedral $$\ce{[Cr(CO)5L]^{−/0/+}}$$ complexes using $$\ce{^{13}C}$$-$$\mathrm{NMR}$$ isotropic shielding tensor elements. In the $$\ce{[Cr(CO)5L]^{−/0/+}}$$ complexes, $$\ce{L}$$ represents a wide variety of ligands (50 ligands) commonly used in coordination and organometallic chemistry. Briefly, all $$\ce{^{13}C}$$-$$\mathrm{NMR}$$ isotropic shielding tensor elements and other parameters have been calculated using the PBE0/Def2-TZVP(Cr)∪6-31G(d,p)(E)/PCM and PBE0/Def2-TZVP(Cr)∪6-311++G(d,p)(E)/PCM computational protocols, set in dichloromethane solution, where the latter protocol is more sophisticated than the former. I think a major drawback of this publication is the lack of experimental data to support the calculations.
For instance, the authors admit that, to the best of their knowledge, experimental data for $$\delta \ \ce{^{13}C}$$-$$\mathrm{NMR}$$ chemical shifts of $$\ce{[Cr(CO)5L]^{−/0/+}}$$ complexes are available only for the $$\ce{Cr(CO)6}$$ complex and the “free” $$\ce{CO}$$ ligand, which are $$212$$ and $$\pu{184.4 ppm}$$, respectively.

When the calculated $$\delta \ \ce{^{13}C}$$-$$\mathrm{NMR}$$ chemical shifts of the $$\ce{Cr(CO)6}$$ complex and the “free” $$\ce{CO}$$ ligand are compared across the two computational protocols, PBE0/Def2-TZVP(Cr)∪6-31G(d,p)(E)/PCM predicted chemical shifts of $$210.2$$ and $$\pu{186.1 ppm}$$, respectively, for the two compounds, while PBE0/Def2-TZVP(Cr)∪6-311++G(d,p)(E)/PCM predicted chemical shifts of $$226.8$$ and $$\pu{197.6 ppm}$$, respectively, for the same two compounds:

$$\begin{array}{l|ccc} \hline \text{Compound} & \ce{\delta \ ^{13}C} \text{ (calculated)}^1 & \ce{\delta \ ^{13}C} \text{ (calculated)}^2 & \ce{\delta \ ^{13}C} \text{ (experimental)} \\ \hline \ce{Cr(CO)6} \text{ (complex)} & \pu{210.2 ppm} & \pu{226.8 ppm} & \pu{212.0 ppm} \\ \ce{CO} \text{ ('free' ligand)} & \pu{186.1 ppm} & \pu{197.65 ppm} & \pu{184.4 ppm} \\ \hline \end{array}\\ ^1 \text{From protocol PBE0/Def2-TZVP(Cr)∪6-31G(d,p)(E)/PCM;} \\ ^2\text{ From protocol PBE0/Def2-TZVP(Cr)∪6-311++G(d,p)(E)/PCM.}$$

Evidently, the GIAO/PBE0/Def2-TZVP(Cr)∪6-31G(d,p)(E)/PCM computational protocol is a better performer in the calculation of the $$\ce{^{13}C}$$-$$\mathrm{NMR}$$ spectra of $$\ce{[Cr(CO)5L]^{−/0/+}}$$ complexes than the PBE0/Def2-TZVP(Cr)∪6-311++G(d,p)(E)/PCM one. Nevertheless, the differences in the calculated $$\Delta\sigma \ \ce{^{13}C}$$-$$\mathrm{NMR}$$ descriptors of trans-philicity for the complexes using either protocol were minimal. Yet the authors have mentioned that:

It can be seen that the $$\mathrm{NMR}$$ trans-philicity ladders constructed by the two computational protocols are similar with some minor local reversed orders in the trans-philicity series of similar ligands. The PBE0/Def2-TZVP(Cr)∪6-31G(d,p)(E)/PCM computational protocol predicts for the $$\ce{NCR}$$ ligands the order: $$\ce{NCH \gt NCPh \gt NCMe}$$, while the PBE0/Def2-TZVP(Cr)∪6-311++G(d,p)(E)/PCM computational protocol predicts the order: $$\ce{NCMe \gt NCPh \gt NCH}$$. Consideration of the $$\sigma$$-donor and $$\pi$$-acceptor abilities of the $$\ce{NCR}$$ ligands supports the order predicted by the PBE0/Def2-TZVP(Cr)∪6-31G(d,p)(E)/PCM computational protocol.

Thus, I would argue that this disagreement reflects the difference between the two protocols rather than the actual situation. Unless we have experimental data to support the finding, it is just speculation.

Reference:

1. A. C. Tsipis, "Building trans-philicity (trans-effect/trans-influence) ladders for octahedral complexes by using an NMR probe", Dalton Trans. 2019, 48, 1814-1822 (DOI: 10.1039/C8DT04562C).

2. A. Pidcock, R. E. Richards, and L. M. Venanzi, "$$\ce{^{195}Pt–^{31}P}$$ nuclear spin coupling constants and the nature of the trans-effect in platinum complexes," J. Chem. Soc. A 1966, 1707–1710 (DOI: https://doi.org/10.1039/J19660001707).

• I don't understand how this answers the question? I was asking about the difference between CN- and MeCN. MeCN has a higher trans-philicity than CN- with both basis sets. Is there any experimental data you know about the trans-effect of MeCN vs CN- that disproves the order from the calculation?
Otherwise your answer is just explaining what's written in the paper. Jun 4 '21 at 7:51 • @ Shoubhik R Maiti: Before you question, did you read the paper? It was stated that two protocols give opposite results. Thus, none is conclusive. That's what I said in my answer. Please read the paper first. Jun 4 '21 at 16:13 • Of course I read the paper, that's why I mentioned it in the question. The two protocols give different results for RCN groups, but I am not asking about the order of that. (And HCN is not the same as CN-, they are different ligands in the paper) In both protocols the order of CN- and MeCN is the same. Are you arguing that because the methods predict different order for RCN ligands, the whole series is questionable? Jun 4 '21 at 19:33 • Did you have any other evidence that it is different than what the paper said? Jun 4 '21 at 19:43 • Well, seems like I misunderstood your question. I'll see some evidence to add. Jun 4 '21 at 19:51
2022-01-29 08:59:13
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 64, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7449086308479309, "perplexity": 1873.6444587444453}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320300573.3/warc/CC-MAIN-20220129062503-20220129092503-00218.warc.gz"}
http://www.etiquettehell.com/smf/index.php?topic=51263.msg2918945
### Author Topic: Special Snowflake Stories

#### Miss Tickle • Member • Posts: 211

##### Re: Special Snowflake Stories « Reply #20340 on: April 04, 2013, 08:02:20 PM »

I nominate Carnival Cruise's CEO Micky Arison. He was asked by Sen. Jay Rockefeller if the company intended to reimburse the Navy and Coast Guard the US $4.2 million expended during rescue operations on various cruises, notably the Triumph "Poop" Cruise and the Splendor incident. He cited the "Maritime Tradition" of rescuing a stranded vessel as his reason for declining to pay any of the costs for the rescue. Except it seems he forgets the crew of the Star Princess declined to stop for a similar "vessel in distress" situation, and not only did the passengers on the cruise ship notify the crew, the crew acknowledged they saw the fishing boat, but didn't stop. There are even photos of the three crewmen waving. I think you know where this is going: Two men died. The other is suing, of course. So, lucky for him, it's just a "tradition" and not a "rule". I will never set foot on one of his ships.

#### Mental Magpie • Member • Posts: 5868

##### Re: Special Snowflake Stories « Reply #20341 on: April 04, 2013, 08:13:44 PM »

Me either.

#### ladyknight1 • Member • Posts: 13891 • Not all those who wander are lost

##### Re: Special Snowflake Stories « Reply #20342 on: April 04, 2013, 09:11:35 PM »

I will not either. I have been distinctly unimpressed by Carnival since cruising a competitor's ship and witnessing the behavior of their staff at the port. Never in a million years would I give that company a dime.

"All that is gold does not glitter, Not all those who wander are lost; The old that is strong does not wither, Deep roots are not reached by the frost." -J.R.R Tolkien

#### NyaChan • Member • Posts: 4153

##### Re: Special Snowflake Stories « Reply #20343 on: April 04, 2013, 09:31:15 PM »

What did the staff do? I've cruised with Carnival and was far less impressed than with other lines, but their staff didn't really stand out to me.

#### ladyknight1 • Member • Posts: 13891 • Not all those who wander are lost

##### Re: Special Snowflake Stories « Reply #20344 on: April 04, 2013, 10:10:10 PM »

They were less than courteous or respectful to their own passengers (we had friends on that ship). Also, our line for boarding the ship was handled by 6-8 crew members, facilitating quick boarding. The Carnival staff weren't focused on the guests, a lot of bad attitudes.

"All that is gold does not glitter, Not all those who wander are lost; The old that is strong does not wither, Deep roots are not reached by the frost." -J.R.R Tolkien

#### jedikaiti • Swiss Army Nerd • Member • Posts: 3766 • A pie in the hand is worth two in the mail.

##### Re: Special Snowflake Stories « Reply #20345 on: April 04, 2013, 10:25:32 PM »

The rest of the way was uneventful and I'm celebrating my not dying with a Death by Chocolate cupcake. I thought it was appropriate.

I hope it's decorated with little snowflakes.

Nope, white chocolate shavings. Still awesome.

Those look like snowflakes!

What part of v_e = \sqrt{\frac{2GM}{r}} don't you understand? It's only rocket science!
"The problem with re-examining your brilliant ideas is that more often than not, you discover they are the intellectual equivalent of saying, 'Hold my beer and watch this!'" - Cindy Couture #### snowdragon • Member • Posts: 2200 ##### Re: Special Snowflake Stories « Reply #20346 on: April 07, 2013, 07:45:42 PM » I am nominating one of my professors. She has us writing our paper piecemeal. Write the Introduction: get it aproved. Write the transition sentences that are to go between paragraphs: get those approved. Write the conclusion: get it approved, then go on write the rest of the papers. I struggled with this format. I made the teacher aware ( at her suggestion that we let her know what issues we are having - in each and every piece of correspondence we get from her. ) that I was struggling with this in January. Easter Sunday she wrote me a note and said I could have written it as a whole had I wanted to- after the bits and pieces were all done. The assignment due today was to post about our challenges in writing the bits and pieces and I stated what I had told her, that I found it difficult to do it this way but that I had gotten through it with help from the teacher I got through it, but that I had no suggestions for others, other than to go to the teacher if they needed help working around this project. I put up my post late last night. So, today I get an email about how she expects more professionalism and how she could not believe that I was still stating that writing in this manner was my greatest challenge in this assignment, especially since I had gotten full credit ( 25/25) for the assignment. She then went on to shut down the student interaction forums and bar students from communicating with each other about the assignments. OK, so you ask students what they find challenging and then have a temper tantrum when they answer honestly. Really? And then the student is unprofessional? I am so not understanding the "logic" here. But I am calling SS here. #### AnnaJ • Member • Posts: 1042 ##### Re: Special Snowflake Stories « Reply #20347 on: April 08, 2013, 01:27:53 AM » I had another driving special snowflake this morning, Mr MLackOfDrivingSkillsIsYourFault. On my route to work is a mini-roundabout. As you approach the roundabout the road splits into 2 lanes, both VERY clearly marked (road signs plus arrows on the tarmac) The left lane is for turning left ONLY (I'm in the UK, so equivalent to a right turn for you USians) The right lane is for right turns and going straight on. Approching the roundabout I was in the right lane to go straight on. Mr SS decided to come yup the left lane and then to turn right. He didn't bother to indicate. I sounded my horn to alert him to the fact that I was there and he was about to drive into the side of my car (given that he was about to turn into me, and I had nowhere to go to get out of his way, that fact that it was my right of way and he was in the wrong lane and cutting me up was the least of my worries) Cue lots of rude gestures and (I surmise, my windows were closed) shouting! Presumably I should have levitated my car to get out of his way. Or maybe I should have magically divined that he was planning to pull across, and should have waited til he had done it...! What makes it worse is that the left turn is into a car park, so it would have been really easy to turn in the car park. Or if he had stopped and indicated right someone would probably have let him in. I'm not familiar with the bolded term - is that a reference to the United States? 
#### violinp • Member • Posts: 4083 • cabbagegirl28's my sister :) ##### Re: Special Snowflake Stories « Reply #20348 on: April 08, 2013, 01:38:50 AM » I had another driving special snowflake this morning, Mr MLackOfDrivingSkillsIsYourFault. On my route to work is a mini-roundabout. As you approach the roundabout the road splits into 2 lanes, both VERY clearly marked (road signs plus arrows on the tarmac) The left lane is for turning left ONLY (I'm in the UK, so equivalent to a right turn for you USians) The right lane is for right turns and going straight on. Approching the roundabout I was in the right lane to go straight on. Mr SS decided to come yup the left lane and then to turn right. He didn't bother to indicate. I sounded my horn to alert him to the fact that I was there and he was about to drive into the side of my car (given that he was about to turn into me, and I had nowhere to go to get out of his way, that fact that it was my right of way and he was in the wrong lane and cutting me up was the least of my worries) Cue lots of rude gestures and (I surmise, my windows were closed) shouting! Presumably I should have levitated my car to get out of his way. Or maybe I should have magically divined that he was planning to pull across, and should have waited til he had done it...! What makes it worse is that the left turn is into a car park, so it would have been really easy to turn in the car park. Or if he had stopped and indicated right someone would probably have let him in. I'm not familiar with the bolded term - is that a reference to the United States? It's a shorter way to say Americans. "It takes a great deal of courage to stand up to your enemies, but even more to stand up to your friends" - Harry Potter #### marcel • Member • Posts: 2137 ##### Re: Special Snowflake Stories « Reply #20349 on: April 08, 2013, 01:51:54 AM » I nominate Carnival Cruise's CEO Micky Arison. He was asked by Sen. Jay Rockefeller is the company intended to reimburse the Navy and Coast Guard the US$4.2 million expended during rescue operations on various cruises, notably the Triumph "Poop" Cruise and the Splendor incident. He cited the "Maritime Tradition" of rescuing a stranded vessel as his reason for declining to pay any of the costs for the rescue. Except it seems he forgets the crew of the Star Princess declined to stop for a similar "vessel in distress" situation and not only did the passengers on the cruiseship notify the crew, the crew acknowledged they saw the fishing boat, but didn't stop. There are even photos of the three crewmen waving. I think you know where this is going: Two men died. The other is suing, of course. So, lucky for him, it's just a "tradition" and not a "rule". I will never set foot on one of his ships. It is not just a tradition, it is actualy a rule. It is simple maritime law that you hvae to respond if you receive any kind of distress call. Wherever you go..... There you are. #### MariaE • Member • Posts: 5726 • So many books, so little time ##### Re: Special Snowflake Stories « Reply #20350 on: April 08, 2013, 01:58:28 AM » I had another driving special snowflake this morning, Mr MLackOfDrivingSkillsIsYourFault. On my route to work is a mini-roundabout. As you approach the roundabout the road splits into 2 lanes, both VERY clearly marked (road signs plus arrows on the tarmac) The left lane is for turning left ONLY (I'm in the UK, so equivalent to a right turn for you USians) The right lane is for right turns and going straight on. 
Approching the roundabout I was in the right lane to go straight on. Mr SS decided to come yup the left lane and then to turn right. He didn't bother to indicate. I sounded my horn to alert him to the fact that I was there and he was about to drive into the side of my car  (given that he was about to turn into me, and I had nowhere to go to get out of his way, that fact that it was my right of way and he was in the wrong lane and cutting me up was the least of my worries) Cue lots of rude gestures and (I surmise, my windows were closed) shouting! Presumably I should have levitated my car to get out of his way. Or maybe I should have magically divined that he was planning to pull across, and should have waited til he had done it...! What makes it worse is that the left turn is into a car park, so it would have been really easy to turn in the car park. Or if he had stopped and indicated right someone would probably have let him in. I'm not familiar with the bolded term - is that a reference to the United States? It's a shorter way to say Americans. Not to mention that in some languages "Americans" covers everybody from the Americas - or at the very least everybody from North America. So it seems more precise and is quicker to type than "People from the US". Dane by birth, Kiwi by choice #### Gyburc • Member • Posts: 1923 ##### Re: Special Snowflake Stories « Reply #20351 on: April 08, 2013, 06:11:08 AM » I also encountered a flurry of SSs on the roads recently... I was driving down a fairly narrow two-lane country road with a speed-limit of 50 mph. The road winds rather a lot, so I tend to drive fairly slowly (up to 45 in the straight bits, down to 40 where visibility is bad). This was obviously annoying the driver behind me who was tailgating me (SS no. 1). I came round a bend, fairly slowly, because there was a side-turning just past it on my side of the road. As I came round the corner, I saw a SUV coming the other way, quite fast, and a woman on my side of the road walking in the road towards me. There was a wide grassy verge, but she was walking in the road-way (SS no. 2). Unfortunately, the SUV and I were going to pass each other exactly at the point where the woman was walking. Eeeek. So I braked as much as I could, given the tailgater behind me, then moved out as far into the road as I could without running into the SUV, and managed to get past the woman in the road. She moved all of 6 inches over towards the verge, stopped in her tracks, and swore a blue streak at me as I passed her. When you look into the photocopier, the photocopier also looks into you #### NyaChan • Member • Posts: 4153 ##### Re: Special Snowflake Stories « Reply #20352 on: April 08, 2013, 07:17:24 AM » I am nominating one of my professors. She has us writing our paper piecemeal.  Write the Introduction: get it aproved. Write the transition sentences that are to go between paragraphs: get those approved. Write the conclusion: get it approved, then go on write the rest of the papers. I struggled with this format.  I made the teacher aware ( at her suggestion that we let her know what issues we are having - in each and every piece of correspondence we get from her. ) that I was struggling with this in January.  Easter Sunday she wrote me a note and said I could have written it as a whole had I wanted to- after the bits and pieces were all done. 
The assignment due today was to post about our challenges in writing the bits and pieces and I stated what I had told her, that I found it difficult to do it this way but that I had gotten through it with help from the teacher I got through it, but that I had no suggestions for others, other than to go to the teacher if they needed help working around this project. I put up my post late last night. So, today I get an email about how she expects more professionalism and how she could not believe that I was still stating that writing in this manner was my greatest challenge in this assignment, especially since I had gotten full credit ( 25/25) for the assignment. She then went on to shut down the student interaction forums and bar students from communicating with each other about the assignments. OK, so you ask students what they find challenging and then have a temper tantrum when they answer honestly.   Really?  And then the student is unprofessional? I am so not understanding the "logic" here.  But I am calling SS here. I think you are in a class with someone who learned their teaching style with my HS English teacher.  She did the exact same thing only effectively SHE chose our Topic Sentences & transitions and thesis - we'd just get to fill in the blanks and god help you if you changed a word of what she had chosen.  Every paper's thesis for every student was Because [Insert Character] became [insert more/less] [adjective], he/she was able to achieve more/less happiness. • Member • Posts: 5788 • Or you can just call me Diane. (NE USA EHellion) ##### Re: Special Snowflake Stories « Reply #20353 on: April 08, 2013, 08:32:04 AM » I had another driving special snowflake this morning, Mr MLackOfDrivingSkillsIsYourFault. On my route to work is a mini-roundabout. As you approach the roundabout the road splits into 2 lanes, both VERY clearly marked (road signs plus arrows on the tarmac) The left lane is for turning left ONLY (I'm in the UK, so equivalent to a right turn for you USians) The right lane is for right turns and going straight on. Approching the roundabout I was in the right lane to go straight on. Mr SS decided to come yup the left lane and then to turn right. He didn't bother to indicate. I sounded my horn to alert him to the fact that I was there and he was about to drive into the side of my car  (given that he was about to turn into me, and I had nowhere to go to get out of his way, that fact that it was my right of way and he was in the wrong lane and cutting me up was the least of my worries) Cue lots of rude gestures and (I surmise, my windows were closed) shouting! Presumably I should have levitated my car to get out of his way. Or maybe I should have magically divined that he was planning to pull across, and should have waited til he had done it...! What makes it worse is that the left turn is into a car park, so it would have been really easy to turn in the car park. Or if he had stopped and indicated right someone would probably have let him in. I'm not familiar with the bolded term - is that a reference to the United States? It's a shorter way to say Americans. Not to mention that in some languages "Americans" covers everybody from the Americas - or at the very least everybody from North America. So it seems more precise and is quicker to type than "People from the US". Except I've never heard an American refer to Americans that way. Location:
2017-04-23 23:59:02
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5563815832138062, "perplexity": 2545.754841660848}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917118851.8/warc/CC-MAIN-20170423031158-00297-ip-10-145-167-34.ec2.internal.warc.gz"}
https://physics.stackexchange.com/questions/494927/for-a-physical-pendulum-why-do-you-use-an-angular-coordinate-system-when-the-ce
# For a physical pendulum, why do you use an angular coordinate system when the centre of mass translates too?

I am trying to understand why you can use $$F=ma$$ for a simple pendulum, yet need the rotational equivalent for a physical pendulum. I understand it is because the rigid body can rotate too, whereas you don't consider that for a simple pendulum and consider just the mass under translational motion. I just don't understand why you don't have to consider translational motion with the physical pendulum. My understanding was that kinetic energy is the sum of the translational kinetic energy of the centre of mass and the rotational kinetic energy of rotation about the centre of mass. I'm not sure how to use this information, though, for a physical pendulum when it is rotating about a different axis.

• The formula $\tau= I_{support}\alpha$ (which uses an angular coordinate system) can also be used on a simple pendulum. You will have to assume a point mass "rotating" about the hinge. – harshit54 Aug 2 at 21:48

With any extended object such as a physical pendulum, you are considering not the force on a single particle but on all of the particles (atoms, molecules, whatever) that make up the object: right off the bat, you're considering the positions of $$10^\text{something}$$ particles.
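One way to reconcile the energy bookkeeping raised in the question (a standard rigid-body identity, added here for clarity; it is not part of the quoted thread): if the body pivots about a fixed axis at distance $$d$$ from the centre of mass, the centre of mass moves on a circle of radius $$d$$, so $$v_{\mathrm{cm}} = d\,\dot\theta$$ and

$$T = \tfrac{1}{2} m v_{\mathrm{cm}}^2 + \tfrac{1}{2} I_{\mathrm{cm}}\dot\theta^2 = \tfrac{1}{2}\left(m d^2 + I_{\mathrm{cm}}\right)\dot\theta^2 = \tfrac{1}{2} I_{\mathrm{pivot}}\dot\theta^2,$$

by the parallel axis theorem. The translation of the centre of mass is therefore not ignored; it is absorbed into $$I_{\mathrm{pivot}}$$, which is why the single angular equation $$\tau = I_{\mathrm{pivot}}\,\alpha$$ about the support suffices.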
2019-09-18 05:53:02
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 2, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8325556516647339, "perplexity": 153.91812626106685}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514573184.25/warc/CC-MAIN-20190918044831-20190918070831-00555.warc.gz"}
https://bookdown.org/aramir21/IntroductionBayesianEconometricsGuidedTour/probit-model.html
## 6.3 Probit model

The probit model also has a binary outcome as its dependent variable. There is a latent (unobserved) random variable, $$Y_i^*$$, that defines the structure of the estimation problem $$Y_i=\begin{Bmatrix} 1, & Y_i^* \geq 0\\ 0, & Y_i^* < 0 \end{Bmatrix},$$ where $$Y_i^*={\bf{x}}_i^{\top}\beta+\mu_i$$, $$\mu_i\stackrel{iid} {\sim}N(0,1)$$. Then, \begin{align} P[Y_i=1]&=P[Y_i^*\geq 0]\\ &=P[\mathbf{x}_i^{\top}\beta+\mu_i\geq 0]\\ &=P[\mu_i\geq -\mathbf{x}_i^{\top}\beta]\\ &=1-P[\mu_i < -\mathbf{x}_i^{\top}\beta]\\ &=P[\mu_i < \mathbf{x}_i^{\top}\beta], \end{align} where the last equality follows by symmetry at 0.

In addition, observe that the previous calculations do not change if we multiply $$Y_i^*$$ by a positive constant; this implies identification issues regarding scale. Intuitively, this is because we only observe 0’s or 1’s that are driven by an unobserved latent variable $$Y_i^*$$. This issue is also present in the logit model, and it is why we set the variance equal to 1.

Albert and Chib (1993) implemented data augmentation to apply a Gibbs sampling algorithm in this model. Augmenting this model with $$Y_i^*$$, the likelihood contribution from observation $$i$$ is $$p(y_i|y_i^*)=1_{y_i=0}1_{y_i^*< 0}+1_{y_i=1}1_{y_i^*\geq 0}$$, where $$1_A$$ is an indicator function that takes the value of 1 when condition $$A$$ is satisfied. The posterior distribution is $$\pi(\beta,{\bf{Y^*}}|{\bf{y}},{\bf{X}})\propto\prod_{i=1}^n\left[\mathbf{1}_{y_i=0}1_{y_i^*< 0}+1_{y_i=1}1_{y_i^*\geq 0}\right] \times N_N({\bf{Y}}^*|{\bf{X}\beta},{\bf{I}}_N)\times N_K(\beta|\beta_0,{\bf{B}}_0)$$ when taking a normal distribution as prior, $$\beta\sim N(\beta_0,{\bf{B}}_0)$$.

The conditional posterior distribution of the latent variable is \begin{align} Y_i^*|\beta,{\bf{y}},{\bf{X}}&\sim\begin{Bmatrix} TN_{[0,\infty)}({\bf{x}}_i^{\top}\beta,1), & y_i= 1\\ TN_{(-\infty,0)}({\bf{x}}_i^{\top}\beta,1), & y_i= 0 \\ \end{Bmatrix}, \end{align} where $$TN_A$$ denotes a truncated normal density in the interval $$A$$. The conditional posterior distribution of the location parameters is \begin{align} \beta|{\bf{Y}}^*, {\bf{X}} & \sim N(\beta_n,\bf{B}_n), \end{align} where $${\bf{B}}_n = ({\bf{B}}_0^{-1} + {\bf{X}}^{\top}{\bf{X}})^{-1}$$, and $$\beta_n= {\bf{B}}_n({\bf{B}}_0^{-1}\beta_0 + {\bf{X}}^{\top}{\bf{Y}}^*)$$.

Application: Determinants of hospitalization in Medellín

We use the dataset named 2HealthMed.csv, which is in folder DataApp (see Table 13.3 for details) in our github repository (https://github.com/besmarter/BSTApp) and was used by Ramírez Hassan, Cardona Jiménez, and Cadavid Montoya (2013). Our dependent variable is a binary indicator with a value equal to 1 if an individual was hospitalized in 2007, and 0 otherwise. The specification of the model is \begin{align} \text{Hosp}_i&=\beta_1+\beta_2\text{SHI}_i+\beta_3\text{Female}_i+\beta_4\text{Age}_i+\beta_5\text{Age}_i^2+\beta_6\text{Est2}_i+\beta_7\text{Est3}_i\\ &+\beta_8\text{Fair}_i+\beta_9\text{Good}_i+\beta_{10}\text{Excellent}_i, \end{align} where SHI is a binary variable equal to 1 if the individual is in a subsidized health care program and 0 otherwise, Female is an indicator of gender, Age is measured in years, Est2 and Est3 are indicators of socio-economic status (the reference is Est1, which is the lowest), and Fair, Good and Excellent are indicators of self-perceived health status, where Bad is the reference. Let’s set $$\beta_0={\bf{0}}_{10}$$, $${\bf{B}}_0={\bf{I}}_{10}$$, and the iterations, burn-in and thinning parameters equal to 10000, 1000 and 1, respectively.
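Before the bayesm run below, it may help to see that the two conditional posteriors above translate directly into a Gibbs sampler. The following is a minimal sketch in R on simulated data (added for illustration; the variable names, sample size and true coefficients here are made up, and the chapter's actual application uses bayesm::rbprobitGibbs as shown next).

```r
# Sketch of the Albert-Chib data-augmentation Gibbs sampler for the probit model.
# Simulated data and illustrative settings only; not part of the original application.
set.seed(10101)
n <- 1000; K <- 3
X <- cbind(1, rnorm(n), rbinom(n, 1, 0.5))        # simulated regressors
beta.true <- c(-0.5, 0.8, -0.4)
y <- as.numeric(X %*% beta.true + rnorm(n) >= 0)  # binary outcome from the latent model

b0  <- rep(0, K)                                  # prior mean
B0i <- diag(K)                                    # prior precision (inverse of covariance)
S <- 5000; burnin <- 1000
beta.draws <- matrix(NA, S, K)

# Draw from N(m, 1) truncated to (0, Inf) if lower = TRUE, to (-Inf, 0) otherwise,
# using the inverse-CDF method.
rtnorm01 <- function(m, lower) {
  u  <- runif(length(m))
  p0 <- pnorm(0, mean = m, sd = 1)
  if (lower) qnorm(p0 + u * (1 - p0), mean = m, sd = 1) else qnorm(u * p0, mean = m, sd = 1)
}

Bn   <- solve(B0i + t(X) %*% X)                   # posterior covariance of beta given Y*
beta <- rep(0, K)
for (s in 1:S) {
  m     <- as.vector(X %*% beta)
  ystar <- ifelse(y == 1, rtnorm01(m, TRUE), rtnorm01(m, FALSE))   # latent variable draws
  bn    <- Bn %*% (B0i %*% b0 + t(X) %*% ystar)   # posterior mean of beta given Y*
  beta  <- as.vector(bn + t(chol(Bn)) %*% rnorm(K))                # beta | Y* ~ N(bn, Bn)
  beta.draws[s, ] <- beta
}
colMeans(beta.draws[-(1:burnin), ])               # should be close to beta.true
```

The two sampling steps inside the loop are exactly the conditionals derived above: truncated normal draws for $$Y_i^*$$ and a multivariate normal draw for $$\beta$$ with mean $$\beta_n$$ and covariance $${\bf{B}}_n$$. The chapter's own results below are produced with bayesm and the GUI instead.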
mydata <- read.csv("DataApplications/2HealthMed.csv", header = T, sep = ",") attach(mydata) ## The following objects are masked from mydata (pos = 3): ## ## Age, Age2 str(mydata) ## 'data.frame': 12975 obs. of 22 variables: ## $id : int 1 2 3 4 5 6 7 8 9 10 ... ##$ MedVisPrev : int 0 0 0 0 0 0 0 0 0 0 ... ## $MedVisPrevOr: int 1 1 1 1 1 1 1 1 1 1 ... ##$ Hosp : int 0 0 0 0 0 0 0 0 0 0 ... ## $SHI : int 1 1 1 1 0 0 1 1 0 0 ... ##$ Female : int 0 1 1 1 0 1 0 1 0 1 ... ## $Age : int 7 39 23 15 8 54 64 40 6 7 ... ##$ Age2 : int 49 1521 529 225 64 2916 4096 1600 36 49 ... ## $FemaleAge : int 0 39 23 15 0 54 0 40 0 7 ... ##$ Est1 : int 1 0 0 0 0 0 0 0 0 0 ... ## $Est2 : int 0 1 1 1 0 1 1 1 0 0 ... ##$ Est3 : int 0 0 0 0 1 0 0 0 1 1 ... ## $Bad : int 0 0 0 0 0 0 0 0 0 0 ... ##$ Fair : int 0 0 0 0 0 0 1 0 0 0 ... ## $Good : int 1 1 1 1 0 0 0 1 1 1 ... ##$ Excellent : int 0 0 0 0 1 1 0 0 0 0 ... ## $NoEd : int 1 0 0 0 1 0 0 0 1 1 ... ##$ PriEd : int 0 0 0 0 0 1 1 1 0 0 ... ## $HighEd : int 0 1 1 1 0 0 0 0 0 0 ... ##$ VocEd : int 0 0 0 0 0 0 0 0 0 0 ... ## $UnivEd : int 0 0 0 0 0 0 0 0 0 0 ... ##$ PTL : num 0.43 0 0 0 0 0.06 0 0.38 0 1 ... K <- 10 # Number of regressors b0 <- rep(0, K) # Prio mean B0i <- diag(K) # Prior precision (inverse of covariance) Prior <- list(betabar = b0, A = B0i) # Prior list y <- Hosp # Dependent variables X <- cbind(1, SHI, Female, Age, Age2, Est2, Est3, Fair, Good, Excellent) # Regressors Data <- list(y = y, X = X) # Data list Mcmc <- list(R = 10000, keep = 1, nprint = 0) # MCMC parameters RegProb <- bayesm::rbprobitGibbs(Data = Data, Prior = Prior, Mcmc = Mcmc) # Inference using bayesm package ## ## Starting Gibbs Sampler for Binary Probit Model ## with 12975 observations ## Table of y Values ## y ## 0 1 ## 12571 404 ## ## Prior Parms: ## betabar ## [1] 0 0 0 0 0 0 0 0 0 0 ## A ## [,1] [,2] [,3] [,4] [,5] [,6] [,7] [,8] [,9] [,10] ## [1,] 1 0 0 0 0 0 0 0 0 0 ## [2,] 0 1 0 0 0 0 0 0 0 0 ## [3,] 0 0 1 0 0 0 0 0 0 0 ## [4,] 0 0 0 1 0 0 0 0 0 0 ## [5,] 0 0 0 0 1 0 0 0 0 0 ## [6,] 0 0 0 0 0 1 0 0 0 0 ## [7,] 0 0 0 0 0 0 1 0 0 0 ## [8,] 0 0 0 0 0 0 0 1 0 0 ## [9,] 0 0 0 0 0 0 0 0 1 0 ## [10,] 0 0 0 0 0 0 0 0 0 1 ## ## MCMC parms: ## R= 10000 keep= 1 nprint= 0 ## PostPar <- coda::mcmc(RegProb\$betadraw) # Posterior draws colnames(PostPar) <- c("Cte", "SHI", "Female", "Age", "Age2", "Est2", "Est3", "Fair", "Good", "Excellent") # Names summary(PostPar) # Posterior summary ## ## Iterations = 1:10000 ## Thinning interval = 1 ## Number of chains = 1 ## Sample size per chain = 10000 ## ## 1. Empirical mean and standard deviation for each variable, ## plus standard error of the mean: ## ## Mean SD Naive SE Time-series SE ## Cte -9.378e-01 1.363e-01 1.363e-03 3.601e-03 ## SHI -6.933e-03 5.868e-02 5.868e-04 2.193e-03 ## Female 1.266e-01 4.895e-02 4.895e-04 1.797e-03 ## Age -1.533e-04 3.625e-03 3.625e-05 1.199e-04 ## Age2 4.245e-05 4.354e-05 4.354e-07 1.318e-06 ## Est2 -8.793e-02 5.231e-02 5.231e-04 1.805e-03 ## Est3 -4.495e-02 8.050e-02 8.050e-04 2.751e-03 ## Fair -4.937e-01 1.133e-01 1.133e-03 2.069e-03 ## Good -1.204e+00 1.121e-01 1.121e-03 2.312e-03 ## Excellent -1.056e+00 1.339e-01 1.339e-03 3.523e-03 ## ## 2. 
Quantiles for each variable: ## ## 2.5% 25% 50% 75% 97.5% ## Cte -1.208e+00 -1.030e+00 -9.361e-01 -8.448e-01 -0.6733154 ## SHI -1.199e-01 -4.581e-02 -7.643e-03 3.107e-02 0.1121491 ## Female 3.131e-02 9.345e-02 1.271e-01 1.597e-01 0.2212118 ## Age -7.196e-03 -2.584e-03 -1.551e-04 2.287e-03 0.0070899 ## Age2 -4.363e-05 1.317e-05 4.217e-05 7.254e-05 0.0001262 ## Est2 -1.910e-01 -1.235e-01 -8.748e-02 -5.274e-02 0.0147171 ## Est3 -2.026e-01 -9.924e-02 -4.423e-02 8.618e-03 0.1118982 ## Fair -7.137e-01 -5.700e-01 -4.946e-01 -4.186e-01 -0.2690228 ## Good -1.421e+00 -1.279e+00 -1.206e+00 -1.130e+00 -0.9812539 ## Excellent -1.322e+00 -1.146e+00 -1.056e+00 -9.661e-01 -0.7899088 It seems from our results that female and health status are relevant variables for hospitalization, as their 95% credible intervals do not cross 0. Women have a higher probability of being hospitalized than do men, and people with bad self perception of health condition also have a higher probability of being hospitalized. We get same results programming a Gibbs sampler algorithm (see Exercise 1) and using our GUI. We also see that there are posterior convergence issues (see Exercise 2). ### References Albert, James H, and Siddhartha Chib. 1993. “Bayesian Analysis of Binary and Polychotomous Response Data.” Journal of the American Statistical Association 88 (422): 669–79. Ramírez Hassan, A., J. Cardona Jiménez, and R. Cadavid Montoya. 2013. “The Impact of Subsidized Health Insurance on the Poor in Colombia: Evaluating the Case of Medellín.” Economia Aplicada 17 (4): 543–56. Tanner, M. A., and W. H. Wong. 1987. “The Calculation of Posterior Distributions by Data Augmentation.” Journal of the American Statistical Association 82 (398): 528–40.
2022-07-02 13:59:20
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 4, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6919345855712891, "perplexity": 2077.6239514196764}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104141372.60/warc/CC-MAIN-20220702131941-20220702161941-00371.warc.gz"}
http://mathematica.stackexchange.com/questions/13996/how-can-i-create-facetted-histograms/14000
# How can I create facetted histograms? I love Mathematica, and I would love to be able to use it for all of my data analysis tasks, but there is one type of analysis which I find almost trivial in R, and I have no idea how to approach in Mathematica. Say I have a CSV file containing a few hundred thousand people's ranked preferences for items, in a format like this: Gender,X29234,X30310,X28908,... Male,1,2,3,... Female,1,3,2,... ... (in this case the female liked item X28908 more than item X30310) 10000 rows of sample data can be found here. I want to plot histograms of the ranks each gender assigned to each item. In R I can use reshape2 and ggplot2 to do something like this: library(reshape2) library(ggplot2) melted = melt(merged_data, id=1) ggplot(melted) + geom_histogram(aes(x=value,y = ..density..,fill=factor(Gender)),binwidth=1, position="identity",alpha=.5) + facet_wrap(~variable) + theme(legend.position="bottom") + scale_fill_discrete(name="Gender: ") to get a nice graph like this: The key thing for me here are that the individual plots all end up having the same axis bounds, even though the bounds of the data in each plot aren't the same, which makes comparisons really easy. How would I go about doing something like this in Mathematica so I can get rid of R in my workflow? Edit: I should clarify that this example with Gender is to simplify the problem to something easier to explain, and I'm looking for a solution which is as automatic as possible: doesn't involve enumerating in the source the possible values of the Gender field, doesn't involve hard-coding in the source the bounds of the plot etc. ggplot2 does all of these out of the box, so I'm looking for a reusable approach to do something similar. - Could you add some sample data (for example, merged_data.csv) that you used for the R plot (or link to it somewhere)? –  rm -rf Nov 2 '12 at 3:05 Coming right up... –  nicolaskruchten Nov 2 '12 at 3:14 Edited to add pastebin link to 10k lines of sample data –  nicolaskruchten Nov 2 '12 at 3:20 @nicolaskruchten, have you tried setting PlotRange to the same value for all graphs? –  alancalvitti Nov 2 '12 at 3:38 To be honest, I don't even know to split the data in this way and generate all these graphs in the first place, or how I would compute the PlotRange to assign to them :) –  nicolaskruchten Nov 2 '12 at 3:52 If I understand correctly, this question is about complete automation for uniform look. Hence the specifics of solution below. Import data: data = Import["http://pastebin.com/download.php?i=fZKMqxK9"]; Define filter that automatically separates data by gender: filter[gender_] := Rest[Transpose[Select[data, #[[1]] == gender &]]] For complete automation find absolute domain and range for all plots so it is easy to compare them (not sure if this is most elegant way): doMraN = {{1, Max[Rest@Transpose@Rest@data]}, {0, Max[((Max[#]/Total[#]) &@Tally[#][[All, 2]]) & /@ Flatten[(filter /@ Union[Rest[data[[All, 1]]]]), 1]]}} {{1, 14}, {0, 977/1411}} And you are done basically: MapThread[Histogram[{#2, #3}, Automatic, "Probability", PlotRange -> doMraN, PlotLabel -> #1[[1]], ChartStyle -> 54, GridLines -> {None, Automatic}, Frame -> True] &, filter /@ Prepend[Union[Rest[data[[All, 1]]]], "Gender"]] Grid[data[[1 ;; 20]], Frame -> All, Alignment -> Left, Background -> {{Yellow}, {Green}}] - Wow, that's impressive! Is there any way to avoid hardcoding the "Male" and "Female" strings and have it break out according to all the unique values of a column (i.e. 
so that if I added rows with an "Unknown" gender a third set of bars would automatically appear)? –  nicolaskruchten Nov 2 '12 at 4:45 @nicolaskruchten Yes, it is possible to automate the filtering of genders, - please see updated code. –  Vitaliy Kaurov Nov 2 '12 at 5:09 OK, very neat. The only remaining issue is the bounds of the plot: I see that you hard-coded the range, and that the domain is left to chance, for example the second plot only goes to 10. Would I have to generate the list of plots, iterate through them to find the min/max of the domain/range and then apply that to each plot? –  nicolaskruchten Nov 2 '12 at 5:24 @nicolaskruchten Done, see update. But I am sure there are many other ways to find domain/range - maybe more elegant ones. –  Vitaliy Kaurov Nov 2 '12 at 5:56 Thanks very much! I've learned a lot from this answer :) –  nicolaskruchten Nov 2 '12 at 12:15 Well, the following will get you close ... idata = Import["merged_data.txt", "CSV"]; Dimensions@idata hedr = idata[[1]]; idata = Rest@idata[[2 ;;]]; ftik = {#, #/10000.} & /@ Range[0, 6000, 1000]; xtik = {#, #} & /@ Range[0, 15, 1]; hlist = Histogram[{ Select[idata, SameQ[#[[1]], "Female"] &][[All, #]], Select[idata, SameQ[#[[1]], "Male"] &][[All, #]]}, {1}, PlotLabel -> hedr[[#]], Frame -> True, GridLines -> Automatic, FrameTicks -> {{ftik, None}, {xtik, None}}, PlotRange -> {{0, 15}, {0, 6000}}, ImageSize -> 200, BaseStyle -> {FontFamily -> "Helvetica", FontSize -> 12} ] & /@ Range[2, Length@hedr]; GraphicsGrid[ Join[Partition[hlist, 4], {{hlist[[12]], hlist[[13]], hlist[[14]]}}]] How much closer to you need to get? -
2014-09-16 23:39:38
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2552691102027893, "perplexity": 2811.250928795202}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-41/segments/1410657120057.96/warc/CC-MAIN-20140914011200-00226-ip-10-196-40-205.us-west-1.compute.internal.warc.gz"}
http://mathoverflow.net/questions/121832/analogue-of-cyclic-homology-for-e-n-algebras
Analogue of cyclic homology for e_n-algebras?

Cyclic homology may be defined as the primitive part (with respect to a natural product) of the homology of the Lie algebra associated with the "stabilization" of an associative algebra $A$. Here the "stabilization" is the matrix algebra $Mat_n(A)$, where $n$ goes to infinity. This definition explains another name for cyclic homology: "additive K-theory".

I believe that there exists an analogous notion for $e_n$-algebras. For an $e_n$-algebra $A$ it must be the primitive part of the homology of the Lie algebra associated with an $e_n$-algebra which is an appropriate "stabilization" of $A$. Does anyone have ideas about how this could be defined? What is the homology of the trivial algebra? Is it an additive version of something?

- This is not a great answer, but you can try to take the factorization homology of the $e_n$-algebra over $S^n$ (perhaps you need to assume that the algebra is framed). Then, since $S^n$ has an action of $SO(n+1)$, you could take the homotopy orbits (or fixed points) of the result. When $n=1$, this returns the usual definition of cyclic homology (or negative cyclic homology). But it's not obvious that this can be expressed in the terms that you're asking for above. –  Craig Westerland Feb 14 '13 at 21:40

You might want to look at this paper: arxiv.org/abs/1104.0181 by Jon Francis. He shows that Hochschild cohomology of an e_n algebra is the Lie algebra of some derived algebraic group. Of course this doesn't really answer the question. –  Geoffroy Horel Feb 19 '13 at 0:31

The thing you want to have an analogy with doesn't have anything to do with a commutative structure. In fact, there is an analog of cyclic homology for associative algebras, TC. This is pretty hard to compute. There is also an analog of HH that is specific to $E_n$ algebras, iterated THH. I don't know of anyone who has investigated cyclotomic structures on iterated THH in a way that remembers that it is iterated. That might be interesting. –  Sean Tilson Sep 19 '13 at 18:34

I think I have definitely lied to you. See Covering Homology by Brun, Carlsson, and Dundas or higher topological cyclic homology by Carlsson, Douglas, and Dundas. These might answer your question. –  Sean Tilson Sep 20 '13 at 12:21

In the paper mentioned in the comments Francis defines a candidate for this. If $A$ is an $E_n$-algebra one has an equivalence of associative algebras $B = \int_{S^{n-1}} A \simeq (\int_{S^{n-1}} A)^{\rm op} = B^{\rm op}$. Thus, any left $B$-module has a canonical right $B$-module structure. Identifying $B$ with the $E_n$-enveloping algebra ${\rm Env}(A)$ this gives $A$ the natural structure of a left $\int_{S^{n-1}}A$-module. Letting $A^\tau$ be the induced right module we can form $$A^\tau \otimes_{\int_{S^{n-1}} A} A$$ which he calls the $E_n$-Hochschild homology of $A$. Using excision you can prove that $\int_{S^1} A = A \otimes_{A \otimes A^{\rm op}} A$, recovering classical Hochschild homology. Further, if $A$ is commutative, or at least admits an $E_{n+1}$-refinement, then the above definition coincides with $\int_{S^n} A$.
2014-09-18 19:55:45
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9177026152610779, "perplexity": 342.1039004438176}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-41/segments/1410657129229.10/warc/CC-MAIN-20140914011209-00076-ip-10-196-40-205.us-west-1.compute.internal.warc.gz"}
https://socratic.org/questions/how-do-you-use-a-double-angle-formula-to-find-the-exact-value-of-cos2u-when-sin-
# How do you use a double-angle formula to find the exact value of cos2u when sin u = 7/25, where pi/2 < u < pi?

Feb 17, 2017

$\cos 2 u = 1 - 2 {\sin}^{2} u = 1 - 2 {\left(\frac{7}{25}\right)}^{2} = 1 - \frac{98}{625} = \frac{527}{625}.$
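As a quick consistency check (added here; it is not part of the original answer): the identity used depends only on $\sin^2 u$, so the restriction $\pi/2 < u < \pi$ does not change the value. The same result follows if one first finds $\cos u$:

$$\cos u = -\tfrac{24}{25} \ (\text{second quadrant}), \qquad \cos 2u = 2\cos^2 u - 1 = \tfrac{1152}{625} - 1 = \tfrac{527}{625},$$

which agrees with the answer above.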
2017-12-13 05:16:42
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 1, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7475740909576416, "perplexity": 12937.759134219377}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948521292.23/warc/CC-MAIN-20171213045921-20171213065921-00738.warc.gz"}
http://math.stackexchange.com/questions/46242/finite-measures
# finite measures

$F$ is a finite measure on $(X,A)$. $a$ and $b$ belong to $A$. Show that $F(a \cup b)=F(a)+F(b)-F(a \cap b)$.

I have no idea how to approach this question. Any assistance would be appreciated.

- Venn diagrams? Honestly, I don't know enough about measure theory to give a proper answer, but that looks like the definition of union in the context of measure, so I imagine one could approach it as one approaches normal unions. Then again, I could be way off base. –  Jack Henahan Jun 19 '11 at 7:33

Hint: write $a \cup b$ as the following disjoint union: $$a \cup b = [a - (a \cap b)] \cup (a \cap b) \cup [b - (a \cap b)].$$ The case where $F(X)=0$ is trivial; the case where $F(X) > 0$ is essentially nothing more than ${\rm P}(a \cup b) = {\rm P}(a) + {\rm P}(b) - {\rm P}(a \cap b)$ from probability theory. –  Shai Covo Jun 19 '11 at 10:57

Note that $a = (a\cap b^c) \cup (a \cap b)$, so $$F(a)=F(a \cap b^c) + F(a \cap b).$$ Also, $a \cup b = (a\cap b^c) \cup b$, so $F(a \cup b)= F(a\cap b^c) + F(b)$. Hence $$F(a \cup b) = ...$$
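Spelling out the last step of the answer above (this completion is added; it simply combines the two displayed identities):

$$F(a \cup b) = F(a\cap b^c) + F(b) = \left[F(a) - F(a \cap b)\right] + F(b) = F(a)+F(b)-F(a \cap b).$$

Finiteness of $F$ is what justifies moving $F(a \cap b)$ across the equality, since it rules out an $\infty - \infty$ expression.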
2015-08-03 17:23:56
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9563856720924377, "perplexity": 197.0695866461162}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-32/segments/1438042990112.50/warc/CC-MAIN-20150728002310-00131-ip-10-236-191-2.ec2.internal.warc.gz"}
http://quant.stackexchange.com/questions/3745/how-do-i-model-garch1-1-volatility-for-historical-indexes-in-matlab/3747
# How do I model GARCH(1,1) volatility for historical indexes in Matlab? I'm currently working with historical index data from Yahoo Finance and would like to plot the GARCH(1,1) volatility of these indexes. I'm working with the Datafeed and Finance Tollboxes in Matlab right now, and I'm able to get the data and plot the indexes. However I'm having some difficulty understanding the following methodology to get the GARCH sigmas. clf; clear all; %close all; format short; t = cputime; Connect = yahoo; dataFTSE=fetch(Connect,'^FTSE','Jan 1 1990',today, 'd'); dataN225=fetch(Connect,'^N225','Jan 1 1990',today, 'd'); dataGSPC=fetch(Connect,'^GSPC','Jan 1 1990',today, 'd'); close(Connect); tsFTSE=fints(dataFTSE(:,1),dataFTSE(:,end),'FTSE100','d','FTSE100'); tsN225=fints(dataN225(:,1),dataN225(:,end),'NiKKEI225','d','NiKKEI225'); tsGSPC=fints(dataGSPC(:,1),dataGSPC(:,end),'SP500','d','SP500'); subplot 311; plot(tsFTSE) xlabel('Time (date)') ylabel('Adjusted Close price ($)') subplot 312; plot(tsGSPC) xlabel('Time (date)') ylabel('Adjusted Close price ($)') subplot 313; plot(tsN225) xlabel('Time (date)') yt = get(gca,'YTick'); set(gca,'YTickLabel', sprintf('%.0f|',yt)) e = cputime - t From then on I get the indexes in financial objects, where the prices are in cell arrays. What I think needs to happen is to fit the GARCH(1,1) model like so: ugarch(U,1,1) where U is a vector with just the prices of the index? I don't have a lot of experience with Matlab's data structures so any info or references will be greatly appreciated. The reason I don't want to use the R script is to have some uniformity of plots in my thesis. --EDIT-- I'm appending some more code which I think produces the plot I was after. It should be relatively easy to vectorize the index inputs and produce different plots. dataGSPCret = [0.0 price2ret(dataGSPC(:,end))']; retGSPC=fints(dataGSPC(:,1),dataGSPCret','retSP500','d',... 'retSP500'); [coeff3, errors3, LLF3, innovations3, sigmas3] = ... garchfit(dataGSPCret); sigmaGSPC = fints(dataGSPC(:,1),sigmas3','retSP500',... 'd','retSP500'); And then plot the GARCH variance over the daily returns. figure(2); subplot 311; hold on; plot(retFTSE); plot(sigmaFTSE); hold off; subplot 312; hold on; plot(retN225); plot(sigmaN225); hold off; subplot 313; hold on; plot(retGSPC); plot(sigmaGSPC); hold off; - For anyone looking to do something similar, I believe the appended code above is what I was trying to show with GARCH. –  cmdel Jul 12 '12 at 8:16
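For comparison, here is a minimal Python sketch of the same fit-then-extract-volatility step; it is only an illustration (it assumes a pandas Series of daily adjusted closes called prices and the third-party arch package), not a translation of the Matlab toolbox calls above.

import pandas as pd
from arch import arch_model

def garch11_volatility(prices: pd.Series) -> pd.Series:
    # Percent returns, the scale the arch package expects
    returns = 100 * prices.pct_change().dropna()
    # GARCH(1,1) with a constant mean, analogous to garchfit/ugarch above
    model = arch_model(returns, vol="Garch", p=1, q=1)
    result = model.fit(disp="off")
    # The conditional volatilities play the role of the sigmas output of garchfit
    return result.conditional_volatility

The returned series can then be plotted over the daily returns, much like the figure(2) block above.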
2014-11-22 16:47:01
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6139583587646484, "perplexity": 3011.4494761838955}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-49/segments/1416400378429.52/warc/CC-MAIN-20141119123258-00156-ip-10-235-23-156.ec2.internal.warc.gz"}
http://openstudy.com/updates/50c9c5bde4b09c5571447f25
## kr7210 What is "Lorentz invariant"? Please explain with a few examples.

1. henpen Something that does not change under Lorentz transformations (that is, it is the same no matter what the relative velocity of the observer is). The speed of light is Lorentz invariant, and because of this $t^2-x^2$ is also Lorentz invariant (if the same units are used for t as for x).
2. henpen Things like electron charge and mass (yes, mass does NOT increase with relative velocity) are also Lorentz invariant.
3. kr7210 Is $\left| B ^{2} - E ^{2} \right|$ Lorentz invariant?
4. henpen No idea, I don't know anything about electrodynamics.
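A quick check of the first claim (a sketch in units with $c = 1$, for a boost with speed $v$ along $x$): $t'^2 - x'^2 = \gamma^2 (t - v x)^2 - \gamma^2 (x - v t)^2 = \gamma^2 (1 - v^2)(t^2 - x^2) = t^2 - x^2$, since $\gamma^2 = 1/(1 - v^2)$, so the combination $t^2 - x^2$ is indeed unchanged by the transformation.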
2014-04-20 00:48:30
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8316376209259033, "perplexity": 994.8777539711692}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00059-ip-10-147-4-33.ec2.internal.warc.gz"}
https://datascience.stackexchange.com/questions/18670/cnn-for-phoneme-recognition
# CNN for phoneme recognition I am currently studying this paper, in which a CNN is applied to phoneme recognition using a visual representation of log mel filter banks and a limited weight sharing scheme. The visualisation of log mel filter banks is a way of representing and normalizing the data. They suggest visualizing it as a spectrogram with RGB colors; the closest I could come up with is to plot it using matplotlib's colormap cm.jet. They (being the paper) also suggest that each frame should be stacked with its [static, delta, delta_delta] filterbank energies. This looks like this (figure not shown): The input consists of an image patch of 15 frames, each stacked as [static, delta, delta_delta], so the input shape is (40, 45, 3). The limited weight sharing consists of limiting the weight sharing to a specific filter bank area, as speech is interpreted differently in different frequency areas, so full weight sharing, as in a normal convolution, would not work. Their implementation of limited weight sharing consists of controlling the weights in the weight matrix associated with each convolutional layer, so they apply a convolution on the complete input. The paper applies only one convolutional layer, as using multiple layers would destroy the locality of the feature maps extracted from the convolutional layer. The reason they use filter bank energies rather than the usual MFCC coefficients is that the DCT destroys the locality of the filter bank energies. Instead of controlling the weight matrix associated with the convolutional layer, I chose to implement the CNN with multiple inputs, so each input consists of a (small filter bank range, total_frames_with_deltas, 3) patch. For instance, the paper states that a filter size of 8 should be good, so I decided on a filter bank range of 8. Each small image patch is therefore of size (8, 45, 3). Each small image patch is extracted with a sliding window with a stride of 1 - so there is a lot of overlap between the inputs - and each input has its own convolutional layer. (In the figure, input_3, input_3, input3 should have been input_1, input_2, input_3 ...) Doing it this way makes it possible to use multiple convolutional layers, as locality is no longer a problem, since convolution is applied inside a filter bank area - this is my theory. The paper doesn't explicitly state it, but I guess the reason they do phoneme recognition on multiple frames is to have some left context and right context, so only the middle frame is being predicted/trained for. In my case the first 7 frames are the left context window, the middle frame is being trained for, and the last 7 frames are the right context window. So given multiple frames, only one phoneme is recognised - the one at the middle frame.
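To make the patch extraction described above concrete, here is a small NumPy sketch (illustrative only, with made-up array names - not code from the paper or from my model): it slices the (40, 45, 3) input into the 33 overlapping (8, 45, 3) patches that feed the per-filter-bank branches.

import numpy as np

# Hypothetical input: 40 mel filter banks x 45 frames (15 frames x [static, delta, delta_delta]) x 3 channels
features = np.random.randn(40, 45, 3)

window_height = 8   # filter bank span of each branch
stride = 1
splits = (features.shape[0] - window_height) // stride + 1   # (40 - 8) / 1 + 1 = 33

# One overlapping patch per branch, each of shape (8, 45, 3)
patches = [features[i * stride : i * stride + window_height] for i in range(splits)]
assert len(patches) == 33 and patches[0].shape == (8, 45, 3)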
My neural network currently looks like this: def model3(): #stride = 1 #dim = 40 #window_height = 8 #splits = ((40-8)+1)/1 = 33 next(test_generator()) next(train_generator(batch_size)) kernel_number = 200#int(math.ceil(splits)) list_of_input = [Input(shape = (window_height,total_frames_with_deltas,3)) for i in range(splits)] list_of_conv_output = [] list_of_conv_output_2 = [] list_of_conv_output_3 = [] list_of_conv_output_4 = [] list_of_conv_output_5 = [] list_of_max_out = [] for i in range(splits): #list_of_conv_output.append(Conv2D(filters = kernel_number , kernel_size = (15,6))(list_of_input[i])) #list_of_conv_output.append(Conv2D(filters = kernel_number , kernel_size = (window_height-1,3))(list_of_input[i])) list_of_conv_output.append(Conv2D(filters = kernel_number , kernel_size = (window_height,3), activation = 'relu')(list_of_input[i])) list_of_conv_output_2.append(Conv2D(filters = kernel_number , kernel_size = (1,5))(list_of_conv_output[i])) list_of_conv_output_3.append(Conv2D(filters = kernel_number , kernel_size = (1,7))(list_of_conv_output_2[i])) list_of_conv_output_4.append(Conv2D(filters = kernel_number , kernel_size = (1,11))(list_of_conv_output_3[i])) list_of_conv_output_5.append(Conv2D(filters = kernel_number , kernel_size = (1,13))(list_of_conv_output_4[i])) #list_of_conv_output_3.append(Conv2D(filters = kernel_number , kernel_size = (3,3),padding='same')(list_of_conv_output_2[i])) list_of_max_out.append((MaxPooling2D(pool_size=((1,11)))(list_of_conv_output_5[i]))) merge = keras.layers.concatenate(list_of_max_out) print merge.shape reshape = Reshape((total_frames/total_frames,-1))(merge) dense1 = Dense(units = 1000, activation = 'relu', name = "dense_1")(reshape) dense2 = Dense(units = 1000, activation = 'relu', name = "dense_2")(dense1) dense3 = Dense(units = 145 , activation = 'softmax', name = "dense_3")(dense2) #dense4 = Dense(units = 1, activation = 'linear', name = "dense_4")(dense3) model = Model(inputs = list_of_input , outputs = dense3) model.compile(loss="categorical_crossentropy", optimizer="SGD" , metrics = [metrics.categorical_accuracy]) reduce_lr=ReduceLROnPlateau(monitor='val_loss', factor=0.1, patience=3, verbose=1, mode='auto', epsilon=0.001, cooldown=0) stop = EarlyStopping(monitor='val_loss', min_delta=0, patience=5, verbose=1, mode='auto') print model.summary() raw_input("okay?") hist_current = model.fit_generator(train_generator(batch_size), steps_per_epoch=10, epochs = 10000, verbose = 1, validation_data = test_generator(), validation_steps=1) #pickle_safe = True, #workers = 4) So.. now comes the issue.. I been training the network and have only been able to get a validation_accuracy of highest being 0.17, and the accuracy after a lot of epochs end up being 1.0. (Plot is currently being made) fixed frame: (plot being still made) I am not sure why I am not getting better results.. Why this high error rate? I am using the TIMIT dataset which the other ones also use.. so why am I getting worse results? And sorry for the long post - hope more information of my design decision could be useful - and help understand how I understood the paper versus how i've applied would help pinpoint where my mistake would be. • Could you please explain the input shape (40,45,3) ? I think it should be (40,15,3) i.e. (no. of filter banks, context (left+right), 3 (static+delta+delta_delta) ). May 1, 2017 at 4:15 • I think you right about it should 3 frames. but 45 in total since 15x3 = 45. May 1, 2017 at 5:35 • Could you try and repost the outputs? Btw do you use kaldi? 
May 1, 2017 at 5:51 • @arduinolover yes I do use kaldi.. And it is currently running. fbank energies are extracted using kaldi framework. May 1, 2017 at 5:51 • Please go through this tutorial on CNN for speech recognition github.com/botonchou/libdnn/wiki/… May 1, 2017 at 5:54 Could be your network structure: The paper states that their experiment are done using: conv pool dense dense dense(softmax) So something like this for fws: def fws(): #Input shape: (batch_size,40,45,3) #output shape: (1,15,50) # number of unit in conv_feature_map = splitd filter_size = 8 pooling_size = 28 stride_step = 2 pool_splits = ((splits - pooling_size)+1)/2 conv_featur_map = [] pool_feature_map = [] print "Printing shapes" list_of_input = [Input(shape = (window_height,total_frames_with_deltas,3)) for i in range(splits)] #convolution shared_conv = Conv2D(filters = 150, kernel_size = (filter_size,45), activation='relu') for i in range(splits): conv_featur_map.append(shared_conv(list_of_input[i])) #Pooling input = Concatenate()(conv_featur_map) input = Reshape((splits,-1))(input) pooled = MaxPooling1D(pool_size = pooling_size, strides = stride_step)(input) #reshape = Reshape((3,-1))(pooled) #fc dense1 = Dense(units = 1000, activation = 'relu', name = "dense_1")(pooled) dense2 = Dense(units = 1000, activation = 'relu', name = "dense_2")(dense1) dense3 = Dense(units = 50 , activation = 'softmax', name = "dense_3")(dense2) Changing the position of the for loop in fws (moving it 2 lines up) makes it to a lws (Plus some adjustments with the pooling layer). • It looks way simpler than what I was going for... May 5, 2017 at 5:43
2022-05-27 20:33:07
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.40768441557884216, "perplexity": 4031.099692666599}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662675072.99/warc/CC-MAIN-20220527174336-20220527204336-00356.warc.gz"}
http://gmatclub.com/forum/critical-reasoning-question-stems-in-og-10-and-og-121167.html?fl=similar
# Critical reasoning question stems in OG 10 and OG 12

Manager, Status: Mission GMAT, Joined: 20 Apr 2011 - 27 Sep 2011, 06:16
Hi, I noticed while solving the OG 10 critical reasoning questions that the question stems in OG 10 are far more varied than the ones in OG 12. E.g. assumption question stems in OG 12 are very few and pretty straightforward; however, in OG 10, the assumption question stems are a little tricky, in that they don't make the question type very obvious. Does the actual GMAT put tricky question stems for difficult questions? Or is the pool of stems restricted to the OG 12 stems? I wonder why there is this change across editions - to trick the test-taker, or do the stems in OG 10 not appear on the exam anymore? Thank you.

Kaplan GMAT Instructor, Joined: 25 Aug 2009, Location: Cambridge, MA - 27 Sep 2011, 12:03
alicegmat wrote: [the original question, quoted above]
Hi AliceGMAT, I haven't noticed this myself--can you give me a few of the "tricky" question stems from the OG so I can make sure we're discussing the same thing? Thanks!
_________________ Eli Meyer, Kaplan Teacher, http://www.kaptest.com/GMAT

Manhattan GMAT Instructor, Joined: 27 Sep 2007 - 27 Sep 2011, 12:26
Received a PM asking me for my opinion - thanks for asking! I would also like to see some specific examples of what you're describing, but I will say this: if there is a substantive difference between something in OG10 and something in OG12, then they made that change for a reason. If they strip something out that used to be there, that's typically because they're not writing things in that way anymore. (But, again, I would like to see some examples of what you're describing.
) Director Status: My Thread Master Bschool Threads-->Krannert(Purdue),WP Carey(Arizona),Foster(Uwashngton) Joined: 28 Jun 2011 Posts: 894 Followers: 83 Kudos [?]: 210 [0], given: 57 Re: Critical reasoning question stems in OG 10 and OG 12 [#permalink] ### Show Tags 27 Sep 2011, 13:22 Veritas Prep GMAT Instructor Joined: 16 Oct 2010 Posts: 6844 Location: Pune, India Followers: 1932 Kudos [?]: 12007 [1] , given: 221 Re: Critical reasoning question stems in OG 10 and OG 12 [#permalink] ### Show Tags 27 Sep 2011, 22:01 1 This post received KUDOS Expert's post alicegmat wrote: Hi, I noticed while solving the OG 10 critical reasoning questions that the question stems in OG 10 are far more varied than the ones in OG 12? Eg. Assumption question stems in OG 12 are very few and pretty straightforward; however, in OG 10, the assumption question stems are a little tricky, in that they don't make the question type very obvious. Does the actual gmat put tricky question stems for difficult questions? Or is the pool of stems resctricted to the OG 12 stems? I wonder why is the change across editions, to trick the test-taker or the stems on OG 10 don't appear on the exam anymore. Thank you. Responding to a pm: First of all, I would not try to deduce the format of the actual GMAT using OG 10 or 12. The questions in those books are very old. GMAT is evolving at a rapid pace and it needs to since the business environment is very dynamic. If you take GMAT today and then again 3 months later, I am sure it will be a different experience. The only thing I would take away from OG is 'which basic concepts were tested some years back?' Finally, the basic concepts are the same. An assumption was a missing premise some years back and it still is a missing premise today. The OG questions will expose me to this basic concept. They test some of these cocnepts more often now and some of them have been filtered out since they are not useful anymore. But mind you, two question formats testing the same concept could vary immensely. GMAC does not put a lot of thought into which questions will go into OG since they are not trying to prepare you for the test. All they are doing is giving you a feel of the kind of questions asked. Therefore, don't over analyze the OGs. Did you do OG 10 first and then OG 12? If yes, then that could explain why you felt that OG10 questions were trickier. It was your first shot at some new questions. By the time you started with OG12, you had done many 'GMAT type' questions and hence found them easy. If that is not the case and you feel that some OG10 questions are trickier, go ahead and put them in the CR section. GMAT Club members will make sure that there is no doubt left in your mind. "Does the actual gmat put tricky question stems for difficult questions? " It could. There is a straight forward way of asking questions and then there is a round about way of saying the same thing (and everything in between). For higher level questions, the question stem could confuse you. But that is a crude way of making a question difficult. The best thing to do in such a case is go step by step, understand the small clauses and put them all together in your own words. Focus on what you are looking for and then go on to the options. But mostly, I think, the questions stems are pretty straight forward. The added difficulty is generally more subtle e.g. two options could feel correct or none of the options could feel correct and you would need to dig deeper to get to the answer etc. 
"Or is the pool of stems resctricted to the OG 12 stems?" There is no reason to think so. Even when the OG 12 questions were a part of actual GMAT, the question stems would not have been restricted to only those in OG 12. "I wonder why is the change across editions, to trick the test-taker or the stems on OG 10 don't appear on the exam anymore." I highly doubt that GMAC was trying to trick the test taker. As I said before, GMAT is evolving at a rapid pace. If you pick 100 questions from its question bank today, they will be different in their format from 100 questions that you pick from its question bank a year down the line. So I wouldn't expect the questions to be similar in the two editions. That said, the difference would be more about the way you phrase it rather than the actual concept. I wouldn't expect to see the OG 10 format on my test but then, I wouldn't expect the OG 12 format either. But whatever comes, conceptually, it would not be far from the questions in either of the editions. I would be well prepared since it is the same question after all. I just have to identify it in its various forms. _________________ Karishma Veritas Prep | GMAT Instructor My Blog Get started with Veritas Prep GMAT On Demand for$199 Veritas Prep Reviews Re: Critical reasoning question stems in OG 10 and OG 12   [#permalink] 27 Sep 2011, 22:01 Similar topics Replies Last post Similar Topics: 1 Question Critical Reasoning 4 01 Aug 2016, 10:27 Critical reasoning question 1 05 Jun 2015, 10:38 2 OG question 2 14 Aug 2013, 22:53 1 OG 12 - level categorization 3 10 Oct 2011, 13:05 Urgent - OG 12 GMAT questions VS Real GMAT questions 3 06 Sep 2011, 04:21 Display posts from previous: Sort by
2016-08-31 15:09:59
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5184229016304016, "perplexity": 8946.242064859674}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-36/segments/1471982290634.12/warc/CC-MAIN-20160823195810-00075-ip-10-153-172-175.ec2.internal.warc.gz"}
https://www.gamedev.net/forums/topic/185329-tough-algorithm/
# Tough Algorithm?

For some reason, I can't think of a way to do this. Say I have some numbers (not necessarily in order): 0 1 2 3 4, and I want to find all the bit sequences that add up to a certain #. For example, if that # is 6, I would get the bit combinations: 11110 01110 10101 00101 .. this means that, for the bit combination 11110, the numbers 0 1 2 3 all add up to 6. For the bit combo 10101: 0 2 4 all add up to 6. Etc. How could I do this? I don't know if I explained this clearly, but thanks for any help you can give. [edited by - Gf11speed on October 13, 2003 12:01:17 AM]

go learn what the bitwise operators (&, |, and ~) do. this is not a difficult problem.

quote: Original post by Anonymous Poster: "go learn what the bitwise operators (&, |, and ~) do. this is not a difficult problem." Do you mean ^ instead of ~? I have never heard of a ~ operator before.

~ is the bitwise NOT operator. It gives you the opposite of a binary number. For example, the bitwise NOT of 100101 would be 011010. [edited by - rayno on October 13, 2003 12:36:44 AM]

I'm not all that advanced now

It isn't as complicated as it sounds. Observe:

unsigned short SomeVar = 5; //0000000000000101 in binary
SomeVar = ~SomeVar; //SomeVar now equals 1111111111111010 in binary because ~ just flips all the bits.
//That is 65530 in decimal numbers, by the way.

Does it have to be fast? And how big is the list of numbers? - Josh

do you want to get the bits of the binary system or of the "special" system you have used within your examples??? Because if you really want to use your "special" system, the binary operators won't help!

Here's a solution in Python.. Unless you know Python, it may not be all that helpful - sorry. It prints the result for the example input you gave like this:

>>> sumCombinations(range(5))[6]
[[1, 2, 3], [0, 1, 2, 3], [2, 4], [0, 2, 4]]

----

def rangeSubsets(number):
    """Generator that yields all subsets for range(number).

    >>> list(rangeSubsets(3))
    [[], [0], [1], [0, 1], [2], [0, 2], [1, 2], [0, 1, 2], [3]]
    """
    for num in range(number**2):
        i = 0
        bits = []
        while num:
            if num & 1:
                bits.append(i)
            num >>= 1
            i += 1
        yield bits

def fillDict(seq, keyFn=lambda x: x, valueFn=lambda x: x):
    """Return a dictionary in which each key maps to a sequence of values.

    Arguments:
    seq     -- input sequence
    keyFn   -- Transforms sequence values into key values. Default = no-op
    valueFn -- Transforms sequence values into real values. Default = no-op
    """
    res = {}
    for x in seq:
        res.setdefault(keyFn(x), []).append(valueFn(x))
    return res

def sumCombinations(numbers):
    return fillDict(rangeSubsets(len(numbers)),
                    keyFn=lambda comb: sum([numbers[x] for x in comb]))

Oops, an unfortunate typo! "for num in range(number**2):" should be "for num in range(2**number):"

#include <stdio.h>

int bitSeqTotal(int bitSeq)
{
    int counter;
    int total = 0;
    for(counter = 0; counter < 8; counter++)
    {
        total += ((1 << counter) & bitSeq) ? counter : 0;
    }
    return total; /* missing in the original post; needed for the check below */
}

void fillBitSeqString(char *buf, int num)
{
    for(int counter = 7; counter >= 0; counter--)
        *(buf++) = ((1 << counter) & num) ? '1' : '0';
    *buf = '\0'; /* terminate the string before printing it */
}

int _tmain()
{
    int check = 0;
    int num = 6;
    char buf[20];
    for(check = 0; check < 256; check++)
    {
        if(bitSeqTotal(check) == num)
        {
            fillBitSeqString(buf, check);
            printf("%s\n", buf);
        }
    }
    return 0;
}
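For reference, the same brute-force idea as the C snippet above in a compact, runnable Python sketch (illustrative only): enumerate every bitmask over the list and keep those whose selected numbers sum to the target.

def subset_masks(numbers, target):
    """Yield (bit string, subset) pairs whose selected numbers sum to target."""
    n = len(numbers)
    for mask in range(1 << n):
        subset = [numbers[i] for i in range(n) if mask & (1 << i)]
        if sum(subset) == target:
            # Reverse so index 0 is the leftmost bit, matching the notation in the question
            yield format(mask, "0{}b".format(n))[::-1], subset

for bits, subset in subset_masks([0, 1, 2, 3, 4], 6):
    print(bits, subset)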
2017-12-13 05:33:01
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.31550776958465576, "perplexity": 3993.4845230900078}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948521292.23/warc/CC-MAIN-20171213045921-20171213065921-00762.warc.gz"}
https://www.math.uci.edu/seminar_past/Inverse-Problems-and-Imaging
## Past Seminars- Inverse Problems and Imaging • Peijun Li Tue Jun 12, 2012 2:00 pm We consider the scattering of a time-harmonic plane wave incident on a two-scale heterogeneous medium, which consists of scatterers that are much smaller than the wavelength and extended scatterers that are comparable to the wavelength. A generalized Foldy-Lax formulation is proposed to capture multiple scattering among point scatterers and... • Mark A. Anastasio Tue May 15, 2012 2:00 pm Photoacoustic tomography (PAT) is an emerging soft-tissue imaging modality that has great potential for a wide range of biomedical imaging applications.  It can be viewed as a hybrid imaging modality in the sense that it utilizes an optical contrast mechanism combined with ultrasonic detection principles, thereby combining the advantages of... • Oleg Imanuvilov Tue May 8, 2012 4:00 pm We prove that if for the isotropic Lamé system the coefficiem $\mu$ is a positive constant then both coefficents can be reconstructed from the partial Cauchy data. • Luca Rondi Tue Mar 13, 2012 2:00 pm Many techniques developed for free-discontinuity problems, arising for example in imaging or in fracture mechanics, may be successfully applied to reconstruction methods for inverse problems whose unknowns may be characterized by discontinuous functions. We show the validity of this approach both from the theoretical point of view, by a... • Hongyu Liu Tue Mar 6, 2012 2:00 pm In this talk, we shall consider the near-invisibility cloaking in acoustic scattering by non-singular transformation media. A general lossy layer is included into our construction. We are especially interested in the cloaking of active/radiating objects. Our results on the one hand show how to cloak active contents more efficiently, and on the... • Andras Vasy Tue Feb 28, 2012 2:00 pm Waves reflecting/refracting/transmitting from singularities of a metric (e.g. sound speed) satisfy the law of reflection. One expects that if the singularities are sufficiently weak, in terms of differentiability (conormal order) then the reflected singularity is weaker than the transmitted one, in the sense that it is more regular. In this joint... • Lauri Oksanen Thu Jan 12, 2012 4:00 pm We consider boundary measurements for the wave equation on a bounded domain $M \subset \R^2$ or on a compact Riemannian surface, and introduce a method to locate a discontinuity in the wave speed. Assuming that the wave speed consist of an inclusion in a known smooth background, the method can determine the distance from any boundary point to the...
2018-04-23 17:15:41
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6108939051628113, "perplexity": 1414.6267633397179}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125946120.31/warc/CC-MAIN-20180423164921-20180423184921-00106.warc.gz"}
https://mathoverflow.net/questions/231053/weight-filtration-on-certain-galois-representations
# Weight filtration on certain Galois representations Let $G$ be the absolute Galois group of a number field $K$. Let $\ell$ be a prime number. There are representations $\mathbb{Z}_\ell(n)$ of $G$ on the group of $\ell$-adic integers given by the formula $g.x=\chi(g)^nx$ where $\chi:G\to \mathbb{Z}_\ell^{\times}$ is the cyclotomic character. Question: Is it true that the ext groups $\mathrm{Ext}^*(\mathbb{Z}_\ell(0),\mathbb{Z}_\ell(n))$ vanish for $n$ negative ? The reason I am asking this is that this would imply that the triangulated subcategory of the derived category of Galois representations spanned by the objects $\mathbb{Z}_\ell(n)$ has a weight filtration, as constructed for instance in Lemma 1.2. of this paper of Marc Levine: https://www.uni-due.de/~bm0032/publ/TateMotives.pdf No, life is not so easy I'm afraid. For instance, the group $\mathrm{Ext}^1_{G_{\mathbf{Q}}}(\mathbf{Z}_\ell, \mathbf{Z}_\ell(n)) = H^1(\mathbf{Q}, \mathbf{Z}_\ell(n))$ has positive rank for all odd integers $n$, whatever the sign; this is easy to see from Tate's global Euler characteristic formula. The point is that if you have two irreducible Galois representations $V_1, V_2$, and you know that $V_1$ and $V_2$ arise in geometry (as the realisations of motives $M_1, M_2$), then there are in general many more extensions of $V_1$ by $V_2$ in the category of Galois reps than there are extensions of $M_1$ by $M_2$ in the category of mixed motives. But all is not lost: there is a beautiful and deep theory that seeks to characterise in terms of local properties at $\ell$ those extensions which come from geometry. You might like to read Bloch and Kato's article in the Grothendieck Festschrift. The upshot is that for a geometric Galois representation $V$, one defines a group $H^1_\mathrm{f}(K, V) \subseteq H^1(K, V)$, which parametrises those extensions of the trivial rep by $V$ which are expected to arise in geometry. It is expected that $H^1_\mathrm{f}(\mathbf{Q}, V)$ is zero if the Hodge--Tate weights of $V$ are $\le -1$, and this is known for the representations $V = \mathbf{Q}_\ell(n)$ by a theorem of Soule.
2019-11-20 11:26:00
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8692805767059326, "perplexity": 120.0439728590056}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496670558.91/warc/CC-MAIN-20191120111249-20191120135249-00230.warc.gz"}
https://dmoj.ca/problem/dpg
## Longest Path

Points: 10 (partial) Time limit: 1.0s Memory limit: 512M

These problems are from the atcoder DP contest, and were transferred onto DMOJ. All problem statements were made by several atcoder users. As there is no access to the test data, all data is randomly generated. If there are issues with the statement or data, please contact Rimuru or Ninjaclasher on slack.

There is a directed graph $G$ with $N$ vertices and $M$ edges. The vertices are numbered $1$ through $N$, and for each $i$, the $i$-th directed edge goes from Vertex $x_i$ to Vertex $y_i$. $G$ does not contain directed cycles. Find the length of the longest directed path in $G$. Here, the length of a directed path is the number of edges in it.

#### Constraints
• All values in input are integers.
• All pairs $(x_i, y_i)$ are distinct.
• $G$ does not contain directed cycles.

#### Input Specification
The first line will contain 2 space separated integers, $N$ and $M$. The next $M$ lines will each contain 2 space separated integers, $x_i$ and $y_i$.

#### Output Specification
Print the length of the longest directed path in $G$.

#### Sample Input 1
4 5
1 2
1 3
3 2
2 4
3 4

#### Sample Output 1
3

#### Explanation For Sample 1
The red directed path in the following figure (not reproduced here) is the longest.

#### Sample Input 2
6 3
2 3
4 5
5 6

#### Sample Output 2
2

#### Explanation For Sample 2
The red directed path in the following figure (not reproduced here) is the longest.

#### Sample Input 3
5 8
5 3
2 3
2 4
5 2
5 1
1 4
4 3
1 3

#### Sample Output 3
3

#### Explanation For Sample 3
The red directed path in the following figure (not reproduced here) is one of the longest.
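One standard approach (an illustrative sketch, not an official solution) is a dynamic program over a topological order of the DAG, where dp[v] holds the length of the longest directed path ending at vertex v.

from collections import deque

def longest_path(n, edges):
    adj = [[] for _ in range(n + 1)]
    indeg = [0] * (n + 1)
    for x, y in edges:
        adj[x].append(y)
        indeg[y] += 1

    # Process vertices in topological order; dp[v] = longest path ending at v
    dp = [0] * (n + 1)
    queue = deque(v for v in range(1, n + 1) if indeg[v] == 0)
    while queue:
        v = queue.popleft()
        for u in adj[v]:
            dp[u] = max(dp[u], dp[v] + 1)
            indeg[u] -= 1
            if indeg[u] == 0:
                queue.append(u)
    return max(dp)

# Sample 1 from above: prints 3
print(longest_path(4, [(1, 2), (1, 3), (3, 2), (2, 4), (3, 4)]))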
2020-06-05 06:01:01
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.23102647066116333, "perplexity": 1744.4283294273378}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590348493151.92/warc/CC-MAIN-20200605045722-20200605075722-00572.warc.gz"}
https://studyadda.com/sample-papers/kvpy-stream-sx-model-paper-29_q31/2207/476044
A charged particle of charge $q_0$ and mass $m_0$ is projected along the y-axis at $t = 0$ from the origin with velocity $v_0$. If a uniform electric field $E_0$ also exists along the x-axis, then the time at which the de Broglie wavelength of the particle becomes half of its initial value is:
A) $\frac{m_0 v_0}{q_0 E_0}$
B) $2\,\frac{m_0 v_0}{q_0 E_0}$
C) $\sqrt{3}\,\frac{m_0 v_0}{q_0 E_0}$
D) $3\,\frac{m_0 v_0}{q_0 E_0}$
Solution: $v_i = v_0$ and $v_f = \sqrt{v_0^2 + \left( \frac{q_0 E_0}{m_0} t \right)^2}$, and $v_f = 2 v_i \;\Rightarrow\; t = \sqrt{3}\,\frac{m_0 v_0}{q_0 E_0}$
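Spelling the solution out a little (an added explanatory step): the de Broglie wavelength is $\lambda = \frac{h}{m_0 v}$, so halving $\lambda$ means doubling the speed. Only the $x$-component of the velocity grows under the field, so $v_f^2 = v_0^2 + \left(\frac{q_0 E_0}{m_0} t\right)^2 = (2 v_0)^2$, giving $\left(\frac{q_0 E_0}{m_0} t\right)^2 = 3 v_0^2$ and hence $t = \sqrt{3}\,\frac{m_0 v_0}{q_0 E_0}$, which is option C.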
2022-01-20 19:48:23
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9447011351585388, "perplexity": 530.5959512774821}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320302622.39/warc/CC-MAIN-20220120190514-20220120220514-00030.warc.gz"}
https://www.gamedev.net/articles/programming/networking-and-multiplayer/shaving-ping-r2319/
# Server Squeeze: How to Get More So the network and the server hardware are in place. There are as many customers as the infrastructure can bear. What else can be done to improve the performance, and possibly the capacity, without adding more hardware? One approach that remains is to get the server software itself to run better. This means working to "adjust" the server binaries to improve their performance. This assumes, of course, that you have access to the source code for the game server programs. If you are a game developer or part of the mod community for your favorite game, you may already have access to the game source. If you are a big enough customer of the game (for example a large gamer caf? owner), or a big sales enabler (by virtue of the large number of servers you host for Company X's latest release), you may be able to negotiate access to the source or convince the vendor to make some performance improvements of their own on your behalf. Assuming you have access to the source, you can take several steps down the optimizing path, including application of processor-agnostic general optimization and optimization targeted to your server hardware's specific processor type, including 64-bit architectures. # General Optimization A good first step toward beefing up your server program is to compile it with an effective optimizing compiler. One choice is the Intel C/C++ Compiler 2), available in both Linux and Windows flavors. If the server is a Windows system, the software developer can use their pre-existing Microsoft DevStudio IDE to manage projects and compiles, with the Intel C and C++ compilers underneath. If the server is a Linux system, the user can choose to use the Eclipse software development CDT environment or good, old fashioned command line editors and "make". How to start? For this exercise, a solid game engine example was selected: Richard Stanway's R1Q2 3). This is a tightened and enhanced version of the Quake 2 engine, which was release to the Open Source community by ID Software back in 2001. Older code? Yes, but many game programmers cut their teeth on Q2 mod development. It's a known space and a good reference point. Rich's R1Q2 was coupled with code from the LOX Q2 mod, an "extreme weapons" mod built by David S. Martin and friends, and enhanced by Geoff Joy and others. The LOX mod is a good example of performance challenging code, as the massive number of events that can be created by a single player with the right weapon selections and feature combinations can bring an otherwise healthy server to its knees. Again for this example, the target server platform is Linux, the default choice among server hosting companies where game server engines have a Linux server offering. The test server used was a vanilla Red Hat Enterprise Linux 3 (Taroon Update 4) server, running on a 3.7 GHz Pentium 4 with 1 Gig of RAM, spinning a standard Serial ATA hard drive. Note that all of the steps being discussed here, including the optimization techniques and compiler features, are applicable to or available on Windows as well. ## Step One: Get the code. Unwrapping the code and doing a straight gcc compile using the ---O2 optimization switch with the provided makefiles generated usable binaries that performed as expected. A pair of client machines running on an isolated net connected without issue and achieved pings from varying from 15 to 35 ms. Since this code has had some level of grooming, compiler warnings were minimal. ## Step Two: Perform reference benchmarking. 
In this case, two client machines were connected to the server, running its standard version of binaries, from a local network connection. Their static pings were recorded, as were their pings when the server was stressed. In this case, the stress test involved having the players from both client test machines launch 4 napalm grenades per second from a fixed location on the servers default level, generating at least 128 in-game explosions per second. Client "freeze" behavior, typical in this server stress condition, was monitored, as was the frequency of "RATEDROP" warnings, issued from the server when a significant drop in server-client data exchange rate is detected. ## Step Three: Get the Intel compiler. The Intel C/C++ compiler package is available for demo download, with academic, non-commercial, and commercial licenses. The software installs on nearly all major Linux distributions, including those not supporting RPM. ## Step Four: Update the makefiles to enable optimization options. In this case, that meant changing "CC=gcc" to "CC=icc". The R1Q2 makefile required no dependency changes or LDFLAGS changes. The LOX makefile required a minor change to the LDFLAGS setting to accommodate the new library home for a couple of key string functions. For round one of our compiler optimization exercise, CFLAGS was changed to add the -02 optimization switch. This is the most commonly recommended option, performing many optimizations for speed without significant regard to the impact on code size, including but not limited to: • Inlining • Forward substitution • Constant propagation • Dead static function, code, and store elimination • Tail recursions • Partial redundancy elimination One thing that became clear during the initial build with the Intel compiler was that the number of warnings increased, going from 4 to 62. Most of the warnings were variable type checking issues. Some of them warranted further investigation. In this case, only minor code changes were required. The newly rebuilt binaries were tested and results gathered. For the next round of optimization, the -02 CFLAGS option was changed to -03. This option, according to the documentation, contains "more aggressive optimizations, such as prefetching, scalar replacement, and loop and memory access transformations". This includes all of the features of the -02 optimization, plus loop unrolling, code replication to eliminate branches, and padding of certain power-of-two arrays to improve cache use. Again, the newly built binaries were tested and results were gathered. For round three, the binaries were built with an added switch: -axN. This switch enables processor-targeted optimization, in this case specifically for Intel Pentium 4 and compatible chips. Once again the new binaries were tested. The final round of compiler switch optimization called for changing the -axN switch to -axP. This option optimizes the output for Intel Pentium 4 processors with Streaming SIMD Extensions 3 (SSE3) instruction support. Once more the resulting binaries were tested. 
# Test Results

The two client machines used for the test included:
• Machine 1 - a 2.9 GHz Pentium 4 with 512 Mbytes of RAM, running an R1Q2 Quake 2 client in OpenGL mode at 1024 x 768 resolution
• Machine 2 - a 1.3 GHz Celeron with 512 Mbytes of RAM, running a stock 3.20 ID client in software rendering mode at 1024 x 768 resolution

Here is a summary of the results (each cell lists Machine 1 / Machine 2):

| Condition | Reference (gcc -O2) | icc -O2 | icc -O3 | icc -O3 -axN | icc -O3 -axP |
|---|---|---|---|---|---|
| Static ping (20 sec avg) | 18 / 35 | 18 / 35 | 16 / 33 | 15 / 32 | 13 / 23 |
| Stress ping (20 sec avg) | 50 / 60 | 50 / 58 | 45 / 55 | 43 / 53 | 42 / 49 |
| Perceived lag freeze | YES (~3 sec) / YES (~6 sec) | NO / YES (~2 sec) | NO / NO | NO / NO | NO / NO |
| Stress test frame drop warnings / sec | 0.125 / 0.25 | 0.125 / 0.25 | 0.1 / 0.17 | 0.1 / 0.17 | 0.08 / 0.12 |
| Post-stress recovery to static ping rate | 10 sec / 14 sec | 8 sec / 11 sec | 7 sec / 10 sec | 6 sec / 9 sec | 3 sec / 6 sec |

# Compile Optimization Conclusions The above results show that there is no significant ping difference between the gcc -O2 and icc -O2 behavior during relatively inactive periods, but perceived lag on the client side is reduced somewhat. Similarly, frame drop warning rates and recovery times after stress events are mildly better with the icc compiler. Results are somewhat more significant when going to a -O3 optimization level and even more dramatic when including the processor targeting options -axN and -axP. The above tests are not a perfect model for behavior in a dynamic environment, where players will be connecting from across the country or across the globe. But they do serve to demonstrate the opportunity for improvement. Clearly, ping is not the only measure of performance. While the improvements made to the test programs did improve ping somewhat, most of the impact was seen in the server's ability to maintain smooth gameplay, or to restore smooth gameplay after periods of intense activity. And this is what it is all about. # Additional Steps to Improve the Binaries The gains demonstrated above may be significant enough for some. If still more performance improvement is required, there are a number of additional steps that can be taken. While these steps are beyond the scope of this article, they are worth mentioning as areas of future exploration, especially for developers of new game offerings. One of these steps is to apply profilers to determine where the hotspots (bottlenecks) are in the game server program. Tools such as Intel's VTune Performance Analyzer product can be employed to locate the sources of program slowness, identify key algorithms that can be improved, and point toward other opportunities to optimize program behavior. Another approach that can work hand in hand with performance analysis is addition of threading techniques to the software. Individual hotspots in the program can be threaded, using available threading libraries and new or modified code, to streamline program operations and to take advantage of the performance gains offered by new dual core processor technologies. # Other Ways to Improve Server Performance There are, of course, fundamental things that a game server administrator can do to ensure that the game being hosted is optimally configured and makes best use of all the work that went into coding and compiling it well. Several key server configuration parameters may be adjustable for a particular game, significantly impacting overall performance. While these vary from game to game, they can include: • Practical player limit (i.e.
don't let the user adjust this number past their purchased limit or sell player count limit packages that exceed the game engine's ability to deliver) • Hard ceiling to connected ping of players (i.e. players with ping greater than a specific limit are not allowed to connect or are disconnected during game play to protect playability for the rest) • Limited bandwidth or disabled uploading of player-specific content, such as "skins" and "sprays", where such features are supported by the game. • Cap on frames per second performance (may be expressed as max number of player updates per second) A last option: you can always change the game. A novel approach to minimizing server performance impact from level-specific content download, adopted by Richard Stanway in his R1Q2 package, involves outsourcing of map / texture / audio downloads to an HTTP server. This means that the map download function can be optionally offloaded to a separate system, perhaps one on a separate subnet to minimize network impact, with transfers running at a higher UDP data rate than the game's existing TCP connections can support. The downside to doing this with an existing, released game is that it will probably require client-side changes as well. This sort of approach would work well applied to the design of a new game server engine, and could readily be applied to a rewrite of an existing engine where the server code has been released to the Open Source community. This type of distributed data transfer between the game server and the client is also another excellent application for threading techniques. In environments supporting several Massive Multiplayer servers, these could even be scaled up to support deployment in clustered environments, with specific components of the cluster performing particular aspects of client updating and content download activity. Doug Helbling is a software engineer for Intel's software development product deployment team. He works to develop Linux product delivery solutions, various game mods and case studies. His latest project includes an optimization study of GarageGames' Torgue engine. 1) Webmin web-based interface for system administration http://www.webmin.com 2) Intel C/C++ compilers and related software products http://www.intel.com 3) R1Q2 Quake 2 release http://www.r1ch.net/stuff/r1q2 Report Article ## User Feedback There are no comments to display. ## Create an account Register a new account • 2 • 0 • 0 • 1 • 1 • 12 • 18 • 10 • 14 • 10
2018-09-26 08:45:06
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.17585299909114838, "perplexity": 3206.250590869971}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-39/segments/1537267164469.99/warc/CC-MAIN-20180926081614-20180926102014-00346.warc.gz"}
https://aperiodical.com/tag/online-show/
# You're reading: Posts Tagged: online show ### 24-hour Maths Magic Show next weekend Next weekend, a group of maths presenters will be getting together some mathematicians, magicians and other cool people to put on a 24-hour long online YouTube mathematical magic $x$-stravaganza. Each half-hour will feature a different special guest sharing a mathematical magic trick of some kind, and across the day there’ll be a total of 48 tricks for you to watch and puzzle over.
2022-10-03 05:52:53
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.19319069385528564, "perplexity": 5468.643826922327}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337398.52/warc/CC-MAIN-20221003035124-20221003065124-00530.warc.gz"}
http://math.stackexchange.com/questions/283913/recovering-hopf-algebra-from-group-like-elements
# Recovering Hopf Algebra from Group-Like Elements Given the natural coalgebra structure on a group algebra $kG$, one can recover the group by taking the set of group-like elements of the coalgebra $kG$. When can you go the other way? In particular, given a Hopf algebra $H$, under what conditions can one recover the structure of $H$ from its group of group-like elements? I'm also curious as to how the answer differs if $H$ is finitely generated versus finite dimensional. Thanks! - relevant – Alexander Gruber Jan 22 at 8:05 Could you elaborate a bit on your question? If you are just given the group of group-like elements, then there is of course the group algebra over any field which is a Hopf algebra having this group of group-likes. Are you also given $H$ with its algebra structure, or do you want to know if there are other Hopf algebras with this set of group-likes, etc.? – Julian Kuelshammer Feb 7 at 13:48
2013-05-21 13:07:35
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7341769337654114, "perplexity": 217.3218276517668}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368700014987/warc/CC-MAIN-20130516102654-00051-ip-10-60-113-184.ec2.internal.warc.gz"}
https://math.stackexchange.com/questions/3702489/composition-of-continuous-and-nonmeasurable-function-is-measurable
# Composition of continuous and nonmeasurable function is measurable. I'm stuck on part (b) of the question below. This is another question from a practice preliminary exam. Thanks in advance! Problem a) Let $$g$$ be a monotone function on $$\mathbb{R}$$. Prove that for every measurable function $$f$$ on a measurable set $$E$$ the composition $$g \circ f$$ is measurable. b) Show that for every continuous not strictly monotone function $$g$$ on $$\mathbb{R}$$ there exists a non-measurable function $$f$$ such that $$g \circ f$$ is measurable. My question relates to part (b). I've solved part (a). I'm not sure if they're trying to say, "every continuous [qualifiers removed] function," or if "not strictly monotone" is trying to state that $$g$$ is monotone but perhaps not strictly so. Just wondering if anyone can solve and/or provide comment or corrections to the second part of the problem above. Edited part (b) Show that for every continuous function $$g$$ on $$\mathbb{R}$$ which is not strictly monotone there exists a nonmeasurable function $$f$$ such that $$g \circ f$$ is measurable. (b) If $$g$$ is not strictly monotone, then there are $$a,b\in \mathbb R,$$ $$a\ne b,$$ such that $$g(a)=g(b).$$ Let $$E\subset\mathbb R$$ be nonmeasurable. Set $$f = a \chi_E +b\chi_{\mathbb R \setminus E}.$$ Then $$f$$ is not measurable. However, on $$E,$$ $$g\circ f = g(a),$$ and on $$\mathbb R\setminus E,$$ $$g\circ f = g(b).$$ Since $$g(a)=g(b),$$ $$g\circ f$$ is constant and hence measurable. It means $$g$$ is monotone but not one-to-one. The condition is needed because otherwise the statement is never true: If $$g$$ is one-to-one then $$g^{-1}$$ exists and it's also monotone, so by part (a), $$f$$ would be measurable because $$f = g^{-1}\circ g \circ f$$
2020-09-28 12:03:11
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 32, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.973441481590271, "perplexity": 131.78500122751996}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600401600771.78/warc/CC-MAIN-20200928104328-20200928134328-00344.warc.gz"}
http://math.stackexchange.com/questions/143937/what-are-the-basic-generating-functions
# What are the basic generating functions? What are the basic generating functions (if there are any)? And what is the generating function of: $$1 + 2x^2 + 3x^4 + 4x^6 + \cdots$$ Thanks. - What do you mean by "basic"? – Qiaochu Yuan May 11 '12 at 16:39 That I can use without proving. – Rami May 11 '12 at 16:40 That depends on who you're proving things to. – Qiaochu Yuan May 11 '12 at 16:45 The generating function of your sequence can be obtained by observing that $\frac{1}{1-x}=1+x+x^2+x^3+\cdots$, so differentiating in $x$, then multiplying by $x$ and adding 1 will give you the generating function – Alex R. May 11 '12 at 16:48 Use generating function instead of generation function. – Did May 12 '12 at 7:34 I guess you are looking for a table of common generating functions? You can look for Z-transform tables, e.g. see here, and change $z^{-1}$ to $x$ (and restrict to right-sided sequences). And what is the generation function of... Actually what you write is already the generating function; you are looking for a compact form. Consider first $G_1(x)= 1x + 2x^2 + 3 x^3+\cdots$ (sequence $s_1 = (0,1,2,3,\ldots)$). This is just $G_1(x)=\frac{x}{(1-x)^2}$ (it can be obtained directly by algebraic manipulation, see Michael's answer, or by looking at entry 6 in that table). Then the sequence $s_2 = (1,2,3,\ldots)$ has $G_2(x)=1+2 x + 3 x^2 + \cdots =\frac{G_1(x)}{x} = \frac{1}{(1-x)^2}$ (and this is the "time shifting property": shifting the sequence $k$ positions to the left divides the transform by $x^k$). Instead our sequence is $s_3 = (1,0,2,0,3,\ldots)$ with $G_3(x)= 1+2x^2+3x^4+\cdots$, but then $G_3(x)=G_2(x^2)$ (and this is the upsampling property) and you're done. - The most basic generating function comes from the geometric series: $$1 + x + x^2 + x^3 + \cdots = \frac{1}{1-x}.$$ If you differentiate this term by term, you get a new generating function: $$1 + 2x + 3x^2 + \cdots = \frac{d}{dx} \frac{1}{1-x}.$$ You can also get reindexed generating functions by multiplying by powers of $x$. For example, multiplying both sides of the geometric series by $x^3$ gives $$x^3 + x^4 + x^5 + x^6 + \cdots = \frac{x^3}{1-x}.$$ You can also get new generating functions by substitution. For example, using $x \mapsto x^2$ in the geometric series gives $$1 + x^2 + x^4 + x^6 + \cdots= \frac{1}{1-x^2}.$$ If you use some of the three ideas above, you'll be able to figure out your problem. There are lots more techniques for dealing with generating functions; see if you can find some of them on your own, or consult generatingfunctionology to study the topic in depth. - +1, for giving relevant hints. – Did May 12 '12 at 7:33 Thanks, that was very helpful. – Rami May 12 '12 at 19:45 The most basic types, which arise often, are variants of: $$(1 - z)^{-n} = \sum_{k \ge 0} (-1)^k \binom{-n}{k} z^k = \sum_{k \ge 0} \binom{k + n - 1}{n - 1} z^k$$ -
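As a quick sanity check of the compact form discussed in the first answer (a side illustration assuming sympy is available; it is not part of the original thread), expanding $\frac{1}{(1-x^2)^2}$ as a power series reproduces the coefficients $1, 2, 3, 4, \ldots$ on the even powers of $x$:

```python
# A quick numerical check of the closed form discussed above, using sympy
# (assumes sympy is installed; not part of the original thread).
import sympy as sp

x = sp.symbols('x')

# Closed form obtained via G_3(x) = G_2(x^2) = 1/(1 - x^2)^2
G3 = 1 / (1 - x**2)**2

# Expanding as a power series should reproduce 1 + 2x^2 + 3x^4 + 4x^6 + ...
print(sp.series(G3, x, 0, 8))
# -> 1 + 2*x**2 + 3*x**4 + 4*x**6 + O(x**8)
```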
2015-11-25 18:25:17
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8819735050201416, "perplexity": 520.0373088507246}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-48/segments/1448398445291.19/warc/CC-MAIN-20151124205405-00135-ip-10-71-132-137.ec2.internal.warc.gz"}
https://microsoftforms.uservoice.com/forums/386451-welcome-to-microsoft-forms-suggestion-box/category/182665/filters/top?page=5
# Welcome to Microsoft Forms Suggestion Box! We love hearing from our customers. We have partnered with UserVoice, a third-party service, to hear your ideas and suggestions. We may also merge and rename suggestions for clarity.
1. Students should have the option to upload a picture of their work; the teacher should be able to see the image, score it, and leave feedback (picture attachments).
2. Would love to know if anyone else has issues uploading images to Forms on a Mac; images break the interwebs right now.
3. Adding an image basically breaks everything; have images actually work on Forms.
4. MS Forms: adding an image to a question adds it at full resolution. Adding an image (via a Bing search) when using Forms from an iPad sometimes displays the image at full resolution, both in the Forms canvas and when previewing the form.
5. Allow users to edit photos added in the title of the form. I have attached a photo to the title of my form, but I am not able to do anything with it; I'd like the option to change the size of the title in order to accommodate the size of the image.
6. Freeze panes: in larger tables with different options in different columns, it would be great if you could freeze the column titles so you know which selection corresponds to which option as you scroll down the page.
7. Allow the creator of the form to insert a picture along with a question, e.g. just to explicitly clarify something in the actual question, such as a sub-title explaining how to get the necessary information for the given question.
8. I like how the image search for the title works, popping up a Bing search; however, after adding an image the survey crashes every time, even when I re-open it.
9. Please make it possible for respondents to add a photo when completing the form. Thank you very much.
10. Image alignment settings: there should be an option to align images (left, centre, right) for uploading logos etc.
11. Thank you for the complete data you submitted as requested; the application will be processed shortly.
12. Work.
13. Allow images in answer options, and allow inline mathematics for instructions and answer options (not only in separate boxes), for example "Find $\frac{1}{3}$ of 15". Of course you can write it now, but the fraction does not appear (only the code).
14. It would be excellent if the person completing or filling in the form could attach images or photos.
15. I want an image upload option on the respondent side, and the image in the form should be visible when printing the response.
16. Allow changing the size of images added to forms, especially the company logo in the header.
17. Unable to upload multiple images on Forms for iPhone; why does Forms not allow multiple upload on iPhone?
18. If you keep asking the same question, only about a different working day each time, keep the same colours in the pie-chart display of the results.
19. Please create an option to use a different image for each translated version of the form. I have text on an image in one of the questions, so it would be great if I could change the image in each translated version; otherwise the Spanish version will have to show the same image with English text.
2020-07-11 18:32:55
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.1810532659292221, "perplexity": 4740.366382137399}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655934052.75/warc/CC-MAIN-20200711161442-20200711191442-00589.warc.gz"}
https://askbot.fedoraproject.org/en/question/83041/unknown-display-wrong-max-resolution-for-secondary-monitor/
# Unknown Display - Wrong max resolution for secondary monitor Hi guys! I now have an RCA RC32D2 TV and it is 32". The thing is that it should have an aspect ratio of 16:9. Here's the output of xrandr: Screen 0: minimum 8 x 8, current 2390 x 768, maximum 32767 x 32767 LVDS1 connected primary 1366x768+0+0 (normal left inverted right x axis y axis) 344mm x 194mm 1366x768 60.02*+ 1280x720 60.00 1024x768 60.00 1024x576 60.00 960x540 60.00 800x600 60.32 56.25 864x486 60.00 640x480 59.94 720x405 60.00 680x384 60.00 640x360 60.00 DP1 disconnected (normal left inverted right x axis y axis) HDMI1 disconnected (normal left inverted right x axis y axis) VGA1 connected 1024x768+1366+0 (normal left inverted right x axis y axis) 0mm x 0mm 1024x768 60.00* 800x600 60.32 56.25 848x480 60.00 640x480 59.94 VIRTUAL1 disconnected (normal left inverted right x axis y axis) As you can see, I'm connecting my TV with a VGA cable and it shows up as VGA1, so I did the following, according to the xrandr guide, to add the resolution 1366x768: cvt 1366 768 60 xrandr --newmode "1368x768_60.00" 85.25 1368 1440 1576 1784 768 771 781 798 -hsync +vsync And well, it doesn't seem to work, or the resolution is still too low, because I still don't see the complete screen on my second monitor. For example, I'm using Firefox right now but I don't see the complete browser; the right part seems to be cut off. I tried adding the following 16:9 resolution, 1600x900, but without luck. Here's what I did: cvt 1600 900 60 xrandr --newmode "1600x900_60.00" 118.25 1600 1696 1856 2112 900 903 908 934 -hsync +vsync My graphics adapter, according to lspci: 00:02.0 VGA compatible controller: Intel Corporation 3rd Gen Core processor Graphics Controller (rev 09)
2021-05-05 23:06:31
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.27414068579673767, "perplexity": 8095.057193550033}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243988696.23/warc/CC-MAIN-20210505203909-20210505233909-00179.warc.gz"}
https://hmolpedia.com/page/Simon_Stevin
# Simon Stevin In existographies, Simon Stevin (407-335 BE) (1548-1620 ACM) (IQ:175|#315) (Gottlieb 1000:836) (Eells 100:50) (GME:#) (CR:11) was a Dutch mathematician, engineer, and polymath, noted for [] ## Overview ### Decimals In 1585, Stevin, in his The Art of Tenths (De Thiende), aka La Disme (French) or Decimal Arithmetic, introduced the "decimal system" of mathematics, wherein he showed how to perform all computations whatsoever by whole numbers without fractions, by the principles of common arithmetic, namely: addition, subtraction, multiplication, and division. Stevin marked the units place with a zero inside a circle, ⓪, meaning ten to the power of zero: $10^0$; the next sign, ①, meaning ten to the power of negative one: $10^{-1}$; ②, meaning ten to the power of negative two: $10^{-2}$; ③, equal to $10^{-3}$; ④, equal to $10^{-4}$; and so on. Hence, e.g., a number such as 0.04, i.e. $4 \times 10^{-2}$, would be written by Stevin as 0⓪0①4②. The following is one example of the Stevin decimal notation: 184⓪5①4②2③9④0⑤ (Stevin) = 184.54290 (modern). Stevin, in short, printed little circles around the exponents of the different powers of one-tenth. ### Forces In 1586, Stevin, in his Statics and Hydrostatics, gave the first complete statement of the impossibility of perpetual motion, and also derived the notion of the vectorial decomposition of forces, by determining the force that must be exerted along the line of greatest slope to support a given weight on an inclined plane. ## Sways ### Students Stevin was the teacher of Isaac Beeckman. ## Quotes ### Quotes | On The following are related quotes: "Historians consider Stevin's Elements of Equilibrium (1585) to have contributed significantly to the establishment of classical mechanics, but at that time the public was not very much aware of this treatise because he neglected the obligatory Latin language and wrote the book in Dutch. Stevin came to the idea, independent of da Vinci, of the absolute impossibility of perpetual motion. Moreover, he not only suggested a theory but also applied it to practical problems of statics." Georgij Alekseev (1978), Energy and Entropy (pg. 63) [1] ### Quotes | By The following are quotes by Stevin: "Disme [decimals] is a kind of arithmeticke [arithmetic], invented by the tenth progression, consisting in characters of cyphers; whereby a certain number is described, and by which also all accounts which happen to humane affayres [affairs], are dispatched by whole numbers, without fractions or broken numbers." — Simon Stevin (c.1590), Publication ## End matter ### References 1. Alekseev, Georgij. (1978). Energy and Entropy (pg. 63). Mir.
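As a small illustration of the circled-index notation described under Decimals above (a hypothetical Python sketch, not part of the original article; the function name is invented and ordinary parentheses stand in for Stevin's circled digits):

```python
# Illustrative sketch (not from the source): format a decimal number in the
# spirit of Stevin's notation, writing each digit after the decimal point
# followed by its power-of-ten index in parentheses (Stevin printed the
# index inside a small circle).
def stevin_notation(value: str) -> str:
    whole, _, frac = value.partition(".")
    parts = [f"{whole}(0)"]
    parts += [f"{digit}({i})" for i, digit in enumerate(frac, start=1)]
    return " ".join(parts)

print(stevin_notation("184.54290"))   # 184(0) 5(1) 4(2) 2(3) 9(4) 0(5)
```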
2021-10-18 05:09:06
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 1, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6322895884513855, "perplexity": 4227.133010441435}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585196.73/warc/CC-MAIN-20211018031901-20211018061901-00553.warc.gz"}
http://mathhelpforum.com/calculus/221203-confusing-result-system-odes-print.html
# Confusing Result From System of ODEs... • August 14th 2013, 02:51 PM Drecks Confusing Result From System of ODEs... Hi there. I have the following system of ODEs to solve: $\frac{dx}{dt} = \alpha x^{\frac{3}{2}}$ $\frac{dy}{dt} = \sqrt{\frac{\beta}{x^3}}$ Subject to $x(0) = x_0, y(0) = 0$. As you can see, the first equation is a first order separable equation which is simple to solve. The second one has $x$ as a variable... so it was my thought to solve the first equation, then substitute the solution into the 2nd equation and then solve that by separation of variables. So, here's my solution to the first: $x(t) = \frac{4}{(\alpha t +C)^2}$ Applying the initial condition, I find $C = \pm \frac{2}{\sqrt{x_0}}$. So I substituted this solution for $x(t)$ into the second differential equation, and set about solving that by separation of variables too. Here's the solution I got: $y(t) =\frac{\sqrt{\beta}}{32\alpha}\left(\alpha t + C\right)^4 +D$ Here, $C$ is the integration constant which resulted from the integration of the first differential equation, and $D$ is the integration constant from the integration of the second differential equation. Applying the initial condition for $y$ and substituting in for $C$ as found before, we find: $D = - \frac{\sqrt{\beta}}{2\alpha x_0^2}$ So, putting that all together, the solved system is: $x(t) = \frac{4}{(\alpha t \pm \frac{2}{\sqrt{x_0}})^2}$ $y(t) =\frac{\sqrt{\beta}}{32\alpha}\left(\alpha t \pm \frac{2}{\sqrt{x_0}}\right)^4- \frac{\sqrt{\beta}}{2\alpha x_0^2}$ Now, I compared these analytical results against a numerical integration of the same differential equations, and I'm getting confused by the results. Here are the values of the constants and the initial conditions that I used (which are specific to a physical problem I'm trying to solve): $\alpha = 3.1678\times 10^{-10}$ $\beta = 3.9860 \times 10^5$ $x_0 = 30000$ $y_0 = 0$ And the integration time interval is [0,1000000]. I integrated with MATLAB's ode45, which is a Runge-Kutta based integrator, with a maximum step size of 100, and relative and absolute tolerances of $10^{-6}$. I've attached the results... 1.png and 2.png show the results of numerical/analytical comparisons for x(t) and y(t) respectively, when I choose $C = + \frac{2}{\sqrt{x_0}}$, and 3.png and 4.png show the same thing, but for $C = -\frac{2}{\sqrt{x_0}}$. Now, it was my expectation that ONE of the choices of C would produce the correct result, and one would produce the wrong results. However... I have a mix. Because choosing C to be positive gives agreement for y(t), but disagreement for x(t), and vice versa for choosing C to be negative. How can this be?... surely the constant C must be the same value for both equations, since they describe a single system... how can it be that the first equation requires a negative C, and the second requires a positive C? Have I missed some step in my integration process that should fix this? Finally, I can't explain why the numerical/analytical match for x(t) is PERFECT (i.e. when correct C is chosen), whereas the match for y(t) diverges with time. Even setting lower integration tolerances and small time steps does not eliminate the error at all; it always diverges at the same rate regardless, leading me to believe that it is not a numerical error, but something inherent in the equations. Could anybody help shed light on these two issues? • August 14th 2013, 04:33 PM HallsofIvy Re: Confusing Result From System of ODEs... Quote: Originally Posted by Drecks Hi there.
I have the following system of ODEs to solve: $\frac{dx}{dt} = \alpha x^{\frac{3}{2}}$ $\frac{dy}{dt} = \sqrt{\frac{\beta}{x^3}}$ Subject to $x(0) = x_0, y(0) = 0$. As you can see, the first equation is a first order separable equation which is simple to solve. The second one has $x$ as a variable... so it was my thought to solve the first equation, then substitute the solution into the 2nd equation and then solve that by separation of variables. So, here's my solution to the first: $x(t) = \frac{4}{(\alpha t +C)^2}$ Applying the initial condition, I find $C = \pm \frac{2}{\sqrt{x_0}}$. So I substituted this solution for $x(t)$ into the second differential equation, and set about solving that by separation of variables too. Here's the solution I got: $y(t) =\frac{\sqrt{\beta}}{32\alpha}\left(\alpha t + C\right)^4 +D$ HOW do you get this? Obviously, putting those two different solutions for x into the second equation will give you two different equations for y. Why do you not get two different solutions for y? Quote: Here, $C$ is the integration constant which resulted from the integration of the first differential equation, and $D$ is the integration constant from the integration of the second differential equation. Applying the initial condition for $y$ and substituting in for $C$ as found before, we find: $D = - \frac{\sqrt{\beta}}{2\alpha x_0^2}$ So, putting that all together, the solved system is: $x(t) = \frac{4}{(\alpha t \pm \frac{2}{\sqrt{x_0}})^2}$ $y(t) =\frac{\sqrt{\beta}}{32\alpha}\left(\alpha t \pm \frac{2}{\sqrt{x_0}}\right)^4- \frac{\sqrt{\beta}}{2\alpha x_0^2}$ Now, I compared these analytical results against a numerical integration of the same differential equations, and I'm getting confused by the results. Here are the values of the constants and the initial conditions that I used (which are specific to a physical problem I'm trying to solve): $\alpha = 3.1678\times 10^{-10}$ $\beta = 3.9860 \times 10^5$ $x_0 = 30000$ $y_0 = 0$ And the integration time interval is [0,1000000]. I integrated with MATLAB's ode45, which is a Runge-Kutta based integrator, with a maximum step size of 100, and relative and absolute tolerances of $10^{-6}$. I've attached the results... 1.png and 2.png show the results of numerical/analytical comparisons for x(t) and y(t) respectively, when I choose $C = + \frac{2}{\sqrt{x_0}}$, and 3.png and 4.png show the same thing, but for $C = -\frac{2}{\sqrt{x_0}}$. Now, it was my expectation that ONE of the choices of C would produce the correct result, and one would produce the wrong results. However... I have a mix. Because choosing C to be positive gives agreement for y(t), but disagreement for x(t), and vice versa for choosing C to be negative. How can this be?... surely the constant C must be the same value for both equations, since they describe a single system... how can it be that the first equation requires a negative C, and the second requires a positive C? Have I missed some step in my integration process that should fix this? Finally, I can't explain why the numerical/analytical match for x(t) is PERFECT (i.e. when correct C is chosen), whereas the match for y(t) diverges with time. Even setting lower integration tolerances and small time steps does not eliminate the error at all; it always diverges at the same rate regardless, leading me to believe that it is not a numerical error, but something inherent in the equations. Could anybody help shed light on these two issues? • August 15th 2013, 02:09 AM Drecks Re: Confusing Result From System of ODEs...
Quote: Originally Posted by HallsofIvy HOW do you get this? Obviously, putting those two different solutions for x into the second equation will give you two different equations for y. Why do you not get two different solutions for y? There are only two solutions for $x(t)$ due to the fact that $C = \pm 2/\sqrt{x_0}$. Hence, yes, I do end up with two solutions for $y(t)$ as well, but they only differ due to the $\pm$ value of C as well. I got to my solution for $y(t)$ by substituting my solution for $x(t)$ into the differential equation for $y(t)$ and then integrating it by separation of variables and substitution. I didn't perform this TWICE with both values of $C$... instead I performed it once with $C$ left as an arbitrary constant, and then considered the two values of $C$ in my solution later on, as follows: $\frac{dy}{dt} = \sqrt{\frac{\beta}{x(t)^3}}$ $\frac{dy}{dt} = \sqrt{\frac{\beta}{\left(\frac{4}{\left(\alpha t +C\right)^2}\right)^3}}$ $\frac{dy}{dt} = \sqrt{\frac{\beta\left(\alpha t + C\right)^6}{4^3}}$ $\frac{dy}{dt} =\frac{\sqrt{\beta}}{8}\left(\alpha t + C\right)^3$ $dy =\frac{\sqrt{\beta}}{8}\left(\alpha t + C\right)^3 dt$ Make the substitution $u = \alpha t + C$, so $\frac{du}{dt} = \alpha$, so $dt = \frac{du}{\alpha}$. Hence $dy = \frac{\sqrt{\beta}}{8} u^3 \frac{du}{\alpha}$ $\int dy = \frac{\sqrt{\beta}}{8\alpha} \int u^3 du$ $y = \frac{\sqrt{\beta}}{32\alpha}u^4 + D$ $y = \frac{\sqrt{\beta}}{32\alpha} \left(\alpha t +C\right)^4 + D$ As far as I can tell, nothing has happened to $C$ during that integration process that would change how it appears in the y(t) solution, compared to how it appears in the x(t) solution. I would expect that choosing $C=+$ would produce either BOTH correct results in both equations, or BOTH wrong results in both equations, and that $C=-$ would do the opposite. I did not expect that $C=+$ would produce the correct result in one equation, and $C=-$ would produce the correct result in the other. • August 15th 2013, 04:09 AM BobP Re: Confusing Result From System of ODEs... I think that you have simply got tangled up with the negative signs. Separating the variables and integrating the first equation gets you $-2x^{-1/2}=\alpha t + C,$ so $C$ is going to be negative. No plus or minus. Then later, when you substitute for $x^{-3/2},$ the substitution will be $x^{-3/2}=-\frac{1}{8}(\alpha t + C)^{3}.$
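For anyone who wants to replay the comparison without MATLAB, below is a small Python sketch (an illustration, not code from the thread) that integrates the system with scipy's solve_ivp and evaluates the closed forms with $C = -2/\sqrt{x_0}$, handling the sign of $x^{-3/2}$ as BobP describes, which gives $y(t) = \frac{\sqrt{\beta}}{32\alpha}\left(C^4 - (\alpha t + C)^4\right)$; with that convention both $x(t)$ and $y(t)$ should match the numerical solution to within the solver tolerance.

```python
# A minimal sketch of the same numerical-vs-analytical comparison in Python
# (the thread used MATLAB's ode45; scipy's RK45 plays the same role here).
# The closed forms below take C = -2/sqrt(x0), following BobP's sign remark,
# so y(t) = sqrt(beta)/(32*alpha) * (C**4 - (alpha*t + C)**4).
import numpy as np
from scipy.integrate import solve_ivp

alpha, beta, x0 = 3.1678e-10, 3.9860e5, 30000.0

def rhs(t, s):
    x, y = s
    return [alpha * x**1.5, np.sqrt(beta / x**3)]

t_eval = np.linspace(0.0, 1.0e6, 2001)
sol = solve_ivp(rhs, (0.0, 1.0e6), [x0, 0.0], t_eval=t_eval,
                rtol=1e-10, atol=1e-12)

C = -2.0 / np.sqrt(x0)
x_exact = 4.0 / (alpha * sol.t + C)**2
y_exact = np.sqrt(beta) / (32.0 * alpha) * (C**4 - (alpha * sol.t + C)**4)

print(np.max(np.abs(sol.y[0] - x_exact) / x_exact))  # relative error in x
print(np.max(np.abs(sol.y[1] - y_exact)))            # absolute error in y
```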
2016-08-24 19:07:05
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 75, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9188831448554993, "perplexity": 332.3924409596582}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-36/segments/1471982292607.17/warc/CC-MAIN-20160823195812-00296-ip-10-153-172-175.ec2.internal.warc.gz"}
https://math.stackexchange.com/questions/1785294/two-definitions-of-the-first-chern-class
# Two definitions of the first Chern class There are two definitions of the first Chern class that I don't know how to relate; hints and references are both welcome. So, first approach: say that I have a complex vector bundle $E\to M$; I can pass to the frame bundle $F(E)\to M$, where I have a $U(n)$-action, for some $n$. Then I consider the Chern-Weil map $$S({\frak u}(n)^*)^{U(n)}\to H^*_{U(n)}(F(E))\simeq H^*(M)$$ and pick the image of the invariant polynomial $tr$, the trace (up to some constant). Second approach: I pick a connection $D:\Gamma(E)\to \Gamma(E\otimes T^*M)$, with $\Gamma(\cdot)$ representing the space of sections, and consider its curvature. Work locally on some $U$ and pick a section $s$ which generates the bundle: we get $D(s)=\theta(s)\cdot s$, with $\theta(s)$ a $1$-form, and apparently the first Chern class is proportional to $d\theta(s)$. I am a bit confused: the very idea of a connection seems to be absent from the first approach, so either there's a natural choice of connection, or I get the same result no matter which one I pick. Moreover, it would seem to me that the choice of generator $s$ also matters. Honestly, I don't have much of a clue about where to start tackling the problem; the two things seem quite unrelated. • What is your definition of the Chern-Weil map in the first approach? The approach I know also involves a connection (although the classes do end up being independent of this choice). – Qiaochu Yuan May 14 '16 at 17:58 • @QiaochuYuan The induced map on (equivariant) cohomology from f: F(E) \to {pt}. You then use H^*_U(n)(pt.) = the polynomials on the Lie algebra of U(n) invariant under the adjoint action, and get the Chern-Weil map – nelv May 14 '16 at 20:42 • Sure. Then yes, the point is that the Chern classes end up being independent of the choice of connection in the second approach. – Qiaochu Yuan May 14 '16 at 21:32 • That's good to know. But how does one show, from the first approach, that the image of the trace is the curvature of a connection? By the way, does this mean that in this case all the connections have the same curvature? Nice! As I said, I'm a bit confused because the very idea of connection seems to be absent from the first approach – nelv May 15 '16 at 6:50 • @nelv: It does not mean that all the connections have the same curvature, but rather that the cohomology class of the curvature is the same for all connections. – Jesse Madnick May 16 '16 at 11:15
2020-02-29 07:42:49
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 1, "x-ck12": 0, "texerror": 0, "math_score": 0.8547971844673157, "perplexity": 230.49975034258281}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875148671.99/warc/CC-MAIN-20200229053151-20200229083151-00372.warc.gz"}
https://tex.stackexchange.com/questions/353107/practical-difference-between-define-a-macro-that-expands-to-charxxxx-and-using
# Practical difference between defining a macro that expands to \char"XXXX and using \DeclareTextSymbol This question is limited to using fontspec + LuaTeX with the default TU encoding. The files are always in UTF-8. Is there any practical difference between, say, \DeclareTextSymbol{\textparagraph}{\UnicodeEncodingName}{"00B6} (the above is from tuenc.def) and simply \def\textparagraph{\char"00B6} (or perhaps a robust version)? The first form actually defines \textparagraph to expand to \TU-cmd \textparagraph \TU\textparagraph (three tokens); the first is a macro that checks the current encoding and, in case it's not TU, does the necessary changes in order to use the version of \textparagraph for the current encoding (or the default). The second token is used for warning or error messages; the third one is the most important one, as it expands to \char"B6 The shorter version wouldn't be the same, because if you happen to use \textparagraph in a context where a different font encoding is used (for whatever reason), you might end up with something unexpected. • Thanks. So, if say I only use TU, then the output would be exactly identical? – Yan Zhou Feb 10 '17 at 17:01 • @YanZhou Yes, but there's no point in "simplifying" the definition. Anyway, it should be \protected\def\textparagraph{\symbol{"B6}} – egreg Feb 10 '17 at 17:03 • Sure, in general I would make it protected. My situation is that sometimes I need to use a private code point, say a variant of floral heart, which does not even exist in other fonts in general, and certainly not at the same code point. I was wondering if there's any disadvantage to simply using \char in the middle of text on the rare occasions that they are needed. – Yan Zhou Feb 10 '17 at 17:17 • @YanZhou It's better to use \symbol anyway. For instance, \char"B6 x would have no space. – egreg Feb 10 '17 at 17:33
2019-06-17 21:19:20
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7824883460998535, "perplexity": 1252.1896570011882}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627998580.10/warc/CC-MAIN-20190617203228-20190617225228-00111.warc.gz"}
https://lurlingfacilitairediensten.nl/3e9okn8/4yc42n.php?lh=matplotlib-imshow-polar
# Matplotlib Imshow Polar meshgrid (*xi, **kwargs) [source] ¶ Return coordinate matrices from coordinate vectors. AxesImage instance. Matplotlib IRMA - 25/01/2012 29 Available projections can be listed accessing online help (65 in Matlab) import mpl_toolkits. In this tutorial, I'll walk you through how one can scale and rotate a contour based on OpenCV Python API. import numpy as np import matplotlib. IMSHOW Imshow is the go-to image plotting function in matplotlib. Matplotlib comes with a set of default settings that allow customizing all kinds of properties. annotate matplotlib. 5 and up, matplotlib offers a range of pre-configured plotting styles. A popular question is how to get live-updating graphs in Python and Matplotlib. 6 Pie and Polar charts 8. 02) 绘图函数简介 绘图函数简介 程序中调用subplot()创建子图时通过设 polar参数 为True,创建一个极坐标子图。然后调用plot()在极坐 标子图中绘图。. contourf for filled contour plots, and plt. We can display images with matplotlib. We then use this look-up table to look-up the mapping and use the mapping to transform the image data. Resources: 1 - 2; Gamma Correction. A new untitled notebook with the. Thanks to the simplicity of the cartopy interface, in many cases the hardest part of producing such visualisations is getting hold of the data in the first. 43,515 developers are working on 4,454 open source repos using CodeTriage. MatPlotLib plots are organized into figures, subplots and axes. use(my_plot_style) before creating your plot. For some reason, I cannot get matplotlib to open figures. pyplot as plt from skimage import data from skimage. All of the rest are in the full example links I've already mentioned. python 出现'matplotlib' has no attribute 'imshow'错误, AttributeError: 'Polar AxesSubplot' object has no attribute 'set_axis_bgcolor'. 以下の図のx軸、y軸を消してみる コードと軸を削除したい図 %matplotlib inline from pylab import * import matplotlib. matplotlib Mailing Lists Brought to you by: cjgohlke , dsdale , efiring , heeres , and 8 others. savefig("MyFirstPlot. subplot mnp where m refers to the row, n refers to the column, and p specifies the pane. Performing a reverse mapping¶. Matplotlib comes with a set of default settings that allow customizing all kinds of properties. It shows a cartesian imshow plot within a circle, but it's not a polar plot. ive tried matplotlib. It can be used to remove structures of an image of a certain scale, and the regularization parameter $$\lambda$$ can be used for scale selection. matplotlib所提供的图形非常丰富,除了基本的柱状图、饼图、散点图等,还提供了极坐标图、3D图等高级图形,并且你可以自由选择和组合。 每个图形函数下都有许多参数可设置,matplotlib提供的不仅仅是图形,还有更为精细的图像表达,你可以通过细节的设置来. pyplot plttheta np. Applications of scientific computing in number theory, linear regression, dynamical systems, initial value problems, random number generation and optimization. matplotlib所提供的图形非常丰富,除了基本的柱状图、饼图、散点图等,还提供了极坐标图、3D图等高级图形,并且你可以自由选择和组合。 每个图形函数下都有许多参数可设置,matplotlib提供的不仅仅是图形,还有更为精细的图像表达,你可以通过细节的设置来. Matplotlib can be used in Python scripts, the Python and IPython shell, the jupyter notebook, web application servers, and four graphical user interface toolkits. This gist is written so that the important bits for each section are quickly and easily visible. We then use this look-up table to look-up the mapping and use the mapping to transform the image data. Matplotlib Autrans - 28/09/2011 29 Available projections can be listed accessing online help (65 in Matlab) import mpl_toolkits. Text Processing is one of the most common task in many ML applications. 
Matplotlib Plotting Tutorials : 023 : Polar Plot and Tweaks by Fluidic 041 : Read, Process, and Manipulate images with imread and imshow by Fluidic Colours. GOES-16 and GOES-17 0. imshowを使います。 plt. 제가 만든건 아니구요 Nicolas P. pyplot モジュールの関数を使えば可視化したいデータを渡すだけで一行でグラフが描画できます。. Before going to depth images, let's first understand some basic concepts in multiview geometry. Basic plotting Anscombe dataset. A new untitled notebook with the. Non-affine transformations in matplotlib are defined using Python functions, so they are truly arbitrary. imshowを使います。 plt. If X is 3-dimensional, imshow will display a color image. Visualization with Matplotlib. Matplotlib est probablement l'un des packages Python les plus utilisés pour la représentation de graphiques en 2D. It performs "natural neighbor interpolation" of irregularly spaced data a regular grid, which you can then plot with contour, imshow or pcolor. import math import numpy as np import matplotlib. py, which is not the most recent version. MATLAB ® creates this plot as a flat surface in the x-y plane. hist が用意されてます。. matplotlib can be used in python scripts, the python and ipython shell, web application servers, and six graphical user interface toolkits. Using polar coordinates We will first fill a 2D square array with values and then call pyplot. txt) or read online for free. Then the matplotlib savefig function will help you. Thanks to a very nice plotting library called matplotlib plotting of (scientific) data from within python has become almost trivial. scatter(x_list,y_list) 以下のコードで各軸を削除することができる コード %matplotlib inline from pylab import. Its interactive mode supports multiple windowing toolkits (currently: GTK, Tkinter, Qt, and wxWindows) as well as multiple noninteractive backends (PDF, postscript, SVG, antigrain geometry, and Cairo). datetime objects nc-time-axis v1. Using polar coordinates We will first fill a 2D square array with values and then call pyplot. We use cookies for various purposes including analytics. show¶ matplotlib. Matplotlib Tutorials : 044. So i have this very easy plot lib graph. imread() to read an image. Polar and Log-Polar import numpy as np import matplotlib. """ import numpy as np import matplotlib. Basic Concepts¶. subplots # the histogram of the data # normed, False, each bin value is the number in that bin; True, form a probability density, # the area (or integral) under the histogram will sum to 1 # n, the values of the histogram bins # bins, the edges of the bins n, bins, patches = ax. For more in depth options see the Matplotlib website. gnuplotは止めました。 Pythonをメインで使ってるのにmatplotlibを使わないのが そもそも不自然だったんですけどね。 仕事で3次元のデータ(x, y, zの組)をプロットする機会が多いので、 得たポイントをまとめておきます。. Matplotlib is powerful, but has awkward syntax, odd default display settings, and requires setting up data arrays manually. warp() function. Often times, I ssh into another computer to do python work. While I'm coding, I like to have an ipython shell open for quick testing. Matplotlib IRMA - 25/01/2012 29 Available projections can be listed accessing online help (65 in Matlab) import mpl_toolkits. tesselate the input point set to n-dimensional simplices, and interpolate linearly on each simplex. For a brief introduction to the ideas behind the library, you can read the introductory notes. 6 Pie and Polar charts 8. figure ax = fig. In matplotlib, I want to get a specific tick on the x axis to label a particular value. Dear all, Once again, I turn for help. Powerful and simple online compiler, IDE, interpreter, and REPL. 
Boxes around text happen when you specify bbox. Documentation ubuntu-fr. In this case, the position of Z[0,0] is the center of the pixel, not a corner. ("data", "axes fraction") matplotlib. Installing Matplotlib. 3 but no longer works in version 2. Making pretty function plots requires multiple lines of code. The first subplot is the first column of the first row, the second subplot is the second column of the first row, and so on. Method of interpolation. 本ページでは、Python のグラフ作成パッケージ Matplotlib を用いて散布図 (Scatter plot) を描く方法について紹介します。 matplotlib. 0, n) Y2 = (1 - X / float(n)) * np. meshgrid¶ numpy. I guess this can be done by default, maybe using a wrapper in PolarAxes ? QuLogic added a commit to QuLogic/matplotlib that referenced this issue Dec 3, 2015. How to hide axis of plot in Matplotlib Ashwin Uncategorized 2015-06-29 2015-06-29 1 Minute For most types of plots drawn by Matplotlib , the ticks and labels along both X and Y axis is drawn too. Basic Concepts¶. imshow() function. arange(0,2*np. We closed 605 issues and merged 483 pull requests. Download with Google Download with Facebook or download with email. If the window was created with the cv::WINDOW_AUTOSIZE flag, the image is shown with its original size, however it is still limited by the screen resolution. Boxes around text happen when you specify bbox. How to make a quiver plot in Matplotlib Python. matplotlib Mailing Lists Brought to you by: cjgohlke , dsdale , efiring , heeres , and 8 others. The scale keyword can be used to downsample the image (scale=0. So, if we want to use NumPy, it must. however, what i want to do is subdivide the each major y-axis display into 10 smaller segments. カラーバーが表示されない!? なぜなら type(cax) が matplotlib. three-dimensional plots are enabled by importing the mplot3d toolkit. Your message dated Sat, 3 Jun 2017 23:20:31 +0200 with message-id <20170603212029. Max Liu Programmer by Passion import cv2 import numpy as np import matplotlib. I guess it's far too much trouble to add a line or two to make the examples actually work. Matplotlib Tutorial Advanced Python School in Zurich, September 2013. Basic Concepts¶. 2015/09/09 [Matplotlib-users] Creating axes with fixed distance from figure edge Thomas Robitaille 2015/09/09 Re: [Matplotlib-users] Matplotlib Curve Overlapping with Animated plot Benjamin Root 2015/09/03 Re: [Matplotlib-users] Shadows are really large in exported PNG file Mark Voorhies. matplotlib所提供的图形非常丰富,除了基本的柱状图、饼图、散点图等,还提供了极坐标图、3D图等高级图形,并且你可以自由选择和组合。 每个图形函数下都有许多参数可设置,matplotlib提供的不仅仅是图形,还有更为精细的图像表达,你可以通过细节的设置来. If origin is not None, then extent is interpreted as in matplotlib. Matplotlib Tutorials : 044. Despite being written entirely in python, the library is very fast due to its heavy leverage of numpy for number crunching and Qt's GraphicsView framework for fa. 本ページでは、Python のグラフ作成パッケージ Matplotlib を用いてヒストグラム (Histogram) を描く方法について紹介します。 matplotlib. Below is a sampling of the many TeX expressions now supported by Matplotlib's internal mathtext engine. Then the matplotlib savefig function will help you. Matplotlib can display images (assuming equally spaced horizontal dimensions) using the imshow() function. As of version 0. It however does some things in the background. pcolor(C) creates a pseudocolor plot using the values in matrix C. Example 1¶ This requires Scipy 0. finance module has been removed, development has. How to show minor tick labels on log-scale with Matplotlib. outerproduct(x,y) → numpy. 
def imageM(*args,**kwargs): """ imageM(*args, **kwargs) This function essentially is a wrapper for the matplotlib. 02) 绘图函数简介 绘图函数简介 程序中调用subplot()创建子图时通过设 polar参数 为True,创建一个极坐标子图。然后调用plot()在极坐 标子图中绘图。. Based on Lecture Material by Anthony Scopatz and Katy Huff. You can control the defaults of almost every property in matplotlib: figure size and dpi, line width, color and style, axes, axis and grid properties, text and font properties and so on. Since matplotlib's default is to render its graphics in an external window, for plotting in a notebook you will have to specify otherwise, as it's impossible to do this in a browser. Its interactive mode supports multiple windowing toolkits (currently: GTK, Tkinter, Qt, and wxWindows) as well as multiple noninteractive backends (PDF, postscript, SVG, antigrain geometry, and Cairo). matplotlib. In this case, the position of Z[0,0] is the center of the pixel, not a corner. subplot(m,n,p) divides the current figure into an m-by-n grid and creates axes in the position specified by p. alternative to imshow with polar axes ?. PyQtGraph is a pure-python graphics and GUI library built on PyQt4 / PySide and numpy. hist が用意されてます。. add_subplot (111). 5 result is that is essentially wrong. Luckily for us, the creator of Matplotlib has even created something to help us do just that. randint(-20,20,size=1. Interactive Applications Using Matplotlib - Sample Chapter - Free download as PDF File (. subplots() x_list = [0,1,2] y_list = [0,0,0] ax. imshow for showing images. Related courses If you want to learn more on data visualization, these courses are good: Matplotlib Intro with Python; Python for Data Analysis and Visualization - 32 HD Hours! Heatmap example The histogram2d function can be used to generate a heatmap. Axes() class for many of the same plotting functions. png") The pyplot interface is a function-based interface that uses the Matlab-like conventions. 2 Line graph 8. カラーバーが表示されない!? なぜなら type(cax) が matplotlib. python,matplotlib. 5 and up, matplotlib offers a range of pre-configured plotting styles. arange(0,2*np. 5 adds a new option to the plot directive - close-figs - that closes any previous figure windows before creating the plots. The following are code examples for showing how to use matplotlib. An example is below:. Each of these derived classes indicates a structure in the coordinate values. In this way we build a look-up table that maps co-ordinates in polar space to an equivalent co-ordinate in Cartesian space. Matplotlib is a pure python plotting library with the goal of making publication quality plots using a syntax familiar to Matlab users. For some reason, I cannot get matplotlib to open figures. If origin is None, then (x0, y0) is the position of Z[0,0], and (x1, y1) is the position of Z[-1,-1]. pyplot is a collection of command style functions that make Matplotlib work like MATLAB. Examples of colored and labeled heatmaps with custom colorscales. 2D Plotting¶ Sage provides extensive 2D plotting functionality. 興味があることをぐうたらに記していきます. feature import register. How to show minor tick labels on log-scale with Matplotlib. warp() function. Complex Numbers in Python | Set 1 (Introduction) Not only real numbers, Python can also handle complex numbers and its associated functions using the file "cmath". How to make a quiver plot in Matplotlib Python. 3, matplotlib provides a griddata function that behaves similarly to the matlab version. below is. Matplotlib Tutorials : 044. r,thetafor polar coordinates. 
The plot types matplotlib provides are very rich: besides basic bar charts, pie charts, scatter plots and the like, it offers advanced plots such as polar and 3D plots, which you can freely choose and combine. Every plotting function exposes many parameters, so matplotlib gives you not just the plots themselves but fine-grained control over their appearance through detailed settings. Matplotlib comes with a set of default settings that allow customizing all kinds of properties. The subplot specification m, n, p uses m for the row, n for the column, and p for the pane. imshow is the go-to image plotting function in matplotlib: a simple call to the imread method loads an image as a multi-dimensional NumPy array (one plane each for the Red, Green, and Blue components), and imshow displays it on screen; if X is 3-dimensional, imshow will display a color image. Consider, for example, the case where we would like to shift an image 50 pixels to the left, or to scale and rotate a contour with the OpenCV Python API. Matplotlib has a tutorial on how to manage images, and ConnectionPatch is sometimes easier to use for annotations that span axes. The easiest way to get started with plotting is often the MATLAB-like API provided by matplotlib.pyplot; for examples of the object-oriented approach, see the API examples. The image sub-package of matplotlib is commonly imported as mpimg for convenience. For data on an irregular grid, the answer is to first interpolate it onto a regular grid; also, contour allows x and y points that are not evenly spaced, whereas imshow() does not. A pseudocolor plot displays matrix data as an array of colored cells (known as faces); MATLAB creates this plot as a flat surface in the x-y plane. figimage complements the axes image (imshow()), which is resampled to fit the current axes; if origin is not None, then extent is interpreted as in imshow, and if you want a resampled image to fill the entire figure, you can define an Axes with size [0,1,0,1]. For histograms, matplotlib.pyplot.hist is provided. A default image size of 5400x2700 can be quite slow and use quite a bit of memory, and the matplotlib.finance module has been removed.
Matplotlib's configuration is read from the matplotlibrc file, in which permanently effective default values can be specified for almost every property: figure size and dpi, line width, color and style, axes, axis and grid properties, text and font properties, and so on; all loaded settings can be inspected through the matplotlib.rcParams dictionary. To use 3D graphics in matplotlib, we first need to create an axes instance of the class Axes3D. matplotlib provides matplotlib.pyplot.scatter for drawing scatter plots, and the legend() method adds the legend to the plot; matplotlib has native support for legends. A new untitled notebook with the .ipynb extension (which stands for the IPython notebook) is displayed in a new tab of the browser, and IPython runs the .py files in its startup directory in lexicographical order by name. Matplotlib also ships a sphinx extension, plot_directive, that creates plots for inclusion in sphinx documents; version 1.5 adds a new option to the plot directive, close-figs, that closes any previous figure windows before creating the plots. When creating a subplot with subplot(), setting the polar parameter to True creates a polar subplot, and plot() then draws into it; Axes keyword arguments plus projection choose a projection type for the axes. Adding a colorbar to a pcolormesh when the polar projection is specified for the axes doesn't work as expected; one user question describes a polar grid stored as a 2D array (rows are radial sections, columns are azimuthal sections) that can be displayed as a rectangular image of R versus theta but should be shown with imshow on a polar axis. To save a figure, the matplotlib savefig function will help you. A field can be shown by using imshow_field(), which acts very similarly to the standard Matplotlib pyplot imshow.
Matplotlib is an open-source plotting library designed to support interactive and publication-quality plotting with a syntax familiar to MATLAB users, and its documentation comes with a much more exhaustive gallery of examples. The pyplot interface is a function-based interface that uses the MATLAB-like conventions; grid lines are drawn with grid(), whose which parameter selects the 'major' or 'minor' grid lines. When running in IPython with its pylab mode, showing a figure displays all figures and returns to the IPython prompt. A popular question is how to get live-updating graphs in Python and matplotlib; this is what the animation function is for. For gridding scattered data, the available interpolation methods include 'nearest', which returns the value at the data point closest to the point of interpolation, and 'linear', which tessellates the input point set into n-dimensional simplices and interpolates linearly on each simplex. Beyond real numbers, Python can also handle complex numbers and their associated functions through the cmath module.
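A minimal runnable sketch of the imshow workflow described above (the file name and figure layout are illustrative, not taken from any of the quoted tutorials):

import numpy as np
import matplotlib.pyplot as plt

# Build a small synthetic "image" instead of reading one from disk,
# so the example runs without any external file.
img = np.random.rand(64, 64)

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(8, 4))

# Default placement: the origin sits in the upper-left corner.
ax1.imshow(img, cmap='gray')
ax1.set_title('default origin')

# Move the origin to the lower-left corner and stretch the data limits.
ax2.imshow(img, cmap='gray', origin='lower', extent=[0, 10, 0, 5])
ax2.set_title("origin='lower' with extent")

plt.savefig('imshow_demo.png')  # savefig writes the figure to disk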
2019-10-23 19:58:49
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.23245465755462646, "perplexity": 5020.162433585203}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570987835748.66/warc/CC-MAIN-20191023173708-20191023201208-00142.warc.gz"}
https://mathematica.stackexchange.com/questions/179916/indeterminate-result-from-sinc
# Indeterminate result from Sinc

I have an impulse train given by

(1 + Csc[(π x)/(1 + R)] Sin[(π (1 + 2 R) x)/(1 + R)])/(2 + 2 R)

Because of the Csc, evaluating this expression gives an indeterminate result for all integer multiples of R + 1, as you can see from this table:

TableForm[ Table[ Evaluate[(1 + Csc[(π x)/(1 + R)] Sin[(π (1 + 2 R) x)/(1 + R)])/(2 + 2 R)], {x, 0, 10}, {R, 1, 5}]]

(Apologies if my formatting is off; total newbie...)

Fair enough: the expression is defined in the limit, but not absolutely. So, following advice from a previous similar question (here), I replace Sin with Sinc:

((1 + Csc[(π x)/(1 + R)] Sin[(π (1 + 2 R) x)/(1 + R)])/(2 + 2 R) // FullSimplify) /. {Sin[z_] :> z*Sinc[z], Csc[z_] :> 1/(z*Sinc[z])} // Simplify

This gives me

(1 + ((1 + 2 R) Sinc[(π (1 + 2 R) x)/(1 + R)])/Sinc[(π x)/(1 + R)])/(2 + 2 R)

And this is where I get confused. The substitution should remove all indeterminate results, but it doesn't. Instead, it only removes the first indeterminate result:

TableForm[ Table[ Evaluate[{(1 + ((1 + 2 R) Sinc[(π (1 + 2 R) x)/(1 + R)])/Sinc[(π x)/(1 + R)]) / (2 + 2 R)}], {x, 0, 10}, {R, 1, 5}]]

This is baffling, since the function now contains nothing but Sinc. How do I fix this? Or am I doing something wrong mathematically?

You've avoided Sin[0]/0 problems, but the numerator and denominator of your more complicated expression are still both zero in some cases. One way is to take a limit in those cases:

f[xx_, RR_] := Module[{expr, trial, x, R}, expr = (1 + ((1 + 2 R) Sinc[(\[Pi] (1 + 2 R) x)/(1 + R)])/ Sinc[(\[Pi] x)/(1 + R)])/(2 + 2 R); trial = Quiet[expr /. x -> xx /. R -> RR]; If[trial === Indeterminate, Limit[expr /. R -> RR, x -> xx], trial]]

Table[f[x, R], {x, 0, 10}, {R, 1, 5}]
(* {{1, 1, 1, 1, 1}, {0, 0, 0, 0, 0}, {1, 0, 0, 0, 0}, {0, 1, 0, 0, 0}, {1, 0, 1, 0, 0}, {0, 0, 0, 1, 0}, {1, 1, 0, 0, 1}, {0, 0, 0, 0, 0}, {1, 0, 1, 0, 0}, {0, 1, 0, 0, 0}, {1, 0, 0, 1, 0}} *)

You can take the limit for all cases if you like.

g[xx_, R_] := Module[{x}, Limit[(1 + ((1 + 2 R) Sinc[(\[Pi] (1 + 2 R) x)/(1 + R)])/ Sinc[(\[Pi] x)/(1 + R)])/(2 + 2 R), x -> xx]]

This is, however, a bit slower than the previous definition. I call the argument pattern xx to distinguish it from the free variable x. After all, Limit[..., x -> x] makes no sense. RR isn't necessary in the final code, but I was experimenting with which variable Limit preferred. Note that I localized x with Module to ensure that it wouldn't conflict with either a potential global value or a local value created by Table or something. Mathematica is an expression-rewriting language: x needs to stay undefined through all the rewriting to allow Limit to use it properly.

• Thanks. Could you please just confirm that I've understood this correctly? The essence of your response is that the expression is only true in the limit (in some circumstances), regardless of how I express it. I can use techniques to make Mathematica take the limit, but from a purely formulaic point of view, I might as well simply define the formula in terms of its limits from the outset, much as Sinc does for 1/Sin? Aug 12 '18 at 15:14
• By which I mean, I'll totally use your fix (much appreciated, though I don't fully understand it, but I'll learn). But I do want to make sure I'm standing on mathematically solid ground... Aug 12 '18 at 15:28
• I'd also appreciate a pointer as to why the double xx and RR work as they do. The results are awesome though. Appreciated. Aug 12 '18 at 17:40
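A short note on why the Sinc rewrite still leaves indeterminate entries (this step is implied by the accepted answer but not written out there; it assumes integer R, as in the tables above):

% sinc(z) = sin(z)/z equals 1 at z = 0 but vanishes at every nonzero
% integer multiple of pi. For x = k (1 + R) with k a nonzero integer
% and R a positive integer, both sinc arguments are nonzero multiples
% of pi, so numerator and denominator of the rewritten expression vanish:
\[
\operatorname{sinc}\!\left(\frac{\pi x}{1+R}\right) = \operatorname{sinc}(k\pi) = 0,
\qquad
\operatorname{sinc}\!\left(\frac{\pi (1+2R)\,x}{1+R}\right) = \operatorname{sinc}\bigl(k(1+2R)\pi\bigr) = 0 .
\]
% The ratio is therefore again of the form 0/0; only the x = 0 entry
% (where both sinc factors equal 1) is repaired, and a limit is still
% needed at the other lattice points, exactly as the answer's f and g do.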
2021-12-02 13:14:53
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.26569807529449463, "perplexity": 1285.827816010804}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964362219.5/warc/CC-MAIN-20211202114856-20211202144856-00519.warc.gz"}
https://www.fuzzingbook.org/beta/html/DynamicInvariants.html
# Mining Function Specifications¶ When testing a program, one not only needs to cover its several behaviors; one also needs to check whether the result is as expected. In this chapter, we introduce a technique that allows us to mine function specifications from a set of given executions, resulting in abstract and formal descriptions of what the function expects and what it delivers. These so-called dynamic invariants produce pre- and post-conditions over function arguments and variables from a set of executions. They are useful in a variety of contexts: • Dynamic invariants provide important information for symbolic fuzzing, such as types and ranges of function arguments. • Dynamic invariants provide pre- and postconditions for formal program proofs and verification. • Dynamic invariants provide a large number of assertions that can check whether function behavior has changed • Checks provided by dynamic invariants can be very useful as oracles for checking the effects of generated tests Traditionally, dynamic invariants are dependent on the executions they are derived from. However, when paired with comprehensive test generators, they quickly become very precise, as we show in this chapter. Prerequisites • You should be familiar with tracing program executions, as in the chapter on coverage. • Later in this section, we access the internal abstract syntax tree representations of Python programs and transform them, as in the chapter on information flow. import fuzzingbook_utils import Coverage import Intro_Testing ## Synopsis¶ >>> from fuzzingbook.DynamicInvariants import <identifier> and then make use of the following features. This chapter provides two classes that automatically extract specifications from a function and a set of inputs: • TypeAnnotator for types, and • InvariantAnnotator for pre- and postconditions. Both work by observing a function and its invocations within a with clause. Here is an example for the type annotator: >>> def sum2(a, b): >>> return a + b >>> with TypeAnnotator() as type_annotator: >>> sum2(1, 2) >>> sum2(-4, -5) >>> sum2(0, 0) The typed_functions() method will return a representation of sum2() annotated with types observed during execution. >>> print(type_annotator.typed_functions()) def sum2(a: int, b: int) ->int: return a + b The invariant annotator works in a similar fashion: >>> with InvariantAnnotator() as inv_annotator: >>> sum2(1, 2) >>> sum2(-4, -5) >>> sum2(0, 0) The functions_with_invariants() method will return a representation of sum2() annotated with inferred pre- and postconditions that all hold for the observed values. >>> print(inv_annotator.functions_with_invariants()) @precondition(lambda a, b: isinstance(a, int)) @precondition(lambda a, b: isinstance(b, int)) @postcondition(lambda return_value, a, b: a == return_value - b) @postcondition(lambda return_value, a, b: b == return_value - a) @postcondition(lambda return_value, a, b: isinstance(return_value, int)) @postcondition(lambda return_value, a, b: return_value == a + b) @postcondition(lambda return_value, a, b: return_value == b + a) def sum2(a, b): return a + b Such type specifications and invariants can be helpful as oracles (to detect deviations from a given set of runs) as well as for all kinds of symbolic code analyses. The chapter gives details on how to customize the properties checked for. ## Specifications and Assertions¶ When implementing a function or program, one usually works against a specification – a set of documented requirements to be satisfied by the code. 
Such specifications can come in natural language. A formal specification, however, allows the computer to check whether the specification is satisfied. In the introduction to testing, we have seen how preconditions and postconditions can describe what a function does. Consider the following (simple) square root function: def my_sqrt(x): assert x >= 0 # Precondition ... assert result * result == x # Postcondition return result The assertion assert p checks the condition p; if it does not hold, execution is aborted. Here, the actual body is not yet written; we use the assertions as a specification of what my_sqrt() expects, and what it delivers. The topmost assertion is the precondition, stating the requirements on the function arguments. The assertion at the end is the postcondition, stating the properties of the function result (including its relationship with the original arguments). Using these pre- and postconditions as a specification, we can now go and implement a square root function that satisfies them. Once implemented, we can have the assertions check at runtime whether my_sqrt() works as expected; a symbolic or concolic test generator will even specifically try to find inputs where the assertions do not hold. (An assertion can be seen as a conditional branch towards aborting the execution, and any technique that tries to cover all code branches will also try to invalidate as many assertions as possible.) However, not every piece of code is developed with explicit specifications in the first place; let alone does most code comes with formal pre- and post-conditions. (Just take a look at the chapters in this book.) This is a pity: As Ken Thompson famously said, "Without specifications, there are no bugs – only surprises". It is also a problem for testing, since, of course, testing needs some specification to test against. This raises the interesting question: Can we somehow retrofit existing code with "specifications" that properly describe their behavior, allowing developers to simply check them rather than having to write them from scratch? This is what we do in this chapter. ## Why Generic Error Checking is Not Enough¶ Before we go into mining specifications, let us first discuss why it could be useful to have them. As a motivating example, consider the full implementation of my_sqrt() from the introduction to testing: import fuzzingbook_utils def my_sqrt(x): """Computes the square root of x, using the Newton-Raphson method""" approx = None guess = x / 2 while approx != guess: approx = guess guess = (approx + x / approx) / 2 return approx my_sqrt() does not come with any functionality that would check types or values. Hence, it is easy for callers to make mistakes when calling my_sqrt(): from ExpectError import ExpectError, ExpectTimeout with ExpectError(): my_sqrt("foo") Traceback (most recent call last): File "<ipython-input-7-774676a5ccb8>", line 2, in <module> my_sqrt("foo") File "<ipython-input-5-47185ad159a1>", line 4, in my_sqrt guess = x / 2 TypeError: unsupported operand type(s) for /: 'str' and 'int' (expected) with ExpectError(): x = my_sqrt(0.0) Traceback (most recent call last): File "<ipython-input-8-262c66114b1c>", line 2, in <module> x = my_sqrt(0.0) File "<ipython-input-5-47185ad159a1>", line 7, in my_sqrt guess = (approx + x / approx) / 2 ZeroDivisionError: float division by zero (expected) At least, the Python system catches these errors at runtime. 
The following call, however, simply lets the function enter an infinite loop: with ExpectTimeout(1): x = my_sqrt(-1.0) Traceback (most recent call last): File "<ipython-input-9-b72078127dc0>", line 2, in <module> x = my_sqrt(-1.0) File "<ipython-input-5-47185ad159a1>", line 6, in my_sqrt approx = guess File "<ipython-input-5-47185ad159a1>", line 6, in my_sqrt approx = guess File "ExpectError.ipynb", line 59, in check_time TimeoutError (expected) Our goal is to avoid such errors by annotating functions with information that prevents errors like the above ones. The idea is to provide a specification of expected properties – a specification that can then be checked at runtime or statically. \todo{Introduce the concept of contract.}
## Specifying and Checking Data Types¶ For our Python code, one of the most important "specifications" we need is types. Python being a "dynamically" typed language means that all data types are determined at run time; the code itself does not explicitly state whether a variable is an integer, a string, an array, a dictionary – or whatever. As a writer of Python code, omitting explicit type declarations may save time (and allows for some fun hacks). It is not clear whether a lack of types helps humans in reading and understanding code. For a computer trying to analyze code, the lack of explicit types is detrimental. If, say, a constraint solver sees if x: and cannot know whether x is supposed to be a number or a string, this introduces an ambiguity. Such ambiguities may multiply over the entire analysis in a combinatorial explosion – or result in the analysis yielding an overly inaccurate result. Python 3.6 and later allow data types as annotations to function arguments (actually, to all variables) and return values. We can, for instance, state that my_sqrt() is a function that accepts a floating-point value and returns one: def my_sqrt_with_type_annotations(x: float) -> float: """Computes the square root of x, using the Newton-Raphson method""" return my_sqrt(x) By default, such annotations are ignored by the Python interpreter. Therefore, one can still call my_sqrt_with_type_annotations() with a string as an argument and get the exact same result as above. However, one can make use of special typechecking modules that would check types – dynamically at runtime or statically by analyzing the code without having to execute it.
### Runtime Type Checking¶ The Python enforce package provides a function decorator that automatically inserts type-checking code that is executed at runtime. Here is how to use it: import enforce @enforce.runtime_validation def my_sqrt_with_checked_type_annotations(x: float) -> float: """Computes the square root of x, using the Newton-Raphson method""" return my_sqrt(x) Now, invoking my_sqrt_with_checked_type_annotations() raises an exception when invoked with a type different from the one declared: with ExpectError(): my_sqrt_with_checked_type_annotations(True) Traceback (most recent call last): File "<ipython-input-13-68b73bd3f6ef>", line 2, in <module> my_sqrt_with_checked_type_annotations(True) File "/Users/zeller/Library/Python/3.6/site-packages/enforce/decorators.py", line 104, in universal _args, _kwargs, _ = enforcer.validate_inputs(parameters) File "/Users/zeller/Library/Python/3.6/site-packages/enforce/enforcers.py", line 86, in validate_inputs raise RuntimeTypeError(exception_text) enforce.exceptions.RuntimeTypeError: The following runtime type errors were encountered: Argument 'x' was not of type <class 'float'>. Actual type was bool.
(expected) Note that this error is not caught by the "untyped" variant, where passing a boolean value happily returns $\sqrt{1}$ as result. my_sqrt(True) 1.0 In Python (and other languages), the boolean values True and False can be implicitly converted to the integers 1 and 0; however, it is hard to think of a call to sqrt() where this would not be an error. ### Static Type Checking¶ Type annotations can also be checked statically – that is, without even running the code. Let us create a simple Python file consisting of the above my_sqrt_typed() definition and a bad invocation. import inspect import tempfile f = tempfile.NamedTemporaryFile(mode='w', suffix='.py') f.name '/var/folders/n2/xd9445p97rb3xh7m1dfx8_4h0006ts/T/tmp2w86plew.py' f.write(inspect.getsource(my_sqrt)) f.write('\n') f.write(inspect.getsource(my_sqrt_with_type_annotations)) f.write('\n') f.write("print(my_sqrt_with_type_annotations('123'))\n") f.flush() These are the contents of our newly created Python file: from fuzzingbook_utils import print_file print_file(f.name) def my_sqrt(x): """Computes the square root of x, using the Newton-Raphson method""" approx = None guess = x / 2 while approx != guess: approx = guess guess = (approx + x / approx) / 2 return approx def my_sqrt_with_type_annotations(x: float) -> float: """Computes the square root of x, using the Newton-Raphson method""" return my_sqrt(x) print(my_sqrt_with_type_annotations('123')) Mypy is a type checker for Python programs. As it checks types statically, types induce no overhead at runtime; plus, a static check can be faster than a lengthy series of tests with runtime type checking enabled. Let us see what mypy produces on the above file: import subprocess result = subprocess.run(["mypy", "--strict", f.name], universal_newlines=True, stdout=subprocess.PIPE) del f # Delete temporary file print(result.stdout) /var/folders/n2/xd9445p97rb3xh7m1dfx8_4h0006ts/T/tmp2w86plew.py:1: error: Function is missing a type annotation /var/folders/n2/xd9445p97rb3xh7m1dfx8_4h0006ts/T/tmp2w86plew.py:12: warning: Returning Any from function declared to return "float" /var/folders/n2/xd9445p97rb3xh7m1dfx8_4h0006ts/T/tmp2w86plew.py:12: error: Call to untyped function "my_sqrt" in typed context /var/folders/n2/xd9445p97rb3xh7m1dfx8_4h0006ts/T/tmp2w86plew.py:14: error: Argument 1 to "my_sqrt_with_type_annotations" has incompatible type "str"; expected "float" We see that mypy complains about untyped function definitions such as my_sqrt(); most important, however, it finds that the call to my_sqrt_with_type_annotations() in the last line has the wrong type. With mypy, we can achieve the same type safety with Python as in statically typed languages – provided that we as programmers also produce the necessary type annotations. Is there a simple way to obtain these? ## Mining Type Specifications¶ Our first task will be to mine type annotations (as part of the code) from values we observe at run time. These type annotations would be mined from actual function executions, learning from (normal) runs what the expected argument and return types should be. By observing a series of calls such as these, we could infer that both x and the return value are of type float: y = my_sqrt(25.0) y 5.0 y = my_sqrt(2.0) y 1.414213562373095 How can we mine types from executions? The answer is simple: 1. We observe a function during execution 2. We track the types of its arguments 3. We include these types as annotations into the code. 
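Before the chapter builds its full Tracker infrastructure, here is a bare-bones sketch of what such run-time observation can look like with sys.settrace (the helper observe_types is purely illustrative and not part of the chapter's implementation):

import sys

def observe_types(func, *args):
    """Run func(*args) and report the argument and return types seen."""
    observed = {}

    def tracer(frame, event, arg):
        if frame.f_code is func.__code__:
            if event == "call":
                # At call time, the frame's locals are exactly the arguments.
                observed.update({name: type(value).__name__
                                 for name, value in frame.f_locals.items()})
            elif event == "return":
                observed["return"] = type(arg).__name__
        return tracer

    sys.settrace(tracer)
    try:
        func(*args)
    finally:
        sys.settrace(None)
    return observed

# observe_types(my_sqrt, 25.0) would yield {'x': 'float', 'return': 'float'}.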
To do so, we can make use of Python's tracing facility we already observed in the chapter on coverage. With every call to a function, we retrieve the arguments, their values, and their types. ### Tracking Calls¶ To observe argument types at runtime, we define a tracer function that tracks the execution of my_sqrt(), checking its arguments and return values. The Tracker class is set to trace functions in a with block as follows: with Tracker() as tracker: function_to_be_tracked(...) info = tracker.collected_information() As in the chapter on coverage, we use the sys.settrace() function to trace individual functions during execution. We turn on tracking when the with block starts; at this point, the __enter__() method is called. When execution of the with block ends, __exit__() is called. import sys class Tracker(object): def __init__(self, log=False): self._log = log self.reset() def reset(self): self._calls = {} self._stack = [] def traceit(self): """Placeholder to be overloaded in subclasses""" pass # Start of with block def __enter__(self): self.original_trace_function = sys.gettrace() sys.settrace(self.traceit) return self # End of with block def __exit__(self, exc_type, exc_value, tb): sys.settrace(self.original_trace_function) The traceit() method does nothing yet; this is done in specialized subclasses. The CallTracker class implements a traceit() function that checks for function calls and returns: class CallTracker(Tracker): def traceit(self, frame, event, arg): """Tracking function: Record all calls and all args""" if event == "call": self.trace_call(frame, event, arg) elif event == "return": self.trace_return(frame, event, arg) return self.traceit trace_call() is called when a function is called; it retrieves the function name and current arguments, and saves them on a stack. class CallTracker(CallTracker): def trace_call(self, frame, event, arg): """Save current function name and args on the stack""" code = frame.f_code function_name = code.co_name arguments = get_arguments(frame) self._stack.append((function_name, arguments)) if self._log: print(simple_call_string(function_name, arguments)) def get_arguments(frame): """Return call arguments in the given frame""" # When called, all arguments are local variables arguments = [(var, frame.f_locals[var]) for var in frame.f_locals] arguments.reverse() # Want same order as call return arguments When the function returns, trace_return() is called. We now also have the return value. We log the whole call with arguments and return value (if desired) and save it in our list of calls via add_call(): class CallTracker(CallTracker): def trace_return(self, frame, event, arg): """Get return value and store complete call with arguments and return value""" code = frame.f_code function_name = code.co_name return_value = arg # TODO: Could call get_arguments() here to also retrieve _final_ values of argument variables called_function_name, called_arguments = self._stack.pop() assert function_name == called_function_name if self._log: print(simple_call_string(function_name, called_arguments), "returns", return_value) self.add_call(function_name, called_arguments, return_value) simple_call_string() is a helper for logging that prints out calls in a user-friendly manner. def simple_call_string(function_name, argument_list, return_value=None): """Return function_name(arg[0], arg[1], ...)
as a string""" call = function_name + "(" + \ ", ".join([var + "=" + repr(value) for (var, value) in argument_list]) + ")" if return_value is not None: call += " = " + repr(return_value) return call add_call() saves the calls in a list; each function name has its own list. class CallTracker(CallTracker): def add_call(self, function_name, arguments, return_value): """Add given call to list of calls""" if function_name not in self._calls: self._calls[function_name] = [] self._calls[function_name].append((arguments, return_value)) Using calls(), we can retrieve the list of calls, either for a given function, or for all functions. class CallTracker(CallTracker): def calls(self, function_name=None): """Return list of calls for function_name, or a mapping function_name -> calls for all functions tracked""" if function_name is None: return self._calls return self._calls[function_name] Let us now put this to use. We turn on logging to track the individual calls and their return values: with CallTracker(log=True) as tracker: y = my_sqrt(25) y = my_sqrt(2.0) my_sqrt(x=25) my_sqrt(x=25) returns 5.0 my_sqrt(x=2.0) my_sqrt(x=2.0) returns 1.414213562373095 __exit__(self=<__main__.CallTracker object at 0x111c63630>, exc_type=None, exc_value=None, tb=None) After execution, we can retrieve the individual calls: calls = tracker.calls('my_sqrt') calls [([('x', 25)], 5.0), ([('x', 2.0)], 1.414213562373095)] Each call is a pair (argument_list, return_value), where argument_list is a list of pairs (parameter_name, value). my_sqrt_argument_list, my_sqrt_return_value = calls[0] simple_call_string('my_sqrt', my_sqrt_argument_list, my_sqrt_return_value) 'my_sqrt(x=25) = 5.0' If the function does not return a value, return_value is None. def hello(name): print("Hello,", name) with CallTracker() as tracker: hello("world") Hello, world hello_calls = tracker.calls('hello') hello_calls [([('name', 'world')], None)] hello_argument_list, hello_return_value = hello_calls[0] simple_call_string('hello', hello_argument_list, hello_return_value) "hello(name='world')" ### Getting Types¶ Despite what you may have read or heard, Python actually is a typed language. It is just that it is dynamically typed – types are used and checked only at runtime (rather than declared in the code, where they can be statically checked at compile time). We can thus retrieve types of all values within Python: type(4) int type(2.0) float type([4]) list We can retrieve the type of the first argument to my_sqrt(): parameter, value = my_sqrt_argument_list[0] parameter, type(value) ('x', int) as well as the type of the return value: type(my_sqrt_return_value) float Hence, we see that (so far), my_sqrt() is a function taking (among others) integers and returning floats. We could declare my_sqrt() as: def my_sqrt_annotated(x: int) -> float: return my_sqrt(x) This is a representation we could place in a static type checker, allowing us to check whether calls to my_sqrt() actually pass a number. A dynamic type checker could run such checks at runtime. And of course, any symbolic interpretation will greatly profit from the additional annotations. By default, Python does not do anything with such annotations. However, tools can access annotations from functions and other objects: my_sqrt_annotated.__annotations__ {'x': int, 'return': float} This is how run-time checkers access the annotations to check against. ### Accessing Function Structure¶ Our plan is to annotate functions automatically, based on the types we have seen.
To do so, we need a few modules that allow us to convert a function into a tree representation (called abstract syntax trees, or ASTs) and back; we already have seen these in the chapters on concolic and symbolic testing. import ast import inspect import astor We can get the source of a Python function using inspect.getsource(). (Note that this does not work for functions defined in other notebooks.) my_sqrt_source = inspect.getsource(my_sqrt) my_sqrt_source 'def my_sqrt(x):\n """Computes the square root of x, using the Newton-Raphson method"""\n approx = None\n guess = x / 2\n while approx != guess:\n approx = guess\n guess = (approx + x / approx) / 2\n return approx\n' To view these in a visually pleasing form, our function print_content(s, suffix) formats and highlights the string s as if it were a file with ending suffix. We can thus view (and highlight) the source as if it were a Python file: from fuzzingbook_utils import print_content print_content(my_sqrt_source, '.py') def my_sqrt(x): """Computes the square root of x, using the Newton-Raphson method""" approx = None guess = x / 2 while approx != guess: approx = guess guess = (approx + x / approx) / 2 return approx Parsing this gives us an abstract syntax tree (AST) – a representation of the program in tree form. my_sqrt_ast = ast.parse(my_sqrt_source) What does this AST look like? The helper functions astor.dump_tree() (textual output) and showast.show_ast() (graphical output with showast) allow us to inspect the structure of the tree. We see that the function starts as a FunctionDef with name and arguments, followed by a body, which is a list of statements of type Expr (the docstring), type Assign (assignments), While (while loop with its own body), and finally Return. print(astor.dump_tree(my_sqrt_ast)) Module( body=[ FunctionDef(name='my_sqrt', args=arguments(args=[arg(arg='x', annotation=None)], vararg=None, kwonlyargs=[], kw_defaults=[], kwarg=None, defaults=[]), body=[ Expr(value=Str(s='Computes the square root of x, using the Newton-Raphson method')), Assign(targets=[Name(id='approx')], value=NameConstant(value=None)), Assign(targets=[Name(id='guess')], value=BinOp(left=Name(id='x'), op=Div, right=Num(n=2))), While( test=Compare(left=Name(id='approx'), ops=[NotEq], comparators=[Name(id='guess')]), body=[Assign(targets=[Name(id='approx')], value=Name(id='guess')), Assign(targets=[Name(id='guess')], value=BinOp( left=BinOp(left=Name(id='approx'), right=BinOp(left=Name(id='x'), op=Div, right=Name(id='approx'))), op=Div, right=Num(n=2)))], orelse=[]), Return(value=Name(id='approx'))], decorator_list=[], returns=None)]) Too much text for you? This graphical representation may make things simpler. from fuzzingbook_utils import rich_output if rich_output(): import showast showast.show_ast(my_sqrt_ast) The function astor.to_source() converts such a tree back into the more familiar textual Python code representation. Comments are gone, and there may be more parentheses than before, but the result has the same semantics: print_content(astor.to_source(my_sqrt_ast), '.py') def my_sqrt(x): """Computes the square root of x, using the Newton-Raphson method""" approx = None guess = x / 2 while approx != guess: approx = guess guess = (approx + x / approx) / 2 return approx ### Annotating Functions with Given Types¶ Let us now go and transform these trees ti add type annotations. We start with a helper function parse_type(name) which parses a type name into an AST. 
def parse_type(name): class ValueVisitor(ast.NodeVisitor): def visit_Expr(self, node): self.value_node = node.value tree = ast.parse(name) name_visitor = ValueVisitor() name_visitor.visit(tree) return name_visitor.value_node print(astor.dump_tree(parse_type('int'))) Name(id='int') print(astor.dump_tree(parse_type('[object]'))) List(elts=[Name(id='object')]) We now define a helper function that actually adds type annotations to a function AST. The TypeTransformer class builds on the Python standard library ast.NodeTransformer infrastructure. It would be called as TypeTransformer({'x': 'int'}, 'float').visit(ast) to annotate the arguments of my_sqrt(): x with int, and the return type with float. The returned AST can then be unparsed, compiled or analyzed. class TypeTransformer(ast.NodeTransformer): def __init__(self, argument_types, return_type=None): self.argument_types = argument_types self.return_type = return_type super().__init__() The core of TypeTransformer is the method visit_FunctionDef(), which is called for every function definition in the AST. Its argument node is the subtree of the function definition to be transformed. Our implementation accesses the individual arguments and invokes annotate_args() on them; it also sets the return type in the returns attribute of the node. class TypeTransformer(TypeTransformer): def visit_FunctionDef(self, node): # Set argument types new_args = [] for arg in node.args.args: new_args.append(self.annotate_arg(arg)) new_arguments = ast.arguments( new_args, node.args.vararg, node.args.kwonlyargs, node.args.kw_defaults, node.args.kwarg, node.args.defaults ) # Set return type if self.return_type is not None: node.returns = parse_type(self.return_type) return ast.copy_location(ast.FunctionDef(node.name, new_arguments, node.body, node.decorator_list, node.returns), node) Each argument gets its own annotation, taken from the types originally passed to the class: class TypeTransformer(TypeTransformer): def annotate_arg(self, arg): """Add annotation to single function argument""" arg_name = arg.arg if arg_name in self.argument_types: arg.annotation = parse_type(self.argument_types[arg_name]) return arg Does this work? Let us annotate the AST from my_sqrt() with types for the arguments and return types: new_ast = TypeTransformer({'x': 'int'}, 'float').visit(my_sqrt_ast) When we unparse the new AST, we see that the annotations actually are present: print_content(astor.to_source(new_ast), '.py') def my_sqrt(x: int) ->float: """Computes the square root of x, using the Newton-Raphson method""" approx = None guess = x / 2 while approx != guess: approx = guess guess = (approx + x / approx) / 2 return approx Similarly, we can annotate the hello() function from above: hello_source = inspect.getsource(hello) hello_ast = ast.parse(hello_source) new_ast = TypeTransformer({'name': 'str'}, 'None').visit(hello_ast) print_content(astor.to_source(new_ast), '.py') def hello(name: str) ->None: print('Hello,', name) ### Annotating Functions with Mined Types¶ Let us now annotate functions with types mined at runtime. We start with a simple function type_string() that determines the appropriate type of a given value (as a string): def type_string(value): return type(value).__name__ type_string(4) 'int' type_string([]) 'list' For composite structures, type_string() does not examine element types; hence, the type of [3] is simply list instead of, say, list[int]. For now, list will do fine. 
type_string([3]) 'list' type_string() will be used to infer the types of argument values found at runtime, as returned by CallTracker.calls(): with CallTracker() as tracker: y = my_sqrt(25.0) y = my_sqrt(2.0) tracker.calls() {'my_sqrt': [([('x', 25.0)], 5.0), ([('x', 2.0)], 1.414213562373095)]} The function annotate_types() takes such a list of calls and annotates each function listed: def annotate_types(calls): annotated_functions = {} for function_name in calls: try: annotated_functions[function_name] = annotate_function_with_types(function_name, calls[function_name]) except KeyError: continue return annotated_functions For each function, we get the source and its AST and then get to the actual annotation in annotate_function_ast_with_types(): def annotate_function_with_types(function_name, function_calls): function = globals()[function_name] # May raise KeyError for internal functions function_code = inspect.getsource(function) function_ast = ast.parse(function_code) return annotate_function_ast_with_types(function_ast, function_calls) The function annotate_function_ast_with_types() invokes the TypeTransformer with the calls seen, and for each call, iterate over the arguments, determine their types, and annotate the AST with these. The universal type Any is used when we encounter type conflicts, which we will discuss below. from typing import Any def annotate_function_ast_with_types(function_ast, function_calls): parameter_types = {} return_type = None for calls_seen in function_calls: args, return_value = calls_seen if return_value is not None: if return_type is not None and return_type != type_string(return_value): return_type = 'Any' else: return_type = type_string(return_value) for parameter, value in args: try: different_type = parameter_types[parameter] != type_string(value) except KeyError: different_type = False if different_type: parameter_types[parameter] = 'Any' else: parameter_types[parameter] = type_string(value) annotated_function_ast = TypeTransformer(parameter_types, return_type).visit(function_ast) return annotated_function_ast Here is my_sqrt() annotated with the types recorded usign the tracker, above. print_content(astor.to_source(annotate_types(tracker.calls())['my_sqrt']), '.py') def my_sqrt(x: float) ->float: """Computes the square root of x, using the Newton-Raphson method""" approx = None guess = x / 2 while approx != guess: approx = guess guess = (approx + x / approx) / 2 return approx ### All-in-one Annotation¶ Let us bring all of this together in a single class TypeAnnotator that first tracks calls of functions and then allows to access the AST (and the source code form) of the tracked functions annotated with types. The method typed_functions() returns the annotated functions as a string; typed_functions_ast() returns their AST. class TypeTracker(CallTracker): pass class TypeAnnotator(TypeTracker): def typed_functions_ast(self, function_name=None): if function_name is None: return annotate_types(self.calls()) return annotate_function_with_types(function_name, self.calls(function_name)) def typed_functions(self, function_name=None): if function_name is None: functions = '' for f_name in self.calls(): try: f_text = astor.to_source(self.typed_functions_ast(f_name)) except KeyError: f_text = '' functions += f_text return functions return astor.to_source(self.typed_functions_ast(function_name)) Here is how to use TypeAnnotator. 
We first track a series of calls: with TypeAnnotator() as annotator: y = my_sqrt(25.0) y = my_sqrt(2.0) After tracking, we can immediately retrieve an annotated version of the functions tracked: print_content(annotator.typed_functions(), '.py') def my_sqrt(x: float) ->float: """Computes the square root of x, using the Newton-Raphson method""" approx = None guess = x / 2 while approx != guess: approx = guess guess = (approx + x / approx) / 2 return approx This also works for multiple and diverse functions. One could go and implement an automatic type annotator for Python files based on the types seen during execution. with TypeAnnotator() as annotator: hello('type annotations') y = my_sqrt(1.0) Hello, type annotations print_content(annotator.typed_functions(), '.py') def hello(name: str): print('Hello,', name) def my_sqrt(x: float) ->float: """Computes the square root of x, using the Newton-Raphson method""" approx = None guess = x / 2 while approx != guess: approx = guess guess = (approx + x / approx) / 2 return approx A content as above could now be sent to a type checker, which would detect any type inconsistency between callers and callees. Likewise, type annotations such as the ones above greatly benefit symbolic code analysis (as in the chapter on symbolic fuzzing), as they effectively constrain the set of values that arguments and variables can take. ### Multiple Types¶ Let us now resolve the role of the magic Any type in annotate_function_ast_with_types(). If we see multiple types for the same argument, we set its type to Any. For my_sqrt(), this makes sense, as its arguments can be integers as well as floats: with CallTracker() as tracker: y = my_sqrt(25.0) y = my_sqrt(4) print_content(astor.to_source(annotate_types(tracker.calls())['my_sqrt']), '.py') def my_sqrt(x: Any) ->float: """Computes the square root of x, using the Newton-Raphson method""" approx = None guess = x / 2 while approx != guess: approx = guess guess = (approx + x / approx) / 2 return approx The following function sum3() can be called with floating-point numbers as arguments, resulting in the parameters getting a float type: def sum3(a, b, c): return a + b + c with TypeAnnotator() as annotator: y = sum3(1.0, 2.0, 3.0) y 6.0 print_content(annotator.typed_functions(), '.py') def sum3(a: float, b: float, c: float) ->float: return a + b + c If we call sum3() with integers, though, the arguments get an int type: with TypeAnnotator() as annotator: y = sum3(1, 2, 3) y 6 print_content(annotator.typed_functions(), '.py') def sum3(a: int, b: int, c: int) ->int: return a + b + c And we can also call sum3() with strings, giving the arguments a str type: with TypeAnnotator() as annotator: y = sum3("one", "two", "three") y 'onetwothree' print_content(annotator.typed_functions(), '.py') def sum3(a: str, b: str, c: str) ->str: return a + b + c If we have multiple calls, but with different types, TypeAnnotator() will assign an Any type to both arguments and return values: with TypeAnnotator() as annotator: y = sum3(1, 2, 3) y = sum3("one", "two", "three") typed_sum3_def = annotator.typed_functions('sum3') print_content(typed_sum3_def, '.py') def sum3(a: Any, b: Any, c: Any) ->Any: return a + b + c A type Any makes it explicit that an object can, indeed, have any type; it will not be typechecked at runtime or statically. To some extent, this defeats the power of type checking; but it also preserves some of the type flexibility that many Python programmers enjoy. 
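If one wanted to be more precise than Any, one could record every observed type name and combine them into a Union annotation instead; a small sketch (the helper union_type_string is hypothetical and not part of the chapter's annotator):

from typing import Union  # Union[int, str] is the kind of annotation the string below names

def union_type_string(observed_types):
    """Combine a collection of observed type names into a single annotation string."""
    names = sorted(set(observed_types))
    if len(names) == 1:
        return names[0]
    return "Union[" + ", ".join(names) + "]"

# union_type_string(['int'])        -> 'int'
# union_type_string(['int', 'str']) -> 'Union[int, str]'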
Besides Any, the typing module supports several additional ways to define ambiguous types; we will keep this in mind for a later exercise. ## Specifying and Checking Invariants¶ Besides basic data types, we can check several further properties of arguments. We can, for instance, check whether an argument can be negative, zero, or positive; or that one argument should be smaller than the second; or that the result should be the sum of two arguments – properties that cannot be expressed in a (Python) type. Such properties are called invariants, as they hold across all invocations of a function. Specifically, invariants come as pre- and postconditions – conditions that always hold at the beginning and at the end of a function. (There are also data and object invariants that express always-holding properties over the state of data or objects, but we do not consider these in this book.) ### Annotating Functions with Pre- and Postconditions¶ The classical means to specify pre- and postconditions is via assertions, which we have introduced in the chapter on testing. A precondition checks whether the arguments to a function satisfy the expected properties; a postcondition does the same for the result. We can express and check both using assertions as follows: def my_sqrt_with_invariants(x): assert x >= 0 # Precondition ... assert result * result == x # Postcondition return result A nicer way, however, is to syntactically separate invariants from the function at hand. Using appropriate decorators, we could specify pre- and postconditions as follows: @precondition(lambda x: x >= 0) @postcondition(lambda return_value, x: return_value * return_value == x) def my_sqrt_with_invariants(x): # normal code without assertions ... The decorators @precondition and @postcondition would run the given functions (specified as anonymous lambda functions) before and after the decorated function, respectively. If the functions return False, the condition is violated. @precondition gets the function arguments as arguments; @postcondition additionally gets the return value as first argument. It turns out that implementing such decorators is not hard at all.
Our implementation builds on a code snippet from StackOverflow: import functools def condition(precondition=None, postcondition=None): def decorator(func): @functools.wraps(func) # preserves name, docstring, etc def wrapper(*args, **kwargs): if precondition is not None: assert precondition(*args, **kwargs), "Precondition violated" retval = func(*args, **kwargs) # call original function or method if postcondition is not None: assert postcondition(retval, *args, **kwargs), "Postcondition violated" return retval return wrapper return decorator def precondition(check): return condition(precondition=check) def postcondition(check): return condition(postcondition=check) With these, we can now start decorating my_sqrt(): @precondition(lambda x: x > 0) def my_sqrt_with_precondition(x): return my_sqrt(x) This catches arguments violating the precondition: with ExpectError(): my_sqrt_with_precondition(-1.0) Traceback (most recent call last): File "<ipython-input-102-c02dc99b6c54>", line 2, in <module> my_sqrt_with_precondition(-1.0) File "<ipython-input-100-39ada1fd0b7e>", line 6, in wrapper assert precondition(*args, **kwargs), "Precondition violated" AssertionError: Precondition violated (expected) Likewise, we can provide a postcondition: EPSILON = 1e-5 @postcondition(lambda ret, x: ret * ret - x < EPSILON) def my_sqrt_with_postcondition(x): return my_sqrt(x) y = my_sqrt_with_postcondition(2.0) y 1.414213562373095 If we have a buggy implementation of $\sqrt{x}$, this gets caught quickly: @postcondition(lambda ret, x: ret * ret - x < EPSILON) def buggy_my_sqrt_with_postcondition(x): return my_sqrt(x) + 0.1 with ExpectError(): y = buggy_my_sqrt_with_postcondition(2.0) Traceback (most recent call last): File "<ipython-input-107-38a36260c5b6>", line 2, in <module> y = buggy_my_sqrt_with_postcondition(2.0) File "<ipython-input-100-39ada1fd0b7e>", line 10, in wrapper assert postcondition(retval, *args, **kwargs), "Postcondition violated" AssertionError: Postcondition violated (expected) While checking pre- and postconditions is a great way to catch errors, specifying them can be cumbersome. Let us try to see whether we can (again) mine some of them. ## Mining Invariants¶ To mine invariants, we can use the same tracking functionality as before; instead of saving values for individual variables, though, we now check whether the values satisfy specific properties or not. For instance, if all values of x seen satisfy the condition x > 0, then we make x > 0 an invariant of the function. If we see positive, zero, and negative values of x, though, then there is no property of x left to talk about. The general idea is thus: 1. Check all variable values observed against a set of predefined properties; and 2. Keep only those properties that hold for all runs observed. ### Defining Properties¶ What precisely do we mean by properties? Here is a small collection of value properties that would frequently be used in invariants. All these properties would be evaluated with the metavariables X, Y, and Z (actually, any upper-case identifier) being replaced with the names of function parameters: INVARIANT_PROPERTIES = [ "X < 0", "X <= 0", "X > 0", "X >= 0", "X == 0", "X != 0", ] When my_sqrt(x) is called as, say my_sqrt(5.0), we see that x = 5.0 holds. The above properties would then all be checked for x. Only the properties X > 0, X >= 0, and X != 0 hold for the call seen; and hence x > 0, x >= 0, and x != 0 would make potential preconditions for my_sqrt(x). 
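To make this concrete, here is a hand-rolled check of the sign properties for x = 5.0 (an illustration only; the mining machinery below automates exactly this kind of check):

x = 5.0
for prop in ["X < 0", "X <= 0", "X > 0", "X >= 0", "X == 0", "X != 0"]:
    # Substitute the metavariable X by the concrete variable name and evaluate.
    holds = eval(prop.replace("X", "x"))
    print(prop, "->", holds)
# Only "X > 0", "X >= 0", and "X != 0" print True, matching the text above.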
We can check for many more properties such as relations between two arguments: INVARIANT_PROPERTIES += [ "X == Y", "X > Y", "X < Y", "X >= Y", "X <= Y", ] Types also can be checked using properties. For any function parameter X, only one of these will hold: INVARIANT_PROPERTIES += [ "isinstance(X, bool)", "isinstance(X, int)", "isinstance(X, float)", "isinstance(X, list)", "isinstance(X, dict)", ] We can check for arithmetic properties: INVARIANT_PROPERTIES += [ "X == Y + Z", "X == Y * Z", "X == Y - Z", "X == Y / Z", ] Here's relations over three values, a Python special: INVARIANT_PROPERTIES += [ "X < Y < Z", "X <= Y <= Z", "X > Y > Z", "X >= Y >= Z", ] Finally, we can also check for list or string properties. Again, this is just a tiny selection. INVARIANT_PROPERTIES += [ "X == len(Y)", "X == sum(Y)", "X.startswith(Y)", ] ### Extracting Meta-Variables¶ Let us first introduce a few helper functions before we can get to the actual mining. metavars() extracts the set of meta-variables (X, Y, Z, etc.) from a property. To this end, we parse the property as a Python expression and then visit the identifiers. def metavars(prop): metavar_list = [] class ArgVisitor(ast.NodeVisitor): def visit_Name(self, node): if node.id.isupper(): metavar_list.append(node.id) ArgVisitor().visit(ast.parse(prop)) return metavar_list assert metavars("X < 0") == ['X'] assert metavars("X.startswith(Y)") == ['X', 'Y'] assert metavars("isinstance(X, str)") == ['X'] ### Instantiating Properties¶ To produce a property as invariant, we need to be able to instantiate it with variable names. The instantiation of X > 0 with X being instantiated to a, for instance, gets us a > 0. To this end, the function instantiate_prop() takes a property and a collection of variable names and instantiates the meta-variables left-to-right with the corresponding variables names in the collection. def instantiate_prop_ast(prop, var_names): class NameTransformer(ast.NodeTransformer): def visit_Name(self, node): if node.id not in mapping: return node meta_variables = metavars(prop) assert len(meta_variables) == len(var_names) mapping = {} for i in range(0, len(meta_variables)): mapping[meta_variables[i]] = var_names[i] prop_ast = ast.parse(prop, mode='eval') new_ast = NameTransformer().visit(prop_ast) return new_ast def instantiate_prop(prop, var_names): prop_ast = instantiate_prop_ast(prop, var_names) prop_text = astor.to_source(prop_ast).strip() while prop_text.startswith('(') and prop_text.endswith(')'): prop_text = prop_text[1:-1] return prop_text assert instantiate_prop("X > Y", ['a', 'b']) == 'a > b' assert instantiate_prop("X.startswith(Y)", ['x', 'y']) == 'x.startswith(y)' ### Evaluating Properties¶ To actually evaluate properties, we do not need to instantiate them. Instead, we simply convert them into a boolean function, using lambda: def prop_function_text(prop): return "lambda " + ", ".join(metavars(prop)) + ": " + prop def prop_function(prop): return eval(prop_function_text(prop)) Here is a simple example: prop_function_text("X > Y") 'lambda X, Y: X > Y' p = prop_function("X > Y") p(100, 1) True p(1, 100) False ### Checking Invariants¶ To extract invariants from an execution, we need to check them on all possible instantiations of arguments. If the function to be checked has two arguments a and b, we instantiate the property X < Y both as a < b and b < a and check each of them. 
To get all combinations, we use the Python permutations() function: import itertools for combination in itertools.permutations([1.0, 2.0, 3.0], 2): print(combination) (1.0, 2.0) (1.0, 3.0) (2.0, 1.0) (2.0, 3.0) (3.0, 1.0) (3.0, 2.0) The function true_property_instantiations() takes a property and a list of tuples (var_name, value). It then produces all instantiations of the property with the given values and returns those that evaluate to True. def true_property_instantiations(prop, vars_and_values, log=False): instantiations = set() p = prop_function(prop) len_metavars = len(metavars(prop)) for combination in itertools.permutations(vars_and_values, len_metavars): args = [value for var_name, value in combination] var_names = [var_name for var_name, value in combination] try: result = p(*args) except: result = None if log: print(prop, combination, result) if result: instantiations.add((prop, tuple(var_names))) return instantiations Here is an example. If x == -1 and y == 1, the property X < Y holds for x < y, but not for y < x: invs = true_property_instantiations("X < Y", [('x', -1), ('y', 1)], log=True) invs X < Y (('x', -1), ('y', 1)) True X < Y (('y', 1), ('x', -1)) False {('X < Y', ('x', 'y'))} The instantiation retrieves the short form: for prop, var_names in invs: print(instantiate_prop(prop, var_names)) x < y Likewise, with values for x and y as above, the property X < 0 only holds for x, but not for y: invs = true_property_instantiations("X < 0", [('x', -1), ('y', 1)], log=True) X < 0 (('x', -1),) True X < 0 (('y', 1),) False for prop, var_names in invs: print(instantiate_prop(prop, var_names)) x < 0 ### Extracting Invariants¶ Let us now run the above invariant extraction on function arguments and return values as observed during a function execution. To this end, we extend the CallTracker class into an InvariantTracker class, which automatically computes invariants for all functions and all calls observed during tracking. By default, an InvariantTracker uses the properties as defined above; however, one can specify alternate sets of properties. class InvariantTracker(CallTracker): def __init__(self, props=None, **kwargs): if props is None: props = INVARIANT_PROPERTIES self.props = props super().__init__(**kwargs) The key method of the InvariantTracker is the invariants() method. This iterates over the calls observed and checks which properties hold. Only the intersection of properties – that is, the set of properties that hold for all calls – is preserved, and eventually returned. The special variable return_value is set to hold the return value. RETURN_VALUE = 'return_value' class InvariantTracker(InvariantTracker): def invariants(self, function_name=None): if function_name is None: return {function_name: self.invariants(function_name) for function_name in self.calls()} invariants = None for variables, return_value in self.calls(function_name): vars_and_values = variables + [(RETURN_VALUE, return_value)] s = set() for prop in self.props: s |= true_property_instantiations(prop, vars_and_values, self._log) if invariants is None: invariants = s else: invariants &= s return invariants Here's an example of how to use invariants(). We run the tracker on a small set of calls. with InvariantTracker() as tracker: y = my_sqrt(25.0) y = my_sqrt(10.0) tracker.calls() {'my_sqrt': [([('x', 25.0)], 5.0), ([('x', 10.0)], 3.162277660168379)]} The invariants() method produces a set of properties that hold for the observed runs, together with their instantiations over function arguments.
invs = tracker.invariants('my_sqrt') invs {('X != 0', ('return_value',)), ('X != 0', ('x',)), ('X < Y', ('return_value', 'x')), ('X <= Y', ('return_value', 'x')), ('X > 0', ('return_value',)), ('X > 0', ('x',)), ('X > Y', ('x', 'return_value')), ('X >= 0', ('return_value',)), ('X >= 0', ('x',)), ('X >= Y', ('x', 'return_value')), ('isinstance(X, float)', ('return_value',)), ('isinstance(X, float)', ('x',))} As before, the actual instantiations are easier to read: def pretty_invariants(invariants): props = [] for (prop, var_names) in invariants: props.append(instantiate_prop(prop, var_names)) return sorted(props) pretty_invariants(invs) ['isinstance(return_value, float)', 'isinstance(x, float)', 'return_value != 0', 'return_value < x', 'return_value <= x', 'return_value > 0', 'return_value >= 0', 'x != 0', 'x > 0', 'x > return_value', 'x >= 0', 'x >= return_value'] We see that both x and the return value have a float type. We also see that both are always greater than zero. These are properties that may make useful pre- and postconditions, notably for symbolic analysis. However, there's also an invariant which does not universally hold, namely return_value <= x, as the following example shows: my_sqrt(0.01) 0.1 Clearly, 0.1 > 0.01 holds. This is a case of us not learning from sufficiently diverse inputs. As soon as we have a call including x = 0.1, though, the invariant return_value <= x is eliminated: with InvariantTracker() as tracker: y = my_sqrt(25.0) y = my_sqrt(10.0) y = my_sqrt(0.01) pretty_invariants(tracker.invariants('my_sqrt')) ['isinstance(return_value, float)', 'isinstance(x, float)', 'return_value != 0', 'return_value > 0', 'return_value >= 0', 'x != 0', 'x > 0', 'x >= 0'] We will discuss later how to ensure sufficient diversity in inputs. (Hint: This involves test generation.) Let us try out our invariant tracker on sum3(). We see that all types are well-defined; the property that all arguments are non-zero, however, is specific to the calls observed. with InvariantTracker() as tracker: y = sum3(1, 2, 3) y = sum3(-4, -5, -6) pretty_invariants(tracker.invariants('sum3')) ['a != 0', 'b != 0', 'c != 0', 'isinstance(a, int)', 'isinstance(b, int)', 'isinstance(c, int)', 'isinstance(return_value, int)', 'return_value != 0'] If we invoke sum3() with strings instead, we get different invariants. Notably, we obtain the postcondition that the return value starts with the value of a – a universal postcondition if strings are used. with InvariantTracker() as tracker: y = sum3('a', 'b', 'c') y = sum3('f', 'e', 'd') pretty_invariants(tracker.invariants('sum3')) ['a != 0', 'a < return_value', 'a <= return_value', 'b != 0', 'c != 0', 'return_value != 0', 'return_value > a', 'return_value >= a', 'return_value.startswith(a)'] If we invoke sum3() with both strings and numbers (and zeros, too), there are no properties left that would hold across all calls. That's the price of flexibility. with InvariantTracker() as tracker: y = sum3('a', 'b', 'c') y = sum3('c', 'b', 'a') y = sum3(-4, -5, -6) y = sum3(0, 0, 0) pretty_invariants(tracker.invariants('sum3')) [] ### Converting Mined Invariants to Annotations¶ As with types above, we would like to have some functionality where we can add the mined invariants as annotations to existing functions. To this end, we introduce the InvariantAnnotator class, extending InvariantTracker. We start with a helper method. params() returns a comma-separated list of parameter names as observed during calls.
class InvariantAnnotator(InvariantTracker): def params(self, function_name): arguments, return_value = self.calls(function_name)[0] return ", ".join(arg_name for (arg_name, arg_value) in arguments) with InvariantAnnotator() as annotator: y = my_sqrt(25.0) y = sum3(1, 2, 3) annotator.params('my_sqrt') 'x' annotator.params('sum3') 'a, b, c' Now for the actual annotation. preconditions() returns the preconditions from the mined invariants (i.e., those properties that do not depend on the return value) as a string with annotations: class InvariantAnnotator(InvariantAnnotator): def preconditions(self, function_name): conditions = [] for inv in pretty_invariants(self.invariants(function_name)): if inv.find(RETURN_VALUE) >= 0: continue # Postcondition cond = "@precondition(lambda " + self.params(function_name) + ": " + inv + ")" conditions.append(cond) return conditions with InvariantAnnotator() as annotator: y = my_sqrt(25.0) y = my_sqrt(0.01) y = sum3(1, 2, 3) annotator.preconditions('my_sqrt') ['@precondition(lambda x: isinstance(x, float))', '@precondition(lambda x: x != 0)', '@precondition(lambda x: x > 0)', '@precondition(lambda x: x >= 0)'] postconditions() does the same for postconditions: class InvariantAnnotator(InvariantAnnotator): def postconditions(self, function_name): conditions = [] for inv in pretty_invariants(self.invariants(function_name)): if inv.find(RETURN_VALUE) < 0: continue # Precondition cond = ("@postcondition(lambda " + RETURN_VALUE + ", " + self.params(function_name) + ": " + inv + ")") conditions.append(cond) return conditions with InvariantAnnotator() as annotator: y = my_sqrt(25.0) y = my_sqrt(0.01) y = sum3(1, 2, 3) annotator.postconditions('my_sqrt') ['@postcondition(lambda return_value, x: isinstance(return_value, float))', '@postcondition(lambda return_value, x: return_value != 0)', '@postcondition(lambda return_value, x: return_value > 0)', '@postcondition(lambda return_value, x: return_value >= 0)'] With these, we can take a function and add both pre- and postconditions as annotations: class InvariantAnnotator(InvariantAnnotator): def functions_with_invariants(self): functions = "" for function_name in self.invariants(): try: function = self.function_with_invariants(function_name) except KeyError: continue functions += function return functions def function_with_invariants(self, function_name): function = globals()[function_name] # Can throw KeyError source = inspect.getsource(function) return "\n".join(self.preconditions(function_name) + self.postconditions(function_name)) + '\n' + source Here comes function_with_invariants() in all its glory: with InvariantAnnotator() as annotator: y = my_sqrt(25.0) y = my_sqrt(0.01) y = sum3(1, 2, 3) print_content(annotator.function_with_invariants('my_sqrt'), '.py') @precondition(lambda x: isinstance(x, float)) @precondition(lambda x: x != 0) @precondition(lambda x: x > 0) @precondition(lambda x: x >= 0) @postcondition(lambda return_value, x: isinstance(return_value, float)) @postcondition(lambda return_value, x: return_value != 0) @postcondition(lambda return_value, x: return_value > 0) @postcondition(lambda return_value, x: return_value >= 0) def my_sqrt(x): """Computes the square root of x, using the Newton-Raphson method""" approx = None guess = x / 2 while approx != guess: approx = guess guess = (approx + x / approx) / 2 return approx Quite a lot of invariants, isn't it? Further below (and in the exercises), we will discuss how to focus on the most relevant properties. ### Some Examples¶ Here's another example.
list_length() recursively computes the length of a Python list. Let us see whether we can mine its invariants: def list_length(L): if L == []: length = 0 else: length = 1 + list_length(L[1:]) return length with InvariantAnnotator() as annotator: length = list_length([1, 2, 3]) print_content(annotator.functions_with_invariants(), '.py') @precondition(lambda L: L != 0) @precondition(lambda L: isinstance(L, list)) @postcondition(lambda return_value, L: isinstance(return_value, int)) @postcondition(lambda return_value, L: return_value == len(L)) @postcondition(lambda return_value, L: return_value >= 0) def list_length(L): if L == []: length = 0 else: length = 1 + list_length(L[1:]) return length Almost all these properties (except for the very first) are relevant. Of course, the reason the mined invariant return_value == len(L) comes out so neatly is that X == len(Y) is part of the list of properties to be checked. The next example is a very simple function: def sum2(a, b): return a + b with InvariantAnnotator() as annotator: sum2(31, 45) sum2(0, 0) sum2(-1, -5) The invariants all capture the relationship between a, b, and the return value as return_value == a + b in all its variations. print_content(annotator.functions_with_invariants(), '.py') @precondition(lambda a, b: isinstance(a, int)) @precondition(lambda a, b: isinstance(b, int)) @postcondition(lambda return_value, a, b: a == return_value - b) @postcondition(lambda return_value, a, b: b == return_value - a) @postcondition(lambda return_value, a, b: isinstance(return_value, int)) @postcondition(lambda return_value, a, b: return_value == a + b) @postcondition(lambda return_value, a, b: return_value == b + a) def sum2(a, b): return a + b If we have a function without a return value, the return value is None and we can only mine preconditions. (Well, we get a "postcondition" that the return value is non-zero, which holds for None). def print_sum(a, b): print(a + b) with InvariantAnnotator() as annotator: print_sum(31, 45) print_sum(0, 0) print_sum(-1, -5) 76 0 -6 print_content(annotator.functions_with_invariants(), '.py') @precondition(lambda a, b: isinstance(a, int)) @precondition(lambda a, b: isinstance(b, int)) @postcondition(lambda return_value, a, b: return_value != 0) def print_sum(a, b): print(a + b) ### Checking Specifications¶ A function with invariants, as above, can be fed into the Python interpreter, such that all pre- and postconditions are checked. We create a function my_sqrt_annotated() which includes all the invariants mined above.
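Note that executing such an annotated function requires the @precondition and @postcondition decorators from earlier in this chapter to be in scope; they are not repeated here. A minimal sketch consistent with the assertion messages that appear in the tracebacks below would be the following (the exercise on verbose invariant checkers at the end of the chapter shows a richer variant).

```python
import functools

# Minimal sketch of the assumed decorators; the chapter's own definitions
# appear earlier and may differ in detail.
def precondition(check):
    def decorator(func):
        @functools.wraps(func)  # preserves name, docstring, etc.
        def wrapper(*args, **kwargs):
            assert check(*args, **kwargs), "Precondition violated"
            return func(*args, **kwargs)  # call original function or method
        return wrapper
    return decorator

def postcondition(check):
    def decorator(func):
        @functools.wraps(func)  # preserves name, docstring, etc.
        def wrapper(*args, **kwargs):
            retval = func(*args, **kwargs)  # call original function or method
            assert check(retval, *args, **kwargs), "Postcondition violated"
            return retval
        return wrapper
    return decorator
```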
with InvariantAnnotator() as annotator: y = my_sqrt(25.0) y = my_sqrt(0.01) my_sqrt_def = annotator.functions_with_invariants() my_sqrt_def = my_sqrt_def.replace('my_sqrt', 'my_sqrt_annotated') print_content(my_sqrt_def, '.py') @precondition(lambda x: isinstance(x, float)) @precondition(lambda x: x != 0) @precondition(lambda x: x > 0) @precondition(lambda x: x >= 0) @postcondition(lambda return_value, x: isinstance(return_value, float)) @postcondition(lambda return_value, x: return_value != 0) @postcondition(lambda return_value, x: return_value > 0) @postcondition(lambda return_value, x: return_value >= 0) def my_sqrt_annotated(x): """Computes the square root of x, using the Newton-Raphson method""" approx = None guess = x / 2 while approx != guess: approx = guess guess = (approx + x / approx) / 2 return approx exec(my_sqrt_def) The "annotated" version checks against invalid arguments – or more precisely, against arguments with properties that have not been observed yet: with ExpectError(): my_sqrt_annotated(-1.0) Traceback (most recent call last): File "<ipython-input-170-c3c5c372ccd1>", line 2, in <module> my_sqrt_annotated(-1.0) File "<ipython-input-100-39ada1fd0b7e>", line 8, in wrapper retval = func(*args, **kwargs) # call original function or method File "<ipython-input-100-39ada1fd0b7e>", line 8, in wrapper retval = func(*args, **kwargs) # call original function or method File "<ipython-input-100-39ada1fd0b7e>", line 6, in wrapper assert precondition(*args, **kwargs), "Precondition violated" AssertionError: Precondition violated (expected) This is in contrast to the original version, which just hangs on negative values: with ExpectTimeout(1): my_sqrt(-1.0) Traceback (most recent call last): my_sqrt(-1.0) File "<ipython-input-5-47185ad159a1>", line 5, in my_sqrt while approx != guess: File "<ipython-input-5-47185ad159a1>", line 5, in my_sqrt while approx != guess: File "ExpectError.ipynb", line 59, in check_time TimeoutError (expected) If we make changes to the function definition such that the properties of the return value change, such regressions are caught as violations of the postconditions. Let us illustrate this by simply inverting the result, and return $-2$ as square root of 4. my_sqrt_def = my_sqrt_def.replace('my_sqrt_annotated', 'my_sqrt_negative') my_sqrt_def = my_sqrt_def.replace('return approx', 'return -approx') print_content(my_sqrt_def, '.py') @precondition(lambda x: isinstance(x, float)) @precondition(lambda x: x != 0) @precondition(lambda x: x > 0) @precondition(lambda x: x >= 0) @postcondition(lambda return_value, x: isinstance(return_value, float)) @postcondition(lambda return_value, x: return_value != 0) @postcondition(lambda return_value, x: return_value > 0) @postcondition(lambda return_value, x: return_value >= 0) def my_sqrt_negative(x): """Computes the square root of x, using the Newton-Raphson method""" approx = None guess = x / 2 while approx != guess: approx = guess guess = (approx + x / approx) / 2 return -approx exec(my_sqrt_def) Technically speaking, $-2$ is a square root of 4, since $(-2)^2 = 4$ holds. 
Yet, such a change may be unexpected by callers of my_sqrt(), and hence, this would be caught with the first call: with ExpectError(): my_sqrt_negative(2.0) Traceback (most recent call last): File "<ipython-input-175-c80e4295dbf8>", line 2, in <module> my_sqrt_negative(2.0) File "<ipython-input-100-39ada1fd0b7e>", line 8, in wrapper retval = func(*args, **kwargs) # call original function or method File "<ipython-input-100-39ada1fd0b7e>", line 8, in wrapper retval = func(*args, **kwargs) # call original function or method File "<ipython-input-100-39ada1fd0b7e>", line 8, in wrapper retval = func(*args, **kwargs) # call original function or method [Previous line repeated 4 more times] File "<ipython-input-100-39ada1fd0b7e>", line 10, in wrapper assert postcondition(retval, *args, **kwargs), "Postcondition violated" AssertionError: Postcondition violated (expected) We see how pre- and postconditions, as well as types, can serve as oracles during testing. In particular, once we have mined them for a set of functions, we can check them again and again with test generators – especially after code changes. The more checks we have, and the more specific they are, the more likely it is we can detect unwanted effects of changes. ## Mining Specifications from Generated Tests¶ Mined specifications can only be as good as the executions they were mined from. If we only see a single call to, say, sum2(), we will be faced with several mined pre- and postconditions that overspecialize towards the values seen: def sum2(a, b): return a + b with InvariantAnnotator() as annotator: y = sum2(2, 2) print_content(annotator.functions_with_invariants(), '.py') @precondition(lambda a, b: a != 0) @precondition(lambda a, b: a <= b) @precondition(lambda a, b: a == b) @precondition(lambda a, b: a > 0) @precondition(lambda a, b: a >= 0) @precondition(lambda a, b: a >= b) @precondition(lambda a, b: b != 0) @precondition(lambda a, b: b <= a) @precondition(lambda a, b: b == a) @precondition(lambda a, b: b > 0) @precondition(lambda a, b: b >= 0) @precondition(lambda a, b: b >= a) @precondition(lambda a, b: isinstance(a, int)) @precondition(lambda a, b: isinstance(b, int)) @postcondition(lambda return_value, a, b: a < return_value) @postcondition(lambda return_value, a, b: a <= b <= return_value) @postcondition(lambda return_value, a, b: a <= return_value) @postcondition(lambda return_value, a, b: a == return_value - b) @postcondition(lambda return_value, a, b: a == return_value / b) @postcondition(lambda return_value, a, b: b < return_value) @postcondition(lambda return_value, a, b: b <= a <= return_value) @postcondition(lambda return_value, a, b: b <= return_value) @postcondition(lambda return_value, a, b: b == return_value - a) @postcondition(lambda return_value, a, b: b == return_value / a) @postcondition(lambda return_value, a, b: isinstance(return_value, int)) @postcondition(lambda return_value, a, b: return_value != 0) @postcondition(lambda return_value, a, b: return_value == a * b) @postcondition(lambda return_value, a, b: return_value == a + b) @postcondition(lambda return_value, a, b: return_value == b * a) @postcondition(lambda return_value, a, b: return_value == b + a) @postcondition(lambda return_value, a, b: return_value > 0) @postcondition(lambda return_value, a, b: return_value > a) @postcondition(lambda return_value, a, b: return_value > b) @postcondition(lambda return_value, a, b: return_value >= 0) @postcondition(lambda return_value, a, b: return_value >= a) @postcondition(lambda return_value, a, b: 
return_value >= a >= b) @postcondition(lambda return_value, a, b: return_value >= b) @postcondition(lambda return_value, a, b: return_value >= b >= a) def sum2(a, b): return a + b The mined precondition a == b, for instance, only holds for the single call observed; the same holds for the mined postcondition return_value == a * b. Yet, sum2() can obviously be successfully called with other values that do not satisfy these conditions. To get out of this trap, we have to learn from more and more diverse runs. If we have a few more calls of sum2(), we see how the set of invariants quickly gets smaller: with InvariantAnnotator() as annotator: length = sum2(1, 2) length = sum2(-1, -2) length = sum2(0, 0) print_content(annotator.functions_with_invariants(), '.py') @precondition(lambda a, b: isinstance(a, int)) @precondition(lambda a, b: isinstance(b, int)) @postcondition(lambda return_value, a, b: a == return_value - b) @postcondition(lambda return_value, a, b: b == return_value - a) @postcondition(lambda return_value, a, b: isinstance(return_value, int)) @postcondition(lambda return_value, a, b: return_value == a + b) @postcondition(lambda return_value, a, b: return_value == b + a) def sum2(a, b): return a + b But where to we get such diverse runs from? This is the job of generating software tests. A simple grammar for calls of sum2() will easily resolve the problem. from GrammarFuzzer import GrammarFuzzer # minor dependency from Grammars import is_valid_grammar, crange, convert_ebnf_grammar # minor dependency SUM2_EBNF_GRAMMAR = { "<start>": ["<sum2>"], "<sum2>": ["sum2(<int>, <int>)"], "<int>": ["<_int>"], "<digit>": crange('0', '9') } assert is_valid_grammar(SUM2_EBNF_GRAMMAR) sum2_grammar = convert_ebnf_grammar(SUM2_EBNF_GRAMMAR) sum2_fuzzer = GrammarFuzzer(sum2_grammar) [sum2_fuzzer.fuzz() for i in range(10)] ['sum2(60, 3)', 'sum2(-4, 0)', 'sum2(-579, 34)', 'sum2(3, 0)', 'sum2(-8, 0)', 'sum2(0, 8)', 'sum2(3, -9)', 'sum2(0, 0)', 'sum2(0, 5)', 'sum2(-3181, 0)'] with InvariantAnnotator() as annotator: for i in range(10): eval(sum2_fuzzer.fuzz()) print_content(annotator.function_with_invariants('sum2'), '.py') @precondition(lambda a, b: a != 0) @precondition(lambda a, b: isinstance(a, int)) @precondition(lambda a, b: isinstance(b, int)) @postcondition(lambda return_value, a, b: a == return_value - b) @postcondition(lambda return_value, a, b: b == return_value - a) @postcondition(lambda return_value, a, b: isinstance(return_value, int)) @postcondition(lambda return_value, a, b: return_value != 0) @postcondition(lambda return_value, a, b: return_value == a + b) @postcondition(lambda return_value, a, b: return_value == b + a) def sum2(a, b): return a + b But then, writing tests (or a test driver) just to derive a set of pre- and postconditions may possibly be too much effort – in particular, since tests can easily be derived from given pre- and postconditions in the first place. Hence, it would be wiser to first specify invariants and then let test generators or program provers do the job. Also, an API grammar, such as above, will have to be set up such that it actually respects preconditions – in our case, we invoke sqrt() with positive numbers only, already assuming its precondition. In some way, one thus needs a specification (a model, a grammar) to mine another specification – a chicken-and-egg problem. However, there is one way out of this problem: If one can automatically generate tests at the system level, then one has an infinite source of executions to learn invariants from. 
In each of these executions, all functions would be called with values that satisfy the (implicit) precondition, allowing us to mine invariants for these functions. This holds, because at the system level, invalid inputs must be rejected by the system in the first place. The meaningful precondition at the system level, ensuring that only valid inputs get through, thus gets broken down into a multitude of meaningful preconditions (and subsequent postconditions) at the function level. The big requirement for this, though, is that one needs good test generators at the system level. In the next part, we will discuss how to automatically generate tests for a variety of domains, from configuration to graphical user interfaces. ## Lessons Learned¶ • Type annotations and explicit invariants allow for checking arguments and results for expected data types and other properties. • One can automatically mine data types and invariants by observing arguments and results at runtime. • The quality of mined invariants depends on the diversity of values observed during executions; this variety can be increased by generating tests. ## Next Steps¶ This chapter concludes the part on semantical fuzzing techniques. In the next part, we will explore domain-specific fuzzing techniques from configurations and APIs to graphical user interfaces. ## Background¶ The DAIKON dynamic invariant detector can be considered the mother of function specification miners. Continuously maintained and extended for more than 20 years, it mines likely invariants in the style of this chapter for a variety of languages, including C, C++, C#, Eiffel, F#, Java, Perl, and Visual Basic. On top of the functionality discussed above, it holds a rich catalog of patterns for likely invariants, supports data invariants, can eliminate invariants that are implied by others, and determines statistical confidence to disregard unlikely invariants. The corresponding paper [Ernst et al, 2001.] is one of the seminal and most-cited papers of Software Engineering. A multitude of works have been published based on DAIKON and detecting invariants; see this curated list for details. The interaction between test generators and invariant detection is already discussed in [Ernst et al, 2001.] (incidentally also using grammars). The Eclat tool [Pacheco et al, 2005.] is a model example of tight interaction between a unit-level test generator and DAIKON-style invariant mining, where the mined invariants are used to produce oracles and to systematically guide the test generator towards fault-revealing inputs. Mining specifications is not restricted to pre- and postconditions. The paper "Mining Specifications" [Ammons et al, 2002.] is another classic in the field, learning state protocols from executions. Grammar mining, as described in our chapter with the same name can also be seen as a specification mining approach, this time learning specifications for input formats. As it comes to adding type annotations to existing code, the blog post "The state of type hints in Python" gives a great overview on how Python type hints can be used and checked. To add type annotations, there are two important tools available that also implement our above approach: • MonkeyType implements the above approach of tracing executions and annotating Python 3 arguments, returns, and variables with type hints. • PyAnnotate does a similar job, focusing on code in Python 2. 
It does not produce Python 3-style annotations, but instead produces annotations as comments that can be processed by static type checkers. These tools have been created by engineers at Facebook and Dropbox, respectively, assisting them in checking millions of lines of code for type issues. ## Exercises¶ Our code for mining types and invariants is in no way complete. There are dozens of ways to extend our implementations, some of which we discuss in exercises. ### Exercise 1: Union Types¶ The Python typing module allows to express that an argument can have multiple types. For my_sqrt(x), this allows to express that x can be an int or a float: from typing import Union, Optional def my_sqrt_with_union_type(x: Union[int, float]) -> float: ... Extend the TypeAnnotator such that it supports union types for arguments and return values. Use Optional[X] as a shorthand for Union[X, None]. ### Exercise 2: Types for Local Variables¶ In Python, one cannot only annotate arguments with types, but actually also local and global variables – for instance, approx and guess in our my_sqrt() implementation: def my_sqrt_with_local_types(x: Union[int, float]) -> float: """Computes the square root of x, using the Newton-Raphson method""" approx: Optional[float] = None guess: float = x / 2 while approx != guess: approx: float = guess guess: float = (approx + x / approx) / 2 return approx Extend the TypeAnnotator such that it also annotates local variables with types. Search the function AST for assignments, determine the type of the assigned value, and make it an annotation on the left hand side. ### Exercise 3: Verbose Invariant Checkers¶ Our implementation of invariant checkers does not make it clear for the user which pre-/postcondition failed. @precondition(lambda s: len(s) > 0) def remove_first_char(s): return s[1:] with ExpectError(): remove_first_char('') Traceback (most recent call last): File "<ipython-input-193-dda18930f6db>", line 2, in <module> remove_first_char('') File "<ipython-input-100-39ada1fd0b7e>", line 6, in wrapper assert precondition(*args, **kwargs), "Precondition violated" AssertionError: Precondition violated (expected) The following implementation adds an optional doc keyword argument which is printed if the invariant is violated: def condition(precondition=None, postcondition=None, doc='Unknown'): def decorator(func): @functools.wraps(func) # preserves name, docstring, etc def wrapper(*args, **kwargs): if precondition is not None: assert precondition(*args, **kwargs), "Precondition violated: " + doc retval = func(*args, **kwargs) # call original function or method if postcondition is not None: assert postcondition(retval, *args, **kwargs), "Postcondition violated: " + doc return retval return wrapper return decorator def precondition(check, **kwargs): return condition(precondition=check, doc=kwargs.get('doc', 'Unknown')) def postcondition(check, **kwargs): return condition(postcondition=check, doc=kwargs.get('doc', 'Unknown')) @precondition(lambda s: len(s) > 0, doc="len(s) > 0") def remove_first_char(s): return s[1:] remove_first_char('abc') 'bc' with ExpectError(): remove_first_char('') Traceback (most recent call last): File "<ipython-input-196-dda18930f6db>", line 2, in <module> remove_first_char('') File "<ipython-input-194-683ee268305f>", line 6, in wrapper assert precondition(*args, **kwargs), "Precondition violated: " + doc AssertionError: Precondition violated: len(s) > 0 (expected) Extend InvariantAnnotator such that it includes the conditions in the generated pre- and 
postconditions. ### Exercise 4: Save Initial Values¶ If the value of an argument changes during function execution, this can easily confuse our implementation: The values are tracked at the beginning of the function, but checked only when it returns. Extend the InvariantAnnotator and the infrastructure it uses such that • it saves argument values both at the beginning and at the end of a function invocation; • postconditions can be expressed over both initial values of arguments as well as the final values of arguments; • the mined postconditions refer to both these values as well. ### Exercise 5: Implications¶ Several mined invariant are actually implied by others: If x > 0 holds, then this implies x >= 0 and x != 0. Extend the InvariantAnnotator such that implications between properties are explicitly encoded, and such that implied properties are no longer listed as invariants. See [Ernst et al, 2001.] for ideas. ### Exercise 6: Local Variables¶ Postconditions may also refer to the values of local variables. Consider extending InvariantAnnotator and its infrastructure such that the values of local variables at the end of the execution are also recorded and made part of the invariant inference mechanism. ### Exercise 7: Exploring Invariant Alternatives¶ After mining a first set of invariants, have a concolic fuzzer generate tests that systematically attempt to invalidate pre- and postconditions. How far can you generalize? ### Exercise 8: Grammar-Generated Properties¶ The larger the set of properties to be checked, the more potential invariants can be discovered. Create a grammar that systematically produces a large set of properties. See [Ernst et al, 2001.] for possible patterns. ### Exercise 9: Embedding Invariants as Assertions¶ Rather than producing invariants as annotations for pre- and postconditions, insert them as assert statements into the function code, as in: def my_sqrt(x): 'Computes the square root of x, using the Newton-Raphson method' assert isinstance(x, int), 'violated precondition' assert (x > 0), 'violated precondition' approx = None guess = (x / 2) while (approx != guess): approx = guess guess = ((approx + (x / approx)) / 2) return_value = approx assert (return_value < x), 'violated postcondition' assert isinstance(return_value, float), 'violated postcondition' return approx Such a formulation may make it easier for test generators and symbolic analysis to access and interpret pre- and postconditions.
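As a starting point for this last exercise, one can get surprisingly far without rewriting the function body: wrap the original function and format the mined invariant strings into assert lines around the call. The sketch below assumes the InvariantAnnotator, pretty_invariants(), and RETURN_VALUE definitions from this chapter; the class and method names are made up for illustration.

```python
class AssertionEmbedder(InvariantAnnotator):
    """Sketch: emit a checked wrapper whose asserts come from mined invariants."""

    def function_with_assertions(self, function_name):
        params = self.params(function_name)
        invs = pretty_invariants(self.invariants(function_name))
        pre = [inv for inv in invs if RETURN_VALUE not in inv]
        post = [inv for inv in invs if RETURN_VALUE in inv]

        lines = ["def %s_checked(%s):" % (function_name, params)]
        lines += ["    assert %s, 'violated precondition'" % inv for inv in pre]
        lines.append("    %s = %s(%s)" % (RETURN_VALUE, function_name, params))
        lines += ["    assert %s, 'violated postcondition'" % inv for inv in post]
        lines.append("    return %s" % RETURN_VALUE)
        return "\n".join(lines)
```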
https://holoeye.com/tag/quantum-optics/
### Scalability of all-optical neural networks based on spatial light modulators Author(s): Ying Zuo, Zhao Yujun, You-Chiuan Chen, Shengwang Du & Liu, Junwei Abstract: “Optical implementation of artificial neural networks has been attracting great attention due to its potential in parallel computation at speed of light. Although all-optical deep neural networks (AODNNs) with a few neurons have been experimentally demonstrated with acceptable errors re- cently, the feasibility of large scale AODNNs remains unknown because error might accumulate inevitably with increasing number of neurons and connections. Here, we demonstrate a scalable AODNN with programmable linear operations and tunable nonlinear activation functions. We ver- ify its scalability by measuring and analyzing errors propagating from a single neuron to the entire network. The feasibility of AODNNs is further confirmed by recognizing handwritten digits and fashions respectively.” Publication: Physical Review Applied Issue/Year: Physical Review Applied, 2021 DOI: https://doi.org/10.1103/PhysRevApplied.15.054034 ### Discretized continuous quantum-mechanical observables that are neither continuous nor discrete Author(s): Thais L. Silva, Łukasz Rudnicki, Daniel S. Tasca, and Stephen P. Walborn Abstract: “Most of the fundamental characteristics of quantum mechanics, such as nonlocality and contextuality, are manifest in discrete, finite-dimensional systems. However, many quantum information tasks that exploit these properties cannot be directly adapted to continuous variable systems. To access these quantum features, continuous quantum variables can be made discrete by binning together their different values, resulting in observables with a finite number, d, of outcomes. While direct measurement indeed confirms their manifestly discrete character, here we employ a salient feature of quantum physics known as mutual unbiasedness to show that such coarse-grained observables are in a sense neither continuous nor discrete. Depending on d, the observables can reproduce either the discrete or the continuous behavior, or neither. To illustrate these results, we present an example for the construction of such measurements and employ it in an optical experiment confirming the existence of four mutually unbiased measurements with d=3 outcomes in a continuous variable system, surpassing the number of mutually unbiased continuous variable observables.” Publication: Physical Review Research Issue/Year: Physical Review Research, Volume 4; Number 1; Pages 013060; 2022 DOI: 10.1103/physrevresearch.4.013060 ### Representation of total angular momentum states of beams through a four-parameter notation Author(s): Fu, Shiyao; Hai, Lan; Song, Rui; Gao, Chunqing & Zhang, Xiangdong Abstract: “It has been confirmed beams carrying total angular momentums (TAMs) that consist of spin angular momentums (SAMs) and orbital angular momentums (OAMs) are widely used in classical and quantum optics. Here we propose and demonstrate a new kind of representation consisting of four real numbers to describe the TAM states of arbitrary beams. It is shown that any homogeneous polarization, scalar vortices and complex vectorial vortex field, all of which result from the TAMs of photons, can be well represented conveniently using our proposed four-parameter representation. Furthermore, the proposed representation can also reveal the internal change of TAMs as the conversion between SAMs and OAMs. 
The salient properties of the proposed representation is to give a universal form of TAMs associated with complicated polarizations and more exotic vectorial vortex beams, which offer an important basis for the future applications” Publication: New Journal of Physics Issue/Year: New Journal of Physics, Volume 23; Number 8; Pages 083015; 2021 DOI: 10.1088/1367-2630/ac1695 ### All-optical image identification with programmable matrix transformation Author(s): Li, Shikang; Ni, Baohua; Feng, Xue; Cui, Kaiyu; Liu, Fang; Zhang, Wei & Huang, Yidong Abstract: “An optical neural network is proposed and demonstrated with programmable matrix transformation and nonlinear activation function of photodetection (square-law detection). Based on discrete phase-coherent spatial modes, the dimensionality of programmable optical matrix operations is 30∼37, which is implemented by spatial light modulators. With this architecture, all-optical classification tasks of handwritten digits, objects and depth images are performed. The accuracy values of 85.0% and 81.0% are experimentally evaluated for MNIST (Modified National Institute of Standards and Technology) digit and MNIST fashion tasks, respectively. Due to the parallel nature of matrix multiplication, the processing speed of our proposed architecture is potentially as high as 7.4∼74 T FLOPs per second (with 10∼100 GHz detector).” Publication: Optics Express Issue/Year: Optics Express, Volume 29; Number 17; Pages 26474; 2021 DOI: 10.1364/oe.430281 ### Orbital-Angular-Momentum-Controlled Hybrid Nanowire Circuit Author(s): Ren, Haoran; Wang, Xiaoxia; Li, Chenhao; He, Chenglin; Wang, Yixiong; Pan, Anlian & Maier, Stefan A. Abstract: “Plasmonic nanostructures can enable compact multiplexing of the orbital angular momentum (OAM) of light; however, strong dissipation of the highly localized OAM-distinct plasmonic fields in the near-field region hinders on-chip OAM transmission and processing. Superior transmission efficiency is offered by semiconductor nanowires sustaining highly confined optical modes, but only the polarization degree of freedom has been utilized for their selective excitation. Here we demonstrate that incident OAM beams can selectively excite single-crystalline cadmium sulfide (CdS) nanowires through coupling OAM-distinct plasmonic fields into nanowire waveguides for long-distance transportation. This allows us to build an OAM-controlled hybrid nanowire circuit for optical logic operations including AND and OR gates. In addition, this circuit enables the on-chip photoluminescence readout of OAM-encrypted information. Our results open exciting new avenues not only for nanowire photonics to develop OAM-controlled optical switches, logic gates, and modulators but also for OAM photonics to build ultracompact photonic circuits for information processing.” Publication: Nano Letters Issue/Year: Nano Letters, Volume 21; Number 14; Pages 6220–6227; 2021 DOI: 10.1021/acs.nanolett.1c01979 ### Direct Tomography of High-Dimensional Density Matrices for General Quantum States of Photons Author(s): Zhou, Yiyu; Zhao, Jiapeng; Hay, Darrick; McGonagle, Kendrick; Boyd, Robert W. & Shi, Zhimin Abstract: “Quantum-state tomography is the conventional method used to characterize density matrices for general quantum states. However, the data acquisition time generally scales linearly with the dimension of the Hilbert space, hindering the possibility of dynamic monitoring of a high-dimensional quantum system. 
Here, we demonstrate a direct tomography protocol to measure density matrices of photons in the position basis through the use of a polarization-resolving camera, where the dimension of density matrices can be as large as 580×580 in our experiment. The use of the polarization-resolving camera enables parallel measurements in the position and polarization basis and as a result, the data acquisition time of our protocol does not increase with the dimension of the Hilbert space and is solely determined by the camera exposure time (on the order of 10 ms). Our method is potentially useful for the real-time monitoring of the dynamics of quantum states and paves the way for the development of high-dimensional, time-efficient quantum metrology techniques.” Publication: Physical Review Letters Issue/Year: Physical Review Letters, Volume 127; Number 4; Pages 040402; 2021 DOI: 10.1103/PhysRevLett.127.040402 ### Quantum cryptography technique: A way to improve security challenges in mobile cloud computing (MCC) Author(s): Abidin, Shafiqul; Swami, Amit; Ramirez-As{‘{i}}s, Edwin; Alvarado-Tolentino, Joseph; Maurya, Rajesh Kumar & Hussain, Naziya Abstract: “Quantum cryptography concentrates on the solution of cryptography that is imperishable due to the reason of fortification of secrecy which is applied to the public key distribution of quantum. It is a very prominent technology in which 2 beings can securely communicate along with the sights belongings to quantum physics. However, on basis of classical level cryptography, the used encodes were bits for data. As quantum utilizes the photons or particles polarize ones for encoding the quantized property. This is presented in qubits as a unit. Transmissions depend directly on the inalienable mechanic’s law of quantum for security. This paper includes detailed insight into the three most used and appreciated quantum cryptography applications that are providing its domain-wide service in the field of mobile cloud computing. These services are (it) DARPA Network, (ii) IPSEC implementation, and (iii) the twisted light HD implementation along with quantum elements, key distribution, and protocols.” Publication: Materials Today: Proceedings Issue/Year: Materials Today: Proceedings, 2021 DOI: 10.1016/j.matpr.2021.05.593 ### Toward simple, generalizable neural networks with universal training for low-SWaP hybrid vision Author(s): Muminov, Baurzhan; Perry, Altai; Hyder, Rakib; Asif, M. Salman & Vuong, Luat T. Abstract: “Speed, generalizability, and robustness are fundamental issues for building lightweight computational cameras. Here we demonstrate generalizable image reconstruction with the simplest of hybrid machine vision systems: linear optical preprocessors combined with no-hidden-layer, “small-brain” neural networks. Surprisingly, such simple neural networks are capable of learning the image reconstruction from a range of coded diffraction patterns using two masks. We investigate the possibility of generalized or “universal training” with these small brains. Neural networks trained with sinusoidal or random patterns uniformly distribute errors around a reconstructed image, whereas models trained with a combination of sharp and curved shapes (the phase pattern of optical vortices) reconstruct edges more boldly. We illustrate variable convergence of these simple neural networks and relate learnability of an image to its singular value decomposition entropy of the image. We also provide heuristic experimental results. 
With thresholding, we achieve robust reconstruction of various disjoint datasets. Our work is favorable for future real-time low size, weight, and power hybrid vision: we reconstruct images on a 15 W laptop CPU with 15,000 frames per second: faster by a factor of 3 than previously reported results and 3 orders of magnitude faster than convolutional neural networks.” Publication: Photonics Research Issue/Year: Photonics Research, Volume 9; Number 7; Pages B253; 2021 DOI: 10.1364/prj.416614 ### 768-ary Laguerre-Gaussian-mode shift keying free-space optical communication based on convolutional neural networks Author(s): Luan, Haitao; Lin, Dajun; Li, Keyao; Meng, Weijia; Gu, Min & Fang, Xinyuan Abstract: “Beyond orbital angular momentum of Laguerre-Gaussian (LG) modes, the radial index can also be exploited as information channel in free-space optical (FSO) communication to extend the communication capacity, resulting in the LG- shift keying (LG-SK) FSO communications. However, the recognition of radial index is critical and tough when the superposed high-order LG modes are disturbed by the atmospheric turbulences (ATs). In this paper, the convolutional neural network (CNN) is utilized to recognize both the azimuthal and radial index of superposed LG modes. We experimentally demonstrate the application of CNN model in a 10-meter 768-ary LG-SK FSO communication system at the AT $$C^2_n= 10^{-14}m^{-\frac{2}{3}}$$. Based on the high recognition accuracy of the CNN model (>95%) in the scheme, a colorful image can be transmitted and the peak signal-to-noise ratio of the received image can exceed 35 dB. We anticipate that our results can stimulate further researches on the utilization of the potential applications of LG modes with non-zero radial index based on the artificial-intelligence-enhanced optoelectronic systems.” Publication: Optics Express Issue/Year: Optics Express, Volume 29; Number 13; Pages 19807; 2021 DOI: 10.1364/oe.420176 ### Distinguishing intrinsic photon correlations from external noise with frequency-resolved homodyne detection Author(s): Lüders, Carolin & Assmann, Marc Abstract: “In this work, we apply homodyne detection to investigate the frequency-resolved photon statistics of a cw light field emitted by a driven-dissipative semiconductor system in real time. We demonstrate that studying the frequency dependence of the photon number noise allows us to distinguish intrinsic noise properties of the emitter from external noise sources such as mechanical noise while maintaining a sub-picosecond temporal resolution. We further show that performing postselection on the recorded data opens up the possibility to study rare events in the dynamics of the emitter. By doing so, we demonstrate that in rare instances, additional external noise may actually result in reduced photon number noise in the emission”
https://www.researchgate.net/publication/43021934_Spectral_structure_of_the_pygmy_dipole_resonance
Article # Spectral Structure of the Pygmy Dipole Resonance [more] Duke University, Durham, North Carolina 27708-0308, USA. (Impact Factor: 7.51). 02/2010; 104(7):072501. DOI: 10.1103/PHYSREVLETT.104.072501 Source: PubMed ABSTRACT High-sensitivity studies of E1 and M1 transitions observed in the reaction (138)Ba((gamma) over right arrow, gamma') at energies below the one-neutron separation energy have been performed using the nearly monoenergetic and 100% linearly polarized photon beams of the HI (gamma) over right arrowS facility. The electric dipole character of the so-called "pygmy" dipole resonance was experimentally verified for excitations from 4.0 to 8.6 MeV. The fine structure of the M1 "spin-flip" mode was observed for the first time in N = 82 nuclei. ### Full-text Available from: Anton P. Tonchev • Source ##### Article: Spectroscopic features of low-energy excitations in skin nuclei [Hide abstract] ABSTRACT: Systematic studies of dipole and other multipole excitations in stable and exotic nuclei are discussed theoretically. Exploring the relation of the strengths of low-energy dipole and quadrupole pygmy resonances to the thickness of the neutron (proton) skin a close connection between static and dynamic properties of the nucleus is observed. The fine structure of low-energy dipole strength in 138Ba nucleus is revealed from E1 and spin-flip M1 strengths distributions. Full-text · Article · Apr 2010 · Modern Physics Letters A • Source ##### Article: Instantaneous Shape Sampling - a model for the $\gamma$-absorption cross section of transitional nuclei [Hide abstract] ABSTRACT: The influence of the quadrupole shape fluctuations on the dipole vibrations in transitional nuclei is investigated in the framework of the Instantaneous Shape Sampling Model, which combines the Interacting Boson Model for the slow collective quadrupole motion with the Random Phase Approximation for the rapid dipole vibrations. Coupling to the complex background configurations is taken into account by folding the results with a Lorentzian with an energy dependent width. The low-energy energy portion of the $\gamma$- absorption cross section, which is important for photo-nuclear processes, is studied for the isotopic series of Kr, Xe, Ba, and Sm. The experimental cross sections are well reproduced. The low-energy cross section is determined by the Landau fragmentation of the dipole strength and its redistribution caused by the shape fluctuations. Collisional damping only wipes out fluctuations of the absorption cross section, generating the smooth energy dependence observed in experiment. In the case of semi-magic nuclei, shallow pygmy resonances are found in agreement with experiment. Full-text · Article · Sep 2010 · Physical Review C • Source ##### Article: Isospin Character of the Pygmy Dipole Resonance in Sn 124 [Hide abstract] ABSTRACT: The pygmy dipole resonance has been studied in the proton-magic nucleus 124Sn with the (α, α'γ) coincidence method at Eα=136 MeV. The comparison with results of photon-scattering experiments reveals a splitting into two components with different structure: one group of states which is excited in (α, α'γ) as well as in (γ, γ') reactions and a group of states at higher energies which is only excited in (γ, γ') reactions. 
Calculations with the self-consistent relativistic quasiparticle time-blocking approximation and the quasiparticle phonon model are in qualitative agreement with the experimental results and predict a low-lying isoscalar component dominated by neutron-skin oscillations and a higher-lying more isovector component on the tail of the giant dipole resonance. Full-text · Article · Nov 2010 · Physical Review Letters
http://mathhelpforum.com/algebra/103100-unfamiliar-symbols.html
# Math Help - unfamiliar symbols

1. ## unfamiliar symbols

For all numbers a and b, let $a \odot b$ be defined by $a \odot b = ab + a + b$. For all numbers x, y, and z, which of the following must be true?

I. $x \odot y = y \odot x$
II.
III.

• A. I only
• B. II only
• C. III only
• D. I and II only
• E. I, II, and III

2. Originally Posted by aeroflix
For all numbers a and b, let $a \odot b$ be defined by $a \odot b = ab + a + b$. For all numbers x, y, and z, which of the following must be true?

you just have to convert each side of the equation to see if it's true ...

$x \odot y = xy + x + y$

$y \odot x = yx + y + x$

since $xy+x+y = yx+y+x$ for all real numbers, statement I is true.

now check the other two ...
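A useful observation for checking the remaining statements, whatever they were: the operation factors neatly, which shows at once that it is both commutative and associative.

$x \odot y = xy + x + y = (x+1)(y+1) - 1$

$(x \odot y) \odot z = (x+1)(y+1)(z+1) - 1 = x \odot (y \odot z)$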
https://nrich.maths.org/7127/solution
##### Stage: 2 Challenge Level: From Year $4$ at Queen Edith School, Cambridge we had the following rather good idea sent in. After we had got the idea of following a number on its journey, we split up the work of checking out lots of numbers. Some of us began with numbers in the $30$s, some with numbers in the $40$s, and so on. We found that all the numbers we tried ended up on one of three journeys: $2, 4, 8, 16, 14, 10, 2, 4, 8,$ ... which we called the "red" journey $6, 12, 6, 12,$ ... which we called the "green" journey $18, 18, 18,$ ... which we called the "blue" journey Next, we used a $100$ square on the Smartboard, and coloured the numbers to match their journeys. After we had coloured a few of the numbers, some of us spotted patterns beginning to show, like the blue diagonal from $81$ up to $9$. We predicted that other numbers on the diagonal would also be blue and checked them out. We also saw green squares along diagonals and made more predictions. Finally, we made a display using the 100 square and some of our work to challenge other children to predict the journeys for some of the squares we had not coloured. Can you predict a journey and then check if you were right? From Krystof in Prague and Matthew from Hamworthy Middle School we had similar results. From Karin in West Acton in London we had a clever further idea sent in. My rule for "Follow the Numbers" is to work out the difference between the $2$ digits and add $5$ to the difference. Here is some of my "Follow the Numbers": Starting number $24$: $24,07,12,06,11,05,10,06...$ Starting number $39$: $39,11,05,10,06,11...$ Starting number $83$: $83,10,06,11,05,10...$ Starting number $63$: $63,08,13,07,12,06,11,05,10,06...$ On my "Follow the Numbers", most of my numbers had a pattern of $08,13,07,12,06,11,05,10,06.$ Well done Karin, I like this very much, others of you could try your own rules.
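The journey rule itself is not restated on this page; assuming the standard "Follow the Numbers" rule of adding a number's digits and doubling the result (which does reproduce all three journeys above: 16 goes to 14, 6 to 12, and 18 to 18), a quick way to check further predictions is the sketch below. Karin's rule is included as a second example; both rules here are assumptions reconstructed from the sequences quoted.

```python
def digits(n):
    # Treat every number as (at least) two digits, e.g. 7 -> [0, 7].
    return [int(d) for d in str(n).zfill(2)]

def journey(start, rule, steps=10):
    """Follow a starting number for a few steps under the given rule."""
    numbers = [start]
    for _ in range(steps):
        numbers.append(rule(numbers[-1]))
    return numbers

# Assumed main rule: double the sum of the digits.
print(journey(43, lambda n: 2 * sum(digits(n))))

# Karin's rule: the difference between the two digits, plus 5.
print(journey(24, lambda n: abs(digits(n)[0] - digits(n)[1]) + 5))
```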
https://questions.examside.com/past-years/jee/question/the-force-is-given-in-terms-of-time-t-and-displacement-x-by-jee-main-physics-units-and-measurements-soa6xz4qudk1mrr5
1 JEE Main 2021 (Online) 25th July Evening Shift +4 -1 The force is given in terms of time t and displacement x by the equation F = A cos Bx + C sin Dt. The dimensional formula of $${{AD} \over B}$$ is : A $$[{M^0}L{T^{ - 1}}]$$ B $$[M{L^2}{T^{ - 3}}]$$ C $$[{M^1}{L^1}{T^{ - 2}}]$$ D $$[{M^2}{L^2}{T^{ - 3}}]$$ 2 JEE Main 2021 (Online) 20th July Evening Shift +4 -1 If time (t), velocity (v), and angular momentum (l) are taken as the fundamental units, then the dimension of mass (m) in terms of t, v and l is : A $$[{t^{ - 1}}{v^1}{l^{ - 2}}]$$ B $$[{t^1}{v^2}{l^{ - 1}}]$$ C $$[{t^{ - 2}}{v^{ - 1}}{l^1}]$$ D $$[{t^{ - 1}}{v^{ - 2}}{l^1}]$$ 3 JEE Main 2021 (Online) 17th March Morning Shift +4 -1 The vernier scale used for measurement has a positive zero error of 0.2 mm. If while taking a measurement it was noted that '0' on the vernier scale lies between 8.5 cm and 8.6 cm, vernier coincidence is 6, then the correct value of measurement is ___________ cm. (least count = 0.01 cm) A 8.58 cm B 8.54 cm C 8.56 cm D 8.36 cm 4 JEE Main 2021 (Online) 16th March Evening Shift
2023-02-08 13:28:06
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6643774509429932, "perplexity": 3427.4960205395305}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500813.58/warc/CC-MAIN-20230208123621-20230208153621-00746.warc.gz"}
http://openstudy.com/updates/4f2c1192e4b0e1bedb5d1d04
## anonymous 4 years ago Hello all, I'm having trouble with a few problems on my pretest for Calculus II, so here goes the first: Evaluate the integral using the indicated trigonometric substitution Integrate[1/(x^2sqrt(x^2-16)) , {x}]; x=4secant theta 1. 2bornot2b I think you would have got better answers or rather you will get more users to answer your question, if you would have posted the question alone, I mean try excluding the part "Hello all...........here goes the first" 2. anonymous Oh, mkay, I'll try that. Thanks! 3. 2bornot2b And you can also try posting the question with the equation in LaTeX format, in that way you can increase the probability too. 4. anonymous Latex? 5. 2bornot2b Do you want me to help you out, by posting it on behalf of you? 6. anonymous Yeah, that'd be awesome! 7. anonymous Yeah, with this before it: Evaluate the integral using the indicated trigonometric substitution 8. anonymous and no ,x 9. 2bornot2b And I think it must be dx instead of that x right? 10. anonymous its dx* 11. anonymous yeah lol 12. 2bornot2b Ok now see this one "Evaluate the integral $\int\frac{1}{(x^2\sqrt{x^2-16)}}dx$ using the substitution $x=4\sec\theta$" Is it OK? 13. anonymous yes 14. 2bornot2b Ok so I am posting it, ... 15. anonymous Mkay, thanks 16. 2bornot2b OK, so its posted at http://openstudy.com/study#/updates/4f2c150ee4b0e1bedb5d1f0a 17. Mr.Math Good tips. Questions written with Latex always attract me :-D 18. 2bornot2b And I also used the chat rooms to invite people in here. And I messaged myin using fan message 19. 2bornot2b So I think I can summarize them as follows 1. Make it concise 2. Use equation editor 3. Use the chatrooms to attract users 4. Private message the people you think can solve the problem 20. anonymous I didn't know this place had pm's 0-0
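For reference, here is a sketch of how the suggested substitution works out (added as a supplement; the thread itself never completes the integral). With $x = 4\sec\theta$ (taking $x > 4$), $dx = 4\sec\theta\tan\theta\,d\theta$ and $\sqrt{x^2-16} = 4\tan\theta$, so

$$\int\frac{dx}{x^2\sqrt{x^2-16}} = \int\frac{4\sec\theta\tan\theta\,d\theta}{16\sec^2\theta\cdot 4\tan\theta} = \frac{1}{16}\int\cos\theta\,d\theta = \frac{\sin\theta}{16} + C = \frac{\sqrt{x^2-16}}{16\,x} + C.$$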
2016-10-27 20:58:20
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9313623905181885, "perplexity": 3842.175405951355}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988721392.72/warc/CC-MAIN-20161020183841-00183-ip-10-171-6-4.ec2.internal.warc.gz"}
http://www.ck12.org/book/CK-12-Trigonometry-Concepts/r1/section/3.10/
<img src="https://d5nxst8fruw4z.cloudfront.net/atrk.gif?account=iA1Pi1a8Dy00ym" style="display:none" height="1" width="1" alt="" /> # 3.10: Double Angle Identities Difficulty Level: At Grade Created by: CK-12 Estimated10 minsto complete % Progress Practice Double Angle Identities MEMORY METER This indicates how strong in your memory this concept is Progress Estimated10 minsto complete % Estimated10 minsto complete % MEMORY METER This indicates how strong in your memory this concept is ### Notes/Highlights Having trouble? Report an issue. Color Highlighted Text Notes ### Vocabulary Language: English Double Angle Identity A double angle identity relates the trigonometric function of two times an argument to a set of trigonometric functions, each containing the original argument. Half Angle Identity A half angle identity relates a trigonometric function of one half of an argument to a set of trigonometric functions, each containing the original argument. identity An identity is a mathematical sentence involving the symbol “=” that is always true for variables within the domains of the expressions on either side. power reducing identity A power reducing identity relates the power of a trigonometric function containing a given argument to a set of trigonometric functions, each containing the original argument. Show Hide Details Description Difficulty Level: Tags: Subjects:
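The CK-12 page above only defines the vocabulary; for convenience, here are the standard double-angle and power-reducing identities themselves (a supplement added for reference, not text from the page):

$$\sin 2\theta = 2\sin\theta\cos\theta, \qquad \cos 2\theta = \cos^2\theta - \sin^2\theta = 2\cos^2\theta - 1 = 1 - 2\sin^2\theta, \qquad \tan 2\theta = \frac{2\tan\theta}{1-\tan^2\theta},$$

with the corresponding power-reducing forms $\sin^2\theta = \frac{1-\cos 2\theta}{2}$ and $\cos^2\theta = \frac{1+\cos 2\theta}{2}$.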
2017-02-20 07:17:07
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8257936835289001, "perplexity": 5410.56559772492}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170425.26/warc/CC-MAIN-20170219104610-00238-ip-10-171-10-108.ec2.internal.warc.gz"}
https://2021.help.altair.com/2021/hwsolvers/acusolve/topics/acusolve/reading_data_with_fieldview.htm
# Read Data With AcuFieldView Parallel Similar to running client-server, you will need to configure files that will permit you to run in the AcuFieldView Parallel Server programs. A simple example of an AcuFieldView Parallel Server Configuration file to run the shared memory mpi version of AcuFieldView Parallel on a shared memory system is given below: AutoStart: true ServerType: shmem ServerName: my_shmem_system NumProcs: 9 StartDirectory: /usr2/data/test_PAR This example sets the AutoStart parameter to launch the AcuFieldView Parallel server program(s) automatically. As with client server, you can also configure this to run manually. If you do, you will need to execute the fvrunsrv command locally. A total of nine processes has been chosen. This has been set with the command line argument -np 9 for the fvrunsrv command. This means that there will be one controller process, which acts solely as a dispatcher, and eight worker processes, which read data, started on this shared memory system named my_shmem_system. The fvrunsrv command also starts the AcuFieldView Parallel server program, fvsrv_shmem. A start directory, /usr2/data/test_PAR, is set so that when the file browser to read data comes up, it will be started at this location. A simple example of an AcuFieldView Parallel Server Configuration file to run the p4 mpi version of AcuFieldView Parallel on a system such as a Linux cluster might look like: AutoStart: true ServerType: cluster ServerName: my_cluster_system NumProcs: 5 StartDirectory: /usr2/data/test_PAR In this case, again the AutoStart option has been chosen. For a parallel cluster, you will need to have the MPICH files and the FV Parallel Server programs installed on the controller node of the system, in this case called my_cluster_system. The installation location for this set of files can be anywhere you choose. When this AcuFieldView Parallel Server is selected, a total of five processes will be run. Again, one of these will be a controller process, and the remaining four will be used to read and process the dataset. For the cluster option, MPI will use the default "machine" file openmpi-default-hostfile, found within the MPI installation with AcuFieldView. If you do not want to let MPI use defaults and would rather specify which nodes of your cluster will be used as AcuFieldView "worker" servers, you can: • Specify a custom machine file with the fvrunsrv-machinefile option • Provide a list of machines using the fvrunsrv-hosts option • Use the MachineFile: field to specify a custom machine file in the Server Configuration (.srv) file Note also that all nodes of a cluster must be able to resolve the path used for the field ServerDirectory: in the Server Configuration file (.srv) in order to find the program fvsrv_par. To read data, from the File menu, click Data Input > Choose Server. From this list, select the desired AcuFieldView Parallel Server program. If both examples for the server configuration files are placed in the <AcuSolve installation directory>/fv/sconfig directory, then when the Choose Server option is selected from the Data Input menu, both the p4-based and shared memory based AcuFieldView Parallel options to read data will be present, as illustrated above. After this point, you can now read the data via a browser which starts on the Controller server system. 
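To tie the cluster options above together, here is a sketch of what a cluster Server Configuration (.srv) file using the MachineFile: and ServerDirectory: fields quoted above might look like. Only the field names come from the text; the host name, process count and paths are invented placeholders, and the exact layout should be checked against the AcuFieldView documentation.

```
AutoStart: true
ServerType: cluster
ServerName: my_cluster_system
NumProcs: 5
StartDirectory: /usr2/data/test_PAR
MachineFile: /usr2/data/test_PAR/my_machines
ServerDirectory: /usr2/acusolve/fv/bin
```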
If you want to quickly determine whether a dataset has multiple grids there are a few simple observations that can be made by attempting to read the data in the Direct Mode of operation, not Parallel or Client-Server. First, if a file has multiple grids specified, and the read is successful, then you can determine how many grids are available by reviewing the Grid Subset Selection panel. Also, once the dataset has been read in successfully, you can see the outline of each of the individual grids in the graphics window. For the case of unstructured data, the number of grids for a given dataset, as well as the number of nodes and elements for each grid will be listed in the console window. There will be one line in the console window corresponding to each grid. A typical console window output might contain the following lines: Unstructured grid 1 has 81830 nodes and 12614 elements. Unstructured grid 2 has 68350 nodes and 12191 elements. Unstructured grid 3 has 61992 nodes and 12449 elements. Unstructured grid 4 has 55450 nodes and 11502 elements. Unstructured grid 5 has 79576 nodes and 12545 elements. Unstructured grid 6 has 81502 nodes and 11656 elements. Unstructured grid 7 has 82392 nodes and 12786 elements. Unstructured grid 8 has 74937 nodes and 12813 elements. Unstructured grid 9 has 51221 nodes and 9562 elements. Unstructured grid 10 has 54713 nodes and 9574 elements. Unstructured grid 11 has 49797 nodes and 9172 elements. Unstructured grid 12 has 45543 nodes and 8938 elements. Unstructured grid 13 has 48358 nodes and 9769 elements. Unstructured grid 14 has 48633 nodes and 9696 elements. Unstructured grid 15 has 52293 nodes and 9633 elements. Unstructured grid 16 has 43283 nodes and 8838 elements. At the present time, there is no feature in AcuFieldView, or standalone utility capable of producing a multi grid file from a single grid dataset. At the present time, there is no feature in AcuFieldView, or standalone utility capable of creating partitioned files from a multi grid dataset.
2023-04-01 17:42:01
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3714153468608856, "perplexity": 2299.1982151210573}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296950110.72/warc/CC-MAIN-20230401160259-20230401190259-00481.warc.gz"}
https://ch.mathworks.com/help/stats/work-with-the-multinomial-probability-distribution.html
# Multinomial Probability Distribution Functions This example shows how to generate random numbers and compute and plot the pdf of a multinomial distribution using probability distribution functions. ### Step 1. Define the distribution parameters. Create a vector `p` containing the probability of each outcome. Outcome 1 has a probability of 1/2, outcome 2 has a probability of 1/3, and outcome 3 has a probability of 1/6. The number of trials in each experiment `n` is 5, and the number of repetitions of the experiment `reps` is 8. ```p = [1/2 1/3 1/6]; n = 5; reps = 8;``` ### Step 2. Generate one random number. Generate one random number from the multinomial distribution, which is the outcome of a single trial. ```rng('default') % For reproducibility r = mnrnd(1,p,1)``` ```r = 1×3 0 1 0 ``` The returned vector `r` contains three elements, which show the counts for each possible outcome. This single trial resulted in outcome 2. ### Step 3. Generate a matrix of random numbers. You can also generate a matrix of random numbers from the multinomial distribution, which reports the results of multiple experiments that each contain multiple trials. Generate a matrix that contains the outcomes of an experiment with `n = 5` trials and `reps = 8` repetitions. `r = mnrnd(n,p,reps)` ```r = 8×3 1 1 3 3 2 0 1 1 3 0 4 1 5 0 0 1 2 2 3 1 1 3 1 1 ``` Each row in the resulting matrix contains counts for each of the $k$ multinomial bins. For example, in the first experiment (corresponding to the first row), one of the five trials resulted in outcome 1, one of the five trials resulted in outcome 2, and three of the five trials resulted in outcome 3. ### Step 4. Compute the pdf. Since multinomial functions work with bin counts, create a multidimensional array of all possible outcome combinations, and compute the pdf using `mnpdf`. ```count1 = 1:n; count2 = 1:n; [x1,x2] = meshgrid(count1,count2); x3 = n-(x1+x2); y = mnpdf([x1(:),x2(:),x3(:)],repmat(p,(n)^2,1));``` ### Step 5. Plot the pdf. Create a 3-D bar graph to visualize the pdf for each combination of outcome frequencies. ```y = reshape(y,n,n); bar3(y) set(gca,'XTickLabel',1:n); set(gca,'YTickLabel',1:n); xlabel('x_1 Frequency') ylabel('x_2 Frequency') zlabel('Probability Mass')``` The plot shows the probability mass for each possible combination of outcomes. It does not show ${x}_{3}$ , which is determined by the constraint ${x}_{1}+{x}_{2}+{x}_{3}=n$ .
2021-10-28 02:24:39
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 3, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7101489305496216, "perplexity": 369.6334093249585}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323588246.79/warc/CC-MAIN-20211028003812-20211028033812-00679.warc.gz"}
http://www.crhm.r.rapiddg.com/proof-of-god
## Elementary Proof of the Existence of God would be greater than $X$ on the order of $\frac{3}{4\lambda}, \frac{3}{4\lambda}$, while $P(A\cap B)=1$. As I have stated above, there are no bound variables of such value. We only have to get to the other box of the P process to be able to see that the difference by this event of $T$ and $X$ could be small. If $T$ were any more different, it would be possible to write out the time, so that the different events could be in sequence. Let $A=1$ and $B=[1,2,3,4,5,7,8,9]$. As we saw above in Figure 7, the probability of success at the T stage is an integer. And we have $T$ at the top of the stage. If we write out the time for the other event $T$ and the other event $X$, then a difference in time to take the step of 1 is 1 = 3, with $T$ at the right (for it would be easy to see how long the step of 1 could be - we need to add 1 to get that to 1). On the other hand, if we add 1 to get 1 to the step of 1, $B=2,5,1,14,0,21,4,5,4,2$, then the difference to the step of 1 is 1 = 14. This step is known as the P time step, and it has a value between 0 and 15. We can get to the right part of the time to take the step of 1. If we write out $X$ from $A[\tau \frac{1}{2\lambda}$ to $X[\tau \frac{1}{2\lambda}$ with $T[\tau \frac{1}{2\lambda}$). This gives us $X[\tau \frac{1}{2\lambda}\times [\tau \frac{1}{2\lambda})$. We can write out $A[\tau \frac{1}{2\lambda}= [ \tau \frac{1}{2\lambda} ]$. Note, that this gives $B[\tau \frac{1}{2\lambda}\times [\tau \frac{1}{2\lambda}]$ where $\lambda$ and $P(B)\cap B$ are essentially the same. Hence when our "tink" operation is $V\cap B$ we get $X=V$ and so our first $T$ is exactly the same as $Y$ when applied to this equation. The process is straightforward: The change $\lambda$ (\infty) is transformed from one function of a complex value to itself; if $V\cap N$ then $P(A=0),\star(V\cap N[0-1])<0$ and $P(B)=0$, so $\lambda$ is reduced to the sum of its coefficients. Suppose that $V$ holds its value between $\lambda$ and $V\cap N$. Next let $\lambda$ be a complex value. Let $V\cap N$ be the sum of its products. If $P(A=1,\dots \dots)$ and $P(B=0),\dots \dots$, then the products are all of $P(A+1,\dots)$ and $P(B)=0$. Let each $A=0,\dots \dots$ be an integral of $\lambda$. Then, if both B and X are equal at $P(A+0;\dots)\dots$, that is $P(B=0;\dots)$. This is a true-valued sum and of itself, so $\lambda$ equals 1 if we pass $p(A+0;\dots)\dots\dots$. Otherwise $P(B=0;\dots)\dots \dots$ . Hence all B is equal to $P(A+0;\dots)=1$. However we have been told that $P(A+1,\dots)\dots$ is the sum of its coefficients without having to multiply it. It turns out that $\lambda$ has a higher $\frac{\partial\lambda}{1 - p(A+0;\dots)\dots}]$ as it decreases and $p(A+0;\dots)\dots\dots$, where as $P(A+0;\dots)\dots$ is the number of times $p(A+1;\dots)$ and $p(B=0;\dots)\dots$
2021-04-22 10:02:41
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8899214863777161, "perplexity": 79.31436258507965}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618039603582.93/warc/CC-MAIN-20210422100106-20210422130106-00282.warc.gz"}
http://mymathforum.com/algebra/41018-equivalence.html
My Math Forum Equivalence Algebra Pre-Algebra and Basic Algebra Math Forum

January 25th, 2014, 12:21 PM #1 Member Joined: Nov 2010 Posts: 97 Thanks: 1

Equivalence. With this practice question, I'm not sure what to do here... I have the answer, but I'm just not understanding how it is the answer. The examples in the book seem vague, with no explanations. If the displacement (size) of a motorcycle engine is $1500\ cm^3$, which of the following equivalencies is incorrect? $V = 1500\ mL$; $V = 1.500\ L$; $V = 1.5 \times 10^{-3}\ m^3$; $V = 15.00\ m^3$ <-- I know this is the answer but I don't understand this; or both $V = 1500\ mL$ and $V = 15.00\ m^3$.

January 25th, 2014, 06:33 PM #2 Global Moderator Joined: Dec 2006 Posts: 21,037 Thanks: 2274

The book's answer is correct. The "m" in "mL" represents the "milli" prefix, meaning "one thousandth". Why did you suggest "both V = 1500 mL and V = 15.00 m³"?
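To make the moderator's point concrete, the conversion chain is (a supplementary check, not part of the original thread):

$$1500\ \mathrm{cm^3} = 1500\ \mathrm{mL} = 1.500\ \mathrm{L} = 1500 \times 10^{-6}\ \mathrm{m^3} = 1.5 \times 10^{-3}\ \mathrm{m^3},$$

since $1\ \mathrm{mL} = 1\ \mathrm{cm^3}$ and $1\ \mathrm{m^3} = 10^{6}\ \mathrm{cm^3}$. The value $V = 15.00\ \mathrm{m^3}$ is therefore off by a factor of $10^4$, which is why it is the incorrect equivalence.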
2019-10-23 23:38:06
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 7, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.421713650226593, "perplexity": 4419.856797481914}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570987836368.96/warc/CC-MAIN-20191023225038-20191024012538-00263.warc.gz"}
https://papers.nips.cc/paper/2019/file/65a99bb7a3115fdede20da98b08a370f-Reviews.html
NeurIPS 2019 Sun Dec 8th through Sat the 14th, 2019 at Vancouver Convention Center Paper ID: 4419 Rapid Convergence of the Unadjusted Langevin Algorithm: Isoperimetry Suffices ### Reviewer 1 Summary: This paper analyzes both Langevin dynamics (LD) and unadjusted Langevin algorithm (ULA) for sampling a target distribution When the target distribution is smooth and satisfies a log-Sobolev inequality (LSI), the authors show exponential convergence of LD to the target distribution in KL-divergence (Theorem 1) and then extends convergence bound to ULA with an additional error term (Theorem 2) by first proving Langevin dynamics (LD), . The author extend this result to convergence in Renyi-divergence (Theorems 3 + 4 for LD and ULA respectively), under the additional assumptions that perturbations of the target satisfy LSI (Lemma 8) and the existence of a growth function $g$ for ULA. The authors also provide a convergence results when the target distribution satisfies a relaxation of LSI (Poincare inequality) in the Supplement. Quality: The paper appears to be technically sound. To complement the theoretical convergence bounds, although the toy Gaussian examples are nice illustrations, the paper lacks (synthetic) experiments to complement the theoretical results. The paper does not (empirically) demonstrate the tightness (or lack of tightness) of the theoretical convergence bounds. The paper does not provide an example that is not strongly log-concave but satisfies LSI. The convergence bounds for ULA for Renyi divergence (Theorem 4 and Section F.2 for targets satisfying LSI and Poincare inequality respectively) rely on an unknown growth function (line 248). An example is provide for a Gaussian target, but it is not clear to me that this exists in general (for any target distribution satisfying LSI). Originality: To the best of my knowledge, analyzing the convergence of ULA for target distributions under LSI is novel as is measuring convergence in Renyi-divergence. The proof of the Theorems and Lemmas in the Supplement are elegant and easy to follow. Appropriate references to related work and comparisons to previous contributions appear to be included. Clarity: This paper is very well written and excellently organized. Really nice work. Although lines (66-72) provide background on what Renyi divergence is, it could be made more clear what the advantages and disadvantages Renyi divergence offers compared to KL-divergence (for q > 1). Significance: The main contribution of this paper is generalizing the convergence results of ULA beyond log-concave targets. This paper generalizes to target distributions satisfying LSI. Section 2.2 provides a compelling case for the reasonableness of the LSI condition - it is indeed a wider class of distributions and includes practical. These results are of interest to the stochastic gradient Langevin dynamics (SGLD) literature, as they could be extended to SGLD by handling additional noise in the gradient. This is highly significant, because SGLD (and it variants) are commonly used in practice to sample from non log-concave distributions. The significance of the results for Renyi divergence is more difficult to evaluate. I am not sure what additional benefits Renyi divergence offers (compared to KL-divergence) and the additional assumptions (Lemma 8) and existence of an unknown growth function are more difficult to justify. However, I believe the significance of the first contribution is enough for me to support acceptance. 
Typos / Minor Comments: -Line 29 could change "L^2" to "L^2-space" to avoid confusion with "L" used for smoothness. -Line 441 "Hessian" -> "Laplacian" === Update based on Author Feedback === Thank you for providing additional motivation for why Renyi divergence may be of practical interest over KL divergence and how the growth function only needs to be a rough initial estimate of the asymptotic bias. Although the toy examples of nonconvex functions that satisfy LSI (in the author feedback) are nice, I still wish that there was a clear motivating practical example (perhaps for Bayesian inference using the Holley-Stroock perturbation theorem, where the prior satisfies a LSI, the loglikelihood is bounded, and the target posterior distribution is not logconcave but will still satisfy LSI). ### Reviewer 2 Summary: Authors study the convergence of unadjusted Langevin algorithm (ULA) which is Euler discretization of overdamped Langevin dynamics. They establish rates in KL divergence and Renyi divergence by only assuming a log-Sobolev inequality and smoothness. The results are significant and paper is well-written. My main concern is that there is no practical demonstration (examples or experiments) of the usefulness of the established results. See my further questions/comments below. ** Major comments: - Authors start their algorithm at N(x_*, . ) where x_* is a first order critical point of the potential function. They claim that x_* can be found via gradient descent. However, in practice they can only find a point that is arbitrarily close to a first-order critical point x_*. Would your final bound change after taking this into account? Same question for line 187. - The class of distributions that satisfy LSI is larger than class of logconcave distributions. However, authors do not provide an example of such a distribution that is used in practice (satisfies LSI but not strong convexity). An example of this sort would further motivate the reader and demonstrate the significance of the established results. - What happens to the upper bounds when you start the algorithm at a deterministic point? - LSI implies (by Talagrand inequality) exponential decay in Wasserstein-2 metric. In the case of exponential contraction in W2, this implies back strong convexity. Is it because you only have exponential decay rather than contraction that your results are more general than strong convexity? ** Minor comments: - I haven't noticed a single typo in the paper. -- I would like to thank the authors for answering my questions. I like the results of this paper; however, I will keep my score (see major concerns above).
2022-08-09 02:42:38
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6849551200866699, "perplexity": 690.0291620533552}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882570879.37/warc/CC-MAIN-20220809003642-20220809033642-00419.warc.gz"}
http://mathhelpforum.com/statistics/280494-power-test-pls-helppppp.html
# Thread: Power of a test Pls helppppp

1. ## Power of a test Pls helppppp

How do I calculate the power of this test? I already found the Z value beforehand; σ = 2. My steps are: Z = (x - u)/σ, so P = P(x < 200 | u = 196) = P(Z < 2) = 0.9772. I know that the power is 1 - B = P(reject Ho | Ho false). P(y ≥ 15 | p = 0.9772) = ??? How do I calculate this? (By the way, the > in y > 15 is really a "greater than or equal" sign, but I do not know how to type the symbol.) Thanks for reading.
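Assuming the intended model is the usual one for this kind of exercise, namely that Y counts successes out of n independent trials each succeeding with probability p = 0.9772, the tail probability P(Y ≥ 15) is a binomial sum. The post does not state n, so the value below is a placeholder; this is a sketch of the computation, not the answer to the original exercise.

```python
from scipy.stats import binom

p = 0.9772    # per-trial success probability from the post
n = 15        # NOT given in the post; placeholder value for illustration

# P(Y >= 15) = 1 - P(Y <= 14); binom.sf(k, n, p) gives P(Y > k)
power = binom.sf(14, n, p)
print(power)  # with n = 15 this equals p**15, since every trial must succeed
```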
2018-07-21 23:21:37
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8785290718078613, "perplexity": 2218.0225125812244}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676592861.86/warc/CC-MAIN-20180721223206-20180722003206-00544.warc.gz"}
http://mathoverflow.net/questions/119392/top-chern-class-under-finite-unramified-dominant-morphism
# Top Chern class under finite, unramified, dominant morphism

Situation: Let $\Bbbk$ be an algebraically closed field. Assume that $\pi:Y\to X$ is a finite, dominant, unramified morphism between nonsingular varieties of dimension $n$. Let $d=\deg(\pi)$. What I know: For $\Bbbk=\mathbb C$, the second Chern class of $X$ equals its topological Euler characteristic (i.e., the Euler characteristic with respect to the topology of a complex manifold). I know this under the name Gauss-Bonnet Formula. It then follows that $c_n(Y)=d\cdot c_n(X)$ because $\pi$ is a $d$-fold covering map of complex manifolds. My Question: Does $c_n(Y)=d\cdot c_n(X)$ hold under the more general assumption that $\Bbbk$ is algebraically closed? In particular, does this hold in positive characteristic? PS: I am mostly interested in the case $n=2$, i.e. $\pi$ is a covering map of surfaces. However, I felt that this would probably work for any $n$. -

Yes, it does hold in positive characteristic. You can show that the degree of $c_n(X)$ equals its Euler characteristic with respect to étale cohomology with coefficients in $\mathbb{Q}_{\ell}$, where $\ell$ is a prime different from the characteristic ( Top chern class in positive characteristic ), and this has the required behavior under étale covers ( Behaviour of euler characteristics in characteristic p for finite etale covers ). [Edit:] the equality can be proved directly; in fact the proof is much easier. Since $\pi\colon Y\to X$ is étale, the tangent bundle $T_Y$ is the pullback $\pi^*T_X$. By functoriality of Chern classes, $c_n(Y) = \pi^*c_n(X)$ (at the level of Chow groups). But it is easy to see that the composite $\pi_*\pi^*$ from the Chow group of $X$ to itself is just multiplication by $d$. -

Precisely what I had hoped for. Looking back now, I should have searched for "étale cover". Thanks a lot! – Jesko Hüttenhain Jan 20 '13 at 14:05

Out of curiosity, though: Is there a "direct" proof for this equality, without identifying $c_n$ with the Euler characteristic? – Jesko Hüttenhain Jan 20 '13 at 17:57
Then $$e_c(Y) = \frac{e_c(P)}{\# G} = \frac{\# H}{\# G} e_c(X) = \frac{1}{\deg \pi} e_c(X)$$ and so the result follows in the general case.) Thus, we have a finite group $G$ acting freely (without fixed points) on $Y$ such that $X=Y/G$. Note that $\deg \pi = \vert G\vert$. Apply the Lefschetz trace formula to see that $Tr(g,H^\ast_c(Y)) =0$ for any $g\neq e$ in $G$. By character theory for $\mathbf Q_\ell[G]$, we may conclude that the element $$[H^\ast_c(Y,\mathbf Q_\ell)] := \sum (-1)^i [ H^i_c(Y,\mathbf Q_\ell)]$$ in the Grothendieck group $K_0(\mathbf Q_\ell[G])$ of finitely generated $\mathbf Q_\ell[G]$-modules is given by an integer multiple of $[\mathbf Q_\ell[G]]$; the class of the regular representation. So we may write $$[H^\ast_c(Y,\mathbf Q_\ell)] = m [\mathbf Q_\ell[G]],$$ where $m\in \mathbf Z$. Now, note that $H^i_c(X,\mathbf Q_\ell) = \left(H^i_c(Y,\mathbf Q_\ell)\right)^G$ for any $i\in \mathbf Z$. Therefore, we have that $$[H^\ast_c(X,\mathbf Q_\ell)] = m$$ in $K_0(\mathbf Q_\ell[G])$. In particular, we see that $e_c(X) = \dim_{\mathbf Q_\ell} [H^\ast_c(X,\mathbf Q_\ell)] = m$. We conclude that $$e_c(Y) = \dim_{\mathbf Q_\ell} [H^\ast_c(Y,\mathbf Q_{\ell})]= m \vert G\vert = e_c(X) \vert G \vert = \deg \pi e_c(X).$$ QED. For completeness, here is what you can do for "ramified covers". Not surprisingly, the same equality holds up to a "correction term" coming from the branch locus. Lemma. Let $M$ be a finite type separated $\mathbf C$-scheme. Let $N$ be a closed subscheme of $M$. Then $e_c(M) = e_c(N) + e_c(M\backslash N)$. Proof. Mayer-Vietoris. QED Corollary. Let $\pi:X\to Y$ be a finite flat surjective morphism, and let $D$ be a closed subscheme of $Y$ such that $\pi$ is etale over $Y\backslash D$. Then $$e_c(X) = \deg \pi e_c(Y) + e_c(\pi^{-1}D) - \deg\pi e_c(D) .$$ Proof. Write $U=Y\backslash D$ and $V=\pi^{-1}(U)$. Then $$e_c(X) = e_c(V) + e_c(\pi^{-1}D) = \deg \pi e_c(U) + e_c(\pi^{-1}D) = \deg \pi(e_c(Y) - e_c(D)) + e_c(\pi^{-1}D).$$ The first equality follows from the Lemma, the second from the Theorem and the third from the Lemma. QED We can use this Corollary to obtain a more precise description of the "error term" under some mild hypotheses. Recall that a strict normal crossings divisor on a smooth projective variety is a divisor whose irreducible components are smooth and intersect transversally. Theorem. Let $D$ be a strict normal crossings divisor on a smooth projective connected variety $X$ over $k$. Let $U$ be the complement of the support of $D$ in $X$ and let $V\to U$ be a finite etale morphism with $V$ connected. Let $\pi:Y\to X$ be the normalization of $X$ in the function field of $V$. Then 1. The singularities of $Y$ are quotient singularities (and thus rational singularities); 2. The singularities of $Y$ lie in $\pi^{-1}D^{sing}$, where $D^{sing}$ is the singular locus of $D$; 3. The morphism obtained by restriction $\pi^{-1}(D-D^{sing})\to D-D^{sing}$ is etale; 4. We have $$e_c(Y) = \deg \pi e_c(X) + e_c(\pi^{-1}(D^{sing}))-\deg \pi e_c(D^{sing}) +$$ $$e_c(\pi^{-1}(D-D^{sing})) - \deg \pi e_c(D-D^{sing}).$$ Proof. This is a long but not difficult proof. I can include the details if you'd like. For now, let me say that if you prove $Y$ has quotient singularities, it follows that $Y$ has rational singularities by a theorem of Viehweg; see "Rational singularities of higher dimensional schemes". To prove (1), (2) and (3) you use results from SGA1 on the fundamental group. Note that (4) follows from the Corollary, the Lemma and (3). QED Final Remark. 
In the last formula $$e_c(\pi^{-1}(D-D^{sing})) - \deg \pi e_c(D-D^{sing})= e_c(D-D^{sing})(\deg \pi - d^\prime),$$ where $d^\prime$ is the degree of the finite etale morphism $\pi^{-1}(D-D^{sing})\to D-D^{sing}$. (In a previous version I thought this was always zero, because I mistakingly assumed $d^\prime = \deg \pi$.) - First of all, +1 and thanks for the very detailed Answer. I only know Mayer-Vietoris for singular Homology, is there an equivalent for the $\ell$-adic one? –  Jesko Hüttenhain Jan 22 '13 at 17:57 PS: I ask because that corollary is, in fact, of serious interest to me =). –  Jesko Hüttenhain Jan 22 '13 at 17:58 I'm pretty sure Mayer-Vietoris also holds in etale cohomology. I don't know of a precise reference, though. But it should be somewhere in Milne's notes on etale cohomology. –  Ariyan Javanpeykar Jan 23 '13 at 9:48 I'm glad the answer is useful. I added another result which describes the error term more precisely (under some mild hypotheses on $\pi:Y\to X$); see (4) in the last theorem. One of the hypotheses is that the branch locus of $\pi:Y\to X$ is a strict normal crossings divisor. Note that this can always be achieved by Hironaka's theorem (embedded resolution of singularities); see page 404-407 of Liu's book (Section 9.2.4 and the remark right after Remark 2.36). Let me know if anything seems obscure (there are certainly some details missing in the proof of the last statement). –  Ariyan Javanpeykar Jan 23 '13 at 9:53 @Jesko. I corrected the last formula for the compactly supported Euler characteristic. I had made a slight mistake in calculating the Euler characteristic of $\pi^{-1}(D-D^{sing})$. This is not equal to $\deg \pi e_c(D-D^{sing})$. –  Ariyan Javanpeykar Jan 29 '13 at 12:00
2015-08-30 07:58:04
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9776545763015747, "perplexity": 112.92683592519147}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440644064951.43/warc/CC-MAIN-20150827025424-00130-ip-10-171-96-226.ec2.internal.warc.gz"}
https://www.fxweek.com/fx-week/news/1536533/citi-tops-usd1-billion-after-q3-leading-us-banks-in-fx-results
# Citi Tops $1 Billion After Q3, Leading US Banks In FX Results

## FRONT PAGE

NEW YORK--Citibank is on course to break its record earnings of 1997 following strong third quarter FX trading results, which have pushed the bank's year-to-date revenues over the $1 billion mark. Overall, however, results were mixed among the US banks, with the biggest slide recorded by BankAmerica, which showed a 52 per cent decline from last year. "It has certainly been an eventful quarter," says one forex manager. "I think the bread and butter FX business is doing very well at the moment--
2017-08-21 06:35:24
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.18588325381278992, "perplexity": 9395.101590322443}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886107720.63/warc/CC-MAIN-20170821060924-20170821080924-00252.warc.gz"}
https://www.impan.pl/pl/wydawnictwa/czasopisma-i-serie-wydawnicze/studia-mathematica/all/254/2/113321/tightness-and-weak-convergence-of-probabilities-on-the-skorokhod-space-on-the-dual-of-a-nuclear-space-and-applications
## Studia Mathematica

## Tightness and weak convergence of probabilities on the Skorokhod space on the dual of a nuclear space and applications

### Volume 254 / 2020

Studia Mathematica 254 (2020), 109-147. MSC: 60B10, 60B12, 60F17, 60G17. DOI: 10.4064/sm180629-25-11. Published online: 6 March 2020.

#### Abstract

Let $\Phi'_{\beta}$ denote the strong dual of a nuclear space $\Phi$ and let $D_{T}(\Phi'_{\beta})$ be the Skorokhod space of right-continuous with left limits (càdlàg) functions from $[0,T]$ into $\Phi'_{\beta}$. We introduce the concepts of cylindrical random variables and cylindrical measures on $D_{T}(\Phi'_{\beta})$, and prove analogues of the regularization theorem and Minlos theorem for extensions of these objects to bona fide random variables and probability measures on $D_{T}(\Phi'_{\beta})$. Further, we establish analogues of Lévy's continuity theorem to provide necessary and sufficient conditions for tightness of a family of probability measures on $D_{T}(\Phi'_{\beta})$ and sufficient conditions for weak convergence of a sequence of probability measures on $D_{T}(\Phi'_{\beta})$. Extensions of the above results to the space $D_{\infty}(\Phi'_{\beta})$ of càdlàg functions from $[0,\infty)$ into $\Phi'_{\beta}$ are also given. Next, we apply our results to the study of weak convergence of $\Phi'_{\beta}$-valued càdlàg processes and in particular to Lévy processes. Finally, we apply our theory to the study of tightness and weak convergence of probability measures on the Skorokhod space $D_{\infty}(H)$ where $H$ is a Hilbert space.

#### Authors

• C. A. Fonseca-Mora, Escuela de Matemática
2021-09-24 12:46:13
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5338687896728516, "perplexity": 663.9617701271382}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057524.58/warc/CC-MAIN-20210924110455-20210924140455-00549.warc.gz"}
https://ai.stackexchange.com/tags/tensorflow/hot
# Tag Info Accepted ### Why is Python such a popular language in the AI field? Python comes with a huge amount of inbuilt libraries. Many of the libraries are for Artificial Intelligence and Machine Learning. Some of the libraries are TensorFlow (which is a high-level neural ... • 1,953 Accepted ### What are "bottlenecks" in neural networks? The bottleneck in a neural network is just a layer with fewer neurons than the layer below or above it. Having such a layer encourages the network to compress feature representations (of salient ... • 24.7k Accepted ### Why do CNN's sometimes make highly confident mistakes, and how can one combat this problem? The concept you are looking for is called epistemic uncertainty, also known as model uncertainty. You want the model to produce meaningful calibrated probabilities that quantify the real confidence of ... • 586 ### Why is Python such a popular language in the AI field? Practically all of the most popular and widely used deep-learning frameworks are implemented in Python on the surface and C/C++ under the hood. I think the main reason is that Python is widely used ... • 773 ### Why do CNN's sometimes make highly confident mistakes, and how can one combat this problem? Your classifier is specifically learning the ways in which 0s are different from other digits, not what it really means for a digit to be a zero. Philosophically, you could say the model appears to ... Accepted ### How to classify data which is spiral in shape? There are many approaches to this kind of problem. The most obvious one is to create new features. The best features I can come up with is to transform the coordinates to spherical coordinates. I ... Accepted ### Is there a machine learning model that can be trained with labels that only say how "right" or "wrong" it was? What you are looking for is called "reinforcement learning". A reinforcement learning algorithm will try to maximize a reward function. This reward represents how "good" or "... • 424 ### What are "bottlenecks" in neural networks? Imagine, you want to re-compute the last layer of a pre-trained model : Input->[Freezed-Layers]->[Last-Layer-To-Re-Compute]->Output To train [Last-... • 101 ### How to train a neural network for a round based board game? Great question! NN is very promising for this type of problem: Giraffe Chess. Lai's accomplishment was considered to be a pretty big deal, but unfortunately came just a few months before AlphaGo ... • 6,107 ### How to classify data which is spiral in shape? Ideally neural networks should be able to find out the function out on it's own without us providing the spherical features. After some experimentation I was able to reach a configuration where we do ... ### Why is Python such a popular language in the AI field? What attracts me to Python for my analysis work is the "full-stack" of tools that are available by virtue of being designed as a general purpose language vs. R as a domain specific language. The ... ### Can LSTM neural networks be sped up by a GPU? From Nvidia www (https://developer.nvidia.com/discover/lstm): Accelerating Long Short-Term Memory using GPUs The parallel processing capabilities of GPUs can accelerate the LSTM training and ... • 1,272 Accepted ### How fast is TensorFlow compared to self written neural nets? I wanted to know how the performance of my net would be compared to the same in Tensor Flow. Not to specific but just a rough aproximation. This is very hard to answer in specific terms because ... 
• 210 ### Why is Python such a popular language in the AI field? Python has a standard library in development, and a few for AI. It has an intuitive syntax, basic control flow, and data structures. It also supports interpretive run-time, without standard compiler ... • 81 Accepted ### How to use CNN for making predictions on non-image data? You can use CNN on any data, but it's recommended to use CNN only on data that have spatial features (It might still work on data that doesn't have spatial features, see DuttaA's comment below). For ... • 2,571 ### How to train a neural network for a round based board game? I'm a chess player and my answer will be only on chess. Training a neural network with reinforcement learning isn't new, it has been done many times in the literature. I'll briefly explain the common ... • 1,390 Accepted ### Is a GPU always faster than a CPU for training neural networks? This changes according to your data and complexity of your models. See following article by microsoft. Their conclusion is The results suggest that the throughput from GPU clusters is always ... • 196 ### Why do CNN's sometimes make highly confident mistakes, and how can one combat this problem? Broken assumptions Generalization relies on making strong assumptions (no free lunch, etc). If you break your assumptions, then you're not going to have a good time. A key assumption of a standard ... • 853 Accepted ### Why isn't my Neural Network based calculator working? A neural network is not good at selecting a function based on those 3 input parameters, because of the way a neuron is setup. What you should do is either make a neural network for each operation, or ... ### How to embed/deploy an arbitrary machine learning model on microcontrollers? There are a few possible approaches to deploying a ML model to a microcontroller. The main limiting factor to deployment on microcontollers is that ML models are usually a representation of a set of ... • 8,937 ### How to classify data which is spiral in shape? By cheating... theta is $\arctan(y,x)$, $r$ is $\sqrt{(x^2 + y^2)}$. In theory, $x^2$ and $y^2$ should work, but, in practice, they somehow failed, even though, ... Accepted ### Can LSTM neural networks be sped up by a GPU? I found that there are cuDNN accelerated cells in Keras, for example, https://keras.io/layers/recurrent/#cudnnlstm. They are very fast. The normal LSTM cells are faster on CPU than on GPU. • 269 ### How to use CNN for making predictions on non-image data? The convolutional models are a method of choice when your problem is translation invariant (or covariant). In image classification, the image should be classified into class 'cow' if a cow is present ... • 469 ### How to detect LEGO bricks by using a deep learning approach? So I am assuming that you are trying to detect a lego brick from the image. One idea is that you can use transfer learning. Leveraging a pre-trained machine learning model is called transfer learning. ... Accepted ### Has anyone been able to solve OpenAI's hardcore bipedal walker with their implementation of DDPG? You may be very interested to know that there was a bug in the v2 Lidar tracing, making the agent think there were phantom objects, and sometimes intersecting with its own legs: https://github.com/... • 206 Accepted ### Do we have anything like accuracy and loss in RNN models? RNN's stand for Recurrent Neural Networks which is, in fact, Deep Learning. 
There has to be a loss since you're dealing with supervised learning and the typical loss metrics used are the same as you ... • 1,369 ### Why is Python such a popular language in the AI field? It's a mix of many factors that together make it a very good option to develop cognitive systems. Quick development Rapid prototyping Friendly syntax with almost human-level readability Diverse ... • 141 ### How to train a neural network for a round based board game? I think you should get familiar with reinforcement learning. In this field of machine learning the agent interacts whit its environment and after that the agent gets some reward. Now, the agent is the ... Accepted ### Can I build a CNN for image classification tasks just with OpenCV? OpenCV does include 2D filter convolution functions for custom separable and non-separable filters. The latter uses DFT for large filters, which may or may not be faster than the conventional method. ... • 933
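Several of the spiral-classification answers above boil down to the same trick: hand the network polar coordinates instead of (or in addition to) the raw $(x, y)$ values. A minimal Python sketch of that feature transform follows; the function name and the NumPy usage are my own illustration, not code from the answers.

```python
import numpy as np

def polar_features(xy):
    """Append r and theta to raw (x, y) points, as suggested for spiral data."""
    x, y = xy[:, 0], xy[:, 1]
    r = np.sqrt(x**2 + y**2)
    theta = np.arctan2(y, x)
    return np.column_stack([xy, r, theta])

# Example: points on different arms of a spiral become easier to separate in (r, theta)
pts = np.array([[1.0, 0.5], [-1.0, -0.5]])
print(polar_features(pts))
```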
2022-10-03 23:58:58
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3735397160053253, "perplexity": 1216.7907597488393}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337446.8/warc/CC-MAIN-20221003231906-20221004021906-00660.warc.gz"}
https://www.mathskey.com/question2answer/33181/what-electric-field-strength-midpoint-between-the-spheres
# What is the electric field strength at the midpoint between the two spheres?

Two 2.0-cm-diameter insulating spheres have a 5.90 cm space between them. One sphere is charged to +55.0 nC, the other to -52.0 nC.

asked Feb 5, 2016 in PHYSICS

Step 1: The diameter of each sphere is 2.0 cm, so the radius is $r = 1.0\ cm$. The charge on the first sphere is $q_1 = +55.0\ nC$ and on the second sphere is $q_2 = -52.0\ nC$. The gap between the spheres is 5.90 cm, so the distance from the center of either sphere to the midpoint is $d = 1.0 + 5.90/2 = 3.95\ cm = 0.0395\ m$. The electric field due to a point charge is $E = \dfrac{1}{4\pi\varepsilon_0}\dfrac{q}{d^2}$, where $\varepsilon_0$ is the permittivity of free space, $q$ is the charge, and $d$ is the distance from the charge. For the first sphere, $E_1 = \dfrac{(8.99\times 10^9)(55.0\times 10^{-9})}{(0.0395)^2} \approx 3.2\times 10^5\ N/C$.

Step 2: For the second sphere, $E_2 = \dfrac{(8.99\times 10^9)(52.0\times 10^{-9})}{(0.0395)^2} \approx 3.0\times 10^5\ N/C$. At the midpoint both contributions point the same way (from the positive sphere toward the negative sphere), so the magnitudes add.

Solution: The electric field strength at the midpoint between the two spheres is $E = E_1 + E_2 \approx 6.2\times 10^5\ N/C$.
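A quick numerical check of the figures above (a sketch; the value of Coulomb's constant used is the usual 8.99 × 10⁹ N·m²/C²):

```python
k = 8.99e9                   # Coulomb constant, N*m^2/C^2
q1, q2 = 55.0e-9, 52.0e-9    # charge magnitudes, C
d = 0.010 + 0.0590 / 2       # sphere radius plus half the gap, in metres

E1 = k * q1 / d**2
E2 = k * q2 / d**2
print(E1, E2, E1 + E2)       # ~3.2e5, ~3.0e5, ~6.2e5 N/C
```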
2021-06-21 10:12:11
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8411465287208557, "perplexity": 795.3579623546677}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623488269939.53/warc/CC-MAIN-20210621085922-20210621115922-00423.warc.gz"}
https://visimphysics.com/en/electricity-and-magnetism/rlc-circuit/
## Exercise 4.10 A Determine the inductance of the coil and the capacitance of the capacitor. What would be the resonant frequency of this circuit? Tip: When calculating the inductance of the coil, remember $U_{R_L}^2 + U_L^2 = U_{R_L + L}^2$ Answer: $13\ mH$ ; $210\ \mu F$ ; $97\ Hz$
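As a quick consistency check, using only the quoted answers (the exercise's measured voltages and currents are not reproduced here), the resonant frequency follows from $f_0 = \frac{1}{2\pi\sqrt{LC}}$:

$$f_0 = \frac{1}{2\pi\sqrt{(13\times 10^{-3}\ \mathrm{H})(210\times 10^{-6}\ \mathrm{F})}} \approx 96\ \mathrm{Hz},$$

which agrees with the quoted $97\ Hz$ up to the rounding of $L$ and $C$.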
2023-03-20 22:13:52
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8901534676551819, "perplexity": 291.36405918742753}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296943562.70/warc/CC-MAIN-20230320211022-20230321001022-00783.warc.gz"}
https://socratic.org/questions/what-is-the-orientation-in-space-of-an-atomic-orbital-associated-with
# What is the orientation in space of an atomic orbital associated with? It is associated with the magnetic quantum number ${m}_{l}$. An orbital's magnetic quantum number ${m}_{l}$ specifies its orientation in space and takes integer values in the range $\left[- l , l\right]$, while the angular (azimuthal) quantum number $l$ defines the orbital's shape. For example, a $p$ orbital has $l = 1$, so ${m}_{l}$ can be $-1$, $0$, or $1$, giving the three possible orientations $p_x$, $p_y$, $p_z$.
2020-08-09 21:20:33
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 4, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7308780550956726, "perplexity": 430.72491847714053}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439738573.99/warc/CC-MAIN-20200809192123-20200809222123-00046.warc.gz"}
http://reasonabledeviations.com/
# Implementing k-means clustering from scratch in C++

I have a somewhat complicated history when it comes to C++. When I was 15 and teaching myself to code, I couldn't decide between python and C++ and as a result tried to learn both at the same time. One of my first non-trivial projects was a C++ program to compute orbits – looking back on it now, I can see that what I was actually doing was a (horrifically inefficient) implementation of Euler's method. I just couldn't wrap my head around fixed-size arrays (not to mention pointers!). In any case, I soon realised that juggling C++ and python was untenable – not only was I new to the concepts (such as type systems and OOP), I was having to learn two sets of syntax in addition to two flavours of these concepts. I decided to commit to python and haven't really looked back since.

Now, almost 6 years later (tempus fugit!), having completed the first-year computer science course at Cambridge, I feel like I am in a much better place to have a proper go at C++. My motivation is helped by the fact that all of the second-year computational practicals for physics are done in C++, not to mention that C++ is incredibly useful in quantitative finance (which I am deeply interested in). To that end, I decided to jump straight in and implement a machine learning algorithm from scratch. I chose k-means because of its personal significance to me: when I was first learning about ML, k-means was one of the first algorithms that I fully grokked and I spent quite a while experimenting with different modifications and implementations in python. Also, given that the main focus of this post is to learn C++, it makes sense to use an algorithm I understand relatively well.

Please let me add the disclaimer that this is certainly not going to be an optimal solution – this post is very much a learning exercise for me and I'd be more than happy to receive constructive criticism. As always, all code for this project can be found on GitHub.

## What is k-means clustering?

I have decided to give four brief explanations with increasing degrees of rigour. Nothing beyond the first explanation is really essential for the rest of this post, so feel free to stop whenever.

1. k-means clustering allows us to find groups of similar points within a dataset.
2. k-means clustering is the task of finding groups of points in a dataset such that the total variance within groups is minimised.
3. k-means clustering is the task of partitioning feature space into k subsets to minimise the within-cluster sum-of-square deviations (WCSS), which is the sum of squared Euclidean distances between each datapoint and its centroid.
4. Formally, k-means clustering is the task of finding a partition $S = \{S_1, S_2, \ldots S_k\}$ where $S$ satisfies:

$$\arg\min_{S} \sum_{i=1}^{k} \sum_{x \in S_i} \lVert x - \mu_i \rVert^2,$$

where $\mu_i$ is the centroid (mean) of the points in $S_i$.

## The k-means algorithm

The k-means clustering problem is actually incredibly difficult to solve. Let's say we just have $N=120$ and $k=5$, i.e we have 120 datapoints which we want to group into 5 clusters. The number of possible partitions is more than the number of atoms in the universe ($5^{120} \approx 10^{83}$) – for each one, we then need to calculate the WCSS (read: variance) and choose the best partition. Clearly, any kind of brute-force solution is intractable (to be specific, the problem has exponential complexity). Hence, we need to turn to approximate solutions. The most famous approximate algorithm is Lloyd's algorithm, which is often confusingly called the "k-means algorithm".
In this post I will silence my inner pedant and interchangeably use the terms k-means algorithm and k-means clustering, but it should be remembered that they are slightly distinct. With that aside, Lloyd’s algorithm is incredibly simple: 1. Initialise the clusters The algorithm needs to start somewhere, so we need to come up with a crude way of clustering points. To do this, we randomly select k points which become ‘markers’, then assign each datapoint to its nearest marker point. The result of this is k clusters. While this is a naive initialisation method, it does have some nice properties - more densely populated regions are more likely to contain centroids (which makes logical sense). 2. Compute the centroid of each cluster Technically Lloyd’s algorithm computes the centroid of each partition of 3D space via integration, but we use the reasonable approximation of computing the centre of mass of the points in a given partition. The rational behind this is that the centroid of a cluster ‘characterises’ the cluster in some sense. 3. Assign each point to the nearest centroid and redefine the cluster If a point currently in cluster 1 is actually closer to the centroid of cluster 2, surely it makes more sense for it to belong to cluster 2? This is exactly what we do, looping over all points and assigning them to clusters based on which centroid is the closest. 4. Repeat steps 2 and 3 We then repeatedly recompute centroids and reassign points to the nearest centroid. There is actually a very neat proof that this converges: essentially, there is only a finite (though massive) number of possible partitions, and each k-means update at least improves the WCSS. Hence the algorithm must converge. ## Implementation Our goal today is to implement a C++ version of the k-means algorithm that successfully clusters a two-dimensional subset of the famous mall customers dataset (available here). It should be noted that the k-means algorithm certainly works in more than two dimensions (the Euclidean distance metric easily generalises to higher dimensional space), but for the purposes of visualisation, this post will only implement k-means to cluster 2D data. A plot of the raw data is shown below: By eye, it seems that there are five different clusters. The question is whether our k-means algorithm can successfully figure this out. We are actually going to cheat a little bit and tell the algorithm that there will be five clusters (i.e $k=5$). There are methods to avoid this, but they essentially involve testing different values of k and finding the best fit, so they don’t add much value to this post. ### C++ preambles Firstly, we need to define our imports and namespace. #include <ctime> // for a random seed #include <vector> using namespace std; In general, using namespace std is not considered best practice (particularly in larger projects) because it can lead to ambiguity (for example, if I define a function or variable called vector) and unexpected behaviour. However, the alternative is to have things like std::cout or vector::vector everywhere – for an educational post, the loss in clarity is worse than the potential ambiguity. ### Representing a datapoint To represent a datapoint for this program, we will be using a C++ struct. Structs caused me a great deal of confusion when I was learning about C++ because I couldn’t quite figure out how they differ from classes. As it happens, they are really quite similar – possibly the only relevant difference is that members of a struct are public by default. 
In any case, I would think of a struct as a way of defining a more complicated data type, though it is more than just a container for primitive datatypes because you can also define some functionality. struct Point { double x, y; // coordinates int cluster; // no default cluster double minDist; // default infinite dist to nearest cluster Point() : x(0.0), y(0.0), cluster(-1), minDist(__DBL_MAX__) {} Point(double x, double y) : x(x), y(y), cluster(-1), minDist(__DBL_MAX__) {} double distance(Point p) { return (p.x - x) * (p.x - x) + (p.y - y) * (p.y - y); } }; The first few lines are self-explanatory: we define the coordinates of a point, as well as the cluster it belongs to and the distance to that cluster. Annoyingly, you can’t directly set the default value in the struct (e.g double x = 0) – you need to do this via initialisation lists. Initially the point belongs to no cluster, so we arbitrarily set that to -1. Accordingly, we must set minDist to infinity (or the next best thing, __DBL_MAX__). We also define a distance function, which computes the (square) euclidean distance between this point and another. Our Point struct can be used as follows: // Define new point at the origin Point p1 = Point(0.0, 0.0); cout << p1.x << endl; // print the x coordinate // Define another point and compute square distance Point p2 = Point(3.0, 4.0); cout << p1.distance(p2) << endl; // prints 25.0 If we wanted to represent a datapoint in p-dimensions, we could replace the x and y members with a vector or array of doubles, with each entry corresponding to a coordinate in a given dimension. The distance function would similarly need to be modified to loop over the vectors/arrays and sum all of the squared differences. ### Reading in data from a file Having decided how we are going to store datapoints within our C++ script, we must then read in the data from a CSV file. This is rather unexciting, but actually took me a long time to figure out. Essentially, we loop over all the lines in the CSV file and break the down based on the commas. vector<Point> readcsv() { vector<Point> points; string line; ifstream file("mall_data.csv"); while (getline(file, line)) { stringstream lineStream(line); string bit; double x, y; getline(lineStream, bit, ','); x = stof(bit); getline(lineStream, bit, '\n'); y = stof(bit); points.push_back(Point(x, y)); } return points; } Note that the readcsv function returns a vector of points. I decided to use a vector instead of an array because vectors handle all of the memory management for you (though are slightly less performant) and are functionally quite similar to python lists. ### Pointers: an old enemy revisited Suppose your friend wants to visit your house. You have two options (the relevance of this thought experiment will be clear shortly.) 2. Hire a team of builders to replicate your house brick-for-brick right outside their front door. The readcsv function returns a vector of points. One might assume that we can then just pass this to whatever k-means function we define and be done with it. vector<Point> points = readcsv(); // read from file kMeansClustering(points); // pass values to function However, we must be aware that depending on the size of our dataset, points might take up quite a large chunk of memory, so we must handle it carefully to be efficient. The problem with the above code is that we are passing the values of the points to the function, i.e we are making a copy of them. This is inefficient from a memory perspective. 
Luckily, C++ offers a way around this, called pass by reference. Essentially, instead of giving the value of the points vector to the function, we pass the location (read: postcode) of the points vector in memory. vector<Point> points = readcsv(); kMeansClustering(&points); // pass address of points to function The prototype of our kMeansClustering function is then as follows: void kMeansClustering(vector<Point>* points, int epochs, int k); Because we are now passing an address (and thus not technically a vector<Point>), we must include an asterisk. Read the first argument as “a reference to a vector of Point objects”. I have also added two other arguments: • epochs is the number of iterations over which we will do our main k-means loop • k is the number of clusters. ### Initialising the clusters We first need to assign each point to a cluster. The easiest way of doing this is to randomly pick 5 “marker” points and give them labels 1-5 (or actually 0-4 since our arrays index from 0). The code for this is quite simple. We will use another vector of points to store the centroids (markers), where the index of the centroid is its label. We then select a random point from the points vector we made earlier (from reading in the csv) and set that as a centroid. vector<Point> centroids; srand(time(0)); // need to set the random seed for (int i = 0; i < k; ++i) { centroids.push_back(points->at(rand() % n)); } One brief C++ note: because points is actually a pointer rather than a vector, in order to access an item at a certain index we can’t do points[i] – we have to first ‘dereference’ it by doing (*points)[i]. This is quite ugly, so fortunately we have the syntactic shortcut of: points->at[i]. Once the centroids have been initialised, we can begin the k-means algorithm iterations. We now turn to the “meat” of k-means: assigning points to a cluster and computing new centroids. ### Assigning points to a cluster The logic here is quite simple. We loop through every datapoint and assign it to its nearest centroid. Because there are k centroids, the result is a partition of the datapoints into k clusters. In terms of the actual code, I had to spend some time thinking about the best way to represent that a point belonged to a certain cluster. In my python implementation (now many years old), I used a dictionary with cluster IDs as keys and a list of points as values. However, for this program I decided to use a quicker solution: I gave each point a cluster attribute which can hold an integer ID. We then set this ID to the index of the cluster that is closest to the point. for (vector<Point>::iterator c = begin(centroids); c != end(centroids); ++c) { // quick hack to get cluster index int clusterId = c - begin(centroids); for (vector<Point>::iterator it = points->begin(); it != points->end(); ++it) { Point p = *it; double dist = c->distance(p); if (dist < p.minDist) { p.minDist = dist; p.cluster = clusterId; } *it = p; } } ### Computing new centroids After our first iteration, the clusters are really quite crude – we’ve randomly selected 5 points then formed clusters based on the closest random point. There’s no reason why this should produce meaningful clusters and indeed it doesn’t. However, the heart of k-means is the update step, wherein we compute the centroids of the previous cluster and subsequently reassign points. As previously stated, we are going to majorly simplify the problem by computing the centroid of the points within a cluster rather than the partition of space. 
Thus all we really have to do is compute the mean coordinates of all the points in a cluster. To do this, I created two new vectors: on to keep track of the number of points in each cluster and the other to keep track of the sum of coordinates (then the average is just the latter divided by the former). vector<int> nPoints; vector<double> sumX, sumY; // Initialise with zeroes for (int j = 0; j < k; ++j) { nPoints.push_back(0); sumX.push_back(0.0); sumY.push_back(0.0); } We then iterate through all the points and increment the correct indices of the above vectors (based on the point’s cluster ID). Importantly, now is a convenient time to reset the minDist attribute of the point, so that the subsequent iteration works as intended. // Iterate over points to append data to centroids for (vector<Point>::iterator it = points->begin(); it != points->end(); ++it) { int clusterId = it->cluster; nPoints[clusterId] += 1; sumX[clusterId] += it->x; sumY[clusterId] += it->y; it->minDist = __DBL_MAX__; // reset distance } // Compute the new centroids for (vector<Point>::iterator c = begin(centroids); c != end(centroids); ++c) { int clusterId = c - begin(centroids); c->x = sumX[clusterId] / nPoints[clusterId]; c->y = sumY[clusterId] / nPoints[clusterId]; } Now that we have the new centroids, the k-means algorithm repeats. We recompute distances and reassign points to their nearest centroids. Then we can find the new centroids, recompute distances etc.. ### Writing to a file One final detail: after all of our k-means iterations, we would like to be able to write the output to a file so that we can analyse the clustering. This is quite simple - we will just iterate through the points then print their coordinates and cluster IDs to a csv file. ofstream myfile; myfile.open("output.csv"); myfile << "x,y,c" << endl; for (vector<Point>::iterator it = points->begin(); it != points->end(); ++it) { myfile << it->x << "," << it->y << "," << it->cluster << endl; } myfile.close(); ## Testing In order to test that my k-means implementation was working properly, I wrote a simple plotting script. I am somewhat embarrassed (in the context of a C++ post) to say that I wrote this in python. import matplotlib.pyplot as plt import pandas as pd import seaborn as sns # Before clustering df.columns = ["Annual income (k$)", "Spending Score (1-100)"] sns.scatterplot(x=df["Annual income (k$)"], y=df["Spending Score (1-100)"]) plt.title("Scatterplot of spending (y) vs income (x)") # After clustering plt.figure() sns.scatterplot(x=df.x, y=df.y, hue=df.c, palette=sns.color_palette("hls", n_colors=5)) plt.xlabel("Annual income (k$)") plt.ylabel("Spending Score (1-100)") plt.title("Clustered: spending (y) vs income (x)") plt.show() The result is quite pretty and it shows that – bar a few contentious points around the centre cluster – the clustering has worked as expected. ## Conclusion In conclusion, we have successfully implemented a simple k-means algorithm in C++. Obviously there is much that could be improved about my program. Firstly, many simplifications were made, for example, we restricted the problem to two dimensions and also pre-set the number of clusters. However, there are more subtle issues that we neglected to discuss, including the random initialisation which may result in suboptimal clusters. In fact, there are algorithms like k-means++ that offer major improvements over k-means by specifying better procedures to find the initial clusters. 
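To make the k-means++ idea concrete, here is a rough sketch of its seeding step in Python (this is not part of the C++ program above; numpy is assumed, and only the initialisation is shown, not a full k-means++ implementation):

```python
# Rough sketch of k-means++ seeding. Idea: pick the first centroid uniformly at
# random, then pick each subsequent centroid with probability proportional to
# the squared distance from the nearest centroid chosen so far.
import numpy as np

def kmeans_pp_init(points, k, rng=None):
    rng = rng or np.random.default_rng()
    centroids = [points[rng.integers(len(points))]]
    for _ in range(k - 1):
        # squared distance from each point to its nearest existing centroid
        d2 = np.min(
            [np.sum((points - c) ** 2, axis=1) for c in centroids], axis=0
        )
        probs = d2 / d2.sum()
        centroids.append(points[rng.choice(len(points), p=probs)])
    return np.array(centroids)
```

The design point is that the next centroid is drawn with probability proportional to the squared distance from the nearest centroid chosen so far, which tends to spread the initial markers across well-separated blobs instead of clumping them together.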
It is also worth mentioning the fundamental difficulties with k-means: it acutely suffers from the ‘curse of dimensionality’, as data becomes more sparse in high dimensions, and it is relatively inefficient since there are four loops (over iterations, points, clusters, and dimensions). However, k-means is often a great solution for quickly clustering small data and the algorithm is just about simple enough to explain to business stakeholders. In any case, the merits/disadvantages of k-means aside, writing this program has given me a lot more confidence in C++ and I am keen to develop a more advanced understanding. I think it’s a good complement to my current interests in scientific/financial computing and it is pleasing to see that I am making more progress than I was a few years back. # What we learnt building an enterprise-blockchain startup It has been almost a year since the idea of HyperVault was first conceived. In that time, we built HyperVault up from a single sentence, gained and lost team members along the way, developed a functional proof-of-concept over the short winter holidays, crashed out of a few competitions (also won a couple of prizes), and finally decided to open source. This post aims to be an honest reflection on the journey – highlighting both the good bits and the times we wanted to give up. ## What is HyperVault? At a high level, the idea behind HyperVault is to use distributed ledger technology (we preferred to avoid the B-word) to provide a secure access layer to sensitive digital resources. If your company has some super-secret documents, what’s the best way of sharing them with other people? From our market research, we found that most people would just chuck it into an email as an attachment. This is terrible from a security standpoint: system administrators can see everything that goes through and you have absolutely no access record or control whatsoever once the email goes into your ‘sent’ mailbox. HyperVault provides a provably tamper-proof system with smart sharing features, such as scheduled or location-based access. ## The Beginning: October-November 2018 HyperVault was born at a Hackbridge cluster – a fortnightly meet-up for Cambridge students who want to build side projects. Li Xi had been thinking about the applications of private blockchains in file sharing and met Robert and a group of other undergrads who were interested in joining the team. By far the hardest part of our HyperVault journey was balancing our desire to build something with the Cambridge term structure. An 8-week term, packed with lectures and supervisions, doesn’t give you much time for entrepreneurial activities. However, we were excited enough about the idea to have weekly team meetings in a comfortable room at Trinity College, where we focused on defining what exactly we wanted the product to be. At the same time, we were on a constant lookout for startup competitions that we could enter. Fortunately, Li Xi and Robert were also on the Cambridge Blockchain Society – one of whose sponsors (Stakezero Ventures) was launching the Future of Blockchain competition. This proved to be critical to HyperVault’s progress because it gave us a “fixed point” on the timeline – whatever happened, we needed to have a working product and polished pitch by March 2019, when the competition was scheduled to happen. ## Winter, literally and figuratively. December was tough. 
We had to transition from talking at a high level about what kind of features we wanted to offer, how cool it’d be if we could do XYZ, into actually building a working prototype. To ensure that everyone was on the same page regarding the expected time commitments, in the last couple of weeks of term, we made a detailed project plan that outlined the deliverables, due dates, and expected time commitments (we concluded that it’d be at least 12 hours a week each). Three team members dropped out at this point. There weren’t any hard feelings – it is understandable for people to have conflicting priorities and it is better that they can say so upfront. This left Li Xi and Robert the considerable task of building a proof of concept, with Andrew taking the lead on the business plan. With the reduced manpower (and a good helping of classic underestimation), the weekly commitment was closer to 30 hours. It was a real hustle having to balance this with academics (and of course, winter festivities), but by the end of the holidays, we had our proof-of-concept. The main lesson to be drawn from this is the importance of having that “hard conversation” with your team. Give people the option to call it quits at the start, but have a “no ifs, no buts” understanding that come what may, your prototype/MVP must be done by a certain date. ## The build-up to the competition (January – March) At this stage, our proof-of-concept was something like a Google Drive or Dropbox, except with a blockchain backend. We were on track, so moved into “Phase 2” – doing real market research and nailing down a business plan. Andrew tapped into his network at the Judge Business School to get feedback from his peers, which was incredibly useful. One of them mentioned that our product offering was quite similar to something they had used in a previous job. This was very worrying, and after some further googling we were deeply troubled to find out that there was a whole class of products – virtual data rooms – that were doing exactly what we had offered. Robert and Li Xi were a little discouraged by this, but Andrew reminded us about a very important fact of entrepreneurship: you never have to be first, you just have to do things better. Additionally, one of the oft-repeated pieces of advice from successful entrepreneurs is the need to focus on a very specific niche (a ‘vertical’) rather than trying to build a one-size-fits-all product. We saw that the incumbents had dominated the market for financial companies, so we decided to devote our attention to legal companies. With this in mind, we sat down as a team to really think through our business plan – we found the concept of a ”lean business canvas” enthralling – its almost scientific approach to iterating your product via hypothesis and experimentation really appealed to us (indeed, all three of us have a STEM background). ## Competition time (March, April) Coming up on March, our academic commitments were noticeably picking up. But there was no time to waste, as it was competition-season. We agreed as a team that for any competition, there would need to be at least two of us there (three was unrealistic). The first competition we entered did not go so well – we failed to convince the judges, who traditionally favoured biotech ideas, that blockchain was anything more than a fad. Despite making it to the finals, it was clear that there was very little engagement with our idea. 
At the time, we blamed the judges for being too closed-minded, but in retrospect, we see that it’s really our responsibility in the pitch to sell the vision we have. Nevertheless, we used the feedback to refine our business plan and tidy up our proof-of-concept. The main competition in March, however, was much more engaging and challenging (more than 100 teams participated). Due to a last-minute change of competition dates, Li Xi cut short his travel plans and flew back from Geneva for the Finals – luckily, he made it back just in time for the pitch with only a few hours to spare and the pitch went quite smoothly. The judges were intrigued by the idea and asked us some particularly challenging technical questions regarding encryption, which Li Xi dealt with deftly. Coming out of it, we won the £2000 Future of Blockchain Nucypher prize, which was a strong vote of confidence. The last competition we attended was the R3 Global Pitch Competition. R3 is an enterprise blockchain software firm working with a large ecosystem of more than 300 of the worlds’ largest companies. Even before the submission of our deck, R3 provided excellent guidance by connecting us to their legal teams and having numerous calls with us to refine our business plan, for which we were immensely grateful. We did well in the competition, winning a place in the global finals and subsequently being offered free office space at R3’s office in London to continue building HyperVault. ## The end? With this opportunity in mind, we were talking to VCs about the idea of scaling up. In particular, we had good chemistry with the folks at Stakezero and Wilbe Ventures, and they gave us a lot of very thoughtful advice, emphasising the importance of not just identifying a vertical, but also targeting the size of the customer and the key stakeholders in a company that could make things happen. We hadn’t put too much thought into this before; I guess we were implicitly making the naïve assumption that if we built a cool product, it would sell itself. Up until that point, in our team meetings, we had just said that we’d target “SMEs” (small and medium-sized enterprises). But after a lot more research and discussion, we came to the troubling realisation that our ideal customer was, in fact, an “elephant” – an organisation that would spend more than$100,000 a year on our product (see this excellent short blog post for more) – they are the only ones who have 1) enough secure documents and 2) a need to share these documents with scalability. Regardless of our successes in talking with smaller companies, big companies are an entirely different ball game. You need enterprise sales teams and a polished, audited product. In principle, there was nothing stopping us. We could have used R3’s office space to turn our proof of concept into a sleek MVP and earnestly begin the search for venture capital on the back of that. But ultimately, none of us felt particularly excited by this prospect. A startup is hard work – many hours spent fixing bugs, refining pitches, cold calling hundreds of companies, etc. The only thing keeping it together is the deep desire to build something and a fundamental belief that your product can “change the world”. 
Faced with the prospect of an enterprise sales cycle and a product which is really an improvement over current technology instead of something brand new (don’t get us wrong, we do still think it’s a major improvement), we realised that HyperVault as a commercial startup had run its course, at least until we had more experience. ## What comes next We truly believe that distributed ledgers applied to file-sharing could be a significant improvement over current technologies. To that end, we decided as a team that the best way forward would be to open-source our codebase, such that it becomes an example of a real-world enterprise blockchain app for others to build on. We published most of our code on GitHub a few months ago and have already had someone reach out to us expressing interest in incorporating HyperVault into their product. We plan to continue cleaning up the codebase within the next couple of months, including comprehensive documentation and a contributors’ guide. Do leave a clap or comment below if you think this is something we should devote more time to! In any case, rather than inserting some banal Edison quote about failure, we’d like to end this post simply by saying that we are extremely grateful for the friendships we made, the people we met, and the chance to tell ourselves that we built something from nothing. This post has been cross-posted on medium. Check out our website at hypervault.tech to learn more! # Graph algorithms and currency arbitrage, part 2 In the previous post (which should definitely be read first!) we explored how graphs can be used to represent a currency market, and how we might use shortest-path algorithms to discover arbitrage opportunities. Today, we will apply this to real-world data. It should be noted that we are not attempting to build a functional arbitrage bot, but rather to explore how graphs could potentially be used to tackle the problem. Later on we’ll discuss why our methodology is unlikely to result in actionable arbitrage. Rather than using fiat currencies as presented in the previous post, we will examine a market of cryptocurrencies because it is much easier to acquire crypto order book data. We’ll narrow down the problem further by making two more simplifications. Firstly, we will focus on arbitrage within a single exchange. That is, we’ll look to see if there are pathways between different coins on an exchange which leave us with more of a coin than we started with. Secondly, we will only be considering a single snapshot of data from the exchange. Obviously markets are highly dynamic, with thousands of new bids and asks coming in each second. A proper arbitrage system needs to constantly be scanning for opportunities, but that’s out of the scope of this post. With all this in mind, the overall implementation strategy was as follows: 1. For a given exchange, acquire the list of pairs that will form the vertices. 3. Process these values accordingly, assigning them to directed edges on the graph. 4. Using Bellman-Ford, find and return negative-weight cycles if they exist. 5. Calculate the arbitrage that these negative-weight cycles correspond to. The full code for this project can be found in this GitHub repo. If you find this post interesting, don’t forget to leave a star! ## Raw data For the raw data, I decided to use the CryptoCompare API which has a load of free data compiled across multiple exchanges. To get started, you’ll need to register to get a free API key. As mentioned previously, we will only look at data from Binance. 
I chose Binance not because it has a large selection of altcoins, but because most altcoins can trade directly with multiple pairs (e.g BTC, ETH, USDT, BNB). Some exchanges have many altcoins but you can only buy them with BTC – this is not well suited for arbitrage. Firstly, we need to find out which pairs Binance offers. This is done with a simple call (AUTH is your API key string):

import requests
import json

def top_exchange_pairs():
    url = (
        "https://min-api.cryptocompare.com/data/v3/all/"
        + "exchanges?topTier=true&api_key=" + AUTH
    )
    r = requests.get(url)
    with open("pairs_list.json", "w") as f:
        json.dump(r.json(), f)

This is an excerpt from the resulting JSON file – for each exchange, the pairs field lists all other coins that the key coin can be traded with:

"Data": {
    "Binance": {
        "isActive": true,
        "isTopTier": true,
        "pairs": {
            "ETH": ["PAX", "TUSD", "USDT", "USDC", "BTC"],
            "ONGAS": ["BTC", "BNB", "USDT"],
            "PHX": ["ETH", "BNB", "BTC"]
        }
    },
    "Coinbase": {
        "isActive": true,
        "isTopTier": true,
        "pairs": {
            "ETH": ["DAI", "USD", "USDC", "EUR", "GBP", "BTC"],
            "BCH": ["BTC", "GBP", "EUR", "USD"]
        }
    }
}

I then filtered out coins with fewer than three tradable pairs. These coins are unlikely to participate in arbitrage – we would rather have a graph that is more connected.

def binance_connected_pairs():
    with open("pairs_list.json", "r") as f:
        data = json.load(f)
    pairs = data["Data"]["Binance"]["pairs"]
    return {k: v for k, v in pairs.items() if len(v) > 3}

We are now ready to download a snapshot of the available exchange rates for each of these coins.

import os
from tqdm import tqdm  # progress bar

def download_snapshot(pair_dict, outfolder):
    if not os.path.exists(outfolder):
        os.makedirs(outfolder)
    for p1, p2s in tqdm(pair_dict.items()):
        url = (
            "https://min-api.cryptocompare.com/data/"
            + f"ob/l1/top?fsyms={p1}&tsyms={','.join(p2s)}"
            + "&e=Binance&api_key=" + AUTH
        )
        r = requests.get(url)
        with open(f"{outfolder}/{p1}_pairs_snapshot.json", "w") as f:
            json.dump(r.json(), f)

We can then run all of the above functions to produce a directory full of the exchange rate data for the listed pairs.

top_exchange_pairs()
connected = binance_connected_pairs()
download_snapshot(connected, "binance_snapshot")  # any output folder name will do

"EOS": {
    "BNB": { "BID": ".2073", },
    "BTC": { "BID": ".0007632", },
    "ETH": { "BID": ".02594", },
    "USDT": { "BID": "7.0441", },
    "PAX": { "BID": "7.0535", }
}

This excerpt reveals something that we glossed over completely in the previous post. As anyone who has tried to exchange currency on holiday will know, there are actually two exchange rates for a given currency pair depending on whether you are buying or selling the currency. In trading, these two prices are called the bid (the current highest price someone will buy for) and the ask (the current lowest price someone will sell for). As it happens, this is very easy to deal with in the context of graphs.

## Preparing the data

Having downloaded the raw data, we must now prepare it so that it can be put into a graph. This effectively means parsing it from the raw JSON and putting it into a pandas dataframe. We will arrange it in the dataframe such that it constitutes an adjacency matrix:

• Column ETH row BTC is the bid: • i.e someone will pay x BTC to buy my 1 ETH • this is then the weight of the ETH $\to$ BTC edge.
• Column BTC row ETH is the ask: • i.e I have to pay y BTC to buy someone's 1 ETH • the reciprocal of this is the weight of the BTC $\to$ ETH edge.

(A small numerical illustration of this bid/ask bookkeeping follows below.)
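Here is that illustration as a tiny Python snippet (the ETH/BTC quotes are made-up numbers, not values from the snapshot):

```python
import numpy as np

bid = 0.0520  # hypothetical: the best buyer pays 0.0520 BTC for 1 ETH
ask = 0.0525  # hypothetical: the best seller wants 0.0525 BTC for 1 ETH

w_eth_btc = -np.log(bid)      # weight of the ETH -> BTC edge (sell 1 ETH at the bid)
w_btc_eth = -np.log(1 / ask)  # weight of the BTC -> ETH edge (buy ETH at the ask)

# Going ETH -> BTC -> ETH multiplies bid * (1/ask) < 1, so the cycle weight is
# positive: crossing the spread alone never looks like arbitrage, as expected.
print(w_eth_btc + w_btc_eth)  # small positive number
```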
I chose this particular row-column scheme because it results in intuitive indexing: df.X.Y is the amount of Y gained by selling 1 unit of X, and df.A.B * df.B.C * df.C.D is the total amount of D gained by trading 1 unit of A when trading via $A \to B \to C \to D$. The column headers will be the same as the row headers, consisting of all the coins we are considering. The function that creates the adjacency matrix is shown here:

def create_adj_matrix(pair_dict, folder, outfile="snapshot.csv"):
    # Union of 'from' and 'to' pairs
    flatten = lambda l: [item for sublist in l for item in sublist]
    keys = pair_dict.keys()
    values = pair_dict.values()
    all_pairs = list(set(keys).union(flatten(values)))
    # Create empty df
    df = pd.DataFrame(columns=all_pairs, index=all_pairs)
    for p1 in pair_dict.keys():
        with open(f"{folder}/{p1}_pairs_snapshot.json", "r") as f:
            res = json.load(f)
            quotes = res["Data"]["RAW"][p1]
        for p2 in quotes:
            try:
                df[p1][p2] = float(quotes[p2]["BID"])
            except KeyError:
                print(f"Error for {p1}/{p2}")
                continue
    df.to_csv(outfile)

## Putting the data into a graph

We will be using the NetworkX package, an intuitive yet extremely well documented library for dealing with all things graph-related in python. In particular, we will be using nx.DiGraph, which is just a (weighted) directed graph. I was initially concerned that it'd be difficult to get the data in: python libraries often adopt their own weird conventions and you have to modify your data so that it is in the correct format. This was not really the case with NetworkX, it turns out that we already did most of the hard work when we put the data into our pandas adjacency matrix. Firstly, we take negative logs as discussed in the previous post. Secondly, in our dataframe we currently have NaN whenever there is no edge between two vertices. To make a valid nx.DiGraph, we need to set these to zero. Lastly, we transpose the dataframe because NetworkX uses a different row/column convention. We then pass this processed dataframe into the nx.DiGraph constructor. Summarised in one line:

g = nx.DiGraph(-np.log(df).fillna(0).T)

## Bellman-Ford

To implement Bellman-Ford, we make use of the funky defaultdict data structure. As the name suggests, it works exactly like a python dict, except that if you query a key that is not present you get a certain default value back. The first part of our implementation is quite standard, as we are just doing the $n - 1$ edge-relaxations where n is the number of vertices. But because the 'classic' Bellman-Ford does not actually return negative-weight cycles, the second part of our implementation is a bit more complicated. The key idea is that if after $n-1$ relaxations, there is an edge that can be relaxed further then that edge must be on a negative weight cycle. So to find this cycle we walk back along the predecessors until a cycle is detected, then return the cyclic portion of that walk. In order to prevent subsequent redundancy, we mark these vertices as 'seen' via another defaultdict. This procedure adds a linear cost to Bellman-Ford since we have to iterate over all the edges, but the asymptotic complexity overall remains $O(VE)$.
def bellman_ford_return_cycle(g, s):
    n = len(g.nodes())
    d = defaultdict(lambda: math.inf)  # distances dict
    p = defaultdict(lambda: -1)  # predecessor dict
    d[s] = 0
    for _ in range(n - 1):
        for u, v in g.edges():
            # Bellman-Ford relaxation
            weight = g[u][v]["weight"]
            if d[u] + weight < d[v]:
                d[v] = d[u] + weight
                p[v] = u  # update pred

    # Find cycles if they exist
    all_cycles = []
    seen = defaultdict(lambda: False)
    for u, v in g.edges():
        if seen[v]:
            continue
        # If we can relax further there must be a neg-weight cycle
        weight = g[u][v]["weight"]
        if d[u] + weight < d[v]:
            cycle = []
            x = v
            while True:
                # Walk back along preds until a cycle is found
                seen[x] = True
                cycle.append(x)
                x = p[x]
                if x == v or x in cycle:
                    break
            # Slice to get the cyclic portion
            idx = cycle.index(x)
            cycle.append(x)
            all_cycles.append(cycle[idx:][::-1])
    return all_cycles

As a reminder, this function returns all negative-weight cycles reachable from a given source vertex (returning the empty list if there are none). To find all negative-weight cycles, we can simply call the above procedure on every vertex then eliminate duplicates.

def all_negative_cycles(g):
    all_paths = []
    for v in g.nodes():
        all_paths.append(bellman_ford_return_cycle(g, v))
    flattened = [item for sublist in all_paths for item in sublist]
    return [list(i) for i in set(tuple(j) for j in flattened)]

## Tying it all together

The last thing we need is a function that calculates the value of an arbitrage opportunity given a negative-weight cycle on a graph. This is easy to implement: we just find the total weight along the path then exponentiate the negative total (because our weights are the negative log of the exchange rates).

def calculate_arb(cycle, g, verbose=True):
    total = 0
    for (p1, p2) in zip(cycle, cycle[1:]):
        total += g[p1][p2]["weight"]
    arb = np.exp(-total) - 1
    if verbose:
        print("Path:", cycle)
        print(f"{arb*100:.2g}%\n")
    return arb

def find_arbitrage(filename="snapshot.csv"):
    df = pd.read_csv(filename, header=0, index_col=0)
    g = nx.DiGraph(-np.log(df).fillna(0).T)
    if nx.negative_edge_cycle(g):
        print("ARBITRAGE FOUND\n" + "=" * 15 + "\n")
        for p in all_negative_cycles(g):
            calculate_arb(p, g)
    else:
        print("No arbitrage opportunities")

Running this function gives the following output:

ARBITRAGE FOUND
===============
Path: ['USDT', 'BAT', 'BTC', 'BNB', 'ZEC', 'USDT'] 0.087%
Path: ['BTC', 'XRP', 'USDT', 'BAT', 'BTC'] 0.05%
Path: ['BTC', 'BNB', 'ZEC', 'USDT', 'BAT', 'BTC'] 0.087%
Path: ['BNB', 'ZEC', 'USDT', 'BAT', 'BTC', 'BNB'] 0.087%
Path: ['USDT', 'BAT', 'BTC', 'XRP', 'USDT'] 0.05%

0.09% is not exactly a huge amount of money, but it is still risk-free profit, right?

## Why wouldn't this work?

Notice that we haven't mentioned exchange fees at any point. In fact, Binance charges a standard 0.1% commission on every trade. It is easy to modify our code to incorporate this: we just multiply each exchange rate by 0.999. But we don't need to compute anything to see that we would certainly be losing much more money than gained from the arbitrage. Secondly, it is likely that this whole analysis is flawed because of the way the data was collected. The function download_snapshot makes a request for each coin in sequence, taking a few seconds in total. But in these few seconds, prices may move – so really the above "arbitrage" may just be a result of our algorithm selecting some of the price movements. This could be fixed by using timestamps provided by the exchange to ensure that we are looking at the order book for each pair at the exact same moment in time.
Thirdly, we have assumed that you can trade an infinite quantity of the bid and ask. An order consists of a price and a quantity, so we will only be able to fill a limited quantity at the ask price. Thus in practice we would have to look at the top few levels of the order book and consider how much of it we’d eat into. It is not difficult to extend our methodology to arb between different exchanges. We would just need to aggregate the top of the order book from each exchange, then put the best bid/ask onto the respective edges. Of course, to do run this strategy live would require us to manage our inventory not just on a currency level but per currency per exchange, and factors like the congestion of the bitcoin network would come into play. Lastly, this analysis has only been for a single snapshot. A proper arbitrage bot would have to constantly look for opportunities simultaneously across multiple order books. I think this could be done by having a websocket stream which keeps the graph updated with the latest quotes, and using a more advanced method for finding negative-weight cycles that does not involve recomputing the shortest paths via Bellman-Ford. ## Conclusion All this begs the question: why is it so hard to find arbitrage? The simple answer is that other people are doing it smarter, better, and (more importantly) faster. With highly optimised algorithms (probably implemented in C++), ‘virtual colocation’ of servers, and proper networking software/hardware, professional market makers are able to exploit these simple arbitrage opportunities extremely rapidly. In any case, the point of this post was not to develop a functional arbitrage bot but rather to demonstrate the power of graph algorithms in a non-standard use case. Hope you found it as interesting as I did!
2019-12-07 18:36:37
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4243345260620117, "perplexity": 1364.9853452351401}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540501887.27/warc/CC-MAIN-20191207183439-20191207211439-00139.warc.gz"}
http://www.physicsforums.com/showthread.php?p=116391
# torque and force by burak_ilhan Tags: force, torque As you all know, torque in rotational motion has a similarity with force in translational motion. But torque has the same unit as energy. It seems to me that there is a problem. Any explanation you have? (Please, something different than "they are defined that way!!") Sci Advisor: In translational motion, work done (energy) is $$E=\int Fdx$$ In the same manner, in rotational motion, $$E=\int\tau d\theta$$ Since $\theta$ is dimensionless, torque $\tau$ has the same units as energy. If it helps in keeping things apart, you can think of the units of torque as being, say, joules per radian.
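A quick worked example of the distinction (numbers chosen purely for illustration): a constant torque of $5\ \mathrm{N\,m}$ acting through $2\ \mathrm{rad}$ does

$$W = \int_0^{2} (5\ \mathrm{N\,m})\, d\theta = 10\ \mathrm{J}$$

of work, so reading the torque as "$5$ joules per radian" is consistent, while the $10\ \mathrm{J}$ of work is genuinely an energy.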
2014-03-11 19:26:37
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7061367630958557, "perplexity": 1198.4389180805183}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1394011250185/warc/CC-MAIN-20140305092050-00057-ip-10-183-142-35.ec2.internal.warc.gz"}
https://www.physicsforums.com/threads/work-energy-theorem-and-kinematics.742237/
# Work Energy Theorem and Kinematics 1. Mar 8, 2014 ### layout4it 1. The problem statement, all variables and given/known data A small steel ball of mass .0283kg is placed on the end of a plunger of length .0051m attached to a spring 1.88m above the ground. The spring is pre-compressed .0011m and has a spring constant of 177 N/m. The plunger is then angled on a ramp 45° above the horizontal, and is pressed in to compress the spring an additional .0088m. The plunger is then released extending to the end of the ramp and sending the ball into the air. Assuming no friction and no air resistance how far will the ball fly before hitting the ground? 2. Relevant equations Kinematics $Δx = v_0 t + 1/2 a t^2$ $v^2 = v_0^2 + 2aΔx$ $v = v_0 + at$ Work/Energy $W = ΔK$ $KE= 1/2 m v^2$ $PEspring = 1/2 k Δx^2$ $PEgravity = mgh$ 3. The attempt at a solution I split the problem into 3 parts: Launch, End of Launch → Max Height, Max Height → Ground Launch $W = ΔK$ (no friction or air resistance) $W=0$ $∴K_0 = K_f$ $KE_0 + PEgravity_0 + PEspring_0 = KE_f + PEgravity_f + PEspring_f$ $1/2 m v_0^2 + mgh_0 + 1/2 k Δx_0^2 = 1/2 m v_f^2 + mgh_f + 1/2 k Δx_f^2$ Calling $h_0$ the ground $0 + mgh_0 + 1/2 k Δx_0^2 = 1/2 m v_f^2 + mgh_f + 1/2 k Δx_f^2$ Multiply both sides by 2 to get rid of the fractions $2mgh_0 + k Δx_0^2 = m v_f^2 + 2mgh_f + kΔx_f^2$ Bring Like Terms Together $k Δx_0^2 - kΔx_f^2 = mv_f^2 + 2mgh_f - 2mgh_0$ Factor Out Mass $k Δx_0^2 - kΔx_f^2 = m(v_f^2 +2gh_f - 2gh_0)$ Divide Both Sides By Mass $\frac{(k Δx_0^2 - kΔx_f^2)}{m} = v_f^2 +2gh_f - 2gh_0$ Isolate $V_f$ $\frac{(k Δx_0^2 - kΔx_f^2)}{m} +2gh_0 - 2gh_f = v_f^2$ Solve for $V_f$ $v_{flight} = \sqrt{\frac{(k Δx_0^2 - kΔx_f^2)}{m} +2gh_0 - 2gh_f }$ (I name it $v_{flight}$ for simplicity) End of Launch → Max Height $v_{0y} = v_{flight} \sin{45°}$ $v_{fy} = v_{0y} + a_y t$ $t_1 = \frac {v_{fy} - v_{0y}} {a_y}$ $t_1 = \frac {0 - v_{0y}} {a_y}$ $t_1 = \frac {-v_{0y}} {a_y}$ Max Height → Ground $v_{fy}^2 = v_{0y}^2 + 2a_yΔy$ $v_{fy}^2 = v_{0y}^2 + 2a_yΔy$ $v_{fy}^2 = 0 + 2a_yΔy$ $v_{fy} = \sqrt{2a_yΔy}$ Δy = max height to the ground (+) $a_y$ = gravity (+) $v = v_0 + at$ $v_{fy} = v_{0y} + a_y t_2$ $v_{fy} = 0 + a_y t_2$ $t_2 = \frac{v_{fy}} {a_y}$ Final Distance $v_{0x}= v_{flight} \cos{45°}$ $Δx = v_{0x} t + 1/2 a t^2$ $Δx = v_{0x} t + 0$ $Δx = v_{0x} (t_1 + t_2)$ Last edited: Mar 8, 2014 2. Mar 8, 2014 ### haruspex That all looks right. Do you have a question? 3. Mar 8, 2014 ### layout4it What should I plug in for the $Δx_0$ and $Δx_f$ for $PE_{spring}$? 4. Mar 8, 2014 ### haruspex It's not entirely clear, but I think the idea is that the spring starts off compressed by .0011+.0088 and finishes compressed by .0011. I.e. there is some end stop preventing it being compressed by anything less than .0011. 5. Mar 9, 2014 ### layout4it When I plug in the data I get the final distance in the x direction to be about .4m which seems unreasonable.
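For what it's worth, here is a rough numerical check of the same setup (a sketch only: it uses haruspex's reading of the compressions, neglects the small height gained along the plunger, and takes the launch height as 1.88 m):

```python
import math

# given data from the problem statement
m = 0.0283            # kg
k = 177.0             # N/m
g = 9.81              # m/s^2
x0 = 0.0011 + 0.0088  # initial compression (pre-compression + extra push), m
xf = 0.0011           # final compression when the plunger stops, m
h0 = 1.88             # launch height above the ground, m (plunger rise neglected)
theta = math.radians(45)

# launch speed from energy conservation: spring PE released -> kinetic energy
v = math.sqrt(k * (x0**2 - xf**2) / m)
vx, vy = v * math.cos(theta), v * math.sin(theta)

# time of flight: positive root of 0 = h0 + vy*t - 0.5*g*t^2
t = (vy + math.sqrt(vy**2 + 2 * g * h0)) / g
print(round(v, 2), "m/s launch speed;", round(vx * t, 2), "m horizontal range")
```

Under these assumptions this prints roughly 0.78 m/s and 0.37 m, so a horizontal distance of about 0.4 m may not be unreasonable after all – the spring simply doesn't store much energy.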
2017-10-20 18:44:05
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5033029317855835, "perplexity": 1796.4512630868312}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187824293.62/warc/CC-MAIN-20171020173404-20171020193404-00518.warc.gz"}
https://math.stackexchange.com/questions/1665867/proving-prime-gaps-not-monotonic
# Proving prime gaps not monotonic I want to prove (or see a proof) that the prime gap function is not monotonic. (but not using extremely difficult theory like Zhangs proof please) The idea for proof is to first suppose it is monotonic and find the slowest possible asymptotic growth it could have. The get a contradiction by showing that even this is still "too fast" compared to say the chebychev bound $x/\log(x)$. Lemma: A sequence of primes with gap $d$ can't be longer than $d+2$. So we suppose the prime gap function $g(n)$ grows as slow as possible, like this: 1, 2, 2, 2, 2, 4, 4, 4, 4, 4, 4, 6, ... Then I found that we would have $\pi(\sum g(n)) = n$, so $\pi(\sum_{d=2k} (d+2)^2) = \sum_{d=2k} d+2$. The first sum being $O(n^3)$ and the second $O(n^2)$. Inserting $n = x^{1/3}$ gives $\pi(x) = O(n^{2/3})$. This gives us $x/x^{1/3}$ which sadly doesn't beat $x/\log(x)$. Is this proof strategy doomed then? Or is there a way to repair it? Alternatively does anyone have an accessable reference which has this proof? • You cannot have successive gaps like $4,4,4$ or $8,8,8$ or $10,10,10$ as they will lead to multiples of $3$ Feb 21 '16 at 17:41 • @Henry, I don't understand what you mean! Feb 21 '16 at 17:53 • One of $x,x+4,x+8$ is a multiple of $3$ for any integer $x$. More generally, one of $x,x+y,x+2y$ is a multiple of $3$ if $y$ is not a multiple of $3$. So your bound is looser than it needs to be. Feb 21 '16 at 18:04 • ok bounds are generally not exact. Feb 21 '16 at 18:05 • The 54th,55th,56th,57th primes each have a gap of 6. Feb 21 '16 at 18:54 This gives us $$x/x^{1/3}$$ which sadly doesn't beat $$x/\log(x)$$. Is this proof strategy doomed then? Or is there a way to repair it? There is no problem as the inequality holds the other way round. One should combine a hypothetical upper bound of order $$x/x^{1/3}$$ on the prime counting function, obtained under the assumption of monotone gaps, with a lower bound on the prime counting function to get a contradiction. The point is if the gaps would grow monotonically they would be large so it is natural there would be too few primes then. • $x/x^{1/3}$ is a lower bound though, we got it by assuming $g(n)$ grew as slowly as possible. since $x/\log(x)$ grows faster than this we haven't got a contradiction. To get an upper bound I would need to assume $g$ grows as fast as possible and put some limit on that, I don't see any way to do that. Feb 21 '16 at 17:55 • No it is not a lower bound on the number of primes. When I tell you it takes me at least $10$ minutes to write an answer post. Then assuming the minimal possible gap of ten minutes between two answers, after two hours of my work is $12$ answers an upper or lower bound on the number of answers I wrote? – quid Feb 21 '16 at 17:58 • It just hit me! I get it now, thank you so much. A lower bound on the gaps translates into an upper bound on the number of primes. Feb 21 '16 at 18:02 • That's great! I can see how the switch of orders can be confusing at first. – quid Feb 21 '16 at 18:03 Henry's observation solves the problem relatively simply. A gap of length $d$ cannot repeat more than $p$ times where $p$ is the smallest prime not dividing $d$. Let $f(d)$ be any increasing upper bound function for maximum number of consecutive gaps of length $d$. By the previous paragraph, $f$ can be taken to be $O((\log d)^c)$. For a logarithmic choice of $f$ let $g$ be its sum, the function with $g(n+1) - g(n) = f(n)$. Then asymptotically $g(n) \sim nf(n)$. 
An increasing integer sequence $P_n$ whose gap lengths are bounded by $f$ has a subsequence $P_{g(n)}$ with strictly increasing differences and thus bounded below by a quadratic function $\frac{n(n+1)}{2}$. Composing this with $g^{-1}(n) \sim \frac{n}{f(n)}$ we get a nearly-quadratic lower bound on $P_n$ which is far larger than the $n$-th prime for large $n$. • we can have 3 primes in a row with the same gap though. doesn't that mean we can't apply this argument? Feb 21 '16 at 18:53 • The 54th,55th,56th,57th primes each have a gap of 6. Feb 21 '16 at 18:53 • That's right, so I mis-stated the observation by Henry. There is no constant upper bound on the gap repetition, but a logarithmic function of the gap size $d$ equal to the smallest prime not dividing $d$. This would imply a lower bound of $O(n^2 / \log(n)^k)$ for some $k$ by the same argument. – zyx Feb 21 '16 at 19:05 • That's really interesting and I'd like to understand it more! Would you say this approach gives a shorter proof or just an alternative way of looking at the problem? Feb 21 '16 at 19:08 • (I guess this has to do with the primorial function needing to be a divisor of a long AP of primes) Feb 21 '16 at 19:08
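As a concrete sanity check of the claims above, here is a minimal Python sketch (not part of the original thread) that sieves the primes, locates the first decrease in the gap sequence, and confirms the commenters' example that the 54th-57th primes share three consecutive gaps of 6. It is only an empirical illustration, not a proof.

```python
# Empirical check: the prime-gap sequence is not monotonic, and the
# 54th-57th primes really do have three equal gaps of 6.

def primes_up_to(limit):
    """Sieve of Eratosthenes; returns all primes <= limit."""
    sieve = [True] * (limit + 1)
    sieve[0] = sieve[1] = False
    for p in range(2, int(limit ** 0.5) + 1):
        if sieve[p]:
            for multiple in range(p * p, limit + 1, p):
                sieve[multiple] = False
    return [n for n, is_prime in enumerate(sieve) if is_prime]

primes = primes_up_to(1000)
gaps = [q - p for p, q in zip(primes, primes[1:])]

# First index where the gap sequence strictly decreases (so it is not monotone).
i = next(k for k in range(1, len(gaps)) if gaps[k] < gaps[k - 1])
print("gap sequence starts:", gaps[:12])
print("first decrease:", gaps[i - 1], "->", gaps[i], "after the prime", primes[i])

# The commenters' example: the 54th-57th primes (1-indexed) and their gaps of 6.
print("54th-57th primes:", primes[53:57])
print("their gaps:", [primes[k + 1] - primes[k] for k in range(53, 56)])
```

Running it shows the first decrease already at the gaps 4 (from 7 to 11) followed by 2 (from 11 to 13), and prints 251, 257, 263, 269 as the 54th-57th primes.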
2021-12-02 05:56:36
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 3, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8712767362594604, "perplexity": 227.80678557730016}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964361169.72/warc/CC-MAIN-20211202054457-20211202084457-00133.warc.gz"}
https://math.stackexchange.com/questions/2319427/finding-asymptotic-behaviour-of-integral
# Finding asymptotic behaviour of integral

I am trying to work out the following integral: $$\int_0^{at} \mathrm{d}u \frac{e^{-u}}{(t-\frac{u}{a})^\beta}$$ where $\beta$ is some exponent that I'm trying to evaluate by comparing integrals of that type (for the function to be integrable, I'd expect $\beta<1$). Now I expect that this integral will be $\sim t^{-\beta}$ for large $t$: $$\int_0^{at} \mathrm{d}u \frac{e^{-u}}{(t-\frac{u}{a})^\beta}\\ = \frac{1}{t^\beta}\int_0^{at} \mathrm{d}u \frac{e^{-u}}{(1-\frac{u}{at})^\beta}$$ I am tempted to write the fraction as a series, but I'm blocked because of what happens when $u=at$. Would there be any smart way of getting the asymptotics of this integral? Thanks.

By setting $u=atz$, such that $du=at\,dz$, the given integral equals $$I(a,\beta,t)=\frac{a}{t^{\beta-1}} \int_{0}^{1}\frac{e^{-at z}}{(1-z)^\beta}\,dz = \frac{a e^{-at}}{t^{\beta-1}}\int_{0}^{1}\frac{e^{at z}}{z^{\beta}}\,dz$$ hence, assuming $\beta<1$, we have $$I(a,\beta,t) = \frac{a e^{-at}}{t^{\beta-1}}\sum_{n\geq 0}\frac{a^n t^n }{n!(n+1-\beta)}$$ by simply expanding the exponential function as its Taylor series and performing termwise integration. Have a look at the Wikipedia page about the incomplete $\Gamma$ function.

• The sum isn't an asymptotic series for large $t$ though, and it doesn't give the leading term. – Maxim Jun 6 '18 at 19:07

You can, in fact, expand the non-exponential part into a series. Or, putting it another way, $\beta$ is fixed, the singularity at $u = a t$ is integrable, and the main contribution still comes from a small neighborhood of $u = 0$: $$\int_0^{a t} \frac {e^{-u}} {(t - u/a)^\beta} du = t \int_0^a \frac {e^{-t \xi}} {(t - t \xi/a)^\beta} d\xi \sim \frac t {(t - t \xi/a)^\beta} \bigg\rvert_{\xi = 0} \int_0^\infty e^{-t \xi} d\xi = t^{-\beta}.$$

As long as $\Re \beta < 1$, the singularity at $u = at$ will not cause you any problem. Changing variable to $u = ats$, we have $$\mathcal{I} \stackrel{def}{=} \int_0^{at} \frac{e^{-u}}{(t-\frac{u}{a})^\beta} du = \frac{at}{t^\beta}\int_0^1 \frac{e^{-ats}}{(1-s)^\beta} ds\tag{*1}$$ Notice that in the neighborhood of $s=0$, $\frac{1}{(1-s)^\beta}$ is infinitely differentiable with expansion $$\frac{1}{(1-s)^\beta} = \sum_{k=0}^\infty \frac{(\beta)_k}{k!} s^k\quad\text{ where }\quad (\beta)_k = \prod_{\ell=0}^{k-1} (\beta+\ell) \tag{*2}$$ Furthermore, when $\Re\beta < 1$, we have $$\int_0^1\frac{ds}{|(1-s)^\beta|} = \frac{1}{1 - \Re\beta} < \infty$$ These two observations together allow us to apply Watson's Lemma to the integral on the RHS of $(*1)$. We can read off the asymptotic expansion for $(*1)$ from the expansion in $(*2)$. The end result is $$\mathcal{I} \asymp \frac{at}{t^\beta}\sum_{k=0}^\infty \frac{(\beta)_k}{k!}\frac{\Gamma(k+1)}{(at)^{k+1}} = \frac{1}{t^\beta}\sum_{k=0}^\infty \frac{(\beta)_k}{(at)^k}$$
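A quick numerical check of the leading-order behaviour derived in the answers is sketched below (it is an illustration, not part of the thread): for $\beta < 1$ the product $t^\beta \cdot \mathcal{I}(t)$ should tend to 1 as $t$ grows. The values of `a` and `beta` are arbitrary test choices, and the substitution $u = ats$ from the answers is used so that SciPy's `quad` can absorb the algebraic endpoint singularity with its `weight='alg'` option.

```python
import numpy as np
from scipy.integrate import quad

a, beta = 1.5, 0.4  # arbitrary test parameters with beta < 1

def integral(t):
    # After the substitution u = a*t*s used in the answers,
    #   I(t) = (a*t / t**beta) * int_0^1 exp(-a*t*s) * (1 - s)**(-beta) ds.
    # quad's 'alg' weight integrates against (s - 0)**0 * (1 - s)**(-beta),
    # so the function passed in is smooth on [0, 1].
    inner, _ = quad(lambda s: np.exp(-a * t * s), 0.0, 1.0,
                    weight='alg', wvar=(0.0, -beta))
    return a * t / t ** beta * inner

for t in [5, 20, 80, 320]:
    print(f"t = {t:4d}:  t^beta * I(t) = {t ** beta * integral(t):.6f}  (should approach 1)")
```

The printed ratios decrease toward 1 roughly like $1 + \beta/(at)$, matching the first correction term $(\beta)_1/(at)$ in the expansion above.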
2019-05-22 05:04:55
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9558954834938049, "perplexity": 159.47449672848825}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232256763.42/warc/CC-MAIN-20190522043027-20190522065027-00314.warc.gz"}
https://www.tutorke.com/exams/1020-form-3-physics-paper-3-end-of-term-3-examination-2022.aspx
# Form 3 Physics Paper 3 End of Term 3 Examination 2021

Class: Form 3 | Subject: Physics | Level: High School | Exam Category: Form 3 End Term 3 Exams | Document Type: Pdf

## Exam Summary

TUTORKE EXAMS
END OF TERM THREE EXAMINATION 2022
FORM 3 PHYSICS PP.3 (PRACTICAL)
SCHOOL:…………………………………………………………
232/3 PHYSICS PAPER 3
TIME: 2 HRS 15 MIN

INSTRUCTIONS TO CANDIDATES
1. Write your name and admission number in the spaces provided.
2. Answer all questions in the spaces provided.
3. All working must be clearly shown where necessary.
4. A non-programmable silent electronic calculator may be used.
5. Candidates should check the question paper to ascertain that all pages are printed as indicated and that no question is missing.

Question 1
You are provided with the following apparatus:
- Two complete retort stands
- A metre rule
- Two pieces of thread (120 cm and 20 cm)
- A stop watch
- A piece of masking tape
- A pendulum bob
- A half metre rule

a) (i) Attach one end of the string to the metre rule at the 10 cm mark by fastening a loop of string tightly round the metre rule. Fix the string at this point with a piece of masking tape. Tie the string in a second loop at the 90 cm mark. Fix this loop with another piece of masking tape.
(ii) Attach the pendulum bob at the centre of the string so that the centre of gravity of the bob is 15 cm below the point of suspension (as shown in the figure below).
b) (i) Measure the angle 2θ ........................... (1/2 mk)
(ii) Pull the pendulum bob towards you through a small distance, release it and measure the time "t" for 10 oscillations. .............................. (1/2 mk)
(iii) Remove the masking tape, slide the loops to the 12 cm and 88 cm marks and refix the masking tape. Measure the angle 2θ and time "t" as before.
(iv) Repeat (iii) above with the loops at the 15 cm and 85 cm, 20 cm and 80 cm, 25 cm and 75 cm, 30 cm and 70 cm, 35 cm and 65 cm marks.
(v) Enter all your results in the table below.
c) (i) Plot a graph of T^2 (y-axis) against cos θ (5 mks)
(ii) Determine the value of T^2 at the point where the graph intercepts the y-axis. (1 mk)
(iii) Given that the value of T^2 at the point A where the graph cuts the y-axis is given by T^2 = (0.6π^2)/K, use your result in (ii) above to determine the value of K. (3 mks)

Question 2
You are provided with the following apparatus:
● A voltmeter
● An ammeter
● A wire x mounted on a metre rule
● 6 connecting wires with crocodile clips
● A micrometer screw gauge
● A switch
● A jockey
● One new dry cell and a cell holder

Proceed as follows:
a) Connect the apparatus provided as shown in the circuit below.
d) Plot a graph of potential difference, V (y-axis), against the current I. (5 mks)
e) Determine the slope of the graph. (2 mks)
f) Given that V = E – I r, use your graph to determine the value of:
(i) E (1 mk)
(ii) r (1 mk)
g) Measure the diameter d of the wire x using the micrometer screw gauge.
d = __________________ mm (1/2 mk)
= ____________________ m (1/2 mk)
h) Dismantle the apparatus and set up the circuit as shown below.
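As an aside on the analysis expected in Question 2(f): because the circuit relation V = E - I*r is a straight line in I, the EMF E is the y-intercept of the V-I graph and the internal resistance r is the negative of its slope. The following Python sketch (not part of the exam) illustrates that fit; the readings are made-up placeholder values, not measured data.

```python
import numpy as np

# Made-up (I, V) readings for illustration only -- a candidate would use the
# values actually recorded from the circuit.
I = np.array([0.10, 0.15, 0.20, 0.25, 0.30])   # current in amperes
V = np.array([1.42, 1.38, 1.35, 1.31, 1.27])   # potential difference in volts

# Least-squares straight line V = slope * I + intercept.
slope, intercept = np.polyfit(I, V, 1)

E = intercept   # EMF of the cell (volts): the y-intercept of the graph
r = -slope      # internal resistance (ohms): the negative of the slope

print(f"E is about {E:.2f} V, r is about {r:.2f} ohm")
```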
2022-11-30 09:37:43
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.43168342113494873, "perplexity": 4054.9835798100826}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710734.75/warc/CC-MAIN-20221130092453-20221130122453-00069.warc.gz"}
http://science-beta.slashdot.org/story/08/02/20/1548216/new-science-standards-approved-in-florida?sbsrc=thisday
New Science Standards Approved in Florida ScuttleMonkey posted more than 6 years ago | from the thinking-of-the-voters-not-the-children dept. 891 anonymous_echidna writes "Florida has voted to accept the new K-12 science curriculum standards amidst a storm of controversy around the teaching of evolution, which had up until now been the scientific concept that dare not speak its name. There was a compromise made at the last minute, which was to call evolution a 'scientific theory', rather than a fact. While some lament that the change displays the woeful ignorance of science and scientific terminology, the good news is that the new curriculum emphasizes teaching the meaning of scientific terms and the scientific method in earlier grades." Jesus Fucking Christ (5, Funny) Protonk (599901) | more than 6 years ago | (#22490018) I'm moving to another country where crazy isn't an approved religion. Re:Jesus Fucking Christ (5, Interesting) flyingsquid (813711) | more than 6 years ago | (#22490106) For a biting critique of Florida's new standards, and a defense of craziness, see "Our Reputation for Flakiness is at Stake" by Carl Hiaasen [ http://www.miamiherald.com/news/columnists/carl_hiaasen/story/421075.html [miamiherald.com] ]. Re:Jesus Fucking Christ (4, Funny) Protonk (599901) | more than 6 years ago | (#22490180) I love Carl Hiassen. But I think that between the two of us we can't spell his name right. :) Re:Jesus Fucking Christ (4, Interesting) o'reor (581921) | more than 6 years ago | (#22490326) Man, thanks for bringing it up, I had forgotten Carl's name and I was fumbling around in the Colbert Reports archive, but there it is : Carl Hiaasen's interview on the Colbert Report [bravenewfilms.org] , a true moment of fun. Re:Jesus Fucking Christ (1) moderatorrater (1095745) | more than 6 years ago | (#22490192) I know what you mean. If they're going to teach the theory of evolution, they should they should at least teach that it's more than a theory! But seriously, I don't see how this is that big of a deal. Everyone with a brain already realizes that evolution is true or that God has gone to great lengths to make it look true. Those who aren't going to believe in evolution for religious reasons are going to do that anyway. Finally, individual teachers have a lot of leeway in what they teach; science teachers will teach evolution with the certainty that they feel it's due, no matter what guidelines have been set down. Re:Jesus Fucking Christ (5, Insightful) Anonymous Coward | more than 6 years ago | (#22490344) Finally, individual teachers have a lot of leeway in what they teach; science teachers will teach evolution with the certainty that they feel it's due, no matter what guidelines have been set down. Not if they want to keep their jobs they won't. With school boards and school administrators unsympathetic to the teaching of evolution, while the teaching of evolution is not banned, parent complaints will give them a reason to find some other convenient excuse to fire the teacher. For example, a Texas science director [wired.com] was canned because of her pro-evolution stance.
The official reason: insubordination because she used her work email to forward a federal court judgment on evolution to friends and some online communities. Every teacher has done something similar, and having a pro-evolution viewpoint will give the school administrators an excuse to find anything incriminating. Re:Jesus Fucking Christ (5, Informative) Gordonjcp (186804) | more than 6 years ago | (#22490500) If they're going to teach the theory of evolution, they should they should at least teach that it's more than a theory! Evolution *is* a theory. Perhaps they should also teach what "theory" means. Re:Jesus Fucking Christ (5, Insightful) KublaiKhan (522918) | more than 6 years ago | (#22490300) It's not really a question of religion, if you think about it--it's more a question of politics. It just happens that the politics involved are largely being used within the framework of religion in order to maintain a certain population within a given power structure, and to resist attempts to overturn said power structure from the outside. Re:Jesus Fucking Christ (5, Funny) trongey (21550) | more than 6 years ago | (#22490352) RE: Title of parent post. I don't think he could do that, even with miraculous powers. I know, the whole one-in-three business makes it kind of confusing, but I still just don't think it could be done. Re:Jesus Fucking Christ (1) Dog-Cow (21281) | more than 6 years ago | (#22490652) But I thought he is God. You mean there's something he can't do? (Disclaimer: I am not a Christian.) Man, ALL religion is crazy... (2, Insightful) crovira (10242) | more than 6 years ago | (#22490406) George Carlin was right... Anything that starts with some "There's some invisible guy, up in the sky, who can kill you, because he loves you" is deeply, persistently and fundamentally fucked up. Creationism is merely an expression of how fucked up it is. ANY country that has ANY religion is just as fucked up. "Offer your sufferings to Christ" is NOT a health care policy. Got that? Re:Man, ALL religion is crazy... (2, Funny) Anonymous Coward | more than 6 years ago | (#22490494) Hey, my invisible guy [venganza.org] doesn't want to kill me. He wants to give me beer and stripper factories. Unfortunately, he does want me to dress up like a pirate. I think that implies he wants me to go kill all the unbelievers (such as ninjas). Approved religion? (2, Insightful) hal2814 (725639) | more than 6 years ago | (#22490542) I'll stick to countries where I don't have to worry about whether a religion is "approved" or not. Re: (1) popsicle67 (929681) | more than 6 years ago | (#22490570) Simply put, it is a theory, in the same way that "gravity pulls" is a theory. The chance that the theory is wrong approaches zero, but there is still a minute chance for error. Re:Jesus Fucking Christ (1) errxn (108621) | more than 6 years ago | (#22490612) And that country would be... I'm waiting...waiting.... Science board is trolling? (3, Insightful) UbuntuDupe (970646) | more than 6 years ago | (#22490028) There was a compromise made at the last minute, which was to call evolution a 'scientific theory', rather than a fact. LOL! I can't believe that an actual state school board resolution has basically the same wording as when I troll. (Er, I mean, my *friend* trolls.) "Hey guys, now, let's face it, evolution is pretty much just a theory at this point. You know, THEORY? Theory as in ... NOT FACT?"
Still, I think it would be an improvement of orders of magnitude if science classes in general focused more on: "how did we learn this?" (i.e., the scientific method, how observations have to be done to eliminate bias, the formulation of competing theories, how experiments are designed, how hypotheses were ruled out, etc.) as opposed to: "here is the official list of truth that you have to memorize and then do cute IQ-test-like problems with". The latter gives the wrong impression of what science is and why it matters. Re:Science board is trolling? (1) funwithBSD (245349) | more than 6 years ago | (#22490246) Yes, well it is an important distinction you are making. There are quite a few "theories" that have been taken as fact, such as the concept of "races" in the single human race. Despite the fact that the idea of race is based on viable offspring interbreeding ability, some insist that varietal==race. Go figure. Couple more I can think of that people take as fact that are only theories. Re:Science board is trolling? (4, Informative) drinkypoo (153816) | more than 6 years ago | (#22490336) There are quite a few "theories" that have been taken as fact, such as the concept of "races" in the single human race. Despite the fact that the idea of race is based on viable offspring interbreeding ability, some insist that varietal==race. Go figure. Race: "a group of persons related by common descent or heredity." Species: "Biology. the major subdivision of a genus or subgenus, regarded as the basic category of biological classification, composed of related individuals that resemble one another, are able to breed among themselves, but are not able to breed with members of another species." funwithBSD: "An individual who needs to buy a dictionary." Re:Science board is trolling? (3, Informative) DaveV1.0 (203135) | more than 6 years ago | (#22490462) The meaning of the word theory when used in the context of science: A set of statements or principles devised to explain a group of facts or phenomena, especially one that has been repeatedly tested or is widely accepted and can be used to make predictions about natural phenomena. Now, remember, Gravity is just a theory as well, so why don't you test it by jumping off of a very tall building. Re:Science board is trolling? (1) Seumas (6865) | more than 6 years ago | (#22490596) It seems rather redundant for them to demand it be called a theory, since all of science consists of hypotheses and theories. The important thing to take out of this is that our country always has and always will cater to the ignorant religious sect, because they control everything. As long as they make up some 90% of the population, you can't expect common sense or rationality to rule. After all, these are the idiots who are counted in surveys like the one we just saw today where 66% of Americans think that nanotech is immoral. It's time to replace democracy with meritocracy. We've suffered the rule of the stupid for too long. Re:Science board is trolling? (1) blueg3 (192743) | more than 6 years ago | (#22490610) I have to agree with you there -- if you're nitpicking over whether to label information a "theory" or a "fact", you're probably not teaching how to differentiate the two or how the information you're learning was determined in the first place. Scientific teaching should not be a list-o-facts. There's not much you get out of that.
The news media is a major part of the problem (5, Insightful) Steeltalon (734391) | more than 6 years ago | (#22490032) There have been too many occasions where the news media has persisted in "dumbing down" the terminology that they use. I even remember watching a "Faith and Values" show on CNN last year where John Edwards (the candidate, not the psychic) was asked his thoughts on Evolution which, in the words of Soledad O'Brien, was the belief that man evolved from apes. We need the news media to take the lead in helping people understand what a theory is vs. a hypothesis. How fact and theory are not opposites. The fact that a "law" is not the opposite of a theory. Too many people are getting away with murder in these debates because the terminology isn't clearly understood and the news media doesn't care to straighten it out. Re:The news media is a major part of the problem (2, Insightful) Gat0r30y (957941) | more than 6 years ago | (#22490358) Dude, nail on the head. I don't believe that the news media is "dumbing down" their language to make it more accessible to viewers; I've always just assumed they don't have a sufficient understanding of basic science to pose good questions. I think back to college, and frankly the journalism students didn't seem to be taking many elective science courses. The journalism community as a whole doesn't seem to have a very good understanding of the scientific method. On the other hand, there are a good number of excellent science journalists (SciAm seems to me to be written for a wide audience, yet succeeds in presenting accurate and generally interesting science news). Then again, it could be that the public is just as ill-informed about science as the journalistic community. What a sorry state of affairs indeed. Re:The news media is a major part of the problem (1) Sczi (1030288) | more than 6 years ago | (#22490366) which, in the words of Soledad O'Brien, was the belief that man evolved from apes That seems to be the part that really gets the fundies' collective goat, so I say approach it more directly. If there really are any good, scientific (or even semi-scientific) arguments against the notion that man descended from apes (or common ancestor, etc), incorporate those opposing ideas into the topic. If there is any credible evidence whatsoever against evolution, then include it in the curriculum. And by extension, also include arguments against those arguments. Further, while explaining the terms as the article said, maybe include intelligent design and say "here we have laws, theories, hypotheses, and 14 rungs down we have a notion, which is what intelligent design is". Another way would be to develop some scientific consensus to establish a Law of Evolution or somesuch and get the word "theory" out of it completely, since we will *never* be able to trust the ability of laymen to use "theory" properly. I found this: "The biggest difference between a law and a theory is that a theory is much more complex and dynamic. A law governs a single action, whereas a theory explains an entire group of related phenomena." So I don't know if evolution could be simplified enough to ever attain law status, maybe a subset of the theory, or a more direct line like "things evolve," but it would make it a lot easier to explain things and dodge the theory argument completely.
Re:The news media is a major part of the problem (2, Interesting) drinkypoo (153816) | more than 6 years ago | (#22490404) asked his thoughts on Evolution which, in the words of Soledad O'brien, was the belief that man evolved from apes. Speaking of "dumbing down", you have no idea what's going on, do you? Referring to Evolution in this way and then asking an opinion (or the reverse) is an example of deliberate spin. You would never say that unless you wanted to get the "I didn't come from no monkey!" camp riled up, or you were an uneducated buffoon. P.S. Jesus Christ, that woman looks like Ms. The Joker when she smiles. Plastic surgery, or inbreeding? YOU DECIDE! Florida... aye (4, Informative) godawful (84526) | more than 6 years ago | (#22490034) I saw this guy arguing why evolution shouldn't be taught and i was literally left speechless Re:Florida... aye (1) MightyYar (622222) | more than 6 years ago | (#22490204) I simply can't believe that man had the free time to go to attend a government hearing. </irony> That's fair (5, Insightful) anotherone (132088) | more than 6 years ago | (#22490050) That's fair, because evolution IS a scientific theory. So is Gravity. Hopefully they'll also teach the kids what it means to be a theory, and that "theory" doesn't mean "wild-ass-guess". Re:That's fair (-1, Troll) dc29A (636871) | more than 6 years ago | (#22490160) That's fair, because evolution IS a scientific theory. So is Gravity. Hopefully they'll also teach the kids what it means to be a theory, and that "theory" doesn't mean "wild-ass-guess". Evolution is a *FACT*. Gravity is a *FACT*. The scientific explanations of those observed facts are called "Scientific Theories": The Theory of Evolution and The Theory of Gravity. If you don't believe in gravity being a fact, please jump off a 42 story building. Re:That's fair (2, Insightful) anotherone (132088) | more than 6 years ago | (#22490242) ...and this is why we need to teach our children the scientific method. Re:That's fair (1) Knara (9377) | more than 6 years ago | (#22490248) You may make such a distinction, but most people do not. I, for one, am more than happy for Florida, of all places, to be calling evolution by natural selection a theory. Besides, Florida is where old people go to die, and Cubans go to bitch about Castro, not exactly the educational center of the US. Re:That's fair (1) provigilman (1044114) | more than 6 years ago | (#22490254) I think by "fact", he meant "law". We have very few laws in science, almost everything is a theory. For example, with gravity, what causes it? We know that mass has something to do with it, but how does it affect gravitational pull? Are there gravitons? Etc.. Re:That's fair (3, Funny) msuarezalvarez (667058) | more than 6 years ago | (#22490328) I do not think I can put this in a softer way, so here it goes: Re:That's fair (5, Insightful) Ed Avis (5917) | more than 6 years ago | (#22490262) Things falling to the ground is a fact; one explanation for it is Newton's theory of gravitation, also called gravity. What is gravity? (1) ArchieBunker (132337) | more than 6 years ago | (#22490286) Not trying to troll here, but right now we are barely at the start of understanding how these forces work. How exactly does gravity work? Can its forces be duplicated in a lab? Until then its still a really good theory and the best one so far. Re:That's fair (2, Insightful) KublaiKhan (522918) | more than 6 years ago | (#22490334) Belief has nothing to do with it. 
That's one rather large difference between science and religion: science still works when you don't believe in it. Hell, science works when you actively try to -dis-believe it. Re:That's fair (2, Informative) SteelAngel (139767) | more than 6 years ago | (#22490338) Evolution is a *FACT*. Gravity is a *FACT*. No they are not. They are Scientific Theories. A theory is a statement that has been supported by evidence from repeatable experiments and can be used to make accurate predictions that can be borne out by experiment. Even though it satisfies (to an extent) both of those qualifications, Newton's Theory of Gravity is -wrong-. It is an acceptable approximation for certain local phenomena, however. Einstein's Theory of General Relativity has not yet been shown to be violated, yet it is still a theory. Do not let the abuse of a word in the vernacular color your perception of its meaning. Even if it is a predictive science, evolutionary biology is based on scientific theories, not 'facts'. Re:That's fair (2, Informative) wile_e_wonka (934864) | more than 6 years ago | (#22490568) A theory is a statement that has been supported by evidence from repeatable experiments and can be used to make accurate predictions that can be borne out by experiment. No its not--what you describe is a good theory--like evolution or general relativity. Bad theories exist as well (ones that were falsified or that just no longer make sense--like the "aether"), or even theories that I couldn't really say are good or bad (ones which remain untested, or are difficult to use in the formation of testable hypotheses--like string theory). Re:That's fair (4, Informative) yali (209015) | more than 6 years ago | (#22490400) If you don't believe in gravity being a fact, please jump off a 42 story building. A fact is what you have observed. A theory is an explanation of why it is so. In the strictest sense, the fact is that you have always (previously) observed that objects fall to the ground. But in order to link that fact to your prediction that he will fall to the ground after jumping off a building, you have to have a theory of gravity that predicts how a novel event (i.e., the grandparent poster jumping off a 42 story building) will unfold in the future. Put more succintly: "Objects thrown off a building have always fallen" is a statement of fact. "Objects thrown off a building will always fall" is a hypothesis derived from a theory. Re:That's fair (3, Insightful) jandrese (485) | more than 6 years ago | (#22490414) Isn't the proper terminology "law"? As in the "Law of Gravity" to related to observed and/or measured facts about the world? Theories are a description of why a law exists (Theories about gravity are actually surprisingly weak at this point. We don't really have a good understanding of why gravity works). We have observed that species change over time (short timescales with small and simple organisms like bacteria, longer timescales for larger and more complex life like Dinosaurs). Evolution is the theory that describes why we think that happens. Before people go nuts however, I'd like to point out that Creationism is not a theory, or a law, or anything to do with science. Re:That's fair (1, Insightful) Anonymous Coward | more than 6 years ago | (#22490432) Umm, no. Gravity is not a FACT. It is a theory. That every model we have come up with matches this theory is irrelevant; we cannot prove without room for doubt any scientific theory. 
Gravity could be some as-yet unknown particle that can exert physical force away from itself; the planets are not held together by the force of gravity pulling everything in, but by a spherical buildup of these particles pushing everything down. Gravity could be a low-end EM based force, with the planet/sun/etc being a large magnet. Gravity could be many different things. However, the ones I listed do not fit the known measurable information. This does not say that at some point in the future we will not discover something else that does not fit the current model of gravity. Evolution is similar; we were not there, we cannot state with absolute certainty that the known species evolved from microbes. What we do know fits that model, but we cannot state it as known for a fact. People say Science is about facts and Religion is about faith; they are wrong. Science is about theories and open-mindedness; a scientist that refuses to even consider changing his mind is as faithful as a Baptist. Re:That's fair (0) iamacat (583406) | more than 6 years ago | (#22490478) Current species that inhabit the earth is a *FACT* Gravity is a *FACT* Genetic inheritance of traits and change observed in selectively bred domesticated species, antibiotic-resistant bacteria and environmentally stressed animals is *FACT* Evolution as the sole origin of all species from inorganic matter is a *THEORY* Gravitons being closed-loop strings that can leave our 3D membrane into the bulk is a *THEORY* Re:That's fair (1) Millennium (2451) | more than 6 years ago | (#22490556) Facts are data. Theories are interpretations of data. As such, evolution and gravity are theories, not facts. This does not mean that they are guesses; there's another term for that, and even the creationists don't use that term to describe either of these. Just as a theory is not a fact, neither is it a hypothesis. I have no doubt that the people moving to emphasize that evolution is a theory rather than a fact are indeed ignorant of just what a theory really is. But so are their opponents who attempt to claim it as a fact. Both sides need a dictionary. Re:That's fair (4, Informative) KublaiKhan (522918) | more than 6 years ago | (#22490266) I have a hobby where I argue with various fundamentalists, creationists, and the like in order to understand their particular points of view--using them as an evolutionary pressure for my arguments, as it were, to see which ones have an effect. I've noticed in my various arguments that the chief difficulty is getting them to understand the terminology behind the concepts--they simply do not have the vocabulary necessary to vocalize and understand the concepts in question. One of those words that is most egregiously misused is "theory"--the "common" form of the word is almost universally understood, but the "scientific" meaning of the word, even when carefully explained, becomes conflated with the common form.
(Other difficulties I've noticed are: that those who do not accept evolutionary theory are convinced that evolution is directed towards some 'goal'; that all mutations are necessarily harmful; an ignorance of introns and other means by which genetic material can be added to a genome--one of the current arguments that crops up is the one about how you can't get more information into a genome by evolutionary means, which is, of course, utter bosh; a misunderstanding of the scientific method; the false notion that science attempts to be the Answer to Life, the Universe and Everything rather than a best-fit approximation; and the notion that scientists are trying actively to discourage religion) Other than teaching the proper meaning of the word 'theory'--which doesn't work very well, frankly; the meaning that they knew first tends to stick no matter how often you teach them the proper one due to recency bias--I'd perhaps recommend a slight change in terminology when speaking of hypotheses that have withstood rigorous testing. Such a change would, of course, have to be accepted by the scientific community as a whole, so it may not be practical--but it's perhaps worth giving some thought to. I'd almost recommend 'theorem' rather than 'theory', to leech off of the mathematician's meaning, but while that word is appealing for reasons of similarity and having the proper tone, it may not be ideal due to conflation with mathematical proofs. Re:That's fair (1) plague3106 (71849) | more than 6 years ago | (#22490376) Not sure how evolution is classified anymore, but that gravity exists is indeed a fact. The only question we still have is WHY it exists, as in what causes it. I thought so too (2, Insightful) PinkyDead (862370) | more than 6 years ago | (#22490428) Now I'm going to get myself into trouble. Because my understanding (as a scientist) has always been that all science was theory - scientific theory and not fact. Some scientific theories, like evolution, have so much evidence that they may as well be fact - but they're still technically not fact. And like you said gravity is a theory. The fact there is that when I let go of an apple it ends up on the ground, that's the fact - the most sensible theory that explains that fact and other related facts is the theory of gravity. And the theory of evolution is the most sensible theory that explains the fact that there are a wide range of different types of animals and plants on this planet. Creationism and ID are also theories - not scientific theories because they cannot stand up to testing by the scientific method. (And yes FSM is a theory too). So let baby have his bottle - tell them "Yeah! Evolution is a scientific theory - and a damned good one at that." That'll stump them. Someone call editorial... (4, Funny) EricTheGreen (223110) | more than 6 years ago | (#22490054) During more than two hours of testimony, scientists and religious representatives argued over whether teaching that humans evolved from a single-celled species over hundreds of millions of years should be taken as gospel. Not sure that's the word said scientists would use in this context themselves... Re:Someone call editorial... (2, Funny) somersault (912633) | more than 6 years ago | (#22490162) Depends on whether they think it's good news that we evolved or not? 
:p Turnabout is fair play (2, Insightful) tarrantm (1210560) | more than 6 years ago | (#22490604) If religious representatives insist on arguing over science standards, scientists need to barge in on all the other curricula and insist on arguing over the definitions of words in their syllabuses too. Start by telling all the comparative religion classes to teach kids that the bible being the word of god is an unsubstantiated, non-scientific hypothesis. Re:Someone call editorial... (1, Insightful) Anonymous Coward | more than 6 years ago | (#22490624) I have never understood the non-scientific community's obsession with taking *anything* as 'gospel'. By definition science only explains a phenomena until a better explanation comes along. The purpose of science is to continually refine our understanding, not create a doctrine that must be followed. You should never take any scientific theory at face value. Every true scientific explanation is accompanied by the data that led to that conclusion and thus each individual should be making their own conclusions based on the data. That is the beauty of science, if you don't think something is correct all of the information you need to prove that the wrong conclusion has been reached is there for you to use. woo hoo! (5, Funny) urcreepyneighbor (1171755) | more than 6 years ago | (#22490062) The more dumbasses in the world, the smarter I seem! woo hoo! Fear me, for I have studied the dark science of natural selection! Re:woo hoo! (5, Funny) RingDev (879105) | more than 6 years ago | (#22490110) It's that exact logic that got me a girl friend with small hands. My junk looks HUGE! -Rick Re:woo hoo! (5, Funny) MightyYar (622222) | more than 6 years ago | (#22490408) Just make sure you get her back to day care before her mommy comes to pick her up. Details get in the way of good jokes. (1) RingDev (879105) | more than 6 years ago | (#22490578) Just make sure you get her back to day care before her mommy comes to pick her up. -- W..w..W - Willy Waterloo washes Warren Wiggins who is washing Waldo Woo. Burn. Harsh. Actually, my girl friend became my wife years ago, and I read the book that your signature is from to our son just a few nights back. But all these details clog up the simplicity of the joke. The only time such a level of detail actually helps the joke is in a case like the 'Pink Monkey' joke or the 'Flower' joke which rely on the excessive use of details to make the joke funny. But getting a 5-15 minute long joke off is more of an art form, rarely seen these days. No, the simpler the better, especially in type, for todays crowd. That's why jokes like, "What do you call a boomerang that won't come back? ... A stick!" and "So, a baby seal walks into a club." work so well. Staged and delivered well, those two can get people rolling. -Rick Re:woo hoo! (2, Funny) geedra (1009933) | more than 6 years ago | (#22490482) I guess that's cool, you know, if you only ever put it in her hand... I accept evolution and I know God is real. (2, Insightful) CrazyJim1 (809850) | more than 6 years ago | (#22490088) It is strange how a Christian will say,"Things aren't perfect now after the fall", but then they'll say,"Evolution isn't God's plan." Well how do they know that? The 6 days of Creation match up with science on the ball when they aren't literal days as we know them, but days of God, which are explained to be any length of time in two different places in the Bible. I wrote a chapter in my book about it, but I don't see the need to make a long post here. 
You can check my book on my website if you're so inclined. I updated it last week. Keep in mind that it is a rough draft. Re:I accept evolution and I know God is real. (4, Insightful) everphilski (877346) | more than 6 years ago | (#22490200) but days of God, which are explained to be any length of time in two different places in the Bible. In several places in the Bible it explains how the passage of time is not a factor to God as it is to us (a day is like a millennium, a millennium like a day), but it explicitly says in Genesis, after each day of creation, "And there was evening and there was morning, the Nth day." If you hand-wave away that phrase, then what else do you hand-wave away? Re:I accept evolution and I know God is real. (2, Interesting) CrazyJim1 (809850) | more than 6 years ago | (#22490360) "And there was evening and there was morning, the Nth day." If you hand-wave away that phrase, then what else do you hand-wave away? Good point. The way I saw it was that God created light before the sun existed. The length of the time that light shone may have been much longer than 12 hours, and what I am suggesting is that it was millions or billions of years. Then when darkness happens, it is only for a short period. Analogous to how the world was in darkness for a short period until Jesus came, and now the world is full of the light of God, and will eventually last eternally. So the length of darkness could have simply been extraordinarily short compared to the length of a day. These are just my first thoughts on that. If you want to email me, it is James_Sager_PA@yahoo.com, and after I put more thought into it, I'll get back to you. Thank you for raising an excellent point. Re:I accept evolution and I know God is real. (1) Fjandr (66656) | more than 6 years ago | (#22490372) So you are fluent in the cultural context of 1st-2nd century Hebrew? There are probably quite a few organizations who would like to talk to you then... Re:I accept evolution and I know God is real. (2, Informative) drinkypoo (153816) | more than 6 years ago | (#22490434) "And there was evening and there was morning, the Nth day." If you hand-wave away that phrase, then what else do you hand-wave away? There is no explicit statement of how long the days were. All the quote REALLY tells you, in fact, is that it got dark and then it got light, in between various tasks attributed to Yahweh. Re:I accept evolution and I know God is real. (1) KublaiKhan (522918) | more than 6 years ago | (#22490440) So the hypothetical creative being's spaceship/orbital platform/planet/space station/infinite plane's rotational period is 24 hours relative to the nearest source of intense light? If we were on Jupiter, would you be insisting on a 60-hour creation? Re:I accept evolution and I know God is real. (1) takanishi79 (1203342) | more than 6 years ago | (#22490606) I'm not particularly inclined to go into the specifics about literary framework, and all that sort of thing (the gist is that the Genesis creation account is not necessarily a statement of the nature of creation, but rather a statement of the power of the God of Judaism over and against other powers that be (Leviathan, Baal, to name a few)). But the word that you so calmly assert to mean 'day' in Hebrew is far from certain to mean 'day.' There are a number of other places where that same word is used to describe an age, and great lengths of time. If I'm not entirely mistaken, don't we use day language to refer to the changing of various ages.
The 'dawn' of the nuclear age, as an example. Sadly, what a lot of people (both Christians and non) miss when reading the Bible, is that they aren't reading a document conceived out of their culture. Rather, it was written, collected, and edited in a time far removed from our own. The way they speak about things is different. In a thousand years, are archaeologists going to understand some of our figures of speech? I doubt it. Not without some context clues. Why would something like the Bible be any different? Re:I accept evolution and I know God is real. (2, Informative) Dan Posluns (794424) | more than 6 years ago | (#22490314) I don't have a problem with you believing whatever you want to believe, whether you want to take the six days as literal fact (which many creationists do) or more metaphorical (which many creationists would call you a sycophantic apologist for doing so). I don't care. Believe whatever you want. It's not about belief. It's about what's scientifically useful; what produces useful experiments and predictions for us to better understand the nature of our universe. In that regard, evolution is one of the most wildly successful scientific theories around. (As opposed to vehicles like Intelligent Design, which misses the point entirely and from what I've heard has yet to "reveal" anything non-trivial.) So you can believe what you want. And good on ya for it. But when it comes to science, we're interested in what's practical. Dan. Re:I accept evolution and I know God is real. (1) CrazyJim1 (809850) | more than 6 years ago | (#22490514) My main point if you missed it is that Christians should accept evolution. My smaller point is that the Big Bang , continental drift and fossil records make sense if you view the days of creation as a different length of time than a 24 hour day. There is a scripture verse that says,"We honor Kings for what they explain, and God for what he keeps hidden." In this way you can see why God did not just hand Moses a laptop with all scientific knowledge of the universe. Evolution is not natural selection (3, Informative) Ed Avis (5917) | more than 6 years ago | (#22490104) Evolution is a fact. For example dinosaurs used to exist and they don't now; horses, dogs and cats have changed. This is accepted by everyone. What is in dispute is the explanation for that evolution. It could be caused by natural selection or by something else (certainly by something else in the case of the three animals mentioned). Natural selection is a scientific theory. So be careful with the terminology. Re:Evolution is not natural selection (2, Insightful) Aladrin (926209) | more than 6 years ago | (#22490240) "Certainly"? Not certainly. Natural selection is the process by which some animals survive better than others by having certain traits. Horses that run faster are less likely to meet the glue factory before reproduction than slower horses, for example. It's still 'natural selection', it's just that environment has changed. Cats and dogs go through similar things. Assuming 'natural selection' is true and not a false hypothesis, this fits the pattern. If it's false, then this may not be the same thing at all after all. Re:Evolution is not natural selection (0) Anonymous Coward | more than 6 years ago | (#22490572) so it's 'natural selection at the hands of man'. Lamenting that evolution is called a theory? 
(4, Insightful) ProteusQ (665382) | more than 6 years ago | (#22490108) Isn't that like an Obama supporter lamenting that Obama was called a Presidential Candidate by the press? Re:Lamenting that evolution is called a theory? (1) Jeff DeMaagd (2015) | more than 6 years ago | (#22490502) I think the problem is that the popular definition of theory is quite different from the scientific definition. Calling it "just a theory" in the popular terms undercuts what it is and tries to leave in some "wiggle room" for Intelligent Design, which is really "just a hypothesis", but it is often presented as a fact. locokamil (850008) | more than 6 years ago | (#22490118) What's the problem here? Evolution is a theory. oyenstikker (536040) | more than 6 years ago | (#22490422) Calling it a theory implies that it might not be true, which implies that something else might be true, which implies that an unscientific religious belief might be true, which implies that religion X might be true, which might constitute state support of religion X. Of course, there is nothing prohibiting state support for religion X, only state support for "an establishment" of religion X.[1] But we are too dumb to tell the difference between "Christianity" and "The Orthodox Church In America" or the "Westboro Baptist Church", so we must not have the government, or anything government sanctioned or funded, do anything that might imply something that might imply something that might imply something that might imply something that might say that a particular religious belief is true, because next thing you know, the pope will be calling the shots. Really, that is what will happen. Screw the definitions of words, and established scientific terminology, we have to protect Amerika from the religious fanatics! Honestly, I have no idea what the problem is, but a lot of people have their panties all up in a bunch. [1] Don't believe me? Go read the constitution. Don't like it? Contact your government representatives. Don't flame me, I didn't write the constitution, nor do I necessarily agree with it[2]. [2] If you want to know what I think, I think the government should get out of the public education business. MightyYar (622222) | more than 6 years ago | (#22490492) I think the main objection is that evolution is considered at theory, but Newton's theory of gravitation is considered a fact. At the very least, they should have the same status. Sabz5150 (1230938) | more than 6 years ago | (#22490538) What's the problem here? Evolution is a theory. The problem is not Evolution's stance as a theory, but of the misrepresentation of the definition "theory". The cdesign proponentsists have failed in taking down evolution and propping up ID, and now continue their attempts to make the term theory sound like "guess made in haze of bong smoke". To say that Evolution is a theory and not a fact is an outright insult to science and the scientific method. Theories are comprised of tens, hundreds, thousands or more facts. The theory of Evolution is not a fact, it is several thousand of them. These bible bangers need to shut up and stop being scientific and technological vermin, constantly trying to erode everything we've worked towards in advancing this civilization. youngdev (1238812) | more than 6 years ago | (#22490582) The problem here is the way evolution is being presented in Florida. Up until now, Florida has presented evolution as a hypothesis. Now they want to present evolution as a theory and that is the problem. 
The words theory and hypothesis have scientific definitions. theory (from wikipedia): a mathematical or logical explanation, or a testable model of the manner of interaction of a set of natural phenomena, capable of predicting future occurrences or observations of the same kind, and capable of being tested through experiment or otherwise falsified through empirical observation hypothesis (from wikipedia): a suggested explanation for a phenomenon or of a reasoned proposal suggesting a possible correlation between multiple phenomena The argument made was that presenting evolution as a theory gives it unearned weight. The opponents of this change were saying that we have no observable evidence of one species changing into another (especially a more complex one). And because these changes cannot be done in a lab, then therefor evolution is a hypothesis not a theory. Now unless someone can force evolution in a lab or unless it can be falsified through tests, then it is in fact a hypothesis. However in the absence of any other reasonable hypothesis, I see no reason why it should not be taught in schools. In the interest of full disclosure: I live in Florida. I am a Christian. I believe in a 7 day creation (oh come on like it is any less logical than one species spontaneously increasing the complexity of its dna). My children go to a Christian school where they are not taught their great grandfather is curious George. I am a College educated software engineer. Why Should We Be Surprised? (4, Funny) saudadelinux (574392) | more than 6 years ago | (#22490134) Let's face it, folks no other state has its own category on Fark.com; the utter lunacy and stupidity down there has been neatly quantified. Was evolution taught before this? (1) WolfTheWerewolf (84066) | more than 6 years ago | (#22490172) I did not go through the Florida public school system, and do not know anyone who did. Still I have a hard time believing that they did not teach evolution before this... in *some* form. At least I hope they taught it, for it would be a crime against reason to omit evolution from the curriculum. Anyone from FL wish to chime in and shed some light? You know... (1) webheaded (997188) | more than 6 years ago | (#22490186) It really WOULD be nice if people knew what the hell they were talking about with this stuff. I almost wish scientists would just get together and consider changing the terminology so that the religious zealots running the country couldn't so easily dumb things down and get away with it. It's the media too, like someone previously said, but it's also the fact that people simply don't understand evolution nor do they even make an attempt to when they've got such cool catch phrases. People would rather hear, "It's Adam and Eve not Adam and Steve!" than actually examine the issue of gay marriage just like they'd rather hear "Evolution is JUST a theory!" than actually check it out or even understand what that means. Theory (2, Funny) StarReaver (1070668) | more than 6 years ago | (#22490202) I don't think that word means what you think it means (from a scientific standpoint) Terminology? (3, Insightful) jamstar7 (694492) | more than 6 years ago | (#22490210) Hmmmmmmmmmmmmmmmmmm. I wonder... Control the meaning of words, you control how they're percieved. For instance, most if not all the old Soviet republics considered themselves 'democratic' in that elections were held on a regular basis. 
Of course, there was only one slate of candidates to elect, so calling them 'democracies' was a bit of a misnomer. Likewise, their penchant for putting 'People's' in front of just about everything, like 'People's Democratic Republic of'. Double whammy there... Now, if the definition of 'approved' now means 'guaranteed not to piss off any J Random NeoCon Fundie', and 'theory' now means 'something that cannot be proved but must be taken on faith', we're in serious trouble here... Re:Terminology? (0) Anonymous Coward | more than 6 years ago | (#22490298) Yes Prime Minister covered that one. Sir Humphrey: East Yemen, isn't that a democracy? Sir Richard: Its full name is the Peoples' Democratic Republic of East Yemen. Sir Humphrey: Ah I see, so it's a communist dictatorship. Gospel (1) Wellington Grey (942717) | more than 6 years ago | (#22490216) During more than two hours of testimony, scientists and religious representatives argued over whether teaching that humans evolved from a single-celled species over hundreds of millions of years should be taken as gospel. Somehow, I doubt that was the language the scientists used. -Grey [silverclipboard.com] This is just plain sad (1) ryzynforce (199741) | more than 6 years ago | (#22490244) This is one of the reasons why schools across the nation are graduating dumber and dumber children that subsequently come into the workforce and degrade goods and services because the basic academics are not taught. This simply shows that the school officials are more concerned with ridiculous and trivial details that truly have no bearing on the academic front. More concern with kids feeling good about learning only what is going to be on a test instead of just teaching the kids what they need to know. Changing the language is purely semantics. It would seem quite a bit easier to let the school teach the evolution side of the story and let the churches teach the creationism side of the story. After that let the kids decide for themselves. It is silly to just change the language to "scientific theory". It does however, lend itself to causing more confusion which creates an even lower quality graduate. Idiocracy is coming true!!! "It has Electrolytes!" Just my opinion though. why complain? (3, Insightful) superwiz (655733) | more than 6 years ago | (#22490270) This is actually a good thing. A good theory stands up to scrutiny. There is no such thing as a "ridiculous" challenge. Any challenge which does not deny rules of logic or observed facts has merit. If students are instilled with an extra degree of scepticism, I'd say, "good for them." Dogmatic teaching of science as facts creates nothing but fodder for pop-culture -- it does not produce thinking minds. I'm in ur curriculumns... (3, Insightful) gandhi_2 (1108023) | more than 6 years ago | (#22490276) ...hasing a tehoree. The highest honor SCIENCE can bestow any idea is that of the "Theory". Science cannot claim anything to be a fact because in science, nothing is beyond disproval. If science starts stating things are fact, and beyond disproval, then the idea in question becomes dogma. Dogma is the realm of religion. Science may be your religion, but you do science a great disservice by making it so, at the expense of the scientific schema and method. I know that the creationist/ID crowd LOVES to rub it in that evolution "is only a theory", but you've got to resist the temptation of fighting back by out-dogma-ing the dogmatists.
Evolution IS only a theory; it's among the most widely studied and tested theories of science. It's the single unifying theory of biology. Everyone say it with me: Evolution IS just a theory. The 800-lb Gorilla, bad-mother-fucker, stomp-your-colon theory. The king of theories. In science, that's as good as it gets. And as science-minded people, we should know that.

Re:I'm in ur curriculumns... (0) Anonymous Coward | more than 6 years ago | (#22490638)
Mod parent up! This is one of the best (and shortest) analyses of the situation I've seen.

Losing relevance... (4, Insightful) MaWeiTao (908546) | more than 6 years ago | (#22490292)
The Roman Catholic Church has recognized evolution essentially as fact and completely compatible with the Bible. So I don't really understand what the problem is with Protestants in this country. The only reason I see for this idiotic push to marginalize evolution and push creationism as a valid theory is that Christian conservatives see their influence on American culture slipping. This is a desperate attempt to make their religion relevant. I don't understand how this is permitted. Evolution is a science. Creationism and Intelligent Design are not science and have no place in the science class. Those concepts don't conform to the standards established by science. There is a place for creationism, and that's the theology class. If parents want to compromise their children's education, they should do so in private schools or at home instead of trying to force this stupidity on everyone.

Re:Losing relevance... (1) drinkypoo (153816) | more than 6 years ago | (#22490490)
"I don't understand how this is permitted." The Political Right's most important power base is the Religious Right, because they act as a unified group, others perceive it, and they know it. Thus, they are permitted to do anything they like so long as it doesn't interfere with profit.

Re:Losing relevance... (1) MightyYar (622222) | more than 6 years ago | (#22490536)
"I don't understand how this is permitted." This is why it is gratifying to see them losing their hold on the Republican party.

What compromise? (0) Anonymous Coward | more than 6 years ago | (#22490306)
"There was a compromise made at the last minute, which was to call evolution a 'scientific theory', rather than a fact." IDNRTFA, but what exactly is the compromise in this statement? Don't we already call it a "theory"? Even to the layman, it's the "theory of evolution", right?

Re:What compromise? (2, Interesting) Fjandr (66656) | more than 6 years ago | (#22490598)
It's only a compromise in the minds of the school board members. They probably went through the same Florida schools and came out with zero understanding of what scientific terms really mean. "Theory" to them is supposed to lower the standing of the teaching of evolution, when in fact it will raise it if those same science classes teach accurate scientific terminology. Ultimately, it brings evolution back into focus in schools while simultaneously showing the school board to be uneducated dweebs. Win/win as far as I'm concerned.

Teach Xenu-ology (1) jameskojiro (705701) | more than 6 years ago | (#22490342)
Yeah, the fact that thetan theory is not a viable alternative to evolution makes me sad and makes poor baby Xenu cry.

Ignorance is bliss... (0, Troll) 15Bit (940730) | more than 6 years ago | (#22490356)
...and it seems the US is the happiest nation in the world.

Cheesey (70139) | more than 6 years ago | (#22490364)
Teaching evolution - does it really matter?
Evolution is the least popular theory ever proposed. It has been under continuous attack ever since it was proposed. During this time, the creationists have tried every trick they can think of to get it out of the schools. They have blamed just about every evil of society on it, and they have brainwashed millions into believing that it's incompatible with their religion. They've tried to make it illegal, and they have even tried (unsuccessfully) to disprove it. And evolution has survived all of these attacks because it is true. You can always argue that the physical evidence doesn't accurately represent reality, and of course the creationists have tried that, but it's no use when they're arguing with proper scientists. Given this, I don't think we need to worry about evolution at all. Sure, creationists would like it to be thrown away entirely, but as long as we have scientists, that simply will not happen. You just can't do useful research in any physical science if you think the Bible has greater authority than a ton of physical evidence. There are worse problems in public schools than a bunch of nutcases wanting their crazy beliefs taught as if they were science. There is no evidence that will convince a creationist that he is wrong. If Jesus Christ personally appeared in front of John Q. Creationist and said "Hi, John. My name's Jesus, the Earth is billions of years old and evolution is basically true," then John Q. would probably crucify him for blasphemy. That's what the fundamentalists did the last time Jesus told them they were wrong. "Everyone" knows that God couldn't have created the Universe using evolution: he's omnipotent, sure, but he's not that omnipotent. In summary, there is no point in trying to argue with these people; their beliefs are nuts even in comparison to other Christians, so let's just ignore them.

Oh, so the story is self-referential? (0) Anonymous Coward | more than 6 years ago | (#22490382)
For the Slashdot editors, I mean. It's a long stretch to call evolution a "fact," and I'm not even talking from a religious standpoint here. There are peer-reviewed, published papers containing research that indicates that instead of being survival of the fittest, evolution proceeds along a more symbiotic path. So evolution is a first- or second-order approximation model for genetic material selection, then? For it to be labeled a "fact," it had better be damn well applicable everywhere and you should be able to precisely predict all outcomes from it. Examine the aforementioned papers and you'll find that this is not the case. For that matter, we have exactly one data point for genetic selection modeling: Earth. You'll excuse me if I wait to see how genetic selection occurs on a few other planets before accepting it as "fact." I'm reasonably certain that one mole = 6.022 * 10^23 molecules is a fact. Evolution? Not so much. Far more precise to call it "a commonly accepted scientific theory," as that's what almost ALL science is. "Facts" and "laws" in science are pretty damned rare. In conclusion: Slashdot editors, you just called the kettle black. Get your own terms right before slamming someone else and maybe we can have some meaningful discussion. But hey, bashing the nebulous "religious right" generates page views and advertising revenue, eh?

"theory" language is incorrect (1) peter303 (12292) | more than 6 years ago | (#22490392)
Every educated person knows the difference between the term "theory" in science and "theory" in legal terminology.
The Florida hack confuses the two meanings again. Theory in science means comprehensive explanation. Theory in law means hypothesis. I'd replace the term "theory" with "law" or "system" to prevent future confusion.

Random Evolution (0) Anonymous Coward | more than 6 years ago | (#22490394)
One of the conflicts I see here is the idea that evolution can be a "fact" while at the same time being argued to be purely random. By being random, advocates are admitting that it does not have a pre-defined 'direction'. Evolution and de-evolution can and do both occur, and since it is a random causal event, nothing at all can also occur, making any claim of certainty quite bizarre. How can something be a fact that is not predictable? Seems to me you cannot even come up with a scientific test to prove it exists, only that sometimes living things change for the better and other times they don't. And of course, if it is not random, then is it planned?

Inbreeding has value (0) Anonymous Coward | more than 6 years ago | (#22490426)
We need a large supply of inbred knuckledraggers so we still have Republicans. Personally, I lived in FL and TN for many years. If they really want a wall, it might be cool to build it on the Mason-Dixon Line.

scientific theory (1) bpotato (1159933) | more than 6 years ago | (#22490564)
Uh. I fully believe the Theory of Evolution to be correct, at least in its generalities (most theories can be improved), but that doesn't mean it's not a theory. It is no more a fact than Newtonian theories of motion (which turned out to be wrong!). What's wrong with them representing it as the theory it is?

Courts struck this down in Georgia (1) Shivetya (243324) | more than 6 years ago | (#22490618)
http://www.cnn.com/2005/LAW/01/13/evolution.textbooks.ruling/index.html [cnn.com] The stickers read, "This textbook contains material on evolution. Evolution is a theory, not a fact, regarding the origin of living things. This material should be approached with an open mind, studied carefully and critically considered." So how is this different just because it's Florida? I remember Cobb County getting lampooned for stating a fact, if for the wrong reasons.
2014-10-26 03:37:34
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2983230650424957, "perplexity": 1814.790638318008}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-42/segments/1414119654793.48/warc/CC-MAIN-20141024030054-00074-ip-10-16-133-185.ec2.internal.warc.gz"}
https://brilliant.org/problems/this-one-belongs-to-inverse-trigo/
# This one belongs to inverse trigo!!

Geometry Level 3

If $$\cos^{-1} x - \cos^{-1} \frac{y}{2} = \alpha$$ then $$4x^2 - 4xy \cos \alpha + y^2$$ is equal to
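For reference, here is a worked sketch of one standard route to the requested value (the substitution names $$A$$ and $$B$$ are introduced here for the derivation; they are not part of the original problem):

$$\text{Let } A=\cos^{-1}x,\quad B=\cos^{-1}\tfrac{y}{2},\quad\text{so that } x=\cos A,\; y=2\cos B,\; A-B=\alpha.$$

$$4x^2-4xy\cos\alpha+y^2 \;=\; 4\left[\cos^2 A-2\cos A\cos B\cos(A-B)+\cos^2 B\right] \;=\; 4\sin^2(A-B) \;=\; 4\sin^2\alpha,$$

using the identity $$\cos^2 A+\cos^2 B-2\cos A\cos B\cos(A-B)=\sin^2(A-B)$$. So the expression is independent of $$x$$ and $$y$$ individually and equals $$4\sin^2\alpha$$.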
2018-03-18 19:36:59
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9288330674171448, "perplexity": 3983.7255256524704}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257645943.23/warc/CC-MAIN-20180318184945-20180318204945-00741.warc.gz"}
https://www.physicsforums.com/threads/double-slit-experiment-problem.876793/
# Double slit experiment problem

## Homework Statement

Monochromatic light of 625 nm wavelength falls normally on the optical bar. The total number of light lines that appear behind the bar is 11. What is the constant of the diffraction bar?

## Homework Equations

## The Attempt at a Solution

I tried ##d\sin x = m s## where ##s## is the wavelength and ##m## is the number of lines. Since x is 90° I can calculate ##d## to find the distance between the slits, but what is the diffraction constant?

TSny (Homework Helper, Gold Member):
"Constant of the diffraction bar" might be referring to the "grating constant". The grating constant usually denotes the number of lines of the grating per unit length (often given as lines per millimeter). However, I've also seen people refer to the distance between two lines as the grating constant. So, you might need to consult your notes or textbook to see how it is used in your course.

I think it is lines per mm, is that 1/d?

TSny (Homework Helper, Gold Member):
> I think it is lines per mm, is that 1/d?

Yes. If you express d in mm, then you can think of d as the number of mm per line (mm/line). So, the units for 1/d would be .....?

TSny (Homework Helper, Gold Member):
Your title for this thread refers to "double slit". But the question refers to a "diffraction bar", which I was thinking might be a diffraction grating. So, I'm not sure what you're actually dealing with here.
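Once the terminology is settled, the arithmetic itself is short. Below is a minimal Python sketch of the calculation, under the assumption (an interpretation, not stated explicitly in the thread) that the 11 bright lines are the central maximum plus five orders on each side, so the highest visible order is m = 5:

```python
# Estimate the grating ("diffraction bar") constant from the numbers in the thread.
# Assumption: 11 bright lines = central maximum + 5 orders on each side, so m_max = 5.
import math

wavelength_nm = 625.0           # given wavelength
m_max = (11 - 1) // 2           # highest visible order, assuming a symmetric pattern

# Grating equation d*sin(theta) = m*lambda; the highest order appears near theta = 90 deg,
# so the slit spacing d is (at least) m_max * lambda.
d_nm = m_max * wavelength_nm * math.sin(math.radians(90))
d_mm = d_nm * 1e-6              # convert nm -> mm

lines_per_mm = 1.0 / d_mm       # grating constant expressed as lines per millimeter
print(f"slit spacing d ~ {d_nm:.0f} nm = {d_mm:.6f} mm")
print(f"grating constant ~ {lines_per_mm:.0f} lines/mm")
```

With these assumptions the sketch gives d of about 3125 nm (3.1 µm), i.e. a grating constant of roughly 320 lines/mm.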
2021-02-27 22:36:16
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8612441420555115, "perplexity": 1158.0496942034906}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178359497.20/warc/CC-MAIN-20210227204637-20210227234637-00435.warc.gz"}
http://www98.phys.virginia.edu/Announcements/News/List.asp
Support UVa's Physics Department! >> • Yagi Receives 2019 Young Scientist Prize Posted 2019-03-14 14:27:30 Kent Yagi has been selected as the recipient of the 2019 Young Scientist Prize from the International Commission on General Relativity & Gravitation.  Kent's citation and additional information on his prize can be found here: ... More > • Yagi Named 2019 Sloan Research Fellow Posted 2019-02-19 15:36:00 Kent Yagi has been named a 2019 Sloan Research Fellow.  From the Alfred P. Sloan Foundation Press Release:   “Sloan Research Fellows are the best young scientists working today,” says Adam F. Falk, president of the Alfred P. ... More > Posted 2019-01-16 09:31:00 From UVAToday: Brian Seymour explores the universe, its black holes and pulsars. Now he can go further with the help of a Churchill Scholarship. This fall, Seymour, a fourth-year physics and mathematics major at the University of Virginia, ... More > • UVa Physics Majors Play Key Role in Physics Teaching Initiative Serving Rural, Low-Income Communities Posted 2019-01-04 14:19:00 From UVAToday: An innovative pilot program launched by UVA alums, now in its second year, has made significant progress in addressing the disparate access to advanced courses in rural Mississippi. The Global Teaching Project provides promising ... More > • First Observation of the Parity-Violating Gamma-Ray Asymmetry in Neutron-Proton Capture Posted 2018-12-14 16:57:00 The weak interaction of quarks between the neutron and proton have been measured for the first time by the NPDGamma experiment at the Spallation Neutron Source in Oak Ridge National Lab. UVa postdoctoral research associate Jason Fry ... More > • Gillies Named Fellow of National Academy of Inventors Posted 2018-12-12 16:48:00 George Gillies, UVa Physics PhD and recently Visiting Research Professor of Physics, has been named a 2018 Fellow of the National Academy of Inventors. From UVAToday: The academy lauded Gillies, who holds 36 U.S. patents on several medical ... More > • Lee Co-organized the Inaugural UVA Symposium on Korea Posted 2018-12-04 15:42:10 Seung-Hun Lee co-organized the inaugural UVA Symposium on Korea as a part of his Pavilion Seminar, “Science & Politics.” During the Symposium, which was held in September, three leading experts on Korea -- Professor Paik, Nak-chung from ... More > • UVa Quartet At The Center Of Efforts To Gain U.S. Edge Posted 2018-11-08 10:38:00 UVAToday has a nice article highlighting UVa's expertise in quantum optical physics and photonics engineering: A University of Virginia physics professor and three UVA engineering professors are members of three new multi-disciplinary, ... More > • New Clues to the Proton Puzzle Posted 2018-11-02 17:47:00 UVa Physics professor Nilanga Liyanage and his colleagues in the PRad collaboration have made a new measurement of the proton's radius.  Other experiments have arrived at two different, incompatible values for this radius.  The new ... More > Posted 2018-10-30 13:00:00 The current edition of our departmental newsletter features a message from our new Chair, Bob Jones, a profile of new faculty member Kent Yagi, profiles of some recent graduates, and a report on the Conference for Undergraduate Women in Physics.  ... More > • Oscar A. Rondon Aramayo Elected to APS Fellowship Posted 2018-10-11 09:44:00 Oscar A. Rondon Aramayo, a recently retired Principal Scientist from the UVa Institute of Nuclear and Particle Physics (INPP) has been elected as a 2018 Fellow of the American Physical Society (APS). 
His citation reads: “For pioneering ... More > • Cox and Poon Highlighted in UVAToday Story on Dozen Top Science Discoveries Posted 2018-10-01 11:11:16 Brad Cox's work on the discovery of the Higgs boson and Joe Poon's work on amorphous "super steel" is celebrated in a UVAToday story on UVA's top twelve notable science discoveries of the last half-century.   Full ... More > • Louca Group Work Highlighted By AIP Posted 2018-09-20 12:50:00 Despina Louca’s group, and in particular graduate student Aaron Wegner and postdoctoral researcher Junjie Yang, has discovered a way to boost the magnetoresistance in the I-Mn-V class of semiconductors, the highest ever observed in this class, ... More > • Schauss a Regional Finalist for Blavatnik Award Posted 2018-09-16 12:06:36 Peter Schauss was selected as a 2018 Blavatnik Awards for Young Scientists Regional Finalist. The Blavatnik Awards for Young Scientists honor exceptional young scientists and engineers by celebrating their extraordinary achievements, recognizing ... More > • Physics in Orbit Posted 2018-09-07 10:33:00 UVa Physics professor Cass Sackett is testing gravity and quantum mechanics in space using the Cold Atom Laboratory (CAL), a device aboard the International Space Station.  His work is described in the current edition of UVa Today: ... More > • Waddy, Walker, and Schult Report on April APS Meeting Posted 2018-09-04 10:48:00 A report written by physics undergraduates Morgan Waddy, Matt Walker, and Levi Schult on a trip that they took to the American Physical Society April Meeting last year was published on the national Society of Physics Students website at ... More > • UVA Physics Shines at VT Conference Posted 2018-08-20 13:30:00 On August 13-18 the 20th International Workshop on Neutrinos from Accelerators was held at Virginia Tech.  The UVA HEP group took advantage of proximity to UVA and attended in full force, taking 6 poster and giving 4 talks.  Prof. Craig ... More > • Chen Wins NIST Postdoctoral Fellowship Posted 2018-08-07 11:51:34 Tianran Chen won a NIST Director’s Postdoctoral Fellowship for foreigners that is equivalent to the prestigious NRC postdoctoral fellowship for US citizens. He is moving to NIST late August in 2018 More > • Shen wins Lawrence Harrison Kilmon and May Lewis Kilmon Scholarship Posted 2018-07-20 10:21:22 The physics department congratulates Lingnan Shen for being selected as a recipient of the Lawrence Harrison Kilmon and May Lewis Kilmon Scholarship for outstanding academic achievement.  It is one of the highest forms of recognition bestowed by ... More > • UVa Physicists Probing Ever Deeper Into The Stuff Of The Universe Posted 2018-06-13 10:26:10 UVAToday has a nice article highlighting UVa's contributions to recent results in particle physics:   University of Virginia physicists have recently played key roles in new particle physics discoveries. The scientists are involved with ... More > • Stetzler Awarded DOE Computational Science Graduate Fellowship Posted 2018-06-06 11:28:43 Graduating student, Stephen Stetzler, was one of 20 students awarded the DOE Computational Science Graduate Fellowship.  The DOE CSGF program provides outstanding benefits and opportunities to students pursuing doctoral degrees in fields that use ... More > • First observation of Higgs boson-top quark interactions at CERN's CMS Experiment Posted 2018-06-04 16:32:10 The discovery of the Higgs boson in 2012 was an important breakthrough in understanding the tiniest building blocks of our universe. 
Since then, the focus has shifted to measuring the properties of the Higgs boson in order to determine if this new ... More > • Charged impurities in a conducting sheet can cause a new kind of wake in the current pattern Posted 2018-06-04 10:31:00 The wake behind a boat and the ripples caused by rocks in a stream are both due to the relative motion of a fluid and an obstacle,  which gives rise to wave patterns on the surface.   A recent Physical Review Letter "Kelvin-Mach wake in ... More > Posted 2018-05-30 10:31:27 From UVAToday: Brian C. Seymour of Ruckersville, a rising fourth-year student at the University of Virginia double-majoring in physics and mathematics, has received an Astronaut Scholarship, which is designed to encourage students in the sciences. ... More > • Results from Jefferson Lab's Qweak experiment published in Nature Posted 2018-05-10 10:23:00 The weak charge of the proton has been measured to high precision, for the first time, by the Q-weak experiment at the Department of Energy’s Thomas Jefferson National Accelerator Facility (Jefferson Lab).  UVa Associate Professor of Physics ... More > • Bridget Andersen Featured in UVaToday Series on Class of 2018 Posted 2018-05-03 11:20:59 Astronomy/Physics major Bridget Andersen is featured in an article in this week's edition of UVaToday. From UVaToday: “Even though Bridget chose to specialize in astronomy for her research, she also took the toughest physics courses ... More > • Sutton Receives DOE SCGSR Fellowship Posted 2018-04-26 09:59:37 UVa physics graduate student, Andrew Sutton, was one of only four students in the country who received the prestigious DOE Office of Science Graduate Student Research fellowships to conduct their research at Fermilab. The program goal is to prepare ... More > • Tianran Chen Wins 2018 Allen Talbott Gwathmey Memorial Award Posted 2018-04-19 10:23:32 Tianran Chen is a recipient of the 2018 Allan Talbott Gwathmey Memorial Award.  The Gwathmey award is an honor reserved for the most accomplished graduate students in the physical sciences at the University of Virginia in recognition of a ... More > Posted 2018-04-10 17:44:04 From UVAToday: University of Virginia students Chris Li and Sebastian Haney have received scholarships from the Barry M. Goldwater Scholarship and Excellence in Education Foundation for 2018. Two other UVA students received honorable mentions. ... More > • Origin of vertical orientation in two-dimensional metal halide perovskites and its effect on photovoltaic performance Posted 2018-04-07 14:32:04 Thin films based on two-dimensional metal halide perovskites have achieved exceptional performance and stability in numerous optoelectronic device applications. Simple solution processing of the 2D perovskite provides opportunities for manufacturing ... More > • Gupta, Lee, McMullen elected to Phi Beta Kappa Posted 2018-04-04 10:58:00 Fourth year Physics majors Arvind Gupta, Kevin Lee, and Timothy McMullen have been elected to Phi Beta Kappa. Congratulations! See http://college.as.virginia.edu/phi-beta-kappa More > • Ghosh, Poon, et al. Receive DARPA Funding To Shrink Computing Memory Posted 2018-04-02 10:57:58 From UVAToday: In a case of “smaller is better,” a team of University of Virginia researchers has received a $3.4 million grant from Defense Advanced Research Projects Agency with the goal to shrink computing memory bits to a ... 
More > • UVa Hosts Conference for Undergraduate Women in Physics Posted 2018-03-29 14:49:00 The following article describes the APS Conference for Undergraduate Women in Physics (CUWiP), hosted by UVa in January 2018. Female Physics Students Unite at UVa By Martine Lokken Saturday morning group photo of ... More > • 2018 Mitchell Summer Research Scholarships Awarded Posted 2018-03-27 14:01:02 Every year, the Physics Department awards six to ten Mitchell Summer Fellowships to rising third or fourth year declared physics majors. The awards are currently$5,000, for summer research with a faculty member of the department.  For ... More > • UVa's SPS Chapter Recognized Again as Outstanding Posted 2018-03-12 13:33:00 UVa's SPS chapter has been named an SPS Outstanding Chapter for the second year in a row.   This honor is only given to the top 10% of all SPS chapters in the country. The Director of the Society of Physics Students & Sigma Pi Sigma ... More > • Bloomfield Develops Jelly Earplug Posted 2018-02-12 17:33:11 From NBC29: Trademarked as “MemorySil,” the shape-memory material is first being marketed as an earplug called EarJellies, referencing the jelly-like texture of the rubber. “It is a very soft material that can barely stand up ... More > • Strain engineering of a topological semimetal Posted 2017-12-24 11:35:00 Researchers at UVa found a way through strain engineering to manipulate the electronic band structure, and induce large, reversible response of the magnetoresistive properties under high magnetic fields in topological Weyl semimetal MoTe2. ... More > • Seventh Annual Sigma Pi Sigma Research Symposium Posted 2017-12-19 15:20:00 The Seventh Annual Sigma Pi Sigma Research Symposium was held on November 3rd, 2017 in the Rotunda's Lower West Oval Room. Professors Craig Group, Marija Vucelja, Shane Davis, and Kent Paschke were invited to judge nine presentations delivered by ... More > • Craigs' Group NOvA Homework Posted 2017-12-08 11:13:46 From UVA Today: University of Virginia physicists are playing a key role in one of the world’s largest physics experiments, a nearly $300 million project called “NOvA” that is designed to study fundamental particles. The aim is ... More > • Lee Group's Work on Solar Cells Highlighted by DOE Posted 2017-11-14 10:22:00 Twisting Molecule Wrings More Power from Solar Cells Inside a solar cell, sunlight excites electrons. But these electrons often don’t last long enough to go on to power cell phones or warm homes. In a promising new type of solar cell, the ... More > • Alumnus Caplan Receives APS Dissertation Award Posted 2017-10-30 11:12:00 UVa Physics Alumnus Matthew Caplan (B.S. 2013), now a Canadian Institute for Theoretical Astrophysics Postdoctoral Fellow at McGill University, has received the 2018 Dissertation Award in Nuclear Physics from the American Physical Society. ... More > • Alumna Manning Receives Maria Goeppert Mayer Award Posted 2017-10-26 17:37:00 UVa Physics Alumna M. Lisa Manning (B.S. 2002), now Associate Professor of Physics at the Syracuse University, has received the 2018 Maria Goeppert Mayer Award by the American Physical Society. Citation: "for her use of computational and ... More > • Stetzler Wins Major Scholarship Posted 2017-10-23 10:54:47 From UVA Today: Steven Stetzler asks big questions, such as “What is the universe made of?” and “Why is the universe the way it is?” Stetzler, of Kutztown, Pennsylvania, a fourth-year physics and computer science major ... 
More > • Cass gets Ultra-Cool Posted 2017-10-23 10:49:45 From UVA Today: It is said that what goes up must come down. Thank gravity for that. But sometimes gravitational effects affect matter on Earth in ways that physicists would rather do without. So early next year NASA is launching to the ... More > • Probing Matter with Attosecond Photo-Electron Wavepackets Posted 2017-10-02 13:13:00 Electronic processes and electron-driven reactions in atoms, molecules and condensed systems can proceed very rapidly, with relevant time-scales in the attosecond (1 attosecond=10-18 s) regime. When activated by the photoabsorption of ... More > • Fall 2017 Physics Newsletter Posted 2017-09-20 11:28:00 The current edition of our departmental newsletter features profiles of three new faculty members (Jeffrey Teo, Gia-Wei Chern, and Marija Vucelja), and a tale spanning three generations of the Cabrera family. You can read the newsletter here: ... More > • Han wins Ig Nobel Prize Posted 2017-09-17 12:03:52 Physics Major Jiwon "Jesse" Han has won a 2017 Ig Nobel Prize for his paper on "A Study on the Coffee Spilling Phenomena in the Low Impulse Regime": www.improbable.com/ig/winners/#ig2017 Jesse's (short) Ig Nobel ... More > • Origin of Long Lifetime of Charge Carriers in Solar Cell Perovskites Posted 2017-07-05 18:19:00 When sunlight shines on a semiconducting material, electrons in the material can be excited from their original states to higher energy states, forming photo-excited electrons and leaving empty states (called holes) in the original states. Usually the ... More > • 2017-18 Deaver Scholarship Application Form for Physics Majors Posted 2017-05-18 16:11:00 The Deaver Scholarships were established to honor Bascom S. Deaver, a retired physics department professor, and are awarded to students who have declared or are intending to pursue an undergraduate major in physics. They are competitive and awarded ... More > • A novel quantum phase transition and super-entangled states Posted 2017-05-18 14:36:01 One of the main quests of many-body physics and quantum information is to understand quantum phases and in particular the entanglement present in them. In this work, Zhao Zhang and Amr Ahmadain, working with Prof. Klich, have uncovered a novel quantum ... More > • Chen wins Best Poster Award at MRS Spring Meeting Posted 2017-05-18 10:18:23 Tianran Chen received the Symposium ES1 Best Poster Presentation Award at the 2017 MRS Spring Meeting. The award was based upon technical content, graphica excellence, and presentation quality. For more see ... More > • Wright wins IT Excellence Award Posted 2017-05-11 14:00:00 Bryan Wright is one of three 2017 IT Excellence Award winners. This well earned award recognizes individuals who demonstrate outstanding service in support of the Information Technology needs of their organization. For more, see ... More > • Observing the Universe with Physics: National Physics Day 2017 Posted 2017-05-05 17:00:00 An article in UVaToday describes this year's annual National Physics Day Show, which was held on April 26. From the article: “It’s a demo show designed to educate and entertain kids and the grownups who bring them,” ... More > • Katya Gilbo Featured in UVaToday Posted 2017-04-24 11:42:00 Physics major Katya Gilbo is featured in an article in this week's edition of UVaToday. Katya says “nature is stuffed to the brim with dramatic processes, and we humans aren’t bystanders.” ... 
More > • Andersen wins Goldwater Scholarship; Gupta gets Honorable Mention Posted 2017-04-04 11:00:00 Bridget Andersen has won a 2017 Goldwater Scholarship, one of the most prestigious undergraduate scholarships in the natural sciences, mathematics, and engineering in America. Arvind Gupta was given an Honorable Mention. For a nice UVAToday ... More > • Pennies from Heaven Posted 2017-03-17 23:30:56 From USA TODAY: "There's a tale we've all heard: A penny dropped from the top of the Empire State Building would fall at such a rate it would impale and kill anyone it hit down below. The myth somehow weaseled its way through the ... More > • Louca Elected President of NSSA Posted 2017-03-05 17:13:42 Prof. Despina Louca has been elected to serve a four year term as President of the Neutron Scattering Society of America. See http://neutronscattering.org and ... More > • DeZoort Wins Best Poster Award at 2016 PhysCon Posted 2017-01-23 14:08:25 Gage DeZoort's poster, Anomalous Signal Reduction in the CMS ECAL Trigger, won a Best Poster award at the 2016 Quadrennial Physics Congress (PhysCon) held November 3-5, 2016, in San Francisco, CA. The$200 OSA ... More > • Liuti to serve on Physical Review C Editorial Board Posted 2017-01-20 17:44:00 Prof. Simonetta Liuti has been selected to serve on the Physical Review C Editorial Board for a three years term beginning on January 2017. See http://journals.aps.org/prc/staff More > • UVa's SPS Chapter Recognized as Outstanding Posted 2017-01-05 11:00:00 UVa's SPS chapter has been named a 2015-16 SPS Outstanding Chapter.   This honor is only given to the top 10% of all SPS chapters in the country.   The Director of the Society of Physics Students & Sigma Pi Sigma writes: "You ... More > • 'Hoo You Gonna Call Posted 2016-12-06 11:57:00 UVa Physics alumnus (graduate and undergraduate) James Maxwell played an important technical role in the recent Ghostbusters movie.  His research apparatus served as a model for the movie's laboratory and he served as technical ... More > • Cox Named 2016 AAAS Fellow Posted 2016-12-02 11:50:53 Brad Cox's election as AAAS Fellow is celebrated in UVAToday.  From the article: "Cox was honored for his contributions to the field of experimental high-energy physics, particularly in the discovery of the Higgs particle. The ... More > • 2016 SESAPS Conference Posted 2016-11-17 15:33:00 From November 9-12, 2016, the UVa Physics department hosted the 83rd Annual Meeting of the Southeastern Section of the American Physical Society (SESAPS):         http://sesaps2016.phys.virginia.edu The conference ... More > • Lee's Work on a Novel Solar Cell Material Highlighted in UVAToday Posted 2016-10-21 17:00:00 From UVaToday: "... scientists and engineers at the University of Virginia, with colleagues at the NIST Center for Neutron Research, the Oak Ridge National Laboratory and Cornell University, have made new inroads on understanding the ... More > • Dukes Describes Cosmic Pain Posted 2016-10-17 14:51:00 According to Prof. Craig Dukes, "Cosmic rays can be a real pain."  Dukes contributed a Fermilab "News at work" article discussing the work that he and his collaborators are doing to mitigate the adverse effects of cosmic ray ... More > • Negative Refraction of Electrons in Graphene Observed Posted 2016-10-07 16:30:40 Negative refraction for electrons passing a boundary has been observed by a team including Avik Ghosh, an Affiliated Professor of Physics.  The results have been published in Science. For more, see the SEAS Press Release: ... 
More > • Memories and energy landscapes of magnetic glassy states Posted 2016-10-03 15:48:45 Understanding how memory emerges from a complex network of neurons in our brain remains a challenging task in cognitive science.  Memory also arises in physical systems with complex energy landscapes such as glasses, disordered magnets, and social ... More > • Cates' Group Develops Novel Imaging Technique Posted 2016-09-29 14:03:33 From UVAToday: A unique new imaging method, called "polarized nuclear imaging" - combining powerful aspects of both magnetic resonance imaging and gamma-ray imaging and developed by physicists in the University of Virginia's ... More > • Chakdar Wins UVa Wide Postdoctoral Poster Competition Posted 2016-09-21 11:02:36 Shreyashi Chakdar has been awarded the First Place and the Audience Choice awards for the poster presentations in the "Physical Science and Engineering category" of University of Virginia Postdoctoral Research Symposium held on Tuesday, Sep ... More > • NOvA shines new light on how neutrinos behave Posted 2016-09-16 16:18:34 NOvA issued a press release and announced new results on the disappearance of muon neutrinos at the 38th International Conference on High Energy Physics.  From the press release: "NOvA scientists have seen evidence that one of the three ... More > • Bloomfield Explains Olympic Freestyle Posted 2016-08-10 08:36:30 UVA physics professor Lou Bloomfield explains some of the fundamental forces at work in Olympic freestyle swimming, and how swimmers can use science to get ahead. ... More > • DeZoort Elected SPS Associate Zone Councilor Posted 2016-06-14 15:18:00 Gage DeZoort has been elected to serve as Zone 4 Associate Zone Councilor (AZC) for the 2016-2017 school year. His term began on June 12, 2016. On the SPS National Council, Gage will be a representative and a voice for students’ opinions and ... More > • Day Interviewed on Future of Nuclear Energy Posted 2016-05-26 11:59:00 From CBS19: "I really don't think there is much future for Nuclear energy," said Donal Day, a Nuclear Physicist who teaches at University of Virginia. "I don't think it's because people are opposed to it, but because ... More > • Principato Meets Italian Prime Minister Posted 2016-04-19 16:53:49 Earlier this month, Cristiana Principato, a UVa graduate student in the Physics Department, had the opportunity to meet with Italian Prime Minister Matteo Renzi when he visited the Fermi National Laboratory where Cristiana conducts her research. Italy ... More > • Bixel, Diehl, and Slomka Elected to Phi Beta Kappa Posted 2016-03-30 13:00:00 David Bixel, Adam Diehl, and Matthew Slomka were elected to the Beta Chapter of Virginia of Phi Beta Kappa. Prof. Carrie Douglass, President, Phi Beta Kappa, Beta Chapter of Virginia, writes: "As the oldest and most distinguished honor ... More > • Special Physics Department Medical Physics Scholarship! Posted 2016-02-23 17:08:14 The Department has a $5,000 Scholarship available for this summer, 2016, to support a declared physics major to work with a faculty member on a medical physics project. The faculty member is not required to be member of the Physics Department, but the ... More > • UVa's SPS Chapter Again Recognized as Distinguished Posted 2016-02-11 13:00:00 For the third straight year UVa’s SPS chapter has been recognized as “distinguished”. This honor is only given to the top 20% of all SPS chapters in the country. For more about SPS Outstanding Chapter Awards, see: ... 
More > • Michael Mann on Climate Change Posted 2016-02-08 14:16:00 Michael E. Mann, Distinguished Professor of Atmospheric Science at Penn State University, spoke on The Physics of Climate Change at the UVa Physics Department on Friday, February 5. A video of Prof. Mann's talk can be seen at the ... More > • Mitchell Scholarship Deadline Approaching Posted 2016-01-29 18:00:00 Mitchell and other Scholarships are available to support declared undergraduate physics majors in their third or fourth year of study at UVa in the academic year 2016-17. They are competitive and awarded annually by the Department to rising 3rd and ... More > • Deaver featured in Physics Focus Posted 2016-01-04 13:14:59 Deaver's seminal work with William Fairbank on magnetic flux quantization is featured in a Physics Focus: Landmarks article (Physics is an American Physical Society site that provides daily on-line news and commentary about papers from the ... More > • Sackett Viewpoint featured in Physics Posted 2015-11-17 14:34:00 Cass Sackett wrote a Viewpoint, "Cool Physics with Warm Ions", for Physics, an American Physical Society site that provides daily on-line news and commentary about papers from the APS journal collection. According to Physics, ... More > • Zheng Elected a Fellow of the American Physical Society Posted 2015-10-22 14:10:00 Upon the recomendation of the Topical Group on Hadronic Physics (GHP), Xiaochao Zheng has been elected a Fellow of the American Physical Society. The citation reads: "For advancing the measurement of parity violating asymmetry in ... More > • Alumnus Tanner Awarded APS Isakson Prize Posted 2015-10-21 14:32:00 UVa Physics Alumnus David Tanner (B.A. 1966, M.S. 1967), now Distinguished Professor of Physics at the University of Florida, has been awarded the 2016 Frank Isakson Prize for Optical Effects in Solids by the American Physical Society. ... More > • Liuti and Rajan U.Va. recipients of DOE Topical Collaboration Award Posted 2015-10-20 11:33:00 Simonetta Liuti and her student Abha Rajan are the U.Va. recipients of DOE Topical Collaboration Award entitled "Coordinated Theoretical Approach to Transverse Momentum Dependent Hadron Structure in QCD (TMD Collaboration)". ... More > • Robert Mina: Sushi and Sliding Doors Posted 2015-10-12 13:30:00 Robert Mina's article on the 2015 CHEP (Conference on Computing in High Energy and Nuclear Physics) was featured in The SPS Observer. For more, see: ... More > • Xiaochao Zheng Featured in UVaToday's "U.Va. Faculty Will Realize Their 'Dream Ideas,' Thanks to Mead Grants" Posted 2015-09-29 15:33:00 From UVa Today: "...Xiaochao Zheng, associate professor of physics Because 3-D printing is a relatively new field, many undergraduate physics students have not been exposed to this increasingly useful tool. Zheng’s dream idea includes ... More > • Love triangles, quantum fluctuations and spin jam Posted 2015-09-02 13:00:00 Lee's group presented experimental evidence for the existence of a topological spin state called spin jam. When magnetic moments are interacting with each other in a situation resembling that of complex love triangles, called frustration, a large ... More > • Carr and Sperling Receive Dean's Scholarships Posted 2015-06-05 15:49:05 Peter Carr received the Lawrence Harrison Kilmon and May Lewis Kilmon Scholarship for outstanding academic achievement and Owen Sperling received the George C. and Carroll F. M. Seward Scholarship for outstanding academic achievement and ... 
More > • Xiao Receives ISO Academic Excellence Award Posted 2015-05-26 11:57:35 Liting Xiao received the International Studies Office Graduating International Students Awards in the category for Academic Excellence See https://www.facebook.com/51812338138/posts/10153293894208139/ More > • Janet Rafner Featured in UVaToday Series on Class of 2015 Posted 2015-05-21 11:39:42 From UVaToday: "... the fateful combination of being encouraged to undertake interdisciplinary study and a chance encounter with a professor led to an internship in Paris and eventually to curating an exhibition at the Science ... More > • Group wins 2015 Cory Family Teaching Award Posted 2015-05-20 14:26:41 Assistant Professor Craig Group has won the 2015 Cory Family Teaching Award. This is a prize presented annually to two untenured tenure-track professors in the College of Arts and Sciences, recognizing excellence in undergraduate instruction. ... More > • Moran Chen Featured in UVaToday Series on Class of 2015 Posted 2015-05-09 19:07:06 From UVaToday: “During Moran Chen’s time as a University of Virginia physics doctoral student, she helped set a world record in an area of physics that could play a crucial role in giving rise to a prototype quantum ... More > • Yanchenko Wins Outstanding Undergraduate Physics Major Research Award Posted 2015-05-08 13:11:21 The winner of this year's Outstanding Undergraduate Physics Major Research Award is Anna Yanchenko, whose research was entitled "THz Field Enhancement and Electron Emission from Au DENA Growth". Her research professor was Robert ... More > • Jones Elected Vice Chair of DAMOP Posted 2015-05-05 23:42:35 Bob Jones has been elected Vice Chair of the APS Division of Atomic, Molecular, and Optical Physics (DAMOP) starting in June. This is a four year service commitment to the division with consecutive single year terms as Vice Chair, Chair Elect, ... More > • Wong Wins Gwathmey Award Posted 2015-04-22 11:14:11 Dear All, Please join me to congratulate Gabriel Wong for receiving one of this year's Allan T. Gwathmey Memorial awards. The Gwathmey award is an honor reserved for the most accomplished graduate students in the ... More > • Chatterjee Garners NSF Career Award Posted 2015-04-02 13:32:22 From UVAToday: NSF’s Early Career Development Program supports junior faculty who perform outstanding research, are excellent educators and integrate education and research into their academic activities. The five-year grants ... More > • Yanchenko and Harris Elected to Phi Beta Kappa Posted 2015-03-31 16:42:26 Anna Yanchenko and Benjamin Harris were elected to the Beta Chapter of Virginia of Phi Beta Kappa. Prof. Carrie Douglass, President, Phi Beta Kappa, Beta Chapter of Virginia, writes: "As the oldest and most ... More > • UVa's SPS Chapter Recognized as Distinguished Posted 2015-02-02 11:10:50 For the second straight year UVa's SPS chapter has been recognized as "distinguished". This honor is only given to the top 20% of all SPS chapters in the country. The Director of the Society of Physics Students ... More > • McDonald Wins APS Award Posted 2015-01-20 11:00:00 Dear Faculty and Undergraduates, Please join me to congratulate Brigid McDonald for being recognized for giving one of the three best research presentations at the APS Conference for Undergraduate Women in Physics (cuwip.web.unc.edu). The ... More > • I Want The ILC! by P. Q. Hung Posted 2015-01-13 17:00:49 "The ILC Blues", by P. Q. 
Hung and Duong Quoc Dat, is featured in the Newsletter of the Linear Collider Comunity: http://newsline.linearcollider.org/2015/01/08/the-ilc-blues/ More > • Arnold and Louca - APS Fellows 2014 Posted 2014-12-09 17:48:18 Dear Colleagues, Peter Arnold and Despina Louca have been elected 2014 APS Fellows. Please join me to say “congratulations” to these two brand new APS Fellows in our department. Cheers, Joe ... More > • Cox and Hawley Named U.Va.’s 2014 Distinguished Scientists Posted 2014-11-06 09:59:21 From UVa Today: Two of the University of Virginia’s most accomplished faculty researchers – physicist Brad Cox and astronomer John Hawley – have been selected as 2014 Distinguished Scientists. The ... More > • Kamat Wins at Three Minute Thesis Competition Posted 2014-09-30 16:38:36 Ajinkya Kamat’s entry, “Can we solve the mystery of ‘Neutrinos’ at Large Hadron Collider?”, took third place at the 2nd Annual UVa Three Minute Thesis competition. For more, ... More > • November SESAPS Meeting Posted 2014-09-23 14:03:58 It is my pleasure to invite you to the SESAPS Meeting in Columbia, South Carolina, November 13-15, 2014. Our scientific program will focus on some of the year's outstanding research results across all fields from atomic and ... More > • Wong Selected to be Kavli Institute Graduate Fellow Posted 2014-07-03 15:23:00 Gabriel Wong, a PhD student working with Profs. Israel Klich and Diana Vaman, has been selected as a KITP (the Kavli Institute For Theoretical Physics at UCSB) Graduate Fellow for the fall of 2014. Gabriel's "Local Advisor" will be Joe ... More > • Amy Rodgers to Speak at Valedictory Exercises Posted 2014-05-04 14:34:08 The 2014 Valedictory Exercises will take place on Saturday, May 17 at 11:00 am on the lawn. This year's welcome speaker is fourth year Physics Major and Class of 2014 Trustee Amy Rodgers. Rodgers' address will be followed by the presentation of awards, ... More > • Lindgren, et al., Win Grants to Boost STEM Education Posted 2014-05-01 12:07:37 From UVaToday:The Virginia Department of Education has awarded faculty members in the University of Virginia’s Curry School of Education teacher education program and colleagues in U.Va.'s College of Arts & Sciences two grants to support Virginia ... More > • Moran Chen Wins 2014 Allen T. Gwathmey Memorial Award Posted 2014-05-01 11:38:06 Moran Chen is one of two recipients of the 2014 Allen T. Gwathmey Memorial Award, an honor reserved for the most accomplished graduate students in the sciences at the University of Virginia in recognition of a distinguished scholarly publication. Her ... More > • 'Universal' Property of Metamagnets Identified by Shivaram, et al. Posted 2014-05-01 11:37:33 From UVaToday:(A University of Virginia-led team) discovered that the magnetic effect of apparently all metamagnets is that it is non-linear. When these metamagnets are placed in an initial magnetic field and the field is doubled, they more than double ... More > • Gilbo and Xiao Featured at UVa's Public Day Posted 2014-04-23 17:52:07 Two of our undergraduates students and Physics majors, Yekatarina Gilbo and Liting Xiao, were selected to showcase their research at UVa's recent public day. Gilbo was Undergraduate Research Symposium Winner while Xiao exhibited Dark Matter search ... 
More > • Klich and Lee Solve Longstanding Condensed Matter Physics Problem Posted 2014-04-15 12:06:52 Our theorist, Israel Klich, and experimentalist, Seung-Hun lee, have worked together to solve a long-standing problem in condensed matter physics regarding a glassy state in some frustrated magnets. When spins are arranged in a lattice of triangular ... More > • Triple-Emmy-Winning "Professor Lou" Posted 2014-04-07 13:51:05 From UVaToday:"Professor Lou," as he is called on TV, is science central on a feature segment called "Forces of Hockey," which is produced by the Capitals and aired on the team's associated cable networks, on the NHL Network, and shown during breaks ... More > • Hoxton Lecture: Quantum Networks in Quantum Optics Posted 2014-03-20 16:53:57 Professor Jeff KimbleThis talk will discuss the opportunities for the exploration of physical systems that have not heretofore existed in the natural world. A reception will be held following the talk in the Chemistry Building atrium.Thursday, March ... More > • Janet Rafner wins Small Research and Travel Grant Posted 2014-03-20 16:25:26 Janet Rafner, one of our SPS officers, received a Small Research and Travel Grant in the amount of$1,500. She will travel to Orsay, France, to work with a team of physicists and designers to advance both public and academic understanding of the ... More > • Cates Elected Vice Chair of the APS Division of Nuclear Physics Posted 2014-02-12 10:27:29 Gordon Cates has been elected Vice Chair of the APS Division of Nuclear Physics (DNP), the beginning, in April 2014, a four year commitment in which he will next be the Chair Elect, then Chair, and finally Past Chair. More > • Zheng, et al., Report Electron-Quark Parity Violation in Nature Posted 2014-02-09 22:31:19 From Science:"... electrons also interact with the nuclei through the weak force, which violates parity and is not mirror symmetric. As a result, right-spinning and left-spinning electrons ricochet off the target differently, creating a slight ... More > • INPP 2nd Annual Lecture: Gordon Kane Posted 2014-01-30 16:14:51 *** Please note: The following lecture has been rescheduled for April 18, 2014 ****Prof. Gordon L. Kane will lecture on "String Theory, Our Real World, and Higgs Bosons" in Physics 203 on Friday, April 18, 2014.For more, ... More > • Brad Cox Named Outstanding Virginia Scientist Posted 2014-01-16 17:00:23 On January 15, 2014, UVaToday reported:"In recognition of Cox's contributions to the search for and discovery of the Higgs, Virginia Gov. Terry McAuliffe and the Science Museum of Virginia today named him as one of two Virginia Outstanding Scientists ... More > • Third Annual Undergraduate Physics Research Symposium Posted 2013-12-05 18:00:29 The Third Annual Undergraduate Physics Research Symposium was held on November 1, 2013. Six undergraduates presented their physics-related research at the event. Topics presented were as follows:Yekaterina Gilbo "The Role of Solar Wind in ... More > • Fendley, Lee, Pfister, Thacker - APS Fellows 2013 Posted 2013-12-02 20:00:42 Dear Colleagues,We have learned from APS that four of our colleagues - Paul Fendley, Seunghun Lee, Olivier Pfister and Hank Thacker have been electedAPS Fellows 2013. Please join me in congratulating all of themfor this well-deserved honor.My sincere ... 
More > • Third Virginia and Maryland String and Particle Theory Meeting Posted 2013-10-28 15:13:38 On Saturday, November 2, 2013, Diana Vaman and UVa's Physics Department will again play host to the Virginia and Maryland String and Particle Theory Meeting.The invited speakers are:Justin Khoury (University of Pensylvania, Philadelphia)Igor Klebanov ... More > • U.Va. Physicists Celebrate Their Role in Nobel-Winning Higgs Discovery Posted 2013-10-09 10:58:53 From UVa Today:"The Royal Swedish Academy of Sciences on Tuesday awarded the Nobel Prize in Physics to theorists Peter Higgs and Francois Englert to recognize their work in developing the theory of what is now known as the Higgs field, which gives ... More > • Joe Poon Serves on Excellence in Faculty Hiring Panel Posted 2013-10-02 18:47:50 From UVaToday:"A panel of U.Va. faculty members and a Human Resources consultant experienced in faculty searches shared a range of recruiting strategies. Dr. Sim Galazka, professor of family medicine in the School of Medicine; Joe Poon, William Barton ... More > • Pam Joseph Highlighted in Dean Woo's Zintl Award Acceptance Speech Posted 2013-09-30 17:39:41 In accepting this year's Elizabeth Zintl Leadership Award, presented by the Women's Center, Meredith Woo, the Dean of Arts and Sciences, mentioned our own Pam Joseph:"Pamela Joseph is the research administrator in our Physics Department. For over ... More > • Hung Awarded Vietnam's Medal for the Cause of Science and Technology Posted 2013-08-29 11:39:13 On Monday, August 12, 2013, P.Q. Hung was decorated by S.E. Nguyen Quan, Vietnam's Minister of Science and Technology, for his important contribution to the advances of Science and Education in Vietnam.A nice photo of the ceremony can be seen ... More > • Pfister Wins Distinguished Research Career Award Posted 2013-07-12 09:49:27 From UVa Today:"Pfister, professor of physics in the College, is a noted researcher in quantum information and quantum computing. The development of quantum computers, one of the most challenging but promising areas of information science, would have ... More > • Summer Edition of the "Physics Day Show" Posted 2013-07-03 14:25:08 To support our recruiting efforts, and for the fun of it, physics students teamed up with faculty and staff to organize two summer physics shows on July 9 and July 30 at 7 pm in room 203 in the Physics building. Spectacular and pedagogical demos will ... More > • Pam Joseph Receives 2013 Outstanding Contribution Award Posted 2013-05-21 09:44:52 From UVaToday:"Do you know which department in the University of Virginia's College of Arts & Sciences is best grant-funded?The Department of Physics claims that distinction. And several faculty members there say it's due to the work of research ... More > • Hoxton Lecture: The World According to Higgs Posted 2013-04-18 12:00:54 Professor Chris QuiggNew developments in particle physics offer a new and radically simple conception of the universe. Fundamental particles called quarks and leptons make up everyday matter, and two new laws of nature rule their interactions. Until ... More > • Eleventh Physics Department Research Poster Competition Winners Posted 2013-04-10 00:54:42 During the week of April 1 through April 5, 2013, the Physics department held a poster competition to highlight graduate student research. The competition was open to all students who had entered their third year in the graduate program and beyond.1st ... 
More > • UVa Featured on APS TV at March Meeting Posted 2013-03-18 11:06:34 UVa's is one of the physics programs featured on APS TV at the March Meeting:http://www.aps.org/meetings/march/services/apstv.cfmFrom the APS TV page:"The University of Virginia is ranked #2 in the U.S. for public universities and has one the largest ... More > • Hirosky Participates in "Science Straight Up" Public Outreach Posted 2013-03-14 10:40:12 From UVa Today: "Have you ever wondered what, exactly, a Higgs boson is? Or what a future quantum computer might be able to do? Or if climate change is real?Then ask a scientist. Maybe with a drink in your hand.You can Thursday evening at 7:30 at Black ... More > • Bloomfield's "Vistik" Highlighted in UVaToday Posted 2013-02-28 09:43:30 From UVaToday:"If anything bothers University of Virginia physicist Lou Bloomfield, it's a wobbly table. So much so that he actually invented a material to eliminate the problem. The material, a type of silicone rubber that is both rigid and fluid – ... More > • Day PRC Paper Included as Editors' Selection Posted 2013-01-07 16:07:25 A PRC paper presenting the results of Prof. Donal Day's work with John Arrington (our recent colloquium speaker), Nadia Fomin (his former graduate student), and a few others on the EMC effect has been included as an Editors' Selection for the month of ... More > • Cates Highlighted for Diquark Work Posted 2012-12-06 17:37:02 From DOE Pulse:... researchers have found intriguing new evidence on how the different kinds of quarks behave inside protons and neutrons. The data and insights, which were published in the journal Physical Review Letters, have recently received ... More > • UVa Physics Majors: Funding Available for Summer Research! Posted 2012-12-04 13:57:46 The UVa Physics Department is pleased to announce the availability of a number of Mitchell and other Summer Research Scholarships to support declared undergraduate physics majors to do research with a Physics Department faculty member next summer ... More > • Bloomfield Invents "Molecular Velcro" Posted 2012-11-28 10:35:49 From NewsPlex.com:Dr. Louis Bloomfield, a professor of physics at UVa, describes the material as being the "molecular equivalent of Velcro." The material bounces like a ball, stretches like silly putty, and sticks like glue but it's hard to describe ... More > • Day Elected Fellow of the American Physical Society Posted 2012-11-19 17:31:06 In recognition by his peers of his outstanding contributions to physics, Professor Donal Day has been elected a Fellow of the American Physical Society. His Fellowship Certificate will read:"For his studies of high momentum transfer quasielastic ... More > • Liuti Elected Vice Chair of SESAPS Posted 2012-10-15 21:07:10 From the Southeastern Section of the American Physical Society:"In the 2012 election, Professor Simonetta Liuti of the University of Virginia was elected Vice Chair. She will serve in the Chair line for the next four years."See ... More > • Zukai Wang Wins URA Visiting Scholars Program Award Posted 2012-10-01 10:03:16 The URA Visiting Scholars Program at Fermilab has awarded Zukai Wang $20,452 to work on the “Search for Magnetic Monopoles in the NOvA Far Detector”For more information on the program, ... 
More > • Second Virginia and Maryland String and Particle Theory Meeting Posted 2012-09-24 11:27:23 On Saturday, October 6, 2012, Diana Vaman and UVa's Physics Department will again play host to the Virginia and Maryland String and Particle Theory Meeting.The invited speakers are:Shinsei Ryu (U. Illinois, Urbana)Misha Stephanov (U. Illinois, ... More > • Bloomfield Explains Physics of Heroic Catch Posted 2012-07-18 14:56:54 From MSNBC:"The girl fell about 25 feet, which took about 1.25 seconds. The man stopped her fall in about 3 or 4 feet, which took about 0.1 second, depending on the stopping distance and how he supported her. So, she accumulated downward momentum over ... More > • UVa's Higgs Effort Highlighted in Press Posted 2012-07-05 09:33:22 From UVa Today:"We've had an observation that very likely is the Higgs," said University of Virginia physicist Brad Cox in the College of Arts & Sciences, who has been involved with the Higgs search at the Large Hadron Collider. "With more ... More > • Jesse W. Beams Published in July 2012 Scientific American Posted 2012-06-22 12:32:05 An excerpt of an article by Ernest O. Lawrence and J. W. Beams in the July 2012 issue of Scientific American:"Light is one of the most familiar physical realities. All of us are acquainted with a large number of its properties, while some of us who are ... More > • Mitchell Summer Research Scholarships Awarded! Posted 2012-05-10 11:20:27 Three of our physics majors have been awarded Mitchell Scholarships to doresearch with physics faculty this summer. Peter Breiding has been awarded$5,000 to work with Professor Gallagher designing and building a new tunablelaser, Davis van Petten has ... More > • Loomis and Popovic Win Presidential Research Poster Competition Posted 2012-05-08 14:11:51 From UVaToday:"University of Virginia student researchers offered unique perspectives on the physical, legal and political worlds Friday as they presented their findings in the second annual Presidential Research Poster Competition at the Rotunda. ... ... More > • UVa Credited with Originating National Physics Day Posted 2012-04-25 10:18:59 From physicscentral:"Physics fans, rejoice! April 24th is National Physics Day, and physics enthusiasts across the country are celebrating with fun physics demonstrations, public lectures, and other science events.But National Physics Day isn't new; ... More > • Tenth Physics Department Research Poster Competition Winners Posted 2012-04-24 17:59:40 During the week of April 2 through April 6, 2012, the Physics department held a poster competition to highlight graduate student research. The competition was open to all students who had entered their third year in the graduate program and beyond.1st ... More > • Jefferson Lab Searches for Heavy Photons Highlighted in Nature Posted 2012-04-04 16:40:11 From Nature News in Focus"In a three-week experiment due to start on 24 April, the electrons will crash into a thin tungsten target at 500 million times a second, creating a cascade of short-lived particles. Amid the debris, physicists with the Heavy ... More > • National Physics Day Posted 2012-04-03 16:24:51 In celebration of National Physics Day, the 18th annual University of Virginia physics demonstration show will be held at 7 p.m. on Wednesday evening, April 25, 2012, in room 203 of the Physics Building, 382 McCormick Road. This highly anticipated ... 
More > • Tenth Physics Department Research Poster Competition Posted 2012-03-29 10:14:14 During the week of April 2 through April 6, 2012, the Physics department will hold a poster competition to highlight graduate student research. The competition will be open to all students who have entered their third year in the graduate program and ... More > • Virginia and Maryland String and Particle Theory Meeting Posted 2012-03-27 10:33:40 On Saturday, March 31, 2012, Diana Vaman and UVa's Physics Department will play host to the Virginia and Maryland String and Particle Theory Meeting.For more see:http://www.phys.virginia.edu/Announcements/Meetings/Particle2012/ More > • UVa Researchers Featured in APS Synopses Posted 2012-03-19 22:26:48 From APS Synopsis: Getting Under the Neutron SkinHeavy nuclei are believed to have a neutron-rich skin on the surface, and the thickness of this skin may have important implications for the physics of neutron stars.Now the Lead Radius Experiment (PREx) ... More > • Cox Highlighted in Virginia Magazine Posted 2012-03-19 11:00:00 Brad Cox comments upon the LHC's search for the Higgs boson. For more: http://uvamagazine.org/university_digest/article/picking_up_the_subatomic_pieces More > • Introductory Lab Instructor Posted 2012-03-15 15:11:13 The Department of Physics at the University of Virginia invites applications for a Faculty position with the academic rank of Lecturer to serve as Introductory Lab Instructor. This is a non-tenure-track academic position, beginning Spring 2012. The ... More > • Bloomfield Makes Cents in Huffington Post Posted 2012-03-07 09:24:06 Prof. Bloomfield considers pennies from heaven (AKA the Empire State Building):'If it did strike you, it would feel like being flicked in the forehead — "but not even very hard," said Louis Bloomfield, a physicist at the University of Virginia. And ... More > • UVa team and collaborators measure ultra-fast protons in nuclei Posted 2012-02-29 13:25:19 The atomic nucleus is made of confined nucleons in constant motion dominated by their interactions with the mean field of the nucleus - that is the average potential generated the many body system. This mean field spawned motion is called the Fermi ... More > • 2012 Institute of Nuclear and Particle Physics Annual Lecture Posted 2012-02-29 10:18:37 The 2012 Institute of Nuclear and Particle Physics Annual Lecture, which will be held in the Physics Department, Room 203, on Thursday, March 15, at 3:30 p.m. The lecture will be delivered by Professor William Marciano, a prominent theoretical ... More > • Dukes' NOvA Work Highlighted in A&S Magazine Posted 2011-12-20 17:46:56 From the Fall 2011 Arts and Sciences Magazine, "Toward a New College":"Without this asymmetry, without this slight abundance of matter over antimatter, there would be nothing," says Craig DukesFor more, ... More > • UVa CMS Team Highlighted for Higgs Search Posted 2011-12-14 09:52:17 From The Daily Progress:A team of scientists that includes a group from the University of Virginia may have seen a glimpse of a subatomic particle that, in theory, is the key to holding together everything from atoms to airplanes.Scientific teams ... More > • Yohay Wins US LHC Users Organization Award Posted 2011-11-14 10:09:55 Per Sridhara Dasu for the US LHC Users Organization Executive Committee:The Young Scientist talks at the USLUO, especially the lightning round, was a grand success. Everyone liked the talks, and our NSF representative sent a special congratulatory ... 
More > • UVa to Dedicate New Physical and Life Sciences Research Building Posted 2011-10-18 15:24:14 From UVaToday:October 17, 2011 — The College and Graduate School of Arts & Sciences at the University of Virginia will dedicate its new, state-of-the-art Physical and Life Sciences Research Building on Friday at 3 p.m.The five-story, ... More > • Pocanic and Baessler Receive NSF MRI Funding Posted 2011-09-26 17:35:50 Dinko Pocanic and Stefan Baessler, in collaboration with colleagues from Arizona State University and Oak Ridge National Lab, have received funding for the development of a spectrometer optimized for precise measurements of correlations in neutron beta ... More > • Pfister Highlighted in Quantum Computing News Posted 2011-09-26 09:59:06 From Network World:"My work with optical fields has demonstrated good preliminary control over 60 qubit equivalents, which we call 'Qmodes' and has the potential to scale to thousands of Qmodes," Pfister says. "Each Qmode is a distinctly specified ... More > • Louca, et al., Awarded NSF MIRT Posted 2011-09-12 13:14:08 Despina Louca, with her colleagues at UT Austin, received one of the three MIRT awards made by the National Science Foundation this year among high competition.The Materials Interdisciplinary Research Team (MIRT) grant, which was awarded for the first ... More > • Pfister Accomplishes Breakthrough Toward Quantum Computing Posted 2011-07-15 09:05:30 From UVaToday:Olivier Pfister, a professor of physics in the University of Virginia's College of Arts & Sciences, has just published findings in the journal Physical Review Letters demonstrating a breakthrough in the creation of massive numbers of ... More > • D. Louca, O. Pfister, and M. Williams Promoted Posted 2011-06-20 15:52:31 Despina Louca and Olivier Pfister have been promoted to Full Professors and Mark Williams to Research Professor of Physics and Professor of Radiology, effective August 25, 2011. More > • Wolf Instrumental in Creation of the Virginia Nanoelectronics Center Posted 2011-05-25 11:32:53 From UVaToday:The University of Virginia, in partnership with the College of William & Mary and Old Dominion University, has launched the Virginia Nanoelectronics Center, or ViNC, to advance research aimed at developing next-generation ... More > • Neu Awarded FEST Funding Posted 2011-05-24 13:51:16 Christopher Neu has been named a Distinguished Young Investigator this year through the University of Virginia's Fund for Excellence in Science and Technology. Sponsored by the Office of the Vice President for Research, the fund - now in its seventh ... More > • Ninth Physics Department Research Poster Competition Posted 2011-05-11 10:55:44 During the week of March 28 through April 1, 2011, the Physics department held a poster competition to highlight graduate student research. The competition was open to all students who had entered their third year in the graduate program and ... More > • Gallagher receives 2010-11 Distinguished Scientist Award Posted 2011-04-20 09:23:24 Three pre-eminent researchers – Thomas F. Gallagher, Patrice G. Guyenet and Kodi S. Ravichandran – have been chosen to receive 2010-11 Distinguished Scientist Awards from the University of Virginia.This award from the Office of the Vice President ... More > • Rachel Yohay has been awarded the ARCS Fellowship Posted 2011-04-08 09:47:13 The Metro Washington Chapter of ARCS selected Rachel Yohay as a fellowship recipient for the 2011-2012 academic year. 
ARCS stands for Achievement Rewards for College Scientists and only 1 nomination was allowed out of UVa for the \$15,000 scholarship. ... More > • Wu and Yohay Honored Posted 2011-03-30 09:55:04 Chaolun Wu has been awarded a dissertation year fellowshipand Rachel Yohay has been selected from UVa this year to be nominated for the ARCS Foundation. ARCS stands for Achievement Rewards for College Scientists and only 1 nomination is allowed out of ... More > • Ninth Physics Department Research Poster Competition Posted 2011-03-08 17:34:43 During the week of March 28 through April 1, 2011, the Physics department will hold a poster competition to highlight graduate student research. The competition will be open to all students who have entered their third year in the graduate program and ... More > • Bloomfield wins Jefferson Scholars Faculty Prize Posted 2011-02-17 17:12:12 From UVaToday:The Jefferson Scholars Foundation at the University of Virginia has awarded Louis A. Bloomfield its 2011 Faculty Prize. Bloomfield, a physics professor in the College of Arts & Sciences, has been a member of the faculty since ... More > • Congratulations to Kelsie Betsch and Jirakan Nunkaew Posted 2011-02-01 16:14:30 Congratulations to Kelsie Betsch and Jirakan Nunkaew on receiving an Award for Excellence in Scholarship in the Sciences and Engineering! This award recognizes excellence in original scholarship by PhD students at the University and rewards those ... More > • Poon Group's Work on Thermoelectric Materials Recognized Posted 2011-01-27 15:51:34 Researchers in the US, including U.Va., have unveiled a new high-temperature material that is 60% better at converting heat to electricity than comparable "thermoelectrics". The material, which is a nanocomposite, could potentially be used to boost the ... More > • Memorial Resolution for Klaus Ziock Posted 2010-12-10 14:05:48 Memorial Resolution for Klaus O. H. Ziock, read to the Faculty of Arts and Sciences on December 9, 2010, by Prof. Ralph Minehart:Klaus Otto Heinrich Ziock, an experimental physicist and Physics Professor Emeritus, died on November 5, 2010, from ... More > • Dukes' NOvA Work Highlighted in UVaToday Posted 2010-12-08 09:43:44 One of the great and fundamental questions in physics is: Why is there matter? Physicists theorize that in the instant after the Big Bang created the makings of the universe, there were nearly equal amounts of matter and anti-matter, protons and ... More > • Lamacraft named a Cottrell Scholar Posted 2010-11-22 14:55:19 Dear Colleagues,I have just received word that Austen Lamacraft has been named a Cottrell Scholar by the Research Corporation for Science Advancement. This is an honor for Austen and the announcement of the award was made to the UVa President's Office. ... More > • HEP Group Seeks New Physics at the LHC Posted 2010-11-18 09:29:46 As reported in UVaToday:"International teams of scientists working on an array of high-energy physics research projects with the Large Hadron Collider near Geneva – including several physicists from the University of Virginia's College of Arts & ... More > • Quark Momentum Distributions Beyond the Free Nucleon Limit Posted 2010-11-17 14:20:37 This paper, coauthored by Nadia Fomin, a former UVa graduate student (and now a research associate at the University of Tennessee), Donal Day (her advisor at UVa), John Arrington of Argonne National Lab and others, provides for the first time a direct ... 
More > • Fishbane Defends Einstein Posted 2010-11-15 16:10:01 Professor Emeritus Paul Fishbane was noted in UVa Todayhttp://www.virginia.edu/uvatoday/headlines.phpfor Salon's ... More > • Lee Group Paper a J. Phys. Soc. Jpn TOP 20 Posted 2010-11-15 10:07:53 The paper "Investigation of the Spin-Glass Regime between the Antiferromagnetic and Superconducting Phases in Fe1+ySexTe1-x" was one of the top twenty papers downloaded from the Journal of the Physical Society of Japan's ... More > • Lee Highlighted in APS News Posted 2010-10-13 17:50:47 In the October 2010 APS News:"To me, this challenges the integrity of science... They say theyreached these conclusions that have enormous consequences on thepolitical and international stage. As a scientist and scholar, I feltit was my duty to check ... More > • Smola Wins Presidential Award for Excellence in Mathematics and Science Teaching Posted 2010-10-08 15:16:14 Raymond Smola, a 2008 graduate of our Master of Arts in Physics Education program, recently was one of the winners for the 2010 Presidential Award for Excellence in Teaching Mathematics and Science. This is the highest award that a K-12 teacher can ... More > • Breaking through to the other side Posted 2010-08-31 23:03:41 The highest temperature superconductivity has been found when asmall fraction of electrons was added or removed from Mottinsulators where strong electron-electron repulsion preventselectrons from moving freely. To dissect a Mott insulator andexamine ... More > • Invitation - Deaver Retirement Posted 2010-04-28 16:22:52 Honoring 45 years of service in the UVA Physics Department . . .A Retirement Symposium and Reception CelebrationforBascom DeaveronMay 12, 2010 at 1:00pmRoom 204, Physics BldgOur colleague Bascom Deaver will be retiring this summer. In celebrating ... More >
2019-03-22 22:16:42
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.17965812981128693, "perplexity": 8319.805776475889}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912202698.22/warc/CC-MAIN-20190322220357-20190323002357-00008.warc.gz"}
https://physics.stackexchange.com/questions/420281/strange-question-involving-finding-a-relation-between-a-commutator-and-the-time
# Strange question involving finding a relation between a commutator and the time derivative of an operator
In order to get to the parts I am stuck at, I will add the examiners' solutions to each subquestion, which is needed to get to the subquestion that I am querying. The following is a bizarre question from a highly ranked university quantum mechanics exam: Consider the quantity $$\hat{\mathcal{O}}_{mn}(t)=\int {u_m}^{*}(x,t)\hat{\mathcal{O}}\,u_n(x,t)dx$$ for some operator $\hat{\mathcal{O}}$ which has no explicit time-dependence, where $u_n(x,t)$ and $u_m(x,t)$ are eigenstates of $\hat{H}$ at time $t$. Write this expression in terms of the eigenstates $u_m(x)$ and $u_n(x)$ at time $t=0$. In the second line I believe that there is a mistake and $\int {u_m}^{*}(x,t)\hat{\mathcal{O}}\,u_n(x,t)dx$ should be $\int {u_m}^{*}(x,0)\hat{\mathcal{O}}\,u_n(x,0)dx$ since all the time dependence is in the exponentials out front. Could someone please confirm or deny whether this is indeed correct? What is the time-derivative? $$\frac{d}{dt}\hat{\mathcal{O}}_{mn}(t)?$$ Consider the operator $\hat{\mathcal{O}} = [\hat{A},\hat{H}]$ and find $\hat{\mathcal{O}}_{mn}(t)$ for this case. Find a relationship between $\frac{d \hat{A}_{mn}}{dt}$ and $\hat{\mathcal{O}}_{mn}(t)$. In the first integral there may be a typo as I think that $$\int {u_m}^{*}(x)[\hat{A},\hat{H}]u_n(x,t)dx$$ should be $$\int {u_m}^{*}(x)[\hat{A},\hat{H}]u_n(x)dx$$ I think that $$\hat{\mathcal{O}}_{mn}(0)=(E_n-E_m)\langle\hat{A}\rangle$$ When the examiner writes "Comparing to the above" in the last line, I presume the part that is being compared is $$\frac{d}{dt}\hat{\mathcal{O}}_{mn}(t)=-\frac{i}{\hbar}\left(E_n-E_m \right)\hat{\mathcal{O}}_{mn}(t)$$ I can't figure out how to derive the relationship $$\frac{d}{dt}\hat{A}_{mn}(t)=-\frac{i}{\hbar}[\hat{A},\hat{H}]_{mn}\tag{1}$$ as I don't even think it is correct since the question asked to find $\color{red}{\hat{\mathcal{O}}_{mn}(t)}\,\color{red}{\text{for this case.}}$ It also asked for $\color{red}{\text{a relationship between}}\,$ $\color{red}{\frac{d \hat{A}_{mn}}{dt}}$ $\color{red}{\text{and}}\,$ $\color{red}{\hat{\mathcal{O}}_{mn}(t)}$ not $[\hat{A},\hat{H}]$. I would really like to understand how to derive relation $(1)$, so if anyone could give me any hints or advice on how to go about this it will be much appreciated. • I don't think this is a "very strange" or "bizarre" question. It looks pretty standard to me. Yes there are a few typos in the solution, but have pity on the poor solution writer who has to deal with so many subscripts and arguments. – hft Jul 31 '18 at 0:42 • @hft Hi, I used the words 'strange' and 'bizarre' as before this exam I've never seen operators written with a variable dependence like $\hat{\mathcal{O}}_{mn}(t)$ shown explicitly. I'm used to seeing functions written with their explicit dependence on a variable such as the eigenstates $u_n(x,t)$ but never an operator. I have no pity for the examiner that wrote the solution as their solution needs to be verified before just handing it out for students to see. Also, it makes it a nightmare trying to learn from a solution loaded with typos (especially if you fail to identify them). – BLAZE Jul 31 '18 at 2:26 • In the "Schrodinger picture" of Quantum Mechanics all the time dependence is in the states and the operators are time-independent. In the "Heisenberg picture" it's vice versa. In the "Interaction Picture" both the states and the operators have time dependence.
All these ways of looking at the time evolution are useful, so best to just start getting used to it. ;) – hft Jul 31 '18 at 4:12 In the second line I believe that there is a mistake... Could someone please confirm or deny whether this is indeed correct Correct. There is a mistake/typo. In the first integral there may be a typo... Correct. I can't figure out how to derive the relationship $$\frac{d}{dt}\hat{A}_{mn}(t)=-\frac{i}{\hbar}[\hat{A},\hat{H}]_{mn}\tag{1}$$ First consider $$\mathcal{O}=A\;.$$ In this case: $$A_{mn}(t)=e^{i(E_m-E_n)t}A_{mn}$$ and $$\frac{dA_{mn}(t)}{dt}=i(E_m-E_n)e^{i(E_m-E_n)t}A_{mn}=i(E_m-E_n)A_{mn}(t)\;. \qquad (1)$$ Next consider (as you already did in the test problem): $$\mathcal{O}=[A,H]\;.$$ In this case: $$[A,H]_{mn}(t) = e^{i(E_m-E_n)t}(E_n-E_m)A_{mn}= (E_n-E_m)A_{mn}(t)\;. \qquad (2)$$ Comparing $i$ times Eq (1) with Eq (2), I see that: $$i\frac{dA_{mn}(t)}{dt} = -(E_m-E_n)A_{mn}(t) = (E_n-E_m)A_{mn}(t) = [A,H]_{mn}(t)\;.$$ In other words: $$\frac{dA}{dt}=-i[A,H]$$ • Thank you for your answer (+1), I would just like to point out that you are missing factors of $\hbar^{-1}$ not only in the final answer but also in the complex exponentials and in the equation after $(2)$. Many thanks. – BLAZE Jul 31 '18 at 2:50 • I have chosen units such that hbar=1. See, for example, en.wikipedia.org/wiki/Natural_units. You may restore the "missing" factors of hbar by dimensional analysis. For example, since hbar has units of energy times time, there is clearly one factor of hbar (=1) in either the numerator of the LHS of the final equation or the denominator of the RHS of the final equation... – hft Jul 31 '18 at 4:05 • I'm sorry, still a little confused by what you mean when you say "First consider $\mathcal{O}=A$. In this case $A_{mn}(t)=e^{i(E_m-E_n)t}A_{mn}$". Could you please expand on what you are doing here? As last time I checked $\hat{\mathcal{O}} = [\hat{A},\hat{H}] \ne A$. Thanks for your help so far. – BLAZE Aug 1 '18 at 1:22 • @BLAZE The first part of this answer is just addressing the part of the original question that asks for $d\hat{A}_{mn}/dt$. The part of the original question just before that finds $d\hat{\mathcal{O}}_{mn}/dt$, without any restriction on what $\hat{\mathcal{O}}$ is, so hft is just saying to use that result but replace $\hat{\mathcal{O}}$ with $\hat{A}$. The second part of this answer says to use a different replacement. There are really two different versions of $\mathcal{O}$. It might have been better in the original to use different letters, but hft has stuck with the original notation. – Mike Aug 1 '18 at 2:14
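For anyone who wants to check the algebra above numerically, the following is a minimal sketch (not part of the original exam question or answer) that verifies $\frac{d}{dt}\hat{A}_{mn}(t)=-\frac{i}{\hbar}[\hat{A},\hat{H}]_{mn}(t)$ with small random Hermitian matrices standing in for the operators, and $\hbar=1$ as in the answer.

```python
# A numerical sanity check of d/dt A_mn(t) = -(i/hbar) [A, H]_mn(t), with hbar = 1
# and random Hermitian matrices as stand-ins for the operators. Illustrative only.
import numpy as np

rng = np.random.default_rng(0)
N = 4

def random_hermitian(n):
    M = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return (M + M.conj().T) / 2

H = random_hermitian(N)          # Hamiltonian (no explicit time dependence)
A = random_hermitian(N)          # some observable A

E, U = np.linalg.eigh(H)         # eigenvalues E_n, eigenvectors u_n (columns of U)
A_energy = U.conj().T @ A @ U    # matrix elements A_mn = <u_m| A |u_n> at t = 0
comm_energy = U.conj().T @ (A @ H - H @ A) @ U   # [A, H]_mn at t = 0

def A_mn_t(t):
    """A_mn(t) = exp(i (E_m - E_n) t) * A_mn(0), as derived in the question."""
    phase = np.exp(1j * (E[:, None] - E[None, :]) * t)
    return phase * A_energy

# Compare a numerical time derivative at some t with -(i) [A, H]_mn(t).
t, dt = 0.7, 1e-6
lhs = (A_mn_t(t + dt) - A_mn_t(t - dt)) / (2 * dt)
phase = np.exp(1j * (E[:, None] - E[None, :]) * t)
rhs = -1j * phase * comm_energy

print(np.max(np.abs(lhs - rhs)))   # should be tiny (finite-difference error only)
```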
2019-11-23 00:15:31
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7936186194419861, "perplexity": 241.32207757582867}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496672170.93/warc/CC-MAIN-20191122222322-20191123011322-00022.warc.gz"}
https://wattsupwiththat.com/2014/08/28/a-note-on-the-50-50-attribution-argument-between-judith-curry-and-gavin-schmidt/
A Note on the 50-50 Attribution Argument between Judith Curry and Gavin Schmidt Guest essay by Bob Tisdale | Judith Curry and Gavin Schmidt are arguing once again about how much of the global warming we’ve experienced since 1950 is attributable to human-induced global warming.  Judith’s argument was presented in her post The 50-50 argument at ClimateEtc (where this morning there were more than 700 comments…wow…so that thread may take a few moments to download.)  Gavin’s response can be found at the RealClimate post IPCC attribution statements redux: A response to Judith Curry. Gavin’s first illustration is described by the caption: The probability density function for the fraction of warming attributable to human activity (derived from Fig. 10.5 in IPCC AR5). The bulk of the probability is far to the right of the “50%” line, and the peak is around 110%. I’ve included Gavin’s illustration as my Figure 1. Figure 1 So the discussion is about the warming rate of global surface temperature anomalies since 1950. Figure 2 presents the global GISS Land-Ocean Temperature Index data for the period of 1950 to 2013. I’m using the GISS data because Gavin was newly promoted to the head of GISS. (BTW, congrats, Gavin.)  As illustrated, the global warming rate from 1950 to 2013 is 0.12 deg C/decade, according to the GISS data. Figure 2 For this discussion, let’s overlook the two hiatus periods during the term of 1950 to 2013…whether they were caused by aerosols or naturally occurring multidecadal variations in known coupled ocean-atmosphere processes, such as the Atlantic Multidecadal Oscillation (AMO) and the dominance of El Niño or La Niña events (ENSO).  Let’s also overlook for this discussion any arguments about how much of the warming from the mid-1970s to the turn of the century was caused by manmade greenhouse gases or the naturally occurring multidecadal variations in the AMO and ENSO. Bottom line, according to Gavin: The bottom line is that multiple studies indicate with very strong confidence that human activity is the dominant component in the warming of the last 50 to 60 years, and that our best estimates are that pretty much all of the rise is anthropogenic. Or in other words, all the warming of global surfaces from 1950 to 2013 is caused by anthropogenic sources.  Curiously, that’s only a warming rate of +0.12 deg C/decade. He’s not saying that all of the warming, at a higher rate, from the mid-1970s to the turn of the century is anthropogenic.  His focus is the period starting in 1950 with the lower warming rate. HOWEVER Climate models are not tuned to the period starting in 1950.  They are tuned to a cherry-picked period with a much higher warming rate…the period of 1976-2005 according to Mauritsen, et al. (2012) Tuning the Climate of a Global Model [paywalled].  A preprint edition is here.  As shown in Figure 3, the period of 1976 to 2005 has a much higher warming rate, about +0.19 deg C/decade. And that’s the starting trend for the long-term projections, not the lower, longer-term trend. Figure 3 And that’s why, when compared to the observed warming rate for the period of 1950 to 2013, which, according to Gavin, is the period “that our best estimates are that pretty much all of the rise is anthropogenic”, then climate model warming rates appear to go off on a tangent.  The modelers have started their projections from a cherry-picked period with a high warming rate. 
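As an aside for readers who want to reproduce the decadal rates quoted above, here is a minimal sketch of the usual calculation: an ordinary least-squares fit of monthly anomalies against time, scaled to deg C per decade. The `load_giss_monthly()` helper is hypothetical, since the parsing depends on which GISS LOTI file you download, so a synthetic series is used to keep the script self-contained.

```python
# Sketch of how trends like +0.12 C/decade (1950-2013) or +0.19 C/decade (1976-2005)
# are typically computed: a least-squares fit to monthly anomalies. Illustrative only.
import numpy as np

def trend_deg_per_decade(years, anomalies):
    """Least-squares slope of anomaly vs. time, converted to deg C per decade."""
    slope = np.polyfit(years, anomalies, 1)[0]   # slope in deg C per year
    return 10.0 * slope

def load_giss_monthly(path):
    """Hypothetical loader: should return (decimal_year, anomaly_degC) arrays."""
    raise NotImplementedError("depends on the file format you downloaded")

if __name__ == "__main__":
    # Synthetic stand-in so the script runs without the real file:
    # a 0.012 C/yr trend plus noise, monthly, 1950-2013.
    years = 1950 + np.arange(64 * 12) / 12.0
    anoms = 0.012 * (years - years[0]) + np.random.default_rng(1).normal(0, 0.1, years.size)

    mask_full = (years >= 1950) & (years < 2014)
    mask_tune = (years >= 1976) & (years < 2006)
    print("1950-2013 trend: %.2f C/decade" % trend_deg_per_decade(years[mask_full], anoms[mask_full]))
    print("1976-2005 trend: %.2f C/decade" % trend_deg_per_decade(years[mask_tune], anoms[mask_tune]))
```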
Figure 4 shows the warming rates for multi-model ensemble-member mean of the CMIP5-archived models using RCP6.0 and RCP8.5 scenarios for the period of 2001-2030.  RCP6.0 basically has the same warming rate as the observations from 1976-2005, which is the model tuning period, but that’s much higher than the warming rate from 1950-2013.  And the trend of the business-as-usual RCP8.5 scenario seems to be skyrocketing off with no basis in reality. Figure 4 And in Figure 5, the modeled warming rates for the same scenarios are shown through 2100. Figure 5 CLOSING I’ve asked a similar question before:  Why would the climate modelers base their projections of global warming on the trends of a cherry-picked period with a high warming rate?  The models being out of phase with the longer-term trends exaggerates the doom-and-gloom scenarios, of course. But we purposely overlooked a couple of things in this post…that there are, in fact, naturally occurring ocean-atmosphere processes that contributed to the warming from the mid-1970s to the turn of the century—ENSO and the AMO.  The climate models are not only out of phase with the long-term data, they are out of touch with reality. SOURCES The GISS Land-Ocean Temperature Index data are available here, and the CMIP5 climate model outputs are available through the KNMI Climate Explorer, specifically the Monthly CMIP5 scenario runs webpage. Subscribe Notify of Latitude correction: As illustrated, the global warming rate from 1950 to 2013 is 0.02 deg C/decade, according to the GISS raw data. according to Gavin: correction: The bottom line is that multiple studies indicate with very strong confidence that human activity is the dominant component in the warming of the last 50 to 60 years, and that our best estimates are that pretty much all of the rise is adjustments and algorithms.. ossqss Bingo! Dougmanxx You beat me to it. “Man made” indeed! James the Elder “Estimates” and “pretty much”. GIGO correction my results show there is no man made global warming whatsoever namely this would affect minimum temperatures, but this is going down naturally, 100% http://blogs.24.com/henryp/files/2013/02/henryspooltableNEWc.pdf (last graph, on the bottom of the last table) Kelvin Vaughan If none of the warming is man made then the warming is definitely man made.. Kelvin I don’t follow if there were man made warming you should see chaos but the relationship of the speed of warming versus time (deceleration) is going down 100% natural KevinM moreCarbonOK[&theWeatherisalwaysGood]HenryP: He left off a smiley face. He means IF the warming in the world isn’t man made (raw data) THEN the warming in the charts is man made (presented data). Owen in GA HenryP: I think he means man made in the laboratory with a computer rather than man made due to economic activity. The more the alarmists claim a huge CO2 effect during the warming periods, the more impossible their task of explaining the whyfor of the pause, when CO2 increases have not paused. Their cause and effect have become totally disjointed. milodonharlani Besides the current plateau, they also have to explain the cooling from c. 1944 to 1976 under rising CO2, & the rising temperature during the 1920s to ’40s on falling CO2. Temperature accidentally happened to rise during c. 1977-96 during climbing CO2 because of the switch to the warm phase PDO in 1977. latecommer2014 Once again correlation is not causation. milodonharlani Between CO2 & temperature there isn’t even good correlation, let alone causation. 
On longer time scales, there is correlation & causation between rising T & CO2, but T is the cause & CO2 the effect. Chris4692 Latecommer: Correlation does not prove causation, but if there is causation there will be correlation. Robert of Ottawa Being true believing Warmistas, they turned the clock speed down on their super-dupe computers, hence reducing the rate of warming. It’s that simple! Johanus Chris4692 August 28, 2014 at 12:16 pm Latecommer: Correlation does not prove causation, but if there is causation there will be correlation. … except when there is no correlation. If, for the sake of argument, we accept that rising CO2 will cause rising global warming, then the currently rising CO2 levels should be causing rising temperatures. But currently there is no observable correlation between rising CO2 and global temps. There is no compelling, simple explanation for this lack of correlation (assuming AGW). In fact, there are at least 37 explanations for it, none of them compelling enough to displace the others. Mark It’s only appears to be in fairly recent time that there is any correlation between CO2 & temperature too. With even that looking for like B causes A (or possibly C causes A and B). joe Col Mosby: The more the alarmists claim a huge CO2 effect during the warming periods, the more impossible their task of explaining the whyfor of the pause, when CO2 increases have not paused. Their cause and effect have become totally disjointed” Oh Contraire – Dr. Mann has a recent study which shows the cooling phase is due to the AMO/PDO while none of the warming is due to the flip side of the AMO/PDO cycles. ( I may be oversimplifying his conclusion, though that is the general gist of his study). The irony is that many of the skeptics have pointed out the amo/pdo cycles partly explaining both the warming phase and the cooling phase and the most likely cause of the current pause (skeptics have proffered this explanation since the late 1990’s) yet the high priest of climate science has only recently acknowledged 1/2 of the cycle. ( Mann obviously knows more science than us mere mortals) Another irony, is that Mark Steyn pointed this out circa 2009. george e. smith Fig 1 looks smack dab right in the center to me. What is this “far to the right” mumbo jumbo?? Couldn’t be more ho hum , big deal, if I had plotted it myself. You need new spectacles Gavin; and no, I didn’t mean for you to go to Burning Man, I meant, get some new glasses ! Greg Gav says: “The bottom line is that multiple studies indicate with very strong confidence that human activity is the dominant component in the warming of the last 50 to 60 years, and that our best estimates are that pretty much all of the rise is anthropogenic.” This is not consistent. “Dominant” just mean largest, it does not mean more than everything else added together. If the issue is split into many parts like, human, volcanic ENSO, solar…… the “dominant” factor could actually be quite small percentage, much less than half. The previous AR4 “majority of warming” is not the same thing as AR5 “dominant cause” of warming. This is a climbdown in the IPCC position that seems to have gone mainly unnoticed. Quite where Gav gets his “best estimates are that pretty much all of ” I don’t know but it’s far more extreme claim than either AR4 or AR5. latecommer2014 It appears everyone is using adjusted temperatures, so the error bar should be as large as the adjusted temp. 
I do not believe in land based, corrupted temp records, and hold that any forcing caused by man is automatically absorbed and compensated for by nature. That is why we do not have “run away climate”. There is no proof man can over ride natural climate processes for any extended period. Mark Shouldn’t the error bars be somewhat larger than the adjustment? Since you still need to factor in the accuracy and precision of the original readings. Together with that applicable to any “data processing” involved. Gavin Schmidt is prevaricating as usual. Global warming since the LIA is composed of natural step changes. Those steps are exactly the same — whether CO2 was low, or high. Therefore, there is no “fingerprint of AGW”. It is clearly shown here in über-Warmist Dr. Phil Jones’ chart: M Courtney Very good point. I wonder if SkS will keep pushing their escalator line if that is pointed out. Not sure Dr. Phil Jones is an über-Warmist though. He always struck me as more wrong than wronging. lee Someone used that graph the other day on me. I pointed out Trenberth’s ‘Has Global Warming Stalled?’ ‘big jumps’ as noted by Bob Tisdale. I suggested SKS sometimes got it right for the wrong reasons. I never got a response. http://wattsupwiththat.com/2013/06/04/open-letter-to-the-royal-meteorological-society-regarding-dr-trenberths-article-has-global-warming-stalled/ FrankKarr Mr Schmidt should examine the graph above closely to see the overall warming from 1850 to 2013. Its about 0.9 degrees C over that long term period and works out to about 0.55 deg C per CENTURY. Peanuts. The biggest fraud in history over a few tenths of a Degree. rgbatduke Yeah, db, a point that I, and Lindzen, and many others have tried to emphasize in discussion. You can actually take HADCRUT4 from the first half of the 20th century and the second half of the 20th century and put them side by side on similar scales but with the time scale hidden and ask which graph occurred with the help of anthropogenic CO_2? Not so easy to tell, unless you are aware of the individual features such as the terminating super-ENSO in the late 20th century. I sometimes think that the last round of tampering in the GISS anomaly was designed as much to erase this similarity and as much of the pause as possible without quite making it laughable compared to LTT. But that game is up — there will be no more adjustments of GISS or HADCRUT4 to further warm the present as they are now UAH/RSS constrained. That hasn’t stopped them from trying to further cool the past, and now newcomers are appearing that re-krige and infill and homogenize areas that “haven’t shown enough warming” because they are less constrained by LTT; this further obfuscates if nothing else. HADCRUT4 — and earlier versions of HADCRUT even more — clearly give the lie to the assertion of “unprecedented” warming, though, in precisely this graph (which anybody can make, BTW, at least piecewise on woodfortrees). However, even this graph omits the display of or discussion of two critical problems with assertions of warming or cooling or plain old knowledge of temperature. The first and most glaring omission is the absence of any error bar or estimate on the data. This is insane! In what other field of human endeavor are so many data-derived graphs shown to so many people utterly devoid of error estimates? Note the obvious impact of error visible in the Jones curve. 
Does Jones, or anyone else, really think that the global average surface temperature anomaly was 10 times more volatile in the 1800’s, with the planet warming by 0.6C over as little as a year and then plunging down into 0.6C of cooling relative to some ill-defined mean in a year more? Because that’s what the error-bar free data shows. Of course not! What the graph is showing is the impact of the sparseness of the record in the 19th century. With order of 10x as much variance, there is order of 100x less data contributing in the 19th century compared to the present. In the 19th century most of the Earth’s surface area was completely unsampled (I mean “most” literally — 70% of the surface that is ocean, the bulk of at least 3 or 4 continents were either terra incognita altogether, e.g. Antarctica, or barely penetrated by a thermometer — if you will excuse the image — and consider the Amazon, central Africa, much of Siberia and central Asia, Tibet, even much of the U.S.). The parts that were sampled were obviously quite volatile — one imagines that the bulk of what is producing these large variations were things like heat waves in Europe. The variance quiets quite a lot when the colonial gold rush really gets underway in the 1880s and colonials carry thermometers with them to their newly annexed territories. The ocean remained a problem then, and remains a huge problem now with ARGO pitifully undersampling 70% of the Earth’s surface even today, and that in a highly biased fashion with buoys that float with thermohaline currents or are trapped in eddies (both unlikely to reflect their surrounding environment adequately) rather than be distributed according to a simple random number generator in Monte Carlo style (which would have a computable statistical error instead of an unknown bias). There is a surprising amount of variance for a global temperature anomaly today, but at least between the thermometric record and the LTT satellite record, we can think about resolving features of the presumably much less volatile actual anomaly from the statistical noise, by comparing the various “modelled” average temperatures. The error is almost certainly larger than the difference between, say, GISS and HADCRUT4 or Cowtan and Way, and at present these numbers are easily 0.2C or thereabouts apart much of the time. HADCRUT4 acknowledges — IIRC — 0.15C of error in the present. I think this is an underestimate but let’s go with it, as the existence of the number we hope means that they actually computed it instead of pulled it out of their nether regions, as were the error estimates on graphs in the leaked early AR5 draft (figure 1.4?), which were obviously created by a graphical artist and not by anything like an algorithm. The scaling of the variance then suggests that the error estimate in the mid-1880s ought to be a whopping 1.5C — the eye suggests that a more modest 0.4 C error bar might encapsulate 60% of the data such as it is, but that is really the error for the sampled territories only and is a lower bound on the error estimate for global temperature. I’d suggest that 0.7C is a compromise — one can probably find proxies (with their own error and resolution problems) that that constrain the error to be less than 1 C. This statistical — not systematic — error would then systematically, but slowly, shrink from then to now. 
It wouldn’t really be linear — as I said, there is a relatively rapid diminishment in the late 19th century followed by a slower decrease into the late 20th, but it is likely fair to say that it is at least 0.3 to 0.4C for most of the record prior to the satellite era and ARGO, as only these have made it possible to push it down to ballpark of 0.2C. If one includes the error estimate on the graphs, our certainty of any particular thermal history substantially diminishes. Maybe it warmed since the mid-1800’s. Maybe it has cooled. Maybe it warmed a lot more. Maybe the single 20 year period in the late 20th century when warming occurred has the steepest slope in the thermometric record, or — most importantly — maybe it does not! That’s the big statistical lie even in Jones relatively honest portrayal of the HADCRUT4 trends above. If one actually fit the data, with errors, and used e.g. a measure like Pearson’s $\chi^2$ to estimate the robustness of the linear trend, how likely it is that the slope is actually much larger or smaller than the simple regression fit, I promise that in the leftmost chord of the data we have almost no friggin’ idea what the linear temperature trend really was beyond “probably positive” (that is, maybe it is $0.16 \pm 0.12$ or something like that), that in the second chord we can probably say that it — again guestimating since I don’t have the data and cannot do a better analysis — $0.15 \pm 0.05$, and that only the last push is known reasonably accurately at $0.16 \pm 0.02$. In other words, it could have warmed faster in either the mid-1800s or the early 1900s than it warmed in the late 1900s. It isn’t even improbable. It is even odds that one or the other of these warming trends was larger than the best fit slope, and 25% of the time they would both be larger, and larger by just a bit is enough to confound the assertion that the more strongly constrained third linear trend is the largest. So much for “unprecedented warming” or the necessity for CO_2 forcing as an explanatory mechanism for warming at the rates in Jones’ figure above. The second problem is that we are left with a profound paradox in all discussions of global average surface temperature. Even NASA GISS acknowledges that we have very little idea what it is. It is often given as 288 K, but this obscures the simple fact that no two models for computing it, working from the same or largely overlapping surface data, get numbers that are within half a degree of one another! Or even a degree. The most honest way to present the number might be $288 \pm 1$K. Or $287 \pm 1$K. It’s hard to say, and depends on who is doing the averaging and with what model for kriging, infilling, homogenizing, and dealing with error. It is also impossible to generate a proper estimate for the probable error including all sources, because what one can estimate is only the range of values produced by the models, which is (again) a strict lower bound in any honest error estimate. Since the models tend to share data sources they are hardly independent, and yet there is a spread of more than a degree in their average. Statistics 101 — the variance of sample means drawn from overlapping populations is too small because the number of independent and identically distributed samples is smaller than the number of samples that produced the variance. To fix this is enormously difficult and requires some pretty serious statistical mojo. 
Indeed, it would probably be simplest to fix via Monte Carlo and just plain sampling — generate a simulated smooth temperature field with the “approximately correct” surface temperature moments, pull samples at the overlapping locations and feed them into the different models, determining both the distribution of the absolute error of the models (per model) given the data compared to the precisely known average temperature, as well how that variance compares to the multimodel variance with overlapping samples. This might then provide some sort of quantitative basis for determining the actual probable absolute global average surface temperature — note well not the anomaly — as well as a probable error estimate that has a quantitative basis (subject to various assumptions, but given time we could even investigate the effect of varying those assumptions). In the meantime, we persist in the belief that we can measure and compute the anomaly in global average surface temperature almost an order of magnitude more precisely than we can compute the average surface temperature itself. In most systems, susceptibilities (effectively, the anomaly) are second moments and their error estimates are fourth moments of the underlying distribution — the variance of the variance, so to speak. We generally know the higher order cumulants of a distribution less accurately than we know the mean/first order. This isn’t always true, of course — sometimes what we measure is a deviation, not the absolute — but thermometers don’t measure deviations from an unknown or poorly known mean, they measure temperature, the absolute quantity in question. The argument is that if there is a systematic bias in the trend of each contributing thermometer (say, we have 100 thermometers at different places, all perfectly accurate, and if 40 of them show warming of 1 degree, 20 of them show no change, and 30 of them show cooling of 1 degree, then we can conclude that there has been a statistically significant systematic trend in the anomaly of (40 – 30 =10)/100 = 0.1 C even if, when we compute the actual statistical mean and standard error of the temperatures measured by those thermometers over whatever spatial region they are sampling, the error is 1C! This isn’t impossible, of course. We can certainly imagine systems where we could reliably measure the anomaly accurately but the mean inaccurately, the simplest one being that all of the thermometers themselves were perfectly accurate, but that a demented child scribed the scales on the side so that the supposed “zero” of the all of the thermometers was randomly distributed on some wide range. Each thermometer would then precisely record deltas/displacements, but the origin of their coordinates would be a random variable. But is that a reasonable assumption for the thermometric record? It seems equally plausible (for example) that the glass bore of (say) a mercury thermometer and the actual volume of the mercury in the thermometer are random variables, but that the person who zero’d the thermometer scale was an obsessive compulsive. In that case the absolute measurement of the thermometer might be very accurate, at least when it was made at temperatures close to the reference temperature used to set the scale, but the anomaly might have a bias that might, or might not, be randomly distributed. This problem has hardly gone away now. 
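To make the Monte Carlo suggestion above concrete, here is a toy sketch (emphatically not what any temperature index actually does): build a known smooth anomaly field, sample it at a handful of randomly placed "stations", and watch how the spread of the estimated global mean shrinks as the station count grows. Field shape, noise level, and station counts are all invented for illustration.

```python
# Toy Monte Carlo: known smooth "true" anomaly field, sparse noisy point samples,
# spread of the estimated global mean vs. number of stations. Illustrative only.
import numpy as np

rng = np.random.default_rng(42)

def true_field(lat, lon):
    """A smooth, known anomaly field (deg C) on the sphere; arbitrary choice."""
    return 0.3 * np.sin(np.radians(lat)) + 0.2 * np.cos(2 * np.radians(lon))

# "True" area-weighted global mean on a fine grid, cos(lat) weights.
lat_g, lon_g = np.meshgrid(np.linspace(-89.5, 89.5, 180),
                           np.linspace(0, 359, 360), indexing="ij")
w = np.cos(np.radians(lat_g))
true_mean = np.sum(true_field(lat_g, lon_g) * w) / np.sum(w)

def sampled_mean(n_stations, obs_noise=0.2):
    """Unweighted mean of noisy point samples at random station locations."""
    lat = np.degrees(np.arcsin(rng.uniform(-1, 1, n_stations)))  # uniform on sphere
    lon = rng.uniform(0, 360, n_stations)
    obs = true_field(lat, lon) + rng.normal(0, obs_noise, n_stations)
    return obs.mean()

for n in (20, 200, 2000):
    errs = [sampled_mean(n) - true_mean for _ in range(500)]
    print(f"{n:5d} stations: spread of estimated global mean ~ {np.std(errs):.3f} C")
```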
Anthony has actually tested supposed accurate electronic thermometers in personal weather station kits obtained (for example) from China and found that they experience substantial absolute error and time dependent drift. Now and in the past, even a thermometer that was precisely made, and carefully zero’d and scaled with respect to multiple reference temperatures so that it worked perfectly the first day it was hung up in a weather station could easily experience a systematic, and biased, drift over a decade or five of usage. Spring thermometers gradually anneal and become less springy. Liquid thermometers outgas and deform. We assume that things remain the same over long times because we can’t see them moving, but they don’t. Throw in biases recorded in weather station metadata, throw in all of the occult, slow biases not recoverable from any sort of metadata — a tree line that slowly grows over time, the UHI effect as a station that was initially rural finds itself in the middle of a prosperous concrete jungle, throw in unrecorded and variable idiosyncracies of the humans who performed the measurements as they changed over the decades, and you have substantial variance not only in the absolute temperatures any given thermometer might measure, but in the trend, in the anomaly. And some of those biases might well be slow, systematic, unrecorded and virtually impossible to retroactively correct for. Again, we could probably learn quite a bit from simulations of the models used to compute the anomalies, by simply generating an (ensemble of) simulated smooth temperatures on the surface of a sphere with a given, known time variation that has or doesn’t have any given trend. Sample it, and add noise to the samples, both white unbiased noise and trended noise that might (for example) model the UHI on urban stations, or delta correlated shifts that might occur when station personnel changes, or trended noise that might represent various distributions of slow non-UHI environmental shifts — conversion of surrounding countryside from forest to pasture, the building of impoundments that transform small rivers into vast lakes (this has happened, for example, in the immediate vicinity of RDU airport, the source of our “official” temperature — Falls Lake and Lake Jordan are between them tens of thousands of acres and flank the airport, adding yet another confounding factor between comparing temperatures before the early 80’s to temperatures afterwards at this site). Where is that accounted for in the site metadata? Who even knows what sort of effect turning a mix of forest and human occupied farmland into 60 or 70 thousand acres’ worth reservoirs might have on the surrounding temperatures and “climate”, at the same time that the weather station itself went from being a tiny regional airport to being a hub for a large commercial carrier, at the same time the surrounding farmland turned into one giant suburban and urban mega-community? We don’t know, of course — and not even BEST can account for or correct for this — but we might, perhaps, simulate some range of the possibilities and see what they do not to the anomaly itself — per model, it is what it is — but to the best estimate for the uncertainty in the anomaly when any give model ignores a source of potential systematic bias. As (apparently) HADCRUT4 does when they do not correct for UHI at all, however eager they are to cool the past or warm the present in other ways. 
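Along the same lines, a toy simulation of the slow, unrecorded station drift described above (a UHI-like trend at a fraction of stations) shows how easily such a bias leaks into the recovered trend. Every number here is invented purely for illustration; it is not a reconstruction of any real index.

```python
# Toy illustration of a slow, unrecorded station bias: give a fraction of otherwise
# perfect stations a spurious linear drift and see how it shifts the recovered trend.
import numpy as np

rng = np.random.default_rng(7)
years = np.arange(1950, 2014)
true_trend = 0.012                 # deg C per year, the "real" signal in this toy world
n_stations, biased_frac = 500, 0.3
drift = 0.01                       # deg C per year of spurious warming at biased stations

true_anomaly = true_trend * (years - years[0])
station_obs = true_anomaly + rng.normal(0, 0.3, (n_stations, years.size))

# Add an unrecorded linear drift to a random 30% of stations.
biased = rng.random(n_stations) < biased_frac
station_obs[biased] += drift * (years - years[0])

recovered = station_obs.mean(axis=0)
slope = np.polyfit(years, recovered, 1)[0]
print(f"true trend      : {10 * true_trend:.3f} C/decade")
print(f"recovered trend : {10 * slope:.3f} C/decade  (bias ~ {10 * biased_frac * drift:.3f} C/decade)")
```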
rgb eyesonu +10 I wish that there was a way to see how many times your comment has been read and/or linked to. Mark This kind of graph not only shows no relation to CO2 (human or "natural") it also shows that the main driver(s) must be something cyclic. Yet it was only very recently that the PDO and AMO were identified and we don't appear to fully understand either. I had a guest post at Judith's blog some months ago in which I tried to untangle some of the weird concepts used by the IPCC and friends and show how they lead to absurd consequences (it gets more certain the longer the divergence between data and theory lasts). http://judithcurry.com/2014/01/29/the-big-question/ joelobryan When the assumptions, taken as true, that the GCMs rest on become increasingly "wrong" (sign and magnitude), their outputs become increasingly absurd and result in ever more bizarre claims. We see the results of that with each new paper trying to explain the pause. Robert of Ottawa Doesn't this charade become fraud at some point? joelobryan Robert, In the 90s and for the 2000-2006 period, much of it likely looked quite on track. The big cracks appeared with the Climategate fraud exposure in 2009. But now in 2014.5 the GCM temp divergence with reality is becoming untenable, hence all the alternative alibis are coming out every week now. Most certainly there were a few bad apples in 1998 & forward who used chicanery, data manipulations and suppression of data from rivals that were contrary to their data and results in the past temp records, results they would need to build a case against man's continued carbon-intensive energy sources. Those individuals should be banished by science journal editors for life. In the US, democrats began to see dancing truckloads of carbon tax dollars to spend. Enviros saw a way to de-industrialize and shut down Big Oil, their arch enemy. But I get the sense that guys like Trenberth really do want to be true to science, but with so much reputation riding on AGW it's a hard thing to finally let go of a dying baby you birthed and nurtured in good faith. But the time to let CAGW go is past; now they are just desperately clinging to AGW starting back up in 20 or so years. ferd berple how is it possible that humans have contributed 110% of the warming (best guess)? are they saying that otherwise there would have been cooling? why did temps rise from 1910-1940 almost identically to 1970-2000? It wasn't CO2, so what was it? Why did temps pause from 1940-1970? How is the current pause any different? If the pause lasts from 2000-2030, how is this any different than the pause from 1940-1970? Why did the [climate] models not see the cyclical pattern, that [your] average 6th graders would have caught? Do they not know that nature is cyclical, not linear? About the 110%, see my discussion of the "net warming model", in the link above. lee They incorporate the cooling from aerosols. Having crossed swords with Schmidt some years ago on unRealClimate, I came to the conclusion not to believe anything he says. I've never been back there since, as it is full of pseudo-science presented by pseudo-scientists. Clyde Scientists have a different way of talking than the public. Well at least in my experience. Words don't always mean the same thing to them as the general public. I don't know what "Conspire" means in the context of what Gavin Schmidt is saying below. I hope it doesn't mean they got together in a sinister way to plan what they did.
——————————– Climate models projected stronger warming over the past 15 years than has been seen in observations. Conspiring factors of errors in volcanic and solar inputs, representations of aerosols, and El Niño evolution, may explain most of the discrepancy. http://www.nature.com/ngeo/journal/v7/n3/full/ngeo2105.html Volcanoes, the sun, aerosols, & El Nino conspired to make the models wrong. HT/ Maksimovich From Curry’s blog. EternalOptimist Gavin is English, I think. In England, the word Conspire means ‘work together’ or ‘work in tandem’. it doesnt necessarily have a sinister meaning Cheshirered Partly right, BUT the whole point of ‘conspire’ is that it is a plan, and usually for nefarious means. See below – it’s all bad, dude! If Gavin is or was ‘conspiring’, be prepared for nonsense. ***************************** “to agree together, especially secretly, to do something wrong, evil, or illegal: “They conspired to kill the king.” “to act or work together toward the same result or goal”. verb (used with object), conspired, conspiring. “to plot” (something wrong, evil, or illegal). Tonyb I am English and in the context it is used I don’t see anything sinister. He surely means merely to work together. Tonyb J I think the correct word for that would be “collaborate” To labor or work together. Con-spire is the breath together, like telling secrets… Mr Green Genes I’m 57 years old and have been English all my life and I have to disagree with that. A closer meaning for conspire than ‘work together’ is ‘plot together’. It definitely does have mildly sinister connotations. If Gavin meant ‘work together’ or ‘work in tandem’, imo he would have used the word collaborate. Leo Smith “late Middle English: from Old French conspirer, from Latin conspirare ‘agree, plot’, from con- ‘together with’ + spirare ‘breathe’.” Or more to the point whispering together. Definite hush hush. If we are talking about plotting in the open, that’s collaboration or co-operation.. Tom T Gavin is anthropomorphizing the natural forces that made him look stupid. Ok, so “volcanoes” contributed to the last 17 years of steady climate temperatures – DESPITE ever-higher CO2 levels in the atmosphere. If you believe that theory, show us the measured real, demonstrated decrease in atmospheric clarity – which has remained absolutely steady the past 21 years! Well, for two months in 2009 clarity did drop. But neither temperatures nor ice coverage changed when the atmospheric clarity DID drop that one time! The excuse is proved wrong. Again. Resourceguy Well said Bob, as usual Another Gareth If the ‘best guess’ is 110% of warming is attributable to man are they saying it would have got colder without our efforts? By deduction they must be confident they have the natural variability component understood which I sincerely doubt. BallBounces “The climate models are not only out of phase with the long-term data, they are out of touch with reality.” But, importantly, they are not out of touch with funding. bingo. Like Ok. I feel entirely dumb. What is a 110% probability??? What is 110% of the entirety of something??? What is, say, being 110% responsible for the making of 110% of a car? I told you I feel dumb. Thanks for asking that, exactly what I have been wondering. Is this a statistical term? Or just another liberty of Climatology ™ IPCC Team.? When I was young, the probability of event X was the ratio between all outcomes resulting in X and all the possible outcomes. It’s probability theory, the most basic-basic-basic of it. 
You cant have a total of outcomes (of whatever) with 10% more outcomes than the total possible outcomes. So, I’m dumbfounded. It could be colloquial usage, as in “I’m 120% sure that…” But I guess colloquial is inappropriate in the context of the discussion. And there my reading was completely blocked. I think the 110% comes in from extrapolation of Mann’s Hockey stick they all are in love with, which up to 1900 showed gradual cooling. If you assume that the cooling would have continued without Man’s input, then the observed warming is actually less since some of Man’s warming was negated by the natural cooling that they claim should have been occurring. jhborn I think he’s saying that the mode of the probability is that natural variation would have resulted in cooling, but man’s interference caused warming equal to 110% of the warming observed. I.e., if it warmed one degree, it would have cooled a tenth of a degree without man. The area under the curve is only 100%, though. That is, the percentage he’s talking about is a percentage of warming, not a probability. Josualdo – Gavin means that the amount of calculated CO2 warming is 110% of the measured. But his wording “fraction of warming attributable” shows that he does not understand climate. Climate, as clearly stated by the IPCC, is a complex, coupled non-linear system. That means that there are many factors involved, they affect each other, and the results are chaotic (things sometimes happen in certain conditions, and sometimes don’t). In the real world, the amount attributable to human activity is the difference between what it would have been without the human activity and what it actually was. [The reason that it’s this way round is that the non-human stuff was always going to be there. It’s the human stuff that is different. If things end up exactly where they were going to be anyway, for example, then the human impact is zero, regardless of any calculations of what the human stuff does.] If we look at the last, say 200 years, or the last 10,000 years, then it is pretty clear that Earth would likely have warmed up a bit (we can’t be sure, because we reallydon’t know how Earth’s climate behaves). That automatically puts the fraction of warming atttributable to human activity at less than 100%. So how does Gavin come up with 110%? He has used linear thinking – a big no-no in a non-linear system – he has compared his calculated human effect with the measured temperature. Josualdo Thanks, Mike, I think I got the gist of the thing. So, no probabbilties here. Having been interested in chaos theory, fractals and all that formerly fancy stuff, and knowing — well at least that was the meme — that butterfly wings might affect the weather somewhere else, I find all this certainty very strange. There’s an anecdote that almost would apply, but I guess it would not surviving the translation (and my telling it.) Josualdo * probabilities… Rud Istvan The interesting part, Bob, is that Gavin felt a reply of this sort was needed at all. I suspect that between the pause falsifying the models by the CAGW gangs own previously published standards (btw your tuning argument has been made by many including Akasofu), and all the stuff now coming out about inexplicable and in an increasing number of cases inexcusable homogenization (BOM Rutherglen in Australia, BEST station 166900) that reality is really starting to bite hard. 
Peter Miller Perhaps we could ask Gavin to pop into a parallel universe, where man died out on Earth a few tens of thousands of years ago. He could then take all the measurements needed to determine exactly how much of the past 70 years’ mild warming is due to the activities of man. Even in the wacky world of ‘climate science’, this is unlikely to happen anytime soon. Bottom line: None of us have a clue how much of the recent mild warming has been due to the activities of man. Those who are worried about their future salary cheques argue, “A lot.” While those who are worried about beggaring the world economy for no apparent reason, argue, “Not a lot.” Justthinkin When are going to throw these frauds in jail?Oh wait.If that happened,the psychopathic politicians might take a hit.Nothing to see here.Carry on. joelobryan Narcissism is a sociopathology. Ralph Kramden I’m having trouble interpreting figure 1. How can the fraction of global warming caused by anthropogenic causes be greater than 1. Kelvin Vaughan So all of the warming is made by man and then another 10% is made by man???????????????? there is no man made warming there never was and there probably never will be DGH If the earth might have otherwise cooled by, let’s say, 10% over this period then 110% of the observed warming would be attributable to our activities. Kelvin Vaughan No then it would be 100%. You cant have more than 100 per 100. 100 people have brown eyes 110 of them are bald. EternalOptimist Why is the keeper of the record arguing a position in the first place ? Peter Miller I guess that begs the question of whether or not GISS’ rate of temperature ‘homogenisation’, designed to cool the recent past, will accelerate or not under Gavin’s stewardship. Under Hansen, better known for his antics rather than his science, ‘homogenisation’ ran wild at GISS. AlecM On average there is net zero CO2-AGW; the atmosphere self-adapts. The warming was from other causes. hint: Gleissberg 88 year cycle, 44 years of warming followed by 44 years of cooling Gavin is an intelligent scientist, due to the ever longer ‘pause’, he must see that writing is on the wall, but his current position doesn’t offer him an acceptable alternative. joelobryan Agree. But if you have conscience and scientific integrity, when do you get to point you can’t sleep at night from the lies? Tom T Gavin is not even a scientists. He is a poor mathematician who plays office politics well. Even using the most ddjusted data set of all and assuming ALL of the warming in the 1900s was man made and a 110% warming rate… they still CANT anywhere close to 2C temperature rise from 2000-2100. There is NO CAGW. At 013C per decade, that is 1.3C per century. Since the 110% is getting people confused, let me explain a little bit. The 110% is the theoretical AGW (the warming that–according to the IPCC and Gavin–would have occurred if there were no natural cooling influence) divided by the real warming that was actually measured. I discussed this in the guest post I mentioned earlier. http://judithcurry.com/2014/01/29/the-big-question/ It’s not just counterintuitive, it also has some insane consequences. The longer the pause lasts, the more certain the AGW dominance becomes. The catch is that it’s an ever slower rate of warming, and therefore you have to expect slower waming in the future anyway. It’s pretty misleading, but I think Gavin is so steeped in this mode of thinking, he actually believes it’s the right way to calculate. Dan Remember the research is already years old. 
I am sure if the numbers are run today the most likely percent of AGW would now be 140%, and extremely likely more than 75% is AGW. Like you say, the longer the pause the more certain the temperature rise is AGW! If the temperature falls the next 5 years, look out, certainty will sky rocket even higher that CAGW is coming! So that 110%, -150% or 250% “probabilities” are actually something like slope, or differences, which are added and subtracted? In that case they aren’t probabilities, of course. If I got your interesting post. That’s correct. Those aren’t probabilities. The probability is the notorious 95% and the statistical method practically guarantees that it will increase. They’re introducing too many epicycles by now. joelobryan Folks here are confusing effect with probability. Probability of an occurrence can never exceed 1.0. Effects can cancel out other effects, and thus individually contribute more than 100% of an observed integrated output such as global temp reponse. NikFromNYC The rate remains defiantly linear in nearly all of the oldest real thermometer records, despite both urban heating effects and the overall greenhouse effect plus or minus feedbacks: http://s6.postimg.org/uv8srv94h/id_AOo_E.gif The same exact thing is seen in nearly every long running tide gaude record. So indeed, 110% needs be invoked since an unprecedented cooling spell has to be have been averted for a mere trend continuation to be blamed on emissions. It’s amazing how the ghost of debunked hockey sticks live on as a background assumption to conceal these old records that debunk anthropogenic claims quite strongly as far as traditional scientific rigor is concerned. so how do you account for the change in recording from mercury thermometers (not re-calibrated before 1950) and human observation (usually 4 times per day) with thermo couples and measuring records of every second a day? NikFromNYC August 28, 2014 at 12:37 pm Do you data on the trend lines, assuming a linear function Scrub the above … Do you have data on the trend lines, assuming a linear function for the full record and ,say, for 1900 onward? William McClenney Yeah, been watching and commenting at JC’s site on this. Fascinating, as is Bob’s response here. Once again, my apologies if anyone is offended by this, but this remains in my mind like “two fleas arguing over who owns the dog they are riding on” (Crocodile Dundee). It begs sanity, IMHO, that we are even having this discussion at all. There are only 2 possibilities here. That’s it! 1 The Holocene would just have continued blithely along, presumably forever were it not for Anthropogenic disturbances, AGW etc. 2. The AGW hypothesis is correct which makes Ruddiman’s Early Anthropogenic Hypothesis also correct. The Holocene may well be over and we are living in the Anthropocene now. Interglacial conditions extended by AGW. On possibility 1, here is my detailed look at the Holocene conundrum http://wattsupwiththat.com/2012/03/16/the-end-holocene-or-how-to-make-out-like-a-madoff-climate-change-insurer/ On possibility 2. we find ourselves faced with perhaps ending the Anthropocene by stripping the CO2/GHG “climate security blanket” from the atmosphere. If the AGW hypothesis is correct, that would leave glacial inception as the only other climate state, wouldn’t it? The Pretzel Logic here is simply gobsmacking!! You cannot be right about the “Anthropocene”, or ending it, without getting a hated tipping point, but of the opposite sign to the one expected. 
If CO2/GHGs are holding us in interglacial conditions, wouldn’t removing the excess tip us into the next glacial inception? Getting deep into the Judith/Gavin weeds is, of course, a very interesting discussion. “I suggest a new strategy R2, let the Wookie win!”, C3PO. Because the real fun begins if cede Gavin is right, because the choice is really about extending the Holocene, or removing the “climate security blanket” so we can get on with our overdue glacial inception. Muller and Pross (2007) provide one of the more poignant quotes in all of climate science: “The possible explanation as to why we are still in an interglacial relates to the early anthropogenic hypothesis of Ruddiman (2003, 2005). According to that hypothesis, the anomalous increase of CO2 and CH4 concentrations in the atmosphere as observed in mid- to late Holocene ice-cores results from anthropogenic deforestation and rice irrigation, which started in the early Neolithic at 8000 and 5000 yr BP, respectively. Ruddiman proposes that these early human greenhouse gas emissions prevented the inception of an overdue glacial that otherwise would have already started.” http://folk.uib.no/abo007/share/papers/eemian_and_lgi/mueller_pross07.qsr.pdf William McClenney You are correct. I’m not following you at all. William McClenney OK, so let me take a stab at responding. “you did not get it at all” provides only an ad hominem. The comment link, on the other hand provides what I think is enough information to suggest that your point is there is no anthropogenic influence. Stepping out on that limb is putting forth a hypothesis. I do not disagree with your hypothesis. However, one of the key steps one takes as a scientist when thinking about proposing their hypothesis is to adopt the opposing position(s) as a means of testing the hypothesis. Standard science. So adopting the opposing viewpoint, standard in science, is that there is a decisive climate impact from CO2/GHGs. And if that was correct, then we are living in the Anthropocene extension of the Holocene interglacial. So, with our standard science adopted opposite viewpoint, we now come to what do if we are right? Strip CO2/GHGs from the Anthropocene atmosphere, and where does THAT leave us? The only other state would be getting on with that overdue glacial inception. I am in no way saying you are wrong. I am saying what if you are wrong and the AGW crowd is right? Would not being right about AGW, and quelling its atmospheric presence, actually be the wrong thing to do? milodonharlani Except under Ruddiman, the Holocene would scarcely have been an interglacial at all. The Eemian lasted 16,000 years & the MIS 11 interglacial tens of thousands. Those of MIS 7 & 9 were longer than the Holocene would have been under Ruddiman’s hypothesis. William McClenney Milodon, I would suggest that instead of just taking a higher-end estimate for the length of the Eemian, which of course is a length quoted by several authors, it is by no means the consensus on the length of the Eemian. There probably isn’t one, but the range would seem to be somewhere between 10-13kyrs with 16 being an outlier, but not the furthest outlier. I do not have the time to dig all of this up anytime soon, but there is still disagreement as to whether Termination II was a single step, or a two-step one like Termination I. From memory, it seems like evidence for a 2-step deglaciation into the Eemian seems more likely as higher resolution studies pile-up. 
From memory again, the 135kyr start of the Eemian tends to be associated with the single warming camp. The 2-step camp, from memory counts the period from 135kyrs to 125kyrs as consisting of two warming events with a duration for both similar to the last deglaciation. ~115kyrs ago is what I remember as being one of the more frequent conclusions as to when the Eemian ran down. So something on the order of 10-20kyrs, depending on who you quote and depending on whether the 10kyr deglaciation interval is included in the estimate. I took a quick look in my Eemian folder and was rewarded with this 2008 paper http://journals.co-action.net/index.php/polar/article/download/6172/6851 Have a look at Figure 5 and you will catch my drift. This is not about tit for tat, because even on things which have happened, the science is not particularly well-settled. Which makes consideration of the science being settled on something which has not happened yet a bit unsettling…. 🙂 William McClenney My bad! I meant Figure 6 (dang keyboard) gaelansclark That is a fascinating argument! William McClenney It is, isn’t it? And it took no time at all to realize I was decidedly not the only one who had come of such an argument. This simply cannot be had both ways. AGW either can (and may already have) extended the Holocene, or it cannot. That’s pretty much it. The most thorough analysis is still Tzedakis 2010 landmark paper here http://www.clim-past.net/6/131/2010/cp-6-131-2010.pdf JimS If anthropogenic effect in global warming in the modern times is more than 1% in total, I would be impressed. Mac the Knife How could any blog generate +700 argumentative comments to a article on the 97% consensus, with 110% attribution to humans, ‘settled science’ of man made global warming??? It seems highly improbable, unless the ‘science’ is ill supported. And the proponent of the ‘110% attribution’ does not respond directly to the blog article on ClimateEtc, choosing to fire his blunderbuss from behind the self censured revetments of RealClimate, ala Kim Jong Un? (There is a bit of a resemblance….) Settled science doesn’t draw such spirited discussion. Unsettled science does, as does unsupported conjecture or willful deceit. Rud Istvan Outstanding observation. That Gavin felt compelled to rebut Judith is the big news. Downright unsettling… William McClenney Struth. You only have to take a sample of weather stations to see what’s happening http://blogs.24.com/henryp/2013/02/21/henrys-pool-tables-on-global-warmingcooling/ WRONG the models are not tuned to the period this ONE PAPER reports for ONE model. Even here you get it wrong “Formulating and prioritizing our goals is challenging. To us, a global mean temperature in close absolute agreement with observations is of highest priority because it sets the stage for temperature-dependent processes to act. For this, we target the 1850-1880 observed global mean temperature of about 13.7◦C [Brohan et al., 2006]. mouruanh Finally. But you’re wrong too. One model, one paper. And you left out the most important part. Arguably, the most basic physical property that we expect global climate models to predict is how the global mean surface air temperature varies naturally, and responds to changes in atmospheric composition and solar insolation. We usually focus on temperature anomalies, rather than the absolute temperature that the models produce, and for many purposes this is sufficient. 
Figure 1 instead shows the absolute temperature evolution from 1850 till present in realizations of the coupled climate models obtained from the CMIP3 and CMIP5 multimodel datasets. There is considerable coherence between the model realizations and the observations; models are generally able to reproduce the observed 20th century warming of about 0.7 K, and details such as the years of cooling following the volcanic eruptions. Yet, the span between the coldest and the warmest model is almost 3 K, distributed equally far above and below the best observational estimates, while the majority of models are cold-biased. Although the inter-model span is only one percent relative to absolute zero, that argument fails to be reassuring. Relative to the 20th century warming the span is a factor four larger, while it is about the same as our best estimate of the climate response to a doubling of CO2, and about half the difference between the last glacial maximum and present. http://curryja.files.wordpress.com/2013/10/figure.jpg To parameterized processes that are non-linearly dependent on the absolute temperature it is a prerequisite that they be exposed to realistic temperatures for them to act as intended. Prime examples are processes involving phase transitions of water: Evaporation and precipitation depend non-linearly on temperature through the Clausius-Clapeyron relation, while snow, sea-ice, tundra and glacier melt are critical to freezing temperatures in certain regions. The models in CMIP3 were frequently criticized for not being able to capture the timing of the observed rapid Arctic sea-ice decline. While unlikely the only reason, provided that sea ice melt occurs at a specific absolute temperature, this model ensemble behavior seems not too surprising when the majority of models do start out too cold. In addition to targeting a TOA radiation balance and a global mean temperature, model tuning might strive to address additional objectives, such as a good representation of the atmospheric circulation, tropical variability or sea-ice seasonality. But in all these cases it is usually to be expected that improved performance arises not because uncertain or non-observable parameters match their intrinsic value – although this would clearly be desirable – rather that compensation among model errors is occurring. This raises the question as to whether tuning a model influences model-behavior, and places the burden on the model developers to articulate their tuning goals, as including quantities in model evaluation that were targeted by tuning is of little value. Evaluating models based on their ability to represent the TOA radiation balance usually reflects how closely the models were tuned to that particular target, rather than the models intrinsic qualities. These issues motivate our present contribution where we both document and reflect on the model tuning that accompanied the preparation of a new version of our model system for participation in CMIP5. As decisions were made, often in the interest of expediency, a nagging question remained unanswered: To what extent did our results depend on the decisions we had just made? Do you know the answer? It is mainly Bob’s argument that models are tuned to the period of the late 20th century, so it’s up to him to respond to your point specifically. So Mosher why the hell have the models gone so wrong and off Target ? ;>( James Macdonald I haven’t heard one word about the proper “scientific process”. 
Models are designed and tuned to a particular set of past data using certain variables (the dependent sample). In this case the main variable is CO2 plus some water vapor feedbacks. To test the validity of the model, it is then applied to a new (independent sample). If the projections don’t fit the actual data, there is something wrong with the basic assumptions. This is clearly the case with climate models, which have thus been invalidated. G. E. Pease dbstealey says: August 28, 2014 at 11:27 am “…Global warming since the LIA is composed of natural step changes. Those steps are exactly the same — whether CO2 was low, or high. Therefore, there is no “fingerprint of AGW”. It is clearly shown here in über-Warmist Dr. Phil Jones’ chart:” ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ No, the steps are not exactly the same. There clearly was ~0.4 deg C less cooling from 1950 to 1975 than from 1885 to 1910. Also, the warming cycles from 1910-1940 and 1975-2009 are respectively 10 years and 14 years longer than the 20 year warming cycle from 1860-1880. These trend differences could possibly be considered fingerprints of Anthropogenic Global Warming (AGW) if we didn’t know that there was comparable warming to modern warming in the Roman Warm Period and the Medieval Warm Period. The warming trends back then were almost certainly not fingerprints of AGW. Don B There is an interesting piece by Andy Revkin at the NY Times (really!) on the connections between the oceans and atmospheric temperatures. For me, the take-home quote from a climate scientist was “The underlying anthropogenic warming trend, even with the zero rate of warming during the current hiatus, is 0.08 C per decade.* [That’s 0.08 degrees Celsius, or 0.144 degrees Fahrenheit.] However, the flip side of this is that the anthropogenically forced trend is also 0.08 C per decade during the last two decades of the twentieth century when we backed out the positive contribution from the cycle….” http://dotearth.blogs.nytimes.com/2014/08/26/a-closer-look-at-turbulent-oceans-and-greenhouse-heating/?_php=true&_type=blogs&smid=tw-share&_r=0 Warming of 0.8 C per century is not frightening. joelobryan CAGW is in a dying screaming death spiral. Typhoon The comment by Carl Wunsch is gets to the heart of the matter. For example, 0.08C +/- 0.1C, is consistent with the null hhypothesis of zero.. Wouldn’t 1945 be a better starting point consider it is when man-made CO2 really started up up and away and became the blade of a hockey stick. http://sunshinehours.files.wordpress.com/2014/04/cdiac-co2.jpg milodonharlani Yes. But the warming since c. 1945 is no different from the warming in the early 20th century, & much less impressive than that in the early 18th century, among prior natural warming intervals. joelobryan I always figure it should be 1935. By 1938 industrial factories in Europe, russia, and Japan were in high gear, burning coal and oil as fast as they could dif it out of the ground. The US joined in that industrial fray in 1940. By 1943 US industrial output and thus energy use was up almost 300%over 1939. There was a big bad recession in 1946-1947 as factories retooled. You are assuming that four valleys of extreme industrial concentration (Germany’s Ruhr Valley, Pittsburgh’s PA, UK’s London (Thames and surrounds) and California’s LA basin) are typical of the rest of the world. Those four WERE extremely polluted, but are a very, very small part of the whole world. 
And, even around Pittsburgh, once you were a few miles from the steel mills and glass factories, the air cleaned up remarkably. Further, three of the four cleaned up between 1945 and 1950. (LA got worse until the early 70’s). Pittsburgh was sandblasting downtown to clean buildings as early as 1947. Note that we are globally cooling/ http://blogs.24.com/henryp/files/2013/02/henryspooltableNEWc.pdf that might become a challenge? William McClenney BarryW Dr. Curry made the point, and it’s been mentioned many times, if CO2’s affect is only noticeable post 1950, then where id the 1910-1940 rise come from? None of these so called Climate scientists have explained how one is natural and one is man made. Only that in the second that is greater than the first can logically be attributed to man. You must be able to show me a certificate of re-calibrated thermometer before 1945 then. pdtillman @Bob Tisdale’s “The climate models are not only out of phase with the long-term data, they are out of touch with reality.” +10! “It is difficult to get a man to understand something when his job depends on not understanding it.” — Upton Sinclair G.E. Pease, The steps are exactly the same, when considering even microscopic error bars. Furthermore, there is no empirical evidence showing any ‘fingerprint of AGW’. None at all. There are no measurements of a fraction of a degree warming that could be directly attributable to human emissions. Thus, the default conclusion must be that all global warming is natural, unless shown to be otherwise. To show that would require verifiable measurements. But there are no such measurements. It is like someone doing an overlay of CO2 and temperature, and saying, “Look! Rising CO2 causes rising temperature!” They do that all the time. But a temporary, coincidental correlation proves nothing. And that T/CO2 relationship broke down, both before and after a short periord from about 1980 to 1997. Global temperature has been rising at the same rate, as NikFrom NYC shows above, for hundreds of years. There is no evidence at all that human CO2 emissions cause any warming. Any such AGW is mere speculation, and it would anyway be so minor that it can be completely disregarded. The onus is on the alarmist crowd to support their CAGW conjecture. They have failed miserably, so now their tactic is to make baseless assertions as if they were fact. They aren’t. And without real world measurements, their conjecture fails. david dohbro I both agree and disagree with both Bob and Judith/Gavin, but on several different issues: First: Why is the year 1950 chosen as when AGW supposedly started? That just makes no sense. Please look at the data: Take HadCRUT4 for example. It clearly shows several periods of increasing and decreasing temperatures, each of about 30-34yrs long, making for a 60+ year cycle. This can be easily and nicely shown with a MACD, which I’ve shown last year here: http://wattsupwiththat.com/2013/10/01/if-climate-data-were-a-stock-now-would-be-the-time-to-sell/ Clearly the year 1950 is at the start of a 30+ year cooling trend that started in 1945 and ended in 1976. In other words: temperatures peaked in 1945 and bottomed in 1976. How then can there have been (AG) warming since 1950? That makes no sense. Cycle analyses will, thus, tell you when and where temperature trends change. One has to start “counting” from those trend changes. Otherwise you are mixing cyclical warming and cooling periods. Second: This also means that 1976 is a more appropriate year to look for any AGW signal. 
However, as I’ve shown in my MACD article, the increase in GSTA during the latest warming cycle, 1976-2007, is 0.019°C/yr, whereas that of the previous 30 yr warming period: 1911-1945 was 0.014°C/yr. Hence, assuming all else equal (i.e. nature… very dangerous to do that in science btw), then the last period had a warming rate that was 0.005°C/yr (36%) higher than the that of the previous warming period. So the maximum possible human influence is 36% IMHO. Note that a) the MACD analyses finds the same years and warming rates as Bob presented and b) that since 2007 the temporal trend in GSTA is effectively 0. 1950 makes “sense” because it makes the average rate of warming in the time frame smaller, so a larger part of the rate of warming can be attributed to AGW. joelobryan aka cherry picking. Cherry-picking one way or the other. It may have been chosen for whatever reason originally and then found to be “convenient”. Speaking of fruit, it’s also apples and oranges. The time frame is from 1950 to “the present”, which is different for each successive IPCC report. Richard M Exactly correct. Current data is not supporting AGW theory. In addition past climate changes before this idea of AGW came about were many times greater in magnitude then the slight warming which occurred last century. Again the data in this case past data does not support AGW. Yet they insist. For my money I attribute all climate changes due to natural causes and 0% to human activity. I noticed the difference in the starting from the proclaimed “CO2” age of 1950 and most of the graphs only going back to the late 70s. But never thought about it skewing their models as well. Kudos for pointing out the merely obvious to me! Lil Fella from OZ If, as we have been told, that there has been no warming for over 17 years, how can there be an argument on how much warming from 1950 to now can be attributed to man? milodonharlani No warming from 1950 to 1976, warming from 1977 to 1996, then no warming again from 1997 to 2014 & counting. That’s 20 years of warming (with some down years) vs. 44 (inclusive) years of no warming (or cooling), all the while CO2 has been rising monotonously. CACA was born falsified. The probability density function for the fraction of warming attributable to human activity…. What is this? Counting the male-female ratio of angles on the head of a pin? “Twice nothing is still nothing.” – Cyrano Jones Forget the ratio. It is an intractable measure of a religious concept — impossible to test. What is PDF for the absolute warming attributable to the growth in anthropomorphic green house gasses. What is the PDF for natural warming elements? for the past 6 decades, For the past 2 millennia.? Johanus Well said. In mathematics and logic it is easy to define entities that don’t exist (e.g. “Let X be the set of entities that don’t exist”). That is why mathematics usually require existence theorems to prove that any such entities actually exist before trying to characterize and deduce truth from them. Where is the proof of existence for this pdf (without begging the question)? Robertvd Honestly, I am astounded at the utter ignorance of the people involved in climate “science”. I have seen no decent theory backed up by experiment and evidence that CO2 has any net effect on the climate of the planet. In fact, I have seen published charts and graphs that suggest that CO2 has no or nearly no effect at all. 
And yet we have supposedly educated men and women claiming anthropogenic warming to a precise measure as if they knew Mother Nature’s contribution to the whole affair. Unbelievable ignorance, delusion, and arrogance. Of course since the government funded temperature data sets are now so corrupted as to be useless: how can we look for real causes of climate change? I notice that even this site calls the best European blog of last year a dispenser of “way out there theories”. Looks to me like the theories we have now are bunk and we need to be working on something else. To paragraph 1, do not be astounded. Their academic or government careers, grant funding, and personal status all depend on it. To phrase it differently, climate science increasingly resembles the worlds oldest profession. To paragraph 2 part one, that is going to be an Achilles heel. To paragraph 2 part two, truth is not always found in popularity contests despite the supposed ‘wisdom of crowds’. Were it so, then tulip bulbs would be more valuable than gold and present shareholders in the South Seas Company would be richer than Bill Gates. (h/t IIRC Mackey”s famous old book on the madness of crowds.) David Archibald Why would anyone take any notice of Judith Curry? She is not a dispassionate seeker after truth. In this interview she refers to the “Kock-funded climate denial machine”: http://oilprice.com/Interviews/The-Kardashians-and-Climate-Change-Interview-with-Judith-Curry.html Bart It looks like she is merely pointing out the strategy of blaming the Kochs, and that it isn’t working. mouruanh She’s describing ‘the climate science communication paradigm’ and why it fails, not her position in the debate. This strategy hasn’t worked for a lot of reasons. The chief one that concerns me as a scientist is that strident advocacy and alarmism is causing the public to lose trust in scientists. It’s quite clear when you read the full interview. David She is making reference to the paradigm not her position tonyb Tom T You think he doesn’t know that? He is bomb throwing he simply doesn’t care. pete That pdf is the single worst thing i have seen in climate science. There is just no way you can create such an attribution given the unknowns. At least the hockey stick had some basis to it… SAMURAI The most damning aspect of Gavin’s argument that the cherry-picked 1976~2005 warming period is almost entirely attributable to CO2 forcing, is that its warming trend is similar to the warming period to the 1921~1943 warming period (0.14/decade, 0.19c/decade respectively), and the 1921~43 warming trend can’t possibly be attributable to CO2 because even the IPCC admits CO2 levels were too low in the first half of the 20th century to have caused much warming. What these two warming periods do have in common is that the PDO was in its 30-yr warming cycle during both of these warming periods. The PDO entered its 30-yr cool cycle in 2005, and that’s precisely when global temp trends started falling again, despite record amounts of CO2 emissions. Earth’s warming and cooling cycles have followed PDO warming/cooling cycles almost perfectly for the past 164 years. Accordingly, It’s illogical to assume CO2 is the primary driving force behind global warming since 1950, because from 1950~1976 global temps were falling (PDO cool cycle in effect) and when a global warming trend started again in 1976, it coincides when the PDO entered its 30-yr warm cycle. 
The empirical evidence suggests that for the next 20 years, global temp trends should continue to fall, which will be the death knell for CAGW. joelobryan It will happen w/i the first 5 years as temps fall. AGW as a science hypothesis just becomes untenable in the scenario. GeneDoc Is it just too obvious that the oceans act as an enormous heat sink that moderates atmospheric temperature?When the heat content of the entire atmosphere is the same as the top 10 meters of the ocean, and when there are 321 million cubic miles of ocean, most of which is at or below 4C, how is it surprising that there is incredible buffering capacity for temperature changes? joelobryan Bingo!! Ding ding ding ding!! Flashing lights. Winner!!! The oceans control the thermostat, as they have for a billion years once our sun matured. The stupid thought that man’s fossil fuel CO2 is the thermostat regulator is total BS. Thanks, Bob. Very good information about GCMs training; the models are specialists. Why would the climate modelers base their projections of global warming on the trends of a cherry-picked period with a high warming rate? To better scare the money out of our pockets, of course. But, it seems to have come back to haunt them.
2018-07-23 08:07:47
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 6, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5293658375740051, "perplexity": 1928.2844835578771}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676595531.70/warc/CC-MAIN-20180723071245-20180723091245-00485.warc.gz"}
https://socratic.org/questions/what-are-all-the-possible-rational-zeros-for-f-x-5x-3-x-2-5x-1-and-how-do-you-fi
# What are all the possible rational zeros for f(x)=5x^3+x^2-5x-1 and how do you find all zeros?

Sep 24, 2016

The possible rational zeros are $-1, -\frac{1}{5}, \frac{1}{5}, 1$ and the zeros are $-1, -\frac{1}{5}, 1$.

#### Explanation:

$f(x) = \textcolor{red}{5}x^3 + x^2 - 5x - \textcolor{blue}{1}$

To find all the possible rational zeros, divide all the factors $p$ of the constant term by all the factors $q$ of the leading coefficient. The list of possible rational zeros is given by $\frac{p}{q}$.

The constant term $= \textcolor{blue}{1}$ and the leading coefficient $= \textcolor{red}{5}$.

The factors $p$ of the constant term $\textcolor{blue}{1}$ are $\pm 1$.

The factors $q$ of the leading coefficient $\textcolor{red}{5}$ are $\pm 1$ and $\pm 5$.

$\frac{p}{q} = \frac{\pm 1}{\pm 1, \pm 5} = 1, -1, \frac{1}{5}, -\frac{1}{5}$

The possible rational zeros are $-1, -\frac{1}{5}, \frac{1}{5}, 1$.

To find all the zeros, factor the polynomial and set each factor equal to zero.

$f(x) = 5x^3 + x^2 - 5x - 1$

Factor by grouping.

$(5x^3 + x^2) - (5x + 1)$

$x^2(5x + 1) - 1(5x + 1)$

$(x^2 - 1)(5x + 1)$

Use the difference of squares to factor $x^2 - 1$:

$(x + 1)(x - 1)(5x + 1)$

Set each factor equal to zero and solve.

$x + 1 = 0 \qquad x - 1 = 0 \qquad 5x + 1 = 0$

$x = -1 \qquad x = 1 \qquad x = -\frac{1}{5}$
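If you want to double-check the candidate list and the zeros by computer, here is a small self-contained Python sketch of the same rational-root-theorem procedure; the helper names and the exact-fraction arithmetic are illustrative choices, not part of the original answer.

```python
from fractions import Fraction
from itertools import product

def rational_root_candidates(coeffs):
    # coeffs = [a_n, ..., a_1, a_0] with integer entries
    def divisors(n):
        n = abs(n)
        return [d for d in range(1, n + 1) if n % d == 0]
    p_vals = divisors(coeffs[-1])   # factors of the constant term
    q_vals = divisors(coeffs[0])    # factors of the leading coefficient
    return sorted({Fraction(sign * p, q)
                   for p, q in product(p_vals, q_vals)
                   for sign in (1, -1)})

def poly_eval(coeffs, x):
    # Horner's rule; exact arithmetic so 1/5 is not lost to rounding
    total = Fraction(0)
    for c in coeffs:
        total = total * x + c
    return total

coeffs = [5, 1, -5, -1]                 # 5x^3 + x^2 - 5x - 1
candidates = rational_root_candidates(coeffs)
zeros = [x for x in candidates if poly_eval(coeffs, x) == 0]
print(candidates)   # -1, -1/5, 1/5, 1
print(zeros)        # -1, -1/5, 1
```

Running the script confirms that only $-1$, $-\frac{1}{5}$ and $1$ among the four candidates are actual zeros.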
2020-03-29 15:32:31
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 26, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.864500880241394, "perplexity": 331.38355659427873}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370494349.3/warc/CC-MAIN-20200329140021-20200329170021-00034.warc.gz"}
https://www.nature.com/articles/s41598-017-06366-x?error=cookies_not_supported&code=92d772da-d1ae-498b-ae01-61c2c23d87dd
# Analysis of somatic mutations across the kinome reveals loss-of-function mutations in multiple cancer types

## Abstract

In this study we use somatic cancer mutations to identify important functional residues within sets of related genes. We focus on protein kinases, a superfamily of phosphotransferases that share homologous sequences and structural motifs and have many connections to cancer. We develop several statistical tests for identifying Significantly Mutated Positions (SMPs), which are positions in an alignment with mutations that show signs of selection. We apply our methods to 21,917 mutations that map to the alignment of human kinases and identify 23 SMPs. SMPs occur throughout the alignment, with many in the important A-loop region, and others spread between the N and C lobes of the kinase domain. Since mutations are pooled across the superfamily, these positions may be important to many protein kinases. We select eleven mutations from these positions for functional validation. All eleven mutations cause a reduction or loss of function in the affected kinase. The tested mutations are from four genes, including two tumor suppressors (TGFBR1 and CHEK2) and two oncogenes (KDR and ERBB2). They also represent multiple cancer types, and include both recurrent and non-recurrent events. Many of these mutations warrant further investigation as potential cancer drivers.

## Introduction

Paired tumor-normal exome sequencing has revealed millions of somatic mutations across many thousands of patients1. Of these mutations, it is likely that only a small minority have a biological impact, while the majority of mutations are incidental to cancer development2. Identifying mutations that impact tumor biology and using this knowledge to guide experiments or therapeutic decision-making is a major goal.

Although the specific biologic effects of many mutations are unknown, many strategies rely on aggregating mutations to draw biological conclusions. For instance, mutations can be drawn from several genes to identify gene networks and pathways that are related to tumor growth3. Many tools also query mutations at the gene level to identify genes with non-random patterns of mutations that are likely related to cancer development4, 5. As the number of mutations increases, even regions within proteins can be assessed6, and clustered mutations can be detected7. Even though knowledge of specific mutations may be lacking, these approaches can guide researchers towards the most promising subsets of mutations for further study. However, one limitation of these approaches is that they operate genome-wide, often without taking into account relevant knowledge of specific gene families or protein types.

One particularly well-studied gene superfamily is protein kinases. These are a set of evolutionarily conserved phosphotransferases. There are approximately 500 protein kinase domains encoded in the human genome, spread among roughly 485 genes. These signaling molecules have well-known links to a variety of human diseases, and particularly to cancer due to their widespread functions in regulating cell behaviors8, 9. Several strategies for identifying biologically active mutations in protein kinases have been developed by focusing on characteristics specific to kinases10. Torkamani and Schork observed that known disease-causing mutations are not randomly distributed throughout these proteins and developed a machine-learning method for identifying these mutations11,12,13.
When applied to cancer mutations, they observed that predicted functional mutations clustered in hotspots, suggesting that functional mutations may be shared among protein kinases14. Recent studies continue to use machine-learning and kinase-specific data to improve the identification of functional mutations in kinases15, 16. KinView is a more recent method that allows mutations to be mapped across alignments and incorporates additional annotations. It is an interactive visualization program that was used to identify a loss-of-function mutation in PKCβ, a kinase which functions as a tumor suppressor gene17.

Another approach is to seek common effects of functional mutations. Dixit and colleagues demonstrated over several studies that activating protein kinase mutations shift the active-inactive equilibrium towards the active conformation, and that this is broadly true in many kinases18,19,20. Furthermore, they identified the catalytic and activation loops as particularly prone to gain-of-function events21, 22. Analogously, Olow et al. showed that nearly half of phosphorylation sites in the kinome-reactome are somatically mutated in at least some cancers23. This suggests that mutations with functional consequences may affect kinase substrates in addition to kinase enzymes.

It is clear that mutations occurring in one protein kinase can be used to draw inferences in another, and that biologically active protein kinase mutations may have some distinct characteristics which can be used to better identify them. However, these kinase-specific methods rely on prior structural knowledge, sets of labeled training mutations, or curated reaction datasets that limit generalizability beyond kinases. In this study, we propose an alternative approach that relies only on unlabeled somatic mutations and an alignment of related genes or domains, which in principle is generalizable to other settings besides kinases. Rather than use prior knowledge of protein structure or post-translational modifications to find functional mutations, we first pursue the reverse task: using observed mutations and a protein kinase alignment to develop a functionality map of the human kinome. To do so, we design a series of statistical tests to identify aligned positions with non-random mutations, using our previous study of cancer genes as a starting point5. This strategy has not been used in prior studies of kinases or other gene families. We identify 23 homologous positions with non-random mutations, which is a novel finding in the field. We functionally assess eleven previously untested mutations across four genes by introduction into cell lines, and find that all eleven cause some reduction-of-function (ROF).

## Results

### Datasets

We used dGene to identify genes that have kinase domains, ultimately drawing 486 kinase domain sequences from 471 unique genes from Uniprot24, 25. These kinase domains were aligned using ClustalOmega with default settings26. The default settings are quite permissive to gaps in the alignment; this is acceptable for our purposes, since the analysis assumes that aligned residues have homologous functions, and a more stringent alignment may violate the assumption. To ensure the quality of the alignment, we compared it with results produced by alternate aligners including COBALT and MUSCLE, as well as older, manually curated alignments from kinase.com, and found that all were nearly identical27,28,29. We also manually examined the alignment to ensure major structural regions were aligned properly.
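Working with a gap-rich alignment like this means that a mutation's position within its own kinase-domain sequence must be converted to an alignment column before positions can be compared across genes (the mapping step described in the next section). The paper does not publish code for this step; the sketch below shows one plausible way to do it in Python with Biopython, and the alignment file name, record identifier, and example position are hypothetical placeholders rather than values from the study.

```python
from Bio import AlignIO

def residue_to_column(aligned_seq, residue_pos):
    """Map a 1-based position in the ungapped domain sequence to a
    1-based alignment column by walking past gap characters."""
    seen = 0
    for col, ch in enumerate(str(aligned_seq), start=1):
        if ch != "-":
            seen += 1
            if seen == residue_pos:
                return col
    raise ValueError("position lies beyond the end of this sequence")

# hypothetical Clustal-format alignment of the 486 kinase domain sequences
alignment = AlignIO.read("kinase_domains.aln", "clustal")
seqs = {record.id: record.seq for record in alignment}

# e.g. the 150th residue of one domain sequence (identifier is illustrative)
print(residue_to_column(seqs["CHEK2_kinase_domain"], 150))
```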
The final alignment has 1808 positions (alignment available in Supplementary Table 1). We draw 64,554 point mutations in these genes from our previous study, updated with additional mutations from the cBio portal (Fig. 1a, Supplementary Table 2)5, 30. 21,917 of the mutations map to the kinase domains, while the remainder are outside the kinase domain. Duplicate mutations from multiple sources were removed. We limit scope to just point mutations (missense and silent changes), because other types of mutations like insertions and deletions often cannot be mapped to a single position on the alignment. 14,665 silent mutations are included in all in silico analyses. Positions that are systematically enriched or depleted for silent mutations may be under negative or positive selection, respectively, making these events a valuable source of information31, 32. Moreover, there is evidence that some silent mutations have important functional consequences at the protein level33, 34. The mutations in our dataset come from 8,674 distinct patients, although the number of patients exome sequenced to generate these mutations is likely 10–20% higher, since some patients will have no mutations in any protein kinase.

### Testing Aligned Positions

Mutations were mapped onto the alignment of human kinase domains (Fig. 1b, Supplementary Table 2). Mutations in these genes which are outside the kinase domain are used to define the null distributions of test statistics, since they are produced by the same mutational processes as kinase domain mutations, but are unaligned. We developed a series of seven statistical tests to identify homologous positions with non-random mutation patterns, which can be calculated using basic approaches outlined in the Methods section. Importantly, these methods do not make assumptions regarding the neutrality of mutations used for the null distribution. The tests compare mutations at a given aligned position to unaligned mutations from outside the kinase domain; the goal is to identify aligned positions with mutations that appear non-random in relation to unaligned mutations. The tests include:

• Mutation Number – detects elevated numbers of mutations at an aligned position using a Poisson distribution, given the observed mutation rates for residues aligned to the position.

• Patients – uses a chi-square statistic to detect deviations from the expected patient distribution, given the number of mutations observed at the position.

• Cancer Types – uses a chi-square statistic to detect deviations from the expected cancer type distribution, given the number of mutations observed at the position.

• Reference Residues – uses a chi-square statistic to detect deviations from the expected distribution of mutated residues, given the observed residue substitution frequencies, residues aligned to the position, and the total number of observed mutations.

• Variant Residues – uses a chi-square statistic to detect deviations from the expected distribution of variant residues, given the observed residue substitution frequencies, reference residues that are mutated, and the total number of observed mutations.

• Cancer Genes – detects sets of mutated genes that are enriched in predicted cancer genes, given the observed residue substitution frequencies, residues aligned to the position, and the number of observed mutations.
• Gene Relatedness – detects sets of mutated genes that are more closely related than expected, given the observed residue substitution frequencies, residues aligned to the position, and the number of observed mutations.

### Constructing a Functionality Map

Since the tests require multiple mutations and genes in order to be calculated, they were applied to the 831 positions (of 1808 total) that had mutations in at least two genes. The p-values from the tests were then combined using the Fisher procedure to produce a single p-value for the position35. The Fisher procedure (see Methods for details) is commonly used to combine p-values in the context of meta-analyses36, but has also been used to produce consensus scores from multiple tests35. These Fisher p-values were then adjusted for multiple testing to control the false discovery rate (FDR)37. We found 23 significantly mutated positions (SMPs) with FDRs less than 0.10 (Table 1, Supplementary Table 3, Supplementary Table 4).

One possible shortcoming of the Fisher procedure is that it may prioritize positions with one extremely small p-value over others with multiple borderline p-values36. Therefore, we scrutinized the results to determine the contribution of each test to detecting SMPs (Supplementary Figure 1). If we consider a p-value of less than 0.05 a positive result, Mutation Number detected the most SMPs (20 SMPs detected of 23 total; Supplementary Figure 1A). However, Cancer Genes, Gene Relatedness, Cancer Types and Patients all detected more than 5 SMPs each. Variant and Reference Residues contributed the least to detecting SMPs, with 4 SMPs detected by each. More importantly, we found that all but one SMP were detected by multiple tests (Supplementary Figure 1B and C), and 11 of 23 were detected by three or more tests. In contrast, of 808 columns that were not identified as SMPs, only 37 were detected by two or more tests. Overall, it appears that most SMPs detected by the Fisher procedure have at least modest support from multiple tests.

### Characterizing SMPs

SMPs are exceptional positions and differ markedly from other positions in the alignment. The average SMP had 377 aligned domains, versus only 75 across the entire alignment (Supplementary Table 5). They also had more mutations (117 versus 12) and more mutated genes (61 versus 10) than the average aligned position. Overall, SMPs had about twice the average number of mutations per aligned domain (0.31 versus 0.16). This increased number of mutations reflects both a greater degree of recurrence (1.9 mutations per mutated gene at SMPs versus 1.25 elsewhere), as well as more genes that are mutated at SMPs (16% of aligned genes are mutated at SMPs, versus 13% elsewhere).

SMPs were also slightly more conserved than most positions. SMPs had an average entropy score of 2.29, versus 1.37 for all 831 tested positions and 0.68 for all positions. However, entropy is markedly affected by the number of genes aligning at a given position. 18 SMPs had at least 350 aligned genes; the mean entropy of these SMPs was 2.52, while 181 non-SMPs with over 350 aligned genes had entropy scores of 2.86 on average. However, these are only summary statistics, and many individual SMPs go against these trends (Supplementary Table 5).

When viewed against the known structure of kinase domains, these SMPs compose a map of regions that may be important to kinase function. In Fig. 2, we project these positions onto the EGFR kinase domain crystal structure.
One notable group is SMPs 11–19 in Table 1 and Fig. 2; these are all very well-known activation loop (A-loop) residues, and many are known to host important functional mutations38. Additionally, SMP 1 (aligned position 145) is located in the nucleotide-binding P-loop, SMPs 4 and 5 (aligned positions 246 and 254) are in the αC-helix, SMPs 2, 3 and 6 (aligned positions 200, 205, and 258) are in the loops either N- or C-terminal to the αC-helix (the β3-αC loop and the αC-β4 loop, respectively), and SMPs 9 and 10 (aligned positions 820 and 828) are located in or adjacent to the catalytic loop (C-loop). Well-known functional mutations at each of these positions are listed in the legend for Fig. 2 (see underlined mutations), and a recent study by Foster et al. demonstrated how deletion mutations in the β3-αC loop (corresponding to SMPs 2 and 3) are able to activate BRAF, EGFR, and ERBB2 kinases39.

### Selecting Mutations for Validation

We first narrowed focus to just 14,541 unique missense mutations in the kinase domains (Fig. 1a). We further focused on the 42 protein kinases which we previously confirmed or predicted as cancer genes, reducing the candidates to 1894 mutations (genes had to have greater than even chance of being either an oncogene or tumor suppressor according to our previous study)5. Finally, we limited scope to the 23 SMPs, resulting in 218 candidate mutations. We selected ten of these mutations for functional testing in cell culture (Table 2). We sought a mix of recurrent and non-recurrent events, mutations from diverse areas of the kinase domain, and a variety of cancer types. In particular, we tried to test mutations at several SMPs, and avoid mutations that were closely related to well-studied functional mutations. Therefore, the mutations we selected represent a variety of novel hypotheses suggested by the functionality map.

The mutations we selected include events in TGFBR1, CHEK2 and KDR, as well as the ERBB2 R868W mutation (Table 2). Five are non-recurrent, and seven are not homologous to known functional mutations to our knowledge. Our group specializes in ERBB2/HER2, and we have particular interest in mutations occurring in the terminal portion of the C-lobe. Since none of the mutations observed in this region occurred at an SMP, we identified additional mutations that otherwise did not meet the selection criteria. SMP 21 (position 1430 of the alignment) is one of the most downstream SMPs; although no mutation was observed in ERBB2 at this position, an R to C change occurred at this position 33 times in 23 different genes, including one observation of EGFR R958C. We therefore constructed ERBB2 R966C, which corresponds to this position.

Our chosen mutations also represent a variety of cancer types. They occur in a total of 73 patients with more than eleven distinct cancers (Supplementary Table 6). The CHEK2 K373E variant was split among many cancer types, but 17 patients with lung adenocarcinoma carried it. The KDR variants R1032Q and S1100F were predominantly observed in 11 melanoma patients. Finally, the TGFBR1 S241L and ERBB2 R868W mutations were found in colorectal patients.

### Experimental Results

Using a previously described retroviral transduction system40, we produced NIH 3T3 cells stably overexpressing both mutant and wild-type proteins for each of TGFBR1, KDR and ERBB2. We found that we could not stably overexpress wild type CHEK2 in this setting: cells retained the selection marker, but stopped expressing the construct. Instead, CHEK2 experiments were performed using transient transfection in HEK293T cells.
TGFBR1, CHEK2 and KDR constructs were tagged with FLAG. All experiments were performed in duplicate or triplicate.

#### TGFBR1

TGFBR1 (Transforming Growth Factor Beta Receptor 1) is a receptor S/T kinase. It has well-appreciated functions in immune regulation as well as tissue remodeling. It is generally thought of as a tumor suppressor and acts to arrest the cell cycle41, although it can also act as a pro-tumor factor in later disease progression, particularly by causing increased cell invasiveness, proliferation and migration42, 43. We tested two mutations in this gene. We found that NIH 3T3 cells overexpressing TGFBR1 S241L and L354P had reduced signaling upon exposure to the ligand TGFβ when compared with wild type (Fig. 3a).

#### CHEK2

Checkpoint kinase 2 (CHEK2) is a cytoplasmic S/T kinase that has important functions in cell cycle control, specifically in DNA damage and repair, and is a well-appreciated tumor suppressor44. We transiently transfected HEK 293T cells with wild-type CHEK2 and five variants. We confirmed previous observations that wild-type CHEK2 is constitutively activated under these conditions, as judged by phosphorylation at the autophosphorylation site S51645. We found that CHEK2 S372F, S372Y, and A392V all had less than 15% of the wild-type phosphorylation. The highly recurrent mutant K373E had 45% of wild-type phosphorylation, while A392S had 70% (Fig. 3b; representative raw image in Supplementary Figure 2).

#### KDR/VEGFR2

KDR/VEGFR2 (Vascular Endothelial Growth Factor Receptor-2) is a receptor tyrosine kinase (RTK). KDR is a well-established oncogene with crucial roles in angiogenesis, although there is evidence of an autocrine function as well46. We tested two mutations in this gene. We found that both the R1032Q and S1100F mutations markedly reduced function, as judged by levels of phospho-KDR and phospho-MAPK after exposure to the ligand VEGF (Fig. 3c).

#### ERBB2/HER2

ERBB2/HER2 is a member of the EGFR family of RTKs and a well-known oncogene. Our lab has shown that point mutations in the HER2 kinase domain can trigger increased signaling and cell transformation in both breast40 and colorectal cell lines47. We found that HER2 R966C and R868W caused a reduction of function, as judged by levels of phospho-HER2 and MAPK signaling (Fig. 3d).

### Analysis of Kinase Groups

Finally, using the classification scheme suggested by UniProt (Supplementary Table 7), we used the same procedures to identify additional SMPs within groups of related kinases (Supplementary Table 8). We found that groups with few members and mutations produced results that were highly sensitive to even single mutations; for this reason we limited the analysis to groups with more than 20 members and 2000 kinase domain mutations, excluding the atypical and “other” kinases, since they are highly heterogeneous. The largest groups (over 50 members each) yielded relatively few group-specific SMPs. Among the five largest groups (AGC, CAMK, CMGC, STE, TYR), only 14 SMPs could be detected, all but two of which were identified in the main analysis. In the STE group (which includes MAP kinases), column 269 was identified; this position contains numerous recurrent mutations in the group, including P124S in MAP2K1, which is common in melanomas. The other group-specific SMP from these groups was column 951 in the CAMK group (which includes CHEK2). In contrast, the smaller tyrosine kinase-like group (TKL, 33 members, including TGFBR1 and BRAF) had 13 SMPs identified, 9 of which were not identified in the main analyses.
These positions included columns 887, 888 and 890, corresponding to the N-terminal portion of the A-loop. These results suggest that there may be additional SMPs present in smaller kinase groups, but that additional data will be required to identify them reliably.

## Methods

### Statistical Tests

We developed a panel of statistical tests which can be used to identify non-random sets of mutations that occur at homologous positions in human kinases. Several of these tests are adapted from our previous study5. In many cases, null distributions are defined empirically (via permutation). Where needed, amino-acid substitution frequencies are defined by mutations that are outside kinase domains (but within genes bearing kinase domains), as these mutations are generated by the same mutational processes that produce the kinase domain mutations. Importantly, our method makes no assumptions regarding the functional status of these mutations; it merely assumes that mutations at some aligned positions will be enriched for functional events compared to unaligned mutations as a whole. That is, our method is tolerant to the fact that some non-kinase-domain mutations may be functional48, 49. This contrasts with prior methods, which require presumably neutral mutations to define a null distribution, for instance by using silent mutations7. In some cases, the null distribution is also conditioned on the alignment and aspects of the observed mutations (for instance, most tests assume a fixed number of mutations).

Careful consideration was given to recurrent mutations, which occur in more than one patient. These mutations are often presumed to have a functional effect15, but they may also be idiosyncratic to particular genes. Completely excluding recurrent mutations will likely remove many biologically important mutations from the dataset, but completely including them will likely make the analysis sensitive to positions with even a few recurrent mutations. Therefore, our panel includes tests that operate at three levels, which reflect different ways of handling recurrent events. Mutation-level tests (Mutation Number, Patients, and Cancer Types) include all mutations in the dataset, and consider recurrent events as non-redundant. Residue-level tests (Reference Residues, Variant Residues) treat identical amino-acid substitutions as redundant (e.g. CHEK2 K373E, which occurs 48 times in the dataset, is counted as a single event). Finally, gene-level tests (Cancer Genes, Gene Relatedness) treat mutations that occur at a single position in a gene as redundant (e.g. CHEK2 S372F and CHEK2 S372Y are treated as a single event). This approach should balance the value of recurrent mutations in identifying important positions against the risk of finding positions that are not broadly important to kinase function.

#### Mutation Number

In this simple test, we identify aligned positions with a higher-than-expected number of total mutations. All mutations are used, and the null is set using only non-kinase-domain mutations. We begin by defining the expected number of mutations per residue type (r) using the mutations and sequences that are outside of kinase domains:

$${E}_{r}=\frac{{O}_{r}}{{N}_{r}}$$ (1)

where E_r is the expected number of mutations per residue of type r, O_r is the observed number of mutations affecting residues of type r outside of the kinase domains, and N_r is the total number of residues of type r present in gene sequences, but outside of their respective kinase domains.
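As an illustration (this sketch and its variable names are not from the study), equation (1) amounts to two counts and a division; in Python it might look like the following, with tiny invented inputs:

```python
from collections import Counter

def expected_rate_per_residue(non_domain_mutation_residues, non_domain_sequence_residues):
    """E_r = O_r / N_r (equation 1), estimated outside the kinase domains.

    non_domain_mutation_residues: reference residue type ('K', 'R', ...) of each
        observed mutation that falls outside a kinase domain.
    non_domain_sequence_residues: residue type of every sequence position outside
        the kinase domains of the same genes.
    """
    O = Counter(non_domain_mutation_residues)   # O_r: mutations observed per residue type
    N = Counter(non_domain_sequence_residues)   # N_r: residues of each type available
    return {r: O[r] / N[r] for r in N}

# Hypothetical toy counts, for illustration only.
E_r = expected_rate_per_residue(list("KKRLS"), list("KKKKRRRLLLLSSSS"))
print(E_r)  # {'K': 0.5, 'R': 0.333..., 'L': 0.25, 'S': 0.25}
```

These per-residue rates are the only ingredient carried forward into the position-level expectation described next.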
Once the expectations per residue type are set, we calculate the expected number of mutations at each aligned position (a):

$${E}_{a}=\sum _{r}{E}_{r}{R}_{a,r}$$ (2)

where E_a is the expected number of mutations at an aligned position a, E_r is the expected number of mutations per residue type r, and R_a,r is the number of residues aligned at a of type r. We assume that the presence of mutations at each gene and aligned position can be modeled with a Poisson distribution, parameterized by E_r for the appropriate residue type. It follows that the number of mutations for an entire aligned position is therefore also Poisson distributed (since it is a sum of Poisson variables), and parameterized by E_a. By comparing the observed number of mutations at the position with this null distribution, we generate an upper-tail p-value for the test.

#### Patients and Cancer Types

In these tests, we identify positions with mutations that are not randomly distributed among patients and cancer types, given the number of mutations observed at the position. They are calculated very similarly to one another, and are described in our previous study5. Both are calculated as chi-square goodness-of-fit tests, although both use empirical rather than theoretical distributions. Both tests use all mutations at the aligned positions. Unlike the other tests, the null distribution includes mutations in kinase domains, as well as mutations outside kinase domains. Each mutation can be assigned to a patient (and cancer type), each of which has a certain mutation count associated with it (c). The mutation count is simply the number of times the patient (or cancer type) occurs in the dataset. Once each mutation has been associated with a value of c, we calculate the test statistic for each aligned position (a):

$${X}_{a}^{2}=\sum _{c}\frac{{({O}_{a,c}-{E}_{a,c})}^{2}}{{E}_{a,c}}$$ (3)

$${E}_{a,c}=\frac{{N}_{a}{N}_{c}}{N}$$ (4)

where O_a,c is the observed number of mutations at the aligned position from patients (cancer types) with mutation count c, E_a,c is the expected number of mutations at the aligned position from patients (cancer types) with mutation count c, N_a is the number of mutations at the position, N_c is the total number of mutations in the dataset from patients (cancer types) with mutation count c, and N is the total number of mutations in the dataset. This statistic is compared to a null distribution, which is generated by calculating the statistic for random draws with replacement from the set of patient (cancer type) labels, holding the number of mutations fixed. The final output is an upper-tail p-value.

#### Reference Residues

This test identifies positions where mutated residues appear non-random. It is calculated as a chi-square goodness-of-fit test, but uses an empirical null distribution instead of a theoretical one. It is a residue-level test, and recurrent mutations with identical residue changes are removed. The null distribution is set with mutations from outside of kinase domains. We use the expected number of mutations per residue of each type (E_r) that was used in Mutation Number.
We then calculate the test statistic for each aligned position (a):

$${X}_{a}^{2}=\sum _{r}\frac{{({O}_{a,r}-{E}_{a,r})}^{2}}{{E}_{a,r}}$$ (5)

$${E}_{a,r}={R}_{a,r}{E}_{r}$$ (6)

where O_a,r is the observed number of mutations at the aligned position from residues of type r, E_a,r is the expected number of mutations at the aligned position at residues of type r, and R_a,r is the number of residues at the aligned position a of type r. This statistic is compared to a null distribution, which is generated by calculating the statistic for random draws with replacement from the set of amino acid types (weighted by E_a,r for each residue type), holding the number of mutations fixed. The final output is an upper-tail p-value.

#### Variant Residues

This test is very similar to Reference Residues, but tests for positions where the newly produced amino acids appear non-random. It is calculated as a chi-square goodness-of-fit test, but uses an empirical null distribution instead of a theoretical one. It is a residue-level test, and recurrent mutations with identical residue changes are removed. The null distribution is set with mutations from outside of kinase domains. We then calculate the test statistic for each aligned position (a):

$${X}_{a}^{2}=\sum _{v}\frac{{({O}_{a,v}-{E}_{a,v})}^{2}}{{E}_{a,v}}$$ (7)

$${E}_{a,v}=\sum _{r}{P}_{r,v}{O}_{a,r}$$ (8)

where v is the type of variant residue and r is the type of reference residue. P_r,v refers to the probability that a mutation occurring at a residue of type r will result in a residue of type v (calculated based on the amino acid substitution frequencies observed outside of kinase domains), and O_a,r is the observed number of mutations at aligned position a with reference residues of type r. This statistic is compared to a null distribution, which is generated by calculating the statistic for random draws with replacement of amino acid types (weighted by E_a,v), holding the number of mutations fixed. The final output is an upper-tail p-value.

#### Cancer Genes

This test identifies positions with mutations that tend to occur in predicted cancer genes. It is a gene-level test, and multiple mutations that affect a single gene at a single position are only counted once. We associate each gene with a score that represents how likely the gene is to be related to cancer. Cancer genes have smaller scores on average (for details, see “UK Score” from our previous study5). To perform the test, we calculate the average score for the genes that are mutated at a given aligned position. We generate a null distribution by calculating the average score for random draws of genes (weighted by the E_r that corresponds to each gene’s aligned residue at the given position). The result of the test is a lower-tail p-value.

#### Gene Relatedness

This test identifies positions where mutated genes have kinase domains that are more closely related to one another on average than expected by chance, given the mutation patterns observed outside of kinase domains. It is a gene-level test, and mutations that affect a single gene at a given position are only counted once. The distance matrix of all kinase domains in the dataset was calculated from the phylogenetic tree produced by Clustal Omega when it produced the alignment. To perform the test, we calculate the average pair-wise distance for all genes that are mutated at a given aligned position.
We generate a null distribution by calculating the average pair-wise distance for random draws of genes (weighted by the E_r that corresponds to each gene’s aligned residue at the given position). The result of the test is a lower-tail p-value.

### Fisher Procedure

The Fisher procedure is used to combine the individual p-values into a single consensus score, as was done in OncodriveFM35. The statistic is calculated as:

$${X}_{2k}^{2}=-2\sum _{i=1}^{k}\mathrm{ln}({p}_{i})$$ (9)

where k is the number of tests being combined. Under the null hypothesis this statistic follows a chi-square distribution with 2k degrees of freedom, so it can be used to generate an upper-tail p-value. Unweighted methods like the Fisher procedure are often considered inferior to weighted methods like weighted Z-scores in meta-analytic problems36. However, it is important to note that there is no clear role for weighting in our problem, since we have no prior reason to regard one test as more reliable or powerful than any other, as they all rely on the same underlying dataset. Therefore, an unweighted approach is most appropriate. We did compare the Fisher method to unweighted Z-scores as discussed by Whitlock36. We found that the Fisher procedure and unweighted Z-scores produced highly correlated results (r = 0.903) at the 831 tested positions, and that a large majority of SMPs would be identified by either method. The Z-score method generally detected fewer positions at a given cut-off. For instance, if an FDR cutoff of 0.2 were applied to Z-score-based p-values, there would be 26 positive results, 21 of which are among the 23 SMPs identified by the Fisher method at the cutoff of 0.1. Based on these observations, the unweighted Z-score method and the Fisher method identify the same positions as most likely to be significantly mutated, although the absolute p-values may differ slightly.

### Missingness and Data Handling

The only variable with notable missingness was Cancer Type, which ~20% of mutations lacked. We found that excluding these mutations from the Cancer Types test or including them under a “missing/other” category produced virtually identical results. The final analysis includes them as a separate category. For genes with multiple isoforms, merging multiple datasets sometimes required mapping mutations to a common isoform. To do so, we selected the isoform that conserved the greatest number of mutations. Less than 1% of kinase domain mutations were discarded in this process. The supplementary materials indicate when the mapped isoform differs from the UniProt canonical isoform. In the body of the text and figures, we refer to mutations according to the canonical isoform.

### Experimental Procedures and Reagents

Experiments were performed as previously described40. Briefly, cDNAs for KDR, TGFBR1 and CHEK2 were purchased from Addgene. ERBB2 cDNA was a gift from Dr. Dan Leahy (Johns Hopkins University, Baltimore). Mutations were introduced using QuikChange II site-directed mutagenesis (Agilent). Constructs were then shuttled into the pCFG5 retroviral vector (which includes a zeocin resistance marker and IRES-GFP sequence) using the In-Fusion HD cloning system kit (Clontech), and verified by full-length Sanger sequencing. For KDR, TGFBR1 and CHEK2, a C-terminal FLAG tag was introduced. For ERBB2, TGFBR1 and KDR, retroviral particles were produced using ϕNX amphotropic packaging cells. NIH 3T3 cells were spin-infected with virus, and selected under 10 μg/ml zeocin for 3 weeks. Fluorescence was confirmed at >95% by flow cytometry or >90% by microscopy.
Cells were serum-starved for 6 hrs before lysate harvesting for each of these three genes. Cells were treated or untreated with ligand prior to harvesting in the case of TGFBR1 (20 min induction, 5 ng/ml) and KDR (10 min induction, 10 ng/ml). In the case of CHEK2, transient transfections were performed using LTX and Plus reagent from Thermo Fisher, using the manufacturer’s standard protocol in HEK 293T cells. Cells were lysed 24 hrs after transfection. Transfection efficiency was confirmed by microscopy as >50% in all cases. ERBB2/HER2 signaling was assayed using pHER2 and pMAPK levels40. TGFBR1 activity was assayed using pSMAD2 levels43, 50. KDR activity was assayed using pKDR51 and pMAPK levels. CHEK2 was assayed with pS516, which is both an autophosphorylation site and necessary for full activation of CHEK2, and has been used previously as a proxy of CHEK2 activity45, 52, 53. NIH 3T3 cells were acquired from the American Type Culture Collection (ATCC). HEK 293T cells were a gift from Dr. Akhilesh Pandey (Johns Hopkins University, Baltimore). Antibodies used include HER2 from Thermo-Fisher (Ab-17), phospho-HER2 (pY1248) from Millipore (06–229), p44/42 MAPK from Cell Signaling Technologies (CST, 137F5), phospho-p44/42 MAPK from CST (20G11), FLAG from Sigma-Aldrich (F3165), phospho-KDR (pY1175) from CST (19A10), phospho-SMAD2 (S465/467) from CST (138D4), SMAD2 from CST (D43B4), and phospho-CHEK2 (pS516) from CST (#2669). Ligands included VEGF165 (#8065) from CST and TGFβ.

### Data Availability

All data generated or analysed during this study are included in this published article (and its Supplementary Information files).

## Discussion

In this study, we hypothesized that somatic cancer mutations could be used to identify important functional regions within proteins. Specifically, we focused on the superfamily of protein kinases, which are a conserved set of phosphotransferases that share homologous sequences and structural motifs. By mapping mutations onto the alignment of protein kinases and applying a panel of statistical tests, we were able to identify homologous positions that bear mutations which appear non-random. Since mutations are pooled across all superfamily members, these positions may be broadly important to the function of many different protein kinases. We found 23 significantly mutated positions (SMPs) within the kinase alignment. SMPs were found throughout the kinase domains, with the strongest enrichment in the A-loop, and with other notable positions located in and around the P-loop, the αC helix, and the catalytic loop. We tested eleven distinct mutations in several genes, including the oncogenes ERBB2 and VEGFR2 and the tumor suppressors CHEK2 and TGFBR1. We focused on highly novel mutations, including many that are rare or non-recurrent, and avoided mutations that are closely related to well-studied functional mutations. All eleven mutations reduced signaling through the corresponding kinase. The mutations we tested were observed in 73 patients with eleven cancer types, with particularly large numbers of these mutations occurring in colorectal carcinomas, lung adenocarcinomas, and melanomas.

The fact that all eleven tested mutations reduced function is an important finding. It illustrates the importance of functional characterization of mutations, particularly given the diverse roles protein kinases play in cancer development5. In tumor suppressors, the focus is often on deletions or truncations, since loss-of-function events in tumor suppressors could act as tumor drivers.
In this study, we found that both highly recurrent (CHEK2 K373E) and rare point mutations (CHEK2 S372F/Y and A392V, TGFBR1 S241L and L354P) in tumor suppressors can also cause loss- or reduction-of-function. Similarly, while it may be tempting to assume that recurrent point mutations in oncogenes are either neutral or gain-of-function, this work shows that these mutations can be loss-of-function (for instance, KDR R1032Q and S1100F). In contrast to tumor suppressors, loss-of-function events in oncogenes would seem to be poor candidates as tumor drivers. As it becomes more common for patients to have their tumors exome- or genome-sequenced, this knowledge will be crucial in identifying events that are most likely to underpin their disease.

There are some important drawbacks to our approach. On a technical level, one limitation of this study is the focus on protein-level changes, which was necessary as DNA-level changes are not uniformly publicly available. However, our methods are in principle compatible with DNA-level data, and using such data would provide two major benefits. First, applying our framework to a DNA alignment and set of nucleotide changes would allow analysis of non-protein regions. Second, in protein-coding regions, the use of DNA-level changes would allow us to correct for codon structure, potentially improving the performance of our tests. Another caveat to this analysis is that while it provides a precise location within a gene or sets of genes to search for functional events, it does not identify specific mutations for testing. We addressed this problem by manually selecting candidate mutations from SMPs for experimentation. However, numerous methods exist that provide complementary functionality and could be combined with the work of this study. For instance, several studies have focused on identifying “hotspot” regions of genes with high densities of mutations, sometimes taking protein structure into account7, 54, 55. These methods can be used to identify regions within specific genes for further study, but do not yet implicate specific residues. Functional impact predictors, which use a variety of inputs to identify mutations that are likely to alter protein function, have also been developed56, including by our own group57. However, impact predictors can have high rates of false-positive results, and are best used on limited sets of mutations with a high prevalence of functional events. Combining the methods developed in this study with other complementary approaches may provide an avenue for reliably identifying functional events in large genomic datasets.

There are other potential extensions to this study, encompassing multiple fields. We have tested only a small fraction of the mutations at the SMPs we identified. Direct follow-up studies, particularly of reduction-of-function (ROF) mutations in the tumor suppressors TGFBR1 and CHEK2, will be necessary before these mutations can be confirmed as bona fide cancer drivers. Many other mutations are found at other SMPs, and our results suggest that testing these mutations could be fruitful, particularly if present in genes with therapeutic implications. Our results also have implications for the structural understanding of kinase signaling: for instance, the ERBB2 R966C mutation demonstrates the importance of the C-lobe to kinase function, but the exact role this region plays is not fully understood. Our methods can also be applied in other settings. Although we have focused on kinases, none of our methods are kinase-specific.
Our analysis is equally compatible with other conserved gene or domain families of broad importance to cancer development, such as nuclear hormone receptors58 and G-protein coupled receptors59. Our methods will also become more precise as data volumes continue to increase. We found additional SMPs within specific groups like the TKL kinases, and more may exist in even smaller groups. New platforms that incorporate multi-sequence alignments with cancer mutation data will allow future analyses to be quickly iterated and focused on specific kinases17. Our methods can even be adapted to single genes, provided a sufficient density of observed variants. In conclusion, we have demonstrated the use of somatic mutations to identify functional positions and mutations within gene families. We developed several statistical approaches for identifying positions with non-random mutations, aggregating mutations across homologous positions in the human kinome to do so. We identified 23 significantly mutated positions, and tested eleven mutations found at these positions from several genes. We confirmed all eleven as causing reductions in kinase function. Mutations that reduce the function of tumor suppressors are particularly promising as candidate cancer drivers, though other mutations at these SMPs warrant study as well. Our methods are highly extensible, providing a framework for using somatic cancer data to identify functionally important regions in proteins, and eventually identifying mutations that are relevant to cancer development and growth. ## References 1. 1. Forbes, S. A. et al. COSMIC: mining complete cancer genomes in the Catalogue of Somatic Mutations in Cancer. Nucleic acids research 39, D945–D950, doi:10.1093/Nar/Gkq929 (2011). 2. 2. Vogelstein, B. et al. Cancer Genome Landscapes. Science 339, 1546–1558, doi:10.1126/science.1235122 (2013). 3. 3. Ciriello, G., Cerami, E., Sander, C. & Schultz, N. Mutual exclusivity analysis identifies oncogenic network modules. Genome research 22, 398–406, doi:10.1101/gr.125567.111 (2012). 4. 4. Lawrence, M. S. et al. Mutational heterogeneity in cancer and the search for new cancer-associated genes. Nature 499, 214–218, doi:10.1038/nature12213 (2013). 5. 5. Kumar, R. D., Searleman, A. C., Swamidass, S. J., Griffith, O. L. & Bose, R. Statistically Identifying Tumor Suppressors and Oncogenes from Pan-Cancer Genome Sequencing Data. Bioinformatics 31, 3561–3568 (2015). 6. 6. Porta-Pardo, E. & Godzik, A. e-Driver: a novel method to identify protein regions driving cancer. Bioinformatics 30, 3109–3114, doi:10.1093/bioinformatics/btu499 (2014). 7. 7. Tamborero, D., Gonzalez-Perez, A. & Lopez-Bigas, N. OncodriveCLUST: exploiting the positional clustering of somatic mutations to identify cancer genes. Bioinformatics 29, 2238–2244, doi:10.1093/bioinformatics/btt395 (2013). 8. 8. Torkamani, A., Verkhivker, G. & Schork, N. J. Cancer driver mutations in protein kinase genes. Cancer Letters 281, 117–127, doi:10.1016/j.canlet.2008.11.008 (2009). 9. 9. Lahiry, P., Torkamani, A., Schork, N. J. & Hegele, R. A. Kinase mutations in human disease: interpreting genotype-phenotype relationships. Nat Rev Genet 11, 60–74, http://www.nature.com/nrg/journal/v11/n1/suppinfo/nrg2707_S1.html (2010). 10. 10. Izarzugaza, J., Redfern, O., Orengo, C. & Valencia, A. Cancer-associated mutations are preferentially distributed in protein kinase functional sites. Proteins 77, 892–903 (2009). 11. 11. Torkamani, A., Kannan, N., Taylor, S. S. & Schork, N. J. 
Congenital disease SNPs target lineage specific structural elements in protein kinases. Proc Natl Acad Sci USA 105, 9011–9016, doi:10.1073/pnas.0802403105 (2008). 12. 12. Torkamani, A. & Schork, N. J. Accurate prediction of deleterious protein kinase polymorphisms. Bioinformatics 23, 2918–2925, doi:10.1093/bioinformatics/btm437 (2007). 13. 13. Torkamani, A. & Schork, N. J. Distribution analysis of nonsynonymous polymorphisms within the human kinase gene family. Genomics 90, 49–58, doi:10.1016/j.ygeno.2007.03.006 (2007). 14. 14. Torkamani, A. & Schork, N. J. Prediction of Cancer Driver Mutations in Protein Kinases. Cancer Research 68, 1675–1682, doi:10.1158/0008-5472.can-07-5283 (2008). 15. 15. ManChon, U., Talevich, E., Katiyar, S., Rasheed, K. & Kannan, N. Prediction and prioritization of rare oncogenic mutations in the cancer Kinome using novel features and multiple classifiers. PLoS Comput Biol 10, e1003545, doi:10.1371/journal.pcbi.1003545 (2014). 16. 16. Pons, T. et al. KinMutRF: a random forest classifier of sequence variants in the human protein kinase superfamily. BMC genomics 17, 207–217, doi:10.1186/s12864-016-2723-1 (2016). 17. 17. McSkimming, D. I. et al. KinView: a visual comparative sequence analysis tool for integrated kinome research. 12, 3651–3665 (2016). 18. 18. Dixit, A. & Verkhivker, G. M. Hierarchical modeling of activation mechanisms in the ABL and EGFR kinase domains: thermodynamic and mechanistic catalysts of kinase activation by cancer mutations. PLoS Comput Biol 5, e1000487, doi:10.1371/journal.pcbi.1000487 (2009). 19. 19. Dixit, A. et al. Sequence and structure signatures of cancer mutation hotspots in protein kinases. PLoS One 4, e7485, doi:10.1371/journal.pone.0007485 (2009). 20. 20. Dixit, A., Torkamani, A., Schork, N. J. & Verkhivker, G. Computational Modeling of Structurally Conserved Cancer Mutations in the RET and MET Kinases: The Impact on Protein Structure, Dynamics, and Stability. Biophysical Journal 96, 858–874, doi:10.1016/j.bpj.2008.10.041 (2009). 21. 21. Dixit, A. & Verkhivker, G. M. The energy landscape analysis of cancer mutations in protein kinases. PLoS One 6, e26071, doi:10.1371/journal.pone.0026071 (2011). 22. 22. Dixit, A. & Verkhivker, G. M. Structure-Functional Prediction and Analysis of Cancer Mutation Effects in Protein Kinases. Computational and Mathematical Methods in Medicine 2014, 24, doi:10.1155/2014/653487 (2014). 23. 23. Olow, A. et al. An Atlas of the Human Kinome Reveals the Mutational Landscape Underlying Dysregulated Phosphorylation Cascades in Cancer. Cancer Res 76, 1733–1745, doi:10.1158/0008-5472.can-15-2325-t (2016). 24. 24. Kumar, R. D., Chang, L. W., Ellis, M. J. & Bose, R. Prioritizing Potentially Druggable Mutations with dGene: An Annotation Tool for Cancer Genome Sequencing Data. PLoS One 8, e67980, doi:10.1371/journal.pone.0067980 (2013). 25. 25. UniProt Consortium. Activities at the Universal Protein Resource (UniProt). Nucleic acids research 42, D191–198, doi:10.1093/nar/gkt1140 (2014). 26. 26. Sievers, F. & Higgins, D. G. Clustal Omega, accurate alignment of very large numbers of sequences. Methods in molecular biology (Clifton, N.J.) 1079, 105–116, doi:10.1007/978-1-62703-646-7_6 (2014). 27. 27. Papadopoulos, J. S. & Agarwala, R. COBALT: constraint-based alignment tool for multiple protein sequences. Bioinformatics 23, (1073–1079 (2007). 28. 28. Edgar, R. C. MUSCLE: multiple sequence alignment with high accuracy and high throughput. Nucleic acids research 32, 1792–1797 (2004). 29. 29. Manning, G., Whyte, D. 
B., Martinez, R., Hunter, T. & Sudarsanam, S. The protein kinase complement of the human genome. Science 298, doi:10.1126/science.1075762 (2002). 30. 30. Cerami, E. et al. The cBio cancer genomics portal: an open platform for exploring multidimensional cancer genomics data. Cancer Discov 2, 401–404, doi:10.1158/2159-8290.cd-12-0095 (2012). 31. 31. Yang, Z. Likelihood ratio tests for detecting positive selection and application to primate lysozyme evolution. Molecular biology and evolution 15, 568–573 (1998). 32. 32. Ostrow, S. L., Barshir, R., DeGregori, J., Yeger-Lotem, E. & Hershberg, R. Cancer Evolution Is Associated with Pervasive Positive Selection on Globally Expressed Genes. PLoS Genetics 10, e1004239, doi:10.1371/journal.pgen.1004239 (2014). 33. 33. Supek, F., Miñana, B., Valcárcel, J., Gabaldón, T. & Lehner, B. Synonymous Mutations Frequently Act as Driver Mutations in Human Cancers. Cell 156, 1324–1335, doi:10.1016/j.cell.2014.01.051 (2014). 34. 34. Kimchi-Sarfaty, C. et al. A” silent” polymorphism in the MDR1 gene changes substrate specificity. Science 315, 525–528 (2007). 35. 35. Gonzalez-Perez, A. & Lopez-Bigas, N. Functional impact bias reveals cancer drivers. Nucleic acids research 40, e169, doi:10.1093/nar/gks743 (2012). 36. 36. Whitlock, M. C. Combining probability from independent tests: the weighted Z-method is superior to Fisher’s approach. J Evol Biol 18, 1368–1373, doi:10.1111/j.1420-9101.2005.00917.x (2005). 37. 37. Benjamini, Y. & Hochberg, Y. Controlling the false discovery rate: a practical and powerful approach to multiple testing. Journal of the Royal Statistical Society. Series B (Methodological) 289–300 (1995). 38. 38. Simonetti, F. L., Tornador, C., Nabau-Moreto, N., Molina-Vila, M. A. & Marino-Buslje, C. Kin-Driver: a database of driver mutations in protein kinases. Database: the journal of biological databases and curation 2014, bau104, doi:10.1093/database/bau104 (2014). 39. 39. Foster, S. A. et al. Activation Mechanism of Oncogenic Deletion Mutations in BRAF, EGFR, and HER2. Cancer Cell 29, 477–493, doi:10.1016/j.ccell.2016.02.010 (2016). 40. 40. Bose, R. et al. Activating HER2 Mutations in HER2 Gene Amplification Negative Breast Cancer. Cancer Discovery 3, 224–237, doi:10.1158/2159-8290.cd-12-0349 (2013). 41. 41. Moore-Smith, L. & Pasche, B. TGFBR1 signaling and breast cancer. Journal of mammary gland biology and neoplasia 16, 89–95, doi:10.1007/s10911-011-9216-2 (2011). 42. 42. Ikushima, H. et al. Autocrine TGF-β Signaling Maintains Tumorigenicity of Glioma-Initiating Cells through Sry-Related HMG-Box Factors. Cell Stem Cell 5, 504–514, doi:10.1016/j.stem.2009.08.018 (2009). 43. 43. Kojima, Y. et al. Autocrine TGF-beta and stromal cell-derived factor-1 (SDF-1) signaling drives the evolution of tumor-promoting mammary stromal myofibroblasts. Proc Natl Acad Sci USA 107, 20009–20014, doi:10.1073/pnas.1013805107 (2010). 44. 44. Craig, A. L. & Hupp, T. R. The regulation of CHK2 in human cancer. Oncogene 23, 8411–8418 (2004). 45. 45. Schwarz, J. K., Lovly, C. M. & Piwnica-Worms, H. Regulation of the Chk2 protein kinase by oligomerization-mediated cis- and trans-phosphorylation. Molecular cancer research: MCR 1, 598–609 (2003). 46. 46. Guo, S., Colbert, L. S., Fuller, M., Zhang, Y. & Gonzalez-Perez, R. R. Vascular endothelial growth factor receptor-2 in breast cancer. Biochimica et biophysica acta 1806, 108–121, doi:10.1016/j.bbcan.2010.04.004 (2010). 47. 47. Kavuri, S. M. et al. HER2 activating mutations are targets for colorectal cancer treatment. 
Cancer Discov 5, 832–841, doi:10.1158/2159-8290.cd-14-1211 (2015). 48. 48. Reindl, C. et al. Point mutations in the juxtamembrane domain of FLT3 define a new class of activating mutations in AML. Blood 107, 3700–3707 (2006). 49. 49. Hirota, S. et al. Gain-of-function mutations of c-kit in human gastrointestinal stromal tumors. Science 279, 577–580 (1998). 50. 50. Kong, B. et al. AZGP1 is a tumor suppressor in pancreatic cancer inducing mesenchymal-to-epithelial transdifferentiation by inhibiting TGF-beta-mediated ERK signaling. Oncogene 29, 5146–5158, doi:10.1038/onc.2010.258 (2010). 51. 51. Antonescu, C. R. et al. KDR Activating Mutations in Human Angiosarcomas are Sensitive to Specific Kinase Inhibitors. Cancer research 69, 7175–7179, doi:10.1158/0008-5472.CAN-09-2068 (2009). 52. 52. Anderson, V. E. et al. CCT241533 is a potent and selective inhibitor of CHK2 that potentiates the cytotoxicity of PARP inhibitors. Cancer Res 71, 463–472, doi:10.1158/0008-5472.can-10-1252 (2011). 53. 53. Gire, V., Roux, P., Wynford-Thomas, D., Brondello, J. M. & Dulic, V. DNA damage checkpoint kinase Chk2 triggers replicative senescence. The EMBO journal 23, 2554–2563, doi:10.1038/sj.emboj.7600259 (2004). 54. 54. Araya, C. L. et al. Identification of significantly mutated regions across cancer types highlights a rich landscape of functional molecular alterations. Nature genetics 48, 117–125, doi:10.1038/ng.3471 (2016). 55. 55. Niu, B. et al. Protein-structure-guided discovery of functional mutations across 19 cancer types. Nature genetics 48, 827–837, doi:10.1038/ng.3586 (2016). 56. 56. Kircher, M. et al. A general framework for estimating the relative pathogenicity of human genetic variants. Nature genetics 46, doi:10.1038/ng.2892 (2014). 57. 57. Kumar, R. D., Swamidass, S. J. & Bose, R. Unsupervised detection of cancer driver mutations with parsimony-guided learning. Nature genetics 48, 1288–1294, doi:10.1038/ng.3658 (2016). 58. 58. Baek, S. H. & Kim, K. I. Emerging Roles of Orphan Nuclear Receptors in Cancer. Annual Review of Physiology 76, 177–195, doi:10.1146/annurev-physiol-030212-183758 (2014). 59. 59. Dorsam, R. T. & Gutkind, J. S. G-protein-coupled receptors and cancer. Nat Rev Cancer 7, 79–94 (2007). ## Acknowledgements Our work was supported by the Alvin J. Siteman Cancer Center, the ‘Ohana Breast Cancer Research Fund, the Foundation for Barnes-Jewish Hospital (to RB), and Canadian Institutes of Health Research (DFS-134967 to RDK). ## Author information R.D.K. and R.B. designed the study. R.D.K. wrote software and performed the analysis. R.D.K. and R.B. wrote the manuscript. Correspondence to Ron Bose. ## Ethics declarations ### Competing Interests The authors declare that they have no competing interests. Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
https://byjus.com/chemistry/difference-between-allotropes-and-isomers/
Difference between Allotropes and Isomers

While studying Chemistry, most students encounter terms like Isomers and Allotropes. Although both of these terms sound similar, they are different from each other in nature.

Allotropes

Allotropes are defined as the structural modifications of an element, that is, they are different forms of the same element. This happens due to differences in the bonding and arrangement of the atoms, which form new structures. These structures possess different chemical and physical properties. Examples of allotropes include graphite and diamond. In general, allotropes occur in certain elements of Groups 13, 14, 15 and 16 of the Periodic Table. It should be noted that allotropism of an element can occur only within the same phase (solid, liquid or gas) and thus cannot interchange or transfer to another state or phase. There are a few elements for which the allotropes have different molecular formulas, for example the allotropes of oxygen: dioxygen O2 and ozone O3. Both can exist in all three states, i.e. solid, liquid and gas.

Isomers

Isomers are chemical compounds that have the same molecular formula but different structural formulae. Therefore, although these isomers have different atomic arrangements, they have the same number of atoms. Isomers do not share any common properties unless their functional group is similar, for example 2-methylpropan-1-ol and 2-methylpropan-2-ol. There are two main types of Isomers:

• Structural Isomerism (also known as constitutional isomerism)
• Stereoisomerism (also known as spatial isomerism)

The difference between Allotropes and Isomers is given below:

Basis      | Allotropes                                                                                                            | Isomers
Definition | Different forms of the same single element, arising from different bonding and arrangements of its atoms (sometimes with different molecular formulas). | Chemical compounds that have the same molecular formula but different structural formulae.
Examples   | Graphite and diamond.                                                                                                 | 2-bromopropane and 1-bromopropane.

The above points should provide information on how Allotropes and Isomers differ in characteristics. To read further interesting articles, stay tuned with Byju's.
http://mathhelpforum.com/algebra/91170-rational-functions-approximate-expansions.html
# Thread: Rational functions and approximate expansions

1. ## Rational functions and approximate expansions

Hi, I know how to do normal binomial expansions but I can't remember questions quite like this, which I've just found in my textbook, my exam being in just over a day. The first bit is to show that (1+x)/(2+x) - (1-x)/(2-x) = 2x/(4-x^2). I did that fine. The second bit says 'Hence or otherwise show that for small x, f(x) = 1/2x + 1/8x^3 + 1/32x^5' and I'm stuck. Any help would be much appreciated.

2. Originally Posted by JeWiSh
Hi, I know how to do normal binomial expansions but I can't remember questions quite like this, which I've just found in my textbook, my exam being in just over a day. The first bit is to show that (1+x)/(2+x) - (1-x)/(2-x) = 2x/(4-x^2). I did that fine. The second bit says 'Hence or otherwise show that for small x, f(x) = 1/2x + 1/8x^3 + 1/32x^5' and I'm stuck. Any help would be much appreciated.
Does it really say "for small x f(x)= that"? Or does it say it is approximately that? I would be inclined to note that $\displaystyle \frac{2x}{4- x^2}= \frac{2x}{4(1- \frac{x^2}{4})} = \left(\frac{x}{2}\right)\frac{1}{1- \frac{x^2}{4}}$ and recognize that final fraction as the sum of a geometric series: $\displaystyle \sum_{n=0}^\infty r^n= \frac{1}{1- r}$ (as long as |r| < 1) with $\displaystyle r= \frac{x^2}{4}$. That is, $\displaystyle \frac{2x}{4- x^2}= \frac{x}{2}\sum_{n=0}^\infty\left(\frac{x^2}{4}\right)^n$ as long as $\displaystyle x^2/4< 1$, i.e. |x| < 2. If x is sufficiently small, higher powers of x can be neglected and $\displaystyle \frac{2x}{4- x^2}= \frac{x}{2}\left(1+ \frac{x^2}{4}+ \left(\frac{x^2}{4}\right)^2\right)= \frac{x}{2}+ \frac{x^3}{8}+ \frac{x^5}{32}$ approximately.

3. Yeah it did literally say what I wrote.. Thanks for the help

4. Then it's not true. Even for small values of x those are not equal. For sufficiently small x, they are approximately equal. (Unless, of course, by "small x" they mean "x= 0"!)
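As a quick sanity check on the truncated series (a small sketch added here for illustration, not part of the original thread), one can compare the exact expression with the three-term approximation for a few values of x:

```python
# Compare f(x) = 2x/(4 - x^2) with the truncated series x/2 + x^3/8 + x^5/32.
def exact(x):
    return 2 * x / (4 - x * x)

def series(x):
    return x / 2 + x**3 / 8 + x**5 / 32

for x in (0.01, 0.1, 0.5, 1.0):
    e, s = exact(x), series(x)
    print(f"x={x:<4}: exact={e:.8f}  series={s:.8f}  diff={e - s:.2e}")
```

For x = 0.1 the difference is under 10^-9, while at x = 1 it is already about 0.01, which is exactly the distinction drawn in reply 4.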
http://thepasqualian.com/?tag=traffic
## Posts Tagged ‘traffic’ ### On Optimizing the Traffic Systems of the World Using Markov Chains (How to Predict Traffic Jams) 03 May So it has been a recent dream of mine to optimize the traffic system of my hometown, and of Mexico in general, and even more generally of the world.  I seldom drive, people seem always to be on the cellphone (which, by law apparently, they shouldn't use at all when driving), crossing red lights, and so on.  More tragically in Guadalajara, the civil engineering of bridges and crossways intended to better circulation flow are often done seemingly with little thought (a bridge intended for double circulation is now for single circulation, and to do so they closed the entryway coming this way and modified it to go that way, so that a column splits the throughway in wedge-form and it's infinitely more prone to accidents than otherwise... another bridge is "closed" on full-of-traffic Sundays for bicycles... and another bridge-pass literally zigzags for entry to one major artery of the city at Los Cubos again increasing accidents in the zigzag portion).  There may be people that benefit, don't get me wrong: perhaps those that get the concession to build the bridge in the first place (often the highschooler cousin of the governor or mayor). So as I walk to work, a little office but two blocks from my house, purposely selected to minimize my having to use a car or transport to get there, I have to traverse a street which has been fitted with a light.  It gets chaotic during a time of the day, and jams because of kindergarten schools, some 4 or 5 (which shouldn't be there in a residentially-zoned area).  A cycle of the four installed semaphores, from what I gather, lasts around 2 minutes, and it is apparently optimized so that the heaviest-traffic lane light lasts a little longer than the others.  I imagine this is the case for any crossroad lights, but it is working optimally only at heaviest traffic (and assuming that the heaviest traffic lane at high traffic times is always heavy in traffic);  when there is little traffic, cars still have to wait a better part of the 2 minute cycle for their turn.  In other words, the way the traffic lights are optimized works for a specific time of day and assuming heaviest traffic in a particular direction, which means it's suboptimal in all other cases. So for the better part of a year now, maybe around 8 months, I have been thinking that lights should be adaptive, and of course I realize this is easier said than done.  In fact I have gone to friends at the local government to try to find ways to put my ideas into action, but to no satisfying results (it was actually dismissed outright).  I also tried entering a contest with this traffic idea (and a couple unrelated others).  The contest was called "Iniciativa Mexico," but it turned out to be a silly PR ploy that left many of us contestants further saddened and disillusioned at the state of our country (and questioning the lasting impact of the projects that did win). Since it is my belief that this project could benefit all countries around the world, I'm making it available to the public through my weblog, on a couple conditions: 1.  
That the use of these ideas and the use of ideas derived from these ideas be free, in that they generate no fee, because they are of benefit to the public and our world (in that these ideas intend to reduce CO2 and smog levels, for example), even if this means installation of special equipment on traffic lights and cars in general (such costs should be absorbed by the government without raising taxes for the purpose, by optimization of traffic budgets already in place). 2.  It is the understanding of private companies interested in implementing this system in cities and towns or traffic systems that they will obtain zero profit or will be at a loss in the production of traffic equipment that use these ideas or ideas derived from these.  Likewise, consulting services that use these ideas or ideas derived from these will be at zero profit for the private company, including analyses of traffic that use the following ideas or ideas derived from them. 3.  Effectively, these ideas and ideas derived from these should generate no monetary profit for anyone who wishes to use them, implement them, or otherwise benefit from their functioning or infrastructure anymore than the optimization of the traffic flow and CO2 level and smog level reduction, EXCEPT when optimized-with-these-ideas-or-derived-ideas traffic system is further proven, mathematically and through simulations and to the satisfaction of a majority of a committee of 7 Markov-trained and renown statisticians (who ask no money for this purpose), impossible, and implementation of a fee-system, for example in high-density areas, is a last-resort to regulate heavy traffic. 4. If any company or country or government, local or otherwise, obtains profit from these ideas or derived ideas, other companies or countries or governments should exert pressure so that the program remain free and of benefit to humanity.  The spirit of these clauses is that these ideas and derived ideas benefit the people of a country as a whole without incurring in extra cost to the people.  Private companies give infrastructure and consulting services because they benefit, and likewise for governments: there is no advantage to be taken except that which is beneficial for everyone who uses a car or other transport, in the form of better traffic flow, avoidance of traffic jams, and quicker or optimized movement to their destination. 5.  That being said, I reserve the right to modify the clauses as I see fit, in keeping with my original vision and a spirit of good-naturedness toward humanity and this world. Our Earth needs a little breathing room, and this is my contribution to reducing the carbon dioxide signature, as best I know how currently. We must think of a traffic network of a city as a system. Each car enters a "state" when it enters a new, directed-sense BLOCK (not street: a sense street is a state, a countersense same-street is another, within a block).  We can number the blocks of a city in any way we like, but it's best that we do so in a consistent manner: for example, suppose we are at a crossroads; there are 8 states if all streets are bidirectional, and so, going, say, clockwise, each state receives numbers 1 through 8. 
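To make the numbering concrete, here is one possible encoding in Python (my own illustrative labels, not the author's; any consistent clockwise numbering of the directed blocks works the same way):

```python
# Eight directed street-block "states" around a crossroads of two
# bidirectional streets, numbered clockwise. Labels are illustrative only.
STATES = {
    1: ("north arm", "toward intersection"),
    2: ("north arm", "away from intersection"),
    3: ("east arm", "toward intersection"),
    4: ("east arm", "away from intersection"),
    5: ("south arm", "toward intersection"),
    6: ("south arm", "away from intersection"),
    7: ("west arm", "toward intersection"),
    8: ("west arm", "away from intersection"),
}

def state_number(arm, direction):
    """Return the state number of a directed block, e.g. ('east arm', 'toward intersection') -> 3."""
    return next(k for k, v in STATES.items() if v == (arm, direction))
```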
If the optimal light cycle length is about two minutes (I surmise it so, I'm not sure it is; but the optimality should derive from, for example, psychological considerations on the amount of waiting time that reduces road-rage), let us declare this time as "universal time," in the sense that every two minutes we will monitor the current state of the system by sampling or calculating exactly how many cars are in which state (this will require cars to be fitted with, for example, RFID tags and scanners at the entrance of a street and block; not all cars necessarily, but a substantial sample perhaps). We seek to adjust, at every universal time-step, at an intersection, the proportion of time a light in a particular direction should stay on based on need. This need could be something like a weighted combination of three terms, need = wJ·J + wR·R + wA·A, where J is the possibility of a jam in the current (directed) state, R is the city-set preference (North-South streets may have preference, or toward-downtown streets may have the right-of-way, e.g.), and A is how much the average traversal time in a state deviates from the nominal time implied by the speed limit and the length of the block, corrected for outliers (one could use several techniques to eliminate or down-weight parked vehicles that would otherwise contribute significantly to the waiting time average: as by a filter). Other terms could be added, but these seem to me the most significant.

The first term necessitates that we learn how to predict traffic jams. But note that we know how many cars are in what state every 2 minutes or so. The only thing we have to define is when a traffic jam occurs: perhaps if 80% of the (directed) street block capacity is exceeded. So we need to know what the "working capacity" of such a block is: it's the length of the street divided by the average length of a car (buses and trucks could be excluded, or counted as two cars), times the lanes that actually serve to move traffic: in most instances, two (the third lane may be reserved for parking; even a single parked car in the lane may disrupt flow enough to consider it not useful to move traffic).

So let's therefore define a jam in state i at time t if the number of cars there exceeds 80% of that state's working capacity, V_i(t) > 0.8·M_i. We can also now define the "total working capacity of the system" as the sum of the working capacities over all states, M_total = Σ_i M_i, which is independent of time because it's defined on (directed) street block static properties. Each state now has a working capacity, so create the vector M with all capacity entries ordered by state, and let "m" be the normalized vector: divide all entries by the total working capacity of the system. This vector gives us a threshold of the proportion of cars that each state can afford! If we exceed it the system breaks because the total capacity of each street is exceeded.

Now on to the prediction bit. Since we are sampling the state of the system every two minutes, we may have a history where we can count how many cars are in each state at each time-step. Let's suppose that we can count exactly where all cars are for now: we may have information on how many cars are in states 1-8 at the past universal time-steps t = -1, -2, -3, and so on. We may not have knowledge of t = 0 because that is information of the current ongoing cycle. At any rate, we can form vectors that describe the quantity of cars in each state. In other words, we have the "total occupancy" of the system at any point in the past. Let's take a look at the trajectory of a single car, C1. Such a car may have been at t = -2 in state 3 and at t = -1 in state 1. In other words, we can create the historical progress of C1 by tracing, at each 2-minute interval, its state: C1 historical progress: {…, 3, 1}.
In reality, we don't need the whole historical progress, we just need the t = -2 and t = -1 states, because what we seek to count to produce a Markov chain matrix is how many cars jumped from what state to what state during the latest cycles. For example, say we have several cars and three states:

C1: {1,2}
C2: {2,1}
C3: {1,1}
C4: {3,1}
C5: {1,2}

If we order by entries, we can quickly see how cars in state 1, say, have transitioned (moved) in traffic: 1/3 stayed in the same spot (awaiting the next cycle idly or parked; but let's say that there are no parked cars at the moment, for the sake of simplicity), and 2/3 jumped to state 2. We can do the same for the other states and obtain the following Markov matrix, where the rows are the from-states, ordered in the standard way, and the columns are the states the cars transitioned to, also ordered in the standard way:

P =
[ 1/3  2/3  0 ]
[  1    0   0 ]
[  1    0   0 ]

So if cars move in the same proportion as they did in the previous couple of light cycles, the powers of the matrix represent the proportions of cars at a particular state in the next time-step(s): the second power of the matrix is the current light cycle system-wide state (the proportion of cars we can expect to see in each street-block state), the 3rd power is the next, the 4th power is the next-next, and so on. Let's say we can calculate up to the 11th power, or 10 steps ahead. This is a prediction of the state of the system 20 minutes forward in time! (Recall that a Markov matrix is really a convenient way to describe a system of coupled linear difference equations.)

We can do this for more steps, but now I want to explain what I meant by the J variable in the "need" calculation of the lights. We said that at time t = -1 we can count the number of cars to obtain the vector V(-1), which tells us how many cars are in each state at that point in time, right? If we normalize this vector by summing all entries, then dividing each entry by the total, we obtain the initial state vector v(-1). So now multiplying v(-1) by P gives the predicted proportion across states one cycle on; multiplying by successive powers of P carries the prediction further, and so on. If any entry of the vectors v(-1)·P^k exceeds the corresponding entry of the threshold vector 0.8·m, we have a jam, because that's how we defined a jam (exceeding 80% of the working capacity, the threshold). Indeed the jam is at the state given by that entry, and at the time-step given by the power k.

Since we can now see a jam at any state 20 (say) minutes in advance (on a per-cycle basis of the semaphore lights), we can now quantify the need of a traffic light with respect to potential traffic jams. Say, if the traffic jam is expected in the cycle about to begin, give it a grade of 10. If it's in the following cycle, give it a 9, and so on, until a jam expected 10 cycles out (20 minutes from now) gets a grade of 1. If no jam is expected within those 10 cycles, give it a grade of 0. Since we are grading on a scale of 10, let the variable J be the proportion of the grade over the maximal score. Thus J is higher if a jam is imminent, and lower if it's predicted further out within the next 20 minutes. Each light at an intersection can therefore be given a "jam need" score, which is to say, when will the state it controls exceed 80 percent of the working capacity, i.e., the threshold value.
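Putting the last few paragraphs together, here is a minimal Python sketch (my own, with invented occupancy and capacity numbers) that builds the transition matrix from each sampled car's last two states, propagates the normalized occupancy forward, and grades jam urgency exactly as described: 10 for a jam in the cycle about to begin, down to 1 at ten cycles out, and 0 if none is predicted.

```python
import numpy as np

def transition_matrix(last_two_states, n_states):
    """Estimate P from each sampled car's (previous state, current state) pair."""
    counts = np.zeros((n_states, n_states))
    for prev, curr in last_two_states:
        counts[prev - 1, curr - 1] += 1
    for i in range(n_states):
        if counts[i].sum() == 0:
            counts[i, i] = 1.0          # no cars observed leaving state i: assume they stay put
    return counts / counts.sum(axis=1, keepdims=True)

def jam_grades(v, P, m, horizon=10, threshold=0.8):
    """Grade each state 10 (jam next cycle) down to 1 (jam 10 cycles out); 0 if no jam predicted."""
    grades = np.zeros(len(v))
    state = np.asarray(v, dtype=float)
    for k in range(1, horizon + 1):
        state = state @ P               # predicted proportions k cycles ahead
        newly_jammed = (state > threshold * m) & (grades == 0)
        grades[newly_jammed] = horizon + 1 - k
    return grades

# Toy numbers: the five cars and three states from the example above.
pairs = [(1, 2), (2, 1), (1, 1), (3, 1), (1, 2)]
P = transition_matrix(pairs, n_states=3)   # rows: [1/3, 2/3, 0], [1, 0, 0], [1, 0, 0]
v = np.array([0.6, 0.2, 0.2])              # current proportion of sampled cars per state (invented)
m = np.array([0.5, 0.3, 0.2])              # normalized working capacities (invented)
J = jam_grades(v, P, m) / 10.0             # scaled to [0, 1], as in the text
print(P)
print(J)
```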
How the average traversal time differs from the nominal one, the variable I gleefully assigned the letter A, may be a bit controversial, in the sense that, if you notice, it's not necessarily orthogonal to the jam variable J: J measures, predictively, whether a state or street-block is going to jam a while from now; that is, it measures the accumulation of the proportion of cars in that state after some time period, so it indirectly measures the waiting time there. But explicitly calculating the deviation from the "city-proposed time" it should take to traverse a street-block (this proposed time being the length of the street divided by the speed limit) may give additional, rather than redundant, information about the immediate waiting times (within a couple of traffic light cycles, rather than 10). The stronger the deviation, the greater the need for that light to stay green longer. J also doesn't really account for "empty" streets that may suddenly fill up within a cycle, which A might just catch. So let A be the percent deviation from the proposed time.

How can we calculate that deviation? If each car history contains the additional information of when the car transitioned into a given state (recall each position in the history holds the state, and the order tells us which 2-minute universal interval it belongs to, up to the current one), then we can enrich our knowledge. Assuming universal time ticks every two minutes from the o'clock:

Car 1: {..., 1 at 12:01:00 (uni 12:00:00 - 12:01:59), 2 at 12:02:05 (uni 12:02:00 - 12:03:59)}
Car 2: {..., 1 at 12:05:40 (uni 12:04:00 - 12:05:59), 1 still (uni 12:06:00 - 12:07:59), 1 still (uni 12:08:00 - 12:09:59), 5 at 12:10:00 (uni 12:10:00 - 12:11:59)}

and so on. When the current measurement needs to be done, we look at the most recently completed transition. For Car 1, measured at uni step 12:04:00, that's 1 minute 5 seconds spent in state 1; for Car 2, measured at uni step 12:12:00, it's 4 minutes 20 seconds in state 1. We must look at actual transitions for this, since we are looking for the time it takes to "jump" to another state. There is the issue of outlier influence, as from parked cars, but I suppose there are statistical methods for that: robust statistics such as the median time, or other Winsorized or trimmed estimators.

Once we calculate the need for each traffic light at an intersection (say 4 of them), we need only assign each light a proportional amount of green time. I assume a circular transition: a green light means go forward, left, right, or U-turn, and no other light at the intersection can be green at the same time. If we weigh each variable J, A, R (jar-jar!) equally, then each light has need = (J + A + R) / 3, which ranges between zero and one (since each variable does). The proportion of a cycle a particular light should be green is thus p_i = need_i / (need_1 + need_2 + ... + need_n), where n is the number of traffic lights at the intersection, usually 4. Higher-need lights stay green longer (and a light could conceivably stay green for more than one cycle). This has the happy consequence of raising the probability that a car finds green lights along its trajectory when: 1) a jam is forecast, so as to help avoid it; 2) the car is on a right-of-way street; and/or 3) there's a relatively short-lived blockage holding up the flow.
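And a short sketch of the need and green-time split, again with made-up numbers. The median stands in for the robust estimator mentioned above; 13.9 is roughly 50 km/h in meters per second; and clipping A to [0, 1] is my own assumption, so that the three variables stay on the same scale.

```python
import statistics

def deviation_score(observed_traversal_s, block_length_m, speed_limit_mps):
    """A: relative deviation of the typical traversal time from the
    'city-proposed' time = length / speed limit, clipped to [0, 1]."""
    proposed = block_length_m / speed_limit_mps
    typical = statistics.median(observed_traversal_s)  # median blunts parked-car outliers
    return max(0.0, min(1.0, (typical - proposed) / proposed))

def need(J, A, R):
    """Equal-weight need score in [0, 1] (jar-jar!)."""
    return (J + A + R) / 3.0

def green_proportions(needs):
    """Fraction of the cycle each of the n lights at an intersection should be green."""
    total = sum(needs)
    n = len(needs)
    return [1.0 / n] * n if total == 0 else [x / total for x in needs]

# Invented numbers for a 4-light intersection.
lights = [
    need(J=1.0, A=deviation_score([65, 70, 300], 120, 13.9), R=1),  # jam imminent, right-of-way
    need(J=0.2, A=deviation_score([10, 12, 11], 120, 13.9), R=0),
    need(J=0.0, A=deviation_score([9, 10, 9], 150, 13.9), R=0),
    need(J=0.5, A=deviation_score([40, 45, 500], 200, 13.9), R=1),
]
print(green_proportions(lights))
```

If all needs happen to be zero, the sketch falls back to an even split, which seems like the sensible default.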
Having the ability to forecast jams gives us the tremendous ability to alert drivers that one may be coming and to convey to them, from BLOCKS AWAY, that they may want to choose an alternative route. Personally, I would add informative LEDs under every light that say "jam ahead," "jam east," "jam west." The Markov matrix is recalculated, of course, at every 2-minute interval, so the information is updated continually.

This sort of analysis with a Markov transition matrix has several cool consequences beyond those stated here. Of course there aren't lights at every intersection, but inspection of the system could reveal systematic jam accumulation at intersections without lights. One could then install a traffic light at such an intersection or, better yet, run a simulation of what would happen if a light were there. Perhaps the light is not needed at exactly that intersection but nearby, thereby relieving jams at other intersections down the road. In any case, the Markov transition matrix lets us simulate "what if" scenarios in flexible ways (a rough sketch of this appears at the end of the post). Another example: one now has criteria to evaluate whether speed bumps are necessary on certain streets, since their net effect is to slow traffic so that proportionally more cars remain in a particular state or street-block; one can now tell whether adding or removing a bump on a street will have positive or detrimental effects on the system overall. It could be that more than one is needed! In yet another example, it is often claimed that adding a roundabout (traffic circle, rotary) makes traffic more efficient. Segmenting the roundabout into states can show this to be true or false within a given traffic system; it may be true in some places and false in others. Adding a bridge or an extra traffic level to relieve congestion can also be simulated, by adding the corresponding states to the Markov matrix: will it actually improve the flow, or make it worse? This method gives us the systematic analysis that traffic needs in many cities around the world.

The traffic system as a whole can now behave adaptively, in the sense that each light competes with the others at its intersection for "green time," and this depends largely on the time of day (more or fewer cars) and on the cars' aggregate movement around the city. The adaptation of the system-wide traffic lights to need gives them flexibility, and the system becomes a moving "organism" that, by adapting to the traffic, moves cars around more efficiently than the current system. Dynamic programming methods and theorems suggest that, by optimizing each intersection and traffic cycle individually, we arrive at the globally optimal flow of traffic system-wide.

If this method of analyzing traffic reminds you of quantum mechanics, do tell. We move proportions around in this method, but they correlate, in aggregate, with actual physical objects (cars). I think this is very cool and wonder whether we could apply it to other subjects where flow across a network is important (but I'm not too learned about networks in general :( ).

In a future post, I may put up an Excel sheet with an 8-state and a 22-state system based on the blocks near my house, to make things a bit clearer.
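In the meantime, here is the kind of "what if" experiment mentioned above, sketched in Python: perturb the transition matrix to mimic an intervention (say, a speed bump that makes more cars linger in a state) and compare the predicted occupancies with and without it. The perturbation rule and all the numbers are invented for illustration; a real study would estimate the modified matrix from data or from a proper micro-simulation.

```python
import numpy as np

def occupancy_trajectory(v0, P, steps=10):
    """Predicted per-state proportions for `steps` cycles ahead, one row per cycle."""
    v = np.array(v0, dtype=float)
    out = []
    for _ in range(steps):
        v = v @ P
        out.append(v.copy())
    return np.array(out)

def with_speed_bump(P, state, extra_stay=0.2):
    """Crude what-if: a speed bump in `state` makes more cars stay put for a cycle.
    Shift `extra_stay` of the row's probability mass onto the diagonal and renormalize
    so the row stays stochastic."""
    Q = P.copy()
    Q[state] = Q[state] * (1 - extra_stay)
    Q[state, state] += extra_stay
    Q[state] /= Q[state].sum()
    return Q

# The 3-state matrix and an invented initial vector from the earlier sketches.
P = np.array([[1/3, 2/3, 0.0],
              [1.0, 0.0, 0.0],
              [1.0, 0.0, 0.0]])
v0 = np.array([0.6, 0.2, 0.2])

baseline = occupancy_trajectory(v0, P)
bumped = occupancy_trajectory(v0, with_speed_bump(P, state=0))
print(baseline[-1], bumped[-1])  # compare predicted occupancy with and without the bump
```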
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 25, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5726586580276489, "perplexity": 1373.914901138698}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-42/segments/1414119648706.40/warc/CC-MAIN-20141024030048-00141-ip-10-16-133-185.ec2.internal.warc.gz"}