| url | text | date | metadata |
|---|---|---|---|
https://chemistry.stackexchange.com/questions/98252/how-do-i-make-a-zmat-or-xyz-file-from-scratch
|
# How do I make a ZMAT or XYZ file from scratch?
From a structure representation like the one below for Bacteriochlorophyll A, how do I make the ZMAT or XYZ file? The closest similar question asked here was about converting a PDB entry into XYZ, but what I have here is not in PDB format, it is just the image of the molecular structure, reported in a paper.
• PDB format per se is not related to proteins, other than historically. (Yes, I know what the "P" stands for.) You may have pretty much any molecule in PDB format. Do you have yours or not? If no, then the referenced question is of no use. In that case start with drawing your structure in some molecular editor, then click "Export as..." and pray. – Ivan Neretin Jun 13 '18 at 19:04
• @IvanNeretin: I do not have any "molecular editor", is there a free one that works well? I downloaded MOLDEN but it is old fashioned and seems hard to use. Do I have to draw this entire ~120 atom structure, or can I just drag and drop the image that I linked to? – user1271772 Jun 13 '18 at 19:14
• See if there is a crystal structure (or something similar). You can get coordinates from that (XYZ) and then tweak by hand in Molden. It saves a lot of time if you have access to a crystal structure database (CCDC) and an appropriate database program (such as ConQuest). – LordStryker Jun 13 '18 at 19:27
• @user1271772 I doubt that is possible. It's unlikely that there is a program that can determine the correct bond distances, angles, etc., even approximately, just from an image file. But I don't think the structure would be too difficult to make in free software like Avogadro. – Tyberius Jun 13 '18 at 19:48
• I don't know of any way to drag/drop a ChemDraw image to get XYZ coordinates. If there is a way, someone let me know ASAP please. – LordStryker Jun 13 '18 at 19:50
I know of three ways to do this, but in any case it's quite a bit of work, since I don't know of any software that can read molecular structures from pictures. The second and third ways use freeware only.
1. Draw the structure in Chem3D. For whatever stupid reason Chem3D doesn't support xyz files but you can save as *.pdb. If you need *.xyz then:
• save as Gaussian input
• delete the first 3 lines
• Put the total number of atoms in the resulting first line
• remove any "atoms" labeled as LP (lone pair, Chem3D uses them for force field optimization)
• remove the second column next to the elemental symbol (if you use Notepad++ you can select a column by holding ALT)
• change file extension
2. Use openbabel 2.2.0 or higher to convert SMILES into *.xyz or *.pdb using the "gen3d" keyword (an example command is shown after this list). Of course, for this you need the SMILES, so you either have to produce it yourself or draw the structure in software that can generate SMILES, such as, again, Chem3D or this free website
3. It turns out this NIH website can also produce 3D structures, in pdb format among others, directly from SMILES or from a structure drawn in their editor.
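For reference, the Open Babel conversion in option 2 is typically a one-liner like the following (the file names are placeholders, and the exact flags may vary slightly between Open Babel versions):
obabel -ismi structure.smi -oxyz -Ostructure.xyz --gen3d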
• "Bacteriochlorophyl a" has the formula C$_{55}$H$_{74}$MgN$_{4}$O$_6$ – user1271772 Jun 13 '18 at 21:32
• Ok, about 70 "heavy" atoms isn't actually that much; an experienced user could draw that within a few minutes. My answer was also about how to solve this for any random structure. In this case there are easier ways: you can go to Wikipedia, open the page for that compound, and copy the SMILES from there. – DSVA Jun 13 '18 at 21:55
• @user1271772 I tried it. I agree that this one can be tricky due to the coordinated Mg but if you do it without the Mg and insert it afterwards it works very well. Here's a PM6 optimized structure: textuploader.com/dpy6q – DSVA Jun 13 '18 at 22:49
• 55+74+1+4+6 = 140, so why does it say 135 at the top of your XYZ? Also I am very grateful for you spending time on making this XYZ for me. I also want to learn how to do it myself so that in the future I do not need to ask people for help. – user1271772 Jun 13 '18 at 23:11
• Avogadro (free) does support xyz file types, so using Avogadro saves you a step. Building your molecule with Avogadro should not take more than 15-20 minutes. Avogadro also offers many different templates for common structures, such as amino acids, vitamins, and common organic moieties. However, if you end up doing anything "manually", then the bond lengths and bond angles will probably end up a little wonky. You can perform (in Avogadro) a geometry optimization with real-time visual feedback using simple force fields, but the final geometry will of course not be extremely accurate. – Yoda Jun 14 '18 at 10:52
Building such a large structure can be a nasty task, and since it is quite large, you can easily lose track of parts of it. You should try a couple of molecular editors and find something you are comfortable with; then editing molecules becomes routine, and even a large one like your example is less tedious.
Apart from this very general advice, here are some additional tips. As DSVA states, there are some more or less tedious ways to do it. The most error-prone is obviously trying to build the structure one atom at a time. That is not only hard work, it also consumes a lot of time. Using smaller (pre-optimised) fragments can help a lot. Obviously you'll need a molecular editor capable of merging/joining structures.
Probably the best way to build a structure from scratch is to start from something you know, like a literature-known structure of a derivative, or a substructure from a database. Unfortunately, conversion from InChI or SMILES with Open Babel does not always work as you might intend, especially in cases where the SMILES or InChI becomes ambiguous.
Let's take a look at your example: Bacteriochlorophyll A. It is easy enough to find on PubChem, where they provide you with an InChI, and some SMILES.
If you try to convert the following to a 3D structure with openbabel 2.4.1, you will get an error, and I assume this is due to the fact that there are essentially two entities stored in the InChI (the dot . separates those entities).
InChI=1S/C55H75N4O6.Mg/c1-13-39-34(7)41-29-46-48(38(11)60)36(9)43(57-46)27-42-35(8)40(52(58-42)50-51(55(63)64-12)54(62)49-37(10)44(59-53(49)50)28-45(39)56-41)23-24-47(61)65-26-25-33(6)22-16-21-32(5)20-15-19-31(4)18-14-17-30(2)3;/h25,27-32,34-35,39-40,51H,13-24,26H2,1-12H3,(H-,56,57,58,59,60,62);/q-1;+2/p-1/b33-25+;/t31-,32-,34-,35+,39-,40+,51-;/m1./s1
I have executed open babel with the following command, where the above InChI was stored in structure.inchi
obabel -iinchi structure.inchi -oxyz -Ostructure.xyz --gen3d
The result is breathtakingly awful, and I am probably going to file a bug-report later on. The full output is:
*** Open Babel Warning in InChI code
For InChI=1S/C55H75N4O6.Mg/c1-13-39-34(7)41-29-46-48(38(11)60)36(9)43(57-46)27-42-35(8)40(52(58-42)50-51(55(63)64-12)54(62)49-37(10)44(59-53(49)50)28-45(39)56-41)23-24-47(61)65-26-25-33(6)22-16-21-32(5)20-15-19-31(4)18-14-17-30(2)3;/h25,27-32,34-35,39-40,51H,13-24,26H2,1-12H3,(H-,56,57,58,59,60,62);/q-1;+2/p-1/b33-25+;/t31-,32-,34-,35+,39-,40+,51-;/m1./s1
Problems/mismatches: Mobile-H( Hydrogens: Number; Mobile-H groups: Number; Charge(s): Do not match)
1 molecule converted
The magnesium ends up somewhere it does not belong, and the porphyrin ring is all crooked; that does not help at all. (I have also tried SMILES, but that did not work any better.) In some cases you might be luckier, so that approach is something to keep in mind.
I tried the NIH website as DSVA suggested, and apart from the magnesium ending up in an odd place, that worked reasonably well. (If you try this approach then you can skip the next sections.)
If you have access to the ChemOffice suite, things may become a little easier. Copy the InChI into the paste buffer, open ChemDraw, choose Edit > Paste Special > InChI. You will hopefully then see something like the Lewis-like structure from your post. You will need to clean it up a little; you should definitely delete the metal. Now copy the cleaned structure into the paste buffer and open Chem3D. On the right side you should have the ChemDraw LiveLink, where you can paste your structure. This will create a 3D image on the main screen. I have tried it with the example molecule and it worked reasonably well. (I can only recommend this for organic molecules.) You can already use the implemented force field to optimise it (and you should). Depending on what molecule you are doing, there might be parameters missing (mostly for metals and very heavy atoms), wrongly applied charges, or other problems. It is basically trial and error and might not always lead to success.
Save the resulting file as something another molecular editor can open. Since ChemOffice handles even common file formats badly, I recommend pdb as the best (still bad) option. (I should really try MarvinSketch or similar software.) It still produces plenty of warnings in Open Babel, which can mysteriously still recover most of it, and you'll have a somewhat reasonable structure to work from.
I was unable to open the pdb directly with molden (unrecognised elements, probably bad formatting), but it was no problem reading it with Chemcraft (payware). Since Avogadro (open source) works on top of open babel, I assume it should be able to read the file, too.
If you edit molecules a lot I recommend spending some money on Chemcraft (that is my personal preference, but I am not affiliated with it), in my experience it is one of the best molecular editors. It has a rather extended fragments database (which you can customise), that may even take care of 'clicking together' a rather huge molecule. It comes with some more nice features, but things like that are better discussed in chat. (I have used it occasionally on the site, so you might find some interesting showcases here, too.) Some people love their GaussView, but I simply cannot see the appeal of that program.
Obviously you can use Open Babel to transform even the garbage pdb to an xyz file, which you can then view/edit with Molden. Unfortunately the zmat interface of Molden is quite tedious, but it comes with a few pre-built fragments, which makes extending molecules easier. The most tedious thing about that program is that it does not support drag and drop. In the ZMAT Editor there is an option called 'substitute atom by fragment'; I suggest you give that a look. (Chain-like organics can be easily prepared with that, starting from methane.)
I believe Avogadro also has a fairly sophisticated molecular editing tool, however I currently have no access to it, so I can't really comment.
There is no harm in trying a few molecular editors and sticking with what works best for you.
Eventually you should have a somewhat good structure to start a calculation on. Keep in mind that whatever you created is only one conformation and likely not the minimum structure. You should use conformation generators to test whether what you have is good or bad.
I have developed a preference for GFN2-xTB from Stefan Grimme's group (a semi-empirical program package capable of optimising large structures) for these tasks.
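If you want to try that, a minimal xtb geometry optimisation is usually invoked with something like the line below (the file name is a placeholder, and the exact flags depend on the xtb version you have installed):
xtb structure.xyz --opt --gfn 2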
|
2019-10-24 04:40:11
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.439544677734375, "perplexity": 1379.5198590839377}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570987841291.79/warc/CC-MAIN-20191024040131-20191024063631-00259.warc.gz"}
|
https://www.aimsciences.org/article/doi/10.3934/jimo.2021072
|
# Hedging strategy for unit-linked life insurance contracts with self-exciting jump clustering
• * Corresponding author: Linyi Qian
This work was supported by the Humanity and Social Science Youth Foundation of the Ministry of Education of China (18YJC910012), the National Natural Science Foundation of China (11771147, 12071147), the "Shuguang Program" supported by Shanghai Education Development Foundation and Shanghai Municipal Education Commission (18SG25), the State Key Program of National Natural Science Foundation of China (71931004), the Discovery Early Career Researcher Award (DE200101266) of the Australian Research Council, the Zhejiang Provincial Natural Science Foundation of China (LY17G010003), the 111 Project (B14019), Ningbo City Natural Science Foundation (202003N4144) and the Humanity and Social Science Foundation of Ningbo University (XPYB19002).
• This paper studies the hedging problem of unit-linked life insurance contracts in an incomplete market in the presence of a self-exciting (clustering) effect, which is described by a Hawkes process. Applying the local risk-minimization method, we obtain closed-form expressions for the locally risk-minimizing hedging strategies for both pure endowment and term insurance contracts. In addition, we demonstrate the existence of the minimal martingale measure and perform numerical analyses. Our numerical results indicate that jump clustering has a significant impact on the optimal hedging strategies.
Mathematics Subject Classification: Primary: 91B05, 91B16; Secondary: 60H30.
• Figure 1. Paths of intensity process $\lambda_t$ and stock price process $S_t$
Figure 2. Number of stock $\xi_{t}^{PE\ast}$ with different strike prices $K$
Figure 3. Effects of the jump size $Z$ on $\xi_{0}^{PE\ast}$ and $\Delta_{0}$. We assume the jump size $Z_j\in U(-0.1, 0.1)$ and $Z_j\in U(-0.5, 0.5)$ in the left panel and the right panel of Figure 3, respectively
Figure 4. Effects of the intensity $\lambda$ on $\xi_{0}^{PE\ast}$
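(Aside, not part of the article page: for readers unfamiliar with Hawkes processes, the self-exciting intensity $\lambda_t$ plotted in Figures 1 and 4 is typically of the exponentially decaying form
$$\lambda_t = \lambda_\infty + (\lambda_0-\lambda_\infty)e^{-\delta t} + \sum_{T_i<t} Z_i\, e^{-\delta (t-T_i)},$$
where each jump of size $Z_i$ arriving at time $T_i$ temporarily raises the intensity, which is what produces jump clustering. The notation here is generic and need not match the paper's.)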
|
2023-03-25 16:35:08
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 1, "x-ck12": 0, "texerror": 0, "math_score": 0.3915856182575226, "perplexity": 3891.0806688052685}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296945368.6/warc/CC-MAIN-20230325161021-20230325191021-00304.warc.gz"}
|
https://quant.stackexchange.com/questions/47581/expected-return-on-stock
|
# Expected Return on Stock
Suppose we have the following information on stocks $$X$$, $$Y$$, and $$Z$$:
1. Expected Returns: $$E(R_X)=10\%$$, $$E(R_Y)=12\%$$.
2. Standard Deviations: $$\sigma_X=10\%$$, $$\sigma_Y=15\%$$, $$\sigma_Z=10\%$$
3. Pairwise Correlations: $$\rho_{XY}=0$$, $$\rho_{XZ}=0$$, $$\rho_{YZ}=0.5$$.
Assume that the CAPM holds and that the market portfolio consists of the above three stocks weighted equally. Find the expected return of Stock Z.
Attempt: We can first get $$\sigma_M$$ by using the formula for the variance of a three-asset portfolio. Then, from there, we can solve for the $$\beta$$ of each stock using $$\beta=\frac{\text{Cov}(R_i,R_M)}{\sigma_M^2}$$. However, I'm not sure how to compute the correlation between the stock return and the market return.
To compute the correlation between the stock return (let us say $$R_X$$) and the market return $$R_M$$, you just write:
$$\rho_{R_X,R_M} = \frac{Cov(R_X,R_M) }{\sigma_{R_X}\sigma_{R_M}} = \frac{Cov(R_X, \frac{1}{3}R_X + \frac{1}{3}R_Y + \frac{1}{3}R_Z ) }{\sigma_{R_X}\sigma_{R_M}}$$
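For completeness (this expansion is implicit in the step above), bilinearity of the covariance together with the given pairwise correlations yields
$$Cov(R_X,R_M) = \frac{1}{3}\left(\sigma_X^2 + \rho_{XY}\sigma_X\sigma_Y + \rho_{XZ}\sigma_X\sigma_Z\right),$$
and analogously for $$Y$$ and $$Z$$; the same expansion applied to $$Var(R_M)$$ gives the market variance.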
Once you get the market variance and the $$\beta$$, you just write the CAPM formula for $$X$$ and $$Y$$:
$$E[R_X] = R_{rf} + \beta_{X,M}(E[R_M] - R_{rf})$$
$$E[R_Y] = R_{rf} + \beta_{Y,M}(E[R_M] - R_{rf})$$
You then have two equations with two unknowns ($$R_{rf}$$ and $$E[R_M]$$). Solve them, and you can finally get $$E[R_Z]$$ easily from $$E[R_M]$$, since $$E[R_M]=\frac{1}{3}\left(E[R_X]+E[R_Y]+E[R_Z]\right)$$.
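As a purely illustrative check (this snippet is mine, not part of the original answer), the whole procedure can be carried out numerically with the figures given in the question:

```python
import numpy as np

# Given data: standard deviations and pairwise correlations of X, Y, Z
sigma = np.array([0.10, 0.15, 0.10])
corr = np.array([[1.0, 0.0, 0.0],
                 [0.0, 1.0, 0.5],
                 [0.0, 0.5, 1.0]])
cov = corr * np.outer(sigma, sigma)   # covariance matrix of the three stocks
w = np.full(3, 1/3)                   # equally weighted market portfolio

var_m = w @ cov @ w                   # market variance
betas = (cov @ w) / var_m             # beta_i = Cov(R_i, R_M) / Var(R_M)

# CAPM for X and Y: two linear equations in (R_f, market risk premium)
A = np.array([[1.0, betas[0]],
              [1.0, betas[1]]])
b = np.array([0.10, 0.12])
r_f, premium = np.linalg.solve(A, b)

e_rz = r_f + betas[2] * premium
print(round(r_f, 4), round(e_rz, 4))  # about 0.09 and 0.1075
```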
|
2022-01-20 05:26:02
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 26, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9702203273773193, "perplexity": 432.09049962330573}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320301720.45/warc/CC-MAIN-20220120035934-20220120065934-00050.warc.gz"}
|
https://www.snapxam.com/problems/78384582/secx-sin2x-sinx-1cos2x-cosx?method=142
|
# Step-by-step Solution
## Step-by-step explanation
Problem to solve:
$\sec\left(x\right)=\frac{\sin\left(2x\right)}{\sin\left(x\right)}-\frac{\cos\left(2x\right)}{\cos\left(x\right)}$
1. Multiplying the fraction by $-1$:
$\sec\left(x\right)=\frac{\sin\left(2x\right)}{\sin\left(x\right)}+\frac{-\cos\left(2x\right)}{\cos\left(x\right)}$
2. Using the sine double-angle identity $\sin\left(2\theta\right)=2\sin\left(\theta\right)\cos\left(\theta\right)$:
$\sec\left(x\right)=\frac{2\sin\left(x\right)\cos\left(x\right)}{\sin\left(x\right)}+\frac{-\cos\left(2x\right)}{\cos\left(x\right)}$
3. Simplifying the fraction $\frac{2\sin\left(x\right)\cos\left(x\right)}{\sin\left(x\right)}$ by $\sin\left(x\right)$:
$\sec\left(x\right)=2\cos\left(x\right)+\frac{-\cos\left(2x\right)}{\cos\left(x\right)}$
4. Combining $2\cos\left(x\right)+\frac{-\cos\left(2x\right)}{\cos\left(x\right)}$ into a single fraction:
$\sec\left(x\right)=\frac{-\cos\left(2x\right)+2\cos\left(x\right)\cos\left(x\right)}{\cos\left(x\right)}$
5. When multiplying two powers with the same base ($\cos\left(x\right)$), we add the exponents:
$\sec\left(x\right)=\frac{-\cos\left(2x\right)+2\cos\left(x\right)^2}{\cos\left(x\right)}$
6. Applying the cosine double-angle identity $\cos\left(2\theta\right)=1-2\sin\left(\theta\right)^2$:
$\sec\left(x\right)=\frac{-\left(1-2\sin\left(x\right)^2\right)+2\cos\left(x\right)^2}{\cos\left(x\right)}$
7. Expanding the product $-(1-2\sin\left(x\right)^2)$:
$\sec\left(x\right)=\frac{-1+2\sin\left(x\right)^2+2\cos\left(x\right)^2}{\cos\left(x\right)}$
8. Applying the Pythagorean identity $\sin^2\left(\theta\right)+\cos^2\left(\theta\right)=1$:
$\sec\left(x\right)=\frac{1}{\cos\left(x\right)}$
9. Applying the trigonometric identity $\displaystyle\sec\left(\theta\right)=\frac{1}{\cos\left(\theta\right)}$:
$\sec\left(x\right)=\sec\left(x\right)$
10. Since both sides of the equality are equal, the identity is proven.
### Main topic:
Trigonometric Identities
### Time to solve it:
~ 0.08 s (SnapXam)
|
2020-10-31 18:49:44
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9610639214515686, "perplexity": 7582.520440006977}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107922411.94/warc/CC-MAIN-20201031181658-20201031211658-00612.warc.gz"}
|
https://im.kendallhunt.com/MS_ACC/students/2/6/14/index.html
|
# Lesson 14
Volume of Right Prisms
Let’s look at volumes of prisms.
### 14.1: Three Prisms with the Same Volume
Rectangles A, B, and C represent bases of three prisms.
1. If each prism has the same height, which one will have the greatest volume, and which will have the least? Explain your reasoning.
2. If each prism has the same volume, which one will have the tallest height, and which will have the shortest? Explain your reasoning.
### 14.2: Finding Volume with Cubes
This applet has 64 snap cubes, all sitting in the same spot on the screen, like a hidden stack of blocks. You will always know where the stack is because it sits on a gray square. You can keep dragging blocks out of the pile by their red points until you have enough to build what you want.
Click on the red points to change from left/right movement to up/down movement.
There is also a shape on the grid. It marks the footprint of the shapes you will be building.
1. Using the face of a snap cube as your area unit, what is the area of the shape? Explain or show your reasoning.
2. Use snap cubes to build the shape from the paper. Add another layer of cubes on top of the shape you have built. Describe this three-dimensional object.
4. Right now, your object has a height of 2. What would the volume be
1. if it had a height of 5?
2. if it had a height of 8.5?
### 14.3: Can You Find the Volume?
The applet has a set of three-dimensional figures.
1. For each figure, determine whether the shape is a prism.
2. For each prism:
1. Find the area of the base of the prism.
2. Find the height of the prism.
3. Calculate the volume of the prism.
| Is it a prism? | area of prism base (cm²) | height (cm) | volume (cm³) |
|---|---|---|---|
• Begin by grabbing the gray bar on the left and dragging it to the right until you see the slider.
• Choose a figure using the slider.
• Rotate the view using the Rotate 3D Graphics tool marked by two intersecting, curved arrows.
• Note that each polyhedron has only one label per unique face. Where no measurements are shown, the faces are identical copies.
• Use the distance tool, marked with the "cm," to click on any segment and find the height or length.
• Troubleshooting tip: the cursor must be on the 3D Graphics window for the full toolbar to appear.
Imagine a large, solid cube made out of 64 white snap cubes. Someone spray paints all 6 faces of the large cube blue. After the paint dries, they disassemble the large cube into a pile of 64 snap cubes.
1. How many of those 64 snap cubes have exactly 2 faces that are blue?
2. What are the other possible numbers of blue faces the cubes can have? How many of each are there?
3. Try this problem again with some larger-sized cubes that use more than 64 snap cubes to build. What patterns do you notice?
### 14.4: What’s the Prism’s Height?
There are 4 different prisms that all have the same volume. Here is what the base of each prism looks like.
1. Order the prisms from shortest to tallest. Explain your reasoning.
2. If the volume of each prism is 60 units³, what would be the height of each prism?
3. For a volume other than 60 units³, what could be the height of each prism?
4. Discuss your thinking with your partner. If you disagree, work to reach an agreement.
### Summary
Any cross section of a prism that is parallel to the base will be identical to the base. This means we can slice prisms up to help find their volume. For example, if we have a rectangular prism that is 3 units tall and has a base that is 4 units by 5 units, we can think of this as 3 layers, where each layer has $$4\boldcdot 5$$ cubic units.
That means the volume of the original rectangular prism is $$3(4\boldcdot 5)$$ cubic units.
This works with any prism! If we have a prism with height 3 cm that has a base of area 20 cm², then the volume is $$3\boldcdot 20$$ cm³ regardless of the shape of the base. In general, the volume of a prism with height $$h$$ and base area $$B$$ is
$$\displaystyle V = B \boldcdot h$$
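For instance, with made-up numbers (not the prisms pictured in this lesson), the formula can also be rearranged to find a missing height:
$$\displaystyle h = \frac{V}{B} = \frac{60 \text{ units}^3}{12 \text{ units}^2} = 5 \text{ units}$$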
For example, these two prisms both have a volume of 100 cm³.
### Glossary Entries
• base (of a prism or pyramid)
The word base can also refer to a face of a polyhedron.
A prism has two identical bases that are parallel. A pyramid has one base.
A prism or pyramid is named for the shape of its base.
• cone
A cone is a three-dimensional figure like a pyramid, but the base is a circle.
• cross section
A cross section is the new face you see when you slice through a three-dimensional figure.
For example, if you slice a rectangular pyramid parallel to the base, you get a smaller rectangle as the cross section.
• cylinder
A cylinder is a three-dimensional figure like a prism, but with bases that are circles.
• prism
A prism is a type of polyhedron that has two bases that are identical copies of each other. The bases are connected by rectangles or parallelograms.
Here are some drawings of prisms.
• pyramid
A pyramid is a type of polyhedron that has one base. All the other faces are triangles, and they all meet at a single vertex.
Here are some drawings of pyramids.
• sphere
A sphere is a three-dimensional figure in which all cross-sections in every direction are circles.
• volume
Volume is the number of cubic units that fill a three-dimensional region, without any gaps or overlaps.
For example, the volume of this rectangular prism is 60 units³, because it is composed of 3 layers that are each 20 units³.
|
2022-10-06 17:53:26
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4753091037273407, "perplexity": 989.5151053653398}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337853.66/warc/CC-MAIN-20221006155805-20221006185805-00251.warc.gz"}
|
https://desfontain.es/privacy/differential-privacy-awesomeness.html
|
# Ted is writing things
On privacy, research, and privacy research.
# Why differential privacy is awesome
— updated
Are you following tech- or privacy-related news? If so, you might have heard about differential privacy. The concept is popular both in academic circles and inside tech companies. Both Apple and Google use differential privacy to collect data in a private way.
So, what's this definition about? How is it better than definitions that came before? More importantly, why should you care? What makes it so exciting to researchers and tech companies? In this post, I'll try to explain the idea behind differential privacy and its advantages. I'll do my best to keep it simple and accessible for everyone — not only technical folks.
# What it means
Suppose you have a process that takes some database as input, and returns some output.
This process can be anything. For example, it can be:
• computing some statistic ("tell me how many users have red hair")
• an anonymization strategy ("remove names and last three digits of ZIP codes")
• a machine learning training process ("build a model to predict which users like cats")
• … you get the idea.
To make a process differentially private, you usually have to modify it a little bit. Typically, you add some randomness, or noise, in some places. What exactly you do, and how much noise you add, depends on which process you're modifying. I'll abstract that part away and simply say that your process is now doing some unspecified ✨ magic ✨.
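The post deliberately leaves the ✨ magic ✨ unspecified. Purely as an illustration (this is one common mechanism, the Laplace mechanism, and not necessarily what any given company deploys), here is a sketch of how a counting query can be made differentially private:

```python
import numpy as np

def dp_count(database, predicate, epsilon):
    """Return a noisy count of records matching `predicate`.

    A counting query has sensitivity 1 (adding or removing one person
    changes the true count by at most 1), so Laplace noise with scale
    1/epsilon makes the result epsilon-differentially private.
    """
    true_count = sum(1 for row in database if predicate(row))
    noise = np.random.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

# Example: "how many users have red hair?"
users = [{"hair": "red"}, {"hair": "brown"}, {"hair": "red"}]
print(dp_count(users, lambda u: u["hair"] == "red", epsilon=1.1))
```

The smaller $$\varepsilon$$ is, the larger the noise: that is the knob that trades accuracy for privacy.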
Now, remove somebody from your database, and run your new process on it. If the new process is differentially private, then the two outputs are basically the same. This must be true no matter who you remove, and what database you had in the first place.
By "basically the same", I don't mean "it looks a bit similar". Instead, remember that the magic you added to the process was randomized. You don't always get the same output if you run the new process several times. So what does "basically the same" means in this context? That the probability distributions are similar. You can get the exact same output with database 1 or with database 2, with similar likelihood.
What does this have to do with privacy? Well, suppose you're a creepy person trying to figure out whether your target is in the original data. By looking at the output, you can't be 100% certain of anything. Sure, it could have come from a database with your target in it. But it could also have come from the exact same database, without your target. Both options have a similar probability, so there's not much you can say.
You might have noticed that this definition is not like the ones we've seen before. We're not saying that the output data satisfies differential privacy. We're saying that the process does. This is very different from $$k$$-anonymity and other definitions we've seen. There is no way to look at data and determine whether it satisfies differential privacy. You have to know the process to know whether it is "anonymizing" enough.
And that's about it. It's a tad more abstract than other definitions we've seen, but not that complicated. So, why all the hype? What makes it so awesome compared to older, more straightforward definitions?
# Why it's awesome
Privacy experts, especially in academia, are enthusiastic about differential privacy. It was first proposed by Cynthia Dwork and Frank McSherry in 2005¹. Very soon, almost all researchers working on anonymization started building differentially private algorithms. And, as we've already mentioned, tech companies are also trying to use it whenever possible. So, why all the hype? I can count three main reasons.
## You no longer need attack modeling
Remember the previous definitions we've seen? (If not, you're fine, just take my word for it :D) Why did we need $$k$$-map in certain cases, and $$k$$-anonymity or $$\delta$$-presence in others? To choose the right one, we had to figure out the attacker's capabilities and goals. In practice, this is pretty difficult. You might not know exactly what your attacker is capable of. Worse, there might be unknown unknowns: attack vectors that you hadn't imagined at all. You can't make very broad statements when you use old-school definitions. You have to make some assumptions, which you can't be 100% sure of.
By contrast, when you use differential privacy, you get two awesome guarantees.
1. You protect any kind of information about an individual. It doesn't matter what the attacker wants to do. Reidentify their target, know if they're in the dataset, deduce some sensitive attribute… All those things are protected. Thus, you don't have to think about the goals of your attacker.
2. It works no matter what the attacker knows about your data. They might already know some people in the database. They might even add some fake users to your system. With differential privacy, it doesn't matter. The users that the attacker doesn't know are still protected.
## You can quantify the privacy loss
We saw that when using $$k$$-anonymity, choosing the parameter $$k$$ is pretty tricky. There is no clear link between which $$k$$ to choose and how "private" the dataset is. The problem is even worse with the other definitions we've seen so far.
Differential privacy is much better. When you use it, you can quantify the greatest possible information gain by the attacker. The corresponding parameter, usually named $$\varepsilon$$, allows you to make very strong statements. Suppose $$\varepsilon=1.1$$. Then, you can say: "an attacker who thinks their target is in the dataset with probability 50% can increase their level of certainty to at most 75%."
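That "50% to at most 75%" claim can be checked directly: under $$\varepsilon$$-differential privacy, an attacker's prior belief $$p$$ can grow to at most $$e^\varepsilon p / (e^\varepsilon p + 1 - p)$$. Here is a quick check of the numbers quoted above (my own snippet, not from the original post):

```python
import math

def max_posterior(prior, epsilon):
    # Upper bound on the attacker's updated belief under epsilon-DP
    return math.exp(epsilon) * prior / (math.exp(epsilon) * prior + 1 - prior)

print(round(max_posterior(0.5, 1.1), 3))  # ~0.75, matching the statement above
```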
And do you remember the previous point about attack modeling? It means you can change this statement in many ways. You can replace "their target is in the dataset" by anything about one individual. And you can add "no matter what the attacker knows" if you want to be extra-precise. Altogether, that makes differential privacy much stronger than all the definitions that came before.
## You can compose multiple mechanisms
Suppose you have some data. You want to share it with Alex and with Brinn, in some anonymized fashion. You trust Alex and Brinn equally, so you use the same definition of privacy for both of them. They are not interested in the same aspects of the data, so you give them two different versions of your data. Both versions are "anonymous", for the definition you've chosen.
What happens if Alex and Brinn decide to conspire, and compare the data you gave them? Will the union of the two anonymized versions still be anonymous? It turns out that for most definitions of privacy, this is not the case. If you put two $$k$$-anonymous versions of the same data together, the result won't be $$k$$-anonymous. So even if Alex and Brinn were each unable to reidentify users on their own, together they might manage to do so… or even reconstruct all the original data! That's definitely not good news.
If you used differential privacy, you get to avoid this type of scenario. Suppose that you gave differentially private data to Alex and Brinn. Each time, you used a parameter of $$\varepsilon$$. Then if they conspire, the resulting data is still protected by differential privacy, except that the privacy is now weaker: the parameter becomes $$2\varepsilon$$. So they gain something, but you still quantify how much information they got. Privacy experts call this property composition.
This scenario sounds a bit far-fetched, but composition is super useful in practice. Organizations often want to do many things with data. Publish statistics, release an anonymized version, train machine learning algorithms… Composition is a way to stay in control of the level of risk as new use cases appear and processes evolve.
# Conclusion
I hope the basic intuition behind differential privacy is now clear. Want a one-line summary? Uncertainty in the process means uncertainty for the attacker, which means better privacy.
I also hope that you're now wondering how it actually works! What hides behind this magic that makes everything private and safe? Why does differential privacy have all the awesome properties I've mentioned? What a coincidence! That'll be the topic of a future post, which will try to give more details while still staying clear of heavy math.
1. First as a patent (pdf), then as a scientific paper (pdf)
All opinions here are my own, not my employers'.
|
2018-10-19 19:19:14
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.21622838079929352, "perplexity": 817.090545110432}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583512434.71/warc/CC-MAIN-20181019191802-20181019213302-00423.warc.gz"}
|
http://math.stackexchange.com/questions/353583/dual-space-as-a-hilbert-space
|
# Dual Space as a Hilbert Space
I have this problem:
Let $(X, \langle\cdot,\cdot\rangle)$ be a Hilbert space over $\mathbb{R}$ with Riesz map $\mathcal{R}:X^{\prime}\rightarrow X$, define $[\cdot,\cdot]:X^{\prime}\times X^{\prime}\rightarrow\mathbb{R}$ by $$[F,G]\ :=\ \langle\mathcal{R}(F),\ \mathcal{R}(G)\rangle,\;\;\; \forall\; F,G\in X^{\prime}$$ and show that $(X^{\prime},[\cdot,\cdot])$ is a Hilbert space.
and I need some hints about how I can solve it.
So, I know that $(X^{\prime}, \|\cdot\|_{X^{\prime}})$ is a Banach space, because every $F\in X^{\prime}$ has codomain $\mathbb{R}$. Next, since $\langle\cdot,\cdot\rangle$ is an inner product, $[\cdot,\cdot]$ is also an inner product provided I can prove that $\mathcal{R}(\cdot)$ is linear. And finally, I need to prove that $\|\cdot\|_{X^{\prime}}$ is the norm induced by $[\cdot,\cdot]$. Am I correct?
The space $X'=\mathcal{B}(X,\mathbb{R})$ is complete. This follows from the fact that $\mathbb{R}$ is complete. For details see this answer.
Since $X$ is a Hilbert space, the parallelogram law holds for elements of $X$. Recall that $\mathcal{R}:X'\to X$ is a surjective isometry, so the parallelogram law also holds for elements of $X'$. By the Jordan–von Neumann theorem, $X'$ is an inner product space and the original norm coincides with the norm generated by that inner product.
Since $X'$ is a complete inner product space, it is a Hilbert space.
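For completeness, the transfer of the parallelogram law is a one-line computation, using that $\mathcal{R}$ is linear (the space is over $\mathbb{R}$) and isometric:
$$\|F+G\|_{X'}^2+\|F-G\|_{X'}^2 = \|\mathcal{R}F+\mathcal{R}G\|_{X}^2+\|\mathcal{R}F-\mathcal{R}G\|_{X}^2 = 2\left(\|\mathcal{R}F\|_{X}^2+\|\mathcal{R}G\|_{X}^2\right) = 2\left(\|F\|_{X'}^2+\|G\|_{X'}^2\right).$$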
|
2015-08-30 00:31:34
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.942463219165802, "perplexity": 72.51986232107768}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440644064590.32/warc/CC-MAIN-20150827025424-00205-ip-10-171-96-226.ec2.internal.warc.gz"}
|
http://www.r-bloggers.com/tag/research/page/2/
|
# Posts Tagged ‘ Research ’
## Converting vectors to numeric in mixed-type dataframe
May 19, 2011
By
Coercing variables of character and numeric type into a single dataframe causes all vectors to be defined as factors: all <- data.frame(cbind(site, year, model, x, y, z)). The following converts selected variables from "factor" back to "numeric": all$x <- as.numeric(x) … Continue reading →
## How to do a quantitative literature review in R
May 17, 2011
By
In the early stages of a literature review, you may have hundreds of papers and not know how to even begin sorting through them. In this post, I show you how to perform a two-stage clustering analysis with R so that you can identify the main groups within your data, based on key attributes of each paper.
## Mapping points
May 16, 2011
By
Since I look at mercury concentrations at different measurement stations in North America, visualization using a map with values (of your favourite parameter) plotted as colour-coded circles is quite useful. After some trial & error, here is some very basic code … Continue reading →
## Is R an ideal language to teach the fundamentals of programming to beginners?
May 6, 2011
By
I’m helping out some colleagues learn programming from having zero experience with it in any shape or form. It’s quite a daunting task in some senses, because, well, it may not be easy! They are researchers, so they’ll need it for processing data and generating output, and perhaps processing BIG DATA at some point too.
## Unable to plot a decent x-Axis in a time series plot using zoo
April 7, 2011
By
I use the R package zoo to plot a yearly time series of weekly averaged data. The problem is that my date variable (m.all$date) contains week numbers and these are plotted as the x-Axis. What I would rather like to do … Continue reading →
## More fun with sed
March 18, 2011
By
So I have this strange date and time string, which I would like to convert to a “useable” date, i.e., something that a spreadsheet programme or R can work with. It looks like this (MON has 3 chars): ddMONyr:hh:mm:ss The … Continue reading →
## Converting text files with sed
March 3, 2011
By
Sed is my friend to change fixed-width text files (e.g., from an R screen output) to a comma delimited file using sed 's/  */,/g' file1 >file2.csv Note the two spaces between s/ and */.
January 21, 2011
By
Reconstructing phylogenies is an interesting task, sadly one that often requires to navigate between a multitude of software. To add an unnecessary layer of complexity to the whole thing, most of these softwares speaks different languages, and requires the user to do endless conversions from fasta to phylip to nexus to whatever new format they
## A (fast!) null model of bipartite networks
September 12, 2010
By
One of the challenges for ecologists working with trophic/interaction networks is to understand their organization. One of the possible approaches is to compare them across a random model, with more or less constraints, in order to estimate the departure from randomness. To this effect, null models have been developed. The basic idea behind a null
|
2015-09-01 22:39:13
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 1, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.1742459386587143, "perplexity": 2236.9039775211318}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440645220976.55/warc/CC-MAIN-20150827031340-00269-ip-10-171-96-226.ec2.internal.warc.gz"}
|
https://deepai.org/publication/combinatorics-of-explicit-substitutions
|
# Combinatorics of explicit substitutions
λυ is an extension of the λ-calculus which internalises the calculus of substitutions. In the current paper, we investigate the combinatorial properties of λυ focusing on the quantitative aspects of substitution resolution. We exhibit an unexpected correspondence between the counting sequence for λυ-terms and the famous Catalan numbers. As a by-product, we establish effective sampling schemes for random λυ-terms. We show that typical λυ-terms represent, in a strong sense, non-strict computations in the classic λ-calculus. Moreover, typically almost all substitutions are in fact suspended, i.e. unevaluated, under closures. Consequently, we argue that λυ is an intrinsically non-strict calculus of explicit substitutions. Finally, we investigate the distribution of various redexes governing the substitution resolution in λυ and investigate the quantitative contribution of various substitution primitives.
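(For reference, and not from the original abstract: the Catalan numbers mentioned above are the sequence $C_n = \frac{1}{n+1}\binom{2n}{n} = 1, 1, 2, 5, 14, 42, \ldots$, which counts, among many other combinatorial families, binary trees with $n$ internal nodes.)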
## 1. Introduction
Substitution of terms for variables forms a central concept in various formal calculi with quantifiers, such as predicate logic or different variants of λ-calculus. Though substitution supports the computational character of β-reduction in λ-calculus, it is usually specified as an external meta-level formalism, see [barendregt1984]. Such an epitheoretic presentation of substitution masks its execution as a single, indivisible calculation step, even though it requires a considerable computational effort to carry out, see [peytonJones1987]. In consequence, the number of β-reduction steps required to normalise a λ-term does not reflect the actual operational cost of normalisation. In order to effectuate substitution resolution, its process needs to be decomposed into a series of fine-grained atomic rewriting steps included as part of the considered calculus.
An early example of such a calculus, internalising the evaluation of substitution, is combinatory logic [CurryFeys1958]; alas, at the price of losing the intuitive, high-level structure of encoded functions, mirrored in the polynomial blow-up of their representation, see [joy1984, joy1985]. Focusing on retaining the basic intuitions behind substitution, various calculi of explicit substitutions highlighting multiple implementation principles of substitution resolution in λ-calculus were proposed throughout the years, cf. [deBruijn1978, abadi1991, Lescanne1994, DBLP:journals/jar/RoseBL12]. The formalisation of substitution evaluation as a rewriting process provides a formal platform for operational semantics of reduction in λ-calculus using abstract machines, such as, for instance, the Krivine machine [curien93:categ_combin]. Moreover, with the internalisation of substitution, the reduction cost reflects more closely the true computational complexity of executing modern functional programs.
Nonetheless, due to the numerous nuances regarding the evaluation cost of functional programs (e.g. assumed reduction or strictness strategies), supported by a general tradition of considering computational complexity in the framework of Turing machines or RAM models rather than formal calculi, the evaluation cost in term rewriting systems, such as λ-calculus or combinatory logic, has gained increasing attention only quite recently, see [DBLP:journals/corr/abs-1208-0515, DBLP:conf/icfp/AvanziniLM15, DBLP:journals/corr/AccattoliL16]. The continuing development of automated termination and complexity analysers for both first- and higher-order term rewriting systems echoes the immense practical, and hence also theoretical, demand for complexity analysis frameworks for declarative programming languages, see e.g. [aprove2014, lmcs:749]. In this context, the computational analysis of first-order term rewriting systems seems to most accurately reflect the practical evaluation cost of declarative programs [DBLP:conf/cade/CichonL92, DBLP:journals/jfp/BonfanteCMT01]. Consequently, the average-case performance analysis of abstract rewriting machines executing the declared computations requires a quantitative analysis of their internal calculi. Such investigations provide not only key insight into the quantitative aspects of basic rewriting principles, but also allow one to optimise abstract rewriting machines so as to reflect the quantitative contribution of various rewriting primitives.
Despite their apparent practical utility, quantitative aspects of term rewriting systems are not well studied. In [DBLP:journals/tcs/ChoppyKS89] Choppy, Kaplan and Soria provide a quantitative evaluation of a general class of confluent, terminating term rewriting systems in which the term reduction cost (i.e. the number of rewriting steps required to reach the final normal form) is independent of the assumed normalisation strategy. Following a similar, analytic approach, Dershowitz and Lindenstrauss provide an average-time analysis of inference parallelisation in logic programming [DBLP:conf/iclp/DershowitzL89]. More recently, Bendkowski, Grygiel and Zaionc analyse quantitative aspects of normal-order reduction of combinatory logic terms and estimate the asymptotic density of normalising combinators [bengryzai2017, BENDKOWSKI_2017]. Alas, due to the intractable, epitheoretic formalisation of substitution in untyped λ-calculus, its quantitative rewriting aspects have, to the best of our knowledge, not yet been investigated.
In the following paper we offer a combinatorial perspective on substitution resolution in λ-calculus and propose a combinatorial analysis of the λυ-calculus of explicit substitutions [Lescanne1994]. The paper is structured as follows. In Section 2 we draft the basic characteristics of λυ-calculus required for the remainder of the current paper. Next, we introduce the necessary analytic toolbox used in the quantitative analysis, see Section 3. In Section 4 we enumerate λυ-terms and exhibit the declared correspondence between their counting sequence and the Catalan numbers. Some statistical properties of random λυ-terms are investigated in LABEL:sec:statistical:properties. In LABEL:subsec:strict:substitution:forms we relate the typical form of λυ-terms with the classic, epitheoretic substitution tactic of λ-calculus. The quantitative impact of substitution suspension is investigated in LABEL:subsec:suspended:substitutions. In LABEL:subsec:redexes we discuss the contribution of various substitution resolution primitives. Finally, LABEL:sec:conclusions concludes the paper.
## 2. Preliminaries
### 2.1. λυ-calculus
In the current subsection we outline the main characteristics of λυ-calculus (lambda-upsilon calculus) required for the remainder of the paper. We refer the curious reader to [Lescanne1994, benaissa_briaud_lescanne_rouyer-degli_1996] for a more detailed exposition.
###### Remark.
We choose to outline λυ-calculus following the presentation of [lescanne96], where de Bruijn indices start with 0, instead of [Lescanne1994, benaissa_briaud_lescanne_rouyer-degli_1996], where they start with 1, as introduced by de Bruijn himself, cf. [deBruijn1972]. Although both conventions are used in the context of static, quantitative aspects of λ-calculus, the former convention seems to be the most recent standard, cf. [gryles2013, gryles2015, BendkowskiGLZ16, GittenbergerGolebiewskiG16].
The computational mechanism of β-reduction is usually defined as $(\lambda x.\, a)\, b \to_\beta a[x := b]$, where the right-hand side denotes the epitheoretic, capture-avoiding substitution of the term $b$ for the variable $x$ in $a$. λυ-calculus [Lescanne1994] is a simple, first-order rewriting system internalising substitution resolution of classic λ-calculus in de Bruijn notation [deBruijn1972]. Its expressions, called λυ-terms, consist of indices (for convenience also referred to as variables), abstractions and term applications. Terms are also equipped with a new closure operator $t[s]$ denoting the ongoing resolution of the substitution $s$ in the term $t$. Explicit substitutions are fragmented into atomic primitives: shift, denoted as $\uparrow$, a unary operator lift, written as $\Uparrow(s)$, mapping substitutions onto substitutions, and finally a unary slash operator, denoted as $a/$, mapping terms onto substitutions. Terms containing closures are called impure whereas terms without them are said to be pure. De Bruijn indices are encoded using a unary base expansion; in other words, $\underline{\mathsf{n}}$ is represented as an $n$-fold application of the successor operator to zero. Figure 0(b) summarises the specification of λυ-terms.
The rewriting rules of λυ-calculus consist of the usual β-rule, specified in this framework as $(\lambda a)\, b \to a[b/]$, and an additional set of (seven) rules governing the resolution of explicit substitutions, see Figure 0(a). Remarkably, these few rewriting rules are sufficient to correctly model β-reduction and also preserve strong normalisation of closed λ-terms, see [benaissa_briaud_lescanne_rouyer-degli_1996]. The simple syntax and rewriting rules of λυ-calculus are not only of theoretical importance, but also of practical interest, used as the foundation of various reduction engines. Let us mention that λυ-calculus and its abstract U-machine executing (strong) β-normalisation were successfully used as the main reduction engine in Pollack’s implementation of LEGO, a proof checker for the Calculus of Constructions, the Edinburgh Logical Framework, and also for the Extended Calculus of Constructions [randypollackLEGO].
###### Example 2.1.
Consider the term $\lambda x.\lambda y.\, x$. Note that in the de Bruijn notation, it is written as $\lambda\lambda\underline{\mathsf{1}}$. Likewise, the term $(\lambda x.\lambda y.\, x)\, a$ is denoted as $(\lambda\lambda\underline{\mathsf{1}})\, a$. Certainly, $(\lambda x.\lambda y.\, x)\, a \to_\beta \lambda y.\, a$ for each term $a$. Note however, that with explicit substitution resolution in λυ, this reduction is fragmented into several reduction steps, as follows:
(1) $(\lambda\lambda\underline{\mathsf{1}})\,a \;\to\; (\lambda\underline{\mathsf{1}})[a/] \;\to\; \lambda(\underline{\mathsf{1}}[\Uparrow(a/)]) \;\to\; \lambda(\underline{\mathsf{0}}[a/][\uparrow]) \;\to\; \lambda(a[\uparrow]).$
Note that in the final reduction step of (1) we obtain $\lambda(a[\uparrow])$. The additional shift operator $\uparrow$ guarantees that (potential) free indices are aptly incremented so as to avoid variable capture. If $a$ is closed, i.e. each variable in $a$ is bound, then $a[\uparrow]$ resolves simply to $a$, as intended.
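To make the rewriting process concrete, here is a minimal Python sketch of λυ-terms and of the rules used in the reduction (1). It is not taken from the paper: the constructor names, the leftmost-outermost strategy and the concrete argument term are our own illustration assumptions.

```python
from dataclasses import dataclass

# --- term syntax (cf. Figure 0(b)) -------------------------------------------
@dataclass(frozen=True)
class Var:      # de Bruijn index, counted from 0
    n: int

@dataclass(frozen=True)
class Abs:      # lambda abstraction
    body: object

@dataclass(frozen=True)
class App:      # application
    fun: object
    arg: object

@dataclass(frozen=True)
class Closure:  # t[s]
    term: object
    sub: object

# --- substitution primitives -------------------------------------------------
@dataclass(frozen=True)
class Slash:    # a/
    term: object

@dataclass(frozen=True)
class Lift:     # ⇑(s)
    sub: object

@dataclass(frozen=True)
class Shift:    # ↑
    pass

def step(t):
    """Try to apply one lambda-upsilon rule at the root of t."""
    if isinstance(t, App) and isinstance(t.fun, Abs):            # Beta
        return Closure(t.fun.body, Slash(t.arg))
    if isinstance(t, Closure):
        u, s = t.term, t.sub
        if isinstance(u, Abs):                                   # Lambda
            return Abs(Closure(u.body, Lift(s)))
        if isinstance(u, App):                                   # App
            return App(Closure(u.fun, s), Closure(u.arg, s))
        if isinstance(u, Var) and isinstance(s, Slash):          # FVar / RVar
            return s.term if u.n == 0 else Var(u.n - 1)
        if isinstance(u, Var) and isinstance(s, Lift):           # FVarLift / RVarLift
            return Var(0) if u.n == 0 else Closure(Closure(Var(u.n - 1), s.sub), Shift())
        if isinstance(u, Var) and isinstance(s, Shift):          # VarShift
            return Var(u.n + 1)
    return t                                                     # no rule applies here

def reduce_once(t):
    """Rewrite the leftmost-outermost redex, if any (otherwise return t itself)."""
    r = step(t)
    if r is not t:
        return r
    if isinstance(t, Abs):
        b = reduce_once(t.body)
        return Abs(b) if b is not t.body else t
    if isinstance(t, App):
        f = reduce_once(t.fun)
        if f is not t.fun:
            return App(f, t.arg)
        a = reduce_once(t.arg)
        return App(t.fun, a) if a is not t.arg else t
    if isinstance(t, Closure):
        u = reduce_once(t.term)
        return Closure(u, t.sub) if u is not t.term else t
    return t

def show(t):
    if isinstance(t, Var):     return str(t.n)
    if isinstance(t, Abs):     return "λ" + show(t.body)
    if isinstance(t, App):     return "(" + show(t.fun) + ")(" + show(t.arg) + ")"
    if isinstance(t, Closure): return "(" + show(t.term) + ")[" + show(t.sub) + "]"
    if isinstance(t, Slash):   return show(t.term) + "/"
    if isinstance(t, Lift):    return "⇑(" + show(t.sub) + ")"
    if isinstance(t, Shift):   return "↑"

# Replay (1); here `a` is the arbitrary index 7, so the final VarShift step
# bumps it to 8, illustrating how free indices get incremented.
t = App(Abs(Abs(Var(1))), Var(7))
while True:
    print(show(t))
    nxt = reduce_once(t)
    if nxt is t:
        break
    t = nxt
```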
## 3. Analytic toolbox
We base our quantitative analysis of λυ-terms on techniques borrowed from analytic combinatorics, in particular the singularity analysis developed by Flajolet and Odlyzko [FlajoletOdlyzko1990]. We refer the unfamiliar reader to [Wilf2006, flajolet09] for a thorough introduction to (multivariate) generating functions and analytic combinatorics.
###### Remark.
Our arguments follow standard applications of singularity analysis to (multivariate) systems of generating functions corresponding to algebraic structures. For the reader’s convenience we offer a high-level, though limited to the subject of our interest, outline of this process in the following section.
### 3.1. Singularity analysis
Interested in the quantitative properties of λυ-terms, for instance their asymptotic enumeration or parameter analysis, we typically take the following general approach. We start the analysis by establishing a formal, unambiguous context-free specification describing the structures of our interest. Next, using symbolic methods [flajolet09, Part A. Symbolic Methods] we convert the specification into a system of generating functions, i.e. formal power series $A(z) = \sum_n a_n z^n$ in which the coefficient $a_n$ standing by $z^n$, written as $[z^n]A(z)$, denotes the number of structures (objects) of size $n$. Interested in parameter analysis, the so obtained generating functions become bivariate and take the form $A(z,u) = \sum_{n,k} a_{n,k} z^n u^k$ where $a_{n,k}$, also written as $[z^n u^k]A(z,u)$, stands for the number of structures of size $n$ for which the investigated parameter takes value $k$; for instance, $a_{n,k}$ might denote the number of terms of size $n$ with exactly $k$ occurrences of a specific redex pattern. In this context, the variable $z$ corresponds to the size of specified structures whereas $u$ is said to mark the investigated parameter quantities.
When the obtained system of generating functions admits an analytic solution (i.e. obtained formal power series are also analytic at the complex plane origin) we can investigate the quantitative properties of respective coefficient sequences, and so also enumerated combinatorial structures, by examining the analytic properties of associated generating functions. The location of their singularities, in particular so-called dominant singularities dictates the main, exponential growth rate factor of the investigated coefficient sequence.
###### Theorem 3.1 (Exponential growth formula [flajolet09, Theorem IV.7]).
If $A(z)$ is analytic at the origin and $R$ is the modulus of a singularity nearest to the origin in the sense that
(3) $R = \sup\{\, r \geq 0 \;:\; A(z) \text{ is analytic in } |z| < r \,\},$
then the coefficient $a_n = [z^n]A(z)$ satisfies
(4) $a_n = R^{-n}\,\theta(n) \quad\text{with}\quad \limsup_{n\to\infty} |\theta(n)|^{1/n} = 1.$
Generating functions considered in the current paper are algebraic, i.e. they are branches of polynomial equations of the form $P(z, A(z)) = 0$. Since the square root cannot be unambiguously defined as an analytic function near zero, the main source of singularities encountered during our analysis are the roots of the radicand expressions involved in the closed-form, analytic formulae defining the studied generating functions. The following classic result due to Pringsheim facilitates the inspection of such singularities.
###### Theorem 3.2 (Pringsheim [flajolet09, Theorem IV.6]).
If $A(z)$ is representable at the origin by a series expansion that has non-negative coefficients and radius of convergence $R$, then the point $z = R$ is a singularity of $A(z)$.
A detailed singularity analysis of algebraic generating functions, involving an examination of the type of dominant singularities follows as a consequence of the Puiseux series expansion for algebraic generating functions.
###### Theorem 3.3 (Newton, Puiseux [flajolet09, Theorem VII.7]).
Let $F(z)$ be a branch of an algebraic function, i.e. a solution of a polynomial equation $P(z, F(z)) = 0$. Then in a circular neighbourhood of a singularity $\rho$ slit along a ray emanating from $\rho$, $F(z)$ admits a fractional Newton–Puiseux series expansion that is locally convergent and of the form
(5) $F(z) = \sum_{k \geq k_0} c_k (z - \rho)^{k/\kappa},$
where $k_0 \in \mathbb{Z}$ and $\kappa \geq 1$ is an integer.
With available Puiseux series, the complete asymptotic expansion of sub-exponential growth rate factors associated with coefficient sequences of investigated algebraic generating functions can be accessed using the following standard function scale.
###### Theorem 3.4 (Standard function scale [flajolet09, Theorem VI.1]).
Let $\alpha$ be a complex number outside $\mathbb{Z}_{\leq 0}$. Then, the function $f(z) = (1-z)^{-\alpha}$ admits for large $n$ a complete asymptotic expansion in the form of
(6) $[z^n] f(z) = \dfrac{n^{\alpha-1}}{\Gamma(\alpha)}\left(1 + \dfrac{\alpha(\alpha-1)}{2n} + \dfrac{\alpha(\alpha-1)(\alpha-2)(3\alpha-1)}{24 n^2} + O\!\left(\dfrac{1}{n^3}\right)\right),$
where $\Gamma$ is the Euler Gamma function defined as
(7) $\Gamma(z) = \displaystyle\int_0^\infty x^{z-1} e^{-x}\, dx.$
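As a quick, purely numerical illustration (not part of the paper), the exact coefficient $[z^n](1-z)^{-\alpha} = \binom{n+\alpha-1}{n}$ can be compared against the two correction terms of (6); the values of $\alpha$ and $n$ below, and the use of SciPy, are arbitrary choices.

```python
# Compare the exact coefficient [z^n](1-z)^(-alpha) with the asymptotic
# expansion of Theorem 3.4 (standard function scale).
from scipy.special import binom, gamma

alpha, n = 1.5, 50
exact = binom(n + alpha - 1, n)
approx = (n ** (alpha - 1) / gamma(alpha)) * (
    1
    + alpha * (alpha - 1) / (2 * n)
    + alpha * (alpha - 1) * (alpha - 2) * (3 * alpha - 1) / (24 * n ** 2)
)
print(exact, approx)   # the two values agree to several significant digits
```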
### 3.2. Parameter analysis
Consider a random variable $X_n$ denoting a certain parameter quantity of a (uniformly) random λυ-term of size $n$. In order to analyse the limit behaviour of $X_n$ as $n$ tends to infinity, we utilise the moment techniques of multivariate generating functions [flajolet09, Chapter 3]. In particular, if $F(z,u)$ is a bivariate generating function associated with $X_n$, where $u$ marks the considered parameter quantities, then the expectation $\mathbb{E}(X_n)$ takes the form
(8) $\mathbb{E}(X_n) = \dfrac{[z^n]\left.\frac{\partial}{\partial u} F(z,u)\right|_{u=1}}{[z^n] F(z,1)}.$
Consequently, the limit mean and, similarly, all higher moments can be accessed using techniques of singularity analysis. Although such a direct approach allows us to investigate all the limit moments of $X_n$ (in particular its mean and variance), it is usually more convenient to study the associated probability generating function
(9) $p_n(u) = \displaystyle\sum_{k \geq 0} \mathbb{P}(X_n = k)\, u^k = \dfrac{[z^n] F(z,u)}{[z^n] F(z,1)}.$
With $p_n(u)$ at hand, it is possible to readily access the limit distribution of $X_n$. In the current paper we focus primarily on continuous, Gaussian limit distributions associated with various redexes in λυ-calculus. The following Quasi-powers theorem due to Hwang [hwang1998convergence] provides means to obtain a limit Gaussian distribution and establishes the rate at which the intermediate distributions converge to the final limit distribution.
###### Theorem 3.5 (Quasi-powers theorem, see [flajolet09, Theorem IX.8]).
Let $(X_n)$ be a sequence of non-negative discrete random variables (supported by $\mathbb{Z}_{\geq 0}$) with probability generating functions $p_n(u)$. Assume that, uniformly in a fixed complex neighbourhood of $u = 1$, for sequences $\beta_n, \kappa_n \to \infty$, there holds
(10) $p_n(u) = A(u) \cdot B(u)^{\beta_n} \left(1 + O\!\left(\kappa_n^{-1}\right)\right),$
where $A(u)$ and $B(u)$ are analytic at $u = 1$ and $A(1) = B(1) = 1$. Assume finally that $B(u)$ satisfies the following variability condition:
(11) $B''(1) + B'(1) - B'(1)^2 \neq 0.$
Then, the distribution of $X_n$ is, after standardisation, asymptotically Gaussian with speed of convergence of order $O\!\left(\kappa_n^{-1} + \beta_n^{-1/2}\right)$:
(12) $\mathbb{P}\!\left(\dfrac{X_n - \mathbb{E}(X_n)}{\sqrt{\mathbb{V}(X_n)}} \leq x\right) = \Phi(x) + O\!\left(\dfrac{1}{\kappa_n} + \dfrac{1}{\sqrt{\beta_n}}\right),$
where $\Phi(x)$ is the standard normal distribution function
(13) $\Phi(x) = \dfrac{1}{\sqrt{2\pi}} \displaystyle\int_{-\infty}^{x} e^{-\omega^2/2}\, d\omega.$
The limit expectation and variance satisfy
(14) $\mathbb{E}(X_n) \sim B'(1)\, n \quad\text{and}\quad \mathbb{V}(X_n) \sim \left(B''(1) + B'(1) - B'(1)^2\right) n.$
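For intuition, here is a toy instance of the theorem (our own illustration, not an example from the paper): for the binomial distribution $\mathrm{Bin}(n, 1/2)$ the probability generating function is exactly $B(u)^n$ with $B(u) = (1+u)/2$, so the predicted mean $B'(1)\,n = n/2$ and variance $(B''(1) + B'(1) - B'(1)^2)\,n = n/4$ can be checked directly.

```python
# Sanity check of the Quasi-powers predictions on Bin(n, 1/2),
# whose PGF is ((1 + u) / 2)^n, i.e. A(u) = 1 and B(u) = (1 + u) / 2.
from math import comb

n = 40
pmf = [comb(n, k) / 2**n for k in range(n + 1)]
mean = sum(k * p for k, p in enumerate(pmf))
var = sum((k - mean) ** 2 * p for k, p in enumerate(pmf))
print(mean, var)   # 20.0 and 10.0, matching B'(1) n = n/2 and n/4
```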
## 4. Counting λυ-terms
In the current section we begin the enumeration of λυ-terms. For that purpose, we impose on them a size notion such that the size of a λυ-term $t$, denoted as $|t|$, is equal to the total number of constructors (in the associated term algebra, see Figure 0(b)) of which it is built. Figure 2 provides the recursive definition of term size.
###### Remark.
Such a size notion, in which all building constructors contribute equal weight one to the overall term size was introduced in [Bendkowski2016] as the so-called natural size notion. Likewise, we also refer to the size notion assumed in the current paper as natural.
Certainly, our choice is arbitrary and, in principle, different size measures can be assumed, cf. [gryles2015, Bendkowski2016, GittenbergerGolebiewskiG16]. For convenience, we choose the natural size notion, thus avoiding the obfuscating (though still manageable) technical difficulties arising in the analysis of general size model frameworks, see e.g. [GittenbergerGolebiewskiG16]. Moreover, our particular choice exhibits unexpected consequences and hence is, arguably, interesting on its own, see Proposition 4.1.
Equipped with a size notion ensuring that for each $n$ the total number of λυ-terms of size $n$ is finite, we can proceed with our enumerative analysis. Surprisingly, the counting sequence corresponding to λυ-terms in the natural size notion corresponds to the celebrated sequence of Catalan numbers (see https://oeis.org/A000108).
###### Proposition 4.1.
Let $T(z)$ and $S(z)$ denote the generating functions corresponding to λυ-terms and substitutions, respectively. Then,
(15) $T(z) = \dfrac{1 - \sqrt{1-4z}}{2z} - 1 \quad\text{whereas}\quad S(z) = \dfrac{1 - \sqrt{1-4z}}{2z} \cdot \dfrac{z}{1-z}.$
In consequence
(16) $[z^n]T(z) = \begin{cases} 0, & \text{for } n = 0,\\[2pt] \dfrac{1}{n+1}\dbinom{2n}{n}, & \text{otherwise,} \end{cases} \quad\text{and}\quad [z^n]S(z) = \begin{cases} 0, & \text{for } n = 0,\\[2pt] \displaystyle\sum_{k=0}^{n-1} \dfrac{1}{k+1}\dbinom{2k}{k}, & \text{otherwise,} \end{cases}$
hence also
(17) $[z^n]T(z) \sim \dfrac{4^n}{\sqrt{\pi}\, n^{3/2}} \quad\text{whereas}\quad [z^n]S(z) \sim \dfrac{4^{n}}{3\sqrt{\pi}\, n^{3/2}}.$
###### Proof.
Consider the formal specification (2) for λυ-terms. Let $N(z)$ be the generating function corresponding to de Bruijn indices. Note that, following symbolic methods, the generating functions $T(z)$, $S(z)$, and $N(z)$ give rise to the system
(18) $\begin{aligned} T(z) &= N(z) + zT(z) + zT(z)^2 + zT(z)S(z),\\ S(z) &= zT(z) + zS(z) + z,\\ N(z) &= z + zN(z). \end{aligned}$
Note that the equation defining $N(z)$ is independent of the other two in (18). We can therefore solve it directly and find that $N(z) = \frac{z}{1-z}$. Substituting this expression for $N(z)$ in the equations defining $T(z)$ and $S(z)$ we obtain
(19) $T(z) = \dfrac{z}{1-z} + zT(z) + zT(z)^2 + zT(z)S(z) \quad\text{whereas}\quad S(z) = zT(z) + zS(z) + z.$
System (19) admits two solutions, i.e.
(20) $T(z) = \dfrac{1 \pm \sqrt{1 - 4z} - 2z}{2z} \quad\text{and}\quad S(z) = \dfrac{1 \pm \sqrt{1 - 4z}}{2(1-z)},$
both with agreeing signs.
In order to determine the correct pair of generating functions we invoke the fact that, by their construction, both $[z^n]T(z)$ and $[z^n]S(z)$ are non-negative integers for all $n$. Consequently, the declared pair (15), obtained by choosing the minus sign in (20), is the analytic solution of interest. At this point, we notice that both generating functions in (15) resemble the famous generating function corresponding to the Catalan numbers, see e.g. [Wilf2006, Section 2.3]. Indeed
(21) $T(z) = \dfrac{1 - \sqrt{1-4z}}{2z} - 1 \quad\text{whereas}\quad S(z) = \dfrac{1 - \sqrt{1-4z}}{2z} \cdot \dfrac{z}{1-z}.$
In this form, we can readily relate the Catalan numbers with the respective coefficients of $T(z)$ and $S(z)$, see (16). From (21) we obtain $T(z) = C(z) - 1$, where $C(z) = \frac{1 - \sqrt{1-4z}}{2z}$ is the generating function of the Catalan numbers $\mathrm{Cat}(n) = \frac{1}{n+1}\binom{2n}{n}$. The number $[z^n]T(z)$ corresponds thus to $\mathrm{Cat}(n)$ for all $n \geq 1$, with the initial $[z^0]T(z) = 0$. Furthermore, given $n \geq 1$ we note that $[z^n]S(z)$ corresponds to the partial sum of Catalan numbers (see https://oeis.org/A014137) up to $\mathrm{Cat}(n)$ (exclusively). ∎
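As a sanity check of Proposition 4.1 (our own, not part of the paper), one can count λυ-terms and substitutions of each natural size directly from the grammar and compare the results with the Catalan numbers and their partial sums.

```python
# Dynamic programming over the term grammar: an index of size n exists for every
# n >= 1; a term is an index, an abstraction, an application or a closure; a
# substitution is a slash, a lift or the shift, each constructor having weight 1.
from math import comb

MAX = 12
N = [0] + [1] * MAX          # de Bruijn indices: exactly one of each size >= 1
T = [0] * (MAX + 1)          # lambda-upsilon terms
S = [0] * (MAX + 1)          # substitutions

for n in range(1, MAX + 1):
    T[n] = (N[n]                                          # an index
            + T[n - 1]                                    # abstraction
            + sum(T[i] * T[n - 1 - i] for i in range(n))  # application
            + sum(T[i] * S[n - 1 - i] for i in range(n))) # closure t[s]
    S[n] = (T[n - 1]                                      # slash t/
            + S[n - 1]                                    # lift
            + (1 if n == 1 else 0))                       # shift
    catalan = comb(2 * n, n) // (n + 1)
    partial = sum(comb(2 * k, k) // (k + 1) for k in range(n))
    assert T[n] == catalan and S[n] == partial

print(T[1:8])   # [1, 2, 5, 14, 42, 132, 429] -- the Catalan numbers
print(S[1:8])   # [1, 2, 4, 9, 23, 65, 197]   -- their partial sums
```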
The correspondence exhibited in Proposition 4.1 witnesses the existence of a bijection between λυ-terms of size $n$ and, for instance, plane binary trees with $n$ inner nodes. In what follows we provide an alternative, constructive proof of this fact.
### 4.1. Bijection between λυ-terms and plane binary trees
Consider the set of plane binary trees (i.e. binary trees in which we distinguish the order of subtrees) and the map defined as in LABEL:fig:bijection. Note that, for convenience, we omit drawing leaves. Consequently, nodes in LABEL:fig:bijection with a single subtree or no subtrees are to be understood as having implicit leaves attached to the vacuous branches.
http://jde27.uk/lgla/convergence.html
# Optional: Convergence of exp
## Convergence
### Weierstrass M-test
We will now prove that the power series $\exp(X)=\sum_{n\geq 0}\frac{1}{n!}X^{n}$ converges absolutely and uniformly on bounded sets. This will be relatively painless, but then I will show that the power series for the partial derivatives converge uniformly on bounded sets (which is needed to deduce that $\exp$ is continuously differentiable and that we can differentiate term by term). This will take a lot longer, and should only be watched by those desiring a course of analytic self-flagellation.
We will use the Weierstrass M-test:
Theorem:
If $g_{n}(X)$ is a sequence of functions such that $\lVert g_{n}(X)\rVert\leq M_{n}$ for all $n,X$ and $\sum_{n}M_{n}$ converges, then the series $\sum_{n}g_{n}(X)$ converges uniformly (in $X$) to a function $f(X)$.
### Uniform convergence on bounded sets
So our strategy will be to find an upper bound for the operator norm of each term $\frac{1}{n!}X^{n}$ which is independent of $X$. This cannot work for all matrices at once because $\exp$ doesn't converge uniformly everywhere: we will need to assume $\lVert X\rVert_{op}\leq R$ for some $R$. We will then deduce uniform convergence on this bounded subset of $\mathfrak{gl}(n,\mathbf{R})$. But if we're interested in a particular matrix then it will satisfy $\lVert X\rVert_{op}\leq R$ for some $R$, so this is all we need.

Omitting the subscript $op$, the triangle inequality and the fact that $\lVert X^{n}\rVert\leq\lVert X\rVert^{n}$ give, for the partial sums $f_{k}(X)=\sum_{n=0}^{k}\frac{1}{n!}X^{n}$, $\lVert f_{k}(X)\rVert\leq\sum_{n=0}^{k}\frac{1}{n!}\lVert X^{n}\rVert\leq\sum_{n=0}^{k}\frac{1}{n!}\lVert X\rVert^{n}.$ Now, assuming $\lVert X\rVert\leq R$, each term satisfies $\lVert\tfrac{1}{n!}X^{n}\rVert\leq\frac{R^{n}}{n!}.$ Define $M_{n}:=\frac{R^{n}}{n!}$ and observe that $\sum_{n\geq 0}M_{n}$ converges to $\exp(R)$.
Note that $R$ is a number, so here we're just using convergence of the usual exponential function rather than matrix exp. We now apply the Weierstrass M-test and deduce uniform convergence of $\exp$ for $\lVert X\rVert\leq R$ .
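If you want to see the convergence numerically, here is a small sketch (not part of the notes; the matrix size, the random matrix and the use of SciPy's `expm` are arbitrary choices): the error of the $k$-th partial sum is controlled by the tail of the scalar series for $\exp(R)$ with $R=\lVert X\rVert_{op}$.

```python
# Partial sums of the matrix exponential series versus scipy.linalg.expm.
import numpy as np
from scipy.linalg import expm
from math import factorial

rng = np.random.default_rng(0)
X = rng.standard_normal((4, 4))
R = np.linalg.norm(X, 2)                  # operator (spectral) norm
target = expm(X)

partial = np.zeros_like(X)
power = np.eye(4)                         # X^0
for k in range(15):
    partial = partial + power / factorial(k)
    power = power @ X
    err = np.linalg.norm(target - partial, 2)
    tail = sum(R**m / factorial(m) for m in range(k + 1, 60))
    print(f"k={k:2d}  error={err:.2e}  scalar tail bound={tail:.2e}")
```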
### Absolute convergence
Remark:
We've actually proved absolute convergence along the way. This means that if you take norms of every term in the power series then it still converges.
Remark:
Absolute convergence is the property that allows us to do rearrangements without changing the value of the sum.
## Derivatives
We also want to be able to differentiate $\exp(X)$ term-by-term. For that, we need to show that the sequence of partial derivatives of partial sums converges uniformly on bounded sets. This is where the nightmare begins. Watch on at your own risk.
What do we mean by partial derivative? $\exp(X)$ is a matrix whose entries are functions of the $n^{2}$ matrix entries $X_{11},X_{12},\ldots,X_{1n},X_{21},X_{22},\ldots X_{nn}$ . I'm interested in taking the partial derivative of an entry of $\exp(X)$ with respect to a variable $X_{ij}$ .
Example:
For example, $\frac{\partial}{\partial X_{12}}(X_{11}X_{12})=X_{11}$ and $\frac{\partial}{\partial X_{11}}(X_{22})=0$ .
We are therefore interested in applying the Weierstrass M-test to the term-by-term derivatives, i.e. to the sequence of partial sums $f_{K}(X)=\frac{\partial}{\partial X_{ij}}\left(\sum_{n=0}^{K}\frac{1}{n!}X^{n}\right).$ We will now bound the $L^{1}$-norm of $f_{K}(X)$, and in fact of each term in this sum, by a summable quantity that is independent of $X$ (for $\lVert X\rVert_{L^{1}}$ bounded). Since the $L^{1}$-norm of a matrix is the sum of absolute values of entries, this means we need to bound $\left|\frac{\partial}{\partial X_{ij}}\left(\sum_{n=0}^{K}\frac{1}{n!}X^{n}% \right)_{k\ell}\right|.$
This is a finite sum, so we can take the derivative inside the sum and get $\left|\sum_{n=0}^{K}\frac{1}{n!}\frac{\partial}{\partial X_{ij}}(X^{n})_{k\ell% }\right|.$ For a start, what is $(X^{n})_{k\ell}$ ? If $X$ is an $N$ -by-$N$ matrix (to avoid notation-clashes) $(X^{n})_{k\ell}=\sum_{i_{1}=1}^{N}\cdots\sum_{i_{n-1}=1}^{N}X_{ki_{1}}X_{i_{1}% i_{2}}\cdots X_{i_{n-1}\ell}$ so, using the product rule, and just writing one big sum instead of lots of sums, we get $\frac{\partial}{\partial X_{ij}}(X^{n})_{k\ell}=\sum\left(\frac{\partial X_{ki% _{1}}}{\partial X_{ij}}X_{i_{1}i_{2}}\cdots X_{i_{n-1}\ell}+X_{ki_{1}}\frac{% \partial X_{i_{1}i_{2}}}{\partial X_{ij}}\cdots X_{i_{n-1}\ell}+\cdots+X_{ki_{% 1}}\cdots\frac{\partial X_{i_{n-1}\ell}}{\partial X_{ij}}\right)$ Note that $\partial X_{ki_{1}}/\partial X_{ij}$ is either 1 or 0. It's 1 if $k=i$ and $i_{1}=j$ . In terms of the Kronecker delta $\delta_{ab}=\begin{cases}0&\mbox{ if }a\neq b\\ 1&\mbox{ if }a=b\end{cases}$ , this means we have $\sum\left(\delta_{ki}\delta_{i_{1}j}X_{i_{1}i_{2}}\cdots X_{i_{n-1}\ell}+X_{ki% _{1}}\delta_{i_{1}i}\delta_{i_{2}j}X_{i_{2}i_{3}}\cdots X_{i_{n-1}\ell}+\cdots% +X_{ki_{1}}X_{i_{1}i_{2}}\cdots X_{i_{n-2}i_{n-1}}\delta_{i_{n-1}i}\delta_{% \ell j}\right).$ In the first term, we can group $\delta_{i_{1}j}X_{i_{1}i_{2}}\cdots X_{i_{n-1}\ell}$ and when we sum over $i_{1},i_{2},\ldots,i_{n-1}$ this is just the $j\ell$ matrix entry of $IX\cdots X=X^{n-1}$ (because $\delta_{i_{1}j}$ is the $i_{1}j$ matrix entry of $I$ ).
Similarly, in the second term, we can group $X_{ki_{1}}\delta_{i_{1}i}$ and $\delta_{i_{2}j}X_{i_{2}i_{3}}\cdots X_{i_{n-1}\ell}$ and, when we perform all the sums, these become $X_{ki}$ and $(X^{n-2})_{j\ell}$ .
Proceeding in this manner, the sum goes away and we get: $\frac{\partial}{\partial X_{ij}}(X^{n})_{k\ell}=\delta_{ki}X^{n-1}_{j\ell}+X_{% ki}X^{n-2}_{j\ell}+\cdots+X^{n-1}_{ki}\delta_{j\ell}.$
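Before pressing on with the bounds, here is a quick finite-difference check of this identity (our own illustration; the matrix, the exponent and the chosen indices are arbitrary). Writing $X^{0}=I$ absorbs the Kronecker deltas at both ends of the sum.

```python
# Check d(X^n)_{kl} / dX_{ij} = sum_{m=0}^{n-1} (X^m)_{ki} (X^{n-1-m})_{jl}
# against a central finite difference in the (i, j) entry.
import numpy as np

rng = np.random.default_rng(1)
Nsize, n = 4, 5
X = rng.standard_normal((Nsize, Nsize))
i, j, k, l = 1, 2, 0, 3
h = 1e-6

closed = sum(np.linalg.matrix_power(X, m)[k, i]
             * np.linalg.matrix_power(X, n - 1 - m)[j, l] for m in range(n))

E = np.zeros((Nsize, Nsize)); E[i, j] = 1.0
fd = (np.linalg.matrix_power(X + h * E, n)[k, l]
      - np.linalg.matrix_power(X - h * E, n)[k, l]) / (2 * h)

print(closed, fd)   # the two values agree up to the finite-difference error
```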
We're trying to bound $\left|\frac{\partial}{\partial X_{ij}}\left(\sum_{n=0}^{K}\frac{1}{n!}X^{n}% \right)_{k\ell}\right|$ and we now know this is equal to $\left|\sum_{n=0}^{K}\frac{1}{n!}(\delta_{ki}X^{n-1}_{j\ell}+X_{ki}X^{n-2}_{j% \ell}+\cdots+X^{n-1}_{ki}\delta_{j\ell})\right|$ Using the triangle inequality, this is bounded above by $\sum_{n=0}^{K}\frac{1}{n!}(|\delta_{ki}||X^{n-1}_{j\ell}|+|X_{ki}||X^{n-2}|_{j% \ell}+\cdots+|X^{n-1}_{ki}||\delta_{j\ell}|)$ Note that these are really absolute values because we are working with matrix entries rather than matrices.
Each term inside the bracket has the form $|X^{m}_{ki}||X^{n-m-1}_{j\ell}|$ and we want to bound such quantities. By definition of the $L^{1}$ norm, we have $|X_{ki}^{m}|\leq\lVert X^{m}\rVert_{L^{1}}$ and, because the $L^{1}$ and operator norms are Lipschitz equivalent, we have $\lVert X^{m}\rVert_{L^{1}}\leq C\lVert X^{m}\rVert_{op}\leq C\lVert X\rVert_{% op}^{m}$ for some Lipschitz constant $C$ .
Again, using Lipschitz equivalence we get $C\lVert X\rVert_{op}^{m}\leq CD^{m}\lVert X\rVert_{L^{1}}^{m}$ for some Lipschitz constant $D$ . Therefore we get $|X^{m}_{ki}||X^{n-m-1}_{j\ell}|\leq CD^{m}\lVert X\rVert_{L^{1}}^{m}CD^{n-m-1}% \lVert X\rVert_{L^{1}}^{n-m-1}=C^{2}D^{n-1}\lVert X\rVert_{L^{1}}^{n-1}.$
All together, we get $\left|\frac{\partial}{\partial X_{ij}}\left(\sum_{n=0}^{K}\frac{1}{n!}X^{n}\right)_{k\ell}\right|\leq\sum_{n=1}^{K}\frac{1}{n!}nC^{2}D^{n-1}\lVert X\rVert_{L^{1}}^{n-1}=C^{2}\sum_{n=1}^{K}\frac{1}{(n-1)!}(D\lVert X\rVert_{L^{1}})^{n-1}$ (the $n=0$ term vanishes because the constant term $I$ has zero derivative). So if we assume $\lVert X\rVert_{L^{1}}\leq R$ then each term of the differentiated series is bounded by $C^{2}(DR)^{n-1}/(n-1)!$, a summable sequence with sum $C^{2}\exp(DR)$, and Weierstrass's M-test applies: the partial derivatives of the partial sums converge uniformly on bounded sets of matrices.
Don't say I didn't warn you.
https://kerodon.net/tag/02R3
# Kerodon
Corollary 4.6.8.18. Let $\operatorname{\mathcal{C}}$ be an $\infty$-category and let $f: X \rightarrow Y$ and $g: X \rightarrow Z$ be morphisms of $\operatorname{\mathcal{C}}$, which we identify with objects of the coslice $\infty$-category $\operatorname{\mathcal{C}}_{X/}$. Then the morphism space $\operatorname{Hom}_{ \operatorname{\mathcal{C}}_{X/} }( f, g)$ can be identified with the homotopy fiber of the composition map $\operatorname{Hom}_{ \operatorname{\mathcal{C}}}(Y, Z ) \xrightarrow { \circ [f] } \operatorname{Hom}_{\operatorname{\mathcal{C}}}(X,Z)$ over the vertex $g \in \operatorname{Hom}_{\operatorname{\mathcal{C}}}(X,Z)$.
Proof. Using Proposition 4.6.8.16, we can replace the composition map $\operatorname{Hom}_{\operatorname{\mathcal{C}}}(Y,Z) \xrightarrow { \circ [f] } \operatorname{Hom}_{\operatorname{\mathcal{C}}}(X,Z)$ with the restriction map $\theta : \operatorname{\mathcal{C}}_{f/} \times _{\operatorname{\mathcal{C}}} \{ Z\} \rightarrow \operatorname{\mathcal{C}}_{X/} \times _{\operatorname{\mathcal{C}}} \{ Z\}$. The morphism $\theta$ is a left fibration (Corollary 4.3.6.11). Since the left-pinched morphism space $\operatorname{\mathcal{C}}_{X/} \times _{\operatorname{\mathcal{C}}} \{ Z\} = \operatorname{Hom}_{\operatorname{\mathcal{C}}}^{\mathrm{L}}(X,Z)$ is a Kan complex (Proposition 4.6.5.4), it follows that $\theta$ is a Kan fibration (Corollary 4.4.3.8). In particular, the homotopy fiber of the composition map $\operatorname{Hom}_{ \operatorname{\mathcal{C}}}(Y, Z ) \xrightarrow { \circ [f] } \operatorname{Hom}_{\operatorname{\mathcal{C}}}(X,Z)$ over the vertex $g$ can be identified with the fiber
$\theta ^{-1} \{ g\} \simeq \operatorname{\mathcal{C}}_{f/} \times _{\operatorname{\mathcal{C}}_{X/} } \{ g\} = \operatorname{Hom}_{\operatorname{\mathcal{C}}_{X/}}^{\mathrm{L}}( f, g ),$
which is homotopy equivalent to $\operatorname{Hom}_{\operatorname{\mathcal{C}}_{X/}}(f,g)$ by virtue of Proposition 4.6.5.9. $\square$
http://mathonline.wikidot.com/line-integrals-of-vector-fields
# Line Integrals of Vector Fields
Recall from the Line Integrals page that if $z = f(x, y)$ is a two variable real-valued function and $C$ is a smooth curve parameterized as $\vec{r}(t) = (x(t), y(t))$ for $a ≤ t ≤ b$, then the line integral of $f$ along $C$ is given by the following formula:
(1)
\begin{align} \quad \int_C f(x, y) \: ds = \int_a^b f(x(t), y(t)) \sqrt{ \left ( \frac{dx}{dt} \right )^2 + \left ( \frac{dy}{dt} \right )^2 } \: dt \end{align}
Similarly if $w = f(x, y, z)$ is a three variable real-valued function and $C$ is a smooth curve parameterized as $\vec{r}(t) = (x(t), y(t), z(t))$ for $a ≤ t ≤ b$, then the line integral of $f$ along $C$ is given by the following formula:
(2)
\begin{align} \quad \int_C f(x, y, z) \: ds = \int_a^b f(x(t), y(t), z(t)) \sqrt{\left ( \frac{dx}{dt} \right )^2 + \left ( \frac{dy}{dt} \right )^2 + \left ( \frac{dz}{dt} \right )^2} \: dt \end{align}
Now it is sometimes useful to evaluate line integrals over vector fields $\mathbf{F}$. Such line integrals appear frequently in physics. Let's first define a line integral over a vector field on $\mathbb{R}^3$. The results below have analogous counterparts in $\mathbb{R}^2$. Let $\mathbf{F} (x, y, z) = P(x, y, z)\vec{i} + Q(x, y, z) \vec{j} + R(x, y, z) \vec{k}$ be continuous on $\mathbb{R}^3$. We will shorten this simply as $\mathbf{F} = P\vec{i} + Q \vec{j} + R \vec{k}$. Let $C$ be a smooth curve that is given by the parametric equations $x = x(t)$, $y = y(t)$, and $z = z(t)$ for $a ≤ t ≤ b$ and define $\vec{r}(t) = (x(t), y(t), z(t))$.
Now take the parameter $t$'s interval $[a, b]$ and divide it into $n$ subintervals of equal width. The endpoints of these subintervals correspond to points $P_0$, $P_1$, …, $P_n$ on the curve. These points divide the curve $C$ into $n$ subarcs, namely the arcs between the points $P_{i-1}$ and $P_i$ on the curve. Let $\Delta s_i$ be the length of each of these arcs. Now choose any point $P_i^*(x_i^*, y_i^*, z_i^*)$ between $P_{i-1}$ and $P_i$ that corresponds to a $t_i^* \in [t_{i-1}, t_i]$.
Now for small $\Delta s_i$, the progression as we move from $P_{i-1}$ to $P_i$ is approximately in the direction of the unit tangent vector at $P_i^*$, that is, approximately the direction of $\mathbf{\hat{T}} (t_i^*)$. We therefore form a Riemann sum of the dot products between the field vectors $\mathbf{F} (x_i^*, y_i^*, z_i^*)$ and the tangent vectors $\mathbf{\hat{T}}(t_i^*)$, weighted by the arc lengths $\Delta s_i$:
(3)
\begin{align} \quad \sum_{i=1}^{n} F(x_i^*, y_i^*, z_i^*) \cdot [\Delta s_i \mathbf{\hat{T}} (x_i^*, y_i^*, z_i^*)] = \sum_{i=1}^{n} [F(x_i^*, y_i^*, z_i^*) \cdot \mathbf{\hat{T}} (x_i^*, y_i^*, z_i^*)] \Delta s_i \end{align}
If we let $n \to \infty$, then we obtain a line integral which we define below.
Definition: Let $\mathbf{F}(x, y, z) = P\vec{i} +Q\vec{j} + R \vec{k}$ be a continuous field on $\mathbb{R}^3$ and let $C$ be a smooth curve parameterized by the equations $x = x(t)$, $y = y(t)$, and $z = z(t)$ where $\vec{r}(t) = (x(t), y(t), z(t))$ for $a ≤ t ≤ b$. Then the Line Integral of $\mathbf{F}$ Along $C$ is defined as $\int_C \mathbf{F} (x, y, z) \cdot \mathbf{\hat{T}} (x, y, z) \: ds = \lim_{n \to \infty} \sum_{i=1}^{n} [\mathbf{F}(x_i^*, y_i^*, z_i^*) \cdot \mathbf{\hat{T}}(x_i^*, y_i^*, z_i^*)] \Delta s_i$ provided that this limit exists.
Another name for the integral above is the Line Integral of The Tangential Component of $\mathbf{F}$. A short hand form to rewrite the integral above is $\int_C \mathbf{F} \cdot \mathbf{\hat{T}} \: ds = \int_C \mathbf{F} (x, y, z) \cdot \mathbf{\hat{T}} (x, y, z) \: ds$. The following notation $\int_C \mathbf{F} \cdot d\vec{r} = \int_C \mathbf{F} \cdot \mathbf{\hat{T}} \: ds$ is also commonly used.
Now note that the curve $C$ is given by $\vec{r}(t) = (x(t), y(t), z(t))$ for $a ≤ t ≤ b$. From the Unit Tangent Vectors to a Space Curve page, we saw that $\mathbf{\hat{T}}(t) = \frac{\vec{r'}(t)}{\| \vec{r'}(t) \|}$. Further, we note that $\frac{ds}{dt} = \| \vec{r'}(t) \|$ and so $ds = \| \vec{r'}(t) \| \:dt$, and therefore:
(4)
\begin{align} \quad \int_C \mathbf{F} \cdot \mathbf{\hat{T}} \: ds = \int_a^b \mathbf{F}(\vec{r}(t)) \cdot \mathbf{\hat{T}}(t) \, \| \vec{r'}(t) \| \: dt = \int_a^b \mathbf{F}(\vec{r}(t)) \cdot \frac{\vec{r'}(t)}{\| \vec{r'}(t) \|} \, \| \vec{r'}(t) \| \: dt = \int_a^b \mathbf{F}(\vec{r}(t)) \cdot \vec{r'}(t) \: dt \end{align}
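The last formula is easy to evaluate numerically. Below is a minimal sketch (not part of the page; the field, the helix and the trapezoidal discretization are our own choices) that approximates $\int_C \mathbf{F} \cdot d\vec{r}$ for $\mathbf{F} = (y, -x, z)$ along the helix $\vec{r}(t) = (\cos t, \sin t, t)$, $0 \le t \le 2\pi$.

```python
# Numerically approximate \int_a^b F(r(t)) . r'(t) dt with the trapezoidal rule.
import numpy as np

def F(x, y, z):
    return np.array([y, -x, z])

def r(t):
    return np.array([np.cos(t), np.sin(t), t])

def r_prime(t):
    return np.array([-np.sin(t), np.cos(t), 1.0])

a, b, n = 0.0, 2 * np.pi, 2_000
t = np.linspace(a, b, n)
integrand = np.array([F(*r(ti)) @ r_prime(ti) for ti in t])
value = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(t))

# Here F(r(t)) . r'(t) = t - 1, so the exact value is 2*pi^2 - 2*pi ≈ 13.456
print(value)
```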
Now the following theorem will draw a connection between the line integral of a vector field and line integrals of scalar fields.
Theorem 1: If $\mathbf{F}(x, y, z) = P(x, y, z)\vec{i} + Q(x, y, z)\vec{j} + R(x, y, z)\vec{k}$ is a continuous vector field on $\mathbb{R}^3$ and if $C$ is a smooth curve parameterized by the equations $x = x(t)$, $y = y(t)$, and $z = z(t)$ where $\vec{r}(t) = (x(t), y(t), z(t))$ for $a ≤ t ≤ b$, then $\int_C \mathbf{F} \cdot \mathbf{\hat{T}} \: ds = \int_C P(x, y, z) \: dx + Q(x, y, z) \: dy + R(x, y, z) \: dz$.
Of course we can write this in short form as $\int_C \mathbf{F} \cdot \mathbf{\hat{T}} \: ds = \int_C P \: dx + Q \: dy + R \: dz$ or $\int_C \mathbf{F} \cdot d\vec{r} = \int_C P \: dx + Q \: dy + R \: dz$.
• Proof: Theorem 1 follows directly from the definition.
(5)
\begin{align} \quad \int_C \mathbf{F} \cdot \mathbf{\hat{T}} \: ds = \int_a^b \mathbf{F}(\vec{r}(t)) \cdot \vec{r'}(t) \: dt = \int_a^b (P\vec{i} + Q\vec{j} + R\vec{k}) \cdot (x'(t)\vec{i} + y'(t)\vec{j} + z'(t)\vec{k}) \: dt \\ = \int_a^b P(x(t), y(t), z(t)) x'(t) + Q(x(t), y(t), z(t)) y'(t) + R(x(t), y(t), z(t)) z'(t) \: dt \\ = \int_a^b P(x(t), y(t), z(t)) x'(t) \: dt + \int_a^b Q(x(t), y(t), z(t)) y'(t) \: dt + \int_a^b R(x(t), y(t), z(t)) z'(t) \: dt \\ = \int_C P(x, y, z) \: dx + \int_C Q(x, y, z) \: dy + \int_C R(x, y, z) \: dz = \int_C P(x, y, z) \: dx + Q(x, y, z) \: dy + R(x, y, z) \: dz \quad \blacksquare \end{align}
Remark 1: We should note that if the direction along $C$ is reversed, denote it as $-C$, then the line integral above changes sign, that is $\int_{-C} \mathbf{F} \cdot \mathbf{\hat{T}} \: ds = - \int_C \mathbf{F} \cdot \mathbf{\hat{T}} \: ds$.
Remark 2: If $C$ is a closed curve, then oftentimes a little circle on the integral symbol is used notationally to represent the integral above, that is $\int_C \mathbf{F} \cdot \mathbf{\hat{T}} \: ds = \oint_C \mathbf{F} \cdot \mathbf{\hat{T}} \: ds$, or equivalently, $\int_C \mathbf{F} \cdot d\vec{r} = \oint_C \mathbf{F} \cdot d\vec{r}$, which denotes the Circulation of the vector field $\mathbf{F}$ around $C$.
http://physics.aps.org/synopsis-for/10.1103/PhysRevLett.108.133001
# Synopsis: Cultivating Extra Dimensions
Ultracold atoms could be used to simulate physical phenomena beyond three spatial dimensions.
The rapid development of ultracold atomic physics has catalyzed efforts to harness atoms for simulating other kinds of matter, such as exotic phases in condensed-matter physics, that may be inaccessible to experiments or difficult to crack theoretically. But condensed-matter physics is not the only arena for ultracold atom simulators: writing in Physical Review Letters, Octavi Boada at the University of Barcelona, Spain, and colleagues propose a way to simulate physics in extra dimensions, a topic dear to the hearts of those who toil in the vineyards of particle physics and quantum gravity.
The search for ways to unify and understand physical phenomena goes back to Kaluza and Klein, who in the 1920s tried to combine electromagnetism with gravity by adding a fourth spatial dimension to the usual three (plus time). More recent theoretical work has suggested that a theory of everything may need $11$ spacetime dimensions. Boada et al. are suggesting an experimental strategy for investigating how matter behaves in extra dimensions. Their idea is to encode a fourth spatial dimension in an internal degree of freedom offered by atoms trapped in an optical lattice, and do it in such a way as to exactly reproduce the physics described by a 4D Hamiltonian. The authors show two ways of observing such effects: one is to look for single-particle effects, such as rates of decay of excited states as a function of dimensionality; another is to search for many-body effects such as insulator-to-superfluid transitions that depend on the number of dimensions. – David Voss
https://math.stackexchange.com/questions/3032976/structure-of-a-group-g-through-its-isomorphic-images-in-operatornamesymg
# Structure of a group $G$ through its isomorphic images in $\operatorname{Sym}(G)$
Following the idea that a group is its structure, and reminding of Cayley theorem, I'm wondering whether we can build up virtually any finite group $$G=\lbrace a_0,\dots,a_{n-1} \rbrace$$ by searching pairs of subgroups of $$\operatorname{Sym}(G)$$ (the symmetric group over the set $$G$$), say $$\Theta=\lbrace\theta_i,i=0,\dots,n-1\rbrace$$ and $$\Gamma=\lbrace\gamma_j,j=0,\dots,n-1\rbrace$$, such that:
i) $$\theta_i(a_j)=\gamma_j(a_i)$$ for all $$i,j=0,\dots,n-1$$
ii) $$\theta_i\gamma_j=\gamma_j\theta_i$$ for all $$i,j=0,\dots,n-1$$
Supposing to have found out such a pair, we could use their elements to define right and left multiplications, where i) would ensure the identity $$a_ia_j=a_ia_j$$ for all $$i,j$$, and ii) the associativity of the composition law "under construction". Moreover, the constraint i) entails that $$\Theta=\Gamma \Rightarrow \theta_i=\gamma_i$$ for all $$i$$, so that $$G$$ is abelian if and only if $$\Theta=\Gamma$$ [Proof: $$\Theta=\Gamma \Rightarrow$$ $$\exists \sigma \in \operatorname{Sym}(n)$$ such that $$\theta_i=\gamma_{\sigma(i)}$$ for all $$i \Rightarrow$$ $$\theta_i(a_j)=\gamma_{\sigma(i)}(a_j)$$ for all $$i,j \Rightarrow$$ (by virtue of i)) $$\gamma_j(a_i)=\gamma_{\sigma(i)}(a_j)$$ for all $$i,j \Rightarrow$$ $$\gamma_{\sigma(i)}(a_i)=\gamma_{\sigma(i)}(a_{\sigma(i)})$$ for all $$i \Rightarrow$$ ($$\gamma_{\sigma(i)}$$ is 1-1) $$a_i=a_{\sigma(i)}$$ for all $$i \Rightarrow$$ ($$a_k$$ are distinct by hypothesis) $$\sigma(i)=i$$ for all $$i \Rightarrow$$ $$\theta_i=\gamma_i$$ for all $$i$$. #]
As a first test for this approach, let's consider $$\rho \in \operatorname{Sym}(G)$$ defined by $$\rho(a_k):=a_{k+1 \mod n}$$, $$k=0,\dots,n-1$$. It is: $$\rho^i(a_j)=a_{j+i \mod n}=a_{i+j \mod n}=\rho^j(a_i)$$; therefore, if we set $$\gamma_i=\theta_i:=\rho^i$$, we have that either i) and ii) are fulfilled. The subgroups of $$\operatorname{Sym}(G)$$ (here coincident) $$\Theta=\lbrace \theta_i=\rho^i \rbrace$$ and $$\Gamma=\lbrace \gamma_i=\rho^i \rbrace$$ define the (abelian) composition law $$a_ia_j=a_{j+i \mod n}$$, whence $$a_i^k=a_{ki \mod n}$$ and then $$a_1^k=a_{k \mod n}=a_k$$ for $$k=0,\dots,n-1$$. Thus, we are finally led to $$G=\lbrace a_k, k=0,...,n-1 \rbrace= \lbrace a_1^k, k=0,...,n-1 \rbrace= \langle a_1 \rangle$$, and $$G$$ is cyclic. This result is irrespective of $$n$$, so cyclic groups exist for any order $$n$$ (not a surprising result, indeed, but here what I'm focused on is rather the approach to get it).
Yet another way to start appreciating this approach, by rediscovering basic facts by means of it, could be the following. Condition ii) implies that $$\Theta\Gamma=\Gamma\Theta$$ and then $$\Theta\Gamma \le \operatorname{Sym}(G)$$. So, once we set $$l:=|\Theta \cap \Gamma|$$ and $$n:=|\Theta|$$ (=$$|G|$$), and noticing that $$|\Theta\Gamma|=n^2/l$$, we get: $$l \le n \le n^2/l \le n!$$, with (Lagrange) $$l|n \wedge (n^2/l)|n!$$. Now, $$\Theta \ne \Gamma \Rightarrow l < n < n^2/l \le n!$$. Then, if $$|G|=n=p$$, with $$p$$ prime, we have $$l=1$$ and then $$p^2|p!$$: contradiction. So we are left with $$|G|=p$$ ($$p$$ prime) $$\Rightarrow \Theta=\Gamma \Rightarrow G$$ abelian.
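As a small computational sanity check (added here for illustration, not part of the original post): taking $$\Theta$$ to be the right multiplications and $$\Gamma$$ the left multiplications of a concrete finite group, viewed as permutations of the element list, conditions i) and ii) hold, and $$\Theta=\Gamma$$ precisely in the abelian example.

```python
# Verify conditions (i) and (ii) for Theta = right multiplications and
# Gamma = left multiplications of a concrete finite group.
from itertools import product, permutations

def check(elements, op):
    n = len(elements)
    idx = {g: k for k, g in enumerate(elements)}
    theta = [[idx[op(elements[j], elements[i])] for j in range(n)] for i in range(n)]
    gamma = [[idx[op(elements[i], elements[j])] for j in range(n)] for i in range(n)]
    cond_i = all(theta[i][j] == gamma[j][i]
                 for i, j in product(range(n), repeat=2))          # condition (i)
    cond_ii = all(theta[i][gamma[j][k]] == gamma[j][theta[i][k]]
                  for i, j, k in product(range(n), repeat=3))      # condition (ii)
    return cond_i, cond_ii, set(map(tuple, theta)) == set(map(tuple, gamma))

# Z_6 (abelian): both conditions hold and Theta equals Gamma.
print(check(list(range(6)), lambda a, b: (a + b) % 6))       # (True, True, True)

# S_3 (non-abelian, permutations of {0,1,2} composed left-to-right):
# both conditions hold but Theta differs from Gamma.
s3 = list(permutations(range(3)))
compose = lambda p, q: tuple(q[p[k]] for k in range(3))
print(check(s3, compose))                                    # (True, True, False)
```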
Could this approach be used to search for other, less trivial group structures?
You can find a lot of structure by looking at the representation of a group in its symmetric group, as well as at the symmetric groups of other groups. For example, if you have a group $$G$$ with $$|G|=p^km$$ where $$p$$ is prime and $$m$$ is relatively prime to $$p$$, then you can show that $$G$$ is not simple if $$m!<p^km$$ by looking at the symmetric group $$\text{Sym}_{m}\leq\text{Sym}(G)$$.
Or if you have an injective homomorphism $$\phi:G\to\text{Sym}_{G}$$ whose image contains the cycles $$(1,2)$$ and $$(1,2,...,n)$$, then $$G$$ contains an isomorphic copy of $$\text{Sym}_{n}.$$
I forget the exact relation, but I recall that you can deduce properties of symmetric groups by looking at representations of Galois Groups inside symmetric groups.
In general representing groups inside well known groups like the Symmetric groups or Matrix groups can be very helpful.
• That's encouraging - thanks for the feedback. I've edited my post with some more basic stuff in that direction, hoping it fits! – Luca Dec 10 '18 at 14:11
• " $G$ is simple if $m!<p^k m$" doesn't seem right to me as there is always a cyclic group of any given order.(And did you mean "$p$ is prime and $m$ is coprime to $p$"?) – ancientmathematician Dec 10 '18 at 15:13
• @ancientmathematician You're right, type-o. It's not simple, and we can show that by a non-trivial non-injective homomorphism $\phi:G\to\text{Sym}_{\text{Syl}_p(G)}.$ – Melody Dec 10 '18 at 18:39
• You've still got to tidy up the hypotheses on $p$ (it is to be "prime") and $m$ (it's for to be "relatively prime to $p$" - not to $m$). – ancientmathematician Dec 11 '18 at 7:35
• My gosh, I can't believe I wrote that. Thank you again. – Melody Dec 11 '18 at 16:44
http://openstudy.com/updates/508da510e4b02e69fc43692e
## oksanaekjord 2 years ago solve lim x->inf (ln x)^(1/x)
1. oksanaekjord
getting 0
2. klimenkov
$\lim_{x\rightarrow\infty}(\ln x)^{\frac 1x}=1$
3. satellite73
start by taking the log get $\frac{1}{x}\ln(\ln(x))=\frac{\ln(\ln(x))}{x}$
4. satellite73
now that limit is pretty clearly 0, since log grows much slower than $$x$$ and the log of the log grows amazingly slowly. Since that limit is 0, and it is the limit of the log, you get $$e^0=1$$ as the limit of your original question
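(A quick numeric check, added for illustration rather than part of the original thread: $\ln(\ln x)/x$ shrinks to 0 and $(\ln x)^{1/x}$ drifts to 1 as $x$ grows.)

```python
# Evaluate (ln x)^(1/x) and ln(ln x)/x for increasingly large x.
from math import log

for x in (10.0, 1e3, 1e6, 1e12):
    print(x, log(x) ** (1 / x), log(log(x)) / x)
```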
5. oksanaekjord
ty
https://kluedo.ub.uni-kl.de/frontdoor/index/index/year/1999/docId/851
## Optimal Order Results for a Class of Regularization Methods Using Unbounded Operators
• A class of regularization methods using unbounded regularizing operators is considered for obtaining stable approximate solutions of ill-posed operator equations. With an a posteriori as well as an a priori parameter choice strategy, it is shown that the method yields optimal order. Error estimates have also been obtained under stronger assumptions on the generalized solution. The results of the paper unify and simplify many of the results available in the literature. For example, the optimal results of the paper include, as particular cases for Tikhonov regularization, the main result of Mair (1994) with an a priori parameter choice and a result of Nair (1999) with an a posteriori parameter choice. Thus the observations of Mair (1994) on Tikhonov regularization of ill-posed problems involving finitely and infinitely smoothing operators are applicable to various other regularization procedures as well. Subsequent results on error estimates include, as special cases, an optimal result of Vainikko (1987) and also recent results of Tautenhahn (1996) in the setting of Hilbert scales.
http://sublimationwizards.com/szicloh/multiplying-matrices-3x3-bbd6d5
## multiplying matrices 3x3
A matrix is an arrangement of numbers in rectangular form. The size of a matrix is the number of rows by the number of columns: an arbitrary matrix has size m × n, where m refers to the number of rows and n refers to the number of columns. A single column can be treated as an n × 1 matrix (a column vector) and a single row as a 1 × n matrix (a row vector); for instance, a matrix D of size 4 × 1 is a column vector and a matrix E of size 1 × 4 is a row vector. This calculator can instantly multiply two matrices and show a step-by-step solution: enter the values of the matrices and find the resultant matrix. It can multiply matrices of order 2x3, 1x3, 3x3 and 2x2 with 3x2, 3x1, 3x3 and 2x2 matrices.
Matrix multiplication, also known as the matrix product, produces a single matrix through the multiplication of two different matrices. This is only possible if the number of columns in the first matrix is equal to the number of rows in the second matrix: if A is an m × n matrix and B is an n × p matrix, the product AB is an m × p matrix. For example, multiplying a 2 × 3 matrix by a 3 × 4 matrix is possible and gives a 2 × 4 matrix; multiplication of a 3x3 and a 3x1 matrix is possible and the result is a 3x1 matrix; and a 3x3 matrix multiplied by another 3x3 matrix gives a 3x3 matrix, which is why this online 3x3 matrix multiplication calculator returns a 3x3 result.
So how do we multiply two matrices? Multiply rows times columns: each entry of the product is found by multiplying the corresponding elements of a row of the first matrix and a column of the second matrix and adding the results. The first row of the first matrix paired with the first column of the second matrix gives the number in the first row, first column position of the answer matrix. If the number of elements in a row of the first matrix does not match the number of elements in a column of the second matrix, the product does not exist; for example, with A = [(3, 1, 2), (4, 1, 5)] (a 2 × 3 matrix) and B = [(7, 2), (6, 3)] (a 2 × 2 matrix), the row (3, 1, 2) cannot be paired with the column (7, 6), so AB does not exist.
Matrix multiplication is not commutative: in general AB ≠ BA, so A × B and B × A give different results. The identity matrix is a square matrix with 1s on the main diagonal (the elements A_11, A_22, …, A_nn) and 0s everywhere else; multiplying a 3x3 matrix by the 3x3 identity matrix returns that matrix unchanged. Related to the 3x3 case is the determinant: the co-factor C_ij of a 3x3 matrix A is the determinant of the 2x2 matrix obtained by deleting row i and column j of A, prefixed by + or − according to an alternating sign pattern (for example, C_23 is the co-factor associated with a_23, in row 2 and column 3).
The straightforward algorithm for multiplying two n × n matrices has time complexity O(n^3) and can be optimized using Strassen's matrix multiplication algorithm. The same product can be computed with a short C program ("C Program to Multiply Two 3 X 3 Matrices"), with nested loops or nested lists in Python, with NumPy (a Python library used for scientific computing), or in MATLAB using the * operator.
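As a concrete illustration (a minimal sketch, not the calculator's own code), the product of two 3x3 matrices can be computed with the row-times-column rule and checked against NumPy:

```python
import numpy as np

A = [[1, 2, 3],
     [4, 5, 6],
     [7, 8, 9]]
B = [[9, 8, 7],
     [6, 5, 4],
     [3, 2, 1]]

# Naive O(n^3) rule: C[i][j] = sum over k of A[i][k] * B[k][j]
C = [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
     for i in range(3)]
print(C)

# Same product with NumPy's matrix multiplication
print(np.array(A) @ np.array(B))

# Matrix multiplication is not commutative: B @ A is generally different from A @ B
print(np.array(B) @ np.array(A))
```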
|
2021-04-11 13:03:10
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6194700002670288, "perplexity": 543.5450613553982}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038062492.5/warc/CC-MAIN-20210411115126-20210411145126-00339.warc.gz"}
|
http://umj.imath.kiev.ua/article/?lang=en&article=9910
|
2018
Том 70
№ 6
# Existence of weak solutions of certain boundary value problems for equations of mixed type
Berezansky Yu. M.
Abstract
The differential equation of mixed type $$Lu =\sum^2_{j, k=1}D_j (b_{jk} (x) D_ku) + \sum^2_{j=1}p_j(x)D_ju + p(x)u = f(x)$$ is considered in a bounded domain of the $(x_1, x_2)$-plane, the equation being elliptic for $x_2 > 0$ and, for $x_2 < 0$, of the form $k(x_2) D^2_1u + D^2_2u = f(x)$. For boundary conditions of the Tricomi type, as well as for more general conditions, two energetic inequalities are proved (for the original and the adjoint problems). The existence of weak solutions and the uniqueness of strong solutions follow directly for the problems under consideration. Similar problems are investigated for certain unbounded domains.
Citation Example: Berezansky Yu. M. Existence of weak solutions of certain boundary value problems for equations of mixed type // Ukr. Mat. Zh. - 1963. - 15, № 4. - pp. 347-364.
Full text
|
2018-07-17 23:14:39
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6009366512298584, "perplexity": 503.2074356158267}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676589932.22/warc/CC-MAIN-20180717222930-20180718002930-00172.warc.gz"}
|
https://zbmath.org/?q=an:0966.53053
|
# zbMATH — the first resource for mathematics
Nambu-Poisson tensors on Lie groups. (English) Zbl 0966.53053
Grabowski, Janusz (ed.) et al., Poisson geometry. Stanisław Zakrzewski in memoriam. Warszawa: Polish Academy of Sciences, Institute of Mathematics, Banach Cent. Publ. 51, 243-249 (2000).
A Nambu-Poisson structure on a manifold can be defined by a $$k$$-vector field satisfying an integrability condition. If $$k=2$$ the Nambu-Poisson structures coincide with Poisson structures. The author first gives a characterization of Nambu-Poisson structures in terms of forms rather than $$k$$-vector fields provided the manifold is endowed with a volume form. Next a description of left invariant Nambu-Poisson tensors on a Lie group in terms of subalgebras of the corresponding Lie algebra is presented. It is also studied under what conditions Nambu-Poisson tensors could be projected onto the corresponding homogeneous space.
For the entire collection see [Zbl 0936.00035].
##### MSC:
53D17 Poisson manifolds; Poisson groupoids and algebroids
53C30 Differential geometry of homogeneous manifolds
Full Text:
|
2022-01-27 20:11:15
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5452357530593872, "perplexity": 1279.2029572628785}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320305288.57/warc/CC-MAIN-20220127193303-20220127223303-00246.warc.gz"}
|
https://stacks.math.columbia.edu/tag/0468
|
Étale implies locally of finite presentation.
Lemma 65.39.8. An étale morphism of algebraic spaces is locally of finite presentation.
Proof. The proof is identical to the proof of Lemma 65.39.5. It uses Morphisms, Lemma 29.35.11. $\square$
Comment #913 by Matthieu Romagny on
Suggested slogan: Etale implies locally of finite presentation.
|
2020-08-11 15:51:06
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 2, "x-ck12": 0, "texerror": 0, "math_score": 0.9904274344444275, "perplexity": 1749.0833482058229}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439738816.7/warc/CC-MAIN-20200811150134-20200811180134-00125.warc.gz"}
|
https://www.weiyeying.com/ask/%E5%A6%82%E4%BD%95%E8%8E%B7%E5%BE%97%E4%BB%B7%E5%80%BC-db78a37e-1e49-499e-b84b-b749a3113a77
|
# How to get the value
Using C# & Java Script
"http://localhost/Server/Vehicle/Vehicle.aspx?appid=5", when i use this link the page is opening... But i want to get this appid value, then pass this appid value to another link
Link1 http://localhost/Server/Vehicle/Vehicle.aspx?appid=5
Entry
http://localhost/Server/Vehicle/car.aspx?param=document.getElementById('appid').value
## Best answer
string appid = Request.QueryString["appid"];
Update:
The JavaScript snippet will not be executed inside the link's href attribute (it is treated as a plain string and is not parsed as JavaScript code).
Entry
Side note: the value property works only for HTML tags that have defined an eponymous attribute. One such tag would be the input tag. The div tag instead doesn't have a value attribute defined, and therefore document.getElementById('appid').value would fail; use innerHTML instead in that case.
|
2020-07-10 03:39:08
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.1843041032552719, "perplexity": 7525.838732872809}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655902496.52/warc/CC-MAIN-20200710015901-20200710045901-00464.warc.gz"}
|
https://gharpedia.com/cost-always-goes-buying-flat/
|
## Why is that the Cost Always Goes up While Buying Flat?
The builder/developer offers the lowest possible price to be, and to look, competitive compared with nearby projects in the vicinity. To keep the cost down, he will provide only basic minimum facilities and materials of ordinary brands. So while buying a flat you might have to spend extra for additional amenities and for materials of better, more popular brands. If such amenities are not clarified from day one, you will either end up in disputes or end up paying more to the builder.
The builder’s brochure may state that he will provide tiles at a basic rate of Rs. 30.00 per sq.ft., and you may not like such tiles. When you select tiles that cost around Rs. 50.00 per sq.ft., you will have to pay an extra Rs. 20.00 per sq.ft. for the better quality of tiles.
In all such cases the basic rates of the materials which the builder is going to provide should be discussed from day one, so that when you finalize materials of your choice, you know exactly how much extra you will have to spend or how much of a deduction you are entitled to.
The list of such items is given here for general guidance.
| The builder may provide | You may prefer | Difference you will pay |
| --- | --- | --- |
| Painting consisting of whitewash, colourwash or distemper | A matt finish of acrylic paint | The extra/additional rate, i.e. the difference in rate between matt-finish paint and whitewash |
| Flooring tiles at a basic rate of, say, Rs. 30.00 per sq.ft. | Tiles of, say, Rs. 50.00 per sq.ft. | Rs. 20.00 per sq.ft. extra for the area of the floor |
| Dado up to 3 ft only in toilets | Dado for the full height, up to 7 ft to 8 ft | The additional area of 4 to 5 ft height |
| A standing kitchen platform of polished Kota stone | A granite platform | The difference in rate between granite and polished stone, plus the difference in labour rate |
| No tile dado against the standing kitchen platform and no storage rack below the platform | Both of these | Extra/additional cost for the area of tiles as well as for the storage rack |
| 2 or 3 light points per room, or as few as possible | More light points | Extra/additional cost for each additional point |
| Unspecified make and brand of fixtures such as bib cocks, washbasin, WC pan | Fixtures of reputed brands | The difference in cost of such material; in the absence of a basic rate this may lead to problems/disputes |
| Unbranded electrical fixtures | Fixtures of reputed brands | Extra/additional cost for such brands |
| No window grills, curtain rods, etc. | These items | Extra/additional cost for such items |
| No waterproofing in toilets or on the terrace | Waterproofing | The additional rate for waterproofing in toilet/terrace as applicable |
| Appliances such as lamps, fans, geyser, R/O plant, cable connection may or may not be provided | These appliances | The additional cost for such items |
|
2019-04-22 00:54:43
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2252524048089981, "perplexity": 2044.6954731797814}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578532948.2/warc/CC-MAIN-20190421235818-20190422021818-00215.warc.gz"}
|
https://stats.stackexchange.com/questions/209998/proving-the-probability-integral-transform-without-assuming-that-the-cdf-is-stri/210119
|
# Proving the probability integral transform without assuming that the CDF is strictly increasing
I know that the proof of the probability integral transform has been given multiple times on this site. However, the proofs I found use the hypothesis that the CDF $F_X(x)$ is strictly increasing (together, of course, with the hypothesis that $X$ is a continuous random variable). I know that actually the only required hypothesis is that $X$ is a continuous random variable, and strict monotonicity is not required. Can you show me how?
Since I'm already here, I also take the opportunity to ask for a simple application of the probability integral transform :) can you show me that, if $X$ has CDF $F_X(x)$ and $Y$ is the truncation of $X$ to $[a,b]$, then $Y$ is distributed as $F_X^{-1}(U)$ where $U\sim\mathrm{Uniform}[F_X(a),F_X(b)]$?
• if you would be so kind, in the proof of your link, could you point to where the requirement that $F_X(x)$ has to be strictly increasing. Thanks! – Erosennin Apr 29 '16 at 12:57
• @Erosennin, the proof assumes the existence of the inverse of $F_X(x)$. – DeltaIV Apr 29 '16 at 13:23
• Thanks! But is there ever a CDF that is not strictly increasing? You have probably already thought of this, though... – Erosennin Apr 29 '16 at 13:28
• Of course there is. The random variable whose pdf is equal to 1/2 in [0,0.5], 0 in [0.5,1] and 1/2 in [1,1.5], has a CDF which is continuous, but is not strictly increasing. – DeltaIV Apr 29 '16 at 13:40
• The hard part is dealing with the non-absolutely continuous part of $F$. The idea is made clear by considering the extreme case of discrete $F$. At stats.stackexchange.com/a/36246/919 I give an algorithm that implements the probability integral transform in that case (as well as supplying working code). Emulating that algorithm for arbitrary $F$ will answer your question. – whuber Apr 29 '16 at 14:38
In the wikipedia link provided by the OP, the probability integral transform in the univariate case is given as follows
Suppose that a random variable $$X$$ has a continuous distribution for which the cumulative distribution function (CDF) is $$F_X$$. Then the random variable $$Y=F_X(X)$$ has a uniform distribution.
PROOF
Given any random variable $$X$$, define $$Y = F_X (X)$$. Then:
\begin{align} F_Y (y) &= \operatorname{Prob}(Y\leq y) \\ &= \operatorname{Prob}(F_X (X)\leq y) \\ &= \operatorname{Prob}(X\leq F^{-1}_X (y)) \\ &= F_X (F^{-1}_X (y)) \\ &= y \end{align}
$$F_Y$$ is just the CDF of a $$\mathrm{Uniform}(0,1)$$ random variable. Thus, $$Y$$ has a uniform distribution on the interval $$[0, 1]$$.
The problem with the above is that it is not made clear what the symbol $$F_X^{-1}$$ represents. If it represented the "usual" inverse (that exists only for bijections), then the above proof would hold only for continuous and strictly increasing CDFs. But this is not the case, since for any CDF we work with the quantile function (which is essentially a generalized inverse),
$$F_Z^{-1}(t) \equiv \inf \{z : F_Z(z) \geq t \}, \;\;t\in (0,1)$$
Under this definition the wikipedia series of equalities continue to hold, for continuous CDFs. The critical equality is
$$\operatorname{Prob}(X\leq F^{-1}_{X} (y)) = \operatorname{Prob}(X\leq \inf \{x : F_X(x) \geq y \})= \operatorname{Prob}(F_X (X)\leq y)$$
which holds because we are examining a continuous CDF. This in practice means that its graph is continuous (and without vertical parts, since it is a function and not a correspondence). In turn, these imply that the infimum (the value of inf{...}), denote it $$x(y)$$, will always be such that $$F_X(x(y)) = y$$. The rest is immediate.
Regarding CDFs of discrete (or mixed) distributions, it is not (cannot be) true that $$Y=F_X(X)$$ follows a uniform $$U(0,1)$$, but it is still true that the random variable $$Z=F_{X}^{-1}(U)$$ has distribution function $$F_X$$ (so the inverse transform sampling can still be used). A proof can be found in Shorack, G. R. (2000). Probability for statisticians. ch.7.
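To make this concrete, here is a minimal sketch (not from the thread; the Exponential(1) distribution is an arbitrary choice) that checks the transform empirically and uses the quantile function for inverse transform sampling:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.exponential(scale=1.0, size=100_000)

# Probability integral transform: apply the CDF of Exponential(1) to its own samples
u = 1.0 - np.exp(-x)
counts, _ = np.histogram(u, bins=10, range=(0.0, 1.0))
print(counts / len(u))        # each bin should be close to 0.10 if u is Uniform(0, 1)

# Inverse transform sampling with the quantile function F^{-1}(t) = -log(1 - t)
t = rng.uniform(size=100_000)
z = -np.log(1.0 - t)
print(x.mean(), z.mean())     # both close to 1, the Exponential(1) mean
```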
• +1 A similar proof is also provided on pg. 54 of Casella and Berger's Statistical Inference, second edition. – StatsStudent Apr 30 '16 at 3:29
• @Analyst1 Thanks, it's good to have multiple references. – Alecos Papadopoulos Apr 30 '16 at 12:15
|
2020-08-14 03:42:40
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 19, "wp-katex-eq": 0, "align": 1, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9970457553863525, "perplexity": 236.21363402323422}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439739134.49/warc/CC-MAIN-20200814011517-20200814041517-00209.warc.gz"}
|
https://spinnaker8manchester.readthedocs.io/en/latest/_modules/spinn_front_end_common/interface/interface_functions/sdram_outgoing_partition_allocator/
|
# Source code for spinn_front_end_common.interface.interface_functions.sdram_outgoing_partition_allocator
# Copyright (c) 2019-2020 The University of Manchester
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
from spinn_utilities.progress_bar import ProgressBar
from pacman.model.graphs.machine import SourceSegmentedSDRAMMachinePartition
from spinn_front_end_common.utilities.exceptions import SpinnFrontEndException
class SDRAMOutgoingPartitionAllocator(object):

    def __call__(self, machine_graph, transceiver, placements, app_id):
        progress_bar = ProgressBar(
            total_number_of_things_to_do=len(machine_graph.vertices),
            string_describing_what_being_progressed=(
                "Allocating SDRAM for SDRAM outgoing edge partitions"))

        for machine_vertex in machine_graph.vertices:
            sdram_partitions = (
                machine_graph.get_sdram_edge_partitions_starting_at_vertex(
                    machine_vertex))
            for sdram_partition in sdram_partitions:

                # get placement, ones where the src is multiple,
                # you need to ask for the first pre vertex
                if isinstance(
                        sdram_partition, SourceSegmentedSDRAMMachinePartition):
                    placement = placements.get_placement_of_vertex(
                        next(iter(sdram_partition.pre_vertices)))
                else:
                    placement = placements.get_placement_of_vertex(
                        sdram_partition.pre_vertex)

                # total sdram
                total_sdram = sdram_partition.total_sdram_requirements()

                # if bust, throw exception
                if total_sdram == 0:
                    raise SpinnFrontEndException(
                        "Cannot allocate sdram size of 0 for "
                        "partition {}".format(sdram_partition))

                # allocate
|
2022-01-17 11:10:44
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.35663434863090515, "perplexity": 13581.899976471426}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320300533.72/warc/CC-MAIN-20220117091246-20220117121246-00444.warc.gz"}
|
https://unapologetic.wordpress.com/2011/03/17/sheaves/?like=1&source=post_flair&_wpnonce=134bd2d71b
|
# The Unapologetic Mathematician
## Sheaves
For the moment we will be more concerned with presheaves, but we may as well go ahead and define sheaves. These embody the way that not only can we restrict functions to localize them to smaller regions, but we can “glue together” local functions on small domains to define functions on larger domains. This time, let’s start with the fancy category-theoretic definition.
For any open cover $\{U_i\}_{i\in I}$ of an open set $U$, we can set up the following diagram:
$\displaystyle\mathcal{F}(U)\to\prod\limits_{i\in I}\mathcal{F}(U_i)\rightrightarrows\prod\limits_{(i,j)\in I\times I}\mathcal{F}(U_i\cap U_j)$
Let’s talk about this as if we’re dealing with a sheaf of sets, to make more sense of it. Usually our sheaves will be of sets with extra structure, anyway. The first arrow on the left just takes an element of $\mathcal{F}(U)$, restricts it to each of the $U_i$, and takes the product of all these restrictions. The upper arrow on the right takes an element of $\mathcal{F}(U_i)$ and restricts it to each intersection $U_i\cap U_j$. Doing this for each $U_i$ we get a map from the product over $i\in I$ to the product over all pairs $(i,j)$. The lower arrow is similar, but it takes an element in $\mathcal{F}(U_j)$ and restricts it to each intersection $U_i\cap U_j$. This may look the same, but the difference in whether the original set was the first or the second in the intersection makes a difference, as we shall see.
Now we say that a presheaf $\mathcal{F}$ is a sheaf if and only if this diagram is an equalizer for every open cover $U_i$. For it to be an equalizer, first the arrow on the left must be a monomorphism. In terms of sets, this means that if we take two elements $s\in\mathcal{F}(U)$ and $t\in\mathcal{F}(U)$ so that $s\vert_{U_i}=t\vert_{U_i}$ for all $U_i$, then $s=t$. That is, elements over $U$ are uniquely determined by their restrictions to any open cover.
The other side of the equalizer condition is that the image of the arrow on the left consists of exactly those products in the middle for which the two arrows on the right give the same answer. More explicitly, let’s say we have an $s_i\in\mathcal{F}(U_i)$ for each $U_i$, and let’s further assume that these elements agree on their restrictions. That is, we ask that $s_i\vert_{U_i\cap U_j}=s_j\vert_{U_i\cap U_j}$. If this is true for all pairs $(i,j)$, then the product $\left(s_i\right)_{i\in I}$ takes the same value under either arrow on the right. Thus it must be in the image of the arrow on the left — there must be some $s\in\mathcal{F}(U)$ so that $s\vert_{U_i}=s_i$. In other words, as long as the local elements $s_i\in\mathcal{F}(U_i)$ “agree” where their domains overlap, we can “glue them together” to give an element $s\in\mathcal{F}(U)$.
Again, the example to keep in mind is that of continuous real-valued functions. If we have a continuous function $f_U:U\to\mathbb{R}$ and another continuous function $f_V:V\to\mathbb{R}$, and if $f_U(x)=f_V(x)$ for all $x\in U\cap V$, then we can define $f:U\cup V\to\mathbb{R}$ by “gluing” these functions together over their common overlap: $f(x)=f_U(x)$ if $x\in U$, $f(x)=f_V(x)$ if $x\in V$, and it doesn’t matter which we choose when $x\in U\cap V$ because both functions give the same value there.
So, a sheaf is a presheaf where we can glue together elements over small domains so long as they agree when restricted to their intersections, and where this process defines a unique element over the larger, “glued-together” domain.
March 17, 2011 - Posted by | Topology
1. […] As ever, we want our objects of study to be objects in some category, and presheaves (and sheaves) are no exception. But, luckily, this much is […]
Pingback by Mappings Between Presheaves « The Unapologetic Mathematician | March 19, 2011 | Reply
2. For the left-most arrow in the first diagram to work, don’t the $U_i$ have to be subsets of $U$, which isn’t necessary for an open cover of the topological space, or am I once again missing something?
Comment by Avery Andrews | March 20, 2011 | Reply
3. oops open cover of $U$ in the topological space
Comment by Avery Andrews | March 20, 2011 | Reply
4. It’s not required for an open cover, but since $U$ is an open subspace we can always just pass to the intersection of the covering sets with $U$ itself. We still have an open cover of $U$, but all the sets are subsets of $U$.
Comment by John Armstrong | March 20, 2011 | Reply
5. I managed to think of this right after posting the question, but it’s nice to have it confirmed.
Comment by Avery Andrews | March 20, 2011 | Reply
6. […] Direct Image Functor So far our morphisms only let us compare presheaves and sheaves on a single topological space . In fact, we have a category of sheaves (of sets, by default) on . […]
Pingback by The Direct Image Functor « The Unapologetic Mathematician | March 21, 2011 | Reply
7. […] that we’ve talked a bunch about presheaves and sheaves in general, let’s talk about some particular sheaves of use in differential topology. Given a […]
Pingback by Sheaves of Functions on Manifolds « The Unapologetic Mathematician | March 23, 2011 | Reply
8. p 106-108 of http://folli.loria.fr/cds/1999/library/pdf/barrwells.pdf is a decent piece of side-reading for this, I think.
Comment by Avery D Andrews | March 27, 2011 | Reply
9. Nitpick: s/left/right/ in “upper arrow on the left”
|
2018-06-24 01:18:00
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 46, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9763162732124329, "perplexity": 255.62957930592992}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267865995.86/warc/CC-MAIN-20180624005242-20180624025242-00350.warc.gz"}
|
https://epynn.net/glossary.html
|
# Appendix
## Notations
These conventions relate to the mathematical expressions on EpyNN’s website. Divergences from the Python code are highlighted when applicable.
### Arithmetic operators
$$+$$ and $$-$$
Element-wise addition/subtraction between matrices, scalar addition/subtraction to/from each element of one matrix, scalar addition/subtraction to/from another scalar.
$$*$$ and $$/$$
Element-wise multiplication/division between matrices (See Hadamard product (matrices) on Wikipedia), matrix multiplication/division by a scalar, scalar multiplication/division by another scalar.
$$\cdot$$
Dot product between matrices (See Dot product on Wikipedia).
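As a quick illustration (a minimal sketch, not part of EpyNN itself), these operator conventions map onto NumPy as follows:

```python
import numpy as np

A = np.array([[1., 2.], [3., 4.]])
B = np.array([[5., 6.], [7., 8.]])

print(A + B)         # element-wise addition
print(A * B)         # element-wise (Hadamard) product
print(A / 2.0)       # each element divided by a scalar
print(np.dot(A, B))  # dot (matrix) product, equivalently A @ B
```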
### Names of matrices
Layers input and output:
$$X$$
Input of forward propagation.
$$A$$
Output of forward propagation.
$$\frac{\partial \mathcal{L}}{\partial A}$$
Input of backward propagation. Referred to as dA in Python code.
$$\frac{\partial \mathcal{L}}{\partial X}$$
Output of backward propagation. Referred to as dX in Python code.
Layers parameters:
$$W$$
Weight applied to inputs for Dense and Convolution layers.
$$U$$
Weight applied to inputs for RNN, LSTM and GRU layers.
$$V$$
Weight applied to hidden cell state for RNN, LSTM and GRU layers.
$$b$$
Bias added to the products of weight and input operations in trainable layers.
Linear and non-linear activation products:
$$Z~and~A$$
For Dense and Convolution layers, $$Z$$ is the weighted sum of inputs also known as linear activation product while $$A$$ is the product of non-linear activation.
$$Z~and~A$$
For Embedding, Pooling, Dropout and Flatten layers, $$Z$$ is the result of layer processing equal to the output $$A$$ of this same layer. It has no relationship with linear and non-linear activation - because there is none - but the names are kept for the purpose of homogeneity.
$$h\_~and~h$$
For recurrent RNN, LSTM and GRU layers, the underscore appended to the variable name denotes the linear activation product while the underscore-free variable denotes the non-linear activation product. Note that the underscore notation also applies to partial derivatives.
### Dimensions and indexing
Uppercase and lowercase letters represent dimensions and corresponding index, respectively.
In the python code, note that dimension D is stored in the layer’s .d dictionary attribute layer.d['d'] while the corresponding index d is a namespace variable such as d.
Frequently used:
$$K, k$$
Number of layers in network.
$$U, u$$
Number of units in layer $$k$$.
$$M, m$$
Number of training examples.
$$N, n$$
Number of features per training example.
Note that in the case where layer $$k-1$$ is a Dense layer or a recurrent layer RNN, GRU, LSTM with sequences=False, then $$N$$ is equal to the number of units in layer $$k-1$$.
Related to recurrent architectures:
$$S, s$$
Number of steps in sequence.
$$E, e$$
Number of elements for steps in sequence.
Note that in the context, it is considered that $$S * E = N$$.
Related to CNN:
$$H, h$$
Height of features.
$$W, w$$
Width of features.
$$D, d$$
Depth of features.
$$Sh, s_h$$
Stride height.
$$Sw, s_w$$
Stride Width.
$$Oh, o_h$$
Output height.
$$Ow, o_w$$
Output width.
$$Fh, f_h$$
Filter height (Convolution).
$$Fw, f_w$$
Filter Width (Convolution).
$$Ph, p_h$$
Pool height (Pooling).
$$Pw, p_w$$
Pool Width (Pooling).
Note that in the context, it is considered that $$H * W * D = N$$.
## Glossary
In order to not reinvent the wheel, note that definitions below may be sourced from external resources.
Activation
Function that defines how the weighted sum of the input is transformed into an output.
Bias
Additional set of parameters in one layer added to products of weight input operations with respect to units.
Cell
In the context of recurrent networks, one cell may be equivalent to one unit.
Class (Python)
Prototype of an object.
CNN
Type of neural network used in image recognition and processing.
Convolution
Layer used in CNNs to merge input data with filter or kernel and to produce a feature map.
Cost
Scalar value which is some kind of average of the loss.
Dense
Fully-connected layer made of one or more nodes. Each node receives input from all nodes in the previous layer.
Dictionary (Python)
Unordered collection of data organized as key: value pairs.
Dropout
Dropping out units in one layer for neural network regularization.
Embedding
Input layer in EpyNN; more generally, any process or object that prepares or contains the data fed to the layer that comes after the input layer.
Feed-Forward
Type of layer architecture wherein units do not contain loops.
Flatten
May refer to a reshaping layer acting forward to reduce 2D+ data into 2D data and reversing the operation backward.
Float (Python)
Number that is not an integer.
Gate
Acts as a threshold to help the network to distinguish when to use normal stacked layers or an identity connection.
GRU
Recurrent layer made of one or more unit cells. Two gates and one activation (hidden cell state).
Hyperparameters
May refer to settings whose value is used to control the learning process.
Immutable (Python)
Object whose internal state can not be changed.
Instance (Python)
An individual object of a certain class.
Instantiate (Python)
Creation of an object instance.
Instantiation (Python)
The action of creating an object instance.
Integer (Python)
Zero, positive or negative numbers without fractional part.
Layer
Collection of nodes or units operating together at a specific depth within a neural network.
List (Python)
Mutable data type containing an ordered and indexed sequence.
Loss
Error with respect to one loss function which is computed for each training example and output probability.
LSTM
Recurrent layer made of one or more unit cells. Three gates and two activation (hidden and memory cell states).
Metrics
Function used to judge the performance of one model.
Model
A specific design of a neural network which incorporates layers of given architecture.
Mutable (Python)
Object whose internal state can be changed.
Neural Network
Series of algorithms that endeavors to recognize underlying relationships in a set of data.
Neuron
May be equivalent to unit.
Node
May be equivalent to unit.
Parameters
May refer to trainable parameters within a neural network, namely weights and bias.
Pooling
Compression layer used in CNNs whose function is to reduce the spatial size of a given representation to reduce the amount of parameters and computation in the network.
Recurrent
Type of layer architecture wherein units contain loops, allowing information to be stored within one unit with respect to sequential data.
RNN
Recurrent layer made of one or more unit cells. Single activation (hidden cell state).
Set (Python)
Collection which is unordered and unindexed.
String (Python)
Immutable sequence data type made of characters.
Trainable
May refer to architecture layers incorporating unfrozen trainable parameters (weight, bias).
Tuple (Python)
Immutable sequence data type made of any type of values.
Unit
The functional entity within a layer; a layer is composed of a certain number of units.
Weight
Parameter within layers that transforms input data to output data.
|
2022-05-24 09:00:39
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5318542718887329, "perplexity": 2988.29987937111}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662570051.62/warc/CC-MAIN-20220524075341-20220524105341-00025.warc.gz"}
|
https://unicodebook.readthedocs.io/good_practices.html
|
# 9. Good practices
## 9.1. Rules
To limit or avoid issues with Unicode, try to follow these rules:
• decode all bytes data as early as possible: keyboard strokes, files, data received from the network, …
• encode back Unicode to bytes as late as possible: write text to a file, log a message, send data to the network, …
• always store and manipulate text as character strings
• if you have to encode text and you can choose the encoding: prefer the UTF-8 encoding. It is able to encode all Unicode 6.0 characters (including non-BMP characters), does not depend on endianness, is well supported by most programs, and its size is a good compromise (see the sketch after this list).
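A minimal sketch of the decode-early/encode-late pattern, assuming a UTF-8 encoded input file (the file names are placeholders, not part of the original text):

```python
# Decode as early as possible, work on character strings, encode as late as possible.
with open("input.txt", "rb") as f:          # hypothetical input file
    raw = f.read()                          # bytes from the outside world
text = raw.decode("utf-8")                  # decode early

processed = text.upper()                    # manipulate text as a character string

with open("output.txt", "wb") as f:         # hypothetical output file
    f.write(processed.encode("utf-8"))      # encode back as late as possible
```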
## 9.2. Unicode support levels
There are different levels of Unicode support:
• don’t support Unicode: only work correctly if all inputs and outputs are encoded to the same encoding, usually the locale encoding, use byte strings.
• basic Unicode support: decode inputs and encode outputs using the correct encodings, usually only support BMP characters. Use Unicode strings, or byte strings with the locale encoding or, better, an encoding of the UTF family (e.g. UTF-8).
• full Unicode support: have access to the Unicode database, normalize text, render correctly bidirectional texts and characters with diacritics.
These levels should help you estimate the status of the Unicode support of your project. Basic support is enough if all of your users speak the same language or live in neighbouring countries. Basic Unicode support usually means excellent support of Western European languages. Full Unicode support is required to support Asian languages.
By default, the C, C++ and PHP5 languages have basic Unicode support. For the C and C++ languages, you can have basic or full Unicode support using a third-party library like glib, Qt or ICU. With PHP5, you can have basic Unicode support using “mb_” functions.
By default, the Python 2 language doesn’t support Unicode. You can have basic Unicode support if you store text into the unicode type and take care of input and output encodings. For Python 3, the situation is different: it has direct basic Unicode support by using the wide character API on Windows and by taking care of input and output encodings for you (e.g. decode command line arguments and environment variables). The unicodedata module is a first step for a full Unicode support.
Most UNIX and Windows programs don’t support Unicode. Firefox web browser and OpenOffice.org office suite have full Unicode support. Slowly, more and more programs have basic Unicode support.
Don’t expect to have full Unicode support directly: it requires a lot of work. Your project may be fully Unicode compliant for a specific task (e.g. filenames), but only have basic Unicode support for the other parts of the project.
## 9.3. Test the Unicode support of a program
Tests to evaluate the Unicode support of a program:
• Write non-ASCII characters (e.g. é, U+00E9) in all input fields: if the program fails with an error, it has no Unicode support.
• Write characters not encodable to the locale encoding (e.g. Ł, U+0141) in all input fields: if the program fails with an error, it probably has basic Unicode support.
• To test if a program is fully Unicode compliant, write text mixing different languages in different directions and characters with diacritics, especially in Persian characters. Try also decomposed characters, for example: {e, U+0301} (decomposed form of é, U+00E9).
## 9.4. Get the encoding of your inputs
Console:
File formats:
• XML: the encoding can be specified in the <?xml ...?> header, use UTF-8 if the encoding is not specified. For example, <?xml version="1.0" encoding="iso-8859-1"?> (see the sketch after this list).
• HTML: the encoding can be specified in a “Content type” HTTP header, e.g. <meta http-equiv="content-type" content="text/html; charset=ISO-8859-1">. If it is not, you have to guess the encoding.
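A minimal sketch of the XML rule above (the helper name is illustrative, not a standard library function):

```python
import re

# Read the encoding declared in an XML prolog, falling back to UTF-8
# when no encoding attribute is present.
def xml_declared_encoding(first_bytes: bytes) -> str:
    match = re.search(rb'<\?xml[^>]*encoding="([^"]+)"', first_bytes)
    return match.group(1).decode("ascii") if match else "utf-8"

print(xml_declared_encoding(b'<?xml version="1.0" encoding="iso-8859-1"?>'))  # iso-8859-1
print(xml_declared_encoding(b'<?xml version="1.0"?>'))                        # utf-8
```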
Filesystem (filenames):
## 9.5. Switch from byte strings to character strings
Use character strings, instead of byte strings, to avoid mojibake issues.
|
2021-12-06 18:47:10
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5344555974006653, "perplexity": 5270.097402234305}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964363309.86/warc/CC-MAIN-20211206163944-20211206193944-00298.warc.gz"}
|
http://mathhelpforum.com/differential-equations/177514-variation-parameters-print.html
|
# Variation of Parameters
• April 10th 2011, 08:57 PM
Naples
Variation of Parameters
Solve using Variation of Parameters
1. $y'' + y = \tan(x)$
auxiliary equation
$m^2 + 1 = 0$
$Y_c = c\cos(x) + c\sin(x)$
$Y_1 = \cos(x)$
$Y_2 = \sin(x)$
Wronskian = 1
$f(x) = \tan(x)$
$u_1' = -\sin(x)\tan(x)$
$u_1 = ?$ Not sure how to integrate this...
$u_2' = \cos(x)\tan(x) = \sin(x)$
$u_2 = -\cos(x)$
I'd appreciate it if someone could check over what I've done so far and then tell me how to integrate $-\sin(x)\tan(x)$.
2.
$y'' - 16y = 2e^{4x}$
auxiliary equation
$m^2 - 16 = 0$
$Y_c = ce^{-4x} + ce^{4x}$
$Y_1 = e^{-4x}$
$Y_2 = e^{4x}$
Wronskian = 8
$f(x) = 2e^{4x}$
$u_1' = -(1/4)e^{8x}$
$u_1 = -(1/32)e^{8x}$
$u_2' = 1/4$
$u_2 = (1/4)x$
$Y_p = (1/4)xe^{4x} - (1/32)e^{4x}$
so $Y = ce^{-4x} + ce^{4x} + (1/4)xe^{4x} - (1/32)e^{4x}$
Not sure if I did something wrong because when I used undetermined coefficients to solve the problem, I only got $Y_p = (1/4)xe^{4x}$...?
• April 10th 2011, 09:03 PM
Chris L T521
Quote:
Originally Posted by Naples
Solve using Variation of Parameters
1. $y'' + y = \tan(x)$
auxiliary equation
$m^2 + 1 = 0$
$Y_c = c\cos(x) + c\sin(x)$
$Y_1 = \cos(x)$
$Y_2 = \sin(x)$
Wronskian = 1
$f(x) = \tan(x)$
$u_1' = -\sin(x)\tan(x)$
$u_1 = ?$ Not sure how to integrate this...
$u_2' = \cos(x)\tan(x) = \sin(x)$
$u_2 = -\cos(x)$
I'd appreciate it if someone could check over what I've done so far and then tell me how to integrate $-\sin(x)\tan(x)$.
Spoiler:
$-\sin x\tan x = -\dfrac{\sin^2x}{\cos x} = \cos x-\sec x$
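From there, a possible way to finish (a sketch, using the standard antiderivative $\int \sec x\,dx = \ln|\sec x + \tan x| + C$) is $u_1 = \int (\cos x - \sec x)\,dx = \sin x - \ln|\sec x + \tan x|$, so that $Y_p = u_1Y_1 + u_2Y_2 = \cos x\left(\sin x - \ln|\sec x + \tan x|\right) - \sin x\cos x = -\cos x\,\ln|\sec x + \tan x|$.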
Quote:
2.
$y'' - 16y = 2e^{4x}$
auxiliary equation
$m^2 - 16 = 0$
$Y_c = ce^{-4x} + ce^{4x}$
$Y_1 = e^{-4x}$
$Y_2 = e^{4x}$
Wronskian = 8
$f(x) = 2e^{4x}$
$u_1' = -(1/4)e^{8x}$
$u_1 = -(1/32)e^{8x}$
$u_2' = 1/4$
$u_2 = (1/4)x$
$Y_p = (1/4)xe^{4x} - (1/32)e^{4x}$
so $Y = ce^{-4x} + ce^{4x} + (1/4)xe^{4x} - (1/32)e^{4x}$
Not sure if I did something wrong because when I used undetermined coefficients to solve the problem, I only got $Y_p = (1/4)xe^{4x}$...?
It looks good. Note that we can say that $ce^{4x}-\frac{1}{32}e^{4x} \sim ke^{4x};\,\,k=c-\frac{1}{32}$, so it doesn't really add "new" information.
I hope this makes sense.
• April 11th 2011, 07:52 AM
topsquark
Quote:
Originally Posted by Naples
1. $y'' + y = \tan(x)$
auxiliary equation
$m^2 + 1 = 0$
$Y_c = c\cos(x) + c\sin(x)$
Just a quick little comment that has nothing to do with variation of parameters. The homogeneous solution to this equation is
$Y_c = c_1\cos(x) + c_2\sin(x)$
ie the arbitrary constants are not the same. You did this in your second example as well.
-Dan
|
2014-08-30 13:18:57
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 54, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9164097905158997, "perplexity": 741.7665070080583}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-35/segments/1408500835119.25/warc/CC-MAIN-20140820021355-00074-ip-10-180-136-8.ec2.internal.warc.gz"}
|
https://www.parispicnic.com/hh8nx/efvamdf/f4fo8r.php?id=futura-bold-italic-copy-and-paste-48931a
|
About Bold Text: this is an online bold text generator that converts plain text into bold text letters you can copy and paste to use anywhere you want. Strictly speaking it does not apply a font at all; it builds a set of Unicode symbols and special characters that look like a specific font or have a particular style, and these can be pasted into an Instagram bio, Facebook posts, Twitter status updates, YouTube comments, SMS and most other places that accept text. If you see blank boxes or question marks instead of styled letters, the app you pasted into does not support those Unicode symbols. If you want to transform your normal text into italic or bold italic text, use the italic text generator; if you are looking for "blackboard bold", check out the double-struck tool.
Futura is a geometric sans serif typeface with a great repute in the type design market. Paul Renner designed it and first released it in 1927, and at the time it looked chic and ahead of its age; the great design is due, in part, to Heinrich Jost, who was instrumental in realizing Paul Renner’s ideas. Futura is dazzling in its simplicity and welcoming in its rationality. One of the most favored weights, Futura Book, was released in 1932, and the Book Oblique font followed about 7 years later; Futura Book has quite remarkable usefulness in branding, manufacturing, printing, and advertising. Digital versions include Futura® BT Bold Italic (with its own licensing options and technical information) and Futura PT, designed by Isabella Chaeva, Paul Renner, Vladimir Andrich and Vladimir Yefimov and available from Adobe Fonts as a sans serif family with 22 styles for sync and web use; Adobe Fonts is the easiest way to bring great type into your workflow, wherever you are. By default, glyphs in a text face are designed to work with lowercase characters. You can also link the Futura-Bold web font to your website without downloading it from the server (JavaScript must be enabled in the browser). WHAT’S NEW: version 3.5 includes a number of fixes, the most significant being the addition of several mathematical symbols, including the radical symbol and the infinity symbol.
The Futura download itself is offered as a TTF (TrueType) file of about 41KB and supports up to 74 languages; other fonts offered for download include Alien Encounters Solid Bold Italic and Benguiat Bold, and the fonts listed are cleared for both personal and commercial use. Paratype has been designing, developing and distributing digital fonts since the 1980’s, and also creates custom fonts and provides font mastering services. Codec, by comparison, is a geometric sans serif type system designed by Cosimo Lorenzo Pancini with Francesco Canovaro and Andrea Tartarelli.
Supreme is a clothing brand founded in New York City; it caters to the downtown culture like skateboarding, hip hop and punk rock. The font used for its logotype is very similar to Futura Bold Italic, and the text lettering employed for the Supreme logo is Futura Bold Italic. A suggested substitute is Century Gothic Bold Italic: you can use Century Gothic Bold Italic at font size 80 to get an exact copy of the Supreme logo.
|
2021-05-15 15:18:36
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.32219743728637695, "perplexity": 14429.380859560977}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243991370.50/warc/CC-MAIN-20210515131024-20210515161024-00260.warc.gz"}
|
https://alep.science/content/labs/87-Q1%20Viscosity%20of%20Oil.html
|
# 87-Q1: Viscosity of Oil¶
Time 1$$\frac{1}{2}$$ hr.
## Apparatus¶
Tall burette $$(250\text{ml})$$ or $$500\text{ml}$$ measuring cylinder; 4 steel ball bearings of different diameters (between $$0.2 \text{ and } 0.65\text{cm})$$; micrometer screw gauge; liquid L; ruler; stopwatch; strong magnet; forceps; thermometer ($$0-100\text{°C}$$); clamp and stand (if burette is used only); 2 small dishes; note giving values of $$\rho_1 \text{ and } \rho_2$$; graph paper.
The aim of this experiment is to determine the viscosity of liquid L. Proceed as follows:
Set up the apparatus as shown below with burette C nearly filled with liquid L.
1. Determine and write down the diameters of all the steel ball bearings using the micrometer screw gauge. Then wet all the balls with the liquid L by keeping them in a small dish containing the liquid. (8 marks)
2. By using the forceps, drop the balls one by one in the liquid. Measure and record the time taken by each ball to fall the distance $$S$$ between points $$x$$ and $$y$$ in the liquid. The point $$x$$ should be chosen such that the distance from the meniscus of the liquid to $$x$$ is at least $$7\text{cm}$$. The point $$y$$ should be at least $$20\text{cm}$$ away from $$x$$. Measure and record the distance $$S$$ with a ruler. The bar magnet may be used to pull out the balls from the liquid L in the burette.
Make a table of results and tabulate the following: Average diameter ($$d$$) of each ball in cm, the square of the radius ($$r^2$$) of each ball in cm$$^2$$, the average terminal velocity $$v$$ of each ball in cms$$^{-1}$$. Record the room temperature. (marks: $$t$$ 8, $$s$$ 2, $$r^2$$ 2, $$v$$ 4)
3. Plot a graph of $$r^2$$ vs. $$v$$ and draw the best line through the points. Calculate the slope of the graph. (marks 10, 3)
4. Determine the viscosity $$\eta$$ in SI units of liquid L using the relation:
$\eta = \frac{2\text{g}}{9} \left( \rho_1 - \rho_2 \right) \frac{r^2}{v}$
where $$\rho_1$$ is the density of the steel balls, $$\rho_2$$ is the density of liquid L, and g $$(=9.8\text{ms}^{-2}$$) is the acceleration due to gravity. (6 marks)
5. Give the SI units of $$\eta$$ and state any sources of errors in your experiment. (marks 2, 5)
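Although the experiment sheet asks for a graphical determination, the same calculation can be done numerically. The sketch below is illustrative only: the ball radii, terminal velocities and the densities `rho1`, `rho2` are assumed example values, not data from the experiment.

```python
# Minimal sketch: estimating the viscosity of liquid L from the slope of the
# r^2 vs v graph. All numbers below are assumed, illustrative values.
import numpy as np

g = 9.8            # m s^-2, acceleration due to gravity
rho1 = 7800.0      # kg m^-3, assumed density of the steel balls
rho2 = 900.0       # kg m^-3, assumed density of liquid L

# Hypothetical measurements: ball radii (m) and terminal velocities (m/s)
r = np.array([0.10, 0.15, 0.20, 0.30]) * 1e-2
v = np.array([0.009, 0.020, 0.036, 0.081])

r2 = r**2
slope, intercept = np.polyfit(v, r2, 1)   # r^2 plotted against v; slope has units of m s

eta = (2.0 * g / 9.0) * (rho1 - rho2) * slope   # Pa s, from eta = (2g/9)(rho1 - rho2) r^2/v
print(f"slope = {slope:.3e} m s, eta = {eta:.2f} Pa s")
```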
|
2021-01-22 20:05:02
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6517847180366516, "perplexity": 736.6106294498705}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703531335.42/warc/CC-MAIN-20210122175527-20210122205527-00770.warc.gz"}
|
https://socratic.org/questions/what-is-the-vertex-of-y-1-5x-2
|
# What is the vertex of y= 1/5x^2 ?
Nov 26, 2015
Vertex is $\left(0 , 0\right)$
#### Explanation:
The standard (vertex) form of the equation of a parabola is
$y = a {\left(x - h\right)}^{2} + k$, where $a \ne 0$ and $h , k$ are real numbers;
the vertex is $\left(h , k\right)$
The equation $y = \frac{1}{5} {x}^{2} \implies y = \frac{1}{5} {\left(x - \textcolor{red}{0}\right)}^{2} + \textcolor{red}{0}$
Thus the vertex is $\left(0 , 0\right)$ , and graph will look like this
[Graph of $y = \frac{1}{5} {x}^{2}$ on the window $[-10, 10] \times [-5, 5]$: an upward-opening parabola with its vertex at the origin.]
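As a quick sanity check (not part of the original answer), a short sympy snippet confirms the vertex by locating the stationary point; the variable names here are ours.

```python
# Illustrative check that the vertex of y = (1/5) x^2 is (0, 0).
import sympy as sp

x = sp.symbols('x')
y = sp.Rational(1, 5) * x**2

xv = sp.solve(sp.diff(y, x), x)[0]   # x-coordinate where dy/dx = 0
print(xv, y.subs(x, xv))             # prints: 0 0
```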
|
2021-09-26 16:52:43
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 5, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6311139464378357, "perplexity": 7484.976419145762}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057882.56/warc/CC-MAIN-20210926144658-20210926174658-00302.warc.gz"}
|
https://www.breathinglabs.com/sports-athletics/simultaneous-assessment-of-lung-morphology-and-respiratory-motion-in-retrospectively-gated-in-vivo-microct-of-free-breathing-anesthetized-mice/
|
# Simultaneous assessment of lung morphology and respiratory motion in retrospectively gated in-vivo microCT of free breathing anesthetized mice
Cinematic X-ray imaging has already been used successfully to quantify breathing motion. Breathing changes the expansion state and the air content of the lung, which in turn modulates the X-ray attenuation and can therefore be detected as a change in the intensity of the lung region over time [12].
### Principle
To enable quantification of the respiratory motion, this information needs to be extracted from the acquired projection images. To this end, the average brightness in a user-defined rectangular region over the lung-diaphragm interface is analyzed, as indicated in red in Fig. 1D. Breathing modulates the position of the diaphragm and the expansion state of the chest, both resulting in a change in the average X-ray transmission over time $$U(\alpha )$$ (blue curve, Fig. 1A). However, the angular projection of the mouse anatomy as well as the modulation of the X-ray tube intensity caused by the electronics contribute more strongly to the obtained X-ray attenuation function than the breathing, as evidenced by the large baseline of the obtained function (blue) in both the polar and linear plots of Fig. 1A,B. To remove these unwanted effects we exploited the facts that the data should be periodic in 360$$^\circ$$ and that the anatomy of the mouse is mostly reflected in the first K frequencies of the Fourier transform of the signal. However, the inset in Fig. 1A shows that the modulation of the X-ray tube intensity yields a function that is not 2π-periodic. We therefore extended the function by appending its mirrored version, which always results in a 4π-periodic function. The background signal was then reconstructed by inverse Fourier transformation of the first K frequencies and subtracted from the original data, resulting in the red trace in Fig. 1A, which is also shown as a linear plot in Fig. 1B. (Note: $$K = 20$$ was used for the demonstrated example.) It can be seen in Fig. 1A,B (red curves) that the breathing peaks are now well defined. To further suppress potential imprecision in the background correction at the beginning and end of the acquisition, the angular ranges from 0–90$$^\circ$$ and 630–720$$^\circ$$ are discarded, as indicated by the black dashed vertical lines in Fig. 1B. The amplitude of the remaining angular range was scaled between 0 and 1, and a level of 0.3 was used to detect breathing events (horizontal black line in Fig. 1B). Example projection images at angles of 0, 90, 180 and 270$$^\circ$$ are illustrated in Fig. 1C. The normalized power spectra shown in Fig. 1D demonstrate that once the strong contribution of the shape of the mouse (blue, beginning of the spectrum) is removed, the breathing events and their harmonics (asterisks) can clearly be observed. Moreover, in the filtered spectrum (red) a peak at approximately 470 bpm is visible (§ in Fig. 1D), corresponding to the heart rate of the mouse. This trace is used both for deriving functional parameters and for sorting the projection images to perform the RG CT reconstruction.
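The background-removal step described above can be summarized in a few lines of NumPy. This is a minimal sketch, not the authors' code: it assumes a 1-D array `u` of mean intensities in the region of interest, sampled uniformly over the 720° acquisition, and uses `K = 20` low frequencies as in the example in the text.

```python
# Sketch of the Fourier-based background removal and breathing-event detection.
import numpy as np

def remove_background(u, K=20):
    """Subtract the slowly varying anatomy/tube-intensity background by keeping
    only the first K Fourier coefficients of the mirror-extended signal."""
    u = np.asarray(u, dtype=float)
    n = len(u)
    ext = np.concatenate([u, u[::-1]])     # mirror extension -> periodic signal
    spec = np.fft.rfft(ext)
    low = np.zeros_like(spec)
    low[:K + 1] = spec[:K + 1]             # keep DC and the first K frequencies
    background = np.fft.irfft(low, n=len(ext))[:n]
    return u - background

def breathing_events(corrected, angles_deg, level=0.3):
    """Drop 0-90 and 630-720 degrees, scale to [0, 1], flag samples above the level."""
    angles_deg = np.asarray(angles_deg, dtype=float)
    keep = (angles_deg >= 90) & (angles_deg <= 630)
    s = corrected[keep]
    s = (s - s.min()) / (s.max() - s.min())
    return angles_deg[keep], s > level
```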
### X-ray dose measurements
A commercial X-ray dose measurement system was used to measure the dose-length product for several acquisition protocols, as summarized in Table 3. Wrapping the probe with a 1 cm layer of pork to mimic scattering in the mouse did not affect the readings substantially. Note that the standard CT acquisition protocol of 17 s includes ramping up the tube voltage for 2 s, resulting in a total exposure time of 19 s. Our acquisition protocol for rgXLF, using a tube voltage of 90 kV, a current of 100 $$\upmu$$A, a field-of-view (FOV) of 20x20 $$\hbox {mm}^2$$ and an acquisition time of 34 s, therefore results in a dose of approximately 37 mGy, whereas the planar XLF measurement, performed with the same parameters apart from a tube current of 40 $$\upmu$$A and a total acquisition time of 30 s, results in a total X-ray dose of approximately 13 mGy.
### Parametrization of the breathing pattern
In order to parameterize the obtained breathing pattern, we applied the same strategy described by Khan et al. [11]. In short, a level function at 30% of the relative X-ray attenuation signal was used to identify single breathing events. The start of the inspiration phase was defined as the point of highest curvature prior to each breathing peak. The expiration phase was defined as the descending part of the curve from the peak until the start of the next breathing event. Although several parameters can be calculated for each individual event [11], in this study we focused on the parameter k of a function $$f(t) = I_{0}\exp (-k \times t^{2})+c$$ fitted to the expiration phase, and on the heart rate measured in Fourier space. Since the underlying lung motion is more complex, the fitted function is a simplification. The magnitude of the movement of the lung tissue increases towards the position of the diaphragm, so the size and placement of the region affect the calculated k-value, as shown in Supplemental Fig. 1A. Therefore, similar regions should be used when comparing the bulk motion of the lung between different subjects. To increase the robustness of the fit and to account for the noise and limited temporal resolution of the data, the data of all measured expiration phases are overlaid (Supplemental Fig. 1B).
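The fit of the expiration model can be reproduced with a standard least-squares routine. The sketch below is an assumption about how such a fit could be set up, not the authors' implementation; `t` and `y` stand for the pooled, overlaid expiration-phase samples described above.

```python
# Sketch: fitting f(t) = I0 * exp(-k * t**2) + c to overlaid expiration data.
import numpy as np
from scipy.optimize import curve_fit

def expiration_model(t, I0, k, c):
    return I0 * np.exp(-k * t**2) + c

def fit_k(t, y):
    """Return the decay parameter k of the simplified expiration model."""
    p0 = (y.max() - y.min(), 1.0, y.min())     # rough initial guess
    popt, _ = curve_fit(expiration_model, t, y, p0=p0, maxfev=10000)
    return popt[1]                              # k
```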
### Validation of the RG based respiratory motion measurements in the mdx mouse model
To evaluate the accuracy of rgXLF we compared it to the established planar XLF method performed subsequently in the same mdx mice and wild-type controls (wt), applying the same analysis pipeline described above. Figure 2A,B demonstrates that both methods gave similar results for the k-value of the expiration phase as well as for the heart rate (Pearson correlation coefficient of 0.92 for both the k-value and the heart rate). Moreover, both methods successfully discriminated mdx from wt mice (Mann-Whitney U = 0 for both the k-value and the heart rate) and revealed an elevated k-value and heart rate in mdx. Note that XLF and rgXLF were performed sequentially; because the breathing rate varied by 15% on average between the measurements, a perfect correlation cannot be expected.
### Retrospectively gated CT reconstruction
The data-derived breathing curves (shown exemplarily in Fig. 1B) were used to sort the angular projections acquired over the 720$$^\circ$$ rotation into two bins: (i) inspiration and (ii) expiration. Since the inspiration phases are very short, only a few projections fall into that bin; therefore predominantly the expiration phase was reconstructed in 3D. To this end we used the following scheme. Since each projection angle between 0 and 360$$^\circ$$ has been acquired twice, we generate a new data set over 360$$^\circ$$ by taking either the corresponding projection from the first or from the second rotation, depending on which showed the lower value in the calculated breathing curve. If both projections in the first and second rotation had a value below 0.1, the average of both frames was used. In addition, the number of angles at which no frame had a value of less than 0.3 was reported as a measure of the reliability of the approach. Since we applied a standard filtered back-projection algorithm for 3D reconstruction, which requires a set of equally distributed angular projections, only in cases in which the breathing events did not largely overlap between the 1st and 2nd rotation did we achieve reconstructions with a sharper delineation of the lung and an absence of motion artifacts, as shown in Fig. 3B, in contrast to the reconstruction obtained without applying RG in Fig. 3A. Averaging the two frames at the same angle, if both belong to the expiration phase, reduced the noise level and therefore further improved the image quality. This is helpful if the lung needs to be segmented for subsequent analysis, as demonstrated in Fig. 3C.
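The projection-selection rule just described can be written down compactly. The following is a sketch under assumed inputs (`proj1`, `proj2` are the projection stacks from the first and second rotation at identical angles; `b1`, `b2` the corresponding normalized breathing-curve values), not the reconstruction code used in the study.

```python
# Sketch of sorting/averaging projections for the retrospectively gated reconstruction.
import numpy as np

def select_expiration_projections(proj1, proj2, b1, b2, avg_level=0.1, max_level=0.3):
    out = np.empty_like(proj1, dtype=float)
    n_unreliable = 0
    for i in range(len(proj1)):
        if b1[i] < avg_level and b2[i] < avg_level:
            out[i] = 0.5 * (proj1[i] + proj2[i])        # both in deep expiration: average to reduce noise
        else:
            out[i] = proj1[i] if b1[i] <= b2[i] else proj2[i]   # take the frame with the lower value
            if min(b1[i], b2[i]) >= max_level:
                n_unreliable += 1                        # no frame below 0.3 at this angle
    return out, n_unreliable                             # n_unreliable reported as a reliability measure
```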
### Combined functional and anatomical characterization of the mdx mouse model
The use of our improved RG approach allows anatomical and functional differences between mdx mice and their wild-type controls to be quantified simultaneously. In Fig. 4, representative cross-sections, lungs segmented in 3D and parts of the isolated breathing events are shown for one wild-type control (Fig. 4A,B) and one mdx mouse (Fig. 4C,D). For segmenting the lung envelope, a simple threshold-based segmentation followed by manual removal of the air outside the mouse was used; as threshold, the arithmetic average of the mean grey values of the lung region and of soft tissue was employed. Already the cross-sections (Fig. 4A,C) demonstrate that the shape and position of the diaphragm differ dramatically between the mdx and the control mouse. This is further demonstrated in the 3D renderings of the segmented lungs: in mdx the post-caval lung lobe appears enlarged and elongated towards the abdomen, and additionally the lumen of the airways is increased (Fig. 4B,D). Figure 4E shows the traces of the breathing events extracted from the raw data sets of the CT acquisitions according to the principle described above. A more rapid decay (larger k-value) was clearly evident in mdx mice (red) compared to healthy controls (blue). In addition, the high-frequency modulations of the traces represent the heart beat.
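The threshold rule used for the lung segmentation is simple enough to show directly. This is an illustrative sketch only; `volume` and the two region masks are assumed inputs, not objects defined in the paper.

```python
# Sketch: threshold = arithmetic average of the mean grey values of lung and soft tissue.
import numpy as np

def lung_threshold_mask(volume, lung_roi, tissue_roi):
    thr = 0.5 * (volume[lung_roi].mean() + volume[tissue_roi].mean())
    return volume < thr   # air-filled lung voxels are darker than soft tissue
```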
|
2023-02-03 01:01:53
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2917174696922302, "perplexity": 1247.407007964847}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500041.2/warc/CC-MAIN-20230202232251-20230203022251-00494.warc.gz"}
|
https://byjus.com/rd-sharma-solutions/class-10-maths-chapter-1-real-numbers-exercise-1-5/
|
RD Sharma Solutions for Class 10 Chapter 1 Real Numbers Exercise 1.5
Irrational numbers are numbers which cannot be expressed in the form p/q, where p and q are integers and q ≠ 0. Proving the irrationality of numbers is clearly explained in this exercise. Our subject experts at BYJU’S have created the RD Sharma Solutions Class 10 to help students understand the correct procedure to solve the exercise problems. In case you need a reference for any question of this exercise, you can access the RD Sharma Solutions for Class 10 Maths Chapter 1 Real Numbers Exercise 1.5, for which a PDF is available below.
RD Sharma Solutions for Class 10 Chapter 1 Real Numbers Exercise 1.5 Download PDF
Access RD Sharma Solutions for Class 10 Chapter 1 Real Numbers Exercise 1.5
1. Show that the following numbers are irrational.
(i) 1/√2
Solution:
Let’s assume on the contrary that 1/√2 is a rational number. Then, there exist positive integers a and b such that
1/√2 = a/b where, a and b, are co-primes
⇒ (1/√2)² = (a/b)²
⇒ 1/2 = a²/b²
⇒ 2a² = b²
⇒ 2 | b² [∵ 2 | 2a² and 2a² = b²]
⇒ 2 | b ………… (ii)
⇒ b = 2c for some integer c.
⇒ b² = 4c²
⇒ 2a² = 4c² [∵ b² = 2a²]
⇒ a² = 2c²
⇒ 2 | a²
⇒ 2 | a ……… (i)
From (i) and (ii), we can infer that 2 is a common factor of a and b. But, this contradicts the fact that a and b are co-primes. So, our assumption is incorrect.
Hence, 1/√2 is an irrational number.
(ii) 7√5
Solution:
Let’s assume on the contrary that 7√5 is a rational number. Then, there exist positive integers a and b such that
7√5 = a/b where, a and b, are co-primes
⇒ √5 = a/7b
⇒ √5 is rational [∵ 7, a and b are integers ∴ a/7b is a rational number]
This contradicts the fact that √5 is irrational. So, our assumption is incorrect.
Hence, 7√5 is an irrational number.
(iii) 6 + √2
Solution:
Let’s assume on the contrary that 6+√2 is a rational number. Then, there exist co prime positive integers a and b such that
6 + √2 = a/b
⇒ √2 = a/b – 6
⇒ √2 = (a – 6b)/b
⇒ √2 is rational [∵ a and b are integers ∴ (a-6b)/b is a rational number]
This contradicts the fact that √2 is irrational. So, our assumption is incorrect.
Hence, 6 + √2 is an irrational number.
(iv) 3 − √5
Solution:
Let’s assume on the contrary that 3-√5 is a rational number. Then, there exist co prime positive integers a and b such that
3-√5 = a/b
⇒ √5 = 3 – a/b
⇒ √5 = (3b – a)/b
⇒ √5 is rational [∵ a and b are integers ∴ (3b – a)/b is a rational number]
This contradicts the fact that √5 is irrational. So, our assumption is incorrect.
Hence, 3-√5 is an irrational number.
2. Prove that the following numbers are irrationals.
(i) 2/√7
Solution:
Let’s assume on the contrary that 2/√7 is a rational number. Then, there exist co-prime positive integers a and b such that
2/√7 = a/b
⇒ √7 = 2b/a
⇒ √7 is rational [∵ 2, a and b are integers ∴ 2b/a is a rational number]
This contradicts the fact that √7 is irrational. So, our assumption is incorrect.
Hence, 2/√7 is an irrational number.
(ii) 3/(2√5)
Solution:
Let’s assume on the contrary that 3/(2√5) is a rational number. Then, there exist co – prime positive integers a and b such that
3/(2√5) = a/b
⇒ √5 = 3b/2a
⇒ √5 is rational [∵ 3, 2, a and b are integers ∴ 3b/2a is a rational number]
This contradicts the fact that √5 is irrational. So, our assumption is incorrect.
Hence, 3/(2√5) is an irrational number.
(iii) 4 + √2
Solution:
Let’s assume on the contrary that 4 + √2 is a rational number. Then, there exist co prime positive integers a and b such that
4 + √2 = a/b
⇒ √2 = a/b – 4
⇒ √2 = (a – 4b)/b
⇒ √2 is rational [∵ a and b are integers ∴ (a – 4b)/b is a rational number]
This contradicts the fact that √2 is irrational. So, our assumption is incorrect.
Hence, 4 + √2 is an irrational number.
(iv) 5√2
Solution:
Let’s assume on the contrary that 5√2 is a rational number. Then, there exist positive integers a and b such that
5√2 = a/b where, a and b, are co-primes
⇒ √2 = a/5b
⇒ √2 is rational [∵ a and b are integers ∴ a/5b is a rational number]
This contradicts the fact that √2 is irrational. So, our assumption is incorrect.
Hence, 5√2 is an irrational number.
3. Show that 2 − √3 is an irrational number.
Solution:
Let’s assume on the contrary that 2 – √3 is a rational number. Then, there exist co prime positive integers a and b such that
2 – √3= a/b
⇒ √3 = 2 – a/b
⇒ √3 = (2b – a)/b
⇒ √3 is rational [∵ a and b are integers ∴ (2b – a)/b is a rational number]
This contradicts the fact that √3 is irrational. So, our assumption is incorrect.
Hence, 2 – √3 is an irrational number.
4. Show that 3 + √2 is an irrational number.
Solution:
Let’s assume on the contrary that 3 + √2 is a rational number. Then, there exist co prime positive integers a and b such that
3 + √2= a/b
⇒ √2 = a/b – 3
⇒ √2 = (a – 3b)/b
⇒ √2 is rational [∵ a and b are integers ∴ (a – 3b)/b is a rational number]
This contradicts the fact that √2 is irrational. So, our assumption is incorrect.
Hence, 3 + √2 is an irrational number.
5. Prove that 4 − 5√2 is an irrational number.
Solution:
Let’s assume on the contrary that 4 – 5√2 is a rational number. Then, there exist co prime positive integers a and b such that
4 – 5√2 = a/b
⇒ 5√2 = 4 – a/b
⇒ √2 = (4b – a)/(5b)
⇒ √2 is rational [∵ 5, a and b are integers ∴ (4b – a)/5b is a rational number]
This contradicts the fact that √2 is irrational. So, our assumption is incorrect.
Hence, 4 – 5√2 is an irrational number.
6. Show that 5 − 2√3 is an irrational number.
Solution:
Let’s assume on the contrary that 5 – 2√3 is a rational number. Then, there exist co prime positive integers a and b such that
5 – 2√3 = a/b
⇒ 2√3 = 5 – a/b
⇒ √3 = (5b – a)/(2b)
⇒ √3 is rational [∵ 2, a and b are integers ∴ (5b – a)/2b is a rational number]
This contradicts the fact that √3 is irrational. So, our assumption is incorrect.
Hence, 5 – 2√3 is an irrational number.
7. Prove that 2√3 − 1 is an irrational number.
Solution:
Let’s assume on the contrary that 2√3 – 1 is a rational number. Then, there exist co prime positive integers a and b such that
2√3 – 1 = a/b
⇒ 2√3 = a/b + 1
⇒ √3 = (a + b)/(2b)
⇒ √3 is rational [∵ 2, a and b are integers ∴ (a + b)/2b is a rational number]
This contradicts the fact that √3 is irrational. So, our assumption is incorrect.
Hence, 2√3 – 1 is an irrational number.
8. Prove that 2 − 3√5 is an irrational number.
Solution:
Let’s assume on the contrary that 2 – 3√5 is a rational number. Then, there exist co prime positive integers a and b such that
2 – 3√5 = a/b
⇒ 3√5 = 2 – a/b
⇒ √5 = (2b – a)/(3b)
⇒ √5 is rational [∵ 3, a and b are integers ∴ (2b – a)/3b is a rational number]
This contradicts the fact that √5 is irrational. So, our assumption is incorrect.
Hence, 2 – 3√5 is an irrational number.
9. Prove that √5 + √3 is irrational.
Solution:
Let’s assume on the contrary that √5 + √3 is a rational number. Then, there exist co prime positive integers a and b such that
√5 + √3 = a/b
⇒ √5 = (a/b) – √3
⇒ (√5)² = ((a/b) – √3)² [Squaring on both sides]
⇒ 5 = (a²/b²) + 3 – (2√3a/b)
⇒ (a²/b²) – 2 = (2√3a/b)
⇒ (a/b) – (2b/a) = 2√3
⇒ (a² – 2b²)/2ab = √3
⇒ √3 is rational [∵ a and b are integers ∴ (a² – 2b²)/2ab is rational]
This contradicts the fact that √3 is irrational. So, our assumption is incorrect.
Hence, √5 + √3 is an irrational number.
10. Prove that √2 + √3 is irrational.
Solution:
Let’s assume on the contrary that √2 + √3 is a rational number. Then, there exist co prime positive integers a and b such that
√2 + √3 = a/b
⇒ √2 = (a/b) – √3
⇒ (√2)² = ((a/b) – √3)² [Squaring on both sides]
⇒ 2 = (a²/b²) + 3 – (2√3a/b)
⇒ (a²/b²) + 1 = (2√3a/b)
⇒ (a/b) + (b/a) = 2√3
⇒ (a² + b²)/2ab = √3
⇒ √3 is rational [∵ a and b are integers ∴ (a² + b²)/2ab is rational]
This contradicts the fact that √3 is irrational. So, our assumption is incorrect.
Hence, √2 + √3 is an irrational number.
11. Prove that for any prime positive integer p, √p is an irrational number.
Solution:
Let’s assume on the contrary that √p is a rational number. Then, there exist positive integers a and b such that
√p = a/b where, a and b, are co-primes
⇒ (√p)² = (a/b)²
⇒ p = a²/b²
⇒ pb² = a²
⇒ p | a² [∵ p | pb² and pb² = a²]
⇒ p | a ………… (ii)
⇒ a = pc for some integer c.
Now, b²p = a²
⇒ b²p = p²c²
⇒ b² = pc²
⇒ p | b² [∵ p | pc² and pc² = b²]
⇒ p | b ……… (i)
From (i) and (ii), we can infer that p is a common factor of a and b. But, this contradicts the fact that a and b are co-primes. So, our assumption is incorrect.
Hence, √p is an irrational number.
12. If p, q are prime positive integers, prove that √p + √q is an irrational number.
Solution:
Let’s assume on the contrary that √p + √q is a rational number. Then, there exist co prime positive integers a and b such that
√p + √q = a/b
⇒ √p = (a/b) – √q
⇒ (√p)² = ((a/b) – √q)² [Squaring on both sides]
⇒ p = (a²/b²) + q – (2√q a/b)
⇒ (a²/b²) + (q – p) = (2√q a/b)
⇒ (a/b) + ((q – p)b/a) = 2√q
⇒ (a² + b²(q – p))/2ab = √q
⇒ √q is rational [∵ a and b are integers ∴ (a² + b²(q – p))/2ab is rational]
This contradicts the fact that √q is irrational. So, our assumption is incorrect.
Hence, √p + √q is an irrational number.
|
2020-08-08 00:55:31
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8840427398681641, "perplexity": 2308.344965640641}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439737233.51/warc/CC-MAIN-20200807231820-20200808021820-00024.warc.gz"}
|
https://tnboardsolutions.com/samacheer-kalvi-11th-physics-guide-chapter-3/
|
Tamilnadu State Board New Syllabus Samacheer Kalvi 11th Physics Guide Pdf Chapter 3 Laws of Motion Text Book Back Questions and Answers, Notes.
## Tamilnadu Samacheer Kalvi 11th Physics Solutions Chapter 3 Laws of Motion
### 11th Physics Guide Laws of Motion Book Back Questions and Answers
Part – I:
I. Multiple choice questions:
Question 1.
When a car takes a sudden left turn in the curved road, passengers are pushed towards the right due to _______.
a) inertia of direction
b) inertia of motion
c) inertia of rest
d) absence of inertia
a) inertia of direction
Question 2.
An object of mass m held against a vertical wall by applying horizontal force F as shown in the figure. The minimum value of the force F is _______ (IIT JEE 1994)
a) Less than mg
b) Equal to mg
c) Greater than mg
d) Cannot determine
c) Greater than mg
Question 3.
A vehicle is moving along the positive x-direction, if a sudden brake is applied, then _______.
a) frictional force acting on the vehicle is along negative x-direction
b) frictional force acting on the vehicle is along the positive x-direction
c) no frictional force acts on the vehicle
d) frictional force acts in a downward direction
a) frictional force acting on the vehicle is along the negative x-direction
Question 4.
A book is at rest on the table which exerts a normal force on the book. If this force is considered as a reaction force, what is the action force according to Newton’s third law?
a) Gravitational force exerted by Earth on the book
b) Gravitational force exerted by the book on Earth
c) Normal force exerted by the book on the table
d) None of the above
c) Normal force exerted by the book on the table
Question 5.
Two masses m1 and m2 are experiencing the same force, where m1 < m2. The ratio of their accelerations a1/a2 is _______.
a) 1
b) less than 1
c) greater than 1
d) all the three cases
c) greater than 1
Question 6.
Choose an appropriate free body diagram for the particle experiencing net acceleration along the negative y-direction. (Each arrow mark represents the force acting on the system).
Question 7.
A particle of mass m sliding on the smooth double inclined plane (shown in the figure) will experience _______.
a) greater acceleration along the path AB
b) greater acceleration along the path AC
c) same acceleration in both the paths
d) no acceleration in both the paths
b) greater acceleration along the path AC
Question 8.
Two blocks of masses m and 2m are placed on a smooth horizontal surface as shown. In the first case, only a force F1 is applied from the left. Later only a force F2 is applied from the right. If the force acting at the interface of the two blocks in the two cases is the same, then F1 : F2 is _______. (Physics Olympiad 2016)
a) 1 : 1
b) 1 : 2
c) 2 : 1
d) 1 : 3
c) 2 : 1
Question 9.
Force acting on the particle moving with constant speed is _______.
a) always zero
b) need not be zero
c) always non zero
d) cannot be concluded
b) need not be zero
Question 10.
An object of mass m begins to move on a plane inclined at an angle θ. The coefficient of static friction of the inclined surface is μs. The maximum static friction experienced by the mass is _______.
a) mg
b) μs mg
c) μs mg sin θ
d) μs mg cos θ
d) μs mg cos θ
Question 11.
When the object is moving at a constant velocity on the rough surface _______.
a) net force on the object is zero
b) no force acts on the object
c) only external force acts on the object
d) only kinetic friction acts on the object
a) net force on the object is zero
Question 12.
When an object is at rest on the inclined rough surface _______
a) static and kinetic frictions acting on the object is zero
b) static friction is zero but kinetic friction is not zero.
c) static friction is not zero and kinetic friction is zero.
d) static and kinetic frictions are not zero.
c) static friction is not zero and kinetic friction is zero.
Question 13.
The centrifugal force appears to exist _______.
a) only in inertial frames
b) only in rotating frames
c) in an accelerated frame
d) both in inertial and non-inertial frames
b) only in rotating frames
Question 14.
Choose the correct statement from the following.
a) Centrifugal and centripetal forces are action-reaction pairs.
b) Centripetal forces is a natural force.
c) Centrifugal force arises from the gravitational force.
d) Centripetal force acts towards the center and centrifugal force appears to act away from the center in a circular motion.
d) Centripetal force acts towards the center and centrifugal force appears to act away from the center in a circular motion.
Question 15.
If a person moves from the pole to the equator, the centrifugal force acting on him _______.
a) increases
b) decreases
c) remains the same
d) increases and then decreases
a) increases
Question 1.
Explain the concept of Inertia. Write two examples each for Inertia of motion, inertia of rest and inertia of direction.
The inability of an object to move on its own, or to change its state of motion on its own, is called inertia. Inertia means resistance to a change of state. There are three types of inertia:
1. Inertia of rest:
The inability of an object to change its state of rest is called inertia of rest.
Example:
• When a stationary bus starts to move, the passengers experience a sudden backward push.
• A book lying on the table will remain at rest until it is moved by some external agencies.
2. Inertia of motion:
The inability of an object to change its state of uniform speed (constant speed) on its own is called inertia of motion.
Example:
• When the bus is in motion, and if the brake is applied suddenly, passengers move forward and hit against the front seat.
• An athlete running in a race will continue to run even after reaching the finishing point.
3. Inertia of direction:
The inability of an object to change its direction of motion on its own is called inertia of direction.
Example:
• When a stone attached to a string is in whirling motion, and if the string is cut suddenly, the stone will not continue to move in circular motion but moves tangential to the circle.
• When a bus moving along a straight line takes a turn to the right. The passengers are thrown towards left.
Question 2.
State Newton’s second law:
Newton’s second law states that “the force acting on an object is equal to the rate of change of its momentum”:
F = $$\frac { dp }{ dt }$$
If m is the mass of the object and v its velocity of motion, then $$\overline{p}$$ = m$$\overline{v}$$.
The above equation can be written as
F = $$\frac { d }{ dt }$$(m$$\overline{v}$$) = m$$\frac { dv }{ dt }$$
∴ F = ma
Question 3.
Define one newton:
One newton is defined as the force which acts on 1 kg of mass to give an acceleration 1 ms-2 in the direction of the force.
Question 4.
Show that impulse is the change of momentum.
If a very large force acts on an object for a very short time, the force is called an impulsive force or impulse. If a force F acts on an object over a very short time ∆t, then from Newton’s second law, dp = F dt.
Integrating, $$\int_{i}^{f} d p=\int_{t_{1}}^{t_{2}} F d t$$
Pf – Pi = F(t2 – t1) (when F is kept constant)
Pi – initial momentum of the object at time t1
Pf – final momentum of the object at time t2
Pf – Pi = ∆P ⇒ change in momentum
(t2 – t1) = ∆t ⇒ time interval
The above equation can be written as
∴ ∆P = F ∆t
i.e. J = $$\int_{t_{1}}^{t_{2}} F d t$$ is called the impulse, and it is equal to the change in momentum of the object, ∆P. When F is kept constant, J = ∆P = F ∆t.
Question 5.
Using free body diagram, show that it is easy to pull an object than to push it.
It is easier to pull an object than to push it.
An object pushed at a angle θ.
Case 1:
When a body is pushed at an arbitrary angle θ (0 to π/2), the applied force F is resolved into two components:
F sin θ = Horizontally – parallel to surface
F cos θ = Vertically – perpendicular to surface
The total downward force = mg + F cos θ. This is equal to the normal force (reaction):
NPush = mg + F cos θ … (1)
The static friction is equal to
fs(max) = μs Npush = μs (mg + F cos θ) … (2)
Case 2:
An object pulled at an angle θ
When an object is pulled at an angle θ, the applied force is resolved into two components.
F sin θ – Horizontally – parallel to the surface
F cos θ – Vertically – perpendicular to the surface
The total downward force = mg – F cos θ
This is equal to the normal force (reaction)
Npull = mg – F cos θ … (3)
The static friction is equal to
Fs(max) = μs Npull = μs (mg – F cos θ) … (4)
Conclusion:
From equations (1) & (3), or from (2) and (4), it is clear that the normal force (reaction), and hence the maximum static friction, due to pulling is less than that due to pushing. So it is easier to pull an object than to push it.
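A quick numerical illustration of this conclusion follows; all values (mass, applied force, angle, μs) are assumed example numbers, not part of the textbook answer.

```python
# Compare the maximum static friction for pushing vs pulling at the same angle.
import math

m, g = 20.0, 9.8                      # kg, m s^-2 (assumed)
F, theta = 50.0, math.radians(30)     # applied force (N) and angle (assumed)
mu_s = 0.4

N_push = m * g + F * math.cos(theta)  # vertical component adds to the weight
N_pull = m * g - F * math.cos(theta)  # vertical component relieves the weight
print("f_s(max) while pushing:", mu_s * N_push)   # larger friction to overcome
print("f_s(max) while pulling:", mu_s * N_pull)   # smaller friction to overcome
```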
Question 6.
Explain the various types of friction suggest a few methods to reduce friction.
There are two types of Friction:
(1) Static Friction:
Static friction is the force which opposes the initiation of motion of an object on the surface. The magnitude of static frictional force fs lies between
$$0 \leq f_{s} \leq \mu_{s} \mathrm{N}$$
where, µs – coefficient of static friction
N – Normal force
(2) Kinetic friction:
The frictional force exerted by the surface when an object slides is called as kinetic friction. Also called as sliding friction or dynamic friction,
fk = µk N
where µk – the coefficient of kinetic friction
N – Normal force exerted by the surface on the object
Methods to reduce friction:
Friction can be reduced
• By using lubricants
• By using Ball bearings
• By polishing
• By streamlining
Question 7.
What is the meaning of “Pseudo force”?
“A force which does not actually act on particles but appears due to acceleration of the frame is called – Pseudo force”. In order to use Newton I & II law in the rotational frame of reference, one needs to include a “Pseudo force” called “Centrifugal force”. A pseudo force has no origin, It arises due to the non-inertial nature of the frame considered. In order to solve circular motion problems from a rotating frame of reference, pseudo force is necessary.
Question 8.
State the empirical laws of static and kinetic friction.
Question 9.
State Newton’s third law.
Newton’s third law states that for every action there is an equal and opposite reaction.
Question 10.
What are inertial frames?
A frame of reference in which Newton’s I law of motion holds good is called an inertial frame of reference. In such a frame if no force acts on a body it continues to be at rest or in uniform motion. So it is called as an inertial frame.
Question 11.
Under what condition will a car skid on a leveled circular road?
On a leveled circular road, if the static friction is not able to provide enough centripetal force for the turn, i.e. if $$\mu_{s}<\frac{v^{2}}{r g}$$, the vehicle will start to skid.
Question 1.
Prove the law of conservation of linear momentum. Use it to find the recoil velocity of a gun when a bullet is fired from it.
Law: If there are no external forces acting on the system, then the total linear momentum of the system ($$\bar { P }$$tot) is always a constant vector. In other words, the total linear momentum of the system is conserved in time.
Proof:
By combining Newton’s second and third laws, the law of conservation of total linear momentum can be proved. When two particles interact with each other, they exert equal and opposite forces on each other.
Consider two particles 1 & 2.
Let F21 be the force exerted on 2 by 1
Let F12 be the force exerted on 1 by 2
According to Newtons III law
$$\bar { F }$$21 = – $$\bar { F }$$12 → (1)
In terms of momentum, according to Newtons II Law,
F12 = $$\frac { d }{ dt }$$$$\bar { P }$$1 → (2)
F21 = $$\frac { d }{ dt }$$$$\bar { P }$$2 → (3)
where $$\bar { P }$$1 is the momentum of particle 1 and $$\bar { P }$$2 is the momentum of particle 2. Substituting (2) & (3) in (1) gives $$\frac { d }{ dt }$$($$\bar { P }$$1 + $$\bar { P }$$2) = 0.
This implies that ($$\bar { P }$$1 + $$\bar { P }$$2) = constant vector.
$$\bar { P }$$1 + $$\bar { P }$$2 = $$\bar { P }$$tot is the total linear momentum of the two-particle system.
$$\bar { F }$$12 & $$\bar { F }$$21 are internal forces, and no external force acts on the system from outside, so the total linear momentum is conserved.
Recoil velocity of a gun: before firing, the gun (mass M) and the bullet (mass m) are at rest, so the total momentum is zero. After firing, conservation of momentum gives M$$\bar { V }$$ + m$$\bar { v }$$ = 0, so the recoil velocity of the gun is $$\bar { V }$$ = –(m/M)$$\bar { v }$$, directed opposite to the bullet’s velocity.
Question 2.
What are concurrent forces? State Lamis theorem.
Concurrent forces:
A collection of forces is said to be concurrent if the lines of forces act at a common point. If the concurrent forces are in same plane they are coplanar also, in additional to concurrent forces.
Lami’s theorem:
If a system of three concurrent and coplanar forces is in equilibrium, then Lami’s theorem states that “the magnitude of each force of the system is proportional to the sine of the angle between the other two forces”.
Proof:
Let F1, F2 and F3 be three coplanar and concurrent forces acting at a common point O as in the figure.
If the point O is in equilibrium, then according to Lami’s theorem, $$\frac{F_1}{\sin \alpha}=\frac{F_2}{\sin \beta}=\frac{F_3}{\sin \gamma}$$, where α, β and γ are the angles between the other two forces in each case.
Question 3.
Explain the motion of blocks connected by a string in 1) vertical motion 2) horizontal motion.
When blocks are connected by strings and force F is applied vertically or horizontally, it produces Tension (T) in the string which affects acceleration to some extent. Let us discuss vertical and horizontal motion here
Case 1:
1) Vertical motion of connected bodies:
Consider two blocks to masses m1 and m2 (m1 > m2) connected by light and in an extensible string that passes over the pulley.
Let T be the tension in the string and a be the acceleration. When the system is released m2 move vertically up and m1 move vertically down with acceleration a. The gravitational force m1g on m1 is used to lift m2. The upward direction is chosen as y. The free body diagram of both masses can be drawn as
Applying Newton II law for mass m2
T$$\hat{j}$$ – m2g$$\hat{j}$$ = m2a$$\hat{j}$$ → (1)
By comparing the components on both sides We get
T – m2g = m2a
Similarly, for mass m1: m1g – T = m1a. Adding the two equations gives a = $$\frac{(m_1-m_2)g}{m_1+m_2}$$ … (4)
If m1 = m2, i.e. both masses are equal, a = 0. This shows that if the masses are equal there is no acceleration, and the system as a whole will be at rest.
Tension in the string:
Substituting the value of ‘a’ from (4) in (1) gives T = $$\frac{2m_1m_2g}{m_1+m_2}$$
Equation (4) gives the magnitude of the acceleration. For m2 the acceleration vector is a$$\hat{j}$$ (upward);
for m1 the acceleration vector is –a$$\hat{j}$$ (downward).
Case 2:
2) Horizontal motion of connected masses:
Let mass m2 kept on the horizontal surface or table and m1 is hanging through a small pulley.
Assume there is no friction on the surface. As both the blocks are connected to the unstretchable string, m1 moves with an acceleration downward then m2 also moves with the same acceleration ‘a’ horizontally.
The forces acting on m2 are
1. Downward gravitational force (m2 g)
2. Upward normal force (N) exerted by the surface.
3. Horizontal Tension (T) exerted by the string.
Free body diagram
i) Downward gravitational force m1g
ii) Tension ‘T’ – upwards.
Applying II Law for m1.
T$$\hat{j}$$ – m1g$$\hat{j}$$ = – m1a$$\hat{j}$$
m1g$$\hat{j}$$ – T$$\hat{j}$$ = m1a$$\hat{j}$$
By comparing components
m1g – T = m1a → (1)
Apply Newton’s II Law for m2
T$$\hat{i}$$ = m2a$$\hat{i}$$ (along x)
Comparing components
T = m2a → (2)
There is no acceleration along the y direction, so
N – m2g = 0, i.e. N = m2g → (3)
Substituting (2) in (1) gives a = $$\frac{m_1g}{m_1+m_2}$$ … (4)
The tension is obtained by substituting (4) in (2): T = $$\frac{m_1m_2g}{m_1+m_2}$$
Conclusion: By comparing the two cases, it is clear that the tension in the string for horizontal motion is half of the tension for vertical motion for the same set of masses and strings.
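The expressions derived above can be checked numerically. The masses below are assumed example values chosen only to illustrate the comparison.

```python
# Evaluate acceleration and tension for the vertical (pulley) and horizontal (table) cases.
m1, m2, g = 3.0, 2.0, 9.8   # kg, kg, m s^-2 (assumed example values)

# Vertical motion over a pulley
a_v = (m1 - m2) * g / (m1 + m2)
T_v = 2 * m1 * m2 * g / (m1 + m2)

# Horizontal motion: m2 on the table, m1 hanging
a_h = m1 * g / (m1 + m2)
T_h = m1 * m2 * g / (m1 + m2)

print(a_v, T_v)   # 1.96 m/s^2, 23.52 N
print(a_h, T_h)   # 5.88 m/s^2, 11.76 N  (T_h is half of T_v, as stated)
```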
Question 4.
Briefly explain the origin of friction. Show that on an inclined plane, the angle of friction is equal to the angle of repose.
Friction is an opposing force exerted by the surface on an object, resisting its motion. Frictional force always opposes the relative motion between two surfaces that are in contact.
“when a force parallel to the surface is applied on the object, the force tries to move the object with respect to the surface. This relative motion is opposed by the surface by exerting a frictional force on the objects in a direction opposite to the applied force. Frictional force always act on the object parallel to surface on which the object is placed”.
To show angle of repose = angle of friction in an inclined plane.
Angle of friction: tan θ = fs(max)/N = μs
Consider an inclined plane on which an object is placed.
Let θ be the angle of inclination of the plane with horizontal.
“Angle of repose is the angle of inclined plane with horizontal such that an object placed on it begins to slide”.
The gravitational force mg is resolved into two components.
mg sin θ – parallel to inclined plane.
mg cos θ – perpendicular to inclined plane.
The component mg sin θ parallel to inclined plane tries to move the object down.
The component mg cos θ perpendicular to inclined plane is balanced by normal reaction N.
N = mg cos θ … (1)
When the object just begins to slide, static friction attains its maximum value:
fs(max) = μs N = mg sin θ
Dividing by (1): μs = tan θ
Since tan (angle of friction) = μs as well, the angle of repose is equal to the angle of friction.
Hence proved.
Question 5.
State Newton’s three laws and discuss their significance.
1. I law: Every object continues to be in its state of rest or of uniform motion unless there is an external force acting on it.
2. II law: The force acting on an object is equal to the rate of change of its momentum F = $$\frac{d(\bar{p})}{d t}$$
3. III law: For every action there in an equal and an opposite reaction.
Discussion:
1) Newton’s laws are vector laws. The equation $$\bar { F }$$ = $$\bar { ma }$$ is a vector equation and it is essentially equal to three scalar equations. In Cartesian co-ordinate this equation can be written as
Fx$$\hat{i}$$ + Fy$$\hat{j}$$ + Fz$$\hat{k}$$ = max$$\hat{i}$$ + may$$\hat{j}$$ + maz$$\hat{k}$$. From this we can infer that Fx cannot affect ay and az, and vice versa.
2) The acceleration experienced by the body at a time ‘t’ depends only on the force acting at that instant; it does not depend on the forces that acted on the body before: F(t) = ma(t). In general, the direction of the force may be different from the direction of motion.
Case 1: Force and motion in same direction:
Example: when an apple falls towards earth the direction of motion and the force are in same downward direction.
Case 2: Force and motion are not in same direction
Example: The moon experiences a force towards the earth. But if actually moves in an elliptical orbit. In this case direction of motion and force are different.
Case 3: Force and motion in opposite direction:
If an object is thrown vertically upwards direction of motion is upward, but a gravitational force is downward.
Case 4: Zero net force, but there is motion:
Example: When a raindrop gets detached from the cloud it experiences both a downward gravitational force and an upward air-drag force. As it descends, the upward drag force increases until it cancels the downward force. The raindrop then moves with a constant velocity, called the terminal velocity, till it touches the surface of the earth. Hence the raindrop falls with zero net force, and therefore zero acceleration, but with a non-zero terminal velocity.
Case 5: if multiple forces
$$\bar { F }$$1, $$\bar { F }$$2, $$\bar { F }$$3, …… $$\bar { F }$$n act on the same body, then the total force $$\bar { F }$$ net is equal to vector sum of individual forces, which provides the acceleration.
$$\bar { F }$$net = $$\bar { F }$$1 + $$\bar { F }$$2 + $$\bar { F }$$3 + …… + $$\bar { F }$$n
$$\bar { F }$$net = m$$\bar { a }$$
Example: Bow & arrow
Case 6: Newtons II Law can be written as
F = m$$\frac { dv }{ dt }$$ = m.$$\frac{d^{2} r}{d t^{2}}$$
Newton’s II law is basically a second-order derivative of the position vector; if this derivative is not zero, there must be a force acting on the body.
Case 7: If no force acts on a body, then
m$$\frac{d \bar{v}}{d t}$$ = 0
which implies $$\bar { V }$$ = constant. This is essentially the I law. So Newton’s II law is consistent with the I law, but they cannot be derived from each other. Newton’s II law is a cause-and-effect relation: force is the cause and acceleration is the effect,
(effect) ma = F(cause)
$$\frac{d \bar{p}}{d t}$$ = F.
Question 6.
Explain the similarities and differences between centripetal and centrifugal forces.
Question 7.
Briefly explain centrifugal force with suitable examples.
Circular motion can be analyzed from two different frames of reference. One is the Inertial frame where Newton’s laws are obeyed. The other is the rotating frame of reference which is noninertial as it is accelerating. To use Newton’s I and II law in the rotational frame of reference the pseudo force called as centrifugal force is needed.
The centrifugal forces appear to act on objects with respect to rotating frames. To explain consider an example, In the case of a whirling motion of a stone tied to a string, assume the stone has angular velocity ω in an inertial frame. If the motion of the stone is observed from a frame which is also rotating along with the stone with the same angular velocity ω then the stone appears to be at rest.
This implies that, in addition to the inward centripetal force (–mrω²), there must be an equal and opposite force acting outward, equal to (+mrω²). So the total force acting on the stone in the rotating frame is zero (–mrω² + mrω² = 0). This outward force +mrω² acting on the stone is called the centrifugal force.
Question 8.
Briefly explain ‘rolling friction’.
When an object moves on a surface essentially it is sliding on it. But the wheels move on the surface through rolling motion. In case of rolling motion when a wheel moves on a surface the point of contact is always at rest. Since the point of contact is at rest, there is no relative motion between the wheel and the surface. Hence the frictional force is very less.
At the same time if an object moves without a wheel, there is a relative motion between the object and the surface. As a result frictional force is larger. This makes it difficult to move the object.
Ideally, in pure rolling, the point of contact with the surface should be at rest, but in practice it is not so. Due to the elastic nature of the materials, there will be some deformation at the point of contact, on the wheel or on the surface. Due to this deformation there will be a minimal friction between the wheel and the surface, called rolling friction. In fact, rolling friction is much smaller than kinetic friction.
Question 9.
Describe the method of measuring angle of repose.
Angle of repose is the minimum angle that an inclined plane makes with the horizontal when a body placed on it just begins to slide. Consider a body of mass m placed on an inclined plane. The angle of inclination θ of the plane is adjusted until the body just begins to slide down. This θ is the angle of repose.
Various forces acting on the body are:
(a) Weight mg of the body acting vertically downwards
(b) The limiting friction fs (max) in upward direction along the inclined plane. It balances the component mg sin θ of the weight mg along the inclined plane.
∴ mg sin θ = fs(max) …. (1)
(c) The normal reaction N perpendicular to the inclined plane. It is balanced by the component mg cos θ of the weight perpendicular to the inclined plane:
∴ N = mg cos θ …. (2)
Dividing (1) by (2): tan θ = fs(max)/N = μs, where θ is the angle of repose.
Question 10.
Explain the need for banking of tracks.
On a leveled circular road, skidding mainly depends on the coefficient of static friction μs. The coefficient of static friction depends on the nature of the surface and has a maximum limiting value. To avoid skidding, usually the outer edge of the road is slightly raised compared to the inner edge. This is called banking of roads or tracks, and the angle of inclination is called the banking angle.
Let the surface of the road make angle θ with horizontal surface. Then the normal force makes an angle θ with vertical. When the car takes a turn, two forces are acting on the car.
(a) Gravitational force mg (downwards)
(b) Normal force N (Perpendicular to surface).
The normal force N can be resolved into two components, N cos θ and N sin θ. The component N cos θ balances the downward gravitational force,
while N sin θ provides the necessary centripetal acceleration. According to the second law,
N cos θ = mg
N sin θ = $$\frac{m v^{2}}{r}$$
Dividing the above equations,
tan θ = $$\frac{v^{2}}{r g}$$
V = $$\sqrt{r g \tan \theta}$$
∴ The banking angle θ and radius of curvature of the road or track determines the safe speed of car at the turning.
If the speed exceeds this safe limit, the vehicle starts to skid outward, but the frictional force comes into effect and provides an additional centripetal force to prevent outward skidding. At the same time, if the speed is less than the safe limit, the vehicle starts to skid inward, and again the frictional force comes into effect, this time reducing the net centripetal force to prevent inward skidding.
However if the speed of the vehicle is sufficiently greater than the correct speed the frictional force cannot stop the car from skidding. So to avoid skidding in circular road or tracks they are banked.
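The safe-speed relation v = √(rg tan θ) can be evaluated for a concrete case. The radius and banking angle below are assumed example values, not numbers from the textbook.

```python
# Evaluate the safe speed on a banked track: v = sqrt(r * g * tan(theta)).
import math

r = 100.0                  # radius of the curve in m (assumed)
theta = math.radians(10)   # banking angle (assumed)
g = 9.8                    # m s^-2

v_safe = math.sqrt(r * g * math.tan(theta))
print(f"safe speed ≈ {v_safe:.1f} m/s ≈ {v_safe * 3.6:.0f} km/h")
```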
Question 11.
Calculate the centripetal acceleration of moon towards the earth.
The moon orbits the earth once in 27.3 days in an almost circular orbit.
Radius of earth = 6.4 × 10⁶ m.
Solution:
The centripetal acceleration is given by a = $$\frac{v^{2}}{r}$$. For the moon this can be written as am = Rmω², where
ω → angular velocity of the moon = $$\frac{2\pi}{T}$$, with T = 27.3 days = 2.36 × 10⁶ s
am → centripetal acceleration of the moon due to the earth’s gravity
Rm → distance between the centres of the earth and the moon, which is 60 times the radius of the earth = 60 × 6.4 × 10⁶ m = 3.84 × 10⁸ m
∴ am = Rmω² = 3.84 × 10⁸ × $$\left(\frac{2 \pi}{2.36 \times 10^{6}}\right)^{2}$$ ≈ 2.7 × 10⁻³ ms⁻²
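The arithmetic above can be cross-checked with a short Python sketch (an illustrative addition, not part of the original solution); the period, earth radius and the factor of 60 are taken from the problem statement.

```python
import math

T = 27.3 * 24 * 3600      # orbital period of the moon in seconds
R_earth = 6.4e6           # radius of the earth in metres (given)
R_m = 60 * R_earth        # earth-moon distance, given as 60 earth radii

omega = 2 * math.pi / T   # angular velocity of the moon
a_m = R_m * omega ** 2    # centripetal acceleration a = R * omega^2
print(f"a_m = {a_m:.2e} m/s^2")   # ~ 2.7e-03 m/s^2
```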
IV. Conceptual Questions:
Question 1.
Why is it not possible to push a car from inside?
While trying to push a car from outside, a person pushes the ground backward at an angle. The ground offers an equal reaction in the opposite direction, so the car can be moved. But when the person sits inside, the car and the person become a single system and the force applied is an internal force. According to Newton’s third law, the total internal force acting on a system is zero, so it cannot accelerate the system.
Question 2.
There is a limit beyond which the polishing of a surface increases frictional resistance rather than decreasing it. Why?
When surfaces are highly polished, the area of contact between them increases. As a result, a large number of atoms and molecules lying on both surfaces start exerting strong attractive forces on each other. Therefore the frictional force increases.
Question 3.
No. According to Newton’s third law, for every action, there is an equal and opposite reaction. So, whatever case we consider, if there is an action there is always a reaction. So it is impossible.
Question 4.
Why does a parachute descend slowly?
When a parachute descends through the atmosphere, a force opposite to its motion acts on it due to air resistance. As the area of the parachute is large, the air resistance strongly opposes the motion and so it descends slowly.
Question 5.
When walking on ice one should take short steps. Why?
When a person walks on ice, he presses the ice with his feet and in turn the ice pushes the person back with an equal force. Since ice is slippery, it can supply only a small frictional force. By taking short steps, the push on the ice is nearly vertical, so the normal force is large while the horizontal (frictional) force required is small, and one can walk without slipping.
Question 6.
When a person walks on a surface the frictional force exerted by the surface on the person is opposite to the direction of motion. True or false.
False. The frictional force exerted by the surface on the person is in the direction of his motion; it acts as the external force that moves the person. When the person tries to move, he pushes the ground in the backward direction, and by Newton’s third law the ground pushes him in the forward direction. Hence the frictional force acts along the direction of motion.
Question 7.
Can the coefficient of friction be more than one?
No, it cannot be more than one for normal plane surfaces. But when the surfaces are so irregular that they have sharp minute projections and cavities on them, the coefficient of friction may be more than one.
Question 8.
Can we predict the direction of motion of a body from the direction of force on it?
Yes. The force of kinetic friction is always opposite to the direction of motion. The direction of the force of static friction can be found by using the principle of equilibrium: when the object is in equilibrium, the frictional force must point in the direction that makes the net force zero.
Question 9.
The momentum of a system of particles is always conserved. True or false.
True. The linear momentum of the system is always a constant vector, as long as no external forces act on it.
V. Numerical Problems:
Question 1.
A force of 50N acts on the object of mass 20kg shown in the figure. Calculate the acceleration of the object in x and y-direction.
Solution:
m = 20kg, F = 50N
The force can be resolved into two components:
Fx = F cos θ = 50 cos 30° = 50 × $$\frac{\sqrt{3}}{2}$$ = 25$$\sqrt{3}$$ ≈ 43.3 N
Fy = F sin θ = 50 sin 30° = 50 × $$\frac{1}{2}$$ = 25 N
ax = $$\frac{F_{x}}{m}$$ = $$\frac{43.3}{20}$$ ≈ 2.17 ms-2
ay = $$\frac{F_{y}}{m}$$ = $$\frac{25}{20}$$ = 1.25 ms-2
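The resolution of the force and the resulting accelerations can be checked with the short Python sketch below (an illustrative addition; the 30° angle is the one used in the solution).

```python
import math

m, F, theta_deg = 20.0, 50.0, 30.0     # mass (kg), force (N), angle (deg)
theta = math.radians(theta_deg)

Fx, Fy = F * math.cos(theta), F * math.sin(theta)   # resolve the force
ax, ay = Fx / m, Fy / m                             # Newton's second law per axis
print(f"Fx = {Fx:.1f} N, Fy = {Fy:.1f} N")          # 43.3 N, 25.0 N
print(f"ax = {ax:.2f} m/s^2, ay = {ay:.2f} m/s^2")  # 2.17, 1.25
```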
Question 2.
A spider of mass 50 g is hanging on a string of a cobweb as shown in the figure. What is the tension in the string?
Solution:
The spider hangs in equilibrium, so the tension in the string balances its weight:
T = mg = 0.05 × 9.8 = 0.49 N
Question 3.
What is the reading shown in spring balance?
Solution:
(i) ∵ The forces on both sides are equal, the reading in the spring balance is zero.
(ii)
The spring is pulled by a force along the inclined plane so.
F = mg sin θ
= 2 x 9.8 x sin 30
= 9.8 x 2 x 1/2
F = 9.8 N
Question 4.
The physics books are stacked on each other on a table in the sequence +1 volumes 1 and 2, and +2 volumes 1 and 2. (a) Identify the forces acting on each book and draw the free body diagram. (b) Identify the forces exerted by each book on the other.
Solution:
Force on book A
1) Downward gravitational force exerted by earth (mA g)
2) Upward normal force exerted by book B (NB).
Force on book B
i) Downward gravitational force exerted by earth (mB g)
ii) Downward force exerted by book A (NA)
iii) Upward normal force exerted by book C (Nc).
Force on book C
i) Downward gravitational force exerted by earth (mC g)
ii) Downward force exerted by book B (NB)
iii) Upward normal force exerted by book D (ND).
Force on book D
i) Downward gravitational force exerted by earth (mD g)
ii) Downward force exerted by book C (NC)
iii) Upward normal force exerted by the table (Ntable).
Question 5.
A bob attached to a string oscillates back and forth. Resolve the forces acting on the bob into components. What is the acceleration experienced by the bob at an angle θ?
Solution:
(i) Gravitational force(mg) acting downwards.
(ii) Tension (T) exerted by the string on the bob whose position determines the direction of T.
The bob moves in a circular arc and so it has centripetal acceleration. At points A and C the bob comes to rest momentarily and then its velocity increases as it moves towards B.
Hence tangential acceleration is along the arc.
The gravitational force can be resolved into two components.
mg cos θ along the string
mg sin θ perpendicular to the string
At point A & C T=mg cos θ and at all other points T is greater than mg cos θ.
∴ Centripetal force = T – mg cos θ
∴ mac = T- mg cos θ
Centripetal acceleration
ac = $$\frac{T - mg \cos \theta}{m}$$
Question 6.
Two masses m1 and m2 are connected with a string passing over a frictionless pulley fixed at the corner of the table as shown in the figure. The coefficient of static friction of mass m1 with the table is µs. Calculate the minimum mass m3 that may be placed on m1 to prevent it from sliding. Check if m1 = 15kg, m2 = 10kg, m3 = 25kg and µs = 0.2.
Solution:
(1) The masses m1 and m3 pressing on the table provide the maximum static friction
fs(max) = µsN = µs(m1 + m3)g …. (1)
The tension T in the string is provided by the hanging mass m2,
T = m2g …. (2)
If T ≤ fs(max), then m1 (together with m3) will not slide.
For the block just on the verge of sliding, m2g = µs(m1 + m3)g
⟹ minimum m3 = $$\frac{m_{2}}{\mu_{s}}$$ – m1
(2) If m1 = 15kg
m2 = 10kg
m3 = 25kg
µs = 0.2.
Then µs(m1 + m3)g = 0.2 (15 + 25) g
= 8g N
But T = m2g = 10g N
∵ T = 10g N > µs(m1 + m3)g = 8g N,
m1 + m3 will slide on the table.
(The minimum mass needed to prevent sliding is m3 = $$\frac{10}{0.2}$$ – 15 = 35 kg, so 25 kg is not enough.)
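A small Python check of the sliding condition and of the minimum m3 implied by µs(m1 + m3)g ≥ m2g is sketched below (an illustrative addition, using the numbers given in the problem).

```python
g = 9.8
m1, m2, m3, mu_s = 15.0, 10.0, 25.0, 0.2

f_s_max = mu_s * (m1 + m3) * g     # maximum static friction on the table
T = m2 * g                         # tension demanded by the hanging mass

print(f"f_s(max) = {f_s_max:.1f} N, T = {T:.1f} N")
print("slides" if T > f_s_max else "stays at rest")   # slides (98 N > 78.4 N)

m3_min = m2 / mu_s - m1            # minimum m3 so that mu_s*(m1 + m3)*g >= m2*g
print(f"minimum m3 = {m3_min:.1f} kg")                 # 35.0 kg
```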
Question 7.
Calculate the acceleration of the bicycle of mass 25 kg as shown in fig 1 & 2.
Solution:
Case 1:
Resultant force =
forward force – Frictional force
= 500 – 400
FR = 100N
FR = ma = 100N
a = $$\frac { 100 }{ 25 }$$
a = 4 ms-2
Case 2:
Resultant force = forward force – Frictional force = 400 – 400 = 0 N
a = 0.
Question 8.
Apply Lamis theorem on sling shot and calculate the Tension in each string?
Solution:
Question 9.
A foot ball player kicks a 0.8 kg ball and imparts it a velocity 12 ms-1. The contact between foot and ball is only for one sixtieth of a second find the average kicking force.
Solution:
m = 0.8 kg
v = 12 ms-1
t = $$\frac { 1 }{ 60 }$$S
F = ?
Change in momentum = impulse
Pf – Pi = Ft
mv – 0 = Ft
∴ F = $$\frac{mv}{t}$$ = 0.8 × 12 × 60 = 576 N
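The impulse-momentum arithmetic can be verified with a minimal Python sketch (an illustrative addition):

```python
m, v, t = 0.8, 12.0, 1 / 60        # mass (kg), final speed (m/s), contact time (s)
F = m * v / t                      # impulse-momentum theorem: F * t = m*v - 0
print(f"Average kicking force = {F:.0f} N")   # 576 N
```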
Question 10.
A stone of mass 2kg is attached to a string of length 1m. The string can withstand a maximum tension of 200N, What is the maximum speed that stone can have during the whirling motion.
Solution:
m = 2 kg
l = 1 m = r
T = F = 200 N
V = ?
During whirling motion, the force acting on the stone is a centripetal force which provides the necessary Tension in the string.
T = F = $$\frac{m v^{2}}{r}$$
200 = $$\frac{2 \times v^{2}}{1}$$
Vmax = 10 ms-1
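A one-line check of vmax = $$\sqrt{Tr/m}$$ (an illustrative addition):

```python
import math

m, r, T_max = 2.0, 1.0, 200.0      # mass (kg), string length (m), maximum tension (N)
v_max = math.sqrt(T_max * r / m)   # from T = m * v^2 / r
print(f"v_max = {v_max:.1f} m/s")  # 10.0 m/s
```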
Question 11.
Imagine that the gravitational force between the Earth and the Moon is provided by an invisible string that exists between the Moon and the Earth. What is the tension that exists in this invisible string due to the earth’s centripetal force?
Solution:
Mass of the moon m = 7.34 × 10²² kg
Distance between the moon and the earth Rm = 3.84 × 10⁸ m
Centripetal acceleration of the moon am ≈ 2.7 × 10⁻³ ms⁻² (from the earlier problem)
∴ Tension T = mam = 7.34 × 10²² × 2.7 × 10⁻³ ≈ 2.0 × 10²⁰ N
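Using the orbital data from the earlier problem, this estimate can be reproduced numerically (an illustrative sketch, not part of the original solution):

```python
import math

m_moon = 7.34e22             # mass of the moon (kg), given
R_m = 3.84e8                 # earth-moon distance (m), given
T_period = 27.3 * 24 * 3600  # orbital period (s), from the earlier problem

omega = 2 * math.pi / T_period
F = m_moon * R_m * omega ** 2      # centripetal force = m * omega^2 * R
print(f"Tension ~ {F:.1e} N")      # ~ 2.0e+20 N
```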
Question 12.
Two bodies of masses 15kg and 10kg are connected with light string kept on a smooth surface. A horizontal force F = 500N is applied to a 15kg as shown in the figure. Calculate the tension acting in the string.
Let m1 = 15kg
m2 = 10kg
F = 500N
Both blocks move with a common acceleration (a) under the force F = 500 N.
F = (m1 + m2)a
⟹ a = $$\frac{F}{m_{1}+m_{2}}$$ = $$\frac{500}{25}$$ = 20 ms-2
The string has to accelerate only m2, so the tension
T = m2a = 10 × 20 = 200 N
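The common acceleration and the string tension can be checked as follows (an illustrative sketch):

```python
m1, m2, F = 15.0, 10.0, 500.0      # masses (kg) and applied force (N)

a = F / (m1 + m2)                  # common acceleration of both blocks
T = m2 * a                         # the string only has to accelerate m2
print(f"a = {a:.1f} m/s^2, T = {T:.1f} N")   # 20.0 m/s^2, 200.0 N
```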
Question 13.
People often say “for every action there is an equal and opposite reaction”. Here they mean ‘action of a human’. Is it correct to apply Newton’s third law to human actions? What is meant by ‘action’ in Newton’s third law? Give your arguments based on Newton’s laws.
Solution:
In Newton’s third law, ‘action’ means a physical force exerted by one body on another; the ‘reaction’ is the equal and opposite force exerted by the second body on the first. The law is therefore applicable to human actions only when they involve physical forces; it is not applicable to psychological actions or thoughts.
Question 14.
A car takes a turn with velocity 50ms-1 on the circular road of the radius of curvature 10m. Calculate the centrifugal force experienced by a person of mass 60kg inside the car?
Solution :
V = 50 ms-1
r = 10m
centrifugal reaction = ?
m = 60kg
F = $$\frac{m v^{2}}{r}$$
= $$\frac { 60×50×50 }{ 10 }$$
= 15 × 10³
F = 15,000 N
Question 15.
A long stick rests on a surface. A person standing 10 m away from the stick slides an object of mass 0.5 kg along the surface towards it. With what minimum speed should the object be thrown so that it reaches the stick? (Coefficient of kinetic friction is 0.7)
Solution:
Force on the mass = μkN = μk mg
F = 0.7 x 0.5 x 9.8
F = 3.43 N
But F = ma
a = $$\frac { 3.43 }{ 0.5 }$$
= 6.86ms-2
m = 0.5kg
W.K.T
v² = u² – 2as
With v = 0 at the stick, u² = 2as
u = $$\sqrt{2 \times 6.86 \times 10}$$
= 11.71 ms-1
Velocity with which the mass is thrown = 11.71 ms-1
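The kinematics above can be verified numerically (an illustrative sketch, with g = 9.8 ms⁻² as in the solution):

```python
import math

mu_k, g, s = 0.7, 9.8, 10.0   # friction coefficient, gravity, distance to the stick (m)

a = mu_k * g                  # deceleration due to kinetic friction (mass cancels out)
u = math.sqrt(2 * a * s)      # v^2 = u^2 - 2as with v = 0 at the stick
print(f"Minimum speed = {u:.2f} m/s")   # 11.71 m/s
```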
### 11th Physics Guide Laws of Motion Additional Important Questions and Answers
I. Multiple choice questions:
Question 1.
The concept “force causes motion” was given by –
(a) Galileo
(b) Aristotle
(c) Newton
(d) Joule
(b) Aristotle
Question 2.
A body of 5 kg is moving with a velocity of 20m/s. If a force of 100 N is applied on it for 10s in the same direction as its velocity, what will now be the velocity of the body _______.
a) 2000 m/s
b) 220 m/s
c) 240 m/s
d) 260 m/s
b) 220 m/s
Question 3.
The inability of objects to move on their own or change their state of motion is called –
(a) force
(b) momentum
(c) inertia
(d) impulse
(c) inertia
Question 4.
A force vector applied on a mass is represented as $$\vec { f }$$ = 6$$\vec { i }$$ – 8$$\vec { j }$$ + 10$$\vec { k }$$ and accelerates with 1ms-2, what will be the mass of the body in kg _______.
a) 10$$\sqrt{2}$$
b) 20
c) 2$$\sqrt{10}$$
d) 10
a) 10$$\sqrt{2}$$
Question 5.
A particle of mass m is at rest at the origin at time t = 0. It is subjected to force F(t)= F0e-bt in the x direction. Its speed v(t) is depicted by which of the following curves?
Question 6.
If the brakes are applied suddenly in a moving bus, the passengers falling forward is an example of –
(a) Inertia of motion
(b) Inertia of direction
(c) Inertia of rest
(d) back pull
(a) Inertia of motion
Question 7.
Two masses m1 and m2 are connected by a light string passing over a smooth pulley. When set free, m1 moves down by 2 m in 2 s. The ratio m1/m2 is _______.
a) 9/7
b) 11/ 9
c) 13/11
d) 15/13
b) 11/ 9
Question 8.
Two blocks are connected by a string as shown in the figure. The upper block is hung by another string. A force ‘F’ applied on the upper string produces an acceleration of 2 ms-2 in the upward direction in both the blocks. If T and T1 are the tensions in the two parts of the string, then _______.
a) T = 84N; T1 = 50N
b) T = 70N; T1 = 70N
c) T = 84N; T1 = 60N
d) T = 70N; T1 = 60N
c) T = 84N; T1 = 60N
Question 9.
Four identical blocks each of mass m are linked by threads as shown. If the system moves with constant acceleration under the influence of force F, the tension T2 is _______.
a) F
b) F/2
c) 2F
d) F/4
b) F/2
Question 10.
Rate of change of momentum of an object is equal to –
(a) acceleration
(b) work done
(c) force
(d) impulse
(c) force
Question 11.
A block of mass 10 kg is pushed on a smooth inclined plane of inclination 30°, so that it has an acceleration 2ms-2. The applied force is _______.
a) 50 N
b) 60 N
c) 70 N
d) 80 N
c) 70 N
Question 12.
Two blocks of masses 6 kg and 3 kg are connected by the string as shown over a frictionless pulley. The acceleration of the system is _______.
a) 4 ms-2
b) 2 ms-2
c) Zero
d) 6 ms-2
c) Zero
Question 13.
The acceleration of the systems shown in the figure is _______.
a) 1 ms-2
b) 2 ms-2
c) 3 ms-2
d) 4 ms-2
a) 1 ms-2
Question 14.
A uniform rope of length L is pulled by a constant force P as shown. The tension in the rope at a distance x from the end where the force is applied is _______.
a) P
b) P(1 – $$\frac { x }{ L }$$)
c) px/L
d) p(1 + $$\frac { x }{ L }$$ )
b) P(1 – $$\frac { x }{ L }$$)
Question 15.
The law which is valid in both inertial and non-inertial frame is –
(a) Newton’s first law
(b) Newton’s second law
(c) Newton’s third law
(d) none
(c) Newton’s third law
Question 16.
One end of a string of length l is connected to a particle of mass m and the other to a small peg on a smooth horizontal table. If the particle moves in a circle with speed v, the net force on the particle (directed towards the centre) is _______.
a) T
b) T – $$\frac{m v^{2}}{r}$$
c) T + $$\frac{m v^{2}}{r}$$
d) 0
a) T
Question 17.
A block is kept on a frictionless inclined surface with the angle of inclination α. The incline is given an acceleration ‘a’ to keep the block stationary. Then a is equal to _______.
a) g
b) g tan α
c) g / tan α
d) g cosec α
b) g tan α
Question 18.
The action and reaction forces acting on –
(a) same body
(b) different bodies
(c) either same or different bodies
(d) none of the above
(b) different bodies
Question 19.
If an elevator is moving vertically up with an acceleration g, the force exerted on the floor by a passenger of mass M is _______.
a) mg
b) 1 / 2 mg
c) zero
d) 2 mg
d) 2 mg
Question 20.
A man of weight 80 kg, stands on a weighing scale in a lift which is moving upwards with a uniform acceleration of 5 ms-2. What would be the reading on the scale? [g = 10 ms-2].
a) 400 N
b) 1200 N
c) 800 N
d) zero
b) 1200 N
Question 21.
A person is standing in an elevator. In which situation does he find his weight to be more than the actual weight? _______
a) the elevator moves upwards with constant acceleration
b) the elevator moves downwards with constant acceleration
c) the elevator moves upwards with uniform velocity
d) the elevator moves downwards with uniform velocity
a) the elevator moves upwards with constant acceleration
Question 22.
Newton’s second law gives –
(a) $$\overrightarrow{\mathrm{F}} \propto \frac{d \overrightarrow{\mathrm{P}}}{\mathrm{dt}}$$
(b) $$\overrightarrow{\mathrm{F}}=\frac{d \overrightarrow{\mathrm{P}}}{\mathrm{dt}}$$
(c) $$\overrightarrow{\mathrm{F}}=m \vec{a}$$
(d) all the above
(d) all the above
Question 23.
A mass of 1 kg is suspended by a string. Another string C is connected to its lower end (as in the figure). If a sudden jerk is given to C, then _______.
a) The portion AB of the string break
b) The portion BC of the string will break
c) The mass will be rotating
d) none of the above
b) The portion BC of the string will break
Question 24.
In the above question, if the string C is stretched slowly, then _______.
a) The portion AB of the string break
b) The portion BC of the string will break
c) The mass will be rotating
d) none of the above
a) The portion AB of the string break
Question 25.
As shown in the figure, the tension in the horizontal cord is 30 N. The weight w and the tension in the string OA in newton are _______.
a) 30$$\sqrt{3}$$, 30
b) 30$$\sqrt{3}$$, 60
c) 60$$\sqrt{3}$$, 30
d) none of the above
b) 30$$\sqrt{3}$$, 60
Question 26.
A constant retarding force of 50 N is applied to a body of mass 20 kg moving initially with a speed of 15 ms-1. How long does the body take to stop?
(a) 0.75 s
(b) 1.33 s
(c) 6 s
(d) 35 s
Acceleration a = $$\frac{-F}{m}$$ = $$\frac{-50}{20}$$ = – 2.5 ms-2
u = 15 ms-1
v = 0
t = ?
v = u + at
0 = 15 – 2.5t
t = $$\frac{15}{2.5}$$ = 6s
Question 27.
A block of mass 10 kg is suspended through two light spring balances as in figure.
a) both scales will read 10 kg
b) both scales will read 5 kg
c) the upper scale will read 10 kg and the lower zero
d) the reading may be anything but their sum will be 10 kg
a) both scales will read 10 kg
Question 28.
Two blocks A and B of masses 2m and m respectively are connected by a massless and inextensible string. The whole system is suspended by a massless spring as in figure. The magnitude of the acceleration of A and B after the string is cut, are respectively.
a) g, g/2
b) g/2, g
c) g, g
d) g/2, g/2
b) g/2, g
Question 29.
Choose the correct statement _______.
a) The frictional forces are dependent on the roughness of the surface.
b) The kinetic friction is proportional to normal reaction
c) The friction is independent of area of contact
d) All statements are correct
d) All statements are correct
Question 30.
When an object of mass m slides on a frictionless surface inclined at an angle θ, then the normal force exerted by the surface is-
(a) g cos θ
(b) mg cos θ
(c) g sin θ
(d) mg tan θ
(b) mg cos θ
Question 31.
Two cars of unequal masses use tyres of similar type. If they are moving at the same initial speed, the minimum stopping distance _______.
a) is smaller for the heavier car
b) is smaller for the lighter car
c) is same for both car
d) depends on volume of the car
c) is same for both car
Question 32.
An ice block is kept on an inclined plane of angle of 30°. The coefficient of kinetic friction between the block and the inclined plane is $$\frac{1}{\sqrt{3}}$$. The acceleration of the block is _______.
a) zero
b) 2 ms-2
c) 1.5 ms-2
d) 5 ms-2
a) zero
Question 33.
Starting from rest a body slides down at 45° inclined plane in twice the time, it takes to slide down the same distance in the absence of friction. The coefficient of friction between the body and the inclined plane is __________.
a) 0.33
b) 0.25
c) 0.75
d) 0.80
c) 0.75
Question 34.
A uniform metal chain is placed on a rough table such that one end of the chain hangs down over the edge of the table. When one-third of its length hangs over the edge, the chain starts sliding. Then the coefficient of static friction is _______.
a) 3/4
b) 1/4
c) 2/3
d) 1/2
d) 1/2
Question 35.
If two masses m1 and m2 (m1 > m2) tied to string moving over a frictionless pulley, then the acceleration of masses –
(a) $$\frac{\left(m_{1}-m_{2}\right)}{m_{1}+m_{2}}$$ g
(b) $$\frac{m_{1}+m_{2}}{\left(m_{1}-m_{2}\right)}$$ g
(c) $$\frac{2 m_{1} m_{2}}{m_{1}+m_{2}}$$ g
(d) $$\frac{m_{1} m_{2}}{2 m_{1} m_{2}}$$ g
(a) $$\frac{\left(m_{1}-m_{2}\right)}{m_{1}+m_{2}}$$ g
Question 36.
While walking on ice one should take small steps to avoid slipping. This is because smaller steps ensure _______.
a) large friction
b) smaller friction
c) larger normal force
d) smaller normal force
c) larger normal force
Question 37.
A box is placed on inclined plane and has to be pushed down. The angle of inclination is _______.
a) equal to angle of friction
b) more than angle of friction
c) equal to angle repose
d) less than angle of repose
d) less than angle of repose
Question 38.
Three masses in contact is as shown above. If force F is applied to mass m1 then the contact force acting on mass m2 is –
(a) $$\frac{\mathrm{F}}{m_{1}+m_{2}+m_{3}}$$
(b) $$\frac{m_{1} F}{\left(m_{1}+m_{2}+m_{3}\right)}$$
(c) $$\frac{\left(m_{2}+m_{3}\right) F}{\left(m_{1}+m_{2}+m_{3}\right)}$$
(d) $$\frac{m_{3} \mathrm{F}}{m_{1}+m_{2}+m_{3}}$$
(c) $$\frac{\left(m_{2}+m_{3}\right) F}{\left(m_{1}+m_{2}+m_{3}\right)}$$
Question 39.
A block B is placed on block A. The mass of block B is less than the mass of block A. Friction exists between the blocks, whereas the ground on which block A is placed is taken to be smooth. A horizontal force ‘F’, increasing linearly with time, begins to act on B. The accelerations aA and aB of blocks A and B respectively are plotted against time t. The correctly plotted graph is _______.
Question 40.
Two blocks of masses m1 = 6 kg and m2 = 3 kg are arranged as in the figure. The coefficient of friction between m1 and m2 is 0.5 and that between m1 and the surface is 0.4. The maximum horizontal force that can be applied to the mass m1 so that they move without separation is _______.
a) 41 N
b) 61 N
c) 81 N
d) 101 N
c) 81 N
Question 1.
Identify the internal and external forces acting on the following systems.
a) Earth alone as a system
b) Earth and sun as a system
c) Our body as a system while walking
d) our body and earth as a system
Solution:
(a) Earth alone as a system:
Earth orbits the sun due to the gravitational attraction of the sun. If we consider earth as a system, then the sun’s gravitational force is an external force. If the moon is also taken into account, it also exerts an external force on earth.
(b) (Earth + sun) as a system:
In this case, there are two internal forces which form an action and reaction pair, i.e., the gravitational force exerted on the sun by the earth and the gravitational force exerted on the earth by the sun.
(c) Our body as a system:
While walking, we exert a force on earth and earth exerts an equal and opposite force on our body. If our body is considered as a system then the force exerted by the earth on our body is external.
(d) Our body + earth as a system.
In this case, there are two internal forces present in the system one is the force exerted by our body on earth and the other is the equal and opposite force exerted by the earth on our body.
Question 2.
When a cricket player catches the ball, he pulls his hands gradually in the direction of the ball’s motion. Why?
If the player stops his hands soon after catching the ball, the ball comes to rest very quickly, i.e., its momentum is brought to zero in a very short time. The average force acting on the hands is then very large, and the hands get hurt. To avoid this, the player brings the ball to rest slowly by pulling his hands back.
Question 3.
An impulse is applied to a moving object with a force at an angle of 20° w.r.t. velocity vector, what is the angle between the impulse vector and change in momentum vector?
Impulse and change in momentum are in the same direction. Therefore the angle between these two vectors is zero.
Question 4.
Why is a high jumper, after crossing the bar, made to fall on a spongy floor instead of a cemented floor?
After a high jump, landing on a cemented floor is more dangerous than landing on a spongy surface. A spongy surface brings the body to rest more slowly than the cemented floor, so the average force experienced by the body is smaller.
Question 5.
Obtain an expression for centrifugal force acting on a man on the surface of the earth.
Even though the earth is treated as an inertial frame, it is actually not so. The earth spins about its own axis with an angular velocity ω. Any object on the surface of the earth (a rotating frame) experiences a centrifugal force, which appears to act outward, away from the axis of rotation.
The centrifugal force on a man standing on the surface of the earth is mrω², where r is the perpendicular distance of the man from the axis of rotation. From the figure, r = R cos θ, where R = radius of the earth and θ = latitude of the place where the man is standing.
∴ Centrifugal force on the man = mω²R cos θ
Question 6.
A stone when thrown on a glass window smashes the window pane to pieces, but a bullet from the gun passes through, by making a clean hole. Why?
Due to its small speed, the stone remains in contact with the window pane for a longer time. It transfers its motion to the pane and breaks it into pieces. But the particles of the window pane near the hole are unable to share the fast motion of the bullet and so remain undisturbed, leaving a clean hole.
Question 7.
China wares are wrapped in a straw paper before packing why?
The straw paper between the china-wares increases the time of experiencing the jerk during transportation. Hence they strike against each other with less force and are less likely to get damaged.
Question 8.
Why is it necessary to bend the knees while jumping from a greater height?
If, during the landing, our feet come to rest at once, then in this very short time a large force acts on the feet. If we bend the knees, the time of impact increases and a smaller force acts on our feet, so we get hurt less.
Question 9.
Why is it difficult to drive a nail into a wooden block without supporting it?
When we hit the nail with the hammer, the nail and the unsupported block move forward together as a single system, and there is no force of reaction. When the block rests against a support, the reaction of the support holds the block in position and the nail is driven into the block.
Question 10.
Why is static friction called a self-adjusting force?
As applied force increases, the static friction also increases and becomes equal to the applied force. That is why static friction is called a self-adjusting force.
Question 11.
Carts with rubber tyres are easier to ply than those with iron wheels. Why?
The coefficient of friction between rubber tyres and road is much smaller than that between iron wheels and road.
Question 12.
Why ball bearings are used in machinery?
By using ball bearings between the moving parts of the machinery the sliding friction gets converted into rolling friction. The rolling friction is much lesser than the sliding friction. This reduces power dissipation.
Question 13.
A bird is sitting on the floor of a closed glass cage and the cage is in the hand of a girl. Will the girl experience any change in weight of the cage when the bird –
(i) starts flying in the cage with a constant velocity
(ii) flies upwards with acceleration
(iii) flies downwards with acceleration.
Solution:
In a closed cage, the air inside is bound to the cage.
(i) As the acceleration is zero, there is no change in the weight of the cage.
(ii) while flying upwards R – Mg = Ma
R = M (a+g)
The cage will appear heavier
(iii) while flying downwards
Mg – R = Ma
R = M (g – a)
The cage will appear lighter.
Question 14.
A long rope is hanging, passing over a pulley. Two monkeys of equal weight climb up from the opposite ends of the rope. One of them climbs up more rapidly relative to the rope. Which monkey will reach the top first? The pulley is frictionless and the rope is massless and inextensible.
There is no external force which may provide momentum to any monkey. The monkeys themselves give equal momenta to each other. Therefore two monkeys will climb up the rope at the same rate relative to earth. As their masses are equal they will reach the top simultaneously.
Question 15.
A light string passing over a smooth pulley connects two blocks of masses m1 and m2 (vertically). If the acceleration of the system is g/8, find the ratio of the two masses.
Solution:
a = $$\frac{\left(m_{1}-m_{2}\right)}{m_{1}+m_{2}}$$ g = $$\frac{g}{8}$$
⟹ 8(m1 – m2) = m1 + m2 ⟹ 7m1 = 9m2
∴ m1 : m2 = 9 : 7
Question 16.
Briefly explains how a horse is able to pull a cart?
Consider a cart connected to a horse by a string. The horse, while pulling the cart, produces a tension T in the string in the forward direction (action). The cart in turn pulls the horse with an equal force T in the opposite direction (reaction).
Initially, the horse presses the ground with a force ‘F’ in an inclined direction. The reaction ‘R’ of the ground acts on the horse in the opposite direction.
The reaction ‘R’ can be resolved into two perpendicular components.
1) The vertical component ‘V’ balances the weight of the horse.
2) The horizontal components ‘H’ helps the horse to move forward.
Let F be the force of friction.
The horse moves forward if H > T.
In that case net force acting on horse = H – T
If m is mass of horse and ‘a’ be its acceleration.
H – T = ma → (1)
The cart moves forward if T > F.
In this case, net force acting on the cart = T – F.
The weight of the cart is balanced by the reaction of the ground acting on it.
If M is the mass of the cart and a is its acceleration, T – F = Ma → (2)
Adding (1) and (2): H – F = (M + m)a
a = $$\frac { H-F }{ M+m }$$
Obviously a is positive if H – F is positive, i.e., H > F. Thus the system moves forward.
Question 17.
A man of mass m is standing on the floor of a lift. Find his apparent weight when the lift is (i) moving upwards with uniform acceleration ‘a’ (ii) moving downwards with uniform acceleration a (iii) at rest or moving with uniform velocity (v) (iv) falling freely. Take acceleration due to gravity as g.
Consider a man of mass ‘m’ standing on a weighing machine placed in a lift. The actual weight of the man is mg, acting vertically downwards through his centre of gravity ‘G’. The man presses on the weighing machine, which offers a normal reaction ‘R’ that is read by the machine.
So R is the apparent weight of the man.
Case 1:
When the lift moves upwards with an acceleration ‘a’
The net upward force on the man = R – mg = ma
Apparent weight R = m(g+a)
So when the lift accelerates upwards, the apparent weight of man increases
Case 2:
When the lift moves downwards with acceleration ‘a’:
Net downward force: mg – R = ma
R = m(g – a)
So when lift accelerates down, the apparent weight of man decreases.
Case 3:
When the lift is at rest or moving with uniform velocity,
acceleration a = 0
Net force on the man: R – mg = m × 0
R – mg = 0
R = mg
Apparent weight = actual weight
Case 4:
When the lift falls freely, a = g
the net downward force mg – R = ma
mg – R = mg
R = 0
The apparent weight of the man equal to zero.
Hence the person develops a feeling of weightlessness when he falls freely under gravity.
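The four cases can be summarised with a small Python sketch (an illustrative addition; the mass and accelerations below are example values, not from the text):

```python
def apparent_weight(m, a_lift, g=9.8):
    """Normal reaction R = m*(g + a): a > 0 for upward acceleration,
    a < 0 for downward acceleration, a = 0 at rest or uniform velocity,
    and a = -g for free fall."""
    return m * (g + a_lift)

m = 70.0  # example mass in kg (assumed for illustration)
cases = [("upward, a = 2 m/s^2", 2.0), ("downward, a = 2 m/s^2", -2.0),
         ("rest / uniform velocity", 0.0), ("free fall", -9.8)]
for label, a in cases:
    print(f"{label:>25}: R = {apparent_weight(m, a):.1f} N")
```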
Question 18.
Derive the law of conservation of linear momentum from Newton’s third law of motion.
Consider two bodies A (mass m1) and B (mass m2) moving with initial velocities $$\overline{u}$$1 and $$\overline{u}$$2 which collide for a short time ∆t; after the collision their velocities are $$\overline{v}$$1 and $$\overline{v}$$2. Let $$\overline{F}$$AB be the force exerted on A by B, and $$\overline{F}$$BA the force exerted on B by A.
Impulse of $$\overline{F}$$AB = $$\overline{F}$$AB ∆t = change in momentum of A = m1$$\overline{v}$$1 – m1$$\overline{u}$$1
Impulse of $$\overline{F}$$BA = $$\overline{F}$$BA ∆t = change in momentum of B = m2$$\overline{v}$$2 – m2$$\overline{u}$$2
By Newton’s third law, $$\overline{F}$$AB = – $$\overline{F}$$BA
∴ m1$$\overline{v}$$1 – m1$$\overline{u}$$1 = – (m2$$\overline{v}$$2 – m2$$\overline{u}$$2)
⟹ m1$$\overline{u}$$1 + m2$$\overline{u}$$2 = m1$$\overline{v}$$1 + m2$$\overline{v}$$2
i.e., the total linear momentum of the system before the collision equals the total linear momentum after the collision. Hence, in the absence of external forces, the linear momentum of the system is conserved.
https://studyqas.com/what-is-the-h-if-the-ph-of-a-solution-is-1-65/
# What is the [H+] if the pH of a solution is 1.65?
## This Post Has 6 Comments
1. kcopeland210 says:
The H⁺ concentration is 1.41 × 10⁻⁵ M
Explanation:
The pH of any solution is given by the equation
$pH = -log [H+]$
Where [H+] is the hydrogen ion concentration.
Substituting the given values in the above equation, we get:
4.85 = -log [H+]
[H+] = 1.41 × 10⁻⁵ M
2. annjetero2oy23ay says:
[H⁺] = 0.920 M
pH = 0.036
[OH⁻] = 1.086 × 10⁻¹⁴ M
Explanation:
To calculate [H⁺] first we calculate the total number of H⁺ ions.
H⁺ from HCl ⇒ 49.3 mL * 1.19 gSolution/mL * 38 gHCl / 100 gSolution = 22.29 g HCl
22.29 g HCl ÷ 36.45 g/mol = 0.612 mol HCl = 0.612 mol H⁺
H⁺ from HNO₃ ⇒ 19.5 mL * 1.42 gSolution/mL * 70 gHNO₃ / 100 gSolution = 19.38 g HNO₃
19.38 g HNO₃ ÷ 63 g/mol = 0.308 mol HNO₃ = 0.308 mol H⁺
Total H⁺ moles = 0.612 + 0.308 = 0.920 mol H⁺
Final Volume = 1.00 L
[H⁺] = 0.920 mol / 1.0 L = 0.920 M
Now we calculate pH:
pH = -log [H⁺]
pH = -log(0.920) = 0.036
To calculate [OH⁻], we calculate pOH:
pOH = 14 - pH
pOH = 14 - 0.036 = 13.964
pOH = 13.964 = -log[OH⁻]
[OH⁻] = $10^{-13.964}$ = 1.086 × 10⁻¹⁴ M
3. jjaheimhicks3419 says:
The concentration of H+ = 3 × $10^{-3}$ M and the pH of the solution = 2.6
Explanation:
Based on reduction potentials, hydrogen is a better reducing agent than copper, therefore copper($Cu^{2+}$) is the cathode and hydrogen ($H_{2}$) is the anode.
Cathode reaction (reduction): $Cu^{2+}(aq) + 2e^{-}$ ⇒ Cu(s)
Anode reaction (oxidation) : $H_{2}(g)$ ⇒ $2H^{+}(aq) + 2e^{-}$
net reaction: $H_{2}(g) +$ $Cu^{2+}(aq)$ ⇒ $2H^{+}(aq) + Cu(s)$
$E_{0}cell = E_{0}cathode - E_{0}anode$
E cathode = 0.337 v
$E_{0}cell = + 0.337 - 0 = 0.337$
Q(reaction quotient) = $\frac{[H^{+}]^{2} }{[Cu^{2+}]P_{H2} }$
for 2 electrons, $Cu^{2+}$ = 1.00 M but $H^{+}$ is unknown. We solve this using the Nernst equation.
$E = E^{0} -\frac{0.0257}{n}ln\frac{[H^{+}]^{2} }{[Cu^{2+}]P_{H2} }$
$0.490 = 0.337 -\frac{0.0257}{2}ln\frac{[H^{+}]^{2} }{[1][1]}$
$ln{[H^{+}]^{2} } = -11.9$
$2ln{[H^{+}] } = -11.9$
$ln{[H^{+}] } = -5.95$
$[H^{+}] = 3* 10^{-3} M$
pH = 2.6
4. SunsetPrincess says:
The pH of a solution is measure of the acidity of a certain solution based from the concentration of the hydrogen ions. It is associated with the hydrogen ion dissolved in the solution. It is expressed as pH = -log [H+]. We calculate the concentration of the hydrogen ions from this expression.
pH = -log [H+]
1.65 = -log [H+]
antilog [- 1.65] = [H+]
[H+] = 10^-1.65
[H+] = 0.0224 M
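The antilog step can be reproduced directly; a minimal Python sketch (an illustrative addition):

```python
pH = 1.65
H_conc = 10 ** (-pH)              # [H+] = 10^(-pH)
print(f"[H+] = {H_conc:.4f} M")   # 0.0224 M
```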
5. aaronpmoore1010 says:
pH → 7.46
Explanation:
We begin with the autoionization of water. This equilibrium reaction is:
2H₂O ⇄ H₃O⁺ + OH⁻ Kw = 1×10⁻¹⁴ at 25°C
Kw = [H₃O⁺] . [OH⁻]
We do not consider [H₂O] in the expression for the constant.
[H₃O⁺] = [OH⁻] = √1×10⁻¹⁴ → 1×10⁻⁷ M
Kw depends on the temperature
0.12×10⁻¹⁴ = [H₃O⁺] . [OH⁻] → [H₃O⁺] = [OH⁻] at 0°C
√0.12×10⁻¹⁴ = [H₃O⁺] → 3.46×10⁻⁸ M
- log [H₃O⁺] = pH
pH = - log 3.46×10⁻⁸ → 7.46
6. huwoman says:
The pH of a solution is a value used to measure the acidity of a solution and also a measure of the hydrogen ion present in the solution. It is logarithmically related to the hydrogen ion concentration. It is expressed as:
pH = -log [H+]
4.85 = -log[H+]
[H+] = 1.41x10^-5 M
https://repository.kaust.edu.sa/handle/10754/558578
Recent Submissions
• High-speed colour-converting photodetector with all-inorganic CsPbBr3 perovskite nanocrystals for ultraviolet light communication
(Light: Science & Applications, Springer Science and Business Media LLC, 2019-10-16) [Article]
Optical wireless communication (OWC) using the ultra-broad spectrum of the visible-to-ultraviolet (UV) wavelength region remains a vital field of research for mitigating the saturated bandwidth of radio-frequency (RF) communication. However, the lack of an efficient UV photodetection methodology hinders the development of UV-based communication. The key technological impediment is related to the low UV-photon absorption in existing silicon photodetectors, which offer low-cost and mature platforms. To address this technology gap, we report a hybrid Si-based photodetection scheme by incorporating CsPbBr3 perovskite nanocrystals (NCs) with a high photoluminescence quantum yield (PLQY) and a fast photoluminescence (PL) decay time as a UV-to-visible colour-converting layer for high-speed solar-blind UV communication. The facile formation of drop-cast CsPbBr3 perovskite NCs leads to a high PLQY of up to ~73% and strong absorption in the UV region. With the addition of the NC layer, a nearly threefold improvement in the responsivity and an increase of ~25% in the external quantum efficiency (EQE) of the solar-blind region compared to a commercial silicon-based photodetector were observed. Moreover, time-resolved photoluminescence measurements demonstrated a decay time of 4.5 ns under a 372-nm UV excitation source, thus elucidating the potential of this layer as a fast colour-converting layer. A high data rate of up to 34 Mbps in solar-blind communication was achieved using the hybrid CsPbBr3–silicon photodetection scheme in conjunction with a 278-nm UVC light-emitting diode (LED). These findings demonstrate the feasibility of an integrated high-speed photoreceiver design of a composition-tuneable perovskite-based phosphor and a low-cost silicon-based photodetector for UV communication.
• Low-Power Hardware Implementation of a Support Vector Machine Training and Classification for Neural Seizure Detection
(IEEE Transactions on Biomedical Circuits and Systems, IEEE, 2019-10-14) [Article]
In this paper, a low power support vector machine (SVM) training, feature extraction, and classification algorithm are hardware implemented in a neural seizure detection application. The training algorithm used is the sequential minimal optimization (SMO) algorithm. The system is implemented on different platforms: such as field programmable gate array (FPGA), Xilinx Virtex-7 and application specific integrated circuit (ASIC) using hardware-calibrated UMC 65nm CMOS technology. The implemented training hardware is introduced as an accelerator intellectual property (IP), especially in the case of large number of training sets, such as neural seizure detection. Feature extraction and classification blocks are implemented to achieve the best trade-off between sensitivity and power consumption. The proposed seizure detection system achieves a sensitivity around 96.77% when tested with the implemented linear kernel classifier. A power consumption evaluation is performed on both the ASIC and FPGA platforms showing that the ASIC power consumption is improved by a factor of 2X when compared with the FPGA counterpart.
• Stochastic Geometry-based analysis of Airborne Base Stations with Laser-powered UAVs
(IEEE Communications Letters, IEEE, 2019-10-11) [Article]
One of the most promising solutions to the problem of limited flight time of unmanned aerial vehicles (UAVs), is providing the UAVs with power through laser beams emitted from Laser Beam Directors (LBDs) deployed on the ground. In this letter, we study the performance of a laser-powered UAV-enabled communication system using tools from stochastic geometry. We first derive the energy coverage probability, which is defined as the probability of the UAV receiving enough energy to ensure successful operation (hovering and communication). Our results show that to ensure energy coverage, the distance between the UAV and its dedicated LBD must be below a certain threshold, for which we derive an expression as a function of the system parameters. Considering simultaneous information and power transmission through the laser beam using power splitting technique, we also derive the joint energy and the Signal-to-noise Ratio (SNR) coverage probability. The analytical and simulation results reveal some interesting insights. For instance, our results show that we need at least 6 LBDs/10km2 to ensure a reliable performance in terms of energy coverage probability.
• An explicit marching-on-in-time scheme for solving the time domain Kirchhoff integral equation.
(The Journal of the Acoustical Society of America, Acoustical Society of America (ASA), 2019-10-09) [Article]
A fully explicit marching-on-in-time (MOT) scheme for solving the time domain Kirchhoff (surface) integral equation to analyze transient acoustic scattering from rigid objects is presented. A higher-order Nyström method and a PE(CE)m-type ordinary differential equation integrator are used for spatial discretization and time marching, respectively. The resulting MOT scheme uses the same time step size as its implicit counterpart (which also uses Nyström method in space) without sacrificing from the accuracy and stability of the solution. Numerical results demonstrate the accuracy, efficiency, and applicability of the proposed explicit MOT solver.
• Ultraviolet-to-blue color-converting scintillating-fibers photoreceiver for 375-nm laser-based underwater wireless optical communication
(Optics Express, The Optical Society, 2019-10-08) [Article]
Underwater wireless optical communication (UWOC) can offer reliable and secure connectivity for enabling future internet-of-underwater-things (IoUT), owing to its unlicensed spectrum and high transmission speed. However, a critical bottleneck lies in the strict requirement of pointing, acquisition, and tracking (PAT), for effective recovery of modulated optical signals at the receiver end. A large-area, high bandwidth, and wide-angle-of-view photoreceiver is therefore crucial for establishing a high-speed yet reliable communication link under non-directional pointing in a turbulent underwater environment. In this work, we demonstrated a large-area, of up to a few tens of cm2, photoreceiver design based on ultraviolet(UV)-to-blue color-converting plastic scintillating fibers, and yet offering high 3-dB bandwidth of up to 86.13 MHz. Tapping on the large modulation bandwidth, we demonstrated a high data rate of 250 Mbps at bit-error ratio (BER) of 2.2 × 10−3 using non-return-to-zero on-off keying (NRZ-OOK) pseudorandom binary sequence (PRBS) 210-1 data stream, a 375-nm laser-based communication link over the 1.15-m water channel. This proof-of-concept demonstration opens the pathway for revolutionizing the photodetection scheme in UWOC, and for non-line-of-sight (NLOS) free-space optical communication.
• A Novel Subdomain 2D/Q-2D Finite Element Method for Power/Ground Plate-Pair Analysis
(IEEE Transactions on Electromagnetic Compatibility, IEEE, 2019-10-07) [Article]
Upon excitation by a surface magnetic current, a power/ground plate-pair supports only $\mathrm{TM}^{z}$ modes. This means that the magnetic field has only azimuthal components permitting a simple but effective domain decomposition method (DDM) to be used. In the proximity of an antipad, field interactions are rigorously modeled by a quasi-two-dimensional (Q-2D) finite element method (FEM) making use of three-dimensional (3D) triangular prism mesh elements. Since high-order $\mathrm{TM}^{z}$ modes are confined in the close proximity of the antipad, field interactions in the region away from the antipad only involve the fundamental mode and are rigorously modeled by a 2D FEM. This approach reduces 3D computation domain into a hybrid 2D/Q-2D domain. The discretization of this hybrid domain results in a global matrix system consisting of two globally coupled matrix equations pertinent to 2D and Q-2D domains. In this article, these two matrix equations are “decoupled” using a Riemann solver and the information exchange between the two domains is facilitated using numerical flux. The resulting decoupled two matrix equations are iteratively solved using the Gauss–Seidel algorithm. The accuracy, efficiency, and robustness of the proposed DDM are verified by four representative examples.
• End-to-end Performance Analysis of Delay-sensitive Multi-relay Networks
(IEEE Communications Letters, IEEE, 2019-10-07) [Article]
We study the end-to-end (E2E) performance of multi-relay networks in delay-constrained applications. The results are presented for both decode-and-forward (DF) and AF (A: amplify) relaying schemes. We use some fundamental results on the achievable rates of finite-length codes to analyze the system performance in the cases with short packets. Taking the message decoding delays and different numbers of hops into account, we derive closed-form expressions for the E2E packet transmission delay, the E2E error probability as well as the E2E throughput. Moreover, for different message decoding delays, we determine the appropriate codeword length and the relay power such that the same E2E error probability and packet transmission delay are achieved in the AF-and DF-relay networks. As we show, for different codeword lengths and numbers of hops, the E2E performance of multi-relay networks are affected by the message decoding delay of the nodes considerably.
• Modeling and Experimental Study of the Vibration Effects in Urban Free-Space Optical Communication Systems
(IEEE Photonics Journal, IEEE, 2019-10-04) [Article]
Free-space optical (FSO) communication, considered as a last-mile technology, is widely used in many urban scenarios. However, the performance of urban free-space optical (UFSO) communication systems fades in the presence of system vibration caused by many factors in the chaotic urban environment. In this paper, we develop a dedicated indoor vibration platform and atmospheric turbulence to estimate the Bifurcated-Gaussian (B-G) distribution model of the receiver optical power under different vibration levels and link distances using nonlinear iteration method. Mean square error (MSE) and coefficient of determination ($R^2$) metrics have been used to show a good agreement between the PDFs of the experimental data with the resulting B-G distribution model. Besides, the UFSO channel under the effects of both vibration and atmospheric turbulence is also explored under three atmospheric turbulence conditions. Our proposed B-G distribution model describes the vibrating UFSO channels properly and can easily help to perform and evaluate the link performance of UFSO systems, e.g., bit-error-rate (BER), outage probability. Furthermore, this work paves the way for constructing completed auxiliary control subsystems for robust UFSO links and contributes to more extensive optical communication scenarios, such as underwater optical communication, etc.
• Tunable Dual-Wavelength Self-injection Locked InGaN/GaN Green Laser Diode
(IEEE Access, Institute of Electrical and Electronics Engineers (IEEE), 2019-10-01) [Article]
We implemented a tunable dual-longitudinal-mode spacing InGaN/GaN green (521–528 nm) laser diode by employing a self-injection locking scheme that is based on an external cavity configuration and utilizing either a high-or partial-reflecting mirror. A tunable longitudinal-mode spacing of 0.20 – 5.96 nm was accomplished, corresponding to a calculated frequency difference of 0.22–6.51 THz, as a result. The influence of operating current and temperature on the system performance was also investigated with a measured maximum side-mode-suppression ratio of 30.4 dB and minimum dual-mode peak optical power ratio of 0.03 dB. To shed light on the operation of the dual-wavelength device arising from the tunable longitudinal-mode spacing mechanism, the underlying physics is qualitatively described. To the best of our knowledge, this tunable longitudinal-mode-spacing dual-wavelength device is novel, and has potential applications as an alternative means in millimeter wave and THz generation, thus possibly addressing the terahertz technology gap. The dual-wavelength operation is also attractive for high-resolution imaging and broadband wireless communication.
• Error Rate Analysis of Amplitude-Coherent Detection over Rician Fading Channels with Receiver Diversity
(IEEE Transactions on Wireless Communications, Institute of Electrical and Electronics Engineers (IEEE), 2019-09-27) [Article]
Amplitude-coherent (AC) detection is an efficient technique that can simplify the receiver design while providing reliable symbol error rate (SER). Therefore, this work considers AC detector design and SER analysis using M-ary amplitude shift keying (MASK) modulation with receiver diversity over Rician fading channels. More specifically, we derive the optimum, near-optimum and a suboptimum AC detectors and compare their SER with the coherent, phase-coherent, noncoherent and the heuristic AC detectors. Moreover, the analytical and asymptotic SER at high signal-to-noise ratios (SNRs) are derived for the heuristic detector using single and multiple receiving antennas. The obtained analytical and simulation results show that the SER of the AC and coherent MASK detectors are comparable, particularly for high values of the Rician K-factor, and small number of receiving antennas. In most of the considered scenarios, the heuristic AC detector outperforms the optimum noncoherent detector significantly, except for the binary ASK case at low SNRs. Moreover, the obtained results show that the heuristic AC detector is immune to phase noise, and thus, it outperforms the coherent detector in scenarios where the system is subject to considerable phase noise.
• Quantitative Phase and Intensity Microscopy Using Snapshot White Light Wavefront Sensing
(Scientific Reports, Springer Science and Business Media LLC, 2019-09-24) [Article]
Phase imaging techniques are an invaluable tool in microscopy for quickly examining thin transparent specimens. Existing methods are limited to either simple and inexpensive methods that produce only qualitative phase information (e.g. phase contrast microscopy, DIC), or significantly more elaborate and expensive quantitative methods. Here we demonstrate a low-cost, easy to implement microscopy setup for quantitative imaging of phase and bright field amplitude using collimated white light illumination.
• Deep UV Laser at 249 nm Based on GaN Quantum Wells
(ACS Photonics, American Chemical Society (ACS), 2019-09-20) [Article]
In this Letter, we report on deep UV laser emitting at 249 nm based on thin GaN quantum wells (QWs) by optical pumping at room temperature. The laser threshold was 190 kW/cm2 that is comparable to state-of-the-art AlGaN QW lasers at similar wavelengths. The laser structure was pseudomorphically grown on a c-plane sapphire substrate by metalorganic chemical vapor deposition, comprising 40 pairs of 4 monolayer (ML) GaN QWs sandwiched by 6 ML AlN quantum barriers (QBs). The low threshold at the wavelength was attributed to large optical and quantum confinement and high quality of the material, interface, and Fabry-Pérot facet. The emissions below and above the threshold were both dominated by transverse electric polarizations thanks to the valence band characteristics of GaN. This work unambiguously demonstrates the potentials of the binary AlN/GaN heterojunctions for high-performance UV emitters.
• Systematic and Unified Stochastic Tool to Determine the Multidimensional Joint Statistics of Arbitrary Partial Products of Ordered Random Variables
(IEEE Access, Institute of Electrical and Electronics Engineers (IEEE), 2019-09-19) [Article]
In this paper, we introduce a systematic and unified stochastic tool to determine the joint statistics of partial products of ordered random variables (RVs). With the proposed approach, we can systematically obtain the desired joint statistics of any partial products of ordered statistics in terms of the Mellin transform and the probability density function in a unified way. Our approach can be applied when all the K-ordered RVs are involved, even for more complicated cases, for example, when only the Ks (Ks<K) best RVs are also considered. As an example of their application, these results can be applied to the performance analysis of various wireless communication systems including wireless optical communication systems. For an applied example, we present the closed-form expressions for the exponential RV special case. We would like to emphasize that with the derived results based on our proposed stochastic tool, computational complexity and execution time can be reduced compared to the computational complexity and execution time based on an original multiple-fold integral expression of the conventional Mellin transform based approach which has been applied in cases of the product of RVs.
• Investigating the Performance of a Few-Mode Fiber for Distributed Acoustic Sensing
(IEEE Photonics Journal, Institute of Electrical and Electronics Engineers (IEEE), 2019-09-17) [Article]
We experimentally investigated the performance of a distributed acoustic sensor (DAS) designed using a few-mode fiber (FMF), when launching different spatial modes under intentional index perturbation within the fiber. Our demonstration showed that the quasi-single mode (QSM) operated FMF offers higher signal-to-noise ratio (SNR) for the DAS, compared with the case when launching other degenerate higher order modes. Additionally, we compared the behavior of the single-mode fiber (SMF)- and FMF-based DAS when using optical pulses of varying power levels. The FMF enables the realization of a DAS with longer sensing range and higher spatial resolution. The developed FMF-based DAS is further tested via sensing various vibration events produced by piezoelectric transducer (PZT) cylinder, pencil break, and loudspeaker.
• A Non-Isolated Hybrid-Modular DC-DC Converter for DC Grids: Small-Signal Modeling and Control
(IEEE Access, Institute of Electrical and Electronics Engineers (IEEE), 2019-09-13) [Article]
This paper presents small-signal modeling, stability analysis, and controller design of a nonisolated bidirectional hybrid-modular DC-DC Converter for DC grid applications. The DC-DC converter can be used to interconnect two different DC voltage levels in a medium-/high-voltage DC grid. Half-bridge Sub-Modules (SMs) and a high-voltage valve are the main components of the converter. The high-voltage valve can be implemented via employing series-connected Insulated-Gate Bipolar Transistors (IGBTs). Operation with zero voltage switching of the involved high-voltage valve is feasible, i.e., there is no concern pertinent to dynamic voltage sharing among the series-connected IGBTs. The power is transferred from one side to another through the involved SMs, where their capacitors are connected in series across the high-voltage side, while they are connected sequentially across the low-voltage side. In this paper, the state-space averaging technique is employed to derive the small-signal model of the presented converter for controller design. Closed-form expression of the duty cycle-to-inductor current transfer function is extracted. Comparison between simulation results of the small-signal model and the detailed circuit model is presented to authenticate the accuracy of the derived small-signal model. Finally, a scaled-down prototype is used to verify the accuracy of the small-signal model.
• Honeycomb-serpentine silicon platform for reconfigurable electronics
(Applied Physics Letters, AIP Publishing, 2019-09-09) [Article]
The shape reconfiguration is an arising concept in advanced electronics research, which allows the electronic platform to change in shape and assume different configurations while maintaining high electrical functionality. The reconfigurable electronic platforms are attractive for state of the art biomedical technologies, where the reshaping feature increases the adaptability and compliance of the electronic platform to the human body. Here, we present an amorphous silicon honeycomb-shaped reconfigurable electronic platform that can reconfigure into three different shapes: the quatrefoil shape, the star shape, and an irregular shape. We show the reconfiguration capabilities of the design in microscale and macroscale fabricated versions. We use finite element method analysis to calculate the stress and strain profiles of the microsized honeycomb-serpentine design at a prescribed displacement of 100 μ m. The results show that the reconfiguration capabilities can be improved by eliminating certain interconnects. We further improve the design by optimizing the serpentine interconnect parameters and refabricate the platform on a macroscale to facilitate the reconfiguration process. The macroscale version demonstrates an enhanced reconfiguration capability and elevates the stretchability by 21% along the vertical axis and by 36.6% along the diagonal axis of the platform. The resulting reconfiguring capabilities of the serpentine-honeycomb reconfigurable platform broaden the innovation opportunity for wearable electronics, implantable electronics, and soft robotics.
• Flexible tag design for semi-continuous wireless data acquisition from marine animals
(Flexible and Printed Electronics, IOP Publishing, 2019-09-06) [Article]
Acquisition of sensor data from tagged marine animals has always been a challenge. Presently, we come across two extreme mechanisms to acquire marine data. For continuous data acquisition, hundreds of kilometers of optical fiber links are used which in addition to being expensive, are impractical in certain circumstances. On the other extreme, data is retrieved in an offline and invasive manner after removing the sensor tag from the animal's skin. This paper presents a semi-continuous method of acquiring marine data without requiring tags to be removed from the sea animal. Marine data is temporarily stored in the tag's memory, which is then automatically synced to floating receivers as soon as the animal rises to the water surface. To ensure effective wireless communication in an unpredictable environment, a quasi-isotropic antenna has been designed which works equally well irrespective of the orientation of the tagged animal. In contrast to existing rigid wireless devices, the tag presented in this work is flexible and thus convenient for mounting on marine animals. The tag has been initially tested in air as a standalone unit with a communication range of 120m. During tests in water, with the tag mounted on the skin of a crab, a range of 12m has been observed. In a system-level test, the muscle activity of a small giant clam (Tridacna maxima) has been recorded in real time via the non-invasive wireless tag.
• Multi-cell MMSE Combining over Correlated Rician Channels in Massive MIMO Systems
(IEEE Wireless Communications Letters, Institute of Electrical and Electronics Engineers (IEEE), 2019-09-04) [Article]
This work investigates the uplink of massive MIMO systems using multi-cell MMSE (M-MMSE) combining, which was shown to yield unbounded capacity in Rayleigh fading. All intra- and inter-cell channels are correlated Rician channels with distinct per-user Rician factors and channel correlation matrices, and the analysis accounts for pilot contamination and imperfect channel estimation. First, a closed-form approximation of the spectral efficiency (SE) is derived, which makes it possible to demonstrate that, under certain conditions on the correlation matrices, M-MMSE generates unbounded SE in Rician fading. Second, the impact of inter-cell LoS components is examined in favorable propagation conditions and, interestingly, shown to be more beneficial in terms of SE than when these interfering links are entirely scattered.
• Reduced complexity DOA and DOD estimation for a single moving target in bistatic MIMO radar
(Signal Processing, Elsevier BV, 2019-09-02) [Article]
In this work, we propose a reduced dimension and low complexity algorithm to estimate the direction-of-arrival (DOA), direction-of-departure (DOD) and the Doppler shift of a moving target for a multiple-input-multiple-output (MIMO) radar. We derive two cost functions based on two different objective functions. We solve each of the derived cost functions with a low complexity fast-Fourier-transform (FFT)-based solution in three dimensions. We further carry out a derivation to reduce the three-dimensional search to a two-dimensional (2D) search and solve it with a 2D-FFT. Another reduced dimension algorithm is derived using the generalized eigenvalue method, which finds the estimate of the unknown parameters in one dimension with lower memory requirements. This way, we propose three algorithms based on the first cost function and another three algorithms based on the second. Simulation results are used to validate the proposed algorithms. We compare the mean-square-error (MSE) performance and computational complexity of our proposed algorithms with existing ones as well. We show that our proposed algorithms have better MSE performance than existing ones and achieve the Cramér-Rao lower bound (CRLB) for all unknown target parameters. The proposed algorithms exhibit lower computational complexity than the existing ones and also provide an estimate for the Doppler shift.
https://cs.stackexchange.com/posts/42067/revisions
Revision 6 (edited Apr 13 '17 at 12:19): replaced http://math.stackexchange.com/ with https://math.stackexchange.com/
Revision 5 (edited May 3 '15 at 19:12 by Chuckles): added 152 characters in body
Revision 4 (edited May 3 '15 at 9:59 by Raphael♦): edited tags
Revision 3 (edited May 2 '15 at 19:55 by Chuckles): added 28 characters in body
Revision 2 (edited May 2 '15 at 19:40 by Chuckles): added 28 characters in body
Revision 1 (asked May 2 '15 at 19:15 by Chuckles)
Post closed as "unclear what you're asking" by D.W.♦, David Richerby, Juho, Nicholas Mancuso, Shaull on May 5 '15 at 18:58

The question (latest revision):

$$L=\{0^m1^n \enspace | \enspace m \neq n\}$$

I saw that this exact question exists elsewhere, but I couldn't understand what was being said there. My question does not mandate the use of the Pumping Lemma as stated "elsewhere", but I am using the Pumping Lemma anyway. I want to present what I have so far, and for someone to tell me if I'm on the right track:

Assume $$L$$ is not regular. Let $$p$$ be the pumping length given by the Pumping Lemma for regular languages. Let the string $$w = 0^p1^{p+1} \in L$$. By the Pumping Lemma, $$w = xy^iz$$, where $$i \geq 0$$, $$\color{green}{\lvert y \rvert \geq 1}$$, and $$\color{red}{\lvert xy \rvert \lt p}$$. Let: \begin{aligned} x &= 0^{p} \\ y &= 1^{p+1} \\ z &= \varepsilon \end{aligned}

It is at this point in the proof that I get confused. I feel as if I've set it up well, but just can't finish. Here's what I've got, though: We see that $$\lvert y \rvert= p+1 \geq 1 \enspace \color{green}{\checkmark}$$ However, $$\lvert xy \rvert= p+p+1 \gt p \enspace \color{red}{\textbf{X}}$$ As we can see by $$\textit{(7)}$$, our test string $$w$$ violates a $$\color{red}{condition}$$ of the Pumping Lemma, thus is not regular. Thumbs up, thumbs down, anyone? Did I make the appropriate inferences about my split string $$w$$ in order to achieve a contradiction, and did I even split the string correctly? And to boot, did I even pick a $$w$$ that is useful to the proof?
https://www.techwhiff.com/issue/which-mixed-number-correctly-describes-the-shaded-area--355345
# Which mixed number correctly describes the shaded area of the fraction bars when each bar represents 1 whole?
### Explain the difference between repetition and replication
### What names are given to the types of celestial bodies that exist in the universe?
### ATP is a type of __________________ that _________________ energy when the chemical bonds are _____________ between two __________________ groups Giving brainliest for the best answer (plus you get 15 points for attempting this question)
### The box plot was created by using which pieces of data? A box-and-whisker plot. The number line goes from 0 to 130. The whiskers range from 5 to 130, and the box ranges from 10 to 105. A line divides the box at 40. a maximum of 130 and a lower quartile of 10 a maximum of 130 and a lower quartile of 5 a maximum of 135 and a lower quartile of 10 a maximum of 135 and a lower quartile of 5
### The population of a specific strain of bacteria in a culture medium is given by f(x) = x + 3 where f(x) is the population in millions and x is the time in hours. Find the piecewise function that matches this absolute value function. Then, graph the function using a graphing calculator and describe what you see.
### Idk what to do plz help fast 100pts
### Briefly explain how to balance chemical equations and give examples to support your answer.
### How can knowing more denotations of words help you as a writer? It makes you sound more educated. It can help you make clear meaning for a reader. It can impress a reader. It can illustrate your level of education.
### Like this post if u hate edhesive
### What is the first step on the following division problem? (8x^3-x^2+6x+7) ÷ (2x-1)? A. Divide 8x^3 by 2x B. Divide 2x by 8x^3 C. Divide 6x by 2x D. Divide 2x by 6x
### What can we know about early peoples from studying the tools they used
### The most common reason given for dropping out of a cardio program is
### Plssssssssssssssssss hellllllllllp meeeee
### What important lessons did the founding fathers learn from political theory and political history?
### 10) Jason sold half of his comic books and then bought 7 more. He now has 13. How many did he begin with?
### Solve the following equation. Round to the nearest hundredth. -0.5(x + 2) = 2(5 – x) – 3x TwT
https://www.peterkrautzberger.org/0088/
# Idempotent Ultrafilters, an introduction (Michigan Logic Seminar Nov 09, 2011)
Because of a power outage at the department my talk announced for October 29th was postponed by a week.
Here are transcripts of my notes (as well as the originals at the end).
### Hindman’s Theorem
Hindman’s Theorem If $\mathbb{N} = A_ 0 \dot\cup A_ 1$, then $\exists j \exists (x_ i)_ {i\in \omega}$ such that $FS(x_ i) \subseteq A_ j.$
Imagine you’d like to prove this with an ultrafilter: $p \in \beta \mathbb{N} \Rightarrow \exists j A_ j=:A \in p.$
What do we need? We will build $(x_ i)$ inductively!
Pick $x_ 0 \in A$ – we can’t really choose better than that (except maybe by shrinking the set first).
If we’re looking for our result, we need
• $x_ 1 \in A$ such that $x_ 1, x_ 0$ and $x_ 0+x_ 1 \in A$.
• i.e. $x_ 1 \in -x_ 0 + A$.
• so we need $-x_ 0 + A \in p$!
In other words, we need $x_ 0 \in \{ x: -x+A \in p \}$ to begin with, i.e., $\{ x: -x+A \in p \} \in p$ – for any $A\in p$!
Galvin in 1970: $p \in \beta \mathbb{N}$ is almost left-translation invariant iff $\forall A\in p: \{ x : -x +A\in p\} \in p$.
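In the shorthand that reappears below, write $A^{-p} := \{ x : -x + A \in p \}$; then Galvin's condition is simply

$\forall A \in p:\; A^{-p} \in p.$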
Is this enough?
• Pick $x_0 \in A \cap \{ x: -x+A \in p \} \in p$
• Then choose $x_ 1 \in -x_ 0 + A \cap A$.
But to continue the process, we need more!
We need $x_ 2$ such that:
• $x_ 2 \in A$ – $A\in p$, check
• $x_ 0 + x_ 2 \in A$ – $x_ 2 \in -x_ 0 +A \in p$, check
• $x_ 1 + x_ 2 \in A$ – $x_ 2 \in -x_ 1 + A \in p$ – possible if we picked $x_ 1 \in \{x: -x+A\in p\} \in p$, check.
• $x_ 0 +x_ 1 + x_ 2 \in A$ – $x_ 2 \in -(x_ 0+x_ 1) +A \in p$???
What does this mean? $-(x_ 0 +x_ 1) + A = -x_ 1 + (-x_ 0 +A)$ by associativity.
Ah! But we have seen this before!
We needed $x_ 1 \in \{ x: -x + (-x_ 0 +A) \in p\}$, so we needed $\{ x: -x + (-x_ 0 +A) \in p\}\in p$!
But that’s ok!! $-x_ 0 + A \in p$ & $\forall B\in p: \{x : -x+B \in p \} \in p$!
### How do we get to the end?
• Inductively, assume we have $x_ 0,\ldots, x_ n$ with $FS(x_ 0,\ldots, x_ n) \subseteq A$ and $\bigcap_ {z \in FS(x_ 0,\ldots,x_ n)} -z + A \in p.$
• Pick $x_{n+1}$ from $(\bigcap_{z \in FS(x_0,\ldots,x_n)} -z + A) \cap A \cap \{ x: -x + (\bigcap_{z \in FS(x_0,\ldots,x_n)} -z + A) \in p\}$ – this intersection is in $p$!
• Note that $-x_{n+1} + ((\bigcap_{z \in FS(x_0,\ldots,x_n)} -z + A) \cap A) = \bigcap_{z \in FS(x_0,\ldots,x_n)} (-x_{n+1} + (-z + A)) \cap (-x_{n+1} + A) \in p.$
• So $\bigcap_{z \in FS(x_0,\ldots,x_{n+1})} -z + A = \bigcap_{z \in FS(x_0,\ldots,x_n)} (-z + A) \cap \bigcap_{z \in FS(x_0,\ldots,x_n)} (-(z+x_{n+1}) + A) \cap (-x_{n+1} + A)$, which is in $p$ – as desired.
## Question: Do “almost left-translation invariant” ultrafilters exist?
Glazer, ~1975: Yes of course! These are the idempotent ultrafilters! We know these exist since Ellis 1958.
### What does this mean?
• $(\mathbb{N}, +)$ is a semigroup
• $\mathbb{N}$ is discrete, so $\beta \mathbb{N}$, the Stone-Čech compactification of $\mathbb{N}$ exists, in fact $\beta \mathbb{N} \cong$ the set of ultrafilters on $\mathbb{N}$ with a topological basis $\hat A = \{ p \in \beta \mathbb{N} : A \in p \}$ for $A\subseteq \mathbb{N}$.
• $\beta \mathbb{N}$ is compact (exactly by the Boolean Prime Ideal Theorem)
• $\beta \mathbb{N}$ is Hausdorff ($p\neq q \Rightarrow \exists A\in p, B\in q: A\cap B = \emptyset$).
• $\beta \mathbb{N}$ has a semigroup structure extending $(\mathbb{N}, +)$
• From $\beta(\mathbb{N} \times \mathbb{N})$:
• $p,q \in \beta \mathbb{N}\mapsto p \otimes q \in \beta(\mathbb{N}^2)$
• try $(A\times B)_ {A\in p, B\in q}$ – not an ultrafilter
• if you try to prove ultraness:
• $\bigcup_ {a\in A} \{a\} \times B_ a$ for some $A\in p$, all $B_ a \in q$
• generates an ultrafilter!
• Then $p + q = + (p\otimes q)$
• i.e., generated by $\{ \bigcup_ {a \in A} a + B_ a : A\in p, B_ a \in q\}$ [check: $n+k = +(n \times k)$]
• Properties
• $\forall q\in \beta \mathbb{N}: \rho_q: \beta \mathbb{N} \rightarrow \beta \mathbb{N}, p \mapsto p+q$ is continuous.
• Why? $X\in p+q$ iff $\exists A\in p \exists (B_ a)_ {a \in A} , B_ a \in q: \bigcup_ {a\in A} a+ B_ a \subseteq X$ iff $\{a: -a + X \in q\} =: X^{-q} \in p$.
• But $X^{-q}$ only depends on $q$!
• associativity: check it – use $\bigcup_ {a\in A} a+ (\bigcup_ {b\in B_ a} b + C_ b)= \bigcup_ {c\in \bigcup_ {a\in A} a+ (\bigcup_ {b\in B_ a} a+ b)} c + C_ c.$ The first set is in $p+(q+r)$, the second in $(p+q)+r$.
### Now remember: what did Galvin need?
I.e., $A\in p \Rightarrow A^{-p} \in p \Rightarrow A \in p+p$, so $p \subseteq p+p$
I.e. $p+p = p$ (since ufs)
Ellis 1958 $(X,\cdot)$ compact, Hausdorff, right-topological semigroup $\Rightarrow \exists x\in X: x\cdot x =x$.
Proof.
• Think: $x\cdot x = x \Rightarrow \{x\}$ is a closed semigroup - a minimal one!
• $\{ Y \subseteq X: Y \mbox{ compact, non-empty, semigroup} \}$
• By Zorn’s Lemma, $\exists$ minimal, non-empty, compact semigroup $Y$.
• Think: that should be $\left\vert Y \right\vert = 1$!
• We’ll show $\forall y \in Y: y\cdot y = y$ (therefore $Y = \{y\}$ by minimality)
• How? We only have continuity and associativity
• $Y \cdot y = \rho_ y [Y]$ compact, non-empty
• $(Y\cdot y) \cdot (Y\cdot y) \subseteq Y\cdot y$, i.e., a semigroup.
• By minimality of $Y$, $Y\cdot y = Y$
• Great! We’d expect that if $y\cdot y = y$
• $Y\cdot y = Y \Rightarrow \exists z \in Y: z\cdot y = y$.
• Then $\{ z \in Y : zy=y\} = \rho^{-1}_ y (y) \subseteq Y$
• $(z_ 0 z_ 1) y = z_ 0 (z_ 1 y) = z_ 0 y= y$, i.e., semigroup.
• compact? Yes $\rho^{-1}_ y[ \{y\}]$ closed.
• $Y$ minimal, so $\{z \in Y: zy=y \} = Y \Rightarrow y\cdot y = y$.
https://rpyc.readthedocs.io/en/latest/docs/zerodeploy.html
# Zero-Deploy RPyC
Setting up and managing servers is a headache. You need to start the server process, monitor it throughout its life span, make sure it doesn’t hog up memory over time (or restart it if it does), make sure it comes up automatically after reboots, manage user permissions and make sure everything remains secure. Enter zero-deploy.
Zero-deploy RPyC does all of the above, but doesn’t stop there: it allows you to dispatch an RPyC server on a machine that doesn’t have RPyC installed, and even allows multiple instances of the server (each of a different port), while keeping it all 100% secure. In fact, because of the numerous benefits of zero-deploy, it is now considered the preferred way to deploy RPyC.
## How It Works
Zero-deploy only requires that you have Plumbum (1.2 and later) installed on your client machine and that you can connect to the remote machine over SSH. It takes care of the rest:
1. Create a temporary directory on the remote machine
2. Copy the RPyC distribution (from the local machine) to that temp directory
3. Create a server file in the temp directory and run it (over SSH)
4. The server binds to an arbitrary port (call it port A) on the localhost interfaces of the remote machine, so it will only accept in-bound connections
5. The client machine sets up an SSH tunnel from a local port, port B, on the localhost to port A on the remote machine.
6. The client machine can now establish secure RPyC connections to the deployed server by connecting to localhost:port B (forwarded by SSH)
7. When the deployment is finalized (or when the SSH connection drops for any reason), the deployed server will remove the temporary directory and shut down, leaving no trace on the remote machine
## Usage
There’s a lot of detail here, of course, but the good thing is you don’t have to bend your head around it – it requires only two lines of code:
from rpyc.utils.zerodeploy import DeployedServer
from plumbum import SshMachine
# create the deployment
mach = SshMachine("somehost", user="someuser", keyfile="/path/to/keyfile")
server = DeployedServer(mach)
# and now you can connect to it the usual way
conn1 = server.classic_connect()
print(conn1.modules.sys.platform)
# you're not limited to a single connection, of course
conn2 = server.classic_connect()
print(conn2.modules.os.getpid())
# when you're done - close the server and everything will disappear
server.close()
The DeployedServer class can be used as a context-manager, so you can also write:
with DeployedServer(mach) as server:
conn = server.classic_connect()
# ...
Here’s a capture of the interactive prompt:
>>> sys.platform
'win32'
>>>
>>> mach = SshMachine("192.168.1.100")
>>> server = DeployedServer(mach)
>>> conn = server.classic_connect()
>>> conn.modules.sys.platform
'linux2'
>>> conn2 = server.classic_connect()
>>> conn2.modules.os.getpid()
8148
>>> server.close()
>>> conn2.modules.os.getpid()
Traceback (most recent call last):
...
EOFError
You can deploy multiple instances of the server (each will live in a separate temporary directory), and create multiple RPyC connections to each. They are completely isolated from each other (up to the fact you can use them to run commands like ps to learn about their neighbors).
## MultiServerDeployment
If you need to deploy on a group (cluster) of machines, you can also use MultiServerDeployment:
from rpyc.utils.zerodeploy import MultiServerDeployment
m1 = SshMachine("host1")
m2 = SshMachine("host2")
m3 = SshMachine("host3")
dep = MultiServerDeployment([m1, m2, m3])
conn1, conn2, conn3 = dep.classic_connect_all()
# ...
dep.close()
## On-Demand Servers
Zero-deploy is ideal for use-once, on-demand servers. For instance, suppose you need to connect to one of your machines periodically or only when a certain event takes place. Keeping an RPyC server up and running at all times is a waste of memory and a potential security hole. Using zero-deploy on demand is the best approach for such scenarios.
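As a rough sketch of that pattern (the helper name, host, and key path below are placeholders of mine, not part of the RPyC documentation), an on-demand deployment can be wrapped in a small function that spins the server up, runs a task, and tears everything down again:

from plumbum import SshMachine
from rpyc.utils.zerodeploy import DeployedServer

def run_remotely(task, host="somehost", user="someuser", keyfile="/path/to/keyfile"):
    # Deploy a throwaway RPyC server over SSH, run the task, then clean up.
    mach = SshMachine(host, user=user, keyfile=keyfile)
    try:
        with DeployedServer(mach) as server:
            conn = server.classic_connect()
            return task(conn)   # temp dir and server vanish when the block exits
    finally:
        mach.close()

# e.g. fetch the remote load average only when it is actually needed
print(run_remotely(lambda conn: conn.modules.os.getloadavg()))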
## Security
Zero-deploy relies on SSH for security, in two ways. First, SSH authenticates the user and runs the RPyC server under the user’s permissions. You can connect as an unprivileged user to make sure strayed RPyC processes can’t rm -rf /. Second, it creates an SSH tunnel for the transport, so everything is kept encrypted on the wire. And you get these features for free – just configuring SSH accounts will do.
https://crypto.stackexchange.com/questions/68499/what-is-the-inverse-function-of-the-finite-field-gf23-and-gf24?noredirect=1
# What is the inverse function of the finite field GF($2^3$) and GF($2^4$)?
I am currently reading a paper Cryptanalysis of a Theorem Decomposing the Only Known Solution to the Big APN Problem. In this paper, they mention that they used $$I$$ which is the inverse of the finite field GF($$2^3$$) with the irreducible polynomial $$x^3 + x + 1$$. This inverse corresponds to the monomial $$x \mapsto x^6$$. Can anyone tell me how this inverse function of the finite field was defined? I understand that there are multiplicative inverses of elements in the finite fields. Are these two the same thing?
How can the inverse function of the finite field GF($$2^4$$) be derived?
In particular, the multiplicative group of a finite field is cyclic, and contains all the non-zero elements of the field. This implies that its order is the size of the field minus one, i.e. $$2^3 - 1 = 7$$ for $${\rm GF}(2^3)$$, and thus that $$a^6 \cdot a = a^7 = 1$$ for all $$a \in {\rm GF}(2^3)$$, which means that $$a^6$$ is the multiplicative inverse of $$a$$.
Similarly, the multiplicative group of $${\rm GF}(2^4)$$ has $$2^4 - 1 = 15$$ elements, and thus the multiplicative inverse of any non-zero element $$a \in {\rm GF}(2^4)$$ can be calculated as $$a^{14}$$. More generally, in any Galois field $${\rm GF}(p^n)$$, the multiplicative inverse of $$a$$ equals $$a^{p^n-2}$$.
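To make the exponentiation concrete, here is a minimal sketch in Python (my own illustration, not taken from the paper or from this answer): it implements multiplication in GF($$2^3$$) modulo $$x^3 + x + 1$$ and checks that $$a^6$$ is the inverse of every non-zero $$a$$.

IRRED = 0b1011   # the irreducible polynomial x^3 + x + 1
WIDTH = 3        # we work in GF(2^3)

def gf_mul(a, b):
    # Carry-less ("XOR") multiplication, reduced modulo x^3 + x + 1.
    result = 0
    while b:
        if b & 1:
            result ^= a
        b >>= 1
        a <<= 1
        if a & (1 << WIDTH):   # degree reached 3, reduce once
            a ^= IRRED
    return result

def gf_pow(a, e):
    # Square-and-multiply exponentiation in GF(2^3).
    result = 1
    while e:
        if e & 1:
            result = gf_mul(result, a)
        a = gf_mul(a, a)
        e >>= 1
    return result

for a in range(1, 1 << WIDTH):
    inv = gf_pow(a, (1 << WIDTH) - 2)   # a^(2^3 - 2) = a^6
    assert gf_mul(a, inv) == 1
print("a^6 is the multiplicative inverse of a for every non-zero a in GF(2^3)")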
A point of notation perhaps worth making here is that by $$a^6$$ above I mean the field element $$a$$ raised to the sixth power using the field multiplication rule — which itself can be represented as polynomial multiplication followed by reduction modulo an irreducible monic polynomial, if the elements of $${\rm GF}(2^3)$$ themselves are represented as polynomials over $${\rm GF}(2)$$ of order less than 3.
The reason this can be confusing is that, in this representation, expressions like "$$x$$" and "$$x^6$$" could themselves represent specific field elements, written as polynomials of a single variable $$x$$. (Of course, $$x^6$$ cannot be the canonical polynomial representation of any element of $${\rm GF}(2^3)$$, since its order is too high.) Where such notation is used, as in your question, it's usually a good idea to avoid using the symbol $$x$$ for any other purpose except as the formal variable in the polynomials representing the field elements. In particular, it should not be used to designate an arbitrary field element (except, of course, for the specific field element canonically represented by the first order monomial $$x$$).
https://iphyer.github.io/blog/2018/07/21/lc1/
# Background
Notes from my LeetCode practice, recording problems I found worthwhile. These are mainly notes on my approach to each problem.
## 463. Island Perimeter
You are given a map in form of a two-dimensional integer grid where 1 represents land and 0 represents water. Grid cells are connected horizontally/vertically (not diagonally). The grid is completely surrounded by water, and there is exactly one island (i.e., one or more connected land cells). The island doesn’t have “lakes” (water inside that isn’t connected to the water around the island). One cell is a square with side length 1. The grid is rectangular, width and height don’t exceed 100. Determine the perimeter of the island.
[[0,1,0,0],
[1,1,1,0],
[0,1,0,0],
[1,1,0,0]]
Explanation: The perimeter is the 16 yellow stripes in the image below:
## Code
class Solution {
    public int islandPerimeter(int[][] grid) {
        // Empty grid: no island, so the perimeter is 0.
        if (grid == null || grid.length == 0 || grid[0].length == 0) return 0;
        int totalLen = 0;
        for (int i = 0; i < grid.length; i++) {
            for (int j = 0; j < grid[i].length; j++) {
                if (grid[i][j] == 1) {
                    // Each land cell contributes 4 edges...
                    totalLen += 4;
                    // ...minus 2 for every edge shared with a land neighbour
                    // above or to the left (so each shared edge is counted once).
                    if (i > 0 && grid[i - 1][j] == 1) totalLen -= 2;
                    if (j > 0 && grid[i][j - 1] == 1) totalLen -= 2;
                }
            }
        }
        return totalLen;
    }
}
https://curriculum.illustrativemathematics.org/HS/students/1/4/7/index.html
# Lesson 7
Using Graphs to Find Average Rate of Change
Let’s measure how quickly the output of a function changes.
### 7.1: Temperature Drop
Here are the recorded temperatures at three different times on a winter evening.
| time | 4 p.m. | 6 p.m. | 10 p.m. |
| --- | --- | --- | --- |
| temperature | $$25^\circ F$$ | $$17^\circ F$$ | $$8^\circ F$$ |
• Tyler says the temperature dropped faster between 4 p.m. and 6 p.m.
• Mai says the temperature dropped faster between 6 p.m. and 10 p.m.
Who do you agree with? Explain your reasoning.
### 7.2: Drop Some More
The table and graphs show a more complete picture of the temperature changes on the same winter day. The function $$T$$ gives the temperature in degrees Fahrenheit, $$h$$ hours since noon.
| $$h$$ | $$T(h)$$ |
| --- | --- |
| 0 | 18 |
| 1 | 19 |
| 2 | 20 |
| 3 | 20 |
| 4 | 25 |
| 5 | 23 |
| 6 | 17 |
| 7 | 15 |
| 8 | 11 |
| 9 | 11 |
| 10 | 8 |
| 11 | 6 |
| 12 | 7 |
1. Find the average rate of change for the following intervals. Explain or show your reasoning.
1. between noon and 1 p.m.
2. between noon and 4 p.m.
3. between noon and midnight
2. Remember Mai and Tyler’s disagreement? Use average rate of change to show which time period—4 p.m. to 6 p.m. or 6 p.m. to 10 p.m.—experienced a faster temperature drop.
1. Over what interval did the temperature decrease the most rapidly?
2. Over what interval did the temperature increase the most rapidly?
### 7.3: Populations of Two States
The graphs show the populations of California and Texas over time.
1. Estimate the average rate of change in the population in each state between 1970 and 2010. Show your reasoning.
2. In this situation, what does each rate of change mean?
1. Which state’s population grew more quickly between 1900 and 2000? Show your reasoning.
### Summary
Here is a graph of one day’s temperature as a function of time.
The temperature was $$35 ^\circ F$$ at 9 a.m. and $$45 ^\circ F$$ at 2 p.m., an increase of $$10^\circ F$$ over those 5 hours.
The increase wasn't constant, however. The temperature rose between 9 a.m. and 10 a.m., stayed steady for an hour, then rose again.
• On average, how fast was the temperature rising between 9 a.m. and 2 p.m.?
Let's calculate the average rate of change and measure the temperature change per hour. We do that by finding the difference in the temperature between 9 a.m. and 2 p.m. and dividing it by the number of hours in that interval.
$$\text{average rate of change}=\dfrac{45-35}{5}=\dfrac{10}{5}=2$$
On average, the temperature between 9 a.m. and 2 p.m. increased $$2^\circ F$$ per hour.
• How quickly was the temperature falling between 2 p.m. and 8 p.m.?
$$\text{average rate of change}=\dfrac{30-45}{6}=\dfrac{\text-15}{6}=\text-2.5$$
On average, the temperature between 2 p.m. and 8 p.m. dropped by $$2.5 ^\circ F$$ per hour.
In general, we can calculate the average rate of change of a function $$f$$, between input values $$a$$ and $$b$$, by dividing the difference in the outputs by the difference in the inputs.
$$\text{average rate of change}=\dfrac{f(b)-f(a)}{b-a}$$
If the two points on the graph of the function are $$(a, f(a))$$ and $$(b, f(b))$$, the average rate of change is the slope of the line that connects the two points.
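If you want to check these calculations with a computer, the formula translates directly into a short Python function (a sketch of my own, not part of the lesson; the temperatures are the ones from the summary, keyed by hours after midnight):

# Average rate of change of f on the interval [a, b]:
# the slope of the line through (a, f(a)) and (b, f(b)).
def average_rate_of_change(f, a, b):
    return (f(b) - f(a)) / (b - a)

# 35 degrees at 9 a.m., 45 at 2 p.m., 30 at 8 p.m.
T = {9: 35, 14: 45, 20: 30}

print(average_rate_of_change(T.get, 9, 14))   # 2.0 degrees per hour
print(average_rate_of_change(T.get, 14, 20))  # -2.5 degrees per hour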
### Glossary Entries
• average rate of change
The average rate of change of a function $$f$$ between inputs $$a$$ and $$b$$ is the change in the outputs divided by the change in the inputs: $$\frac{f(b)-f(a)}{b-a}$$. It is the slope of the line joining $$(a,f(a))$$ and $$(b, f(b))$$ on the graph.
• decreasing (function)
A function is decreasing if its outputs get smaller as the inputs get larger, resulting in a downward sloping graph as you move from left to right.
A function can also be decreasing just for a restricted range of inputs. For example the function $$f$$ given by $$f(x) = 3 - x^2$$, whose graph is shown, is decreasing for $$x \ge 0$$ because the graph slopes downward to the right of the vertical axis.
• horizontal intercept
The horizontal intercept of a graph is the point where the graph crosses the horizontal axis. If the axis is labeled with the variable $$x$$, the horizontal intercept is also called the $$x$$-intercept. The horizontal intercept of the graph of $$2x + 4y = 12$$ is $$(6,0)$$.
The term is sometimes used to refer only to the $$x$$-coordinate of the point where the graph crosses the horizontal axis.
• increasing (function)
A function is increasing if its outputs get larger as the inputs get larger, resulting in an upward sloping graph as you move from left to right.
A function can also be increasing just for a restricted range of inputs. For example the function $$f$$ given by $$f(x) = 3 - x^2$$, whose graph is shown, is increasing for $$x \le 0$$ because the graph slopes upward to the left of the vertical axis.
• maximum
A maximum of a function is a value of the function that is greater than or equal to all the other values. The maximum of the graph of the function is the corresponding highest point on the graph.
• minimum
A minimum of a function is a value of the function that is less than or equal to all the other values. The minimum of the graph of the function is the corresponding lowest point on the graph.
• vertical intercept
The vertical intercept of a graph is the point where the graph crosses the vertical axis. If the axis is labeled with the variable $$y$$, the vertical intercept is also called the $$y$$-intercept.
Also, the term is sometimes used to mean just the $$y$$-coordinate of the point where the graph crosses the vertical axis. The vertical intercept of the graph of $$y = 3x - 5$$ is $$(0,\text-5)$$, or just -5.
http://community.wolfram.com/groups/-/m/t/1165759
# [✓] Calculate Bessel Function zeros? (Can PayPal for solution)
Michael M wrote:

I am working on a drum synthesizer based on the Bessel function zeros. (The modal frequencies of a circular drum membrane are predicted by the Bessel function zeros.) To build it, I have been manually calculating Bessel zeros using Casio's calculator: http://keisan.casio.com/exec/system/1180573472. This is working. However, it is very slow work, as I am calculating each zero one by one, manually. Bessel equations in Wolfram are incredibly easy by contrast. How they work is summarized here: http://mathworld.wolfram.com/BesselFunctionZeros.html. The zeros of the Bessel function for nonnegative integer values of n and k can be found in the Wolfram Language using the command BesselJZero[n, k]. In an ideal world, I'd like Bessel zeros to 6 significant digits for n = 0...99 and k = 1...100. This would produce a table or list of 10,000 Bessel zeros. If it is easy, can anyone here maybe do me a huge favor and punch these into your Wolfram Language system to produce a table or list you can share? I would be happy to PayPal you $20 for your effort if so. If it's more work than that, let me know what it would cost. Otherwise, how would I set up my Windows system to do this? Can I work with Wolfram Language from Windows? I don't have a Raspberry Pi. I've calculated 1200 of these things manually, which as you can imagine has been very tedious. While I'm getting the results I want, I will likely need at least 2000 more to get truly realistic results. I'd hate to spend days and weeks manually working out what Wolfram can spit out in 5 minutes! In an ideal world, I'd like a table like this in Excel or any other workable format. Very hopeful for any help. Thank you very much. Mike

J. M. replied:

Hello, here's a nice pile of $J_n(z)$ zeroes: https://pastebin.com/raw/vepKVF8s
For reference, here's the code that generated it:

With[{count = 100, ord = 99, digits = 6},
 Export["bz.dat",
  Table[NumberForm[N[BesselJZero[n, k]], digits], {k, count}, {n, 0, ord}],
  "Table", Alignment -> Left, "FieldSeparators" -> " ",
  TableHeadings -> {Range[count], Table[J[n, x], {n, 0, ord}]}]]

Adjust the parameters in the first part of With[] for a bigger table or more digits. FWIW, you could try generating smaller-scale versions of the table from the free Wolfram Development Platform. Go here, and after it finishes loading, try

With[{count = 50, ord = 20, digits = 8},
 TableForm[Table[NumberForm[N[BesselJZero[n, k]], digits], {k, count}, {n, 0, ord}],
  TableAlignments -> {Left, Center},
  TableHeadings -> {Range[count], Table[J[n, x], {n, 0, ord}]}]]

Michael M replied:

Thanks both of you guys but especially thanks J.M.!!! You are a lifesaver!!! That text file imports beautifully into Excel. Now I can play with it... Can't believe how much time I wasted calculating the first 1200 by hand LOL. Well at least it proved the concept worked... Now I will see what it sounds like with 5-6x as many modes from all these other Bessel zeros... If you want the $20, PM or post your e-mail and I'll send it to you by PayPal. Otherwise thanks for your good Samaritan deed. You've made my life a lot easier. Cheers.

W. Craig Carter replied:

Here is a less robust method than J.M.'s but it may be simpler to read. A list with 10000 rows and 3 columns:

data = Flatten[
  Table[
   {n, k, N@BesselJZero[n, k]},
   {n, 0, 99}, {k, 1, 100}
   ],
  1
  ];

Export that data to an Excel file in your Documents directory:

Export[FileNameJoin[{$UserDocumentsDirectory, "Bessel_Zeros.xls"}], data]

You can download the result here.
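For anyone without access to the Wolfram Language, essentially the same table can be produced with SciPy (a sketch of mine, not from the thread; the output file name is arbitrary):

import numpy as np
from scipy.special import jn_zeros   # zeros of the integer-order Bessel functions J_n

count, max_order = 100, 99           # k = 1..100 zeros for each order n = 0..99
# Column n holds the first `count` zeros of J_n, matching the table above.
table = np.column_stack([jn_zeros(n, count) for n in range(max_order + 1)])
np.savetxt("bessel_zeros.csv", table, fmt="%.6g", delimiter=",")
print(table[0, 0])                   # first zero of J_0, roughly 2.40483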
https://codeforces.cc/blog/entry/95634
### prithwiraj15's blog
By prithwiraj15, 3 weeks ago,
# Ode 2 Code 2021 Round-3
Round 3 of Ode 2 Code gave only one question, with a total time of 45 mins to solve it. I was not able to solve this problem. Can anyone help me? PS: The competition is already over, hence the question can be disclosed now.
Question Statement
You are given
• An array A of N integers.
• And M queries, each consisting of three integers k, y and x.
Beauty of an array A[1], A[2], A[3], A[4], ..., A[N] is defined as A[1] + A[2] + A[3] + A[4] + ... + A[N].
For each query, k, y, x, you are given the following conditions :-
• Let U be the bitwise AND of the beauties of any y subarrays among all subarrays of size k.
• And, Z = U | x, where | is the bitwise OR operation.
Calculate the maximum possible value of Z for all queries.
Constraints
• 1 <= N <= 1000
• 1 <= M <= 1000
• 1 <= Ai <= 10^9
• 1 <= ki <= N
• 1 <= yi <= N-ki+1
• 1 <= xi <= 10^9
Sample Test Cases
1) Sample Case 1
N = 5
M = 2
A = [1, 2, 3, 4, 5]
Queries (k, y, x): [2, 3, 9] and [3, 2, 17] (inferred from the explanations below)
Approach
For the first query
• We have 4 subarrays [1, 2], [2, 3], [3, 4], [4, 5] with beauties 3, 5, 7, 9 respectively.
• You can select bitwise AND of 3 beauties, i.e 3 & 5 & 7 = 1, so U = 1.
• Z = (1 | 9) = 9. This is the maximum possible.
For the second query
• We have 3 subarrays [1, 2, 3], [2, 3, 4], [3, 4, 5] with beauties 6, 9, 12 respectively.
• You can select bitwise AND of 2 beauties, i.e 9 & 12 = 8, so U = 8.
• Z = (8 | 17) = 25. This is the maximum possible.
2) Sample Case 2
N = 5
M = 2
A = [5, 4, 2, 4, 9]
Queries (k, y, x): [2, 2, 7] and [2, 3, 6] (inferred from the explanations below)
Approach
For the first query
• We have 4 subarrays [5, 4], [4, 2], [2, 4], [4, 9] with beauties 9, 6, 6, 13 respectively.
• You can select bitwise AND of 2 beauties, i.e 9 & 13 = 9, so U = 9.
• Z = (9 | 7) = 15. This is the maximum possible.
For the second query
• We have 4 subarrays [5, 4], [4, 2], [2, 4], [4, 9] with beauties 9, 6, 6, 13 respectively.
• You can select bitwise AND of 3 beauties, i.e 6 & 6 & 13 = 4, so U = 4.
• Z = (4 | 6) = 6. This is the maximum possible.
» 3 weeks ago, # | 0 Auto comment: topic has been updated by prithwiraj15 (previous revision, new revision, compare).
» 3 weeks ago, # | 0 Auto comment: topic has been updated by prithwiraj15 (previous revision, new revision, compare).
» 3 weeks ago, # | 0 Constraints?
• » » 3 weeks ago, # ^ | 0 Updated the post with constraints now
• » » » 3 weeks ago, # ^ | ← Rev. 2 → 0 This can be done in O(M·N·log(max(A))). Just complement x and let x1 be the complement of x. First find all sums of subarrays of size k. Then start from the highest set bit of x1 and go down: if at least y of the remaining candidate sums have that bit set, add the bit to the answer and keep only those sums; otherwise move on to the next bit and check the same. Finally, add all the set bits of x to the answer as well.
• » » » » 3 weeks ago, # ^ | 0 Oh my god.. I thought it was extremely difficult Thanks anyway for your help. Much appreciated.
• » » » » 3 weeks ago, # ^ | ← Rev. 2 → 0 Here are you saying that maximizing U | x is same as maximizing U & !x
» 3 weeks ago, # | 0 here is my code

#include <vector>
#include <unordered_set>
using namespace std;
typedef long long ll;

vector<ll> solve(int N, int M, vector<int> A, vector<vector<int>> Q) {
    // prefix sums of A
    vector<ll> cum(N + 1, 0ll);
    for (int i = 1; i <= N; ++i) {
        cum[i] = cum[i - 1] + A[i - 1];
    }
    vector<ll> ans;
    int n = N;
    for (auto& z : Q) {
        int k = z[0], y = z[1], x = z[2];
        // beauties of all subarrays of size k
        vector<ll> tmp;
        for (int i = k - 1; i < n; ++i) {
            tmp.push_back(cum[i + 1] - cum[i + 1 - k]);
        }
        ll now = 0;
        unordered_set<int> used;
        int sz = tmp.size();
        // greedily claim bits not already provided by x, from high to low
        // (the sums can reach ~10^12, so scan well past bit 31)
        for (int i = 40; i >= 0; --i) {
            if (x & (1ll << i)) continue;
            int ct = 0;
            vector<int> to_remove;
            for (int j = 0; j < sz; ++j) {
                if (used.count(j)) continue;
                if (tmp[j] & (1ll << i)) ct++;
                else to_remove.push_back(j);
            }
            if (ct >= y) {
                // enough candidates keep this bit: discard the ones that don't
                for (auto idx : to_remove) used.insert(idx);
                now |= (1ll << i);
            }
        }
        now |= x;
        ans.push_back(now);
    }
    return ans;
}
» 3 weeks ago, # | ← Rev. 2 → +1 On a side note, Is there anyone who passed all testcases in round 2 but was not selected for round 3? If it was based on speed , then any idea what the cutoff time was?
• » » 3 weeks ago, # ^ | +1 Yeah i was able to pass all the test cases of my given problem in round 2, but didn't get selected for Round 3!
• » » 3 weeks ago, # ^ | +1 Yeah me.
• » » 3 weeks ago, # ^ | +1 I think about 30 mins, cuz i completed mine with about 16-17 mins left.
• » » 3 weeks ago, # ^ | 0 I see, thanks for the response!!
• » » 3 weeks ago, # ^ | 0 I submitted my test before 25 min And I was selected for next round.
• » » 3 weeks ago, # ^ | +1 I had submitted under 15 mins but not selected.
• » » » 3 weeks ago, # ^ | +1 Yeah , same.
• » » » 2 weeks ago, # ^ | ← Rev. 2 → 0 I was able to complete my test in 6 mins, but was not selected :( P.S. I didn't even get a rejection mail
• » » 3 weeks ago, # ^ | ← Rev. 2 → 0 Yeah, I passed all test cases in Round 2 in 15mins but didn't get selected for Round 3. My friend passed just 7 test cases but got selected. It wasn't based on speed then, it might be random or based on code implementation.
• » » 3 weeks ago, # ^ | +2 No idea lol, I was done in like 7-8 mins. They probably thought I cheated
• » » » 3 weeks ago, # ^ | 0 Same here:)
» 3 weeks ago, # | ← Rev. 3 → 0 In my case, it took 35 min to understand the problem. The problem statement was very badly written. By the time I understood the question, little time remained, so I panicked and quit the test. But after a few hours I started coding and solved the question in 8 min. [problem]
• » » 3 weeks ago, # ^ | 0 I had this same problem . Understanding the problem statement was tougher than solution..xD . It took me like 15-16 minutes to understand it and 10 minutes to solve .
• » » » 3 weeks ago, # ^ | 0 thanks for reply
» 3 weeks ago, # | 0 Did everyone else get this problem? I got a different problem statement which was a little harder
» 2 weeks ago, # | 0 Does anyone know when and where the result will be declared?
» 12 days ago, # | 0 When will the round 3 result be announced?
https://gravity.univie.ac.at/research-seminars-publications/
# Current seminars
In addition to the Vienna relativity seminars, the calendar above sometimes contains other events of interest to members of the relativity group. The seminars of the Vienna relativity group are listed below.
Location (unless indicated otherwise): Währinger Str. 17
- Seminarraum A on the 2nd floor for standard seminars, and
- Common room, first floor, for lunch seminars.
The Mathematical Physics Seminars take place on Tuesdays at 13.45.
The Particle Physics Seminars take place on Tuesdays at 16.15.
• Thursday, May 16th (as part of the joint relativity-mathematical physics seminar), 14:00, Seminarraum A
Stefan Fredenhagen (Vienna): Obstructions to interacting higher-spin gauge theories in three dimensions
Abstract: Free higher-spin Fronsdal fields generalise Maxwell fields and linearised gravity to higher tensor fields. Whereas for spin-1 and spin-2 there are non-linear completions (e.g. Yang-Mills, gravity), no non-linear, gauge-invariant theory of Fronsdal fields is known. A systematic way to construct them is the Noether procedure in which a gauge invariant action is constructed in a perturbative expansion in powers of the fields. In three space-time dimensions, there are strong obstructions to construct such an action leading to the conclusion that the interactions of the higher-spin gauge fields are completely fixed by the cubic vertices in the action.
• Friday, May 17th, 13:00, lunch seminar
Roland Steinbauer (Vienna): cut-and-paste for impulsive gravitational waves with Λ.
Abstract: Impulsive gravitational waves in Minkowski space were introduced by Roger Penrose at the end of the 1960s, and have been widely studied over the decades. We focus on non-expanding waves which later have been generalised to impulses travelling in all constant-curvature backgrounds. While Penrose's original construction was based on his vivid geometric 'scissors-and-paste' approach in a flat background, until now a comparably powerful visualisation and understanding have been missing in the $\Lambda \neq 0$ case. In this work we provide such a picture: The (anti-)de Sitter hyperboloid is cut along the null wave surface, and the 'halves' are then re-attached with a specific shift of their null generators across the wave surface.
• Thursday, May 23rd, 14:00, Seminarraum A
Daniel Grumiller (TU Wien): Soft excitations on horizons in any dimensions
Abstract: We derive generic properties of non-extremal horizons, assumed to be in equilibrium with a thermal bath, in any spacetime dimension greater than two. The physical properties of the thermal bath are modelled by the way we impose boundary conditions, and we shall describe various different well-motivated choices leading to infinite-dimensional near horizon symmetries, including BMS-like symmetries for arbitrary spin and Heisenberg-like symmetries. We prove that they generically span soft hair excitations in the sense of Hawking, Perry and Strominger.
• Thursday, June 6th, 14:00, Seminarraum A
Anne Franzen (Lisbon): The wave equation near flat Friedmann-Lemaître-Robertson-Walker and Kasner Big Bang singularities
Abstract: We consider the wave equation, $\square_g\psi=0$, in fixed flat Friedmann-Lemaître-Robertson-Walker and Kasner spacetimes with topology $\mathbb{R}_+\times\mathbb{T}^3$. We obtain generic blow up results for solutions to the wave equation towards the Big Bang singularity in both backgrounds. In particular, we characterize open sets of initial data prescribed at a spacelike hypersurface close to the singularity, which give rise to solutions that blow up in an open set of the Big Bang hypersurface $\{t=0\}$. The initial data sets are characterized by the condition that the Neumann data should dominate, in an appropriate $L^2$-sense, up to two spatial derivatives of the Dirichlet data. For these initial configurations, the $L^2(\mathbb{T}^3)$ norms of the solutions blow up towards the Big Bang hypersurfaces of FLRW and Kasner with inverse polynomial and logarithmic rates respectively. Our method is based on deriving suitably weighted energy estimates in physical space. No symmetries of solutions are assumed.
• Friday, June 7th, 13:30, lunch seminar
Ighor Khavkine (Prague): Conformal Killing Initial Data
Abstract: We find necessary and sufficient conditions ensuring that the vacuum development of an initial data set of the Einstein field equations admits a conformal Killing vector. We refer to these conditions as conformal Killing initial data (CKID), and they extend the well-known Killing initial data that have been known for a long time. The strategy used to find the CKID is a classical argument involving wave-like "propagation equations". Time permitting, I will discuss how similar strategies might succeed or fail for other geometric equations, e.g. Killing-Yano.
• Thursday, June 13th (as part of the joint theoretical physics seminar), 14:00, Seminarraum A
Paolo Salucci (SISSA): The mystery of the Dark Matter Phenomenon
Abstract: The distribution of the non-luminous matter in galaxies of different luminosity and Hubble type is much more than a proof of the existence of dark particles governing the structures of the Universe. The deeper we go into the knowledge of the dark component that embeds the stellar component of galaxies, the more we realize the profound interconnection present between the two of them.
They are too complex to have arisen from two inert components that merely share the same gravitational field.
The 30-year-old paradigm, which rests on a priori knowledge of the nature of dark matter and has led to the scenario of collisionless dark matter in galaxy halos, reveals itself to be insufficient to explain the observations. Here, we will review the complex but well-ordered scenario of the properties of the dark halos in relation to those of the baryonic components they host.
We will present a number of tight and unexpected correlations between selected properties of the dark and the luminous matter that indicate that they interacted in a direct way over the Hubble Time.
• Thursday, June 27th, 14:00, Seminarraum A
Artur Alho (Lisbon): Spherically symmetric steady states of Newtonian self-gravitating elastic matter
Abstract: In this talk I will introduce a new definition of spherically symmetric elastic body in Newtonian gravity. Using this new definition it is possible to introduce Milne-type homology invariant variables which transform the field equations into an autonomous system of nonlinear differential equations. By employing dynamical systems methods I will finally discuss the existence of static balls for a wide variety of constitutive equations, including Seth, Signorini, Saint Venant-Kirchhoff, Hadamard, and John’s harmonic materials.
|
2019-05-26 01:34:03
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7232635021209717, "perplexity": 1323.992640443191}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232258620.81/warc/CC-MAIN-20190526004917-20190526030917-00307.warc.gz"}
|
https://math.stackexchange.com/questions/2710914/prove-that-the-series-with-terms-a-n-diverges?noredirect=1
|
# Prove that the series with terms $a_n$ diverges. [duplicate]
where the series has the nth term given by $$a_n = \frac {1\cdot3\cdot5\cdot\cdots\cdot(2n-1)}{2\cdot4\cdot6\cdot\cdots\cdot2n}$$
I managed to show that $$a_n = \frac {(2n)!} {4^n (n!)^2}$$
But it doesn't really help. I'd like to compare it with something, since the ratio test doesn't work.
## marked as duplicate by Jack D'Aurizio (calculus) Mar 27 '18 at 21:44
The binomial coefficient
$$\binom{2n}{n}=\frac{(2n)!}{(n!)^2}$$ is the middle, and hence largest, term of the binomial expansion of $\left(1+1 \right)^{2n}=2^{2n}=4^n.$ Since there are $2n+1$ terms, this means that $$\binom{2n}{n}\geq \frac{4^n}{2n+1}$$
and hence $$a_n=\frac{1}{4^n}\binom{2n}{n}\geq \frac{1}{2n+1},$$ so the series diverges by comparison with the harmonic series.
See more approximations for the central binomial coefficients.
Let us use Stirling's approximation
$$n! \sim \sqrt{2 \pi n}\left(\frac{n}{e}\right)^n$$
then
$$\frac {(2n)!} {4^n (n!)^2}\sim\frac {\sqrt{4 \pi n}\left(\frac{2n}{e}\right)^{2n}} {4^{n} \left(\sqrt{2 \pi n}\left(\frac{n}{e}\right)^n\right)^2} = \frac{2\sqrt{\pi n}\,4^n\left(\frac{n}{e}\right)^{2n}}{4^n\, 2\pi n\left(\frac{n}{e}\right)^{2n}} = \frac{1}{\sqrt{\pi n}},$$ so the series diverges by comparison with $\sum 1/\sqrt{n}$.
Note that indeed the ratio test is inconclusive, as $$\lim_{n\to\infty} \frac{a_{n+1}}{a_n} = 1\,.$$
• If you know Stirling's approximation of the factorial, $$n! \operatorname*{\sim}_{n\to\infty} \sqrt{2\pi n}\left(\frac{n}{e}\right)^n$$ then you can show that $a_n \operatorname*{\sim}_{n\to\infty} \frac{1}{\sqrt{\pi n}}$ and the series $\sum_n a_n$ thus diverges by theorems of comparison (for series with positive terms).
• If you do not know it, but like probabilities, recall that this is exactly $$a_n = \mathbb{P}\{ X = n \}$$ where $X\sim \mathrm{Bin}(2n,1/2)$ is a Binomial random variable with parameters $2n$ and $1/2$. By standard concentration and anticoncentration results, the Binomial distribution is "roughly uniform" (i.e., its probability mass is constant up to constant factors) within $\pm \sqrt{n}$ (that is, more or less the order of a standard deviation) of its expectation, and a constant fraction of its probability mass lies on this interval. This implies that $$a_n = \Theta(1/\sqrt{n}),$$ leading to the same result.
I consider the second method the most fun, but the first is quite useful (if you do not know Stirling's approximation but want to, here is a great occasion to learn it). And, of course, there are other ways.
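Not part of the original answers, but the asymptotics are easy to check numerically. The sketch below is plain Python and uses the recurrence $a_{n+1}=a_n\frac{2n+1}{2n+2}$ (which follows directly from the product definition of $a_n$) to compare $a_n$ with $1/\sqrt{\pi n}$ and to watch the partial sums grow without bound:

```python
# Numerical sanity check: a_1 = 1/2 and a_{n+1} = a_n * (2n+1)/(2n+2)
from math import sqrt, pi

a = 0.5      # a_1
s = 0.0      # running partial sum
for n in range(1, 100001):
    s += a
    if n in (1, 10, 100, 1000, 10000, 100000):
        print(f"n={n:6d}  a_n={a:.6f}  1/sqrt(pi*n)={1/sqrt(pi*n):.6f}  partial sum={s:.2f}")
    a *= (2 * n + 1) / (2 * n + 2)
```

The partial sums grow roughly like $2\sqrt{N/\pi}$, consistent with $a_n = \Theta(1/\sqrt{n})$.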
|
2019-10-15 21:10:58
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7655291557312012, "perplexity": 771.9848142952246}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986660323.32/warc/CC-MAIN-20191015205352-20191015232852-00310.warc.gz"}
|
http://math.stackexchange.com/questions/266390/how-to-know-if-the-equation-is-linear
|
# How to know if the equation is linear?
According to my maths book an equations is linear if,
there are no products of the function and neither the function or its derivatives occur to any power other than the first power.
It should be in form of $$a_n(t)y^{(n)}(t)+a_{n-1}(t)y^{(n-1)}(t)+\cdots+a_1(t)y'(t)+a_0(t)y(t)=g(t)$$
I do understand the first-power condition, i.e. a term like $(dy/dx)^2$ is not allowed, but I can't really understand the condition about products. Please help. Thanks.
## 3 Answers
There's no $(dy/dx)^2$ (first derivative squared) in your canonical form. There is a $d^2y/dx^2$ (second derivative) hidden at the end of the "$\cdots$", but that is a different thing.
A linear ODE can have terms that are a coefficient times any higher derivative, but there can only be one derivative factor in each term -- that is, there cannot be two or more (different or the same) derivatives multiplied by each other.
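For instance, $t^2\,y'' + e^t\,y' - y = \cos t$ fits the canonical form above and is linear, whereas $y\,y' = t$ and $y'' + (y')^2 = 0$ are not, since they contain products of $y$ and its derivatives.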
Here is a "positive" version of the criterion:
An ODE is linear if given any two solutions $y_1(\cdot)$, $\ y_2(\cdot)$ the sum $\lambda y_1(\cdot)+\mu y_2(\cdot)$ is again a solution whenever $\lambda+\mu=1$. This test can easily be performed by looking at the equation.
Like this point of view. – Babak S. Dec 28 '12 at 10:55
Yes, almost -- except that the OP's definition allows non-homogeneous equations (the $g(t)$ on the right). – Henning Makholm Dec 28 '12 at 11:16
@Henning Makholm: Exactly because of that my answer sounds so sophisticated $\ldots$ – Christian Blatter Dec 28 '12 at 11:25
Oh, I missed the $\lambda+\mu=1$ condition. Right, then. – Henning Makholm Dec 28 '12 at 11:28
Sometimes in Mathematics one uses the same name for two different things or two different definitions which are not related whatsoever. But here this is not the case. I am sure you know what linear means in terms of maps between vector spaces. So if $V$ is a vector space and $A:V\rightarrow V$ is a linear map, then it satisfies $A(\alpha \mathbf v + \beta \mathbf w) = \alpha A(\mathbf v) + \beta A(\mathbf w)$.
Now something like $\frac{d}{dx}$ can be viewed as a map on a vector space. There are actually many different vector spaces that would work, but I suggest you just think of the vector space of all smooth functions from $\mathbb R$ to $\mathbb R$. This is a vector space over $\mathbb R$, and $\frac{d}{dx}$ is a linear map on it. But an expression such as $\left(\frac{dy}{dx}\right)^2$ is not linear on this vector space. Recall that $\left(\frac{dy}{dx}\right)^2$ acts as follows:
$$f \;\mapsto\; \frac{df}{dx}\cdot\frac{df}{dx} = f'\,f',$$
which is not additive in $f$. Note that something like $a_n(t)\,\frac{d^n}{dx^n} + \dots + a_1(t)\,\frac{d}{dx} + a_0(t)$ is also a linear map on this vector space (if all the $a_i(t)$ are smooth; otherwise you need to change the vector space a bit). So one way to figure out whether an ODE is linear or not is to check whether it acts linearly on smooth functions.
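To make the superposition criterion and this operator point of view concrete, here is a minimal sketch (my own, assuming SymPy is available; the operators $L$ and $N$ are just example choices, not taken from the question): it checks that $y'' + t\,y' + y$ acts linearly on arbitrary smooth test functions, while $(y')^2 + y$ does not.

```python
# Illustrative sketch: test the superposition property of two operators.
import sympy as sp

t, a, b = sp.symbols('t a b')
y = sp.Function('y')

L = sp.Derivative(y(t), t, 2) + t * sp.Derivative(y(t), t) + y(t)   # linear
N = sp.Derivative(y(t), t)**2 + y(t)                                 # nonlinear

f, g = sp.sin(t), sp.exp(t)   # two arbitrary smooth test functions

def apply_op(expr, func):
    """Substitute y(t) -> func and evaluate the derivatives."""
    return expr.subs(y(t), func).doit()

def is_superposable(expr):
    """True if expr[a*f + b*g] == a*expr[f] + b*expr[g] identically."""
    diff = apply_op(expr, a * f + b * g) - (a * apply_op(expr, f) + b * apply_op(expr, g))
    return sp.simplify(diff) == 0

print(is_superposable(L))  # True  -> linear
print(is_superposable(N))  # False -> nonlinear
```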
|
2016-06-25 03:53:46
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8785811066627502, "perplexity": 225.67157572982995}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783392069.78/warc/CC-MAIN-20160624154952-00153-ip-10-164-35-72.ec2.internal.warc.gz"}
|
https://www.cell.com/joule/fulltext/S2542-4351(21)00243-9?_returnURL=https%3A%2F%2Flinkinghub.elsevier.com%2Fretrieve%2Fpii%2FS2542435121002439%3Fshowall%3Dtrue
|
Article| Volume 5, ISSUE 6, P1462-1484, June 16, 2021
# The future of coal investment, trade, and stranded assets
Open Archive. Published: June 08, 2021
## Highlights
• A high-resolution coal market model develops regional insights into investment risks
• By 2040, one-third of the current mining capacity risks becoming stranded assets
• China benefits the most, whereas Indonesia and Australia lose the most from coal phase-out
• Over 2.2 million jobs would suffer from early mine closures in the sustainable scenario
## Context & scale
The world spends almost half a trillion dollars per year on coal. This market could be profoundly disrupted by international efforts to decarbonize energy supplies. Existing global energy scenarios provide a macro picture of future demand for coal but do not consider how rapidly declining demand will affect investment levels and national economies.
We explore the regional impacts of following business-as-usual and sustainable development scenarios from the IEA. A well-below-2°C pathway will see exporting countries lose tens of billions of dollars a year as both global trade volumes and prices fall. Reduced investment in new mines and early retirement of existing ones will lead to stranded capital and labor. There is a limited window of opportunity for investors and decision makers to react before a new coal commodity cycle begins. Building resilience to the human and financial impacts of a sustainable transition now could ease the move away from the most carbon-intensive fossil fuel.
## Summary
Coal is at a crossroads, with divestment and phase-out in the West countered by the surging growth throughout Asia. Global energy scenarios suggest that coal consumption could halve over the next decade, but the business and geopolitical implications of this profound shift remain underexplored. We investigate coal markets to 2040 using a perfect competition techno-economic model. In a well-below-2°C scenario, Europe, North America, and Australia suffer from over-capacity, with one-third of today’s mines becoming stranded assets. New mines are needed to offset retirements, but a new commodity cycle in the 2030s can be avoided. Coal prices decline as only the most competitive mines survive, and trade volumes fall to give more insular national markets. Regions stand to gain or lose tens of billions of dollars per year from reducing import bills or export revenues. Understanding and preparing for these changes could ease the transition away from coal following 150 years of dominance.
## Introduction
Over 80% of the world's coal resources must remain below the ground if CO2 emissions are to fall by 80%–95% to limit global warming to 1.5°C (IPCC, Global warming of 1.5°C: summary for policymakers; McGlade & Ekins, The geographical distribution of fossil fuels unused when limiting global warming to 2°C). Rapid phase-out of coal is required among developed countries, followed over the coming decades by developing countries (Rocha et al., Implications of the Paris Agreement for coal use in the power sector). No new coal power plants should be built other than those that are currently under construction, and the existing fleet should be progressively retired (Nace, A coal phase-out pathway for 1.5°C).
Reducing reliance on coal has substantial health benefits through improved air quality (Spencer et al., The 1.5°C target and coal sector transition: at the limits of societal feasibility), as the emissions of SO2, NOx, particulates, and toxic heavy metals precipitate various respiratory and cardiovascular diseases, cancers, and premature deaths (Gaffney & Marley, The impacts of combustion emissions on air quality and climate – from coal to biofuels and beyond).
Growing public concern, policy regulations, and stakeholder pressure mean that coal divestment is gaining traction. Over 100 countries, states, cities, and businesses have joined the Powering Past Coal Alliance, committing themselves to transition away from unabated coal-power generation (Powering Past Coal Alliance, Members; Jewell et al., Prospects for powering past coal). Meanwhile, 100 of the largest global financial institutions have divested from or stopped lending to steam coal mining and/or power plant projects (Buckley, Over 100 global financial institutions are exiting coal, with more to come).
However, global coal consumption has continued to rise, driven by growth in Asia (Spencer et al., The 1.5°C target and coal sector transition: at the limits of societal feasibility; Thurber, Coal). Despite the modest falls in 2019 and 2020 (Friedlingstein et al., Global carbon budget 2019), some suggest that demand may remain stable on a long plateau until 2025 (post-COVID-19 recovery) (IEA, World energy outlook; Pielke, Global carbon dioxide emissions are on the brink of a long plateau). 200 GW of new coal power plants are under construction worldwide, three quarters of which is in Asia (Shearer et al., Boom and bust 2020: tracking the global coal plant pipeline).
Coal is key to economic growth in developing countries by supporting finance, railways, steel production, and more. Policy makers seeking to curb coal reliance may face a trade-off between climate change mitigation, environmental benefits, energy access priorities, and economic interests (Zhao & Alexandroff, Current and future struggles to eliminate coal). Similarly, many countries are wrestling with political narratives around heritage, national pride, regional equity, and industrial prowess surrounding the decline of their coal industries (Wilson & Staffell, Rapid fuel switching from coal to natural gas through effective carbon pricing). Investors are placed in a difficult position, as the coal industry's future has perhaps never been more uncertain (Thurber, Coal; Johnson et al., Stranded on a low-carbon planet: implications of climate policy for the phase-out of coal-based power plants; Nuttall, Coal in the twenty-first century: a climate of change and uncertainty), evidenced by Rio Tinto exiting the coal sector in 2018 (Stringer & Biesheuvel, Glencore plans to cap coal output in climate shift, sources say) and Glencore freezing its global production in 2019 (Glencore moves to cap global coal output after investor pressure on climate change).
The world is potentially at a tipping point, as the number of coal power stations declines for the first time on record (Shearer, Analysis: the global coal fleet shrank for first time on record in 2020, Carbon Brief), utilities are being paid to prematurely close coal power stations (Eckert, German energy regulator awards first permits to close coal plants), and coal producers suffer extensive write-downs.
• Meyer G.
Value of world’s largest coal mine slashed by 1.4bn. Financial Times. As the cost of renewable electricity generation and storage continues to fall, • Jansen M. • Staffell I. • Kitzing L. • Quoilin S. • Wiggelinkhuizen E. • Bulder B. • Riepin I. • Müsgens F. Offshore wind competitiveness in mature markets without subsidy. • Lazard Levelized cost of energy and levelized cost of storage. • Schmidt O. • Hawkes A. • Gambhir A. • Staffell I. The future cost of electrical energy storage based on experience rates. the economic case for building coal power stations is weakening, and in many cases, it is now even cheaper to build new renewable energy plants than it is to continue operating the existing coal power stations. • Bodnar P. • Gray M. • Grbusic T. • Herz S. • Lonsdale A. • Mardell S. • Ott C. • Sundaresan S. • Varadarajan U. How to retire early: making accelerated coal phaseout feasible and jus. Several open questions face the industry: will carbon capture and storage offer a new lease of life for coal within industry and power generation; is coal divestment justified financially, or would investors and mining companies remain profitable; will coal prices collapse; are coal markets heading toward oversupply; will coal be able to compete with renewables on cost basis or will coal power plants suffer from likely future renewables cost reduction; will the competitiveness of coal become ever more constrained and regionally disparate; and which countries will see the greatest change in their prospects depending on how the global climate mitigation effort evolves? This paper sheds light on the outlook of hard coal (anthracite and bituminous coal with gross calorific value above 24 GJ/t), IEA Coal information. specifically the impacts of phase-out on regional markets, investments, stranded assets, prices, profitability, and trade. It compares a business-as-usual and sustainable scenario, taken from the International Energy Agency (IEA), IEA World energy outlook. aiming to identify the main challenges to investors and policymakers in this uncertain future, and to provide energy modelers with greater granularity on future scenarios for coal. A partial equilibrium model is developed to determine the optimal allocations among regions for both scenarios and to elucidate the implications for the future of coking and steam coal—known, respectively, as metallurgical coal used in the iron and steel industry and thermal coal used for electric power generation. IEA World energy outlook. , • Madhavi M. • Nuttall W.J. Coal in the twenty-first century: a climate of change and uncertainty. Lignite (non-agglomerating coal below 17.4 GJ/t) IEA Electricity information. was excluded from this study, as it represents only around 10% of coal production and a negligible portion of international coal trade. High water content and low calorific value make it uneconomical to transport lignite over long distances—most consumption sites are located close to mines—and give limited substitutability between hard coal and lignite. IEA Coal information. ### Coal stranded assets Stranded assets are defined by the IEA as “investments, which have already been made but which, at some time prior to the end of their economic life, are no longer able to earn an economic return as a result of changes in the market and regulatory environment brought about by climate policy. IEA Redrawing the energy climate map: world Energy Outlook special report. 
Stranded assets resulting from the transition to a low-carbon economy have wide-ranging implications for different stakeholders, including unemployment, lost profits, and reduced tax income for governments. • Caldecott B. Introduction to special issue: stranded assets and the environment. Delayed action is likely to result in more stranded assets, since prolonging business-as-usual will worsen efforts and corrections needed to meet the Paris agreement climate target. IRENA Stranded assets and renewables: how the energy transition affects the value of energy reserves, buildings and capital stock. McGlade and Ekins estimated that 82%–88% of the world’s 1004 Gt of coal reserves must remain unburnt to limit global temperature rise to 2°C. • McGlade C. • Ekins P. The geographical distribution of fossil fuels unused when limiting global warming to 2°C. The US, Russia, and Australia would be among the countries with the largest proportion of coal reserves stranded. • McGlade C. • Ekins P. The geographical distribution of fossil fuels unused when limiting global warming to 2°C. Coal-fired power station capacity may increase in developing countries but would face risks of being stranded after 2030 to meet decarbonization targets. IRENA Stranded assets and renewables: how the energy transition affects the value of energy reserves, buildings and capital stock. Pfeiffer et al. • Pfeiffer A. • Hepburn C. • Vogt-Schilb A. • Caldecott B. Committed emissions from existing and planned power plants and asset stranding required to meet the Paris Agreement. estimate that even if the entire planned power plant capacity was cancelled, 20% of global capacity would become stranded, four fifths of which is coal and two-thirds would occur in Asia. Similarly, the International Renewable Energy Agency’s (IRENA’s) renewable scenario sees 40 GW of coal power capacity per year becoming stranded until 2050. IRENA Stranded assets and renewables: how the energy transition affects the value of energy reserves, buildings and capital stock. While most studies focus on coal power capacity, this work analyses further the impact of stranded assets on upstream coal mines at regional level for different types of coal. It highlights the implications of unburnable coal reserves on mining assets in terms of production capacity and employment loss. ### Scenarios for the global coal industry Recent global energy scenarios show a great diversity of opinion around the future of coal (Figure 1). These range from coal consumption falling by 70% over the coming 2 decades (−5.2% per annum) through to increasing by 30% (+1.1% per annum). This contrasts with comparable scenarios from 15 years ago, which converged around modest growth for the period of 2005 to 2020 (annual rates of +1.5% from the EIA, IEA World energy outlook. +1.6% from the IEA, IEA World energy outlook. +2% from the IEEJ, IEA World energy outlook. and +2.2% from the EC IEA World energy outlook. ); which closely aligned to the realized outturn of +1.9% annual growth. IEA Coal information. The outlook for coal is highly dependent on the level of environmental ambition and will, to a large extent, determine the amount of global warming that will be experienced through this century. Figure 1 summarizes the relationship between end of century temperatures and the near-term reduction in coal consumption. • Huppmann D. • Rogelj J. • Kriegler E. • Krey V. • Riahi K. A new scenario resource for integrated 1.5°C research. 
Other areas of energy system development vary between the models (e.g., nuclear, carbon capture, and negative emissions technologies), hence a range is seen across models and scenarios. Despite the diversity in scenarios, strong regional trends are seen across them (Figure 2). Demand typically falls everywhere except for Asia, due to continued growth in India and other developing countries. North America and Europe are universally expected to lead the coal phase-out as less economic coal power plants are offset by natural gas and renewables. • DNV GL Energy transition outlook: a global and regional forecast to 2050. • ExxonMobil Outlook for energy: a view to 2040. Only two organizations separate out steam and coking coal in their scenarios, IEA World energy outlook. , The Institute for Energy Economics Japan IEEJ Outlook. leaving an area of uncertainty that this paper attempts to resolve. Limited information is given in these scenarios on the fate of major exporting and importing countries, and there is only limited agreement on the future trajectory of coal prices. IEA World energy outlook. , The Institute for Energy Economics Japan IEEJ Outlook. , • Bloomberg N.E.F. New Energy Outlook. Further discussion of these scenarios is given in Note S1. ### Modeling We focus on two IEA scenarios from IEA World energy outlook. as these are representative of the spread across other sources from Figure 1 and are the only ones to give the necessary disaggregation by region and coal type. The New Policies Scenario (NPS) considers where government ambitions may take the energy sector and is central among business-as-usual scenarios with global coal consumption remaining flat, with a 1.6% increase between 2017 and 2040. The Sustainable Development Scenario (SDS) reflects the Paris Agreement’s well-below-2°C target, with coal consumption falling by 57.4% from 2017 to 2040 (−3.6% per annum). IEA World energy outlook. Demand from these scenarios is combined with the Deloitte Coal Database, which contains asset-level data for cost, capacity, calorific value, and age of existing mines and potential new investments, plus transportation costs from mines to export terminals and consumption hubs, and between ports. The database methodology and underlying sources stem from • Paulus M. • Trüby J. Coal lumps vs. electrons: how do Chinese bulk energy transport decisions affect the global steam coal market?. • Trüby J. • Paulus M. Market structure scenarios in international steam coal trade. • Trüby J. Strategic behaviour in international metallurgical coal markets. and are described further in the experimental procedures. The database is maintained, and quality controlled by Deloitte Economic Advisory, who granted access for the scope of this paper (see resource availability). We explore these scenarios using the global hard coal market model, • Paulus M. • Trüby J. Coal lumps vs. electrons: how do Chinese bulk energy transport decisions affect the global steam coal market?. • Trüby J. • Paulus M. Market structure scenarios in international steam coal trade. • Trüby J. Strategic behaviour in international metallurgical coal markets. updated and extended to consider investments and stranded assets. This models the coking and steam coal markets using inter-temporal linear optimization to determine the least-cost production, export, and import for each region to satisfy the exogenous demand. 
It considers brownfield investments, starting from the mining capacity held in 2016, and considers investments on annual time steps to 2040 with perfect foresight. A discussion of contemporary models and full details of this model are given in the Experimental Methods, and validation is provided in the Note S2. ## Results ### Investment in coal mines Total investment in coal-mining capacity diverges considerably between the business-as-usual (BAU) and SDS. Under BAU almost all current capacity is renewed by 2040, since coal demand remains generally flat (Figure 1); whereas in the SDS, only half of the current capacity will be replaced. Total coal production capacity in 2017 was 4,134 Mtce (million tons of coal equivalent extracted per year, where 1 tce equals 29.29 GJ). Cumulative investment up to 2040 under BAU equals 4,070 Mtce, but this is 56% lower in the SDS (1,790 Mtce). The total investment in new mining capacity up to 2040 is 411 billion USD2016 under BAU, dropping to 189 billion USD2016 in the SDS. Figure 3 highlights the diverging evolution of investments over time, against the context of historical capacity additions. During the 2020s, annual investments increase slightly from 85 to 130 Mtce under BAU and from 55 to 75 Mtce in the SDS. Investments ramp up in both scenarios during the 2030s, peaking at 340 or 150 Mtce, respectively. This is driven by the replacement of mines developed during the 2000s that reach retirement age. Figure S12 shows that steam coal is responsible for this difference between scenarios, as it faces greater pressure from decarbonization policies and competition within electricity generation, whereas coking coal investment is relatively unaffected by scenario as there are fewer economically viable options for replacement. • ExxonMobil Outlook for energy: a view to 2040. The Institute for Energy Economics Japan IEEJ Outlook. • Bloomberg N.E.F. New Energy Outlook. Two peaks are observed in historical coal mine investment, one after the oil crises of 1973 and 1979, which provided strong incentives to move away from oil and toward alternative commodities such as coal. • Ekawan R. • Duchêne M. The evolution of hard coal trade in the Atlantic market. The second in the 2000s was driven by the boom in Chinese industrial output and demand for electricity and partly by replacement of mines from the first peak becoming exhausted. Our modeled investment under BAU shows that coal is entering into a third investment cycle that peaks in 2035, underlining the need for substantial new investment even though demand remains flat under BAU. In contrast, it appears as though the SDS puts an end to coal’s commodity cycles, ushering in a phase of managed decline. As shown in Figure 4, China sees the greatest investment in both scenarios, with around 45% of the world’s total. Although Chinese coal demand decreases in both scenarios, substantial investments are needed as almost the entire current mining fleet needs to be replaced by 2040. Therefore, China experiences a below-average fall in investment in the SDS. India has the second highest investments up to 2040 across both scenarios, representing around 22% of the global total, driven by its rising primary energy demand. India is not projected to follow the Chinese coal boom, as it profits from 20 years of technical and economic progress on alternative energy sources, which makes a diversified energy mix more feasible. IEA World energy outlook. For this reason, India’s coal investments are more strongly impacted by the SDS. 
Developing Asia is the third largest investor in new mines, with around 10% of global investments up to 2040. Under BAU, investment in new capacity is driven by expanding domestic demand (see Table S3) and growing export opportunities, principally from Indonesia to neighboring Asian countries due to its low cost of extraction. Under the SDS, export opportunities are more limited but resilient domestic coal demand drives new investment to replace retired mines, making developing Asia’s coal industry among the least impacted by this scenario. The US and Australian coal industries are most strongly impacted by decarbonization. Under the SDS, investment in US mines falls by 69% due to both falling domestic demand and exports to Europe. Its distance (and thus transportation cost) to more resilient markets in Asia puts the US at a disadvantage, along with Latin America and Canada. Decreasing demand in Asia similarly reduces the need for Australian exports. The scale of change in many regions is stark. Many parts of the world, especially North America, Europe, and Australia, should anticipate workers needing to change careers to avoid unemployment in mining regions. • Wilson I.A.G. • Staffell I. Rapid fuel switching from coal to natural gas through effective carbon pricing. , • Jakob M. • Steckel J.C. • Jotzo F. • Sovacool B.K. • Cornelsen L. • Chandra R. • Edenhofer O. • Holden C. • Löschel A. • Nace T. • et al. The future of coal in a carbon-constrained climate. In contrast, South Africa suffers the least due to its proximity to growing African markets and relatively low mining costs. It still sees a 44% decrease in investments in the SDS. The rest of Africa sees very little coal-mining investments because African coal demand can be provided more cheaply by existing production from South Africa and the Americas. This is a competitive optimization from the model, and Africa could use this transition to become a more energy-independent continent. ### Prices The model’s marginal costs for 2016–17 accurately match historical steam coal prices (Figure 5), but they were below coking coal prices. This can be explained by shortages or oligopolistic behavior in real coal markets, which keeps prices higher than in the perfectly competitive situation modeled here. • Trüby J. Strategic behaviour in international metallurgical coal markets. Therefore, we expect the model’s steam coal prices are reflective of future markets, whereas coking coal prices will give an indication of the direction of travel. See Note S2 for further discussion and validation. In the SDS, oversupply causes the least efficient mines to retire, moving the steam coal market down the global supply curve. Prices fall in line with demand, by an average of 3% per year to 2040. Under BAU, increasing demand in developing countries outweighs coal phase-out in developed countries, so there is no substantial shift in the global supply curve. Steam coal prices remain comparatively flat, falling by 1.3% per year to 2040. Results are similar for coking coal, but as seen previously, the divergence is less between the scenarios. Prices fall by an average of 0.8% per year under BAU and 1.6% in the SDS. ### Trade Figure 6 shows that in the SDS, the global trade volume for hard coal halves between 2020 and 2035, falling at a comparable pace (28 Mtce per year) with the pace it has risen over the last two decades. In contrast, trade volumes remain close to today’s levels under BAU. 
These results are in line with the IEA’s own projections for these two scenarios. IEA World energy outlook. , IEA Coal information. Steam and coking coal see similar trade projections, following the same trend as the overall hard coal outlook. Under BAU, 19% of global coal production is traded over the period to 2040. In the SDS coal trade falls proportionally with demand, so 18% of production is traded in the 2020s, falling to 14% by 2040. This could indicate a shift toward domestic markets, as countries begin supplying more of their own consumption when the lower value of coal cannot justify the cost of intercontinental trade routes. Figure 7 shows the profound evolution of imports and exports at regional level to 2040 in both scenarios. Under BAU, China’s imports decrease during the 2020s, before rising again once old local mines are depleted in the 2030s. In contrast, India’s imports increase until 2030, since the model needs to supply an exploding coal domestic demand. Then, imports fall more rapidly as the model manage to balance demand with domestic production. Imports to developing Asia rise, and to Europe fall, in line with local demand. BAU sees a period of stability for exporting regions, with slight falls in Australia and Indonesia offset by growth in the US and South Africa. In the SDS, the most profound shifts are India becoming self-sufficient from 2035 versus representing 12% of global imports under BAU and imports to developing Asia differing by a factor of ten between the scenarios (growing 2.2x to 2040 under BAU versus falling by three quarters in the SDS). Falling global trade in the SDS drastically affects regional outlooks. Australia sees exports fall 60% between 2017 and 2030, whereas Russia and Indonesia see 40% reductions, the US and Latin America see 20% reductions. South Africa is the only region to increase exports under both scenarios, becoming the 2nd largest coal exporter in the SDS by 2040, overtaking Indonesia and Russia to supply a quarter of the world’s export demand. This is enabled by growing markets in the rest of Africa, strategic connections to both Atlantic and Pacific markets and falling domestic demand in South Africa. China’s imports during 2025–2030 are higher in the SDS than under BAU, despite lower domestic demand. This is a feature of perfect foresight in the model (which could reflect policy makers with stable long-term objectives). With the depletion of old Chinese mines built in the 2000s, global mining overcapacity and gradually decreasing demand in China, the model finds it cheaper to accept a short period of increased imports than to invest in new mining capacity. In contrast, new capacity is built under BAU as future demand can sustain its long-term operation, which decreases the need for imports in earlier years. Figure 8 highlights the economic implications of these changes to trade under both scenarios. The financial value of trade is calculated as the change in export revenue plus the change in import costs between 2017 and 2040. Exports in each region were valued according to their FOB (Free on Board) price and imports according to their CIF (Cost, Insurance, and Freight) price. The difference between these is the cost of shipping, • Madhavi M. • Nuttall W.J. Coal in the twenty-first century: a climate of change and uncertainty. hence the reduction in trade volume in the sustainable scenario gives a global net saving of USD 10 billion per year due to reduced coal transportation. 
Europe, Japan, and South Korea are the greatest beneficiaries under both scenarios as they gain from reduced coal imports and lower prices. India also benefits under both scenarios as greater domestic production reduces spending on imports. Indonesia and Australia are among the worst hit regions in both scenarios (along with Russia and the US) due to the declining value of their coal, combined with declining export volumes in the SDS. The fate of China and Developing Asian countries is scenario specific: they either spend more money on coal imports under BAU or substantially less in the SDS; combined difference is USD 35 billion per year between the scenarios. ### Stranded assets If demand declines more rapidly than the natural retirement of mines, assets will become “stranded” when they are no longer economically competitive and shut down before reaching their technical lifetime. The presence of stranded assets can indicate the volume of worthless overcapacity. Figure 9 shows that BAU offers a stable market with few mines becoming stranded. A total of 210 Mtce capacity becomes stranded between 2020 and 2040, less than 1% of new mines built. This indicates natural churn within the industry, such as decommissioning the least profitable mines in regions such as North America, Europe, and Australia where local coal phase-out occurs. In contrast, the SDS results in severe trouble for the industry. Demand for coal falls faster during the 2030s than old mines are depleted, resulting in overcapacity and less efficient mines being replaced by cheaper ones closer to growing coal markets. 1.5–2.5% of mines are decommissioned each year, which accumulates to around one quarter of current global mining capacity (966 Mtce) becoming stranded during the next decade and around one-third by 2040 (1,210 Mtce). Decommissioning on economic grounds slows sharply in the 2030s as mines built during the 2000s boom reach their natural retirement age, alleviating the continued overcapacity due to falling demand. Relatively few coking coal mines become stranded under either scenario since steel industries preserve demand. The fate for steam coal diverges, with few stranded assets under BAU as the need for new capacity counterbalances old mines retiring. It is worth noting that the capital expenditure for coal mines is relatively low, representing less than 5% of overall cost, lessening the financial impact for investors of early retirement. • Spencer T. • Colombier M. • Sartor O. • Garg A. • Tiwari V. • Burton J. • Caetano T. • Green F. • Teng F. • Wiseman J. The 1.5°C target and coal sector transition: at the limits of societal feasibility. Coal mining is generally labor intensive, so stranding labor is at stake rather than capital. Table 1 estimates the impact of stranded assets on job losses worldwide, based on EXIOBASE data for labor factors per unit of coal production in different regions. • Wood R. • Stadler K. • Bulavskaya T. • Lutter S. • Giljum S. • de Koning A. • Kuenen J. • Schütz H. • Acosta-Fernández J. • Usubiaga A. • et al. Global sustainability accounting - developing EXIOBASE for multi-regional footprint analysis. Under the SDS, over 2.2 million jobs would suffer from early mine closures, affecting mainly low- and medium-skilled jobs, which represent respectively 48% and 46% of the total number. 
Table 1. Jobs lost due to stranded assets over the period 2020–2040

| Region | Business-as-usual | Sustainable |
| --- | --- | --- |
| Australia | 8,550 | 19,520 |
| Canada | 3,220 | 3,970 |
| China | – | 1,765,190 |
| Developing Asia | 3,300 | 72,540 |
| Europe | 21,800 | 38,760 |
| Latin America | – | 8,010 |
| Russia | 3,070 | 122,610 |
| United States | 8,790 | 59,080 |
| India | – | 133,480 |
| South Africa | – | 21,230 |
| Total | 48,730 | 2,244,390 |

Estimates are based on decommissioned capacity from Figure 9 and EXIOBASE (Wood et al., Global sustainability accounting - developing EXIOBASE for multi-regional footprint analysis).

There is a substantial difference in the extent to which mining countries suffer stranded assets under each scenario, as shown in Figure 10. Under BAU, only developed countries face stranded assets as their domestic demand falls. Under the SDS, developed countries are still most at risk, but Asian mining industries are also severely affected. China has the highest level of decommissioning with over 500 Mtce of capacity stranded up to 2040, affecting 1.75 million jobs. However, in relative terms, this represents 22% of current capacity, making China's coal mining fleet among the more resilient to climate change policies. Decommissioning is driven by falling Chinese coal demand forcing the closure of less efficient mines, which are then replaced by more efficient ones during the 2030s to fill the supply gap. Over one-third of current mines in the US and Canada would shut down before the end of their life, impacting more than 60,000 jobs and creating a considerable burden for mining companies and communities across North America. European and Eurasian coal mines mainly supply domestic markets, and as demand falls in both regions almost half of current capacity becomes stranded. Falling domestic demand and increased competition mean Australia faces a similar fate under the SDS. India and South Africa suffer the least in the SDS, with less than a tenth of current capacity becoming stranded. India is buoyed by high domestic demand, whereas South Africa benefits from the incremental growth of its coal industry in previous decades (rather than boom and bust cycles), giving sufficient natural retirements to avoid stranding. A fifth of developing Asia's current mining fleet becomes stranded by 2040 in the SDS despite the resilient local demand. This is most notable in Indonesia, as the low calorific value of coal in the region makes mines unprofitable in the face of falling prices (and thus growing importance of transport costs).

### The impact of hubris

An additional model run was performed to investigate the impact of overconfidence in the future of coal. This considers a scenario where investors expect that coal demand will remain buoyant, but coal demand falls rapidly as the world decarbonizes. This reflects one possible outcome of the wide uncertainty in current market projections or of the international response to COVID-19 causing a sudden and potentially sustained shock to global energy demand after a renewed expansion of Asian coal power generation (Le Quéré et al., Temporary reduction in daily global CO2 emissions during the COVID-19 forced confinement). Model inputs were based on the SDS as above, except that investments for the first period (2020–2025) were imposed from the BAU scenario.
The optimization was then free to adjust from 2025 onward, modeling a period of market correction, identifying which regions face the greatest challenges if investments are out of line with demand. Latin America, developing Asia, and South Africa suffer the most from a misjudgment of coal’s immediate future (see Figure S13). The volume of stranded assets increases by 40%–50% relative to the SDS with correct anticipation, affecting around 240,000 additional jobs around the world. The distance between Latin America and the main importing markets could explain their disadvantage. North America faces around one-fifth increase in decommissioning, whereas Australia and China see around one-tenth increase. Other regions see almost no impact on decommissioning of incorrect forecasting of demand, as investments under the SDS and BAU were comparable up to 2025. It must be remembered these results are based on a model of perfect competition, where uncompetitive mines are closed without delay if their future is unprofitable. In reality, geopolitical considerations may arise if investors (some of which are state actors) misjudge coal’s future and overbuild. Countries may privilege their domestic production in a world of imperfect competition, shifting the burden of stranded assets to exporters such as Australia, the United States, South Africa, or Russia. ## Discussion In a period of high uncertainty for coal, this paper performs a comparative analysis of the future of coal investment across two divergent futures: a BAU and SDS. It uses a partial equilibrium model with perfect competition and perfect foresight to capture the market fundamentals in these two scenarios up to 2040. The risk of overcapacity and stranded assets is low under BAU, where global coal demand remains at current levels. The SDS sees half as much investment in new coal mines as BAU, a deficit of over 2000 Mtce in the two decades to 2040. North America and Australia have the largest differences in investment between scenarios, implying greatest exposure to climate policy risk. China, India, and other developing Asian countries experience the highest investment in both scenarios, with investment still required in the SDS to counterbalance the depletion of existing mines. Some regions face more risk from future uncertainties regarding coal than others. Those most likely to retain strong coal investment have growing energy consumption (developing Asia and India), produce coal with high calorific value and coking coal (South Africa), are close to the growth centers of coal demand (Russia and Australia), or have rapidly depleting mining capacity in need of replacement (China and Indonesia). China is in a special position, as when the coal mines built in the 2000s are gradually depleted and retire from the mid-2020s onward, China will reach a critical juncture between igniting a new cycle of coal investment or switching to alternative energy sources. Coal prices at that time will be a crucial factor, and investors may be cautious if China’s policy is to progressively move away from coal to renewables or other sources. Demand falls sufficiently rapidly under the SDS to create overcapacity, forcing approximately one-third of global mining capacity to be economically unviable by 2040. The least efficient mines close prematurely and become stranded assets as coal prices fall by one-third. Mines that are distant from resilient coal markets have high mining costs or low calorific value represent the greatest risk. 
Correct anticipation of the industry’s future is critically important for investors. The relative prices of coal, renewables, and natural gas will likely have a major impact on which pathway developing countries’ coal demand follows. If investments are halted, for example, due to a strong coal divestment movement or strategic withholding of investments, mine retirements could lead to market tightness and high coal prices after 2030. Conversely, underestimating the pace of demand reduction could result in excessive investments, which place a heavy burden on the coal mining industry. Latin America, developing Asia, and South Africa would be among the regions most impacted by overinvestment; however, domestic industry protection policies could move this burden to other export-driven regions. Governments must tread carefully to avoid institutional lock-in • Rentier G. • Lelieveldt H. • Kramer G.J. Varieties of coal-fired power phase-out across Europe. and should consider support for industries and for the 2.2 million workers at risk worldwide during coal phase-out to mitigate socio-economic impacts. Job losses may not be considered high at the national level and could be offset by job creation in other sectors, • Patrizio P. • Leduc S. • Kraxner F. • Fuss S. • Kindermann G. • Mesfun S. • Spokas K. • Mendoza A. • Mac Dowell N. • Wetterlund E. • et al. Reducing US coal emissions can boost employment. but local mining regions would be disproportionately affected. The Paris agreement, divestment, rapidly falling costs for renewables and storage, countries striving toward ”zero coal” electricity generation, and growing awareness of air pollution across Asia are all signs of an industry potentially facing terminal decline. This paper helps to quantify the impacts that may be felt within the global coal industry. From a climate policy perspective, it is imperative that the inevitable coal phase-out is guided by sound investment and socially protective policy to minimize the risks outlined here. ## Experimental procedures ### Resource availability #### Lead contact Further information and requests for resources should be directed to the lead contact, Iain Staffell ([email protected]). #### Materials availability All material for the charts and tables presented in this paper have been deposited to Zenodo: https://doi.org/10.5281/zenodo.4629991. #### Data and code availability The computer code used in this study is described fully in Model and was reported previously in the references contained therein. There are restrictions to the availability of code and the coal database used in this study as they remain the commercial property of Deloitte Economic Advisory. Licenses to use these can be obtained from Deloitte, which may require reasonable compensation to cover the cost of producing and maintaining them. ### Models of the coal industry Modeling the international coal market is a subject of wide interest in the literature. • Paulus M. • Trüby J. Coal lumps vs. electrons: how do Chinese bulk energy transport decisions affect the global steam coal market?. , • Lorenczik S. • Panke T. Assessing market structures in resource markets — an empirical analysis of the market for metallurgical coal using various equilibrium models. Modeling competitive spatial markets dates back to Enke, • Enke S. Equilibrium among spatially separated markets: solution by electric analogue. who in 1951 used a simple electric circuit to estimate equilibrium prices and quantities. Samuelson (1952) • Samuelson P.A. 
and Takayama and Judge (1964) demonstrated linear and quadratic programming to solve such problems. Later, Nelson and McCarl (1984) and Kolstad and Abbey (1984) worked on imperfect competition within spatial markets, developing Cournot and Stackelberg equilibria to examine monopolistic behavior. Coal market models are often encapsulated within broader energy system models like LIBEMOD (Aune et al., Liberalizing the energy markets of Western Europe), which models Western Europe's gas and electricity markets and integrates the world market for coal. Similarly, the EIA's Coal Market Module (CMM) is an international trade model embedded within the US National Energy Modeling System (NEMS), which is used to develop the Annual Energy Outlook 2020. The CMM produces annual forecasts of prices, production, consumption, and imports of steam and coking coal to 2050. It uses linear programming to determine the least-cost supplies of coal from a set of supply curves, assuming perfect competition with various constraints such as import diversification and sulfur penalty costs (EIA, Coal market module of the national energy modeling system: model documentation). Haftendorn and Holz and, later, Holz et al. developed COALMOD-World, a multi-period game-theoretic equilibrium model for the global steam coal market. The model simulates market outcomes, investments, land transport, resource depletion, and export capacity to outline market structure and study implications. Unlike LIBEMOD and the CMM, COALMOD-World assumes profit-maximizing players who optimize their expected discounted profit over the total model horizon. The COALMOD framework has been used to test for abuse of market power in steam coal and the impacts of energy security and climate policies on Europe's coal market (Haftendorn and Holz; Holz et al.). Standalone coal market models also exist. Lorenczik and Panke applied various equilibrium problems with constraints to investigate market forces in the international metallurgical coal market in the late-2000s commodity super cycle. Rioux et al. developed a linear optimization coal market model focused on Chinese coal supply to assess the economic impacts of relieving congestion in the Chinese coal supply chain.
Paulus and Trüby developed a spatial inter-temporal equilibrium model for the global steam coal market (Coal lumps vs. electrons: how do Chinese bulk energy transport decisions affect the global steam coal market?) and analyzed the structure of steam coal market equilibria between 2006 and 2008 with a mixed complementarity problem (MCP) model to determine whether the market structure derives from perfect competition or oligopolistic behaviors (Market structure scenarios in international steam coal trade). Trüby (Strategic behaviour in international metallurgical coal markets) models uncompetitive behaviors by metallurgical coal market agents between 2008 and 2010 with perfect competition, Cournot, and Stackelberg models, and concludes that only non-competitive models can reproduce the market structure observed in that period. The model used in this paper is a further development of these coal market models from Trüby and Paulus (Market structure scenarios in international steam coal trade) and Trüby (Strategic behaviour in international metallurgical coal markets). Here, these models are used to explore future scenarios rather than past price spikes, with a focus on long-term market fundamentals (i.e., a detailed representation of the economics of coal mining investment and the depletion of existing mines) rather than short-term strategic behaviors. The models and the code have thus been adapted to capture the fundamental long-term trade-offs in the coal market that our analysis deals with, thereby bringing some new insights to the existing coal literature. This paper integrates sustainable and BAU demand projections into a detailed coal market partial equilibrium model for steam and coking coal. The objectives are to model and analyze the implications of these diverging demand projections in terms of investments, prices, trades, decommissioning, and regional trends. This paper provides a new perspective on the future of coal via more granular modeling of the evolution and regional structure of coking and steam coal markets in the two divergent scenarios.

### Model

The model of global coal markets is taken from previous studies (Paulus and Trüby; Trüby and Paulus; Trüby). Since those works, it has been updated with new data and extended to consider investments and retirements. The model is a multi-period perfect competition model with technical and economic constraints, which models market behavior for steam and coking coal. It only considers hard coal, as lignite is primarily mined and consumed locally, with limited trade between regions. It is a linear optimization program that seeks the lowest total cost, as it assumes a benevolent social planner. The solution of this problem is, under the condition of perfect competition, equivalent to the result of profit-maximizing players. The model is implemented in GAMS and runs on an annual time step from 2016 to 2050. Years 2016 and 2017 are compared with IEA Coal Information data for historical validation (reported fully in Note S2). Although the model runs until 2050, the framework of this study stops in 2040 to avoid end-game issues, specifically investment distortion after 2040 due to the shortening payback period.
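To make the structure of this cost-minimization problem concrete before the formal statement below, the following is a minimal, illustrative sketch of a single-period, two-mine, two-region production-and-transport LP in the same spirit. It is written in Python with SciPy purely for illustration: it is not the authors' GAMS implementation, it omits ports, investment, depletion, and multi-period dynamics, and every number in it is invented.

```python
# Illustrative sketch only: a single-period, two-mine, two-region cost-minimisation
# LP in the spirit of the model described above. It is NOT the authors' GAMS model
# and uses no Deloitte data; all numbers are invented for demonstration.
import numpy as np
from scipy.optimize import linprog

# Decision variables: tonnes shipped from mine m to demand region r
# x = [Q_m1_r1, Q_m1_r2, Q_m2_r1, Q_m2_r2]
prod_cost = np.array([40.0, 40.0, 55.0, 55.0])   # $/tce production cost of the supplying mine
trans_cost = np.array([5.0, 12.0, 9.0, 4.0])     # $/tce mine-to-region transport cost
c = prod_cost + trans_cost                       # objective: minimise total delivered cost

# Demand must be met exactly in each region (analogue of the demand-node constraint)
A_eq = np.array([[1.0, 0.0, 1.0, 0.0],           # region 1 is served by both mines
                 [0.0, 1.0, 0.0, 1.0]])          # region 2 is served by both mines
b_eq = np.array([120.0, 80.0])                   # demand per region (Mtce)

# Shipments out of each mine cannot exceed its capacity (analogue of the capacity constraint)
A_ub = np.array([[1.0, 1.0, 0.0, 0.0],           # mine 1 total output
                 [0.0, 0.0, 1.0, 1.0]])          # mine 2 total output
b_ub = np.array([150.0, 100.0])                  # mine capacities (Mtce)

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=(0, None), method="highs")
print("optimal shipments Q_{m,r}:", res.x)
print("minimised system cost:", res.fun)
# With the HiGHS backend, the duals of the demand constraints give the marginal
# system cost of one extra unit of demand in each region; this is the LP analogue
# of the marginal-value prices discussed in the paper (sign convention is SciPy's).
print("regional marginal values:", res.eqlin.marginals)
```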
The model ensures cost minimization among all players while satisfying demand in each region at all times. It considers production costs, transport costs, investment into new mines and the associated capital costs, maximum mining capacity and maximum investment, mine depletion, mining fixed costs, and mining decommissioning costs. There are three types of nodes. Mine nodes optimize their coal production and sell it either to domestic customers or, via export terminals, to coal consumers abroad. Port nodes correspond to export terminals from which coal can be shipped to any importer via ocean-going vessels. Demand nodes can satisfy their coal needs from domestic mines or via imports from mines abroad. Coal allocation is subject to the minimization of the total costs, i.e., production, transport, and investment costs. This spatial model considers a set of regions, with the most significant coal producing and consuming countries considered individually. Other countries are grouped into regions, which are listed in Table S1.

### Model formulation

Tables S4 and S5 outline the parameters and variables used in the model. The model formulation uses sets for nodes k, mines m, new mines i, ports h, demand regions j, years t, and coal types c, which are outlined in Table S6. The model is a cost minimization problem with technical and economic constraints, where demand is totally inelastic and must be satisfied in every region at all times. Supply and trade are obtained by minimizing the total costs of the system under specific constraints. The objective function to minimize total costs (Equation 1) combines mining production costs, mining capital costs, and mining operation fixed costs, as well as transport costs between mines, ports, and demand regions. It results in a perfectly competitive model where the allocation at each node $k \in K$, mining capacity investments, decommissioning, and trade flows are optimized under marginal cost-based allocations to satisfy demand.

$$\min \; totalcost = \sum_{m \in M,\, c \in C,\, t \in T} \left( prodcost_{m,c,t} + fixedcost_{m,c,t} \right) + \sum_{i \in I,\, c \in C,\, t \in T} capcost_{i,c,t} + \sum_{(k,k') \in K \times K,\, c \in C,\, t \in T} transcost_{k,k',c,t} \quad \text{(Equation 1)}$$

The production cost (Equation 2) is equal to the mining cost times the quantity produced, weighted by the calorific value of the coal. This calorific value is normalized to tons of coal equivalent (tce) of 7,000 kcal/kg (Staffell, The energy and fuel data sheet):

$$prodcost_{m,c,t} = cost_{m,c,t} \times X_{m,c,t} \times \frac{7{,}000}{cv_{m,c}} \quad \forall\, m,c,t \quad \text{(Equation 2)}$$

The fixed cost (Equation 3) is equal to the operational fixed cost times the mine's total capacity, added to the cost of decommissioned mining capacity:

$$fixedcost_{m,c,t} = fixcost \times minecapa_{m,c,t} + decomcost \times decom_{m,c,t} \quad \forall\, m,c,t \quad \text{(Equation 3)}$$

The capital cost (Equation 4) is equal to the investment volume times the investment costs, with consideration of the interest rate ρ and amortization duration τ:

$$capcost_{i,c,t} = \sum_{t' = t-\tau+1}^{t} invcap_{i,c,t'} \times invcost_{i,c} \times \frac{\rho}{1-(1+\rho)^{-\tau}} \quad \forall\, i,c,t \quad \text{(Equation 4)}$$

The transport cost (Equation 5) is obtained from the sum of all transport volumes from a mine to a domestic region, from a mine to a port, or from an exporting port to an importing port, multiplied by the corresponding cost:

$$transcost_{k,k',c,t} = Q_{k \in M, k' \in J,c,t} \times minetodom_{k \in M, k' \in J} + Q_{k \in M, k' \in H,c,t} \times minetoport_{k \in M, k' \in H} + Q_{k \in H, k' \in H,c,t} \times 2.2012 \times distance_{k \in H, k' \in H}^{0.24055} \quad \forall\, c,t \quad \text{(Equation 5)}$$

The equilibrium constraint applies at each node. As a result, at mine nodes (Equation 6), the production at the mine must be equal to the sum of flows to domestic ports or the domestic demand region.
There is no storage at a mine and all production must be sent away:

$$X_{m,c,t} - \sum_{k \in H \cup J} Q_{m,k,c,t} = 0 \quad \forall\, m,c,t \quad \text{(Equation 6)}$$

At port nodes (Equation 7), for an exporting port, the sum of all flows from domestic mines must be equal to what is exported to other regions. For an importing port, the sum of all inflows from other ports must be equal to what is transferred to the domestic demand region:

$$\sum_{k \in K} Q_{k,h,c,t} - \sum_{k \in K} Q_{h,k,c,t} = 0 \quad \forall\, h,c,t \quad \text{(Equation 7)}$$

At demand region nodes (Equation 8), the sum of what is received from domestic ports or domestic mines must equal demand:

$$\sum_{k \in M \cup H} Q_{k,j,c,t} - demand_{j,c,t} = 0 \quad \forall\, j,c,t \quad \text{(Equation 8)}$$

The annual investment (Equation 9) in a mine must be below a cap fixed in the parameters and calculated from mine characteristics, country features, and coal reserves:

$$invcap_{i,c,t} \le investmentcap_{i,c,t} - \sum_{t' \le t-1} invcap_{i,c,t'} \quad \forall\, i,c,t \quad \text{(Equation 9)}$$

The annual mine production volume (Equation 10) must be below or equal to its annual capacity:

$$X_{m,c,t} \le minecapa_{m,c,t} \quad \forall\, m,c,t \quad \text{(Equation 10)}$$

where the mine capacity is obtained in Equation 11 from the mining capacity parameter, decommissioning, and investment. This addition is weighted by the depletion rate, which is the percentage reduction in a mine's capacity due to the exhaustion of coal resources. It is 0 during the first years of operation, then increases by 20% annually over the last 5 years of the mine's lifetime.

$$minecapa_{m,c,t} = \left[ miningcap_{m,c,t} - \sum_{t' \le t} decom_{m,c,t'} + \sum_{t' \le t} invcap_{m \in I,c,t'} \right] \times (1 - depletion_{m,t}) \quad \forall\, m,c,t \quad \text{(Equation 11)}$$

There are two reasons for the retirement of coal mines. The first is technical and imposed as an exogenous constraint: mines become progressively depleted and are retired when they arrive at the end of their technical life. The second is an endogenous financial decision (i.e., a choice to be made by the optimization), and such mines are called "stranded assets" in this paper. Each mine's technical lifetime was calculated by dividing its in situ resource by its nameplate annual capacity. This makes the simplification that each mine operates at full capacity in every period, but it was preferred to use cumulative production capacity for computational tractability. Accounting for resource depletion increases inter-temporal complexity and thus problem size and solution time by an order of magnitude. It also offers little practical gain, as there is no elasticity to mine production costs, so it is only marginal and extra-marginal mines that do not produce at maximum capacity in any given period. Most extra-marginal mines go into terminal decline due to falling demand and are thus retired. Mines could also be decommissioned before their technical end of life if they become unprofitable. Decommissioning costs were set to five times the value of the mine's fixed costs (from Equation 3), which cover the care and maintenance required when mines are mothballed. The model therefore considers it economically viable to shut down a mine when it is not used at all over a 5-year period, unless the mine is used again in future periods. Since the model has perfect foresight, it will mothball mines that become competitive again in the future rather than shutting them down completely, although this is not widely seen due to demand either being steady or continually declining in our BAU and SDS scenarios. With real-world frictions and volatility in demand, this may be more widely employed.
This assumption does not imply that mines cannot weather more than 5 years of being uncompetitive, but rather it gives the model an economic incentive (with a small consequential financial hurdle) to shut down mines, enabling us to assess the mining capacity at risk in this paper. Therefore, the presence of stranded assets is more an indicator of the volume of worthless over-capacity than of the volume of mines that would be physically shut down. More complex socio-political factors (such as energy security or employment) will determine the latter, as some mines are owned by states that may be willing to subsidize production or mothball them for many years without fully shutting them down. Partial decommissioning and partial use of a mine are possible within the model, but because there is no elasticity of mining costs for individual mines, if one part of the mine is uneconomic, the whole mine would also be. Prices at import and export ports are obtained from the model's marginal value of the equilibrium constraint at port nodes. A country will buy or sell a unit of coal only if the marginal benefit to the country is equal to its price. The marginal costs shown in Figure 5 cover the range seen across major export ports. The price at an export port mirrors the FOB price, i.e., the price that importers are willing to pay without including the shipping costs and any other fees associated with transport (e.g., insurance). For steam coal, Figure 5 represents the range of FOB prices for ports in Australia, South Africa, Russia, the US, Indonesia, and Colombia, while for coking coal it represents the range of FOB prices in Australia, Russia, the United States, Canada, and Mozambique. The range of FOB prices among the main exporters is tight and the fluctuations are similar across both coal types. Globally harmonized prices were expected since the model features perfect competition. The main differences among exporting countries come from the inherent cost of domestic production, transportation costs from mines to export ports, and the distances to importing markets, as only exporting countries close to importing regions will be in a position to sell coal at a higher FOB price than competitors.

### Data

#### IEA scenarios

This paper develops a comparative analysis of the future of coal under BAU and an SDS. The aim of this study is not to evaluate which trend is the most realistic or to attach probabilities to the different trends. Instead, this paper aims at comparing the implications of the scenarios. The two scenarios differ only in the level of demand that is used as an input parameter to the coal market model described above. Both scenarios take demand projections from the IEA's World Energy Outlook 2018, using the Sustainable Development Scenario for SDS and the New Policies Scenario (NPS) for BAU. The SDS assumes a world in line with the Paris Agreement, where global warming is kept well below 2°C at the end of the century relative to pre-industrial levels and efforts are made to limit it to 1.5°C. This scenario reflects a world where international targets are achieved, including climate change, air quality, and universal access to modern energy. Power generation is driven by renewables, which provide 65% of global electricity generation by 2040 (IEA, Sustainable development scenario - a cleaner and more inclusive energy future).
210 GW of coal-fired capacity with carbon capture, utilization, and storage (CCUS) is operational in 2040, generating 1,300 TWh annually, and unabated coal is almost phased out, generating 700 TWh per year by 2040 (IEA, Carbon capture, utilisation and storage - a critical tool in the climate energy toolbox). The BAU scenario gives an outlook of where current policies will lead. However, under this scenario, emissions continue to rise slowly and the Paris Agreement's targets are not reached, with at least 2.7°C of global warming relative to pre-industrial levels in 2050. CCUS on coal-fired power generation remains marginal under this scenario (IEA, World energy outlook). These two IEA scenarios provide granular and transparent demand data for different regions and coal types based on clear assumptions. Data for 2016 and 2017 were obtained from the IEA (Coal information). Projections from each scenario were taken at 5-year steps from 2020 to 2040, except for 2035, which is not stated for the sustainable scenario. Input demand data are provided in Tables S2 and S3. The intervening years were inferred using linear interpolation. To obtain detailed annual projections by region and by coal type simultaneously, the growth rates from the IEA's regional projections were applied for each year and each region to the demand for both coal types (Equation 12):

$$Demand_{region,coaltype,year+1} = Demand_{region,coaltype,year} \times \frac{IEAprojection_{region,year+1}}{IEAprojection_{region,year}} \quad \forall\, region, coaltype, year \quad \text{(Equation 12)}$$

Then the values for each coal type are proportionally recalibrated among regions so that the total demand by coal type matches the IEA's projections (Equation 13):

$$Demand'_{region,coaltype,year} = \frac{Demand_{region,coaltype,year}}{\sum_{r \in J} Demand_{r,coaltype,year}} \times IEAprojection_{coaltype,year} \quad \forall\, region, coaltype, year \quad \text{(Equation 13)}$$

#### Coal market database

This paper uses the Deloitte Coal Database, which contains mine-level data for various countries regarding the fixed and variable mining costs, nameplate mining capacity, calorific value of the coal, expected year of decommissioning based on available resources, investment cost of new mining capacity, maximum investment capacity for new mines, transportation cost between mines and export terminals, and between coal fields and the primary consumption hubs for mines serving domestic markets. For transport between ports, the database contains distance data between each port in nautical miles, which serve as a basis for the transport cost determination (see Equation 5). Table S4 outlines the different elements that are included in the database. The database methodology and many of its underlying sources are given in section 5 of Paulus and Trüby (2011), section 4 of Trüby and Paulus (2012), and section 4 of Trüby (2013). The data have been updated using various industry reports, including annual reports from listed coal companies, investor presentations, the VDKI Annual Report (facts and figures), IEA World Energy Outlook, IEA Coal report, IEA Coal Information, EIA Annual Energy Outlook, EIA coal industry datasets, the McCloskey Coal Market Report, and other industry newsletters. These are adapted using the cost escalation logic outlined in section 4.1 of Trüby and Paulus (2012),
based on input price evolution collected from national statistical offices. Specific sources include the US Bureau of Labor Statistics; DANE, Colombia; Statistics South Africa; the Chinese Bureau of Statistics; the Federal Statistics Service, Russia; and Badan Pusat Statistik, Indonesia. For countries where mine-level data are unavailable, basin-level data were used instead, based on national statistical publications (e.g., provincial coal production in China from the National Bureau of Statistics, coal production by state in India from the Ministry of Coal). For this study, the database was updated with data from IEA Coal Information 2018, specifically calorific values, demand, and trade levels. The database is now maintained and quality-controlled by Deloitte Economic Advisory, who kindly granted access for the scope of this paper. The database has been used in various consulting engagements with coal industry stakeholders, allowing for reality checks through interviews with coal sector experts (mining companies, traders, analysts, and the banking sector). The database is also verified against third-party publications where possible, for example, against coal market data from the IEA and IHS Markit. Figure S9 provides a comparison between the coal supply cash curve used in this paper and the data from the IEA World Energy Outlook 2018. The two cost curves are generally well aligned, with differences that might stem from coal classification (certain types of coal have properties that make them suitable for use in both metallurgical and steam generating applications), attribution to domestic or international markets, washing yields, etc. Further validation of the database against the historical outturn for 2016–2017 is given in Note S2. The results from this paper with asset-level granularity could be reproduced with access to the Deloitte Coal database (see the Resource availability section), or by constructing a comparable database from the sources outlined here. They could potentially be approximated at country-level granularity using the model described in this paper (Model) and national coal data available in IEA Coal Information and World Energy Outlook reports as model input parameters.

### Assumptions and limitations

This study focused on two scenarios from the IEA, which are broadly representative of the spread of scenarios produced by other organizations. Further study could focus on the influence of a wider range of scenarios, with different timings and regional differentiation for coal phase-out. A key assumption is the perfectly inelastic regional demand fixed from the IEA's scenarios. On the one hand, the model consequently includes all the assumptions and uncertainties from the IEA World Energy Outlook's New Policies and Sustainable Development Scenarios. On the other hand, as demand is fixed and therefore inelastic, coal cannot be substituted by other commodities when the coal price peaks, and vice versa. However, this is mitigated by the fact that the IEA demand scenarios were created by modeling that did allow fuel substitution and thus are elastic.
Since the price evolution predicted here is in line with the IEA's price prediction (see Figures S7 and S8), long-term substitutions are indirectly represented. The assumption of perfect foresight means the model will decommission mines that are unprofitable without delay, using full knowledge of future revenues. In reality, mine owners (some of which are state actors) might continue operating in the hope of regaining profitability, because of misjudgment about future prices, or for socioeconomic or geopolitical reasons not included in this model. This would lead to overcapacity, which would depress prices and turn more mines into uneconomical assets. As a consequence, the decommissioning of stranded assets modeled here can be thought of as an indicator of unprofitable mining capacity, since even if mines are not actually retired, they would be losing money (after subsidies or other distortions are accounted for). Similarly, countries may promote their own mines, even if imports were cheaper, to protect local jobs and energy independence. As a result, stranded assets could in reality be transferred to exporting countries like Australia, the United States, South Africa, or Russia. The model has perfect foresight, meaning it optimizes production, transport, and investments with full knowledge of future demand, prices, and the costs of all other players in the market. Each player rationally acts to optimally satisfy demand in a perfectly informed situation, maximizing the global outcome rather than its self-interest. Both inelastic demand and perfect foresight mitigate short-term frictions and cycles. Short-term fluctuations are attenuated, meaning the model gives a broader projection of long-term trends. This is countered by the model not including storage of coal over time, and therefore no possibility of arbitrage by storing coal surpluses for shortage periods. The evolution of transport costs is not considered in the model; these remain flat over time. Fluctuations in transport fuel costs and technological improvements are not considered. The model is deterministic; there is no uncertainty among the input parameters considered.

## Acknowledgments

I.S. was funded by the Engineering and Physical Sciences Research Council, through the IDLES programme grant (EP/R045518/1).

### Author contributions

Investigation, T.A.; writing – original draft, T.A.; data curation and analysis, T.A. and J.T.; supervision, J.T., P.B., and I.S. All authors contributed to the conceptualization, methodology, review, and editing.

### Declaration of interests

J.T. is employed by Deloitte, a global provider of professional services, but worked on this project in a personal capacity. T.A. is employed by the European Commission but worked on this project in a personal capacity. The study was commissioned, conducted, written, and submitted independently by the authors. The information and views set out in this article are those of the authors and do not necessarily reflect the official opinion of Deloitte or the European Commission.

## Supplemental information

• Document S1. Figures S1–S14, Tables S1–S6, Notes S1–S3, and supplemental references

## References

• IPCC
Global warming of 1.5°C: summary for policymakers.
• McGlade C.
• Ekins P.
The geographical distribution of fossil fuels unused when limiting global warming to 2°C.
Nature. 2015; 517: 187-190. https://doi.org/10.1038/nature14016
• Rocha M.
• Parra P.Y.
• Hare B.
• Roming N.
• Ural U.
• Ancygier A.
• Cantzler J.
• Sferra F.
• Li H.
• Schaeffer M.
Implications of the Paris Agreement for coal use in the power sector.
• Nace T.
A coal phase-out pathway for 1.5°C.
• Spencer T.
• Colombier M.
• Sartor O.
• Garg A.
• Tiwari V.
• Burton J.
• Caetano T.
• Green F.
• Teng F.
• Wiseman J.
The 1.5°C target and coal sector transition: at the limits of societal feasibility.
Clim. Policy. 2018; 18: 335-351. https://doi.org/10.1080/14693062.2017.1386540
• Gaffney J.S.
• Marley N.A.
The impacts of combustion emissions on air quality and climate – from coal to biofuels and beyond.
Atmos. Environ. 2009; 43: 23-36. https://doi.org/10.1016/j.atmosenv.2008.09.016
• Powering Past Coal Alliance
Members.
• Jewell J.
• Vinichenko V.
• Nacke L.
• Cherp A.
Prospects for powering past coal.
Nat. Clim. Chang. 2019; 9: 592-597. https://doi.org/10.1038/s41558-019-0509-6
• Buckley T.
Over 100 global financial institutions are exiting coal, with more to come.
IEEFA, 2019
• Thurber M.C.
Coal.
Polity Press, 2019
• Friedlingstein P.
• Jones M.W.
• O'Sullivan M.
• Andrew R.M.
• Hauck J.
• Peters G.P.
• Peters W.
• Pongratz J.
• Sitch S.
• Le Quéré C.
• et al.
Global carbon budget 2019.
Earth Syst. Sci. Data. 2019; 11: 1783-1838. https://doi.org/10.5194/essd-11-1783-2019
• IEA
World energy outlook.
• Pielke R.
Global carbon dioxide emissions are on the brink of a long plateau.
Forbes, 2019
• Shearer C.
• Myllyvirta L.
• Yu A.
• Aitken G.
• Mathew-Shah N.
• Dallos G.
• Nace T.
Boom and bust 2020: tracking the global coal plant pipeline.
• Zhao S.
• Alexandroff A.
Current and future struggles to eliminate coal.
Energy Policy. 2019; 129: 511-520. https://doi.org/10.1016/j.enpol.2019.02.031
• Wilson I.A.G.
• Staffell I.
Rapid fuel switching from coal to natural gas through effective carbon pricing.
Nat. Energy. 2018; 3: 365-372. https://doi.org/10.1038/s41560-018-0109-0
• Johnson N.
• Krey V.
• McCollum D.L.
• Rao S.
• Riahi K.
• Rogelj J.
Stranded on a low-carbon planet: implications of climate policy for the phase-out of coal-based power plants.
Technol. Forecasting Soc. Change. 2015; 90: 89-102. https://doi.org/10.1016/j.techfore.2014.02.028
• Madhavi M.
• Nuttall W.J.
Coal in the twenty-first century: a climate of change and uncertainty.
Proc. Inst. Civ. Eng. Energy. 2019; 172: 46-63. https://doi.org/10.1680/jener.18.00011
• Stringer D.
• Biesheuvel T.
Glencore plans to cap coal output in climate shift, sources say.
• Khadem N.
Glencore moves to cap global coal output after investor pressure on climate change.
ABC News, 2019 (20 February, 2019)
• Shearer C.
Analysis: the global coal fleet shrank for first time on record in 2020.
Carbon Brief.
• Eckert V.
German energy regulator awards first permits to close coal plants.
Reuters, 2020
• Meyer G.
Value of world's largest coal mine slashed by $1.4bn.
Financial Times.
• Jansen M.
• Staffell I.
• Kitzing L.
• Quoilin S.
• Bulder B.
• Riepin I.
• Müsgens F.
Offshore wind competitiveness in mature markets without subsidy.
Nat. Energy. 2020; 5: 614-622https://doi.org/10.1038/s41560-020-0661-2
• Lazard
Levelized cost of energy and levelized cost of storage.
• Schmidt O.
• Hawkes A.
• Gambhir A.
• Staffell I.
The future cost of electrical energy storage based on experience rates.
Nat. Energy. 2017; 2: 17110https://doi.org/10.1038/nenergy.2017.110
• Bodnar P.
• Gray M.
• Grbusic T.
• Herz S.
• Lonsdale A.
• Mardell S.
• Ott C.
• Sundaresan S.
How to retire early: making accelerated coal phaseout feasible and just. 2020
• IEA
Coal information.
• IEA
Electricity information.
• IEA
Redrawing the energy climate map: World Energy Outlook special report.
• Caldecott B.
Introduction to special issue: stranded assets and the environment.
J. Sustain. Finan. Invest. 2017; 7: 1-13https://doi.org/10.1080/20430795.2016.1266748
• IRENA
Stranded assets and renewables: how the energy transition affects the value of energy reserves, buildings and capital stock.
• Pfeiffer A.
• Hepburn C.
• Vogt-Schilb A.
• Caldecott B.
Committed emissions from existing and planned power plants and asset stranding required to meet the Paris Agreement.
Environ. Res. Lett. 2018; 13: 054019https://doi.org/10.1088/1748-9326/aabc5f
• IEA
World energy outlook.
• Huppmann D.
• Rogelj J.
• Kriegler E.
• Krey V.
• Riahi K.
A new scenario resource for integrated 1.5°C research.
Nat. Clim. Change. 2018; 8: 1027-1030https://doi.org/10.1038/s41558-018-0317-4
• DNV GL
Energy transition outlook: a global and regional forecast to 2050.
• BP
BP energy outlook.
• ExxonMobil
Outlook for energy: a view to 2040.
• The Institute for Energy Economics Japan
IEEJ Outlook.
• Bloomberg N.E.F.
New Energy Outlook.
• Paulus M.
• Trüby J.
Coal lumps vs. electrons: how do Chinese bulk energy transport decisions affect the global steam coal market?.
Energy Econ. 2011; 33: 1127-1137https://doi.org/10.1016/j.eneco.2011.02.006
• Trüby J.
• Paulus M.
Market structure scenarios in international steam coal trade.
Energy J. 2012; 33: 91-123
• Trüby J.
Strategic behaviour in international metallurgical coal markets.
Energy Econ. 2013; 36: 147-157https://doi.org/10.1016/j.eneco.2012.12.006
• Ekawan R.
• Duchêne M.
The evolution of hard coal trade in the Atlantic market.
Energy Policy. 2006; 34: 1487-1498https://doi.org/10.1016/j.enpol.2004.11.008
• Jakob M.
• Steckel J.C.
• Jotzo F.
• Sovacool B.K.
• Cornelsen L.
• Chandra R.
• Edenhofer O.
• Holden C.
• Löschel A.
• Nace T.
• et al.
The future of coal in a carbon-constrained climate.
Nat. Clim. Chang. 2020; 10: 704-707https://doi.org/10.1038/s41558-020-0866-1
• Wood R.
• Bulavskaya T.
• Lutter S.
• Giljum S.
• de Koning A.
• Kuenen J.
• Schütz H.
• Acosta-Fernández J.
• Usubiaga A.
• et al.
Global sustainability accounting - developing EXIOBASE for multi-regional footprint analysis.
Sustainability. 2015; 7: 138-163https://doi.org/10.3390/su7010138
• Le Quéré C.
• Jackson R.B.
• Jones M.W.
• Smith A.J.P.
• Abernethy S.
• Andrew R.M.
• De-Gol A.J.
• Willis D.R.
• Shan Y.
• et al.
Temporary reduction in daily global CO2 emissions during the COVID-19 forced confinement.
Nat. Clim. Chang. 2020; 10: 647-653https://doi.org/10.1038/s41558-020-0797-x
• Rentier G.
• Lelieveldt H.
• Kramer G.J.
Varieties of coal-fired power phase-out across Europe.
Energy Policy. 2019; 132: 620-632https://doi.org/10.1016/j.enpol.2019.05.042
• Patrizio P.
• Leduc S.
• Kraxner F.
• Fuss S.
• Kindermann G.
• Mesfun S.
• Spokas K.
• Mendoza A.
• Mac Dowell N.
• Wetterlund E.
• et al.
Reducing US coal emissions can boost employment.
Joule. 2018; 2: 2633-2648https://doi.org/10.1016/j.joule.2018.10.004
• Lorenczik S.
• Panke T.
Assessing market structures in resource markets — an empirical analysis of the market for metallurgical coal using various equilibrium models.
Energy Econ. 2016; 59: 179-187https://doi.org/10.1016/j.eneco.2016.07.007
• Enke S.
Equilibrium among spatially separated markets: solution by electric analogue.
Econometrica. 1951; 19: 40-47
• Samuelson P.A.
Spatial price equilibrium and linear programming.
Am. Econ. Rev. 1952; 42: 283-303
• Takayama T.
• Judge G.G.
Equilibrium among spatially separated markets: a reformulation.
Econometrica. 1964; 32: 510-524
• Nelson C.H.
• McCarl B.A.
Including imperfect competition in spatial equilibrium models.
Can. J. Agric. Econ. 1984; 32: 55-70https://doi.org/10.1111/j.1744-7976.1984.tb02001.x
• Kolstad C.D.
• Abbey D.S.
The effect of market conduct on international steam coal trade.
Eur. Econ. Rev. 1984; 24: 39-59https://doi.org/10.1016/0014-2921(84)90012-6
• Aune F.R.
• Golombek R.
• Kittelsen S.A.C.
• Rosendahl K.E.
Liberalizing the energy markets of Western Europe – a computable equilibrium model approach.
Appl. Econ. 2004; 36: 2137-2149https://doi.org/10.1080/00036840310001641742
• EIA
Coal market module of the national energy modeling system: model documentation.
• EIA
Annual Energy Outlook 2020 with Projections to 2050.
• Haftendorn C.
• Holz F.
Modeling and analysis of the international steam coal trade.
Energy J. 2010; 31: 205-229
• Holz F.
• Haftendorn C.
• Mendelevitch R.
• von Hirschhausen C.
A model of the international steam coal market (COALMODworld).
DIW Berlin (German Institute for Economic Research), 2016
• Rioux B.
• Galkin P.
• Murphy F.
• Pierru A.
Economic impacts of debottlenecking congestion in the Chinese coal supply chain.
Energy Econ. 2016; 60: 387-399https://doi.org/10.1016/j.eneco.2016.10.013
• Staffell I.
The energy and fuel data sheet.
• IEA
Sustainable development scenario - a cleaner and more inclusive energy future.
• IEA
Carbon capture, utilisation and storage - a critical tool in the climate energy toolbox.
• National Bureau of Statistics of China
• Government of India, Ministry of Coal
Production and supplies.
• IHS Markit
IHS McCloskey coal report.
• Equinor
Energy perspectives: long-term macro and market outlook.
• World Energy Council
World energy scenarios: the grand transition.
• Shell
Sky scenario data.
• EIA
International energy statistics.
https://www.biostars.org/p/212392/
show the names of the genes on the Y axis on the heatmap Deeptools
4.6 years ago
Bioiris • 0
Hello, I made a heatmap of epigenetic marks around the TSS using deepTools (computeMatrix, then plotHeatmap), but the problem is that the latter tool (plotHeatmap) gives me a heatmap with the region (TSS ±1 kb) on the X axis.
Is there any way to show the names of the genes on the Y axis of the heatmap?
thank you
RNA-Seq ChIP-Seq deepTools
Is there a way to just get the top 100 genes from the heatmap as a list?
4.6 years ago
There's no option in deepTools to display the gene/transcript/region names on the heatmap; normally there are so many of them that you wouldn't be able to see them. If you really want the names displayed, then you'll need to tweak the code to do that (if you're interested in doing so, just mention that in a comment and I'll look through the code for where you'll likely need to modify things on Monday).
Yes, I'm very interested. If you can do it, I will be very grateful.
This will take a bit of playing around to get things correct, but the following is roughly how it'll work:
1. You'll need to change this line to be something more like ax.axes.set_yticks(sub_matrix['matrix'].shape[0])
2. You'll then need to add new ticks and labels, which is something like ax.axes.yticks(np.arange(sub_matrix['regions'].shape[0]), labels).
3. The labels will have to be created and are in hm.matrix.regions, I think.
Hopefully you only have one group, since it'll be tougher to do correctly with more. (See the illustrative sketch below for a standalone alternative.)
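For completeness, here is a minimal, self-contained sketch of the same idea that avoids patching deepTools at all: it reads the computeMatrix output directly and draws the heatmap with matplotlib, putting the region names on the y-axis. It assumes the standard matrix layout (a parameter header line starting with '@', then one tab-separated row per region with chrom, start, end, name, score, strand followed by the signal bins); the file name is a placeholder. This is an illustration, not a supported deepTools feature.

```python
# Hedged sketch: plot a computeMatrix output with gene names on the y-axis using
# matplotlib directly, instead of patching plotHeatmap. Column layout and file
# name are assumptions, not verified against your data.
import gzip
import numpy as np
import matplotlib.pyplot as plt

matrix_file = "matrix.gz"   # output of computeMatrix -o matrix.gz (hypothetical name)

names, values = [], []
with gzip.open(matrix_file, "rt") as fh:
    fh.readline()                        # skip the '@{...}' parameter line
    for line in fh:
        fields = line.rstrip("\n").split("\t")
        names.append(fields[3])          # region/gene name column
        row = [float(v) if v not in ("nan", "NA", "") else np.nan for v in fields[6:]]
        values.append(row)

mat = np.array(values)

fig, ax = plt.subplots(figsize=(6, max(4, 0.2 * len(names))))
im = ax.imshow(mat, aspect="auto", interpolation="nearest", cmap="RdBu_r")
ax.set_yticks(np.arange(len(names)))
ax.set_yticklabels(names, fontsize=4)    # only readable for a few hundred regions
ax.set_xlabel("bins (-1 kb ... TSS ... +1 kb)")
fig.colorbar(im, ax=ax, label="signal")
fig.savefig("heatmap_with_gene_names.png", dpi=300, bbox_inches="tight")
```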
Hello,
I tried this but I still do not get the gene names on the y-axis. Is there a way I can just obtain the list of the top 100 genes? I am interested in obtaining the top genes where the peaks are near the TSS. Thank you.
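As an aside on this follow-up: if I remember correctly, plotHeatmap has an --outFileSortedRegions option that writes out the regions in the order they appear on the heatmap, which may already give you the top genes directly. Failing that, a small illustrative script (same column-layout assumptions and placeholder file name as the sketch above) can rank the regions in the computeMatrix output by their mean signal and print the top 100:

```python
# Hedged sketch: list the 100 regions with the highest mean signal from a
# computeMatrix output (name in column 4, signal values from column 7 onward).
import gzip
import numpy as np

matrix_file = "matrix.gz"   # hypothetical file name

names, means = [], []
with gzip.open(matrix_file, "rt") as fh:
    fh.readline()                                   # skip the '@' header line
    for line in fh:
        fields = line.rstrip("\n").split("\t")
        signal = np.array([float(v) if v not in ("nan", "NA", "") else np.nan
                           for v in fields[6:]])
        names.append(fields[3])
        means.append(np.nanmean(signal))

order = np.argsort(means)[::-1][:100]               # indices of the top 100 regions
for idx in order:
    print(names[idx], means[idx])
```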
https://ftp.dk.debian.org/pub/cran/web/packages/personalized/vignettes/efficiency_augmentation_personalized.html
# 1 Efficiency augmentation
To demonstrate how to use efficiency augmentation and the propensity score utilities available in the personalized package, we simulate data with two treatments. The treatment assignments are based on covariates and hence mimic an observational setting with no unmeasured confounders.
library(personalized)
In this simulation, the treatment assignment depends on covariates and hence we must model the propensity score $$\pi(x) = Pr(T = 1 | X = x)$$. In this simulation we will assume that larger values of the outcome are better.
library(personalized)
set.seed(1)
n.obs <- 500
n.vars <- 10
x <- matrix(rnorm(n.obs * n.vars, sd = 3), n.obs, n.vars)
# simulate non-randomized treatment
xbetat <- 0.5 + 0.25 * x[,9] - 0.25 * x[,1]
trt.prob <- exp(xbetat) / (1 + exp(xbetat))
trt <- rbinom(n.obs, 1, prob = trt.prob)
# simulate delta
delta <- (0.5 + x[,2] - 0.5 * x[,3] - 1 * x[,1] + 1 * x[,1] * x[,4] )
# simulate main effects g(X)
xbeta <- 2 * x[,1] + 3 * x[,4] - 0.25 * x[,2]^2 + 2 * x[,3] + 0.25 * x[,5] ^ 2
xbeta <- xbeta + delta * (2 * trt - 1)
# simulate continuous outcomes
y <- drop(xbeta) + rnorm(n.obs, sd = 3)
# 2 Propensity score utilities
Estimation of the propensity score is a fundamental aspect of the estimation of individualized treatment rules (ITRs). The personalized package offers support tools for construction of the propensity score function used by the fit.subgroup() function. The support is via the create.propensity.function() function. This tool allows for estimation of the propensity score in high dimensions via glmnet. In high dimensions it can be important to account for regularization bias via cross-fitting (https://arxiv.org/abs/1608.00060); the create.propensity.function() offers a cross-fitting approach for high-dimensional propensity score estimation. A basic usage of this function with cross-fitting (with 4 folds; normally we would set this larger, but have reduced it to make computation time shorter) is as follows:
# arguments can be passed to cv.glmnet via cv.glmnet.args
prop.func <- create.propensity.function(crossfit = TRUE,
nfolds.crossfit = 4,
cv.glmnet.args = list(type.measure = "auc", nfolds = 3))
prop.func can then be passed to fit.subgroup() as follows:
We have set nfolds to 3 for computational reasons; it should generally be higher, such as 10.
subgrp.model <- fit.subgroup(x = x, y = y,
trt = trt,
propensity.func = prop.func,
loss = "sq_loss_lasso",
nfolds = 3) # option for cv.glmnet (for ITR estimation)
summary(subgrp.model)
## family: gaussian
## loss: sq_loss_lasso
## method: weighting
## cutpoint: 0
## propensity
## function: propensity.func
##
## benefit score: f(x),
## Trt recom = 1*I(f(x)>c)+0*I(f(x)<=c) where c is 'cutpoint'
##
## Average Outcomes:
## Recommended 0 Recommended 1
## Received 0 8.2176 (n = 136) -12.7821 (n = 69)
## Received 1 -1.7832 (n = 143) -0.4186 (n = 152)
##
## Treatment effects conditional on subgroups:
## Est of E[Y|T=0,Recom=0]-E[Y|T=/=0,Recom=0]
## 10.0008 (n = 279)
## Est of E[Y|T=1,Recom=1]-E[Y|T=/=1,Recom=1]
## 12.3635 (n = 221)
##
## NOTE: The above average outcomes are biased estimates of
## the expected outcomes conditional on subgroups.
## Use 'validate.subgroup()' to obtain unbiased estimates.
##
## ---------------------------------------------------
##
## Benefit score quantiles (f(X) for 1 vs 0):
## 0% 25% 50% 75% 100%
## -14.3195 -3.7348 -0.6998 2.0439 13.0446
##
## ---------------------------------------------------
##
## Summary of individual treatment effects:
## E[Y|T=1, X] - E[Y|T=0, X]
##
## Min. 1st Qu. Median Mean 3rd Qu. Max.
## -28.639 -7.470 -1.400 -1.717 4.088 26.089
##
## ---------------------------------------------------
##
## 4 out of 10 interactions selected in total by the lasso (cross validation criterion).
##
## The first estimate is the treatment main effect, which is always selected.
## Any other variables selected represent treatment-covariate interactions.
##
## Trt1 V1 V2 V3 V8
## Estimate -0.651 -1.0653 0.834 -0.4833 0.1437
# 3 Augmentation utilities
Efficiency in estimating ITRs can be improved by including an augmentation term. The optimal augmentation term generally is a function of the main effects of the full outcome regression model marginalized over the treatment. Especially in high dimensions, regularization bias can potentially have a negative impact on performance. Cross-fitting is again another reasonable approach to circumventing this issue. Augmentation functions can be constructed (with cross-fitting as an option) via the create.augmentation.function() function, which works similarly as the create.propensity.function() function. The basic usage is as follows:
aug.func <- create.augmentation.function(family = "gaussian",
crossfit = TRUE,
nfolds.crossfit = 4,
cv.glmnet.args = list(type.measure = "mae", nfolds = 3))
We have set nfolds to 3 for computational reasons; it should generally be higher, such as 10.
aug.func can be used for augmentation by passing it to fit.subgroup() like:
subgrp.model.aug <- fit.subgroup(x = x, y = y,
trt = trt,
propensity.func = prop.func,
augment.func = aug.func,
loss = "sq_loss_lasso",
nfolds = 3) # option for cv.glmnet (for ITR estimation)
summary(subgrp.model.aug)
## family: gaussian
## loss: sq_loss_lasso
## method: weighting
## cutpoint: 0
## augmentation
## function: augment.func
## propensity
## function: propensity.func
##
## benefit score: f(x),
## Trt recom = 1*I(f(x)>c)+0*I(f(x)<=c) where c is 'cutpoint'
##
## Average Outcomes:
## Recommended 0 Recommended 1
## Received 0 9.571 (n = 103) -7.8823 (n = 102)
## Received 1 -2.2994 (n = 120) 0.008 (n = 175)
##
## Treatment effects conditional on subgroups:
## Est of E[Y|T=0,Recom=0]-E[Y|T=/=0,Recom=0]
## 11.8704 (n = 223)
## Est of E[Y|T=1,Recom=1]-E[Y|T=/=1,Recom=1]
## 7.8903 (n = 277)
##
## NOTE: The above average outcomes are biased estimates of
## the expected outcomes conditional on subgroups.
## Use 'validate.subgroup()' to obtain unbiased estimates.
##
## ---------------------------------------------------
##
## Benefit score quantiles (f(X) for 1 vs 0):
## 0% 25% 50% 75% 100%
## -13.5302 -2.1872 0.6803 3.6881 13.3778
##
## ---------------------------------------------------
##
## Summary of individual treatment effects:
## E[Y|T=1, X] - E[Y|T=0, X]
##
## Min. 1st Qu. Median Mean 3rd Qu. Max.
## -27.060 -4.374 1.361 1.381 7.376 26.756
##
## ---------------------------------------------------
##
## 6 out of 10 interactions selected in total by the lasso (cross validation criterion).
##
## The first estimate is the treatment main effect, which is always selected.
## Any other variables selected represent treatment-covariate interactions.
##
## Trt1 V1 V2 V3 V5 V6 V8
## Estimate 0.9222 -0.9353 1.0194 -0.4164 -0.0945 -0.1404 0.118
# 4 Comparing performance with augmentation
We first run the training/testing procedure to assess the performance of the non-augmented estimator:
valmod <- validate.subgroup(subgrp.model, B = 3,
method = "training_test",
train.fraction = 0.75)
valmod
## family: gaussian
## loss: sq_loss_lasso
## method: weighting
##
## validation method: training_test_replication
## cutpoint: 0
## replications: 3
##
## benefit score: f(x),
## Trt recom = 1*I(f(x)>c)+0*I(f(x)<=c) where c is 'cutpoint'
##
## Average Test Set Outcomes:
## Recommended 0 Recommended 1
## Received 0 7.0026 (SE = 3.0607, n = 31) -13.9625 (SE = 6.6671, n = 15.6667)
## Received 1 -3.2555 (SE = 0.8747, n = 37) -0.9539 (SE = 0.5936, n = 41.3333)
##
## Treatment effects conditional on subgroups:
## Est of E[Y|T=0,Recom=0]-E[Y|T=/=0,Recom=0]
## 10.258 (SE = 3.5586, n = 68)
## Est of E[Y|T=1,Recom=1]-E[Y|T=/=1,Recom=1]
## 13.0086 (SE = 7.1043, n = 57)
##
## Est of
## E[Y|Trt received = Trt recom] - E[Y|Trt received =/= Trt recom]:
## 9.5398 (SE = 2.0051)
Then we compare with the augmented estimator. Although this is based on just 3 replications, we can see that the augmented estimator is better at discriminating between benefitting and non-benefitting patients, as evidenced by the larger treatment effect (and smaller standard error) among those predicted to benefit under the augmented estimator, versus the smaller conditional treatment effect above.
valmod.aug <- validate.subgroup(subgrp.model.aug, B = 3,
method = "training_test",
train.fraction = 0.75)
valmod.aug
## family: gaussian
## loss: sq_loss_lasso
## method: weighting
##
## validation method: training_test_replication
## cutpoint: 0
## replications: 3
##
## benefit score: f(x),
## Trt recom = 1*I(f(x)>c)+0*I(f(x)<=c) where c is 'cutpoint'
##
## Average Test Set Outcomes:
## Recommended 0
## Received 0 10.5794 (SE = 2.6567, n = 23.6667)
## Received 1 -4.9438 (SE = 4.4187, n = 31)
## Recommended 1
## Received 0 -10.6693 (SE = 5.1586, n = 25.6667)
## Received 1 -1.0748 (SE = 1.9236, n = 44.6667)
##
## Treatment effects conditional on subgroups:
## Est of E[Y|T=0,Recom=0]-E[Y|T=/=0,Recom=0]
## 15.5232 (SE = 1.8056, n = 54.6667)
## Est of E[Y|T=1,Recom=1]-E[Y|T=/=1,Recom=1]
## 9.5945 (SE = 5.7758, n = 70.3333)
##
## Est of
## E[Y|Trt received = Trt recom] - E[Y|Trt received =/= Trt recom]:
## 11.0473 (SE = 1.7645)
https://www.tutorialspoint.com/numpy/numpy_solve.htm
# numpy.linalg.solve()
The numpy.linalg.solve() function computes the solution of a system of linear equations expressed in matrix form.
Considering the following linear equations −
x + y + z = 6
2y + 5z = -4
2x + 5y - z = 27
They can be represented in the matrix form as −
$$\begin{bmatrix}1 & 1 & 1 \\0 & 2 & 5 \\2 & 5 & -1\end{bmatrix} \begin{bmatrix}x \\y \\z \end{bmatrix} = \begin{bmatrix}6 \\-4 \\27 \end{bmatrix}$$
If these three matrices are called A, X and B, the equation becomes −
A*X = B
Or
X = A⁻¹B
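For completeness, the same system can be solved in a couple of lines; the result works out to x = 5, y = 3, z = -2:

```python
import numpy as np

A = np.array([[1, 1, 1],
              [0, 2, 5],
              [2, 5, -1]])
B = np.array([6, -4, 27])

X = np.linalg.solve(A, B)
print(X)                      # [ 5.  3. -2.]
print(np.allclose(A @ X, B))  # True, confirming A*X = B
```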
http://car-stars.ru/cos(x)(cos(x)-1)=0
# cos(x)(cos(x)-1)=0
## Simple and best practice solution for the cos(x)(cos(x)-1)=0 equation. Check how easy it is, and learn it for the future. Our solution is simple and easy to understand, so don't hesitate to use it as a solution to your homework.
If it's not what you are looking for, type your own equation into the equation solver and let us solve it.
## Solution for cos(x)(cos(x)-1)=0 equation:
Simplifying
cos(x)(cos(x) + -1) = 0
Multiply cos * x
cos * x(cosx + -1) = 0
Reorder the terms:
cos * x(-1 + cosx) = 0
Multiply cos * x
cosx(-1 + cosx) = 0
(-1 * cosx + cosx * cosx) = 0
(-1cosx + c²o²s²x²) = 0
Solving
-1cosx + c2o2s2x2 = 0
Solving for variable 'c'.
Factor out the Greatest Common Factor (GCF), 'cosx'.
cosx(-1 + cosx) = 0
Subproblem 1: Set the factor 'cosx' equal to zero and attempt to solve:
Simplifying
cosx = 0
Solving
cosx = 0
Move all terms containing c to the left, all other terms to the right.
Simplifying
cosx = 0
The solution to this equation could not be determined.
This subproblem is being ignored because a solution could not be determined.
Subproblem 2: Set the factor '(-1 + cosx)' equal to zero and attempt to solve:
Simplifying
-1 + cosx = 0
Solving
-1 + cosx = 0
Move all terms containing c to the left, all other terms to the right.
Add '1' to each side of the equation.
-1 + 1 + cosx = 0 + 1
Combine like terms: -1 + 1 = 0
0 + cosx = 0 + 1
cosx = 0 + 1
Combine like terms: 0 + 1 = 1
cosx = 1
Divide each side by 'osx'.
c = o⁻¹s⁻¹x⁻¹
Simplifying
c = o⁻¹s⁻¹x⁻¹
Solution: c = {o⁻¹s⁻¹x⁻¹}
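Note that the steps above treat cos(x) as the product of the separate symbols c, o, s, and x, which is why the answer comes out in terms of c. If cos(x) is instead read as the trigonometric cosine, the zero-product property gives the standard solution:

$$\cos(x)\,\bigl(\cos(x)-1\bigr)=0 \;\Rightarrow\; \cos(x)=0 \ \text{or}\ \cos(x)=1 \;\Rightarrow\; x=\frac{\pi}{2}+k\pi \ \text{or}\ x=2k\pi,\quad k\in\mathbb{Z}$$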
https://tex.stackexchange.com/questions/linked/38607
27 questions linked to/from No room for a new \dimen
18k views
### No room for a new \dimen when including TikZ [duplicate]
Possible Duplicate: No room for a new \dimen I have a fairly large document which uses 30 or so packages. I was going to add some TikZ graphics, so I added the following to my header file: \...
4k views
### Conflict between TikZ with decorations.text library, caption and booktabs packages and progressbar theme [duplicate]
Possible Duplicate: No room for a new \dimen I obtain an error when using simultaneously the above 4 elements. When removing one of them (which I do not wish), the output is fine. \documentclass{...
566 views
### Option clash between TikZ, psplot, geometry, and hyperref [duplicate]
I know someone who is working on a LaTeX document requiring pst-plot, geometry, and hyperref, and I recently gifted him some TikZ improvements of diagrams that didn't seem to be working out for him. ...
277 views
### Unknown File Type on Math ArXiv [duplicate]
I am not-quite-yet-minted Ph.D. in math, and I am attempting my first submission to the arXiv. I uploaded all of my files except for one as a .zip file, and, when I realized I had which file I had ...
120 views
### I have an error when using pst-func package [duplicate]
when i use pspicture* to draw a chart this error occurs but plotted. file:framed.sty error:no room for a new \dimen.
86 views
### No room for a new \dimen despite of loading etex [duplicate]
After upgrading to TeX Live 2013 one of my books does not compile any longer. I do load etex, but still have not enough room. What can I do? Back in 1994 I had a colleague who hacked the LaTeX source ...
79 views
### references to WinEdt guide template giving errors [duplicate]
I am using the WinEdt QuickGuide.tex as a template to write a document, however, when I try to include the bibliography like: %=========================================================================...
54 views
### Latex: conflict titlesec and tikz packages [duplicate]
I am using the IEEEtran class and I also call in my preamble the titlesec and tikz packages. If tikz is not called my code runs fine. Otherwise, (when calling the tikz package) I am given this error ...
49 views
### Are tickz and pstricks in conflict? [duplicate]
I do not know why, but it seem that the tikz and pstriks packages are in conflict. Using both \usepackage{tikz} \uspepackage[usenames,dvipsnames]{pstricks} When compiling, it returns me the ...
21 views
### Beamer, booktabs, empheq, pgfplots incompatability [duplicate]
The following minimal example \documentclass{beamer} \usepackage{booktabs} \usepackage{empheq} \usepackage{pgfplots} \begin{document} \begin{frame} Test \end{frame} \end{document} gives the error /...
18 views
### LaTeX error with xy-pic [duplicate]
When i use the XY pack for LaTeX by adding the following line to my source: \usepackage[all]{xy} i get this error: ! No room for a new \dimen . \ch@ck ...\else \errmessage {No room for a new #3} \...
14 views
### pstricks-add incompatible with beamer? [duplicate]
Beamer seems to clash with pstricks-add: \documentclass{beamer} \usepackage{pstricks} \usepackage{pstricks-add} \begin{document} Lorem Ipsum \end{document} Gives the following error: ! No room ...
7k views
### What does the 'etex' package do, exactly?
I was creating a rather large LaTeX project, so I had to use many packages. This gave me an error No room for a new \dimen \newdimen \MPscratchDim while my editor(Kile) opened the file supp-pdf....
|
2019-08-18 18:09:50
|
https://analytixon.com/2021/03/11/if-you-did-not-already-know-1340/
|
Probabilistic Kernel Support Vector Machine
We propose a probabilistic enhancement of standard Kernel Support Vector Machines for binary classification, in order to address the case when, along with given data sets, a description of uncertainty (e.g., error bounds) may be available on each datum. In the present paper, we specifically consider Gaussian distributions to model uncertainty. Thereby, our data consist of pairs $(x_i,\Sigma_i)$, $i\in\{1,\ldots,N\}$, along with an indicator $y_i\in\{-1,1\}$ to declare membership in one of two categories for each pair. These pairs may be viewed to represent the mean and covariance, respectively, of random vectors $\xi_i$ taking values in a suitable linear space (typically ${\mathbb R}^n$). Thus, our setting may also be viewed as a modification of Support Vector Machines to classify distributions, albeit, at present, only Gaussian ones. We outline the formalism that allows computing suitable classifiers via a natural modification of the standard ‘kernel trick’. The main contribution of this work is to point out a suitable kernel function for applying Support Vector techniques to the setting of uncertain data for which a detailed uncertainty description is also available (herein, ‘Gaussian points’). …
Principal Model Analysis (PMA)
Motivated by the Bagging Partial Least Squares (PLS) and Principal Component Analysis (PCA) algorithms, we propose a Principal Model Analysis (PMA) method in this paper. In the proposed PMA algorithm, the PCA and the PLS are combined. In the method, multiple PLS models are trained on sub-training sets, derived from the original training set based on the random sampling with replacement method. The regression coefficients of all the sub-PLS models are fused in a joint regression coefficient matrix. The final projection direction is then estimated by performing the PCA on the joint regression coefficient matrix. The proposed PMA method is compared with other traditional dimension reduction methods, such as PLS, Bagging PLS, Linear discriminant analysis (LDA) and PLS-LDA. Experimental results on six public datasets show that our proposed method can achieve better classification performance and is usually more stable. …
Layer Trajectory LSTM (ltLSTM)
It is popular to stack LSTM layers to get better modeling power, especially when large amount of training data is available. However, an LSTM-RNN with too many vanilla LSTM layers is very hard to train and there still exists the gradient vanishing issue if the network goes too deep. This issue can be partially solved by adding skip connections between layers, such as residual LSTM. In this paper, we propose a layer trajectory LSTM (ltLSTM) which builds a layer-LSTM using all the layer outputs from a standard multi-layer time-LSTM. This layer-LSTM scans the outputs from time-LSTMs, and uses the summarized layer trajectory information for final senone classification. The forward-propagation of time-LSTM and layer-LSTM can be handled in two separate threads in parallel so that the network computation time is the same as the standard time-LSTM. With a layer-LSTM running through layers, a gated path is provided from the output layer to the bottom layer, alleviating the gradient vanishing issue. Trained with 30 thousand hours of EN-US Microsoft internal data, the proposed ltLSTM performed significantly better than the standard multi-layer LSTM and residual LSTM, with up to 9.0% relative word error rate reduction across different tasks. …
Teaching Explanations for Decisions (TED)
Artificial intelligence systems are being increasingly deployed due to their potential to increase the efficiency, scale, consistency, fairness, and accuracy of decisions. However, as many of these systems are opaque in their operation, there is a growing demand for such systems to provide explanations for their decisions. Conventional approaches to this problem attempt to expose or discover the inner workings of a machine learning model with the hope that the resulting explanations will be meaningful to the consumer. In contrast, this paper suggests a new approach to this problem. It introduces a simple, practical framework, called Teaching Explanations for Decisions (TED), that provides meaningful explanations that match the mental model of the consumer. We illustrate the generality and effectiveness of this approach with two different examples, resulting in highly accurate explanations with no loss of prediction accuracy for these two examples. …
|
2021-07-30 19:16:34
|
https://math.stackexchange.com/questions/175520/justification-for-transforming-explanatory-variables
|
# Justification for transforming explanatory variables
I am using linear and generalised linear models, and have transformed my explanatory variables using $\log_{10}(\bullet)$ and $\sqrt{\bullet}$ transformations, and my response variable using an arcsine square root transformation ($\arcsin(\mathrm{sgn}(x)\sqrt{|x|})$, as I had negative values in $x$). For the latter, the justification is to get the data normally distributed.
What is the justification (or point!) of transforming explanatory variables, as they do not need to be normally distributed?
|
2021-03-03 17:54:22
|
https://www.biostars.org/p/238746/
|
0
0
5.2 years ago
fusion.slope ▴ 250
Hello,
I am trying to trim the reads aligned to the genome from my BAM files, with different sizes (50 and 80 nucleotides). I am doing this with bamUtil:
./bam trimBam subset.bam subset.80.bp.from.Right.fwd.80.bp.from.left.rwd.bam -R 80
./bam trimBam subset.bam subset.80.bp.from.Right.fwd.80.bp.from.left.rwd.bam -R 50
The problem is that when I load the new BAM file with the trimmed sequences into IGV, the reads have the same length as in the untrimmed BAM files. How is it possible to clearly see that the reads were trimmed?
In the bamUtil tutorial it is written that the sequences will be trimmed and the nucleotides substituted with NNN (for all the 80 or 50 nucleotides). So basically what I need is to remove these 80 or 50 nucleotides completely, in a way in which, when I visualize it in IGV, it is clear that the trimming happened.
Any idea is really appreciated.
BAM bamUtil Trim • 3.7k views
0
So, what exactly are you trying to do? And why?
0
I am trying to remove noise from my peaks before performing the peak calling. I have reads mapped in which I know that the information that I need is in the first ~20 nucleotides (from the sequencing technique that we are using). So after mapping I want to remove all the other part of the reads (from the 21st nucleotide until the end of the read) that is not useful and might introduce noise in the peak estimation.
1
Why don't you remove those sequences before mapping?? Is it ideal to remove the sequence after mapping and use it for peak calling (creating bias or manipulating the sample)?!
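As a rough, hypothetical sketch of that pre-mapping approach (not from the thread; the file names and the plain-Python approach are my own assumptions, and a dedicated read trimmer would normally be used instead), hard-trimming every FASTQ read to its first 20 nt could look like this:
def trim_fastq(in_path, out_path, keep=20):
    # FASTQ records are 4 lines each; trim the sequence (2nd line)
    # and the quality string (4th line) to the first 'keep' bases.
    with open(in_path) as fin, open(out_path, "w") as fout:
        for i, line in enumerate(fin):
            line = line.rstrip("\n")
            if i % 4 in (1, 3):
                line = line[:keep]
            fout.write(line + "\n")

trim_fastq("reads.fastq", "reads.trimmed.fastq", keep=20)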
|
2022-05-16 07:57:30
|
https://lists.gnu.org/archive/html/lilypond-user/2004-06/msg00163.html
|
lilypond-user
## Re: Trouble with notes in LaTeX
From: Graham Percival
Subject: Re: Trouble with notes in LaTeX
Date: Thu, 10 Jun 2004 22:23:47 -0700
On 10-Jun-04, at 6:24 PM, Thorkil Wolvendans wrote:
> My question is: is it possible to show the sixteenth and quarter note in LaTeX? (I suppose I should use the feta-font somehow)

This should point you in the right direction:
(in the body of the LaTeX, I used \fetachar\fetasegno to get the appropriate symbol)

I'm not certain what exact name you want to get notes, but I'm certain it's possible.

Cheers,
- Graham
|
2022-11-27 03:24:05
|
https://stats.stackexchange.com/questions/78049/finding-the-mean-of-a-gaussian-distributed-random-variable-given-the-variance
|
# Finding the mean of a Gaussian distributed random variable given the variance
I have a Gaussian distributed random variable $X$ with known variance $\sigma^2$. Given that I know $P(X\geq t)=m$, how can I find the mean of the random variable?
• how can you know the variance without knowing the mean? – user603 Nov 29 '13 at 11:40
• I have to generate random numbers according to a gaussian distribution. Given the application it is reasonable to assume the variance and the probability that the variable is higher than a certain value. – markusian Nov 29 '13 at 11:47
• then the mean is the value of $t$ for which $m=1/2$ – user603 Nov 29 '13 at 12:13
• Feed $1 - m$ to a function like qnorm() in R. That tells you that you are so many standard deviations above or below the mean. You know the standard deviation as the root of the variance. The rest is arithmetic. – Nick Cox Nov 29 '13 at 12:16
Assuming $X \sim {\cal N}(\mu, \sigma^2)$ then $$\Pr(X \leq m) = \Phi\left(\frac{m-\mu}{\sigma}\right)$$ where $\Phi$ is the cumulative distribution function of the standard normal distribution ${\cal N}(0,1)$. Thus, knowing $p=\Pr(X \leq m)$, one has $$\frac{m-\mu}{\sigma}=\Phi^{-1}(p)$$ and finally $$\boxed{\mu=m-\sigma\Phi^{-1}(p)}.$$ And you get $\Phi^{-1}(p)$ in R by typing qnorm(p).
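As a quick numerical check of the boxed formula, here is a hedged Python sketch (the thread uses R's qnorm(); scipy's norm.ppf is the equivalent inverse CDF, and the numbers below are purely illustrative):
from scipy.stats import norm

sigma = 2.0          # known standard deviation (illustrative)
t, m = 3.0, 0.25     # suppose P(X >= t) = m
# P(X <= t) = 1 - m, so mu = t - sigma * Phi^{-1}(1 - m)
mu = t - sigma * norm.ppf(1 - m)
print(mu)                              # about 1.651
print(1 - norm.cdf((t - mu) / sigma))  # recovers 0.25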
My idea is similar to Nick Cox's comment above, but uses optimize in R, so you do not 'need' arithmetic (which of course should be preferred, as it is exact).
true_mean=5 #The unknown true mean
|
2019-10-21 02:36:26
|
http://mathhelpforum.com/advanced-algebra/91043-proof-question-about-positive-semidefinite-matrix-important-4-regression-analysis.html
|
# Math Help - proof question about positive semidefinite matrix(important 4 regression analysis)
1. ## proof question about positive semidefinite matrix(important 4 regression analysis)
this is a proof I encounter in William Greene econometric analysis appendix section.
If A is a n*k with full column rank and n > k, then (A')(A)is positive definite and (A)(A') is positive semi-definite.
proof given by author is as follows,
By assumption, Ax is not equal to zero. So, x'A'Ax = (Ax)'(Ax) = y'y = summation y^2 > 0
For the latter case, because A has more rows than columns, there is an x such that A'x = 0, thus we can only have y'y >= 0
What I don't understand is the bold and underlined part.
P/S: this question comes from pg 835, of William Greene Econometric Analysis 5th edition textbook.
2. Originally Posted by phoenicks
this is a proof I encounter in William Greene econometric analysis appendix section.
If A is a n*k with full column rank and n > k, then (A')(A)is positive definite and (A)(A') is positive semi-definite.
proof given by author is as follows,
By assumption, Ax is not equal to zero. So, x'A'Ax = (Ax)'(Ax) = y'y = summation y^2 > 0
$A$ has full column rank means that the columns of $A$ are linearly independent. see that if $v_1, \cdots , v_k$ are the columns of $A$ and $\bold{x}=[x_1 \ x_2 \cdots \ x_k]^T,$ then $A \bold{x}=x_1v_1 + \cdots + x_kv_k.$
so if $A \bold{x}=\bold{0},$ then $x_1v_1 + \cdots + x_k v_k = 0,$ and thus $x_j = 0,$ for all $j,$ because $v_1, \cdots , v_k$ are linearly independent. hence the only solution of $A \bold{x}=\bold{0}$ is $\bold{x}=\bold{0}.$
For the latter case, because A has more rows than columns, there is an x such that A'x = 0, thus we can only have y'y >= 0
What I don't understand is the bold and underlined part.
P/S: this question comes from pg 835, of William Greene Econometric Analysis 5th edition textbook.
i think by $A'$ you mean $A^T,$ the transpose of $A.$ assuming that the entries of $A$ come from a field $F,$ the matrix $A^T$ has $n$ colums and these columns are in $F^k.$ we know that $\dim F^k = k.$
thus if $n > k,$ then any $n$ vectors in $F^k$ are linearly dependent. now $A^T$ has $n$ columns, say $w_1, \cdots , w_n,$ and $n > k.$ so they are linearly dependent, i.e. there exists $\bold{0} \neq \bold{x}=[x_1 \ x_2 \cdots \ x_n]^T$
such that $x_1w_1 + \cdots + x_nw_n = \bold{0}.$ this also can be written as: $A^T \bold{x} = \bold{0}.$
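A short addendum (not from the original replies) to close the loop on the semidefinite part: for any $\bold{z},$ we have $\bold{z}^TAA^T\bold{z} = (A^T\bold{z})^T(A^T\bold{z}) \geq 0,$ so $AA^T$ is positive semidefinite; and since the argument above produces a nonzero $\bold{x}$ with $A^T\bold{x} = \bold{0},$ the value $0$ is actually attained, which is why $AA^T$ is positive semidefinite but not positive definite.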
|
2015-02-01 06:29:23
|
https://trac-hacks.org/ticket/7999
|
Opened 7 years ago
Closed 5 years ago
# LDAP bind using login creds
Reported by: shawn.ohail@… | Owned by: branson | Priority: normal | Component: DirectoryAuthPlugin | Severity: normal | Trac Release: 0.12
### Description
Nice plugin, thanks for writing it.
I was wondering if it's possible to bind to the ldap server using the login username/password and using that as the test for successful authentication. One other question then pops up, if we bind this way will we still be able to get the person's real name and email address to populate the session_attribute table?
### comment:1 Changed 7 years ago by John Hampton
Priority: normal → low | Status: new → assigned
So, I think I understand your question. I'll go with a long explanation and you can tell me that I misunderstood everything later.
The plugin does bind to the AD server using the creds supplied by the user logging into Trac. In fact, this is the only way that I know of to actually check whether or not they typed in the correct password.
That being said, I think you're interested in getting rid of the need to specify the bind_dn and bind_passwd options in the trac.ini
Obviously, it would require a code change to remove the need for bind_dn and bind_passwd. However, that's not to say it can't be done.
The bigger problem is in mapping usernames to DNs. To bind to AD, one needs a DN in the form of:
cn=Test M. User,ou=Users,dc=domain,dc=com
Rarely, however, do the login name (sAMAccountName attribute) and the cn portion match. A method for mapping sAMAccountName to DN would be needed. However, it may not be possible. For instance, in my organization, sAMAccountNames are first initial, middle initial, lastname. The cn, however, is the full name with the middle initial. There is no way to map from sAMAccountName to DN. Because of this, we bind as a known user, bind_dn, and then look for the sAMAccountName.
If we allow anonymous queries to AD (not the default as far as I am aware), then we could do the search for sAMAccountName without using bind_dn
If you have other suggestions, or perhaps even a patch, I'd be happy to consider them. I don't have an environment where I can play around with anonymous binds, so, without a patch, I'm not sure how quickly this will be implemented.
### comment:2 follow-up: 3 Changed 7 years ago by Shawn O'Hail
I think I follow what you're saying. To perform auth:
1. You bind using bind_dn and do a search for the login name to get the DN.
I'm not a python guy, but i am a perl guy. There is a class, Authen::Simple::ActiveDirectory, which does auth via AD, but does not require the same bind_dn account info that this plugin requires. Looking at the perl code they bind using the string 'username@domain' and login password. The host it binds to is the domain name itself.
So, in this way, can your plugin attempt to bind as the users's principal and avoid a separate bind account? If so, then I wonder if you're still able to look up attributes such as real name and password.
I'll try to have a look at your code and see if I can make this change. Not a python or LDAP guy but I would love to get this working for Trac!
### comment:3 in reply to: 2 ; follow-up: 5 Changed 7 years ago by John Hampton
Priority: low → normal
I think I follow what you're saying. To perform auth:
1. You bind using bind_dn and do a search for the login name to get the DN.
Exactly
Apparently, that also works. This is what I get for misunderstanding the documentation. I realize now that there is a slight difference between binding and authenticating. When binding using the full dn, it forces authentication using that object. However, once can also simply authenticate using the username@domain, and, if I read it correctly, domain\username should work also.
So, in this way, can your plugin attempt to bind as the users's principal and avoid a separate bind account? If so, then I wonder if you're still able to look up attributes such as real name and password.
In order to get full name, and email address we will still have to do a search and bind, but we should be able to do that with the authentication of the user.
I'll try to have a look at your code and see if I can make this change. Not a python or LDAP guy but I would love to get this working for Trac!
So, if you still feel inclined, you're more than welcome to try to make a patch. I will also try to find some time to adjust the code to make this possible. Thanks for helping me learn something.
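As a rough illustration of the bind-as-user idea discussed above, here is a hypothetical python-ldap sketch, not the plugin's actual code (the server URI, domain, base DN and attribute list are placeholders):
import ldap

def authenticate(user, password,
                 uri='ldap://ad.example.com',      # placeholder server
                 domain='example.com',             # placeholder realm
                 base_dn='dc=example,dc=com'):     # placeholder search base
    # A real implementation should reject empty passwords, since LDAP
    # treats an empty credential as an anonymous bind.
    conn = ldap.initialize(uri)
    # Bind directly as user@domain instead of using a separate bind_dn account.
    conn.simple_bind_s('%s@%s' % (user, domain), password)
    # Reuse the authenticated connection to look up real name and email.
    return conn.search_s(base_dn, ldap.SCOPE_SUBTREE,
                         '(sAMAccountName=%s)' % user, ['cn', 'mail'])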
### comment:4 Changed 7 years ago by Shawn O'Hail
Off the bat, for performance sake, I replaced _get_user_dn() with:
def _get_user_dn(self, user):
    return '%s@%s' % (user, self.ads)
I messed around with this for quite a while, and best I can tell, unless you mangle the existing functionality or store login passwords for retrieval later, there's no clean way around having the extra account. Not a big deal though, I can work with it as is. The only thing I would change is upon login, update the session_attributes table with the current email/full name values from LDAP in case a) the user wasn't in LDAP the last time /admin/accounts/users was hit or b) this info changed.
Between this plugin, FlexibleAssignToPlugin, and this patch, I have everything I need.
### comment:5 in reply to: 3 Changed 6 years ago by olaf.meeuwissen@…
Apparently, that also works. This is what I get for misunderstanding the documentation. I realize now that there is a slight difference between binding and authenticating. When binding using the full dn, it forces authentication using that object. However, once can also simply authenticate using the username@domain, and, if I read it correctly, domain\username should work also.
I've been experimenting a bit on what is needed to bind. These are the conclusions regarding the strings I tried for bind_dn:
• using my distinguishedName fails to bind
• using my mail attribute fails to bind
• using my sAMAccountName attribute fails to bind
• using my userPrincipalName binding succeeds. The value of this attribute is formatted as an email address but different from mail.
• using the /o and /cn of the legacyExchangeDN in a domain\username binding succeeds. The /cn is identical to my sAMAccountName.
• same when using the information from msExchADCGlobalNames
• in all cases, case is irrelevant
I should add that we have had a domain name change. My mail attribute uses the new domain name, but the userPrincipalName still uses the old domain name. This value also uses my sAMAccountName as the username part.
Hope this helps.
### comment:6 Changed 5 years ago by branson
Owner: changed from John Hampton to branson | Status: assigned → new
So some points here:
• AD does not allow anonymous bind unless the registry is hacked. SO not sure that allowing search via anonymous is valid.
• if we tried to use the user/pass of the current user, we'd have to cache that password somewhere locally using reversable encryption. This is a Bad Idea(tm) from a security POV.
• you can use {user}@{realm} in some cases.. and we could flag to support that as the username if you wanted. Might even be able to extend to validate against multiple realms that way. Would take some coding, and I dont' have a multi-realm AD config to test with ;-)
• it's fairly trivial to setup a non-priv'd user for AD ( cn=search,cn=users,dc=ad,dc=com ) to perform searches .. but no logins.
### comment:7 Changed 5 years ago by branson
Resolution: → fixed | Status: new → closed
So .. I have fixed several things on this that you might find useful:
• Renamed the plugin to DirectoryAuthPlugin
• Now can do anonymous and SSL based bind.
• Name and email are now populating to the session table.. and you can select the vars that are used
I am gonna close this as I think it's fixed now. If you find trouble .. please open a new ticket.
|
2017-08-19 23:06:21
|
https://homework.cpm.org/category/CC/textbook/ccg/chapter/7/lesson/7.3.3/problem/7-140
|
### Home > CCG > Chapter 7 > Lesson 7.3.3 > Problem7-140
7-140.
Each problem below gives the endpoints of a segment. Find the coordinates of the midpoint of the segment. If you need help, consult the Math Notes box for this lesson.
1. $(5,2)$ and $(11,14)$
Average the $x$-coordinates.
$x=\frac{5+11}{2}=8$
Average the $y$-coordinates.
$y=\frac{2+14}{2}=8$
The midpoint is at $(8,8)$.
1. $(3,8)$ and $(10,4)$
Follow the same procedures to find the midpoints of the other segment.
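The same averaging procedure is easy to check with a couple of lines of Python (a side illustration, not part of the lesson; the call below just reproduces part 1):
def midpoint(p, q):
    # Average the x-coordinates and the y-coordinates.
    return ((p[0] + q[0]) / 2, (p[1] + q[1]) / 2)

print(midpoint((5, 2), (11, 14)))  # (8.0, 8.0), matching part 1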
|
2021-05-06 21:40:20
|
https://blog.logrocket.com/mastering-node-js-path-module/
|
Val Karpov Node.js @BoosterFuels. Open source @mongoosejs @mongodb. Blogger, author x2. Coined MEAN. Find me on GitHub.
Mastering the Node.js path module
The Node.js path module is a built-in module that helps you work with file system paths in an OS-independent way. The path module is essential if you’re building a CLI tool that supports OSX, Linux, and Windows.
Even if you’re building a backend service that only runs on Linux, the path module is still helpful for avoiding edge cases when manipulating paths.
In this blog post, I’ll describe some common patterns for working with the path module, and why you should use the path module rather than manipulate paths into strings.
Joining path modules in Node
The most commonly used function in the path module is path.join(). The path.join() function merges one or more path segments into a single string, as shown below.
const path = require('path');
path.join('/path', 'to', 'test.txt'); // '/path/to/test.txt'
You may be wondering why you’d use the path.join() function instead of using string concatenation.
'/path' + '/' + 'to' + '/' + 'test.txt'; // '/path/to/test.txt'
['/path', 'to', 'test.txt'].join('/'); // '/path/to/test.txt'
There are two main reasons why.
First, for Windows support. Windows uses backslashes (\) rather than forward slashes (/) as path separators. The path.join() function handles this for you because path.join('data', 'test.txt') returns 'data/test.txt' on both Linux and OSX, and 'data\\test.txt' on Windows.
Secondly, for handling edge cases. Numerous edge cases pop up when working with file system paths. For example, you may accidentally end up with a duplicate path separator if you try to join two paths manually. The path.join() function handles leading and trailing slashes for you, like so:
path.join('data', 'test.txt'); // 'data/test.txt'
path.join('data', '/test.txt'); // 'data/test.txt'
path.join('data/', 'test.txt'); // 'data/test.txt'
path.join('data/', '/test.txt'); // 'data/test.txt'
Parsing paths in Node
The path module also has several functions for extracting path components, such as the file extension or directory. For example, the path.extname() function returns the file extension as a string:
path.extname('/path/to/test.txt'); // '.txt'
Like joining two paths, getting the file extension is trickier than it first seems. Taking everything after the last . in the string doesn’t work if there’s a directory with a . in the name, or if the path is a dotfile.
path.extname('/path/to/github.com/README'); // ''
path.extname('/path/to/.gitignore'); // ''
The path module also has path.basename() and path.dirname() functions, which get the file name (including the extension) and directory, respectively.
path.basename('/path/to/test.txt'); // 'test.txt'
path.dirname('/path/to/test.txt'); // '/path/to'
Do you need both the extension and the directory? The path.parse() function returns an object containing the path broken up into five different components, including the extension and directory. The path.parse() function is also how you can get the file’s name without any extension.
/*
{
root: '/',
dir: '/path/to',
base: 'test.txt',
ext: '.txt',
name: 'test'
}
*/
path.parse('/path/to/test.txt');
Using path.relative()
Functions like path.join() and path.extname() cover most use cases for working with file paths. But the path module has several more advanced functions, such as path.relative().
The path.relative() function takes two paths and returns the path to the second path relative to the first.
// '../../layout/index.html'
path.relative('/app/views/home.html', '/app/layout/index.html');
The path.relative() function is useful when you’re given paths relative to one directory, but want paths relative to another directory. For example, the popular file system watching library Chokidar gives you paths relative to the watched directory.
const watcher = chokidar.watch('mydir');
// if user adds 'mydir/path/to/test.txt', this
// prints 'mydir/path/to/test.txt'
watcher.on('add', path => console.log(path));
This is why tools that make heavy use of Chokidar, like Gatsby or webpack, for instance, often also make heavy use of the path.relative() function internally.
export const syncStaticDir = (): void => {
  const staticDir = nodePath.join(process.cwd(), `static`)
  chokidar
    .watch(staticDir)
    .on(`add`, path => {
      const relativePath = nodePath.relative(staticDir, path)
      fs.copy(path, `${process.cwd()}/public/${relativePath}`)
    })
    .on(`change`, path => {
      const relativePath = nodePath.relative(staticDir, path)
      fs.copy(path, `${process.cwd()}/public/${relativePath}`)
    })
}
Now, suppose a user adds a new file main.js to the static directory. Chokidar calls the on('add') event handler with path set to static/main.js. However, you don’t want the extra static/ when you copy the file to /public.
Calling path.relative('static', 'static/main.js') returns the path to static/main.js relative to static, which is exactly what you want if you want to copy the contents of static to public.
Cross-OS paths and URLs
By default, the path module automatically switches between POSIX (OSX, Linux) and Windows modes based on which OS your Node process is running.
However, the path module does have a way to use the Windows path module on POSIX, and vice versa. The path.posix and path.win32 properties contain the POSIX and Windows versions of the path module, respectively.
// Returns 'path\\to\\test.txt', regardless of OS
path.win32.join('path', 'to', 'test.txt');
// Returns 'path/to/test.txt', regardless of OS
path.posix.join('path', 'to', 'test.txt');
In most cases, switching the path module automatically based on the detected OS is the right behavior. But using the path.posix and path.win32 properties can be helpful for testing or applications where you always want to output Windows or Linux-style paths.
For example, some applications use functions like path.join() and path.extname() to work with URL paths.
// 'https://api.mydomain.app/api/v2/me'
'https://api.mydomain.app/' + path.join('api', 'v2', 'me');
This approach works on Linux and OSX, but what happens if someone tries to deploy your app on Azure Functions?
You’ll end up with 'https://api.mydomain.app/api\\v2\\me', which is not a valid URL! If you’re using the path module to manipulate URLs, you should use path.posix.
Conclusion
The Node path module is a great tool for working with file system paths, especially when it comes to joining and parsing. While you can manipulate file paths as strings, there are many subtle edge cases when working with paths.
In general, you should use the path module to get file extensions and join paths, because it is easy to make mistakes if you’re manipulating paths as strings.
|
2021-09-28 23:25:02
|
https://www.physicsforums.com/threads/taylor-inequality.202719/
|
Taylor inequality
1. Dec 5, 2007
frasifrasi
We are supposed to use taylor's inequality to estimate the accuracy of the approximation of the taylor polynomial within the interval given.
so, f(x) = cos x , a = pi/3, n=4 and the interval is 0<= x <= 2pi/3
the fifth derivative is -sin x
to get the M in taylor's inequality, wouldn't we have to plug 0 into |-sin x|?
Why does the book say |-sin x|<= 1 = M?
Does it work differently with trig functions?
2. Dec 5, 2007
Dick
A Taylor series remainder term contains a derivative which is evaluated at some point between x=0 (the point you are expanding around) and x=a. It doesn't say at which point. So the only thing you can say about -sin(x) in that interval is that |-sin(x)|<=1.
3. Dec 5, 2007
frasifrasi
so, what part does x = 0 play ?
4. Dec 5, 2007
Dick
You are expanding around a=pi/3. The point at which you want the approximation is in [0,2pi/3]. The x in the derivative part of the taylor error term is ANOTHER point in that interval, you don't know which one. Look up a discussion like http://en.wikipedia.org/wiki/Taylor's_theorem. [Broken]
Last edited by a moderator: May 3, 2017
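For reference (an added note, not part of the thread), Taylor's inequality in its usual form: if $|f^{(n+1)}(t)| \le M$ for all $t$ between $a$ and $x$, then
$$|R_n(x)| \le \frac{M}{(n+1)!}\,|x-a|^{n+1}.$$
Here $n = 4$, $a = \pi/3$ and $f^{(5)}(t) = -\sin t$; since $|-\sin t| \le 1$ for every $t$ in $[0, 2\pi/3]$, one may take $M = 1$, giving $|R_4(x)| \le \frac{1}{120}\,|x - \pi/3|^5 \le \frac{1}{120}\left(\frac{\pi}{3}\right)^5$.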
|
2017-08-19 22:30:33
|
https://sophia-web.appspot.com/ten-atoms-of-lennard-jonesium-md-simulation.html
|
# Tutorial 5 Ten Atoms of Lennard-Jonesium: MD Simulation
## 5.1 Introduction
The purpose of this tutorial is to introduce you to MD simulations on systems containing more than two atoms. To make it easy to visualize the trajectory and to interpret the system’s behavior, we use a two-dimensional system with only ten atoms of Lennard-Jonesium.
You will run a free molecular dynamics simulation. By “free”, we mean that the atoms are all initially at rest, that they begin to move as a consequence of the forces between them, and that the system’s trajectory is entirely a consequence of its initial configuration, the inter-atomic forces, and the effects of inertia.
You will also be asked to predict how the system will behave. The initial configuration looks like a snapshot of a gas – the atoms are not clustered as they would be in a liquid or a solid. But the initial velocities of the atoms are all zero, and they are attracted to one another by the van der Waals forces. Will the system remain a gas, or will it condense into a liquid-like or solid phase?
Create a new folder for your work on this tutorial. You’ll need the following files:
Before you load these into Sophia, double-click on LJ10.pdb. This should cause Chimera to open and display the initial configuration. (If it doesn’t, open Chimera and load LJ10.pdb.) Change the atomic representation to Spheres to see the initial configuration.
Throughout this tutorial, do NOT rotate the structure in the Chimera window. This is a two-dimensional system, and $$z = 0$$ for all atoms throughout the trajectory. If you rotate the structure, you may create the impression that some pairs of atoms are overlapping, when they are not actually doing so. You can reset the view by clicking the Reset View button next to the Sophia logo on the toolbar.
## 5.3 Prediction
At the start of the simulation ($$t = 0$$), all the atoms are at rest. You are to guess what will happen as time passes.
As before, it doesn’t matter whether your guess is right or not – what’s important is for you to think about the likely behavior of the system. This will help you understand the results of the simulation once it’s been run.
There is no way to predict the trajectory (the atomic positions) of the entire system, because there are so many degrees of freedom. Nor can we predict the trajectory of a single atom, because that depends on the positions and motions of the other nine atoms.
However, there is one thing that you can predict, at least for the case where all the initial velocities are zero: how the potential, kinetic and total energies will evolve with time. No one can predict the exact behavior of these three inter-related quantities over a long time. But you should be able to make a qualitative prediction about their behavior at the very beginning of the simulation: will each of these increase, decrease, or remain constant? And you should be able to make a qualitative prediction about how they will behave after the system has come to equilibrium, which occurs within about 100-200 steps (10-20 ps).
To make your prediction more specific:
1. Draw a sketch showing your expectations for $$E_p(t)$$, $$E_k(t)$$ and $$E_{tot}(t)$$. You can make a single sketch that describes both their initial evolution and their behavior after equilibrium is reached, or you can make two separate sketches. Remember that you are describing the case where all the initial atomic velocities are zero.
2. Write brief statements explaining the reasons for your predictions.
Now, before opening Sophia, quit Chimera, to clear all its settings.
## 5.4 Setup the first simulation and run it
1. Open Chimera, launch Sophia, and initiate a new simulation.
2. In the Sophia Universe window:
• PDB file: LJ10-0.pdb
• Force field: Lennard-Jones
• FF Mod file: LJ_E0_0.1.frcmod
• Box length: 10 nm
• Non-bond cutoff: 2 nm
3. Chimera: Actions / Atoms/Bonds / sphere
4. Sophia Recipe window:
• Highlight the recipe and run it.
This should generate a trajectory covering 100 ps, with output interval = 0.1 ps (1000 frames).
## 5.5 Analysis
1. Set up your screen so you can see the Chimera window, the Sophia Energy Plot window, and the Sophia Trajectory Viewer window simultaneously.
2. Using the Viewer Controls in the Sophia Trajectory Viewer window, watch the first ~50-100 frames of the trajectory. Pay attention to the atomic motions in the Chimera window, and to the plots of $$E_p$$, $$E_k$$ and $$E_{tot}$$. You can play this part of the trajectory several times by using the reset button in the Sophia Trajectory window, shown below. (You can hit the reset button at any time; you don’t have to wait for the trajectory to finish.)
1. Describe the atomic motions and the changes in $$E_p$$, $$E_k$$ and $$E_{tot}$$ in the first ~50-100 frames of the trajectory.
2. If there are any differences between your predictions and the observed behavior, explain why the system behaves differently from your predictions. (Write down your explanation.)
3. Describe the atomic motions and the changes in $$E_p$$, $$E_k$$ and $$E_{tot}$$ during the part of the trajectory after the system has reached equilibrium.
4. If there are any differences between your predictions and the observed behavior, explain why the system behaves differently from your predictions. (Write down your explanation.)
5. Generate a plot showing how the temperature changes with time. You can do this by clicking on “Temperature” in the Plotting Options panel in the Sophia Trajectory Viewer window.
6. Describe how the temperature behaves in the first ~50-100 frames and write an explanation of this behavior.
7. Why does the temperature fluctuate after the system reaches equilibrium, instead of remaining constant?
8. Save the simulation. (Click on “Save Simulation” in the Sophia Main window; then go to the folder you created for this tutorial.)
9. Take snapshots of the energy plot, and of the structure at the final frame of the simulation (instructions here). You will need these in order to answer the questions below.
## 5.6 Repeat the simulation
1. Reset the system by clicking “Reset Universe” in the Sophia Universe window.
2. Using exactly the same parameters as before, reload the files, and rerun the simulation.
3. Do you get the same results, or different results? Explain.
## 5.7 Repeat the simulation with a slightly different starting conformation
1. Reset the system by clicking “Reset Universe” in the Sophia Universe window.
2. Run a new simulation, identical to the previous one, except using LJ10-1.pdb for the starting conformation.
3. How does the trajectory with this conformation differ from the trajectory with LJ10-0?
4. Take snapshots of the energy plot, and of the structure at the final frame of the simulation ($$t = 100$$ ps) for the LJ10-1 simulation.
5. To explain how the two simulations differ, you will want to refer to the snapshots from each simulation, and you will need to open the two PDB files in a text editor.
|
2020-02-19 23:56:50
|
https://en.universaldenker.org/questions/70
|
# What is the Difference Between an Ideal and Practical Voltage Source?
Level 2 (without higher mathematics)
An ideal voltage source has no internal resistance, that is, it supplies a voltage $$U_0$$ (called source voltage) which is independent of which load resistor $$R$$ is connected to the terminals of the voltage source. That means the voltage at the resistor $$R$$ is always the voltage $$U_0$$. Look at the following Ohm's law: $$U_0 = R \, I \quad (1)$$
To keep the voltage $$U_0$$ constant, the current $$I$$ may become arbitrarily large - depending on the choice of resistor. This can result in very high currents that damage the circuit.
A practical voltage source, on the other hand, has an internal resistance $$R_{\text i}$$, which limits the current $$I$$ through the load resistor. The application of Kirchhoff's voltage law results in a voltage $$U$$ at the resistor $$R$$, which does not necessarily have to correspond to the source voltage $$U_0$$ anymore: $$U = U_0 - R_{\text i} \, I \quad (2)$$
So that a real voltage source corresponds as far as possible to the ideal voltage source, the internal resistance $$R_{\text i}$$ must be chosen as small as possible, so that the second term in equation 2 is almost zero: $$U \approx U_0$$.
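A quick numerical illustration (the values are chosen here for the example and are not from the article): with $$U_0 = 12 \, \text{V}$$, $$R_{\text i} = 0.5 \, \Omega$$ and $$R = 10 \, \Omega$$, the current is $$I = U_0/(R + R_{\text i}) \approx 1.14 \, \text{A}$$ and the load voltage is $$U = R \, I \approx 11.4 \, \text{V}$$, slightly below the source voltage. With an ideal source ($$R_{\text i} = 0$$), the full $$12 \, \text{V}$$ would appear across $$R$$.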
|
2022-12-07 10:48:51
|
https://www.gamedev.net/forums/topic/550244-vao-slower-than-not-using-it/
|
# VAO slower than not using it
## Recommended Posts
I've installed the new driver from nVidia's site (191.07). My graphic card is 9800GTX. I am using the VAO functions through the GL_ARB_vertex_array_object extension. In the scene that I'm rendering there is a total of around 160,000 triangles, split into many objects (meshes) resulting in around 800 VBOs. All I did now is wrap the calls to set up buffer bindings and data pointers to cache them into VAOs and the result is a frame rate drop from 80fps to 55fps. I did check and make sure that the VAO setup bit only happens once per each mesh, so from frame one onwards, only the glBindVertexArray() function is being called. Here is a snippet from my code:
if (!meshVAOInit)
{
    glGenVertexArrays( 1, &meshVAO );
    glBindVertexArray( meshVAO );
    bindBuffers();
    meshVAOInit = true;
}
else
    glBindVertexArray( meshVAO );

//bindBuffers();

material->begin();

//Walk material index groups
for (UintSize g=0; g<mesh->groups.size(); ++g)
{
    //Render current group
    TriMesh::IndexGroup &grp = mesh->groups[ g ];
    renderGroup( grp );
}

material->end();

//unbindBuffers();
glBindVertexArray( 0 );
Has anyone got experience with using VAOs on similar hardware? After an hour of googling I still haven't found any information on the performance issues with VAOs.
##### Share on other sites
No experience here, but a quick Google brought me to the OpenGL forums where there's a thread with pretty much everyone agreeing that VAO is 100% the same speed-wise as not using it, making it a total waste of time using.
##### Share on other sites
Well, I wouldn't care if there were no speed-up, but what seems strange to me is that it actually causes a performance regression; that's what bugs me. How on earth can calling one function instead of ten for each of the 800 meshes take more time? I am even getting some uniform locations by string name in the bindFormat function, and that's about the slowest thing you would want to do each frame.
##### Share on other sites
If things are going a reasonable way, then you're certainly right. One function call can't be slower than a dozen of them, and the driver must be able to cache its state at least as well as you can.
My only guess would be that maybe some of your uniforms are exactly 0.5 or 1.0 or 2.0 by chance? And if that's the case, then maybe the driver tries to be "extra smart" by recompiling shaders for each set, optimizing out those special constants.
Some old, broken nVidia drivers were smart like this whenever you changed any uniform, which really sucked if you didn't know. Maybe a similar behaviour is built into VAO again, who except the driver writers could tell...?
##### Share on other sites
So what you are saying is that the driver might be recompiling my shaders on-the-fly depending on the values of the uniform variables that I pass in per-frame? I don't see any sense in implementing such an "optimization" in a driver as the process of re-compiling each frame can never be faster than the drawback of not using a variable as a constant. Or can it?
##### Share on other sites
Quote:
Original post by ileben: So what you are saying is that the driver might be recompiling my shaders on-the-fly depending on the values of the uniform variables that I pass in per-frame?
I'm not saying that this is what is happening for you, and to my knowledge, recent drivers should not do that any more.
I'm just trying to give a guess on what might give a performance degradation that doesn't make any sense and that actually cannot be. Obviously, this is just a guess, there's no way I could really know, you'd have to ask a nVidia driver developer.
However, recompiling shaders is certainly something that some old broken nVidia drivers used to do (if, and only if, you supplied some special values like 0.5 or 1.0). This was extremely annoying because at first you didn't know about it, and then you'd eventually end up having your shaders recompiled several dozen times per frame, and there was no obvious reason why for fark's sake your frame rates sucked at one time, and then again everything worked just fine, when you didn't change anything that matters (or so you thought!). The workaround was simply to change 0.5 to 0.50001 or 0.49999, but hey, you had to know that in the first place!
It is even legal for the driver to do that kind of thing (although as you said it is very disputable whether it makes any sense). The driver is only required to keep everything in a way so it isn't externally visible (so, the application won't crash).
##### Share on other sites
Yep, I get all that. I was just pointing out that such an "optimization" seems extremely unlikely to have a case where it would be welcome at all.
Anyway, I was playing around with it a bit more, even changed all my shaders to accept vertex, normal and texture coordinates through generic vertex attributes (glVertexAttribPointer) rather than builtin gl*Pointer() functions. I thought it might be that this new feature somehow doesn't support the old fixed pipeline well. What I found out was - nothing. It still almost halves my framerate.
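For reference, a minimal sketch of that generic-attribute path (illustrative only, not the poster's code; the Vertex layout, the attribute indices 0/1/2 and the vbo/ibo handles are assumptions):

```cpp
// Sketch only: assumes an existing GL 3.x context, loaded GL function pointers,
// and vbo/ibo handles that have already been created and filled with data.
#include <cstddef>   // offsetof

struct Vertex { float pos[3]; float normal[3]; float uv[2]; };

GLuint vao = 0;
glGenVertexArrays(1, &vao);
glBindVertexArray(vao);

glBindBuffer(GL_ARRAY_BUFFER, vbo);            // attribute pointers capture this binding
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ibo);    // the element-array binding is stored in the VAO

glEnableVertexAttribArray(0);                  // position
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex), (void*)offsetof(Vertex, pos));
glEnableVertexAttribArray(1);                  // normal
glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex), (void*)offsetof(Vertex, normal));
glEnableVertexAttribArray(2);                  // texture coordinate
glVertexAttribPointer(2, 2, GL_FLOAT, GL_FALSE, sizeof(Vertex), (void*)offsetof(Vertex, uv));

glBindVertexArray(0);

// Per frame: a single bind replaces the whole block of state calls above.
glBindVertexArray(vao);
glDrawElements(GL_TRIANGLES, indexCount, GL_UNSIGNED_INT, 0);
glBindVertexArray(0);
```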
##### Share on other sites
Isn't 800 VBOs a bit high? Have you tried transforming everything on the CPU and using a single VBO?
##### Share on other sites
800 sounds normal, imho. In my experience, in a scene of 350+350 vbos (z-pass), VAOs were 5-10% slower on c2d E8500 OC@3.8GHz, DDR3@1.6GHz + GF8600GT/GTX275 . On the 3.2 beta drivers. (can't have many cache-misses on this PC)
Try with multithreading driver-optimizations disabled and enabled.
I seriously doubt shader recompilation has anything to do with this. It's probably just that nV haven't optimized VAOs yet - they had a bug in getting them to work, so for now probably it's a slow-but-working version of their code that they're using. Quite possibly it'll get optimized soonish.
##### Share on other sites
Quote:
Original post by raigan: Isn't 800 VBOs a bit high? Have you tried transforming everything on the CPU and using a single VBO?
Well if I had everything in one VBO, then there would be no point in using VAOs anyway, since the purpose was to reduce the overhead of switching between the VBOs (and gl*Pointer()s) as much as possible.
I do wanna keep things in separate VBOs because of the different vertex formats of the meshes. I could merge all the meshes with the same format into several VBOs (and in fact I've tried that already), but I still wanted to see if VAOs could get the non-merged version's performance closer to the merged one. It's just a matter of trying to make the more comfy-to-use scenario faster.
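For what it's worth, a rough sketch of that merged-format batching idea (hypothetical, not from the thread; it assumes GL 3.2 / ARB_draw_elements_base_vertex, plus a sharedVAO and a MeshRange bookkeeping struct of my own invention):

```cpp
// Sketch only: meshes that share one vertex format are packed into a single
// VBO/IBO pair described by one VAO, then drawn with per-mesh offsets so only
// one glBindVertexArray call is needed for the whole batch.
struct MeshRange { GLsizei indexCount; GLsizei firstIndex; GLint baseVertex; };

glBindVertexArray(sharedVAO);                  // one bind for the whole batch
for (size_t i = 0; i < meshes.size(); ++i)
{
    const MeshRange &m = meshes[i];
    glDrawElementsBaseVertex(GL_TRIANGLES, m.indexCount, GL_UNSIGNED_INT,
                             (void*)(m.firstIndex * sizeof(GLuint)),
                             m.baseVertex);
}
glBindVertexArray(0);
```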
|
2018-06-19 01:48:26
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2345285564661026, "perplexity": 1903.2865411255561}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267861641.66/warc/CC-MAIN-20180619002120-20180619022120-00217.warc.gz"}
|
https://www.khanacademy.org/math/algebra2/polynomial-functions/introduction-to-symmetry-of-functions/e/even_and_odd_functions
|
Even & odd functions
Problem
According to the graph, is f even, odd, or neither?
|
2017-05-23 03:22:00
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 1, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3940199315547943, "perplexity": 1700.7751288793977}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-22/segments/1495463607325.40/warc/CC-MAIN-20170523025728-20170523045728-00050.warc.gz"}
|
https://www.physicsforums.com/threads/trignometric-identities.335410/
|
# Trignometric Identities
1. Sep 7, 2009
### GrandMaster87
1. The problem statement, all variables and given/known data
Prove that
$$\frac{\sin 3x}{\sin x}-\frac{\cos 3x}{\cos x} = 2$$
2. Relevant equations
3. The attempt at a solution
LHS: $$\frac{\sin(2x+x)}{\sin x}-\frac{\cos(2x+x)}{\cos x}$$
$$=\frac{\sin 2x\cos x + \cos 2x\sin x}{\sin x}-\frac{\cos 2x\cos x - \sin 2x\sin x}{\cos x}$$
2. Sep 7, 2009
### GrandMaster87
That's as far as I got... I'm really new to trig. I got a wake-up call at school, so I started working on it...
3. Sep 7, 2009
### njama
By multiplying $\frac{\sin 3x}{\sin x}$ by $\frac{\cos x}{\cos x}$ and $\frac{\cos 3x}{\cos x}$ by $\frac{\sin x}{\sin x}$, you get:
$$\frac{\cos(x)\sin(3x)-\sin(x)\cos(3x)}{\sin(x)\cos(x)}$$
What can you spot now?
4. Sep 7, 2009
### mathie.girl
Did you try getting a common denominator at this point?
5. Sep 8, 2009
### GrandMaster87
can we expand sin(3x) and cos(3x) using double angle formula?
6. Sep 8, 2009
### kbaumen
It can also be done onwards from here. Do you know how to expand cos(2x) and sin(2x)?
7. Sep 8, 2009
### njama
No.
You can write the numerator:
$$\cos(x)\sin(3x)-\sin(x)\cos(3x)$$
as
$$\sin(3x-x)=\sin(2x)$$
using the sine angle-difference formula.
Also you can write the denominator
$$\cos(x)\sin(x)$$
as
$$\frac{\sin(2x)}{2}$$
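For completeness, combining those two observations finishes the proof: $$\frac{\cos(x)\sin(3x)-\sin(x)\cos(3x)}{\sin(x)\cos(x)}=\frac{\sin(2x)}{\tfrac{1}{2}\sin(2x)}=2,$$ which is the required identity.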
|
2017-08-17 18:06:05
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.43684229254722595, "perplexity": 8059.749023280969}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886103891.56/warc/CC-MAIN-20170817170613-20170817190613-00047.warc.gz"}
|
https://math.stackexchange.com/questions/1821239/does-a-metric-lindel%C3%B6f-space-have-a-countable-basis
|
# Does a metric Lindelöf space have a countable basis?
I want to prove that every metric space which is Lindelöf has a countable basis. First I tried to show that a countable cover, which exists by the Lindelöf property, is a countable basis, but for the second property of basis, which is about the intersection of two basis elements, I have no idea.
Since $X$ is a Lindelöf space, for every $n$ there exist countably many $x_{n,j}$ with $$\bigcup_jB(x_{n,j},1/n)=X.$$ So $X$ is separable; in fact $\{x_{n,j}:n,j\in\Bbb N\}$ is dense, and hence $$\{B(x_{n,j},1/k):n,j,k\in\Bbb N\}$$ is a basis.
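To spell out the final step: given an open $U$ and $x\in U$, choose $k$ with $B(x,2/k)\subseteq U$; since the balls $B(x_{k,j},1/k)$ cover $X$, there is some $j$ with $x\in B(x_{k,j},1/k)$, and then the triangle inequality gives $B(x_{k,j},1/k)\subseteq B(x,2/k)\subseteq U$, so the countable family above is indeed a basis.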
|
2019-06-26 16:41:47
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9606922268867493, "perplexity": 104.3904237779462}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560628000367.74/warc/CC-MAIN-20190626154459-20190626180459-00328.warc.gz"}
|
https://math.hecker.org/2017/12/19/linear-algebra-and-its-applications-exercise-3-4-6/
|
## Linear Algebra and Its Applications, Exercise 3.4.6
Exercise 3.4.6. Given the matrix
$Q = \begin{bmatrix} \frac{1}{\sqrt{3}}&\frac{1}{\sqrt{14}}&\qquad \\ \frac{1}{\sqrt{3}}&\frac{2}{\sqrt{14}}&\qquad \\ \frac{1}{\sqrt{3}}&-\frac{3}{\sqrt{14}}&\qquad \end{bmatrix}$
find entries for the third column such that $Q$ is orthogonal. How much freedom do you have to choose the entries? Finally, verify that both the columns and rows are orthonormal.
Answer: In order for $Q$ to be orthogonal all three of its rows must be orthonormal, with length of 1. We start with the vector forming the first row of $Q$. For its length to be 1 the square of the last entry of the first row must be equal to
$1 - (\frac{1}{\sqrt{3}})^2 - (\frac{1}{\sqrt{14}})^2 = 1 - \frac{1}{3} - \frac{1}{14} = 1 - \frac{14}{42} - \frac{3}{42} = \frac{25}{42}$
so that the entry itself would be $\pm \sqrt{\frac{25}{42}} = \pm \frac{5}{\sqrt{42}}$.
Similarly the square of the last entry of the second row must be
$1 - (\frac{1}{\sqrt{3}})^2 - (\frac{2}{\sqrt{14}})^2 = 1 - \frac{1}{3} - \frac{4}{14} = 1 - \frac{14}{42} - \frac{12}{42} = \frac{16}{42}$
so that the entry itself would be $\pm \sqrt{\frac{16}{42}} = \pm \frac{4}{\sqrt{42}}$.
Finally, the square of the last entry of the third row must be
$1 - (\frac{1}{\sqrt{3}})^2 - (\frac{3}{\sqrt{14}})^2 = 1 - \frac{1}{3} - \frac{9}{14} = 1 - \frac{14}{42} - \frac{27}{42} = \frac{1}{42}$
so that the entry itself would be $\pm \sqrt{\frac{1}{42}} = \pm \frac{1}{\sqrt{42}}$.
We have two possible choices for the last entry of the first row, $\frac{5}{\sqrt{42}}$ and $-\frac{5}{\sqrt{42}}$. Suppose that we choose $\frac{5}{\sqrt{42}}$. Since the first row and the second row are supposed to be orthogonal, we must choose $-\frac{4}{\sqrt{42}}$ for the last entry of the second row, so that the dot product of the two rows is zero:
$\frac{1}{\sqrt{3}} \cdot \frac{1}{\sqrt{3}} + \frac{1}{\sqrt{14}} \cdot \frac{2}{\sqrt{14}} + \frac{5}{\sqrt{42}} \cdot (-\frac{4}{\sqrt{42}}) = \frac{1}{3} + \frac{2}{14} - \frac{20}{42} = \frac{14}{42} + \frac{6}{42} - \frac{20}{42} = 0$
We must then choose $-\frac{1}{\sqrt{42}}$ for the last entry in the third row, so that the dot product of the first row and the third row is zero:
$\frac{1}{\sqrt{3}} \cdot \frac{1}{\sqrt{3}} + \frac{1}{\sqrt{14}} \cdot (-\frac{3}{\sqrt{14}}) + \frac{5}{\sqrt{42}} \cdot (-\frac{1}{\sqrt{42}}) = \frac{1}{3} - \frac{3}{14} - \frac{5}{42} = \frac{14}{42} - \frac{9}{42} - \frac{5}{42} = 0$
and the dot product of the second and third rows is zero:
$\frac{1}{\sqrt{3}} \cdot \frac{1}{\sqrt{3}} + \frac{2}{\sqrt{14}} \cdot (-\frac{3}{\sqrt{14}}) + (-\frac{4}{\sqrt{42}}) \cdot (-\frac{1}{\sqrt{42}}) = \frac{1}{3} - \frac{6}{14} + \frac{4}{42} = \frac{14}{42} - \frac{18}{42} + \frac{4}{42} = 0$
The resulting value for the matrix $Q$ is
$Q = \begin{bmatrix} \frac{1}{\sqrt{3}}&\frac{1}{\sqrt{14}}&\frac{5}{\sqrt{42}} \\ \frac{1}{\sqrt{3}}&\frac{2}{\sqrt{14}}&-\frac{4}{\sqrt{42}} \\ \frac{1}{\sqrt{3}}&-\frac{3}{\sqrt{14}}&-\frac{1}{\sqrt{42}} \end{bmatrix}$
The dot product of the first column with itself is
$(\frac{1}{\sqrt{3}})^2 + (\frac{1}{\sqrt{3}})^2 + (\frac{1}{\sqrt{3}})^2 = \frac{1}{3} + \frac{1}{3} + \frac{1}{3} = 1$
The dot product of the second column with itself is
$(\frac{1}{\sqrt{14}})^2 + (\frac{2}{\sqrt{14}})^2 + (-\frac{3}{\sqrt{14}})^2 = \frac{1}{14} + \frac{4}{14} + \frac{9}{14} = 1$
The dot product of the third column with itself is
$(\frac{5}{\sqrt{42}})^2 + (-\frac{4}{\sqrt{42}})^2 + (-\frac{1}{\sqrt{42}})^2 = \frac{25}{42} + \frac{16}{42} + \frac{1}{42} = 1$
Thus all three columns have length 1.
We also have the dot product of the first and second columns as zero:
$\frac{1}{\sqrt{3}} \cdot \frac{1}{\sqrt{14}} + \frac{1}{\sqrt{3}} \cdot \frac{2}{\sqrt{14}} + \frac{1}{\sqrt{3}} \cdot (-\frac{3}{\sqrt{14}}) = \frac{1}{\sqrt{42}} + \frac{2}{\sqrt{42}} - \frac{3}{\sqrt{42}} = 0$
the dot product of the first and third columns as zero:
$\frac{1}{\sqrt{3}} \cdot \frac{5}{\sqrt{42}} + \frac{1}{\sqrt{3}} \cdot (-\frac{4}{\sqrt{42}}) + \frac{1}{\sqrt{3}} \cdot (-\frac{1}{\sqrt{42}}) = \frac{5}{\sqrt{126}} - \frac{4}{\sqrt{126}} - \frac{1}{\sqrt{126}} = 0$
and the dot product of the second and third columns as zero:
$\frac{1}{\sqrt{14}} \cdot \frac{5}{\sqrt{42}} + \frac{2}{\sqrt{14}} \cdot (-\frac{4}{\sqrt{42}}) + (-\frac{3}{\sqrt{14}}) \cdot (-\frac{1}{\sqrt{42}}) = \frac{5}{\sqrt{588}} - \frac{8}{\sqrt{588}} + \frac{3}{\sqrt{588}} = 0$
Since all three rows are orthonormal (by construction) and all three columns are orthonormal, the matrix $Q$ is an orthogonal matrix.
Recall that we originally had two choices for the last entry of the first row. If we instead choose $-\frac{5}{\sqrt{42}}$ for the last entry in the first row, we must choose $\frac{4}{\sqrt{42}}$ for the last entry of the second row and $\frac{1}{\sqrt{42}}$ for the last entry in the third row, so that
$Q = \begin{bmatrix} \frac{1}{\sqrt{3}}&\frac{1}{\sqrt{14}}&-\frac{5}{\sqrt{42}} \\ \frac{1}{\sqrt{3}}&\frac{2}{\sqrt{14}}&\frac{4}{\sqrt{42}} \\ \frac{1}{\sqrt{3}}&-\frac{3}{\sqrt{14}}&\frac{1}{\sqrt{42}} \end{bmatrix}$
Verifying that this alternative value for $Q$ is an orthogonal matrix is left as an exercise for the reader.
NOTE: This continues a series of posts containing worked out exercises from the (out of print) book Linear Algebra and Its Applications, Third Edition by Gilbert Strang.
If you find these posts useful I encourage you to also check out the more current Linear Algebra and Its Applications, Fourth Edition, Dr Strang’s introductory textbook Introduction to Linear Algebra, Fifth Edition and the accompanying free online course, and Dr Strang’s other books.
|
2020-09-29 20:42:06
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 32, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8822824954986572, "perplexity": 164.62232984922255}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600402088830.87/warc/CC-MAIN-20200929190110-20200929220110-00460.warc.gz"}
|
https://api.scholarcy.com/highlights?url=10.1007/S11150-021-09562-X&utm_source=citationsy
|
## What if your boss is a woman? Evidence on gender discrimination at the workplace
We show that a female boss is associated with reduced gender discrimination, with positive spillovers mainly on female subordinates, in jobs where female presence is higher and where work organization is more complex
Claudio Lucifora; Daria Vigani
2021
#### Scholarcy highlights
• In this paper, we exploit rich cross-country survey data covering 15 European countries over the period 2000–2015 to investigate the relationship between the gender of the immediate supervisor and perceived gender discrimination at the workplace
• We recoded the variable as a dummy taking value one if the respondent agreed or strongly agreed and zero otherwise
• Harassment is measured through a dummy variable that takes value 1 if, “over the past 12 months, during the course of work” the individual has been subjected to bullying/harassment or sexual harassment
• For each model we report the Wald-χ2 test for the joint significance of all predictors
• Besides the bias due to small samples, recent studies have argued that in rare events data, the biases in probabilities can be meaningful even with big sample sizes and that these biases result in an underestimation of event probabilities
• The baseline assumption that the inclusion of the unobservables would produce an R-squared of 1 is likely to understate the robustness of results, especially when there is measurement error in the outcome
• We evaluate whether the bounds of the identified set lie within the confidence interval of $$\widetilde \beta$$, especially if the estimated coefficient does not move towards zero when including additional explanatory variables
|
2022-01-19 19:24:05
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.312382310628891, "perplexity": 2069.4299887673646}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320301488.71/warc/CC-MAIN-20220119185232-20220119215232-00061.warc.gz"}
|
http://math.stackexchange.com/questions/139142/mean-of-a-practical-distribution?answertab=active
|
# Mean of a practical distribution
I have a graph with an asymmetrical distribution (spectral response for some sensor). The graph is plotted as efficiency values versus wavelength. I must determine the median wavelength. Help please, my statistics is so rusty!
I have determined the mean value of the efficiency - the values on the y-axis, and considered selecting the wavelength (x-value) corresponding to that value - but that doesn't seem to give me a relevant answer, and it is not even close to the center of the plot (even in the case where it should be). I thought of cheating and just getting the median of the graph - but the graphs are weird. In one case, the graph has a fairly rectangular shape (rise, sort of plateau but with mountains and valleys, then fall), in another case, there is a skewed peak on the left and another tiny bump on the right.
I would appreciate any suggestions ! Thank you !
-
Do you want the median or the mean? How did you determine the mean of the efficency? Do you have a sample? – Xabier Domínguez May 1 '12 at 16:43
Used Excel's mean, and noticed that it was wrong. – Thalia May 2 '12 at 17:57
I was eventually able to find the mean as $\dfrac{\sum (p_i x_i)}{\sum (p_i)}$.
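In code that weighted mean is just one pass over the curve; a minimal sketch (the function and variable names here are made up, assuming $x_i$ are the wavelengths and $p_i$ the plotted efficiencies):

```cpp
#include <vector>
#include <cstddef>

// Weighted mean of the x-values: sum(p_i * x_i) / sum(p_i).
double weightedMeanWavelength(const std::vector<double> &wavelength,
                              const std::vector<double> &efficiency)
{
    double num = 0.0, den = 0.0;
    for (std::size_t i = 0; i < wavelength.size(); ++i)
    {
        num += efficiency[i] * wavelength[i];
        den += efficiency[i];
    }
    return num / den;
}
```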
I am not even sure -- What $\pi$ and $x_i$ are? Why don't you tell us what you wanted to find? What you are given? It is not even clear, even from the question. – user21436 May 4 '12 at 12:35
I have now edited it accordingly. But, if you wanted $p_i$ then you would at least write p_i and not pi. Looking around the site will tell you this site allows TeX mark up. Go through my edit by clicking on the timestamp available above my name. I have now removed my downvote. (FWIW, I do know what the expected value of a random variable is, but still feel that you should write your answer more clearly.) Regards, – user21436 May 5 '12 at 16:09
|
2016-05-02 09:09:23
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7450778484344482, "perplexity": 379.65287603171606}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-18/segments/1461860128071.22/warc/CC-MAIN-20160428161528-00102-ip-10-239-7-51.ec2.internal.warc.gz"}
|
https://lavelle.chem.ucla.edu/forum/viewtopic.php?f=130&t=18344
|
## Reversibly & Isothermally
$\Delta U=q+w$
Daisy Palomera 3B
Posts: 9
Joined: Wed Sep 21, 2016 2:58 pm
### Reversibly & Isothermally
What does it mean when a system is expanding reversibly and isothermally?
Joslyn_Santana_2B
Posts: 51
Joined: Wed Sep 21, 2016 2:58 pm
### Re: Reversibly & Isothermally
I think it refers to the work, and which equation you would use in your calculations. You would use $W=-nRT\ln(V_2/V_1)$.
It also means that $\Delta U$ is 0.
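To make that concrete with illustrative numbers (my own, not from the course material): for $n=1.00$ mol of an ideal gas expanding reversibly and isothermally at $T=298$ K from $V_1$ to $V_2=2V_1$, $W=-nRT\ln(V_2/V_1)=-(1.00)(8.314\ \mathrm{J\,mol^{-1}\,K^{-1}})(298\ \mathrm{K})\ln 2\approx-1.7\times 10^{3}\ \mathrm{J}$, and since $\Delta U=0$, $q=+1.7\times 10^{3}\ \mathrm{J}$.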
KelseyKobayashi_2M
Posts: 21
Joined: Sat Jul 09, 2016 3:00 am
### Re: Reversibly & Isothermally
What is the difference between reversible and irreversible reactions (conceptually, not mathematically)?
Michael Lesgart 1H
Posts: 26
Joined: Wed Sep 21, 2016 2:57 pm
### Re: Reversibly & Isothermally
In reversible reactions, reactants react with other reactants to form products, and the products react with other products to form more reactants. Essentially, it is a reaction cycle.
In an irreversible reaction, reactants react with other reactants to form products, and the products do not form reactants.
hope this helped
|
2021-01-22 09:35:22
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 1, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.49025678634643555, "perplexity": 6367.771779108234}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703529179.46/warc/CC-MAIN-20210122082356-20210122112356-00027.warc.gz"}
|
https://compass.blogs.bristol.ac.uk/tag/environment/
|
## Student Perspectives: Change in the air: Tackling Bristol’s nitrogen oxide problem
A post by Dom Owens, PhD student on the Compass programme.
“Air pollution kills an estimated seven million people worldwide every year” – World Health Organisation
Many particulates and chemicals are present in the air in urban areas like Bristol, and this poses a serious risk to our respiratory health. It is difficult to model how these concentrations behave over time due to the complex physical, environmental, and economic factors they depend on, but identifying if and when abrupt changes occur is crucial for designing and evaluating public policy measures, as outlined in the local Air Quality Annual Status Report. Using a novel change point detection procedure to account for dependence in time and space, we provide an interpretable model for nitrogen oxide (NOx) levels in Bristol, telling us when these structural changes occur and describing the dynamics driving them in between.
## Model and Change Point Detection
We model the data with a piecewise-stationary vector autoregression (VAR) model: for $k_{j-1} < t \leq k_j$,
$\boldsymbol{Y}_{t} = \boldsymbol{\mu}^{(j)} + \boldsymbol{A}_1^{(j)}\boldsymbol{Y}_{t-1} + \dots + \boldsymbol{A}_p^{(j)}\boldsymbol{Y}_{t-p} + \boldsymbol{\varepsilon}_{t}.$
In between change points the time series $\boldsymbol{Y}_{t}$, a $d$-dimensional vector, depends on itself linearly over $p \geq 1$ previous time steps through parameter matrices $\boldsymbol{A}_i^{(j)}, i=1, \dots, p$ with intercepts $\boldsymbol{\mu}^{(j)}$, but at unknown change points $k_j, j = 1, \dots, q$ the parameters switch abruptly. $\{ \boldsymbol{\varepsilon}_{t} \in \mathbb{R}^d : t \geq 1 \}$ are white noise errors, and we have $n$ observations.
|
2021-09-21 22:27:09
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 8, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4834669828414917, "perplexity": 2182.105905012584}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057274.97/warc/CC-MAIN-20210921221605-20210922011605-00254.warc.gz"}
|
https://www.thejournal.club/c/paper/322999/
|
#### A Tight Bound for Stochastic Submodular Cover
##### Lisa Hellerstein, Devorah Kletenik, Srinivasan Parthasarathy
We show that the Adaptive Greedy algorithm of Golovin and Krause (2011) achieves an approximation bound of $(\ln (Q/\eta)+1)$ for Stochastic Submodular Cover: here $Q$ is the "goal value" and $\eta$ is the smallest non-zero marginal increase in utility deliverable by an item. (For integer-valued utility functions, we show a bound of $H(Q)$, where $H(Q)$ is the $Q^{th}$ Harmonic number.) Although this bound was claimed by Golovin and Krause in the original version of their paper, the proof was later shown to be incorrect by Nan and Saligrama (2017). The subsequent corrected proof of Golovin and Krause (2017) gives a quadratic bound of $(\ln(Q/\eta) + 1)^2$. Other previous bounds for the problem are $56(\ln(Q/\eta) + 1)$, implied by work of Im et al. (2016) on a related problem, and $k(\ln (Q/\eta)+1)$, due to Deshpande et al. (2016) and Hellerstein and Kletenik (2018), where $k$ is the number of states. Our bound generalizes the well-known $(\ln~m + 1)$ approximation bound on the greedy algorithm for the classical Set Cover problem, where $m$ is the size of the ground set.
|
2023-03-28 17:12:49
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9787265658378601, "perplexity": 716.1596017237388}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296948868.90/warc/CC-MAIN-20230328170730-20230328200730-00354.warc.gz"}
|
https://labs.tib.eu/arxiv/?author=R.%20Takahashi
|
• We present our current best estimate of the plausible observing scenarios for the Advanced LIGO, Advanced Virgo and KAGRA gravitational-wave detectors over the next several years, with the intention of providing information to facilitate planning for multi-messenger astronomy with gravitational waves. We estimate the sensitivity of the network to transient gravitational-wave signals for the third (O3), fourth (O4) and fifth observing (O5) runs, including the planned upgrades of the Advanced LIGO and Advanced Virgo detectors. We study the capability of the network to determine the sky location of the source for gravitational-wave signals from the inspiral of binary systems of compact objects, that is BNS, NSBH, and BBH systems. The ability to localize the sources is given as a sky-area probability, luminosity distance, and comoving volume. The median sky localization area (90\% credible region) is expected to be a few hundreds of square degrees for all types of binary systems during O3 with the Advanced LIGO and Virgo (HLV) network. The median sky localization area will improve to a few tens of square degrees during O4 with the Advanced LIGO, Virgo, and KAGRA (HLVK) network. We evaluate sensitivity and localization expectations for unmodeled signal searches, including the search for intermediate mass black hole binary mergers.
• ### The status of KAGRA underground cryogenic gravitational wave telescope(1710.04823)
KAGRA is a 3-km interferometric gravitational wave telescope located in the Kamioka mine in Japan. It is the first km-class gravitational wave telescope constructed underground to reduce seismic noise, and the first km-class telescope to use cryogenic cooling of test masses to reduce thermal noise. The construction of the infrastructure to house the interferometer in the tunnel, and the initial phase operation of the interferometer with a simple 3-km Michelson configuration have been completed. The first cryogenic operation is expected in 2018, and the observing runs with a full interferometer are expected in 2020s. The basic interferometer configuration and the current status of KAGRA are described.
• ### A White Paper on keV Sterile Neutrino Dark Matter(1602.04816)
We present a comprehensive review of keV-scale sterile neutrino Dark Matter, collecting views and insights from all disciplines involved - cosmology, astrophysics, nuclear, and particle physics - in each case viewed from both theoretical and experimental/observational perspectives. After reviewing the role of active neutrinos in particle physics, astrophysics, and cosmology, we focus on sterile neutrinos in the context of the Dark Matter puzzle. Here, we first review the physics motivation for sterile neutrino Dark Matter, based on challenges and tensions in purely cold Dark Matter scenarios. We then round out the discussion by critically summarizing all known constraints on sterile neutrino Dark Matter arising from astrophysical observations, laboratory experiments, and theoretical considerations. In this context, we provide a balanced discourse on the possibly positive signal from X-ray observations. Another focus of the paper concerns the construction of particle physics models, aiming to explain how sterile neutrinos of keV-scale masses could arise in concrete settings beyond the Standard Model of elementary particle physics. The paper ends with an extensive review of current and future astrophysical and laboratory searches, highlighting new ideas and their experimental challenges, as well as future perspectives for the discovery of sterile neutrinos.
• ### A-site driven ferroelectricity in strained ferromagnetic La2NiMnO6 thin films(1504.04905)
April 20, 2015 cond-mat.mtrl-sci
We report on theoretical and experimental investigation of A-site driven ferroelectricity in ferromagnetic La2NiMnO6 thin films grown on SrTiO3 substrates. Structural analysis and density functional theory calculations show that epitaxial strain stretches the rhombohedral La2NiMnO6 crystal lattice along the [111]cubic direction, triggering a displacement of the A-site La ions in the double perovskite lattice. The lattice distortion and the A-site displacements stabilize a ferroelectric polar state in ferromagnetic La2NiMnO6 crystals. The ferroelectric state only appears in the rhombohedral La2NiMnO6 phase, where MnO6 and NiO6 octahedral tilting is inhibited by the 3-fold crystal symmetry. Electron localization mapping showed that covalent bonding with oxygen and 6s orbital lone pair formation are negligible in this material.
• ### Spin mixing conductance at a well-controlled platinum/yttrium iron garnet interface(1302.7091)
A platinum (Pt)/yttrium iron garnet (YIG) bilayer system with a well-controlled interface has been developed; spin mixing conductance at the Pt/YIG interface has been studied. Crystal perfection at the interface is experimentally demonstrated to contribute to large spin mixing conductance. The spin mixing conductance is obtained to be $1.3\times10^{18} \rm{m^{-2}}$ at the well-controlled Pt/YIG interface, which is close to a theoretical prediction.
• ### Pyroelectric detection of spontaneous polarization in magnetite thin films(1209.4735)
We have investigated the spontaneous polarization in Fe$_3$O$_4$ thin films by using dynamic and static pyroelectric measurements. The magnetic and dielectric behavior of Fe$_3$O$_4$ thin films grown on Nb:SrTiO$_3$(001) substrates was consistent with bulk crystals. The well-known metal-insulator (Verwey) transition was observed at 120 K. The appearance of a pyroelectric response in the Fe$_3$O$_4$ thin films just below the Verwey temperature shows that spontaneous polarization appeared in Fe$_3$O$_4$ at the charge ordering transition temperature. The polar state characteristics are consistent with bond- and site- centered charge ordering of Fe$^{2+}$ and Fe$^{3+}$ ions sharing the octahedral $B$ sites. The pyroelectric response in Fe$_3$O$_4$ thin films was dependent on the dielectric constant. Quasi-static pyroelectric measurement of Pd/Fe$_3$O$_4$/Nb:SrTiO$_3$ junctions showed that magnetite has a very large pyroelectric coefficient of 735 nC/cm$^{2}$K at 60 K.
• ### Measuring spin of a supermassive black hole at the Galactic centre -- Implications for a unique spin(0906.5423)
Jan. 23, 2010 astro-ph.CO, astro-ph.GA
We determine the spin of a supermassive black hole in the context of discseismology by comparing newly detected quasi-periodic oscillations (QPOs) of radio emission in the Galactic centre, Sagittarius A* (Sgr A*), as well as infrared and X-ray emissions with those of the Galactic black holes. We find that the spin parameters of black holes in Sgr A* and in Galactic X-ray sources have a unique value of $\approx 0.44$ which is smaller than the generally accepted value for supermassive black holes, suggesting evidence for the angular momentum extraction of black holes during the growth of supermassive black holes. Our results demonstrate that the spin parameter approaches the equilibrium value where spin-up via accretion is balanced by spin-down via the Blandford-Znajek mechanism regardless of its initial spin. We anticipate that measuring the spin of black holes by using QPOs will open a new window for exploring the evolution of black holes in the Universe.
• ### Magnetohydrodynamics of Neutrino-Cooled Accretion Tori around a Rotating Black Hole in General Relativity(0709.1766)
Sept. 12, 2007 astro-ph
We present our first numerical results of axisymmetric magnetohydrodynamic simulations for neutrino-cooled accretion tori around rotating black holes in general relativity. We consider tori of mass $\sim 0.1$--0.4$M_{\odot}$ around a black hole of mass $M=4M_{\odot}$ and spin $a=0$--$0.9M$; such systems are candidates for the central engines of gamma-ray bursts (GRBs) formed after the collapse of massive rotating stellar cores and the merger of a black hole and a neutron star. In this paper, we consider the short-term evolution of a torus for a duration of $\approx 60$ ms, focusing on short-hard GRBs. Simulations were performed with a plausible microphysical equation of state that takes into account neutronization, the nuclear statistical equilibrium of a gas of free nucleons and $\alpha$-particles, black body radiation, and a relativistic Fermi gas (neutrinos, electrons, and positrons). Neutrino-emission processes, such as $e^{\pm}$ capture onto free nucleons, $e^{\pm}$ pair annihilation, plasmon decay, and nucleon-nucleon bremsstrahlung are taken into account as cooling processes. Magnetic braking and the magnetorotational instability in the accretion tori play a role in angular momentum redistribution, which causes turbulent motion, resultant shock heating, and mass accretion onto the black hole. The mass accretion rate is found to be $\dot M_* \sim 1$--$10 M_{\odot}$/s, and the shock heating increases the temperature to $\sim 10^{11}$ K. This results in a maximum neutrino emission rate of $L_{\nu}=$ several $\times 10^{53}$ ergs/s and a conversion efficiency $L_{\nu}/\dot M_* c^2$ on the order of a few percent for tori with mass $M_{\rm t} \approx 0.1$--0.4$M_{\odot}$ and for moderately high black hole spins.
• ### The effect of supersymmetry breaking in the Mass Varying Neutrinos(hep-ph/0603204)
July 25, 2006 hep-ph
We discuss the effect of supersymmetry breaking on the Mass Varying Neutrinos (MaVaNs) scenario, in particular the effect mediated by the gravitational interaction between the hidden sector and the dark energy sector. A model including a chiral superfield in the dark sector and the right-handed neutrino superfield is proposed. Evolutions of the neutrino mass and the equation-of-state parameter are presented in the model. It is remarked that only the mass of a sterile neutrino is variable when the mixing between the left-handed and a sterile neutrino vanishes on cosmological time scales; a finite mixing makes the mass of the left-handed neutrino variable as well.
• ### Geometrical Effect of Supercritical Accretion Flows: Observational Implications of Galactic Black-Hole Candidates and Ultraluminous X-ray Sources(astro-ph/0504196)
April 8, 2005 astro-ph
We investigate the viewing-angle dependence of supercritical accretion flows and discuss the observational implications for galactic black-hole candidates and ultraluminous X-ray sources. When the mass accretion rate exceeds the critical rate, the disk becomes geometrically thick due to the enhanced radiation pressure. The model spectra of supercritical accretion flows strongly depend on the inclination angle, because the outer disk blocks the emission from the inner disk region at high inclination angles. We also find that the spectral properties of low-inclination, low-accretion-rate disks are very similar to those of high-inclination, high-accretion-rate disks. That is, if an object has a high inclination and a high accretion rate, the system suffers from self-occultation and the spectrum will be extremely soft. Therefore, we cannot discriminate these cases from spectral shapes alone. Conversely, if we use the self-occultation properties, we could constrain the inclination angle of the system. We suggest that some observed high-temperature ultraluminous X-ray sources have near face-on geometry, i < 40, and that the Galactic black-hole candidate XTE J1550-564 has a relatively high inclination angle, i > 60.
• ### Modeling gravitational radiation from coalescing binary black holes(astro-ph/0202469)
Feb. 25, 2002 astro-ph, hep-th, gr-qc
With the goal of bringing theory, particularly numerical relativity, to bear on an astrophysical problem of critical interest to gravitational wave observers we introduce a model for coalescence radiation from binary black hole systems. We build our model using the "Lazarus approach", a technique that bridges far and close limit approaches with full numerical relativity to solve Einstein equations applied in the truly nonlinear dynamical regime. We specifically study the post-orbital radiation from a system of equal-mass non-spinning black holes, deriving waveforms which indicate strongly circularly polarized radiation of roughly 3% of the system's total energy and 12% of its total angular momentum in just a few cycles. Supporting this result we first establish the reliability of the late-time part of our model, including the numerical relativity and close-limit components, with a thorough study of waveforms from a sequence of black hole configurations varying from previously treated head-on collisions to a representative target for "ISCO" data corresponding to the end of the inspiral period. We then complete our model with a simple treatment for the early part of the spacetime based on a standard family of initial data for binary black holes in circular orbit. A detailed analysis shows strong robustness in the results as the initial separation of the black holes is increased from 5.0 to 7.8M supporting our waveforms as a suitable basic description of the astrophysical radiation from this system. Finally, a simple fitting of the plunge waveforms is introduced as a first attempt to facilitate the task of analyzing data from gravitational wave detectors.
• ### Plunge waveforms from inspiralling binary black holes(gr-qc/0102037)
Nov. 18, 2001 astro-ph, hep-th, gr-qc
We study the coalescence of non-spinning binary black holes from near the innermost stable circular orbit down to the final single rotating black hole. We use a technique that combines the full numerical approach to solve Einstein equations, applied in the truly non-linear regime, and linearized perturbation theory around the final distorted single black hole at later times. We compute the plunge waveforms which present a non negligible signal lasting for $t\sim 100M$ showing early non-linear ringing, and we obtain estimates for the total gravitational energy and angular momentum radiated.
• ### Symmetry without Symmetry: Numerical Simulation of Axisymmetric Systems using Cartesian Grids(gr-qc/9908012)
Aug. 4, 1999 gr-qc
We present a new technique for the numerical simulation of axisymmetric systems. This technique avoids the coordinate singularities which often arise when cylindrical or polar-spherical coordinate finite difference grids are used, particularly in simulating tensor partial differential equations like those of 3+1 numerical relativity. For a system axisymmetric about the z axis, the basic idea is to use a 3-dimensional Cartesian (x,y,z) coordinate grid which covers (say) the y=0 plane, but is only one finite-difference-molecule--width thick in the y direction. The field variables in the central y=0 grid plane can be updated using normal (x,y,z)--coordinate finite differencing, while those in the y \neq 0 grid planes can be computed from those in the central plane by using the axisymmetry assumption and interpolation. We demonstrate the effectiveness of the approach on a set of fully nonlinear test computations in 3+1 numerical general relativity, involving both black holes and collapsing gravitational waves.
|
2020-12-05 03:58:07
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6920884251594543, "perplexity": 1481.6377145476797}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141746033.87/warc/CC-MAIN-20201205013617-20201205043617-00357.warc.gz"}
|
http://stats.stackexchange.com/questions/17501/how-do-you-calculate-variable-importance-p-values-using-the-randomforest-package
|
# How do you calculate variable importance p-values using the randomForest package in R?
For a classification project we are using the randomForest package in R, which wraps the Breiman Fortran random forest implementation, to assess the importance of each of our features. I would like to calculate p-values for each feature's importance statistics as described in the random forest documentation provided by Breiman.
... therefore we compute standard errors in the classical way, divide the raw score by its standard error to get a z-score, and assign a significance level to the z-score assuming normality.
The R randomForest package provides the mean decrease importance (MDI) metric for each of the classes and overall (both classes combined), in addition to the standard deviation of the decrease in importance for each class and overall.
I don't understand how these values can be used to obtain a significance level for the variable importance as, while the mean and standard deviation allow the construction of a normal distribution, there is no "observation" for the z-score calculation. Can someone clarify how to do this?
-
It is called z-score mainly because it is mean/sd, but in practice it is useless for hypothesis testing -- in some cases you can get most important attribute with z~$10^{-3}$ or on the other side all z-scores way larger than this mystical 3.
The problem is that this gives not enough power to do the test; the attributes are not independent, the trees may also have some correlation, $N_\text{tree}\left<\text{depth}\right>/N_\text{attr}$ is not enough, etc. In short, it does not work. – mbq Oct 26 '11 at 15:57
What about a permuted measure of variable importance? I.e., re-run RFs with permuted class labels, say 1,000 times, and get the approximate $p$-values for the original importance measure? – chl Oct 26 '11 at 18:41
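To make that permutation recipe concrete (a standard Monte Carlo construction, sketched here rather than taken from the thread): refit the forest with permuted class labels $B$ times, record the permuted importance $\mathrm{VI}^{*(b)}$ of each feature, and use $$p = \frac{1+\#\{b \le B : \mathrm{VI}^{*(b)} \ge \mathrm{VI}_{\mathrm{obs}}\}}{B+1},$$ where $\mathrm{VI}_{\mathrm{obs}}$ is the importance from the original fit.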
|
2016-04-28 21:52:08
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7149742245674133, "perplexity": 752.0687290322064}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-18/segments/1461860109830.69/warc/CC-MAIN-20160428161509-00026-ip-10-239-7-51.ec2.internal.warc.gz"}
|
https://brilliant.org/problems/a-number-theory-problem-by-sayantan-saha/
|
# A number theory problem by Sayantan Saha
Number Theory Level 2
What will be the remainder when $$5^{100}$$ is divided by 7?
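One way to see it, as a quick sketch: by Fermat's little theorem $$5^{6} \equiv 1 \pmod 7$$, and $$100 = 6\cdot 16 + 4$$, so $$5^{100} \equiv 5^{4} = 625 = 7\cdot 89 + 2 \equiv 2 \pmod 7$$; the remainder is 2.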
|
2016-10-22 23:49:00
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.27293264865875244, "perplexity": 1257.0362321999412}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988719079.39/warc/CC-MAIN-20161020183839-00405-ip-10-171-6-4.ec2.internal.warc.gz"}
|
http://www.adamponting.com/generating-functions/
|
# generating functions
The basic generating function has the form:
$G(x) = a_0 + a_1x + a_2x^2 + \cdots$, or $\sum_{n \ge 0} a_n x^n$ for short, is the generating function for the sequence $a_0, a_1, a_2, \dots$, also called $\{a_n\}$. The coefficient of $x^n$ in $G(x)$ is denoted $[x^n]G(x)$.
There are two kinds of “closed form” in generatingfunctionology.
• a closed form for the generating function $G(x)$, expressed in terms of $x$, and
• a closed form for the coefficients $a_n$, expressed in terms of $n$.
e.g. the generating function for the Fibonacci numbers has the closed form $\frac{x}{1-x-x^2}$; the Fibonacci numbers themselves have the closed form $F_n = \frac{1}{\sqrt5}\left(\left(\frac{1+\sqrt5}{2}\right)^n-\left(\frac{1-\sqrt5}{2}\right)^n\right)$.
Solving recurrences – THE METHOD
Given a sequence $\{a_n\}$ that satisfies a given recurrence, we seek a closed form for $a_n$ in terms of $n$. A solution to this problem via generating functions proceeds in four steps (a worked Fibonacci example follows the list below):
1. Write down a single equation that expresses $a_n$ in terms of other elements of the sequence. This equation should be valid for all integers $n \ge 0$, assuming that $a_{-1} = a_{-2} = \cdots = 0$.
2. Multiply both sides of the equation by $x^n$ and sum over all $n \ge 0$. This gives, on the left, the sum $\sum_{n\ge0} a_n x^n$, which is the generating function $G(x)$. The right-hand side should be manipulated so that it becomes some other expression involving $G(x)$.
3. Solve the resulting equation, getting a closed form for $G(x)$.
4. To find an exact formula, a closed form for the sequence:
• Expand $G(x)$ into a power series and read off the coefficient of $x^n$.
• If G is a rational function (quotient of two polynomials)
• expand in partial fractions and handle the terms separately.
• OR: If the roots of the denominator polynomial are all different, use the rational expansion theorem below.
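As a worked illustration of the four steps (my own sketch, using the Fibonacci numbers already mentioned above):
1. The recurrence $F_{n+2} = F_{n+1} + F_n$ holds for all $n \ge 0$, with $F_0 = 0$ and $F_1 = 1$.
2. Multiply by $x^n$ and sum over $n \ge 0$: with $f(x) = \sum_{n\ge0} F_n x^n$ this gives $\frac{f(x)-x}{x^2} = \frac{f(x)}{x} + f(x)$.
3. Solving for $f$: $f(x) = \frac{x}{1-x-x^2}$.
4. Partial fractions over the roots $\frac{1\pm\sqrt5}{2}$ of $x^2 - x - 1$ give $F_n = \frac{1}{\sqrt5}\left(\left(\frac{1+\sqrt5}{2}\right)^n - \left(\frac{1-\sqrt5}{2}\right)^n\right)$.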
The “routine stunt”
Expanding partial fractions
e.g. Expand
This will be expandable in the form .
To find , multiply both sides by and let .
To find , multiply both sides by and let .
To find , substitute in and use the values of and to find .
So
Rational expansion theorem for distinct roots
If $R(x) = \frac{P(x)}{Q(x)}$, where $Q(x) = q_0(1-\rho_1x)(1-\rho_2x)\cdots(1-\rho_\ell x)$ and the roots $\rho_1,\dots,\rho_\ell$ are distinct, and if $P(x)$ is a polynomial of degree less than $\ell$, then
$[x^n]R(x) = a_1\rho_1^n + \cdots + a_\ell\rho_\ell^n$, where $a_k = -\rho_k\,\dfrac{P(1/\rho_k)}{Q'(1/\rho_k)}$.
When $Q$ is quadratic, i.e. $Q(x) = (1-\rho_1x)(1-\rho_2x)$ with $\rho_1\neq\rho_2$, the equation becomes $[x^n]R(x) = \dfrac{\rho_1P(1/\rho_1)}{\rho_1-\rho_2}\,\rho_1^n - \dfrac{\rho_2P(1/\rho_2)}{\rho_1-\rho_2}\,\rho_2^n$.
Basic generating function manipulations
The calculus of ordinary power series generating functions
Definition. The symbol $f \leftrightarrow \{a_n\}$ means that the series $f(x)$ is the ordinary power series ('ops') generating function for the sequence $\{a_n\}$, i.e. $f(x)=\sum_{n\ge0}a_nx^n$.
Shifting the subscript by 1: the series changes by a difference quotient, since $\sum_{n\ge0}a_{n+1}x^n = \frac{f(x)-a_0}{x}$.
Therefore $\{a_{n+1}\} \leftrightarrow \frac{f(x)-a_0}{x}$.
Shift subscript by 2: iterate the difference-quotient operation, so $\{a_{n+2}\} \leftrightarrow \frac{f(x)-a_0-a_1x}{x^2}$.
e.g. the Fibonacci recurrence relation $F_{n+2}=F_{n+1}+F_n$ (with $F_0=0$, $F_1=1$) translates directly into $\frac{f(x)-x}{x^2}=\frac{f(x)}{x}+f(x)$.
Rule 1. Shifting the subscript. If $f \leftrightarrow \{a_n\}$, then, for integer $h \ge 1$, $\{a_{n+h}\} \leftrightarrow \dfrac{f(x)-a_0-a_1x-\cdots-a_{h-1}x^{h-1}}{x^h}$.
Multiplying terms by $n$ and its powers. Multiplying the $n$th member of a sequence by $n$ causes its ops generating function to be acted on by the operator $x\frac{d}{dx}$, which we will write as $xD$, i.e. $\{na_n\} \leftrightarrow (xD)f$.
What generates $\{n\}$? Since $\{1\} \leftrightarrow \frac{1}{1-x}$, we get $\{n\} \leftrightarrow (xD)\frac{1}{1-x} = \frac{x}{(1-x)^2}$. In general, $\{n^k\} \leftrightarrow (xD)^k\frac{1}{1-x}$.
What generates $\{n^2a_n\}$?
Do the same thing to $f$ that is done to $a_n$: $\{n^2a_n\} \leftrightarrow (xD)^2f$.
Rule 2. If $f \leftrightarrow \{a_n\}$ and $P$ is a polynomial, then $\{P(n)a_n\} \leftrightarrow P(xD)f$.
Examples.
Find (The terms are )
The answer is the value at of the power series (ignoring convergence issues).
Find a closed formula for the sum of the squares of the first N positive integers.
Begin with the fact that $\sum_{n=0}^{N}x^n = \frac{1-x^{N+1}}{1-x}$. To obtain the desired series, apply $(xD)^2$ to both sides, then let $x\to1$.
Rule 3. Multiplication of two gfs. If $f \leftrightarrow \{a_n\}$ and $g \leftrightarrow \{b_n\}$, then $fg \leftrightarrow \left\{\sum_{r=0}^{n}a_rb_{n-r}\right\}$.
If $f, g, h$ are three series that generate sequences $\{a_n\}, \{b_n\}$ and $\{c_n\}$, then $fgh \leftrightarrow \left\{\sum_{r+s+t=n}a_rb_sc_t\right\}$.
Rule 4. Powers of a series. If $f \leftrightarrow \{a_n\}$, then for a positive integer $k$, $f^k \leftrightarrow \left\{\sum_{n_1+n_2+\cdots+n_k=n}a_{n_1}a_{n_2}\cdots a_{n_k}\right\}$.
Example. Find $f(n,k)$, the number of ways a nonnegative integer $n$ can be written as an ordered sum of $k$ nonnegative integers.
e.g. $f(2,2)=3$, because $2 = 0+2 = 1+1 = 2+0$.
Consider the power series $\frac{1}{1-x} = 1 + x + x^2 + \cdots$, which generates the constant sequence $\{1\}$. Since $\frac{1}{(1-x)^k} = \left(\frac{1}{1-x}\right)^k$, by Rule 4 we have $\frac{1}{(1-x)^k} \leftrightarrow \{f(n,k)\}$.
Since $\frac{1}{(1-x)^k} = \sum_{n\ge0}\binom{n+k-1}{n}x^n$, $f(n,k) = \binom{n+k-1}{n}$.
If $f \leftrightarrow \{a_n\}$, what sequence does $\frac{f(x)}{1-x}$ generate?
Rule 5. Dividing by $1-x$. If $f \leftrightarrow \{a_n\}$, then $\frac{f(x)}{1-x} \leftrightarrow \left\{\sum_{j=0}^{n}a_j\right\}$.
i.e. dividing by $1-x$ replaces the generated sequence by the sequence of its partial sums.
bibliography
Concrete Mathematics – Knuth, Graham, Patashnik, 2nd Ed., Ch. 7
generatingfunctionology – Herbert Wilf – read online
http://math.stackexchange.com/questions/97939/universal-closure-of-a-formula
# Universal closure of a formula
I am confused about the following: I read yesterday that for a formula $\phi(x_1,\ldots,x_n)$ in a first order language $\mathcal{L}$ and an $\mathcal{L}$-structure $\mathcal{A}$, $\mathcal{A} \models \phi(x_1,\ldots,x_n)$ iff $\mathcal{A} \models \forall x_1 \ldots \forall x_n \phi(x_1,\ldots,x_n)$. This seems perfectly fine to me; if I understand it right, it's like saying something like $x^2 \geq 0$ is true iff $\forall x (x^2 \geq 0)$ is true.
Now consider a predicate $Q(x)$; then according to the above $\models Q(x)$ iff $\models \forall x Q(x)$. Isn't this equivalent to $\models Q(x) \leftrightarrow \forall x Q(x)$? However, $\not \models Q(x) \rightarrow \forall x Q(x)$; take for example $\mathcal{A} = (A, Q^{\mathcal{A}})$, where $A=\{a,b\}$, $Q^{\mathcal{A}}=\{a\}$ and $w: \mathrm{Var} \rightarrow A$, $w(x) = a$. What am I doing wrong?
-
## 1 Answer
I believe you are mixing two different conventions:
• Some authors include a variable assignment along with each model when defining the satisfaction relation, so that a model consists of a structure $\mathcal{A}$ and a function $w$ assigning some element of $|\mathcal{A}|$ to each variable. It is possible to have two such functions $w,w'$ such that $\mathcal{A},w \models \phi$ but $\mathcal{A},w' \not \models \phi$. Using only this definition, it is impossible to write $\mathcal{A} \models \phi$ when $\phi$ has free variables, as the definition requires $w$ to be fixed first. Enderton's book uses essentially this approach, writing $\models_\mathcal{A}\, \phi[w]$.
• Other authors go on to define $\mathcal{A} \models \phi$, when $\phi$ has free variables, to mean $\mathcal{A},w\models \phi$ for every variable assignment function $w$ from $\mathcal{A}$. Letting $\phi^u$ be the universal closure of $\phi$, these authors would indeed say that $\mathcal{A} \models \phi$ if and only if $\mathcal{A} \models \phi^u$, because this is an immediate consequence of their definitions. This approach is used by Kleene's 1967 book and I believe it is also common in universal algebra.
One additional point of confusion is that the authors who take the first approach continue by defining "$\models \phi$" to mean that $\mathcal{A},w\models \phi$ for every structure $\mathcal{A}$ in the language and every variable assignment $w$ from $\mathcal{A}$. So authors who take the first approach can still prove that $\models \phi$ if and only if $\models \phi^u$ even if $\phi$ has free variables. This is exercise 6 on page 99 of Enderton's book, for example.
In the example in the second paragraph in the question, the trouble is that under the second convention I described, $\mathcal{A} \not \models Q(x)$, although it is true that $\mathcal{A},w \models Q(x)$. In fact neither $Q(x)$ nor $(\forall x)Q(x)$ is a logically valid formula.
-
Thank you for your answer. I am following the second convention. So, under this convention, is it wrong to use $\models\phi$ (meaning that $\mathcal{A} \models \phi$ for any structure $\mathcal{A}$)? My question basically was how can $\models Q(x)$ iff $\models \forall x Q(x)$ when $\not \models Q(x) \rightarrow \forall x Q(x)$? (for any $Q$) Doesn't $\models Q(x)$ iff $\models \forall x Q(x)$ imply that $\models Q(x) \rightarrow \forall x Q(x)$? – Andrew Jan 10 '12 at 22:51
No, it doesn't. "$\models Q(x)$ implies $\models (\forall x) Q(x)$" says that if $Q(x)$ holds in absolutely every interpretation, then $(\forall x)Q(x)$ holds in absolutely every interpretation. That's a true claim. But "$\models Q(x) \to (\forall x)Q(x)$" claims that in any particular interpretation where $Q(x)$ holds, $(\forall x)Q(x)$ must also hold, even if $Q(x)$ may not hold in other interpretations. That's a false claim. – Carl Mummert Jan 10 '12 at 23:10
Dear Carl: Thank you very much for having pointed out the stupid mistake contained in my (now deleted) answer to this question. That was very kind of you! – Pierre-Yves Gaillard Jan 12 '12 at 23:19
https://testbook.com/question-answer/eight-friends-p-q-r-s-t-u-v-and-w-are-sitt--5f8a8e5705167b27277f7183
# Eight friends, P, Q, R, S, T, U, V and W, are sitting in a row facing north at equal distances from each other (not necessarily in that order). Four persons sit between Q and V, and neither of them sits at an extreme end position. Two persons sit between R and Q. P sits fourth to the right of W. Two persons sit between U and S, who is an immediate neighbour of W. Which of the following pairs denotes the persons sitting at the extreme ends?
1. T, Q
2. V, P
3. P, T
4. V, Q
## Answer (Detailed Solution Below)
Option 3 : P, T
## Detailed Solution
1) Four persons sit between Q and V, and neither of them sits at an extreme end position.
2) Two persons sit between R and Q.
3) P sits fourth to the right of W.
4) Two persons sit between U and S who is an immediate neighbour of W.
Of the two possible placements of Q and V, the second case is eliminated because the seat immediately beside W, where S must sit, is already occupied.
P and T sit at extreme ends.
Hence, option 3 is the correct answer.
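For readers who want to verify the arrangement, here is a brute-force sketch in Python. The convention that "to the right" means toward higher seat numbers is an assumption of this sketch; the mirrored convention gives the mirror-image arrangement with the same pair at the ends.

```python
from itertools import permutations

friends = "PQRSTUVW"

def valid(pos):
    return (abs(pos['Q'] - pos['V']) == 5          # four persons between Q and V
            and pos['Q'] not in (1, 8)              # neither Q nor V at an extreme end
            and pos['V'] not in (1, 8)
            and abs(pos['R'] - pos['Q']) == 3       # two persons between R and Q
            and pos['P'] - pos['W'] == 4            # P fourth to the right of W (assumed direction)
            and abs(pos['U'] - pos['S']) == 3       # two persons between U and S
            and abs(pos['S'] - pos['W']) == 1)      # S is an immediate neighbour of W

for seating in permutations(friends):
    pos = {person: i + 1 for i, person in enumerate(seating)}   # seats 1..8, left to right
    if valid(pos):
        print("".join(seating), "-> extreme ends:", seating[0], "and", seating[-1])
```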
https://quantumcomputing.stackexchange.com/tags/pauli-gates/hot
# Tag Info
10
I suggest two different ways of trying to solve this, which will give you experience of different bits of the formulation of Quantum Information Theory. I'll give examples that are closely related to the question you asked, but are not what you asked so that you still get the value of answering the question yourself. Long-hand Method Represent the kets as ...
9
One way order to perform Z rotations by arbitrary angles is to approximate them with a sequence of Hadamard and T gates. If you need the approximation to have maximum error $\epsilon$, there are known constructions that do this using roughly $3 \lg \frac{1}{\epsilon}$ T gates. See "Optimal ancilla-free Clifford+T approximation of z-rotations" by Ross et al. ...
9
$$CNOT = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 \\ \end{bmatrix}$$But what does this matrix mean? The above matrix means: on a two qubit system (such as $\left|00\right>$, $\left|10\right>$, $\left|11\right>$, etc.) if the first qubit is a one,...
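To see that action explicitly, here is a small numpy sketch applying the matrix above to the four computational basis states; the basis ordering $\left|00\right>, \left|01\right>, \left|10\right>, \left|11\right>$ with the first qubit as the control is the usual convention and is assumed here.

```python
import numpy as np

CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

# Computational basis states |00>, |01>, |10>, |11> as column vectors,
# with the first qubit as the control.
labels = ["|00>", "|01>", "|10>", "|11>"]
for i, label in enumerate(labels):
    ket = np.eye(4)[:, i]
    out = CNOT @ ket
    print(label, "->", labels[int(np.argmax(out))])   # flips the target only when the control is 1
```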
9
For any matrix $A$ we can write $$A =\sum_{i,j,k,l}h_{ijkl}\cdot \frac{1}{4}\sigma_i\otimes\sigma_j\otimes\sigma_k\otimes\sigma_l,$$ where $$h_{ijkl} = \frac{1}{4}\text{Tr}\big((\sigma_i\otimes\sigma_j\otimes\sigma_k\otimes\sigma_l)^\dagger \cdot A\big) = \frac{1}{4}\text{Tr}\big((\sigma_i\otimes\sigma_j\otimes\sigma_k\otimes\sigma_l) \cdot A\big)$$ ...
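The same expansion is easy to verify numerically. Here is a sketch in Python for the two-qubit case, where the combined normalisation is $1/2^n = 1/4$; the choice of CNOT as the test matrix is mine, purely for illustration.

```python
import numpy as np
from itertools import product

# Single-qubit Paulis, including the identity.
paulis = {
    'I': np.eye(2, dtype=complex),
    'X': np.array([[0, 1], [1, 0]], dtype=complex),
    'Y': np.array([[0, -1j], [1j, 0]], dtype=complex),
    'Z': np.array([[1, 0], [0, -1]], dtype=complex),
}

def pauli_decompose(A):
    """Coefficients c_{PQ} = Tr((P⊗Q)† A) / 4, so that A = sum_{PQ} c_{PQ} (P⊗Q)."""
    return {p + q: np.trace(np.kron(paulis[p], paulis[q]).conj().T @ A) / 4
            for p, q in product(paulis, repeat=2)}

CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

coeffs = pauli_decompose(CNOT)
reconstruction = sum(c * np.kron(paulis[s[0]], paulis[s[1]]) for s, c in coeffs.items())
assert np.allclose(reconstruction, CNOT)

# Nonzero terms: CNOT = (I⊗I + I⊗X + Z⊗I - Z⊗X) / 2
print({s: c for s, c in coeffs.items() if abs(c) > 1e-12})
```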
8
The CNOT gate is a 2-qubit gate, and consequently, its operation cannot be expressed by the tensor product of two one-qubit gates as the example you gave with the Hadamard gates. An easy way to check that such a matrix cannot be expressed as the tensor product of two other matrices is to take matrices $A =\begin{pmatrix}a & b \\ c & d\end{pmatrix}$ ...
8
Effectively, the Z operation (represented by the Pauli $Z$ matrix) applies a rotation about the $Z$-axis. As you note, rotations can also be written in the form $e^{-i Z t}$. To see that, you can use a trick pretty similar to the one used to derive Euler's identity ($e^{i \theta} = \cos(\theta) + i \sin(\theta)$) to rewrite the Taylor series that you quoted ...
7
The quantum states that differ by a global phase (i.e., by a complex number multiple which has absolute value of 1) are considered the same quantum state, since they cannot be distinguished using any operations or measurements. Thus, the eigenstates for hω are $|+\rangle = \frac{1}{\sqrt2}(|0\rangle + |1\rangle)$, $-|+\rangle = \frac{1}{\sqrt2}(-|0\rangle$ ...
7
Yes, the set of tensor products of all possible $n$ Pauli operators (including $I$) forms an orthogonal basis for the vector space of $2^n \times 2^n$ complex matrices. So see this first we notice that the space has a dimension of $4^n$ and we also have $4^n$ vectors ( the vectors are operators in this case). So we only need to show that they are linearly ...
7
First of all, note that the statement, as written, is wrong (or rather, it is correct only as long as the "$\equiv$" symbol is taken to mean "equal up to a phase"). An easy way to see it is by computing the determinant of $H=e^{i\pi H/2}$, which gives $-1=1$ (using $\det[\exp(A)]=\exp[\operatorname{Tr}(A)]$ for all $A$ and $\operatorname{Tr}(H)=0$). Now, ...
4
If you write $Z^aX^b$, there's an implicit "add a phase $i$ to make it Hermitian if necessary", although I guess there are a couple of different conventions you might use the determine the sign used. So long as you're clear about the convention it doesn't really matter because you've got the extra $\pm 1$ freedom to add in to adjust for it. As for an ...
4
The phase of -1 generated on the ancilla is just a global phase. You can shift it to any qubit without affecting the statistics of the system. Nothing to do with entanglement. Quantum mechanics is a mathematical framework that describes reality insofar that it predicts the correct observable statistics of a system. Shifting around a global phase factor from ...
Only top voted, non community-wiki answers of a minimum length are eligible
https://www.physicsforums.com/threads/help-integrating.357500/
Homework Help: Help integrating
1. Nov 23, 2009
Punkyc7
1. The problem statement, all variables and given/known data
Derive an equation for an objects velocity as a function of time
2. Relevant equations
i have that a=-(kvo/m)
3. The attempt at a solution
so i get dv/v=-(k/m)dt then i get
1/v= -kt/m +C and then im stuck
2. Nov 23, 2009
XanziBar
It's been awhile, but I think the integral of dv/v is the natural log of v. Then you can probably exponentiate both sides
3. Nov 23, 2009
Punkyc7
so
lnv= ln(-kt/m +C)
then
e^lnv=e^ln(-kt/m +C)
so is this right
v=e^ln(-kt/m +C)
4. Nov 23, 2009
XanziBar
you only have the 1/v on the left side:
lnv= (-kt/m +C)
e^lnv=e^(-kt/m +C)
so is this right
v=e^(-kt/m +C)
yeah, that seems about right. I'm a little worried about the units, maybe you can do something with that k or C...
5. Nov 24, 2009
rl.bhat
a=-(kvo/m)
The above relation represent rocket equation where vo represents the velocity of escaping of gas which is constant an k represents dm/dt, the mass of fuel ejected per unit time. It is also constant. Here mass of the fuel is varying with time.
So you can find the velocity of the object with respect to mass rather than the time.
a = dv/dt = - (dm/dt)vo/m
dv = -vo(dm/m). To find the velocity take the integration between the limits m = Mo to m = M.
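For reference, the exponential answer worked out above can be checked symbolically. A sketch with sympy, reading the relation as $\frac{dv}{dt}=-\frac{k}{m}v$ with constant $k$ and $m$ and initial velocity $v_0$; the variable-mass rocket reading in the last reply is a different problem.

```python
from sympy import Function, Eq, dsolve, symbols

t, k, m, v0 = symbols('t k m v0', positive=True)
v = Function('v')

# dv/dt = -(k/m) v with v(0) = v0
sol = dsolve(Eq(v(t).diff(t), -(k / m) * v(t)), v(t), ics={v(0): v0})
print(sol)   # Eq(v(t), v0*exp(-k*t/m))
```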
http://shout.education/ChemKey/physical/redoxeqia/predict.html
# Making Predictions Using Redox Potentials (Electrode Potentials)
This page explains how to use redox potentials (electrode potentials) to predict the feasibility of redox reactions. It also looks at how you go about choosing a suitable oxidising agent or reducing agent for a particular reaction.
Important: This is the final page in a sequence of five pages about redox potentials. You will find it much easier to understand if you start from the beginning. Links at the bottom of each page will bring you back here again.
Don't try to short-cut this. Redox potentials are absolutely simple to work with if you understand what they are about. If you don't, the whole topic can be a complete nightmare!
## Predicting the Feasibility of a Possible Redox Reaction
### A Reminder of What You Need to Know
Standard electrode potentials (redox potentials) are one way of measuring how easily a substance loses electrons. In particular, they give a measure of relative positions of equilibrium in reactions such as:
\begin{matrix} \text{Zn}^{2+}_{(aq)} + 2\text{e}^- \xrightleftharpoons{} \text{Zn}_{(s)} & E^o = {-}0.76 \text{ V} \\ \\ \text{Cu}^{2+}_{(aq)} + 2\text{e}^- \xrightleftharpoons{} \text{Cu}_{(s)} & E^o = {+}0.34 \text{ V} \end{matrix}
The more negative the E° value, the further the position of equilibrium lies to the left. Remember that this is always relative to the hydrogen equilibrium – and not in absolute terms.
The negative sign of the zinc E° value shows that it releases electrons more readily than hydrogen does. The positive sign of the copper E° value shows that it releases electrons less readily than hydrogen.
Whenever you link two of these equilibria together (either via a bit of wire, or by allowing one of the substances to give electrons directly to another one in a test tube) electrons flow from one equilibrium to the other. That upsets the equilibria, and Le Chatelier's Principle applies. The positions of equilibrium move – and keep on moving if the electrons continue to be transferred.
The two equilibria essentially turn into two one-way reactions:
• The equilibrium with the more negative (or less positive) E° value will move to the left.
• The equilibrium with the more positive (or less negative) E° value will move to the right.
Note: If you aren't confident about this, please go back and start from the beginning of this sequence of pages. All these ideas are explored in a gentle and logical way. Links at the bottom of each page will bring you back here again.
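To make the comparison mechanical, here is a minimal sketch in Python that applies these two rules to the couples used on this page. The dictionary and the helper function are my own illustration, not part of the original page.

```python
# Standard electrode potentials, in volts, for couples discussed on this page.
E_standard = {
    "Mg2+/Mg": -2.37,
    "Zn2+/Zn": -0.76,
    "2H+/H2": 0.00,
    "Cu2+/Cu": +0.34,
    "NO3-,H+/NO": +0.96,
    "Cr2O7^2-,H+/Cr3+": +1.33,
    "Cl2/Cl-": +1.36,
    "MnO4-,H+/Mn2+": +1.51,
}

def feasible(couple_being_reduced, couple_being_oxidised):
    """Feasible under standard conditions if the couple that must move right
    (the one being reduced) has the more positive E° of the two."""
    return E_standard[couple_being_reduced] > E_standard[couple_being_oxidised]

# Magnesium with dilute acid: H+ is reduced, Mg is oxidised.
print(feasible("2H+/H2", "Mg2+/Mg"))   # True  - reaction is feasible
# Copper with dilute acid: H+ would have to be reduced by copper.
print(feasible("2H+/H2", "Cu2+/Cu"))   # False - no reaction
```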
### Will Magnesium React With Dilute Sulfuric acid?
Of course it does! I'm choosing this as an introductory example because everybody will know the right answer before we start. We have also explored this from a slightly different point of view on the previous page in this sequence.
The E° values are:
\begin{matrix} \text{Mg}^{2+}_{(aq)} + 2\text{e}^- \xrightleftharpoons{} \text{Mg}_{(s)} & E^o = {-}2.37 \text{ V} \\ \\ 2\text{H}^+_{(aq)} + 2\text{e}^- \xrightleftharpoons{} \text{H}_{2(g)} & E^o = 0 \text{ V} \\ \end{matrix}
You are starting with magnesium metal and hydrogen ions in the acid. The sulfate ions are spectator ions and play no part in the reaction.
Think of it like this. There is no need to write anything down unless you want to. With a small amount of practice, all you need to do is just look at the numbers.
Is there anything to stop the sort of movements we have suggested? No! The magnesium can freely turn into magnesium ions and give electrons to the hydrogen ions producing hydrogen gas. The reaction is feasible.
Now for a reaction which turns out not to be feasible
### Will Copper React With Dilute Sulfuric acid?
You know that the answer is that it won't. How do the E° values predict this?
\begin{matrix} \text{Cu}^{2+}_{(aq)} + 2\text{e}^- \xrightleftharpoons{} \text{Cu}_{(s)} & E^o = {+}0.34 \text{ V} \\ \\ 2\text{H}^+_{(aq)} + 2\text{e}^- \xrightleftharpoons{} \text{H}_{2(g)} & E^o = 0 \text{ V} \\ \end{matrix}
Doing the same sort of thinking as before:
The E° values tell us the way the two equilibria will tend to move: the copper one to the right, the hydrogen one to the left. Is this possible? No!
If we start from copper metal, the copper equilibrium is already completely to the right. If it were to react at all, the equilibrium will have to move to the left – directly opposite to what the E° values are saying.
Similarly, if we start from hydrogen ions (from the dilute acid), the hydrogen equilibrium is already as far to the left as possible. For it to react, it would have to move to the right – against what the E° values demand.
There is no possibility of a reaction.
In the next couple of examples, decide for yourself whether or not the reaction is feasible before you read the text.
### Will Oxygen Oxidise Iron(II) hydroxide to Iron(III) hydroxide Under Alkaline Conditions?
The E° values are:
\begin{matrix} \text{Fe}(\text{OH})_{3(s)} + \text{e}^- \xrightleftharpoons{} \text{Fe}(\text{OH})_{2(s)} + {}^-\text{OH}_{(aq)} & E^o = {-}0.56 \text{ V} \\ \\ \text{O}_{2(g)} + 2\text{H}_2\text{O}_{(l)} + 4\text{e}^- \xrightleftharpoons{} 4\text{OH}^-_{(aq)} & E^o = {+}0.40 \text{ V} \\ \end{matrix}
Think about this before you read on. Remember that the equilibrium with the more negative E° value will tend to move to the left. The other one tends to move to the right. Is that possible?
Yes, it is possible. Given what we are starting with, both of these equilibria can move in the directions required by the E° values. The reaction is feasible.
### Will Chlorine Oxidise Manganese(II) ions to Manganate(VII) ions?
The E° values are:
\begin{matrix} {\text{MnO}_4}^-_{(aq)} + 8\text{H}^+_{(aq)} + 5\text{e}^- \xrightleftharpoons{} \text{Mn}^{2+}_{(aq)} + 4\text{H}_2\text{O}_{(l)} & E^o = {+}1.51 \text{ V} \\ \\ \text{Cl}_{2(g)} + 2\text{e}^- \xrightleftharpoons{} 2\text{Cl}^-_{(aq)} & E^o = {+}1.36 \text{ V} \\ \end{matrix}
Given what you are starting from, these equilibrium shifts are impossible. The manganese equilibrium has the more positive E° value and so will tend to move to the right. However, because we are starting from manganese(II) ions, it is already as far to the right as possible. In order to get any reaction, the equilibrium would have to move to the left. That is against what the E° values are saying.
This reaction isn't feasible.
### Will Dilute Nitric acid React With Copper?
This is going to be more complicated because there are two different ways in which dilute nitric acid might possibly react with copper. The copper might react with the hydrogen ions or with the nitrate ions. Nitric acid reactions are always more complex than the simpler acids like sulfuric or hydrochloric acid because of this problem.
Here are the E° values:
\begin{matrix} \text{Cu}^{2+}_{(aq)} + 2\text{e}^- \xrightleftharpoons{} \text{Cu}_{(s)} & E^o = {+}0.34 \text{ V} \\ \\ 2\text{H}^+_{(aq)} + 2\text{e}^- \xrightleftharpoons{} \text{H}_{2(g)} & E^o = 0 \text{ V} \\ \\ {\text{NO}_3}^-_{(aq)} + 4\text{H}^+_{(aq)} + 3\text{e}^- \xrightleftharpoons{} \text{NO}_{(g)} + 2\text{H}_2\text{O}_{(l)} & E^o = {+}0.96 \text{ V} \end{matrix}
We have already discussed the possibility of copper reacting with hydrogen ions further up this page. Go back and look at it again if you need to, but the argument (briefly) goes like this:
The copper equilibrium has a more positive E° value than the hydrogen one. That means that the copper equilibium will tend to move to the right and the hydrogen one to the left.
However, if we start from copper and hydrogen ions, the equilibria are already as far that way as possible. Any reaction would need them to move in the opposite direction to what the E° values want. The reaction isn't feasible.
##### What about a reaction between the copper and the nitrate ions?
This is feasible. The nitrate ion equilibrium has the more positive E° value and will move to the right. The copper E° value is less positive. That equilibrium will move to the left. The movements that the E° values suggest are possible, and so a reaction is feasible.
Copper(II) ions are produced together with nitrogen monoxide gas.
Warning! There are all sorts of ways of dealing with this feasibility question – all of them, I believe, more complicated than this. Some methods actually force you to do some (simple) calculations. Unfortunately, some examiners ask questions which make you use their own inefficient methods of working out whether a reaction is feasible, and you need to know if this is going to be a problem.
UK A-level students should refer to their syllabuses and past papers. Follow this link if you haven't got the necessary information.
You will find the calculation approach covered in my chemistry calculations book, together with more examples using the method developed on these pages. However, I find the calculation approach so pointless that I refuse to include it on this site! Why do a calculation if you can just look at two numbers and decide in seconds whether or not a reaction is feasible?
### Two Examples Where the E° Values Seem to Give the Wrong Answer
It sometimes happens that E° values suggest that a reaction ought to happen, but it doesn't. Occasionally, a reaction happens although the E° values seem to be the wrong way around. These next two examples explain how that can happen. By coincidence, both involve the dichromate(VI) ion in potassium dichromate(VI).
#### Will acidified potassium dichromate(VI) oxidise water?
The E° values are:
\begin{matrix} {\text{Cr}_2\text{O}_7}^{2-}_{(aq)} + 14\text{H}^+_{(aq)} + 6\text{e}^- \xrightleftharpoons{} 2\text{Cr}^{3+}_{(aq)} + 7\text{H}_2\text{O}_{(l)} & E^o = {+}1.33 \text{ V} \\ \\ \text{O}_{2(g)} + 4\text{H}^+_{(aq)} + 4\text{e}^- \xrightleftharpoons{} 2\text{H}_2\text{O}_{(l)} & E^o = {+}1.23 \text{ V} \end{matrix}
The relative sizes of the E° values show that the reaction is feasible: the dichromate(VI) equilibrium has the more positive value and so should move to the right, while the oxygen one should move to the left, oxidising the water.
However, in the test tube nothing happens however long you wait. An acidified solution of potassium dichromate(VI) doesn't react with the water that it is dissolved in. So what is wrong with the argument?
In fact, there is nothing wrong with the argument. The problem is that all the E° values show is that a reaction is possible. They don't tell you that it will actually happen. There may be very large activation barriers to the reaction which prevent it from taking place.
Always treat what E° values tell you with some caution. All they tell you is whether a reaction is feasible – they tell you nothing about how fast the reaction will happen.
#### Will acidified potassium dichromate(VI) oxidise chloride ions to chlorine?
The E° values are:
\begin{matrix} {\text{Cr}_2\text{O}_7}^{2-}_{(aq)} + 14\text{H}^+_{(aq)} + 6\text{e}^- \xrightleftharpoons{} 2\text{Cr}^{3+}_{(aq)} + 7\text{H}_2\text{O}_{(l)} & E^o = {+}1.33 \text{ V} \\ \\ \text{Cl}_{2(g)} + 2\text{e}^- \xrightleftharpoons{} 2\text{Cl}^-_{(aq)} & E^o = {+}1.36 \text{ V} \end{matrix}
Because the chlorine E° value is slightly greater than the dichromate(VI) one, there shouldn't be any reaction. For a reaction to occur, the equilibria would have to move in the wrong directions.
Unfortunately, in the test tube, potassium dichromate(VI) solution does oxidise concentrated hydrochloric acid to chlorine. The hydrochloric acid serves as the source of the hydrogen ions in the dichromate(VI) equilibrium and of the chloride ions.
The problem here is that E° values only apply under standard conditions. If you change the conditions you will change the position of an equilibrium – and that will change its E value. (Notice that you can't call it an E° value any more, because the conditions are no longer standard.)
The standard condition for concentration is 1 mol dm$^{-3}$. But concentrated hydrochloric acid is approximately 10 mol dm$^{-3}$. The concentrations of the hydrogen ions and chloride ions are far in excess of standard.
What effect does that have on the two positions of equilibrium?
Because the E° values are so similar, you don't have to change them very much to make the dichromate(VI) one the more positive. As soon as that happens, it will react with the chloride ions to produce chlorine.
In most cases, there is enough difference between E° values that you can ignore the fact that you aren't doing a reaction under strictly standard conditions. But sometimes it does make a difference. Be careful!
## Selecting an Oxidising or Reducing Agent for a Reaction
There is nothing remotely new in this. It is just a slight variation on what we've just been looking at.
### Choosing an Oxidising Agent
Remember: OIL RIG
Oxidation is Loss of electrons; Reduction is Gaining of electrons
• An oxidising agent oxidises something by removing electrons from it. That means that the oxidising agent gains electrons.
It is easier to explain this with a specific example. What could you use to oxidise iron(II) ions to iron(III) ions? The E° value for this reaction is:
\begin{matrix} \text{Fe}^{3+}_{(aq)} + \text{e}^- \xrightleftharpoons{} \text{Fe}^{2+}_{(aq)} & E^o = {+}0.77 \text{ V} \end{matrix}
To change iron(II) ions into iron(III) ions, you need to persuade this equilibrium to move to the left. That means that when you couple it to a second equilibrium, this iron E° value must be the more negative (less positive one).
You could use anything which has a more positive E° value. For example, you could use any of the these which we have already looked at over the last page or two:
##### Dilute nitric acid:
\begin{matrix} {\text{NO}_3}^-_{(aq)} + 4\text{H}^+_{(aq)} + 3\text{e}^- \xrightleftharpoons{} \text{NO}_{(g)} + 2\text{H}_2\text{O}_{(l)} & E^o = {+}0.96 \text{ V} \end{matrix}
##### Acidified potassium dichromate(VI):
\begin{matrix} {\text{Cr}_2\text{O}_7}^{2-}_{(aq)} + 14\text{H}^+_{(aq)} + 6\text{e}^- \xrightleftharpoons{} 2\text{Cr}^{3+}_{(aq)} + 7\text{H}_2\text{O}_{(l)} & E^o = {+}1.33 \text{ V} \end{matrix}
##### Chlorine:
\begin{matrix} \text{Cl}_{2(g)} + 2\text{e}^- \xrightleftharpoons{} 2\text{Cl}^-_{(aq)} & E^o = {+}1.36 \text{ V} \end{matrix}
##### Acidified potassium manganate(VII):
\begin{matrix} {\text{MnO}_4}^-_{(aq)} + 8\text{H}^+_{(aq)} + 5\text{e}^- \xrightleftharpoons{} \text{Mn}^{2+}_{(aq)} + 4\text{H}_2\text{O}_{(l)} & E^o = {+}1.51 \text{ V} \end{matrix}
### Choosing a Reducing Agent
Remember: OIL RIG
Oxidation is Loss of electrons; Reduction is Gaining of electrons
• A reducing agent reduces something by giving electrons to it. That means that the reducing agent loses electrons.
You have to be a little bit more careful this time, because the substance losing electrons is found on the right-hand side of one of these redox equilibria. Again, a specific example makes it clearer.
For example, what could you use to reduce chromium(III) ions to chromium(II) ions? The E° value is:
\begin{matrix} \text{Cr}^{3+}_{(aq)} + \text{e}^- \xrightleftharpoons{} \text{Cr}^{2+}_{(aq)} & E^o = {-}0.41 \text{ V} \end{matrix}
You need this equilibrium to move to the right. That means that when you couple it with a second equilibrium, this chromium E° value must be the most positive (least negative).
In principle, you could choose anything with a more negative E° value – for example, zinc:
\begin{matrix} \text{Zn}^{2+}_{(aq)} + 2e^- \xrightleftharpoons{} \text{Zn}_{(s)} & E^o = {-}0.76 \text{ V} \end{matrix}
You would have to remember to start from metallic zinc, and not zinc ions. You need this second equilibrium to be able to move to the left to provide the electrons. If you started with zinc ions, it would already be on the left – and would have no electrons to give away. Nothing could possibly happen if you mixed chromium(III) ions and zinc ions.
That is fairly obvious in this simple case. If you were dealing with a more complicated equilibrium, you would have to be careful to think it through properly.
http://www.dolmen-ling.org/intro/install.html
# Installation
## Windows
On Windows, Dolmen is provided as a self-contained installer file. Simply double-click on ‘dolmen_setup.exe’ and follow the instructions.
The procedure will install Dolmen in your Program Files directory and will create a shortcut in the start menu (and optionally on the desktop).
If you wish to be able to open files in Praat from Dolmen, you will need to install Praat in Dolmen’s installation directory, which should be either C:\Program Files (x86)\Dolmen2\Tools or C:\Program Files\Dolmen2\Tools, depending on your system. Alternatively, you can modify Praat’s default path with the preference editor.
## Mac OS
On Mac OS, Dolmen is provided as a standard DMG image disk. Mount the image by double-clicking on it and drag the application Dolmen into your Applications folder. If you want Dolmen to be able to interact with Praat, you will need to install it in the Applications folder too.
Currently, only Mac OS 10.7 (Snow Leopard) and later are “officially” supported. It does not work on earlier versions.
## Linux (Debian/Ubuntu)
The official executable that is provided on the website is built on Debian 9 and is available for 64-bit architectures.
Since the program is available as a dynamically-linked executable, first make sure that the needed dependencies are installed (asound, libsndfile, speexdsp, Qt 5 and GTK 2). Most of these packages should already be installed, but you can issue the following command in a terminal to make sure they are:
sudo apt-get install libasound2 libsndfile1 libspeexdsp1 libgtk2.0-0 libqwt-qt5-6 libqt5sql5-sqlite
Next, assuming that you downloaded the archive in your Downloads directory, type the following commands in a terminal (replacing XX by the appropriate version number):
cd opt
sudo ln -s /opt/dolmen/bin/dolmen /usr/local/bin/
You can now run Dolmen by simply typing dolmen & from a terminal window.
If you get an error about a missing SQL plugin, try to add the following line to your .bashrc configuration file:
export LD_LIBRARY_PATH=\$LD_LIBRARY_PATH:/usr/lib/x86_64-linux-gnu/qt5/plugins/sqldrivers
## Compiling from source
You need to install the development packages for Qt 5.3 or greater (including the sqlite plugin), GTK 2, ALSA (libasound2), libspeexdsp and libsndfile. You also need to manually build Qwt 6.1.0 (or later). Then, assuming that you have downloaded the source for version 2.0 in your Downloads directory, you can compile it by typing the following commands in the terminal:
unzip dolmen-2.0.zip
cd dolmen
qmake dolmen.pro; make
This will create an executable file called dolmen that you can put anywhere. To put it in /usr/local/bin, do:
sudo mv dolmen /usr/local/bin/
This assumes that sudo is installed and properly configured on your system. You can then run Dolmen by simply typing dolmen in the terminal.
In order to be able to read the documentation, you will also need to put the html directory somewhere on your disk, and adjust the resources path. To do this, go to Edit > Preferences... and in the General tab, adjust the path for the Resources folder to match your installation.
## Known issues
On Mac OS, clicking on the sound scrollbar buttons after an item is selected in a tier results in the scrollbar moving until an edge is reached.
https://cran.rstudio.com/web/packages/rangemap/vignettes/rangemap_short_tutorial_I.html
# rangemap short tutorial I
## Package description
The rangemap R package presents various tools to create species range maps based on occurrence data, statistics, and SpatialPolygons objects. Other tools of this package can be used to analyze environmental characteristics of the species ranges and to create high quality figures of these maps.
## Functions in this package
The main functions of this package are:
• rangemap_explore, generates simple figures to visualize species occurrence data in the geographic space before using other functions of this package.
• rangemap_boundaries, generates a distributional range for a given species by considering all the polygons of administrative entities in which the species has been detected.
• rangemap_buffer, generates a distributional range for a given species by buffering provided occurrences using a defined distance.
• rangemap_enm, generates a distributional range for a given species using a continuous raster layer produced with an ecological niche modeling algorithm.
• rangemap_hull, generates a distributional range for a given species by creating convex or concave hull polygons based on occurrence data.
• rangemap_tsa, generates a distributional range for a given species using a trend surface analysis.
• rangemap_plot, generates customizable figures of species range maps using objects produced by other functions of this package.
• ranges_emaps, represents one or more ranges of the same species on various maps of environmental factors (e.g., climatic variables) to detect implications of using one or other type of range regarding the environmental conditions in the areas.
• ranges_espace, generates a three dimensional comparison of a species’ ranges created using distinct algorithms, to visualize implications of selecting one of them if environmental conditions are considered.
All the functions that create species ranges also generate an approach to the species extent of occurrence (using convex hulls) and the area of occupancy according to the IUCN criteria. Shapefiles of the resultant polygons can be saved in the working directory if it is needed.
## Data in this package
The data included in this package are:
• country_codes, a dataset containing codes for identifying countries according to ISO norms.
• adm_area_names, a dataset containing names of all the available administrative areas from the GADM data base. Names describe distinct administrative areas in five levels.
• adm_boundaries, a SpatialPolygonsDataFrame of 9 countries from South America.
• buffer_range, a sp_range* object based on buffers. This contains the results of the function rangemap_buffer.
• cvehull_range, a sp_range* object based on concave hulls. This contains the results of the function rangemap_hull.
• cxhull_range, a sp_range* object based on convex hulls. This contains the results of the function rangemap_hull.
• spdf_range, a SpatialPolygonsDataFrame representing the distribution of a species from North America.
• Examples of occurrences data:
• occ_f, a dataset containing geographic coordinates of the Giant Cuban Toad.
• occ_d, a dataset containing geographic coordinates of a South American armadillo.
• occ_p, a dataset containing geographic coordinates of a Caribbean toad.
• occ_train, a dataset containing geographic coordinates of a North American tick.
## A small example
### Reading and exploring the species occurrence data
Let's read the species records and check how they are geographically distributed using the rangemap_explore function.
# Getting the data
data("occ_f", package = "rangemap")
# checking which countries may be involved in the analysis
par(mar = rep(0, 4)) # optional, reduces the margins of the figure
rangemap_explore(occurrences = occ_f)
rangemap_explore(occurrences = occ_f, show_countries = TRUE)
### Species range based on administrative areas
Let’s check the rangemap_boundaries function’s help to be aware of all the parameters.
help(rangemap_boundaries)
Defining parameters and reading the data
# Getting the data
data("occ_d", package = "rangemap")
# preparing arguments
level <- 0 # Level of detail for administrative areas
adm <- "Ecuador" # Although no record falls within this country, we know the species occurs in Ecuador
countries <- c("PER", "BRA", "COL", "VEN", "ECU", "GUF", "GUY", "SUR", "BOL") # ISO names of countries involved in the analysis
Now we can create the species range based on administrative areas
b_range <- rangemap_boundaries(occurrences = occ_d, adm_areas = adm,
country_code = countries, boundary_level = level)
If you want to save the results of this analysis as shapefiles try using the parameters save_shp and name.
save <- TRUE # to save the results
name <- "test" # name of the results
b_range <- rangemap_boundaries(occurrences = occ_d, adm_areas = adm,
country_code = countries, boundary_level = level,
save_shp = save, name = name)
The function rangemap_plot will allow you to produce a nice figure of your results.
Check the function’s help to be aware of all the parameters.
help(rangemap_plot)
Now the figures. One with the species range only.
# arguments for the species range figure
extent <- TRUE
occ <- TRUE
legend <- TRUE
# creating the species range figure
par(mar = rep(0, 4))
legend = legend)
The other one with the potential extent of occurrence, the species occurrences and other map details. But let’s first define the characteristics we want in the figure.
extent <- TRUE # adds the extent of occurrence of the species to the figure
occ <- TRUE # adds the occurrence records of the species to the figure
legend <- TRUE # adds a legend to the figure
leg_pos <- "topright" # position of the legend in the figure
north <- TRUE # adds a north arrow to the figure
n_pos <- "bottomleft" # position of the north arrow
par(mar = rep(0, 4), cex = 0.8)
northarrow = north, northarrow_position = n_pos)
https://golem.ph.utexas.edu/category/2007/04/why_mathematics_is_boring.html
## April 6, 2007
### Why Mathematics Is Boring
#### Posted by John Baez
Apostolos Doxiadis is mainly known for his novel Uncle Petros and Goldbach’s Conjecture. A few years ago, he helped set up an organization called Thales and Friends, whose goal is to:
• Investigate the complex relationships between mathematics and human culture.
• Explore new ways of talking about mathematics inside the mathematical and scientific communities.
• Create new methods for communicating mathematics to the culture at large, including education.
We’re having a meeting this summer:
I’m going to speak on ‘Why mathematics is boring’. Take a look at my abstract! You may have ideas of your own on this subject. If so, I’d be glad to hear them, because it’s a big problem and too little has been written about it — much less done about it.
The participants at this meeting will be Amir Alexander, John Baez, David Corfield, Persi Diaconis, Apostolos Doxiadis, Peter Galison, Tim Gowers, Michael Harris, David Herman, Federica La Nave, G.E.R. Lloyd, Uri Margolin, Barry Mazur, Colin McLarty, Jan Christoph Meister, Christos Papadimitriou, Arkady Plotnitsky and Bernard Teissier. Here’s the abstract of my talk:
#### Why Mathematics Is Boring
Storytellers have many strategies for luring in their audience and keeping them interested. These include standardized narrative structures, vivid characters, breaking down long stories into episodes, and subtle methods of reminding the readers of facts they may have forgotten. The typical style of writing mathematics systematically avoids these strategies, since the explicit goal is ‘proving a fact’ rather than ‘telling a story’. Readers are left to provide their own narrative framework, which they do privately, in conversations, or in colloquium talks. As a result, even expert mathematicians find papers — especially those outside their own field — boring and difficult to understand. This impedes the development of mathematics. In my attempts at mathematics exposition I have tried to tackle this problem by using some strategies from storytelling, which I illustrate here.
You can read David Corfield’s account of the 2005 meeting of Thales and Friends here. Also check out my next post on Mathematics: the Dark Side.
Posted at April 6, 2007 8:23 PM UTC
### Re: Why Mathematics Is Boring
‘Modern mathematics’ is indeed boring and devoid of meaning in most papers. That’s why I stick to reading the works of the greats like von Neumann, Wiener, Einstein, and hosts of others. What made them great? They *explain* things, and then go on to carry out tremendeously complicated calculations. I think the advent of calculators and numerical methods has hurt the advance and understanding of math to a large degree. However, on the flip side, new symbolic methods allow computations which are a great help and lead to new relations not previously seen, so maybe there is hope yet.
Posted by: Stephen Crowley on April 6, 2007 10:44 PM | Permalink | Reply to this
### Re: Why Mathematics Is Boring
Stephen Crowley wrote:
‘Modern mathematics’ is indeed boring and devoid of meaning in most papers.
As Theodore Sturgeon pointed out back in 1953:
90% of everything is crud.
But, apart from that fact, I wouldn’t say modern mathematics is particularly boring. In fact, I think it’s the second most exciting thing in the universe! The first is, of course, the mathematics of the future.
The problem is just that most writers of mathematics succeed, against all odds, at making the subject seem boring. They’ve developed a lot of methods for doing this. One is to make the results hard to understand. Another is to not provide enough context for people to see why the results are interesting. A third is to write in a style that has all the drama and flair of overcooked porridge.
There are lots of other subsidiary methods… for examples of these see 90% of the papers here.
Posted by: John Baez on April 10, 2007 8:46 PM | Permalink | Reply to this
### Re: Why Mathematics Is Boring
I haven’t really thought about it, but I think this is on some level why I tend to write the way I do. My papers tend to read very conversationally, and very often I reverse theorem and proof.
It’s like, we’re just walking along seeing what there is along the road and – hey! – we just proved a theorem!
I also try to write very self-contained papers, not assuming that everyone in the audience is an expert and knows all the folk theorems around, but I think that’s a different effect.
Posted by: John Armstrong on April 6, 2007 11:35 PM | Permalink | Reply to this
### Re: Why Mathematics Is Boring
I have a tendency to write that way, and then try to force myself to make the paper “more boring” by organizing it in the usual way. I think part of the issue is that for utilitarian reasons, people often don’t want papers which demand you follow the whole story from beginning to end. Narrative style is supposed to draw you in and form an integrated whole. We often want papers, instead, to allow themselves to be read superficially by people who each want to find a different nugget and ignore the rest.
On the other hand, a lot of material seems to be written in the same “don’t get sucked in” style, even when it’s not what’s called for. So it would be nice, as David said, to “explore new ways of talking”.
Posted by: Jeffrey Morton on April 7, 2007 3:37 AM | Permalink | Reply to this
### Re: Why Mathematics Is Boring
Er - I misattributed the quote from the group’s mission statement to David Corfield’s description, there.
Posted by: Jeffrey Morton on April 7, 2007 3:43 AM | Permalink | Reply to this
### Re: Why Mathematics Is Boring
It’s like, we’re just walking along seeing what there is along the road and – hey! – we just proved a theorem!
We did? Um, where? I’m sure the author knows what part of the foregoing was involved in proving the theorem he is now claiming, but I don’t.
I really detest reading papers/books written in this seeming stream-of-consciousness fashion. It’s something I attribute more to physicists than to mathematicians, and I see less and less of it as the younger generation of physicists is more familiar with mathematical signpost-age. Hallelujah!
I would claim that writing in this style is counter to your desire to “not assum[e] that everyone in the audience is an expert and knows all the folk theorems around”. At least, my experience in reading these physics papers is that half the claims (each of which may not be stated explicitly as a claim, so much as just another equation dumped onto the page) are supposed to be well-known to the reader, and half are somehow consequences of the foregoing. As a non-expert, I was always at a loss to tell which was which.
Posted by: Allen Knutson on April 7, 2007 4:30 AM | Permalink | Reply to this
### Re: Why Mathematics Is Boring
I agree 100% with Allen’s comment.
The dry theorem/proof style evolved for good reasons. I have often found it difficult reading old ( > 80 years) papers for the same reason that it is often difficult and frustrating to read papers on hep-th. One is left wondering what the hypotheses are, what the conclusion is, where the proof begins and ends, or even if there is a proof.
One can often get the story by attending conferences, talking to colleagues, going to lectures, reading survey articles, and (now) reading some blogs. But the rock on which all else rests is essential. That rock is the published paper written in the modern style.
There is room in a published paper to tell part of the story and one should. It is a pleasure to read the many authors who do that well. Serre and Atiyah are two better known examples, but there are many other less well known mathematicians whose papers are a pleasure to read whether one is an insider or an outsider.
I like using Bourbaki, EGA and SGA. It is all there in detail with precision. Mumford’s “Red Book”, Fulton and Harris on Representation Theory, or Eisenbud and Harris on Schemes, are marvellous books in which precision and story are woven together. I also like Dieudonne’s two short books, one on algebraic geometry and the other on its history where one has both precision and story. Connes book on noncommutative geometry is full of the poetry of mathematics but also full of precision.
We need both the boring and the story. Each has an important role to play, but the two should be distinguishable. In the end I think it comes down to the question of whether the author has the right qualities for the forum in which he or she is operating. Some write well, some speak well, some can write a good survey article, some have a light touch, some do not, some are clear, some are not.
Posted by: Paul Smith on April 7, 2007 6:42 AM | Permalink | Reply to this
### Re: Why Mathematics Is Boring
In the end I think it comes down to the question of whether the author has the right qualities for the forum in which he or she is operating. Some write well, some speak well, some can write a good survey article, some have a light touch, some do not, some are clear, some are not.
I take it that this is not meant to exclude the possibility that some can be taught to write better, speak better, have a lighter touch, be clearer, etc. Nor that some could be made more aware that it would be a good thing if they could be taught thus. Nor that some could profitably articulate more clearly why it would be a good thing if more were taught thus.
Posted by: David Corfield on April 7, 2007 10:35 AM | Permalink | Reply to this
### Re: Why Mathematics Is Boring
Certainly, it is possible to improve one’s expository skills over time. It is good that the issue of good exposition is discussed. Halmos’s book “How to write mathematics well” should be read by all math PhD students—even better they should buy a copy and return to it periodically over the course of their careers. Serre’s talk “How to write badly” is another source of good advice.
Posted by: Paul Smith on April 7, 2007 2:05 PM | Permalink | Reply to this
### Re: Why Mathematics Is Boring
Some have tried to pass this message across. For instance this Halmos book has inspired this useful short text by Audin for french PhD students. But one should not do too much: Serre’s talk does sound overly patronizing IMHO. For a start, he doesn’t even use LaTeX-beamer slides ;-)
Posted by: thomas1111 on April 8, 2007 11:03 AM | Permalink | Reply to this
### Re: Why Mathematics Is Boring
I may not have explained myself correctly. To be sure, when I have come upon a theorem I do stop and state it, in full, with all the hypotheses and conclusions. When I know I’m going to need a technical lemma I switch to the more classical style to make sure they’re not just thrown in like so many equations.
I really don't think "stream-of-consciousness" captures it at all. When I say "conversational", I mean that (I hope) they read like what I would say if I were speaking in a seminar. And surely such glossolalia as others have reviled here wouldn't fly in seminar talks either.
Posted by: John Armstrong on April 7, 2007 4:42 PM | Permalink | Reply to this
### Re: Why Mathematics Is Boring
It is hard to improve on Paul Smith’s comment. With a paper written in formal style you know where you stand, whereas for a more narrative paper you often need to rewrite its results for yourself before you can use it, to make sure there are no hidden assumptions. (I really dislike Gelfand and Vilenkin: Generalised Functions vol 4 for this reason).
Since the point of an article is the communication of a result, I think the most important criterion should be transparency. Even ‘formal’ papers can be badly written, e.g. with large numbers of implicit morphisms, ambiguous or context-dependent notation, definitions introduced far away from where they are used, motivating examples separated from what they motivate, terse proofs that take a week for an expert to expand etc. On the other hand, I think that on close scrutiny, narrative papers are almost never transparent.
Perhaps one should rather address the question of what makes a paper read well instead of the ‘formal’ vs. ‘narrative’ style dichotomy. In a formal paper one usually introduces narrative elements in the remarks. Moreover you keep your eye on some continuous line of argument, and you say how each concept/result fits into this line if it is not obvious. Another point is to include reminders in the text (because one very rarely read a longer paper in one sitting, and these interruptions may last a week or so, by which time one has forgotten some conventions). Notation should be fairly explicit and functorial and close to the standard notation of the area. And if there are big gaps between the use of a definition, a reminder usually helps.
Perhaps one can define the narrative style as the description of a continuously unfolding argument. I think this can also be done in the formal mode through the adding of remarks, and by avoiding choices which break the line of the argument (e.g. collecting all definitions into one list at the start).
Posted by: Hendrik Grundling on April 9, 2007 2:16 AM | Permalink | Reply to this
### Re: Why Mathematics Is Boring
“Since the point of an article is the communication of a result”
Maybe one reason for looking at “new ways of talking about mathematics” is that it’s not obvious that every published bit of writing on the subject should have the goal of “communication of a result”. Plainly this is important, and accounts for the majority of why professional mathematicians read mathematical writing: to find and understand some result they may want to use in the context of developing some other new result, to be communicated in turn. In some cases, professional scientists in related fields such as computer science and physics read papers with similar aims, and I think in all those cases, your criteria make sense.
I would guess that the phrase “mathematics is boring” is an attempt to account for the apparent fact that almost nobody else outside this group is even remotely interested in reading about mathematics. There is a (limited) audience for popular books - almost without exception written in the narrative style - but few specialists are much interested in reading them. What with the expansion and specialization of mathematics (and science generally), most of us are non-experts in any given area - yet we still want something other than a “layman’s” depiction of a subject. The issue of “communicating mathematics to the culture at large” is also relevant: if it’s true that the narrative style is a bad way of conveying mathematical ideas, and the conventional style is “boring”, why are these essentially the only methods used?
Posted by: Jeffrey Morton on April 9, 2007 10:53 AM | Permalink | Reply to this
### Re: Why Mathematics Is Boring
Paul Smith wrote:
The dry theorem/proof style evolved for good reasons. I have often found it difficult reading old ( > 80 years) papers for the same reason that it is often difficult and frustrating to read papers on hep-th. One is left wondering what the hypotheses are, what the conclusion is, where the proof begins and ends, or even if there is a proof.
There are lots of different styles that make for good math papers, but the style on hep-th is not one. And that shouldn’t be surprising. These aren’t even math papers: they’re physics papers!
What’s surprising, actually, is that you’re looking for ‘proofs’ in these papers. Physicists don’t usually do proofs. Their education doesn’t even teach them how.
I think more mathematicians should pay attention to narrative techniques in order to make their papers less boring, and readable by a larger audience. But, by ‘narrative techniques’, I don’t mean that we should eliminate clearly stated theorems and clearly stated proofs — except in expository material like This Week’s Finds where that’s not the main point.
What I mean is that the paper should have a clear story line: at any point, the reader should know what’s going on. It should have vividly drawn characters: the main entities being discussed should be clearly visible and stand above the hubbub of minor characters. It should have suspense: a clearly stated problem, and a clear sense of why it’s important and perhaps difficult. And, if it’s long, it should be chopped up into memorable episodes, each ending with a bang.
One can often get the story by attending conferences, talking to colleagues, going to lectures, reading survey articles, and (now) reading some blogs. But the rock on which all else rests is essential. That rock is the published paper written in the modern style.
There are certainly some very good things about the modern style, which I wouldn’t want to lose. But the modern style of mathematics is also famous for being dull and hard to follow. This is even true for mathematicians! — and that’s what leads to hyperspecialization: since one gets the feeling that there’s no point trying to read most papers unless one happens to already be interested in their results, most mathematicians focus on what they already understand.
I believe that progress in mathematics is being greatly held back by the fact that so many mathematicians don’t understand much of the mathematics that’s already been done. And the boring way in which mathematics is so often explained is part of the reason for this.
Posted by: John Baez on April 9, 2007 11:55 PM | Permalink | Reply to this
### Re: Why Mathematics Is Boring
You honestly think progress in mathematics is slow? Or that not enough attention is paid to connections between disciplines?
Posted by: Changbao on April 10, 2007 8:35 AM | Permalink | Reply to this
### Re: Why Mathematics Is Boring
Changbao wrote:
You honestly think progress in mathematics is slow?
‘Slow’ is a relative term. Progress is faster than it was, but much slower than it could be. That’s because people are very quick at making progress in specialized subjects, but rather slow at explaining mathematics clearly, and unifying different subjects.
A result that few people understand is not living up to its potential. So, we need to get better at explaining mathematics clearly to more people. For this, it’s important to make mathematics more fun. Mathematics is hard, but it can be very fun if it’s explained well.
There’s too much mathematics for anyone to understand. So, we need to unify different branches of mathematics, to reduce the burden.
To explain mathematics clearly in an efficient way, we need to unify different subjects. But to unify different subjects, we need to understand them clearly — so we need people to explain them clearly.
It’s a circular problem, but we can tackle it by spending a bit less energy finding new results of a highly specialized nature, and a bit more energy figuring out good ways to explain what’s already known — and unify what’s already known.
If we don’t do this, mathematics will keep getting more specialized and more fragmented… with lots of wonderful ideas that very few people understand.
Or that not enough attention is paid to connections between disciplines?
Yes, not enough attention is paid to connections between disciplines. Smart people pay lots of attention to these connections, because that’s a great way to get good new ideas. But still, people aren’t paying enough attention to the connections. In particular, they’re not taking advantage of all the opportunities to simplify mathematics by unifying it.
Don’t get me wrong: mathematics is great! I’m just saying it could be a lot better.
Posted by: John Baez on April 10, 2007 8:10 PM | Permalink | Reply to this
### Re: Why Mathematics Is Boring
Spend a bit more energy figuring out good ways to explain what’s already known? I wholeheartedly agree. For one possible way of doing this, could I pass on the words of Hans Moravec, robotics researcher at Carnegie Mellon? These two paragraphs are from his book Mind Children: the Future of Robot and Human Intelligence:
As I suggested in Chapter 1, the large, highly evolved sensory and motor portions of the brain seem to be the hidden powerhouse behind human thought. By virtue of the great efficiency of these billion-year-old structures, they may embody one million times the effective computational power of the conscious part of our minds. While novice performance can be achieved using conscious thought alone, master-level expertise draws on the enormous hidden resources of these old and specialized areas. Sometimes some of that power can be harnessed by finding and developing a useful mapping between the problem and a sensory intuition.
Although some individuals, through lucky combinations of inheritance and opportunity have developed expert intuitions in certain fields, most of us are amateurs at most things. What we need to improve our performance is explicit external metaphors that can tap our instinctive skills in a direct and repeatable way. Graphs, rules of thumb, physical models illustrating relationships, and other devices are widely and effectively used to enhance comprehension and retention. More recently, interactive pictorial computer interfaces such as those used in the Macintosh have greatly accelerated learning in novices and eased machine use for the experienced. The full sensory involvement possible with magic glasses may enable us to go much further in this direction. Finding the best metaphors will be the work of a generation: for now, we can amuse ourselves by guessing.
I’ve known some mathematicians, and read of others, who used gesture a lot: perhaps — I don’t know how to say this more precisely — they were running their mathematics on some anciently-evolved kinaesthetic virtual machine deep in their brain.
Thus, I once read a biography of Erdős which said (if I remember correctly, which I may not) that he flapped his hands continually as he walked, and that he found it hard to think mathematically if prevented from doing so.
I recall an Oxford topology lecturer who was always talking about “upstairs” and “downstairs”. He would talk about “upstairs” for the codomains of functions and “downstairs” for their domains, or “upstairs” for topological spaces and “downstairs” for their open sets.
John Fitzgerald, a group-theorist friend of mine who researched in Essen, told me he thought of the German mathematical word “Darstellung” (group representation?) by breaking it into its roots: “there putting”. This gave him, he said, a visceral sensation of picking up one group and swinging it around inside another. Which helped him think about his research.
All this is anecdotal. But I would really love it if, by designing metaphors that implement efficiently on our ancient kinaesthetic virtual machines, we could make it possible for more people to understand category theory.
Um. I also note a remark from Ronald Brown and Tim Porter’s Category Theory and Higher Dimensional Algebra: potential descriptive tools in neuroscience:
Thus the equation
2 × (5 + 3) = 2 × 5 + 2 × 3
is more clearly shown by the figure
||||| |||
||||| |||
Indeed the number of conventions you need to understand [the] equation […] make it seem barbaric compared with the picture […].
My italics.
Posted by: Jocelyn Paine on July 28, 2008 4:01 PM | Permalink | Reply to this
### Re: Why Mathematics Is Boring
I agree only a bit with Allen. It depends entirely on the writer. Nathan Jacobson’s Algebra books, for instance, are a joy to read, and he quite frequently put the statement of the theorem after its proof. His writing style was sufficiently clear that it is rare to have the “uh, what was used to prove what, now?” reaction so common when encountering physics papers. I still write mostly in the theorem-proof style, because I don’t trust myself enough as a writer to do otherwise.
However, I don’t really think this is as important an issue in writing as is the quality of introductory sections. What makes or breaks a paper in terms of its readability (not in terms of its mathematical significance), in my view, is how well the introduction is written.
And of course, let us not forget the importance of the royal we. :-) “We now turn to the proof of our main result.” “We are not amused by this counterexample.”
Posted by: Michael Kinyon on April 7, 2007 2:26 PM | Permalink | Reply to this
### Re: Why Mathematics Is Boring
I see nothing wrong per se with the classic Definition-Theorem-Proof style – it can be done well or badly. Milnor for example writes beautifully in this style, adding in motivating remarks and comments as appropriate. In his Morse Theory for example, he manages to combine a strong narrative with efficiency of presentation. I think it comes down to his wonderful mastery of the subject.
As for the royal or editorial ‘we’: I’m not sure the singular form is necessarily any better. “I now turn to the proof of my main result.” That can certainly smack of monologuing or performing; the plural form on the other hand can be seen as recognizing the collaborative and critical role of the reader. Whether or not it sounds ‘august’ depends on the tone (or skill) of the writer.
Posted by: Todd Trimble on April 7, 2007 3:37 PM | Permalink | Reply to this
### Re: Why Mathematics Is Boring
Maybe we should all start using the second person singular:
“You are now ready for the proof of the main result. Perhaps you should pause and reflect upon the lemmas of the preceding section. Have a beer, too, if you are so inclined.”
Posted by: Michael Kinyon on April 9, 2007 12:16 AM | Permalink | Reply to this
### Re: Why Mathematics Is Boring
Maybe we should all start using the second person singular: “You are now ready for the proof of the main result. Perhaps you should pause and reflect upon the lemmas of the preceding section. Have a beer, too, if you are so inclined.”
Of course, a lot of people do write in this fashion (well, not the beer part). John Baez routinely addresses the reader as ‘you’ in his This Week’s Finds (as for example here ). Writers of textbooks frequently do, too.
Any voice, even a mixture of voices, can be made to work; whether it comes off as friendly and engaging, or condescending, or flippant, or silly, or irritating, depends on the writing and how it is read. Here is Gavin Brown in a witty and generally very positive review of Tom Körner’s Fourier Analysis (Bull. Amer. Math. Soc., Vol.21 No. 2 (October 1989), 307-311):
“Our author is fond of the device of almost – but not quite – addressing his reader. ‘Continuing along this line of thought the reader will recall the following theorem. (If the reader has forgotten the proof she will find it in Chapter 53 (Lemma 53.2).)’ These coy asides always use feminine pronouns; an idiosyncrasy which can eventually become somewhat tiresome… Sometimes the dominie in Körner takes over and the reader is required to eat up all her porridge. ‘Before leaving this chapter the reader should convince herself that any two of Theorems 34.1, 34.2 and 34.4 can readily be deduced from the third’ or ‘To understand this proof fully the reader must be clear in her own mind why we needed Lemma 44.5…’ ”
To each her own.
Posted by: Todd Trimble on April 9, 2007 4:05 PM | Permalink | Reply to this
### Re: Why Mathematics Is Boring
Mostly I think “we” is the least-worst option, but one thing that I really dislike is use of “we” when making questionable/speculative statements, eg, “we now see that the only viable solution to this problem is …” This annoys me because I’m sitting there going “whoa, _I_ don’t see that this is the only solution”. In those rare cases I try to find language with an explicit “the authors”, indicating that the reader should make up their own mind.
Posted by: dave tweed on April 9, 2007 12:29 PM | Permalink | Reply to this
### Re: Why Mathematics Is Boring
Nathan Jacobson’s Algebra books, for instance, are a joy to read, and he quite frequently put the statement of the theorem after its proof.
This is the distinction I want to draw! Jacobson, not Lang. I know it’s apostasy to speak ill of Serge, but his book is simply dreadful for a first-timer.
There are huge swaths where if you don’t know what goal he’s trying to reach you have no idea why he’s doing something now, and you get completely lost in the backflips.
Jacobson tells you where you’re going, states technical details in a more classical style, but lets the very natural story play itself out. There are some results that are so inexorable that you can’t help but be drawn to them.
Posted by: John Armstrong on April 7, 2007 4:53 PM | Permalink | Reply to this
### Re: Why Mathematics Is Boring
“Jacobson, not Lang. I know it’s apostasy to speak ill of Serge, but his book is simply dreadful for a first-timer.”
I recall looking at Lang’s book in a bookstore a long time ago when I was an undergraduate and putting it back on the shelf with the feeling that I would turn to stone if I continued reading it. It did not draw me in at all. I don’t have a ready copy on hand to determine if the problem was his writing style or the style of typesetting.
Posted by: Richard on April 8, 2007 3:17 AM | Permalink | Reply to this
### Re: Why Mathematics Is Boring
Disclaimer: computer science guy rather than mathematician any more.
Part of the problem I have with the “story” metaphor is that stories are things that are spoken at you. Likewise, John A’s papers are presumably written in a monologuing rather than conversational style. Obviously papers serve a split purpose: partly to communicate “in general” and partly to present things in enough detail that experts can spot flawed reasoning. But I think some sort of style that somehow provokes the reader at a couple of points to question things, and to see if they agree with the tack being taken by the author, would be better. Indeed, the one type of talk I absolutely can’t stand is the kind that is determined to make sure nobody fails to follow, and thus has a very strong “narrative” structure that goes just slowly enough to be a bit boring but not slowly enough that you can detach your thoughts and ask mental questions about the speaker’s approach to the topic. I’d much rather be lost on the details but see some nugget that is strange, challenging or powerful and then have to read up about it after the talk.
If you get the chance, Donald Knuth’s Surreal Numbers is an interesting, although not wholly successful, read.
Posted by: dave tweed on April 7, 2007 11:44 AM | Permalink | Reply to this
### Dialogue; Re: Why Mathematics Is Boring
I do believe in Narrative at the heart of much great Mathematics. The “story” need NOT be a monologue. Consider the textbooks in Abstract Algebra of Richard “Dick” Dean. They are at a sweet spot between theorem/proof and stream of consciousness.
Here’s an idea. Here are some examples, informally. I’d like to prove this. So I try that. Whoops, that didn’t work because of this. So I’ll backtrack and try again, using what I’ve learned. Whoops, that didn’t work because of this other thing. So try again. Now it works. Aha! So we learned something along the way to the proof, and not in a dry Lemma sense.
Feynman did this, also. Surely Feynman was never boring!
Jonathan Borwein and the “Experimental Mathematics” movement is also a kind of Story. As are Tom Apostol’s award-winning “Project Mathematics!” videos.
A purpose of Mathematics is insight. Why not admit that, for some writers, to some readers?
Posted by: Jonathan Vos Post on April 7, 2007 10:13 PM | Permalink | Reply to this
### Re: Dialogue; Re: Why Mathematics Is Boring
I think we might be hitting terminological rather than real disagreements. To me, a story/narrative is something where you are deliberately pulled along by the author and don’t do much beyond “imagining” it. Indeed, a good author will have a strong basic plot and the deftness of touch to pull you past plot contrivances/inconsistencies that don’t matter in the overall scheme quickly enough that you don’t really notice them. Likewise they’ll generate enough anticipation about where things are going that you won’t linger, rereading earlier passages to see if there’s something fishy going on. This is what I want from a story, because it’s a relaxation. However, I don’t think this is the optimal style for scientific/mathematical discourse. But maybe the above isn’t what the term “narrative” means for you. I dunno, maybe I prefer an “explorer” metaphor (eg, Livingstone, Lewis and Clark, etc) for mathematics: what you can explore is limited to the terrain that’s actually there, but nothing happens until you start walking.
My real point is that writers who are very good with the structuring can implicitly suppress your own critical faculties as you’re reading, with the result that you don’t fully engage, don’t really learn the subject, and maybe end up feeling bored at the end because of this. For instance, your backtrack-and-try-again style is an improvement, but it’s still me looking at the “mistakes” you want me to look at. The old adage that “you don’t really understand something until you have to teach it/program it” applies: until you’re forced to do things in a slightly different way you don’t engage actively enough. What would be nice is if there was a way of writing that facilitated this engagement on the reader’s part. But I don’t have any concrete suggestion :-) .
I agree a purpose of mathematics is insight, but I’ve read lots of papers which had a strong story, which I thought were impressive, and which I thought I completely understood. Later, when I tried to actually apply the results, I discovered there were additional “insights” (of the same conceptual level) that I hadn’t had through reading the paper.
Posted by: dave tweed on April 9, 2007 10:31 AM | Permalink | Reply to this
### Re: Why Mathematics Is Boring
In case it gets taken the wrong way, all I mean by the monologuing vs conversational point is that the way people talk in actual conversations between two people is different, precisely because you’ve got two people communicating “on-line”.
Posted by: dave tweed on April 9, 2007 11:51 AM | Permalink | Reply to this
When programming in the language Haskell, beginners almost immediately hit the notion of category-theoretical monads. Even the simplest “Hello, World!” program requires the use of monads. What I think is interesting about this is that Haskell beginners, many of whom have very little abstract mathematics background, let alone category theory, suddenly have to leapfrog into completely foreign territory. They can try to read the formal definition of categories, functors and monads, and may even be able to carry out basic proofs with them. But the theory seems completely uninteresting to the beginner, especially when all you want to do is print “Hello, World!”.
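For concreteness, here is a minimal sketch in plain standard Haskell (nothing beyond the Prelude is assumed): even a one-line program already carries the IO monad in its type, since `main` must have type `IO ()` and `putStrLn` produces an IO action.

```haskell
-- Even "Hello, World!" lives inside the IO monad.
main :: IO ()
main = putStrLn "Hello, World!"

-- Sequencing two outputs uses do-notation, which is sugar for the monad's bind (>>=).
greet :: IO ()
greet = do
  putStrLn "Hello,"
  putStrLn "World!"
```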
So we have a situation where people have been forced to find ways to make monads accessible and interesting, and now there’s a whole industry of people writing ‘narratives’ with florid metaphors in an attempt to get the meaning over clearly. Various writers have used metaphors like containers, spacesuits(!), storage for nuclear waste(!!), types of piping and nesting, Hotel California (you read that correctly: someone thinks monads are like the Hotel California!), computations, and ‘unsafe’ functions. It’s incredible how creative technical writers can become when there’s a definite need. It’s also interesting to see people make their private metaphors public like this. I’m sure that all mathematicians have lots of bizarre and interesting metaphors for mathematical concepts that they wouldn’t normally share with other people.
Posted by: Dan Piponi on April 7, 2007 1:43 AM | Permalink | Reply to this
I’m sure that all mathematicians have lots of bizarre and interesting metaphors for mathematical concepts that they wouldn’t normally share with other people.
Well, not in mixed company at least.
Posted by: John Armstrong on April 7, 2007 2:16 AM | Permalink | Reply to this
When programming in the language Haskell beginners almost immediately hit the notion of category theoretical monads. Even the simplest “Hello, World!” program requires the use of monads.
This sounds to me (a person who has participated in the implementation of several industrial-strength Prolog systems) like a good reason why Haskell will never become a major computer language.
### Re: Why Mathematics Is Boring
It seems that there may be two pure strategies here: one extreme is to motivate a general audience and the other extreme is to teach a technical audience. Of course, in the real world we often deal with mixed strategies which involve some of each.
Sometimes one can simplify the problem by considering the pure strategies first – in this case, motivate a general audience, or teach a technical audience. Then clearly identify the skills and knowledge of the intended audiences; clearly identify the objective of the paper, talk, book, blog posting, whatever; decide how much space/time we have to achieve the objective. Finally outline solutions and see how the solutions differ, and if appropriate, propose some mixed strategies to cover the in-between cases.
And if you really want a challenge, never use a number bigger than three.
I know, it sounds like a lot of motherhood, but hey, where would we all be without motherhood?
Posted by: Charlie Clingen on April 7, 2007 3:19 AM | Permalink | Reply to this
### Re: Why Mathematics Is Boring
QUOTE:
And if you really want a challenge, never use a number bigger than three.
END QUOTE
I just checked my current work on stochastic inference and Brownian motion.. nope, nothing above even 2! Well, except this 3rd order PDE in 2 variables that was more of a curiosity satisfied by some special functions.. not strictly necessary. Is this some indication of ‘rigour’?
Posted by: Stephen Crowley on April 7, 2007 3:38 AM | Permalink | Reply to this
### Re: Why Mathematics Is Boring
As to “never use a number bigger than three”, I should have expressed myself more clearly; I meant that when explaining something by giving examples, the challenge is to use examples with a very small number of items, cases, states, etc. – a feat that sometimes seems impossible, but usually, with some effort, can be achieved, even when describing complex and subtle concepts such as groups, symmetries, infinities, … . Such “simple” examples are easier to understand and remember for the audience and creating them can also bring a deeper understanding to the “teacher”, but they can be difficult to devise.
Posted by: Charlie Clingen on April 8, 2007 5:14 PM | Permalink | Reply to this
### Re: Why Mathematics Is Boring
Nope. Already lost at keeping it under three. My main problem with that is that I’m working with finite groups, and the smallest non-abelian finite group is $S_3$, with 6 elements. Even worse - since I like to work with 2-groups, my smallest nonboring examples have 8 elements…
It’s a neat guideline though, as long as you don’t make it a law.
Posted by: Mikael Johansson on April 9, 2007 10:02 AM | Permalink | Reply to this
### Re: Why Mathematics Is Boring
To continue a point made by Charlie…
Writing for experts is very different than writing for non-experts, and I think there is no royal road. The risk of making things too narrative or conversational is that the experts feel that their time is being wasted. I often find myself thinking Please just tell me the mathematics concisely and precisely and let me supply my own little story. There is a reason I went to grad school, and please don’t treat me like I didn’t. Of course, the risk of the Bourbaki style is that it’s too dry for most people who can’t supply their own stories (ie most non-experts).
In my own case, I’ve found myself on both sides, even with the same writer X writing papers in the same style. I’ve come to a field knowing nothing, and X’s papers were revelations. But now that I’ve learned the point, when I read X’s later papers, I just wish he’d cut the fat out.
That’s why it’s good to have both the Journal of the AMS and the Bulletin. We really need both. I do think, however, that it would be good if the number of Bulletin-style papers increased by a factor of 10, as long as the good people don’t stop writing nice, lean papers.
As Charlie said, this all seems about as controversial as motherhood, perhaps with the exception of the factor 10.
Posted by: James on April 11, 2007 1:15 PM | Permalink | Reply to this
### Re: Why Mathematics Is Boring
I like John Armstrong’s approach he mentioned above.
Like Einstein says, “The whole of science is nothing more than a refinement of everyday thinking”. This includes mathematics. If we want to contribute to the understanding of something, we should provide enough explanation about things. We should emphasize the purpose of things (why we defined this like that), and review the goals (where we want to get) every now and then in our presentation. We should be able to make distinction between a rigorous proof and its exposition.
Posted by: Siamak Taati on April 7, 2007 3:11 PM | Permalink | Reply to this
### Re: Why Mathematics Is Boring
Perhaps there would be less confusion in this whole discussion if people illustrated their points with reference to online material. After all, literary criticism with the text in front of you makes much more sense.
Why not have mathematical critics just as you have literary critics, to develop mathematical taste by public criticism? (Lakatos, ‘Proofs and Refutations’, CUP 1976: 98)
Posted by: David Corfield on April 9, 2007 12:18 PM | Permalink | Reply to this
### Re: Why Mathematics Is Boring
David wrote:
Perhaps there would be less confusion in this whole discussion if people illustrated their points with reference to online material.
I was thinking about that while walking through a parking lot on Saturday. How can I explain how wretched so much math writing is without pointing to explicit examples?
But, that would just get me a bunch of enemies. And, I don’t really like the book review culture in literature, where people feel free to publicly tear each other’s work to shreds.
We don’t have a culture of public criticism of style in mathematics. This might be holding back the development of better writing… but it might also be part of a generally civil and constructive way of doing things. Unlike in literature or art, we mathematicians are all building the same pyramid, after all.
It’s an interesting issue.
So, I may need to make up my own fake examples of good versus crappy writing styles. I could do it by taking a sample of a bad paper on algebraic geometry and turning it into one on functional analysis, for example.
Posted by: John Baez on April 10, 2007 12:22 AM | Permalink | Reply to this
### Re: Why Mathematics Is Boring
The meaning of ‘criticism’ seems to have evolved in recent decades towards an activity of overwhelmingly negative judgment. This is very noticeable if you ask first year philosophy students to write a ‘critical review’ of, say, Descartes’ Meditations. Many believe you want them to slag Descartes off.
By ‘public criticism’ of mathematics, I’m sure Lakatos intended to allow praise of mathematics done well.
But, even with this in mind, we face the problem you mention. Praising real papers is fine, but singling out others for censure is awkward. Still, much could be achieved via praise. If your papers do not resemble any of a wide range of praised articles, you might want to reconsider your style.
Posted by: David Corfield on April 10, 2007 8:24 AM | Permalink | Reply to this
### Re: Why Mathematics Is Boring
The AMS has the Leroy P. Steele Prize for Mathematical Exposition, which does go a little bit in that direction, doesn’t it?
Of course it’s not enough at all, and I strongly agree with John’s statement above that a lot of progress is held back by a dry, boring style which creates walls between various areas. Actually, some folks like Arnold have expressed similar concerns in the past, for instance the last few paragraphs of this paper.
Posted by: thomas1111 on April 10, 2007 9:59 AM | Permalink | Reply to this
### Re: Why Mathematics Is Boring
Here’s an email I got on this subject, which I was given permission to post:
Dear Professor Baez,
My name is Paolo Bizzarri and I am following your invitation to provide some feedback on the topic “Why is mathematics boring?”
Let me introduce myself: I am 37 years old, got a Degree in Computer Science in 1994. In the last four-five years I have started studying mathematics for fun and passion.
As my job is different, I have a limited time to dedicate to my passion. Even if I find mathematics extremely interesting, I have often found studying maths boring, difficult and hard to pursue. So, I have done some simple reflection on why I find something I am nonetheless passionate about difficult and boring.
My conclusions are as follows:
• mathematics is taught in an unnatural way;
• mathematics is a practical discipline, but it is taught as an abstract one;
• there is lots of implicit knowledge that is not made available through books.
I will try to expand each of these sentences in the following.
Mathematics is taught in an unnatural way.
My point here refers mainly to textbooks, and their typical structure of definition/lemma/theorem/definition.
Where is the problem? The problem is that this approach is unnatural. It is not in that way that mathematicians reason and produce their work.
If you see a demonstration, it is as terse and essential as it has to be. Each passage is perfectly connected with the previous passages. Each hypothesis is made exactly when it is needed, and it is absolutely minimal.
The question (my question) was: yes, everything works perfectly, but this is not science. This seems more like a Hollywood film, where everything happens for a precise reason.
But mathematicians do not work in that way (or, at least, this is my understanding). The real problem in mathematics is often not to demonstrate a theorem: it is to find a good object to study. The definition comes AFTER a theorem has been demonstrated. The demonstration itself is refined numerous times, in order to obtain the “perfect”, textbook demonstration. Which is the only demonstration you see, and I found them quite unnatural exactly because they were perfect.
The real point should be that mathematics textbooks should provide the context, the reason WHY they are studying something, and what they are trying to study. Which brings us to my second point.
Mathematics is a practical discipline, but it is taught as an abstract one.
Again, this is based on my limited experience, and can be pretty typically Italian. But, anyway, it is the only experience I can provide.
One of the main points about mathematics is that it is “abstract”, “pure”, not tied to any practical problem.
Which is false, from a historical point of view. But it is also false on a more concrete, day-by-day, practical level.
Maths is boring for non-full-time mathematicians because they don’t know the “tools of the trade”. They are not used to manipulating mental objects like groups or vector spaces. When I read about groups for the first time, I found it difficult to understand lots of things about them and their importance. When I started to see them used, they became much clearer to me.
Perhaps mathematicians do not feel this problem strongly, because they are used to working with abstract objects. It is the same problem that a non-IT professional has when he has to use a computer program designed for an IT professional.
This separation is also strong because you are not supposed to use the tools you learn by yourself: the demonstrations are already provided, and you are not expected to improve them (you would not be able to anyway…). You have to use them in some cooked-up situations, but again there is little sense that this is done for a specific reason. Which brings us to the third, and possibly final, argument.
There is lots of implicit knowledge that is not made available through books.
It is one of the most striking things I have seen: there are lots of things about mathematics that are not explicitly expressed in mathematics textbooks.
One is what all mathematicians call “elegance”. It is a fuzzy concept, but it is fundamental. I have not seen a single, explicit reference to it (except, perhaps, in Topics in Algebra). But this is a fundamental criterion for creating and judging mathematical theories, and a strong guide to which structure to expect.
The second is about the “styles” of demonstration you adopt. Given a certain domain, it is quite common to see demonstrations carried out using a limited set of methods, but these methods are never expressly named or indicated.
But, without a name, it is difficult to effectively teach these things. We cannot even speak properly about them.
Conclusions.
Well. If you didn’t find these writings of mine boring, I owe you a pizza in Pisa, if you ever come to my city.
Best regards.
Paolo Bizzarri
Posted by: John Baez on April 13, 2007 6:24 PM | Permalink | Reply to this
### Re: Why Mathematics Is Boring
there are a couple of things that i think make mathematics dull:
1. difficulty in rapidly developing an overall sense of what is going on
2. the slackening of the ties between math and physics.
the first is, to some extent, unavoidable. there is probably something biologically tiresome about any activity in which the actor is forced to labor fairly strenuously before a reward is forthcoming (indeed, before he knows if there will ever be much of a reward). to make matters worse, some people have a habit of obscuring central ideas with a lot of distracting trivia. outside of mathematics, people commonly employ this tactic in order to conceal a lack of substance in the product they are promoting. it’s called marketing. there’s nothing wrong with marketing in mathematics, but the variety practiced by mathematicians who actually believe that impenetrability => depth is unhelpful and unfortunate. as for the 2nd factor contributing to the dullness of mathematics: i have no idea why anyone would study mathematics unless they were interested in physics. explaining such a phenomenon to anyone without much experience in mathematics is a task i am certainly not up to. perhaps some people are. however, it’s very easy to convey the importance and interest of mathematical research if you are able to connect it with black holes, oceanic dynamics, the fundamental structure of matter, cryptography, communication networks, and so on. this connection was very deeply appreciated at the university where i went to grad school, but i’m not sure that the same can be said of most schools (the bourbaki shadow is rather long). hopefully this is not the case. in my opinion, it’s fairly easy to make mathematics quite engaging if these issues are addressed, editorial ‘we’ or no.
Posted by: F. Gabriel on April 14, 2007 4:37 AM | Permalink | Reply to this
### Re: Why Mathematics Is Boring
Gabriel wrote “i have no idea why anyone would study mathematics unless they were interested in physics.” Certainly lots of people are interested in mathematics for its applications to the physical world, but there are people who just enjoy “doing the things you do” in creating new mathematics and for whom applications are an “excuse” for doing it. And since studying mathematics is just creating mathematics with a lot more hints, I can see non-applications based reasons to study it. (As an analogy, consider playing in a band: most people do it because they enjoy the process rather than the “result”.) To be honest, I find too much listing and displaying of physical world results and applications a bit of a turn-off since again it’s very passive, whereas trying to explain the nub of a proof is much more active.
Since John B has just posted about Greg Egan, I’ll just mention that his metaphor of the “mathematical mines” in Diaspora is my favourite characterisation of mathematics; makes it clear it’s a participatory activity.
Posted by: dave tweed on April 15, 2007 12:38 AM | Permalink | Reply to this
### Re: Why Mathematics Is Boring
Gabriel wrote:
i have no idea why anyone would study mathematics unless they were interested in physics.
I was about to lambast you for writing this, when I remembered I used to feel the same way.
In fact, when I first became interested in physics, my parents were worried I wasn’t good enough in math to do well. But the reason I wasn’t good was that I just wasn’t interested in long division! When I saw that math held the key to many secrets of physics — perhaps the only legitimate form of magic — my interest increased, and I did better.
Now I no longer need to work on specific physics problems to stay interested in math. I understand enough of the ‘inner world’ of mathematics to find questions gripping just because their answers would illuminate that world.
Of course, almost anything gets more interesting the deeper you get into it.
I used to hate gardening; it was merely a chore. Then a lot of our plants died when I was in China last summer. So, we had to do a lot of replanting. Now that I know every plant in my back yard by name, and I’ve watched them all grow and thrive for months, gardening has become downright fun!
My friend, the archaeologist Lothar von Falkenhausen, once saw a huge collection of Mao buttons. As he put it: if you see a dozen Mao buttons, it’s boring. But if you see thousands, it’s fascinating.
Personally I find math more interesting and important than Mao buttons, but then I’ve spent a lot more time on math, so I’m not a fair judge.
Posted by: John Baez on April 15, 2007 11:59 PM | Permalink | Reply to this
### Re: Why Mathematics Is Boring
Rolfe Schmit has blogged about making math less boring — check it out!
Posted by: John Baez on April 14, 2007 9:50 PM | Permalink | Reply to this
### Re: Why Mathematics Is Boring
Great post John, and great comments too. In my opinion, Mr. Bizzarri is dead on. Too many mathematicians seek to obscure the path of discovery rather than illuminate it.
Posted by: CapitalistImperialistPig on April 15, 2007 3:00 AM | Permalink | Reply to this
### Re: Why Mathematics Is Boring
I don’t know any mathematicians who openly ‘seek to obscure the path to discovery’. I do, however, know a lot of mathematicians who are scared that other mathematicians will find their work trivial. Their half-subconscious reasoning seems to go like this:
Professional mathematics is just a big intelligence contest. If Prof. A can understand Prof. B’s work, but Prof. B can’t understand Prof. A, then Prof. A must be smarter — so Prof. A wins! Luckily, there’s a way to game the system. If you write in a way that few people can understand, everyone will think you’re smarter than they are! Of course you need someone to understand your work, or you’ll just be considered a crackpot. But, you should only let very smart, prestigious colleagues understand your work.
I find this attitude pathetic. I’ve never seen anyone openly advocate it — but I know why people fall for it: the pressure is built into the quest for intellectual prestige. I feel this pressure myself. The only solution is to show everyone it’s cooler to explain stuff clearly, and not be scared to make a fool of yourself in public.
Posted by: John Baez on April 15, 2007 8:11 PM | Permalink | Reply to this
### Re: Why Mathematics Is Boring
JB wrote:
I do, however, know a lot of mathematicians who are scared that other mathematicians will find their work trivial.
Maybe one reason for this is in that Weil passage you like to quote from time to time, about resolving mysteries: you’re working away at some research, and it seems pretty serious stuff, and then finally you understand it, and it’s trivial! And you think, gee, this is really trivial, but it seemed so hard—maybe I’m just an idiot! Perhaps I should make everybody else suffer to understand it, like I did —then they’ll appreciate my hard work …
and not be scared to make a fool of yourself in public.
This is easier if you’ve already “proved yourself” by doing hard, scary things …
Posted by: Tim Silverman on April 15, 2007 10:29 PM | Permalink | Reply to this
### Re: Why Mathematics Is Boring
Tim wrote:
Maybe one reason for this is in that Weil passage you like to quote from time to time, about resolving mysteries: you’re working away at some research, and it seems pretty serious stuff, and then finally you understand it, and it’s trivial!
You mean this quote, I guess:
As every mathematician knows, nothing is more fruitful than these obscure analogies, these indistinct reflections of one theory into another, these furtive caresses, these inexplicable disagreements; also nothing gives the researcher greater pleasure.
[…]
The day dawns when the illusion vanishes; intuition turns to certitude; the twin theories reveal their common source before disappearing; as the Gita teaches us, knowledge and indifference are attained at the same moment. Metaphysics has become mathematics, ready to form the material for a treatise whose icy beauty no longer has the power to move us.
This is specifically about how vague analogies lose their charm after we’ve made them completely precise.
But yes: everything is trivial once you’ve done it!
I’ve given up trying to get around this. I used to try to prove ‘tricky’ or ‘difficult’ things, and it never worked — I kept finding mistakes in my proofs. Eventually I gave up and decided to only prove stuff that was completely obvious to me. The challenge then became making lots of things completely obvious!
On the other hand, you imagined someone taking the reverse approach:
Perhaps I should make everybody else suffer to understand it, like I did — then they’ll appreciate my hard work …
What I wonder is: does anybody have the gall to actually think this? It seems utterly inexcusable to me. Or is it a subconscious thing?
Posted by: John Baez on April 15, 2007 11:25 PM | Permalink | Reply to this
### Re: Why Mathematics Is Boring
It again comes back to the Grothendieck quote we’ve been over here before. When you state the question the right way it should answer itself.
As for myself, I’m just hoping that the simplicity I see in my own work is purely a consequence of familiarity and that it doesn’t look quite so trivial to someone who hasn’t seen it yet.
Posted by: John Armstrong on April 16, 2007 12:06 AM | Permalink | Reply to this
### “trivial” found harmful; Re: Why Mathematics Is Boring
The word “trivial” has nasty side-effects in Mathematics, at least in Education. See Notices of the AMS, Letters to the Editor, Volume 43, Number 10, and other letters
We also have mutations such as “partially trivial”:
Vanishing and bases for cohomology of partially trivial local systems on hyperplane arrangements
Author(s): Yukihito Kawahara
Journal: Proc. Amer. Math. Soc. 133 (2005), 1907-1915.
MSC (2000): Primary 14F40; Secondary 32S22, 55N25
Posted: January 21, 2005
and, in Conway’s paper, AMS, 11 Mar 1997, PROOF OF CONWAY’S LOST COSMOLOGICAL THEOREM:
“Some statements are a priori trivial …”
Posted by: Jonathan Vos Post on April 22, 2007 5:21 PM | Permalink | Reply to this
### Re: Why Mathematics Is Boring
Speaking as possibly somebody who may be guilty or at least is in danger of being guilty of the above offense, I would like to comment on another aspect of this which seems to have been overlooked so far.
X is writing a paper on some problem. X is working on the problem because of a personal motivation which isn’t necessarily the reason everyone else is interested in the problem. Moreover, X doesn’t really understand the background to the problem or have sufficient knowledge of why other people find the problem important and interesting. So X feels insecure starting his paper with a motivation and working a motivation into the text, because although he may have solved the problem, he’s far from being an expert on the motivation for the problem and on the whole story around it. So rather than advertise how little he knows in the field besides the problem itself, he keeps it dry and cryptic, so that people who know why the problem is interesting and know something about the problem can follow his proof, and nobody will groan at his lack of basic knowledge of the problem’s background and think “what a fool X is!”
Write what you know and feel confident about, and not what you feel on shaky ground with. I think this is a major reason for the issue which you raised. One has to feel fairly confident to feel one can sensibly make a paper interesting without being laughed at!
Posted by: Daniel Moskovich on May 16, 2007 11:59 AM | Permalink | Reply to this
### Re: Why Mathematics Is Boring
At the mention of mathematics, 3/4 of the class will start groaning.. Why? Simple!
Mathematics is equal to only one thing..
Formulae..
Formulae are equal to only one thing.. Or two..
Lots of thinking..
And remembering..
Thinking and remembering are equal to only one thing..
BOREDOM..
Agree, anyone?
Posted by: Natasha on May 4, 2007 8:06 AM | Permalink | Reply to this
### science-humanities divide; Re: Why Mathematics Is Boring
There are several science blogs currently active with a debate on whether (and if so why) arts/humanities students and professors find Mathematics and Science to be boring and not worth knowing.
You might start at:
The Innumeracy of Intellectuals
Category: Academia • Art • In the News • Music • Policy • Politics • Pop Culture • Science • Society
Posted on: July 26, 2008 9:49 AM, by Chad Orzel
and
Assorted hypotheses on the science-humanities divide.
Category: Academia • Disciplinary boundaries • Scientist/layperson relations • Teaching and learning
Posted on: July 27, 2008 4:46 PM, by Janet D. Stemwedel
Posted by: Jonathan Vos Post on July 28, 2008 5:57 PM | Permalink | Reply to this
Read the post Why Math Teachers Get Grumpy
Weblog: The n-Category Café
Excerpt: Grading final exams --- a test of one's soul.
Tracked: June 13, 2007 5:42 PM
### Re: Why Mathematics Is Boring
Hey everyone.
This is slightly off-topic, but I hope you don’t mind if I use this thread to advertise my new blog squarkonium.blogspot.com, where I posted a few ideas of my own about school education (more abstract than those discussed here, though).
Everyone is welcome!
Posted by: Squark on March 8, 2009 9:47 PM | Permalink | Reply to this
### Re: Why Mathematics Is Boring
Hi! Long time no see!
I hope you talk about math and physics a bit on your blog.
I tried twice to reply to your blog entry on the pyramid of sympathy, but I failed both times. So, until something changes I won’t be commenting there much.
(I think your blog uses the same software as the blog called Backreaction. I can’t succeed in leaving comments there either, unless I turn off Firefox and use Internet Explorer. So I don’t comment much there, either. That makes me sad.)
Anyway, here is my would-be comment. Someone noted that you don’t include yourself on that list of people and things ordered by how much you care about them. You wrote:
It is true that I find it difficult to evaluate the case $X = Y$ for some reason. For me, it’s a sort of $0/0$.
I think the main reason for not including yourself on such a list is that you will look like a jerk if you rank yourself near the top, and a liar otherwise.
Posted by: John Baez on March 8, 2009 11:26 PM | Permalink | Reply to this
### Re: Why Mathematics Is Boring
(I think your blog uses the same software as the blog called Backreaction. I can’t succeed in leaving comments there either, unless I turn off Firefox and use Internet Explorer. So I don’t comment much there, either. That makes me sad.)
I didn't have any trouble using Firefox 3 on Ubuntu. Just for the record.
Posted by: Toby Bartels on March 9, 2009 3:27 AM | Permalink | Reply to this
### Re: Why Mathematics Is Boring
Roughly 2 months ago I ceased being able to post comments on “Good Math, Bad Math” from an old IE or a new Firefox. Dr. Mark Chu-Carroll had others complain (by email or Facebook) of the same problem. Unsolved.
Posted by: Jonathan Vos Post on March 10, 2009 6:47 PM | Permalink | Reply to this
### Re: Why Mathematics Is Boring
Is Thales and Friends still going? In 2008, I was asked to blog by the online incarnation of a famous computer magazine that started during the hobby-computer era of the 70’s. I thought this would be a nice opportunity to explain category theory to my readers, most of whom are professional programmers, and to show what it can or might do for computing. But this needs more care than, e.g. explaining how to use PHP or why spreadsheets are risky, and since I’m no longer at a university, I don’t have the paid thinking time that most n-Category-Café readers do. So I mailed Thales and Friends to ask whether they could suggest how I might fund such writing. An organisation devoted to popularising maths: surely an ideal advisor. But despite several messages, both via the contact form on Thales’s site, and directly to their email address, I never got a reply.
I’ve had the same non-reaction when asking advice from academics on how to fund my Web-based category-theory demonstrations and related work. Some of the non-replies were even from regulars on this blog, which was a real enthusiasm killer. (Some people I mailed did give helpful replies though.) So such non-response isn’t unique to Thales. But perhaps in their case, problems in the Greek economy have killed the organisation? Does anyone know?
Posted by: Jocelyn Paine on August 30, 2010 4:31 AM | Permalink | Reply to this
### Re: Why Mathematics Is Boring
Thales and Friends is bankrolled by Apostolos Doxiadis, not the Greek government. I don’t know what effect the Greek economic mess had on his operation. I know that for a while, he was very much focused on finishing his comic book on mathematical logic. I never got the impression that he was especially interested in category theory. I imagine he gets vast amounts of email asking for favors of various sorts, and can’t answer most of it.
It’s not too surprising that your queries asking academics for ideas on how to fund your work have been unsuccessful. It’s probably a bit like sending out emails asking for tips on how to rob banks. Those who are good at it are busy doing it. Those who aren’t, can’t help you.
I’ve never gotten a grant for popularizing math. Everything related to This Week’s Finds, my website, the n-Category Café and so on is unfunded and is not officially part of the resume that gets me promoted. It’s just a ‘hobby’ — although I’m sure it did help me get my current job here in Singapore.
So, I don’t know how to get grants of that sort. But, if I wanted to get a grant for popularizing math, I’d start by going to the NSF website and looking to see what activities along those lines they fund. You can see their calls for proposals and also descriptions of all the proposals they’ve ever funded.
There could be other funding agencies that are relevant, too! A good buzzword to know is STEM: that’s jargon for ‘science, technology, engineering, and mathematics’, and there are lots of governments throwing money around trying to improve ‘competitiveness’ in these fields. Maybe you can try to catch some of it. But it helps to have a friend who has already done it. I haven’t.
Posted by: John Baez on August 30, 2010 10:04 AM | Permalink | Reply to this
### Re: Why Mathematics Is Boring
John, thanks for the advice. The NSF wouldn’t be any use to me though, would it, because I live in England. Except as a source of examples to see the kind of thing that gets accepted for funding.
It’s strange that governments don’t make it easier to get money to popularise science. Even philistine governments like mine, who would never fund anything just because it’s an important part of one’s culture, and whose idea of culture seems anyway to have been reduced to Big Brother and the Olympics 2012 logo, must surely suspect some link between the success of their economy and public understanding of science.
Posted by: Jocelyn Paine on September 3, 2010 4:29 AM | Permalink | Reply to this
### Re: Why Mathematics Is Boring
Jocelyn wrote:
The NSF wouldn’t be any use to me though, would they, because I live in England.
Sorry, true. England has its own setup for funding math, called EPSRC. Someone else, like Tom Leinster or Simon Willerton, would be more familiar with that.
It’s strange that governments don’t make it easier to get money to popularise science.
It could be easy: I wouldn’t know, since I’ve never tried to get money for that. I just do it for fun.
EPSRC has a Public engagement - funding available page. They fund “public awareness projects to communicate the excitement of research by EPSRC-funded researchers”, they give “awards to give researchers time to work with the mass media”, and so on. None of that sounds quite right for you. I bet there’s some other agency focused more on education.
Posted by: John Baez on September 3, 2010 7:51 AM | Permalink | Reply to this
### Re: Why Mathematics Is Boring
Ronnie Brown has certainly been very active in getting the message across to the general public.
Posted by: jim stasheff on September 3, 2010 1:45 PM | Permalink | Reply to this
### Re: Why Mathematics Is Boring
Ronnie (and the Centre for the Popularisation of Mathematics) were quite successful in getting the message across, but not with that much monetary support. We used funds from here, there and everywhere. Each time a small amount. We ran (and this continues even though the Maths Department at Bangor was shut down) Masterclasses for 13-year-olds and were helped in this by funds from local industry. This in turn enabled us to build up a mathematics exhibition (Maths and Knots) and get small amounts of money from national sources. When we tried to get more to continue websites, exhibitions etc. we were essentially blocked by the system. The money available could not be used in the way that our projects needed!
If anyone is interested in developing material for popularisation, let me say that it is very rewarding and also helps in teaching (if you are involved with that at all). Local industry (especially involving science, engineering and technology) sometimes does give small amounts of money, in return for a mention as a sponsor. Sometimes, as we found, a little money can go quite a long way!
Posted by: Tim Porter on September 7, 2010 11:55 AM | Permalink | Reply to this
### Re: Why Mathematics Is Boring
John Baez wrote:
It could be easy: I wouldn’t know, since I’ve never tried to get money for that. I just do it for fun.
Yes, it is fun. But as a freelancer, it’s hard to afford the time to do it well. Which is frustrating.
Posted by: Jocelyn Paine on September 7, 2010 5:51 AM | Permalink | Reply to this
### Re: Why Mathematics Is Boring
I think that the style in Wikipedia/nLab which replaces Definition/Lemma/Theorem with Motivation/Definition/Lemma/Theorem/Example is very helpful. All the best mathematical writers do this (even if they omit the Motivation subtitle). First they provide a sketch of what the informal idea they are trying to formalise might be (they provide a story). Then they provide the formalisation. Finally they show how the formalisation gives new insight into the original idea that motivated the whole exercise. (Some authors have examples scattered throughout the exposition, which is even better.)
Of course, depending on the writer, non mathematical content can help or hinder regardless of structure. I found Russell’s writings very difficult to study because of the condescending nature of his exposition. Nearly every paragraph seemed to have the subtext “I am so much more clever than you that I don’t know why I am bothering to talk to you at all”. Just because the implicit statement is true doesn’t mean that it is helpful to rub my nose in it frequently.
I also want to add that Mathematics is used for lots of things besides physics. Although the recent discussion of Arrow’s theorem (and related issues) skirted the original motivation, this is an example of interesting new mathematical thinking being generated in social science. Personally, I work in Transport Planning where graph theory interacts with econometrics to generate interesting maths. Game Theory is another example of maths forming because of the needs of social science.
Posted by: Roger Witte on August 31, 2010 2:24 PM | Permalink | Reply to this
### Re: Why Mathematics Is Boring
An exception to the Math/Youth dogma came from Dick Dean, who’d been a meteorologist since his Army days. In his mid-40s, he discovered that there was a field called “Abstract Algebra.” It was beautiful to him in a way that he hadn’t known possible. He had become a student all over again, then published, then a professor, whom I delighted in learning from at Caltech.
His textbooks were unusual in showing all the typical dead ends of hacking through the bush on the way to a half-glimpsed theorem. But even he had been transformed by reading a book. I had not been hit on the head by a book, any more than Newton was actually hit on the head by a falling apple. What happened to me was inexplicable.
Posted by: Jonathan Vos Post on August 31, 2010 10:38 PM | Permalink | Reply to this
### Re: Why Mathematics Is Boring
I first learned algebra from Richard Dean’s textbook on the subject (who is surely the Dick Dean of whom you wrote). It was a gift from my brother’s maths teacher (his copy from his student days), and, perhaps more than any other, was the book that got me on my way in mathematics.
Posted by: Matthew Emerton on September 1, 2010 4:40 AM | Permalink | Reply to this
### Re: Why Mathematics Is Boring
Ravi Vakil collects such information.
Posted by: Thomas on September 1, 2010 2:42 PM | Permalink | Reply to this
### Re: Why Mathematics Is Boring
The boredom starts early claims Michael Green.
Posted by: David Corfield on March 26, 2011 5:26 PM | Permalink | Reply to this
|
2016-08-28 06:52:51
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 3, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.511368989944458, "perplexity": 1410.59476048427}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-36/segments/1471982935857.56/warc/CC-MAIN-20160823200855-00088-ip-10-153-172-175.ec2.internal.warc.gz"}
|
http://mathoverflow.net/questions/179611/construction-of-nonmeasurable-sets
|
# construction of nonmeasurable sets
I have a history question for which I've had trouble finding a good answer.
The common story about nonmeasurable sets is that Vitali showed that one existed using the Axiom of Choice, and Lebesgue et al. put the blame squarely on this axiom and its non-constructive character. It was noticed, however, that some amount of choice was required to get measure theory off the ground; Dependent Choice seemed to be the principle typically employed. But the full Axiom of Choice, which allows uncountably many arbitrary choices to be made, is of a different character, and is the culprit behind the pathological sets. This viewpoint was not really justified until Solovay showed in the 1960s that ZF+DC could not prove the existence of a nonmeasurable set, assuming the consistency of an inaccessible cardinal.
My question is, in the many years before Solovay's theorem, was there any effort aimed at showing the existence of a nonmeasurable set without the use of the full AC? Was something like the following question ever posed or worked on: "Can constructions similar to those of Vitali, Hausdorff, and Banach-Tarski be done without appeal to the Axiom of Choice?"
-
I would assume that Solovay's theorem was so celebrated precisely because people had cared about this issue. So surely people thought about it? – Joel David Hamkins Aug 28 at 23:44
I wrote to Solovay about it, and we'll see if he has anything to say. – Joel David Hamkins Aug 29 at 0:19
Thanks Joel! I will be very interested to hear what he says. – Monroe Eskew Aug 29 at 0:32
You may want to take a look at Lebesgue's writings. Bressoud's A radical approach to Lebesgue's theory of integration claims (on p. 154) that "Vitali's nonmeasurable set, appearing less than a year later [than Zermelo's Well-ordering theorem], was greeted by Lebesgue and many others as an empty exercise. They wanted an example of a nonmeasurable set whose construction would not depend on the axiom of choice." – Andres Caicedo Aug 29 at 1:12
Paul Cohen posed the question of getting a model of "All Sets Lebesgue Measurable" in his early talks on his own results. (He did not mention the principle of Dependent Choices. Adding that to the problem was my idea.) I know of no work trying to prove the Vitali result constructively. Certainly Cohen's conjecture (which I presume was widely shared) was that the use of choice was essential.
It is quite striking (if one works through Halmos) that all the positive results in measure theory can be carried out in ZF + DC. Only the counterexample section uses full choice.
-
If my memory serves me right, first came the proof that the Lebesgue measure can be extended to all sets (without choice, and without large cardinals, of course), right? – Asaf Karagila Aug 29 at 1:26
Did Cohen not even mention countable choice of reals in this connection? That seems needed just to get Lebesgue measure theory started (e.g., proving countable additivity). As far as I can see, DC becomes important only in "higher" parts of the theory. (To formulate a specific conjecture: Countable choice suffices for all the measure theory that we require graduate students to know for their qualifying exams.) – Andreas Blass Aug 29 at 14:59
As I recall, Cohen didn't mention "countable choice". But this was an off hand remark (posing the problem) at the end of his lecture. It seemed to me at the time I did this stuff that full DC was needed for the Radon-Nikodym theorem. But as I recall, I convinced myself a few years ago that a different proof of Radon-Nikodym than the one given in Halmos could be done with just AC_omega. – Bob Solovay Sep 4 at 1:57
Re Asaf's remark. Yes, I had this result first. Something very like my proof of this was subsequently published by Sacks. – Bob Solovay Sep 4 at 2:01
I would like to point out that the question on the existence of a non-measurable set without the use of the full AC was technically established in the literature before Solovay's result (which afaik goes back to March-July 1964).
In 1938 Sierpinski had established (cf. "Fonctions additives non complètement additives et fonctions non mesurables", Fund. Math.) that a non-measurable set could be constructed from the assumption (in modern terminology) that there is a prime ideal on the power set of the natural numbers extending the ideal of finite sets. He explains that the existence of such a prime ideal was proven by Tarski (cf. "Une contribution à la théorie de la mesure", Fund. Math.) with the aid of the Axiom of Choice (he proved it by transfinite induction).
But it remained an open question, especially in the 50's after Henkin's results, whether the existence of such prime ideals in the power sets, or, more generally, in Boolean algebras, was or was not weaker than the Axiom of Choice. This was eventually settled by Halpern, who proved that it was, and his results first appeared in his doctoral dissertation, submitted in the spring of 1962.
Sierpinski's construction is quite simple: define a function $f$ on a real number $x$ to be either $1$ or $0$, depending on whether the set of positions of ones in (the non-integer part of) its dyadic expansion (choosing the finite development for the rationals) is or is not in the prime ideal defined on the power set of the natural numbers extending the ideal of finite sets. It follows that $f$ has arbitrarily small periods (all numbers of the form $2^{-n}$) and that $f(1-x)=1-f(x)$. From this and the fact that $f$ only takes the values $0$ and $1$ it is not hard to show that it cannot be measurable, and the preimage of $0$ provides a non-measurable set.
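In symbols, here is a sketch of the same construction (the notation $\mathcal{I}$ for the prime ideal extending the ideal of finite sets, and $x_n$ for the binary digits, is mine, not Sierpinski's):
$$f(x)=\begin{cases}1 & \text{if }\{\,n\in\mathbb{N}: x_n=1\,\}\in\mathcal{I},\\ 0 & \text{otherwise,}\end{cases}\qquad\text{where } x-\lfloor x\rfloor=\sum_{n\geq 1}x_n 2^{-n},$$
taking the terminating expansion for dyadic rationals. Changing finitely many digits does not change membership in $\mathcal{I}$, which gives the periods $2^{-n}$, and primeness of $\mathcal{I}$ means exactly one of a digit set and its complement belongs to it, which gives $f(1-x)=1-f(x)$.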
-
This is a very good answer. It shows that there was some work on finding nonmeasurable sets without the full AC, perhaps not by outright trying to prove it with only DC, but by weakening the hypothesis. – Monroe Eskew Aug 29 at 19:49
And Halpern did it without forcing! – Monroe Eskew Aug 29 at 20:02
Now I am very curious whether anyone investigated measure theory in Fraenkel-Mostowski permutation models. – Monroe Eskew Aug 29 at 20:23
@Monroe: Nothing to investigate. Since the reals are considered as pure sets. In permutation models pure sets are well-ordered. So everything works as in $\sf ZFC$. – Asaf Karagila Aug 29 at 21:08
@Asaf: Maybe one could still investigate measurability in these models with respect to more general (outer-) measure spaces involving the urelements? – Monroe Eskew Aug 29 at 21:45
|
2014-10-26 02:16:35
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8564527034759521, "perplexity": 708.3564460290419}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-42/segments/1414119653672.23/warc/CC-MAIN-20141024030053-00062-ip-10-16-133-185.ec2.internal.warc.gz"}
|
https://mathematica.stackexchange.com/questions/156112/how-are-accuracy-and-precision-related-mathematica-for-a-given-operation/156116
|
# How are Accuracy and Precision related Mathematica for a given operation?
The common understanding of Accuracy and Precision in the English language is given by this figure.
Inspired by this question, I have a follow-up question relating Accuracy and Precision in Mathematica and the Wolfram Language.
How do we understand the relationship between Accuracy and Precision in the following example?
Accuracy[SetPrecision[SetAccuracy[12.3, 20], 15]]
13.9101
Precision[SetAccuracy[SetPrecision[12.3, 20], 15]]
16.0899
Where are these values 13.9101 and 16.0899 coming from, exactly?
Given an operation such as Subtract, Plus or Times, how do we predict the Accuracy and Precision of the outcome?
Precision[Times[SetPrecision[10, 3], SetPrecision[1, 7]]]
2.99996
Precision[Plus[SetPrecision[10, 3], SetPrecision[1, 7]]]
3.04139
Accuracy[Plus[SetAccuracy[10, 3], SetAccuracy[1, 7]]]
2.99996
Accuracy[Times[SetAccuracy[10, 3], SetAccuracy[1, 7]]]
2.99957
• Given an approximate nonzero number x with an uncertainty dx, then according to the docs, Accuracy[x], Precision[x] and dx are related as follows: Accuracy is -Log[10,dx] and Precision is -Log[10,dx/x]. Propagated error is computed according to the usual rules, I think. Sep 19 '17 at 17:52
• Look up RealExponent. Accuracy and Precision are not independent, and their relationship depends on the size of the number. Since they are not independent, it makes no sense to stack them as you suggest in your comment above. Sep 19 '17 at 17:57
• The uncertainty dx is the same, whether determined from Accuracy or Precision, no? Sep 19 '17 at 18:00
• Ah, perhaps I should have said "Given a number x with a given precision, then the accuracy, precision and uncertainty dx are related as follows...." In the internal representation of nonzero real numbers, it is the precision that is stored or specified. Sep 19 '17 at 18:07
## Precision is the principal representation of numerical error
Except for numbers that are equal to zero, error in arbitrary-precision numbers is stored internally as its precision. For numbers equal to zero, the accuracy is stored, because the precision turns out to be undefined (even if Precision[zero] is defined). One way to view zeros is as a form of Underflow[] for arbitrary-precision numbers, which I will explain below.
For a nonzero number $x$ with precision $p$, the error bound $dx>0$ and accuracy $a$, as defined by Precision and Accuracy, are related as follows: $$p = - \log_{10} |dx / x| \,, \quad a = - \log_{10} dx\,.\tag{1}$$ An arbitrary-precision number $x$ represents a real number $x^*$ in the interval $$x - dx < x^* < x + dx\,.$$
## Accuracy and Precision are related through RealExponent
The relation between Accuracy and Precision is given by
RealExponent[x] + Accuracy[x] == Precision[x]
Therefore, since RealExponent[12.3] -> 1.08991, Accuracy[SetPrecision[12.3, 15]] must be 15 - 1.08991 -> 13.9101.
Similarly, Precision[SetAccuracy[12.3, 15]] is 15 + 1.08991 -> 16.0899.
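The same bookkeeping can be reproduced outside Mathematica. Below is a minimal Python sketch of the relation accuracy = precision - RealExponent (the function names are mine; only the relation itself comes from the answer above):

```python
import math

def real_exponent(x):
    """RealExponent[x] for nonzero x is just log10 of |x|."""
    return math.log10(abs(x))

def accuracy_from_precision(x, precision):
    """Accuracy = Precision - RealExponent[x]."""
    return precision - real_exponent(x)

def precision_from_accuracy(x, accuracy):
    """Precision = Accuracy + RealExponent[x]."""
    return accuracy + real_exponent(x)

print(accuracy_from_precision(12.3, 15))  # ~13.9101
print(precision_from_accuracy(12.3, 15))  # ~16.0899
```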
## Operations
To get the Accuracy after an operation, we just need to distribute the errors.
The error of Times[a, b] is
ExpandAll[(a + δa) (b + δb) - a b]
b δa + a δb + δa δb
where Accuracy[a] -> -Log10[δa].
## Numerical example
y = SetAccuracy[10, 3];
z = SetAccuracy[1, 7];
Accuracy[y z]
2.99957
N[-Log10[b δa + a δb + δa δb]] /. {a -> y, b -> z, δa -> Power[10, -3], δb -> Power[10, -7]}
2.99957
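The same propagation step can be checked with plain floating point in Python (a sketch of the error arithmetic only, not of Mathematica's internal significance tracking):

```python
import math

# error bounds implied by SetAccuracy[10, 3] and SetAccuracy[1, 7]
a, b = 10.0, 1.0
da, db = 10.0**-3, 10.0**-7

# error of the product: (a + da)(b + db) - a*b = b*da + a*db + da*db
d_prod = b * da + a * db + da * db
print(-math.log10(d_prod))  # ~2.99957, the Accuracy of a*b
```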
|
2022-01-18 12:53:09
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7274452447891235, "perplexity": 2077.4999996836736}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320300849.28/warc/CC-MAIN-20220118122602-20220118152602-00504.warc.gz"}
|
http://math.stackexchange.com/questions/62521/estimating-a-solution-with-the-jacobi-method-for-solving-ax-b
|
# Estimating a solution with the Jacobi Method for solving Ax = b
I'm trying to understand how the Jacobi method works and would appreciate a walk-through of the method with a very very simple example. In particular, I don't fully understand how one goes from the linear equation to a matrix.
Ah, my confusion is solved: http://en.wikipedia.org/wiki/System_of_linear_equations
-
There are at least two methods named after Jacobi: one is a method for getting the eigenvalues of a symmetric matrix, and one is an iterative method for solving $\mathbf A\mathbf x=\mathbf b$. You'll need to be clear which one you're interested in. – J. M. Sep 7 '11 at 10:33
I assume I'd be interested in the iterative method. For example, I'd like to solve TunkRank. – lerninit Sep 7 '11 at 10:37
The example in Wikipedia looks particularly straightforward and simple. – J. M. Sep 7 '11 at 11:15
@J.M. Please don't insult people who don't understand what you consider simple. – lerninit Sep 7 '11 at 11:21
@J.M. For example, how was A derived from a linear equation? – lerninit Sep 7 '11 at 11:22
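Since the question asks for a very simple walk-through: in the Jacobi method each unknown is solved from its own equation using the previous iterate's values for all the other unknowns. Here is a minimal Python sketch with an illustrative 2×2 system (the system is mine, not from this thread). The matrix A is just the table of coefficients of the unknowns and b collects the right-hand sides, which is the step the question asks about.

```python
import numpy as np

def jacobi(A, b, x0=None, iterations=25):
    """Jacobi iteration: x_i <- (b_i - sum_{j != i} A_ij * x_j) / A_ii, repeated."""
    A, b = np.asarray(A, dtype=float), np.asarray(b, dtype=float)
    x = np.zeros_like(b) if x0 is None else np.asarray(x0, dtype=float)
    D = np.diag(A)              # diagonal entries A_ii
    R = A - np.diagflat(D)      # off-diagonal part of A
    for _ in range(iterations):
        x = (b - R @ x) / D
    return x

# Example: 4x + y = 9 and x + 3y = 7, written as the matrix equation A x = b.
A = [[4.0, 1.0],
     [1.0, 3.0]]
b = [9.0, 7.0]
print(jacobi(A, b))  # close to the exact solution x = 20/11, y = 19/11
```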
|
2014-12-20 20:08:44
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9181950092315674, "perplexity": 783.8713669029015}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-52/segments/1418802770371.28/warc/CC-MAIN-20141217075250-00174-ip-10-231-17-201.ec2.internal.warc.gz"}
|
http://mathhelpforum.com/differential-geometry/110218-order-axioms-proof.html
|
# Math Help - order axioms proof.
1. ## order axioms proof.
Hello all,
Can someone please help me with the following proof? I have an exam on Monday and can't figure out this one question I found online:
Use order axioms to show that 0 < a < b & 0 < c < d implies that
a/d < b/c. Thanks in advance.
I hope I remember: $\frac{a}{d}<\frac{b}{c} \Longleftrightarrow ac<bd \Longleftrightarrow ac-bd<0 \Longleftrightarrow ac-ad+ad-bd<0 \Longleftrightarrow a(c-d)+d(a-b)<0$, and since the given data yield $c-d<0$ and $a-b<0$ while $a$ and $d$ (and everybody else here) are positive, the last expression is indeed negative, so $\frac{a}{d}<\frac{b}{c}$ holds.
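For comparison (this is not part of the original reply), the claimed inequality also follows directly from the order axioms:
$$\frac{a}{d}<\frac{b}{d}<\frac{b}{c},$$
where the first step uses $a<b$ together with $d>0$, and the second uses $0<c<d$ together with $b>0$.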
|
2015-07-04 16:50:16
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 5, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8281309008598328, "perplexity": 1120.2613188980602}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-27/segments/1435375096773.65/warc/CC-MAIN-20150627031816-00014-ip-10-179-60-89.ec2.internal.warc.gz"}
|
https://zbmath.org/authors/?q=ai%3Aadamczewski.boris
|
# zbMATH — the first resource for mathematics
Documents Indexed: 51 Publications since 2002 Reviewing Activity: 22 Reviews
#### Co-Authors
11 single-authored 19 Bugeaud, Yann 7 Bell, Jason P. 3 Cassaigne, Julien 3 Faverjon, Colin 2 Delaygue, Éric 2 Luca, Florian 2 Rivoal, Tanguy 2 Siegel, Anne 2 Steiner, Wolfgang 1 Allouche, Jean-Paul Simon 1 Damanik, David 1 Davison, Les 1 Frougny, Christiane 1 Jouhet, Frédéric 1 Le Gonidec, Marion 1 Octavia, Gaël 1 Rampersad, Narad
#### Serials
3 Acta Arithmetica 3 Annales de l’Institut Fourier 3 Bulletin of the London Mathematical Society 3 Comptes Rendus. Mathématique. Académie des Sciences, Paris 2 Annales Scientifiques de l’École Normale Supérieure. Quatrième Série 2 Journal für die Reine und Angewandte Mathematik 2 Proceedings of the London Mathematical Society. Third Series 2 Theoretical Computer Science 2 Transactions of the American Mathematical Society 2 IMRN. International Mathematics Research Notices 2 Journal de Théorie des Nombres de Bordeaux 1 American Mathematical Monthly 1 Mathematical Proceedings of the Cambridge Philosophical Society 1 Acta Mathematica 1 Compositio Mathematica 1 Gazette des Mathématiciens 1 Inventiones Mathematicae 1 Journal of Algebra 1 Journal of the London Mathematical Society. Second Series 1 Journal of Number Theory 1 Mathematische Annalen 1 Proceedings of the American Mathematical Society 1 Advances in Applied Mathematics 1 Ergodic Theory and Dynamical Systems 1 Glasnik Matematički. Serija III 1 Documenta Mathematica 1 Séminaire Lotharingien de Combinatoire 1 Journal of Integer Sequences 1 Annals of Mathematics. Second Series 1 Journal of the European Mathematical Society (JEMS) 1 Annales Henri Poincaré 1 Fizikos ir Matematikos Fakulteto Mokslinio Seminaro Darbai 1 Annali della Scuola Normale Superiore di Pisa. Classe di Scienze. Serie V 1 Actes des Rencontres du C.I.R.M.
#### Fields
47 Number theory (11-XX) 15 Computer science (68-XX) 7 Dynamical systems and ergodic theory (37-XX) 2 Combinatorics (05-XX) 2 Commutative algebra (13-XX) 2 Measure and integration (28-XX) 1 General and overarching topics; collections (00-XX) 1 History and biography (01-XX) 1 Field theory and polynomials (12-XX) 1 Algebraic geometry (14-XX) 1 Special functions (33-XX) 1 Ordinary differential equations (34-XX) 1 Difference and functional equations (39-XX) 1 Operator theory (47-XX) 1 Numerical analysis (65-XX)
#### Citations contained in zbMATH
45 Publications have been cited 444 times in 249 Documents Cited by Year
On the complexity of algebraic numbers. I: Expansions in integer bases. Zbl 1195.11094
2007
On the complexity of algebraic numbers. II: Continued fractions. Zbl 1195.11093
2005
On the complexity of algebraic numbers. Zbl 1119.11019
Adamczewski, Boris; Bugeaud, Yann; Luca, Florian
2004
Balances for fixed points of primitive substitutions. Zbl 1059.68083
2003
Symbolic discrepancy and self-similar dynamics. Zbl 1066.11032
2004
Diophantine properties of real numbers generated by finite automata. Zbl 1134.11011
2006
Transcendence measures and quantitative aspects of the Thue-Siegel-Roth-Schmidt method. Zbl 1200.11054
2010
Dynamics for $$\beta$$-shifts and Diophantine approximation. Zbl 1140.11035
2007
Rotation encoding and self-similarity phenomenon. Zbl 1113.37003
2002
Irrationality measures for some automatic real numbers. Zbl 1205.11080
2009
Palindromic continued fractions. Zbl 1126.11036
2007
Distribution of the sequence $$(n\alpha)_{n\in{\mathbb N}}$$ and substitutions. Zbl 1060.11043
2004
On the Maillet-Baker continued fractions. Zbl 1145.11054
2007
Mahler method: linear relations, transcendence and applications to automatic numbers. Zbl 1440.11132
2017
On vanishing coefficients of algebraic power series over fields of positive characteristic. Zbl 1257.11027
2012
Transcendence measures for continued fractions involving repetitive or symmetric patterns. Zbl 1200.11053
2010
Reversals and palindromes in continued fractions. Zbl 1118.68110
2007
On the Littlewood conjecture in simultaneous Diophantine approximation. Zbl 1093.11052
2006
An analogue of Cobham’s theorem for fractals. Zbl 1229.28007
2011
Rational numbers with purely periodic $$\beta$$-expansion. Zbl 1211.11010
Adamczewski, Boris; Frougny, Christiane; Siegel, Anne; Steiner, Wolfgang
2010
On the expansion of some exponential periods in an integer base. Zbl 1247.11095
2010
Diagonalization and rationalization of algebraic Laurent series. Zbl 1318.13033
2013
Real numbers of sublinear complexity: irrationality and transcendence measures. Zbl 1255.11037
2011
A short proof of the transcendence of Thue-Morse continued fractions. Zbl 1132.11330
2007
Linearly recurrent circle map subshifts and an application to Schrödinger operators. Zbl 1023.47019
2002
A Liouville-like approach for the transcendence of some real numbers. Zbl 1046.11051
2004
On the transcendence of real numbers with a regular expansion. Zbl 1052.11052
2003
A problem about Mahler functions. Zbl 1432.11086
2017
Non-zero digits in the expansion of irrational algebraic numbers in an integer base. Zbl 1264.11067
2012
Non-converging continued fractions related to the Stern diatomic sequence. Zbl 1210.11077
2010
Continued fractions and transcendental numbers. Zbl 1152.11034
Adamczewski, Boris; Bugeaud, Yann; Davison, Les
2006
Congruences modulo cyclotomic polynomials and algebraic independence for $$q$$-series. Zbl 1405.11019
Adamczewski, Boris; Bell, Jason P.; Delaygue, Éric; Jouhet, Frédéric
2017
Function fields in positive characteristic: expansions and Cobham’s theorem. Zbl 1151.11060
2008
Real and $$p$$-adic expansions involving symmetric patterns. Zbl 1113.11041
2006
On patterns occurring in binary algebraic numbers. Zbl 1151.11036
2008
Algebraic independence of $$G$$-functions and congruences “à la Lucas”. Zbl 1450.11075
Adamczewski, Boris; Bell, Jason P.; Delaygue, Éric
2019
Mahler’s method, transcendence and linear relations: effective aspects. Zbl 1441.11179
2018
Exceptional values of $$E$$-functions at algebraic points. Zbl 1450.11076
2018
Transcendence and Diophantine approximation. Zbl 1271.11073
2010
On the decimal expansion of algebraic numbers. Zbl 1138.11028
2005
On powers of words occurring in binary codings of rotations. Zbl 1113.37002
2005
The many faces of the Kempner number. Zbl 1286.11109
2013
On the Littlewood conjecture in fields of power series. Zbl 1223.11083
2007
On the density exponent of algebraic numbers. Zbl 1124.11035
2007
On the independence of expansions of algebraic numbers in an integer base. Zbl 1120.11003
2007
Algebraic independence of $$G$$-functions and congruences “à la Lucas”. Zbl 1450.11075
Adamczewski, Boris; Bell, Jason P.; Delaygue, Éric
2019
Mahler’s method, transcendence and linear relations: effective aspects. Zbl 1441.11179
2018
Exceptional values of $$E$$-functions at algebraic points. Zbl 1450.11076
2018
Mahler method: linear relations, transcendence and applications to automatic numbers. Zbl 1440.11132
2017
A problem about Mahler functions. Zbl 1432.11086
2017
Congruences modulo cyclotomic polynomials and algebraic independence for $$q$$-series. Zbl 1405.11019
Adamczewski, Boris; Bell, Jason P.; Delaygue, Éric; Jouhet, Frédéric
2017
Diagonalization and rationalization of algebraic Laurent series. Zbl 1318.13033
2013
The many faces of the Kempner number. Zbl 1286.11109
2013
On vanishing coefficients of algebraic power series over fields of positive characteristic. Zbl 1257.11027
2012
Non-zero digits in the expansion of irrational algebraic numbers in an integer base. Zbl 1264.11067
2012
An analogue of Cobham’s theorem for fractals. Zbl 1229.28007
2011
Real numbers of sublinear complexity: irrationality and transcendence measures. Zbl 1255.11037
2011
Transcendence measures and quantitative aspects of the Thue-Siegel-Roth-Schmidt method. Zbl 1200.11054
2010
Transcendence measures for continued fractions involving repetitive or symmetric patterns. Zbl 1200.11053
2010
Rational numbers with purely periodic $$\beta$$-expansion. Zbl 1211.11010
Adamczewski, Boris; Frougny, Christiane; Siegel, Anne; Steiner, Wolfgang
2010
On the expansion of some exponential periods in an integer base. Zbl 1247.11095
2010
Non-converging continued fractions related to the Stern diatomic sequence. Zbl 1210.11077
2010
Transcendence and Diophantine approximation. Zbl 1271.11073
2010
Irrationality measures for some automatic real numbers. Zbl 1205.11080
2009
Function fields in positive characteristic: expansions and Cobham’s theorem. Zbl 1151.11060
2008
On patterns occurring in binary algebraic numbers. Zbl 1151.11036
2008
On the complexity of algebraic numbers. I: Expansions in integer bases. Zbl 1195.11094
2007
Dynamics for $$\beta$$-shifts and Diophantine approximation. Zbl 1140.11035
2007
Palindromic continued fractions. Zbl 1126.11036
2007
On the Maillet-Baker continued fractions. Zbl 1145.11054
2007
Reversals and palindromes in continued fractions. Zbl 1118.68110
2007
A short proof of the transcendence of Thue-Morse continued fractions. Zbl 1132.11330
2007
On the Littlewood conjecture in fields of power series. Zbl 1223.11083
2007
On the density exponent of algebraic numbers. Zbl 1124.11035
2007
On the independence of expansions of algebraic numbers in an integer base. Zbl 1120.11003
2007
Diophantine properties of real numbers generated by finite automata. Zbl 1134.11011
2006
On the Littlewood conjecture in simultaneous Diophantine approximation. Zbl 1093.11052
2006
Continued fractions and transcendental numbers. Zbl 1152.11034
Adamczewski, Boris; Bugeaud, Yann; Davison, Les
2006
Real and $$p$$-adic expansions involving symmetric patterns. Zbl 1113.11041
2006
On the complexity of algebraic numbers. II: Continued fractions. Zbl 1195.11093
2005
On the decimal expansion of algebraic numbers. Zbl 1138.11028
2005
On powers of words occurring in binary codings of rotations. Zbl 1113.37002
2005
On the complexity of algebraic numbers. Zbl 1119.11019
Adamczewski, Boris; Bugeaud, Yann; Luca, Florian
2004
Symbolic discrepancy and self-similar dynamics. Zbl 1066.11032
2004
Distribution of the sequence $$(n\alpha)_{n\in{\mathbb N}}$$ and substitutions. Zbl 1060.11043
2004
A Liouville-like approach for the transcendence of some real numbers. Zbl 1046.11051
2004
Balances for fixed points of primitive substitutions. Zbl 1059.68083
2003
On the transcendence of real numbers with a regular expansion. Zbl 1052.11052
2003
Rotation encoding and self-similarity phenomenon. Zbl 1113.37003
2002
Linearly recurrent circle map subshifts and an application to Schrödinger operators. Zbl 1023.47019
2002
#### Cited by 295 Authors
29 Bugeaud, Yann 24 Adamczewski, Boris 9 Bell, Jason P. 9 Coons, Michael 9 Pelantová, Edita 7 Masáková, Zuzana 6 Berthé, Valérie 6 Cassaigne, Julien 6 Rigo, Michel 6 Rivoal, Tanguy 5 Rampersad, Narad 5 Shallit, Jeffrey O. 5 Thuswaldner, Jörg Maximilian 5 Zamboni, Luca Quardo 4 Bufetov, Aleksandr Igorevich 4 Dubickas, Artūras 4 Hbaib, Mohamed 4 Le Gonidec, Marion 4 Turek, Ondřej 4 Wen, Zhixiong 4 Wu, Wen 3 Allouche, Jean-Paul Simon 3 Badziahin, Dzmitry A. 3 Charlier, Emilie 3 Damanik, David 3 Fischler, Stéphane 3 Hubert, Pascal 3 Labbé, Sébastien 3 Lenz, Daniel H. 3 Leroy, Julien 3 Richomme, Gwénaël 3 Rowland, Eric S. 3 Schleischitz, Johannes 3 Simonsen, Jakob Grue 3 Solomyak, Boris 3 Steiner, Wolfgang 3 Vuillon, Laurent 2 Ambrož, Petr 2 Balková, L’ubomíra 2 Bertazzon, Jean-Francois 2 Blondin Massé, Alexandre 2 Bressaud, Xavier 2 Brlek, Srečko 2 Bundschuh, Peter 2 Chen, Jin 2 Chyzak, Frédéric 2 Dumas, Philippe 2 Durand, Fabien 2 Fang, Lulu 2 Faverjon, Colin 2 Fernandes, Gwladys 2 Fici, Gabriele 2 Glen, Amy 2 Guo, Yingjun 2 Hieronymi, Philipp 2 Kaneko, Hajime 2 Kim, Donghan 2 Kumar, Veekesh 2 Meher, Nabin Kumar 2 Nathanson, Melvyn Bernard 2 Puzynina, Svetlana 2 Rond, Guillaume 2 Roy, Damien 2 Saari, Kalle 2 Salimov, Pavel Vadimovich 2 Scheicher, Klaus 2 Schmidt, Thomas A. 2 Shutov, Anton V. 2 Siegel, Anne 2 Starosta, Štěpán 2 Straub, Armin 2 Surer, Paul 2 Väänänen, Keijo O. 2 Walsberg, Erik 2 Wu, Min 2 Yassawi, Reem 2 Zorin, Evgeniy V. 1 Abbes, Farah 1 Aliste-Prieto, José 1 Ammous, Basma 1 Amou, Masaaki 1 Aroca, Fuensanta 1 Avgustinovich, Sergeĭ Vladimirovich 1 Baake, Michael 1 Bailey, David Harold 1 Baker, Simon 1 Banderier, Cyril 1 Barat, Guy 1 Baxa, Christoph 1 Belhadef, Rafik 1 Bengoechea, Paloma 1 Berlinkov, Artemi 1 Bernat, Julien 1 Blanchet-Sadri, Francine 1 Bonanno, Claudio 1 Borisyuk, Alla 1 Borwein, Jonathan Michael 1 Bostan, Alin 1 Bourdon, Jérémie 1 Boyland, Philip L. ...and 195 more Authors
#### Cited in 93 Serials
22 Theoretical Computer Science 13 Journal de Théorie des Nombres de Bordeaux 12 Proceedings of the American Mathematical Society 12 Ergodic Theory and Dynamical Systems 10 Annales de l’Institut Fourier 7 Journal of Number Theory 7 RAIRO. Theoretical Informatics and Applications 6 Advances in Mathematics 6 Transactions of the American Mathematical Society 6 Advances in Applied Mathematics 6 Journal of the European Mathematical Society (JEMS) 5 Mathematical Proceedings of the Cambridge Philosophical Society 5 International Journal of Foundations of Computer Science 5 Indagationes Mathematicae. New Series 4 Bulletin of the Australian Mathematical Society 4 Mathematische Zeitschrift 3 Communications in Mathematical Physics 3 Acta Arithmetica 3 Journal für die Reine und Angewandte Mathematik 3 Documenta Mathematica 3 Journal of the Australian Mathematical Society 3 International Journal of Number Theory 3 RAIRO. Theoretical Informatics and Applications 3 Actes des Rencontres du C.I.R.M. 2 Discrete Applied Mathematics 2 Mathematics of Computation 2 Annali di Matematica Pura ed Applicata. Serie Quarta 2 Canadian Journal of Mathematics 2 Journal of Algebra 2 Journal of Combinatorial Theory. Series A 2 Mathematische Annalen 2 Proceedings of the London Mathematical Society. Third Series 2 European Journal of Combinatorics 2 Fractals 2 The Ramanujan Journal 2 Theory of Computing Systems 2 Annals of Combinatorics 2 Integers 2 Comptes Rendus. Mathématique. Académie des Sciences, Paris 2 Mediterranean Journal of Mathematics 1 Discrete Mathematics 1 Israel Journal of Mathematics 1 Jahresbericht der Deutschen Mathematiker-Vereinigung (DMV) 1 Nonlinearity 1 Periodica Mathematica Hungarica 1 Rocky Mountain Journal of Mathematics 1 Reviews in Mathematical Physics 1 Chaos, Solitons and Fractals 1 The Mathematical Intelligencer 1 Beiträge zur Algebra und Geometrie 1 Acta Mathematica 1 Bulletin of the London Mathematical Society 1 Bulletin de la Société Mathématique de France 1 Compositio Mathematica 1 Duke Mathematical Journal 1 Functiones et Approximatio. Commentarii Mathematici 1 Glasgow Mathematical Journal 1 Inventiones Mathematicae 1 Journal of the London Mathematical Society. Second Series 1 Journal of Pure and Applied Algebra 1 Manuscripta Mathematica 1 Notre Dame Journal of Formal Logic 1 Osaka Journal of Mathematics 1 Pacific Journal of Mathematics 1 Rendiconti del Seminario Matematico della Università di Padova 1 Acta Mathematica Hungarica 1 Physica D 1 Journal of Symbolic Computation 1 Information and Computation 1 Atti della Accademia Nazionale dei Lincei. Classe di Scienze Fisiche, Matematiche e Naturali. Serie IX. Rendiconti Lincei. Matematica e Applicazioni 1 International Journal of Algebra and Computation 1 Journal de Mathématiques Pures et Appliquées. Neuvième Série 1 Linear Algebra and its Applications 1 Bulletin of the American Mathematical Society. New Series 1 Proceedings of the Indian Academy of Sciences. Mathematical Sciences 1 International Journal of Bifurcation and Chaos in Applied Sciences and Engineering 1 Russian Mathematics 1 Combinatorics, Probability and Computing 1 Journal of Mathematical Sciences (New York) 1 St. Petersburg Mathematical Journal 1 Finite Fields and their Applications 1 The Bulletin of Symbolic Logic 1 Séminaire Lotharingien de Combinatoire 1 European Series in Applied and Industrial Mathematics (ESAIM): Proceedings 1 Chaos 1 Journal of Integer Sequences 1 Annals of Mathematics. 
Second Series 1 JP Journal of Algebra, Number Theory and Applications 1 Journal of Physics A: Mathematical and Theoretical 1 Journal of Modern Dynamics 1 Logical Methods in Computer Science 1 Algebra & Number Theory 1 $$p$$-Adic Numbers, Ultrametric Analysis, and Applications
#### Cited in 30 Fields
166 Number theory (11-XX) 82 Computer science (68-XX) 53 Dynamical systems and ergodic theory (37-XX) 20 Combinatorics (05-XX) 17 Measure and integration (28-XX) 10 Commutative algebra (13-XX) 9 Mathematical logic and foundations (03-XX) 8 Algebraic geometry (14-XX) 8 Convex and discrete geometry (52-XX) 5 Difference and functional equations (39-XX) 4 Field theory and polynomials (12-XX) 4 Approximations and expansions (41-XX) 3 Ordinary differential equations (34-XX) 3 Operator theory (47-XX) 3 General topology (54-XX) 3 Probability theory and stochastic processes (60-XX) 2 Several complex variables and analytic spaces (32-XX) 2 Special functions (33-XX) 2 Integral equations (45-XX) 1 General and overarching topics; collections (00-XX) 1 History and biography (01-XX) 1 Order, lattices, ordered algebraic structures (06-XX) 1 General algebraic systems (08-XX) 1 Group theory and generalizations (20-XX) 1 Real functions (26-XX) 1 Functions of a complex variable (30-XX) 1 Numerical analysis (65-XX) 1 Quantum theory (81-XX) 1 Statistical mechanics, structure of matter (82-XX) 1 Information and communication theory, circuits (94-XX)
|
2021-03-07 22:00:50
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.40819352865219116, "perplexity": 3049.91961847806}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178378872.82/warc/CC-MAIN-20210307200746-20210307230746-00538.warc.gz"}
|
https://www.physicsforums.com/threads/force-of-charged-particles.102786/
|
# Force of charged particles
1. Dec 4, 2005
### comtngal
When a positive and negative charge are held close to each other and then released, does the force on each particle increase, decrease, or stay the same?
2. Dec 4, 2005
### emptymaximum
when you release the particles, what happens?
recall Coulomb's law:
$$\vec{F} = \frac{1} {4 \pi \epsilon_0} \frac{q_1 q_2} {r^2} \hat{r}$$
Last edited: Dec 4, 2005
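A quick numerical illustration of the formula (the charge values and separations are arbitrary; they only show how the magnitude of the force behaves as $r$ changes):

```python
import math

def coulomb_force(q1, q2, r):
    """Magnitude of the Coulomb force, |F| = |q1*q2| / (4*pi*eps0*r**2)."""
    eps0 = 8.8541878128e-12  # vacuum permittivity in F/m
    return abs(q1 * q2) / (4 * math.pi * eps0 * r**2)

q1, q2 = 1e-6, -1e-6                 # +1 uC and -1 uC
print(coulomb_force(q1, q2, 0.10))   # ~0.9 N at 10 cm
print(coulomb_force(q1, q2, 0.05))   # ~3.6 N at 5 cm: four times larger
```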
|
2017-08-22 17:35:41
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4416516125202179, "perplexity": 3657.9755578707623}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886112533.84/warc/CC-MAIN-20170822162608-20170822182608-00667.warc.gz"}
|
http://ifc43-docs.standards.buildingsmart.org/IFC/RELEASE/IFC4x3/HTML/lexical/IfcRelAssociatesClassification.htm
|
IFC 4.3.0.1 (IFC4X3) development
# 5.1.3.30 IfcRelAssociatesClassification
## 5.1.3.30.1 Semantic definition
The objectified relationship IfcRelAssociatesClassification handles the assignment of a classification item (items of the select IfcClassificationSelect) to object occurrences (subtypes of IfcObject) or object types (subtypes of IfcTypeObject).
The relationship is used to assign a classification item, or a classification system itself to objects. Depending on the type of the RelatingClassification it is either:
• a reference to a classification item within an external classification system, or
• a reference to the classification system itself
The inherited attribute RelatedObjects defines the objects or object types to which the classification is applied. The attribute RelatingClassification is the reference to a classification applied to the object(s). A single RelatingClassification can thereby be applied to one or multiple objects.
## 5.1.3.30.5 Formal representation
ENTITY IfcRelAssociatesClassification
SUBTYPE OF (IfcRelAssociates);
RelatingClassification : IfcClassificationSelect;
END_ENTITY;
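As an illustration of how this relationship is typically read from a model, here is a small Python sketch using the open-source ifcopenshell library (the file name is a placeholder and the snippet is an assumption about typical usage, not part of the IFC specification itself):

```python
import ifcopenshell

# "model.ifc" is a placeholder path to any IFC file containing classifications
model = ifcopenshell.open("model.ifc")

for rel in model.by_type("IfcRelAssociatesClassification"):
    classification = rel.RelatingClassification   # an IfcClassificationSelect member
    for obj in rel.RelatedObjects:                # classified objects or object types
        print(obj.is_a(), obj.GlobalId, "->", classification.is_a())
```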
|
2022-08-18 11:17:59
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.19725662469863892, "perplexity": 8329.295733660685}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882573193.35/warc/CC-MAIN-20220818094131-20220818124131-00785.warc.gz"}
|
http://www.math.uwaterloo.ca/~cswamy/tutte/fa08/david.html
|
Friday, November 28, 2008
3:30 pm, MC 5158
## David Jao University of Waterloo
Constructing expander graphs from the Generalized Riemann Hypothesis
We present a construction of expander graphs obtained from Cayley graphs of narrow ray class groups, whose eigenvalue bounds follow from the Generalized Riemann Hypothesis. Our result implies that the Cayley graph of $(\mathbf{Z}/q\mathbf{Z})^*$ with respect to small prime generators is an expander. As another application, we explain the relationship between the expansion properties of these graphs and the security of the elliptic curve discrete logarithm problem.
Joint work with Stephen D. Miller and Ramarathnam Venkatesan.
|
2013-05-25 04:56:55
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6907301545143127, "perplexity": 698.470983928711}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368705543116/warc/CC-MAIN-20130516115903-00080-ip-10-60-113-184.ec2.internal.warc.gz"}
|
https://www.math.stonybrook.edu/preprintsearch?search=ims08-02
|
M. Lyubich and M. Martens
Renormalization in the Hénon family, II: The heteroclinic web
Abstract
We study highly dissipative Hénon maps $$F_{c,b}: (x,y) \mapsto (c-x^2-by, x)$$ with zero entropy. They form a region $\Pi$ in the parameter plane bounded on the left by the curve $W$ of infinitely renormalizable maps. We prove that Morse-Smale maps are dense in $\Pi$, but there exist infinitely many different topological types of such maps (even away from $W$). We also prove that in the infinitely renormalizable case, the average Jacobian $b_F$ on the attracting Cantor set $\mathcal{O}_F$ is a topological invariant. These results come from the analysis of the heteroclinic web of the saddle periodic points based on the renormalization theory. Along these lines, we show that the unstable manifolds of the periodic points form a lamination outside $\mathcal{O}_F$ if and only if there are no heteroclinic tangencies.
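For readers who want to experiment numerically, here is a minimal Python sketch of iterating the map in the form used above, $F_{c,b}(x,y) = (c - x^2 - by,\, x)$; the parameter values below are arbitrary illustrations, not taken from the paper:

```python
def henon_orbit(c, b, x0=0.0, y0=0.0, n=1000):
    """Iterate F_{c,b}: (x, y) -> (c - x**2 - b*y, x) and return the orbit."""
    orbit = [(x0, y0)]
    x, y = x0, y0
    for _ in range(n):
        x, y = c - x * x - b * y, x
        orbit.append((x, y))
    return orbit

# Example: a strongly dissipative choice (|b| small), started at the origin.
points = henon_orbit(c=1.0, b=0.01, n=20)
print(points[-3:])
```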
|
2021-06-15 04:40:47
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9192497730255127, "perplexity": 270.59169433634213}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487616657.20/warc/CC-MAIN-20210615022806-20210615052806-00550.warc.gz"}
|
https://www.vedantu.com/ncert-solutions/ncert-solutions-class-12-maths-chapter-8
|
# NCERT Solutions for Class 12 Maths Chapter 8 Application of Integrals
## NCERT Solutions for Class 12 Maths Chapter 8 - Application of Integrals
You can now download NCERT Solutions Class 12 Maths Chapter 8 PDF available at the official website of Vedantu. Our Application of Integrals Class 12 NCERT Solutions is made in an easily understandable manner by the subject matter experts of our team. The teachers are well-versed with the current CBSE syllabus, and their solutions are designed based on that. With these NCERT Solutions for Class 12 Maths Chapter 8, you would be able to quickly revise all the salient points of the chapter and remember all the important formulas.
Ch 8 Maths Class 12 is based on the application of integrals which can be a tricky topic for many of you. In order to master it, you need support from experts who have a thorough knowledge of this topic, and that’s what our team is well equipped with. On top of the ease of access to quality NCERT Solutions for Class 12 Maths Chapter Application of Integrals on our website, you also get constant support from our subject matter experts in case you get any problem on any topic.
### NCERT Solutions for Class 12 Maths – Free PDF Download
Having easy access to Maths solutions can give the required boost needed by students in today’s hectic lifestyle. With the readily available NCERT Solutions Class 12 Maths Chapter 8 PDF download at our website, students can revise their syllabus on the go.
### 8.1 Introduction
In the introduction of Chapter 8 of Class 12 Maths, you will be reminded of all the geometrical formulae you learned in the previous chapters. You would recall how these geometric formulae for calculating the areas of triangles, rectangles, trapeziums, etc. are the basis on which mathematics is applied to real-life problems. However, they are not able to determine areas enclosed by curves. That is where integral calculus comes into the picture.
You will rekindle what you learned about areas bounded by a curve in the previous chapter where definite integrals were calculated as the limit of a sum. In this chapter, you would use those concepts to find out areas between lines and arcs of circles, parabolas, simple curves and ellipses.
AOI Class 12 NCERT Solutions have many solved examples and a variety of questions neatly divided into different exercises. You will get rigorous practice with the set of questions presented in this chapter. Some of the prominent parts discussed here are:
• Elementary area i.e. the area located between any arbitrary positions
• The area under two curves
• The area that is bounded by a curve and a line
• Integrals of trigonometric identities
### 8.2 Area Under Simple Curves
This section of Chapter 8 Class 12 Maths begins with a recap of how definite integrals are calculated as the limit of a sum and the fundamental theorem of calculus. You will then learn how to find the area bounded by a curve. So, if the curve is y = f(x), then to find the area bounded by the curve, the x-axis, and the two points x = p1 and x = p2 on the x-axis, you can think of the region as made up of a large number of thin vertical strips. After this, you will understand how to calculate the individual areas of these strips. Suppose the height of a strip is y and its width is dx; then the area of one strip is dA = y dx, where y = f(x). This area is termed an elementary strip. The total area can thus be calculated by integrating these elementary strips:
A = $\int_{p1}^{p2}$ dA = $\int_{p1}^{p2}$ ydx = $\int_{p1}^{p2}$ f(x) dx
In case the position of the curve is below the x-axis, the area will come out to be negative, but we take only the absolute value of the integral.
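For instance (a quick illustration, not one of the textbook's exercises), the area under $y = x^2$ between $x = 0$ and $x = 2$ is
$$A = \int_{0}^{2} x^{2}\,dx = \left[\frac{x^{3}}{3}\right]_{0}^{2} = \frac{8}{3}.$$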
You will also learn how to calculate the area of a region bounded by a curve and a line. One could consider either vertical or horizontal strips to calculate the area of the region. So if the curve is y = f(x) and the equation of the line is g(x) = ax + k, there are two cases: one is the area under the curve, and the other is the area between two curves, depending on how many points the line intersects the curve at. For a line intersecting the curve at two points, a general formula for the region between the line and the curve can be given as:
A = $\int_{p1}^{p2}$ [g(x) - f(x)]dx. Here g(x) is the equation of the line, f(x) is the equation of the curve, and p1, p2 are the points of intersection of the curve with the straight line.
### 8.3 The Area Between Two Curves
In this unit of the Application of Integrals Class 12 chapter, you will learn how to calculate the area between two curves by dividing the common region into elementary areas of vertical strips. So if the equation of one curve is y = f(x), the equation of the other curve is y = g(x), and we know that f(x) >= g(x), the elementary area can be given as dA = [f(x) – g(x)]dx, where dx is the width of the strip. The total area is thus calculated by the following integral:
A = $\int_{p1}^{p2}$ [f(x) - g(x)]dx. Here p1 and p2 are the x-coordinates of the two points of intersection of the curves.
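As a quick worked illustration (again not from the textbook), take f(x) = x and g(x) = x^2, which intersect at x = 0 and x = 1 with f(x) ≥ g(x) in between:
$$A = \int_{0}^{1}\left(x - x^{2}\right)dx = \frac{1}{2} - \frac{1}{3} = \frac{1}{6}.$$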
### Key Features of NCERT Solutions for Class 12 Maths Chapter 8
To score well in CBSE and also crack many competitive exams like NEET and IIT, one needs to have a solid base in Mathematics. The NCERT books present you with challenging questions which better your analytical abilities and give you ample exposure to all kinds of questions that can come in any of these exams. Our Application of Integrals Class 12 Solutions is in line with the latest CBSE curriculum and is extremely beneficial for your preparation for the following reasons:
• The quality of solutions is impeccable as they are designed by teachers with immense experience in the subject matter.
• You get to revise all the key points of a chapter in very less time.
• The solutions help you manage your stress and time during exams.
• Class 12 Maths Ch 8 NCERT Solutions would help you clear your root level concepts.
FAQs (Frequently Asked Questions)
1. Which chapters are removed from maths Class 12?
According to the revised CBSE syllabus for Class 12 Maths, the syllabus has been reduced by 30%. Full chapters have not been deleted but certain portions from each unit have been removed from the CBSE Class 12 Maths syllabus. To get full NCERT Solutions Maths Chapter 8 based on the latest syllabus, download the NCERT Solutions Maths Chapter 8 PDF available only on Vedantu. Extra important questions are also solved for a better understanding.
2. How many important examples are there in Class 12 Maths Chapter 8 Miscellaneous Exercise?
There are five examples (from Example 11 to Example 15) given in the Miscellaneous Examples section in Class 12 Maths Chapter 8. These miscellaneous examples can be used to solve the Class 12 Maths Chapter 8 Miscellaneous Exercise given in the NCERT textbook. For fully solved exercise questions including the miscellaneous exercise present in Chapter 8, download the NCERT Solutions Class 12 Maths Chapter 8 PDF from Vedantu today and start practising!
3. How can I understand the chapters in Class 12 Maths?
NCERT Class 12 Maths can be intimidating at first, but if you strategize your study, you can overcome your fear of Class 12 Maths easily. The best way to understand all the chapters in Class 12 Maths is to refer to NCERT Solutions Class 12 Maths on Vedantu. All the solutions in Class 12 Maths on Vedantu have full explanations for each topic and sub-topics including exercise questions.
4. What is the underlying concept of Chapter 8 Application of Integrals?
The underlying concept of NCERT Class 12 Maths Chapter 8 Application of Integrals deals with the usage of geometric formulae for calculating the area of different geometric shapes and the area between two curves, area between curve and line, area between arbitrary positions, and trigonometric identities integrals. To get solutions to all exercises, you must refer to NCERT Class 12 Maths Chapter 8 on Vedantu.
5. What are the most important formulas that you need to learn in Chapter 8 Class 12 Maths?
The most important formulae that you need to learn in Class 12 Maths Chapter 8 are the formula for the area under a simple curve (area enclosed by a given circle, area enclosed within an ellipse), the formula for the area of the region bounded by a line and a curve, and the formula for the area between two curves. You can get access to all the solutions if you download the NCERT Solutions Class 12 Maths Chapter 8 on Vedantu free of cost. These solutions are also available on the Vedantu app.
|
2021-10-19 10:02:30
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6892074346542358, "perplexity": 663.6005664397595}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585246.50/warc/CC-MAIN-20211019074128-20211019104128-00509.warc.gz"}
|
https://brilliant.org/problems/this-is-related-to-trig/
|
# This is related to Trig?
Algebra Level 2
Let $x, y,$ and $z$ be positive real numbers that satisfy $x^{2} + y^{2} + z^{2} + xyz = 4$ Find the maximum value of $x + y + z.$
|
2021-07-27 08:47:31
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 4, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6588162779808044, "perplexity": 699.6372657417837}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 5, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046153223.30/warc/CC-MAIN-20210727072531-20210727102531-00336.warc.gz"}
|
http://www.impan.pl/cgi-bin/dict?suppose
|
suppose
Suppose, to look at a more specific situation, that ......
Suppose that, contrary to our claim, ......
We now show that $A$ is closed. Suppose that, on the contrary, there is an $x$ ......
Suppose towards a contradiction that ......
We now prove ...... Indeed, suppose otherwise. Then ......
Suppose for the moment that $q=1$, so that $\beta = 1$.
|
2015-01-30 21:32:56
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9081791043281555, "perplexity": 164.25484490091162}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-06/segments/1422121569156.58/warc/CC-MAIN-20150124174609-00075-ip-10-180-212-252.ec2.internal.warc.gz"}
|