url — stringlengths 14–2.42k
text — stringlengths 100–1.02M
date — stringlengths 19–19
metadata — stringlengths 1.06k–1.1k
https://zbmath.org/?q=an:0867.62035
# zbMATH — the first resource for mathematics

Asymptotic equivalence of density estimation and Gaussian white noise. (English) Zbl 0867.62035

Summary: Signal recovery in Gaussian white noise with variance tending to zero has served for some time as a representative model for nonparametric curve estimation, having all the essential traits in a pure form. The equivalence has mostly been stated informally, but an approximation in the sense of Le Cam's deficiency distance $$\Delta$$ would make it precise. The models are then asymptotically equivalent for all purposes of statistical decision with bounded loss. In nonparametrics, a first result of this kind has recently been established for Gaussian regression. We consider the analogous problem for the experiment given by $$n$$ i.i.d. observations having density $$f$$ on the unit interval. Our basic result concerns the parameter space of densities which are in a Hölder ball with exponent $$\alpha>1/2$$ and which are uniformly bounded away from zero. We show that an i.i.d. sample of size $$n$$ with density $$f$$ is globally asymptotically equivalent to a white noise experiment with drift $$f^{1/2}$$ and variance $$(4n)^{-1}$$. This represents a nonparametric analog of Le Cam's heteroscedastic Gaussian approximation [L. Le Cam, Ann. Inst. H. Poincaré 21, 225-287 (1985; Zbl 0584.62024)] in the finite-dimensional case. The proof utilizes empirical process techniques related to the Hungarian construction. White noise models on $$f$$ and $$\log f$$ are also considered, allowing for various "automatic" asymptotic risk bounds in the i.i.d. model from white noise.

##### MSC:

62G07 Density estimation
62B15 Theory of statistical experiments
62M99 Inference from stochastic processes
62G20 Asymptotic properties of nonparametric inference

##### References:

[1] Belitser, E. and Levit, B. (1995). On minimax filtering over ellipsoids. Math. Methods Statist. 4 259-273. · Zbl 0836.62070
[2] Brown, L. D. and Low, M. (1996). Asymptotic equivalence of nonparametric regression and white noise. Ann. Statist. 24 2384-2398. · Zbl 0867.62022
[3] Donoho, D. (1994). Asymptotic minimax risk (for sup-norm loss): solution via optimal recovery. Probab. Theory Related Fields 99 145-170. · Zbl 0802.62007
[4] Donoho, D. L. and Johnstone, I. (1992). Minimax estimation via wavelet shrinkage. Unpublished manuscript. · Zbl 0935.62041
[5] Donoho, D. L. and Low, M. (1992). Renormalization exponents and optimal pointwise rates of convergence. Ann. Statist. 20 944-970. · Zbl 0797.62032
[6] Dudley, R. (1989). Real Analysis and Probability. Wadsworth & Brooks/Cole, Pacific Grove, CA. · Zbl 0686.60001
[7] Efroimovich, S. Yu. and Pinsker, M. S. (1982). Estimating a square integrable probability density of a random variable. Problems Inform. Transmission 18 172-189. · Zbl 0533.62038
[8] Falk, M. and Reiss, R.-D. (1992). Poisson approximation of empirical processes. Statist. Probab. Lett. 14 39-48. · Zbl 0754.60048
[9] Golubev, G. K. (1984). On minimax estimation of regression. Problems Inform. Transmission 20 56-64. (In Russian.) · Zbl 0538.62005
[10] Golubev, G. K. (1991). LAN in problems of nonparametric estimation of functions and lower bounds for quadratic risks. Theory Probab. Appl. 36 152-157. · Zbl 0738.62043
[11] Ibragimov, I. A. and Khasminski, R. Z. (1977). On the estimation of an infinite dimensional parameter in Gaussian white noise. Soviet Math. Dokl. 236 1053-1055. · Zbl 0389.62023
[12] Koltchinskii, V. (1994). Komlos-Major-Tusnady approximation for the general empirical process and Haar expansions of classes of functions. J. Theoret. Probab. 7 73-118. · Zbl 0810.60002
[13] Korostelev, A. P. (1993). An asymptotically minimax regression estimate in the uniform norm up to an exact constant. Theory Probab. Appl. 38 737-743. · Zbl 0819.62034
[14] Korostelev, A. P. and Nussbaum, M. (1996). The asymptotic minimax constant for sup-norm loss in nonparametric density estimation. Discussion paper, SFB 373, Humboldt Univ., Berlin.
[15] Le Cam, L. (1985). Sur l'approximation de familles de mesures par des familles gaussiennes. Ann. Inst. H. Poincaré 21 225-287. · Zbl 0584.62024
[16] Le Cam, L. (1986). Asymptotic Methods in Statistical Decision Theory. Springer, New York. · Zbl 0605.62002
[17] Le Cam, L. and Yang, G. (1990). Asymptotics in Statistics. Springer, New York.
[18] Low, M. (1992). Renormalization and white noise approximation for nonparametric functional estimation problems. Ann. Statist. 20 545-554. · Zbl 0756.62018
[19] Mammen, E. (1986). The statistical information contained in additional observations. Ann. Statist. 14 665-678. · Zbl 0633.62006
[20] Millar, P. W. (1979). Asymptotic minimax theorems for the sample distribution function. Z. Wahrsch. Verw. Gebiete 48 233-252. · Zbl 0387.62029
[21] Nikolskij, S. M. (1975). Approximation of Functions of Several Variables and Imbedding Theorems. Springer, Berlin.
[22] Nussbaum, M. (1985). Spline smoothing in regression models and asymptotic efficiency in L2. Ann. Statist. 13 984-997. · Zbl 0596.62052
[23] Parthasarathy, K. R. (1978). Introduction to Probability and Measure. Springer, New York. · Zbl 0376.60052
[24] Pinsker, M. S. (1980). Optimal filtering of square integrable signals in Gaussian white noise. Problems Inform. Transmission 16 120-133. · Zbl 0452.94003
[25] Reiss, R.-D. (1993). A Course on Point Processes. Springer, New York. · Zbl 0771.60037
[26] Rio, E. (1994). Local invariance principles and their application to density estimation. Probab. Theory Related Fields 98 21-45. · Zbl 0794.60019
[27] Shorack, G. and Wellner, J. (1986). Empirical Processes with Applications to Statistics. Wiley, New York. · Zbl 1170.62365
[28] Strasser, H. (1985). Mathematical Theory of Statistics. de Gruyter, Berlin. · Zbl 0594.62017
[29] Tsybakov, A. B. (1994). Efficient nonparametric estimation in L2 with general loss. Unpublished manuscript.
[30] Woodroofe, M. (1967). On the maximum deviation of the sample density. Ann. Math. Statist. 38 475-481. · Zbl 0157.48002

This reference list is based on information provided by the publisher or from digital mathematics libraries. Its items are heuristically matched to zbMATH identifiers and may contain data conversion errors. It attempts to reflect the references listed in the original paper as accurately as possible without claiming the completeness or perfect precision of the matching.
2021-09-18 08:54:27
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6147651672363281, "perplexity": 2868.6427117547023}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780056348.59/warc/CC-MAIN-20210918062845-20210918092845-00572.warc.gz"}
https://www.physicsforums.com/threads/lightcone-1-0-tabular-cosmology-calculator.689146/
# Lightcone 1.0 tabular cosmology calculator!

1. May 1, 2013

### marcus

New release of Jorrie's tabular calculator. http://www.einsteins-theory-of-relativity-4engineers.com/LightCone1.0/LightCone.html

This is clearly the best on-line cosmology calculator for the general user on the web. Hands down. Lightcone rules. That's all there is to say. Check it out.

To get column definitions and explanations of the various quantities being tabulated (scale factor, Hubble radius R, proper distance now D, and then Dthen), just click the button that says "show column setup". When you know the information therein, click the same button, which now says "hide column setup".

Congratulations Jorrie. Great job!

==================

PS: the Dthen column sketches the proper distance outline of the past lightcone. It gives its radius at each stage of past history, so the tabulator's name is appropriate. That column is of central importance. All the galaxies we are now seeing live on that lightcone (which, because of expansion, has its own distinctive non-conical shape).

2. May 1, 2013

### Mordred

The new Planck vs WMAP option setting is particularly handy, as it allows one to compare how the difference in the values affects the light cones. As there are new changes, please report any browser-related errors to Jorrie. The more PF members we can get testing various browsers, phones, and mobile devices (yes, it works on these as well), the better chance Jorrie has of maximizing the flexibility of usage.

3. May 2, 2013

### Mordred

The new tool tips and columns really help in usage. I took one of Marcus's previous examples and tried to narrow down just how long the Universe was close to static. This is the period when the matter/dark energy was close to being balanced. First I kept the inputs as default, turned all my column selections on, increased the number of decimal places to 6, and set steps at 100. Click calculate. Then I looked for the period of time where a'R0 was lowest. This showed around 7.4 to 7.8 Gyr. I looked over on the S column and picked two S values surrounding that period in time. In this case 1.68, which I set for S_upper; S_lower I set to 1.64. As I didn't need as many rows, I set steps to 30. Between S = 1.653333 and 1.650667, a'R0 has the smallest values, so the universe was almost balanced for a very short period of time cosmologically speaking: roughly 200 million years. To post on the forum I simply click the PF format tab, then click calculate, and then copy the results and post on the forum.
$${\scriptsize \begin{array}{|c|c|}\hline R_{0} (Gly) & R_{∞} (Gly) & S_{eq} & H_{0} & \Omega_\Lambda & \Omega_m\\ \hline 14.4&17.3&3400&67.92&0.693&0.307\\ \hline \end{array}}$$ $${\scriptsize \begin{array}{|r|r|} \hline S=z+1&a=1/S&T (Gy)&R (Gly)&D (Gly)&D_{then}(Gly)&D_{hor}(Gly)&D_{par}(Gly)&a'R_{0} \\ \hline 1.680000&0.595238&7.425739&9.821697&8.187321&4.873405&14.687083&22.664831&0.872703\\ \hline 1.678667&0.595711&7.433501&9.829625&8.174286&4.869511&14.690984&22.690599&0.872692\\ \hline 1.677333&0.596184&7.441368&9.837561&8.161085&4.865512&14.694792&22.716505&0.872681\\ \hline 1.676000&0.596659&7.449143&9.845502&8.148050&4.861605&14.698704&22.742355&0.872671\\ \hline 1.674667&0.597134&7.457022&9.853451&8.134849&4.857593&14.702524&22.768345&0.872661\\ \hline 1.673333&0.597610&7.464908&9.861406&8.121648&4.853575&14.706351&22.794376&0.872652\\ \hline 1.672000&0.598086&7.472702&9.869366&8.108612&4.849648&14.710282&22.820350&0.872644\\ \hline 1.670667&0.598563&7.480600&9.877334&8.095411&4.845617&14.714120&22.846464&0.872636\\ \hline 1.669333&0.599042&7.488505&9.885309&8.082210&4.841579&14.717964&22.872620&0.872628\\ \hline 1.668000&0.599520&7.496417&9.893290&8.069008&4.837535&14.721815&22.898818&0.872621\\ \hline 1.666667&0.600000&7.504334&9.901278&8.055807&4.833484&14.725671&22.925058&0.872615\\ \hline 1.665333&0.600480&7.512259&9.909272&8.042605&4.829427&14.729534&22.951340&0.872609\\ \hline 1.664000&0.600962&7.520189&9.917273&8.029404&4.825363&14.733403&22.977664&0.872603\\ \hline 1.662667&0.601443&7.528126&9.925280&8.016202&4.821292&14.737278&23.004031&0.872599\\ \hline 1.661333&0.601926&7.536070&9.933293&8.003000&4.817215&14.741159&23.030440&0.872594\\ \hline 1.660000&0.602410&7.544119&9.941314&7.989633&4.813032&14.744947&23.056990&0.872591\\ \hline 1.658667&0.602894&7.552075&9.949341&7.976432&4.808942&14.748841&23.083484&0.872588\\ \hline 1.657333&0.603379&7.560038&9.957374&7.963230&4.804845&14.752740&23.110021&0.872585\\ \hline 1.656000&0.603865&7.568106&9.965414&7.949863&4.800642&14.756547&23.136700&0.872583\\ \hline 1.654667&0.604351&7.576082&9.973461&7.936661&4.796531&14.760459&23.163322&0.872582\\ \hline 1.653333&0.604839&7.584164&9.981514&7.923294&4.792315&14.764278&23.190087&0.872581\\ \hline 1.652000&0.605327&7.592252&9.989573&7.909927&4.788091&14.768102&23.216895&0.872581\\ \hline 1.650667&0.605816&7.600247&9.997639&7.896725&4.783961&14.772033&23.243647&0.872581\\ \hline 1.649333&0.606306&7.608348&10.005712&7.883358&4.779724&14.775871&23.270541&0.872582\\ \hline 1.648000&0.606796&7.616456&10.013791&7.869991&4.775480&14.779714&23.297480&0.872583\\ \hline 1.646667&0.607287&7.624570&10.021876&7.856624&4.771229&14.783564&23.324462&0.872585\\ \hline 1.645333&0.607780&7.632691&10.029968&7.843257&4.766971&14.787420&23.351487&0.872588\\ \hline 1.644000&0.608273&7.640819&10.038066&7.829890&4.762707&14.791282&23.378557&0.872591\\ \hline 1.642667&0.608766&7.648953&10.046171&7.816523&4.758435&14.795151&23.405670&0.872594\\ \hline 1.641333&0.609261&7.657093&10.054283&7.803156&4.754157&14.799026&23.432828&0.872599\\ \hline 1.640000&0.609756&7.665341&10.062400&7.789624&4.749771&14.802807&23.460130&0.872604\\ \hline \end{array}}$$ Last edited: May 2, 2013 4. May 2, 2013 ### marcus Great use of Lightcone! It looks like you have found the inflection point in this graph http://ned.ipac.caltech.edu/level5/March03/Lineweaver/Figures/figure14.jpg that is the place where the slope stops declining and starts to increase. 
You have found that it happens around year 7.600 billion, in the row labeled S=1.650... For people who aren't familiar with the idea of an inflection point, look at the heavy solid curve in "Figure 14" I linked to and judge by eye where you think the curve changes from convex to concave. It is hard to see exactly, but it is very easy to see in the table, because the rightmost column is actually proportional to the slope of the curve! Up to year 7.6 billion the slope is decreasing, then it reaches a minimum and starts to increase (so it begins to look like "acceleration" and bears more resemblance to exponential growth). Thanks Mordred. Definitely an effective use of Lightcone calculator.

5. May 2, 2013

### Mordred

Here is the cool part: you turn off the rows to make it all easier to see. Also easier to show on the forum.

$${\scriptsize \begin{array}{|c|c|}\hline R_{0} (Gly) & R_{∞} (Gly) & S_{eq} & H_{0} & \Omega_\Lambda & \Omega_m\\ \hline 14.4&17.3&3400&67.92&0.693&0.307\\ \hline \end{array}}$$ $${\scriptsize \begin{array}{|r|r|} \hline S=z+1&a=1/S&T (Gy)&a'R_{0} \\ \hline 1.658&0.603136&7.556056&0.872586\\ \hline 1.657&0.603379&7.560038&0.872585\\ \hline 1.657&0.603622&7.564121&0.872584\\ \hline 1.656&0.603865&7.568106&0.872583\\ \hline 1.655&0.604108&7.572093&0.872582\\ \hline 1.655&0.604351&7.576082&0.872582\\ \hline 1.654&0.604595&7.580172&0.872581\\ \hline 1.653&0.604839&7.584164&0.872581\\ \hline 1.653&0.605083&7.588157&0.872581\\ \hline 1.652&0.605327&7.592252&0.872581\\ \hline 1.651&0.605571&7.596248&0.872581\\ \hline 1.651&0.605816&7.600247&0.872581\\ \hline 1.650&0.606061&7.604346&0.872581\\ \hline 1.649&0.606306&7.608348&0.872582\\ \hline 1.649&0.606551&7.612451&0.872582\\ \hline 1.648&0.606796&7.616456&0.872583\\ \hline \end{array}}$$
2018-06-22 04:04:25
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5655288696289062, "perplexity": 1879.9327370792828}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267864343.37/warc/CC-MAIN-20180622030142-20180622050142-00291.warc.gz"}
http://rpg.stackexchange.com/questions/14049/what-objects-can-a-spell-target
What “objects” can a spell target?

Some of the wizard spells have creature or object as the target, e.g. Force Orb [ddi]. What is an object? So in the case of Force Orb, could a wizard target, say, a pebble on the ground, hit it (as it has a very low Reflex), and then hit the monsters around the main target, i.e. the pebble? This seems a bit of a hack ...

- Can attacks target a square in 4E? (in previous editions this was AC 10) – Snowbody May 4 '12 at 17:17

A pebble is an object. However, D&D4E only provides rules for attacking objects as small as the Tiny size category (i.e. a bottle or book). So if there is an object in a square which is large enough to be targeted by a power, then you can target that object, and if you hit it you can make your secondary attack against all the targets in squares around it (in the case of Force Orb). I should also note that the smaller an object is, the higher its Reflex defense is. Tiny objects have a Reflex defense of 10, which is the highest for objects (Gargantuan objects have a Reflex defense of 2, for reference). So if a DM wanted to extend the list of available targets to Diminutive targets, for example (as a house rule), the object's Reflex defense would more than likely be somewhere around 12 to 15. See pages 176 and 177 of the Rules Compendium for all the rules for attacking objects.

- Thanks for the response, and pointer to the pages in the Rules Compendium – SteveC May 4 '12 at 17:16

I don't think the pebble's reflex really even matters here. The PC is trading the primary target damage (as we assume they don't care how much they hurt said pebble) in order to splash damage targets around them. It seems to be a fairly legitimate and even 'real life' tactic. It's the same thing you do with any other AOE spell - in fact, this only seems to be hurting the PCs, because while the initial attack is 'easier', they're losing the 'primary target' of the spell. In reference to size categories, it may be worth bringing up semantics on how big the 'force orb' actually is that you are throwing. Maybe the reason why pebble size vs bottle size vs whatever doesn't come into play is because it's irrelevant whether you can target something smaller. I.e., if I throw a basketball at my daughter's toy tree house (roughly the size of the basketball), I hit it. Why would I need to target the dolls inside? If I am able to succeed on the larger target, the smaller one is hit by default because the projectile is too large. Anyway, point is, I'm not familiar with the sizing on Force Orb, but if the 'projectile' orb is too large, it won't matter.

- With the "Target" requirement, the pebble's reflex does matter. A Force Orb spell is not a Fireball spell. Although the two seem to have similar aspects, a Fireball does not have a target requirement, only an area of effect. Thus, the basketball reference, which would fit if this question was about Fireball, doesn't necessarily fit with Force Orb. With the target requirement, a new can of worms is opened. If the target is missed, because it is required for the spell, does the spell fail? It may simply dissipate. – Bon Gart May 4 '12 at 18:24

Size categories can produce additional weirdness. My DM hid a Kenku assassin in a tree, and none of the party could spot it. My wizard threw a force orb at the tree itself, reasoning that anything hiding in it would be considered adjacent. The DM allowed it as it was about the only thing that could work, but what if I'd targeted 'the building' or 'the ship' in later situations?
– Ananisapta May 21 '12 at 13:41 @Ananisapta a common house rule, usually invoked when dealing with large creatures, is that the secondary targets must be adjacent to the specific square of the creature or object that's been hit by the power. – Zachiel Sep 15 '12 at 9:07
2016-07-24 12:52:54
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4876612722873688, "perplexity": 1588.575376533666}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-30/segments/1469257824037.46/warc/CC-MAIN-20160723071024-00299-ip-10-185-27-174.ec2.internal.warc.gz"}
https://byjus.com/current-density-calculator/
# Current Density Calculator

Formula: Current density (J) = Current (I) / Area (A)

Enter the unknown variable value as 'x'. Current density (J): A/m². Current (I): Ampere (A). Area (A): m².

The Current Density Calculator is an online tool which shows the current density for the given input. Byju's Current Density Calculator is a tool which makes calculations very simple and interesting. If an input is given, then it can easily show the result for the given number.
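As a sketch of the arithmetic the calculator performs (the function name and sample values below are our own illustration, not taken from the page):

from __future__ import annotations

# Illustrative only: compute current density J = I / A
def current_density(current: float, area: float) -> float:
    """Return current density in A/m², given current in A and cross-sectional area in m²."""
    return current / area

print(current_density(5.0, 0.002))  # 2500.0 A/m²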
2019-06-19 23:53:09
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8690472841262817, "perplexity": 4526.8962260757935}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627999066.12/warc/CC-MAIN-20190619224436-20190620010436-00441.warc.gz"}
https://math.stackexchange.com/questions/2305404/probability-how-many-different-choices-could-she-make-is-it-correct/2305464
# Probability: how many different choices could she make? Is it correct?

A pizza shop offers a selection of $10$ different pizza toppings on its pizzas. Erika orders a pizza with $2$ toppings. How many different choices could she make?

I think that the right answer is $10^2$, but I want to be sure that is correct. Could someone help me?

• No...presumably choosing $A$ and then $B$ is the same as choosing $B$ then $A$. Also, you have to specify whether or not "no topping" or "just $A$" or "double $A$" are options. – lulu Jun 1, 2017 at 10:09

The number of ways of choosing two different toppings will be $$9+8+...+1 = 45$$ Since, if our toppings are $a_1, a_2, ...,a_{10}$, then our topping choices are limited to the following combinations if there is no chance of repeating a topping: $$a_1a_2, a_1a_3, ..., a_1a_{10} \qquad \qquad ...9\ choices \\\qquad a_2a_3, ..., a_2a_{10} \qquad \qquad ...8\ choices \\ . \\. \\. \\\qquad \qquad \quad a_9a_{10}\qquad \qquad ...1\ choice$$ If repeated toppings are allowed, we have an additional 10 choices $(a_1a_1, ..., a_{10}a_{10})$.

Can she choose the same topping twice? If she can, the answer is $$10\cdot 9/2 + 10 = 55.$$ If she cannot, it is $$10\cdot 9 / 2 = \binom {10}2 = 45.$$
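For readers who want to check the two counts directly, here is a small brute-force verification (our addition, not part of the original answers); itertools enumerates exactly the unordered choices discussed above:

from itertools import combinations, combinations_with_replacement

toppings = range(10)
print(len(list(combinations(toppings, 2))))                   # 45: two different toppings
print(len(list(combinations_with_replacement(toppings, 2))))  # 55: doubling a topping allowed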
2022-10-06 22:37:37
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7395827174186707, "perplexity": 456.8731548997944}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337889.44/warc/CC-MAIN-20221006222634-20221007012634-00532.warc.gz"}
http://wims.unice.fr/wims/en_U1~arithmetic~oeffactor.en.html
# OEF factoris --- Introduction ---

This module actually contains 14 elementary exercises on the prime factorization of integers: existence, uniqueness, relation with gcd and lcm, etc.

### Number of divisors

Give an integer which has exactly divisors ( 1 and are divisors of ) and which is divisible by at least two three distinct primes.

### Division

We have an integer whose prime factorization is of the form = ×× . Given that divides , what is ?

### Divisor

We have an integer whose prime factorization is of the form = . Given that divides , what is ?

### Sum of factorizations

Let and be two positive , having the following factorizations: = 123 , = 124 , where the factors i are distinct primes. Is it possible to have a factorization of the form | | = 123 , where i are distinct primes?

### Find factors II

Here are the prime factorizations of two integers: =    ,    = , where the factors , are distinct primes. Find these factors.

### Find factors III

Here are the prime factorizations of two integers: =    ,    = , where the factors , , are distinct primes. Find these factors.

### gcd

Let m, n be two positive integers with the following factorizations. m = , n = , where , , are distinct prime numbers. Compute gcd(m,n) as a function of , , .

### lcm

Let m, n be two positive integers with the following factorizations. m = , n = , where , , are distinct prime numbers. Compute lcm(m,n) as a function of , , .

### Maximum of factors

Let be an integer with decimal digits. Given that has no prime factor < , how many prime factors may have at maximum?

### Number of divisors II

Let be a positive integer with the following factorization into distinct prime factors. = 1 2 What is the number of divisors of ? (A divisor of is a positive integer which divides , including 1 and itself.)

### Number of divisors III

Let be a positive integer with the following factorization into distinct prime factors. = 1 2 3 What is the number of divisors of ? (A divisor of is a positive integer which divides , including 1 and itself.)

### Trial division

We have an integer < , and we want to find a prime factor of by trial dividing successively by 2,3,4,5,6,... Knowing that has a prime factorization of the form = 11 22 ... tt where the sum of powers 1+2+...+t = , (but where the factors i are unknown) what is the last divisor we will have to try (without worrying about whether this divisor is prime or not), in the worst case?

### Two factors

Compute the number of positive integers whose prime factorization is of the form = × , where the powers and are integers .

### Two factors II

Compute the number of positive integers whose prime factorization is of the form = × , where the powers and are integers .

Description: collection of elementary exercises on the factorization of integers.
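The facts these exercises drill can be summarized in a short sketch (illustrative code, not part of the WIMS module): the divisor count of n = p1^e1 · ... · pt^et is (e1+1)···(et+1), and gcd/lcm take the minimum/maximum exponent of each prime.

from math import prod

def num_divisors(factorization):
    # factorization maps prime -> exponent, e.g. {2: 3, 3: 1} for 24 = 2^3 * 3
    return prod(e + 1 for e in factorization.values())

def gcd_lcm(f, g):
    primes = set(f) | set(g)
    gcd = {p: min(f.get(p, 0), g.get(p, 0)) for p in primes}  # exponent 0 means the prime drops out
    lcm = {p: max(f.get(p, 0), g.get(p, 0)) for p in primes}
    return gcd, lcm

print(num_divisors({2: 3, 3: 1}))  # 8 divisors of 24: 1, 2, 3, 4, 6, 8, 12, 24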
2017-02-27 18:40:36
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3865281939506531, "perplexity": 3051.7431757523086}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501173405.40/warc/CC-MAIN-20170219104613-00282-ip-10-171-10-108.ec2.internal.warc.gz"}
http://www.mathnet.ru/php/archive.phtml?jrnid=tmf&wshow=issue&year=1999&volume=119&volume_alt=&issue=3&issue_alt=&option_lang=eng
TMF, 1999, Volume 119, Issue 3

Change of variable formulas for Feynman pseudomeasures (O. G. Smolyanov, A. Trumen), p. 355
Gauge-periodic point perturbations on the Lobachevsky plane (J. Brüning, V. A. Geiler), p. 368
Measures on diffeomorphism groups for non-Archimedean manifolds: Group representations and their applications (S. V. Lyudkovskii), p. 381
KdV equation on a half-line with the zero boundary condition (I. T. Habibullin), p. 397
Renormalization group analysis for singularities in the wave beam self-focusing problem (V. F. Kovalev), p. 405
Conjugate chains of discrete symmetries in $(1+2)$ nonlinear equations (A. V. Yurov), p. 419
The $t\to\infty$ asymptotic regime of the Cauchy problem solution for the Toda chain with threshold-type initial data (I. M. Guseinov, A. Kh. Khanmamedov), p. 429
Variational principle, characteristic electric multipoles, and higher polarizing moments in field theory (V. P. Kazantsev), p. 441
Magnetic impurity effect on superconductivity in systems with comparable Fermi and Debye energies (M. E. Palistrant), p. 455
Mayer-series asymptotic catastrophe in classical statistical mechanics (G. I. Kalmykov), p. 475
New insight on an old approach to the theory of critical phenomena (G. A. Martynov), p. 498
Geodesic equations for a charged particle in the unified theory of gravitational and electromagnetic interactions (V. R. Krym), p. 517
2020-05-26 05:15:10
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.28867000341415405, "perplexity": 3485.878210484886}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347390448.11/warc/CC-MAIN-20200526050333-20200526080333-00326.warc.gz"}
https://research.nsu.ru/ru/publications/measurement-of-the-transverse-momentum-distribution-of-drellyan-l
# Measurement of the transverse momentum distribution of Drell–Yan lepton pairs in proton–proton collisions at √s = 13 TeV with the ATLAS detector

The ATLAS collaboration

Research output: Contribution to journal › Article › peer-review

9 Citations (Scopus)

## Abstract

This paper describes precision measurements of the transverse momentum p_T^ℓℓ (ℓ = e, μ) and of the angular variable ϕ_η^* distributions of Drell–Yan lepton pairs in a mass range of 66–116 GeV. The analysis uses data from 36.1 fb⁻¹ of proton–proton collisions at a centre-of-mass energy of √s = 13 TeV collected by the ATLAS experiment at the LHC in 2015 and 2016. Measurements in electron-pair and muon-pair final states are performed in the same fiducial volumes, corrected for detector effects, and combined. Compared to previous measurements in proton–proton collisions at √s = 7 and 8 TeV, these new measurements probe perturbative QCD at a higher centre-of-mass energy with a different composition of initial states. They reach a precision of 0.2% for the normalized spectra at low values of p_T^ℓℓ. The data are compared with different QCD predictions, where it is found that predictions based on resummation approaches can describe the full spectrum within uncertainties.

Original language: English. Article number: 616. Number of pages: 28. Journal: European Physical Journal C, Volume 80, Issue 7. https://doi.org/10.1140/epjc/s10052-020-8001-z. Published: 1 July 2020.
2021-08-03 15:40:18
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8598653674125671, "perplexity": 6164.638620654157}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046154459.22/warc/CC-MAIN-20210803124251-20210803154251-00362.warc.gz"}
https://biologicalmodeling.org/chemotaxis/tutorial_purerandom
# Software Tutorial: Modeling a Pure Random Walk Strategy

In this tutorial, we will simulate a random walk and take a look at how well this allows a bacterium to reach a goal. You might not anticipate that the random walk will do a very good job of this — and you would not be wrong — but it will give us a baseline simple strategy to compare against a more advanced random walk strategy.

Specifically, we will build a Jupyter notebook to do so. You can create a blank file called chemotaxis_std_random.ipynb and type along, but the notebook will be quite lengthy, so feel free to download the final notebook here if you like: chemotaxis_std_random.ipynb. A detailed explanation of the model and each function can be found in this completed file as well as the tutorial below.

Make sure that the following dependencies are installed:

Python3 3.6+: python --version
Jupyter Notebook 4.4.0+: jupyter --version
Numpy 1.14.5+: pip list | grep numpy
Matplotlib 3.0+: pip list | grep matplotlib
Colorspace, any version (installable with pip): pip list | grep colorspace

## Converting a run-and-tumble model to a random walk simulation

Our model will be based on observations from our BioNetGen simulation and known biology of E. coli. We summarize this simulation, discussed in the main text, as follows.

1. Run. The duration of a cell's run follows an exponential distribution with mean equal to the background run duration run_time_expected.
2. Tumble. The duration of a cell's tumble follows an exponential distribution with mean 0.1s [1]. When it tumbles, we assume it only changes its orientation for the next run but doesn't move in space. The degree of reorientation is a random number sampled uniformly between 0° and 360°.
3. Gradient. We model an exponential gradient with a goal (1500, 1500) having a concentration of 10^8. All cells start at the origin (0, 0), which has a concentration of 10^2. The ligand concentration at a point (x, y) is given by L(x, y) = 100 · 10^(6 · (1 − d/D)), where d is the distance from (x, y) to the goal, and D is the distance from the origin to the goal; in this case, D is 1500√2 ≈ 2121 µm.

First, we will import all packages needed.

import numpy as np
import matplotlib.pyplot as plt
import math
from matplotlib import colors
from matplotlib import patches
import colorspace

Next, we specify all the model parameters:

• mean tumble time: 0.1s;
• cell speed of 20µm/s [2].

We also set a "seed" of our pseudorandom number generator to ensure that the sequence of "random" numbers given to us by Python will be the same every time we run the simulation. To obtain a different outcome, change the seed.

Note: For more on seeding, please consult the discussion of pseudorandom number generation at Programming for Lovers.

SEED = 128 #Any random seed
np.random.seed(SEED) #set seed for Numpy random number generator

#Constants for E.coli tumbling
tumble_time_mu = 0.1 #second

#E.coli movement constants
speed = 20 #um/s, speed of E.coli movement

#Model constants
start = [0, 0] #All cells start at [0, 0]
ligand_center = [1500, 1500] #Position of highest concentration
center_exponent, start_exponent = 8, 2 #exponent for concentration at [1500, 1500] and [0, 0]
origin_to_center = 0 #Distance from start to center, initialized here, will be actually calculated later
saturation_conc = 10 ** 8 #From BNG model

We now will have two functions that will establish the ligand concentration at a given point (x, y) as equal to L(x, y) = 100 · 10^(6 · (1 − d/D)). First, we introduce a function to compute the distance between two points in two-dimensional space.
# Calculates distance between point a and b
# Input: positions a, b. Each in the form array [x, y]
# Returns the distance, a float.
def distance(a, b):
    return math.sqrt((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2)

Next, we define a function to determine the concentration of ligand at a given position according to our formula, which will use distance as a subroutine.

# Calculates the concentration of a given position
# Exponential gradient, the exponent follows a linear relationship with distance to center
# Input: position pos, [x, y]
# Returns the concentration, a float.
def calc_concentration(pos):
    dist = distance(pos, ligand_center)
    exponent = (1 - dist / origin_to_center) * (center_exponent - start_exponent) + start_exponent
    return 10 ** exponent

The following tumble_move function chooses a direction of movement as a uniform random number between 0 and 2π radians. As noted previously, the duration of a cell's tumble follows an exponential distribution with mean equal to 0.1s.

# Samples the new direction and time of a tumble
# Calculates projection on the Horizontal and Vertical direction for the next move
# No input
# Return the new direction (float), the horizontal movement projection (float), the vertical one (float), and the tumble time (float)
def tumble_move():
    #Sample the new direction uniformly from 0 to 2pi, record as a float
    new_dir = np.random.uniform(low = 0.0, high = 2 * math.pi)

    projection_h = math.cos(new_dir) #displacement projected on Horizontal direction for next run, float
    projection_v = math.sin(new_dir) #displacement projected on Vertical direction for next run, float

    #Length of the tumbling sampled from exponential distribution with mean=0.1, float
    tumble_time = np.random.exponential(tumble_time_mu)

    return new_dir, projection_h, projection_v, tumble_time

In a given run of the simulation, we keep track of the total time t, and we only continue our simulation if t < duration, where duration is a parameter indicating how long to run the simulation. If t < duration, then we apply the following steps to a given cell.

• Sample the run duration curr_run_time from an exponential distribution with mean run_time_expected;
• run for curr_run_time seconds in the current direction;
• sample the duration of tumble tumble_time;
• determine the new direction of the simulated bacterium by calling the tumble_move function discussed above;
• increment t by curr_run_time and tumble_time.

These steps are achieved by the simulate_std_random function below, which takes the number of cells num_cells to simulate, the duration to run each simulation for, and the mean time of a single run run_time_expected. This function stores the trajectories of these cells in a variable named path.
# This function performs simulation
# Input: number of cells to simulate (int), how many seconds (int), the expected run time before tumble (float)
# Return: the simulated trajectories path: array of shape (num_cells, duration+1, 2)
def simulate_std_random(num_cells, duration, run_time_expected):
    #Takes the shape (num_cells, duration+1, 2)
    #any point [x,y] on the simulated trajectories can be accessed via path[cell, time]
    path = np.zeros((num_cells, duration + 1, 2))

    for rep in range(num_cells):
        # Initialize simulation
        t = 0 #record the time elapsed
        curr_position = np.array(start) #start at [0, 0]
        curr_direction, projection_h, projection_v, tumble_time = tumble_move() #Initialize direction randomly
        past_sec = 0

        while t < duration:
            #run
            curr_run_time = np.random.exponential(run_time_expected) #get run duration, float

            #displacement on either direction is calculated as the projection * speed * time
            #update current position by summing old position and displacement
            curr_position = curr_position + np.array([projection_h, projection_v]) * speed * curr_run_time

            #tumble
            curr_direction, projection_h, projection_v, tumble_time = tumble_move()

            #increment time
            t += (curr_run_time + tumble_time)

            #record position, approximated for each integer second
            curr_sec = int(t)
            for sec in range(past_sec, min(curr_sec, duration) + 1):
                #fill values from last time point to current time point
                path[rep, sec] = curr_position.copy()
            past_sec = curr_sec

    return path

Now that we have established parameters and written the functions that we will need, we will run our simulation with num_cells equal to 3 and duration equal to 800 to get a rough idea of what the trajectories of our simulated cells will look like.

#Run simulation for 3 cells, Plot path
duration = 800 #seconds, duration of the simulation, int
num_cells = 3 #number of cells, int
origin_to_center = distance(start, ligand_center) #Update the global constant
run_time_expected = 1.0 #expected run time before tumble, float

#Calls the simulate function
path = simulate_std_random(num_cells, duration, run_time_expected) #get the simulated trajectories
print(path[:,-1,:]) #print the terminal position of each simulation

## Visualizing simulated cell trajectories

Now that we have generated the data of our randomly walking cells, our next step is to plot these trajectories using Matplotlib. We will color-code the background ligand concentration. The ligand concentrations at each position (a, b), where a and b are both integers, can be represented using a matrix, and we take the logarithm of each value of this matrix to better color our exponential gradient. That is, a value of 10^8 will be converted to 8, and a value of 10^4 will be converted to 4. A white background color will indicate a low ligand concentration, while red indicates high concentration.
#Below are all for plotting purposes
#Initialize the plot with 1*1 subplot of size 8*8
fig, ax = plt.subplots(1, 1, figsize = (8, 8))

#First set color map to color-code the concentration
mycolor = [[256, 256, 256], [256, 255, 254], [256, 253, 250], [256, 250, 240], [255, 236, 209],
           [255, 218, 185], [251, 196, 171], [248, 173, 157], [244, 151, 142], [240, 128, 128]] #RGB values, from coolors:)
for i in mycolor:
    for j in range(len(i)):
        i[j] *= (1/256) #normalize to 0~1 range
cmap_color = colors.LinearSegmentedColormap.from_list('my_list', mycolor) #Linearly segment these colors to create a continuous color map

#Store the concentrations for each integer position in a matrix
conc_matrix = np.zeros((4000, 4000)) #we will display from [-1000, -1000] to [3000, 3000]
for i in range(4000):
    for j in range(4000):
        conc_matrix[i][j] = math.log10(calc_concentration([i - 1000, j - 1000])) #calculate the exponents of concentrations at each location

#Simulate the gradient distribution, plot as a heatmap
ax.imshow(conc_matrix.T, cmap=cmap_color, interpolation='nearest', extent = [-1000, 3000, -1000, 3000], origin = 'lower')

Next, we plot each cell's trajectory over each of its tumbling points. To visualize older vs. newer time points, we set the color as a function of t so that newer points have lighter colors.

#Plot simulation results
time_frac = 1.0 / duration

#Plot the trajectories. Time progress: dark -> colorful
for t in range(duration):
    ax.plot(path[0,t,0], path[0,t,1], 'o', markersize = 1, color = (0.2 * time_frac * t, 0.85 * time_frac * t, 0.8 * time_frac * t))
    ax.plot(path[1,t,0], path[1,t,1], 'o', markersize = 1, color = (0.85 * time_frac * t, 0.2 * time_frac * t, 0.9 * time_frac * t))
    ax.plot(path[2,t,0], path[2,t,1], 'o', markersize = 1, color = (0.4 * time_frac * t, 0.85 * time_frac * t, 0.1 * time_frac * t))

ax.plot(start[0], start[1], 'ko', markersize = 8) #Mark the starting point [0, 0]
for i in range(num_cells):
    ax.plot(path[i,-1,0], path[i,-1,1], 'ro', markersize = 8) #Mark the terminal points for each cell

We mark the starting point of each cell's trajectory with a black dot and the ending point of the trajectory with a red dot. We place a blue cross over the goal. Finally, we set axis limits, assign axis labels, and generate the plot.

ax.plot(1500, 1500, 'bX', markersize = 8) #Mark the highest concentration point [1500, 1500]
ax.set_title("Pure random walk \n Background: avg tumble every {} s".format(run_time_expected), x = 0.5, y = 0.87)
ax.set_xlim(-1000, 3000)
ax.set_ylim(-1000, 3000)
ax.set_xlabel("position in um")
ax.set_ylabel("position in um")
plt.show()

STOP: Run the notebook. What do you observe? Are the cells moving up the gradient? Is this a good strategy for a bacterium to use to search for food?

## Quantifying the performance of our search algorithm

We already know from our work in previous modules that a random walk simulation can produce very different outcomes. To assess the performance of the random walk algorithm, we will simulate num_cells = 500 cells and duration = 1500 seconds. Visualizing the trajectories for this many cells will be messy. Instead, we will measure the distance between each cell and the target at the end of the simulation, and then take the average and standard deviation of this value over all cells.
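Before running it, a back-of-the-envelope estimate (our addition, not from the original tutorial) suggests what to expect. For an unbiased walk whose steps have i.i.d. lengths and uniformly random directions, the cross terms cancel in expectation, so E[|R|²] = N · E[s²]; with exponentially distributed run times, this predicts a root-mean-square displacement of only about half the distance to the goal:

import math

# Rough RMS displacement of the pure random walk (illustrative estimate)
run_mean, tumble_mean, speed, duration = 1.0, 0.1, 20, 1500
n_steps = duration / (run_mean + tumble_mean)  # ~1364 run-tumble cycles
mean_sq_step = 2 * (speed * run_mean) ** 2     # E[s^2] = 2*mu^2 for exponential step lengths
print(math.sqrt(n_steps * mean_sq_step))       # ~1045 um, vs. the goal at ~2121 um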
#Run simulation for 500 cells, plot average distance to highest concentration point
duration = 1500 #seconds, duration of the simulation
num_cells = 500 #number of cells, int
origin_to_center = distance(start, ligand_center) #Update the global constant
run_time_expected = 1.0 #expected run time before tumble, float

all_distance = np.zeros((num_cells, duration)) #Initialize to store results, array with shape (num_cells, duration)

paths = simulate_std_random(num_cells, duration, run_time_expected) #run simulation

for cell in range(num_cells):
    for time in range(duration):
        pos = paths[cell, time] #get the position [x,y] for the cell at a given time
        dist = distance(ligand_center, pos) #calculate the Euclidean distance between that position to [1500, 1500]
        all_distance[cell, time] = dist #record this distance

# For all time, take average and standard deviation over all cells.
all_dist_avg = np.mean(all_distance, axis = 0) #Calculate average over cells, array of shape (duration,)
all_dist_std = np.std(all_distance, axis = 0) #Calculate the standard deviation, array of shape (duration,)

We will then plot the average and standard deviation of the distance to the goal using the plot and fill_between functions.

#Below are all for plotting purposes
#Define the colors to use
colors1 = colorspace.qualitative_hcl(h=[0, 300.], c = 60, l = 70, palette = "dynamic")(1)

xs = np.arange(0, duration) #Set the x-axis for plot: time points. Array of integers of shape (duration,)

fig, ax = plt.subplots(1, 1, figsize = (10, 8)) #Initialize the plot with 1*1 subplot of size 10*8
mu, sig = all_dist_avg, all_dist_std

#Plot average distance vs. time
ax.plot(xs, mu, lw=2, label="pure random walk, background tumble every {} second".format(run_time_expected), color=colors1[0])
#Fill in average +/- one standard deviation vs. time
ax.fill_between(xs, mu + sig, mu - sig, color = colors1[0], alpha=0.15)

ax.set_title("Average distance to highest concentration")
ax.set_xlabel('time (s)')
ax.set_ylabel('distance to center (µm)')
ax.hlines(0, 0, duration, colors='gray', linestyles='dashed', label='concentration 10^8')
ax.legend(loc='upper right')
ax.grid()

STOP: Before visualizing the average distances at each time step, what do you expect the average distance to the goal to be?

Now, run the notebook. The colored line indicates the average distance of the 500 cells; the shaded area corresponds to one standard deviation from the mean; and the grey dashed line corresponds to a maximum ligand concentration of 10^8. As mentioned, you may not be surprised that this simple random walk strategy is not very effective at finding the goal. Not to worry: in the main text, we discuss how to adapt this strategy into one that better reflects how E. coli explores its environment based on what we have learned in this module about chemotaxis.

1. Saragosti J., Silberzan P., Buguin A. 2012. Modeling E. coli tumbles by rotational diffusion. Implications for chemotaxis. PLoS One 7(4):e35412. available online
2. Baker MD, Wolanin PM, Stock JB. 2005. Signal transduction in bacterial chemotaxis. BioEssays 28:9-22. Available online
2022-05-21 18:37:38
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4635462760925293, "perplexity": 3868.9473294908134}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662540268.46/warc/CC-MAIN-20220521174536-20220521204536-00591.warc.gz"}
https://www.tec-science.com/mechanical-power-transmission/planetary-gear/fundamental-equation-of-planetary-gears-willis-equation/
The Willis equation describes the motion of the individual gears of a planetary gearbox (epicyclic gear).

## Superposition of motions

The speed changes of planetary gearboxes are no longer as easy to understand as those of stationary transmissions. This is due to the fact that the motion of the rotating planet gears is ultimately a superposition of three different motions. The motion no longer consists of a simple rotation around its own axis; rather, the axis itself performs an additional circular motion around the axis of the sun gear, while the planet gear also performs an additional circular motion because of the rotation of the sun gear. Thus, the motion of a rotating planet gear can be traced back to the superposition of three separately observable motions:

1. rotation of the carrier around the sun gear
2. rotation of the planet gear around its own center of gravity
3. rotation of the sun gear

However, the motions are not independent of each other, because the planet gear rolls on the sun gear. Thus the diameter ratio between the sun gear and the planet gear determines how often the planet gear rotates around its own axis while it moves once around the sun gear. In order to derive the relationship of the rotational speeds between the sun gear, the planet gear and the carrier, the above-mentioned motions are first described separately and then superposed. For the sake of clarity, the gears are assumed to be (pitch) cylinders.

## Rotation of the carrier around the sun gear

If the sun gear stands still and the planet gear is locked firmly to the carrier, then the swept angle of the carrier φc corresponds to the angular position of the planet gear φp1.

\begin{align} \label{P1} &\underline{\varphi_{p1} = \varphi_c} \\[5px] \end{align}

## Rotation of the planet gear around its own center of gravity

In fact, the planet gear will roll on the sun gear when mounted rotatably on the carrier and thus rotate around its own center of gravity. The planet gear will thus rotate by an additional angle φp2. If one considers a mere rolling motion, then the arc length bc, which the carrier has covered on the sun gear, corresponds exactly to the arc length bp2, by which the planet gear has moved on its circumference. The additional angle φp2 can be determined by the radian measure as follows:

\begin{align} &b_{p2} = b_c \\[5px] &\tfrac{d_p}{2} \cdot \varphi_{p2} = \tfrac{d_s}{2} \cdot \varphi_c \\[5px] \label{P2} &\underline{\varphi_{p2} = \frac{d_s}{d_p} \cdot \varphi_c} \\[5px] \end{align}

## Rotation of the sun gear

The carrier is now held in position and the sun gear is rotated clockwise by an angle φs. In this case, the planet gear will turn counterclockwise by an angle φp3. Analogous to the case before, the following statement applies: The arc length bs at the circumference of the sun gear corresponds to the arc length bp3, by which the planet gear has moved on its circumference:

\begin{align} &b_{p3} = -b_s \\[5px] &\tfrac{d_p}{2} \cdot \varphi_{p3} = -\tfrac{d_s}{2} \cdot \varphi_s \\[5px] \label{P3} &\underline{\varphi_{p3} = -\frac{d_s}{d_p} \cdot \varphi_s} \\[5px] \end{align}

The negative sign indicates that the motion of the planet gear is in the opposite direction to the motion of the sun gear.
## Superposition of the different motions

The motions of the planet gear according to the equations (\ref{P1}), (\ref{P2}) and (\ref{P3}), which have been considered separately so far, can now be superposed to give the total motion:

\begin{align} &\varphi_p = \varphi_{p1} + \varphi_{p2} + \varphi_{p3} \\[5px] \label{P} &\underline{\varphi_{p} = \varphi_c + \frac{d_s}{d_p} \cdot \varphi_c - \frac{d_s}{d_p} \cdot \varphi_s} \\[5px] \end{align}

The angular positions φ contained in this equation result from the respective angular velocity ω and the elapsed time t (φ=ω⋅t), whereby the angular velocity is directly related to the rotational speed n by ω=2π⋅n:

\begin{align} &\varphi = \omega \cdot t ~~~ \text{with} ~~~ \omega = 2 \pi \cdot n ~~~\text{applies:} \\[5px] \label{varp} &\underline{\varphi = 2 \pi \cdot n \cdot t} \\[5px] \end{align}

If equation (\ref{varp}) is used in equation (\ref{P}), the following relationship ultimately results between the rotational speed of the planet gear np and the rotational speeds of the sun gear ns and the carrier nc:

\begin{align} &2 \pi \cdot n_p \cdot t = 2 \pi \cdot n_c \cdot t + \frac{d_s}{d_p} \cdot 2 \pi \cdot n_c \cdot t - \frac{d_s}{d_p} \cdot 2 \pi \cdot n_s \cdot t \\[5px] &n_p = n_c + \frac{d_s}{d_p} \cdot n_c - \frac{d_s}{d_p} \cdot n_s ~~~~~~~~\text{|} \cdot d_p \\[5px] &n_p \cdot d_p = n_c \cdot d_p + d_s \cdot n_c - d_s \cdot n_s \\[5px] \label{g} &\boxed{n_p \cdot d_p = n_c \cdot \left(d_p + d_s \right) - n_s \cdot d_s} \\[5px] \end{align}

Since the pitch circle diameter d of a gear is directly proportional to the number of teeth z, the equation above can also be expressed in terms of the respective numbers of teeth:

\begin{align} \label{pln} &\boxed{n_p \cdot z_p = n_c \cdot \left(z_p + z_s \right) - n_s \cdot z_s} \\[5px] \end{align}

This equation is called the fundamental formula of planetary gears (also called the Willis equation). The Willis equation is used to determine the different transmission ratios depending on the mode of operation, which will be explained in more detail in the article Willis equation for planetary gears.
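As a quick numeric illustration (our own sketch; the tooth counts and speeds are arbitrary, not from the article), the fundamental equation can be solved directly for the planet gear speed:

def planet_speed(n_c, n_s, z_s, z_p):
    # Willis equation n_p*z_p = n_c*(z_p + z_s) - n_s*z_s, solved for n_p
    return (n_c * (z_p + z_s) - n_s * z_s) / z_p

# Fixed carrier (n_c = 0): the planet counter-rotates at n_s * z_s / z_p
print(planet_speed(n_c=0, n_s=1000, z_s=30, z_p=15))  # -2000.0 rpm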
2022-05-23 18:15:57
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9987568855285645, "perplexity": 1678.4839444724978}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662560022.71/warc/CC-MAIN-20220523163515-20220523193515-00633.warc.gz"}
http://math.stackexchange.com/questions/48815/agreement-of-q-expansion-of-modular-forms/4885
# Agreement of $q$-expansion of modular forms If I have modular functions $f$ and $g$ with $f = a_{1} + a_{2}q + \cdots$ and $g = b_{1} + b_{2}q + \cdots$ both $q$-expansions, why does/how does it follow $f = g$ after checking only finitely many terms? - Because of finite dimensionality of the space of modular forms under consideration? – John M Jul 1 '11 at 4:31 @John You might as well write that up as an answer. – Alex B. Jul 1 '11 at 6:36 Suppose $f$ and $g$ both have weight $k$ and level $\Gamma$. As John M notes, it is clear that there must exist some constant $N$ depending on $\Gamma$ and $k$ such that if $a_i = b_i$ for $0 \le i \le N$, then $f = g$, since the space $M_k(\Gamma)$ in which both forms live is finite-dimensional. This doesn't help you find $N$ though. The simplest general result is the "Sturm bound", which shows that one may take $$N = \frac{k d_\Gamma}{12}$$ where $d_\Gamma$ is the index of the image of $\Gamma$ in $\operatorname{PSL}_2(\mathbb{Z})$, which is either equal to or half of the index of $\Gamma$ in $\operatorname{SL}_2(\mathbb{Z})$ depending on whether or not $-1 \in \Gamma$. This is well explained in William Stein's free online textbook "Computing with Modular Forms".
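To make the bound concrete, here is a tiny helper (my own sketch, not from the answer) computing the Sturm bound:

```python
# Sturm bound N = floor(k * d_Gamma / 12), where d_Gamma is the index of
# the image of Gamma in PSL_2(Z); agreement of a_0, ..., a_N forces f = g.
from math import floor

def sturm_bound(k, d_gamma):
    return floor(k * d_gamma / 12)

# Level SL_2(Z) (d_gamma = 1), weight 12: only a_0 and a_1 need checking.
print(sturm_bound(12, 1))  # 1
```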
2016-07-29 14:05:09
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7639815807342529, "perplexity": 167.1185977480959}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-30/segments/1469257830091.67/warc/CC-MAIN-20160723071030-00137-ip-10-185-27-174.ec2.internal.warc.gz"}
http://physics.stackexchange.com/questions/60279/contact-term-and-schwinger-term
# Contact Term and Schwinger Term In field theory, when 4-divergences of time-ordered Green's functions are computed, there are extra terms known as 'Schwinger terms'. When deriving the quantum equations of motion for time-ordered Green's functions, there are extra terms known as 'Contact terms'. Are contact terms and Schwinger terms one and the same? Or is one a special case of the other? Or are they completely unrelated things? [There's also some kind of relationship with $\mathcal{L}_\text{int}\neq\mathcal{H}_\text{int}$, which I can't quite put my finger on.] - Suggestion to the question (v1): While you are at it, you could also ask about 'Seagull terms'. – Qmechanic Apr 6 '13 at 23:16
2016-05-06 00:14:48
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.41921672224998474, "perplexity": 926.0853972483239}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-18/segments/1461861700245.92/warc/CC-MAIN-20160428164140-00097-ip-10-239-7-51.ec2.internal.warc.gz"}
https://yalmip.github.io/debugginginfeasible/
# Debugging infeasible models You've created your massive 5000 lines of code model, and when you run it, the solver claims it is infeasible sol = optimize(Constraints,Objective) yalmiptime: 0.2192 solvertime: 0.2498 info: 'Infeasible problem (MOSEK)' problem: 1 Where to start… Before asking a question on the YALMIP forum, make sure you've at least covered the first four tips here. ### 1. Absolutely most common mistake Code looks like this x = sdpvar(n,m); Works like a charm as long as n and m are different, but when they are equal, you most likely don't want the symmetric matrix this will create. Hence, you should have had x = sdpvar(n,m,'full'); ### 2. Is it really infeasible? To begin with, get rid of the objective function. An objective function cannot generate any infeasibility, and in the feasibility analysis it is just unnecessary to keep it. You might have stumbled into a bug in the solver presolve code or something, which causes it to make an incorrect statement. Some solvers confuse infeasibility with an unbounded objective; if that is the case, you would have to debug your unbounded model instead. Hence, if the problem without the objective is feasible, the problem is in the objective and not in the constraints optimize(Constraints) yalmiptime: 0.1859 solvertime: 0.2381 info: 'Infeasible problem (MOSEK)' problem: 1 Nope, not that simple… ### 3. Get a second opinion Solvers can fail, so try another solver. optimize(Constraints,[],sdpsettings('solver','gurobi')) yalmiptime: 0.3514 solvertime: 0.2166 info: 'Infeasible problem (GUROBI)' problem: 1 OK, it is unlikely that two solvers make the same incorrect judgement. ### 4. Clean up and simplify your model Searching for a needle is easier in a small clean room than in a huge messy room. You don't have to debug your complete model if the infeasibility remains when you remove most parts of it. Make a quick effort to remove stuff. You might find the bug by simply looking at the condensed code… ### 5. Do you have a known feasible solution? If you have a known feasible solution, use that and see if your model actually is feasible when you use it. Simply assign the solution and check the constraints assign(x,claimedfeasible); check(Constraints) +++++++++++++++++++++++++++++++++++++++++++++++++++++ | ID| Constraint| Primal residual| +++++++++++++++++++++++++++++++++++++++++++++++++++++ | #1| Elementwise inequality| 1| | #2| Elementwise inequality| 0| | #3| Elementwise inequality| 1| | #4| Elementwise inequality| 0| | #5| Elementwise inequality| -1| | #6| Elementwise inequality| 0| | #7| Elementwise inequality| 1| | #8| Elementwise inequality| 0| | #9| Elementwise inequality| 1| | #10| Elementwise inequality| 0| | #11| Equality constraint| 0| +++++++++++++++++++++++++++++++++++++++++++++++++++++ If this had shown all constraints feasible, you would have found a bug in both YALMIP and all the solvers you've tested. Most likely it will show that some constraint is infeasible, as in this case where constraint 5 is violated. So, constraint 5? With the set of constraints listed above, it might be a nightmare to figure out which constraint this actually is. This is where tagging constraints might help. 
Add some nice tags in your code that define the constraints, and it might look like this instead ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ | ID| Constraint| Primal residual| Tag| ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ | #1| Elementwise inequality| 1| Fubar constraints 1| | #2| Elementwise inequality| 0| Foo constraints 1| | #3| Elementwise inequality| 1| Fubar constraints 2| | #4| Elementwise inequality| 0| Foo constraints 2| | #5| Elementwise inequality| -1| Fubar constraints 3| | #6| Elementwise inequality| 0| Foo constraints 3| | #7| Elementwise inequality| 1| Fubar constraints 4| | #8| Elementwise inequality| 0| Foo constraints 4| | #9| Elementwise inequality| 1| Fubar constraints 5| | #10| Elementwise inequality| 0| Foo constraints 5| | #11| Equality constraint| 0| | ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ Hence, the Fubar constraint you set up in iteration 3 is not correct, or at least not consistent with the solution that you claim is valid. An alternative is to modularize your code a bit and create sub-components % Create The Fubar constraints Fubar = ... % Create The Foo constraints Foo = ... optimize([Fubar, Foo]) check(Fubar) check(Foo) ### 6. Solve a model with slacked constraints Try to add slacks on constraints, and minimize the slack. Very often, you will see non-zero slacks on just a few constraints, and they are often the guilty ones. At least it helps you to home in on problematic constraints. Hence, we replace Constraints = [] for i = 1:N Constraints = [Constraints, something1 <= 0]; Constraints = [Constraints, something2 == 0]; end with some slacked variant, such as slack1 = sdpvar(N,1); slack2 = sdpvar(N,1); Constraints = [slack1>=0] for i = 1:N Constraints = [Constraints, something1 <= slack1(i)]; Constraints = [Constraints, something2 == slack2(i)]; end and solve the problem while trying to drive the slacks to zero optimize(Constraints, sum(slack1) + sum(abs(slack2))) Checking the values of the slacks could reveal something value(slack1) ans = 0 0 0 0 0.5000 value(slack2) ans = 0 0 0 0 0.5000 The fifth constraint in each of the two sets appears to be problematic, as we cannot find a solution where both are feasible. ### 7. Bisect your constraints Remove constraints, and see when the model becomes feasible. In the end, this might be your only option to home in on the problems in your code. You can do this either by commenting out parts in your code, or by indexing from the full set. In the following example, we have a model with 11 constraints which is infeasible sol = optimize(Constraints(1:5));if sol.problem==0;display('Feasible');else;display('Infeasible');end Feasible sol = optimize(Constraints(6:11));if sol.problem==0;display('Feasible');else;display('Infeasible');end Feasible Nasty: there is a combination of some of the first 5 constraints (which are feasible on their own) which, combined with the last 6 constraints (also feasible on their own), causes infeasibility. sol = optimize(Constraints(1:8));if sol.problem==0;display('Feasible');else;display('Infeasible');end Feasible sol = optimize(Constraints(1:10));if sol.problem==0;display('Feasible');else;display('Infeasible');end Feasible From this we know that the problem occurs when the eleventh constraint is added to the model, in combination with the others. Now you just have to come up with strategies to dig further. Essentially some kind of bisection. 
sol = optimize(Constraints([1:5 11]));if sol.problem==0;display('Feasible');else;display('Infeasible');end Infeasible sol = optimize(Constraints([1:3 11]));if sol.problem==0;display('Feasible');else;display('Infeasible');end Feasible sol = optimize(Constraints([4:5 11]));if sol.problem==0;display('Feasible');else;display('Infeasible');end Infeasible sol = optimize(Constraints([5 11]));if sol.problem==0;display('Feasible');else;display('Infeasible');end Infeasible There we have it: constraints 5 and 11 are inconsistent. Figure out why! ### Debugging more complex models In more complex models with interacting constraints, you might need more advanced strategies. One such idea is to re-order groups of constraints to detect problems. Begin by structuring your model into logical sets of constraints (clean up your messy code!), so your initial problematic model looks something like this (remember, we only want to detect the reason for infeasibility, so we only solve the feasibility problem) Model = []; % Create the banana constraints BananaConstraints = ... Model = [Model,BananaConstraints]; % Create the apple constraints AppleConstraints = ... Model = [Model,AppleConstraints]; % Create the pear constraints PearConstraints = ... Model = [Model,PearConstraints]; % Create the salary constraints SalaryConstraints = ... Model = [Model,SalaryConstraints]; % Create the weather constraints WeatherConstraints = ... Model = [Model,WeatherConstraints]; % Create the objective constraints ObjectiveConstraints = ... Model = [Model,ObjectiveConstraints]; optimize(Model) You solve this nicely structured problem, and it turns out to be infeasible. What you do now is solve the problem after every addition of a new set of constraints, and find out where it first fails Model = []; % Create the banana constraints BananaConstraints = ... Model = [Model,BananaConstraints]; optimize(Model) % OK, works % Create the apple constraints AppleConstraints = ... Model = [Model,AppleConstraints]; optimize(Model) % ok, works % Create the pear constraints PearConstraints = ... Model = [Model,PearConstraints]; optimize(Model) % ok, works % Create the salary constraints SalaryConstraints = ... Model = [Model,SalaryConstraints]; optimize(Model) % ok, works % Create the weather constraints WeatherConstraints = ... Model = [Model,WeatherConstraints]; optimize(Model) % ok, works % Create the objective constraints ObjectiveConstraints = ... Model = [Model,ObjectiveConstraints]; optimize(Model) % fail! Hence, all you know now is that the last set of constraints turns the whole model infeasible, but that does not mean that the error is in that block alone. Indeed, we can check it individually, and see that it is feasible. optimize(ObjectiveConstraints) % works Instead, we move that block of constraints to the top, and perform the procedure again Model = []; % Create the objective constraints ObjectiveConstraints = ... Model = [Model,ObjectiveConstraints]; optimize(Model) % OK, works % Create the banana constraints BananaConstraints = ... Model = [Model,BananaConstraints]; optimize(Model) % OK, works % Create the apple constraints AppleConstraints = ... Model = [Model,AppleConstraints]; optimize(Model) % ok, works % Create the pear constraints PearConstraints = ... Model = [Model,PearConstraints]; optimize(Model) % ok, works % Create the salary constraints SalaryConstraints = ... Model = [Model,SalaryConstraints]; optimize(Model) % fails OK, it failed when we came to the salary constraints this time. 
Re-shuffle again Model = []; % Create the salary constraints SalaryConstraints = ... Model = [Model,SalaryConstraints]; optimize(Model) % OK, works % Create the objective constraints ObjectiveConstraints = ... Model = [Model,ObjectiveConstraints]; optimize(Model) % OK, works % Create the banana constraints BananaConstraints = ... Model = [Model,BananaConstraints]; optimize(Model) % OK, works % Create the apple constraints AppleConstraints = ... Model = [Model,AppleConstraints]; optimize(Model) % fail Apple constraints caused problems. Re-shuffle Model = []; % Create the apple constraints AppleConstraints = ... Model = [Model,AppleConstraints]; optimize(Model) % OK, works % Create the salary constraints SalaryConstraints = ... Model = [Model,SalaryConstraints]; optimize(Model) % OK, works % Create the objective constraints ObjectiveConstraints = ... Model = [Model,ObjectiveConstraints]; optimize(Model) % fails Model fails when apple constraints, salary constraints and objective constraints are used together. Hence, we have reduced the model to a much smaller model which we now must analyze in greater detail to understand why it is infeasible.
2020-07-07 11:24:41
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5886356234550476, "perplexity": 6824.358432047934}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655892516.24/warc/CC-MAIN-20200707111607-20200707141607-00509.warc.gz"}
https://proofwiki.org/wiki/Fourier_Series/x_over_0_to_2,_x-2_over_2_to_4/Mistake
# Fourier Series/x over 0 to 2, x-2 over 2 to 4/Mistake ## Mistake Find the half-range cosine series for $f \left({x}\right) = \begin{cases} 1 & , 0 < x < 2 \\ x - 2 & , 2 < x < 4 \end{cases}$ for the half-range $0 < x < 4$. The subsequent analysis is performed for the function: $f \left({x}\right) = \begin{cases} x & : 0 < x \le 2 \\ x - 2 & : 2 < x < 4 \end{cases}$ and it is questionable whether it has actually been performed accurately.
2019-12-11 11:03:08
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9274622797966003, "perplexity": 2829.247872736922}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540530857.12/warc/CC-MAIN-20191211103140-20191211131140-00160.warc.gz"}
https://www.homebuiltairplanes.com/forums/threads/cnc-machine-for-home-building-aircraft.21677/page-4
# CNC Machine - for home-building aircraft Discussion in 'Workshop Tips and Secrets / Tools' started by Rienk, Apr 17, 2015. 1. May 17, 2015 ### ScaleBirdsScott #### Well-Known Member Joined: Feb 10, 2015 Messages: 1,033 Likes Received: 656 Location: Uncasville, CT I've got no problem getting quality import stuff, and yet I can see that for many companies, even with some disparity in production cost, they don't see much benefit after all the other costs of export in marketing their high-end stuff to the USA: first they have to fight the negative image of cheap Asian imports, and second they have to compete with the domestic preference in higher-end goods. I'll point to a situation I went through recently where I was trying to decide on a 4" machinist's vise for my new desktop mill. I could go $150-200 for a cheap import vise from a variety of sources, I could spend $400 for a reputable US brand that sells quality import parts (Glacern), or I could have gone with a full-domestic, known quality standard of Kurt for $550. On one hand, the Kurt stuff supports US industry and that's good; on the other hand, the Glacern would be almost identical in quality and utility as far as my concerns go, and the $150 difference is enough to buy a nice set of collets. In the end I had to go with bang for the buck, so I went with Glacern, as they were offering something made overseas (I believe in Taiwan) to good standards, and charging enough less than the US-made equivalent to justify the savings. If the price difference was $50, I would have gone Kurt, no question. Another factor that I will admit is that the Glacern is black and steel, while the Kurt is blue and steel, and the blue and steel matches the color scheme of my mill very nicely, which would have actually been worth some price increase on top of whatever the other differences were. (The epilogue to that anecdote is that a week later I got an email from Enco with a 20% off code that would have dropped the price of the Kurt vise to be the same as the Glacern, and I contemplated putting the Glacern up on ebay for a few dollars off my cost, just so I could turn around and get the Kurt... but I held off. I may still decide to go that way next time one of those specials comes around... so if anyone wants a brand new Glacern 4" vise for a few bucks less than retail...) I've got zero problem buying import goods if the quality matches my needs for the cost, is the bottom line, and I don't mind buying something that needs a little TLC if the costs are worth it. But I also will pay extra for something that has the right fit and finish (and color scheme) if it's reasonably within a budget. 2. May 18, 2015 ### WonderousMountain #### Well-Known Member Joined: Apr 10, 2010 Messages: 1,853 Likes Received: 192 Location: Clatsop, Or I could really go for this as I'm learning CAD for the purpose of making things. Of course, I could probably finish learning what I need to and make one of these machines in the time it would take to iron out the basics on this site. LuPi 3. May 18, 2015 ### rv7charlie #### Well-Known Member Joined: Nov 17, 2014 Messages: 435 Likes Received: 170 Location: Jackson In support of 'Chinese stuff can be good', I offer this: When I decided to purchase an AC/DC TIG welder a few years ago, I considered purchasing one of the 'mystery brand' Chinese models that are available almost everywhere, even the local pro welding supply shops. 
I couldn't bring myself to gamble on one, so I watched ebay until I found a nice used Miller solid state variable frequency unit for about 1/2 the new price. Works fine, has lasted a long time. But.... after purchase, I started seeing reviews by professional welders of the various Chinese models, with almost universal thumbs-up ratings. Basically saying: if you want to do high volume production work, it might be worth buying one of the 'big 3' American brands, but for all others, the Chinese stuff is just fine. Then one of my friends decided to scratch build a Cub. He bought one of the Chinese welders, brand new, with more features than my Miller, for about half what I paid for my used Miller. He's got the fuselage almost complete and loves the welder. And guess what; all the new 'big 3' solid state welders are made in Asia, too. Anyone want to buy a nice used Miller Dynasty? Charlie 4. May 26, 2015 ### autoreply #### Moderator Joined: Jul 8, 2009 Messages: 10,732 Likes Received: 2,542 Location: Rotterdam, Netherlands I have some difficulty believing that. Waterjet cutting runs from 1-5 euros per square meter, dependent on complexity of the cut lines. Probably less for thin MDF. Might be hard to find down under, but shipping thin MDF is pretty cheap too. 5. May 26, 2015 ### ScaleBirdsScott #### Well-Known Member Joined: Feb 10, 2015 Messages: 1,033 Likes Received: 656 Location: Uncasville, CT I had a similar epiphany: it cost us something like $350 at a local wood remodeling/sign shop to have shapes cut from a single piece of MDF. It also took over a week, because there was some kind of glitch where their software was having problems reading my particular files correctly, and someone in the process of converting my lines in CAM dragged a spline improperly, and a part was misshapen. So it took further time of at least a week to get a new one of those parts made after we realized it was off and traced the issue back. Similarly, we had a waterjet service cut a few sheets of aluminum for us, and the result was parts that were very roughly cut, delays of weeks to get them cut in the first place, a $150 minimum charge to do the CAM, and a price somewhere around $100/sheet. Given that over the scope of an airplane such as the one I'm designing it might require 10-ish sheets of MDF for forming and fixturing, and probably 20-ish sheets of aluminum to be cut out on a CNC, at a minimum I'd have to spend a few grand in shop fees, plus likely a grand more in CAM, and that's hoping I never have to change the design or remake parts, and that I can afford a 2-week turnaround on a set of parts. If one has ALL the parts on file, and they are proven, and it can all be done at once, it might make sense to hire it out; you will probably get a better rate, and they can slot you in as it'll be a somewhat big job. But the price at best ends up cancelling out with a cheaper router build, and the utility of owning the machine and being able to tinker is worth a huge amount on its own. 6. May 26, 2015 ### autoreply #### Moderator Joined: Jul 8, 2009 Messages: 10,732 Likes Received: 2,542 Location: Rotterdam, Netherlands That all sounds awfully expensive. DXF should solve the issue of CAM. We paid literally fractions of the earlier mentioned amounts. How come? Great deal, or highly overcharged? 7. May 26, 2015 ### Jay Kempf Joined: Apr 13, 2009 Messages: 3,671 Likes Received: 935 Location: Warren, VT USA My guy doesn't charge for CAM. I hand him a DXF and it is always perfect. 
He even nests parts at the machine and adds shaker tabs if it is that sort of job, without any engineering charges. But AR, your prices are way below the average that I see. It's about run time, so it has nothing to do with price per square meter. It has to do with the part, the sheet, the nesting, the scrap scheme, etc... I have the experience to design parts and sheets to operate fast and reduce run time. Sharp inside corners are notorious slow spots, and not good designs anyway for any sheet metal parts that are stressed. But another question: how would you waterjet MDF? It isn't waterproof, so you will wreck and stain it just cutting it. Same with any plywood. Laser or router is a better solution. My guy waterjet cuts under the surface of a shallow pool of water. That means all parts have to be cleaned afterwards. Waterjet is great for glass, stainless, stone, aluminums, brass, plastic, rubber, and OK for steels that will be cleaned/blasted afterwards. 8. May 26, 2015 ### Rienk #### Well-Known Member Joined: Oct 11, 2008 Messages: 1,364 Likes Received: 190 Location: Santa Maria, CA (SMX) There is waterproof MDF - but it's expensive! We waterjet cut plywood all the time. The entire TS-1 was waterjet cut out of furniture grade birch ply - we love the stuff! The abrasive in the water does leave a stain (see photo), but that can easily be mitigated by purchasing plywood that is pre-coated with a light varnish; then there is no problem at all. Plus, with wood, the material is not submerged. BTW, don't bother cutting regular wood with a waterjet - especially more than 1" material; the jet will follow the grain, and the cut will not be even close to plumb. 9. May 26, 2015 ### Jay Kempf Joined: Apr 13, 2009 Messages: 3,671 Likes Received: 935 Location: Warren, VT USA I guess your waterjet guy doesn't use a water bath. Mine does for all the materials we have specified so far. Not sure if he offers the service to empty the bath for ply. I'll ask him. Might be useful. 10. May 27, 2015 ### Rienk #### Well-Known Member Joined: Oct 11, 2008 Messages: 1,364 Likes Received: 190 Location: Santa Maria, CA (SMX) Most newer waterjets have instant leveling capabilities, so they can lower and raise the water level based on the material being cut. However, other than glass, most thin material is rarely submerged. 11. Jun 7, 2015 ### JamesG #### Well-Known Member Joined: Feb 10, 2011 Messages: 2,408 Likes Received: 754 Location: Columbus, GA and Albuquerque, NM After 2 years of neglect after a move, I finally have my Taig mini CNC mill set up, lubed up, and dialed in again! I don't have anything to cut with it at the moment. But it's nice to hear it humming again. 12. Mar 21, 2016 ### Rienk #### Well-Known Member Joined: Oct 11, 2008 Messages: 1,364 Likes Received: 190 Location: Santa Maria, CA (SMX) I'm BAAaack! Amazing how bi-polar I am with the forum... on every night for months, and then don't even look at it for almost a year? Criminal. I'm still moving forward (baby steps) with setting up a MakerSpace in my community, but haven't done much about sourcing an inexpensive CNC router yet. Fritz, can you send me the information on yours (kit?) again? Thanks! 13. Mar 21, 2016 ### FritzW #### Well-Known Member Joined: Jan 31, 2011 Messages: 3,647 Likes Received: 3,262 Location: Las Cruces, NM Hi Rienk, welcome back. I've been missing the action on HBA for a while also (tied up with a chapter Waiex project). My machine is the old CNCRouterparts 4x8 with ebay steppers and electronics (1,600 oz/in steppers and a 2.6 kW spindle). I just talked a friend of mine through building a 4x4 pro version. 
He's using smaller stepper motors and a router (smart idea, lessons learned from my overkill machine). The machine has come in pretty handy on the Waiex project: a modified seat pan for the Waiex, and a bunch of canopy latch plates I made for the gang on the Sonex group. Rienk, the world needs a realistic home CNC machine to build the next generation of homebuilt airplanes with. I hope you come up with that machine. Fritz 14. Apr 18, 2016 #### Well-Known Member Joined: Jan 27, 2012 Messages: 957 Likes Received: 270 Location: Glendale, CA Hello All, I just found this thread through the 21st century Volksplane thread. I designed a CNC router a few years ago, but due to many reasons I never built it. I am considering reviving it and building it this summer. I originally designed it to help cut the parts for a Bearhawk LSA. I drew most of the wing in SolidWorks and made a few mods to the ribs to favor routing. Unsure if I will ever build the Bearhawk, but the thought of what's happening in the 21st century Volksplane has me interested in this again. I designed it with a usable area to route a 4' x 5' sheet. I figured I could use alignment pins and move the sheet if I needed to route something longer. Below is a JPEG from SolidWorks, but the Z axis is not shown as I was not finished with it. I designed the machine around the cheap ballscrews found on Ebay right now, with all bearing journals from Ebay or Misumi. Bearings are pretty big Hiwin rail and truck type. At the time, there was a site with a bunch of surplus Hiwin stock and they were crazy cheap, so I designed it around what they had in stock. I plan to visit the site again to see what they have now and may have to make some design changes. Sadly I have a bunch of expensive THK rails and trucks, but none are long enough for my application, and buying additional rails to use what I have costs more than the overstock Hiwin items. If I remember correctly, all my mechanical and electrical components to build this came to right around 2K. This did not include money to build a frame. I purposely made many of the parts without any counterbores and just through holes, so as to be able to use each part as a left or a right. Also it allows most of the flat plate parts to be waterjet cut. Marc 15. Apr 18, 2016 ### Jay Kempf Joined: Apr 13, 2009 Messages: 3,671 Likes Received: 935 Location: Warren, VT USA Have you sourced all the parts and come up with a price for your purchase list? 16. Apr 18, 2016 ### Rienk #### Well-Known Member Joined: Oct 11, 2008 Messages: 1,364 Likes Received: 190 Location: Santa Maria, CA (SMX) Marc, any size bigger than 3'x4' would be useful, but being able to do full size sheets would be the most practical for any type of aircraft fabrication. Have you seen the Shopbot Buddy system with their 'Power Stick' option? That is what I think is the best way to go for the home shop CNC - a 24"x48" router table, with the ability to add something like the PowerStick option and do full size sheets; doing something similar for a few thousand dollars would be an incredible boon to the hobbyist/builder! Here is a link to a three minute video explaining their system... http://shopbottools.com/videos/Buddy and Powerstick 320x240.wmv 17. Apr 18, 2016 #### Well-Known Member Joined: Jan 27, 2012 Messages: 957 Likes Received: 270 Location: Glendale, CA Hello Jay, I only priced out the mechanical items, but with what was left I was coming up with an estimate of 2K to 2.5K. All of that may be out the window now, as I am unsure if the Hiwin surplus parts are still available. 
Anyone designing for something similar to a kit should design around not finding surplus items, but standard parts that would have a constant sell price. I could have designed around using Bishop Wisecarver V-groove wheels and such, but rigidity is key in a machine, and that system is not nearly as stout as a linear rail with a recirculating truck. V-groove wheels are available even cheaper now with the maker movement. Maybe I should rethink this design and go that route if I can't source the Hiwin rails, but I would be doing so at a cost of rigidity; for cutting thin aluminum it may be OK, though. I have 20 years of experience designing linear motion platforms for the robotics, automation and film industry, so I have done this quite a bit. I would like to run closed loop with Renishaw RGH24 linear encoders, but not everyone can afford those, though they can be found surplus as well. Marc 18. Apr 18, 2016 ### ScaleBirdsScott #### Well-Known Member Joined: Feb 10, 2015 Messages: 1,033 Likes Received: 656 Location: Uncasville, CT My table uses delrin V wheels on MakerSlide, and while it could be better, plenty better, it does do the job; and you can get a fairly smooth running, reasonably accurate machine for a low cost and minimal effort. The ideal setup for the future will be linear rail and whatnot, but until then I'm able to make do with this setup, and I'm cutting aluminum up to 1/8 inch. Just gotta take multiple passes and choose conservative speeds and feeds. Not suited for mass production, but if you're doing small volume it does OK. 19. Apr 18, 2016 Joined: Apr 13, 2009 Messages: 3,671
2019-11-15 17:26:23
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.1819782257080078, "perplexity": 3113.135022631437}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496668699.77/warc/CC-MAIN-20191115171915-20191115195915-00220.warc.gz"}
http://stats.stackexchange.com/questions/22512/what-is-rho-in-case-of-covx-y-rho-cdot-sdy-cdot-sdx-if-we-have-t
# What is $\rho$ in case of $Cov(X,Y) = \rho \cdot SD(Y) \cdot SD(X)$ if we have to undertake meta-analysis of sample correlations? [duplicate] $\rho$ has been interpreted by Mathai and Rathie in their book as linear correlation. What is the correct interpretation when the covariance equals $\rho$ multiplied by the standard deviation of $X$ and the standard deviation of $Y$? Suppose we are working with two random variables and assume that we have a multivariate normal distribution. - What do you mean by interactive? I would take interactive to mean some type of dependence between the two variables, although that would contradict the independent part ... –  Andy W Feb 9 '12 at 13:00 The interaction does not imply that the statistical independence is lost. Here, I am trying to explain the idea in terms of parametric statistics. –  subhash c. davar Feb 9 '12 at 14:42 The use of "independent as well as interactive" here is puzzling, because it seems to suggest that independent random variables can "interact" in some way. Could you please provide a definition for your meaning of "interactive"? –  whuber Feb 9 '12 at 14:52 I do not understand what you mean by "composite variable" or "having ... [an] ingredient," but it seems that you intend for $X$ somehow to be constructed mathematically from $Y$ and other things. In what sense, then, could $X$ and $Y$ be independent? –  whuber Feb 12 '12 at 22:46 Odd uses of terminology aside, it appears that the answer to this question is that $\rho$ is the Pearson correlation coefficient. –  Macro Jul 5 '12 at 13:20 ## marked as duplicate by Nick Cox, Gavin Simpson, whuber♦ Nov 13 at 16:28 If the random variables $X$ and $Y$ are independent as you claim they are, then their covariance $\text{cov}(X,Y)$ equals $0$, and therefore the Pearson correlation coefficient $\displaystyle \rho = \frac{\text{cov}(X,Y)}{\sigma_X\sigma_Y}$ also equals $0$. However, given independent samples $\{(X_i,Y_i) \colon 1 \leq i \leq n\}$ of independent random variables $X$ and $Y$, the sample covariance and the sample Pearson correlation coefficient are not necessarily identically $0$, though it is every statistician's fondest hope that both these statistics will be small in magnitude, especially when $n$ is large. - Thanks. Technically, your response is good in terms of what exists generally in the statistical literature. Statistical independence does not imply that these two variables cannot interact with each other; there is always a certain amount of interaction when we are working with an ANOVA-type model. Let me clarify that we should interpret the formula of covariance given in the book written by statisticians such as Mathai and Rathie (Probability and Statistics). Could you please help me to move in the right direction? –  subhash c. davar Feb 9 '12 at 14:28 @subhashdavar Maybe you should edit your question to include the formula for covariance given by Mathai and Rathie, especially if it is different from the "standard" formula, and either present your own interpretation of it or ask how the formula should be interpreted. Not everyone has access to the Mathai and Rathie book. –  Dilip Sarwate Feb 9 '12 at 14:33 Thanks for the query. Please see the response to the edit by Andy above. –  subhash c. davar Feb 9 '12 at 14:47 The formula is different in the sense that it employs the population standard deviation of X and the population standard deviation of Y. Please mention the standard formula you may be aware of. –  subhash c. 
davar Jul 7 '12 at 15:13 how the formula can be understood –  subhash c. davar Dec 1 at 15:38 It is (by definition) the Pearson correlation coefficient. Whether the two variables are independent or "interactive" has no bearing on this definition. - Karl Pearson formula produces sample correlation and not the rho - a parametric estimate. Do you agree? –  subhash c. davar Jul 7 '12 at 15:16 It seems that there is a little confusion in the use of terms. Here, $\rho$ is a parametric estimate i.e. $E(r)$. Mathai and Rathie (1977) have given the formula of $\rho$ in terms of $E(r)$ i.e. "linear correlation coefficient" and in any case it does not connote the sample correlation coefficient i.e. $r$. Moreover, we should interpret statistical independence between two variables as if there is no common moderator for the two random variables. And, generally, there is a real relationship (it may be small) between two independent explanatory variables that can be measured by covariance. Hi @subhash, please look at my edits if you are interested in seeing how equations are rendered in $\LaTeX$. –  Macro Jul 5 '12 at 13:16
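As a small numerical aside (my own sketch, not from the thread): the sample Pearson coefficient is exactly the sample covariance divided by the product of the sample standard deviations, and for independent variables it is close to, but not identically, zero:

```python
# Sample Pearson r = cov(X, Y) / (sd(X) * sd(Y)); for independent X, Y
# it is near zero but not exactly zero in any finite sample.
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=10_000)
y = rng.normal(size=10_000)

r = np.cov(x, y)[0, 1] / (x.std(ddof=1) * y.std(ddof=1))
print(r)                        # small, but not exactly 0
print(np.corrcoef(x, y)[0, 1])  # same value, computed directly
```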
2013-12-13 23:00:24
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.884361982345581, "perplexity": 545.5902027601364}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-48/segments/1386165158218/warc/CC-MAIN-20131204135238-00027-ip-10-33-133-15.ec2.internal.warc.gz"}
https://onepager.togaware.com/ml-data-glimpse.html
## 14.4 ML Data Glimpse 20210104

A dplyr::glimpse() over all the variables of the dataset provides a fuller picture of the data.

glimpse(ds)
## Rows: 176,747
## Columns: 24
## $ date            <date> 2008-12-01, 2008-12-02, 2008-12-03, 2008-12-04, 2008-…
## $ location        <chr> "Albury", "Albury", "Albury", "Albury", "Albury", "Alb…
## $ min_temp        <dbl> 13.4, 7.4, 12.9, 9.2, 17.5, 14.6, 14.3, 7.7, 9.7, 13.1…
## $ max_temp        <dbl> 22.9, 25.1, 25.7, 28.0, 32.3, 29.7, 25.0, 26.7, 31.9, …
## $ rainfall        <dbl> 0.6, 0.0, 0.0, 0.0, 1.0, 0.2, 0.0, 0.0, 0.0, 1.4, 0.0,…
## $ evaporation     <dbl> 4.8, 4.8, 4.8, 4.8, 4.8, 4.8, 4.8, 4.8, 4.8, 4.8, 4.8,…
## $ sunshine        <dbl> 8.5, 8.5, 8.5, 8.5, 8.5, 8.5, 8.5, 8.5, 8.5, 8.5, 8.5,…
## $ wind_gust_dir   <ord> W, WNW, WSW, NE, W, WNW, W, W, NNW, W, N, NNE, W, SW, …
## $ wind_gust_speed <dbl> 44, 44, 46, 24, 41, 56, 50, 35, 80, 28, 30, 31, 61, 44…
## $ wind_dir_9am    <ord> W, NNW, W, SE, ENE, W, SW, SSE, SE, S, SSE, NE, NNW, W…
## $ wind_dir_3pm    <ord> WNW, WSW, WSW, E, NW, W, W, W, NW, SSE, ESE, ENE, NNW,…
## $ wind_speed_9am  <dbl> 20, 4, 19, 11, 7, 19, 20, 6, 7, 15, 17, 15, 28, 24, 4,…
## $ wind_speed_3pm  <dbl> 24, 22, 26, 9, 20, 24, 24, 17, 28, 11, 6, 13, 28, 20, …
## $ humidity_9am    <dbl> 71, 44, 38, 45, 82, 55, 49, 48, 42, 58, 48, 89, 76, 65…
## $ humidity_3pm    <dbl> 22, 25, 30, 16, 33, 23, 19, 19, 9, 27, 22, 91, 93, 43,…
## $ pressure_9am    <dbl> 1007.7, 1010.6, 1007.6, 1017.6, 1010.8, 1009.2, 1009.6…
## $ pressure_3pm    <dbl> 1007.1, 1007.8, 1008.7, 1012.8, 1006.0, 1005.4, 1008.2…
## $ cloud_9am       <dbl> 8, 5, 5, 5, 7, 5, 1, 5, 5, 5, 5, 8, 8, 5, 5, 0, 8, 8, …
## $ cloud_3pm       <dbl> 5, 5, 2, 5, 8, 5, 5, 5, 5, 5, 5, 8, 8, 7, 5, 5, 1, 1, …
## $ temp_9am        <dbl> 16.9, 17.2, 21.0, 18.1, 17.8, 20.6, 18.1, 16.3, 18.3, …
## $ temp_3pm        <dbl> 21.8, 24.3, 23.2, 26.5, 29.7, 28.9, 24.6, 25.5, 30.2, …
## $ rain_today      <fct> No, No, No, No, No, No, No, No, No, Yes, No, Yes, Yes,…
## $ risk_mm         <dbl> 0.0, 0.0, 0.0, 1.0, 0.2, 0.0, 0.0, 0.0, 1.4, 0.0, 2.2,…
## $ rain_tomorrow   <fct> No, No, No, No, No, No, No, No, Yes, No, Yes, Yes, Yes…
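For readers working in Python rather than R, a rough analogue of dplyr::glimpse() (my own sketch; the file name is hypothetical) is:

```python
# DataFrame.info() plus a transposed head() give a glimpse-like overview
# of column types, non-null counts, and example values.
import pandas as pd

ds = pd.read_csv("weatherAUS.csv")  # hypothetical file with the same columns
ds.info(show_counts=True)           # dtypes, non-null counts, memory use
print(ds.head(3).T)                 # first rows transposed, one line per column
```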
2021-06-22 20:37:39
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.20472143590450287, "perplexity": 3810.4606630205617}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623488519735.70/warc/CC-MAIN-20210622190124-20210622220124-00286.warc.gz"}
http://www.physicsforums.com/showthread.php?t=348528
## Fourier time shift The time shift property of the Fourier transform is defined as follows: $x(n - n_o ) \Leftrightarrow e^{ - j\omega n_o } X(e^{j\omega } )$ I am confused by this notation... what does $X(e^{j\omega } )$ mean? I know that $X(\omega)$ is the value of the Fourier transform at a given angular frequency, but I'm confused why it has been put in a complex exponent. It is also sometimes written as: $h(x) = f(x - x_0)$ $\hat{h}(\xi)= e^{-2\pi i x_0\xi }\hat{f}(\xi)$ This notation I think I understand... it's saying that, if I have the DFT of $f(x)$, then I can get the DFT for $f(x - x_0)$ by multiplying each point in the DFT by a scale factor that depends on the frequency. Specifically, from Euler's relation, I should multiply each point by $\cos(2 \pi x_0 \xi ) - i \sin(2 \pi x_0 \xi )$. I tested this out by constructing a DFT that has a single spike, then taking the inverse FFT to reconstruct a time signal... and scaling the DFT spike and reconstructing again to see if I got a time-shifted signal. I did not. When I thought about it more, I realized that this doesn't make sense, because some time shifts would result in multiplication by zero, which means that a shift followed by a negative shift could result in the signal being destroyed. What have I got wrong? Mentor Blog Entries: 10 Quote by junglebeast The time shift property of the Fourier transform is defined as follows: $x(n - n_o ) \Leftrightarrow e^{ - j\omega n_o } X(e^{j\omega } )$ I am confused by this notation... what does $X(e^{j\omega } )$ mean? Looks like a typo to me, I think it should be simply X(ω). Here's another link, they do have F(ω) (different notation): http://cnx.org/content/m10100/latest/ Quote by junglebeast I tested this out by constructing a DFT that has a single spike, then taking the inverse FFT to reconstruct a time signal... and scaling the DFT spike and reconstructing again to see if I got a time-shifted signal. I did not. When I thought about it more, I realized that this doesn't make sense, because some time shifts would result in multiplication by zero, which means that a shift followed by a negative shift could result in the signal being destroyed. What have I got wrong? Not sure what is going on with your test. You might try starting in the time domain, with both a spike at t=0 and also a time-shifted spike. Take the FFT of both and compare. BTW, if you want to represent purely real time-domain signals, the frequency domain should have the property X(-ω) = X*(ω), where * denotes the complex conjugate. So the only way to have a single spike in the frequency domain is when that spike is at ω=0. Alright, I found out where my confusion was... everything I said in my above post was correct, except for the place where I said "I tried this and it didn't work," because the reason it didn't work is that I was doing the complex multiplication pointwise, and complex multiplication actually involves some addition! Now it all makes sense. Quote X(-ω) = X*(ω), where * denotes the complex conjugate. So the only way to have a single spike in the frequency domain is when that spike is at ω=0. Oh, I was only referring to a single spike in the positive frequencies... because obviously the negative frequency range is just a mirror of the positive data, as you point out. 
I'm still confused about the X(e^jw) notation though... I've seen it in many different places, so I don't think it's just a typo. Mentor Blog Entries: 10 ## Fourier time shift Quote by junglebeast I'm still confused about the X(e^jw) notation though... I've seen it in many different places, so I don't think it's just a typo. That is bizarre. If X is the signal in the frequency domain, the argument must be a real number ... this is the Fourier Transform, not Laplace, after all. I am equally baffled. It's only a notation criterion. As usual in Fourier analysis, different conventions appear depending on the application area. The notation X(ω) is usual for physicists, but the notation X(e^{jω}) is more usual in electronic engineering. The second notation has a connection with the bilateral Laplace transform for continuous signals and the Z transform for discrete signals. Essentially it means: a) for continuous signals, the Fourier transform is a particular case of the Laplace transform, where the complex number s is restricted to the imaginary axis (s = jω); b) for discrete signals, the same idea but with the Z transform, where z is restricted to the unit circle (z = e^{jω}), which is why one writes X(e^{jω}).
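A quick numerical check of the conclusion above (my own sketch, not from the thread): multiplying the DFT by the complex exponential, with the multiplication done as full complex multiplication, really does shift the reconstructed signal:

```python
# DFT shift theorem: x[n - n0] <-> exp(-2j*pi*k*n0/N) * X[k] (circular shift).
import numpy as np

N, n0 = 64, 5
x = np.random.default_rng(1).normal(size=N)
X = np.fft.fft(x)
k = np.arange(N)

x_shifted = np.fft.ifft(X * np.exp(-2j * np.pi * k * n0 / N)).real
print(np.allclose(x_shifted, np.roll(x, n0)))  # True
```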
2013-06-19 06:03:56
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8746451139450073, "perplexity": 595.0032529487175}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368708142388/warc/CC-MAIN-20130516124222-00012-ip-10-60-113-184.ec2.internal.warc.gz"}
https://ch.mathworks.com/help/symbolic/piecewise.html
# piecewise Conditionally defined expression or function ## Syntax `pw = piecewise(cond1,val1,cond2,val2,...)` `pw = piecewise(cond1,val1,cond2,val2,...,otherwiseVal)` ## Description `pw = piecewise(cond1,val1,cond2,val2,...)` returns the piecewise expression or function `pw` whose value is `val1` when condition `cond1` is true, is `val2` when `cond2` is true, and so on. If no condition is true, the value of `pw` is `NaN`. `pw = piecewise(cond1,val1,cond2,val2,...,otherwiseVal)` returns the piecewise expression or function `pw` that has the value `otherwiseVal` if no condition is true. ## Examples ### Define and Evaluate Piecewise Expression Define the following piecewise expression by using `piecewise`. $y = \begin{cases} -1 & x < 0 \\ 1 & x > 0 \end{cases}$ ```syms x y = piecewise(x<0, -1, x>0, 1)``` ```y = piecewise(x < 0, -1, 0 < x, 1)``` Evaluate `y` at `-2`, `0`, and `2` by using `subs` to substitute for `x`. Because `y` is undefined at `x = 0`, the value is `NaN`. `subs(y, x, [-2 0 2])` ```ans = [ -1, NaN, 1]``` ### Define Piecewise Function Define the following function symbolically. $y(x) = \begin{cases} -1 & x < 0 \\ 1 & x > 0 \end{cases}$ ```syms y(x) y(x) = piecewise(x<0, -1, x>0, 1)``` ```y(x) = piecewise(x < 0, -1, 0 < x, 1)``` Because `y(x)` is a symbolic function, you can directly evaluate it for values of `x`. Evaluate `y(x)` at `-2`, `0`, and `2`. Because `y(x)` is undefined at `x = 0`, the value is `NaN`. For details, see Create Symbolic Functions. `y([-2 0 2])` ```ans = [ -1, NaN, 1]``` ### Set Value When No Condition Is True Set the value of a piecewise function when no condition is true (called the otherwise value) by specifying an additional input argument. If an additional argument is not specified, the default otherwise value of the function is `NaN`. Define the piecewise function $y(x) = \begin{cases} -2 & x < -2 \\ 0 & -2 < x < 0 \\ 1 & \text{otherwise} \end{cases}$ ```syms y(x) y(x) = piecewise(x<-2, -2, -2<x<0, 0, 1)``` ```y(x) = piecewise(x < -2, -2, x in Dom::Interval(-2, 0), 0, 1)``` Evaluate `y(x)` between `-3` and `1` by generating values of `x` using `linspace`. At `-2` and `0`, `y(x)` evaluates to `1` because the other conditions are not true. ```xvalues = linspace(-3,1,5) yvalues = y(xvalues)``` ```xvalues = -3 -2 -1 0 1 yvalues = [ -2, 1, 0, 1, 1]``` ### Plot Piecewise Expression Plot the following piecewise expression by using `fplot`. $y = \begin{cases} -2 & x < -2 \\ x & -2 < x < 2 \\ 2 & x > 2 \end{cases}$ ```syms x y = piecewise(x<-2, -2, -2<x<2, x, x>2, 2); fplot(y)``` ### Assumptions and Piecewise Expressions On creation, a piecewise expression applies existing assumptions. Apply assumptions set after creating the piecewise expression by using `simplify` on the expression. Assume `x > 0`. Then define a piecewise expression with the same condition `x > 0`. `piecewise` automatically applies the assumption to simplify the condition. ```syms x assume(x > 0) pw = piecewise(x<0, -1, x>0, 1)``` ```pw = 1``` Clear the assumption on `x` for further computations. `assume(x,'clear')` Create a piecewise expression `pw` with the condition `x > 0`. Then set the assumption that `x > 0`. Apply the assumption to `pw` by using `simplify`. ```pw = piecewise(x<0, -1, x>0, 1); assume(x > 0) pw = simplify(pw)``` ```pw = 1``` Clear the assumption on `x` for further computations. 
`assume(x, 'clear')` ### Differentiate, Integrate, and Find Limits of Piecewise Expression Differentiate, integrate, and find limits of a piecewise expression by using `diff`, `int`, and `limit` respectively. Differentiate the following piecewise expression by using `diff`. $y = \begin{cases} 1/x & x < -1 \\ \sin(x)/x & x \ge -1 \end{cases}$ ```syms x y = piecewise(x<-1, 1/x, x>=-1, sin(x)/x); diffy = diff(y, x)``` ```diffy = piecewise(x < -1, -1/x^2, -1 < x, cos(x)/x - sin(x)/x^2)``` Integrate `y` by using `int`. `inty = int(y, x)` ```inty = piecewise(x < -1, log(x), -1 <= x, sinint(x))``` Find the limits of `y` at `0` and `-1` by using `limit`. Because `limit` finds the double-sided limit, the piecewise expression must be defined from both sides. Alternatively, you can find the right- or left-sided limit. For details, see `limit`. ```limit(y, x, 0) limit(y, x, -1)``` ```ans = 1 ans = limit(piecewise(x < -1, 1/x, -1 < x, sin(x)/x), x, -1)``` Because the two conditions meet at `-1`, the limits from both sides differ and `limit` cannot find a double-sided limit. ### Elementary Operations on Piecewise Expressions Add, subtract, divide, and multiply two piecewise expressions. The resulting piecewise expression is only defined where the initial piecewise expressions are defined. ```syms x pw1 = piecewise(x<-1, -1, x>=-1, 1); pw2 = piecewise(x<0, -2, x>=0, 2); add = pw1 + pw2 sub = pw1 - pw2 mul = pw1 * pw2 div = pw1 / pw2``` ```add = piecewise(x < -1, -3, x in Dom::Interval([-1], 0), -1, 0 <= x, 3) sub = piecewise(x < -1, 1, x in Dom::Interval([-1], 0), 3, 0 <= x, -1) mul = piecewise(x < -1, 2, x in Dom::Interval([-1], 0), -2, 0 <= x, 2) div = piecewise(x < -1, 1/2, x in Dom::Interval([-1], 0), -1/2, 0 <= x, 1/2)``` ### Modify or Extend Piecewise Expression Modify a piecewise expression by replacing part of the expression using `subs`. Extend a piecewise expression by specifying the expression as the otherwise value of a new piecewise expression. This action combines the two piecewise expressions. `piecewise` does not check for overlapping or conflicting conditions. Instead, like an if-else ladder, `piecewise` returns the value for the first true condition. Change the condition `x<2` in a piecewise expression to `x<0` by using `subs`. ```syms x pw = piecewise(x<2, -1, x>0, 1); pw = subs(pw, x<2, x<0)``` ```pw = piecewise(x < 0, -1, 0 < x, 1)``` Add the condition `x>5` with the value `1/x` to `pw` by creating a new piecewise expression with `pw` as the otherwise value. `pw = piecewise(x>5, 1/x, pw)` ```pw = piecewise(5 < x, 1/x, x < 0, -1, 0 < x, 1)``` ## Input Arguments Condition, specified as a symbolic condition or variable. A symbolic variable represents an unknown condition. Example: x > 2 Value when condition is satisfied, specified as a number, vector, matrix, or multidimensional array, or as a symbolic number, variable, vector, matrix, multidimensional array, function, or expression. Value if no conditions are true, specified as a number, vector, matrix, or multidimensional array, or as a symbolic number, variable, vector, matrix, multidimensional array, function, or expression. If `otherwiseVal` is not specified, its value is `NaN`. ## Output Arguments Piecewise expression or function, returned as a symbolic expression or function. The value of `pw` is the value `val` of the first condition `cond` that is true. To find the value of `pw`, use `subs` to substitute for variables in `pw`. 
## Tips

- `piecewise` does not check for overlapping or conflicting conditions. A piecewise expression returns the value for the first true condition and disregards any subsequent true conditions. Thus, `piecewise` mimics an if-else ladder, as the sketch below illustrates.
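For readers without the Symbolic Math Toolbox at hand, the same first-true-condition rule can be illustrated with SymPy's `Piecewise` in Python. This is an analogous sketch, not MATLAB's implementation.

```python
# A minimal sketch of the first-true-condition ("if-else ladder") behavior,
# shown with SymPy's Piecewise, which follows the same rule.
from sympy import Piecewise, Symbol, nan

x = Symbol('x')

# Both conditions are true at x = 3, but only the first one counts.
pw = Piecewise((1, x > 0), (2, x > 2), (nan, True))

print(pw.subs(x, 3))   # -> 1, the value of the first true condition
print(pw.subs(x, -1))  # -> nan, no condition is true
```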
2021-02-24 21:39:07
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 5, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.917317271232605, "perplexity": 1658.5248570284693}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178347321.0/warc/CC-MAIN-20210224194337-20210224224337-00081.warc.gz"}
https://www.physicsforums.com/threads/find-the-average-speed.81814/
# Find the average speed

StotleD

A wandering tapir trots along at 9 ft/s for 9 minutes, then walks at 6 ft/s for 7 minutes, and finally runs at 27 ft/s for 1 minute. How do I find the average speed of the tapir? I have already added the numerators and denominators and then divided by 3.
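Adding the speeds and dividing by 3 weights each leg equally, but the tapir spends different amounts of time at each speed; average speed is total distance divided by total time. A minimal worked check:

```python
# Average speed = total distance / total time (a time-weighted average),
# not the plain mean of the three speeds.
speeds_ft_s = [9, 6, 27]      # ft/s
durations_min = [9, 7, 1]     # minutes

total_distance = sum(v * t * 60 for v, t in zip(speeds_ft_s, durations_min))  # ft
total_time = sum(durations_min) * 60                                          # s

print(total_distance / total_time)  # -> 9000 ft / 1020 s ≈ 8.82 ft/s
```

Note that the plain mean, (9 + 6 + 27)/3 = 14 ft/s, overweights the brief 27 ft/s sprint.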
2022-10-04 07:04:31
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.899440348148346, "perplexity": 879.56307794704}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337480.10/warc/CC-MAIN-20221004054641-20221004084641-00452.warc.gz"}
https://www.physicsforums.com/threads/mass-transfer.841223/
# Homework Help: Mass transfer

1. Nov 3, 2015

### NYK

1. The problem statement, all variables and given/known data

A (spherical) rubbery balloon of 20 cm in diameter is filled with helium. The rubber balloon wall has a thickness of 0.05 cm and a diffusivity of 0.1×10^-10 cm^2/s for helium. When the balloon is left in the air at 25°C, helium leaks into the air by diffusion through the rubbery wall and, as a result, the balloon shrinks. The Henry constant for helium in the rubber is 5 mol/(cm^3·atm). (1) Derive an equation that correlates the balloon size to the time; (2) Estimate the time required for the balloon to shrink to 10 cm in diameter. (Note: The helium pressure in the balloon is 2 atm and is essentially constant during the shrinking process. To simplify the calculation, a quasi steady state can be assumed for the problem.)

2. Relevant equations

J = (DH/L)(Ca - Cb)
Q = J·A·Δt
d(V(P/RT))/dt = -SJ

3. The attempt at a solution

I am having trouble starting this problem. I think that I need to somehow incorporate the mass balance with the flux equation. Then I get confused as to how I would derive the equation that relates the balloon size to time. Does my thought process sound like I am on the right track? Thank you in advance for any help!

2. Nov 4, 2015

### Staff: Mentor

How many moles of helium are in the balloon to start with? What is the concentration of the helium that is dissolved in the rubber on the helium side of the wall?

Chet

3. Nov 4, 2015

### NYK

Hi Chet, thank you for the tips on getting started. I used the ideal gas law (since the problem states to assume a quasi steady state) and found the number of moles of helium in the balloon initially to be:

n = PV/RT = (2 atm × 33510.32 cm^3)/(82.06 (cm^3·atm/mol·K) × 298.15 K) = 2.739 mol He

Using n, I calculated the initial concentration of helium in the balloon to be:

Co = 2.739 mol/33510.32 cm^3 = 8.17 × 10^-5 mol He/cm^3

Next, to find the concentration of helium dissolved in the rubber on the helium side, I am trying to use:

J = (D/L)(Co - C1)

I am having trouble with finding the concentration outside of the balloon (C1). Would I use the ideal gas law again, as C1 = P/RT? If so, do I assume the pressure outside of the balloon to be 1 atm?

4. Nov 4, 2015

### Staff: Mentor

This was the only part that was correct. The concentration of helium dissolved in the rubber at the interface between the helium and the balloon wall is determined by using the helium pressure in the balloon and the Henry's law constant. What do you think the partial pressure of the helium will be in the room air after it has seeped through the wall into the room air? Do you really think it will be 1 atm?

Chet

5. Nov 4, 2015

### NYK

Hi Chet, I am in a class right now, but just to run a thought by you before I am able to continue working on this problem: the partial pressure of the helium outside of the balloon would be 0 atm; there isn't any boundary creating a pressure when the helium escapes through the rubber walls to the outside environment. Does that sound logical? Then using Henry's law: P = HX, so X = P/H.

6. Nov 4, 2015

### Staff: Mentor

Perfect. Actually, Henry's law is expressed as C = HP. So give me a number.

Chet

7. Nov 4, 2015

### NYK

Co = 10 mol/cm^3?

8. Nov 4, 2015

### LDavis

I am working on the same problem. Can Henry's constant be expressed as P/C = H?

9.
Nov 4, 2015

### NYK

http://www.ece.gatech.edu/research/labs/vc/theory/oxide.html [Broken] is where I found C = HP. But I did find the same eqn you are talking about, where H = P/C, in the textbook.

Last edited by a moderator: May 7, 2017

10. Nov 4, 2015

### LDavis

Since the volume is changing and I assume the pressure is constant inside of the balloon, the mass balance gives me

d(V(Pi/RT))/dt = 0 - S(DH/L)P

and from there I assume that I can rearrange that and put S in terms of volume or radius.

11. Nov 4, 2015

### NYK

S = 4πr^2? Then

d(V(P1/RT))/dt = -S(DH/L)P1
dV/dt = -S(DH/L)RT
dV = -(4πr^2)(DH/L)RT dt

12. Nov 5, 2015

### Staff: Mentor

Correct.

13. Nov 5, 2015

### Staff: Mentor

Yes. What is dV in terms of r and dr? Do you think you are supposed to take into account the fact that the balloon rubber is incompressible, so that $S(t)L(t)=S(0)L(0)$, or do you think they expect you to not realize that and assume that L is constant?

14. Nov 5, 2015

### LDavis

15. Nov 5, 2015

### Staff: Mentor

Oh. He mistook the value of the diameter for the value of the radius. No big deal.

Chet

16. Nov 5, 2015

### LDavis

Okay, I just thought I missed something. So the idea then is to get all of the radius terms on one side of the equation to integrate, from the starting radius to the final one on the left and the starting and final times on the right. Does V also need to be rewritten in terms of r, so we have (4/3πr^3)/(4πr^2) dr, where the numerator is V and the denominator is S, both in terms of r?

17. Nov 5, 2015

### Staff: Mentor

I don't understand your question. Can you elaborate?

Chet

18. Nov 5, 2015

### LDavis

Sorry. So the equation is dV = -(4πr^2)(DH/L)RT dt, and I need all of the r terms on one side, so

dV = -(4πr^2)(DH/L)RT dt
dV/(4πr^2) = -(DH/L)RT dt

From here, would I rewrite V in terms of r to get

(4πr^3)/(4πr^2) dr = -(DH/L)RT dt

which would simplify to

r dr = -(DH/L)RT dt

19. Nov 5, 2015

### LDavis

Sorry to take over your post, NYK, and thank you so much for your assistance thus far Chestermiller, it is greatly appreciated.

20. Nov 5, 2015

### Staff: Mentor

Looks OK, except that you differentiated the volume incorrectly.

Chet

21. Nov 5, 2015

### LDavis

I am not sure where to go from there. Was my simplification of (4πr^3)/(4πr^2) dr incorrect, or does it not need to be simplified? Would r^3/r^2 dr be correct? This is where I've been stuck for a day. Would it be simpler to write S in terms of V so I can just leave dV as is?

22. Nov 5, 2015

### Staff: Mentor

$dV=4πr^2dr$, not $dV=4πr^3dr$
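Pulling the thread's result together: with dV = 4πr² dr and S = 4πr², the balance dV/dt = −S(DH/L)RT gives a constant dr/dt = −DHRT/L, so the radius shrinks linearly in time. Below is a minimal sketch of part (2) under that reading; treat it as an illustration of the derivation, not a verified solution.

```python
# (P/RT) dV/dt = -S*(D*H/L)*P, with dV = 4*pi*r^2 dr and S = 4*pi*r^2,
# so P and the 4*pi*r^2 factors cancel and dr/dt = -(D*H*R*T)/L is constant.
D = 0.1e-10   # helium diffusivity in the rubber, cm^2/s
H = 5.0       # Henry constant, mol/(cm^3*atm)
L = 0.05      # wall thickness, cm
R = 82.06     # gas constant, cm^3*atm/(mol*K)
T = 298.15    # temperature, K

rate = D * H * R * T / L            # |dr/dt| in cm/s
t = (10.0 - 5.0) / rate             # radius 10 cm -> 5 cm (diameter 20 -> 10)

print(f"shrink rate = {rate:.3e} cm/s")
print(f"time = {t:.3e} s = {t / 86400:.1f} days")   # roughly a couple of days
```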
2018-06-18 17:56:05
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6400023102760315, "perplexity": 1280.034107356652}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267860684.14/warc/CC-MAIN-20180618164208-20180618184208-00311.warc.gz"}
http://doosoo.wikidot.com/researchnotes11
Research Notes

10.25.2011 (Group Meeting)

1. Testing for asymmetric wind from the star
2. Finding a way to speed up the runs
• Adjust refinement levels, but the maximum refinement level of the jet nozzles should stay fixed.

Note: It is hard to decide which is the more important factor in determining the asymmetry of the wind (the density or the velocity of the wind).

04.08.2011

Generating elliptical orbits of the stellar and BH objects with eccentricity 0.5 (see the sketch below). The left panel shows the orbit centered on the star, and the right panel shows the frame with the center of mass fixed. Here are the tracks of the simulation results overlaid on the expected trajectory.

02.21.2011

• Analyze the H$_{\alpha}$ ratio between the neck and the bubble (need to match with observation)
• The H$_{\alpha}$ ratio with respect to different viewing angles
• FYI, observational image from Wiersema
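The 04.08.2011 entry above concerns e = 0.5 orbits. Here is a minimal sketch of how such a two-body ellipse can be traced about the common center of mass; the masses and the semi-major axis are placeholders, not values from the notes.

```python
# Trace two bodies (a star and a BH) on an e = 0.5 Kepler orbit about
# their common center of mass.
import numpy as np

e = 0.5            # eccentricity (from the notes)
a = 1.0            # semi-major axis of the relative orbit (placeholder)
m1, m2 = 1.0, 3.0  # star and BH masses (placeholders)

theta = np.linspace(0.0, 2.0 * np.pi, 400)          # true anomaly
r = a * (1.0 - e**2) / (1.0 + e * np.cos(theta))    # conic-section radius

# Relative separation vector, then split about the center of mass.
rel = np.stack([r * np.cos(theta), r * np.sin(theta)], axis=1)
r1 = -(m2 / (m1 + m2)) * rel   # star's track about the COM
r2 = +(m1 / (m1 + m2)) * rel   # BH's track about the COM

print(r1[:3], r2[:3])
```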
2018-09-26 03:45:00
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8121148943901062, "perplexity": 4072.166584318691}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-39/segments/1537267163146.90/warc/CC-MAIN-20180926022052-20180926042452-00373.warc.gz"}
http://wikieducator.org/Thread:Adding_revolver_maps_widget_(2)
Balqis Thaahaveetti ji,

It is my great pleasure that you have spared time for me and shared this valuable information. I am very thankful to you. Your user page has attracted me, and my desire to learn from you has grown. I wonder how you spare time from your busy life for people like me. I am trying to add the widget to my page as per the stated steps, but it does not work. Is it like adding an external link?

Regards
harbans 07:32, 23 April 2012

Hello Harbanji,

First let me thank you for your kind words, and let me tell you that there is no need to call me ji, because my age is nearly half yours :) It is an external link. I would be happy to help you once you have signed up there. I am here to help you whenever, and in whatever way, I can.

Best Regards
2017-11-22 22:12:43
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.29318106174468994, "perplexity": 1396.494098032614}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934806676.57/warc/CC-MAIN-20171122213945-20171122233945-00019.warc.gz"}
https://hpmuseum.org/forum/showthread.php?mode=threaded&tid=13388&pid=119840
The case of the disappearing angle units, or "the dangle of the angle"

08-16-2019, 12:02 PM Post: #23

ijabbott
Senior Member
Posts: 1,066
Joined: Jul 2015

RE: The case of the disappearing angle units, or "the dangle of the angle"

Thanks SlideRule, I should work out how to do mean directions sometime. It may come in useful!

— Ian Abbott
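On the "mean directions" question: the standard recipe is a circular (vector) mean rather than an arithmetic mean of the angles. A minimal sketch, assuming degrees as input:

```python
# Circular mean: average the unit vectors of the angles, then take the
# angle of the resultant. Averaging the raw numbers fails near the 0/360
# wrap-around, e.g. the naive mean of 350° and 10° is 180°, not 0°.
import math

def mean_direction(angles_deg):
    s = sum(math.sin(math.radians(a)) for a in angles_deg)
    c = sum(math.cos(math.radians(a)) for a in angles_deg)
    return math.degrees(math.atan2(s, c)) % 360.0

print(mean_direction([350, 10]))   # -> 0.0, not the naive 180.0
```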
2022-01-27 18:05:50
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8236151933670044, "perplexity": 9600.33064641577}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320305277.88/warc/CC-MAIN-20220127163150-20220127193150-00244.warc.gz"}
https://blender.stackexchange.com/questions/8120/blender-not-exporting-mesh-properly
# Blender not exporting mesh properly I have just started using blender 2.66a. I was trying to model a chair. I used a cube to create the chair. I exported it as an .obj file. However when I tried to import it back in, it is just showing a cube instead of the chair which I had modeled. blend file • Exact problem but for another format, blender.stackexchange.com/questions/5249/… – iKlsR Mar 28 '14 at 17:40 • I downloaded your file and tried it. It works fine for me, just make sure you exported the right cube (your scene has 2). – David Mar 30 '14 at 0:25 The reason you are seeing the cube instead of the chair is because there is a Cube on layer 1. Your chair is on layer 3. You only had layer 3 visible which is why you did not see the cube even though it is there. There are two ways to avoid this problem in the future: # Method 1 - deleting the cube 1. Go to layer 1 by pressing 1 on your keyboard 2. Select the cube and delete it, X # Method 2 - only exporting the needed meshes 1. Select the chair and any other items you wish to export 2. In the export dialog check Selection Only Once you do this you can import the chair and it will work like you expect it to.
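For completeness, the "Selection Only" export can also be scripted from Blender's Python console. This is a minimal sketch against the legacy OBJ exporter of the 2.6x–2.9x era; the object name 'Chair' is hypothetical, and newer Blender versions use a different operator and selection API, so treat it as indicative rather than exact.

```python
# "Method 2" as a script: select only the chair, then export with the
# selection-only flag. Operator and attributes below are from the legacy
# (2.6x-2.9x) Blender Python API.
import bpy

# Deselect everything, then select just the chair object (name assumed).
bpy.ops.object.select_all(action='DESELECT')
chair = bpy.data.objects['Chair']   # hypothetical object name
chair.select = True                 # 2.7x-style API; 2.8+ uses select_set(True)

bpy.ops.export_scene.obj(filepath='/tmp/chair.obj', use_selection=True)
```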
2020-01-19 02:24:17
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.17317689955234528, "perplexity": 1373.677184156582}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250594101.10/warc/CC-MAIN-20200119010920-20200119034920-00032.warc.gz"}
https://www.linknovate.com/affiliation/national-center-for-seismology-868461/all/
Delhi, India

Prakash R., National Center for Seismology | Singh R.K., National Center for Seismology

Geomatics, Natural Hazards and Risk | Year: 2016

Spatio-temporal variations of seismicity within 300 km of the main Nepal earthquake of 25 April 2015 showed seismic quiescence since 2007. Decadal changes in b-value from the Gutenberg–Richter relation showed a well-marked decrease during the period January 2005–April 2015 preceding the main earthquake. The stress drop of this earthquake in the interplate region was found to be 3.4 MPa, which is much lower than that of the intraplate Bhuj earthquake of 2001. The un-ruptured portion of the seismic gap in western Nepal lies between longitude 82.5°E and 84.5°E; its 200 km length (if assumed to rupture entirely in one earthquake), coupled with a locked zone of about 100 km from GPS data, may generate an earthquake of magnitude about 8, although no historical data for a major earthquake are as yet available. © 2016 The Author(s). Published by Taylor & Francis.

Kundu B., National Institute of Technology Rourkela | Ghosh A., University of California at Riverside | Mendoza M., University of California at Riverside | Burgmann R., University of California at Berkeley | And 2 more authors.

Geophysical Research Letters | Year: 2016

The 2012 East Indian Ocean earthquake (Mw 8.6), so far the largest intraoceanic strike-slip event ever recorded, modulated tectonic tremors in the Cascadia subduction zone. The rate of tremor activity near Vancouver Island increased to about 1.5 times its background level during the passage of the seismic waves of this earthquake. In most cases of dynamic modulation, large-amplitude, long-period surface waves stimulate tremors. However, in this case even the small stress change caused by the body waves generated by the 2012 earthquake modulated tremor activity. The tremor modulation continued during the passage of the surface waves, after which the tremor activity returned to background rates. Similar tremor modulation is observed during the passage of the teleseismic waves from the Mw 8.2 event, which occurred about 2 h later near the Mw 8.6 event. We show that dynamic stresses from back-to-back large teleseismic events can strongly influence tremor sources. © 2016. American Geophysical Union. All Rights Reserved.

Kumar V., National Center for Seismology | Kumar D., Kurukshetra University | Chopra S., Institute of Seismological Research

Journal of Asian Earth Sciences | Year: 2016

The scaling relation and self-similarity of the earthquake process have been investigated by estimating the source parameters of 34 moderate-size earthquakes (mb 3.4–5.8) that occurred in the NW Himalaya. Spectral analysis of body waves from 217 accelerograms recorded at 48 sites has been carried out in the present analysis. Brune's $\omega^{-2}$ model has been adopted for this purpose. The average ratio of the P-wave corner frequency, fc(P), to the S-wave corner frequency, fc(S), has been found to be 1.39, with fc(P) > fc(S) for 90% of the events analyzed here. This implies a shift in the corner frequency, in agreement with many other similar studies done for different regions. The static stress drop values for all the events analyzed here lie in the range 10–100 bars, with an average stress drop of the order of 43 ± 19 bars for the region. This suggests that the likely estimate of the dynamic stress drop, which is 2–3 times the static stress drop, is in the range of about 80–120 bars.
This suggests relatively high seismic hazard in the NW Himalaya, as high-frequency strong ground motions are governed by the stress drop. The estimated values of stress drop do not show significant variation with seismic moment over the range $5\times10^{14}$–$2\times10^{17}$ N m. This observation, along with the cube-root scaling of corner frequencies, suggests self-similarity of the moderate-size earthquakes in the region. The scaling relation between seismic moment and corner frequency, $M_0 f_c^3 = 3.47\times10^{16}\,\mathrm{N\,m/s^3}$, estimated in the present study can be utilized to estimate the source dimension given the seismic moment of the earthquake for hazard assessment. The present study puts constraints on the important parameters, stress drop and source dimension, required for the synthesis of strong ground motion from future expected earthquakes in the region. Therefore, the present study is useful for seismic hazard and risk related studies for the NW Himalaya. © 2016 Elsevier Ltd

Hough S.E., U.S. Geological Survey | Martin S.S., Nanyang Technological University | Gahalaut V., National Center for Seismology | Joshi A., Indian Institute of Technology Roorkee | And 2 more authors.

Natural Hazards | Year: 2016

We use 21 strong motion recordings from Nepal and India for the 25 April 2015 moment magnitude ($M_W$) 7.8 Gorkha, Nepal, earthquake together with the extensive macroseismic intensity data set presented by Martin et al. (Seism Res Lett 87:957–962, 2015) to analyse the distribution of ground motions at near-field and regional distances. We show that the data are consistent with the instrumental peak ground acceleration (PGA) versus macroseismic intensity relationship developed by Worden et al. (Bull Seism Soc Am 102:204–221, 2012), and use this relationship to estimate peak ground acceleration from intensities (PGA$_{EMS}$). For nearest-fault distances ($R_{RUP}$ < 200 km), PGA$_{EMS}$ is consistent with the Atkinson and Boore (Bull Seism Soc Am 93:1703–1729, 2003) subduction zone ground motion prediction equation (GMPE). At greater distances ($R_{RUP}$ > 200 km), instrumental PGA values are consistent with this GMPE, while PGA$_{EMS}$ is systematically higher. We suggest the latter reflects a duration effect whereby the effects of weak shaking are enhanced by long-duration and/or long-period ground motions from a large event at regional distances. We use PGA$_{EMS}$ values within 200 km to investigate the variability of high-frequency ground motions, using the Atkinson and Boore (Bull Seism Soc Am 93:1703–1729, 2003) GMPE as a baseline. Across the near-field region, PGA$_{EMS}$ is higher by a factor of 2.0–2.5 towards the northern, down-dip edge of the rupture compared to the near-field region nearer the southern, up-dip edge of the rupture. Inferred deamplification in the deepest part of the Kathmandu valley supports the conclusion that former lake-bed sediments experienced a pervasive nonlinear response during the mainshock (Dixit et al. in Seismol Res Lett 86(6):1533–1539, 2015; Rajaure et al. in Tectonophysics, 2016). Ground motions were significantly amplified in the southern Gangetic basin, but were relatively low in the northern basin. The overall distribution of ground motions and damage during the Gorkha earthquake thus reflects a combination of complex source, path, and site effects. We also present a macroseismic intensity data set and analysis of ground motions for the $M_W$ 7.3 Dolakha aftershock of 12 May 2015, which we compare to the Gorkha mainshock and conclude was likely a high stress-drop event.
© 2016 Springer Science+Business Media Dordrecht (outside the USA)

Chingtham P., National Center for Seismology | Yadav R.B.S., Kurukshetra University | Chopra S., Institute of Seismological Research ISR | Yadav A.K., Indian Institute of Technology Kharagpur | And 2 more authors.

Natural Hazards | Year: 2016

The Northwest Himalaya and its adjoining regions are among the most seismically vulnerable regions in the Indian subcontinent, having experienced two great earthquakes [1902 Caucasus of magnitude MS 8.6 and 1905 Kangra, India of MS 8.6 (MW 7.8)] and several large damaging earthquakes in the previous century. In this study, time-dependent seismicity analysis is carried out in five main seismogenic zones of the Northwest Himalaya and its adjoining regions by considering earthquake inter-arrival times, using a homogeneous and complete earthquake catalogue for the period 1900–2010 prepared by Yadav et al. (Pure Appl Geophys 169:1619–1639, 2012a). For this purpose, we consider three statistical models, namely Poisson (time independent), Lognormal and Weibull (time dependent). Fitness of the inter-arrival time data is investigated using the Kolmogorov–Smirnov (K–S) test for the Lognormal and Weibull models, while the Chi-square test is applied for the Poisson model. It is observed that the Lognormal model fits the observed inter-arrival time data remarkably well, while the Weibull model exhibits moderate fitting. The parameters A and B of the time-dependent seismicity equation $\ln \text{IAT} = A + BM \pm C$ (where ln IAT is the log of the inter-arrival times of earthquakes exceeding magnitude M, and C is the standard deviation), developed by Musson et al. (Bull Seismol Soc Am 92:1783–1794, 2002), are evaluated in each of the five main seismogenic zones considered in the region. The mean of the inter-arrival times for the Lognormal distribution is found to be linearly related to the lower-bound magnitude (Mmin). Values of the slope (B) of the mean vary from 2.34 to 2.57, while the parameter A ranges from −9.06 to −7.01 in the examined seismogenic zones, with standard deviation ranging from 0.21 to 0.38. It is observed that the Hindukush–Pamir Himalaya and the Himalayan Frontal Thrust exhibit higher seismic hazard (i.e., high seismic activity and low recurrence periods), while the Sulaiman–Kirthar ranges show the lowest. The variation in estimated seismicity parameters from one zone to another reveals high crustal heterogeneity and seismotectonic complexity in the study region. © 2015, Springer Science+Business Media Dordrecht.

Prajapati S.K., National Center for Seismology | Dadhich H.K., National Center for Seismology | Chopra S., Institute of Seismological Research

Journal of Asian Earth Sciences | Year: 2016

A devastating earthquake of Mw 7.8 struck central Nepal on 25 April 2015 (6:11:25 UT), resulting in more than ~9000 deaths and destroying millions of houses. Standing buildings, roads, and electrical installations worth 25–30 billion dollars were reduced to rubble. The earthquake was widely felt in the northern parts of India, and moderate damage was observed in the northern parts of the UP and Bihar regions of India. Maximum intensity IX, according to the USGS report, was observed in the meizoseismal zone surrounding the Kathmandu region.
In the present study, we have compiled available information from the print and electronic media and various reports of damage and other effects caused by the event, and interpreted them to obtain Modified Mercalli Intensities (MMI) at over 175 locations spread over Nepal and the surrounding Indian and Tibet regions. We have also obtained a number of strong motion recordings from the India and Nepal seismic networks and developed an empirical relationship between the MMI and peak ground acceleration (PGA) and peak ground velocity (PGV). We have used a least-squares regression technique to derive the empirical relation between the MMI and the ground motion parameters and compared it with the empirical relationships available for other regions of the world. Further, seismic intensity information available for historical earthquakes which have occurred in the Nepal Himalaya, along with the present intensity data, has been utilized for developing an attenuation relationship for the studied region using two-step regression analyses. The derived attenuation relationship is useful for assessing damage from a potential future large earthquake (for earthquake scenario-based planning purposes) in the region. © 2016 Elsevier Ltd.

Sharma B., National Center for Seismology | Chopra S., National Center for Seismology | Chopra S., Institute of Seismological Research | Kumar V., National Center for Seismology

Natural Hazards | Year: 2016

Earthquakes are the deadliest among all natural disasters. Areas that have experienced great/large earthquakes in the past may experience a big event in the future. In this study, we have simulated the Kangra earthquake (1905, Mw 7.8) and a hypothetical great earthquake (Mw 8.5) in the north-west Himalaya using the Empirical Green's Function (EGF) technique. Recordings of the Dharamsala earthquake (1986, Mw 5.4) are used as the Green's function, with a heterogeneous source model and an asperity. It has been observed that the towns of Kangra and Dharamsala can expect ground accelerations in excess of 1 g in the case of a Mw 8.5 earthquake, and could have experienced an acceleration close to 1 g during the 1905 Kangra earthquake. The entire study region can expect acceleration in excess of 100 cm/s² in the case of Mw 7.8 and 200 cm/s² in the case of Mw 8.5. Sites located near the rupture initiation point can expect accelerations in excess of 1 g for the magnitudes simulated. For validation, the PGA estimates for the Mw 7.8 simulation are compared with isoseismal studies carried out in the same region after the Kangra earthquake of 1905, by converting PGA values to intensities. It was found that the results are comparable. The target earthquakes (Mw 7.8 and Mw 8.5) are simulated at depths of 20 km and 30 km to examine the effect of depth on PGA. The PGA values obtained in the present analysis give us an idea about the level of accelerations experienced in the area during the 1905 Kangra earthquake. Future construction in the area can be regulated, and the built environment can be strengthened, using the PGA values obtained in the present analysis. © 2015, Springer Science+Business Media Dordrecht.

Sharma B., National Center for Seismology | Chopra S., National Center for Seismology | Chopra S., Institute of Seismological Research | Chingtham P., National Center for Seismology | Kumar V., National Center for Seismology

Natural Hazards | Year: 2016

In the present work, acceleration response spectra are determined from earthquakes which have occurred in the NE region, and the effect of local geology on their shape is studied.
One hundred and ninety-five strong ground motion time histories from 45 earthquakes which have occurred in the NE region, with a magnitude range of 3.5 ≤ Mw ≤ 6.9 and a distance range of 20–600 km, are used. It is observed that the shape of the normalized acceleration response spectra is influenced by the local site conditions and regional geology. The influence of magnitude and distance on the spectra is also studied. The present study is carried out for three categories of rocks: Pre-Cambrian, Tertiary and Quaternary. It is inferred that the acceleration response spectra in the current Indian code, designed for the entire country, are applicable for the NE region, as the observed spectra lie within the spectral limits prescribed in the code. The ground motion is amplified at higher frequencies for stations located on hard rock, while for stations located on alluvium it is amplified at lower frequencies. Sites located on hard rock show lower values of spectral acceleration than sites located on alluvium. The results obtained in the present study are compared with similar results obtained in a stable continental region like Gujarat. It is found that the dominant period of the response spectrum for similar rock types is on the higher side for the NE region as compared with the Gujarat region. This may be attributed to the greater tectonic complexity of the NE region compared with a stable continental region like Gujarat. © 2016 Springer Science+Business Media Dordrecht

Gahalaut V.K., National Center for Seismology | Kundu B., National Institute of Technology Rourkela

Geomatics, Natural Hazards and Risk | Year: 2016

Earthquakes in the Indo-Burmese wedge occur due to India–Sunda plate motion. These earthquakes generally occur at depths between 25 and 150 km and define an eastward, gently dipping seismicity trend surface that coincides with the Indian slab. Although this feature mimics a subduction zone, the relative motion of the Indian plate is predominantly towards the north, and earthquake focal mechanisms suggest that these earthquakes are of intra-slab type, occurring on steep planes within the Indian plate. The relative motion between the India and Sunda plates is accommodated at the Churachandpur–Mao fault (CMF) and the Sagaing Fault. The 4 January 2016 Manipur earthquake (M 6.7) is one such earthquake, which occurred 20 km west of the CMF at ∼60 km depth. Fortunately, this earthquake occurred in a very sparsely populated region with very traditional wooden-frame houses and hence the damage caused by the earthquake in the source region was minimal. However, in the neighbouring Imphal valley, it caused some damage to buildings and the loss of eight lives. The damage in the Imphal valley due to this and historical earthquakes in the region emphasizes the role of local site effects in the Imphal valley. © 2016 Informa UK Limited, trading as Taylor & Francis Group

Kundu B., National Institute of Technology Rourkela | Vissa N.K., National Institute of Technology Rourkela | Gahalaut V.K., CSIR - Central Electrochemical Research Institute | Gahalaut V.K., National Center for Seismology

Geophysical Research Letters | Year: 2015
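Where the formulas in these abstracts are concrete, they lend themselves to quick calculations. Below is a minimal sketch built on the scaling relation quoted by Kumar et al. ($M_0 f_c^3 = 3.47\times10^{16}\,\mathrm{N\,m/s^3}$); the sample moment and the shear-wave speed are assumed values, and the Brune-type radius formula is a standard add-on, not something stated in the abstract.

```python
# Use the quoted scaling relation M0 * fc^3 = 3.47e16 N m / s^3 to get a
# corner frequency from a seismic moment, then a Brune-type source radius
# r = 2.34 * beta / (2 * pi * fc).
import math

C = 3.47e16    # N m / s^3, from the abstract
M0 = 1.0e16    # seismic moment in N m (sample value within the studied range)
beta = 3.5e3   # shear-wave speed in m/s (assumed)

fc = (C / M0) ** (1.0 / 3.0)              # corner frequency, Hz
r = 2.34 * beta / (2.0 * math.pi * fc)    # Brune source radius, m

print(f"fc ≈ {fc:.2f} Hz, source radius ≈ {r/1000:.2f} km")
```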
2017-04-27 03:23:43
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.1795138269662857, "perplexity": 4865.749516028374}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917121865.67/warc/CC-MAIN-20170423031201-00500-ip-10-145-167-34.ec2.internal.warc.gz"}
http://myvocabbook.com/download/principles-of-mathematical-logic-ams-chelsea-publishing
## Principles of Mathematical Logic

Author: David Hilbert | Publisher: American Mathematical Soc. | ISBN: 9780821820247 | Release Date: 1950 | Genre: Mathematics

David Hilbert was particularly interested in the foundations of mathematics. Among many other things, he is famous for his attempt to axiomatize mathematics. This now classic text is his treatment of symbolic logic. This translation is based on the second German edition and has been modified according to the criticisms of Church and Quine. In particular, the authors' original formulation of Gödel's completeness proof for the predicate calculus has been updated. In the first half of the twentieth century, an important debate on the foundations of mathematics took place. Principles of Mathematical Logic represents one of Hilbert's important contributions to that debate. Although symbolic logic has grown considerably in the subsequent decades, this book remains a classic.

## Set Theory and Metric Spaces

Author: Irving Kaplansky | Publisher: American Mathematical Soc. | ISBN: 9780821826942 | Release Date: 2001 | Genre: Mathematics

"This is a book that could profitably be read by many graduate students or by seniors in strong major programs ... has a number of good features. There are many informal comments scattered between the formal development of theorems and these are done in a light and pleasant style. ... There is a complete proof of the equivalence of the axiom of choice, Zorn's Lemma, and well-ordering, as well as a discussion of the use of these concepts. There is also an interesting discussion of the continuum problem ... The presentation of metric spaces before topological spaces ... should be welcomed by most students, since metric spaces are much closer to the ideas of Euclidean spaces with which they are already familiar." --Canadian Mathematical Bulletin

"Kaplansky has a well-deserved reputation for his expository talents. The selection of topics is excellent." -- Lance Small, UC San Diego

This book is based on notes from a course on set theory and metric spaces taught by Edwin Spanier, and also incorporates, with his permission, numerous exercises from those notes. The volume includes an Appendix that helps bridge the gap between metric and topological spaces, a Selected Bibliography, and an Index.

## The History of Philosophical and Formal Logic

Author: Alex Malpass | Publisher: Bloomsbury Publishing | ISBN: 9781472505255 | Release Date: 2017-06-29 | Genre: Philosophy

The History of Philosophical and Formal Logic introduces ideas and thinkers central to the development of philosophical and formal logic. From its Aristotelian origins to the present-day arguments, logic is broken down into four main time periods:

- Antiquity and the Middle Ages (Aristotle and the Stoics)
- The early modern period (Bolzano, Boole)
- The high modern period (Frege, Peano & Russell, and Hilbert)
- The early 20th century (Gödel and Tarski)

Each new time frame begins with an introductory overview highlighting themes and points of importance. Chapters discuss the significance and reception of influential works and look at historical arguments in the context of contemporary debates. To support independent study, comprehensive lists of primary and secondary reading are included at the end of chapters, along with exercises and discussion questions. By clearly presenting and explaining the changes to logic across the history of philosophy, The History of Philosophical and Formal Logic constructs an easy-to-follow narrative.
This is an ideal starting point for students looking to understand the historical development of logic.

## Mathematical Logic and Formalized Theories

Author: Robert L. Rogers | Publisher: Elsevier | ISBN: 9781483257976 | Release Date: 2014-05-12 | Genre: Mathematics

Mathematical Logic and Formalized Theories: A Survey of Basic Concepts and Results focuses on basic concepts and results of mathematical logic and the study of formalized theories. The manuscript first elaborates on sentential logic and first-order predicate logic. Discussions focus on first-order predicate logic with identity and operation symbols, first-order predicate logic with identity, completeness theorems, elementary theories, the deduction theorem, interpretations, truth and validity, sentential connectives, and tautologies. The text then tackles second-order predicate logic, as well as second-order theories, the theory of definition, and the second-order predicate logic F2. The publication takes a look at natural and real numbers, incompleteness, and axiomatic set theory. Topics include paradoxes, recursive functions and relations, Gödel's first incompleteness theorem, the axiom of choice, the metamathematics of R and elementary algebra, and the metamathematics of N. The book is a valuable reference for mathematicians and researchers interested in mathematical logic and formalized theories.

## Geometry and the Imagination

Author: David Hilbert | Publisher: American Mathematical Soc. | ISBN: 9780821819982 | Release Date: 1999 | Genre: Mathematics

This remarkable book endures as a true masterpiece of mathematical exposition. The book is overflowing with mathematical ideas, which are always explained clearly and elegantly, and above all, with penetrating insight. It is a joy to read, both for beginners and experienced mathematicians. Geometry and the Imagination is full of interesting facts, many of which you wish you had known before. The book begins with examples of the simplest curves and surfaces, including thread constructions of certain quadrics and other surfaces. The chapter on regular systems of points leads to the crystallographic groups and the regular polyhedra in $\mathbb{R}^3$. In this chapter, they also discuss plane lattices. By considering unit lattices, and throwing in a small amount of number theory when necessary, they effortlessly derive Leibniz's series: $\pi/4 = 1 - 1/3 + 1/5 - 1/7 + \cdots$. In the section on lattices in three and more dimensions, the authors consider sphere-packing problems, including the famous Kepler problem. One of the most remarkable chapters is "Projective Configurations". In a short introductory section, Hilbert and Cohn-Vossen give perhaps the most concise and lucid description of why a general geometer would care about projective geometry and why such an ostensibly plain setup is truly rich in structure and ideas. The chapter on kinematics includes a nice discussion of linkages and the geometry of configurations of points and rods that are connected and, perhaps, constrained in some way. This topic in geometry has become increasingly important in recent times, especially in applications to robotics. This is another example of a simple situation that leads to a rich geometry. It would be hard to overestimate the continuing influence Hilbert and Cohn-Vossen's book has had on mathematicians of this century. It surely belongs in the "pantheon" of great mathematics books.

## The Mathematical Theory of Huygens' Principle

Author: Bevan B. Baker | Publisher: American Mathematical Soc.
ISBN: 9780821834787 | Release Date: 2003 | Genre: Mathematics

Baker and Copson originally set themselves the task of writing a definitive text on partial differential equations in mathematical physics. However, at the time, the subject was changing rapidly and greatly, particularly via the developments coming from quantum mechanics. Instead, the authors chose to focus on a particular area of the broad theory, producing a monograph complete in itself. The resulting book deals with Huygens' principle in optics and its application to the theory of diffraction. Baker and Copson concern themselves with the general theory of the solution of the PDEs governing the propagation of light. Extensive use is made of Green's method. A chapter is dedicated to Sommerfeld's theory of diffraction, including the diffraction of polarized light by a perfectly reflecting half-plane and by a black half-plane. New material was added for subsequent editions, notably the application of Rayleigh's method of integral equations to the problem of diffraction by a planar screen. Some of the simpler diffraction problems are discussed as examples. Baker and Copson's book quickly became the standard reference on the subject of Huygens' principle. It remains so today.

## Lebesgue's Theory of Integration

Author: Thomas Hawkins | Publisher: American Mathematical Soc. | ISBN: 0821829637 | Release Date: 2001-01 | Genre: Mathematics

In this book, Hawkins elegantly places Lebesgue's early work on integration theory within its proper historical context by relating it to the developments during the nineteenth century that motivated it and gave it significance, and also to the contributions made in this field by Lebesgue's contemporaries. Hawkins was awarded the 1997 MAA Chauvenet Prize and the 2001 AMS Albert Leon Whiteman Memorial Prize for notable exposition and exceptional scholarship in the history of mathematics.

## Mathematical Grammar of Biology

Author: Michel Eduardo Beleza Yamagishi | Publisher: Springer | ISBN: 9783319626895 | Release Date: 2017-08-31 | Genre: Mathematics

This seminal, multidisciplinary book shows how mathematics can be used to study the first principles of DNA. Most importantly, it enriches the so-called "Chargaff's grammar of biology" by providing the conceptual theoretical framework necessary to generalize Chargaff's rules. Starting with a simple example of DNA mathematical modeling where human nucleotide frequencies are associated to the Fibonacci sequence and the Golden Ratio through an optimization problem, its breakthrough is showing that the reverse, complement and reverse-complement operators defined over oligonucleotides induce a natural set partition of DNA words of fixed size. These equivalence classes, when organized into a matrix form, reveal hidden patterns within the DNA sequence of every living organism. Intended for undergraduate and graduate students both in mathematics and in life sciences, it is also a valuable resource for researchers interested in studying invariant genomic properties.

## First Order Mathematical Logic

Author: Angelo Margaris | Publisher: Courier Corporation | ISBN: 0486662691 | Release Date: 1990 | Genre: Mathematics

"Attractive and well-written introduction." — Journal of Symbolic Logic

The logic that mathematicians use to prove their theorems is itself a part of mathematics, in the same way that algebra, analysis, and geometry are parts of mathematics.
This attractive and well-written introduction to mathematical logic is aimed primarily at undergraduates with some background in college-level mathematics; however, little or no acquaintance with abstract mathematics is needed. Divided into three chapters, the book begins with a brief encounter with naïve set theory and logic for the beginner, and proceeds to set forth in elementary and intuitive form the themes developed formally and in detail later. In Chapter Two, the predicate calculus is developed as a formal axiomatic theory. The statement calculus, presented as a part of the predicate calculus, is treated in detail from the axiom schemes through the deduction theorem to the completeness theorem. Then the full predicate calculus is taken up again, and a smooth-running technique for proving theorem schemes is developed and exploited. Chapter Three is devoted to first-order theories, i.e., mathematical theories for which the predicate calculus serves as a base. Axioms and short developments are given for number theory and a few algebraic theories. Then the metamathematical notions of consistency, completeness, independence, categoricity, and decidability are discussed. The predicate calculus is proved to be complete. The book concludes with an outline of Gödel's incompleteness theorem. Ideal for a one-semester course, this concise text offers more detail and mathematically relevant examples than those available in elementary books on logic. Carefully chosen exercises, with selected answers, help students test their grasp of the material. For any student of mathematics, logic, or the interrelationship of the two, this book represents a thought-provoking introduction to the logical underpinnings of mathematical theory.

"An excellent text." — Mathematical Reviews

## Combinatorial Problems and Exercises

Author: L. Lovász | Publisher: Elsevier | ISBN: 9780080933092 | Release Date: 2014-06-28 | Genre: Mathematics

The aim of this book is to introduce a range of combinatorial methods for those who want to apply these methods in the solution of practical and theoretical problems. Various tricks and techniques are taught by means of exercises. Hints are given in a separate section, and a third section contains all solutions in detail. A dictionary section gives definitions of the combinatorial notions occurring in the book. Combinatorial Problems and Exercises was first published in 1979. This revised edition has the same basic structure but has been brought up to date with a series of exercises on random walks on graphs and their relations to eigenvalues, expansion properties and electrical resistance. In various chapters the author found lines of thought that have been extended in a natural and significant way in recent years. About 60 new exercises (more counting sub-problems) have been added and several solutions have been simplified.

## A Companion to Analysis

Author: Thomas William Körner | Publisher: American Mathematical Soc. | ISBN: 9780821834473 | Release Date: 2004 | Genre: Mathematics

## The Foundations of Geometry

Author: David Hilbert | ISBN: UCAL:B4073879 | Release Date: 1910 | Genre: Geometry

## The Principles of Inductive Logic

Author: John Venn | Publisher: Taylor & Francis US | ISBN: 0828402655 | Release Date: 1973 | Genre: Mathematics

Venn, best known for his diagrams for set theory, primarily studied logic and probability theory. The present book is a study of the principles of logic, with special emphasis on inference and induction.
From the Preface to the First Edition (1889): "As many readers will probably perceive, the main original guiding influence with me--as with most of those of the middle generation, and especially with most of those who approached logic with previous mathematical or scientific training--was that of Mill ... I still continue to regard the general attitude towards phenomena, which Mill took up as a logician, to be the soundest and most useful for scientific study ..."

From the Preface to the Second Edition (1907): "Though thus leaving the main outlines unaltered I have done what I could to improve the work, and to try to bring it up to date ... A number of paragraphs have been altered, others have been re-written, and many hundreds of minor alterations, additions and corrections inserted ..."

## Levels of Infinity

Author: Hermann Weyl | Publisher: Courier Corporation | ISBN: 9780486489032 | Release Date: 2012 | Genre: Mathematics

This original anthology collects 10 of Weyl's less-technical writings that address the broader scope and implications of mathematics. Most have been long unavailable or not previously published in book form. Subjects include logic, topology, abstract algebra, relativity theory, and reflections on the work of Weyl's mentor, David Hilbert. 2012 edition.
2018-11-18 08:48:59
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5945121049880981, "perplexity": 1234.2205025164405}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039744320.70/warc/CC-MAIN-20181118073231-20181118094600-00021.warc.gz"}
http://www.ugrad.math.ubc.ca/coursedoc/math100/notes/mordifeqs/euler.html
## Euler's Method

We have seen how to use a direction field to obtain qualitative information about the solutions to a differential equation. This simple kind of reasoning led to predictions for the eventual behaviour of solutions to the logistic equation. Sometimes, however, we want more detailed information. For instance, we might want to know how long it will take before the solution is near the limiting value. In this case, we can use the linear approximation to numerically approximate solutions to differential equations. We will demonstrate this approach through an example.

A Simple Initial Value Problem

Let's start by looking at an initial value problem whose solution is known:

$y' = y, \qquad y(0) = 1.$

We know that the solution is $y = e^x$. This means that after we find our approximate solution, we will be able to determine how good of an approximation it really is.

Let's suppose that we are interested in the value of the solution at $x = 1$. We know the value at $x = 0$ since that is a part of the initial value problem---namely, $y(0) = 1$. Notice that the differential equation also tells us the derivative of the solution at $x = 0$ since

$y'(0) = y(0) = 1.$

If we now form the linear approximation at $x = 0$, we find that $y \approx 1 + x$. Then our approximation yields

$y(1) \approx 2.$

This approximation is not too good (the true value is $e \approx 2.718$) but it was easy to obtain. Graphically, the picture is like this:

The problem with the approximation is that the derivative of the solution is changing across the interval but the approximation assumes that it is constantly 1. We can try to fix this up by dividing the interval into two pieces: first, we will use the linear approximation based at $x = 0$ to approximate the value at $x = \tfrac12$. Then we will use a linear approximation at $x = \tfrac12$ to obtain an approximate value at $x = 1$.

We have already obtained the linear approximation based at $x = 0$. This produces the approximate value $y(\tfrac12) \approx \tfrac32$. This tells us that the solution curve approximately passes through $(\tfrac12, \tfrac32)$. That means that

$y'(\tfrac12) \approx \tfrac32.$

We will then form the linear approximation at the point $(\tfrac12, \tfrac32)$: it produces

$y \approx \tfrac32 + \tfrac32\left(x - \tfrac12\right),$

which yields the approximation $y(1) \approx \tfrac94 = 2.25$. This is, in fact, a better approximation to the value $e \approx 2.718$. Graphically, what we have done is illustrated in the diagram. Here you can see why we have a better approximation: the derivative of the solution changes as we move across the interval. In the second approximation, we take this into account by stopping at $x = \tfrac12$, recomputing the derivative and then continuing on.

Now you can probably imagine that we will get better approximations if we take shorter steps and correct the slope at every step. To do this, imagine walking from 0 to 1 by taking n steps, each of width $h = \tfrac1n$. We will call the points we obtain $x_0 = 0, x_1, \ldots, x_n = 1$ and the approximate values there $y_0, y_1, \ldots, y_n$. Notice that $y_0 = 1$ since this is where the initial value problem tells us to begin. To get from one step to the next, we are assuming that the solution approximately passes through $(x_k, y_k)$. At that point, the derivative, which is equal to the y coordinate by the differential equation, is $y_k$. That means that the linear approximation at that point is

$y \approx y_k + y_k (x - x_k).$

This means that at $x_{k+1}$, we have

$y_{k+1} = y_k + h\, y_k.$

The following demonstration will let you select the number of steps and show you the approximate solution (type in the number of steps and press "Return"). Notice that as the number of steps gets larger, the approximation becomes very good.

Euler's Method

Now we will work with a general initial value problem

$y' = f(x, y), \qquad y(x_0) = y_0.$

We will again form an approximate solution by taking lots of little steps. We will call the distance between the steps h and the various points $x_0, x_1 = x_0 + h, x_2 = x_0 + 2h, \ldots$. To get from one step to the next, we will form the linear approximation at $(x_k, y_k)$. The derivative at this point is given by the differential equation: $y'(x_k) = f(x_k, y_k)$.
The linear approximation is then so that This technique is called Euler's Method. The logistic equation Now we will consider the initial value problem Notice that this has the basic form of the logistic equation. We have studied this equation qualitatively, but we do not explicitly know solutions. As an example, we will approximate the solution on the interval by taking steps of width h. Applying Euler's Method, we can generate an approximate solution by In the demonstration below, you can enter the number of steps and see the approximate solution. Again, as you take more steps, the solution does not vary too much when you increase the number of steps. You can then feel confident that your solution is a good approximation.
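The stepping rule above is short enough to put in code. A minimal Python sketch; the logistic right-hand side $y(1-y)$ and the initial value $y(0) = 0.1$ in the last line are assumptions for illustration only, since the notes leave the exact logistic problem unspecified:

```python
def euler(f, t0, y0, t_end, n):
    """Approximate y(t_end) for y' = f(t, y), y(t0) = y0, using n Euler steps."""
    h = (t_end - t0) / n          # step width
    t, y = t0, y0
    for _ in range(n):
        y = y + h * f(t, y)       # linear approximation; slope recomputed each step
        t = t + h
    return y

# The simple initial value problem y' = y, y(0) = 1, whose exact solution is e^t.
# One and two steps reproduce the hand computations above (2 and 2.25):
print(euler(lambda t, y: y, 0.0, 1.0, 1.0, 1))    # 2.0
print(euler(lambda t, y: y, 0.0, 1.0, 1.0, 2))    # 2.25
print(euler(lambda t, y: y, 0.0, 1.0, 1.0, 100))  # ~2.7048, vs e = 2.71828...

# A logistic-type equation y' = y(1 - y); the initial value 0.1 is assumed.
print(euler(lambda t, y: y * (1.0 - y), 0.0, 0.1, 1.0, 100))
```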
2014-10-22 21:38:56
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8959691524505615, "perplexity": 166.55809459656678}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-42/segments/1413507447657.38/warc/CC-MAIN-20141017005727-00116-ip-10-16-133-185.ec2.internal.warc.gz"}
https://www.dsemth.com/static/question-sample/whisker
### whisker

Question Sample Titled 'whisker'

The following box-and-whisker diagram shows the distribution of weights (in $\text{kg}$) of ${44}$ students. The mean weight of the students is ${63}$ $\text{kg}$.

[Box-and-whisker diagram: minimum ${51}$, lower quartile ${58}$, median ${65}$, upper quartile ${69}$, maximum ${77}$; axis labelled weight ($\text{kg}$)]

(a) Find the inter-quartile range of the above distribution. (1 mark)

(b) Five more students join and their weights are ${51}$ $\text{kg}$, ${55}$ $\text{kg}$, ${65}$ $\text{kg}$, ${69}$ $\text{kg}$ and ${67}$ $\text{kg}$. Find the new mean and the new median of the weights. (3 marks)

(a) Inter-quartile range $={69}-{58}$ $={11}$ $\text{kg}$ 1A

(b) New sum of the weights $={63}\times{44}+{51}+{55}+{65}+{69}+{67}$ 1M
$={3079}$ $\text{kg}$
New mean $=\dfrac{{3079}}{{{44}+{5}}}$ $=\dfrac{{3079}}{{49}}$ $\text{kg}$ $\approx{62.8}$ $\text{kg}$ 1A
∵ Two of the new data are smaller than the median, two are greater than the median, and one is equal to the median.
∴ New median $={65}$ $\text{kg}$ 1A
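Part (b) is easy to sanity-check in code. A small Python sketch; the 44 individual weights are known only through their mean, so the median step stays a reasoning step in the comments:

```python
new_weights = [51, 55, 65, 69, 67]

# New mean: the original 44 students contribute 63 kg x 44 = 2772 kg in total.
total = 63 * 44 + sum(new_weights)
print(total)                  # 3079
print(round(total / 49, 1))   # 62.8

# New median: with 49 data the median is the 25th ordered value. Two of the
# new weights (51, 55) lie below the old median of 65, two (67, 69) lie
# above it, and one equals it, so the middle value is still 65 kg.
```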
2020-09-18 06:52:16
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 59, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7874017953872681, "perplexity": 6615.212649461141}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400187354.1/warc/CC-MAIN-20200918061627-20200918091627-00432.warc.gz"}
https://chorasimilarity.wordpress.com/2013/03/13/currying-by-using-zippers-and-an-allusion-to-the-cartesian-theater/
# Currying by using zippers and an allusion to the Cartesian Theater

Here is a recipe for understanding currying with the help of zippers. (Done in graphic lambda calculus.) We have a graph $A \in GRAPH$ which has one output and several inputs. We want to curry it. For this we have to artificially give names to the inputs, i.e. to number them (notice that such a thing is not needed in graphic lambda). The next step is to use an $n$-zipper in order to clip the inputs, by using $n$ graphic beta moves, until we get this:

This graph is, in fact, the following one (we replace the $n$-zipper, which is just a notation, or a macro, with its expression). The graph inside the green dotted rectangle is the currying of $A$, let's call him $Curry(A)$. This graph has only one output and no inputs. (The procedure of currying can be made itself into a graph which is applied to the output of $A$, but we stop at this level for this post.) The graph inside the red dotted rectangle is almost a list. We shall transform it into a list by using again a zipper and one graphic beta move. Now we're done!

As you see, the currying creates the list, or the list creates the currying, or both form a pair, like the homunculus $Curry(A)$ and the scenic space $List(1,2, ... , n)$, an allusion to my post on the Cartesian Theater.

## 10 thoughts on "Currying by using zippers and an allusion to the Cartesian Theater"

1. What general mathematical relation exists between Curry(A) and List(1, 2, ..., n)?

1. They are the two sides of a whole. The post is a mathematical proof of this. Think about A as being a function with several variables. Currying means to express this function as a function of a variable with values in (functions of a variable with values in (functions of a variable with values in ...))) until the list of variables ends. What is interesting here is that in graphic lambda there are no names for variables. The graph A from the first figure is equivalent to the graph from the last figure (i.e. A can be transformed by using graphic beta moves into the last graph). The graph from the last figure means that the curried version of the "function" (though not a function without using extensionality) A is applied (at the right, funny but meaningful) to the list of variables. That's equivalent to A with an arbitrary numbering of its inputs. This is obvious once you admit names for variables. But it is less trivial to show that the pair (curry, cons) appears as a consequence of the procedure of naming variables. If you want, A is like a geometrical object (say, a sphere) and the graph at the end is like a parametrization of the geometrical object. The parametrization is not geometrical. Or A is like a physical property of a system expressed intrinsically and the graph at the end is like the same physical property expressed in a reference frame. But physical properties are those which are independent wrt the choice of reference frame. That's exactly my critique in the post on the cartesian theater, namely that by using the theater in a box as a model of the cartesian theater, one is misled to think that homunculus existence leads to fallacies, but otherwise everything is OK with the scenic space. No, the scenic space is the other side of the homunculus. This is not meant to deny the existence of the objective space exterior to the observer.

1. Thanks! So, once the currying and lists are in place, then the choice concerning the order of taking beta moves (i.e.
reductions) narrows, to the point that one may practically reduce it to two main strategies (for example lazy evaluation). This gives the illusion that there is no other choice to be made by an exterior manipulator of the expressions. This is also what makes computers feasible, because we are not asked before any reduction step to choose among multiple reduction possibilities, if any. Coming back to the "geometric property" analogy, then, per my weak understanding of those matters, indeed you are right that the evaluation strategy has to do with bisimulation, as you communicated in private. For me this is like the old way of differential geometry, which is to say that a quantity (like a tensor given as an array of numbers) computed wrt a chart is geometrical if there is a given rule of transformation (between tensorial quantities given in coordinates) which transforms the initial quantity into the one which is computed by the same recipe in a different chart. Finally, we may speculate that natural computing systems, as maybe is our visual system, don't need to curry, because the order of taking reduction steps is given by a physical process (like diffusion, or some probabilistic rule concerning the local state of the graph). In this way, the number of "computational steps" (i.e. reductions) is much lower than in curried form, an observation which might be relevant for the puzzle that in the visual system all "computation" is done by at most 6 synaptic jumps. Such natural systems would then be massively parallel, instead of being massively sequential, due to currying. But in order to understand how they work, in order to simulate them on a computer, we would have to curry the process. Maybe brain work is like a massively parallel version of Brainfuck, who knows? EDIT: One more thing. There is a way to check this hypothesis, in principle. It goes like this: take an algorithm, express it in untyped lambda calculus, better in Unlambda, then express it in graphic lambda calculus, then uncurry it and spread it as much as possible (i.e. such that it is transformed into a graph in graphic lambda calculus with the properties that: (1) there is as small a distance as possible between inputs and outputs of the graph (measured as the number of nodes one has to pass from an input to an output, for example), (2) the reduction steps (or other moves) are as evenly spread as possible over the graph, allowing for as much parallelism as possible). If this procedure works then the result is the "geometrical" picture of the algorithm, and maybe the one most resembling what happens in a brain. You imagine that just by looking at such a graph one would hardly be able to say what the algorithm is.
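The "function of a variable with values in functions" description has a direct executable analogue in an ordinary language. A small Python sketch, where the three-argument function volume is a made-up stand-in for A; graphic lambda calculus itself is not being modeled here:

```python
from functools import reduce

def curry3(f):
    """Curry a three-argument function into nested one-argument functions."""
    return lambda x: lambda y: lambda z: f(x, y, z)

def volume(a, b, c):
    return a * b * c

curried = curry3(volume)
print(curried(2)(3)(4))   # 24, same as volume(2, 3, 4)

# "Uncurrying" applies the curried form to a list of inputs, loosely mirroring
# the pairing of Curry(A) with List(1, 2, ..., n) described in the post.
args = [2, 3, 4]
print(reduce(lambda g, x: g(x), args, curried))  # 24
```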
2020-01-26 02:14:20
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 9, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7501219511032104, "perplexity": 503.80617599771796}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579251684146.65/warc/CC-MAIN-20200126013015-20200126043015-00505.warc.gz"}
https://mathoverflow.net/questions/288723/does-v-textitultimate-l-imply-gch
# Does $V = \textit{Ultimate }L$ imply GCH?

In his Midrasha Mathematicae lectures ("In Search of Ultimate $L$", BSL 23 [2017]: 1–109), Woodin notes that $V = \textit{Ultimate }L$ implies $\textrm{CH}$ (Theorem 7.26, p.103). Is it known whether $V = \textit{Ultimate }L$ implies $\textrm{GCH}$?

## 2 Answers

In his slides Absolutely ordinal definable sets, John Steel writes:

At the same time, one hopes that V = ultimate L will yield a detailed fine structure theory for V, removing the incompleteness that large cardinal hypotheses by themselves can never remove. It is known that V = ultimate L implies the CH, and many instances of the GCH. Whether it implies the full GCH is a crucial open problem.

During this year's conference on inner model theory in Münster, Gabriel Goldberg proved that the so-called Ultrapower Axiom implies that $\mathrm{GCH}$ holds above a supercompact cardinal (and has since lowered the bound to a strongly compact cardinal). It seems very likely (it might even be known) that $\mathrm{Ultimate}\ L$ satisfies this requirement. Hence, given enough large cardinals, it will satisfy $\mathrm{GCH}$ at least on a tail end. For more information, see G. Goldberg, Strong Compactness and the Ultrapower Axiom.
2020-12-01 17:16:42
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9262609481811523, "perplexity": 564.5036682439184}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141681209.60/warc/CC-MAIN-20201201170219-20201201200219-00653.warc.gz"}
https://www.intpforum.com/threads/jokes.9767/
# jokes

#### Coolydudey60
##### Banned
does anyone know of any good jokes? i'm bored

#### BigApplePi
##### Banned
Soldiering

Don't know if this is a joke as it could be true: The Major went out to find that none of his soldiers were there. One finally ran up, sweating heavily. "Sorry, sir! I can explain, you see I had a date and it ran a little late. I ran to the bus but missed it, I hailed a cab but it broke down, found a farm, bought a horse but it dropped dead, ran five miles, and now I'm here." The Major was very skeptical about this explanation, but at least he was there, so he let the soldier go. Moments later, more soldiers came up to the Major panting, and he asked them why they were late. "Sorry, sir! I had a date and it ran a little late, I ran to the bus but missed it, I hailed a cab but it broke down, found a farm, bought a horse but it dropped dead, ran 10 miles, and now I'm here." The Major eyed them, feeling very skeptical, but since he let the first guy go, he let them go, too. Another soldier jogged up to the Major, panting heavily. "Sorry, sir! I had a date and it ran a little late, I ran to the bus but missed it, I hailed a cab but..." "Let me guess," the Major interrupted, "it broke down." "No," said the soldier, "there were so many dead horses in the road, it took forever to get around them."

#### BigApplePi
##### Banned
Traffic

A lady was out driving her car and when she stopped at a red light, the car just died. It was a busy intersection, and the traffic jam behind her started growing. The guy in the car directly behind her started honking his horn continuously as the lady continued to try getting the car to start up again. Finally she got out of her car and approached the guy in the car behind her. "I can't seem to get my car started," she said, smiling. "Would you be a sweetheart and go and see if you can get it started for me. I'll stay here in your car and lean on your horn for you."

#### Coolydudey60
##### Banned
1st answer by pi: nice one!

#### BigApplePi
##### Banned
Punny?

Punny -- 1. The fattest knight at King Arthur's round table was Sir Cumference. He acquired his size from too much pi. 2. I thought I saw an eye doctor on an Alaskan island, but it turned out to be an optical Aleutian. 3. She was only a whiskey maker, but he loved her still. 4. A rubber band pistol was confiscated from algebra class, because it was a weapon of math disruption. 5. No matter how much you push the envelope, it'll still be stationery. 6. A dog gave birth to puppies near the road and was cited for littering. 7. A grenade thrown into a kitchen in France would result in Linoleum Blownapart. 8. Two silk worms had a race. They ended up in a tie. 9. A hole has been found in the nudist camp wall. The police are looking into it. 10. Time flies like an arrow. Fruit flies like a banana. 11. Atheism is a non-prophet organization. 12. Two hats were hanging on a hat rack in the hallway. One hat said to the other: 'You stay here; I'll go on a head.' 13. I wondered why the baseball kept getting bigger. Then it hit me. 14. A sign on the lawn at a drug rehab center said: 'Keep off the Grass.' 15. The midget fortune-teller who escaped from prison was a small medium at large. 16. The soldier who survived mustard gas and pepper spray is now a seasoned veteran. 17. A backward poet writes inverse. 18. In a democracy it's your vote that counts. In feudalism it's your count that votes. 19. When cannibals ate a missionary, they got a taste of religion. 20.
If you jumped off the bridge in Paris, you'd be in Seine. 21. A vulture boards an airplane, carrying two dead raccoons. The stewardess looks at him and says, 'I'm sorry, sir, only one carrion allowed per passenger.' 22. Two fish swim into a concrete wall. One turns to the other and says 'Dam!' 23. Two Eskimos sitting in a kayak were chilly, so they lit a fire in the craft. Unsurprisingly it sank, proving once again that you can't have your kayak and heat it too. 24. Two hydrogen atoms meet. One says, 'I've lost my electron.' The other says 'Are you sure?' The first replies, 'Yes, I'm positive.' 25. Did you hear about the Buddhist who refused Novocain during a root canal? His goal: Transcend dental medication. 26. There was the person who sent ten puns to friends, with the hope that at least one of the puns would make them laugh. No pun in ten did.

#### BigApplePi
##### Banned
Politics

In the hospital the relatives gathered in the waiting room, where a family member lay gravely ill. Finally, the doctor came in looking tired and somber. "I'm afraid I'm the bearer of bad news," he said as he surveyed the worried faces. "The only hope left for your loved one at this time is a brain transplant. It's an experimental procedure, very risky, but it is the only hope. Insurance will cover the procedure, but you will have to pay for the BRAIN." The family members sat silent as they absorbed the news. After a time, someone asked, 'How much will a brain cost?' The doctor quickly responded, "$5,000 for a Democrat's brain; $200 for a Republican's brain." The moment turned awkward. Some of the Democrats actually had to 'try' to not smile, avoiding eye contact with the Republicans. A man, unable to control his curiosity, finally blurted out the question everyone wanted to ask, "Why is the Democrat's brain so much more than a Republican's brain?" The doctor smiled at the childish innocence and explained to the entire group, "It's just standard pricing procedure. We have to price the Republicans' brains a lot lower because they've been used."

#### EyeSeeCold
##### lust for life
does anyone know of any good jokes? i'm bored

Your mother is an ESFJ.

#### ApostateAbe
##### The past is an asshole, so f*** it
Q: What's so sad about four black men who get in a car wreck and die? A: They are human beings with friends and families who will miss them.

#### Solitaire U.
##### Last of the V-8 Interceptors
^ Didn't see that coming...

#### ApostateAbe
##### The past is an asshole, so f*** it
^ Didn't see that coming...

What else would you expect? You must be racist.

#### Solitaire U.
##### Last of the V-8 Interceptors
You're right. I can't stand white people.

#### BigApplePi
##### Banned
You're right. I can't stand white people.

What if they're half-white? Do you half hate them?

#### Solitaire U.
##### Last of the V-8 Interceptors
No, I flip 'em over and let 'em cook some more on the white part.

#### BigApplePi
##### Banned
Good Training

Two Indians and a Hillbilly were walking through the woods. All of a sudden one of the Indians ran up a hill to the mouth of a small cave. "Wooooo! Wooooo! Wooooo!" he called into the cave and listened closely until he heard an answering, "Wooooo! Wooooo! Woooooo!" He then tore off his clothes and ran into the cave. The Hillbilly was puzzled and asked the remaining Indian what it was all about. "Was that Indian crazy or what?" The Indian replied "No, it is our custom during mating season when Indian men see cave, they holler 'Wooooo! Wooooo! Wooooo!' into the opening.
If they get an answer back, it means there's a beautiful woman in there waiting for us." Just then they came upon another cave. The second Indian ran up to the cave, stopped, and hollered, "Wooooo! Wooooo! Wooooo!" Immediately, there was the answer. "Wooooo! Wooooo! Wooooo!" from deep inside. He also tore off his clothes and ran into the opening. The Hillbilly wandered around in the woods alone for a while, and then spied a third large cave. As he looked in amazement at the size of the huge opening, he was thinking, "Hoo, man! Look at the size of this cave! It's bigger than those the Indians found. There must be some really big, fine women in this cave!" He stood in front of the opening and hollered with all his might, "Wooooo! Wooooo! Wooooo!" Like the others, he then heard an answering call, "WOOOOOOOOO, WOOOOOOOOO WOOOOOOOOO!" With a gleam in his eye and a smile on his face, he raced into the cave, tearing off his clothes as he ran. The following day, the headline of the local newspaper read.... NAKED HILLBILLY RUN OVER BY TRAIN!!!!!!!

#### sammael
^lol.

A man is walking through the zoo one day when he sees a little girl leaning into the lion's cage. Suddenly, the lion grabs the little girl by her jacket and tries to pull her inside. Her parents start screaming hysterically. The man runs to the cage and whacks the lion on the nose with his umbrella. Whimpering from the pain, the lion retreats and lets go of the little girl. The man reunites her with her parents - who thank him repeatedly for saving their daughter's life. Unbeknownst to the man, a journalist has been watching what happened. 'Sir,' he says, walking up to him afterwards, 'that was the bravest thing i ever saw in my life.' The man shrugs. 'It was nothing,' he says. 'The lion was in a cage and i knew God would protect me, just as he protected Daniel in the lion's den. When i saw the little girl was in danger, I just did what i thought was right.' The reporter is gobsmacked. 'Is that a bible I see in your pocket?' he asks. 'Yes,' says the man. 'I'm a Christian. In fact, I'm on my way to Bible class right now.' 'I'm a journalist,' replies the reporter. 'And you know what? I'm going to run what you did on tomorrow's front page. I'm going to make absolutely certain that your selfless act of heroism doesn't go unnoticed.' The following morning the man buys the paper. The headline reads as follows: Right-wing Christian Fundamentalist Assaults African Immigrant and Steals His Lunch.

#### Coolydudey60
##### Banned
Your mother is an ESFJ.

and my dad an INTP. i'm not 100% sure about ESFJ, but she comes pretty close EDIT: i'd say she's an ExxJ. She handles both abstract and concrete concepts well, and switches between the two depending on the situation. often more oriented towards F she also has an underlying T in there, cause in any case if she didn't she wouldn't be where she is. it's quite funny that my mum is an ESFj and my dad an INTP though (they did recently get divorced), must be pretty rare. and funny I didn't get much of a mix either and that i'm INTP too. but i do talk to my mum most of the time, and we manage to get along amazingly well together. it's funny. anyway like my dad, i'm considering studying applied maths, because for me, like him, even though abstract concepts are cool, they become much more interesting when you apply them. Also my mum has made me slightly more S than my dad (only slightly). it's true though that even if I don't like being too sociable, i do have quite good social skills.
a lot of exposure to an F friend of mine recently has turned me from 95% T to about 60%T as well. i am slowly learning to be able to do what a J does. it's called an all round character, with an extra thing for the best type (INTP's). maybe this should have gone in the introit... ah well. it's not got much to do with jokes anyway EDIT: how did u know anyway?

#### Kev
##### Redshirt
What did one tampon say to the other? Nothing, they were both stuck up cunts.

#### Coolydudey60
##### Banned
BigApplePi, are you a comedian? here's one i invented with a friend of mine: Q: why is it better to marry a radio than a woman? A1: at least you can turn the radio off! A2: radios play music as well! this i saw graffitied somewhere: Q: how do u make an archaeologist blush? A: give them a dirty tampon and ask them what period it's from!

#### EyeSeeCold
##### lust for life
and my dad an INTP. i'm not 100% sure about ESFJ, but she comes pretty close EDIT: i'd say she's an ExxJ. She handles both abstract and concrete concepts well, and switches between the two depending on the situation. often more oriented towards F she also has an underlying T in there, cause in any case if she didn't she wouldn't be where she is. it's quite funny that my mum is an ESFj and my dad an INTP though (they did recently get divorced), must be pretty rare. and funny I didn't get much of a mix either and that i'm INTP too. but i do talk to my mum most of the time, and we manage to get along amazingly well together. it's funny. anyway like my dad, i'm considering studying applied maths, because for me, like him, even though abstract concepts are cool, they become much more interesting when you apply them. Also my mum has made me slightly more S than my dad (only slightly). it's true though that even if I don't like being too sociable, i do have quite good social skills. a lot of exposure to an F friend of mine recently has turned me from 95% T to about 60%T as well. i am slowly learning to be able to do what a J does. it's called an all round character, with an extra thing for the best type (INTP's). maybe this should have gone in the introit... ah well. it's not got much to do with jokes anyway EDIT: how did u know anyway?

It was supposed to be a joke lol. The premise being "ESFJ" is derogatory.

#### Coolydudey60
##### Banned
It was supposed to be a joke lol. The premise being "ESFJ" is derogatory.

random, lol!

#### Yet
##### Active Member
There's this elder woman who wants to put the 'spark' back in her love life. She asks advice from one of her friends. Oh, the friend replies ... I know exactly what you should do! Go to the finest lingerie store in town and buy yourself a lovely little sexy nighty. Lie down on bed with a sexy pose and once your husband sees you Johnny will wake up out of his winter sleep and you'll have the best time of your life. Trust me. Off the woman goes, buys herself a lovely set and lies down on bed smiling.... Hubby reads his book in bed, puts down his specs when he gets tired and turns around and snores within 10 mins. The next day the woman phones her friend and tells her what happened ... 'Oh, I know what must be wrong,' her friend says ... you did not show enough nude, did you ... you need a tiny little sexy nighty. Off the woman goes back to the lingerie store and buys herself the tiniest nighty with lots of lace. Sadly the same thing happens as the previous night. And of course next day she is back on the phone with her friend ... in tears. There there, her friend says, don't despair.
I think you should even show more skin. He is a bit shortsighted, your husband, isn't he? Just try the same thing tonight but do not put anything on. Just lie down naked, he can't miss that, can he? So the woman followed her friend's advice and lay down naked that evening. Hubby, sticking to his routine, read his book in bed but before taking his glasses off lifted them to his forehead... gave his wife a strange look and said: 'I'm not sure what to think of your new nighties lately, but the one you're wearing tonight really could use an iron'.

#### BigApplePi
##### Banned
Concentration

This is a test of your concentration. I think it gives you an opportunity to take it again if you fail the first time. http://www.gjk2.com/test/test.swf

#### Cogwulf
##### Is actually an INTJ
What's worse than a papercut? -The holocaust you insensitive monster. A man walks into a bar. Ouch.

#### BigApplePi
##### Banned
Is He Smart?

A man walks into a bar and notices a poker game at the far table. Upon taking a closer look he sees a dog sitting at the table. This piques his curiosity and he walks closer and sees cards and chips in front of the dog. Then the next hand is dealt and cards are dealt to the dog. Then the dog acts in turn with all the other players, calling, raising, discarding, everything the other human players were doing. However none of the other players seemed to pay any mind to the fact that they were playing with a dog, they just treated him like any other player. Finally the man could no longer hold his tongue, so between hands he quietly said to one of the players, "I can't believe that dog is playing poker, he must be the smartest dog in the world!" The player smiled and said, "He isn't that smart, every time he gets a good hand he wags his tail."

#### BigApplePi
##### Banned
Research

A little town had a high birth rate that had attracted the attention of the sociologists at the state university. They wrote a grant proposal; got a huge chunk of money; hired a few additional sociologists, an anthropologist, and a family planning and birth control specialist; moved to town; rented offices; set up their computers; got squared away; and began designing their questionnaires and such. While the staff was busy getting ready for their big research effort, the project director decided to go to the local drugstore for a cup of coffee. He sat down at the counter, ordered his coffee, and while he was drinking it, he told the druggist what his purpose was in town, then asked him if he had any idea why the birth rate was so high. "Sure," said the druggist. "Every morning the five o'clock train comes through here and blows for the crossing. It wakes everybody up, and, well, it's too late to go back to sleep, and it's too early to get up."

#### a detached retina
##### Active Member
Where did the Germans hide their armies in WW2? In their sleevies!!

#### BigApplePi
##### Banned
A friend posted this story:

Wisdom

Young King Arthur was ambushed and imprisoned by the monarch of a neighboring kingdom. The monarch could have killed him but was moved by Arthur's youth and ideals. So, the monarch offered him his freedom, as long as he could answer a very difficult question. Arthur would have a year to figure out the answer and, if after a year, he still had no answer, he would be put to death. The question?...What do women really want? Such a question would perplex even the most knowledgeable man, and to young Arthur, it seemed an impossible query.
But, since it was better than death, he accepted the monarch's proposition to have an answer by year's end. He returned to his kingdom and began to poll everyone: the princess, the priests, the wise men and even the court jester. He spoke with everyone, but no one could give him a satisfactory answer. Many people advised him to consult the old witch, for only she would have the answer. But the price would be high, as the witch was famous throughout the kingdom for the exorbitant prices she charged. The last day of the year arrived and Arthur had no choice but to talk to the witch. She agreed to answer the question, but he would have to agree to her price first. The old witch wanted to marry Sir Lancelot, the most noble of the Knights of the Round Table and Arthur's closest friend! Young Arthur was horrified. She was hunchbacked and hideous, had only one tooth, smelled like sewage, made obscene noises, etc. He had never encountered such a repugnant creature in all his life. He refused to force his friend to marry her and endure such a terrible burden; but Lancelot, learning of the proposal, spoke with Arthur. He said nothing was too big a sacrifice compared to Arthur's life and the preservation of the Round Table. Hence, a wedding was proclaimed and the witch answered Arthur's question thus: What a woman really wants, she answered....is to be in charge of her own life. Everyone in the kingdom instantly knew that the witch had uttered a great truth and that Arthur's life would be spared. And so it was, the neighboring monarch granted Arthur his freedom and Lancelot and the witch had a wonderful wedding. The honeymoon hour approached and Lancelot, steeling himself for a horrific experience, entered the bedroom. But, what a sight awaited him. The most beautiful woman he had ever seen lay before him on the bed. The astounded Lancelot asked what had happened. The beauty replied that since he had been so kind to her when she appeared as a witch, she would henceforth be her horrible deformed self only half the time and the beautiful maiden the other half. Which would he prefer? Beautiful during the day....or night? Lancelot pondered the predicament. During the day, a beautiful woman to show off to his friends, but at night, in the privacy of his castle, an old witch? Or, would he prefer having a hideous witch during the day, but by night, a beautiful woman for him to enjoy wondrous intimate moments? What would YOU do? What Lancelot chose is below. BUT....make YOUR choice before you scroll down below. OKAY?

Noble Lancelot said that he would allow HER to make the choice herself. Upon hearing this, she announced that she would be beautiful all the time because he had respected her enough to let her be in charge of her own life. Now....what is the moral to this story?

Scroll down

The moral is..... If you don't let a woman have her own way.... Things are going to get ugly

#### Abraxas
##### γνῶσις
What does a nerd do after farting? Opens the Windows.

#### BigApplePi
##### Banned
What does an INTP do before? Readies a flaccid balloon.

#### Cogwulf
##### Is actually an INTJ
Did you hear about the baker who robbed a bank? When the police got there, all the money was scone. I'll get my coat.

#### Moocow
##### Semantic Nitpicker
I love puns. I made up a confucius joke once. Confucius say: Man who shows bear emotions loses face.

##### think again losers
What do you call a black man flying a plane?
A pilot you racist!

#### BigApplePi
##### Banned
What do you call a black man flying a plane? A pilot you racist!

What do you call a track star you see when flying over him? A racist you pilot!

#### EyeSeeCold
##### lust for life
What do you call a track star you see when flying over him? A racist you pilot!

what lol

##### think again losers
Where there is a lol, there is a joke, regardless of level of understanding. Oh wait I just got it haha

#### Cogwulf
##### Is actually an INTJ
When is a sea creature not a friend? When it's anemone

#### BigApplePi
##### Banned
What are you if you are removed from a friend? A fiend.

#### BigApplePi
##### Banned
How do you catch a polar bear? You make a hole in the ice. Then you take a can of peas and spread the peas all around the hole. When the bear comes up to take a pea, you kick him in the ice hole.

#### SpaceYeti
##### Prolific Member
The shy pebble wished it was a little boulder.

#### Perfectly Normal Beast
##### beware of crimethink
Man walks into his bedroom with a sheep under his arm. His wife is lying in bed reading. Man says, "This is the pig I have sex with when you've got a headache." Wife replies, "I think you'll find that is a sheep." Man replies, "I think you'll find I was talking to the sheep." I parked in a disabled space today and a traffic warden shouted, "Oi, what's your disability?" I said, "Tourettes! Now fuck off you cunt!" As I started fucking her, she said, "Please stop. You must stop. I want you to stop." It's nice that she's enjoying it, I thought, but why is she talking like a telegram?

#### Coolydudey
##### You could say that.
Yay, somebody resurrected my jokes thread!! 2 great blonde jokes: 1) Two blondes are driving to Disneyland, where they want to spend the weekend. Just when they got really close, they turned around and went home. Why? A: There was a sign that said "Disneyland left"! (If you don't get it, they interpreted left as gone, not here any more). 2) Two blondes are having a light conversation. One says "I want to go to the moon". The other replies, "I want to go to the sun". The first replies, "You can't silly, it's too hot". The second retorts "No you silly, I'm going to go at night!"

#### BigApplePi
##### Banned
Yay, somebody resurrected my jokes thread!!

Yay, thank you for recognizing I am some body.

#### Clock
##### ʞɔolƆ
Man walks into his bedroom with a sheep under his arm. His wife is lying in bed reading. Man says, "This is the pig I have sex with when you've got a headache." Wife replies, "I think you'll find that is a sheep." Man replies, "I think you'll find I was talking to the sheep." I parked in a disabled space today and a traffic warden shouted, "Oi, what's your disability?" I said, "Tourettes! Now fuck off you cunt!" As I started fucking her, she said, "Please stop. You must stop. I want you to stop." It's nice that she's enjoying it, I thought, but why is she talking like a telegram?

Awesome.

#### Jennywocky
##### guud languager
Q: What does Tarzan say when he sees a herd of elephants in the distance? A: "Look, a herd of elephants in the distance" Q: What does Tarzan say when he sees a herd of elephants with sunglasses? A: Nothing. He doesn't recognize them. Q: What does Tarzan say when he sees a herd of giraffes in the distance? A: "Haha! You fooled me once with those disguises, but not this time!" Q: What is the difference between an elephant and a plum? A: An elephant is grey. Q: What does Jane say when she sees a herd of elephants in the distance? A: "Look! A herd of plums in the distance" (Jane is colour blind) Q: How do you get an elephant into the fridge? 1. Open door. 2.
Insert elephant. 3. Close door. Q: How do you get a giraffe into the fridge? 1. Open door. 2. Remove elephant. 3. Insert giraffe. 4. Close door. Q. The lion, the king of the jungle, decided to have a party. He invited all the animals in the jungle, and they all came except one. Which one? A. The giraffe, because he was still in the fridge. Q: How do you know there are two elephants in your fridge? A: The door won't close. Q: How do you know there are three elephants in your fridge? A: There'll be one waiting outside in the Mini. Q: How can you tell that an elephant has been in your fridge? A: By the footprints in the butter. Q: How do you get an elephant out of the water? A: Wet. Q: How do you get two elephants out of the water? A: One by one. Q: Why do elephants wear shoes with yellow soles? A: So you don't see them when they float upside down in a bowl of custard.

1) Two blondes are driving to Disneyland, where they want to spend the weekend. Just when they got really close, they turned around and went home. Why? A: There was a sign that said "Disneyland left"! (If you don't get it, they interpreted left as gone, not here any more).

I can't believe you felt the need to explain a punchline ... and to a BLONDE joke nonetheless! (Of course, this is maybe because I'm a brunette, so I got it all on my own.)

#### Coolydudey
##### You could say that.
Yay, thank you for recognizing I am some body.

I had noticed it was you, but the importance was on the resurrection, not the person who did it. Jenny- it's not the most obvious punch line (especially if you're slightly sleepy)

#### Cherry Cola
##### Banned
There once was this homosexual man and he was getting a tattoo of a truck on his dick. He said to the tattooist that it was of utmost importance that the truck was equipped with a four wheel drive. The tattooist was baffled and wondered why; whereupon, the homosexual man answered that it was going to plow through a lot of shit.

#### Jennywocky
##### guud languager
There once was this homosexual man and he was getting a tattoo of a truck on his dick. He said to the tattooist that it was of utmost importance that the truck was equipped with a four wheel drive. The tattooist was baffled and wondered why; whereupon, the homosexual man answered that it was going to plow through a lot of shit.

"What's that smell?" his lover said later that night, after they were both collapsed panting on the bed. The gay man sniffed at the air. "I guess I really burned rubber."

#### DrGregoryHouse
##### Banned
Oh man, loving this thread. My type of humor. Dark, twisted, irreverent and a little off. I try not to make too many jokes around most people because they won't get them. Here is a true story which is an example of why I don't: An old woman nearing retirement age (beloved by all in the company; she had worked there her entire life) was walking out from the office one day carrying a large box. Some ice had built up around the exterior pathway immediately in front of the exterior door. She slipped and cracked her head on the door. As she was falling, the box was tossed in the air, and a short moment after she landed on her back on the ice, the box landed on her ribs, knocking the wind out of her. Upon hearing the commotion, I, along with a handful of others, exited my office and came to survey the scene. She was obviously a little banged up but otherwise unharmed. Desiring to assuage the obvious embarrassment felt by the old woman, I blurted out my first thought, "That was a nasty tumble. I sure hope the door is alright."
She immediately burst into tears, fled the scene and locked herself in the bathroom.
2019-05-20 06:21:35
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2599656879901886, "perplexity": 5271.344779880507}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232255773.51/warc/CC-MAIN-20190520061847-20190520083847-00413.warc.gz"}
https://lavelle.chem.ucla.edu/forum/viewtopic.php?f=151&t=61566
## Arrhenius Equation: $\ln k = - \frac{E_{a}}{RT} + \ln A$

JamieVu_2C
Posts: 108
Joined: Thu Jul 25, 2019 12:16 am

### 7E.5

The hydrolysis of an organic nitrile, a compound containing a –CN group, in basic solution, is proposed to proceed by the following mechanism. Write a complete balanced equation for the overall reaction, list any intermediates, and identify the catalyst in this reaction.

Step 1: RCN + OH- --> RCNOH
Step 2: RCNOH + H2O --> RC(NH)OH + OH-
Step 3: RC(NH)OH --> RCONH2

How do you know that OH- is not an intermediate but is a catalyst? How can you tell the difference?

Jonathan Gong 2H
Posts: 105
Joined: Sat Jul 20, 2019 12:16 am

### Re: 7E.5

You can tell that OH- is a catalyst in this reaction because it is consumed in step 1 as a reactant but formed again as a product in step 2, so it is present both at the start and at the end of the reaction. The difference is that an intermediate is created in one step and then consumed in a later step, while a catalyst is consumed first and regenerated later.
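The consumed-then-regenerated versus produced-then-consumed test can be made mechanical. A small Python sketch, not from the thread itself; the (reactants, products) encoding of the three steps is just an illustration:

```python
# Each step as (reactants, products); this encodes the 7E.5 mechanism above.
steps = [
    ({"RCN", "OH-"}, {"RCNOH"}),
    ({"RCNOH", "H2O"}, {"RC(NH)OH", "OH-"}),
    ({"RC(NH)OH"}, {"RCONH2"}),
]

species = set().union(*(r | p for r, p in steps))
for s in sorted(species):
    consumed = next((i for i, (r, _) in enumerate(steps) if s in r), None)
    produced = next((i for i, (_, p) in enumerate(steps) if s in p), None)
    if consumed is None or produced is None:
        continue  # net reactant or net product: appears in the overall equation
    if produced < consumed:
        print(s, "-> intermediate (formed first, used up later)")
    else:
        print(s, "-> catalyst (consumed first, regenerated later)")
```

Running it flags RCNOH and RC(NH)OH as intermediates and OH- as the catalyst, matching the answer above.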
2020-08-11 22:06:22
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 1, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5524279475212097, "perplexity": 4791.8051930221}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439738855.80/warc/CC-MAIN-20200811205740-20200811235740-00056.warc.gz"}
https://blog.theleapjournal.org/2009/08/low-level-equilibrium-of-indian-finance.html
## Saturday, August 01, 2009

### The low level equilibrium of Indian finance

There is a fascinating editorial in Business World, which worries that Indian finance has "become moribund, and ... no longer promotes either growth or competition", and which ends with:

To the question about what is to be done, there are no easy answers. It is necessary for wise people to reflect, to sit together and deliberate, and to rethink the entire system. But the question is, where are the wise people? The government has encroached on all the repositories of intellect. It has politicised the universities, and its funding has given it patronage that has emasculated research institutes. Democracy is supposed to have one great advantage over dictatorship — that it creates space for divergence and debate, that it keeps boiling a cauldron of clashing positions from which the truth can emerge. It is this diversity of opinions that the country needs today.

Also see this column by Ashok Desai in The Telegraph on 28th July. Among other things there, he says:

My findings are based on a cursory analysis of easily available banking statistics. So much more could be inferred from the masses of statistics accumulated by the Reserve Bank of India. All it needs is a good, elementary economist. The RBI employs economists by the hundreds; the finance ministry gives generous grants to many more. But their minds are focused on higher matters; looking at easily available figures and calculating simple ratios would not occur to them. So we continue to have one of the world's best documented and least analysed banking systems.
2019-09-20 00:01:59
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.20423755049705505, "perplexity": 2529.526271782635}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514573759.32/warc/CC-MAIN-20190919224954-20190920010954-00456.warc.gz"}
https://erj.ersjournals.com/highwire/markup/90915/expansion?width=1000&height=500&iframe=true&postprocessors=highwire_tables%2Chighwire_reclass%2Chighwire_figures%2Chighwire_math%2Chighwire_inline_linked_media%2Chighwire_embed
Table 4— Summary statistics of estimated annual average air pollution concentrations for the home (birth) addresses of the full cohort

| | PM2.5 (μg·m−3) | Soot (filter absorbance·10−5 m−1) | NO2 (μg·m−3) |
|---|---|---|---|
| Minimum | 13.5 | 0.77 | 12.6 |
| 10th percentile | 14.0 | 1.15 | 14.7 |
| 25th percentile | 14.8 | 1.33 | 18.2 |
| 50th percentile | 17.3 | 1.78 | 26.0 |
| Mean | 16.9 | 1.71 | 25.2 |
| 75th percentile | 18.1 | 1.91 | 28.8 |
| 90th percentile | 19.0 | 2.16 | 34.4 |
| Maximum | 25.2 | 3.68 | 58.4 |

• Data include the subjects for whom the questionnaire at 4 yrs was completed and estimated exposures were available, n = 3,532. PM2.5: particles of 50% cut-off aerodynamic diameter of 2.5 μm; NO2: nitrogen dioxide.
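The rows are plain empirical percentiles of the per-address exposure estimates. For readers reproducing such summaries, a sketch with numpy, where the simulated vector is made-up data standing in for the real PM2.5 estimates:

```python
import numpy as np

# Hypothetical exposure estimates; the real values are per-address model output.
rng = np.random.default_rng(0)
pm25 = rng.normal(16.9, 1.8, size=3532).clip(13.5, 25.2)

for q in (0, 10, 25, 50, 75, 90, 100):
    print(f"{q:>3}th percentile: {np.percentile(pm25, q):.1f}")
print(f"mean: {pm25.mean():.1f}")
```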
2019-10-21 07:09:30
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9110003709793091, "perplexity": 6244.232376249176}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570987763641.74/warc/CC-MAIN-20191021070341-20191021093841-00083.warc.gz"}
https://www.biostars.org/p/180864/
ChrM in the Varscan2 copynumber output

Asked 5.7 years ago by lhaiyan3 ▴ 60

Hi, I am using VarScan2 to call copy number for WES data, but my output.copynumber file has only the chrM calls. I used the following script; can anyone please give me some suggestions? Thanks very much. HY

```
samtools mpileup -q 1 -f $ref normal.bam tumor.bam | java -jar $VARSCANHOME/varscan.jar copynumber varScan --mpileup 1
java -Xmx64g -jar $VARSCANHOME/varscan.jar copyCaller output.copynumber --output-file output.copynumber.called
```

varscan2 copy number • 1.5k views

Do the chromosomes specified in $ref correspond to those in the bam file? The easiest way to check this would be to run samtools idxstats on the bam file and grep '>' on the reference fasta.

I tried both. I have chrM to chrY in the bam file and the ref file, but chrM is first; it looks like the program stopped after running chrM.

You mean the chromosomes are not in the same order in both files?

No, they are in the same order in both files. I mean chrM is first in both files.
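The suggested idxstats/grep concordance check can also be scripted. A sketch assuming pysam is installed and the reference has (or can get) a .fai index; the file names are placeholders:

```python
import pysam

bam = pysam.AlignmentFile("normal.bam", "rb")
ref = pysam.FastaFile("ref.fa")  # needs ref.fa.fai (samtools faidx ref.fa)

bam_chroms = list(bam.references)  # chromosome names from the BAM header
ref_chroms = list(ref.references)  # chromosome names from the FASTA index

# Names must match; an order mismatch can also trip up some tools.
print("only in BAM:", set(bam_chroms) - set(ref_chroms))
print("only in ref:", set(ref_chroms) - set(bam_chroms))
print("same order:", bam_chroms == ref_chroms)
```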
2021-11-29 07:59:08
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.27633875608444214, "perplexity": 7768.525654363888}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964358702.43/warc/CC-MAIN-20211129074202-20211129104202-00394.warc.gz"}
https://mathoverflow.net/questions/218913/different-graphs-with-the-same-open-neighborhood-hypergraph
# Different graphs with the same open neighborhood hypergraph

For any set $X$ we let $[X]^2 = \big\{\{x,y\}: x\neq y \in X\big\}$. Let $G=(V,E)$ be a simple, undirected graph. Its open neighborhood hypergraph $\mathcal{H}(G)$ has the same vertex set $V$ with a hyperedge for the open neighborhood of every vertex $v \in V$. (The open neighborhood of $v\in V$ is the set $N_v = \{y\in V: \{v,y\}\in E\}$.) Given a non-empty set $V$, are there $E_1\neq E_2\subseteq [V]^2$ such that ${\cal H}(V,E_1) = {\cal H}(V,E_2)$? Can $E_1, E_2$ even be chosen such that the graphs $(V,E_1), (V,E_2)$ are not isomorphic?

• Do you consider a hypergraph or a multi-hypergraph? It seems that the answer for the second question depends on it. – Ilya Bogdanov Sep 22 '15 at 7:42

• I was just thinking of hypergraphs $H=(V,E)$ where $V$ is a set and $E\subseteq {\cal P}(V)$ – Dominic van der Zypen Sep 22 '15 at 7:43

• Sorry, it appears that the answer does not depend on it ;)... – Ilya Bogdanov Sep 22 '15 at 7:57

The answer to the first question is positive. Consider two graphs on eight vertices, each consisting of two disjoint 4-cycles: the first one's cycles are $abcd$ and $efgh$, the second one's are $afch$ and $ebgd$.

The answer to the second question is also positive, even if we consider $\mathcal H(G)$ to be a multi-hypergraph (thus taking neighborhoods with multiplicities). Take any connected non-bipartite graph $G$ and construct two graphs from it: the first is the disjoint union of two copies of $G$, the second is the tensor product of $G$ with an edge. Both have the same neighborhood multi-hypergraph, but one is not connected and the other one is.
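The eight-vertex example is easy to verify by machine. A short Python sketch, representing the hypergraph as a set of neighborhoods so that repeated hyperedges collapse (hypergraph, not multi-hypergraph):

```python
def neighborhoods(vertices, edges):
    """Open neighborhood hypergraph: the set of open neighborhoods N_v."""
    return {frozenset(y for e in edges for y in e if v in e and y != v)
            for v in vertices}

def cycle(vs):
    """Edge set of the cycle vs[0]-vs[1]-...-vs[-1]-vs[0]."""
    return [frozenset({vs[i], vs[(i + 1) % len(vs)]}) for i in range(len(vs))]

V = set("abcdefgh")
E1 = cycle("abcd") + cycle("efgh")   # two disjoint 4-cycles: abcd and efgh
E2 = cycle("afch") + cycle("ebgd")   # two disjoint 4-cycles: afch and ebgd

print(set(E1) == set(E2))                            # False -- different graphs
print(neighborhoods(V, E1) == neighborhoods(V, E2))  # True  -- same hypergraph
```

Both graphs yield the hyperedge set {ac, bd, eg, fh}, as claimed in the answer.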
2019-09-19 13:33:02
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9490288496017456, "perplexity": 181.22917788430644}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514573519.72/warc/CC-MAIN-20190919122032-20190919144032-00225.warc.gz"}
https://aboutpressurewashing.com/868p786.php?7861fc=latex-small-black-square
\vfill stretches vertical space so that it fills all empty space. \vspace*{length} leaves out the given vertical space; \smallskip, \medskip and \bigskip leave out certain fixed spaces; \addvspace{length} extends the vertical space until it reaches length. Each separate visible element contained within a TeX document is contained within a box.

If you \usepackage{amssymb}, the \blacksquare command will typeset a solid black square (see also Blacksquare (LaTeX symbol) | LaTeX Wiki | Fandom). If you're using amsthm, you can simply \renewcommand{\qedsymbol}{...} with whatever symbol you want, but if you're not using amsthm, i can't help without knowing more. The open square used by amsthm (not defined in amsmath) is the open box at location hex 03 in the msam font if amsfonts are loaded; otherwise it's a drawn box.

For plot-style markers, simply include the package tikz along with the shapes library, which should be part of most standard LaTeX builds, then define a command to represent the type of marker you wish to use.

LaTeX provides a huge number of different arrow symbols. Arrows would be used within the math environment; if you want to use them in text, just put the arrow command between two $, like this example: $\uparrow$ now you got an up arrow in text.

Here are some external resources for finding less commonly used symbols: Detexify is an app which allows you to draw the symbol you'd like and shows you the code for it! See also The Comprehensive LaTeX Symbol List. Here you go with some maths power symbols, like text square/squared symbol for x², plus a white and black text square box symbol assortment in case you were looking for those.
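Pulling the usable fragments together, a minimal compilable sketch; the macro name \tikzsquare is invented here for illustration:

```latex
\documentclass{article}
\usepackage{amssymb}   % defines \blacksquare
\usepackage{amsthm}
\usepackage{tikz}
\usetikzlibrary{shapes}
% Use a solid black square as the QED symbol instead of the default open box:
\renewcommand{\qedsymbol}{$\blacksquare$}
% A hypothetical custom marker drawn with TikZ:
\newcommand{\tikzsquare}{\tikz \fill (0,0) rectangle (0.8ex,0.8ex);}
\begin{document}
In math mode: $\blacksquare$. As an inline TikZ marker: \tikzsquare
\begin{proof}
Trivial. % the proof now ends with a filled square
\end{proof}
\end{document}
```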
2021-01-16 23:10:14
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6428987979888916, "perplexity": 6019.289594231969}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703507971.27/warc/CC-MAIN-20210116225820-20210117015820-00319.warc.gz"}
http://www.r-bloggers.com/page/392/
## NBA, Logistic Regression, and Mean Substitution
April 19, 2011
I’m currently sitting at about 32K feet above sea level on my way from Tampa International to DIA and my options …

## RStudio, Revolution Analytics and Deducer: A Tale of Three GUIs
April 19, 2011
I’m in the process of moving from SPSS to R at the moment. It’s not been the easiest of rides, but then learning how to do a core part of your job never really should be. It’s been fun, though – don’t get me wrong – it’s definitely been an adventure!! Here I’m going to

## Day #25-26 R is soo static!
April 19, 2011
Today I stumbled upon a very nice package called “rgl”. For documentation and demos, take a look at its website. Rgl is, quoted by the rgl site itself: The rgl package is a visualization device system for R, using OpenGL as the rendering...

## Flu Trends
April 18, 2011
Not a model, but certainly Mickey Mousey: here’s some R code that plots Google’s US flu data:
df <- read.csv(url("http://www.google.org/flutrends/us/data.txt"), skip=11)
df$Date <- as.Date(df$Date)
dev.new(height=8, width=12)
# Leave a thin outer...

## Mickey Mouse Models
April 18, 2011
My statistics professor once drew a little Markov chain on the board and called it “just a Mickey Mouse model,” because it was too simple to represent anything serious.

## pre-generate pictures of your knitting
April 18, 2011
This was a birthday present for my spouse. (Don't worry--I also covered a lot of things -- fruit/nuts/cocoa puffs/etc -- in chocolate. But I think both were appreciated!) Sometimes p...

## A Population Regression
April 18, 2011
Here's a video on some of the theory behind simple linear regression. There's no R involved with this video, but the video provides some theory behind what it is that R's lm() command estimates.

## Details of two-way sync between two Ubuntu machines
April 18, 2011
In a previous post I discussed my frustrations with trying to get Dropbox or Spideroak to perform BOTH encrypted remote backup AND fast two-way file syncing. This is the detail of how I set up for two machines, both Ubuntu 10.10, to perform two-way sync where a file change on either machine

## GEOSTAT 2011 — Canberra
April 18, 2011
Just got back from the 2011 GEOSTAT summer school that recently took place in Canberra, Australia. Thanks to Tom Hengl for the invitation to co-teach the course, to the great folks at ANU who made it possible, and to all of the students who participat...

## Test Difference Between Diversity-Indices of Two Samples with Abundance Data
April 18, 2011
I adapted a scheme for a permutation test from the PAST Software (Hammer & Harper, http://folk.uio.no/ohammer/past/diversity.html) that tests difference between diversity-indices of two samples with abundance data...

## Introducing Rook
April 18, 2011
Rook is a web server interface and software package for R. It is very much like Ruby’s Rack. In fact it is so much like Ruby’s Rack that I decided to use the same name and basic class hierarchy. You could say I “borrowed heavily” from Ruby’s ...

## Progress reading SAS sas7bdat files (natively) in R
April 18, 2011
This post describes some preliminary results from a compatibility study of the SAS sas7bdat file format. The most current results are stored in a github repository here: sas7bdat. The ultimate goal is a native solution to the incompatibility between open-source statistical software (e.g. R) and sas7bdat database files. There has been significant progress in interpreting

## Using R, Sweave and Latex to integrate animations into PDFs
April 18, 2011
The first week of April I attended an excellent workshop on biplots held by Michael Greenacre and Oleg Nenadić at the Gesis Institute in Cologne, Germany. Throughout his presentations, Michael used animations to visualize the concepts he was explaining. He also included animations in some of his papers. This inspired me to do this post

## Multivariate Repeated Measurements With adonis()
April 18, 2011
Lately I had to figure out how to do a repeated measures (or mixed effects) analysis on multivariate (species) data. Here I share code for a computation in R with the adonis function of the vegan package. Credit goes to Gavin Simpson providing most of ...

## Weight compared to risk fraction
April 18, 2011
How well do asset weight constraints constrain risk? The setup In “Unproxying weight constraints” I claimed that many constraints on asset weights are really a proxy for constraining risk. That is not a problem if weights are a good proxy for risk. So the question is: how good of a proxy are they? To give …

## Historical Sources of Bond Returns-Comparison of Daily to Monthly
April 17, 2011
Thanks so much for the comment on my last post Historical Bond Price and Total Returns from 10y Yield Series “I know this might sound antithetical to a bond guy, but won't the monthly series get you close enough?” which proved me wrong and allow...

## A Creative Use of R
April 17, 2011
Update (5/18/2011): Looks like Freakonomics approves as well. Let the record show that I approved first :) I approve: "I use the open-source program R to create the patterns." But, I'm not sure I approve of calling these distributions "evil." In case you...

## Export a Table Created by R to a TeX File
April 17, 2011
Producing tables in LaTeX might be a difficult task as we cannot just copy and paste a table into the editor; we have to write all the numbers and other codes. But with the help of the xtable package of R it is possible to produce all the necessary code for a table in LaTeX, and also possible to...

## Going over the speed limit
April 17, 2011
In an earlier post I had reported on how R compared with Stata for executing algorithms involving maximum likelihood estimation. This post offers the following updates on the last post: Stata is in fact even faster than previously reported. The 64-bit version of the newly...

## Exporting R graphics as LaTeX code – version 0.6.1 of the tikzDevice package is out
April 17, 2011
(Guest post on R-bloggers by Charlie Sharpsteen) Cameron and I are pleased to announce version 0.6.0 of the tikzDevice package which should be available shortly at your local CRAN mirror! The tikzDevice makes it possible to export R graphics as LaTeX code that can be included in other documents or compiled into stand alone figures. The full power of...

## Statistics without Maths
April 17, 2011
I got an interesting message from Chris Atherton the other day who has offered to do a workshop at the Technical Communications UK conference on statistics and data visualisation. The problem is that for some tech writers, their understandin...
2014-10-30 16:43:56
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.36078184843063354, "perplexity": 4310.673313572393}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-42/segments/1414637898611.54/warc/CC-MAIN-20141030025818-00215-ip-10-16-133-185.ec2.internal.warc.gz"}
http://mathhelpforum.com/calculus/85068-washer-method.html
# Math Help - washer method

1. ## washer method

Find the volume of the solid formed by revolving the region under the graph of $f(x)=9-x^2$ for $0<x<3$ about the line $x=-2$ using the washer method.

2. Take washers perpendicular to the axis $x=-2$, so integrate with respect to $y$ from $f(3)=0$ up to $f(0)=9$. The outer radius is the distance from the axis to the curve, $f^{-1}(y)+2$, and the inner radius is $2$:
$$V = \pi \int_{0}^{9} \left(f^{-1}(y)+2\right)^2 dy - \pi \int_{0}^{9} (2)^2\,dy$$

3. What is $f^{-1}(x)$? Is that the derivative?
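4. For reference: $f^{-1}$ denotes the inverse function, not a derivative. Solving $y = 9-x^2$ for $x\ge 0$ gives $f^{-1}(y)=\sqrt{9-y}$, so the integral evaluates as
$$V=\pi\int_0^9\left[\left(\sqrt{9-y}+2\right)^2-2^2\right]dy=\pi\int_0^9\left[(9-y)+4\sqrt{9-y}\right]dy=\pi\left(\frac{81}{2}+72\right)=\frac{225\pi}{2}.$$
As a check, the shell method gives the same value: $V=2\pi\int_0^3 (x+2)(9-x^2)\,dx=\frac{225\pi}{2}$.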
2014-07-22 21:44:58
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 1, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.23537392914295197, "perplexity": 4270.724894395999}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1405997865523.12/warc/CC-MAIN-20140722025745-00147-ip-10-33-131-23.ec2.internal.warc.gz"}
http://www.ctan.org/tex-archive/macros/latex/contrib/nfssext-cfr
# Directory tex-archive/macros/latex/contrib/nfssext-cfr

nfssext-cfr.sty is an extension of Philipp Lehman's nfssext.sty. nfssext.sty provides commands which enable one to specify font features not covered by the New Font Selection Scheme of LaTeX-2e. nfssext-cfr.sty provides additional commands, further extending the facilities offered by NFSS.

nfssext-cfr.sty is required by various font support packages I've written. It is being released separately to avoid unnecessary duplication and confusion. At least, I hope it will remove at least one source of unnecessary confusion. I have no reason to think it will avoid any of the others.

The code is somewhat experimental. It works for me. So far. If you discover problems, please let me know. If you know how to fix them, even better.

The 2010 update includes an attempt to improve the behaviour of \ofstyle, and to add support for microtype. I didn't publish this at the time because I wanted to test it first. I have just discovered that I am still using a local copy. Insofar as one person can test something, I figure that 5 years ought to be enough to pick up the most obvious problems. However, your kilometres may, as always, vary. There should be no changes for the end user, except that in certain cases line-breaks may be altered if microtype is in use, due to the enhanced support included for variant font families.

- Clea F. Rees (ReesC21 <at> cardiff <dot> ac <dot> uk) 2015/06/18

## Files

Name | Size | Date
README | 1414 | 2015-06-18 02:51
nfssext-cfr.pdf | 155942 | 2015-06-18 02:51
nfssext-cfr.sty | 22810 | 2015-06-18 02:25
nfssext-cfr.tex | 1971 | 2015-06-18 02:50

Download the contents of this package in one zip archive (158.8k).

## nfssext-cfr – Extensions to the LaTeX NFSS

The package is a development of nfssext.sty, distributed with the examples for the font installation guide. The package has been developed for use in packages such as cfr-lm and venturisadf.

Package Details: nfssext-cfr
Version: 2010-07-17
License: The LaTeX Project Public License 1.3
Maintainer: Philipp Lehman (inactive), Clea F. Rees
Contained in: TeX Live as nfssext-cfr; MiKTeX as nfssext-cfr
Topics: font selection schemes
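As a usage sketch (hedged: \ofstyle is the only command the README names, and we assume here that it acts as a font-feature switch for figure variants; consult nfssext-cfr.pdf for the actual command set and semantics):

```latex
\documentclass{article}
\usepackage{cfr-lm}  % a font package built on nfssext-cfr (per the README)
\begin{document}
Default figures: 0123456789\par
% \ofstyle: figure-variant switch provided via nfssext-cfr; exact effect
% depends on the variants the loaded font family supplies (assumption).
{\ofstyle 0123456789}
\end{document}
```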
2016-04-30 22:37:01
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9021634459495544, "perplexity": 9692.506708672205}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-18/segments/1461860113010.58/warc/CC-MAIN-20160428161513-00048-ip-10-239-7-51.ec2.internal.warc.gz"}
https://oa.journalfeeds.online/2022/01/14/physics-informed-modeling-and-control-of-multi-actuator-soft-catheter-robots-seyede-fatemeh-ghoreishi-et-al/
# Physics-Informed Modeling and Control of Multi-Actuator Soft Catheter Robots Seyede Fatemeh Ghoreishi, et al. Jan 14, 2022 ## 1 Introduction The bulging aneurysm can put pressure on the nerves or brain tissue and it has the potential to rupture and cause bleeding within the brain or surrounding area. This leads to an extremely serious condition known as a haemorrhage, which can cause extensive health problems such as stroke, brain damage, coma, and even death. Approximately 12 percent of patients die before receiving medical attention Schievink (1997). Therefore, early detection and treatment of a brain aneurysm before its rupture is critical for saving lives of a large group of patients suffering from this issue Wiebers et al. (2003). The ultimate goal of aneurysm treatment is to exclude the aneurysmal sac from the intracranial circulation while preserving the parent artery. Treatment of cerebral aneurysms has long been the domain of neurosurgeons performing open brain surgeries, but since 1990, neuroradiologists have been using minimally invasive procedures to treat increasing numbers of patients with cerebral aneurysms Schievink (1997). Open brain surgery or surgical clipping is done by removing part of the skull to reach the aneurysm. The surgeon places a metal clip across the neck of the aneurysm to prevent blood flow into the aneurysm bulge. Once the clipping is done, the skull is closed back together. In minimally invasive procedure or endovascular coiling, also called coil embolization, a catheter is inserted into a blood vessel, typically the femoral artery, and passed through blood vessels into the cerebral circulation and the aneurysm. Once the catheter is in place, tiny platinum coils are put through the catheter into the aneurysm to block it and to reduce the flow of blood into the aneurysm. Open brain surgeries carry some risks such as possible damage to other tissue and blood vessels, the potential for aneurysm recurrence and rebleeding, and a risk of stroke Hunt and Hess (1968). However, in minimally invasive procedures, an incision in the skull is not required to treat the brain aneurysm, which leads to decreased morbidity and shorter recovery time. These factors make the catheter-based diagnosis and therapy in minimally invasive procedures a better option over open brain surgeries. The major cause of unsuccessful catheterization in minimally invasive procedures is the use of conventional micro-catheters which provide very limited maneuverability Hu et al. (2018). The arteries through which the catheter passes are extremely complex and delicate, and navigating to desired intravascular locations in particular can be frequently challenging due to factors such as vascular tortuosity, distal location, vascular stiffness, and numerous side branches arising from larger vessels Kearney and Shabot (1995). The traditional endovascular catheterization is usually performed manually by an interventionalist by push/pull or twisting of catheters with pre-bent tips designated for different anatomical structures. The procedure of manual insertion of a catheter is very risky and challenging as it relies on the skill and experience of the interventionalist. The repeated insertion of a catheter through several trials by the interventionalist could tear a blood vessel at a junction and cause bleeding. The high flexibility of the catheter makes the catheter guidance through the constrained environment very challenging. 
This task gets more difficult when manipulating the catheter tip through sharp turns where the catheter needs to enter a branch vessel with an origin different from the parent vessel. The reason for the complexity is that the torque transmission to the distal end is inaccurate and limited by frictional forces acting along the catheter length Jayender et al. (2009). Therefore, the interventionalist cannot precisely apply forces on the catheter tip which can sometimes lead to damage to the blood vessel. In addition, the pre-bent shape of the catheter or guidewire can change during the long and tortuous vascular environments. The limited maneuverability and the difficulties in steering and control of the catheters increase the risk for complications including vascular dissection, perforation, and thrombosis. Therefore, there is a need to develop a strategy that can plan and perform catheterization to minimize misplacement and to reduce both risks and patient suffering. We model the catheter using Euler–Bernoulli beam theory Moseley et al. (2016), considering the physics of the system as a continuum mechanism with infinite degrees of freedom. We propose a time-dependent model capable of capturing the position of catheter along the vasculature. Using the proposed dynamical model, this paper models the minimally invasive procedure as guiding the multi-actuated soft catheter along a predefined desired trajectory obtained by incorporating the anatomical information and implementing segmentation of pre-operative images and geometric data often available through CT scanning of the brain prior to the interventional procedures. The proposed framework formulates the navigation of multi-actuator soft catheter robot as a constrained optimization problem. The solution to this optimization problem sets the sequence of moments to be exerted by the actuators as well as the insertion depth needed to move the multi-actuator soft catheter along the desired trajectory in such a way that the stress on the vessel wall is minimized. The high performance of the proposed framework in terms of accuracy and speed is demonstrated through a comprehensive set of numerical experiments. The proposed framework provides a collaborative robotic catheterization within different anatomical geometries by independent control of insertion and bending moments. It develops a simulation-based strategy for selection of the catheter with proper number of actuators prior to endovascular catheterization procedures. The proposed multi-actuator catheter robots can significantly contribute to the success of complex catheterization procedures, allowing surgeons to access the areas of endovascular system that could not be reached with conventional catheters. In addition, by selecting the number of actuators prior to procedures, the number of insertions and retractions normally used by clinicians to guide the catheter correctly into a desired branch to reach the aneurysm location can be reduced considerably, thereby preventing or reducing the possibility of damage to the arteries during the endovascular catheterization procedures. Moreover, the correct placement and actuation of actuators are not always intuitive or simple. Therefore, this computational framework allows the interventionalist to determine the optimal length of catheter insertion and the necessary bending moments. 
This greatly improves the interventionalist’s ability to control the amount of moments being exerted by actuators while inserting the catheter to precisely follow a predefined trajectory. These deliver the promise of higher accuracy and shorter duration when compared to current catheter-based therapies that depend on the interventionalist’s intuition in guiding the catheter, combating the complications in catheterization procedures. The rest of the paper is organized as follows: Section 2 presents background on catheterization for endovascular treatment of cerebral aneurysms. The proposed framework is discussed in Section 3, including the formulation for deflection, dynamic modeling, and trajectory tracking of multi-actuator soft catheter robots. Numerical experiments are presented in section 4. Finally, conclusions are drawn and future work opportunities are described in Section 5. ## 2 Background Endovascular treatment of cerebral aneurysms is expected to benefit from current research and developments in the field of minimally invasive procedure and therapy. Robot-assisted and computer-assisted catheterization methods are the promising approaches to facilitate this medical operation Rafii-Tari et al. (2014). Minimizing the invasiveness of the catheterization process requires catheters capable of moving within the target vessel during trajectory tracking. Furthermore, endovascular procedure outcomes depend greatly on the correct positioning of the catheter. Trajectory tracking and catheter localization require sufficient information about the system dynamics, e.g., a force-deflection relationship of an ablation catheter, or a current-field map in the case of an electromagnetic passive actuation system. Kinematic models for soft robots have been developed based on finite element Duriez et al. (2006); Lenoir et al. (2006), deformation energy Tunay (2004), beam theory Khoshnam et al. (2012), piecewise constant curvature Webster and Jones (2010), and Cosserat rod theory Kratchman et al. (2016). Finite element and energy based methods can be used to predict the robots deformation with great accuracy. However, their high computational cost can hardly meet the required efficiency in kinematics applications Goury and Duriez (2018). The Cosserat rod model is an accurate model. However, the necessity for solving nonlinear differential equations with initial boundary values numerically with no closed-form solution makes this model less attractive. To improve the computational speed, Ref. Tang et al. (2012) explicitly models flexible tips using a less expensive generalized bending model that is more computational efficient than the elastic Cosserat rod model for the slender body. It also describes the simulation algorithms with the use of a minimum coordinates formulation Bergou et al. (2008) to achieve stable and real-time computation. Constant curvature modeling is an accurate approach assuming that external loads are negligible. Constant curvature modeling has been widely used in soft robotics due to the simplifications it enables in kinematic modeling Webster and Jones (2010). Constant curvature robots can be considered as consisting of a finite number of curvatures described by a finite set of arc parameters, which can be converted to analytical frame transformations Robinson and Davies (1999). Therefore, constant curvature can facilitate additional analysis on topics such as design, real-time control, and other useful computations. 
In the case of a constant moment applied along a beam, Euler-Bernoulli beam mechanics yields a constant-curvature result Beléndez et al. (2002); Camarillo et al. (2008). The broad range of applications of catheterization within minimally invasive procedures demands additional improvements to surgical instruments. Remote-controlled catheters which use Magnetic Resonance Imaging (MRI) for remote steering and guidance have been a field of intensive research since the 1990s Heunis et al. (2020); Sitti (2009); Hwang et al. (2020); Fang et al. (2021). These catheters are equipped with a set of orthogonal coils, and magnetic moments generated by the coils deflect the catheter under the Magnetic Resonance (MR) magnetic field using the Lorentz force. Ref. Greigarn and Cavusoglu (2014) presents a motion-planning algorithm which calculates a sequence of magnetic moments needed to move the tip of the MRI-actuated catheter along a predefined trajectory on a surface. Ref. Zhou et al. (2021) presents a ferromagnetic soft catheter robot system capable of in situ computer-controlled bioprinting in a minimally invasive manner based on magnetic actuation. Ref. Roberts et al. (2002) exploits the high magnetic field environment of a clinical MRI scanner and demonstrates the technical feasibility of developing a catheter whose tip can be remotely oriented within the magnetic field by applying a DC current to a coil wound around the catheter tip to generate a magnetic moment and consequent deflection. However, the method has failed to reach clinical applicability due to several practical problems, including the dependence of catheter tip deflection on the initial position relative to the external magnetic field. Furthermore, the catheter torqueability is reduced because of the large axial coil that is needed to attain acceptable catheter deflections. This results in increased heat generation, leading to potentially dangerous temperatures at the catheter tip due to the high DC currents that need to be applied to the coil. Ref. Slade et al. (2017) presents a soft catheter capable of apical extension to travel inside constrained environments with minimal shear force. However, the angle and radii of curvature of the trajectories that the designed catheter can track are limited to specific ranges. A number of methods have been studied to achieve precise and effective positioning of the catheter tip. In Ref. Li et al. (2016), a method is developed to control the tip position using a vertebra-like outer tube with a constrained inner tube, but this method requires catheter diameters too large for some surgical purposes. There are also studies that investigated incorporating shape memory alloys into the catheter to control the tip position Fukuda et al. (1994); Hadi et al. (2016). However, using shape memory alloys can cause heating and restrict the range of catheter tip movement considerably, and requires major considerations for safe heating and efficient cooling Fukuda et al. (1994). Despite several research and development efforts in catheter-based therapies, limited research has been conducted on pre-operative selection of catheters with a desired level of maneuverability. The arteries through which the catheter passes are extremely complex and delicate, making the catheterization process a very challenging task. The repeated insertion of a catheter through several trials could tear a blood vessel at a junction and cause bleeding Abreu et al. (2004).
Therefore, there is a demand for the development of advanced catheters which allow interventionalists access to areas of vascular systems that cannot be reached with conventional catheters. Ref. Gopesh et al. (2021) overcomes this problem with submillimeter-diameter, hydraulically actuated hyperelastic polymer devices at the distal tip of microcatheters to enable active steerability. In our previous work Ghoreishi et al. (2021), we proposed the design of soft catheters with multiple actuators, capable of alignment with desired vessel shapes near the target area, and developed a static analytical model for catheter deformation. In this work, we focus on dynamic modeling and control of multi-actuator soft catheters and, given a fixed set of designs (i.e., catheters with different geometric and material properties, different numbers of actuators, etc.), we want to decide prior to catheter-based surgeries which of these designs is appropriate (capable of tracking) for a desired trajectory leading to the aneurysm. This can advance the future generation of autonomous or semi-autonomous robotic catheterization systems in terms of time, cost, and accuracy, reducing the cognitive workload of the interventionalist while improving the quality of the catheter insertion.

## 3 Proposed Framework

This section describes our proposed strategy for pre-operative selection of the catheter with proper maneuverability, capable of tracking a desired trajectory to reach the aneurysm's location in the brain. The desired maneuverability is achieved by considering multiple pneumatic actuators along the circular catheter tube. In the following subsections, the proposed formulations for motion modeling and trajectory tracking of multi-actuator soft catheters are presented.

### 3.1 Deflection Formulation of Multi-Actuator Soft Catheter

FIGURE 2. Actuated soft catheter and its deflection under the moment corresponding to the pneumatic actuator, resulting in a circular configuration with the constant radius ρ and bending angle α.

As can be seen, when the actuator is in action, the catheter takes the form of a cantilevered beam. According to the Euler-Bernoulli theorem Moseley et al. (2016), under static conditions, the radius of curvature of the catheter is determined by the bending moment M applied by the actuator and the catheter's Young's modulus of elasticity E and area moment of inertia I, as:

$\rho = \frac{EI}{M}. \quad (1)$

Assuming that the catheter bends into a constant-curvature shape, the bending angle $\alpha_s$ corresponding to a point at distance $l_s$ from the initial point of the catheter is equal to:

$\alpha_s = \frac{M l_s}{EI}. \quad (2)$

Therefore, the horizontal and vertical coordinates of the position of the point at distance $l_s$ from the initial point of the catheter under moment M can be obtained as:

$x(M, l_s)=\rho\sin\alpha_s=\frac{EI}{M}\sin\!\left(\frac{M l_s}{EI}\right),\qquad y(M, l_s)=\rho\,(1-\cos\alpha_s)=\frac{EI}{M}\left(1-\cos\frac{M l_s}{EI}\right). \quad (3)$

Clearly, the coordinates of the tip of the catheter $p_{tip} = [x_{tip}, y_{tip}]$ can be obtained by setting $l_s = l$. Considering n actuators with lengths $l_i$ along the catheter, the position of any point at distance $l_s$ from the initial point of the catheter, which is at distance $l_s^i$ from the initial point of the i-th actuator, i.e., $l_s = \sum_{j=1}^{i-1} l_j + l_s^i$, can after some algebraic manipulation be obtained as:

$x(\mathbf{M}, l_s)=\sum_{j=1}^{i-1}\left(\frac{EI}{M_j}-\frac{EI}{M_{j+1}}\right)\sin\!\left(\sum_{c=1}^{j}\frac{M_c l_c}{EI}\right)+\frac{EI}{M_i}\sin\!\left(\frac{M_i l_s^i}{EI}+\sum_{j=1}^{i-1}\frac{M_j l_j}{EI}\right), \quad (4)$

$y(\mathbf{M}, l_s)=\frac{EI}{M_1}+\sum_{j=1}^{i-1}\left(\frac{EI}{M_{j+1}}-\frac{EI}{M_j}\right)\cos\!\left(\sum_{c=1}^{j}\frac{M_c l_c}{EI}\right)-\frac{EI}{M_i}\cos\!\left(\frac{M_i l_s^i}{EI}+\sum_{j=1}^{i-1}\frac{M_j l_j}{EI}\right), \quad (5)$

where $\mathbf{M} = [M_1, \ldots, M_n]$ denotes the moments applied by the n actuators. A numerical sketch of this forward model is given below.
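The closed forms in Eqs. 4-5 simply chain constant-curvature arcs with continuous tangents. A minimal Python sketch of this forward model (our own illustrative code, not the authors' MATLAB implementation; it composes the arcs incrementally, which is algebraically equivalent to Eqs. 4-5, and assumes all segment moments are nonzero, as the closed form does):

```python
import numpy as np

def catheter_point(M, L, EI, l_s):
    """Planar position of the point at arc length l_s along an n-actuator
    soft catheter with constant per-segment moments M[i] on segments of
    lengths L[i] (piecewise-constant-curvature model, Eqs. 1-5).
    Opposite-sign moments give the S-curve shapes mentioned in the text."""
    x = y = phi = 0.0          # running position and tangent angle
    remaining = l_s
    for Mi, Li in zip(M, L):
        rho = EI / Mi          # Eq. 1: constant radius of curvature
        dl = max(0.0, min(Li, remaining))
        dphi = dl / rho        # Eq. 2: bending angle accumulated on this arc
        # advance along a circular arc that starts with tangent angle phi
        x += rho * (np.sin(phi + dphi) - np.sin(phi))
        y += rho * (np.cos(phi) - np.cos(phi + dphi))
        phi += dphi
        remaining -= dl
        if remaining <= 0.0:
            break
    return x, y

# Example with parameters similar to Scenario 1 of Section 4
# (l = 10 cm, r = 0.5 cm, E = 1e8 Pa; single actuator, M = 0.5 N.m):
E, r = 1e8, 0.005                 # SI units
I = np.pi * r**4 / 4              # circular cross-section
print(catheter_point([0.5], [0.10], E * I, 0.10))   # tip position (Eq. 3)
```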
The position of the tip of the catheter $p_{tip} = [x_{tip}, y_{tip}]$ in this n-actuator soft catheter can be obtained by setting $l_s = l$, which means $l_s^n = l_n$. A depiction of a soft catheter with three actuators is presented in Figure 3. Note that the moments exerted by actuators can be in different directions to obtain an S-curve shape.

FIGURE 3. A three-actuator soft catheter and its deflection under the moments corresponding to the pneumatic actuators.

Hence, the catheter tube is assumed to be a circular arc, as implied by Euler-Bernoulli beam mechanics, and constant independent moments associated with the actuators are applied along the catheter. The geometric and material properties along the catheter are assumed to be homogeneous. Specifically, E, the modulus of elasticity, and I, the moment of inertia of the cross section about the neutral axis, are fixed along the catheter. Furthermore, the clamping effects are neglected, and the catheter is considered as an elastica, which is a homogeneous, unshearable, and inextensible slender medium. Therefore, the length of the catheter robot does not change as a result of the applied loads. Here, static motion modeling of the multi-actuator soft catheter is considered, where only the moments exerted by each of the actuators control the positioning of the catheter. In the next subsection, the dynamic modeling of the multi-actuator soft catheter, considering the catheter movement along a desired trajectory, is discussed.

### 3.2 Dynamic Modeling and Formulation of Multi-Actuator Soft Catheter

The problem that we aim to address in this work is computation of the sequence of actions that guide the catheter to the target location along a given trajectory. This trajectory is often generated according to the available preoperative geometry of the vessel, which is obtained through preoperative CT/MR scans Zhou et al. (2016). Vascular centerlines are typically used as a reference trajectory, with researchers studying skeletonization of pre-operative images for extracting blood vessel shapes and centerlines Cheng et al. (2012); Li et al. (2021). The desired trajectory, also called the nominal trajectory, is the path that the catheter needs to pass in order to enter the branches which lead to the desired target location where the aneurysm is located. Points need to be localized on the generated desired trajectory, which allow the catheter tip to reach the target location while avoiding unwanted collisions with the vessel walls. We consider each point along the nominal trajectory as the step-wise desired position of the tip of the catheter and refer to these points as nominal points. The proximity, density, and location of the nominal points are adjusted according to the vessels' anatomy and practical considerations O'Flynn et al. (2007). These considerations include the arterial bifurcations, the sharpness of the change of angles along the trajectory, the length of vascular branches, the delicacy of veins, and many other factors that can be based on the expert's knowledge. In some cases, the nominal trajectory is generated only near the bifurcations. Once the catheter is deflected and mechanically guided into the appropriate vessel branch, the moment can be removed and there will be no additional moment required to maintain the catheter position. This is analogous to existing catheter designs, where the natural elasticity of the catheter tip, which would tend to restore the native catheter tip geometry, is offset by mechanical resistance from the vessel wall Roberts et al. (2002).
An example of the nominal trajectory along a vessel with aneurysm and the nominal points generated on the trajectory are demonstrated in Figure 4. In the numerical experiments of this paper, without loss of generality, we consider the centerline of the vasculature leading to the aneurysm as the desired trajectory and generate points uniformly along the desired trajectory.

FIGURE 4. The nominal trajectory and the nominal points generated along a vessel with aneurysm.

According to the generated nominal trajectory and the nominal points along the trajectory, the goal is to obtain the set of actions that a clinician can take to guide the catheter through the vasculature and reach the target location. Defining t as the current step where the catheter is located and n as the number of actuators along the catheter, we consider the additional moments applied by the actuators (ΔM) and the insertion depth (axial motion) of the catheter (Δd) as the set of actions that can be taken at each step. Therefore, the set of actions at step t is represented in the action vector $u_t$ as:

$u_t = [\Delta\mathbf{M}_t, \Delta d_t], \quad (6)$

where $\Delta\mathbf{M}_t = [\Delta M_{1,t}, \ldots, \Delta M_{n,t}]$, with $\Delta M_{i,t}$ being the additional moment applied by actuator i at step t, for $i = 1, \ldots, n$. Assuming that at step t the catheter with initial coordinates $[x_t^{init}, y_t^{init}]$ is under the moments $\mathbf{M}_t = [M_{1,t}, \ldots, M_{n,t}]$, we need to find the position of any point on the catheter at step t+1 after applying the actions in the action vector $u_t$ defined in Eq. 6. Having the action moments $\Delta\mathbf{M}_t$ exerted by the actuators, the total moments applied to the catheter at step t+1 can be obtained as:

$\mathbf{M}_{t+1} = \mathbf{M}_t + \Delta\mathbf{M}_t. \quad (7)$

By axial movement of the catheter along the trajectory with the insertion depth $\Delta d_t$, and assuming that the angle of the initial point on the catheter with respect to the horizontal line at step t, i.e., $\theta_t^{init}$, is known based on the preoperative scans, the coordinates of the catheter's initial point at step t+1 are obtained as:

$x_{t+1}^{init} = x_t^{init} + \Delta d_t\cos\theta_t^{init}, \qquad y_{t+1}^{init} = y_t^{init} + \Delta d_t\sin\theta_t^{init}. \quad (8)$

A demonstration of the actions applied to the catheter to guide it from the state at step t to t+1 is represented in Figure 5.

FIGURE 5. Actions applied to the catheter to guide it from the state at step t to t+1.

We define the state vector $s_t = [\mathbf{M}_t, x_t^{init}, y_t^{init}, \theta_t^{init}]$ as the sufficient information to obtain the coordinates of any point on the catheter. Therefore, with knowledge of the state of the catheter at step t and using Eqs. 4-8, the coordinates of any point at distance $l_s$ from the initial point of the catheter at step t+1 after applying $u_t = [\Delta\mathbf{M}_t, \Delta d_t]$ can be obtained as:

$x_{t+1}(u_t, l_s) = f(s_t, u_t, l_s), \qquad y_{t+1}(u_t, l_s) = g(s_t, u_t, l_s), \quad (9)$

where, writing $\bar{x}(l_s) = x(\mathbf{M}_t + \Delta\mathbf{M}_t, l_s)$ and $\bar{y}(l_s) = y(\mathbf{M}_t + \Delta\mathbf{M}_t, l_s)$ for the local deflection of Eqs. 4-5 under the updated moments of Eq. 7,

$f(s_t, u_t, l_s) = x_t^{init} + \Delta d_t\cos\theta_t^{init} + \bar{x}(l_s)\cos\theta_t^{init} - \bar{y}(l_s)\sin\theta_t^{init}, \quad (10)$

$g(s_t, u_t, l_s) = y_t^{init} + \Delta d_t\sin\theta_t^{init} + \bar{x}(l_s)\sin\theta_t^{init} + \bar{y}(l_s)\cos\theta_t^{init}. \quad (11)$

The next section discusses trajectory tracking and movement of the multi-actuator soft catheter along a desired trajectory.
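Before moving on, the state update of Eqs. 6-11 can be sketched in a few lines (continuing the earlier Python sketch; `catheter_point` is the helper defined there, and the names are ours): translate the base point (Eq. 8), update the moments (Eq. 7), and rotate the local deflection of Eqs. 4-5 by the base angle (Eqs. 10-11).

```python
def apply_action(state, u, L, EI, l_s):
    """Global coordinates of the catheter point at arc length l_s after
    applying action u = (dM, dd) to state = (M, x0, y0, theta0)."""
    M, x0, y0, theta0 = state
    dM, dd = u
    M_new = [m + dm for m, dm in zip(M, dM)]        # Eq. 7
    x0n = x0 + dd * np.cos(theta0)                  # Eq. 8: insertion step
    y0n = y0 + dd * np.sin(theta0)
    xb, yb = catheter_point(M_new, L, EI, l_s)      # local frame, Eqs. 4-5
    # Eqs. 10-11: rotate the local deflection into the global frame
    x = x0n + xb * np.cos(theta0) - yb * np.sin(theta0)
    y = y0n + xb * np.sin(theta0) + yb * np.cos(theta0)
    return x, y
```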
### 3.3 Trajectory Tracking of Multi-Actuator Soft Catheter

Here, we focus on the primary goal of catheterization, which is obtaining the motion trajectories via the set of actions that a clinician can take to guide the catheter through the vasculature and reach the target location. We define $[x_{t+1}^{NP}, y_{t+1}^{NP}]$ as the coordinates of the nominal point along the desired trajectory at step t+1. For a catheter with length l, $[x_{t+1}(u_t, l), y_{t+1}(u_t, l)]$ specifies the coordinates of the tip of the catheter. We aim to find the optimal set of actions at each step that move the tip of the catheter to the next nominal point designated on the trajectory while keeping the body of the catheter closest to the nominal trajectory, to avoid unwanted collisions with the vessel walls. Therefore, the problem is formulated as:

$u_t^*=\arg\min_{u\in\mathcal{U}}\;\underbrace{\sqrt{\left(x_{t+1}^{NP}-x_{t+1}(u,l)\right)^2+\left(y_{t+1}^{NP}-y_{t+1}(u,l)\right)^2}}_{d_{t+1}^{tip}}+\gamma\,\underbrace{\int_0^l\sqrt{\left(x_{t+1}^{NT}-x_{t+1}(u,l_s)\right)^2+\left(y_{t+1}^{NT}-y_{t+1}(u,l_s)\right)^2}\,dl_s}_{d_{t+1}^{body}}\quad\text{s.t.}\;\;d_{t+1}^{tip}<d_{thld}^{tip},\;\;d_{t+1}^{body}<d_{thld}^{body}, \quad (12)$

with $x_{t+1}(u, l_s) = f(s_t, u, l_s)$ and $y_{t+1}(u, l_s) = g(s_t, u, l_s)$ as in Eq. 9, where $\mathcal{U}$ is the space of actions, γ is a weight coefficient, and $[x_{t+1}^{NT}, y_{t+1}^{NT}]$ are the coordinates of the point along the nominal trajectory closest to the point at distance $l_s$ from the catheter's initial point. The weight coefficient γ characterizes the relative importance of the closeness of the catheter body and the catheter tip to the centerline, and it is set according to the sensitivity of the vein to the catheter tip and catheter body. The first term in Eq. 12, i.e., $d_{t+1}^{tip}$, is the distance of the tip of the catheter from the point corresponding to step t+1 on the nominal trajectory, and the second term, i.e., $d_{t+1}^{body}$, is the distance of the catheter body from the nominal trajectory. The actions obtained in this optimization problem are constrained to maintain the maximum threshold distance $d_{thld}^{tip}$ of the tip of the catheter from the nominal point, and the maximum threshold distance $d_{thld}^{body}$ of the catheter body from the nominal trajectory. The integral for computing $d_{t+1}^{body}$ is calculated by Monte Carlo (MC) approximation, generating S samples along the part of the nominal trajectory where the catheter body is located. It should be noted that this integral can also be computed using a grid. Thus, the problem in Eq. 12 is restated as:

$u_t^*\approx\arg\min_{u\in\mathcal{U}}\;\underbrace{\sqrt{\left(x_{t+1}^{NP}-f(s_t,u,l)\right)^2+\left(y_{t+1}^{NP}-g(s_t,u,l)\right)^2}}_{d_{t+1}^{tip}}+\gamma\,\underbrace{\frac{1}{S}\sum_{j=1}^{S}\sqrt{\left(x_{t+1}^{NT}-f(s_t,u,l_j^s)\right)^2+\left(y_{t+1}^{NT}-g(s_t,u,l_j^s)\right)^2}}_{d_{t+1}^{body}}\quad\text{s.t.}\;\;d_{t+1}^{tip}<d_{thld}^{tip},\;\;d_{t+1}^{body}<d_{thld}^{body}, \quad (13)$

where $l_j^s$ is the distance of the j-th MC sample from the initial point of the catheter. This non-linear constrained optimization problem can be solved using any non-linear optimizer, including population-based evolutionary optimization techniques such as the genetic algorithms used in this paper. For the next procedural phase, the optimized actions $u_t^*$ are executed by the interventionalist in order to reach the new catheter position. This optimization problem is solved for all points on the nominal trajectory until the trajectory is complete. The proposed framework can benefit catheterization procedures by providing interventionalists the capability to select the appropriate catheter prior to surgeries. By accounting for the manufacturability considerations and the complexities that arise with increasing the number of actuators, the optimal number of actuators that allows the catheter to reach the aneurysm location while satisfying the constraints in Eq. 13 will be selected by the interventionalist for the catheterization procedure. A compact sketch of this per-step optimization is given below.
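As referenced above, a compact sketch of the per-step optimization in Eq. 13, continuing the earlier Python sketches. The threshold constraints of Eq. 13 are omitted for brevity (in practice they can be enforced via penalties or the optimizer's constraint interface); `differential_evolution` stands in for the paper's genetic algorithm, and `closest_on_centerline` is a hypothetical lookup for the nearest nominal-trajectory point.

```python
from scipy.optimize import differential_evolution

def step_cost(u_flat, state, L, EI, l, nominal_pt, closest_on_centerline,
              gamma=1.0, S=100):
    """Eq. 13 objective: tip distance to the next nominal point plus the
    averaged distance of the catheter body from the centerline (here a
    uniform grid of S samples; MC sampling would use np.random.uniform)."""
    n = len(L)
    u = (u_flat[:n], u_flat[n])                   # (dM_1..dM_n, dd)
    xt, yt = apply_action(state, u, L, EI, l)     # catheter tip, Eq. 9
    d_tip = np.hypot(nominal_pt[0] - xt, nominal_pt[1] - yt)
    d_body = 0.0
    for ls in np.linspace(0.0, l, S):             # samples along the body
        xs, ys = apply_action(state, u, L, EI, ls)
        cx, cy = closest_on_centerline(xs, ys)
        d_body += np.hypot(cx - xs, cy - ys)
    return d_tip + gamma * d_body / S

# Adaptive per-step bounds as in Section 4 (M_max and d_TN set per scenario):
# bounds = [(-M_max, M_max)] * n + [(0.5 * d_TN, 2.0 * d_TN)]
# result = differential_evolution(step_cost, bounds,
#                                 args=(state, L, EI, l, nominal_pt, lookup))
```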
This pre-operative catheter selection avoids the multiple insertion attempts often made by the interventionalist at a single site, which lead to a more invasive procedure and cause patient discomfort.

## 4 Numerical Experiments

In this section, computational experiments are conducted to demonstrate the catheter selection process according to the maneuverability that can be achieved through the multi-actuator soft catheter robots. In all the experiments, the moment of inertia of the cross section about the neutral axis, i.e., I, is fixed along the catheter, considering a circular area with moment of inertia $I = \pi r^4/4$. The optimization problems are carried out using a genetic algorithm implemented in MATLAB, and the weight coefficient γ is set to 1 in all the experiments, assuming that it is equally important to keep the body and the tip of the catheter close to the centerline. All experiments are performed on a personal computer with an Intel 8-Core Core i7 CPU (3.8 GHz) and 16 GB of memory. Throughout the numerical experiments, the unit of all parameters associated with length is cm, and the other units are in the SI metric system.

### 4.1 Single-Trajectory Scenarios

In the first part of the experiments, we consider two trajectory scenarios Ghoreishi et al. (2021) along the vessels with aneurysms, as shown in Figure 6. These images demonstrate potential scenarios with increasing complexity and tortuosity. Scenario 1 demonstrates a small sidewall aneurysm arising from a third-order branch. Scenario 2 demonstrates added challenges due to increasing tortuosity and a more distal location on a fourth-order branch vessel. However, the aneurysm axis is somewhat more in line with the vessel orientation.

FIGURE 6. Simulated Single-Trajectory Scenarios (all dimensions are in cm).

#### 4.1.1 Scenario 1

In this scenario, we consider a circular catheter with length $l = 10$ cm, radius $r = 0.5$ cm, and modulus of elasticity $E = 1 \times 10^8$ Pa. The lengths of the actuators along the catheter are assumed to be equal, i.e., the length of each actuator in an n-actuator catheter is $l_i = l/n$, for $i = 1, \ldots, n$. As shown in the top plots in Figure 7, catheters with one, two, and three actuators are considered for this scenario, and 18 points are generated along the nominal trajectory. The number of MC samples for the integral computation is set to $S = 100$. The space of actions that can be taken is set adaptively at each step. The range of moments that can be exerted by the actuators at step t, i.e., $\Delta M_t$, is set to $[\min\{-M_{max}, -|M_{i,t}|\},\ \max\{M_{max}, |M_{i,t}|\}]$, where $M_{max}$ is considered to be 1 N·m in this scenario, and the space of the insertion depth of the catheter along the vessel is set to $[0.5\,d_t^{TN},\ 2\,d_t^{TN}]$, where $d_t^{TN}$ is the distance between the tip of the catheter at the current step and the next nominal point, i.e., between $[x_t^{tip}, y_t^{tip}]$ and $[x_{t+1}^{NP}, y_{t+1}^{NP}]$. The maximum threshold distances in the constraints are set to $d_{thld}^{tip} = 0.5$ and $d_{thld}^{body} = 2$, both in cm. The rationale behind setting $d_{thld}^{tip}$ smaller than $d_{thld}^{body}$ is the sharpness of the tip of the catheter, which can cause serious damage to the vessel walls if it gets very far from the nominal point at each step. However, more tolerance is allowed for the maximum distance of the catheter body from the nominal trajectory, due to the natural resistance and elasticity of the vessel wall toward the blunt catheter body.

FIGURE 7. The nominal trajectory and the points generated on the trajectory (top row).
The moments (middle row) and the insertion depths (bottom row) applied by one-actuator, two-actuator, and three-actuator catheters in 18 steps. The plots in columns (a), (b), and (c) in Figure 7 are associated with the one-actuator, two-actuator, and three-actuator catheters, respectively. The plots in the middle row represent the moments applied by each actuator, and the plots in the bottom row show the insertion depth of the catheter along the vessel at each step. It can be seen that the insertion depths in all three catheters are almost in the same range. However, the moment exerted by each actuator is different in the catheters with one, two, and three actuators. The number of actuators and the moments applied by each actuator play the key role in the performance of catheters in tracking the desired trajectory and reaching the target location where the aneurysm is located. This is demonstrated in Figure 8, which represents the average and maximum distance of the catheter body from the nominal trajectory and the distance of the catheter tip from the nominal points, achieved by the one-actuator, two-actuator, and three-actuator catheters at each step. The figure shows that the three-actuator catheter has the highest performance in following the desired trajectory, as it maintains the smallest distance from the nominal trajectory and the nominal points compared to the catheters with one and two actuators. The two-actuator catheter performs reasonably well in this scenario; however, the only actuator in the one-actuator catheter provides very limited flexibility, which results in large distances, although still acceptable according to the thresholds considered in the constraints.

FIGURE 8. The average and maximum distance of the catheter body from the nominal trajectory and the distance of the catheter tip from the nominal points, achieved by one-actuator, two-actuator, and three-actuator catheters in tracking trajectory scenario 1. The maximum threshold distances in the constraints are $d_{thld}^{body} = 2$ cm and $d_{thld}^{tip} = 0.5$ cm.

#### 4.1.2 Scenario 2

Here, we consider a more complex scenario at a smaller scale in comparison to the trajectory scenario in the previous experiment. In this scenario, the geometric and material properties of the circular catheter are set to $l = 1.5$ cm, $r = 0.4$ cm, and $E = 1 \times 10^8$ Pa. As in the previous experiment, the lengths of the actuators along the catheter are assumed to be equal. As shown in the top left plot in Figure 9, we consider 26 points along the nominal trajectory. The number of MC samples for the integral computation is set to $S = 100$. The space of actions that can be taken in order to guide the catheter from the current step to the next step is set adaptively at each step. The range of moments that can be exerted by the actuators at step t, i.e., $\Delta M_t$, is set to $[\min\{-M_{max}, -|M_{i,t}|\},\ \max\{M_{max}, |M_{i,t}|\}]$, where $M_{max}$ is considered to be 10 N·m in this scenario, and the space of the insertion depth of the catheter along the vessel is set to $[0.5\,d_t^{TN},\ 2\,d_t^{TN}]$, where $d_t^{TN}$ is the distance between the tip of the catheter at the current step and the next nominal point, i.e., between $[x_t^{tip}, y_t^{tip}]$ and $[x_{t+1}^{NP}, y_{t+1}^{NP}]$. The maximum threshold distance of the catheter tip from the nominal point and the threshold distance of the catheter body from the nominal trajectory are set to $d_{thld}^{tip} = 0.2$ and $d_{thld}^{body} = 0.5$, respectively, both in cm. These threshold distances are set smaller than those in the previous experiment due to the delicacy of the veins in this scenario compared to scenario 1.

FIGURE 9.
The nominal trajectory and the moments and insertion depth applied by the three-actuator catheter in 26 steps (top row), and the precision achieved in this trajectory tracking (bottom row). In this scenario, the one-actuator and two-actuator catheters are unable to provide adequate maneuverability to follow the desired trajectory, as the distances from the target points and the nominal trajectory exceed the maximum threshold distances. However, the three-actuator catheter is capable of tracking this trajectory scenario while maintaining the constraints. The moments exerted by each of the three actuators and the insertion depth of the catheter at each step to move the catheter to the next step, until reaching the target aneurysm location, are demonstrated in the two top right plots in Figure 9. The plots in the bottom row of this figure show the average and maximum distance of the catheter body from the nominal trajectory as well as the distance of the catheter tip from the nominal point. It can be seen that the maximum threshold distances are satisfied at each step. This is achieved through the use of three actuators along the catheter, increasing the flexibility and maneuverability of the catheter. This indicates the benefit of selecting the number of actuators prior to endovascular catheterization procedures, which prevents the difficulties that can arise due to inappropriate selection of catheters with limited maneuverability.

### 4.2 Multi-Trajectory/Fractal Tree Scenarios

In this part of the experiments, we consider fractal trees due to their similarity with the branching structure patterns of arteries in the vascular system Perdikaris et al. (2015). These fractal trees are assumed to be the vessel centerlines. To generate the fractal tree structures, we consider two branches stemming from each parent branch. We specify the number of branches $n_{br}$ in a chain, the branching angles $\phi_1$ and $\phi_2$, and the length ratios $\lambda_1 = l_1/l_0$ and $\lambda_2 = l_2/l_0$ as the ratios of the lengths of the branches to the length of the parent vessel at the bifurcation, as shown in Figure 10 for $n_{br} = 2$. In our computational analysis, we consider two fractal trees with parameters $n_{br}=6, \lambda_1=0.75, \lambda_2=0.8, \phi_1=5\pi/12, \phi_2=-\pi/6$ and $n_{br}=9, \lambda_1=0.7, \lambda_2=0.75, \phi_1=5\pi/12, \phi_2=-5\pi/36$, as in Ajam et al. (2017), which we show in Figure 11.

FIGURE 10. Fractal tree modeling of the vascular system.

FIGURE 11. Fractal trees with parameters (a) $n_{br}=6, \lambda_1=0.75, \lambda_2=0.8, \phi_1=5\pi/12, \phi_2=-\pi/6$ and (b) $n_{br}=9, \lambda_1=0.7, \lambda_2=0.75, \phi_1=5\pi/12, \phi_2=-5\pi/36$.

For this set of experiments, the geometric and material properties of the circular catheter are considered to be $l = 2$ cm, $r = 0.4$ cm, and $E = 1 \times 10^8$ Pa, indicating the length, radius, and modulus of elasticity of the catheter, respectively. For all the generated branches, and considering each sequence of branches one at a time as the desired trajectory, we apply our proposed framework for different numbers of actuators with equal lengths $l_i = l/n$ for $i = 1, \ldots, n$, where n is the number of actuators. We consider uniform points along the nominal trajectories to be followed by the catheter, as shown for one desired trajectory of the fractal trees in Figure 11. Similar to the previous experiments, the space of actions to guide the catheter from the current step to the next step is set adaptively at each step. A code sketch of the fractal-tree construction follows below.
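As referenced above, a sketch of this fractal-tree construction (our own illustrative code; the branch angles and length ratios follow the parameter convention of Figure 10, with function and variable names chosen by us):

```python
def fractal_tree(n_br, lam1, lam2, phi1, phi2, l0=1.0):
    """Generate 2-D centerline segments of a binary fractal tree used as a
    stand-in vasculature (Sec. 4.2). Each branch spawns two children whose
    lengths are lam1/lam2 times the parent's and whose headings deviate by
    phi1/phi2. Returns a list of ((x0, y0), (x1, y1)) segments."""
    segments = []
    def grow(x, y, heading, length, depth):
        if depth == 0:
            return
        x1 = x + length * np.cos(heading)
        y1 = y + length * np.sin(heading)
        segments.append(((x, y), (x1, y1)))
        grow(x1, y1, heading + phi1, lam1 * length, depth - 1)
        grow(x1, y1, heading + phi2, lam2 * length, depth - 1)
    grow(0.0, 0.0, np.pi / 2, l0, n_br)   # root grows upward from the origin
    return segments

# Fractal tree (a) of Figure 11:
tree_a = fractal_tree(6, 0.75, 0.8, 5 * np.pi / 12, -np.pi / 6)
```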
The range of moments that can be exerted by the actuators at step t, i.e., $\Delta M_t$, is set to $[\min\{-M_{max}, -|M_{i,t}|\},\ \max\{M_{max}, |M_{i,t}|\}]$, where $M_{max}$ is considered to be 5 N·m in these scenarios, and the space of the insertion depth of the catheter along the vessel is set to $[0.5\,d_t^{TN},\ 2\,d_t^{TN}]$, where $d_t^{TN}$ is the distance between the tip of the catheter at the current step and the next point along the nominal trajectory. The maximum threshold distance of the catheter tip from the nominal point and the maximum threshold distance of the catheter body from the nominal trajectory are set to $d_{thld}^{tip} = 0.1$ cm and $d_{thld}^{body} = 0.2$ cm. The number of MC samples for the integral computation is set to $S = 100$. Table 1 represents the results averaged over all trajectories in the fractal trees shown in Figure 11. In this table, the success rate, average computation time, and average discrepancy from the centerline are presented for different numbers of actuators, where the success rate is the percentage of the branches in a fractal tree that the catheter could follow completely. According to the success-rate percentages, it can be seen that the three- and four-actuator catheters are able to successfully follow all the trajectories in fractal tree (a). However, considering the control and design complexities that arise with increasing the number of actuators, it is desirable to select the three-actuator catheter for this fractal tree. In fractal tree (b), only the four-actuator catheter is capable of tracking all the trajectories while satisfying the threshold distance constraints. It can be seen that as the number of actuators increases, the average discrepancy of the catheter body from the centerlines decreases in both fractal trees, although the average discrepancies are higher in fractal tree (b) due to its complexity. The small average computation time of the proposed framework for trajectory tracking in these fractal trees emphasizes the benefits that can be achieved by pre-operative selection of proper catheters, avoiding the significant time, cost, and risk of catheterization procedures if inappropriate catheters are selected.

TABLE 1. Results averaged over branches of the fractal trees in Figure 11, obtained for catheters with different numbers of actuators.

## 5 Conclusion

Interventional medicine is seeing a growing trend toward minimally invasive and catheter-based therapy, including in cerebrovascular procedures for the treatment of cerebral aneurysms. Catheter-based surgeries can decrease hospitalization time and greatly lower patient morbidity compared to traditional open methods. However, catheter-based surgeries are often hindered by the lack of maneuverability of conventional catheters. The maneuverability of a catheter for intravascular navigation is key to reaching the target area, and it affects to a great extent the length and success of the procedure. This paper provided a simulation-based framework for pre-operative selection of catheters with a desired level of maneuverability for the treatment of cerebral aneurysms. The desired maneuverability is achieved by considering the appropriate number of pneumatic actuators along the catheter. The formulations for static deflection and dynamic modeling of multi-actuator soft catheters for trajectory tracking in two dimensions are provided. Future work includes the intertwined design and dynamic analysis of multi-actuator soft catheters in three-dimensional space.
In this work, the shear forces between the catheter body and the blood, which are particularly important in the dynamic analysis of catheters, are not considered. Further, the contact forces between the catheter and the vessel walls are modeled through the tolerance of the vessel walls in keeping the catheter inside the vessels, accounted for by the maximum threshold distance of the catheter tip from the nominal point and the maximum threshold distance of the catheter body from the nominal trajectory. These contact forces play a critical and unavoidable role in catheterization procedures. Thus, the interactions of the catheter with the blood and vessel walls, and the resultant contact forces, need to be studied extensively in future research to model realistic catheterization scenarios.

## Data Availability Statement

The original contributions presented in the study are included in the article/Supplementary Material; further inquiries can be directed to the corresponding author.

## Author Contributions

All authors listed have made a substantial, direct, and intellectual contribution to the work and approved it for publication.

## Funding

This work was supported by the Maryland Robotics Center at the University of Maryland and the National Institutes of Health through NIH NHLBI R01HL143468.

## Conflict of Interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

## Publisher’s Note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors, and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

## Acknowledgments

The authors acknowledge the support of the Maryland Robotics Center at the University of Maryland and the National Institutes of Health through NIH NHLBI R01HL143468.
2022-01-21 22:46:08
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 57, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6271522641181946, "perplexity": 1279.9929373729724}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320303717.35/warc/CC-MAIN-20220121222643-20220122012643-00370.warc.gz"}
http://math.stackexchange.com/questions/268042/integral-of-1-sqrtx2-a2-where-a-0-why-does-a-has-to-be-greater-t?answertab=active
# Integral of $1/\sqrt{x^2 - a^2}$ where $a > 0$, why does $a$ have to be greater than $0$?

I know how to solve the integral (set $x = a\sec(\theta)$, then $dx = a\sec(\theta)\tan(\theta)\,d\theta$ where $0 < \theta < \pi/2$ in order to have a one-to-one function); anyway, the problem specifies that $a > 0$, but I don't see how this changes anything or affects the substitution, because $x = a\sec(\theta)$ still remains a one-to-one function, correct? I may be wrong, but from the graph of $a\sec(\theta)$ it appears that the sign of $a$ does not change the fact that $\sec(\theta)$ is one-to-one. Does it affect something else?

Then we could also use the substitution $x = a\cosh(t)$, with $dx = a\sinh(t)\,dt$, but the problem then specifies that $x > 0$. Why these restrictions?

- Indeed, $a=-3$ is no different from $a=3$, because $a$ is squared in the function. It's just the same function to be integrated. The case $a=0$ should be treated separately, however. –  user53153 Dec 31 '12 at 8:30 @PavelM What would happen when $a = 0$? –  Lance Ferd Jan 1 '13 at 0:06 The substitution $x=a\sec\theta$ would become $x=0$, which is not a legal move. Also, you can see in the accepted answer how many times $a$ is in the denominator or under a logarithm. Can't do any of that when $a=0$. // Of course, for $a=0$, $1/\sqrt{x^2-a^2} = 1/|x|$ can be integrated without a trig sub. –  user53153 Jan 1 '13 at 0:09 If the problem specifies that $0<\theta<\pi/2$ and $x = a \sec\theta$, then $x$ and $a$ have to be both positive or both negative (because $\sec\theta>0$). As shown in lab bhattacharjee's answer, it is not necessary that $x$ and $a$ be positive. The author may have specified them positive so that the reader could avoid dealing with $|a|$, to make the problem easier. Hard to say more without the exact statement of the problem. The same could be said about the second question, since $\cosh t > 0$. –  Michael E2 Jan 1 '13 at 5:50

If $x=a\sec \theta$, then $dx=a\sec \theta\tan\theta \,d\theta$ and

$\sqrt{x^2-a^2}=\sqrt{a^2(\sec^2\theta-1)}=|a\tan \theta|=|a||\tan \theta|$

As $0< \theta< \frac\pi 2$, $|\tan \theta|=\tan \theta$. So, $\sqrt{x^2-a^2}=|a|\tan \theta$ if $0< \theta< \frac\pi 2$.

$$\int \frac{dx}{\sqrt{x^2-a^2}}=\int\frac{a\sec\theta\tan\theta \,d\theta}{|a|\tan \theta}=\operatorname{sign}(a)\int\sec\theta \,d\theta=\operatorname{sign}(a)\ln|\sec\theta+\tan\theta|+C=\operatorname{sign}(a)\ln\left|\frac xa+\frac{\sqrt{x^2-a^2}}{|a|}\right|+C$$

If $\operatorname{sign}(a)>0$, $\int \frac{dx}{\sqrt{x^2-a^2}}=\ln\left|\frac xa+\frac{\sqrt{x^2-a^2}}a\right|+C=\ln|x+\sqrt{x^2-a^2}|+C-\ln a$

If $\operatorname{sign}(a)<0$, $\int \frac{dx}{\sqrt{x^2-a^2}}=-\ln\left|\frac xa-\frac{\sqrt{x^2-a^2}}a\right|+C=-\ln|x-\sqrt{x^2-a^2}|+\ln|a|+C$

But $\ln|x+\sqrt{x^2-a^2}|+\ln|x-\sqrt{x^2-a^2}|=\ln|x^2-(x^2-a^2)|=\ln |a^2|$, which is constant.

So, if $\operatorname{sign}(a)<0$, $\int \frac{dx}{\sqrt{x^2-a^2}}=\ln|x+\sqrt{x^2-a^2}|-\ln|a^2|+\ln|a|+C=\ln|x+\sqrt{x^2-a^2}|+C'$, where $C'=C-\ln |a|$ is also a constant.

So, the value of the integral in the above two cases differs only by a constant; hence the sign of $a$ does not matter.

- Please take a look at mine. I hope it is applicable. +1 –  B. S. Dec 31 '12 at 9:00 First question: when establishing that $dx = a\sec\theta\tan\theta\,d\theta$, are we assuming that $\operatorname{sign}(a) > 0$? Second question: I don't follow the last step starting with "But,..."; what is it supposed to show? Thanks for your help so far. –  Lance Ferd Dec 31 '12 at 20:12 @LanceFerd, no. $a$ is after all a constant, right? –  lab bhattacharjee Dec 31 '12 at 20:15 Yes, it is a constant.
–  Lance Ferd Dec 31 '12 at 23:51 @labbhattacharjee However, when determining sec(theta) the end results would be different if sign(a) < 0, wouldn't they? –  Lance Ferd Jan 1 '13 at 0:04
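A quick symbolic cross-check of the thread's conclusion (our own addition, using sympy): the antiderivative $\ln|x+\sqrt{x^2-a^2}|$ differentiates back to the integrand for either sign of $a$, since $a$ enters only through $a^2$.

```python
# Verify d/dx ln(x + sqrt(x^2 - a^2)) = 1/sqrt(x^2 - a^2) for a = 3 and a = -3.
import sympy as sp

x = sp.symbols('x', positive=True)  # keep x real and positive for simplicity
for a_val in (3, -3):
    F = sp.log(x + sp.sqrt(x**2 - a_val**2))
    residual = sp.simplify(sp.diff(F, x) - 1/sp.sqrt(x**2 - a_val**2))
    print(a_val, residual)  # prints 0 for both values of a
```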
2014-07-28 06:34:19
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9147483706474304, "perplexity": 366.7506325937658}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1406510256757.9/warc/CC-MAIN-20140728011736-00118-ip-10-146-231-18.ec2.internal.warc.gz"}
https://math.stackexchange.com/questions/543561/partitions-of-n-into-distinct-odd-and-even-parts-proof
# Partitions of $n$ into distinct odd and even parts proof

Let $p_\text{odd}(n)$ denote the number of partitions of $n$ into an odd number of parts, and let $p_\text{even}(n)$ denote the number of partitions of $n$ into an even number of parts. How do I prove that

1. $|p_\text{even}(n) - p_\text{odd}(n)|$ is equal to the number of partitions of $n$ into distinct odd parts.

2. Show that the number of partitions of $n$ for which no part appears exactly once is equal to the number of partitions of $n$ for which every part is divisible by 2 or 3.

3. Show that the number of partitions of $n$ for which no part appears more than twice is equal to the number of partitions of $n$ for which no part is divisible by 3.

• The answers to this question give two proofs of a slightly stronger version of (1), one combinatorial and the other via generating functions. – Brian M. Scott Oct 29 '13 at 14:30

For the sake of completeness here is a slightly different approach to part one, which then continues along the path in the link from Brian Scott. The generating function of the set of partitions by the number of parts is given by $$G(z,u) = \prod_{k\ge 1} \frac{1}{1-uz^k}.$$ It follows that the generating function of the set of partitions with an even number of parts is given by $$G_1(z) = \frac{1}{2} G(z, 1) + \frac{1}{2} G(z, -1) = \frac{1}{2} \prod_{k\ge 1} \frac{1}{1-z^k} + \frac{1}{2} \prod_{k\ge 1} \frac{1}{1+z^k}.$$ Similarly for an odd number of parts, $$G_2(z) = \frac{1}{2} G(z, 1) - \frac{1}{2} G(z, -1) = \frac{1}{2} \prod_{k\ge 1} \frac{1}{1-z^k} - \frac{1}{2} \prod_{k\ge 1} \frac{1}{1+z^k}.$$ Therefore $$G_1(z) - G_2(z) = Q(z) = \prod_{k\ge 1} \frac{1}{1+z^k}$$ and this is the generating function of the difference between the number of partitions into an even and an odd number of parts. Now observe that this is $$\prod_{k\ge 1} \frac{1-z^k}{1-z^{2k}} = \prod_{k\ge 0} (1-z^{2k+1})$$ because the denominator cancels all the factors with even powers in the numerator. Here is an important observation: the above expression of $Q(z)$ enumerates partitions into distinct odd parts in a generating function of signed coefficients, where the sign indicates the parity of the number of parts. There is no cancellation between partitions that add up to the same value because the counts of constituent parts have the same parity. Moreover, as all parts are odd, partitions of odd numbers must have an odd number of parts and those of even numbers, an even number. Therefore the coefficients of $Q(z)$ alternate in sign. To get the series that generates the absolute values of these coefficients, we create generating functions for the even powers and the odd ones, inverting the sign of the coefficients of the odd ones. The even ones are generated by $$\frac{1}{2} Q(z) + \frac{1}{2} Q(-z)$$ and the odd ones by $$-\left(\frac{1}{2} Q(z) - \frac{1}{2} Q(-z)\right)$$ Adding these gives $$Q(-z).$$ But this is $$\prod_{k\ge 0} (1-(-z)^{2k+1}) = \prod_{k\ge 0} (1-(-1)^{2k+1} z^{2k+1}) = \prod_{k\ge 0} (1+ z^{2k+1}),$$ precisely the generating function of partitions into distinct odd parts. This is sequence A000700 from the OEIS.
For the second part, the generating function of the partitions in which no part appears exactly once is given by $$\prod_{k\ge 1} \left(-z^k + \frac{1}{1-z^k}\right) = \prod_{k\ge 1} \frac{z^{2k}-z^k+1}{1-z^k} = \prod_{k\ge 1} \frac{1+z^{3k}}{1-z^{2k}}.$$ Now this generating function can be written as $$\prod_{k\ge 1} \frac{1+z^{3k}}{1-z^{2k}} \frac{1-z^{3k}}{1-z^{3k}} = \prod_{k\ge 1} \frac{1-z^{6k}}{(1-z^{2k})(1-z^{3k})}.$$ This is precisely the generating function for partitions where all parts are divisible by two or three, because the factor in the numerator compensates for the fact that parts divisible by six appear twice (once on the left and once on the right) in the two factors in the denominator. This is sequence A007690 from the OEIS. Continuing with what Brian Scott started, we have for number three that this is the generating function for partitions in which no part appears more than twice: $$\prod_{k\ge 1} (1+z^k+z^{2k}).$$ And the generating function for partitions in which no part is divisible by three is $$\prod_{k\ge 0} \frac{1}{1-z^{3k+1}} \prod_{k\ge 0} \frac{1}{1-z^{3k+2}}.$$ Note that the second generating function can be written as $$\prod_{k\ge 1} \frac{1-z^{3k}}{1-z^k}$$ because the numerators cancel all denominators where a power of $z$ appears whose exponent is divisible by three. But we have $$\frac{1-z^{3k}}{1-z^k} = 1 + z^k + z^{2k},$$ proving the claim. This is sequence A000726 from the OEIS.
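As a sanity check of part one (our own addition, independent of the generating-function argument), a brute-force count agrees for small $n$:

```python
# Compare |p_even(n) - p_odd(n)| with the count of partitions of n into
# distinct odd parts, for n = 1..14.
from functools import lru_cache

@lru_cache(maxsize=None)
def signed(n, max_part):
    """Sum over partitions of n with parts <= max_part of (-1)^(#parts)."""
    if n == 0:
        return 1
    return sum(-signed(n - k, k) for k in range(1, min(n, max_part) + 1))

def distinct_odd(n, smallest=1):
    """Count partitions of n into strictly increasing odd parts >= smallest."""
    if n == 0:
        return 1
    return sum(distinct_odd(n - k, k + 2) for k in range(smallest, n + 1, 2))

for n in range(1, 15):
    assert abs(signed(n, n)) == distinct_odd(n)
print("verified for n = 1..14")
```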
2019-06-24 13:30:46
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8679253458976746, "perplexity": 62.182537634977436}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627999539.60/warc/CC-MAIN-20190624130856-20190624152856-00183.warc.gz"}
https://www.physicsforums.com/threads/what-is-the-result-of-this-partial-derivative.918163/
# I What is the Result of this Partial Derivative

Tags:

1. Jun 21, 2017

### ecastro

What is the result of this kind of partial differentiation? \begin{equation*} \frac{\partial}{\partial x} \left(\frac{\partial x}{\partial t}\right) \end{equation*} Is it zero?

2. Jun 21, 2017

### haruspex

Out of context it means nothing. A partial derivative means changing the indicated variable while keeping some other variable(s) constant. Usually it is obvious what those other variables are. In a 3D coordinate system, a partial wrt one coordinate implies keeping the other two constant. You need to provide a context for the expression.

3. Jun 21, 2017

### ecastro

I apologize for the missing context. For example, $x$ signifies position and $t$ time.

4. Jun 21, 2017

### haruspex

In that case I assume that the partial wrt $x$ means other spatial coordinates are held constant, but what is the significance of the partial wrt $t$? What is being held constant there? I.e., why is it not just $dx/dt$? Anyway, interpreting it as $dx/dt$: Consider some line of particles or an elastic thread along the x axis. If we take $x$ as the location of some element at time $t$, we can ask how quickly it is moving along the x axis: $dx/dt$. The answer may be different for different points along the line, i.e. at different $x$ values. We could then ask how rapidly this velocity changes as we look along the line. This is the velocity gradient, $\frac d{dx}\frac{dx}{dt}$.

Last edited: Jun 21, 2017
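To make the elastic-thread picture concrete, here is a small symbolic example (our own addition, with an invented motion): label each particle by its initial position $x_0$, pick a motion $x(x_0, t)$, and compute the velocity gradient $dv/dx$ by the chain rule.

```python
# Velocity gradient of a uniformly stretching line of particles.
import sympy as sp

x0, t = sp.symbols('x0 t')
x = x0 * sp.exp(t)                          # hypothetical motion: x(x0, t)
v = sp.diff(x, t)                           # velocity of the particle labeled x0
grad_v = sp.diff(v, x0) / sp.diff(x, x0)    # dv/dx via the chain rule
print(sp.simplify(grad_v))                  # 1: velocity grows linearly along the line
```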
2018-07-20 18:44:58
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8563758730888367, "perplexity": 879.9434583273626}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676591719.4/warc/CC-MAIN-20180720174340-20180720194340-00361.warc.gz"}
https://research.tue.nl/nl/publications/improved-bounds-for-the-union-of-locally-fat-objects-in-the-plane
# Improved bounds for the union of locally fat objects in the plane

B. Aronov, M.T. de Berg, E. Ezra, M. Sharir

We show that, for any $\gamma > 0$, the combinatorial complexity of the union of $n$ locally $\gamma$-fat objects of constant complexity in the plane is $\frac{n}{\gamma^4} 2^{O(\log^*n)}$. For the special case of $\gamma$-fat triangles, the bound improves to $O(n \log^*{n} + \frac{n}{\gamma}\log^2{\frac{1}{\gamma}})$.

Keywords: combinatorial geometry, union complexity, fat objects
2021-05-07 00:28:13
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8862884640693665, "perplexity": 2014.6824798531368}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243988774.18/warc/CC-MAIN-20210506235514-20210507025514-00000.warc.gz"}
https://www.gradesaver.com/textbooks/math/precalculus/precalculus-6th-edition-blitzer/chapter-11-section-11-1-finding-limits-using-tables-and-graphs-exercise-set-page-1141/91
## Precalculus (6th Edition) Blitzer Please note that in the definition of the limit $\lim_{x \to a}f(x)$, the function $f$ does not need to be defined at $x=a$. In the given case, the function is not defined at $3$, but is defined around this point.
2020-06-06 02:43:16
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9966902732849121, "perplexity": 79.58415151254994}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590348509264.96/warc/CC-MAIN-20200606000537-20200606030537-00483.warc.gz"}
https://www.jobilize.com/algebra/section/key-concepts-linear-inequalities-and-absolute-value-by-openstax?qcr=www.quizover.com
# 2.7 Linear inequalities and absolute value inequalities  (Page 4/11)

$\begin{array}{c}-200\le x-600\le 200\\ -200+600\le x-600+600\le 200+600\\ 400\le x\le 800\end{array}$

This means our returns would be between $400 and $800.

To solve absolute value inequalities, just as with absolute value equations, we write two inequalities and then solve them independently.

## Absolute value inequalities

For an algebraic expression $X$ and $k>0$, an absolute value inequality is an inequality of the form

$|X|<k$, which is equivalent to $-k<X<k$, or

$|X|>k$, which is equivalent to $X<-k$ or $X>k$.

These statements also apply to $|X|\le k$ and $|X|\ge k$.

## Determining a number within a prescribed distance

Describe all values $x$ within a distance of 4 from the number 5.

We want the distance between $x$ and 5 to be less than or equal to 4. We can draw a number line, such as in [link], to represent the condition to be satisfied. The distance from $x$ to 5 can be represented using an absolute value symbol, $|x-5|$. Write the values of $x$ that satisfy the condition as an absolute value inequality.

$|x-5|\le 4$

We need to write two inequalities, as there are always two solutions to an absolute value equation.

$x-5\le 4 \quad\text{and}\quad x-5\ge -4$, giving $x\le 9$ and $x\ge 1$.

Since the solution set is $x\le 9$ and $x\ge 1$, the solution set is an interval including all real numbers between and including 1 and 9. So $|x-5|\le 4$ is equivalent to $\left[1,9\right]$ in interval notation.

Describe all x-values within a distance of 3 from the number 2.

$|x-2|\le 3$

## Solving an absolute value inequality

Solve $|x-1|\le 3$.

$|x-1|\le 3$, so $-3\le x-1\le 3$, so $-2\le x\le 4$, i.e. $\left[-2,4\right]$.

## Using a graphical approach to solve absolute value inequalities

Given the equation $y=-\frac{1}{2}|4x-5|+3$, determine the x-values for which the y-values are negative.

We are trying to determine where $y<0$, which is when $-\frac{1}{2}|4x-5|+3<0$. We begin by isolating the absolute value: $-\frac{1}{2}|4x-5|<-3$, so $|4x-5|>6$. Next, we solve the equality $|4x-5|=6$:

$4x-5=6 \quad\text{or}\quad 4x-5=-6$, giving $x=\frac{11}{4}$ or $x=-\frac{1}{4}$.

Now, we can examine the graph to observe where the y-values are negative. We observe where the branches are below the x-axis. Notice that it is not important exactly what the graph looks like, as long as we know that it crosses the horizontal axis at $x=-\frac{1}{4}$ and $x=\frac{11}{4}$, and that the graph opens downward. See [link].
Solve $-2|k-4|\le -6$.

$k\le 1$ or $k\ge 7$; in interval notation, this would be $\left(-\infty ,1\right]\cup \left[7,\infty \right)$.

Access these online resources for additional instruction and practice with linear inequalities and absolute value inequalities.

## Key concepts

• Interval notation is a method to indicate the solution set of an inequality. Highly applicable in calculus, it is a system of parentheses and brackets that indicate what numbers are included in a set and whether the endpoints are included as well. See [link] and [link].
• Solving inequalities is similar to solving equations. The same algebraic rules apply, except for one: multiplying or dividing by a negative number reverses the inequality. See [link], [link], [link], and [link].
• Compound inequalities often have three parts and can be rewritten as two independent inequalities. Solutions are given by boundary values, which are indicated as a beginning boundary or an ending boundary in the solutions to the two inequalities. See [link] and [link].
• Absolute value inequalities will produce two solution sets due to the nature of absolute value. We solve by writing two inequalities: one with a positive value and one with a negative value. See [link] and [link].
• Absolute value inequalities can also be solved by graphing. At the least we can check the algebraic solutions by graphing, as we cannot depend on a visual for a precise solution. See [link].
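As a quick cross-check of the worked examples above (our own addition, not part of the original section), a computer algebra system reproduces both solution sets:

```python
# Solve |x - 1| <= 3 and -2|k - 4| <= -6 over the reals with sympy.
import sympy as sp

x, k = sp.symbols('x k', real=True)
print(sp.solveset(sp.Abs(x - 1) <= 3, x, sp.S.Reals))
# expected: Interval(-2, 4)
print(sp.solveset(-2 * sp.Abs(k - 4) <= -6, k, sp.S.Reals))
# expected: Union(Interval(-oo, 1), Interval(7, oo))
```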
2019-02-24 05:01:05
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 29, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8849159479141235, "perplexity": 584.8079453000838}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550249595829.93/warc/CC-MAIN-20190224044113-20190224070113-00585.warc.gz"}
https://aitopics.org/mlt?cdid=arxivorg%3A55D9EB21&dimension=pagetext
### Recovering metric from full ordinal information

Given a geodesic space $(E, d)$, we show that full ordinal knowledge of the metric $d$, i.e. knowledge of the function $D_d : (w, x, y, z) \mapsto \mathbf{1}_{d(w,x)\le d(y,z)}$, determines the metric $d$ uniquely, up to a constant factor. For a subspace $E_n$ of $n$ points of $E$, converging in Hausdorff distance to $E$, we construct a metric $d_n$ on $E_n$, based only on the knowledge of $D_d$ on $E_n$, and establish a sharp upper bound on the Gromov-Hausdorff distance between $(E_n, d_n)$ and $(E, d)$.

### Gromov-Wasserstein Learning for Graph Matching and Node Embedding

A novel Gromov-Wasserstein learning framework is proposed to jointly match (align) graphs and learn embedding vectors for the associated graph nodes. Using the Gromov-Wasserstein discrepancy, we measure the dissimilarity between two graphs and find their correspondence, according to the learned optimal transport. The node embeddings associated with the two graphs are learned under the guidance of the optimal transport, the distance of which not only reflects the topological structure of each graph but also yields the correspondence across the graphs. These two learning steps are mutually beneficial, and are unified here by minimizing the Gromov-Wasserstein discrepancy with structural regularizers. This framework leads to an optimization problem that is solved by a proximal point method. We apply the proposed method to matching problems in real-world networks, and demonstrate its superior performance compared to alternative approaches.

### Representative Datasets: The Perceptron Case

One of the main drawbacks of the practical use of neural networks is the long time needed in the training process. Such a training process consists of an iterative change of parameters trying to minimize a loss function. These changes are driven by a dataset, which can be seen as a set of labeled points in an n-dimensional space. In this paper, we explore the concept of a representative dataset, which is smaller than the original dataset and satisfies a nearness condition independent of isometric transformations. The representativeness is measured using persistence diagrams due to their computational efficiency. We also prove that the accuracy of the learning process of a neural network on a representative dataset is comparable with the accuracy on the original dataset when the neural network architecture is a perceptron and the loss function is the mean squared error. These theoretical results, accompanied by experimentation, open a door to reducing the size of the dataset in order to save time in the training process of any neural network.

### Fused Gromov-Wasserstein distance for structured objects: theoretical foundations and mathematical properties

Optimal transport theory has recently found many applications in machine learning thanks to its capacity for comparing various machine learning objects considered as distributions. The Kantorovitch formulation, leading to the Wasserstein distance, focuses on the features of the elements of the objects but treats them independently, whereas the Gromov-Wasserstein distance focuses only on the relations between the elements, depicting the structure of the object, yet discarding its features. In this paper we propose to extend these distances in order to encode simultaneously both the feature and structure information, resulting in the Fused Gromov-Wasserstein distance.
We develop the mathematical framework for this novel distance, prove its metric and interpolation properties, and provide a concentration result for the convergence of finite samples. We also illustrate and interpret its use in various contexts where structured objects are involved.

### Sliced Gromov-Wasserstein

Recently used in various machine learning contexts, the Gromov-Wasserstein distance (GW) allows for comparing distributions that do not necessarily lie in the same metric space. However, this Optimal Transport (OT) distance requires solving a complex non-convex quadratic program which is most of the time very costly both in time and memory. Contrary to GW, the Wasserstein distance (W) enjoys several properties (e.g. duality) that permit large-scale optimization. Among those, the Sliced Wasserstein (SW) distance exploits the direct solution of W on the line, which only requires sorting discrete samples in 1D. This paper proposes a new divergence based on GW akin to SW. We first derive a closed form for GW when dealing with 1D distributions, based on a new result for the related quadratic assignment problem. We then define a novel OT discrepancy that can deal with large-scale distributions via a slicing approach, and we show how it relates to the GW distance while being $O(n\log n)$ to compute. We illustrate the behavior of this so-called Sliced Gromov-Wasserstein (SGW) discrepancy in experiments where we demonstrate its ability to tackle similar problems as GW while being several orders of magnitude faster to compute.
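A rough numpy sketch of the slicing idea in the last abstract (our own simplification, not the authors' implementation): project both point clouds onto random lines and use the 1D closed form, in which, after sorting, the optimal coupling is either the identity or the anti-identity permutation. The sketch assumes equal-size, uniformly weighted samples.

```python
# Monte Carlo estimate of a sliced Gromov-Wasserstein-style discrepancy.
import numpy as np

def sliced_gw(xs, xt, n_proj=50, seed=0):
    rng = np.random.default_rng(seed)
    d = xs.shape[1]
    total = 0.0
    for _ in range(n_proj):
        theta = rng.normal(size=d)
        theta /= np.linalg.norm(theta)        # random direction on the sphere
        a = np.sort(xs @ theta)               # sorted 1D projections
        b = np.sort(xt @ theta)
        da = np.abs(a[:, None] - a[None, :])  # pairwise 1D distances
        costs = []
        for b_perm in (b, b[::-1]):           # identity vs anti-identity coupling
            db = np.abs(b_perm[:, None] - b_perm[None, :])
            costs.append(np.mean((da - db) ** 2))
        total += min(costs)
    return total / n_proj

pts = np.random.default_rng(1).normal(size=(100, 2))
print(sliced_gw(pts, pts + 5.0))   # ~0: GW-type costs ignore translations
```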
2019-09-21 05:25:24
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5472330451011658, "perplexity": 509.64877461461913}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514574265.76/warc/CC-MAIN-20190921043014-20190921065014-00143.warc.gz"}
https://drexel28.wordpress.com/2012/04/05/splitting-fields-and-algebraic-closures-pt-ii/
# Abstract Nonsense

## Splitting Fields and Algebraic Closures (Pt. II)

Point of Post: This is a continuation of this post.

Theorem (Isomorphism Extension Theorem): Let $k,k'$ be two fields and $\sigma:k\to k'$ a ring isomorphism. Then, if $\{f_j(x)\}\subseteq k[x]$ is a set of polynomials we can consider the set $\{\sigma(f_j(x))\}\subseteq k'[x]$, where $\sigma(f_j(x))$ is the polynomial obtained by applying $\sigma$ to the coefficients of $f_j$. Let $K$ be a splitting field for $\{f_j(x)\}$ and $K'$ be a splitting field for $\{\sigma(f_j(x))\}$. Then, there is an isomorphism $\varphi:K\to K'$ which is an extension of $\sigma$. Furthermore, if $\alpha\in K$ is some distinguished element we can take $\varphi$ to be such that $\varphi(\alpha)=\alpha'$, where $\alpha'$ is any root of $\sigma(m_{\alpha,k})$.

Proof: Basically we can create monomorphisms $L\to K'$ where $L$ is some “small” subextension of $K/k$ (namely we can just specify where we want the generators to go); the problem is in extending all the way to $K$. Of course, phrased this way it’s clear that Zorn’s lemma is going to be our lemma of choice for this proof [relevant]. Ok, so let’s formally define

$\mathcal{S}=\left\{(L,\varphi):L\text{ is a subextension of }K/k\text{ and }\varphi:L\to K'\text{ is an extension of }\sigma\right\}$

We define a partial ordering $\leqslant$ on $\mathcal{S}$ by demanding that $(L,\varphi)\leqslant (L',\varphi')$ if and only if $L\subseteq L'$ and $\varphi'$ is an extension of $\varphi$. Of course, $\mathcal{S}$ is not empty since $(k,\sigma)\in\mathcal{S}$. Now, let’s show that every chain in $\mathcal{S}$ has an upper bound. Indeed, let $\{(L_c,\varphi_c)\}_{c\in C}$ be a chain. Define then $\displaystyle L=\bigcup_{c\in C}L_c$ and $\varphi:L\to K'$ by defining $\varphi(x)=\varphi_c(x)$ for any $c\in C$ with $x\in L_c$ (by the requirements on the $\varphi_c$ and the fact that we have a chain this is well-defined). Clearly then $(L,\varphi)\in\mathcal{S}$ and $(L_c,\varphi_c)\leqslant (L,\varphi)$ for all $c\in C$. Thus, we see that every chain in $\mathcal{S}$ has an upper bound, and so our best friend Zorn hands us some maximal element $(M,\varphi)$ of $\mathcal{S}$. Our job now is to prove (we hope) that $M$ must actually be $K$ and that $\varphi(M)=K'$ [there is, of course, no need to prove that $\varphi$ is injective since it’s a ring map out of a field]. To prove that $M=K$ we merely note that if this weren’t true then necessarily there is some $f_j$ which does not split in $M$ (since otherwise $M$ contains all the roots of all the $f_j$ and so must be equal to $K$). Thus, we can find some root $\alpha$ of $f_j$ not contained in $M$. Then, by the lemma we can extend $\varphi:M\to \varphi(M)$ to $\varphi':M(\alpha)\to \varphi(M)(\alpha)$, which contradicts the maximality of $(M,\varphi)$. Thus, we see that $M=K$. Now, to see that $\varphi(M)=\varphi(K)=K'$ we merely note that $\varphi(K)$ is a splitting field for $\{\sigma(f_j)\}$ and thus must necessarily be equal to $K'$. $\blacksquare$

Ok, cool, so we have proven two things up to this point: splitting fields exist for a single polynomial, and splitting fields for an arbitrary set of polynomials are unique up to isomorphism. Well, there seems to be somewhat of a discrepancy between these two statements. Namely, we have a theorem telling us a fact about the splitting field of arbitrarily many polynomials yet, a priori, we only know that the splitting field of finitely many polynomials exists. Strange, right?
This raises the question of whether or not the splitting field of arbitrarily many polynomials even exists. Well, for a finite set of polynomials $\{f_1,\cdots,f_n\}$ we actually have nothing to wonder about since, as one can easily deduce, a splitting field for $f_1\cdots f_n$ is a splitting field for $\{f_1,\cdots,f_n\}$. But, what about infinitely many polynomials? We surely can’t just take their product. Indeed, for the infinitary case we are going to need slightly more sophisticated machinery.

To create splitting fields for arbitrarily many polynomials in $F[x]$ it suffices to find a field containing all the roots of all the polynomials in $F[x]$. Indeed, suppose that we have constructed such a field, let’s call it $A$; then to find a splitting field for $S\subseteq F[x]$ we merely let $R$ be the union of all the sets of roots in $A$ of all the elements of $S$. We can easily see then that $F(R)\subseteq A$ is a splitting field for $S$.

To create such a field, let’s consider the field $F$ and for each nonconstant $f(x)\in F[x]$ let $t_f$ be an indeterminate. Consider then the polynomial ring $R=F[\{t_f:f\in F[x]\}]$. Let then $\mathfrak{a}$ be the ideal generated by the $f(t_f)$ for each nonconstant $f\in F[x]$. What we’d like to do is put $\mathfrak{a}$ inside some maximal ideal $\mathfrak{m}$ (using Krull’s theorem), but for this we need to know that $\mathfrak{a}\subsetneq R$. If this failed, then $1\in\mathfrak{a}$, and so we could find $g_1,\cdots,g_n\in R$ and $t_{f_1},\cdots,t_{f_n}$ such that

$g_1f_1(t_{f_1})+\cdots+g_nf_n(t_{f_n})=1\quad\mathbf{(1)}$

Ok, but now here’s the cool part: let’s rename the variables $t_{f_i}=t_i$ for $i\in[n]$ and let $t_{n+1},\cdots,t_m$ be the other variables that occur in the $g_1,\cdots,g_n$, so that we can rewrite $\mathbf{(1)}$ as

$g_1(t_1,\cdots,t_m)f_1(t_1)+\cdots+g_n(t_1,\cdots,t_m)f_n(t_n)=1\quad\mathbf{(2)}$

That said, we can find some extension $k/F$ such that $k$ contains a root $\alpha_i$ of each $f_i$. Plugging $t_i=\alpha_i$ (and anything at all for $t_{n+1},\cdots,t_m$) into $\mathbf{(2)}$ gives $0=1$, which is evidently ridiculous. Thus, we see that $\mathfrak{a}\subsetneq R$ and so we can put $\mathfrak{a}\subseteq\mathfrak{m}$ where $\mathfrak{m}$ is maximal. Consider then $k_1=R/\mathfrak{m}$. Clearly $k_1$ is an extension of $F$ which contains a root of every nonconstant $f\in F[x]$ (namely, the image of $t_f$ is a root of $f$). We may repeat this process for $k_1$ to get another field $k_2$, and so on. Define then $A=F\cup k_1\cup k_2\cup k_3\cup\cdots$. Clearly $A$ is a field containing $F$, and since any polynomial in $A[x]$ has coefficients in some $k_r$, it has a root in $k_{r+1}$. Thus, every polynomial in $A[x]$ has a root in $A$. But, this clearly implies that every polynomial $f(x)\in A[x]$ has all its roots in $A$, since once we have a root, say $\alpha$, we can write $f(x)=(x-\alpha)h(x)$ for some $h(x)$. But, then $h(x)$ has a root in $A$, so we can write $f(x)=(x-\alpha)(x-\beta)c(x)$, etc. Thus, we know, in particular, that every polynomial $f(x)\in F[x]\subseteq A[x]$ has a root in $A$, and considering our previous discussion we can finally conclude that:

Theorem: Let $F$ be a field and $S\subseteq F[x]$. Then, $S$ has a splitting field.

We call a splitting field for $F[x]$ (all the polynomials) an algebraic closure of $F$ and denote it $\overline{F}$. Note that while this notation may seem ambiguous, it really isn’t, since (as we have proven) all algebraic closures of $F$ must necessarily be isomorphic.
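As a concrete, finite illustration of splitting (our own addition, not from the original post): $x^3-2$ is irreducible over $\mathbb{Q}$, but splits into linear factors over $\mathbb{Q}(\sqrt[3]{2},\sqrt{-3})$, the field generated by its roots. With sympy:

```python
# Factor x**3 - 2 over Q and over the splitting field Q(2**(1/3), sqrt(-3)).
import sympy as sp

x = sp.symbols('x')
f = x**3 - 2
print(sp.factor(f))                                       # irreducible over Q
print(sp.factor(f, extension=[sp.cbrt(2), sp.sqrt(-3)]))  # three linear factors
```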
2017-08-21 04:41:14
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 151, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9647653102874756, "perplexity": 99.67149155078603}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886107490.42/warc/CC-MAIN-20170821041654-20170821061654-00215.warc.gz"}
https://www.nature.com/articles/s41598-021-02353-5
Drug repurposing for COVID-19 using graph neural network and harmonizing multiple evidence

Abstract

Since the 2019 novel coronavirus disease (COVID-19) outbreak in 2019, and as the pandemic has continued for more than one year, a vast amount of drug research has been conducted and few drugs have received FDA approval. Our objective is to prioritize repurposable drugs using a pipeline that systematically integrates the interactions between COVID-19 and drugs, deep graph neural networks, and in vitro/population-based validations. We first collected all available drugs (n = 3635) related to COVID-19 patient treatment through CTDbase. We built a COVID-19 knowledge graph based on the interactions among virus baits, host genes, pathways, drugs, and phenotypes. A deep graph neural network approach was used to derive the candidate drugs' representations based on the biological interactions. We prioritized the candidate drugs using clinical trial history, and then validated them with their genetic profiles, in vitro experimental efficacy, and population-based treatment effects. We highlight the top 22 drugs including Azithromycin, Atorvastatin, Aspirin, Acetaminophen, and Albuterol. We further pinpointed drug combinations that may synergistically target COVID-19. In summary, we demonstrated that the integration of extensive interactions, deep neural networks, and multiple sources of evidence can facilitate the rapid identification of candidate drugs for COVID-19 treatment.

Introduction

The emergence of SARS-CoV-2 (the 2019 novel coronavirus) has created the COVID-19 global pandemic. As of today (September 3, 2021), there have been over 219 million COVID-19 cases worldwide1. To prevent COVID-19, several vaccines have received emergency approval2, and 3 billion doses have been administered. To treat COVID-19, many research efforts are ongoing, and the FDA has approved Remdesivir3 and Molnupiravir4 as COVID-19 treatments. However, neither has proved highly effective against COVID-195,6. To address the continuing need for COVID-19 drug development, many researchers have screened thousands of candidate therapeutic agents7,8. These agents can be divided into two broad categories: those that directly target the virus replication cycle, and those based on immunotherapy approaches aimed either to boost innate antiviral immune responses (e.g., targeting the host angiotensin-converting enzyme 2 (ACE2) that SARS-CoV-2 directly binds)9 or to alleviate damage induced by dysregulated inflammatory responses10.

Research on COVID-19 therapeutic agents has created valuable knowledge and data. For example, curated lists of potential COVID-19 therapeutics are available in resources such as the Comparative Toxicogenomics Database (CTDbase), which offer a valuable basis for systematic integration of accumulated COVID-19 knowledge. Drug discovery, however, is an expensive and time-consuming process. It typically takes many years and costs billions of dollars to develop a drug and obtain its approval. Drug repurposing identifies existing drugs or compounds that can be efficacious for other conditions of interest.
Drug repurposing via systematic integration of pharmacodynamics, in vitro drug screening, and population-scale clinical data analysis carries high potential as a novel approach: it can identify highly promising drugs and their combinations, saving cost and accelerating discovery11. Based on accumulated genomic and pharmacological knowledge, several computational approaches have explored and identified potentially effective drug and/or vaccine candidates12. Examples include a network pharmacology study in protein–protein interaction (PPI) networks13, in silico protein docking14, and sequencing analysis15. Another family of studies has utilized retrospective analysis of clinical data, such as electronic health records (EHRs). These studies have assessed the potential efficacy of drugs including angiotensin receptor blockers, estradiol, and antiandrogens16. Although network pharmacology and retrospective clinical data analysis provide complementary insight into potential drugs, few studies have integrated these complementary perspectives, particularly for COVID-19. This work attempts to identify repurposable drugs from SARS-CoV-2-drug interactions and to validate the drugs with retrospective in vitro efficacy and large-scale clinical data in order to prioritize repurposable drugs.

In this work, we extended traditional network analysis with deep graph neural representations to broaden the scope from local proximity to global topology. In traditional network analysis, network proximity is defined by explicit and direct interactions17; thus a node's local role (e.g., neighbors, edge directions) and global position (e.g., overall topology or structure) receive less consideration. With recent advances in machine learning and representation learning, the graph neural network (GNN) approach has matured enough to bring state-of-the-art technology to network pharmacology. GNNs are a family of deep neural networks that derive vectorized representations of nodes, edges, or whole graphs. The graph node embedding can preserve the node's local role and global position in the graph via iterative and nonlinear message passing and aggregation. It learns the structural properties of the neighborhood and the graph's overall topological structure18. Adopting GNNs in biomedical networks facilitates the integration of multimodal and complex relationships. Recently GNNs have shown great promise in predicting interactions (e.g., PPIs, drug-drug adverse interactions, and drug-target interactions) and in the discovery of new molecules19. GNNs can also benefit drug repurposing by representing the complex interactions between drugs and diseases. A recent attempt has been made to use GNNs for drug repurposing, which builds a general biomedical knowledge graph, called the Drug Repurposing Knowledge Graph (DRKG), from seven biomedical databases and utilizes the embedding to discover therapeutic associations between drugs and diseases13. The knowledge graph includes 15 million edges across 39 different types connecting drugs, diseases, genes, and pathways from seven databases including DrugBank, Hetionet, STRING, and a text-mining-driven database. This biomedical network representation offers a general and universal understanding of the interactions between drugs, genes, and diseases.
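For readers who want the shape of this embedding step in code, here is a hedged sketch built on PyTorch Geometric. The study below specifies a variational graph autoencoder with multi-relational edges; the layer sizes, names, and the learnable-embedding input in this sketch are our assumptions, not the authors' configuration.

```python
# Sketch: relational VGAE for knowledge-graph node embeddings (PyTorch Geometric).
import torch
from torch_geometric.nn import RGCNConv, VGAE

class RelEncoder(torch.nn.Module):
    def __init__(self, num_nodes, num_relations, hidden=64, out=32):
        super().__init__()
        self.emb = torch.nn.Embedding(num_nodes, hidden)   # learnable node features
        self.conv1 = RGCNConv(hidden, hidden, num_relations)
        self.conv_mu = RGCNConv(hidden, out, num_relations)
        self.conv_logstd = RGCNConv(hidden, out, num_relations)

    def forward(self, node_ids, edge_index, edge_type):
        h = self.conv1(self.emb(node_ids), edge_index, edge_type).relu()
        return (self.conv_mu(h, edge_index, edge_type),
                self.conv_logstd(h, edge_index, edge_type))

model = VGAE(RelEncoder(num_nodes=1000, num_relations=5))
# One training step would look like (pos_edge_index: observed edges):
#   z = model.encode(node_ids, edge_index, edge_type)
#   loss = model.recon_loss(z, pos_edge_index) + model.kl_loss() / 1000
#   loss.backward()
```

The reconstruction term scores candidate links from embedding dot products, which is the same mechanism used for link-prediction validation below.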
In this study, we built the COVID-19 knowledge graph from curated COVID-19 literature, transferred the universal representation from DRKG, and then utilized a deep GNN to derive the repurposable drugs' representations, which were rigorously validated with retrospective in vitro efficacy, reversed gene expression patterns, and large-scale EHRs (Fig. 1). Compared to the existing studies13,17, our work's novelty can be summarized as: (i) deriving the COVID-19 knowledge representation on top of a comprehensive biomedical knowledge graph, (ii) prioritizing the drug candidates based on multiple criteria including in vitro efficacy, population-based treatment effect, and reversed gene expression patterns, and (iii) identifying synergistic drug combinations using complementary patterns.

Results

COVID-19 knowledge graph representation

We first built a comprehensive COVID-19 knowledge graph that represents interactions between SARS-CoV-2 baits, host genes, pathways, targets, drugs (including experimental compounds), and phenotypes (Fig. 1b, Methods 1). We then derived embeddings for each drug, gene, phenotype, and SARS-CoV-2 bait using a GNN. The GNN embedding method was a variational graph autoencoder with multi-relational edges (Methods 2)21. We internally validated the confidence of our knowledge graph embedding via link prediction (Methods 3). We compared the link prediction accuracy of our model with and without transfer learning using DRKG. Our node embedding showed high accuracy in predicting the relations in the COVID-19 knowledge graph. The initial DRKG universal embedding (without fine-tuning) achieved 0.5695 AUROC and 0.6431 AUPRC. After fine-tuning the DRKG embedding on the COVID-19 knowledge graph, we achieved AUROC 0.8121 and AUPRC 0.8524 (Table S1), implying that the node embedding captures the local interactions (i.e., edges). We also visualized the node embedding using t-Distributed Stochastic Neighbor Embedding (t-SNE) (Methods 3). We found that the node embeddings of SARS-CoV-2 baits, host genes, drugs, and phenotypes were distributed separately (Fig. 2a, Fig. S2). We found that a group of antiviral and anti-inflammatory drugs (including Tenofovir and Ritonavir) was closely located to SARS-CoV-2 baits. Another group of anti-inflammatory and immunosuppressive drugs was highlighted, including Cyclosporine and Dexamethasone, which were surrounded by genes related to inflammation and infection such as CD68 and PRDM1. We also found a group of anticoagulant (e.g., Heparin), antihypertensive (e.g., Amlodipine), antiplatelet (e.g., Dipyridamole), and anti-inflammatory (e.g., Indomethacin) drugs. This t-SNE plot showed us that our node embedding captures global topology consistent with common biological knowledge.

Initial drug ranking

Using the rich representation of the candidate drugs, we built an initial ranking model that predicts a drug's antiviral effectiveness (Methods 4). The ranking model accuracy was AUROC between 0.77 and 0.90 and AUPRC between 0.17 and 0.25 (Table 1). The COVID-19 knowledge graph embedding that was boosted by the general embedding from DRKG showed the highest accuracy, thanks to the rich representation in DRKG. The higher AUROC/AUPRC indicated that the graph representation can encapsulate the underlying mechanisms of drugs and that the ranking model can pick out the drugs with potential efficacy.

Validation with multiple sources

From the initial drug ranking, we selected the top 300 highly-ranked drugs as potential repurposable candidates.
We validated the highly-ranked drugs using a wide spectrum of validation sources reflecting complementary aspects of drug effectiveness: genetic (Methods 5), retrospective in vitro (Methods 6), and epidemiological evidence (Methods 7). Note that we did not exclude the clinical trial drugs that were used in training the ranking model.

Genetic validation using gene set enrichment analysis

For the genetic validation, we compared the gene expression signature profiles (Fig. 2b) of candidate drugs with that of SARS-CoV-2-infected host cells. We used gene set enrichment analysis (GSEA) to identify significant associations between SARS-CoV-2 and candidate drugs (Methods 5). As a result, we identified 183 statistically significant drugs including Gefitinib (enrichment score or ES = −0.70), Chlorpromazine (ES = −0.70), Dexamethasone (ES = −0.67), Rimexolone (ES = −0.67), and Naltrexone (ES = −0.64) (Fig. 2d). A lower ES for a drug means a stronger signal in reversing the SARS-CoV-2-infected cell's genetic profile. The recall and precision were both 0.3, which means our prediction has moderate accuracy when compared to genetic patterns.

Retrospective in vitro drug screening validation

We validated the candidates by comparing them retrospectively with in vitro drug screening results. We collected four different drug screening studies that target viral entry and viral replication/infection (Methods 6)7,8. As a result, the recall was between 0.21 and 0.44 and the precision was between 0.04 and 0.18 (Table 2), implying moderate accuracy in predicting efficacy for those selected drugs. Caution is needed in interpreting the accuracy here, because the number of overlapping drugs is limited in some studies and, thus, the statistical power is limited.

Population-based validation

We examined drugs administered to COVID-19 patients and estimated the treatment effects of the drugs in reducing the risk of mortality among hospitalized COVID-19 patients using the Optum de-identified EHR database (Table S2, Fig. S3b, Methods 7). The EHRs contained a total of 391 drugs used for hospitalized COVID-19 patients; 138 drugs were common to the EHRs and our initial 3635 drugs. Ten (out of 138) drugs were effective (average treatment effect on the treated, or ATT, > 0 and p-value < 0.05) in the EHRs (Fig. 2d). Among the ten positive drugs, our method identified six (Table 2): Acetaminophen (ATT = 0.25), Azithromycin (ATT = 0.18), Atorvastatin (ATT = 0.17), Albuterol (ATT = 0.14), Aspirin (ATT = 0.14), and Hydroxychloroquine (ATT = 0.08) (Fig. 2e) (Table S3).

Validated high-ranked drugs

Based on the extensive validation, we present the top repurposable drugs after filtering out and re-ordering the drug candidates according to the existence of validating evidence. We used a data programming technique to combine the multiple pieces of evidence (Note S3)23. We highlight the most promising drugs as follows (Fig. 3a). Due to limited space, we present the top 21 drugs in Table 3; the remaining drugs are available in Table S4. The top 21 drugs include anti-infection, immunosuppressive or immunomodulatory, antiviral, anti-fever, antihypertensive, anti-cancer, and anticoagulant drugs, which all have different possible functions in inhibiting SARS-CoV-2 proliferation or reducing symptoms. We highlight them in the Discussion.
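Of the validation steps above, the GSEA statistic is the easiest to sketch in code. Below is a simplified enrichment-score computation (our own illustration; standard GSEA implementations weight hits by correlation strength and assess significance by permutation, which we omit): walk down the ranked gene list, step up at signature genes and down otherwise, and report the extreme of the running sum.

```python
# Simplified running-sum enrichment score: a strongly negative ES means the
# signature genes concentrate at the bottom of the ranked list (reversal).
def enrichment_score(ranked_genes, signature):
    sig = set(signature)
    n, n_hit = len(ranked_genes), len(sig)
    step_hit, step_miss = 1.0 / n_hit, 1.0 / (n - n_hit)
    running, es = 0.0, 0.0
    for gene in ranked_genes:
        running += step_hit if gene in sig else -step_miss
        if abs(running) > abs(es):
            es = running
    return es

ranked = [f"g{i}" for i in range(100)]                  # toy ranked gene list
print(enrichment_score(ranked, {"g97", "g98", "g99"}))  # close to -1.0
```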
Drug combination search

As indicated by the complexity of the COVID-19 interaction network, using single drugs to treat the viral infection might result only in short-term effects. To improve treatment efficacy, we further identified potential drug combinations from the top-ranking drugs with synergistic interactions and without degradation in safety (Methods 8)42. We highlight the identified drug combinations below (Table 4, Fig. 3b) and discuss potential mechanisms in the Discussion.

Discussion

The objective of this study is to prioritize repurposable drugs to treat COVID-19 with a novel pipeline that harmonizes several partial pieces of evidence. In this pipeline, we applied graph neural networks to transfer a general knowledge representation from a larger knowledge network and to optimize the general knowledge representation with a human-curated COVID-19 knowledge network. The optimized COVID-19-specific knowledge representation was applied to search for and prioritize drugs similar to the drugs under COVID-19 trials. After obtaining those high-ranking drugs, we harmonized and validated their efficacy with GSEA scores, in vitro drug screening results, and population-based treatment effects. As a result, our proposed pipeline prioritized Azithromycin, Atorvastatin, Acetaminophen, and Aspirin. Also, we identified drug combinations with complementary exposure patterns: Etoposide + Sirolimus, Mefloquine + Sirolimus, Losartan + Ribavirin, and Hydroxychloroquine + Melatonin, by complementary drug combination search. We highlight the identified drugs as follows:

Antimicrobial agents

Azithromycin and Teicoplanin can inhibit 23S ribosomes or RNA polymerase to stop the progress of infection. Some evidence supports Azithromycin regulating and/or decreasing the production of inflammatory mediators (IL-1β, IL-6, IL-8, IL-10, IL-12, and IFN-α), which might be effective in suppressing viral entry24. Azithromycin targets ABCC1 (an inflammatory modulator), which has a direct PPI with SARS-CoV-2 bait orf9c (Fig. 3a). The data imply that Azithromycin can be related to viral gene replication. In the population-based EHR validation, Azithromycin had the highest treatment effect, and it is currently under testing in a clinical trial (NCT04332107) to treat mild to moderate COVID-19 patients. Itraconazole can promote the production of IFN-1, which enhances virus-induced host responses39.

Immunosuppressive drugs

We identified immunosuppressive drugs such as Hydroxychloroquine, Chloroquine, and Sirolimus. Hydroxychloroquine and Chloroquine are antiparasitic drugs but also have effects on toll-like receptors and ACE231, where toll-like receptors are associated with the production of inflammatory mediators (IL-1, IL-6, TNF-α, IFN-α, and IFN-β)45, and ACE2 is the entry receptor of SARS-CoV-225. Hydroxychloroquine and Chloroquine are rather controversial in terms of effectiveness46. Hydroxychloroquine directly targets PPT1, SIGMAR1, TRAF6, and SDC1, and it indirectly targets ECSIT and COL6A1, which have PPIs with SARS-CoV-2 baits orf8, orf9c, orf10, and nsp6 (Fig. 3a). Thus, Hydroxychloroquine might interfere with SARS-CoV-2 replication. Sirolimus also works on toll-like receptors to treat COVID-1928.

Anti-inflammatory drugs

Acetaminophen directly targets ACADM and CPT2, and indirectly targets ACSL3 and MARK2, which in turn have PPIs with SARS-CoV-2 orf9b, M, and nsp7 (Fig. 3a). This means Acetaminophen may hinder SARS-CoV-2 assembly and replication47. Aspirin deactivates platelet function48. A recent study reports that SARS-CoV-2 may over-activate platelets and thus reduce platelet production49.
Considering this evidence, Aspirin might be effective in COVID-19 patients by suppressing platelet function and inflammatory processes. Celecoxib is a COX2-selective inhibitor. According to a consensus docking result, Celecoxib inhibits the SARS-CoV-2 main protease by up to 37% [36].

Antiviral drugs

We identified various antiviral drugs such as Remdesivir, Lopinavir, and Tenofovir. Remdesivir has been shown to inhibit SARS-CoV-2 replication [34]. In terms of PPIs between virus baits and host preys, Lopinavir targets HMOX1, a host prey that binds the SARS-CoV-2 bait orf3a (Fig. 3a). A recent study reports that Tenofovir may prevent SARS-CoV-2 replication [41].

Antihypertensive and lipid-lowering drugs

We identified Atorvastatin, Amlodipine, and Nifedipine. In addition to its original function of lowering cholesterol and triglyceride levels as an HMG-CoA reductase inhibitor, Atorvastatin can treat inflammation by lowering C-reactive protein (CRP) [26]. Elevated CRP is highly associated with the aggravation of non-severe adult COVID-19 patients [50]. Atorvastatin also directly targets PLAT and indirectly targets HDAC2, a host prey of the SARS-CoV-2 bait nsp5. Nsp5 can assist in releasing nsp4 and nsp16, which are involved in viral replication [51]. Both Nifedipine and Amlodipine are calcium channel blockers. Nifedipine reduces ACE2 expression [29]. In a retrospective study, Amlodipine prevented virus replication in COVID-19 [52].

Anti-cancer, antipsychotic and hormone replacement drugs

Chlorpromazine, an antipsychotic drug, shows in vitro efficacy in inhibiting viral entry of SARS-CoV-2 [38]. Progesterone decreases the severity of cytokine storms in COVID-19 patients [40]. In addition to the proposed repurposing drugs above, some other highly potential drugs are also worth considering, such as Bilirubin [53] and Decorin [54]. We also propose potential drug combinations as follows, together with their possible mechanisms.

Etoposide and Sirolimus

Etoposide is an anti-cancer drug that targets DNA topoisomerase 2. A recent report proposes that Etoposide can also suppress the inflammatory cytokines in COVID-19 by reducing activated cytotoxic T cells, which in turn eliminates activated macrophages [55]. There are clinical trials testing the effectiveness of Sirolimus in COVID-19 patients (NCT04341675), and one clinical trial testing the effectiveness of combining Sirolimus, Celecoxib, and Etoposide in cancer (NCT02574728). Based on the virus bait-host prey interactome, this combination's targets interact with ten virus baits (including orf9c, orf8, orf3a, nsp1, nsp2, and nsp5) without overlapping targets. We can infer that this combination may be related to virus assembly in mitochondria due to an association with nsp2 [51].

Mefloquine and Sirolimus

Mefloquine not only treats malaria but also has some effects on the immune system [56]. The drug targets of Mefloquine and Sirolimus have a bait-host prey interactome similar to that of Etoposide and Sirolimus.

Losartan and Ribavirin

Losartan inhibits T-cell activation and also binds to ACE2 [57]. Ribavirin has an anti-SARS-CoV-2 function [30]. From the bait-host gene PPIs, this combination's complementary drug targets have PPIs with nine virus baits, including N, M, orf3a, orf8, nsp7, nsp1, nsp2, nsp13, and nsp14, which might affect virus replication, assembly, and release [51].

Hydroxychloroquine and Melatonin

Melatonin has been proposed as an adjuvant for COVID-19 treatment [58] because it can limit virus-related diseases with a high safety profile.
This might imply that we can reduce the dosage of Hydroxychloroquine, decreasing the risk of a long Q-T interval [31]. This speculation needs further verification.

We also observed conflicts across different validation sources. For example, Aspirin and Albuterol had positive treatment effects in the EHR validation but showed no positive efficacy in any of the four in vitro experiments. Losartan was effective in GSEA but presented negative treatment effects in the EHR validation. The reason for this discrepancy might be that each validation source captures different aspects of a drug's function. The GSEA validation focuses on inhibiting or activating the virus-associated host genes. The in vitro efficacy focuses on viral entry, replication, or cytopathic effect. The population-based EHR validation focuses on the drugs' antiviral effect and also on clinical symptom relief. For example, Acetaminophen, Azithromycin, and Albuterol are frequently given to hospitalized patients for fever, pneumonia, and shortness of breath, respectively; these drugs might not have direct effects on the virus itself. Concordance across multiple validation sources may strengthen confidence in a drug's effectiveness, while drugs with conflicting validation results are still worth investigating.

There are several limitations to this study. Our pipeline might have filtered out some potential drugs prematurely during the initial drug ranking step using the clinical trial drugs. The initial pool of 3635 drug candidates might miss an important set of drugs, considering the fast-evolving knowledge of COVID-19 therapeutic agents. The population-based validation was based on retrospective analyses of EHRs, which are inherently incomplete and error-prone compared to randomized experimental data. Our propensity score matching and weighting approach was designed to reduce bias and confounding effects, but unmeasured or hidden confounders may exist in the EHR data. Important laboratory values measuring the severity of COVID-19, such as white blood cell count, D-dimer, and C-reactive protein, were not well documented in the EHRs during the early stage of the COVID-19 pandemic. Another limitation is a discrepancy between the gene sets from drug-induced gene expression and from the SARS-CoV-2-infected cell's gene expression: cMAP provides expression values for only 12,328 genes, while the SARS-CoV-2-infected cell line (GSE153970) contains expression values for 17,899 genes. Consequently, the expression values for some genes in the SARS-CoV-2 signature are missing, such as SARS-CoV2-gp10 and SARS-CoV2-gp01, which might cause bias. In spite of differences in cell lines as well as missing expression values for some genes, the results still have value as a reference for further investigation.

In conclusion, this study proposes an integrative drug repurposing pipeline for the rapid identification of drugs and drug combinations to treat COVID-19. Our pipeline was built from extensive SARS-CoV-2 and drug interactions, deep graph neural representations, and a ranking model, and was validated using genetic profiles, in vitro efficacy, and population-based treatment effects. From a translational perspective, this pipeline can serve as a general network pharmacology pipeline for various diseases, contributing to fast repurposing of drugs and drug combinations.

Materials and methods

Building the COVID-19 knowledge graph

To build the COVID-19 knowledge graph, we identified drug-target interactions, pathways, and gene/drug-phenotype interactions from CTDbase.
We collected the SARS-CoV-2 and host PPIs from a recent systematic PPI experimental study for SARS-CoV-2 [51]. The graph had four types of nodes and five types of edges based on the interactions. The four types of nodes comprise 27 virus baits, 5677 unique host genes (from 322 host preys, 1783 genes on pathways, and 4427 drug targets; Fig. S1), 3635 drugs, and 1285 phenotypes. The five types of edges comprise 330 virus-host PPIs, 13,423 pairwise genes on the same pathway, 16,972 drug-target pairs, 1401 gene-phenotype pairs, and 935 drug-phenotype pairs. Details on each interaction type are as follows.

SARS-CoV-2 and human protein interactions

We collected the SARS-CoV-2 and host interaction data from a recent work that identified 322 high-confidence PPIs between SARS-CoV-2 and human proteins [51]. That study cloned 26 SARS-CoV-2 proteins in human cells and identified the human proteins that were physically associated with the SARS-CoV-2 proteins. We used the SARS-CoV-2 and human protein interactions with MiST > 0.8. In total, the virus-host interaction network consisted of 27 virus baits and 332 SARS-CoV-2-associated prey proteins.

Drug-target interactions

We collected drugs and targets from CTDbase's COVID-19 curated list, which contains 5065 potentially targetable genes for COVID-19 with supporting biological-mechanism or therapeutic evidence. Potential compounds for SARS-CoV-2 were identified if the compounds target the SARS-CoV-2-associated genes. There were 3635 compounds targeting 4427 genes. The intersection between host genes interacting with baits and drug targets contains 94 genes.

Biological pathways

We incorporated functional pathways related to SARS-CoV-2 infection and the drugs of interest. We used the Kyoto Encyclopedia of Genes and Genomes (KEGG [59]), Reactome (as curated in CTDbase), and PharmGKB. There were 1763 unique genes and 13,423 pairs of genes associated with the pathways.

Gene/drug-phenotype interactions

We used a curated set of phenotypes from CTDbase, which inferred the phenotypes via drug interactions and/or gene-to-gene-ontology annotations. There were 1285 phenotypes (i.e., biological process gene ontology terms) associated with 31 potential drugs and/or 18 SARS-CoV-2-associated genes.

Embedding using graph neural network

To derive embeddings from the COVID-19 knowledge graph, we utilized deep graph neural embedding with multi-relational data. We used variational graph autoencoders with GraphSAGE message passing [18,20]. Due to uncertainty and incompleteness in our knowledge graph (COVID-19 is an emerging infectious disease and our knowledge of it is still developing), we chose variational autoencoders to account for the uncertainty. The graph autoencoder method is an unsupervised learning framework that encodes the nodes into latent vectors (embeddings) and reconstructs the given graph structure (i.e., the graph adjacency matrix) from the encoded latent vectors. The variational version of graph autoencoders learns the distribution of the graph to avoid overfitting while reconstructing the graph adjacency matrix. In the message-passing step, each node's (entity's) embedding is iteratively updated by aggregating the neighbors' embeddings; the aggregation function is a mean of the neighbors' features, concatenation with the current embedding, and a single neural network layer applied to the concatenated vector. We set different weight matrices for each of the five edge types.
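The node and edge inventory just described maps naturally onto a heterogeneous graph object. A minimal sketch of how such a graph could be assembled in PyTorch Geometric (not the authors' released code; random tensors stand in for the real features and index lists, and the 400-dim features mirror the DRKG-initialized embeddings mentioned below):

```python
import torch
from torch_geometric.data import HeteroData

data = HeteroData()
# Four node types, sized as in the paper; features are placeholders.
data['bait'].x = torch.randn(27, 400)         # SARS-CoV-2 virus baits
data['gene'].x = torch.randn(5677, 400)       # unique host genes
data['drug'].x = torch.randn(3635, 400)       # drug candidates
data['phenotype'].x = torch.randn(1285, 400)  # curated phenotypes

def random_edges(n_src, n_dst, n_edges):
    # Placeholder edge indices; the real ones come from CTDbase / the PPI study.
    return torch.stack([torch.randint(0, n_src, (n_edges,)),
                        torch.randint(0, n_dst, (n_edges,))])

# Five edge types, with the edge counts reported above.
data['bait', 'binds', 'gene'].edge_index = random_edges(27, 5677, 330)
data['gene', 'same_pathway', 'gene'].edge_index = random_edges(5677, 5677, 13423)
data['drug', 'targets', 'gene'].edge_index = random_edges(3635, 5677, 16972)
data['gene', 'causes', 'phenotype'].edge_index = random_edges(5677, 1285, 1401)
data['drug', 'induces', 'phenotype'].edge_index = random_edges(3635, 1285, 935)
```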
Since our objective is to use the drug embeddings to discover drugs that can functionally target SARS-CoV-2-associated host genes, the model was trained to reconstruct missing interactions from the node embeddings in an unsupervised manner. We set the embedding size to 128 after several trials. We used PyTorch Geometric for the implementation. The model structure was (1 × 400) → graph convolution to (1 × 256) → ReLU → dropout → concatenation of multiple edge types → batch norm → graph convolution to 1 × 128 (mean) and 1 × 128 (variance). We further boosted the representativeness of the embedding by transferring the DRKG universal embedding to ours. The DRKG embedding contains general biological knowledge (e.g., its drug embedding was derived from molecular structures, targets, anatomical therapeutic chemical classifications, side effects, pharmacologic classes, and treated diseases) [13]. By transferring the rich representation of DRKG to the COVID-19 knowledge graph, we can derive embeddings that are more faithful to the underlying pharmacokinetics and pharmacodynamics. To this end, we initialized the COVID-19 knowledge graph node embeddings with the DRKG embeddings and fine-tuned them via the GNN's message passing and aggregation (Note S1).

Evaluating the knowledge graph embedding

We internally validated the confidence of our knowledge graph embedding via link prediction, to confirm that the node embeddings capture the network topology centered on SARS-CoV-2. We measured the accuracy of predicting interactions between the nodes (SARS-CoV-2 baits, genes, drugs, and phenotypes), randomly selecting 10% of the edges for validation. We also visualized the node embeddings using a lower-dimensional projection to observe their distribution. The t-SNE plot projects a high-dimensional vector into a low-dimensional one while preserving the pairwise similarity between nodes, allowing us to examine the high-dimensional node embeddings in a low-dimensional (e.g., 2-dimensional) visualization.

Initial drug ranking

After deriving the drug embeddings, we built a ranking model to select the most potent drugs. We hypothesized that, because drugs being tested in clinical trials are potentially efficacious in treating COVID-19, a drug that is similar to these trial drugs can have potential efficacy too. This drug ranking was an initial filtering step to select possibly potent drugs out of the 3635 candidates. The drugs under clinical trials were extracted from NIH ClinicalTrials.gov's interventional trials; 99 trial drugs were matched to CTDbase's 3635 drugs. The remaining drugs without matched clinical trials were regarded as having negative efficacy. We designed a customized neural network ranking model based on the Bayesian pairwise ranking loss [60]. The architecture was two fully connected layers (of size 128 → 128 → 1) with a residual connection, nonlinear activation (ReLU), dropout, and batch norm in the middle, optimized with the Bayesian pairwise ranking loss. The baseline ranking models for comparison were logistic regression, support vector machine, XGBoost, and random forest. We measured the accuracy of the drug ranking model using the area under the receiver operating characteristic curve (AUROC) and the area under the precision-recall curve (AUPRC) with 50% training and 50% test cross-validation. We purposely set the portion of the training set low because the clinical trial drugs are not our sole "gold standard" for prioritizing drugs.
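A minimal sketch of the ranking head and Bayesian pairwise ranking loss described above (the exact ordering of the residual, batch norm, and dropout is an assumption for illustration, not the authors' code):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DrugRanker(nn.Module):
    """128 -> 128 -> 1 scoring head with a residual connection."""
    def __init__(self, dim=128, p_drop=0.5):
        super().__init__()
        self.fc1 = nn.Linear(dim, dim)
        self.bn = nn.BatchNorm1d(dim)
        self.drop = nn.Dropout(p_drop)
        self.fc2 = nn.Linear(dim, 1)

    def forward(self, emb):
        h = self.drop(F.relu(self.bn(self.fc1(emb))))
        return self.fc2(h + emb).squeeze(-1)  # residual connection

def bpr_loss(score_pos, score_neg):
    # Bayesian pairwise ranking: push trial drugs above non-trial drugs.
    return -F.logsigmoid(score_pos - score_neg).mean()

ranker = DrugRanker()
pos = torch.randn(32, 128)  # embeddings of trial (positive) drugs
neg = torch.randn(32, 128)  # embeddings of non-trial (negative) drugs
loss = bpr_loss(ranker(pos), ranker(neg))
```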
Note that the unsupervised knowledge graph embedding and the supervised drug ranking were independent. We avoided using the supervised label (clinical trial drugs) in the knowledge graph embedding because a drug being considered in clinical trials does not guarantee its efficacy.

Genetic validation

We obtained the gene expression signature of SARS-CoV-2 from SARS-CoV-2-infected human lung cells [61], and the drugs' gene expression signature profiles from the Connectivity Map (cMAP) database (GSE92742 and GSE70138) [62]. We determined whether a drug's gene expression signature is negatively correlated with that of SARS-CoV-2 based on the enrichment score (ES) [63]. The combination of ES < 0 and p-value < 0.05 was used as the threshold for determining that a drug has a complementary expression pattern to COVID-19 infection (Note S3).

Retrospective in vitro drug screening validation

We validated the highly ranked candidate drugs by retrospectively comparing them with efficacious drugs in multiple in vitro drug screening studies. We utilized four drug screening studies targeting viral entry (ACE2 enzymatic activity, Spike-ACE2 protein-protein interaction) and viral replication/infection (cytopathic effect), obtained from the NCATS OpenData COVID-19 Portal and the Riva et al. study [7,8]. The two viral entry assay studies screened 2678 compounds in the NCATS Pharmaceutical Collection and 739 compounds in the NCATS Anti-infectives Collection [64]. In the viral entry assays, a drug was regarded as efficacious if its efficacy value was larger than 10 for ACE2 enzymatic activity and larger than 0 for the Spike-ACE2 interaction (the efficacy value was defined as the % inhibition at infinite concentration minus the % inhibition at zero concentration, obtained by curve fitting). The two cytopathic effect studies used either the NCATS collections or the ReFRAME drug library on the same Vero E6 cell line [65]. In the NCATS cytopathic effect study, a drug was regarded as efficacious if the efficacy value was larger than 10; in the ReFRAME study, if the drug inhibited infection by 40% or more [7]. We calculated precision and recall between the predicted (top 300 highly ranked) drugs and the efficacious drugs in each screening result (Fig. 2c), focusing only on drug candidates included in the compound library of the respective screening study.

Population-based validation

We investigated drugs administered to COVID-19 patients and estimated treatment effects using counterfactual analysis. We used the Optum de-identified EHR database (2007-2020), which is non-experimental data (as opposed to randomized clinical trials). Among 140,016 COVID-19-positive patients, there were a total of 34,043 hospitalized COVID-19 patients; we selected 3200 patients who died during hospitalization and 15,078 recovered patients with a medication history and a length of stay > 2 days. The key to estimating treatment effects is to reduce bias or confounding in the EHRs so as to control for differences in confounding variables between those who did and did not receive a treatment. We calculated the average treatment effect on the treated (ATT) by using propensity score matching (PSM) and weighting to build the cohort (Note S2). From the selected hospitalized patients, we built a cohort with 2827 cases (deceased) and 2774 controls (recovered) that follow similar distributions in terms of demographics (race, ethnicity, sex, age) and admission severity (body temperature and SpO2) using PSM.
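A minimal sketch of the matching-and-ATT computation for a single drug (not the authors' code; synthetic data, nearest-neighbor matching with replacement, and a placeholder set of confounders):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5601                                   # cohort size as above
X = rng.normal(size=(n, 8))                # placeholder confounders
treated = rng.integers(0, 2, size=n).astype(bool)  # received the drug?
outcome = rng.integers(0, 2, size=n)       # 1 = recovered, 0 = deceased

# Propensity score: probability of receiving the drug given confounders.
ps = LogisticRegression(max_iter=1000).fit(X, treated).predict_proba(X)[:, 1]

# Match each treated patient to the untreated patient with the closest score.
t_idx = np.where(treated)[0]
c_idx = np.where(~treated)[0]
matches = c_idx[np.abs(ps[c_idx][None, :] - ps[t_idx][:, None]).argmin(axis=1)]

# ATT: mean outcome of the treated minus that of their matched controls.
att = outcome[t_idx].mean() - outcome[matches].mean()
print(f"ATT = {att:.3f}")  # ATT > 0 (with p < 0.05) would flag the drug as effective
```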
The time window for the severity risk factors ran from 2 h before admission to 6 h after admission. After we derived the matched cohort, there were a total of 391 medications administered to at least 35 patients each. We calculated the treatment effect of the 391 medications using the ATT. For the inverse propensity score weighting, we considered demographics (age, gender, race), admission conditions (body temperature, SpO2), comorbidities (cancer, chronic kidney disease, obesity, a serious heart condition, solid organ transplant, COPD, type II diabetes, and sickle cell disease), and drug history before the treatment of interest. We considered a drug effective if ATT > 0 and the p-value was < 0.05. A full list of the drugs' ATT coefficients is in Table S3. The use of the Optum de-identified EHR database within this study was approved by the Committee for the Protection of Human Subjects (UTHSC-H IRB) under protocol HSC-SBMI-13-0549.

Drug combination search

We identified efficacious drug combinations from the top-ranked drugs. Our approach leverages drug targets and COVID-19-associated host genes. Our working hypothesis was based on the Complementary Exposure pattern: "a drug combination is therapeutically synergistic if the targets of the individual drugs hit the disease module, but target a separate neighborhood" [43]. We searched for drug combinations within the top 30 drugs. We identified the COVID-19 modules from human protein interactomes that are physically associated with SARS-CoV-2 baits [51]. The drugs' targets were identified from CTDbase's COVID-19 curated list. We counted the number of genes in the COVID-19 module that a drug combination hits, requiring the combination's targets to be disjoint.

Data availability

Code is available at https://github.com/yejinjkim/drug-repurposing-graph. Data is available in the Supplementary tables. The raw COVID-19 knowledge graph data were derived from CTDbase (http://ctdbase.org/).

References

1. Lu, Q.-B. Reaction cycles of halogen species in the immune defense: Implications for human health and diseases and the pathology and treatment of COVID-19. Cells 9, 1461 (2020).
2. Office of the Commissioner. FDA Approves First COVID-19 Vaccine. https://www.fda.gov/news-events/press-announcements/fda-approves-first-covid-19-vaccine (2021).
3. Beigel, J. H. et al. Remdesivir for the treatment of covid-19: Final report. N. Engl. J. Med. 383, 1813–1826 (2020).
4. Fischer, W. et al. Molnupiravir, an oral antiviral treatment for COVID-19. medRxiv.
5. Singh, V. K. et al. Emerging prevention and treatment strategies to control COVID-19. Pathogens 9, 501 (2020).
6. Kumar, Y., Singh, H. & Patel, C. N. In silico prediction of potential inhibitors for the Main protease of SARS-CoV-2 using molecular docking and dynamics simulation based drug-repurposing. J. Infect. Public Health 13, 1210–1233 (2020).
7. Riva, L. et al. Discovery of SARS-CoV-2 antiviral drugs through large-scale compound repurposing. Nature 586(7827), 113–119 (2020).
8. Brimacombe, K. R. et al. An OpenData portal to share COVID-19 drug repurposing data in real time. BioRxiv 6, 672 (2020).
9. Feng, S. et al. Eltrombopag is a potential target for drug intervention in SARS-CoV-2 spike protein. Infect. Genet. Evol. 85, 104419 (2020).
10. Tu, Y.-F. et al. A review of SARS-CoV-2 and the ongoing clinical trials. Int. J. Mol. Sci. 21, 2657 (2020).
11. Tang, J. & Aittokallio, T.
Network pharmacology strategies toward multi-target anticancer therapies: From computational models to experimental design principles. Curr. Pharm. Des. 20, 23–36 (2014).
12. Ghaebi, M., Osali, A., Valizadeh, H., Roshangar, L. & Ahmadi, M. Vaccine development and therapeutic design for 2019-nCoV/SARS-CoV-2: Challenges and chances. J. Cell. Physiol. 235(12), 9098–9109 (2020).
13. Zeng, X. et al. Repurpose open data to discover therapeutics for covid-19 using deep learning. J. Proteome Res. 19, 4624–4636 (2020).
14. Shah, B., Modi, P. & Sagar, S. R. In silico studies on therapeutic agents for COVID-19: Drug repurposing approach. Life Sci. 252, 117652 (2020).
15. Qamar, M. T., Alqahtani, S. M., Alamri, M. A. & Chen, L.-L. Structural basis of SARS-CoV-2 3CLpro and anti-COVID-19 drug discovery from medicinal plants. J. Pharm. Anal. 10(4), 313–319 (2020).
16. Castro, V. M., Ross, R. A., McBride, S. M. J. & Perlis, R. H. Identifying common pharmacotherapies associated with reduced COVID-19 morbidity using electronic health records. MedRxiv.
17. Zhou, Y. et al. Network-based drug repurposing for novel coronavirus 2019-nCoV/SARS-CoV-2. Cell Discov. 6, 1–18 (2020).
18. Hamilton, W. L., Ying, R. & Leskovec, J. Inductive Representation Learning on Large Graphs (Springer, 2017).
19. Mohamed, S. K., Nováček, V. & Nounu, A. Discovering protein drug targets using knowledge graph embeddings. Bioinformatics 36, 603–610 (2020).
20. Kipf, T. N. & Welling, M. Variational Graph Auto-Encoders (Springer, 2016).
21. Schlichtkrull, M. et al. Modeling Relational Data with Graph Convolutional Networks (Springer, 2017).
22. Plotly: The front end for ML and data science models. https://plotly.com/.
23. Ratner, A., De Sa, C., Wu, S., Selsam, D. & Ré, C. Data programming: Creating large training sets quickly. Adv. Neural Inf. Process. Syst. 29, 3567–3575 (2016).
24. Bleyzac, N., Goutelle, S., Bourguignon, L. & Tod, M. Azithromycin for COVID-19: More than just an antimicrobial?. Clin. Drug Investig. 1, 1–10 (2020).
25. Xu, H. et al. High expression of ACE2 receptor of 2019-nCoV on the epithelial cells of oral mucosa. Int. J. Oral Sci. 12, 1–5 (2020).
26. Khurana, S., Gupta, S., Bhalla, H., Nandwani, S. & Gupta, V. Comparison of anti-inflammatory effect of atorvastatin with rosuvastatin in patients of acute coronary syndrome. J. Pharmacol. Pharmacother. 6, 130 (2015).
27. Solaimanzadeh, I. Nifedipine and amlodipine are associated with improved mortality and decreased risk for intubation and mechanical ventilation in elderly patients hospitalized for COVID-19. Cureus 12, e8069 (2020).
28. Yang, N. & Shen, H.-M. Targeting the endocytic pathway and autophagy process as a novel therapeutic strategy in COVID-19. Int. J. Biol. Sci. 16, 1724–1731 (2020).
29. Sharif-Askari, N. S. et al. Cardiovascular medications and regulation of COVID-19 receptors expression. Int. J. Cardiol. Hypertension 6, 100034 (2020).
30. Khalili, J. S., Zhu, H., Mak, N. S. A., Yan, Y. & Zhu, Y. Novel coronavirus treatment with ribavirin: Groundwork for an evaluation concerning COVID-19. J. Med. Virol. 92, 740–746 (2020).
31. Mitra, R. L., Greenstein, S. A. & Epstein, L. M. An algorithm for managing QT prolongation in coronavirus disease 2019 (COVID-19) patients treated with either chloroquine or hydroxychloroquine in conjunction with azithromycin: Possible benefits of intravenous lidocaine. Heart Rhythm Case Rep. 6, 244–248 (2020).
32. Cao, B. et al.
A trial of lopinavir-ritonavir in adults hospitalized with severe covid-19. N. Engl. J. Med. 382, 1787–1799 (2020).
33. Baron, S. A., Devaux, C., Colson, P., Raoult, D. & Rolain, J.-M. Teicoplanin: An alternative drug for the treatment of COVID-19?. Int. J. Antimicrob. Agents 55, 105944 (2020).
34. Grein, J. et al. Compassionate use of remdesivir for patients with severe covid-19. N. Engl. J. Med. 382, 2327–2336 (2020).
35. Caly, L., Druce, J. D., Catton, M. G., Jans, D. A. & Wagstaff, K. M. The FDA-approved drug ivermectin inhibits the replication of SARS-CoV-2 in vitro. Antiviral Res. 178, 104787 (2020).
36. Gimeno, A. et al. Prediction of novel inhibitors of the main protease (M-pro) of SARS-CoV-2 through consensus docking and drug reposition. Int. J. Mol. Sci. 21, 3793 (2020).
37. Wu, C. et al. Analysis of therapeutic targets for SARS-CoV-2 and discovery of potential drugs by computational methods. Acta Pharm. Sin. B 10, 766–788 (2020).
38. Weston, S., Haupt, R., Logue, J., Matthews, K. & Frieman, M. B. FDA approved drugs with broad anti-coronaviral activity inhibit SARS-CoV-2 in vitro. (2020).
39. Al-Khikani, F. & Hameed, R. COVID-19 treatment: Possible role of itraconazole as new therapeutic option. Int. J. Health Allied Sci. 9, 101–101 (2020).
40. Mauvais-Jarvis, F., Klein, S. L. & Levin, E. R. Estradiol, progesterone, immunomodulation and COVID-19 outcomes. Endocrinology 161, 127 (2020).
41. Del Amo, J. et al. Incidence and severity of COVID-19 in HIV-positive persons receiving antiretroviral therapy: A cohort study. Ann. Intern. Med. 173(7), 536–541 (2020).
42. Kim, Y. et al. Anti-cancer drug synergy prediction in understudied tissues using transfer learning. J. Am. Med. Inform. Assoc. 28(1), 42–51 (2020).
43. Cheng, F., Kovács, I. A. & Barabási, A.-L. Network-based prediction of drug combinations. Nat. Commun. 10, 1197 (2019).
44. Ono, K. Cytoscape. https://cytoscape.org/.
45. Kawasaki, T. & Kawai, T. Toll-like receptor signaling pathways. Front. Immunol. 5, 461 (2014).
46. Arshad, S. et al. Treatment with hydroxychloroquine, azithromycin, and combination in patients hospitalized with COVID-19. Int. J. Infect. Dis. 97, 396–403 (2020).
47. A Review of the SARS-CoV-2 (COVID-19) Genome and Proteome. https://www.genetex.com/MarketingMaterial/Index/SARS-CoV-2_Genome_and_Proteome.
48. Warner, T. D., Nylander, S. & Whatling, C. Anti-platelet therapy: Cyclo-oxygenase inhibition and the use of aspirin with particular regard to dual anti-platelet therapy. Br. J. Clin. Pharmacol. 72, 619 (2011).
49. Xu, P., Zhou, Q. & Xu, J. Mechanism of thrombocytopenia in COVID-19 patients. Ann. Hematol. 99, 1205 (2020).
50. Wang, G. et al. C-reactive protein level may predict the risk of covid-19 aggravation. Open Forum Infect. Dis. 7, 153 (2020).
51. Gordon, D. E. et al. A SARS-CoV-2 protein interaction map reveals targets for drug repurposing. Nature 583, 459–468 (2020).
52. Zhang, L. et al. Calcium channel blocker amlodipine besylate is associated with reduced case fatality rate of COVID-19 patients with hypertension. Cell Discov. 6(1), 1–12 (2020).
53. Khurana, I. et al. Can bilirubin nanomedicine become a hope for the management of COVID-19?. Med. Hypotheses 149, 110534 (2021).
54. Allawadhi, P. et al. Decorin as a possible strategy for the amelioration of COVID-19. Med. Hypotheses 152, 110612 (2021).
55. Takami, A.
Possible role of low-dose etoposide therapy for hemophagocytic lymphohistiocytosis by COVID-19. Int. J. Hematol. 112, 122–124 (2020).
56. Thong, Y. H., Ferrante, A., Rowan-Kelly, B. & O'Keefe, D. E. Effect of mefloquine on the immune response in mice. Trans. R. Soc. Trop. Med. Hyg. 73, 388–390 (1979).
57. Sonmez, A. et al. Effects of losartan treatment on T-cell activities and plasma leptin concentrations in primary hypertension. J. Renin Angiotensin Aldosterone Syst. 2, 112–116 (2001).
58. Salles, C. Correspondence COVID-19: Melatonin as a potential adjuvant treatment. Life Sci. 253, 117716 (2020).
59. Kanehisa, M., Furumichi, M., Sato, Y., Ishiguro-Watanabe, M. & Tanabe, M. KEGG: Integrating viruses and cellular organisms. Nucleic Acids Res. 49, D545–D551 (2021).
60. Rendle, S., Freudenthaler, C., Gantner, Z. & Schmidt-Thieme, L. BPR: Bayesian Personalized Ranking from Implicit Feedback. (2012).
61. Vanderheiden, A. et al. Type I and type III IFN restrict SARS-CoV-2 infection of human airway epithelial cultures. J. Virol. 382, 727 (2020).
62. Subramanian, A. et al. A next generation connectivity map: L1000 Platform and the first 1,000,000 profiles. Cell 171, 1437–1452.e17 (2017).
63. Lamb, J. The connectivity map: Using gene-expression signatures to connect small molecules, genes, and disease. Science 313, 1929–1935 (2006).
64. Huang, R. et al. The NCGC pharmaceutical collection: A comprehensive resource of clinically approved drugs enabling repurposing and chemical genomics. Sci. Transl. Med. 3, 8016 (2011).
65. Janes, J. et al. The ReFRAME library as a comprehensive drug repurposing library and its application to the treatment of cryptosporidiosis. Proc. Natl. Acad. Sci. USA 115, 10750–10755 (2018).

Acknowledgements

We thank Lu Chen and Matthew Hall from NCATS for the in vitro efficacy experiments.

Funding

Y.K. was supported in part by CPRIT RR180012, R01AG066749, and R01AG066749-01S1. K.H. was supported by CPRIT RR180012. Y.W. and J.T. were supported by ERC starting Grant No. 716063 and Academy of Finland funding No. 3176880. Z.Z. is partially supported by NIH grant R01LM012806 and CPRIT grant RP180734. X.J. is a CPRIT Scholar in Cancer Research (RR180012) and was supported in part by the Christopher Sarofim Family Professorship, a UT Stars award, UTHealth startup funds, and the National Institutes of Health (NIH) under Award Numbers R01AG066749 and R01AG066749-01S1.

Author contributions

X.J., J.T. and Z.Z. provided motivation for this study; X.J., K.H., and Y.W. collected the necessary data; Y.K. built the graph representation and ranking models; X.J. and L.C. performed the population-based validation; Y.W. and K.H. performed the genetic validation; K.H. and J.T. derived the drug combinations; K.H. prepared the plots; Y.K., K.H., Y.W., J.T., Z.Z., S.S., and X.J. wrote the manuscript; and K.H. and S.S. provided clinical interpretation. Correspondence to Yejin Kim.

Competing interests

The authors declare no competing interests.

Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Hsieh, K., Wang, Y., Chen, L. et al. Drug repurposing for COVID-19 using graph neural network and harmonizing multiple evidence. Sci Rep 11, 23179 (2021). https://doi.org/10.1038/s41598-021-02353-5
2022-01-27 21:55:32
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6154769659042358, "perplexity": 11668.633501447988}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320305288.57/warc/CC-MAIN-20220127193303-20220127223303-00292.warc.gz"}
https://physics.stackexchange.com/questions/350494/why-does-a-rotating-circle-near-light-speed-increase-in-circumference-not-decre/350495
# Why does a rotating circle near light speed increase in circumference, not decrease (contract)?

Say you have a circle that's rotating with a linear velocity of $v$ and radius $r$. Its circumference at rest is $2\pi r$. Speeding the disc up should (in my mind) cause this circumference to contract according to $$2\pi r \sqrt{1-\beta^2}.$$ However, in the book I am reading (The Elegant Universe by Brian Greene) it states that the circumference doesn't contract but increases in length, in the sense that the 'rods' used to measure it contract instead. Why is it that the circumference doesn't contract as well? (I don't know if there's much maths involved, but I'm pretty competent in maths, so understanding any that may come up shouldn't be too much of a problem.)

EDIT: There is also a picture of space-time warping in the shape of a saddle (apologies if that's not the correct terminology). Does that mean that there are varying values of $r$ in relation to the stationary $r$?

• Exactly: this paper does an easy derivation in section 3.4, using the velocity addition formula (in the complex plane) to add up the infinitesimal changes in the velocity, resulting in an arc length of $2\pi\gamma R$. – David Leonardo Ramos Aug 5 '17 at 20:34
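A small numeric sketch of that rod-counting argument (units with $c = 1$; the values are arbitrary): the rim's circumference in the lab frame stays $2\pi R$, but each co-rotating rod contracts by $1/\gamma$, so the riders need $\gamma$ times more rods and measure $2\pi\gamma R$.

```python
import math

beta, R, rod_rest_length = 0.8, 1.0, 0.01
gamma = 1 / math.sqrt(1 - beta**2)

lab_circumference = 2 * math.pi * R
# Each rod contracts to rod_rest_length / gamma in the lab frame.
rods_needed = lab_circumference / (rod_rest_length / gamma)

measured = rods_needed * rod_rest_length  # what the riders add up: 2*pi*R*gamma
print(f"{rods_needed:.1f} rods -> measured circumference {measured:.4f}")
```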
2021-04-20 23:35:45
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7508090734481812, "perplexity": 510.2248157611613}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618039491784.79/warc/CC-MAIN-20210420214346-20210421004346-00532.warc.gz"}
https://stats.stackexchange.com/questions/490109/least-square-estimator-of-regression-x-onto-y?noredirect=1
# least square estimator of regression x onto y [duplicate]

I've been reading about linear regression and the least squares estimator. Suppose we have i.i.d. data $$(x_1, y_1), (x_2, y_2), ..., (x_n, y_n)$$ such that we use a linear regression model $$y_i = \beta x_i + \epsilon_i$$, and we learned that the least squares estimator is $$\hat{\beta} = \frac{\sum_{i = 1}^{n}x_iy_i}{\sum_{i = 1}^{n}x_i^2}.$$ However, I wonder what the effect would be if we regressed x onto y. Could we then use the least squares estimate for the regression of x onto y as an estimate of $$\frac{1}{\beta}$$? How do we decide whether this is a good estimate in the first place?

• Are you deliberately missing a constant term in your regression? Or are the means zero? – Henry Oct 3 '20 at 7:48

To prove that the reverse regression is not a good estimator for $$1/\beta$$, recall that OLS is generally consistent (when regressing $$y$$ on $$x$$) for $$cov(x,y)/var(x)$$. Correspondingly, it is consistent for $$cov(x,y)/var(y)$$ when regressing $$x$$ on $$y$$. When the relationship between error and regressor (essentially, predeterminedness) is such that $$\beta=cov(x,y)/var(x)$$, to have $$cov(x,y)/var(y)=1/\beta$$ would require that $$cov(x,y)/var(y)=var(x)/cov(x,y),$$ and there is no reason to expect this to hold in general. In fact, the condition can be re-expressed as $$\frac{cov(x,y)^2}{var(y)var(x)}=1,$$ which is the limiting case of the Cauchy-Schwarz inequality, known to hold only if the random variables in question are multiples of each other. In that case, we have, say, $$y=\beta x$$, so that $$\frac{cov(x,y)}{var(x)}=\frac{\beta \cdot var(x)}{var(x)}=\beta$$ and $$\frac{cov(x,y)}{var(y)}=\frac{\beta \cdot var(x)}{\beta ^2var(x)}=\frac{1}{\beta }.$$

Here is a little graphical illustration (where you'd want to read the cases of regressing $$x$$ on $$y$$ by rotating the plot counterclockwise by 90 degrees):

```r
library(mvtnorm)

n <- 10000
cov.xy <- 0.5
var.y <- 1
var.x <- 4
beta <- cov.xy/var.x

dat <- rmvnorm(n, mean = rep(0, 2),
               sigma = matrix(c(var.y, cov.xy, cov.xy, var.x), ncol = 2))
y <- dat[, 1]
x <- dat[, 2]

par(mfrow = c(1, 2))
plot(x, y, pch = 19, cex = 0.2, col = "lightgreen")
abline(lm(y ~ x), lwd = 2, col = "lightgreen")    # a regression of y on x
abline(a = 0, b = beta, lwd = 2, col = "green")   # what OLS of y on x is consistent for

plot(y, x, pch = 19, cex = 0.2, col = "lightblue")
abline(lm(x ~ y), lwd = 2, col = "lightblue")              # a regression of x on y
abline(a = 0, b = cov.xy/var.y, lwd = 2, col = "darkblue") # what OLS of x on y is consistent for
abline(a = 0, b = 1/beta, lwd = 2, col = "red")            # what OLS of x on y is NOT consistent for
```

• How does the fact that OLS is consistent when regressing x onto y show that it is not a good estimate? – RnHdw Oct 4 '20 at 17:17
• Consistency is a reasonably well-accepted (frequentist) criterion for a good estimator. And my post attempts to demonstrate that the coefficient of a regression of $x$ on $y$ will generally not be consistent for $1/\beta$ if the coefficient of a regression of $y$ on $x$ is consistent for $\beta$. – Christoph Hanck Oct 5 '20 at 6:39

No, in general you will obtain a different line with ordinary least squares if you interchange x and y. You can easily check this with your formula by interchanging x and y and comparing the result to $$1/\beta$$. The reason for this discrepancy is that ordinary least squares is not about fitting a line through points, but about prediction, and thus assumes specific roles for the variables: x is the "predictor", y is the "response".
If your problem is actually about fitting a line through points, you should consider "orthogonal least squares", which is a symmetric approach and has (for straight lines) two equivalent solutions: 1. the right singular vector $$\vec{v}_1$$ corresponding to the largest singular value $$s_1\geq\ldots\geq s_n$$ in the singular value decomposition (SVD) $$Q=USV^T$$ of the matrix built from the centered data points $$Q^T = (\vec{q}_1,\ldots,\vec{q}_n) \quad\mbox{ with }\quad \vec{q}_i = \vec{x}_i - \vec{a}$$ 2. the eigenvector corresponding to the largest eigenvalue of $$Q^TQ$$. $$Q^TQ$$ is identical to the scatter matrix, or $$(n-1)$$ times the covariance matrix of the data points $$\vec{x}_1,\ldots,\vec{x}_n$$. Thus, this vector is simply the principal component obtained from a principal component analysis (PCA) Note that orthogonal least squares also yields a reasonable result when the points happen to fall on (or around) a vertical line. Reference: H. Späth: "Orthogonal least squares fitting with linear manifolds." Numerische Mathematik 48 (1986), pp. 441–445. • Thanks! I wonder whether there's a way to prove that it is not a good estimator for $\frac{1}{\beta}$? – RnHdw Oct 2 '20 at 11:04
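A quick numerical check of the SVD recipe above (a sketch, not part of the original answer): the first right singular vector of the centered data gives the orthogonal-least-squares line, which differs from both OLS slopes.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=500)
y = 0.5 * x + rng.normal(scale=0.8, size=500)

Q = np.column_stack([x - x.mean(), y - y.mean()])  # centered data points
_, _, Vt = np.linalg.svd(Q, full_matrices=False)   # singular values are sorted descending
direction = Vt[0]                                   # v_1: direction of the fitted line
tls_slope = direction[1] / direction[0]

ols_yx = np.cov(x, y)[0, 1] / np.var(x, ddof=1)    # slope from regressing y on x
ols_xy = np.cov(x, y)[0, 1] / np.var(y, ddof=1)    # slope from regressing x on y

# Three generally different lines through the centroid:
print(tls_slope, ols_yx, 1 / ols_xy)
```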
2021-03-03 18:45:02
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 29, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7521554231643677, "perplexity": 524.0137740379641}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178367183.21/warc/CC-MAIN-20210303165500-20210303195500-00500.warc.gz"}
http://inspirationbriefcase.com/logs/wuta5j/e38b09-euler%27s-theorem-for-differential-equations
= Find its approximate solution using Euler method. Once again, we can see why we needed to require $$x > 0$$. {\displaystyle t_{0}} t This shows that for small Now, one step of the Euler method from is evaluated at the end point of the step, instead of the starting point. Euler’s theorem states that if a function f(a i, i = 1,2, …) is homogeneous to degree “k”, then such a function can be written in terms of its partial derivatives, as follows: k λ k − 1 f (a i) = ∑ i a i (∂ f (a i) ∂ (λ a i)) | λ x This equation is not rendering properly due to an incompatible browser. , = = 7. y [ These types of differential equations are called Euler Equations. It is the difference between the numerical solution after one step, $${\displaystyle y_{1}}$$, and the exact solution at time $${\displaystyle t_{1}=t_{0}+h}$$. ) In other words, since $$\eta>0$$ we can use the work above to get solutions to this differential equation. Given a differential equation dy/dx = f(x, y) with initial condition y(x0) = y0. With this transformation the differential equation becomes. is Lipschitz continuous in its second argument, then the global truncation error (GTE) is bounded by, where , [4], we would like to use the Euler method to approximate The global truncation error is the cumulative effect of the local truncation errors committed in each step. ) is known (see the picture on top right). A This conversion can be done in two ways. For the exact solution, we use the Taylor expansion mentioned in the section Derivation above: The local truncation error (LTE) introduced by the Euler method is given by the difference between these equations: This result is valid if = y It is the most basic explicit method for numerical integration of ordinary differential equations and is the simplest Runge–Kutta method. However, this is now a solution for any interval that doesn’t contain $$x = 0$$. . t For this reason, the Euler method is said to be a first-order method, while the midpoint method is second order. For a class of nonlinear impulsive fractional differential equations, we first transform them into equivalent integral equations, and then the implicit Euler method is adapted for solving the problem. Most of the effect of rounding error can be easily avoided if compensated summation is used in the formula for the Euler method.[20]. and can be handled by Euler's method or, in fact, by any other scheme for first-order systems. Derivations. July 2020 ; Authors: Zimo Hao. = h {\displaystyle A_{1}.} In step n of the Euler method, the rounding error is roughly of the magnitude εyn where ε is the machine epsilon. {\displaystyle (0,1)} = . It is customary to classify them into ODEs and PDEs.. We construct the general solution by using the trial power function $$y = {x^k}.$$ Substitute the derivatives of this function into the differential equation: A The general nonhomogeneous differential equation is given by x^2(d^2y)/(dx^2)+alphax(dy)/(dx)+betay=S(x), (1) and the homogeneous equation is x^2y^('')+alphaxy^'+betay=0 (2) y^('')+alpha/xy^'+beta/(x^2)y=0. 1 y That is, we can't solve it using the techniques we have met in this chapter (separation of variables, integrable combinations, or using an integrating factor), or other similar means. f {\displaystyle f} h To find the constants we differentiate and plug in the initial conditions as we did back in the second order differential equations chapter. {\displaystyle y_{1}} y(0) = 1 and we are trying to evaluate this differential equation at y = 1. n n is outside the region. 
{\displaystyle y_{3}} . will be close to the curve. Note that while this does not involve a series solution it is included in the series solution chapter because it illustrates how to get a solution to at least one type of differential equation at a singular point. Recall from the previous section that a point is an ordinary point if the quotients. Euler’s Method, is just another technique used to analyze a Differential Equation, which uses the idea of local linearity or linear approximation, where we use small tangent lines over a short distance to approximate the solution to an initial-value problem. t 2.3 {\displaystyle f(t_{0},y_{0})} h 5.2. {\displaystyle \mathbf {z} (t)} t y t , then the numerical solution is unstable if the product {\displaystyle (t-t_{0})/h} y t − ( = = ξ Here, a differential equation can be thought of as a formula by which the slope of the tangent line to the curve can be computed at any point on the curve, once the position of that point has been calculated. 4 h What is Euler’s Method?The Euler’s method is a first-order numerical procedure for solving ordinary differential equations (ODE) with a given initial value. Viewed 1k times 10. for In this section we want to look for solutions to. Assuming that the rounding errors are all of approximately the same size, the combined rounding error in N steps is roughly Nεy0 if all errors points in the same direction. It can be reduced to the linear homogeneous differential equation with constant coefficients. We terminatethis pr… {\displaystyle 1/h} Since the number of steps is inversely proportional to the step size h, the total rounding error is proportional to ε / h. In reality, however, it is extremely unlikely that all rounding errors point in the same direction. So, we get the roots from the identical quadratic in this case. Now, we assumed that $$x>0$$ and so this will only be zero if. Output of this is program is solution for dy/dx = x + y with initial condition y = 1 for x = 0 i.e. E on E ano ahni, itu ahni, auar era, shnil andaliya, hairya hah E olue , certain kind of uncertainty. t y The error recorded in the last column of the table is the difference between the exact solution at As a result, we need to resort to using numerical methods for solving such DEs. ∈ t h For many of the differential equations we need to solve in the real world, there is no "nice" algebraic solution. , then the numerical solution is qualitatively wrong: It oscillates and grows (see the figure). + {\displaystyle k} Wuhan University; Michael Röckner. {\displaystyle y_{i}} We chop this interval into small subdivisions of lengthh. Euler Method Online Calculator. Differential Equations play a major role in most of the science applications. x. in a first-year calculus context, and the MacLaurin series for. 2. Key–Words: Fractional differential equations, Initial value problem, Solution, Existence, Eulers method 1 Introduction With the rapid development of high-tech, the frac-tional calculus gets involved in more and more ar-eas, especially in control theoryviscoelastic theory-electronic chemicalsfractal theory and so on. y A {\displaystyle t_{n}=t_{0}+nh} Find its approximate solution using Euler method. / {\displaystyle \xi \in [t_{0},t_{0}+h]} Differential Equations Notes PDF. {\displaystyle A_{0}} 0 The first derivation is based on power series, where the exponential, sine and cosine functions are expanded as power series to conclude that the formula indeed holds.. 
, which we take equal to one here: Since the step size is the change in More accurate second-order Runge-Kutta methods have the form k1= Dxf(xn,y), k2= Dxf(x +aDx,y +bk1), yn+1= yn+ ak1+bk2. e Euler Equations; In the next three sections we’ll continue to study equations of the form $\label{eq:7.4.1} P_0(x)y''+P_1(x)y'+P_2(x)y=0$ where $$P_0$$, $$P_1$$, and $$P_2$$ are polynomials, but the emphasis will be different from that of Sections 7.2 and 7.3, where we obtained solutions of Equation \ref{eq:7.4.1} near an ordinary point $$x_0$$ in the form of power series in $$x-x_0$$. For this reason, the Euler method is said to be first order. {\displaystyle t} {\displaystyle f(t,y)=y} . "Eulers theorem for homogeneous functions". If we didn’t we’d have all sorts of problems with that logarithm. We can make one more generalization before working one more example. Our results are stronger because they work in any dimension and yield bounded velocity and pressure. {\displaystyle h=1} + Then, using the initial condition as our starting point, we generatethe rest of the solution by using the iterative formulas: xn+1 = xn + h yn+1 = yn + hf(xn, yn) to find the coordinates of the points in our numerical solution. To deal with this we need to use the variable transformation. {\displaystyle z_{1}(t)=y(t),z_{2}(t)=y'(t),\ldots ,z_{N}(t)=y^{(N-1)}(t)} This large number of steps entails a high computational cost. , and the exact solution at time (Here y = 1 i.e. ( The top row corresponds to the example in the previous section, and the second row is illustrated in the figure. A simple modification of the Euler method which eliminates the stability problems noted in the previous section is the backward Euler method: This differs from the (standard, or forward) Euler method in that the function y + For this reason, people usually employ alternative, higher-order methods such as Runge–Kutta methods or linear multistep methods, especially if a high accuracy is desired.[6]. Conventional theory of differential equation fails to handle this kind of vagueness. Another test example is the initial value problem y˙ = λ(y−sin(t))+cost, y(π/4) = 1/ √ 2, where λis a parameter. Let’s start off by assuming that $$x>0$$ (the reason for this will be apparent after we work the first example) and that all solutions are of the form. h working rule of eulers theorem. ( and we can ask for solutions in any interval not containing $$x = {x_0}$$. y Much like the familiar oceanic waves, waves described by the Euler Equations 'break' and so-called shock waves are formed; this is a nonlinear effect and represents the solution becoming multi-valued. y . {\displaystyle h=1} Xicheng Zhang. This is what it means to be unstable. Although the approximation of the Euler method was not very precise in this specific case, particularly due to a large value step size A. {\displaystyle y'=f(t,y)} h Online tool to solve ordinary differential equations with initial conditions (x0, y0) and calculation point (xn) using Euler's method. Other modifications of the Euler method that help with stability yield the exponential Euler method or the semi-implicit Euler method. A slightly different formulation for the local truncation error can be obtained by using the Lagrange form for the remainder term in Taylor's theorem. y Euler's Method after the famous Leonhard Euler. {\displaystyle h} y [15], The precise form of this bound is of little practical importance, as in most cases the bound vastly overestimates the actual error committed by the Euler method. 
The Euler method gives an approximation for the solution of the differential equation $\frac{dy}{dt} = f(t,y) \tag{6}$ with the initial condition $y(t_0) = y_0, \tag{7}$ where $t$ is continuous in the interval $[a, b]$. So only first order ordinary differential equations can be solved by using Euler's method. Consider the problem of calculating the shape of an unknown curve which starts at a given point and satisfies a given differential equation. The concept is similar to the numerical approaches we saw in an earlier integration chapter (Trapezoidal Rule, Simpson's Rule and Riemann Sums).

Choose a step size $h$ and set $t_{n+1} = t_n + h$. One step of the method computes $y_{n+1} = y_n + h\,f(t_n, y_n)$, so that $y_n \approx y(t_n)$. To derive this, integrate the differential equation from $t_n$ to $t_{n+1}$ and apply the fundamental theorem of calculus; now approximate the integral by the left-hand rectangle method (with only one rectangle). Combining both equations, one finds again the Euler method. The same scheme can be rewritten in the form $k_1 = h\,f(t_n, y_n)$, $y_{n+1} = y_n + k_1$, and is then called a first-order Runge–Kutta method. In this scheme, since the starting point of each sub-interval is used to find the slope of the solution curve, the result would be exact only if the solution were linear.

As a worked example, take the simple differential equation $y' = y$ with $y(0) = 1$, whose exact solution is $y(t) = e^t$. With the deliberately coarse step size $h = 1$, the iterates are $y_0, \dots, y_4 = 1, 2, 4, 8, 16$, whereas the exact value is $y(4) = e^4 \approx 54.598$. Of course, in practice we wouldn't use Euler's Method on these kinds of differential equations, but by using easily solvable differential equations we are able to check the accuracy of the method.

As suggested above, the Euler method is more accurate if the step size $h$ is smaller. The local truncation error, the error made in a single step, is proportional to $h^2$ provided the solution has a bounded second derivative, and the errors committed in successive steps accumulate, so that after however many steps the method needs to reach a given time from the initial time, the global truncation error is (approximately) proportional to $h$. The convergence analysis of the method shows that the method is convergent of the first order. This is true in general, also for other equations; see the section on global truncation error for more details. If instead it is assumed that the rounding errors are independent random variables, then the expected total rounding error grows as the step size shrinks, which limits how small $h$ can usefully be taken.

Two refinements are worth noting. In the backward Euler method the unknown $y_{n+1}$ appears on both sides, so when applying it we have to solve an equation at each step; the set of step sizes for which the iteration remains stable is called the (linear) stability region. The other possibility is to use more past values, as illustrated by the two-step Adams–Bashforth method; this leads to the family of linear multistep methods.

A separate topic carrying Euler's name is the Cauchy–Euler differential equation. Consider the 1st-order Cauchy–Euler equation, in a multivariate extension: $$a_1\mathbf x\cdot \nabla f(\mathbf x) + a_0f(\mathbf x) = 0. \tag{3}$$ A more general form of an Euler equation is the second-order equation $a x^2 y'' + b x y' + c y = 0$ with $x > 0$. Substituting $y = x^r$ turns this into a quadratic in $r$, and so we will have three cases to look at: real distinct roots, double roots, and complex roots. With real distinct roots we get two solutions that will form a fundamental set of solutions (we'll leave it to you to check this), and so our general solution is their linear combination; with the solution worked out we can also see why we required $x > 0$. For a double root it can be shown that the second solution will be $x^r \ln x$. For complex roots, as we've done every other time we've seen solutions like this, we can take the real part and the imaginary part and use those for our two solutions. We should now talk about how to deal with $x < 0$, since that is a possibility on occasion: replacing $x$ with $|x|$ in the solutions handles it.
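As a concrete sketch of the iteration above (my own Python rendering of the tabulated example; the original page used its own implementation), forward Euler applied to $y' = y$ with $h = 1$ reproduces the table values 1, 2, 4, 8, 16:

```python
# Minimal forward Euler sketch for the worked example y' = y, y(0) = 1.
import math

def euler(f, t0, y0, h, n):
    """Return lists of t_n and y_n after n steps of y_{n+1} = y_n + h*f(t_n, y_n)."""
    ts, ys = [t0], [y0]
    for _ in range(n):
        ys.append(ys[-1] + h * f(ts[-1], ys[-1]))
        ts.append(ts[-1] + h)
    return ts, ys

ts, ys = euler(lambda t, y: y, 0.0, 1.0, 1.0, 4)
print(ys)           # [1.0, 2.0, 4.0, 8.0, 16.0], as in the table above
print(math.exp(4))  # exact value e^4 ~ 54.598: h = 1 is far too coarse
```

Shrinking the step size brings the iterates toward the exact solution, consistent with the first-order convergence discussed above.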
2021-04-16 23:13:26
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 2, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9308466911315918, "perplexity": 355.1346293269365}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038092961.47/warc/CC-MAIN-20210416221552-20210417011552-00279.warc.gz"}
https://worldbuilding.stackexchange.com/questions/124081/aritifical-gravity-through-rotating-wheel-generalisation-in-a-spatial-4th-dimens
# Artificial Gravity through Rotating wheel generalisation in a spatial 4th dimension

I'm taking the example of a rotating-wheel space station, in which artificial gravity is created through rotation due to the inertia inside. The idea in the 3D environment is that the wheel (2D) rotates about a third axis (the one perpendicular to the wheel's plane). Generalising this concept to a further dimension, the wheel would be a hollow sphere and everything would rotate about another axis, one that should be perpendicular to the other three.

Could you design this hollow sphere (very big, like 100 km in diameter) so that it rotates in that 4th spatial dimension (assuming there exists one) and humans inside feel artificial gravity? Would that make sense, at least mathematically? If so, would people inside feel the spherical shape is changing?

• You must of course first describe how gravitation works in a four-dimensional space. Hint: it's not trivial. – AlexP Sep 4 '18 at 10:37
• Well I don't know if I need gravity at all. If that hollow sphere is a spacecraft deep in space where gravity is negligible, and you want to create a sense of it just by rotating, then it's not gravity, it's just inertia, right? – Jordi Serra Sep 4 '18 at 10:47
• You need gravity because that is what you want to simulate. If we don't know how gravity works in a four-dimensional space, we don't know what we are supposed to simulate. – AlexP Sep 4 '18 at 10:59
• Just a comment: it's perfectly fine to assume that things work out, but I don't think you can assume anything as a given in 4 dimensions. Can people even exist? Biomolecules heavily rely on having the right geometry. What is a 4D protein? How would that even work? Well, you need to do 4D quantum mechanics now, congratulations. While this is a cool idea of course, you are running into horribly complicated problems pretty fast. I personally wouldn't touch it without a Ph.D. in math, at least not in a super serious setup. If it's comedic or a homage or something, of course, who cares. – Raditz_35 Sep 4 '18 at 11:01
• To back up AlexP's and Raditz_35's comments, take a look at physics.stackexchange.com/q/50142/79374 - the TL;DR is that there aren't stable orbits in 4D. So if you've got a fourth spatial dimension, gravity has to behave differently in it or else there are no solar systems and galaxies. That opens a can of worms, though - how else is the fourth spatial dimension not the same as the other ones? – Rob Watts Sep 4 '18 at 17:25

This setup wouldn't work, not even in a mathematical sense, at least as you described it. In four dimensions, you can't actually rotate with respect to an axis. Instead, a 4D rotation leaves either a plane or a single point invariant.

In more detail, a rotation in a space of arbitrary dimension can be thought of as composed of simple rotations. It can be seen mathematically that a simple rotation needs a two-dimensional subspace to take place in (you can't rotate anything in 1D). This means that:

• In 2D, all rotations are simple. They leave a $2-2=0$-dimensional subspace invariant, i.e. a point.
• In 3D, all rotations are simple. They leave a $3-2=1$-dimensional subspace invariant (a rotation axis).
• In 4D, a rotation can be simple, leaving a $4-2=2$-dimensional subspace invariant (a plane), or it can be a double rotation composed of two simple rotations, leaving in total a $4-2-2=0$-dimensional subspace invariant (a point).
These simple rotations can even have different angular speeds (if they have the same speed, it's called an isoclinic rotation). • In 5D, rotations are again of two types, which leave invariant either a 3D subspace or an axis. • In 6D, rotations can be of three types... ...and so on. Going back to your setup, we can find two cases. In the first case, assuming (in the spirit of your question) that the invariant plane contains the extra four-dimensional axis, the sphere would rotate normally as the Earth does, leaving two "North" and "South" poles invariant. Here the extra dimension is redundant, our ordinary 3D physics already tells us what would happen: there would be artificial gravity near the equator and no gravity near the poles. In the second case, the sphere will disappear from view almost all the time, only reappearing periodically at certain instants which depend on the rotation's angular speeds. Obviously this wouldn't be feasible as a space station of any kind, since all the air would quickly escape from it. In conclusion, you can't rotate all points of the sphere at the same time while keeping them inside your 3D environment. In fact, the setup wouldn't work no matter how many extra dimensions one adds. This is because of the so-called hairy ball theorem. This theorem says that all possible continuous tangent vector fields on a sphere must vanish at some point, and it is sometimes popularly stated as "you can't comb a hairy ball without creating at least one cowlick". An infinitesimal rotation defines a smooth vector field on the sphere (you can think of a small arrow attached to each point, and pointing at where it will move), and the theorem then implies there must be at least one arrow of length zero (the center of the "cowlick"), which means a point which won't move. • After this, my answer is worth deletion. – L.Dutch - Reinstate Monica Sep 4 '18 at 12:28
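To make the simple-versus-double distinction concrete, here is a small numerical sketch (my own, not part of the original answer; numpy and the helper names are assumptions) that builds 4D rotations in block-diagonal form and inspects their eigenvalues, where an eigenvalue equal to 1 marks a fixed direction:

```python
import numpy as np

def rot2(theta):
    """2x2 rotation block by angle theta."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

def rotation4(t1, t2):
    """Block-diagonal 4D rotation: simple if one angle is 0, double otherwise."""
    R = np.zeros((4, 4))
    R[:2, :2] = rot2(t1)
    R[2:, 2:] = rot2(t2)
    return R

# Simple rotation: second angle 0 -> eigenvalue 1 with multiplicity 2,
# i.e. an invariant *plane* of fixed points (not a single axis).
print(np.round(np.linalg.eigvals(rotation4(0.7, 0.0)), 3))

# Double rotation: both angles nonzero -> no eigenvalue equal to 1,
# so the only fixed point is the origin.
print(np.round(np.linalg.eigvals(rotation4(0.7, 0.3)), 3))
```

This matches the answer: a 4D "spinning sphere station" either reduces to ordinary 3D rotation about a plane, or leaves no point of the hull fixed inside our 3D slice.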
2020-02-18 13:26:34
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6883070468902588, "perplexity": 539.7780401168225}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875143695.67/warc/CC-MAIN-20200218120100-20200218150100-00489.warc.gz"}
https://istopdeath.com/solve-graphically-x2-x-10/
# Solve Graphically x^2-x-1=0

x^2 - x - 1 = 0

Graph each side of the equation (y = x^2 - x - 1 and y = 0). The solutions are the x-values of the points of intersection.

x ≈ -0.61803398, 1.61803398
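For comparison with the graphical reading, a small numerical check (my own sketch; numpy is assumed, it is not part of the original page) finds the same two roots:

```python
import numpy as np

roots = np.roots([1, -1, -1])   # coefficients of x^2 - x - 1
print(np.sort(roots))           # [-0.61803399  1.61803399]
print((1 + np.sqrt(5)) / 2)     # exact positive root, the golden ratio
```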
2022-11-26 08:19:26
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8568012118339539, "perplexity": 1811.3460081948876}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446706285.92/warc/CC-MAIN-20221126080725-20221126110725-00166.warc.gz"}
http://openstudy.com/updates/50512703e4b08214204428ae
## AcidRa1n: Determine whether y varies directly with x. If so find the constant variation. y = 12x ****** The answer is Yes; 12 I dont understand how this is the answer. Can someone help me?

1. Nameless: my guess would be that its because y is equal to whatever 12(x) is going to be.
2. Nameless: but that seems too easy
3. AcidRa1n: i know. How about y = 4x+1
4. AcidRa1n: the asnwer is no
5. Nameless: i am probably in over my head.. sorry
6. lgbasallote: here's something you can remember... if y and x are in one line...then it's a direct variation if either y or x is in the denominator...then inverse variation
7. lgbasallote: $y = kx \leftarrow \; \text{direct variation}$ $y = \frac kx \leftarrow \; \text{inverse variation}$
8. AcidRa1n: ok so how does that apply to my first problem
9. lgbasallote: y = 12x <--in the for y = kx <--can you see that?
10. lgbasallote: in the form*
11. AcidRa1n: oh ok so how does it arie directly with x.. Im sorry im just not seeing the picture
12. AcidRa1n: varie* My teacher isnt that good
13. lgbasallote: look at what i said above about y = kx
14. AcidRa1n: yeah I know but what does K mean how the flutter does it varie with anything?
15. lgbasallote: k is a constant
16. lgbasallote: it is called the proportionality constant
17. lgbasallote: it means its value doesnt matter because it is constant..in other words...always stays the same
18. AcidRa1n: ok now I get why 12 is there.. So now explain the rest please?
19. lgbasallote: in your terms... k is the constant variation
20. lgbasallote: first tell me... do you agree that y = 12x is in the form y = kx?
21. AcidRa1n: is y = k? which is 12? and how does that varie to x?
22. AcidRa1n: Yeah I get that
23. lgbasallote: what do you see as difference between $\huge y = kx$ and $\huge y = 12x$
24. AcidRa1n: yeah I see that
25. lgbasallote: spot the difference
26. lgbasallote: not the similarity
27. AcidRa1n: yeah
28. lgbasallote: what do you mean yeah?
29. AcidRa1n: I spot the difference
30. lgbasallote: what's the difference??
31. AcidRa1n:
32. lgbasallote: RIGHT! that's why 12 is the constant variation
33. lgbasallote: because k is the constant variation..and k = 12
34. AcidRa1n: ohhh K= Constant Variationnnnnn
35. lgbasallote: yes!
36. AcidRa1n: lmfao face to palm**
37. AcidRa1n:
38. lgbasallote: the answer to that one was "NOT direct variation" right?
39. AcidRa1n: yes
40. lgbasallote: that's because of the 1 y = 4x + 1 is not in the form y = kx because of that 1
41. AcidRa1n: what about y - 6x = 0
42. AcidRa1n: ;)
43. lgbasallote: simplify it first
44. lgbasallote: solve for y first
45. AcidRa1n: can you do it?
46. AcidRa1n: I dont wnana get confused
47. lgbasallote:
48. AcidRa1n:
49. lgbasallote: 6x..not 6
50. AcidRa1n: yeah
51. AcidRa1n: y-=6x?
52. AcidRa1n: im a idiot
53. lgbasallote: y-?
54. AcidRa1n: lmfao it was negative
55. AcidRa1n: OH NVM
56. AcidRa1n: alright so the answer is 6!
57. lgbasallote: yes
58. AcidRa1n: what about y _ 3 = -3x
59. AcidRa1n: -
60. lgbasallote: what do you think?
61. AcidRa1n:
62. AcidRa1n: y= x?
63. lgbasallote: you can't combine 3 and 3x
64. AcidRa1n: is that a constant variation?
65. lgbasallote:
66. AcidRa1n: Ah forget about that one can you help me with proportions? i'll open up a new question
67. lgbasallote: i have to go soon
68. AcidRa1n: awe ur like the best lol
69. lgbasallote: call saifoo or amistre
70. AcidRa1n: eh Saifoo is a fool now he used to be cool
71. AcidRa1n: amistre doesnt help me anymore
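The rule discussed in the thread (y = kx for a single constant k is direct variation) can be checked mechanically. The sketch below is my own illustration, not part of the original thread; the function name is hypothetical:

```python
# Test whether a set of (x, y) pairs shares a single ratio y/x, i.e. y = k*x.
def direct_variation_constant(pairs):
    """Return k if all (x, y) pairs satisfy y = k*x, else None."""
    ks = {y / x for x, y in pairs if x != 0}
    return ks.pop() if len(ks) == 1 else None

print(direct_variation_constant([(1, 12), (2, 24), (3, 36)]))  # 12.0, from y = 12x
print(direct_variation_constant([(1, 5), (2, 9)]))             # None: y = 4x + 1
```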
2014-10-21 13:43:36
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6750880479812622, "perplexity": 11101.590227870003}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-42/segments/1413507444465.10/warc/CC-MAIN-20141017005724-00246-ip-10-16-133-185.ec2.internal.warc.gz"}
https://www.scipedia.com/public/Zheng_Cao_2015a
## Abstract

Using the UVic Earth System Model, this study simulated the change of seawater chemistry and analyzed the chemical habitat surrounding shallow- and cold-water coral reefs from the year 1800 to 2300 under the RCP2.6, RCP4.5, RCP6.0, and RCP8.5 scenarios. The model results showed that the global ocean will continue to absorb atmospheric CO2. Global mean surface ocean temperature will rise 1.1–2.8 K by the end of the 21st century across the RCP scenarios. Meanwhile, the global mean surface ocean pH will drop 0.14–0.42 and the ocean surface mean concentration of carbonate will decrease 20%–51% across the RCP scenarios. The saturation state of seawater with respect to calcium carbonate minerals (Ω) will decrease rapidly. During the pre-industrial period, 99% of the shallow-water coral reefs were surrounded by seawater with Ω > 3.5 and 87% of the deep-sea coral reefs were surrounded by seawater with aragonite supersaturation. Within the 21st century, except under the high mitigation scenario RCP2.6, almost no shallow-water coral reefs will be surrounded by seawater with Ω > 3.5. Under the intensive emission scenario RCP8.5, by the year 2100, the aragonite saturation horizon will rise to 308 m below the sea surface from 1138 m in the pre-industrial period, and thus 73% of the cold-water coral reefs will be surrounded by seawater with aragonite undersaturation. By the year 2300, only 5% of the cold-water coral reefs will be surrounded by seawater with aragonite supersaturation.

## Keywords

Simulation research; Aragonite saturation state; Ocean acidification; Shallow-water coral reefs; Cold-water coral reefs

## 1. Introduction

Since the beginning of the industrial revolution, the CO2 concentration in the atmosphere has increased rapidly because of human activities such as fossil fuel combustion and land use. Anthropogenic CO2 emissions reached 545 Pg C from 1750 to 2011 (IPCC, 2013). Not all of these emissions remain in the atmosphere: 26% are absorbed by the ocean and 28% are taken up on land (Sabine et al., 2004). Both observational and simulation studies agree that the ocean absorbs a large amount of CO2, which mitigates global climate change. However, this absorption is not entirely harmless. The average pH of the sea surface has dropped from 8.2 to 8.1 since pre-industrial times, which means that the ocean hydrogen ion concentration has increased by 26% (Gattuso and Hansson, 2011). Doney et al. (2009) give the ocean carbonate chemical equilibrium as: ${\displaystyle {\mbox{CO}}_{2\left({\mbox{atmos}}\right)}\rightleftarrows {\mbox{CO}}_{2\left({\mbox{aq}}\right)}+{\mbox{H}}_{2}{\mbox{O}}\rightleftarrows {\mbox{H}}_{2}{\mbox{CO}}_{3}\rightleftarrows {\mbox{H}}^{+}+{\mbox{HCO}}_{3}^{-}\rightleftarrows 2{\mbox{H}}^{+}+{\mbox{CO}}_{3}^{2-}{\mbox{.}}}$ ( 1) According to this equation, an increase in hydrogen ions (H+) results in a decrease in the carbonate ion concentration (${\textstyle {\mbox{CO}}_{3}^{2-}}$), which further reduces the calcium carbonate (CaCO3) saturation state. These changes in seawater will harm marine calcifying organisms, especially coral reefs (Doney et al., 2009, Fine and Tchernov, 2007, Fabry et al., 2008 and Guinotte and Fabry, 2008). Coral reefs are an important component of marine ecosystems and are composed of aragonite (a relatively soluble mineral form of calcium carbonate).
Many laboratory experiments have shown that a decline in the ocean aragonite saturation state leads to a decrease in the shallow-water coral calcification rate, because it becomes difficult for corals to extract calcium ions and bicarbonate ions from the surrounding seawater to form skeletons and shells (${\textstyle {\mbox{Ca}}^{2+}+2{\mbox{HCO}}_{3}^{-}\rightleftarrows {\mbox{CaCO}}_{3}+{\mbox{CO}}_{2}+{\mbox{H}}_{2}{\mbox{O}}}$). In addition, cold-water coral reefs also suffer from this change in the marine environment (Langdon et al., 2003). Guinotte et al. (2006) suggested that the global distribution of cold-water corals in the deep sea is limited by the aragonite saturation state of the surrounding seawater. Therefore, an increase in the atmospheric CO2 concentration can have important influences on the marine environment and coral reef systems.

Anthropogenic CO2 emissions will decrease the ocean surface pH and the concentration of carbonate ions (Caldeira and Wickett, 2005, Orr et al., 2005, Cao and Caldeira, 2008 and Steinacher et al., 2009). This directly affects the ocean aragonite saturation state. The aragonite saturation state (Ω) is defined as (Feely et al., 1988): ${\displaystyle \Omega =\left[{\mbox{Ca}}^{2+}\right]\left[{\mbox{CO}}_{3}^{2-}\right]/K_{\mbox{sp}}^{_{\ast }}{\mbox{,}}}$ ( 2) which is calculated from the calcium ion concentration ([Ca2+]), the carbonate ion concentration (${\textstyle \left[{\mbox{CO}}_{3}^{2-}\right]}$) and the equilibrium thermodynamic solubility product (${\textstyle K_{\mbox{sp}}^{\mbox{*}}}$). The [Ca2+] and ${\textstyle \left[{\mbox{CO}}_{3}^{2-}\right]}$ are calculated from the seawater salinity, total alkalinity (TAlk), and dissolved inorganic carbon (DIC). The solubility product ${\textstyle K_{\mbox{sp}}^{\mbox{*}}}$, defined as the product of the [Ca2+] and ${\textstyle \left[{\mbox{CO}}_{3}^{2-}\right]}$ concentrations at saturation, is calculated from seawater temperature, salinity, and pressure (Mucci, 1983). When Ω < 1 the seawater is undersaturated with respect to aragonite, while Ω > 1 means it is supersaturated. The aragonite saturation horizon is defined as the depth at which Ω = 1. In today's ocean, the aragonite saturation state decreases with depth, so the water above the saturation horizon is supersaturated and the water below it is undersaturated.

There are numerous simulation studies of ocean acidification under various CO2 emission scenarios. Using the ocean circulation model of the Lawrence Livermore National Laboratory, Caldeira and Wickett (2005) found that the global mean ocean surface pH would drop 0.3–0.5 at the end of the 21st century under the SRES scenarios. On the basis of simulation results from 13 models, Orr et al. (2005) concluded that surface seawater would become undersaturated with respect to aragonite by the middle of the 21st century under the IS92a scenario, which would have a negative impact on polar shelled plankton. A recent study found that 20% of the surface area of the Canadian Basin is already undersaturated with respect to aragonite (Robbins et al., 2013). This mismatch between observations and models is probably due to the low resolution of global models and their inability to accurately simulate sea ice: the actual decline of Arctic sea ice is faster than forecast because of global warming (Stroeve et al., 2007).
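As a sketch of Equation (2) in code: the function below is my own illustration, not part of the paper's model, and the numbers are hypothetical order-of-magnitude surface values rather than model output.

```python
# Aragonite saturation state Omega = [Ca2+][CO3 2-] / Ksp* (Eq. 2).
def aragonite_saturation(ca, co3, ksp):
    """Saturation state from ion concentrations; units must match those of Ksp*."""
    return ca * co3 / ksp

# Hypothetical surface-ocean values (order of magnitude only):
ca = 0.0103    # mol/kg, roughly conservative with salinity
co3 = 2.0e-4   # mol/kg carbonate ion
ksp = 6.6e-7   # mol^2/kg^2, depends on T, S, pressure (Mucci, 1983)

omega = aragonite_saturation(ca, co3, ksp)
print(f"Omega_aragonite = {omega:.2f}")  # > 1 means supersaturated
```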
On the basis of UVic ESCM simulations, Cao and Caldeira (2008) estimated that the mean ocean pH at high latitudes would drop by more than 0.2 if the atmospheric CO2 concentration stabilized at 450 × 10−6, in which case 7% of the area of the Southern Ocean (south of 60°S) would be undersaturated in aragonite. Fine and Tchernov (2007) determined experimentally that scleractinian corals become soft-bodied and skeleton-free in acidified seawater (pH = 7.4). Chinese scientists have conducted several studies using global carbon cycle model simulations. For example, Xu and Li (2009) simulated the global ocean uptake of anthropogenic CO2. Cao et al. (2014a) and Wang et al. (2014) used the UVic ESCM to simulate and analyze how climate sensitivity affects the ocean uptake of CO2 under the RCP8.5 scenario. Cao et al. (2014b) analyzed the response of ocean acidification to a gradual increase and then decrease of atmospheric CO2 and found that marine chemical habitats would not readily recover to their natural state even if the atmospheric CO2 content were later brought back down. Building on these previous studies, our study used an intermediate-complexity earth system climate model to compare ocean acidification rates from before the industrial revolution to the year 2300 under the four CO2 Representative Concentration Pathway (RCP) scenarios, which have been widely employed in the IPCC AR5 report. At the same time, this study also analyzed the effect of ocean acidification on both shallow- and cold-water coral reefs. Besides supplementing the CMIP5 multi-model projections in the AR5 report, the work also represents the first attempt to analyze the chemical habitats of cold-water coral reefs under the four RCPs.

## 2. Model and method

### 2.1. Model description

Developed at the University of Victoria in Canada, the UVic Earth System Climate Model is an intermediate-complexity climate model. It consists of a 3D ocean general circulation model with a spherical grid resolution of 3.6° in longitude and 1.8° in latitude and 19 vertical layers in the ocean, coupled to a one-layer energy–moisture balance atmosphere, a sea-ice model and a land-ice model (Weaver et al., 2001). The terrestrial carbon cycle model is based on the TRIFFID land surface and dynamic vegetation scheme of the Hadley Centre Met Office (Meissner et al., 2003). In addition, the ocean carbon cycle consists of a CO2 air–sea exchange process, a marine inorganic carbon process (Orr et al., 1999) and a marine organic carbon process (Schmittner et al., 2008). The organic carbon process includes a simple marine ecosystem with nutrients (PO43−, NO3−), interacting phytoplankton and zooplankton, and a feedback to the marine carbon cycle. This model has been widely used in studies of climate change, ocean biogeochemical cycles (Schmittner et al., 2008), climate feedbacks of the ocean carbon cycle (Zickfeld et al., 2013) and ocean acidification (Cao and Caldeira, 2008 and Matthews et al., 2009).

### 2.2. Simulation experiments

We ran the UVic model for 10,000 model years with an atmospheric CO2 concentration of 280 × 10−6, the CO2 level of pre-industrial times (Indermühle et al., 1999). Next, we set this state as the initial condition for the nominal year 1800 and simulated climate change from 1800 to 2300. From 1800 to 2005, the model was driven by the historical concentration of atmospheric CO2.
After 2005, it was forced by the atmospheric CO2 concentration prescribed by the four RCP scenarios (RCP2.6, RCP4.5, RCP6.0 and RCP8.5; the numbers after RCP indicate that global radiative forcing reaches 2.6, 4.5, 6.0 and 8.5 W m−2 in the year 2100, while the atmospheric CO2 concentration reaches 412 × 10−6, 538 × 10−6, 670 × 10−6 and 936 × 10−6, respectively).

### 2.3. Correction and calculation

The UVic model was used to simulate ocean temperature, alkalinity (ALK), salinity, DIC and phosphate. On the basis of these data, other variables were calculated using the chemistry routine from the OCMIP-3 project (http://www.ipsl.jussieu.fr/OCMIP/phase3 ). These variables included pH, the concentration of carbonate ions and the saturation state of aragonite. We estimated the aragonite saturation state at the shallow-water and deep-sea coral reef locations by assuming that each reef possessed the simulated seawater chemistry of the model grid cell where the reef was located. In this study, we used different assignment methods such as linear interpolation. We did not test the accuracy of each method, because of the uncertainty in the cold-water coral reef locations and each method's own error. The information on the longitude and latitude of the shallow-water coral reef locations was obtained from the ReefBase database (http:/www.reefbase.org ), and the information on the longitude, latitude, and depth of the cold-water coral locations was obtained from the Global Distribution of Cold-Water Coral Reefs (Freiwald et al., 2004).

We used the Global Ocean Data Analysis Project (GLODAP) data to correct the model output. The GLODAP data were collected in the 1990s and subsequently gridded to 1° horizontal resolution with 33 vertical layers by Key et al. (2004). Because the observational data are more accurate than the model data, we linearly interpolated the former onto the grid of the UVic model.

## 3. Results

### 3.1. Comparison of the model and observed results

In this section, we compare the results of the UVic model and the GLODAP observational data to test the reliability of the ocean simulation. For example, the observed global mean DIC and TAlk were 2254 and 2363 μmol kg−1, while for the corresponding period the modelled global mean DIC and TAlk were 2242 and 2368 μmol kg−1. The model error was 0.53% for DIC and 0.21% for TAlk. Furthermore, the simulated distributions of DIC and TAlk are similar to the observations (Fig. 1), including the high-value regions and the latitudinal structure of the upper ocean.

Fig. 1. Latitude–depth distribution of ocean DIC and alkalinity from the GLODAP observations and the UVic model simulation: (a1) DIC of GLODAP data, (a2) DIC of model data in year 1994, (b1) total alkalinity of GLODAP data, (b2) total alkalinity of model data in year 1994.

To reduce the error between the model results and the observational data, we corrected the model data following a formula used in previous studies (Caldeira and Wickett, 2005, Orr et al., 2005 and Cao and Caldeira, 2008): ${\displaystyle D_{\mbox{co}}\left(n\right)=D_{\mbox{m}}\left(n\right)-D_{{\mbox{m}}1994}+D_{\mbox{ob}}{\mbox{,}}}$ ( 3) where Dco is the corrected data, Dm is the model data, Dm1994 is the model data for the year 1994, Dob is the observational data and n is each year from 1800 to 2300. Here we assumed a simple linear bias; the true error structure is more complex, so some uncertainty remains.
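Equation (3) amounts to subtracting the model's 1994 bias field from every year of output. A minimal sketch follows (my own illustration, assuming fields already interpolated onto a common grid; the toy numbers reuse the DIC values quoted above):

```python
import numpy as np

def correct(model_series, model_1994, observed_1994):
    """D_co(n) = D_m(n) - D_m(1994) + D_ob: remove the model's 1994 bias (Eq. 3)."""
    return model_series - model_1994 + observed_1994

# Toy 1-D example: modelled global-mean DIC (umol/kg) for three years.
dic_model = np.array([2230.0, 2242.0, 2260.0])
dic_model_1994 = 2242.0    # modelled 1994 value
dic_glodap_1994 = 2254.0   # observed 1994 value
print(correct(dic_model, dic_model_1994, dic_glodap_1994))  # [2242. 2254. 2272.]
```

By construction, the corrected series matches the observations exactly in 1994 while preserving the model's simulated trends.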
### 3.2. Simulation of marine chemistry

The atmospheric CO2 concentration increases at various rates under the RCP scenarios except RCP2.6 (Fig. 2a), while the ocean continues to absorb CO2 from the atmosphere under all the scenarios (Fig. 2b). Based on the model simulation results, the global oceans will absorb 162–411 Pg C from the year 2010 to 2100 across the RCP scenarios. By the year 2300, the global oceans will absorb 215–1135 Pg C. Compared with the pre-industrial period, the global mean ocean surface temperature will rise 1.1–2.8 K by the year 2100, which is in broad agreement with the prediction of CMIP5 (0.8–3.1 K) (IPCC, 2013). By the year 2300, the ocean surface temperature will increase 0.8–6.1 K. The increase of CO2 concentration is also important for the strength of the North Atlantic thermohaline circulation. The intensity of the North Atlantic Deep Water (NADW) formation was 21.5 Sv (1 Sv = 10⁶ m³ s⁻¹) in the year 1800. This will decrease to 16.6–19.6 Sv under the RCP8.5 to RCP2.6 scenarios. After the CO2 concentration stabilizes, the NADW intensity will increase again. Meanwhile, the ocean surface water will warm faster than the deep ocean, so there will be a greater temperature gap between shallow and deep water. Thus, ocean stratification will become more stable and convection will weaken.

Fig. 2. Time series under 4 RCP scenarios: (a) atmospheric CO2 concentration, (b) CO2 uptake by ocean, (c) global mean ocean surface pH, (d) global mean ocean surface [CO32−], (e) global mean ocean surface Ω, (f) global mean ocean pH, (g) global mean ocean [CO32−], (h) global mean Ω, (i) global mean ocean saturation horizon of aragonite, (j) percent of ocean volume with Ω > 1.0.

With respect to ocean chemistry, according to Equation (1), the dissolved CO2 causes the seawater to produce more hydrogen ions, causing the pH to decrease. At the end of the 21st century, the global mean ocean surface pH will drop by 0.14–0.42 compared with the pre-industrial period (Fig. 2c). The pH will rise 0.05 from the year 2100 to 2300 under the RCP2.6 scenario. Under the other three scenarios, the global mean pH will continue to decrease at different rates until the atmospheric CO2 concentration stabilizes. Under the intensive emission scenario RCP8.5, the global mean pH will drop 0.73 from the year 1800 to 2300, which means that the hydrogen ion concentration will increase by 437%. Between the periods 1986–2005 and 2081–2100, the UVic model predicts a decrease in global mean surface pH of 0.067 units for RCP2.6, 0.144 for RCP4.5, 0.201 for RCP6.0, and 0.306 for RCP8.5. These projected pH changes are in close agreement with the CMIP5 Earth System Model results: between 1986–2005 and 2081–2100 the CMIP5 models project a model-mean decrease in surface pH of 0.06–0.07 for RCP2.6, 0.14–0.15 for RCP4.5, 0.20–0.21 for RCP6.0, and 0.30–0.32 for RCP8.5. Due to the increase of the hydrogen ion concentration, the global mean ocean surface carbonate concentration will be reduced by 20%–51% by 2100 under the RCP scenarios. By 2300, the change will be 13%–71% (Fig. 2d). The ocean as a whole will change more moderately than the ocean surface, because the physical and chemical properties of the deep ocean lag behind the surface water (Cao et al., 2014b). Even after the surface water has stabilized following the stabilization of the atmospheric CO2 concentration, the deep ocean will still experience continued acidification (Fig. 2f).
Averaged over the whole ocean, the decrease in ${\textstyle \left[{\mbox{CO}}_{3}^{2-}\right]}$ under the RCP scenarios ranges from 8% to 31%. Fig. 3 shows the latitude–depth distribution of the changes in ocean temperature, DIC, pH and ${\textstyle \left[{\mbox{CO}}_{3}^{2-}\right]}$ in the years 2100 and 2300 relative to 1800. These changes extend from the surface to the deep sea. Temperature changes at different latitudes are relatively uniform, and the area with the greatest warming appears at the ocean surface near 45°N. The zonal-mean pH, DIC and ${\textstyle \left[{\mbox{CO}}_{3}^{2-}\right]}$ exhibit greater changes in the subtropical surface region than in the tropics and at high latitudes. In the vertical direction, these changes are transmitted from the subtropical areas to mid-latitude regions, because subtropical water sinks and carries anthropogenic CO2 into the deep ocean via these convection currents (Sabine et al., 2004).

Fig. 3. Latitude–depth distribution of changes in ocean variables in 2100 and 2300 relative to 1800 (dashed lines: latitude-mean aragonite saturation horizon in the year 1800; solid lines: latitude-mean aragonite saturation horizon in the years 2100 and 2300).

### 3.3. Chemical habitats of coral reefs

Aragonite is a relatively soluble mineral form of calcium carbonate and an important component of coral reefs. In aragonite-supersaturated water, corals can easily extract calcium and carbonate ions from the surrounding seawater to build reefs. On a volume-weighted average, 19% of the ocean was supersaturated in aragonite during the pre-industrial period (Fig. 2j). This will drop to 5%–11% under the RCP8.5 to RCP2.6 scenarios. By 2300, over 98% of the ocean will be undersaturated under the RCP8.5 scenario, which could be lethal to coral reefs. The increasing atmospheric CO2 concentration leads to higher seawater temperature and a lower aragonite saturation state, so corals will suffer the combined effects of global warming and ocean acidification (Pandolfi et al., 2011 and Reynaud et al., 2003). The surface seawater will be undersaturated in the Arctic and the Southern Ocean by the mid-21st century under the RCP8.5 scenario. This undersaturation will be delayed by 20 years and 60 years under the RCP6.0 and RCP4.5 scenarios, respectively, and will not occur under the high mitigation scenario RCP2.6. Observations, however, show that parts of the Canadian Basin surface are already undersaturated in aragonite; the model misses this because of its limited resolution. Although the UVic model includes a coupled sea-ice model, the present sea-ice model cannot accurately simulate the future sea-ice trend: the real melt rate of sea ice is faster than the model estimate (Stroeve et al., 2007). Averaged over the whole ocean, the aragonite saturation horizon will rise from a depth of 1138 m in the pre-industrial period to 308 m in the year 2100 under the RCP8.5 scenario. Regionally, it will rise from 1967 m to 530 m in the Atlantic–Arctic and from 805 m to 196 m in the Pacific–Indian. The aragonite saturation states around both shallow- and cold-water coral reefs will decline rapidly. Maps were generated using UV-CDAT (http://uvcdat.llnl.gov/ ). In the pre-industrial period, the water surrounding shallow-water coral reefs had a mean Ω value of 4.1, and over 99% of these reefs were surrounded by seawater with Ω > 3.5 (Fig. 4). Under the RCP scenarios, the average Ω of the seawater surrounding the shallow-water coral reefs will decrease to 2.2–3.0.
By the year 2055 under RCP8.5, less than 1% of these reefs will be surrounded by seawater with Ω > 3.5. This situation will be delayed by 20 years and 35 years under the RCP6.0 and RCP4.5 scenarios, respectively. Even if CO2 emissions follow the RCP2.6 scenario, only 27% of shallow-water coral reefs will remain surrounded by seawater with Ω > 3.5.

Fig. 4. Model-simulated ocean surface aragonite saturation state with shallow-water coral reefs under the RCP8.5 scenario. (a1–a4) Surface aragonite saturation state overlaid with shallow-water coral reef locations (black dots) at CO2 levels of 280 × 10−6 (around the year 1800), 550 × 10−6 (around the year 2050), 750 × 10−6 (around the year 2080), and 950 × 10−6 (around the year 2100). (b1–b4) Percentage distribution of shallow-water coral reefs surrounded by seawater in each aragonite saturation bin.

With respect to the cold-water coral reefs, the pre-industrial aragonite saturation state of the seawater surrounding deep-sea coral reefs was 1.8 (Fig. 5). By the year 2100, the average saturation state of the water around cold-water coral reefs will decline to 0.9–1.4 (across RCP8.5 to RCP2.6). Meanwhile, only 27%–72% of the cold-water coral reefs will be surrounded by seawater with Ω > 1. By the year 2300, more than 96% of these reefs will be surrounded by seawater with aragonite undersaturation under the RCP8.5 scenario.

Fig. 5. Model-simulated ocean aragonite saturation horizon with cold-water coral reefs under the RCP8.5 scenario. (a1–a4) Aragonite saturation horizon overlaid with deep-sea coral reef locations; coral reef locations at different depth ranges are represented by different colored dots, and areas in pink represent regions where the aragonite saturation horizon has reached the surface. (b1–b4) Percentage distribution of cold-water coral reefs surrounded by seawater in each aragonite saturation bin.

Based on experiments with shallow-water coral reefs, the calcification rate of reef-building corals is significantly reduced by even a slight decrease in aragonite saturation while the water is still supersaturated (Fabry et al., 2008 and Langdon et al., 2003). What's more, recent observations show that the calcification rate of corals has been declining in recent decades (De'ath et al., 2013 and Su, 2012). Therefore, there is reason to believe that ocean acidification will have a significant influence on coral reefs before the seawater becomes undersaturated in aragonite.

## 4. Conclusions and discussion

(1) The ocean will continuously absorb CO2 from the atmosphere under the four RCP scenarios. Thus, the global mean ocean surface temperature will rise by 1.1–2.8 K, while the global mean ocean surface pH and [CO32−] will drop by 0.14–0.42 and 20%–51% across the RCP scenarios. The physical and chemical changes of the deep sea lag behind the ocean surface, and the effects will last long after the atmospheric CO2 concentration is stabilized. It is credible that the large amount of anthropogenic CO2 emissions will continue to impact the whole ocean.

(2) The aragonite saturation state of seawater surrounding coral reefs will decrease rapidly. During the pre-industrial period, over 99% of shallow-water coral reefs were surrounded by seawater with Ω > 3.5 and 87% of cold-water coral reefs were surrounded by seawater with Ω > 1.
On the basis of the model simulations, by the end of the 21st century less than 1% of shallow-water coral reefs will be surrounded by seawater with Ω > 3.5 (with the exception of the RCP2.6 scenario), while 73% of cold-water coral reefs will suffer from aragonite-undersaturated seawater under the RCP8.5 scenario. The calcification rate of coral reefs will decrease because of the lowering of the aragonite saturation state. Thus, both shallow- and cold-water coral reefs will suffer from ocean acidification.

(3) The increasing CO2 concentration will lead to higher temperatures and a lower aragonite saturation state in the ocean. This means that coral reefs will be synergistically impacted by global warming and ocean acidification. As the comparison across the four RCP scenarios shows, reducing CO2 emissions can effectively slow the physical and chemical changes of the ocean.

The predictions of deep-ocean acidification carry greater uncertainty than those of the ocean surface, mainly because of differences in the simulated ocean currents and physical transport processes between models (Cao et al., 2009). The model prediction and simulation of deep-ocean acidification and its influence on cold-water coral reefs need to be quantified using simulations from other earth system models. This study focuses on the open-ocean chemistry of the aragonite saturation state of seawater surrounding coral reefs, but the carbonate chemistry within a coral reef system might be substantially different from that of the surrounding seawater (McCulloch et al., 2012 and Andersson et al., 2014). Furthermore, changes in a number of other environmental factors, many of which are associated with human activities, such as heat stress, light, salinity, the abundance of food and nutrients, overfishing, and pollution, could all influence the fate of coral reefs (Pandolfi et al., 2011).

## Acknowledgements

This work was supported by the National Natural Science Foundation of China (41276073, 41422503), the National Key Basic Research Program of China (2015CB953601), the Zhejiang University K.P. Chao's High Technology Development Foundation and the Fundamental Research Funds for the Central Universities.

## References

1. Andersson et al., 2014 A.J. Andersson, K.L. Yeakel, N.R. Bates, et al.; Partial offsets in ocean acidification from changing coral reef biogeochemistry; Nat. Clim. Change, 4 (2014), pp. 56–61 2. Caldeira and Wickett, 2005 K. Caldeira, M.E. Wickett; Ocean model predictions of chemistry changes from carbon dioxide emissions to the atmosphere and ocean; J. Geophys. Res. Oceans, 110 (2005), p. C09S04 http://dx.doi.org/10.1029/2004JC002671 3. Cao and Caldeira, 2008 L. Cao, K. Caldeira; Atmospheric CO2 stabilization and ocean acidification; Geophys. Res. Lett., 35 (19) (2008), p. L19609 4. Cao et al., 2009 L. Cao, M. Eby, A. Ridgwell, et al.; The role of ocean transport in the uptake of anthropogenic CO2; J. Biogeosci., 6 (2009), pp. 375–390 5. Cao et al., 2014a L. Cao, S.-J. Wang, M.-D. Zheng, et al.; Sensitivity of ocean acidification and oxygen to the uncertainty in climate change; J. Environ. Res. Lett., 9 (6) (2014), p. 0640 http://dx.doi.org/10.1088/1748-9326/9/6/064005 6. Cao et al., 2014b L. Cao, H. Zhang, M.-D. Zheng, et al.; Response of ocean acidification to a gradual increase and decrease of atmospheric CO2; Environ. Res. Lett., 9 (2) (2014), pp. 239–246 7. De'ath et al., 2013 G. De'ath, K. Fabricius, J. Lough; Yes — Coral calcification rates have decreased in the last twenty-five years!; Mar.
Geol., 346 (2013), pp. 400–402 8. Doney et al., 2009 S.C. Doney, V.J. Fabry, R.A. Feely, et al.; Ocean acidification: the other CO2 problem  ; Ann. Rev. Mar. Sci., 1 (2009), pp. 169–192 9. Fabry et al., 2008 V.J. Fabry, B.A. Seibel, R.A. Feely, et al.; Impacts of ocean acidification on marine fauna and ecosystem processes; Ices J. Mar. Sci., 65 (2008), pp. 414–432 10. Feely et al., 1988 R.A. Feely, R.H. Byrne, J.G. Acker, et al.; Winter-summer variations of calcite and aragonite saturation in the Northeast Pacific; Mar. Chem., 25 (88) (1988), pp. 227–241 11. Fine and Tchernov, 2007 M. Fine, D. Tchernov; Scleractinian coral species survive and recover from decalcification; Science, 315 (2007), p. 1811 12. Freiwald et al., 2004 A. Freiwald, J.H. Fosså, A. Grehan, et al.; Cold-water Coral Reefs; UNEP World Conservation Monitoring Centre, Cambridge UK (2004) 13. Gattuso and Hansson, 2011 J.P. Gattuso, L. Hansson; Ocean Acidification: Background and History; Oxford University Press, Oxford (2011) 14. Guinotte and Fabry, 2008 J.M. Guinotte, V.J. Fabry; Ocean acidification and its potential effects on marine ecosystems; Ann. N. Y. Acad. Sci., 1134 (2008), pp. 320–342 15. Guinotte et al., 2006 J. Guinotte, J. Orr, S. Cairns, et al.; Will human-induced changes in seawater chemistry alter the distribution of deep-sea scleractinian corals?; Front. Ecol. Environ., 3 (2006), pp. 141–146 16. Indermühle et al., 1999 A. Indermühle, T.F. Stocker, F. Joos, et al.; Holocene carbon-cycle dynamics based on CO2 trapped in ice at Taylor Dome, Antarctica  ; Nature, 398 (6723) (1999), pp. 121–126 17. IPCC, 2013 IPCC; Climate Change 2013: the Physical Science Basis; Cambridge University Press, Cambridge, UK (2013) 18. Key et al., 2004 M.R. Key, A.L. Kozyr, C. Sabine, et al.; A global ocean carbon climatology: results from Global Data Analysis Project (GLODAP); Glob. Biogeochem. Cycles, 18 (4) (2004), pp. 357–370 19. Langdon et al., 2003 C. Langdon, W.S. Broecker, D.E. Hammond, et al.; Effect of elevated CO2 on the community metabolism of an experimental coral reef  ; Glob. Biogeochem. Cycles, 17 (1) (2003), p. 11 20. Matthews et al., 2009 H.D. Matthews, L. Cao, K. Caldeira; Sensitivity of ocean acidification to geoengineered climate stabilization; Geophys. Res. Lett., 36 (10) (2009), p. L10706 21. McCulloch et al., 2012 M.J. McCulloch, J. Falter, J. Trotter, et al.; Coral resilience to ocean acidification and global warming through pH upregulation; Nat. Clim. Change, 2 (2012), pp. 623–627 22. Meissner et al., 2003 K.J. Meissner, A.J. Weaver, H.D. Matthews, et al.; The role of land surface dynamics in glacial inception: a study with the UVic Earth System model; Clim. Dyn., 21 (2003), pp. 515–537 23. Mucci, 1983 A. Mucci; The solubility of calcite and aragonite in seawater at various salinities, temperatures and one atmosphere total pressure; Am. J. Sci., 283 (7) (1983), pp. 780–799 24. Orr et al., 1999 J.C. Orr, R. Najjar, C. Sabine, et al.; Abiotic-HOWTO; Internal OCMIP Report LSCE/CEA, Gifsur-Yvette, Saclay, France (1999) 25. Orr et al., 2005 J.C. Orr, V.J. Fabry, O. Aumont, et al.; Anthropogenic ocean acidification over the twenty first century and its impact on calcifying organisms; Nature, 437 (2005), pp. 681–686 26. Pandolfi et al., 2011 J.M. Pandolfi, S.R. Connolly, D.J. Marshall, et al.; Projecting coral reef futures under global warming and ocean acidification; Science, 333 (6041) (2011), pp. 418–422 27. Reynaud et al., 2003 S. Reynaud, N. Leclercq, S. 
Romaine-Lioud, et al.; Interacting effects of CO2 partial pressure and temperature on photosynthesis and calcification in a scleractinian coral; Glob. Change Biol., 9 (2003), pp. 1660–1668 28. Robbins et al., 2013 L.L. Robbins, J.G. Wynn, J.T. Lisle, et al.; Baseline monitoring of the western Arctic Ocean estimates 20% of Canadian basin surface waters are undersaturated with respect to aragonite; PLoS One, 8 (9) (2013), p. 274 29. Sabine et al., 2004 C.L. Sabine, R.A. Feely, N. Gruber, et al.; The oceanic sink for anthropogenic CO2; Science, 305 (5682) (2004), pp. 367–371 30. Schmittner et al., 2008 A. Schmittner, A. Oschlies, H.D. Matthews, et al.; Future changes in climate, ocean circulation, ecosystems, and biogeochemical cycling simulated for a business-as-usual CO2 emission scenario until year 4000 AD; Glob. Biogeochem. Cycles, 22 (2008) http://dx.doi.org/10.1029/2007GB002953 31. Steinacher et al., 2009 M. Steinacher, F. Joos, T.L. Frölicher, et al.; Imminent ocean acidification in the Arctic projected with the NCAR global coupled carbon cycle-climate model; Biogeosciences, 6 (4) (2009), pp. 515–533 32. Stroeve et al., 2007 J. Stroeve, M.M. Holland, W. Meier, et al.; Arctic sea ice decline: faster than forecast; Geophys. Res. Lett., 34 (9) (2007), pp. 529–536 33. Su, 2012 R.-X. Su; Coral calcification under increasing atmospheric CO2 concentration and global warming in the southern South China Sea; Quat. Int., 279–280 (2012), p. 474 (in Chinese) 34. Wang et al., 2014 S.-J. Wang, L. Cao, N. Li; Responses of the ocean carbon cycle to climate change: results from an earth system climate model simulation; Adv. Clim. Change Res., 5 (2014), pp. 123–130 35. Weaver et al., 2001 A.J. Weaver, M. Eby, E.C. Wiebe, et al.; The UVic Earth system climate model: model description, climatology, and applications to past, present and future climate; Atmos. Ocean, 39 (4) (2001), pp. 361–428 36. Xu and Li, 2009 Y.-F. Xu, Y.-C. Li; Estimates of anthropogenic CO2 uptake in a global ocean model; Adv. Atmos. Sci., 26 (2) (2009), pp. 265–274 37. Zickfeld et al., 2013 K. Zickfeld, M. Eby, K. Alexander, et al.; Long-term climate change commitment and reversibility: an EMIC intercomparison; J. Clim., 26 (16) (2013), pp. 5782–5809
2020-06-05 12:44:44
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 17, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6857679486274719, "perplexity": 9509.33566554324}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590348500712.83/warc/CC-MAIN-20200605111910-20200605141910-00027.warc.gz"}
https://byjus.com/chemistry/zero-order-reaction/
# Zero Order Reaction

## What is a Zero Order Reaction?

A zero-order reaction is a chemical reaction wherein the rate does not vary with the increase or decrease in the concentration of the reactants. Therefore, the rate of these reactions is always equal to the rate constant of the specific reaction (since the rate of these reactions is proportional to the zeroth power of the reactant concentration).

### Differential and Integral Form of Zero Order Reaction

The differential form of a zero order reaction can be written as:

Rate = $-\frac{d[A]}{dt} = k[A]^{0} = k$

Where 'Rate' refers to the rate of the reaction and 'k' is the rate constant of the reaction. This differential form can be rearranged and integrated on both sides to get the required integral form as shown below.

Rate = $-\frac{d[A]}{dt} = k$

Multiplying both sides by '-dt', we get:

$d[A] = -kdt$

Integrating both sides, we get:

$\int_{[A]_{0}}^{[A]}d[A] = -\int_{0}^{t}kdt$

Where [A]0 is the initial concentration of the reactant A at time t = 0. Solving for [A], we get:

$[A] = [A]_{0} - kt$

Which is the required integral form. This form enables us to calculate the concentration of the reactant at any given time after the start of the reaction.

### Graph of Zero Order Reaction

The integral form of a zero order reaction can be rewritten as

$[A] = -kt + [A]_{0}$

Comparing this equation with that of a straight line (y = mx + c), an [A] against t graph can be plotted to get a straight line with slope equal to '-k' and intercept equal to [A]0 as shown below.

## Half-Life of a Zero Order Reaction

The time in which there is a 50% reduction in the initial concentration is referred to as the half-life, denoted by the symbol 't1/2'. From the integral form, we have the following equation:

$[A] = [A]_{0} - kt$

Replacing [A] with $\frac{1}{2}[A]_{0}$ and t with the half-life t1/2, we get:

$\frac{1}{2}[A]_{0} = [A]_{0} - kt_{1/2}$

Therefore, t1/2 can be written as:

$kt_{1/2} = \frac{1}{2}[A]_{0}$

And,

$t_{1/2} = \frac{1}{2k}[A]_{0}$

It can be noted from the equation given above that the half-life is dependent on the rate constant as well as the reactant's initial concentration.

## Examples of Zero Order Reaction

The following reactions are examples of zero order reactions, whose rates do not depend on the concentration of the reactants.

• The reaction of hydrogen with chlorine (a photochemical reaction). $H_{2}(g) + Cl_{2}(g) \overset{h\nu}{\rightarrow} 2HCl(g)$
• Decomposition of nitrous oxide over a hot platinum surface. $2N_{2}O \overset{Pt(hot)}{\rightarrow} 2N_{2} + O_{2}$
• Iodination of acetone (in an H+ ion rich medium). $CH_{3}COCH_{3} + I_{2} \overset{H^{+}}{\rightarrow} ICH_{2}COCH_{3} + HI$

Reactions that require a catalyst (and in which the catalyst is saturated by the reactants) are generally zero order reactions. The unit of the rate constant in a zero order reaction is concentration/time, or M/s, where 'M' is the molarity and 's' refers to one second.

## Frequently Asked Questions In Exam:

1. What is meant by a Zero Order Reaction?
2. What are the units of k for a Zero Order Reaction?
3. What is the rate law for a Zero Order Reaction?
4. How do you know if it's a Zero Order Reaction?
5. Are second order reactions faster than Zero Order Reactions?
6. What is Zero order kinetics?
7. What are the units of a Zero Order Reaction?
8. What is the zero-order rate law?
9. How do you know if a graph is zero or second order?
10. What is the equation for the half-life of a zero-order process?
11. What is the zero order integrated rate law?
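As an illustration, the integrated rate law and half-life formula derived above translate directly into code. This is my own sketch, not from the original page, and the numbers are made up for illustration:

```python
# Zero-order kinetics: [A] = [A]0 - k*t and t_1/2 = [A]0 / (2k).
def concentration(a0, k, t):
    """[A] at time t; the linear form is only valid until [A] reaches zero."""
    return max(a0 - k * t, 0.0)

def half_life(a0, k):
    """Half-life depends on the initial concentration, unlike first order."""
    return a0 / (2.0 * k)

a0, k = 1.0, 0.05                  # hypothetical values: M and M/s
print(concentration(a0, k, 5.0))   # 0.75 M remaining after 5 s
print(half_life(a0, k))            # 10.0 s
```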
2019-06-17 23:36:45
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.806244432926178, "perplexity": 854.0903869532924}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627998581.65/warc/CC-MAIN-20190617223249-20190618005249-00504.warc.gz"}
https://planetcalc.com/4175/?license=1
Nominal interest rates comparison

Calculation of the effective interest rates for given nominal rates

I've finished my long-forgotten calculator, which allows you to compare several nominal interest rates in one table. Rates are set through the annual interest rate and the interest accrual period. The effective interest rate is calculated from the nominal rate and the accrual period.

Effective interest rate formula:

$i = \left( 1+ \frac{j}{m} \right)^m - 1$

where j is the nominal interest rate and m is the number of interest accrual periods per year.

Basically, it is clear that at the same nominal interest rate, the more frequent the accrual, the more profitable the deposit is.
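A minimal sketch of the formula as code (my own illustration, not the calculator's source; the function name is hypothetical):

```python
# Effective annual rate i = (1 + j/m)^m - 1 from nominal rate j and m accruals/year.
def effective_rate(j, m):
    return (1.0 + j / m) ** m - 1.0

# Same 6% nominal rate, different accrual periods: more frequent accrual wins.
for m in (1, 4, 12, 365):
    print(m, round(effective_rate(0.06, m), 6))
# 1 0.06 / 4 0.061364 / 12 0.061678 / 365 0.061831
```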
2019-08-24 07:23:07
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 1, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.44953417778015137, "perplexity": 1957.168465040125}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027319915.98/warc/CC-MAIN-20190824063359-20190824085359-00427.warc.gz"}
https://answersmcq.com/answer-consider-the-function-represented-by-the-equation-y-x-4-0-what-is-the-equation-written-in-function-notation-with-x-as-the-independent-variable/
# [Answer] Consider the function represented by the equation y – x – 4 = 0. What is the equation written in function notation with x as the independent variable?

###### Answer: f(x) = x + 4

Intuitively, a function is a process that associates each element of a set X to a single element of a set Y. Formally, a function f from a set X to a set Y is defined by a set G of ordered pairs (x, y) such that x ∈ X, y ∈ Y, and every element of X is the first component of exactly one ordered pair in G. In other words, for every x in X there is exactly one element y … In general, an algebraic equation or polynomial equation is an equation of the form P = 0 or P = Q, where P and Q are polynomials with coefficients in some field (e.g. rational numbers, real numbers, complex numbers). An algebraic equation is univariate if it involves only one variable. On the other hand, a polynomial equation may involve several variables, in which … In mathematics, a parametric equation defines a group of quantities as functions of one or more independent variables called parameters. Parametric equations are commonly used to express the coordinates of the points that make up a geometric object such as a curve or surface, in which case the equations are collectively called a parametric representation or parameterization … A differential equation can be homogeneous in either of two respects. A first-order differential equation is said to be homogeneous if it may be written f(x, y) dy = g(x, y) dx, where f and g are homogeneous functions of the same degree of x and y. In this case the change of variable y = ux leads to an equation of the form dx/x = h(u) du, which is easy to solve by integration of the two members. An algebraic function is a function that satisfies a polynomial equation whose coefficients are themselves polynomials. For examp…
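As a quick sanity check, solving the equation for y with sympy reproduces the stated answer (my own illustration, not part of the source page):

```python
import sympy as sp

x, y = sp.symbols("x y")
# Solve y - x - 4 = 0 for y to get the function notation f(x) = x + 4.
print(sp.solve(sp.Eq(y - x - 4, 0), y))  # [x + 4]
```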
2022-06-25 20:58:07
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8393347263336182, "perplexity": 161.45193034502367}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103036099.6/warc/CC-MAIN-20220625190306-20220625220306-00006.warc.gz"}
https://socratic.org/questions/how-do-you-find-derivative-of-f-x-x-2-4x-3-sqrtx
# How do you find the derivative of f(x)=(x^2+4x+3)/sqrt(x)?

Using the quotient rule:

$y ' = \frac{\left(2 x + 4\right) \cdot \sqrt{x} - \left({x}^{2} + 4 x + 3\right) \cdot \frac{1}{2 \sqrt{x}}}{{\left(\sqrt{x}\right)}^{2}} =$ $= \frac{\frac{\left(2 x + 4\right) \sqrt{x} \cdot 2 \sqrt{x} - {x}^{2} - 4 x - 3}{2 \sqrt{x}}}{x} =$ $= \frac{\left(2 x + 4\right) \cdot 2 x - {x}^{2} - 4 x - 3}{2 x \sqrt{x}} =$ $= \frac{4 {x}^{2} + 8 x - {x}^{2} - 4 x - 3}{2 x \sqrt{x}} = \frac{3 {x}^{2} + 4 x - 3}{2 x \sqrt{x}}$.
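The hand computation can be verified symbolically; a small check of my own, not part of the original answer:

```python
import sympy as sp

x = sp.symbols("x", positive=True)
f = (x**2 + 4*x + 3) / sp.sqrt(x)
# Differentiate and simplify; this should agree with the answer above,
# (3x^2 + 4x - 3)/(2x*sqrt(x)) = (3x^2 + 4x - 3)/(2x^(3/2)).
print(sp.simplify(sp.diff(f, x)))
```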
2021-01-27 08:14:56
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 4, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4180384576320648, "perplexity": 2760.238308234522}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610704821253.82/warc/CC-MAIN-20210127055122-20210127085122-00549.warc.gz"}
http://www.physicsforums.com/showthread.php?t=583757
# What exactly is current? by PhantomPower Tags: current P: 14 The idea of current has been presented to me in so many different ways that I thought I'd try to find out a little more about what exactly current is. Most places refer to a flow of charge as a current, which seems good to me, but I heard the current may flow outside the wire in... I want to say charge density fields, but don't quote me on that... Is this the case? If so, how? Secondly, the concept of "current flowing" seems very strange to me: if current = $\frac{\delta q}{\delta t}$, surely this implies the "current flow" people speak of is $\frac{\delta^2 q}{\delta t^2}$? Thank you for your help.
2014-08-23 03:36:05
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5630883574485779, "perplexity": 471.6992570032956}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-35/segments/1408500825010.41/warc/CC-MAIN-20140820021345-00385-ip-10-180-136-8.ec2.internal.warc.gz"}
https://eng.libretexts.org/Bookshelves/Civil_Engineering/Book%3A_The_Delft_Sand_Clay_and_Rock_Cutting_Model_(Miedema)/08%3A_Rock_Cutting-_Atmospheric_Conditions/8.11%3A_Nomenclature
Symbol | Description | Unit
a, $\tau_\mathrm{a}$ | Adhesive shear strength | kPa
A | Adhesive force on the blade | kN
BTS | Brazilian Tensile Strength | kPa
c, $\tau_\mathrm{c}$ | Cohesive shear strength | kPa
cm | Mobilized cohesive shear strength | kPa
C | Cohesive force on shear plane | kN
Esp | Specific energy | kPa
F | Force | kN
Fc | Cutting force on chisel (Evans model) | kN
Fn | Normal force on chisel (Evans model) | kN
Fch | Horizontal force component (Evans model) | kN
Fcv | Vertical force component (Evans model) | kN
Fh | Horizontal cutting force | kN
Fv | Vertical cutting force | kN
g | Gravitational constant (9.81) | m/s²
G | Gravitational force | kN
hi | Initial thickness of layer cut | m
hb | Height of the blade | m
K1 | Grain force on the shear plane | kN
K2 | Grain force on the blade | kN
I | Inertial force on the shear plane | kN
n | Power in Nishimatsu model | -
N1 | Normal grain force on shear plane | kN
N2 | Normal grain force on blade | kN
p | Stress in shear plane (Nishimatsu model) | kPa
p0 | Stress at tip of chisel (Nishimatsu model) | kPa
Pc | Cutting power | kW
Q | Production | m³
r | Radius in Evans model | m
r | Adhesion/cohesion ratio | -
r1 | Pore pressure on shear plane/cohesion ratio | -
r2 | Pore pressure on blade/cohesion ratio | -
R | Radius of Mohr circle | kPa
R | Force on chisel (Evans model) | kN
Rn | Normal force on chisel surface (Evans model) | kN
Rf | Friction force on chisel surface (Evans model) | kN
R1 | Acting point on the shear plane | m
R2 | Acting point on the blade | m
S1 | Shear force due to internal friction on the shear plane | kN
S2 | Shear force due to external friction on the blade | kN
T | Tensile force | kN
UCS | Unconfined Compressive Strength | kPa
vc | Cutting velocity | m/s
w | Width of the blade | m
W1 | Force resulting from pore under pressure on the shear plane | kN
W2 | Force resulting from pore under pressure on the blade | kN
α | Blade angle | rad
β | Angle of the shear plane with the direction of cutting velocity | rad
ε | Angle of chisel with horizontal (Evans model) | rad
$\tau$ | Shear stress | kPa
$\tau_\mathrm{a}$, a | Adhesive shear strength (strain rate dependent) | kPa
$\tau_\mathrm{c}$, c | Cohesive shear strength (strain rate dependent) | kPa
$\tau_{\mathrm{S1}}$ | Average shear stress on the shear plane | kPa
$\tau_{\mathrm{S2}}$ | Average shear stress on the blade | kPa
σ | Normal stress | kPa
σC | Center of Mohr circle | kPa
σT | Tensile strength | kPa
σmin | Minimum principal stress in Mohr circle | kPa
σN1 | Average normal stress on the shear plane | kPa
σN2 | Average normal stress on the blade | kPa
φ | Angle of internal friction | rad
δ | Angle of external friction | rad
λ | Distance in Nishimatsu model | m
λs | Strengthening factor | -
λ1 | Acting point factor on the shear plane | -
λ2 | Acting point factor on the blade | -
λHF | Flow Type/Crushed Type horizontal force coefficient | -
λVF | Flow Type/Crushed Type vertical force coefficient | -
λHT | Tear Type/Chip Type horizontal force coefficient | -
λVT | Tear Type/Chip Type vertical force coefficient | -
ω | Angle in Evans model | rad
2022-08-12 22:09:51
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6916545629501343, "perplexity": 12026.602993736793}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571758.42/warc/CC-MAIN-20220812200804-20220812230804-00718.warc.gz"}
https://socratic.org/questions/58a607c07c014959d5163162
# What are the molarity, molality, and mole fraction of ethylene glycol ($C_2H_6O_2$) in an aqueous solution that contains 40% by mass of the solute? The density of the solution is 1.06 g/mL.

Feb 16, 2017

1. Molality = $10.74\ m$
2. Molarity = $6.83\ M$
3. Mole fraction = $0.162$

#### Explanation:

40% ethylene glycol means: mass of ethylene glycol = 40 g, mass of water (solvent) = 60 g = 0.060 kg, mass of solution = 100 g.

1. Molality

Molar mass of ethylene glycol = 62.07 g/mol

No. of moles = $\frac{40\ \text{g}}{62.07\ \text{g/mol}} = 0.6444\ \text{mol}$

Molality = $\frac{\text{no. of moles}}{\text{mass of solvent in kg}} = \frac{0.6444\ \text{mol}}{0.060\ \text{kg}} = 10.74\ m$

2. Molarity

Molarity = $\frac{\text{no. of moles}}{\text{volume of solution in litres}}$

Density = 1.06 g/mL

Volume = $\frac{\text{Mass}}{\text{Density}} = \frac{100\ \text{g}}{1.06\ \text{g/mL}} = 94.34\ \text{mL} = 0.09434\ \text{L}$

Molarity = $\frac{0.6444\ \text{mol}}{0.09434\ \text{L}} = 6.83\ M$ (6.83 mol/L)

3. Mole fraction

No. of moles of water = $\frac{60\ \text{g}}{18.02\ \text{g/mol}} = 3.329\ \text{mol}$

Total moles = $(3.329 + 0.6444)\ \text{mol} = 3.974\ \text{mol}$

Mole fraction of ethylene glycol = $\frac{0.6444}{3.974} = 0.162$

Feb 16, 2017

$[C_2H_6O_2] = 6.83\ M$, $m_{C_2H_6O_2} = 10.741\ \text{mol/kg}$, $\chi_{C_2H_6O_2} = 0.1621$. Read further to see how it was done.

This is just an exercise in flexing the limits of what you have and calculating various types of concentrations. The solution is aqueous, so the solvent is water, which is why the density is close to 1 g/mL. Knowing the percent by mass, which is

$\%\,\text{w/w} = \frac{\text{mass solute}}{\text{mass solution}}\times 100\%,$

we can assume 1000 g of solvent for convenience (given that the molality is per kg of solvent) to get:

$40\%\ \text{w/w} \implies 0.40 = \frac{x\ \text{g solute}}{(1000 + x)\ \text{g solution}}$

Solving for $x$, we can get the mass of the solute:

$0.40(1000 + x) = x \implies 400 + 0.40x = x \implies 400 = (1 - 0.40)x \implies x = \frac{400}{1 - 0.40} = 666.\overline{6}\ \text{g solute}$

Therefore, we can get the mols of solute and mols of solvent:

$n_{\text{solute}} = \frac{666.\overline{6}\ \text{g solute}}{(2\times 12.011 + 6\times 1.0079 + 2\times 15.999)\ \text{g/mol}} = 10.741$ mols ethylene glycol

$n_{\text{solvent}} = \frac{1000\ \text{g solvent}}{18.015\ \text{g/mol}} = 55.509$ mols water

From there, we have all the info we need to calculate the concentrations.

MOLARITY

$[C_2H_6O_2] = \frac{\text{mols solute}}{\text{L solution}} = \frac{10.741\ \text{mols solute}}{(1000 + 666.\overline{6})\ \text{g solution} \times \frac{1\ \text{mL}}{1.06\ \text{g}} \times \frac{1\ \text{L}}{1000\ \text{mL}}} = 6.83\ M$

MOLALITY

The molality was made simple because we chose the mass of the solvent to be 1000 g, i.e. 1 kg:

$m_{C_2H_6O_2} = \frac{\text{mols solute}}{\text{kg solvent}} = \frac{10.741\ \text{mols solute}}{1\ \text{kg water}} = 10.741\ \text{mol/kg}$

Naturally, we chose 1000 g of water so that we couldn't mess up this calculation as long as we got the mols right (dividing by 1 is easy to get right).

MOLE FRACTION

$\chi_{C_2H_6O_2} = \frac{n_{\text{solute}}}{n_{\text{solute}} + n_{\text{solvent}}} = \frac{10.741\ \text{mols solute}}{10.741\ \text{mols solute} + 55.509\ \text{mols water}} = 0.1621$
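The arithmetic in both answers can be reproduced in a few lines; a sketch of my own (the molar masses are the ones used above):

```python
# 100 g of solution: 40 g ethylene glycol, 60 g water, density 1.06 g/mL.
m_solute, m_solvent = 40.0, 60.0      # g
M_eg, M_water = 62.07, 18.02          # g/mol
density = 1.06                        # g/mL

n_eg = m_solute / M_eg                # mol ethylene glycol
n_water = m_solvent / M_water         # mol water
volume_L = (m_solute + m_solvent) / density / 1000.0

print(f"molality      = {n_eg / (m_solvent / 1000.0):.2f} mol/kg")  # ~10.74
print(f"molarity      = {n_eg / volume_L:.2f} mol/L")               # ~6.83
print(f"mole fraction = {n_eg / (n_eg + n_water):.3f}")             # ~0.162
```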
2019-12-12 10:55:23
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 49, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6087228059768677, "perplexity": 6240.227192248398}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540543252.46/warc/CC-MAIN-20191212102302-20191212130302-00437.warc.gz"}
https://www.math-only-math.com/dividing-decimal-by-a-decimal-number.html
Dividing Decimal by a Decimal Number

Dividing a decimal by a decimal number works just like ordinary division.

How do you divide a decimal by a decimal number?

To divide a decimal by a decimal number, follow the steps below: convert the divisor into a whole number by multiplying the dividend and divisor by a suitable power of 10, then divide the new dividend by the whole number as discussed earlier.

Worked-out problems to find the quotient of a decimal by a decimal number (read the above explanation step by step and try to understand the examples on division of decimals):

1. Find the quotient of:

(i) Divide 96.075 by 6.3

Solution: Since the divisor has 1 decimal place, multiply the dividend and divisor by 10, i.e., (96.075 × 10)/(6.3 × 10) = 960.75/63. Now, divide 960.75 by 63. Dividing the number without the decimal point gives 96075 ÷ 63 = 1525. Since 960.75 has 2 decimal places, the quotient of 960.75 ÷ 63 will also have 2 decimal places.

Therefore, 960.75 ÷ 63 = 15.25

(ii) Divide 24.629 by 1.1

Solution: Since the divisor has 1 decimal place, multiply the dividend and divisor by 10, i.e., (24.629 × 10)/(1.1 × 10) = 246.29/11. Now, divide 246.29 by 11. Dividing the number without the decimal point gives 24629 ÷ 11 = 2239. Since 246.29 has 2 decimal places, the quotient of 246.29 ÷ 11 will also have 2 decimal places.

Therefore, 246.29 ÷ 11 = 22.39

Word problems on division of a decimal by a decimal number:

2. The length of a rectangle is 1.5 m and its area is 14.295 m². Find its breadth.

Solution: Length of the rectangle = 1.5 m. Area of the rectangle = 14.295 m². Therefore, breadth of the rectangle = Area/Length = 14.295/1.5 = (14.295 × 10)/(1.5 × 10) = 142.95/15 = 9.53 m

3. If the cost of 9 books is $206.55, find the cost of 1 book.

Solution: Number of books = 9. Cost of 9 books = $206.55. Therefore, cost of 1 book = $(206.55 ÷ 9) = $22.95
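The scaling step can be mirrored with Python's decimal module; a sketch of my own, not from the source:

```python
from decimal import Decimal

def divide_decimals(dividend: str, divisor: str) -> Decimal:
    a, b = Decimal(dividend), Decimal(divisor)
    # Multiply both by the power of 10 that makes the divisor whole.
    scale = 10 ** max(0, -b.as_tuple().exponent)
    return (a * scale) / (b * scale)

print(divide_decimals("96.075", "6.3"))   # 15.25
print(divide_decimals("24.629", "1.1"))   # 22.39
```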
2021-06-19 03:09:23
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8683049082756042, "perplexity": 4559.221908062844}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487643380.40/warc/CC-MAIN-20210619020602-20210619050602-00148.warc.gz"}
http://luc.lino-framework.org/blog/2015/0831.html
# Monday, August 31, 2015

This weekend I made an optimization in atelier.sphinxconf.sigal_image in order to produce http://belglane.vana-vigala.ee/s6brad/20150829.html

Worked on #469. New module lino_noi.projects.bs3. The first session revealed a subtle bug: actors with an empty set as required_roles were not visible to anonymous users because actions also required SiteUser.

## The murder bug

We finally managed to reproduce and understand what we internally called the murder bug. This bug had caused several cases of sudden data loss, hundreds of persons vanishing "overnight". Continued from 20150824 (Monday, 24 August 2015), hopefully for the last time. The situation where Lino's lino.core.ddh failed to throw a veto was the following: when deleting an MTI child, Lino did not ask its MTI parents for vetos. For example, when deleting a person who was being used as the partner of a user, Lino ran only the DDH for the Person, not those for the Partner. And since we now had a reproducible case, I discovered and fixed another bug: that new loop in kernel_startup (which sets on_delete to PROTECT for the FK fields which are not listed in their model's allow_cascaded_delete) did not work due to a simple typo bug (== instead of =). Added three test cases and a diagnostic utility. Yes, #477 was the murder bug, and #452 was probably even innocent. Checkin. Release in Eupen. Last optimizations on #363. One day I should write more about how Lino manages to make deleting more secure. I stumbled over Stefan Haflidason's article Safer (Soft) Deletion in Django, which shows how complex these things are. Another thing to do is to check whether django-reversion <https://github.com/etianen/django-reversion> would be a replacement for lino.modlib.changes.
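The `==`-for-`=` slip is easy to make in Python because a bare comparison is a valid expression whose result is silently discarded. A minimal illustration of my own (not Lino's actual code):

```python
class Field:
    on_delete = "CASCADE"

f = Field()
f.on_delete == "PROTECT"   # typo: this only compares, the result is thrown away
print(f.on_delete)         # CASCADE -- nothing changed, no error raised
f.on_delete = "PROTECT"    # the intended assignment
print(f.on_delete)         # PROTECT
```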
2018-11-20 13:38:26
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3971363306045532, "perplexity": 6699.412342392117}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039746398.20/warc/CC-MAIN-20181120130743-20181120152743-00430.warc.gz"}
http://www.gradesaver.com/textbooks/math/algebra/intermediate-algebra-12th-edition/chapter-1-review-exercises-page-132/73
## Intermediate Algebra (12th Edition)

The given inequality, $|3-4x|+7\lt-4,$ is equivalent to \begin{array}{l} |3-4x|\lt-4-7 \\\\ |3-4x|\lt-11. \end{array} For any $x$, the left side of the inequality above is always a nonnegative number, which can never be less than the negative number on the right. Hence, there is $\text{no solution}.$
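The conclusion can be double-checked symbolically; my own check, not part of the textbook solution:

```python
import sympy as sp

x = sp.symbols("x", real=True)
# Expect the empty set, since |3 - 4x| + 7 >= 7 > -4 for every real x.
print(sp.solveset(sp.Abs(3 - 4*x) + 7 < -4, x, domain=sp.S.Reals))  # EmptySet
```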
2018-04-21 08:25:49
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.978378415107727, "perplexity": 565.0217045833717}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125945082.84/warc/CC-MAIN-20180421071203-20180421091203-00484.warc.gz"}
https://www.albert.io/ie/biochemistry/bicarbonate-buffer-system-and-blood-ph
# Bicarbonate Buffer System and Blood pH

BIOCHM-B6Q3XJ

The bicarbonate buffer system, as shown in the reaction below, is catalyzed by carbonic anhydrase and is extremely important in maintaining a stable blood pH through its control by respiration.

$$CO_2 + H_2O \rightleftarrows H_2CO_3 \rightleftarrows HCO_3^- + H^+$$

Consider the equation above and determine ALL of the following statements which are NOT true regarding this buffering system.

A The blood pH of a person experiencing hyperventilation (fast, deep breathing) would increase.

B In blood, acid is best neutralized by carbonic acid ($H_2CO_3$), whereas base is best neutralized by bicarbonate ($HCO_3^-$).

C Carbonic acid is a strong acid and bicarbonate is a weak conjugate base.

D One way to lower blood pH would be to repeatedly breathe into a bag.

E The pH of blood is typically 7.4 and the $pK_a$ of carbonic acid is 6.1; therefore, there is more carbonic acid than bicarbonate present in the blood to maintain the pH.

F Diarrhea can lead to a significant loss of $HCO_3^-$ from the blood. This would result in acidosis.

G If there is too little $HCO_3^-$ in the blood, the kidneys can excrete $H^+$. This would cause a shift in the reaction equilibrium and increase $HCO_3^-$ levels.
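Statement E can be tested directly with the Henderson-Hasselbalch equation; a small sketch of my own, not part of the question:

```python
pH, pKa = 7.4, 6.1
# Henderson-Hasselbalch: pH = pKa + log10([HCO3-]/[H2CO3])
ratio = 10 ** (pH - pKa)   # [HCO3-]/[H2CO3]
print(f"[HCO3-]/[H2CO3] = {ratio:.1f}")  # ~20, so bicarbonate dominates at pH 7.4
```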
2017-01-21 08:41:15
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4496231973171234, "perplexity": 3017.509550426896}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281001.53/warc/CC-MAIN-20170116095121-00055-ip-10-171-10-70.ec2.internal.warc.gz"}
https://math.stackexchange.com/questions/1172270/can-anyone-explain-what-is-the-intuition-behind-the-following-definition-of-p
# Can anyone explain what is the intuition behind the following definition of $p \Vdash^* \phi$?

Can anyone explain what the intuition is behind the following definition?

Definition 4.25 Let $\Bbb P$ be a poset. Let $\phi(x_1,\ldots,x_n)$ be a formula, $p\in\Bbb P$, and let $\tau_1,\ldots,\tau_n$ be $\Bbb P$-names. We define $p\Vdash^*\phi(\tau_1,\ldots,\tau_n)$ by recursion on the complexity of $\phi$ as follows. 1. $p\Vdash^*\tau_1=\tau_2$ if and only if the following hold. 1. For all $\langle\pi_1,s_1\rangle\in\tau_1$, the set $$\{q: q\leq s_1\rightarrow\exists\langle\pi_2,s_2\rangle\in\tau_2(q\leq s_2\land q\Vdash^*\pi_1=\pi_2)\}$$ is dense below $p$. 2. For all $\langle\pi_2,s_2\rangle\in\tau_2$, the set $$\{q:q\leq s_2\rightarrow\exists\langle\pi_1,s_1\rangle\in\tau_1(q\leq s_1\land q\Vdash^*\pi_1=\pi_2)\}$$ is dense below $p$. 2. $p\Vdash^*\tau_1\in\tau_2$ if and only if the set $$\{q:\exists\langle\pi,s\rangle\in\tau_2(q\leq s\land q\Vdash^*\tau_1=\pi)\}$$ is dense below $p$. 3. $p\Vdash^*\phi(\tau_1,\ldots,\tau_n)\land\psi(\tau_1,\ldots,\tau_n)$ if and only if $$p\Vdash^*\phi(\tau_1,\ldots,\tau_n)\text{ and }p\Vdash^*\psi(\tau_1,\ldots,\tau_n).$$ I know that the sign $p \Vdash \phi(x_1,...,x_n)$ is somehow supposed to tell me that for any generic filter which contains $p$, $M[G] \models \phi(x_1,...,x_n)$. But what is the connection to the definition above? Thank you • You can show that $p\Vdash^* \varphi(x_1,\dots,x_n)$ if and only if for any generic filter $G$ containing $p$, $M[G]\models\varphi(x_1,\dots,x_n)$ using a simple induction argument. – Alex Kruckman Mar 2 '15 at 19:02 • Three notes: 1) There's a typo in the definition. Case B should read "For all $\langle \pi_2,s_2\rangle\in \tau_2$..." 2) The definition as given only covers formulas which are conjunctions of atomics, though it can easily be extended to cover all formulas. 3) For the record, I was not the one who cast the vote to close. – Alex Kruckman Mar 2 '15 at 19:04 • There is no point asking this question until you have read the proof of definability + the truth lemma, if you really care. I never actually read the details of these boring proofs from a book, although I did see my instructor do the proof and we all wished it ended soon. By the way, your notes look like Kunen's 1980 book with typos but cuter font. – hot_queen Mar 2 '15 at 19:28 • @hot_queen: Not quite: there are a few minor changes from Definition $3.3$ in the $1980$ book. At a guess this is from Ken's $2011$ book. – Brian M. Scott Mar 2 '15 at 20:01 ## 1 Answer
So the intuition, if I had to give any for the definition, would be that $\Vdash^*$ is defined as a condition "which occurs on dense open sets [below a condition]", and therefore it is equivalent to saying that it is realized by every generic filter [containing said condition]. Once you see the proof of the equivalence between the two relations, this becomes quite obvious. So let me add a bit of motivation instead. The reason to do that is that given a formula $\varphi(\tau)$ in the language of forcing, the statement $p\Vdash^*\varphi(\tau)$ is definable internally to the ground model. So we can ask whether or not some condition forces a formula or not, internally. So we don't have to use a generic filter in order to find out whether or a statement is independent of $\sf ZFC$. Namely, if we have a forcing notion such that $p\Vdash^*\varphi$ and $q\Vdash^*\lnot\varphi$, then we know that $\sf ZFC$ neither proves nor disproves $\varphi$. And this can be "pulled" to arithmetical meta-theories. So instead of using $\sf ZFC$ as a meta-theory and countable transitive models of [fragments of] $\sf ZFC$, you can use something as weak as $\sf PA$, and even weaker, as your meta-theory and still prove these sort of consistency results.
2021-04-14 16:53:59
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9347550868988037, "perplexity": 215.64782751502733}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038077843.17/warc/CC-MAIN-20210414155517-20210414185517-00089.warc.gz"}
https://www.zbmath.org/?q=an%3A1151.05007
zbMATH — the first resource for mathematics The quasiregular projective planes of order 16. (English) Zbl 1151.05007 Summary: The projective planes of order 16 admitting a large ($$\geq 137$$) quasiregular group of collineations are classified. The classification is done using the theorem of P. Dembowski and F. Piper [Math. Z. 99, 53–75 (1967; Zbl 0145.41003)] and a complete search by computer. No new planes are found. MSC: 05B10 Combinatorial aspects of difference sets (number-theoretic, group-theoretic, etc.) 05B25 Combinatorial aspects of finite geometries 51E15 Finite affine and projective planes (geometric aspects) Full Text:
2021-05-18 11:35:40
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3558019995689392, "perplexity": 3257.38917321006}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243989819.92/warc/CC-MAIN-20210518094809-20210518124809-00061.warc.gz"}
https://imathworks.com/tex/tex-latex-landscape-figure-in-latex/
# [Tex/LaTex] Landscape figure in LaTeX

Tags: floats, landscape

I have a figure I'm trying to insert which is in landscape, and I'm using the following (snipped) code:

\documentclass[12pt, oneside]{book}
\usepackage{graphicx}
\usepackage{wrapfig}
\usepackage{lscape}
\usepackage{rotating}
\usepackage{epstopdf}

\begin{document}
\begin{figure}[ht]
\includegraphics{../figures/pics/DivLibPropProfile}
\caption{Property profile of the diverse library compared to the compound pool.}
\label{fig:PropProf}
\end{figure}
\end{document}

When I compile this with pdfLaTeX, the output behaves as I'd expect: the page is in portrait with the caption at the bottom but the figure in the "wrong" orientation. However, when I compile using LaTeX, the page is turned landscape with the figure now in the correct orientation but with the caption at what is now the left of the page rather than under the figure, which is what you should get using \begin{landscape}. When I do use the landscape environment and compile with LaTeX, the whole page is turned upside-down and everything is wrong. Any ideas how I can get the correct orientation of a landscape figure on a landscape page with the caption under the figure (attached for reference)? I also need to use LaTeX rather than pdfLaTeX for another package to function.

This should work for you, without removing "unnecessary" packages:

\documentclass[12pt, oneside]{book}
\usepackage{graphicx}
\usepackage{wrapfig}
\usepackage{lscape}
\usepackage{rotating}
\usepackage{epstopdf}

\begin{document}
\begin{sidewaysfigure}[ht]
\includegraphics{../figures/pics/DivLibPropProfile}
\caption{Property profile of the diverse library compared to the compound pool.}
\label{fig:PropProf}
\end{sidewaysfigure}
\end{document}

This is a general minimal example using rotating:

\documentclass{article}
\usepackage{graphicx} % needed for \includegraphics

% For rotating figures, tables, etc., including their captions
\usepackage{rotating}

\begin{document}
% A small example of how to use the "rotating" package.
% Displays a figure and its caption in landscape.
\begin{sidewaysfigure}[ht]
\includegraphics[width=\textwidth]{figure.png}
\caption{Caption in landscape to a figure in landscape.}
\label{fig:LandscapeFigure}
\end{sidewaysfigure}
\end{document}
2023-03-21 21:01:37
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9764967560768127, "perplexity": 1998.7200269640766}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296943746.73/warc/CC-MAIN-20230321193811-20230321223811-00274.warc.gz"}
https://www.sparrho.com/item/on-the-number-of-vertex-disjoint-cycles-in-digraphs/1c615ef/
# On the number of vertex-disjoint cycles in digraphs

Research paper by Yandong Bai, Yannis Manoussakis

Indexed on: 08 May '18. Published on: 08 May '18. Published in: arXiv - Mathematics - Combinatorics

#### Abstract

Let $k$ be a positive integer. Bermond and Thomassen conjectured in 1981 that every digraph with minimum outdegree at least $2k-1$ contains $k$ vertex-disjoint cycles. It is famous as one of the one hundred unsolved problems selected in [Bondy, Murty, Graph Theory, Springer-Verlag London, 2008]. Lichiardopol, Pór and Sereni proved in [SIAM J. Discrete Math. 23 (2) (2009) 979-992] that the above conjecture holds for $k=3$. Let $g$ be the girth, i.e., the length of the shortest cycle, of a given digraph. Bang-Jensen, Bessy and Thomassé conjectured in [J. Graph Theory 75 (3) (2014) 284-302] that every digraph with minimum outdegree at least $\frac{g}{g-1}k$ contains $k$ vertex-disjoint cycles. In this note, we first present a new, shorter proof of the Bermond-Thomassen conjecture for the case $k=3$, and then we disprove the conjecture proposed by Bang-Jensen, Bessy and Thomassé by constructing a family of counterexamples.
2021-01-17 05:47:51
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8784602880477905, "perplexity": 823.9188415169782}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703509973.34/warc/CC-MAIN-20210117051021-20210117081021-00163.warc.gz"}
http://www.koreascience.or.kr/article/ArticleFullRecord.jsp?cn=E1BMAX_2014_v51n4_965
Lp-SOBOLEV REGULARITY FOR INTEGRAL OPERATORS OVER CERTAIN HYPERSURFACES

Authors: Heo, Yaryong; Hong, Sunggeum; Yang, Chan Woo

Abstract: In this paper we establish sharp $L^p$-regularity estimates for averaging operators with convolution kernel associated to hypersurfaces in $\mathbb{R}^d$ ($d\geq 2$) of the form $y\mapsto(y,\gamma(y))$, where $y\in\mathbb{R}^{d-1}$ and $\gamma(y)=\sum_{i=1}^{d-1}\pm|y_i|^{m_i}$ with $2\leq m_1\leq\cdots\leq m_{d-1}$.

Keywords: $L^p$-Sobolev regularity

Language: English

References:
1. M. Christ, Failure of an endpoint estimate for integrals along curves, Fourier analysis and partial differential equations (Miraflores de la Sierra, 1992), 163-168, Stud. Adv. Math. CRC, Boca Raton, FL, 1995.
2. E. Ferreyra, T. Godoy, and M. Urciuolo, Endpoint bounds for convolution operators with singular measures, Colloq. Math. 76 (1998), no. 1, 35-47.
3. A. Iosevich, E. Sawyer, and A. Seeger, On averaging operators associated with convex hypersurfaces of finite type, J. Anal. Math. 79 (1999), 159-187.
4. A. Nagel, A. Seeger, and S. Wainger, Averages over convex hypersurfaces, Amer. J. Math. 115 (1993), no. 4, 903-927.
5. A. Seeger, Some inequalities for singular convolution operators in Lp-spaces, Trans. Amer. Math. Soc. 308 (1988), no. 1, 259-272.
6. A. Seeger and T. Tao, Sharp Lorentz space estimates for rough operators, Math. Ann. 320 (2001), no. 2, 381-415.
2018-04-27 03:08:22
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 8, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.91044682264328, "perplexity": 1736.65360657099}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125948950.83/warc/CC-MAIN-20180427021556-20180427041556-00516.warc.gz"}
https://stats.stackexchange.com/questions/370691/calculating-the-deviance-when-performing-a-hypothesis-test-of-a-parameter-in-a-l
# Calculating the deviance when performing a hypothesis test of a parameter in a linear regression model

I want to make sure I am using and calculating the deviance correctly when performing a hypothesis test of the significance of a parameter in a linear regression model. Suppose that I have two models at hand: $$\mu_F = \beta_0 + \beta_1x_1 + \beta_2x_2$$ and $$\mu_R = \beta_0 + \beta_1x_1.$$ I want to test the hypothesis $$H_0: \beta_2 = 0$$ vs. $$H_1: \beta_2 \ne 0.$$ I can use the deviance of the two models as my test statistic to test this hypothesis, correct? If so, is this how I can calculate my deviance? Writing $$\Lambda = \frac{L_R}{L_F}$$ for the likelihood ratio, the test statistic is $$D = -2\log\Lambda = -2\log\left(\frac{L_R}{L_F}\right) \sim \chi^2.$$ The likelihoods are $$L_F = \prod_i\frac{1}{\sqrt{2\pi\sigma^2}}\exp\left(-\frac{1}{2\sigma^2}\left(y_i - (\beta_0 + \beta_1x_{1i} + \beta_2x_{2i})\right)^2\right)$$ and $$L_R = \prod_i\frac{1}{\sqrt{2\pi\sigma^2}}\exp\left(-\frac{1}{2\sigma^2}\left(y_i - (\beta_0 + \beta_1x_{1i})\right)^2\right),$$ so $$D = 2[\log L_F -\log L_R] = 2\sum_i\left[\log\frac{1}{\sqrt{2\pi\sigma^2}}-\frac{1}{2\sigma^2}\left(y_i - (\beta_0 + \beta_1x_{1i} + \beta_2x_{2i})\right)^2 -\log\frac{1}{\sqrt{2\pi\sigma^2}}+\frac{1}{2\sigma^2}\left(y_i - (\beta_0 + \beta_1x_{1i})\right)^2\right] = \frac{1}{\sigma^2}(SSE_R-SSE_F) \sim \chi^2\big((N-2)-(N-3)=1\big).$$ Is this how the deviance is calculated to test the hypothesis $$H_0: \beta_2 = 0$$? Isn't this test usually calculated as $$\frac{SSE_R-SSE_F}{df_R - df_F}?$$ Does $$\frac{1}{\sigma^2}(SSE_R-SSE_F) = \frac{SSE_R-SSE_F}{df_R - df_F}?$$

• I've noticed up to 10 views on my question and 0 input. Just want to be sure and ask: is everything clear in this post? – Omar123456789 Oct 8 '18 at 16:17
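For concreteness, here is how this comparison looks with statsmodels (a sketch of my own, not from the question; the data are simulated):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200
x1, x2 = rng.normal(size=n), rng.normal(size=n)
y = 1.0 + 2.0 * x1 + 0.5 * x2 + rng.normal(size=n)

full = sm.OLS(y, sm.add_constant(np.column_stack([x1, x2]))).fit()
reduced = sm.OLS(y, sm.add_constant(x1)).fit()

# F = [(SSE_R - SSE_F) / (df_R - df_F)] / [SSE_F / df_F]; the extra factor
# SSE_F / df_F is the estimate of sigma^2, which links the two expressions
# in the question.
f_stat = ((reduced.ssr - full.ssr) / 1) / (full.ssr / full.df_resid)
print(f_stat)
print(full.compare_f_test(reduced))  # (F statistic, p-value, df difference)
```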
2019-07-18 21:43:47
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 14, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7560490965843201, "perplexity": 300.5864230230408}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195525829.33/warc/CC-MAIN-20190718211312-20190718233312-00054.warc.gz"}
http://quant.stackexchange.com/tags/algorithmic-trading/hot
# Tag Info

9 You need to differentiate between OTC and listed options in order to appreciate the fact that market makers are still active and relevant in either segment: Listed Options: Actually most listed options market making is governed by market making algorithms; however, most such algorithms are implemented with manual overlays. Something very similar goes on in the ...

6 I found this solid overview of different trading algorithms by Deutsche Bank Research: Trade execution algorithms Designed to minimise the price impact of executing trades of large volumes by 'shredding' orders into smaller parcels and slowly releasing these into the market. Strategy implementation algorithms Designed to read real-time market data and ...

5 If you're missing ticks, then no technique will get those ticks back. If you have two sources, then designate one source as the primary feed and then fill in gaps from the secondary feed. Of course, you'll have to mind the timestamps when determining whether the secondary feed can be used properly.

5 Each venue will allow different order types, and will have different matching rules (the queue positions you mentioned), so this is not general to the whole market, but this is a paper from NYSE that pretty much explains most of the order types I have heard of: http://www.nyse.com/pdfs/fact_sheet_nyse_orders.pdf Also, one factsheet/regulation from the ...

4 You will struggle to put a number on the potential returns of high-frequency trading (HFT), and I think it wouldn't make any sense anyway if you don't take into consideration its risk and its leverage. Achieving 100% return with low volatility seems highly improbable; so ask the trader in question his Sharpe ratio to start with and compare it with yours. ...

4 Repeating groups are a way for FIX to represent arrays. A "number of" field prepends the repeating group to alert the recipient how many elements to expect. For example, Arca uses TradingSessionID (tag 336) to identify pre-open (P1), primary (P2), and post-close (P3) market hours. This group is prepended by NoTradingSessions (tag 386). So, I would use the ...

4 On the request, here are my two cents. Suppose that the price follows the dynamics $$\begin{cases} \mathbf z_{k+1} &= F(\mathbf z_k,\mathbf i_k,\mathbf w_k), \\ \mathbf i_{k+1} &= G(\mathbf i_k, \mathbf w_k) \end{cases}$$ where $\mathbf z_k$ is the price of the traded assets at time $k$, and $\mathbf i_k$ is the value of the parameters of the ...

3 Python / R (my favorite) / Matlab are fine to make a quick analysis, visualize data, prototype and backtest your strategy. But I'm not aware of any trading platform that runs with them. Keep going with whatever you feel comfortable with for prototyping, but I would invest time to learn C (or even C++ in phase II, if you have enough time) as many trading ...

3 Proof-of-work systems are generally used where you do not trust the client; the Bitcoin one is used to slow down the generation of new coins and is adaptive; if hardware speeds up, the work gets harder. By contrast, an exchange has a contractual agreement with the client, and can require it to authenticate, encrypt etc. The central problem, though, is that ...

3 In the paper Optimal split of orders across liquidity pools: a stochastic algorithm approach (2011) we present the theoretical aspect of liquidity seeking, so you will learn how these algorithms work. There is a seminal (once again) white paper by Robert Almgren on iceberg chasing that is very informative too.

3 Indeed, algorithmic trading is a very hidden subject.
It is even known that working in the algorithmic trading sector is very lonely because nobody is willing to share secrets, ideas or innovations. Mentioning this, I have recently talked to a Technical Analyst/Quant who has exposed some of his secrets, one of which was risk management. The terms you are ...

3 Some reading that may be of interest to you and which proceeds along similar lines of thought is that of Shmilovici in "Predicting Stock Returns Using a Variable Order Markov Tree Model". Abstract: "The weak form of the Efficient Market Hypothesis (EMH) states that the current market price fully reflects the information of past prices and rules out ...

3 Obviously merging two streams is harmless and it should be done. But it's hard to advise you regarding the "interpolation" methods you can use to generate the ticks without knowing why you need this. The reason is that any method will introduce a certain bias to the data. Therefore, it very much depends on what you are going to do with your altered data on ...

3 Whether it's possible? Absolutely. However, you should probably keep in mind a couple of points: * Many people claim a lot while proving very little to none. This is fine if the issue is a small-talk conversation. Believe it or not, no harm done. However, this is about money, and from my experience I cannot stress enough how important it is to do a very ...

2 It depends on how applied the class is. A deep understanding of stochastic calculus is not required for "P-Quants", the type of person that lives in the physical world of forecasting and risk. That being said, understanding the type of models that get used by the Q-side (requiring lots of stochastic theory) is a useful skill to have. Like John said, if you ...

2 The broker algorithms or the trading algorithms are designed for the optimal execution of large amounts of stocks with different benchmarks (e.g. VWAP, PoV, Implementation Shortfall or Slippage, Price Inline, TWAP, DWAP, etc.). These algorithms sometimes use statistical methods and market microstructure analysis (to analyse spreads, volume, seasonality, ...

2 Assume $n$ markets where each market $n$ has features $Bid(n)$, $Ask(n)$, bid volume $BidV(n)$, ask volume $AskV(n)$, fixed costs $FixC(n)$, and variable costs $VarC(n)$. Assume you buy on market $n$ and sell on market $n+1$. The profit $\Pi(n,n+1)$ of each arbitrage opportunity amounts to \Pi(n,n+1) = V * [(1+VarC(n+1))*Bid(n+1) - ...

1 You are right, these works use deterministic control. Frameworks using stochastic control exist: Bouchard, B., Dang, N.-M., Lehalle, C.-A., 2011. Optimal control of trading algorithms: a general impulse control approach. SIAM J. Financial Mathematics 2 (1), 404-438. URL http://epubs.siam.org/doi/abs/10.1137/090777293?af=R Kharroubi, I., Pham, H., Jun. ...

1 I think Interactive Brokers offers everything you want. They do have a low-cost student (by age) offering but it's not free. You can test your program for free if you use a demo account but it's very limited. They do have paper trading and live trading accounts that are better than the demo account but you must pay for real-time data. This is required ...

1 It's got nothing to do with you being identified as a market maker or not. It is simply that the other participants at that time are passive traders. The choice between hitting a bid or lighting a new level with a new offer is distinct and very different (especially, in some markets, in terms of fees paid or rebates received). So, you're not being ...
1 It depends on your goal. Suppose we have a stock whose top-of-book quotes show far more size on the bid than on the ask. If you want the weighted mid to reflect sentiment at this moment, then certainly the market participants agree that the fair price is less than the mid. However, if you assume that these participants are informed market makers and your ...

1 Take a look at the FIX 4.4 protocol, accessible from http://www.dukascopy.com/swiss/english/forex/api/fix_api/ Thread about C# libraries: http://stackoverflow.com/questions/4876279/fix-library-for-net

1 No, it doesn't have to do with time frames. It's a protocol feature designed to enable something akin to nested data, whether for more compact data transmission, or just to allow one to adhere to rules of semantic sense. Take market data requests, for example, i.e. retrieving the current market depth for a certain instrument. Not only would sending one ...

1 You might want to check out the book Evidence Based Technical Analysis by David Aronson. In it he applies statistical techniques to determine whether certain time series patterns have any predictive power. It's an interesting read and should equip you with some ideas on how to differentiate between folklore and statistical rigor. It also gives you ample ...

1 Algorithmic trading in general is no different from normal trading except that all of the trading is automated. So it encompasses the same risk parameters that normal traders would use. When it comes to high-frequency trading, the risk management checks would be at strategy level as well as "individual trade" level. There would be checks for sizes, values etc. ...

1 The best options AMM guys are rumored to capture roughly 1/3 tick per round trip, net of transaction costs + implementation shortfalls. I had worked for a regional index options MM. With the growth of competition in recent years, expected returns are actually much lower than that today. So realistically, in today's environment, you could net maybe ...

1 One way to think about this is as a missing data problem. You observe the order book constantly, but trades only occur infrequently. One way to resolve this is to perform full information maximum likelihood (other techniques, such as multiple imputation, may be too slow for your needs but it might be useful to look into them), which has an analytical formula ...
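Picking up the cross-venue arbitrage answer above, here is one way to flesh out the truncated profit formula in code. This is my own sketch; the sign conventions on the cost terms are my assumption, since the original expression is cut off in the source:

```python
def arbitrage_profit(ask_n, bid_n1, var_c_n, var_c_n1,
                     fix_c_n, fix_c_n1, volume):
    """Profit from buying volume V on market n and selling it on market n+1."""
    buy_cost = (1 + var_c_n) * ask_n * volume + fix_c_n
    sell_proceeds = (1 - var_c_n1) * bid_n1 * volume - fix_c_n1
    return sell_proceeds - buy_cost

# In practice, volume would be capped by min(AskV(n), BidV(n+1)).
print(arbitrage_profit(ask_n=100.00, bid_n1=100.20, var_c_n=0.0005,
                       var_c_n1=0.0005, fix_c_n=1.0, fix_c_n1=1.0,
                       volume=1000))
```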
2014-03-12 04:32:05
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3017304539680481, "perplexity": 1633.7436710814075}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1394021338216/warc/CC-MAIN-20140305120858-00098-ip-10-183-142-35.ec2.internal.warc.gz"}
http://www.physicsforums.com/showpost.php?p=972579&postcount=1
Thread: Separation of variables

## Separation of variables

This is the first time I've encountered separation of variables with partial differential equations. There are no worked examples, so I need some help to work through this problem. The question seems to be somewhat hand-holding, since it seems to be THE introduction. Q: Apply separation of variables to $u_t = u_x$ by substituting $u=A(x)B(t)$ and then dividing by AB. If one side depends only on $t$ and the other only on $x$, they must equal a constant $k$; what are $A$ and $B$? $$\frac{\partial u}{\partial t}-\frac{\partial u}{\partial x} = 0$$ $$u = A(x)B(t)$$ $$\frac{\partial}{\partial t} \left[ A(x)B(t) \right] - \frac{\partial}{\partial x} \left[ A(x)B(t) \right] = 0$$ $$A(x)B'(t)-A'(x)B(t)=0$$ $$\frac{A(x)B'(t)-A'(x)B(t)}{A(x)B(t)}=0$$ $$\frac{B'(t)}{B(t)}-\frac{A'(x)}{A(x)}=0$$ Now I was reading on various websites that I can set each independent term equal to separation constants to make two coupled (is this the proper word to use?) differential equations. I don't understand where this step comes from, but... $$\frac{B'(t)}{B(t)}=k$$ $$\frac{A'(x)}{A(x)}=k$$ Now solving for $A(x)$ and $B(t)$. I'm a little rusty here, so I don't know if this part is correct. Rewriting the two equations above in Leibniz notation: $$\frac{dB(t)}{dt} \cdot \frac{1}{B(t)} = k$$ Separating: $$\frac{dB(t)}{B(t)} = k\, dt$$ $$\int \frac{dB(t)}{B(t)} = \int k\,\,dt$$ $$\ln B(t) = kt +c$$ $$B(t) = e^{kt+c}$$ And subsequently: $$A(x) = e^{kx+c}$$ Does this make sense? :)
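Since both ratios equal the same constant $k$, the product $u = A(x)B(t)$ is proportional to $e^{k(x+t)}$, and that it solves the PDE can be checked symbolically (my own addition, not part of the original thread):

```python
import sympy as sp

x, t, k = sp.symbols("x t k")
u = sp.exp(k * x) * sp.exp(k * t)   # A(x)B(t) with the constants absorbed
# u_t - u_x should vanish identically if the separated solution is right.
print(sp.simplify(sp.diff(u, t) - sp.diff(u, x)))  # 0
```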
2013-06-19 15:39:30
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7536903023719788, "perplexity": 318.97748980038136}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368708835190/warc/CC-MAIN-20130516125355-00078-ip-10-60-113-184.ec2.internal.warc.gz"}
https://www.nextgurukul.in/wiki/concept/cbse/class-8/science/stars-and-the-solar-system/celestial-bodies/3957907
Celestial Bodies

All natural bodies visible in the sky, outside the Earth's atmosphere, constitute the celestial bodies, e.g. stars, planets, their moons, comets, asteroids, meteors, etc. The Moon is the celestial body closest to us. It is the only natural satellite of the Earth. It is a non-luminous body and it reflects the sunlight incident on it. Due to its revolution around the Earth, when it is at different positions in its path, the apparent disc of the Moon changes, which gives rise to its phases. When the Moon is positioned between the Sun and the Earth, the illuminated portion of the Moon faces away from the Earth, and we are not able to see the Moon. We call this day the New Moon day. With time, the position of the Moon changes and the illuminated portion of the Moon exposed to the Earth gradually increases. Thus, the size of the apparent disc of the Moon increases gradually from a crescent to a full round when the Earth lies between the Moon and the Sun. We call this day the Full Moon day. The waxing or waning of the disc of the Moon every night as it revolves around the Earth is called the phases of the Moon. The duration from one New Moon day to the succeeding New Moon day is called the lunar month. The Moon is about one-fourth the size of the Earth. The surface of the Moon has many craters, which might have been formed by the collision of heavenly bodies like meteorites with the Moon. The Moon has no atmosphere because the gravity of the Moon is too small to hold one. Since there is no atmosphere on the Moon, there is no life on it. The huge distances between the Earth and other celestial bodies are measured in light years. A light year is the distance covered by light in one year.
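To make the light-year definition concrete, here is the one-line calculation (a sketch; it assumes the defined speed of light and a Julian year of 365.25 days):

```python
c = 299_792_458             # speed of light, m/s
year = 365.25 * 24 * 3600   # one Julian year in seconds
print(f"{c * year:.3e} m")  # ~9.461e+15 m, i.e. about 9.46 trillion km
```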
2019-07-16 11:24:49
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 2, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.24135014414787292, "perplexity": 644.3920287026821}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195524522.18/warc/CC-MAIN-20190716095720-20190716121720-00354.warc.gz"}
https://www.esaral.com/q/the-density-of-a-solid-metal-sphere-is-determined-by-measuring-its-mass-and-its-diameter-31771
# The density of a solid metal sphere is determined by measuring its mass and its diameter. Question: The density of a solid metal sphere is determined by measuring its mass and its diameter. The maximum error in the density of the sphere is $\left(\frac{x}{100}\right) \%$. If the relative errors in measuring the mass and the diameter are $6.0 \%$ and $1.5 \%$ respectively, the value of $x$ is ________ Solution: $(1050)$ Density: $\rho=\frac{M}{V}=\frac{M}{\frac{4}{3} \pi\left(\frac{D}{2}\right)^{3}} \Rightarrow \rho=\frac{6}{\pi} M D^{-3}$ $\therefore \frac{\Delta \rho}{\rho}=\frac{\Delta m}{m}+3\left(\frac{\Delta D}{D}\right)=6\%+3 \times 1.5\%=10.5 \%$ $\frac{\Delta \rho}{\rho}=10.5\%=\frac{1050}{100} \%=\left(\frac{x}{100}\right) \%$ $\therefore x=1050.00$
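The same error propagation, checked numerically (a sketch; the relative errors come straight from the problem statement):

```python
dm_over_m = 6.0   # % relative error in mass
dD_over_D = 1.5   # % relative error in diameter

# rho ~ M * D**-3, so relative errors add as |1|*dm/m + |-3|*dD/D:
drho = dm_over_m + 3 * dD_over_D
print(drho)         # 10.5 (%)
print(drho * 100)   # x = 1050, since drho equals (x/100) %
```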
2023-03-26 06:43:03
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7801218628883362, "perplexity": 270.0658244033515}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296945433.92/warc/CC-MAIN-20230326044821-20230326074821-00787.warc.gz"}
https://paperity.org/journal/236782/boundary-value-problems
# Boundary Value Problems http://www.boundaryvalueproblems.com/ ## List of Papers (Total 1,629) #### Topological properties of $C^{0}$-solution set for impulsive evolution inclusions In this paper, we study the topological properties of a $C^{0}$-solution set of impulsive evolution inclusions. The definition of $C^{0}$-solutions for impulsive functional evolution inclusions is introduced. The $R_{\delta}$-property of the $C^{0}$-solution set is studied for compact as well as noncompact semigroups on compact intervals. Applying the inverse... #### Dynamics of a predator–prey system with three species This paper is concerned with the dynamics of a predator–prey system with three species. When the domain is bounded, the global stability of the positive steady state is established by contracting rectangles. When the domain is $\mathbb{R}$, we study the traveling wave solutions implying that one predator and one prey invade the habitat of another prey. More precisely, the... #### Global existence of solutions for a class of thermoelastic plate systems This paper is concerned with the initial-boundary value problem for a class of thermoelastic plate systems. Under some appropriate assumptions, the global existence of solutions is obtained. #### Existence of stable standing waves for the Schrödinger–Choquard equation In this paper, by variational methods and the profile decomposition of bounded sequences in $H^{1}$ we study the existence of stable standing waves for the Schrödinger–Choquard equation with an $L^{2}$-critical nonlinearity. Our results extend some earlier results. #### Blow-up phenomena and lifespan for a quasi-linear pseudo-parabolic equation at arbitrary initial energy level In this paper, we continue to study the initial boundary value problem of the quasi-linear pseudo-parabolic equation $$u_{t}-\triangle u_{t}-\triangle u-\operatorname{div}\bigl(|\nabla u|^{2q}\nabla u\bigr)=u^{p}$$ which was studied by Peng et al. (Appl. Math. Lett. 56:17–22, 2016), where the blow-up phenomena and the lifespan... #### General decay and blow-up of solutions for a nonlinear viscoelastic wave equation with strong damping This article is concerned with the decay and blow-up properties of a nonlinear viscoelastic wave equation with strong damping. We first show a local existence theorem. Then, we prove the global existence of solutions and establish a general decay rate estimate. Finally, we show the finite time blow-up result for some solutions with negative initial energy and positive initial... #### Solutions for a class of fractional Langevin equations with integral and anti-periodic boundary conditions In this paper, we consider a class of fractional Langevin equations with integral and anti-periodic boundary conditions. By using some fixed point theorems and the Leray–Schauder degree theory, several new existence results of solutions are obtained. #### Existence of solution for integral boundary value problems of fractional differential equations In this paper, we discuss the existence of positive solutions of fractional differential equations on the infinite interval $(0,+\infty)$. The positive solution of fractional differential equations is gained by using the properties of the Green's function, Leray–Schauder's fixed point theorems, and Guo–Krasnosel'skii's fixed point theorem. As an application, two...
#### On a $p(x)$-biharmonic problem with Navier boundary condition In this paper, we study a $p(x)$-biharmonic equation with Navier boundary condition $$\textstyle\begin{cases} \Delta^{2}_{p(x)}u+a(x)|u|^{p(x)-2}u= \lambda f(x,u)+\mu g(x,u)\quad \text{in } \Omega, \\ u=\Delta u=0 \quad \text{on } \partial\Omega. \end{cases}$$ #### Fixed point theorems for a class of nonlinear sum-type operators and application in a fractional differential equation In this paper, we consider the fixed point for a class of nonlinear sum-type operators '$A+B+C$' on an ordered Banach space, where A, B are two mixed monotone operators and C is an increasing operator. Without assuming the existence of upper-lower solutions or compactness or continuity conditions, we prove the unique existence of a positive fixed point and also construct... #### A diffusive stage-structured model with a free boundary In this paper we mainly consider a free boundary problem for a single-species model with stage structure in a radially symmetric setting. In our model, the individuals of a new or invasive species are classified as belonging either to the immature or to the mature cases. We firstly study the asymptotic behavior of the solution to the corresponding initial problem, then obtain a... #### A global nonexistence of solutions for a quasilinear viscoelastic wave equation with acoustic boundary conditions In this paper, we consider a quasilinear viscoelastic wave equation with acoustic boundary conditions. Under some appropriate assumptions on the relaxation function g, the function Φ, $p > \max \{ \rho +2, m, q, 2\}$, and the initial data, we prove a global nonexistence of solutions for a quasilinear viscoelastic wave equation with positive initial... #### Positive solutions of conformable fractional differential equations with integral boundary conditions In this paper, we discuss the existence of positive solutions of the conformable fractional differential equation $T_{\alpha}x(t)+f(t,x(t))=0$, $t\in [0,1]$, subject to the boundary conditions $x(0)=0$ and $x(1)=\lambda \int_{0}^{1}x(t)\,\mathrm{d}t$, where the order α belongs to (1... #### Existence of a regular solution for 1D Green–Naghdi equations with surface tension at a large time instant In this paper the model 1D-GNσ is considered, which concerns the 1D Green–Naghdi equations with non-flat bottom and under the influence of surface tension, widely used in coastal oceanography to describe the propagation of large-wave amplitudes. The purpose of this paper is to show that the solution of 1D-GNσ can be made by the Picard iterative scheme, which proves that...
2019-05-21 08:37:14
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7499651908874512, "perplexity": 494.9155534102434}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232256314.25/warc/CC-MAIN-20190521082340-20190521104340-00143.warc.gz"}
https://hpmuseum.org/forum/printthread.php?tid=8729
12c Solving for n - Printable Version +- HP Forums (https://www.hpmuseum.org/forum) +-- Forum: HP Calculators (and very old HP Computers) (/forum-3.html) +--- Forum: General Forum (/forum-4.html) +--- Thread: 12c Solving for n (/thread-8729.html) 12c Solving for n - Zac Bruce - 07-24-2017 11:27 AM Hi all, I'm hoping someone can lend their wisdom here. As most of you will well know, when using the 12c to solve for n, it will only solve as an integer, always rounding up. I tried to search but did not come across a solution. The user manual offers the following solution for when a PMT amount is involved: 10.5 g i 35000 PV 325 CHS PMT n (=328) To find the partial last payment: FV (181.89) RCL PMT (-325) + (-143.11), which is the final fractional payment. To then make the answer correlate with other financial calculators I came up with the following (with the -143.11 still on screen): RCL PMT X> 328 327 n PV => 34991,78 35000 – PV 0 PMT 328 n FV => 143,11 OK, that's a lot more complicated. #-) (07-25-2017 10:52 AM)Zac Bruce Wrote:  In place of my original approximate solution (n=327.44), it would instead be interpreted as "327 full payments of $325, plus a final payment of $143.11" and not in terms of time. I'd say that your approximate result of n=327,4403 may be interpreted as 327,4403 payments, i.e. 327 full and one final payment of 0,4403... times $325 = $143,11. (07-25-2017 10:52 AM)Zac Bruce Wrote:  The site that Paul suggested included a program to solve for a mathematically correct value of n, which is slightly different to your suggestion (allows for END or BEGIN by storing 1 in either STO 1 or STO 2) but comes up with the same results. Yes, my little program does only a very basic calculation for this particular case. Real TVM programs do a lot more stuff. ;-) On the other hand the HP-41 standard pac's TVM program (cf. lines 06...20) shows how various scenarios (OK, END mode only) can be handled with one simple formula for n. Here is a translation for the 12C: Code: 01 RCL FV 02 CHS 03 RCL PMT 04 RCL i 05 / 06 EEX 07 2 08 x 09 + 10 RCL PV 11 LstX 12 + 13 / 14 LN 15 1 16 RCL i 17 % 18 + 19 LN 20 / 21 GTO 00 (07-25-2017 10:52 AM)Zac Bruce Wrote:  I guess if I do the work to internalize and memorize the equation, then yes, very simple! I'm perhaps too blessed to have constant access to electronics and the internet to do the "hard" work for me! C'mon, this compound interest formula is as basic as it gets. ;-) (07-25-2017 10:52 AM)Zac Bruce Wrote:  I bought a copy of Gene Wright's book; I think I might go and get it printed and bound tomorrow, start reading, and try to actually understand the maths, rather than just understanding which buttons to press! I think there are three levels involved here: (0) Knowing which buttons to press (1) Knowing the math behind this (2) Knowing the meaning of the math. ;-) Regarding the latter I'm sometimes a bit lost myself. Dieter RE: 12c Solving for n - Zac Bruce - 07-27-2017 12:19 PM Dieter, I've made some progress through Gene's book now, and you're right about it being pretty simple. My math is not strong, so I didn't know that the log of a number to a power is equal to the power multiplied by the log of that number. What a mouthful. Gene does offer a quick and dirty way to approximate, but it's really no simpler than just working through the formula. I still don't understand what a logarithm really is, but at least compound interest is starting to make sense. So I guess I'm at stage two, at least!
Regards, Zac RE: 12c Solving for n - Dieter - 07-27-2017 07:00 PM (07-27-2017 12:19 PM)Zac Bruce Wrote:  My math is not strong, so I didn't know that the log of a number to a power is equal to the power multiplied by the log of that number. What a mouthful. (...) I still don't understand what a logarithm really is, but at least compound interest is starting to make sense. Power and exponential functions as well as their inverses (roots and logs) are basic math that is not too hard to understand. If it can be done at school in grade 8 or 9, you will be able to get it as well. All this stuff is required to understand the concept of the time value of money, both in simple compound interest problems, in annuities, and in other basic concepts like NPV or IRR. So every minute you spend on this for a better understanding of these basics will pay off later. Financial math simply is not possible without this. There is a reason why the 12C has only a few scientific functions while it does have y^x, e^x and ln x. ;-) Dieter RE: 12c Solving for n - Zac Bruce - 07-27-2017 09:40 PM Dieter, When I tell people that I'm studying accounting/finance, usually the first question I get asked is, "Are you good at maths? (sic)". Usually I just laugh and say, "Yeah." The truth being that I'm very systematic and I enjoy processes and logic. So, I would have been good at math if only I'd been paying more attention! I took advanced math in year 10, but I don't remember ever bringing my notebook or doing exercises. I did pass, but I remember very little. Last trimester I did my first statistics for business course and really enjoyed it, and topped my class. The math involved in (basic) probabilities and statistics is not so difficult to understand. But I realize that I lack any strong foundation, so I'm currently doing a self-paced bridging course offered by my university. Then it's on to quantitative skills with applications, which covers logarithmic, exponential and inverse functions in greater detail, among other things. Regards, Zac
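For readers who want to check the formula off the calculator, here is a direct transcription of the 21-step 12C program above into Python (a sketch: END mode, i entered as a percentage per period, and the usual sign convention with payments negative):

```python
from math import log

def solve_n(pv, pmt, fv, i_pct):
    """n = ln((-FV + 100*PMT/i) / (PV + 100*PMT/i)) / ln(1 + i/100),
    the same quantity the 12C program computes step by step."""
    a = 100.0 * pmt / i_pct
    return log((-fv + a) / (pv + a)) / log(1.0 + i_pct / 100.0)

# The manual's example: 10.5 g i (i.e. 10.5/12 % per month),
# 35000 PV, 325 CHS PMT, FV = 0:
print(solve_n(35000, -325, 0, 10.5 / 12))   # ~327.44, matching the thread
```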
2022-01-16 11:07:13
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5139145255088806, "perplexity": 1613.3615446268436}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320299852.23/warc/CC-MAIN-20220116093137-20220116123137-00527.warc.gz"}
http://www.mathisfunforum.com/viewtopic.php?pid=258217
Discussion about math, puzzles, games and fun. ## #701 2013-03-20 06:00:47 SteveB Member Registered: 2013-03-07 Posts: 574 ### Re: Mandy Jane's Corner By the way, I would like to add an important point about basic arithmetic using fractions: when adding fractions it is very important to always make the denominators (the bottom numbers) the same first. (The same applies to subtraction.) When multiplying, however, you can just multiply the numerators and the denominators. Last edited by SteveB (2013-05-09 07:01:58) Offline ## #702 2013-03-20 10:56:41 mathgogocart Member Registered: 2012-04-29 Posts: 1,440 ### Re: Mandy Jane's Corner Steve, what is she doing? It looks like she is doing fractions/percents? Here are some questions for practice, Mandy. Steve, if you are here, teach Mandy: 1. $6/7+8/9=?$ 2. $5/7+5/8$ 3. $3/4+7/8=?$ Hey. Offline ## #703 2013-03-21 02:15:58 SteveB Member Registered: 2013-03-07 Posts: 574 ### Re: Mandy Jane's Corner mathgogocart wrote: Steve, what is she doing? It looks like she is doing fractions/percents? Percentages ... but fractions were necessary for my explanation to make sense. Every fraction can be written as a fraction with 100 as the denominator and is therefore a percentage. So percentages can be converted into a fraction with 100 as the denominator (and cancelled if appropriate). If you multiply by a percentage then it solves a problem in the form "A% of a number". You could re-write this as:  (A %) x (number) = (A/100) x number = (A x number) / 100 Last edited by SteveB (2013-05-09 07:02:59) Offline ## #704 2013-03-21 02:56:44 mandy jane Member Registered: 2010-09-23 Posts: 1,126 ### Re: Mandy Jane's Corner [\math]\frac{1}{5} x \frac{3 }{7 } = \frac {3 }{23 }[\math] [\math\]\frac {2}{3 } x \frac {4}{10} = \frac {8}{30} [\math] [\math]\frac {3 }{5} x \frac {5}{3} = \frac {15}{15} [\math] Last edited by mandy jane (2013-03-21 03:01:21) Offline ## #705 2013-03-21 03:01:41 SteveB Member Registered: 2013-03-07 Posts: 574 ### Re: Mandy Jane's Corner mandy jane wrote: \frac{1}{5} x \frac{3 }{7 } = \frac {3 }{23 } This one is incorrect. However, I think it is just that you got the times-table bit wrong: 5 x 7 is not 23. Try again perhaps? \frac {2}{3 } x \frac {4}{10} = \frac {8}{30} Yes, that is correct. Do you know how to cancel the fraction down to a simpler form? \frac {3 }{5} x \frac {5}{3} = \frac {15}{15} Again, that is correct. How could you cancel or simplify this? PS: if you use the frac thing in LaTeX you need to use the [math] tags as well. PS no 2: If you are using LaTeX you need to start with [math] without the \ .... if you see what I mean. Last edited by SteveB (2013-03-21 03:05:42) Offline ## #706 2013-03-21 03:12:30 mandy jane Member Registered: 2010-09-23 Posts: 1,126 ### Re: Mandy Jane's Corner Last edited by bob bundy (2013-03-21 03:47:03) Offline ## #707 2013-03-21 03:14:07 SteveB Member Registered: 2013-03-07 Posts: 574 ### Re: Mandy Jane's Corner Good, you have got the hang of it. You can use \times to give an ×. I know what you mean, but some people could argue that you have used 'x' in the sense of the letter here. More importantly, the denominator is still wrong: 5 x 7 is not 25 either. Last edited by SteveB (2013-03-21 03:21:54) Offline ## #708 2013-03-21 03:16:36 mandy jane Member Registered: 2010-09-23 Posts: 1,126 ### Re: Mandy Jane's Corner I have done post 706 again, ok?
Offline ## #709 2013-03-21 03:20:43 SteveB Member Registered: 2013-03-07 Posts: 574 ### Re: Mandy Jane's Corner I have done my post again as well. Offline ## #710 2013-03-21 03:27:06 mandy jane Member Registered: 2010-09-23 Posts: 1,126 ### Re: Mandy Jane's Corner 5 x 7 = 35, am I right? Offline ## #711 2013-03-21 03:28:07 SteveB Member Registered: 2013-03-07 Posts: 574 ### Re: Mandy Jane's Corner The reason why I have given us another look at fractions is that any percentage can be re-written as a fraction with 100 as the denominator and cancelled down if appropriate. So let us say that the percentage is 50%. We can write this as: Can you see how we could cancel or simplify this fraction? Offline ## #712 2013-03-21 03:29:52 SteveB Member Registered: 2013-03-07 Posts: 574 ### Re: Mandy Jane's Corner I have just looked at post #710 and you are correct. Good. Now look at post #711. Offline ## #713 2013-03-21 03:33:44 mandy jane Member Registered: 2010-09-23 Posts: 1,126 ### Re: Mandy Jane's Corner 50 goes into 100 2 times and 50 goes into 50 once, so 1/2, am I right? Offline ## #714 2013-03-21 03:35:13 SteveB Member Registered: 2013-03-07 Posts: 574 ### Re: Mandy Jane's Corner Yes, this is correct, so let us consider a problem involving 50%. I need to calculate 50% of 70. How could I do this? Offline ## #715 2013-03-21 03:38:44 SteveB Member Registered: 2013-03-07 Posts: 574 ### Re: Mandy Jane's Corner Okay, I will give you a clue: substitute 50% with (1/2), then multiply. The number 70 could be thought of as the fraction (70/1). Last edited by SteveB (2013-03-21 03:40:13) Offline ## #716 2013-03-21 03:40:44 mandy jane Member Registered: 2010-09-23 Posts: 1,126 ### Re: Mandy Jane's Corner not sure what to do? Offline ## #717 2013-03-21 03:45:44 SteveB Member Registered: 2013-03-07 Posts: 574 ### Re: Mandy Jane's Corner Right, here is what I was trying to get you to work out: You have worked out that 50% = (50/100) = (1/2). We need 50% of 70. So this is the same as: (1/2) x 70. Which is: (1/2) x (70/1). Using the fraction multiplication rules: This is: (70/2). In this case, because the number 70 was a whole number, we did not really need to convert it into a fraction, but if we use fraction multiplication then you know how to multiply a fraction by a percentage as well. Offline ## #718 2013-03-21 03:46:51 SteveB Member Registered: 2013-03-07 Posts: 574 ### Re: Mandy Jane's Corner Now what can we cancel (70/2) to? Is this a whole number? Offline ## #719 2013-03-21 03:48:29 mandy jane Member Registered: 2010-09-23 Posts: 1,126 ### Re: Mandy Jane's Corner what do you think we should do now then? Offline ## #720 2013-03-21 03:50:08 SteveB Member Registered: 2013-03-07 Posts: 574 ### Re: Mandy Jane's Corner Have you got a calculator handy? If so, how about you type the division 70 divided by 2 into it. What do you get? What does this tell you about the fraction (70/2)? Offline ## #721 2013-03-21 03:53:53 mandy jane Member Registered: 2010-09-23 Posts: 1,126 ### Re: Mandy Jane's Corner 70/2 can go down to 35/1, am I right? Offline ## #722 2013-03-21 04:01:26 SteveB Member Registered: 2013-03-07 Posts: 574 ### Re: Mandy Jane's Corner Just to re-cap: 50% of 70 was the original question. Offline ## #723 2013-03-21 04:04:50 SteveB Member Registered: 2013-03-07 Posts: 574 ### Re: Mandy Jane's Corner Yes, your post #721 is correct. So this fraction (35/1) is actually a whole number. It is simply 35, because any number divided by 1 is the number itself. So 35 divided by 1 is 35. So (35/1) = 35 also.
Right, I will now give you a similar problem. See if you can work it out. What is 40% of 60? Offline ## #724 2013-03-21 04:05:36 mandy jane Member Registered: 2010-09-23 Posts: 1,126 ### Re: Mandy Jane's Corner not sure what to do next? Offline ## #725 2013-03-21 04:08:52 SteveB Member Registered: 2013-03-07 Posts: 574 ### Re: Mandy Jane's Corner Right, let us look at the 40% bit of that question. What can we re-write that as? Offline
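For anyone checking these percentage exercises by machine, a sketch with Python's exact fractions, mirroring the hand method used in the thread:

```python
from fractions import Fraction

# 50 % of 70, exactly as worked out above: (50/100) * 70 = 35
print(Fraction(50, 100) * 70)   # 35

# The follow-up exercise, 40 % of 60: (40/100) * 60 = (2/5) * 60 = 24
print(Fraction(40, 100) * 60)   # 24
```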
2015-04-02 04:26:16
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 3, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9363645911216736, "perplexity": 14130.664131547075}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-14/segments/1427131317541.81/warc/CC-MAIN-20150323172157-00232-ip-10-168-14-71.ec2.internal.warc.gz"}
https://hbfs.wordpress.com/category/mathematics/
## Chaotic Rulers November 28, 2017 I'm currently working with one of my students on a laser-based range finder. To assess the precision of the device, I needed a calibration piece. Because of the setup, the piece should look like a stair. The piece should allow a wide range of different readings, say from 1 to 10 centimeters in known increments, say, 1 cm. The naïve way of building such a piece is to build a stair with 10 steps. However, if you do it like this, the piece is wide, cumbersomely so. Is there a much better way to do so? ## The Middle Square Method (Generating Random Sequences VIII) November 21, 2017 Von Neumann proposed the middle square method of generating pseudo-random numbers in 1949, in a paper published a bit later. The method is simple: you take a seed, say 4 digits long, you square it, and extract the middle 4 digits, which become the next seed. For example: $4373\to{}19123129\to{}1231$. While it seems random enough, is it? ## Rational Approximations (Part II) October 10, 2017 Last week, we noticed a fun connection between lattices and fractions, which helped us get rational approximations to real numbers. Since only points close to the (real-sloped) line are of interest, only those points representing fractions are candidates for rational approximation, and the closer to the line they are, the better. But what if we find a point real close to the line? What information can we use to refine our guess? ## Rational Approximations October 3, 2017 Finding rational approximations to real numbers may help us simplify calculations in every day life, because using $\displaystyle \pi\approx\frac{355}{113}$ makes back-of-the-envelope estimations much easier. It also may have some application in programming, when your CPU is kind of weak and does not deal well with floating point numbers. Floating point numbers emulated in software are very slow, so if we can dispense with them and use integer arithmetic, all the better. However, finding good rational approximations to an arbitrary constant is not quite as trivial as it may seem. Indeed, we may think that using something like $\displaystyle a=\frac{\lfloor 1000000\,c\rfloor}{1000000}$ will be quite sufficient as it will give you 6 digits of precision, but why use 3141592/1000000 when 355/113 gives you better precision? Certainly, we must find a better way of finding approximations that are simultaneously precise and … well, let's say cute. Well, let's see what we could do. ## Halton Sequences (Generating Random Sequences VII) September 7, 2017 Quite a while ago, while discussing Monte Carlo integration with my students, the topic of choosing sample locations came up, and we discussed low-discrepancy sequences (a.k.a. quasi-random sequences). In a low-discrepancy sequence, values generated look kind of uniform-random, but avoid clumping. A closer examination reveals that they are suspiciously well-spaced. That's what we want in Monte Carlo integration. But how do we generate such sequences? Well, there are many ways to do so. Some more amusing than others, some more structured than others. One of the early examples, the Halton sequence (c. 1964), is particularly well behaved: it generates 0, 0.5, then 0.25 and 0.75, then 0.125, 0.375, 0.625, and 0.875, etc. It does so with a rather simple binary trick. ## In an Old Notebook (Generating Random Sequences VI) April 4, 2017 Looking for something else in old notebooks, I found a diagram with no other indication, but clearly a low-cost random generator. So, why not test it? ## The 1 bit = 6 dB Rule of Thumb, Revisited.
March 28, 2017 Almost ten years ago I wrote an entry about the "1 bit = 6 dB" rule of thumb. This rule states that for each bit you add to a signal, you add 6 dB of signal-to-noise ratio. The first derivation I gave then was focused on the noise, where the maximal noise amplitude was proportional to the amplitude represented by the last bit of the (encoded) signal. Let's now derive it from the most significant bit of the signal to its least significant.
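Two of the posts above invite a quick experiment. Below is a sketch of one middle-square step and of searching for "cute" rational approximations with the standard library; the helper name is mine, not from the blog:

```python
import math
from fractions import Fraction

def middle_square(seed, digits=4):
    """One von Neumann middle-square step: square the seed, zero-pad
    the square to 2*digits characters, keep the middle `digits` digits."""
    sq = str(seed * seed).zfill(2 * digits)
    half = digits // 2
    return int(sq[half:half + digits])

print(middle_square(4373))                         # 1231 (4373**2 = 19123129)

# Best rational approximation of pi with denominator <= 1000:
print(Fraction(math.pi).limit_denominator(1000))   # 355/113
```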
2017-12-18 01:20:40
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 3, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6867393851280212, "perplexity": 779.7860523438779}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948599549.81/warc/CC-MAIN-20171218005540-20171218031540-00624.warc.gz"}
https://galgebra.readthedocs.io/en/latest/generated/galgebra.metric.html
# galgebra.metric¶ Metric Tensor and Derivatives of Basis Vectors. ## Members¶ class galgebra.metric.Metric(basis, *, g=None, coords=None, X=None, norm=False, debug=False, gsym=None, sig='e', Isq='-')[source] Metric specification g metric tensor Type: sympy matrix[,] g_inv inverse of metric tensor Type: sympy matrix[,] norm normalized diagonal metric tensor Type: list of sympy numbers coords coordinate variables Type: list[] of sympy symbols is_ortho True if basis is orthogonal Type: bool connect_flg True if connection is non-zero Type: bool basis basis vector symbols Type: list[] of non-commutative sympy variables r_symbols reciprocal basis vector symbols Type: list[] of non-commutative sympy variables n dimension of vector space/manifold Type: integer n_range list of basis indices de derivatives of basis functions. Two dimensional list. First entry is the differentiating coordinate. Second entry is the basis vector. Quantities are linear combinations of basis vector symbols. Type: list[][] sig Signature of metric (p,q) where n = p+q. If the metric tensor is numerical and orthogonal it is calculated. Otherwise the following inputs are used:

    Input   Signature   Type
    "e"     (n,0)       Euclidean
    "m+"    (n-1,1)     Minkowski (one negative square)
    "m-"    (1,n-1)     Minkowski (one positive square)
    p       (p,n-p)     General (integer, not string, input)

Type: Tuple[int, int] gsym String for symbolic metric determinant. If self.gsym = 'g' then det(g) is a sympy scalar function of the coordinates with name 'det(g)'. Useful for complex non-orthogonal coordinate systems or for calculations with a general metric. Type: str Parameters: basis – string specification g – metric tensor coords – manifold/vector space coordinate list/tuple (sympy symbols) X – vector manifold function norm – True to normalize basis vectors debug – True to print out debugging information gsym – String s to use "det("+s+")" function in reciprocal basis sig – Signature of metric, default is (n,0), a Euclidean metric Isq – Sign of square of pseudo-scalar, default is '-' Christoffel_symbols(mode=1)[source] mode = 1: Christoffel symbols of the first kind; mode = 2: Christoffel symbols of the second kind Isq = None Sign of I**2, only needed if I**2 is not a number detg = None Determinant of g static dot_orthogonal(V1, V2, g=None)[source] Returns the dot product of two vectors in an orthogonal coordinate system. V1 and V2 are lists of sympy expressions. g is a list of constants that gives the signature of the vector space to allow for non-Euclidean vector spaces. This function is only used to form the dot product of vectors in the embedding space of a vector manifold, or in the case where the basis vectors are explicitly defined by vector fields in the embedding space. A g of None is for a Euclidean embedding space. g_adj = None g_inv = None Inverse of g metric_symbols_list(s=None)[source] Rows of the metric tensor are separated by "," and elements of each row by " ". If the input is a single row it is assumed that the metric tensor is diagonal. Output is a square matrix. galgebra.metric.collect(A, nc_list)[source] Parameters: A – a linear combination of noncommutative symbols with scalar expressions as coefficients nc_list – noncommutative symbols in A to combine Returns: A sum of the terms containing the noncommutative symbols in nc_list such that no elements of nc_list appear more than once in the sum. All coefficients of a given element of nc_list are combined into a single coefficient.
Return type: sympy.Basic galgebra.metric.square_root_of_expr(expr)[source] If the expression is a product of even powers then every power is divided by two and the product is returned. If some terms in the product are not even powers, the sqrt of the absolute value of the expression is returned. If the expression is a number, the sqrt of the absolute value of the number is returned. galgebra.metric.symbols_list(s, indices=None, sub=True, commutative=False)[source] Convert a string to a list of symbols. If galgebra.printer.Eprint is enabled, the symbol names will contain ANSI escape sequences. Parameters: s (str) – Specification. If indices is specified, then this is just a prefix. If indices is not specified then this is a string of one of the forms: prefix + "*" + index_1 + "|" + index_2 + "|" + ... + index_n prefix + "*" + n_indices name_1 + "," + name_2 + "," + ... + name_n name_1 + " " + name_2 + " " + ... + name_n indices (list, optional) – List of indices to append to the prefix. sub (bool) – If true, mark as subscript separating prefix and suffix with _, else mark as superscript using __. commutative (bool) – Passed on to sympy.Symbol. Returns: symbols (list of sympy.Symbol) Examples Names can be comma or space separated: >>> symbols_list('a,b,c') [a, b, c] >>> symbols_list('a b c') [a, b, c] Mixing commas and spaces gives surprising results: >>> symbols_list('a b,c') [a b, c] Subscripts will be converted to superscripts if requested: >>> symbols_list('a_1 a_2', sub=False) [a__1, a__2] >>> symbols_list('a__1 a__2', sub=False) [a___1, a___2] But not vice versa: >>> symbols_list('a__1 a__2', sub=True) [a__1, a__2] Asterisk can be used for repetition: >>> symbols_list('a*b|c|d') [a_b, a_c, a_d] >>> symbols_list('a*3') [a_0, a_1, a_2] Or the indices argument: >>> symbols_list('a', [2, 4, 6]) [a_2, a_4, a_6] >>> symbols_list('a', [2, 4, 6], sub=False) [a__2, a__4, a__6] See also: sympy.symbols()
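A small usage sketch based only on the signatures and doctest examples documented above; the expected outputs are what the docstrings suggest, and have not been checked against a specific galgebra release:

```python
import sympy
from galgebra.metric import Metric, symbols_list

# Three basis-vector symbols from one prefix (cf. the 'a*b|c|d' example):
print(symbols_list('e*x|y|z'))        # [e_x, e_y, e_z]

# Dot product in an orthogonal embedding space; g=None means Euclidean:
u, v = sympy.symbols('u v')
print(Metric.dot_orthogonal([u, 1, 0], [v, 2, 5]))   # expected: u*v + 2
```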
2020-04-02 02:44:44
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7320660948753357, "perplexity": 6366.373701657539}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370506580.20/warc/CC-MAIN-20200402014600-20200402044600-00434.warc.gz"}
https://electronics.stackexchange.com/questions/444313/how-to-find-gsm-service-level-of-a-ip-connection/444321
# How to find the GSM service level of an IP connection? I am using a GSM modem to send data to a remote server over a TCP/IP connection. I am using an STM32F103 for the software. I want to connect 3 LEDs to the 'F103 to indicate which connectivity and service level is provided by my telco for the current connection: 2G, 2.5G or 3G (or maybe 4G also). How can I do this? Do I need to measure the RSSI signal strength for this, or do I need to measure some other parameters? Edit: I am using the M95 modem. My understanding is that if the RSSI level is low then I will get lower-speed connectivity to the server (lower data rate possible) and more chance of corrupt packets and data loss. So when I am on 2G or 2.5G I will not send or request big data files to and from the server, as the transfer will not complete successfully even after many retries. But if I get a 3G connection from the telco then I will do bigger file send or receive tasks. This is my vague understanding. Please correct me if I am wrong. • Properties of the physical link are not anything that can be asked from TCP/IP; that's the whole point of having a protocol layer stack. You need to ask your modem what it's currently doing. You forgot to say what modem you're using, so we can't help you. – Marcus Müller Jun 19 '19 at 7:10 • What I can say with certainty is that RSSI has nothing to do with it. And that "2G or 2.5G or 3G" are not "GSM service levels", but "telecommunication standard generations"; it's totally unclear what the whole purpose of your question is, because you don't state that, and due to your own confusion, it's impossible to infer. – Marcus Müller Jun 19 '19 at 7:11 • Included more info in the question. – alt-rose Jun 19 '19 at 7:38 This quad-band, Class 12 modem chip is capable of supporting low-bandwidth applications up to 85.6 kbps up- and downstream. Packets are kept small to reduce retry sizes when they are not received. The signal-to-noise ratio must rise above some threshold to achieve error-free communication at the fastest rates; this is often when the RSSI starts above -80 dBm with a -105 dBm noise floor. • An 85.6 kbps maximum speed is very low. Is there any similar modem that provides higher speeds, in the range of 300 kbps (possibly called 3G speed)? – alt-rose Jun 19 '19 at 8:48 • @alt-rose There are lots of choices of modem with higher speeds. But here is the thing: you need to know what network your provider uses. For example, even '4G' LTE comes in CAT-1 through 4. So you could go get a '4G' modem and it be completely incompatible with your provider. Similarly, '3G' comes in WCDMA, HSPA, HSPA+, etc. – hekete Jun 19 '19 at 12:21 • here is a list of 258 different modems that have 300kbps or higher speeds – hekete Jun 19 '19 at 12:27 • Wow, only 3 Mbps per US$ digikey.com/product-detail/en/sierra-wireless/WP7607-G_1104192/… @hekete Did not know U were down there in AU – Tony Stewart Sunnyskyguy EE75 Jun 19 '19 at 12:35 • @SunnyskyguyEE75 it's because of the orientation rectification ICs required to operate upside down. – hekete Jun 20 '19 at 4:22 (x)G is actually pretty meaningless, since multiple technologies get called 3G and 4G even though they are completely different and achieve different data rates. For example, 3G speeds range from 144 kbps to 21.6 Mbps. The providers of the 144 kbps network still call it 3G, though. It doesn't really have anything to do with a service level. If your signal is bad, your modem might try dropping to a lower data rate.
I don't know if it will report this as 'xG', but you should definitely be able to query what speed it is connected at. I would forget about all the Gs and just look at your connection speed and make choices based on that. Let me just re-iterate: the xG thing is pretty much purely marketing crap; there were efforts to make minimum requirements for something to be called xG, but they didn't stick. When dealing with cellular networks you need to know what technology the network is based on. The 'generation' of the network just gives you a vague indication of what the maximum data rate might be. • '2G' is obsolete now, but let us say my connection speed is only in the tens of kbps; then I can label it as '2G'. But I cannot understand how to query the modem to find the connection speed of the up-link or down-link. Any idea how to do that? – alt-rose Jun 19 '19 at 8:08 • I think you use the AT+CGQREQ? command. Though looking at the M95 data sheet it seems to only support 85.6 kbps maximum. The above command is supposed to return <cid>,<precedence>,<delay>,<reliability>,<peak>,<mean> – hekete Jun 19 '19 at 8:30 • @alt-rose Those returned values are from tables in the GSM 03.60 standard, so you would have to look that up. It's under Quality of Service Profile. – hekete Jun 19 '19 at 8:42 • Does 85.6 kbps mean only the up-link speed, or the up-link and down-link speeds added? If it supports 85.6 kbps maximum, that means just 2.5G. So even if my telco provides 4G services and my SIM card is also 4G-enabled, the data connection that I will get will be 2.5G. What if my telco does not support 2.5G but only 4G? Is it possible that a telco does not support the fall-back data rates of 2.5G after it upgrades to the latest 4G services? – alt-rose Jun 19 '19 at 8:43 • Class 12 says 4 up-link and 4 down-link slots max, with 5 active at a time. So if you were getting 85.6 kbps down you would be left with 1 slot up at 21.4 kbps (at least I think that is how it works). – hekete Jun 19 '19 at 8:52
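On the host side, querying the modem looks like the sketch below with pyserial; the port name, baud rate, and timeout are assumptions for illustration (on the STM32F103 you would send the same AT string over a UART):

```python
import serial  # pyserial

# Ask the M95 for its quality-of-service profile (AT+CGQREQ?, as
# suggested in the comments above) and print the raw reply.
with serial.Serial('/dev/ttyUSB0', 115200, timeout=2) as modem:
    modem.write(b'AT+CGQREQ?\r')
    reply = modem.read(256).decode(errors='replace')
    print(reply)  # e.g. +CGQREQ: <cid>,<precedence>,<delay>,<reliability>,<peak>,<mean>
```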
2020-07-15 09:55:23
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3891526460647583, "perplexity": 1748.78510261863}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593657163613.94/warc/CC-MAIN-20200715070409-20200715100409-00443.warc.gz"}
https://mail.python.org/pipermail/python-dev/2001-July/016110.html
# [Python-Dev] 2.2 Unicode questions M.-A. Lemburg mal@lemburg.com Thu, 19 Jul 2001 15:05:55 +0200 Guido van Rossum wrote: > > > First, a short one, Mark Hammond's patch for supporting MBCS on > > Windows. I trust everyone can handle a little bit of TeX markup? > > > > % XXX is this explanation correct? > > \item When presented with a Unicode filename on Windows, Python will > > now correctly convert it to a string using the MBCS encoding. > > Filenames on Windows are a case where Python's choice of ASCII as > > the default encoding turns out to be an annoyance. > > > > This patch also adds \samp{et} as a format sequence to > > \cfunction{PyArg_ParseTuple}; \samp{et} takes both a parameter and > > an encoding name, and converts it to the given encoding if the > > parameter turns out to be a Unicode string, or leaves it alone if > > it's an 8-bit string, assuming it to already be in the desired > > encoding. (This differs from the \samp{es} format character, which > > assumes that 8-bit strings are in Python's default ASCII encoding > > and converts them to the specified new encoding.) > > > > (Contributed by Mark Hammond with assistance from Marc-Andr\'e > > Lemburg.) > > I learned something here, so I hope this is correct. :-) The last part is... the rest is for Mark to comment on. > > Second, the --enable-unicode changes: > > > > %====================================================================== > > \section{Unicode Changes} > > > > Python's Unicode support has been enhanced a bit in 2.2. Unicode > > strings are usually stored as UCS-2, as 16-bit unsigned integers. > > Python 2.2 can also be compiled to use UCS-4, 32-bit unsigned > > integers, as its internal encoding by supplying > > \longprogramopt{enable-unicode=ucs4} to the configure script. When > > built to use UCS-4, in theory Python could handle Unicode characters > > from U-00000000 to U-7FFFFFFF. > > I think the Unicode folks use U+, not U-, True. > and the largest Unicode > character is "only" U+10FFFF. (Never mind that the data type can > handle larger values.) I wouldn't count on that... (note that Andrew wrote "could" ;-) > > Being able to use UCS-4 internally is > > a necessary step to do that, but it's not the only step, and in Python > > 2.2alpha1 the work isn't complete yet. For example, the > > \function{unichr()} function still only accepts values from 0 to > > 65535, > > Untrue: it supports range(0x110000) (in UCS-2 mode this returns a > surrogate pair). Now, maybe that's not what it *should* do... It should definitely not, unless you want to break code which assumes that chr() and unichr() always return a single byte/code unit ! This was part of the UCS-4 checkins which I hadn't had time yet to review. Should I remove the surrogate part for narrow builds ? > > and there's no \code{\e U} notation for embedding characters > > greater than 65535 in a Unicode string literal. > > Not true either -- correct \U has been part of Python since 2.0. It > does the same thing as unichr() described above. Right. Note that in this case, the handling of surrogates is needed to make the unicode-escape encoding roundtrip safe. -- Marc-Andre Lemburg CEO eGenix.com Software GmbH ______________________________________________________________________ Consulting & Company: http://www.egenix.com/ Python Software: http://www.lemburg.com/python/
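The narrow-build behaviour under discussion is easy to visualise even on a modern Python 3 (where every build is wide), because UTF-16 produces exactly the surrogate pair that a narrow build's unichr() returned; a sketch:

```python
# U+10000 is the first character beyond the 16-bit (UCS-2) range.
c = chr(0x10000)
print(c.encode('utf-16-be').hex())   # 'd800dc00': the D800/DC00 surrogate pair
print('\U00010000' == c)             # True: the \U escape builds the same character
```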
2022-06-25 12:02:32
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8092095255851746, "perplexity": 10337.608819411227}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103034930.3/warc/CC-MAIN-20220625095705-20220625125705-00564.warc.gz"}
http://jairo.nii.ac.jp/0025/00035629
# Mechanism of dominance of the Breit interaction in dielectronic recombination 48 (14), p. 144002, 2015-07, IOP Publishing Ltd ISSN: 0953-4075 NII Bibliographic ID (NCID): AA10693237 Recent theoretical and experimental studies show that the Breit interaction plays a dominant role in dielectronic recombination for some particular transitions. The detailed mechanism of why the Breit interaction is dominant in such a process is still unknown. In this work, we performed a simulation, decomposed each individual term at the transition-matrix level, and found that the Breit interaction is dominant when the contribution of the leading term ($1/r_{>}$, with $r_{>}$ the larger of $r_1$ and $r_2$) of the two-electron Coulomb interaction vanishes. Based on this mechanism, we explained why the dielectronic capture strength to the $1\mathrm{s}2\mathrm{s}^{2}2\mathrm{p}_{1/2}\ J_{d}=1$ state is much stronger than the one to $1\mathrm{s}2\mathrm{s}2\mathrm{p}_{1/2}^{2}\ J_{d}=1$, as well as why the Breit interaction plays a dominant role in the anisotropy parameters. Furthermore, the present finding may guide us to search for physical processes in which the Breit interaction is dominant by simply analyzing the coupling coefficients for a given isoelectronic sequence.
2018-03-24 04:57:40
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4101589024066925, "perplexity": 663.0416841519003}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257649683.30/warc/CC-MAIN-20180324034649-20180324054649-00586.warc.gz"}
https://www.hackmath.net/en/math-problem/14163
# Cylinder A cylinder with a diameter of 1.8 m contains 2000 liters of water. What area (in dm2) of this container is wetted by the water? Result S = 698.913 dm2 #### Solution: $D = 1.8 \ \mathrm{m} = 18 \ \mathrm{dm}$, $r = D/2 = 9 \ \mathrm{dm}$; $V = 2000 \ \mathrm{l} = 2000 \ \mathrm{dm}^3$; $S_{1} = \pi r^2 = 3.1416 \cdot 9^2 \doteq 254.469 \ \mathrm{dm}^2$ (the bottom); $h = V/S_{1} = 2000/254.469 \doteq 7.8595 \ \mathrm{dm}$ (the water level); $S = S_{1} + \pi D h = 254.469 + 3.1416 \cdot 18 \cdot 7.8595 \doteq 698.913 \ \mathrm{dm}^2$ ## Next similar math problems: 1. Water well A drilled well has a depth of 20 meters and a radius of 0.1 meters. How many liters of water can fit into the well? 2. An oil An oil drum is cut in half. One half is used as a water trough. Use the dimensions, length 82 cm and width 56 cm, to estimate the capacity of the water trough in liters. 3. Garden pond A concrete garden pond has a bottom in the shape of a semicircle with a diameter of 1.7 m and is 79 cm deep. Daddy wants to surface it. How many liters of water are in the pond if the water level is 28 cm? 4. Conva How many liters of water fit into a cylinder with a bottom diameter of 20 cm and a height of 45 cm? 5. Tin with oil A tin of oil has the shape of a rotating cylinder whose height is equal to the diameter of its base. The tin's surface is 1884 cm2. Calculate how many liters of oil are in the tin. 6. Water level How high does the water reach in a cylindrical barrel with a diameter of 12 cm if there is a liter of water in it? Express in cm with an accuracy of 1 decimal place. 7. Rain How many mm of water rained onto a roof area of 75 m2 if an empty barrel with a radius of 8 dm and a height of 1.2 m filled to 75% of its capacity? :-) 8. Conserving water Calculate how many euros are spent annually on unnecessary domestic hot water which cools during the night in the pipeline. The residential house has 129 m of 5/8" hot water pipelines, and the hot water has a price of 7 Eur/m3. 9. Hectoliters How many hectoliters of water are in a garden barrel with a 90 cm diameter and a height of 1.3 m, if it is filled to 80% of its capacity? 10. Cylinder - A&V The cylinder has a volume of 1287. The base has a radius of 10. What is the surface area of the cylinder? 11. Cylinder surface area The volume of a cylinder whose height is equal to the radius of the base is 678.5 dm3. Calculate its surface area. 12. Kitchen A kitchen roller has a diameter of 70 mm and a width of 359 mm. How many square millimeters does it roll in one turn? 13. Total displacement Calculate the total displacement of a 4-cylinder engine with a piston bore of B = 6.6 cm and a piston stroke of S = 2.4 cm. Help: the crankshaft makes one revolution while the piston moves from the top of the cylinder to the bottom and back. 14. Aquarium volume The aquarium has a cuboid shape and dimensions a = 0.3 m, b = 0.85 m, c = ?, V = ?. What volume does a body have if, after dipping it into the aquarium, the water level rises by 28 mm? 15. Digging A pit is dug in the shape of a cuboid with dimensions 10 m × 8 m × 3 m.
2020-02-17 19:03:47
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 1, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5988136529922485, "perplexity": 1449.4512373456039}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875143079.30/warc/CC-MAIN-20200217175826-20200217205826-00028.warc.gz"}
https://math.stackexchange.com/questions/1580472/confidence-interval-in-comparing-service-times
# Confidence interval in comparing service times

A store manager wishes to compare the service times of the express checkout with the service times of the self-serve checkout. Suppose that independent random samples of 121 customers at express and self-serve checkouts were taken, and the service times for each customer were recorded. The mean and standard deviation of the sample of customers using the express checkout were 3.7 and 0.9 minutes, respectively. For the self-serve customers, the mean and standard deviation were 4.2 and 1.7 minutes, respectively.

a) Calculate the ratio of the maximum sample variance to the minimum sample variance. Does it appear that the population variances are equal or unequal? Explain.

b) Construct the appropriate 95% confidence interval for the difference in the mean service times for customers using the express and self-serve checkouts, and interpret your result.

My work: For a) I'm not really sure how to get the answer of 3.57; I tried $0.9^2+1.7^2,$ which equals 3.7, but that is wrong. For b) I tried $3.7-4.2 \pm 1.96\sqrt{(0.81/121)+(2.89/121)}$, from which I get the confidence interval $(-0.84,-0.16)$. The answer says it should be $-0.5 \pm 0.81,$ which results in $(-1.31,0.31).$ Any help is very much appreciated!!

(a) The requested ratio of variances is $(1.7/0.9)^2 = 3.568,$ which rounds to the stated 3.57. This exceeds the 0.975 quantile of F(120, 120), which is 1.433 (from F tables or from software). We conclude that the variances differ.

(b) A Welch (separate-variances) two-sample t interval is indicated. Minitab output:

    Sample    N   Mean  StDev  SE Mean
    1       121  3.700  0.900    0.082
    2       121   4.20   1.70     0.15

    Difference = mu (1) - mu (2)
    Estimate for difference: -0.500
    95% CI for difference: (-0.845, -0.155)
    T-Test of difference = 0 (vs not =): T-Value = -2.86  P-Value = 0.005  DF = 182

The DF exceed 120, so it is safe to use $\pm 1.96$ for the CI. The standard error of the difference in means is $\sqrt{0.9^2/121 + 1.7^2/121} = 0.175$. Thus the CI for $\mu_{exp} - \mu_{ss}$ is approximately $-0.5 \pm 1.96(0.175)$ or $(-0.84, -0.16),$ consistent with the printout and with your answer; the book's stated $-0.5 \pm 0.81$ appears to be a misprint.
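For reference, the whole calculation takes a few lines of Python with SciPy. This is my own sketch, not part of the original answer; it reproduces the variance ratio from (a) and the Welch interval from (b):

```python
import math
from scipy import stats

n1, m1, s1 = 121, 3.7, 0.9   # express checkout
n2, m2, s2 = 121, 4.2, 1.7   # self-serve checkout

# (a) ratio of the larger to the smaller sample variance
print(max(s1, s2) ** 2 / min(s1, s2) ** 2)       # 3.568...

# (b) Welch (separate-variances) 95% CI for mu1 - mu2
se = math.sqrt(s1 ** 2 / n1 + s2 ** 2 / n2)
df = se ** 4 / ((s1 ** 2 / n1) ** 2 / (n1 - 1) + (s2 ** 2 / n2) ** 2 / (n2 - 1))
tcrit = stats.t.ppf(0.975, df)                   # ~1.97 for df ~ 182
d = m1 - m2
print(df, (d - tcrit * se, d + tcrit * se))      # ~182, (-0.845, -0.155)
```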
2019-07-16 03:56:30
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6912071704864502, "perplexity": 165.11649341548986}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195524502.23/warc/CC-MAIN-20190716035206-20190716061206-00533.warc.gz"}
https://www.physicsforums.com/threads/shape-of-the-wave-of-a-photon.715821/
Shape of the wave of a photon

1. Oct 11, 2013 DParlevliet In a double-slit measurement a single photon is a wave which goes through both slits. To extinguish each other at certain places, the wave must have the same amplitude at both slits, also at large distances between the slits. The positions also extinguish with 0.5, 1.5, ... period difference, so the amplitude is also the same in time. If taken to the extreme it would have the same amplitude everywhere in the universe. What is this wave? Outside the double-slit explanations I don't see it elsewhere in publications. Is this the wave of a photon, freely travelling in space?

2. Oct 11, 2013 Staff: Mentor "Yes", but don't extend that answer too far. It will not. The amplitude is (nearly) the same in the region where you get the interference pattern if the slits have the same width and get the same intensity from the light source, but that won't happen everywhere.

3. Oct 11, 2013 DParlevliet How does the amplitude degrade in space-time? Or is there a formula for this wave?

4. Oct 11, 2013 Staff: Mentor The amplitude of the wave decreases as 1/r where r is the distance from the source, if r is large enough that the source "looks" like a point.

5. Oct 11, 2013 Hyrum Don't you mean $$\frac{1}{r^{d-1}}$$ where d is the dimension of space (3 in our case)?

6. Oct 11, 2013 Staff: Mentor I assumed we were discussing the propagation of light/photons in 3-dimensional space.

7. Oct 11, 2013 Hyrum Right. So it should be 1/r², so it expands outward as a sphere, shouldn't it?

8. Oct 11, 2013 DParlevliet But: suppose the source is not far above the slits. For detector positions to the right or left of the slits, the waves through the two slits always have a different r. If that meant a different amplitude, they could never extinguish each other.

9. Oct 11, 2013 Bill_K The wave amplitude goes as 1/r. The energy ∝ |amplitude|² ∝ 1/r².

10. Oct 11, 2013 craigi Last edited: Oct 11, 2013

11. Oct 11, 2013 Staff: Mentor That is true. Single-slit effects (you always have them) are another complication that we have not considered here yet.

12. Oct 11, 2013 craigi You don't need complete destructive interference to observe an interference pattern, so it is quite forgiving of your experimental setup in that respect. The human eye is pretty good at picking out contrast. Even if the amplitude through one slit is significantly lower than through the other, the interference pattern is still visible. The peaks just aren't as high and the troughs aren't as deep in the interference pattern. Last edited: Oct 11, 2013

13. Oct 11, 2013 DParlevliet I did, and if I had found (or understood) the solution for a photon I would not be on this forum. I am here because I hope someone can explain it without all the formulas of Wikipedia (which are therefore only readable for you guys). If the forum is not intended for that, I will leave. Although I have the impression there are more here who don't yet know the answer.

14. Oct 11, 2013 DParlevliet According to Feynman (QED) there is complete destructive interference. I don't see 1/r² in his arrows.

15. Oct 11, 2013 Cthugha Please reread the response you were given earlier, especially the "if"-part: Going from the standard electrical field picture at high intensities to the single-photon level does not change much. You just move on from discussing fields interpreted as real entities to discussing probability amplitudes for the detection of a single photon.
When averaging over many of those events, the interference patterns seen will be the same for single photons and bright light fields in the same geometry. The addition you get when discussing probability amplitudes is simply that you cannot detect a single photon twice. Having said that, the shape of the "wave" part of the photon (the probability amplitude) just depends on geometry. The realistic thing you can get in the lab is a point-like source having a single atom or a single quantum dot as the emitter, which will give you the 1/r decay. In principle, and especially in theory, any geometry is possible. You can have a plane wave, something looking like a standard beam, or shaped beams. There is no intrinsic shape of the "wave of a photon". So if Feynman does not have any 1/r terms, it is very likely that he considered a plane-wave geometry for single photons. To allow us to give a more well-defined answer, you might need to quote the exact text in Feynman's book.

16. Oct 11, 2013 craigi Don't think for a second that your questions aren't welcome. If they weren't, we wouldn't respond. You should also expect that many people reading them are familiar with what it's like to try to get to grips with these things. Asking questions on here will give you some good answers, but it's not necessarily the fastest route to understanding the problems. I think many of us know what it's like to be given unsatisfying answers, even if they are well-thought-out answers given by people who are experts in their field. Well-written books can take you through things in a matter of hours that took others many decades of questioning, because they are written by people who know the right questions to ask to arrive at the level of understanding that you want to get to. That said, I wholeheartedly understand anyone who doesn't find book learning particularly appealing, and never let anything discourage your inquisition. Last edited: Oct 11, 2013

17. Oct 12, 2013 DParlevliet Feynman is talking about amplitudes for single photons to follow a certain path (in his book QED). In fig. 20 (a mirror) he says (translated back from Dutch): "According to quantum theory, light has an amplitude for reflection which is equal for every position on the mirror". Different paths give the same amplitude but different phase (direction of the arrow). Fig. 5 shows reflection in glass depending on thickness, which is a cosine between 0-16%, even with glass more than 50 meters thick. Then in fig. 49 he shows the double slit and mentions (again translated) "sometimes we get for a certain distance between the holes more ticks than expected; with a somewhat different distance the detector does not tick at all" (the detector is a photon counter). According to Wikipedia: "and when they are in anti-phase, i.e. the path difference is equal to half a wavelength, one and a half wavelengths, etc., then the two waves cancel and the summed intensity is zero"

18. Oct 12, 2013 vanhees71 I don't know which reference of Feynman's you are referring to. His book on QED is pretty conventional, using the old Fermi formulation in the operator formalism. This surprised me a bit, because the path-integral quantization method is most convenient particularly for gauge theories, as compared to the operator formalism. Anyway, one cannot stress often enough that a naive particle picture for massless quanta is almost always wrong. The first thing is that for massless quanta with spin $\geq 1$ there is not even a position operator in the strict sense.
By definition a single-photon state is an asymptotically free one-particle photon state with definite norm. The double-slit experiment with single photons is described, as is any double-slit experiment with single quanta, as a scattering process with asymptotically free single-particle states coming in and asymptotically free single-particle states coming out. This leads to the probabilities to register an asymptotically free particle beyond the double slit (e.g., using a photographic plate). A single quantum never makes an interference pattern but only a single dot on the screen. The interference pattern, reflecting the detection probabilities as a function of position, can be found by repeating this scattering experiment many times. Note that nowhere have I used the idea of the position of a single photon, but only the position of a registration of a photon at the screen. This position of a registration of a photon is a well-defined physical quantity that can be measured; the position of a photon in the strict sense of an observable cannot even be defined in principle! For details, see http://www.mat.univie.ac.at/~neum/physfaq/topics/position.html

19. Oct 12, 2013 craigi Do you remember in the other thread we talked about needing a surface for specular reflection? Feynman is describing, using path integrals, how the different paths across this surface superpose to cancel each other out, with the exception of the apparent reflection path given by the law of reflection. He does mention that he is making an approximation when he says that the amplitude is the same for all points on the section of surface. The approximation holds when the section of mirror that he is treating is very small, hence the distance that the light travels to each point is approximately the same. This is very similar to the approximation used in calculus, where you take 2 points on a curve which are close together to calculate its gradient. The cool thing about this is that he's demonstrating that the reflection from a mirror can be considered an interference pattern itself. Last edited: Oct 12, 2013

20. Oct 12, 2013 Naty1 Hi DParlevliet...getting a 'picture' of this stuff takes some time, so be patient....Expect your head to spin a few more times!!! I'll see if I can put some pieces together to clarify and summarize some subtle points...no math because I remember relatively little of it!! From the Wikipedia link already provided: Interpretation of the wave function. Here is another very insightful tidbit from another Wikipedia article: So this IS crazy: a deterministic expression for quantum behavior! Further, after almost 100 years, arguments still ensue about what the wave function 'really' represents. Post #2: [This 'wave' can be thought of as representing some distributed photon behaviors, a probability distribution relating to likely detection location, but it says nothing about the observables of this mode. When detected, photons are always pointlike, as are all particles in the Standard Model of particle physics. As wiki says, oddly enough, it is a deterministic expression, yet measurements/observables based on it are NOT deterministic.] edit: Particle 'wave' characteristics are always detected as pointlike objects. Post #9: Turns out the probability of locating a particle is also proportional to the amplitude squared. Post #15: Haven't seen that before...I like it...Bravo!!
In Wikipedia terminology, the 'standard electrical field picture' is deterministic, the single photon is a quantum particle, and measurements [averaging over many events] turn out to be NON-deterministic. Nobody knows why. Analogously, here is what Roger Penrose says: [and THAT changes things from deterministic to probabilistic!!] [It depends on geometry.] Post #16: a great point.....so when you mumble to yourself [as many of us have at times] "That seems crazy," it probably is. It was likely not scientists' first choice of interpretation...Feynman says something like this about that: A good physicist is one who has the stubbornness to make all the mistakes possible before finally arriving at the correct conclusion. Last edited: Oct 12, 2013

21. Oct 12, 2013 Bill_K "Learning is the process of making progressively subtler mistakes." -- Eleanor Duckworth

22. Oct 12, 2013 DParlevliet I refer to "QED, The Strange Theory of Light and Matter", his New Zealand lectures, which are often referred to on YouTube. He is saying equal, not approximately equal. Using a small dx does not matter, because however small it is, it is the same over the whole mirror surface, so 1/r² (it is about area) still applies. In his figure 24 the paths differ by 1.4, which with 1/r² is about a 50% difference in arrow. That is not approximate. All arrows in his drawing are the same. But does someone have references showing that with double slits the two waves do not cancel out completely because of 1/r²?

23. Oct 12, 2013 craigi Yup. I read chapter 2 from it earlier. Have another read of it and if you still don't find it, I'll get a quote from it for you about his approximation.

24. Oct 13, 2013 birulami My hunch is you want to understand a wave in general first, whether it is electromagnetic or a probability amplitude. For me it was quite helpful to read about the wave equation (not wave function!) as well as the Airy disk. To me the latter was an eye opener, because it seems a single hole already shows the wave properties of photons nicely, so why always double slits? Combine what you read there with the Huygens principle and you start getting a good idea what peculiar objects 3D waves are. Further, search for solutions to the wave equation on the net to find that indeed a plane wave would have the same amplitude on a plane spanning the whole universe while, interestingly, a spherical wave must have decreasing amplitude as it spreads out. If a single hole is the wave source, like with the Airy disk, the wave is nearly spherical. The really tough part for me is to consider the wave spreading from the hole not just to a screen in the lab, but over several light years: the amplitude must decrease to nearly nothing, yet when a part of the wave front finally hits an object, the (probability) amplitude, integrated over the area hit, tells us how probable it is that the wave leaves its energy as a blip on this object. (P.S. read Wikipedia always in all the languages you understand. The articles have differing content.)

25. Oct 13, 2013 DParlevliet I found it, somewhat hidden. That is the risk of using educational text. So: 1/r². But now suppose case 2: the light source has a curved mirror which gives a parallel beam. The wave fronts are now flat, no 1/r². The interference will be complete, giving positions on the detector where the waves cancel fully, 0% light. That is consistent with measurements, which try to use a parallel beam. But now with one photon.
In case 2 you don't know where the photon will be absorbed by the detector, but you do know positions where it certainly never will be detected. But in case 1 (with a single photon) the waves do not cancel each other fully at these positions, so there is a small chance the photon will be absorbed there. It seems that the photons in case 1 and case 2 differ: they have a different wave shape. That troubles me. 1/r² is right as a probability wave if you only know that a photon was generated but not in what direction. But in reality it went in a certain direction, which you know afterwards when the photon is detected. Then there are two possible paths (through slit 1 or 2), so if you reconstruct the wave shape as it was before the slits, what was it then? Concluding from the measurement result I suppose what I mentioned before: a wave with flat wave fronts, everywhere in the universe (if without matter), with equal maximum amplitude. It is of course not a probability wave, but it does determine the probability at the moment the photon is absorbed (or perhaps directs the photon).
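A quick numerical sketch (my own, not from the thread; all geometry values are invented) of the point debated above: superposing two spherical waves with 1/r amplitude decay gives interference minima that are small but nonzero wherever the two path lengths, and hence the two amplitudes, differ.

```python
import numpy as np

lam = 500e-9                      # wavelength (m)
k = 2 * np.pi / lam
d = 50e-6                         # slit separation (m)
L = 5e-3                          # slit-to-screen distance (m), deliberately small
x = np.linspace(-2e-3, 2e-3, 8001)   # detector positions on the screen (m)

# each slit treated as a point source of a spherical wave with 1/r amplitude
r1 = np.sqrt(L**2 + (x - d / 2) ** 2)
r2 = np.sqrt(L**2 + (x + d / 2) ** 2)
psi = np.exp(1j * k * r1) / r1 + np.exp(1j * k * r2) / r2
I = np.abs(psi) ** 2

# the 1/r factors differ slightly at each position, so the minima are small
# but not exactly zero: cancellation is incomplete, yet the fringe contrast
# stays high, as post #12 argues
print(I.min() / I.max())          # tiny, but > 0
```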
2018-07-21 23:58:44
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7019119262695312, "perplexity": 845.2616628588196}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676592861.86/warc/CC-MAIN-20180721223206-20180722003206-00549.warc.gz"}
http://aas.org/archives/BAAS/v25n4/aas183/abs/S9303.html
Session 93 -- Pulsars. Oral presentation, Friday, January 14, 10:15-11:45, Crystal Forum Room (Crystal City Marriott)

## [93.03] Crab Pulsar Giant Pulses: Radio and Gamma-Ray Results

S.C. Lundgren, J.M. Cordes (Cornell University), M. Ulmer, S.M. Matz, S. Lomatch (Northwestern University)

We analyze joint radio and $\gamma$-ray observations of giant radio pulses from the Crab pulsar. These bursts can be as much as 2000 times the average flux density in the radio. Fitting the giant-pulse radio flux density histogram requires a two-component model: a sharply peaked distribution of low-intensity pulses and a power-law component for giant pulses with an index of $-3.3$ and a low flux density cutoff that is 33 times the mean of the low-intensity pulses. The absence of pulses at flux densities between the low-intensity pulses and the smallest giant pulses suggests we are seeing two entirely different emission mechanisms. However, the lack of a time delay between giant pulses and average pulses ($\Delta t = 6 \pm 12\ \mu s$) suggests both mechanisms operate in or near the same spatial location. We have found an upper limit on $\gamma$-ray flux variation concurrent with the giant radio bursts: the flux during giant bursts is less than twice the average level. We discuss the implications of the lack of $\gamma$-ray variation for models of pulsar emission. In particular, particle flow variability scales by the same factor as $\gamma$-ray flux. In addition, the limit on $\gamma$-ray variability correlated with radio flux requires $n(E) \propto E^{-1}$ for particles available to inverse-Compton scatter radio photons up to $\gamma$-rays. Given current understanding of pulsar emission mechanisms, we will consider the possibility of high-energy emission from radio pulsars recently discovered in $\gamma$-ray source error boxes.
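As a rough illustration of the two-component model described in the abstract, the sketch below (my own construction; the sample sizes and the width of the low-intensity component are invented) draws pulse flux densities from a sharply peaked low-intensity component plus a power-law tail with index $-3.3$ above a cutoff at 33 times the low-intensity mean:

```python
import numpy as np

rng = np.random.default_rng(0)

mean_low, n_low, n_giant = 1.0, 100_000, 1_000
low = rng.normal(mean_low, 0.2, n_low)     # sharply peaked low-intensity pulses

alpha, s_min = 3.3, 33 * mean_low          # power-law index and low cutoff
# inverse-CDF sampling for p(S) ~ S**(-alpha), S >= s_min
u = rng.random(n_giant)
giant = s_min * (1 - u) ** (-1 / (alpha - 1))

fluxes = np.concatenate([low, giant])      # the full two-component sample
print(giant.max() / mean_low)              # rare pulses reach hundreds of times the mean
```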
2014-12-26 07:45:42
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7154541015625, "perplexity": 4025.237005374676}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-52/segments/1419447548651.52/warc/CC-MAIN-20141224185908-00041-ip-10-231-17-201.ec2.internal.warc.gz"}
http://www.nature.com/articles/s41598-018-32392-4?WT.feed_name=subjects_mechanical-engineering&error=cookies_not_supported&code=2516a99a-a53a-4024-9c66-283ab4c56649
# Physical ageing of spreading droplets in a viscous ambient phase

## Abstract

In this work, we study the spontaneous spreading of water droplets immersed in oil and report an unexpectedly slow kinetic regime not described by previous spreading models. We can quantitatively describe the observed regime crossover and the spreading rate in the late kinetic regime with an analytical model considering the presence of periodic metastable states induced by nanoscale topographic features (characteristic area ~4 nm², height ~1 nm) observed via atomic force microscopy. The analytical model proposed in this work reveals that certain combinations of droplet volume and nanoscale topographic parameters can significantly hinder or promote wetting processes such as spreading, wicking, and imbibition.

## Introduction

Classical continuum descriptions consider liquid-fluid and liquid-solid interfaces as sharp, smooth, and homogeneous surfaces, which neglects the diffuse nature of the interfacial region, the presence of nanoscale heterogeneities of physical or chemical origin, and thermal fluctuations1,2,3,4. Despite remarkable successes in rationalizing the dynamics of wetting and related interfacial phenomena, classical continuum-based models are inadequate to describe the near-equilibrium behavior of diverse colloidal and multiphase systems where the interplay between thermal motion and nanoscale interfacial structure plays a dominant role. For example, single microparticles adsorbed at liquid-liquid interfaces have exhibited crossovers from initially fast dynamics, driven by capillary forces, to a much slower "kinetic" relaxation that can be nearly logarithmic in time, as first discovered by Kaz et al.5 and subsequently studied by other groups6,7,8,9,10. Similar near-equilibrium behavior has also been observed in the imbibition/drainage of water/oil in microscale capillaries with nanoscale surface roughness11. The observed near-equilibrium phenomena, resembling physical ageing in dense colloidal systems that exhibit jamming transitions12,13,14,15, have been attributed to random thermally activated transitions between multiple metastable states induced by numerous nanoscale "defects" on the solid surface.

## Spreading in a viscous ambient phase

This work investigates the near-equilibrium behavior of millimeter-sized water droplets immersed in 100-cSt silicone oil and spreading spontaneously on a borosilicate glass substrate, for which nearly neutral wetting conditions are attained at equilibrium. For partial wetting conditions, direct contact between the water droplet and the substrate, and the associated formation of a finite contact angle, takes place in the early stages of the spreading process through the dynamic breakup and dewetting of the lubricating film below the droplet17,18,21,22. Our spreading experiments are performed inside a transparent immersion cell (see Fig. 1a) that is placed on the stage of an optical goniometer (Dataphysics OCA 10). The immersion cell is filled with 100-cSt silicone oil with a dynamic viscosity μo = 96 mPa·s (mass density ρo = 0.96 g/mL) and the top of the cell is open to the ambient air at room temperature (T = 20 ± 4 °C). As illustrated in Fig. 1b, single drops of de-ionized (DI) water are injected into the oil bath through a tapered capillary tube (inner tip diameter ~0.04 mm) that is connected to a programmable syringe pump.
A surface tension γ = 42.6 ± 2 mN/m between the silicone oil and DI water at room temperature and static equilibrium contact angles θE = 74.5 ± 3° were steadily observed for about 24 hours in Wilhelmy plate measurements (see Methods section). The injected drops have a radius $${R}_{0}\simeq 0.5$$ mm, which corresponds to very small Bond numbers $$Bo=({\rho }_{w}-{\rho }_{o})g{R}_{0}^{2}/\gamma \simeq 0.002$$; here ρw is the water mass density and g is the gravitational acceleration. After detaching from the capillary, water droplets fall toward the glass slide on the cell bottom (see Fig. 1b), attaining a small terminal speed $$U=\mathrm{2(}{\rho }_{w}-{\rho }_{o})g{R}_{0}^{2}/9{\mu }_{o}\simeq 1$$ mm/s in agreement with Stokes flow predictions35 for a sedimenting spherical droplet (R0 = 0.5 mm and viscosity ratio 1:100). The terminal speed of the drop corresponds to very small Weber numbers $$We={\rho }_{w}{U}^{2}R/\gamma \sim {10}^{-5}$$, which indicates that inertial effects are negligible and the droplet deposition on the glass substrate can be considered quasi-static. The spontaneous spreading of the deposited droplets begins immediately after making contact with the glass substrate. The time evolution of the droplet contact radius R(t) and height h(t) (see Fig. 1c,d) is recorded by combining high-speed video during the initial 0.002 to 130 seconds and time-lapse photography for up to 3 days in order to efficiently record the (fast) initial and (slow) late spreading regimes. We begin our analysis of the experimental results by assessing the classical modeling assumption that the studied droplets, characterized by very low Bond numbers $$Bo\ll 1$$, maintain the shape of a spherical cap with constant volume V during the spreading process. The volume $$V=\mathrm{(4/3)}\pi {R}_{0}^{3}$$ of the droplets is readily determined from their initial radius R(t = 0) = R0 obtained from acquired images (cf. Fig. 1c). Experimental observations indicate that the droplet contact radius R(t) and height h(t) (cf. Fig. 1d,e) correspond to those of a spherical cap of nearly constant volume $${V}_{S}(t)=\pi h({R}^{2}\mathrm{/2}+{h}^{2}\mathrm{/6)}\simeq V$$ for times between 0.5 and 10⁴ seconds (cf. Fig. 2c). Outside this time window, deviations from the spherical shape (cf. Fig. 1c–e) are observed due to (1) the formation of a small meniscus below the droplet at short times t ≲ 0.5 s and (2) the slow diffusion of molecules across the oil–water interface, which gradually reduces the droplet volume over times t ≳ 10⁴ s. For a spherical cap of constant volume V = VS, the contact angle θ(t) is prescribed by the contact radius R(t) through the geometric relation V = R³fV(θ), where $${f}_{V}(\theta )=\pi (\frac{2}{3}-\frac{3}{4}\,\cos \,\theta +\frac{1}{12}\,\cos \,3\theta )/{\sin }^{3}\theta$$. Hence, from the equilibrium radius $${R}_{E}=\sqrt[3]{V/{f}_{V}({\theta }_{E})}$$ and height hE = RE(1 − cosθE)/sinθE observed for times t ≈ 10³ to 10⁴ s (cf. Fig. 1d), we estimate apparent equilibrium contact angles θE = 75 ± 4° (see Fig. 1f), which agree closely with the values determined by Wilhelmy plate measurements (see Methods section). Although chemical equilibrium is attained for t > 10⁴ s, when the studied droplets diffuse into the ambient phase (cf. Fig. 1c,d), the equilibrium contact angle θE was steadily observed for over 24 hours in Wilhelmy plate measurements.
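As a numerical companion to these estimates, here is a small sketch (my own; the "measured" contact radius below is a hypothetical value) that checks the Bond number and inverts the spherical-cap relation V = R³fV(θ) for the apparent contact angle:

```python
import math
from scipy.optimize import brentq

gamma = 42.6e-3                      # oil-water surface tension (N/m)
rho_w, rho_o, g = 1000.0, 960.0, 9.81
R0 = 0.5e-3                          # initial drop radius (m)
print((rho_w - rho_o) * g * R0 ** 2 / gamma)   # Bond number ~ 0.002

def f_V(theta: float) -> float:
    """Volume factor of a spherical cap: V = R^3 * f_V(theta)."""
    return math.pi * (2 / 3 - 0.75 * math.cos(theta)
                      + math.cos(3 * theta) / 12) / math.sin(theta) ** 3

V = (4 / 3) * math.pi * R0 ** 3      # (nearly) constant drop volume
R = 0.71e-3                          # hypothetical measured contact radius (m)
theta = brentq(lambda t: R ** 3 * f_V(t) - V, 1e-3, math.pi - 1e-3)
print(math.degrees(theta))           # ~76 deg, near the reported 75 deg equilibrium value
```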
Our theoretical analysis will focus on the late spreading regime leading to the equilibrium configuration prescribed by the (size-independent) contact angle θE, which corresponds to a state of mechanical equilibrium for a droplet of constant volume.

## Crossover from power-law spreading to physical ageing

For a spherical-cap droplet with contact area A and contact angle θ on an ideally smooth and homogeneous surface, the interfacial free energy is

$${ {\mathcal F} }_{0}=(\frac{2}{1+\,\cos \,\theta }-\,\cos \,{\theta }_{E})\gamma A.$$ (1)

From Eq. (1) one can readily obtain the effective force $${\mathcal F}$$ = −(d$${ {\mathcal F} }_{0}$$/dA) × (dA/dR) = −2πRγ(cosθ − cosθE) driving the spreading of spherical-cap droplets on ideally smooth surfaces. Within the classical thermodynamic treatment leading to Eq. (1), the free energy change, and thus the driving force $${\mathcal F}$$, is prescribed by the displacement of the contact line and is not sensitive to the surface conditions inside the contact area A(t). Following ideas from prior work6,7,10,11, we consider that three-dimensional topographic features with a characteristic base area Ad ~ $${\mathscr{O}}$$(1 nm²) induce spatial energy fluctuations Δ$${\mathcal F}$$ ~ γAd that are neglected in the free energy expression (Eq. (1)) for a perfectly flat surface. Accordingly, the free energy for a droplet spreading on a surface that is densely populated with nanoscale topographic features with characteristic base area Ad can be approximately modeled as

$${\mathcal F} ={ {\mathcal F} }_{0}+\frac{1}{2}{\rm{\Delta }} {\mathcal F} \,\sin (\frac{2\pi A}{{A}_{d}}+\varphi ),$$ (2)

where Δ$${\mathcal F}$$ is the characteristic magnitude of energy fluctuations induced by the nanoscale topography, and the phase ϕ ∈ [0, 2π) can be arbitrarily chosen given that $${A}_{d}\ll A$$. For overdamped Markovian dynamics (i.e., neglecting inertia and memory effects) the evolution of the contact area can be described by a Langevin equation

$${\xi }_{A}\frac{dA}{dt}=-\frac{d {\mathcal F} }{dA}+\sqrt{2{k}_{B}T{\xi }_{A}}\eta (t),$$ (3)

where ξA(A) is the damping coefficient determining the dissipative force, kB is the Boltzmann constant, and the random function η is zero-mean unit-variance Gaussian noise. The random term in Eq. (3) is a mathematical ansatz designed to satisfy the fluctuation-dissipation theorem36 and is included to model thermal fluctuations of the contact area A. The energy dissipation rate is dE/dt = −ξA(dA/dt)² = −ξ(dR/dt)², and thus ξA = ξ/(2πR)² can be determined from the damping coefficient ξ = ξ(R) predicted by available models for contact line dynamics32,33,34. As discussed later in Results, the damping coefficient ξ will be determined from the spreading rate dR/dt observed in the initial spreading dynamics, which results from the different physical mechanisms involved in the contact line displacement and the dissipation of energy. As the system approaches the expected equilibrium at $${A}_{E}=\pi {R}_{E}^{2}$$ and $$d{ {\mathcal F} }_{0}/dA{|}_{{A}_{E}}\to 0$$, the contact area evolution described by Eq. (3) becomes a sequence of random thermally activated transitions between neighboring metastable states corresponding to local minima in Eq. (2). Sufficiently close to equilibrium, the noise-averaged evolution of the contact area governed by Eq. (3)
can be described by the rate equation6

$$\frac{dA}{dt}={A}_{d}({{\rm{\Gamma }}}_{+}-{{\rm{\Gamma }}}_{-}),$$ (4)

where according to Kramers theory37,38 the forward/backward (+/−) transition rates are

$${{\rm{\Gamma }}}_{\pm }=\frac{\sqrt{{(\pi /{A}_{d})}^{4}{({\rm{\Delta }} {\mathcal F} \mathrm{/2)}}^{2}-{K}^{2}}}{2\pi {\xi }_{A}}\times \exp (-\frac{{\rm{\Delta }} {\mathcal F} +K{A}_{d}^{2}/8}{{k}_{B}T})\exp [\pm \frac{K{A}_{d}({A}_{E}-A)}{2{k}_{B}T}].$$ (5)

Here, the parameter $$K={d}^{2}{ {\mathcal F} }_{0}/d{A}^{2}{|}_{{A}_{E}}$$ is the curvature at equilibrium of the free energy (Eq. (1)) for a perfectly flat surface. Analytical integration of Eq. (4) for the case that $${A}_{E}\gg 2{k}_{B}T/K{A}_{d}$$ (see Supplementary Information) gives the implicit relation

$$\frac{t}{{T}_{K}}=-\frac{{R}_{E}}{R}\,\mathrm{log}\,[\tanh (\frac{{R}_{E}^{2}-{R}^{2}}{{R}_{K}^{2}})]$$ (6)

between the contact radius and time. In Eq. (6) we have introduced the characteristic "kinetic" time

$${T}_{K}=\frac{{A}_{d}}{2}\frac{{\xi }_{E}{R}_{K}^{2}}{\sqrt{{\pi }^{4}{({\rm{\Delta }} {\mathcal F} \mathrm{/2)}}^{2}-{K}^{2}{({A}_{d}/2\pi )}^{4}}}\times \exp (\frac{{\rm{\Delta }} {\mathcal F} +K{A}_{d}^{2}\mathrm{/8}}{{k}_{B}T}),$$ (7)

the "kinetic" length $${R}_{K}=\sqrt{2{k}_{B}T/K\pi {A}_{d}}$$, and the equilibrium damping coefficient ξE = ξ(RE). It is worth noticing that Eq. (6) predicts a slow logarithmic evolution of the contact area

$${R}^{2}={R}_{E}^{2}+{R}_{K}^{2}\,\mathrm{log}(t\mathrm{/2}{T}_{K})$$ (8)

in the near-equilibrium spreading regime for which $$|R-{R}_{E}|\ll {R}_{E}$$. The predicted logarithmic droplet evolution near equilibrium over long times TK is analogous to the physical ageing phenomenon reported for microparticles at a water-oil interface. The slow kinetic regime is described by the implicit expression in Eq. (6) or the explicit logarithmic expression in Eq. (8), which are derived from the rate equation (Eq. (4)) for thermally activated transitions between metastable states. Such metastable states correspond to local minima in the free energy profile $${\mathcal F}(A)$$ given by Eq. (2), which can only exist when the droplet is sufficiently close to equilibrium and $$K|{R}^{2}-{R}_{E}^{2}|\le {\rm{\Delta }} {\mathcal F} /{A}_{d}$$. Accordingly, the kinetic spreading regime governed by Eq. (6) should only be observed for contact radii R > RC, where the crossover radius is

$${R}_{C}=\sqrt{|{R}_{E}^{2}-\alpha \frac{{\rm{\Delta }} {\mathcal F} }{K{A}_{d}}|},$$ (9)

with α ≃ 0.5, based on numerical analysis and experimental observations for different systems6,7,10,11.

## Experimental results and model predictions

Experimental results and analytical predictions for microliter droplets of different volumes (V = 0.24 to 0.63 μL) are reported in Fig. 2. The initial spreading dynamics for R < RC can be described by power-law scalings R ∝ t^α with exponents α ≃ 2/3 to 1 (see also Fig. 1d), which can be accounted for by using damping coefficients estimated by MKT (see Supplementary Information) in Eq. (3). For the studied near-neutral wetting conditions with a viscous ambient phase, Tanner's law α ≃ 0.1 does not agree with the experimental observations (see Fig. 2a–c). The time evolution of the contact radius (Fig. 2a–c) shows a crossover from power-law behavior to a nearly logarithmic regime described by the implicit relation in Eq. (6) and the approximate explicit expression in Eq. (8). The parameters employed to produce the analytical fits are reported in Table 1.
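To see what the implicit law of Eq. (6) looks like in practice, here is a small numerical sketch (mine, with illustrative rather than fitted parameter values). Eq. (6) gives t as a function of R, so the predicted trajectory can be generated parametrically; near equilibrium, R² indeed grows roughly linearly in log t, the regime of Eq. (8):

```python
import numpy as np

R_E, R_K, T_K = 1.0, 0.05, 1.0           # illustrative units, not Table 1 values

R = np.linspace(0.99 * R_E, 0.99999 * R_E, 2000)
t = -T_K * (R_E / R) * np.log(np.tanh((R_E**2 - R**2) / R_K**2))   # Eq. (6)

# fit the slope of R^2 versus log(t) in the near-equilibrium window
mask = (t > 1e-3 * T_K) & (t < T_K)
slope = np.polyfit(np.log(t[mask]), R[mask] ** 2, 1)[0]
print(slope, R_K**2)                     # the slope is of the order of R_K**2
```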
A base defect area Ad = 4.2 nm² and energy barriers Δ$${\mathcal F}$$ = 14.1 ± 2.7 kBT account for the spreading rates in the kinetic regime and for the crossover radius RC at which the regime crossover is observed (cf. Fig. 2). As expected, the crossover radius RC can be estimated in all cases by Eq. (9) for α = 0.55. While the regime crossover is attributed to the emergence of local minima in the free energy profile, and thus is independent of dissipative effects, the spreading rates in the late kinetic regime are influenced by dissipative forces, which are determined by the effective damping coefficient ξ(R). We estimated damping coefficients $$\xi (R)\simeq -\,2\pi \gamma R(\cos \,\theta -\,\cos \,{\theta }_{E})/(dR/dt)$$ from the observed contact radius displacement rate dR/dt and apparent contact angle θ(R) in the initial spreading dynamics, where metastable states are not present. The damping coefficients thus estimated from our experimental observations (see Supplementary Information) are in close agreement with MKT predictions34,39, according to which ξ(R) = χμo2πR, where the friction factor χ = (ν/λ³)exp(γλ²(1 + cosθE)/kBT) is prescribed by the characteristic molecular volume ν and adsorption site size λ. Friction factors χ = 111 to 175 accounting for our experimental observations (see Table 1) can be estimated by using ν = 1.27 × 10−28 m³ for the studied PDMS oil and assuming molecular adsorption site sizes λ = 0.63 to 0.67 nm, which are close to the values reported in previous studies using MKT21,39,40. Employing MKT and the friction factors estimated from the initial spreading dynamics, we determine the equilibrium damping coefficients ξE = χμo2πRE employed in Eq. (7) for the kinetic relaxation time in the late regime. As reported in Fig. 2d, the evolution of the contact radius of droplets of different volume after the crossover to the late kinetic regime can be collapsed onto a single curve when the evolution time is normalized by the kinetic time TK determined by Eq. (7). The proper combination of base "defect" area and energy barrier is required to account for the regime crossover position predicted by Eq. (9) and the relaxation rates in the near-equilibrium regime. The fact that a single defect area Ad = 4.2 nm² and a narrow range of energy barriers $${\rm{\Delta }} {\mathcal F} =14.1\pm 2.7{k}_{B}T$$ (see Table 1) account for the experimental results for droplets of different volume suggests that the nanoscale topography of the substrate may strongly influence the observed near-equilibrium spreading. We therefore seek to determine whether the base area and energy barriers used for the analytical fits are indeed related to roughness parameters such as the radial correlation length and the average height41. Atomic Force Microscopy (AFM) in non-contact mode (see Methods section) was employed to produce (512 × 512 pixel) topographic images of 100 × 100 nm sections of the borosilicate glass substrates employed in the experiments (see Fig. 3a). The analysis of the topographic profiles obtained via AFM reveals the presence of random nanoscale roughness having a nearly isotropic and Gaussian height distribution with average amplitude $${z}_{a}=2{\int }_{0}^{\ell }|z|dx/\ell \simeq 1.2$$ nm (see Fig. 3b), standard deviation $$\sigma \simeq 0.75$$ nm, small positive skewness $$\zeta \simeq 0.3$$, and excess kurtosis $$\kappa \simeq 0.1$$. The radial autocorrelation function $$C(r)={\mathrm{lim}}_{\ell \to \infty }{\int }_{0}^{\ell }\,z(\tau )z(\tau +r)d\tau /\ell$$ computed from the AFM data
(Fig. 3c) shows a similar, nearly exponential decay in different directions ϕ = atan(y/x). Notably, the decay of the height autocorrelation function is characterized by a correlation length $${r}_{d}\simeq 1.16$$ nm, which indicates that topographic features approximately have the average base area $${A}_{d}=\pi {r}_{d}^{2}\simeq 4.2$$ nm² employed for the analytical fits (see Table 1). Furthermore, by modeling topographic "defects" as cones with a base radius rd and height za (see inset in Fig. 3c), one can estimate a characteristic energy barrier $${\rm{\Delta }} {\mathcal F} \simeq \gamma {\rm{\Delta }}{A}_{wo}=14.6\,{k}_{B}T$$ associated with the small area variation ΔAwo = rd × za that occurs when the oil-water interface moves over a single "defect". Hence, we find that the characteristic energy barrier estimated from the average geometric dimensions of the 3D topographic features agrees closely with the mean energy barrier employed in the analytical fits reported in Fig. 2 (see Table 1).

## Summary and Outlook

In summary, during the early dynamics of droplet spreading, extending for about 0.1 s, we observe power-law behaviors governed by capillary forces and effective damping forces that can be rationalized by MKT, as reported in previous studies21,39,40. The damping coefficient predicted by MKT solely considers the viscosity of the ambient phase, which is about 100 times larger than the droplet viscosity in our experiments. The late near-equilibrium behavior, however, does not follow a power law and exhibits a nearly logarithmic-in-time evolution R(t)² ∝ log t with characteristic times to reach equilibrium on the order of thousands of seconds. The observed late spreading behavior resembles the physical ageing phenomenon previously reported for colloidal microparticles at liquid-liquid interfaces5,7,10. The crossover between the fast and slow spreading regimes is analytically estimated by considering that close to equilibrium the one-dimensional free energy profile becomes densely populated with metastable states having a characteristic period and energy barrier prescribed by the nanoscale topography of the substrate. The spreading rates in the near-equilibrium regime are estimated by using Kramers theory6,37,38 for thermally activated escape from metastable states. Notably, the spreading model proposed in this work yields quantitative agreement with experimental observations when employing as input parameters the average base area Ad = 4.2 nm² and average height $${z}_{a}\simeq 1.2$$ nm of the nanoscale defects observed by AFM topographic analysis. The findings in this work indicate that physical topographic features induce the slow thermally activated spreading that is observed when the studied droplets are near mechanical equilibrium. The proposed model, based on a one-dimensional energy profile with a single-mode perturbation of amplitude $${\rm{\Delta }} {\mathcal F} \simeq \gamma {z}_{a}\sqrt{{A}_{d}/\pi }$$ and period Ad, is able to quantitatively predict both the crossover to the slow kinetic regime and the spreading rate in the final approach to equilibrium. An important implication of this work is that, according to Eq. (9), certain combinations of defect height and base area could induce the crossover to the slow kinetic regime at very early stages in the spreading process, which would effectively hinder the spreading and adhesion of droplets within a specific range of volumes on surfaces with different wettability.
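The defect-scale estimates quoted above follow from two one-line formulas, so they are easy to reproduce from the AFM parameters; this short check is mine, not from the paper:

```python
import math

kB, T = 1.380649e-23, 293.0    # J/K and K (room temperature, assumed)
gamma = 42.6e-3                # oil-water surface tension (N/m)
r_d, z_a = 1.16e-9, 1.2e-9     # AFM correlation length and defect height (m)

A_d = math.pi * r_d ** 2       # characteristic defect base area
dF = gamma * r_d * z_a         # energy to sweep the interface over one defect
print(A_d * 1e18)              # ~4.2 nm^2, the base area used in the fits
print(dF / (kB * T))           # ~14.6 kBT, matching the fitted energy barriers
```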
Future experimental and computational work employing different liquid pairs with varying viscosity ratios and nanostructured surfaces with well-characterized topographic features can be readily designed to verify this prediction.

## Methods

The working fluids are DI water (Sigma Aldrich 38796) and 100-cSt silicone oil (Sigma Aldrich 378364). In order to gently deposit single millimeter-sized droplets, a small water volume (0.3 to 0.5 μL) is dispensed by a syringe pump to produce a spherical drop suspended at the capillary tip (cf. Fig. 1b). After a suspended drop is formed, the capillary is manually pulled upward toward the oil-air interface (see rightmost panels in Fig. 1b) in order to induce the detachment of the drop from the capillary and its subsequent deposition on the borosilicate glass slide, where the spontaneous spreading process takes place. The volume of the deposited droplet differs from the injected volume because a small uncontrollable volume of the liquid filament inside the capillary becomes part of the droplet after detachment. The deposition and spreading process were recorded using digital imaging from a lateral wall of the immersion cell with a high-speed camera (AOS Technologies AG QPRI) at up to 1000 fps and time-lapse photography at 1/500 fps. Digital processing of the acquired image sequences was performed using the public domain software ImageJ42. The spreading of droplets of various volumes was recorded during 130 seconds using imaging rates between 32 and 50 fps, which allowed for efficiently resolving the crossover from the initial (fast) to the late (slow) spreading regimes. A few experiments were recorded at 500 to 1000 fps during the first 10 seconds, after which time-lapse photography at 500-second intervals was employed for up to 3 days to capture the entire evolution to mechanical equilibrium. The employed borosilicate glass slides (McMaster-Carr) are cleaned with DI water, heated at 400 °C for 1 hr, and allowed to return to room temperature inside an oven, after which dry air is blown to remove dust particles before placing them in the oil bath.

### Contact angle characterization

Contact angles were determined via the Wilhelmy plate (force-displacement) method using a force tensiometer (Sigma 700 by Biolin Scientific). Force measurements for borosilicate slides and the working fluids employed in the spreading experiments were performed continuously for 30 hours using very low displacement speeds V = 0.01 mm/min. After the relaxation of the water-oil meniscus, advancing and receding contact angles remained steadily within the range 74.5 ± 3° over a 24-hour period.

### AFM topographic imaging

Topographic images of the borosilicate glass slides employed in the spreading experiments were acquired at the Center for Functional Nanomaterials in Brookhaven National Laboratory. Measurements were performed using a Park NX-20 AFM in Non-Contact (NC) mode and cantilever probes PPP-NCHR by Park Systems. NC-AFM images were obtained for square sections of 100 × 100 nm and 50 × 50 nm with resolutions varying from 256 × 256 pixels to 1024 × 1024 pixels, which produce similar statistical properties.

## Data Availability

Data reported in this work will be made available upon request to the corresponding author [C.E.C.].

Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

## References

1. Gibbs, J. W. et al. Thermodynamics, vol. 1 (Longmans, Green and Company, 1906).
2. Rowlinson, J. S. & Widom, B. Molecular theory of capillarity (Clarendon Press, 1982).
3. De Gennes, P.-G. Wetting: statics and dynamics. Rev. Mod. Phys. 57, 827 (1985).
4. Brochard-Wyart, F., Di Meglio, J. M., Quéré, D. & De Gennes, P. G. Spreading of nonvolatile liquids in a continuum picture. Langmuir 7, 335–338 (1991).
5. Kaz, D. M., McGorty, R., Mani, M., Brenner, M. P. & Manoharan, V. N. Physical ageing of the contact line on colloidal particles at liquid interfaces. Nat. Mater. 11, 138–142 (2012).
6. Colosqui, C. E., Morris, J. F. & Koplik, J. Colloidal adsorption at fluid interfaces: regime crossover from fast relaxation to physical aging. Phys. Rev. Lett. 111, 028302 (2013).
7. Rahmani, A. M., Wang, A., Manoharan, V. N. & Colosqui, C. E. Colloidal particle adsorption at liquid interfaces: capillary driven dynamics and thermally activated kinetics. Soft Matter 12, 6365–6372 (2016).
8. Coertjens, S., De Dier, R., Moldenaers, P., Isa, L. & Vermant, J. Adsorption of ellipsoidal particles at liquid–liquid interfaces. Langmuir 33, 2689–2697 (2017).
9. Zanini, M. et al. Universal emulsion stabilization from the arrested adsorption of rough particles at liquid–liquid interfaces. Nat. Commun. 8, 15701 (2017).
10. Keal, L., Colosqui, C. E., Tromp, H. & Monteux, C. Colloidal particle adsorption at water/water interfaces with ultra-low interfacial tension. Phys. Rev. Lett., in press (2018).
11. Colosqui, C. E., Wexler, J. S., Liu, Y. & Stone, H. A. Crossover from shear-driven to thermally activated drainage of liquid-infused microscale capillaries. Phys. Rev. Fluids 1, 064101 (2016).
12. Struik, L. Physical aging in plastics and other glassy materials. Polym. Eng. Sci. 17, 165–173 (1977).
13. Fluerasu, A., Moussaïd, A., Madsen, A. & Schofield, A. Slow dynamics and aging in colloidal gels studied by x-ray photon correlation spectroscopy. Phys. Rev. E 76, 010401 (2007).
14. Negi, A. S. & Osuji, C. O. Dynamics of internal stresses and scaling of strain recovery in an aging colloidal gel. Phys. Rev. E 80, 010404 (2009).
15. Ovarlez, G., Barral, Q. & Coussot, P. Three-dimensional jamming and flows of soft glassy materials. Nat. Mater. 9, 115–119 (2010).
16. Tanner, L. The spreading of silicone oil drops on horizontal surfaces. J. Phys. D 12, 1473 (1979).
17. Chen, J.-D. Experiments on a spreading drop and its contact angle on a solid. J. Colloid Interface Sci. 122, 60–72 (1988).
18. Chen, J.-D. & Wada, N. Edge profiles and dynamic contact angles of a spreading drop. J. Colloid Interface Sci. 148, 207–222 (1992).
19. Leger, L. & Joanny, J. Liquid spreading. Rep. Prog. Phys. 55, 431 (1992).
20. Zosel, A. Studies of the wetting kinetics of liquid drops on solid surfaces. Colloid Polym. Sci. 271, 680–687 (1993).
21. De Ruijter, M. J., De Coninck, J., Blake, T., Clarke, A. & Rankin, A. Contact angle relaxation during the spreading of partially wetting drops. Langmuir 13, 7293–7298 (1997).
22. De Ruijter, M. J., De Coninck, J. & Oshanin, G. Droplet spreading: partial wetting regime revisited. Langmuir 15, 2209–2216 (1999).
23. McHale, G., Shirtcliffe, N., Aqil, S., Perry, C. & Newton, M. Topography driven spreading. Phys. Rev. Lett. 93, 036102 (2004).
24. Davidovitch, B., Moro, E. & Stone, H. A. Spreading of viscous fluid drops on a solid substrate assisted by thermal fluctuations. Phys. Rev. Lett. 95, 244505 (2005).
25. McHale, G., Newton, M. I. & Shirtcliffe, N. J. Dynamic wetting and spreading and the role of topography. J. Phys.: Condens. Matter 21, 464122 (2009).
26. Biance, A.-L., Clanet, C. & Quéré, D. First steps in the spreading of a liquid droplet. Phys. Rev. E 69, 016301 (2004).
27. Bird, J. C., Mandre, S. & Stone, H. A. Short-time dynamics of partial wetting. Phys. Rev. Lett. 100, 234501 (2008).
28. Paulsen, J. D., Burton, J. C. & Nagel, S. R. Viscous to inertial crossover in liquid drop coalescence. Phys. Rev. Lett. 106, 114501 (2011).
29. Eddi, A., Winkels, K. G. & Snoeijer, J. H. Short time dynamics of viscous drop spreading. Phys. Fluids 25, 013102 (2013).
30. Mitra, S. & Mitra, S. K. Understanding the early regime of drop spreading. Langmuir 32, 8843–8848 (2016).
31. Jose, B. M. & Cubaud, T. Role of viscosity coefficients during spreading and coalescence of droplets in liquids. Phys. Rev. Fluids 2, 111601 (2017).
32. Voinov, O. Hydrodynamics of wetting. Fluid Dyn. 11, 714–721 (1976).
33. Cox, R. The dynamics of the spreading of liquids on a solid surface. Part 1. Viscous flow. J. Fluid Mech. 168, 169–194 (1986).
34. Blake, T. & Haynes, J. Kinetics of liquid/liquid displacement. J. Colloid Interface Sci. 30, 421–423 (1969).
35. Hadamard, J. Motion of liquid drops (viscous). C. R. Acad. Sci. Paris 154, 1735–1755 (1911).
36. Kubo, R. The fluctuation-dissipation theorem. Rep. Prog. Phys. 29, 255 (1966).
37. Kramers, H. A. Brownian motion in a field of force and the diffusion model of chemical reactions. Physica 7, 284–304 (1940).
38. Hanggi, P. Escape from a metastable state. J. Stat. Phys. 42, 105–148 (1986).
39. Blake, T. D. The physics of moving wetting lines. J. Colloid Interface Sci. 299, 1–13 (2006).
40. Ramiasa, M., Ralston, J., Fetzer, R. & Sedev, R. The influence of topography on dynamic wetting. Adv. Colloid Interface Sci. 206, 275–293 (2014).
41. Gadelmawla, E., Koura, M., Maksoud, T., Elewa, I. & Soliman, H. Roughness parameters. J. Mater. Process. Technol. 123, 133–145 (2002).
42. Schneider, C. A., Rasband, W. S. & Eliceiri, K. W. NIH Image to ImageJ: 25 years of image analysis. Nat. Methods 9, 671 (2012).

## Acknowledgements

This work has been supported by the National Science Foundation CBET-1605809 and has used resources of the Center for Functional Nanomaterials (CFN), which is a U.S. DOE Office of Science Facility, at Brookhaven National Laboratory under Contract No. DE-SC0012704. N.D. was partially supported by a Fellowship from the Joint Photon Sciences Institute at Stony Brook University. We thank Xiao Tong and Dario Stacchiola at CFN for technical advice on performing high-resolution AFM imaging.

## Author information

### Affiliations

1. #### Department of Mechanical Engineering, Stony Brook University, Stony Brook, NY, 11794, USA

Bibin M. Jose, Dhiraj Nandyala, Thomas Cubaud & Carlos E. Colosqui

2. #### Department of Applied Mathematics & Statistics, Stony Brook University, Stony Brook, NY, 11794, USA

Carlos E. Colosqui

### Contributions

B.M.J. and T.C. designed the experimental apparatus for spreading experiments, conducted the spreading experiments, and produced the data sets employed in the analysis. D.N. performed Wilhelmy plate measurements and AFM topographic imaging and assisted with the analysis and visualization of experimental data. C.E.C. led the development of the analytical model and the interpretation of experimental data. All authors contributed to the preparation of the manuscript.
### Competing Interests The authors declare no competing interests. ### Corresponding author Correspondence to Carlos E. Colosqui.
2018-10-22 22:32:33
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7192577719688416, "perplexity": 2613.5628589167864}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583515555.58/warc/CC-MAIN-20181022222133-20181023003633-00392.warc.gz"}
https://www.jobilize.com/online/course/4-1-transformation-of-graphs-using-output-by-openstax?qcr=www.quizover.com
4.1 Transformation of graphs using output Page 1 / 2

We modify the output of a function in a couple of ways, through arithmetic operations like addition, subtraction, multiplication, division and negation. These operations are similar to the ones we use to modify the independent variable. The general symbolic representation for a modification to the output of a function is:

$af(x)+d; \quad a,d\in \mathbb{R}$

These changes are called external or post-composition modifications. They complement modifications to input, but in a slightly different manner. In the case of modifications to output, all effects take place in the y-direction, i.e. the vertical direction, as against the horizontal transformations arising from modifications to input. Second, these transformations act in the direction of the operation on the output. For example, if we multiply the output by a positive constant greater than 1, then the graph of the core function is stretched along the y-axis. This means a change in the output is reflected in the same direction in which the operation takes place.

Addition and subtraction operation with function

In order to understand this type of transformation, we need to explore how the output of the function changes as we add a constant value to it. If we add 1 unit to the function, then each value of the function is incremented by 1 unit. It is a straightforward situation. In notation, we would say that the graph of "f(x) + 1" is the same as the graph of f(x) moved up by 1 unit. Alternatively, we can also describe this transformation by saying that the vertical reference of measurement, i.e. the x-axis, has moved down by 1 unit. Similarly, if we subtract 1 unit from the function, then each value of the function is decremented by 1 unit. In notation, we would say that the graph of "f(x) - 1" is the same as the graph of f(x) moved down by 1 unit. Alternatively, we can also describe this transformation by saying that the vertical reference of measurement, i.e. the x-axis, has moved up by 1 unit. We conclude:

The plot of y = f(x) + |a|; |a| > 0 is the plot of y = f(x) shifted up by |a| units.

The plot of y = f(x) - |a|; |a| > 0 is the plot of y = f(x) shifted down by |a| units.

We use these facts to draw the plot of the transformed function y = f(x) ± |a| by shifting the plot of f(x) by |a| units along the y-axis. Each point forming the plot is shifted parallel to the y-axis. In the figure below, the plot depicts the modulus function y = |x|. It is shifted 1 unit up, and the function representing the shifted plot is y = |x| + 1. Note that the corner of the plot at x = 0 is also shifted by 1 unit along the y-axis. Further, the plot is shifted 2 units down, and the function representing the shifted plot is y = |x| - 2. In this case, the corner of the plot is shifted 2 units down along the y-axis.

Multiplication and division of function

Multiplication and division scale the core graph in accordance with the operation. Scaling, however, is limited to the vertical i.e. y-direction. This means modification by either of these two arithmetic operations has no scaling impact in the x-direction. If we multiply the output of the function by a positive constant greater than 1, then the graph of the core function is stretched vertically by a factor equal to the constant. The magnification of the graph, i.e. stretching in the y-direction, is more noticeable in non-linear graphs like sine and cosine graphs, whose values are bounded in the interval [-1,1]. Let us consider the function,
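A short matplotlib sketch (my own construction, not from the original page) illustrating the vertical shifts and the vertical stretch described above, using y = |x| as the core function:

```python
import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(-3, 3, 200)
f = np.abs(x)                              # core function y = |x|

plt.plot(x, f, label="y = |x|")
plt.plot(x, f + 1, label="y = |x| + 1 (shifted up 1)")
plt.plot(x, f - 2, label="y = |x| - 2 (shifted down 2)")
plt.plot(x, 2 * f, label="y = 2|x| (vertical stretch)")
plt.legend()
plt.show()
```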
2019-06-17 14:36:11
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 1, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5423499345779419, "perplexity": 1653.9745515062234}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627998509.15/warc/CC-MAIN-20190617143050-20190617164720-00002.warc.gz"}
https://mathzsolution.com/solving-special-function-equations-using-lie-symmetries/
# Solving Special Function Equations Using Lie Symmetries

The Lie group and representation theory approach to special functions, and how they solve the ODEs arising in physics, is absolutely amazing. I've given an example of its power below on Bessel's equation. Kaufman's article describes algebraic methods for dealing with Hermite, Legendre & associated Legendre functions. Can we take the other special functions mentioned in this paper, obtainable as linear combinations of the conformal symmetries of the Laplacian (expressed as Lie algebra elements), and obtain their solution analogously to how Bessel is solved below? I believe it's something like a geometric interpretation of Weisner's method.

Bessel's equation seems to be saying: find a function in the plane such that when we shift it right, then shift it back left again, all locally (i.e. differentially) in polar coordinates, we get the same function (c.f. Killingbeck, Mathematical Techniques and Applications, sec. 8.21). The idea is to take Bessel's equation, factor it, add an extra variable to make the factors parameter-independent so that they become elements of a Lie algebra, identify the meaning of those factors (in this case, notice the Lie algebra factors are translations in polar coordinates), and realize it's just a differential expression of a symmetry. Bessel's equation arises from $$LRv = v$$ when you express $$L$$ & $$R$$ in polar coordinates. It makes sense to express them in polar coordinates, since Bessel arises from separating the Laplacian assuming cylindrical symmetry, and the $$LRv = v$$ assumption (not $$LRv = w$$) is motivated by the symmetry of the Laplacian.

Using this idea we can, for some reason, actually solve Bessel's equation with a picture! We just want to shift $$\mathcal{J}_n(r,\phi)$$ in the x-direction using the operator $$e^{a\tfrac{\partial}{\partial x}}$$ expressed in polar coordinates, $$e^{\tfrac{a}{2}(\mathcal{L}-\mathcal{R})}$$, and realize it will be equal to $$\mathcal{J}_n(r',\phi')$$. The last line of the calculation comes from dragging all this to the origin and putting it along the x-axis; here we see the geometric meaning of Bessel functions! (I am not yet completely sure how we link this calculation to the matrix representation of the Euclidean group $$E_2$$,

$$g(x,y,\theta) = \left( \begin{array}{ccc} \cos(\theta) & -\sin(\theta) & x \\ \sin(\theta) & \cos(\theta) & y \\ 0 & 0 & 1 \end{array} \right).$$

We can vaguely see that it represents planar motions, and that this is all linked to representation theory, but it's not fully clear to me how exactly.)

The question is: is there a similar, easy, unified, geometric exposition for the rest of these types of equations – such as the Hermite, Legendre and associated Legendre equations treated in Kaufman's article – analogous to (or a better version of) the one above? It'd be great to understand these other equations, their formulation and solution, with a geometric interpretation like this one. Understanding the intuition behind those groups seems to be the key. If you feel so inspired, please post your versions of these problems here.

References:

1. Killingbeck, Mathematical Techniques and Applications, sec. 8.21
2. Vilenkin, Representation of Lie Groups and Special Functions Vol. 1
3. Vilenkin, Special Functions and Theory of Group Representations
4. Miller, Lie Theory and Special Functions
5. Kaufman, Special Functions of Mathematical Physics from the Viewpoint of Lie Algebra
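As a quick numerical sanity check on the claim that these symmetry manipulations land on Bessel functions, one can verify directly that $$J_n$$ satisfies Bessel's equation $$z^2 y'' + z y' + (z^2 - n^2)y = 0$$. A minimal sketch in Python follows; scipy is my assumption here, not something the post itself uses:

```python
import numpy as np
from scipy.special import jv, jvp  # Bessel J and its derivatives

n = 3                           # order of the Bessel function
z = np.linspace(0.1, 20, 500)

# Residual of Bessel's ODE: z^2 J'' + z J' + (z^2 - n^2) J
residual = z**2 * jvp(n, z, 2) + z * jvp(n, z, 1) + (z**2 - n**2) * jv(n, z)

# Close to machine precision everywhere: J_n solves the equation.
print(np.max(np.abs(residual)))
```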
2022-10-04 11:20:53
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 11, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7982820272445679, "perplexity": 493.2658561930953}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337490.6/warc/CC-MAIN-20221004085909-20221004115909-00306.warc.gz"}
https://ideas.repec.org/a/kap/enreec/v54y2013i4p571-591.html
# Cross-Country Polarisation in CO2 Emissions Per Capita in the European Union: Changes and Explanatory Factors

Listed:
• Juan Duro

## Abstract

We analyse the degree of polarisation in the international distribution of CO2 emissions per capita in the European Union. It is analytically relevant to examine the degree of instability inherent to a distribution and, in the analysed case, the likelihood that the distribution and its evolution will increase or decrease the chances of reaching an agreement on climate policy. Two approaches were used to measure polarisation: the endogenous approach, in which countries are grouped according to their similarity in terms of emissions, and the exogenous approach, in which countries are grouped geographically. Our findings indicate a clear decrease in polarisation since the mid-1990s, which can essentially be explained by the fact that the different groups have converged (i.e. antagonism among the CO2 emitters has decreased) as the contribution of energy intensity to between-group differences has decreased. This lower degree of polarisation in CO2 distribution suggests a situation more conducive to the possibility of reaching EU-wide agreements on the mitigation of CO2 emissions. Copyright Springer Science+Business Media Dordrecht 2013

## Suggested Citation

• Juan Duro & Emilio Padilla, 2013. "Cross-Country Polarisation in CO2 Emissions Per Capita in the European Union: Changes and Explanatory Factors," Environmental & Resource Economics, Springer; European Association of Environmental and Resource Economists, vol. 54(4), pages 571-591, April.
• Handle: RePEc:kap:enreec:v:54:y:2013:i:4:p:571-591
• DOI: 10.1007/s10640-012-9607-x
• File URL: http://hdl.handle.net/10.1007/s10640-012-9607-x

As access to this document is restricted, you may want to search for a different version of it.

### Keywords

CO2 emissions; Distribution of emissions; European Union; Mitigation agreements; Polarisation
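The polarisation measurement the abstract alludes to is usually operationalised in this literature via an index of the Esteban–Ray type, in which the distribution is split into groups and antagonism grows with between-group distance and within-group identification. As a rough illustration only – the grouping, shares, emission means and α value below are hypothetical, not the paper's data or specification – one common form of the index (up to a normalising constant) can be sketched in Python:

```python
import numpy as np

def esteban_ray(pi, y, alpha=1.3):
    """One common form of the Esteban-Ray polarization index:

        P = sum_i sum_j pi_i^(1 + alpha) * pi_j * |y_i - y_j|

    where pi are group population shares, y are group means, and
    alpha (typically in [1, 1.6]) controls polarization sensitivity.
    The normalising constant K is omitted here."""
    pi, y = np.asarray(pi), np.asarray(y)
    return sum(
        (p_i ** (1 + alpha)) * p_j * abs(y_i - y_j)
        for p_i, y_i in zip(pi, y)
        for p_j, y_j in zip(pi, y)
    )

# Hypothetical example: three country groups with population shares
# and mean per capita emissions (tonnes of CO2 per capita).
shares = [0.40, 0.35, 0.25]
means = [6.0, 9.5, 13.0]
print(esteban_ray(shares, means))
```

Convergence of the group means, as the abstract describes for the EU since the mid-1990s, mechanically shrinks the |y_i - y_j| terms and hence the index.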
2018-04-23 14:29:42
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5050971508026123, "perplexity": 12059.203175423001}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125946011.29/warc/CC-MAIN-20180423125457-20180423145457-00025.warc.gz"}
https://jmutai.com/page/2/
# How to configure mpd and ncmpcpp on Linux

Music Player Daemon (MPD) is a flexible, powerful, server-side application for playing music. Through plugins and libraries it can play a variety of sound files while being controlled by its network protocol. In order to interact with mpd, a client program is needed; the post walks through the most commonly used client applications.

# Arch Linux Installation Cheatsheet With LUKS Encryption

Here is an Arch Linux installation cheatsheet I made for my own reference. It uses UEFI and LVM on LUKS for the installation.

# Virsh Commands Cheatsheet to Manage KVM Guest Virtual Machines

This is a comprehensive virsh commands cheatsheet. Virsh is a management user interface for guest domains: it can be used to create, pause, restart, and shut down domains. In addition, virsh can be used to list the current domains available on your virtualization hypervisor platform.

# Qemu-img cheatsheet for working with qemu-img

This is a brief qemu-img cheatsheet for working with the qemu-img command on Linux and Unix systems supporting qemu. I made this cheatsheet for my own reference and saw a need to share it. Before working with the qemu-img command to perform any disk operation, it's good to first understand what qemu is and how it is used in the Linux and virtualization world.

# Getting started with Xen Virtualization On CentOS 7.x

Xen is an open-source bare-metal hypervisor which allows you to run different operating systems in parallel on a single host machine. This type of hypervisor is normally referred to as a type 1 hypervisor in the virtualization world.

# Managing KVM Network Interfaces With virsh, nmcli and brctl in Linux

There are many choices for network configurations on a KVM host. In this post, I'll guide you through two main choices to configure KVM networking. We'll consider internal networking and external networking for guest operating systems running on KVM. The two network configurations we'll cover are:

• Using a Linux bridge with NAT for KVM guests
• Using a Linux bridge (without NAT) for KVM guests

# Understanding and Working With PAM Authentication in Linux

Linux Pluggable Authentication Modules (PAM) is a system of libraries that handle the authentication tasks for system users and applications. The library provides a stable API that privilege-granting programs like su and login defer to for standard authentication tasks. A Linux system administrator can use PAM modules to configure the way programs should authenticate users.

# How to import Mail accounts from one server to another using Imapsync

I provide web hosting services using ISPConfig. For quite some time, it was a real pain to migrate existing cPanel customers to ISPConfig, especially when required to import all users' mailboxes and data. I had to manually import Dovecot's mailbox folders between the two servers.
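Since several of these posts revolve around driving KVM guests programmatically, here is a tiny illustrative sketch using the libvirt Python bindings (`libvirt-python`) – my assumption, not something these posts prescribe – that lists domains on a local hypervisor, much like `virsh list --all` does:

```python
import libvirt  # pip install libvirt-python

# Connect to the local system-level QEMU/KVM hypervisor.
conn = libvirt.open("qemu:///system")

# Equivalent in spirit to `virsh list --all`: enumerate all domains.
for dom in conn.listAllDomains():
    state = "running" if dom.isActive() else "shut off"
    print(f"{dom.name():20s} {state}")

conn.close()
```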
2018-02-23 16:39:55
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.27942755818367004, "perplexity": 9289.190975572428}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891814801.45/warc/CC-MAIN-20180223154626-20180223174626-00379.warc.gz"}
http://www.researchgate.net/publication/36730316_Inverse_Problems_for_multiple_invariant_curves
Article: Inverse Problems for multiple invariant curves

01/2007; DOI:10.1017/S0308210506000400 Source: OAI

ABSTRACT Planar polynomial vector fields which admit invariant algebraic curves, Darboux integrating factors or Darboux first integrals are of special interest. In the present paper we solve the inverse problem for invariant algebraic curves with a given multiplicity and for integrating factors, under generic assumptions regarding the (multiple) invariant algebraic curves involved. In particular we prove, in this generic scenario, that the existence of a Darboux integrating factor implies Darboux integrability. Furthermore we construct examples where the genericity assumption does not hold and indicate that the situation is different for these.

Related articles:

• Liouvillian first integrals of second order polynomial differential equations. ABSTRACT: We consider polynomial differential systems in the plane with Liouvillian first integrals. It is shown that all such systems have Darbouxian integrating factors, and that the search for such integrals can be reduced to a search for the invariant algebraic curves of the system and their 'degenerate' counterparts. Electronic Journal of Differential Equations, 01/1999.
• Algebraic aspects of integrability for polynomial systems. ABSTRACT: We present an introductory survey to the Darboux integrability theory of planar complex and real polynomial differential systems. Our presentation contains some improvements to the classical theory. Qualitative Theory of Dynamical Systems, 01/1999.
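To make the notion of an invariant algebraic curve concrete: a curve f(x, y) = 0 is invariant for the planar system ẋ = P, ẏ = Q when f_x P + f_y Q = K f for some polynomial cofactor K. Here is a small sympy sketch verifying this for a textbook example of my choosing (the unit circle for a standard planar system; this example is illustrative and not taken from the paper above):

```python
import sympy as sp

x, y = sp.symbols("x y")

# A classic planar polynomial system for which the unit circle is invariant:
P = -y + x * (1 - x**2 - y**2)   # dx/dt
Q = x + y * (1 - x**2 - y**2)    # dy/dt

f = x**2 + y**2 - 1              # candidate invariant algebraic curve f = 0
K = -2 * (x**2 + y**2)           # its polynomial cofactor

# Invariance condition: f_x * P + f_y * Q == K * f
lhs = sp.diff(f, x) * P + sp.diff(f, y) * Q
print(sp.simplify(lhs - K * f))  # prints 0, so f = 0 is an invariant curve
```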
2013-12-05 00:01:31
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8884550333023071, "perplexity": 1554.0705415027137}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-48/segments/1386163037851/warc/CC-MAIN-20131204131717-00051-ip-10-33-133-15.ec2.internal.warc.gz"}
http://chrisvoncsefalvay.com/
# On the challenge of building HTTP REST APIs that don’t suck

Here’s a harsh truth: most RESTful HTTP APIs (in the following, APIs) suck, to some degree or another. Including the ones I’ve written. Now, to an extent, this is not my fault, your fault or indeed anyone’s fault. APIs occupy a strange no man’s land between stuff designed for machines and stuff designed for humans. On one hand, APIs are intended to allow applications and services to communicate with each other. If humans want to interact with some service, they will do so via some wrapper around an API, be it an iOS app, a web application or a desktop client. Indeed, the tools you need to interact with APIs – a HTTP client – are orders of magnitude less well known and less ubiquitous than web browsers. Everybody has a web browser and knows how to use one. Few people have a dedicated desktop HTTP browser like Paw or know how to use something like curl. Quick, how do you do a token auth header in `curl`?

At the same time, even if the end user of the API is the under-the-hood part of a client rather than a human end user, humans have to deal with the API at some point, when they’re building whatever connects to the API. Tools like Swagger/OpenAPI were intended to somewhat simplify this process, and the idea was good: let’s have APIs generate a schema that they also serve up, from which a generic client can then build a specific client. Except that’s not how it ended up working in practice, and in the overwhelming majority of cases, the way an API handler is written involves Dexedrine, coffee and long hours spent poring over the API documentation.

Which is why your API can’t suck completely. There’s no reason why your API can’t be a jumbled mess of methods from the perspective of your end user, who will interact with your API without needing to know what an API even is. That’s the beauty of it all. But if you want people to use your service – which you should very much want! – you’ll have to have an API that people can get some use out of.

Now, the web is awash with stuff about best practices in developing REST APIs. And, quite frankly, most of these are chock-full of good advice. Yes, use plural nouns. Use HATEOAS. Use the right methods. Don’t create `GET` methods that can change state. And so on. But the most important thing to know about rules, as a former CO of mine used to say, is to know when to break them, and what the consequences will be when you do. There’s a philosophy of RESTful API design called pragmatic REST that acknowledges this to an extent, and uses the ideas underlying REST as a guideline, rather than strict, immutable rules.

So, the first step of building APIs that don’t suck is knowing the consequences of everything you do. The problem with adhering to doctrine or rules or best practices is that none of that tells you what the consequences of your actions are, whether you follow them or not. That’s especially important when considering the consequences of not following the rules: not pluralizing your nouns and using `GET` to alter state have vastly different consequences. The former will piss off your colleagues (rightly so), the latter will possibly endanger the safety of your API and lead to what is sometimes referred to in the industry as Some Time Spent Updating Your LinkedIn & Resume.

Secondly, and you can take this from me: no rules are self-explanatory. Even with all the guidance in the world in your hand, there’s a decent chance I’ll have no idea why most of your code is the way it is.
The bottom line being: document everything. I’d much rather have an API breaking fifteen rules and giving doctrinaire rule-followers an apoplectic fit but which is well-documented, over a super-tidy bit of best practices incarnate (wouldn’t that be incodeate, given that code is not strictly made of meat?) that’s missing any useful documentation, any day of the week. There are several ways to document APIs, and no strictly right one – in fact, I would use several different methods within the same project for different endpoints. So for instance a totally run-of-the-mill `DELETE` endpoint that takes an object UUID as an argument requires much less documentation than a complex filtering interface that takes fifty different arguments, some of which may be mandatory. A few general principles have served me well in the past when it comes to documenting APIs:

• Keep as much of the documentation as you can out of the code and in the parts that make it into the documentation. For instance, in Python, this is the docstring.
• If your documentation allows, put examples into the docstring. An example can easily be drawn from the tests, by the way, which makes it a twofer.
• Don’t document for documentation’s sake. Document to help people understand. Avoid tedious, wordy explanations for a method that’s blindingly obvious to everyone.
• Eschew the concept of ‘required’ fields, values, query parameters, and so on. Nothing is ‘required’ – the world will not end if a query parameter is not provided, and you will be able to make the request at the very least. Rather, make it clear what the consequences of not providing the information will be. What happens if you do not enter a ‘required’ parameter? Merely calling it ‘required’ does not really tell me if it will crash, yield a cryptic error message or simply fail silently (which is something you should also try to avoid).
• Where something must have a particular type of value (e.g. an integer), where a value will have to be provided in a particular way (e.g. a Boolean encoded as 0/1 or True/False) or has a limited set of possible values (e.g. in an application tracking high school students, the year query parameter may only take the values `['freshman', 'sophomore', 'junior', 'senior']`), make sure this is clearly documented, and make it clear whether the values are case sensitive or not.
• If you envisage even the most remote possibility that your API will have to handle Unicode, emojis or other fancy things (basically, anything beyond ASCII), make sure you explain how your API handles such values.

Finally, eat your own dog food. Writing a wrapper for your API is not only a commercially sound idea (it is much more fun for other developers to just grab an API wrapper for their language of choice than having to homebrew one), it’s also a great way to gauge how painful it is to work with your API. Unless it’s anything above a 6 on the 1-10 Visual Equivalent Scale of Painful and Grumpy Faces, you’ll be fine. And if you need to make changes, make any breaking change part of a new version. An API version string doesn’t necessarily mean the API cannot change at all, but it does mean you may not make breaking changes: any method, any endpoint and any argument that worked on day 0 of releasing `v1` will have to work on `v1`, forever.

Following these rules won’t ensure your API won’t suck. But they’ll make sucking much more difficult, which is half the victory.
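To ground the documentation advice above in code, here is a minimal sketch of what a self-documenting endpoint might look like. The framework choice (Flask), the endpoint and every name in it are hypothetical; the point is where the documentation lives and what it states, not the specific API:

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/students/<student_id>/year", methods=["PUT"])
def set_student_year(student_id):
    """Set a student's year.

    Accepted values for `year` (case sensitive):
    'freshman', 'sophomore', 'junior', 'senior'.

    Consequence of omission: if `year` is missing from the request
    body, the record is left unchanged and a 400 response explains
    why (no silent failure).

    Example:
        PUT /students/42/year  with body {"year": "junior"}
    """
    year = (request.get_json(silent=True) or {}).get("year")
    if year not in {"freshman", "sophomore", "junior", "senior"}:
        return jsonify(error="year must be one of freshman, sophomore, junior, senior"), 400
    # ... persist the change here ...
    return jsonify(student=student_id, year=year)
```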
A great marksmanship instructor I used to know said that the essence of ‘technique’, be it in handling a weapon or writing an API, is to reduce the opportunity of making avoidable mistakes. Correct running technique will force you to run in a way that doesn’t even let you injure your ankle unless you deviate from the form. Correct shooting technique eliminates the risk of elevation divergences due to discrepancies in how much air remains in the lungs by simply making you squeeze the trigger at the very end of your expiration. Good API development technique keeps you from creating APIs that suck by restricting you to practices that won’t allow you to commit some of the more egregious sins of writing APIs. And the more you can see beyond the rules and synthesise them into a body of technique that keeps you from making mistakes, the better your code will be, without cramping your creativity.

# Using screen to babysit long-running processes

In machine learning, especially in deep learning, long-running processes are quite common. Just yesterday, I finished running an optimisation process that ran for the best part of four days – and that’s on a 4-core machine with an Nvidia GRID K2, letting me crunch my data on 3,072 GPU cores! Of course, I did not want to babysit the whole process. Least of all did I want to have to do so from my laptop. There’s a reason we have tools like `Sentry`, which can be easily adapted from webapp monitoring to letting you know how your model is doing.

One solution is to spin up another virtual machine, `ssh` into that machine, then from that `ssh` into the machine running the code, so that if you drop the connection to the first machine, it will not drop the connection to the second. There is also `nohup`, which makes sure that the process is not killed when you ‘hang up’ the `ssh` connection. You will, however, not be able to get back into the process again. There are also reparenting tools like `reptyr`, but the need they meet is somewhat different. Enter terminal multiplexers.

Terminal multiplexers are old. They date from the era of things like time-sharing systems and other antiquities whose purpose was to allow a large number of users to get their time on a mainframe designed to serve hundreds, even thousands of users. With the advent of personal computers that had decent computational power on their own, terminal multiplexers remained the preserve of universities and other weirdos still using mainframe architectures. Fortunately for us, two great terminal multiplexers, `screen` (aka `GNU Screen`) and `tmux`, are still being actively developed, and are almost definitely available for your *nix of choice. This gives us a convenient tool to sneak a peek at what’s going on with our long-suffering process. Here’s how.

Step 1

`ssh` into your remote machine, and launch `screen`. You may need to do this as `sudo` if you encounter the error where `screen`, instead of starting up a new shell, returns `[screen is terminating]` and quits. If `screen` is started up correctly, you should be seeing a slightly different shell prompt (and if you started it as `sudo`, you will now be logged in as root). In some scenarios, you may want to ‘name’ your `screen` session. Typically, this is the case when you want to share your screen with another user, e.g. for pair programming. To create a named screen, invoke `screen` using the session name parameter `-S`, as in e.g. `screen -S my_shared_screen`.

Step 2

In this step, we will be launching the actual script to run.
If your script is Python based and you are using `virtualenv` (as you ought to!), activate the environment now using `source <virtualenv folder>/<virtualenv name>/bin/activate`, replacing `<virtualenv folder>` by the name of the folder where your `virtualenv`s live (for me, that’s the `environments` folder; often enough it’s something like `~/.virtualenvs`) and `<virtualenv name>` by the name of your virtualenv (in my case, `research`). You have to activate your virtualenv even if you have done so outside of `screen` already (remember, `screen` means you’re in an entirely new shell, with all environment configurations, settings, aliases &c. gone)!

With your `virtualenv` activated, launch the script as normal – no need to launch it in the background. Indeed, one of the big advantages is the ability to see verbose mode progress indicators. If your script does not have a progress logger to `stdout` but logs to a logfile, you can start it using `nohup`, then put it into the background (`Ctrl-Z`, then `bg`) and track progress using `tail -f logfile.log` (where `logfile.log` is, of course, to be substituted by the filename of the logfile).

Step 3

Press `Ctrl-A` followed by `Ctrl-D` to detach from the current screen. This will take you back to your original shell after noting the address of the screen you’re detaching from. These always follow the format `<identifier>.<session id>.<hostname>`, where `<hostname>` is, of course, the hostname of the computer from which the `screen` session was started, `<session id>` stands for the name you gave your screen, if any, and `<identifier>` is an autogenerated 4-6 digit socket identifier. In general, as long as you are on the same machine, the screen identifier or the session name will be sufficient – the full canonical name is only necessary when trying to access a screen on another host.

To see a list of all screens running under your current username, enter `screen -list`. Refer to that listing, or the address echoed when you detached from the `screen`, to reattach to the process using `screen -r <socket identifier>[.<session identifier>.<hostname>]`. This will return you to the script, which keeps executing in the background.

Result

Reattaching to the process running in the background, you can now follow the progress of the script. Use the key combination in Step 3 to step out of the process anytime, and the rest of the step to return to it.

Bugs

There is a known issue, caused by strace, that leads to `screen` immediately closing, with the message `[screen is terminating]`, upon invoking `screen` as a non-privileged user. There are generally two ways to resolve this issue.

• Use a privileged user account and always invoke `screen` as `sudo`.
• As a privileged user, change the permissions of `screen` to `2775` by entering `sudo chmod 2775 $(which screen)`. The leading 2 sets the setgid bit, a privilege elevation upon execution, which means that repeated sudoing will not be necessary.

The overall effect of both solutions is the same. Notably, both may be undesirable from a security perspective. As always, weigh risks against utility.

Do you prefer `screen` to staying logged in? Do you have any other cool hacks to make monitoring a machine learning process that takes considerable time to run? Let me know in the comments!

Image credits: Zenith Z-19 by ajmexico on Flickr

# Fixing the mysterious Jupyter Tensorflow import bug

There’s a weird bug afoot that you might encounter when setting up a ‘lily white’ (brand new) development environment to play around with Tensorflow.
As it seems to have vexed quite a few people, I thought I’d put my solution here to help future tensorflowers find their way. The problem presents after you have set up your new `virtualenv`. You install Jupyter and Tensorflow, and when importing, you get this:

```
In [1]: import tensorflow as tf
---------------------------------------------------------------------------
ModuleNotFoundError                       Traceback (most recent call last)
----> 1 import tensorflow as tf

ModuleNotFoundError: No module named 'tensorflow'
```

Oh. Say you are a dogged pursuer of bugs, and wish to check if you might have installed Tensorflow and Jupyter into different virtualenvs. One way to do that is to simply activate your virtualenv (using `activate` or `source activate`, depending on whether you use virtualenvwrapper), and start a Python shell. Perplexingly, importing Tensorflow here will work just fine.

### The solution

Caution: at this time, this works only for CPython aka ‘regular Python’ (if you don’t know what kind of Python you are running, it is in all likelihood CPython).

Note: in general, it is advisable to start fixing these issues by destroying your virtualenv and starting anew, although that’s not strictly necessary.

Create a virtualenv, and note the base Python executable’s version (it has to be a version for which there is a Tensorflow wheel for your platform, i.e. 2.7 or 3.3-3.6).

Step 1

Go to the PyPI website to find the Tensorflow installation appropriate to your system and your Python version (e.g. cp36 for Python 3.6). Copy the path of the correct version, then open up a terminal window and declare it as the environment variable `TF_BINARY_URL`. Use `pip` to install from the URL you set as the environment variable, then install Jupyter.

```
CVoncsefalvay@orinoco ~ $ export TF_BINARY_URL=https://pypi.python.org/packages/b1/74/873a5fc04f1aa8d275ef1349b25c75dd87cbd7fb84fe41fc8c0a1d9afbe9/tensorflow-1.1.0rc2-cp36-cp36m-macosx_10_11_x86_64.whl#md5=c9b6f7741d955d1d3b4991a7942f48b9
CVoncsefalvay@orinoco ~ $ pip install --upgrade $TF_BINARY_URL jupyter
Collecting tensorflow==1.1.0rc2 from https://pypi.python.org/packages/b1/74/873a5fc04f1aa8d275ef1349b25c75dd87cbd7fb84fe41fc8c0a1d9afbe9/tensorflow-1.1.0rc2-cp36-cp36m-macosx_10_11_x86_64.whl#md5=c9b6f7741d955d1d3b4991a7942f48b9
  Using cached tensorflow-1.1.0rc2-cp36-cp36m-macosx_10_11_x86_64.whl
Collecting jupyter
  Using cached jupyter-1.0.0-py2.py3-none-any.whl
(... lots more installation steps to follow ...)
Successfully installed ipykernel-4.6.1 ipython-6.0.0 jedi-0.10.2 jinja2-2.9.6 jupyter-1.0.0 jupyter-client-5.0.1 jupyter-console-5.1.0 notebook-5.0.0 prompt-toolkit-1.0.14 protobuf-3.2.0 qtconsole-4.3.0 setuptools-35.0.1 tensorflow-1.1.0rc2 tornado-4.5.1 webencodings-0.5.1 werkzeug-0.12.1
```

Step 2

Now for some magic. If you launch Jupyter now, there’s a good chance it won’t find Tensorflow. Why? Because you just installed Jupyter, your shell might not have updated the `jupyter` alias to point to that in the virtualenv, rather than your system Python installation.

Enter `which jupyter` to find out where the `jupyter` link is pointing. If it is pointing to a path within your virtualenvs folder, you’re good to go. Otherwise, open a new terminal window and activate your virtualenv. Check where the `jupyter` command is pointing now – it should point to the virtualenv.

Step 3

Fire up Jupyter, and import tensorflow. Voila – you have a fully working Tensorflow environment!
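A related diagnostic worth keeping in your toolbox (my addition, not part of the original fix): from inside a notebook cell, check which interpreter the kernel is actually running. If the path does not point into your virtualenv, you have found your culprit:

```python
import sys

# The interpreter the Jupyter kernel is actually using. For a virtualenv
# named 'research', expect something like .../environments/research/bin/python
print(sys.executable)
```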
As always, let me know if it works for you in the comments, or if you’ve found some alternative ways to fix this issue. Hopefully, this helps you on your way to delve into Tensorflow and explore this fantastic deep learning framework!

Header image: courtesy of Jeff Dean, Large Scale Deep Learning for Intelligent Computer Systems, adapted from Untangling invariant object recognition by DiCarlo and Cox (2007).

# A deep learning

There are posts that are harder to write than others. This one perhaps has been one of the hardest. It took me the best part of four months and dozens of rewrites. Because it’s about something I love. And about someone I love. And about something else I love. And how these three came to come into a conflict. And, perhaps, what we all can learn from that.

As many of you might know, deep learning is my jam. Not in a faddish, ‘it’s what cool kids do these days’ sense. Nor, for that matter, in the sense so awfully prevalent in Silicon Valley, whereby the utility of something is measured in how many jobs it will get rid of, presumably freeing off humans to engage in more cerebral pursuits, or how it may someday cure intrinsically human problems if only those pesky humans were to listen to their technocratic betters for once. Rather, I’m a deep learning and AI researcher who believes in what he’s doing. I believe with all I am and all I’ve got that deep learning is right now our best chance to find better ways of curing cancer, producing more with less emissions, building structures that can withstand floods on a dime, identifying terrorists and, heck, creating entertaining stuff. I firmly believe that it’s one of the few intellectual pursuits I am somewhat suited for that is also worth my time, not the least because I firmly believe that it will make me have more of it – and if not me, maybe someone equally worthy.

Which is why it was so hard for me to watch this video, of my lifelong idol Hayao Miyazaki ripping a deep learning researcher to shreds. Now, quite frankly, I have little time for the researcher and his proposition. It’s badly made, dumb and pointless. Why one would inundate Miyazaki-san with it is beyond me. His putdown is completely on point, and not an ounce too harsh. All of his words are well deserved. As someone with a neurological chronic pain disorder that makes me sometimes feel like that creature writhing on the floor, I don’t have a shred of sympathy for this chap.[1]

Rather, it’s the last few words of Miyazaki-san that have punched a hole in my heart and have held my thoughts captive for months now, coming back into the forefront of my thoughts like a recurring nightmare. “I feel like we are nearing the end of times,” he says, the camera gracefully hovering over his shoulder as he sketches through his tears. “We humans are losing faith in ourselves.”

Deep learning is something formidable, something incredible, something so futuristic yet so simple. Deep down (no pun intended), deep learning is really not much more than a combination of a few relatively simple tricks, some the best part of a century old, that together create something fantastic. Let me try to put it into layman’s terms (if you’re one of my fellow ML/AI nerds, you can just jump over this part).
Consider you are facing the arduous and yet tremendously important task of, say, identifying whether an image depicts a cat or a dog. In ML lingo, this is what we call a ‘classification’ task. One traditional approach used to be to define what cats are versus what dogs are, and provide rules. If it’s got whiskers, it’s a cat. If it’s got big puppy eyes, it’s, well, a puppy. If it’s got forward pointing eyes and a roughly circular face, it’s almost definitely a kitty. If it’s on a leash, it’s probably a dog. And so on, ad infinitum, your model of a cat-versus-dog becoming more and more accurate with each rule you add. This is a fairly feasible approach, and is still used. In fact, there’s a whole school of machine learning called decision trees that relies on this kind of definition of your subjects. But there are three problems with it.

1. You need to know quite a bit about cats and dogs to be able to do this. At the very least, you need to be able to, and take the time and effort to, describe cats and dogs. It’s not enough to merely feed images of each to the computer.[2]
2. You are limited in time and ability to put down distinguishing features – your program cannot be infinitely large, nor do you have infinite time to write it. You must prioritise by identifying the factors with the greatest differentiating potential first. In other words, you need to know, in advance, what the most salient characteristics of cats versus dogs are – that is, what characteristics are almost omnipresent among cats but hardly ever occur among dogs (and vice versa)? All dogs have a snout and no cat has a snout, whereas some cats do have floppy ears and some dogs do have almost catlike triangular ears.
3. You are limited to what you know. Silly as that may sound, there might be some differentia between cats and dogs that are so arcane, so mathematical, that no human would think of them – but which might come trivially evident to a computer.

Deep learning, like friendship, is magic. Unlike most other techniques of machine learning, you don’t need to have the slightest idea of what differentiates cats from dogs. What you need is a few hundred images of each, preferably with a label (although that is not strictly necessary – classifiers can get by just fine without needing to be told what the names of the things they are classifying are: as long as they’re told how many different classes they are to split the images into, they will find differentiating features on their own and split the images into ‘images with thing 1’ versus ‘images with thing 2’ – magic, right?). Using modern deep learning libraries like TensorFlow and their high level abstractions (e.g. keras, tflearn) you can literally write a classifier that identifies cats versus dogs with a very high accuracy in less than 50 lines of Python, one that will be able to classify thousands of cat and dog pics in a fraction of a minute, most of which will be taken up by loading the images rather than the actual classification. Told you it’s magic.

What makes deep learning ‘deep’, though? The origins of deep learning are older than modern computers. In 1943, McCulloch and Pitts published a paper[3]
that posited a model of neural activity based on propositional logic. Spurred by the mid-20th century advances in understanding how the nervous system works, in particular how nerve cells are interconnected, McCulloch and Pitts simply drew the obvious conclusion: there is a way you can represent neural connections using propositional logic (and, actually, vice versa). But it wasn’t until 1958 that this idea was followed up in earnest. Rosenblatt’s ground-breaking paper[4] introduced this thing called the perceptron, something that sounds like the ideal robotic boyfriend/therapist but in fact was intended as a mathematical model for how the brain stores and processes information.

A perceptron is a network of artificial neurons. Consider the cat/dog example. A simple single-layer perceptron has a list of input neurons $x_1$, $x_2$ and so on. Each of these describes a particular property. Does the animal have a snout? Does it go woof? Depending on how characteristic they are, they’re multiplied by a weight $w_n$. For instance, all dogs and no cats have snouts, so $w_1$ will be relatively high, while there are cats that don’t have long curly tails and dogs that do, so $w_n$ will be relatively low. At the end, the output neuron (denoted by the big $\Sigma$) sums up these weighted inputs, $\sum_n w_n x_n$, and gives an estimate as to whether it’s a cat or a dog.

What was initially designed to model the way the brain works soon showed remarkable utility in applied computation, to the point that the US Navy was roped into building an actual, physical perceptron machine – the first application of computer vision. However, it was a complete bust. It turned out that a single layer perceptron couldn’t really recognise a lot of patterns. What it lacked was depth.

What do we mean by depth? Consider the human brain. The brain actually doesn’t have a single part devoted to vision. Rather, it has six separate areas[5] – the striate cortex (V1) and the extrastriate areas (V2-V6). These form a feedforward pathway of sorts, where V1 feeds into V2, which feeds into V3 and so on. To massively oversimplify: V1 detects optical features like edges, which it feeds on to V2, which breaks these down into more complex features: shapes, orientation, colour &c. As you proceed towards the back of the head, the visual centres detect increasingly complex abstractions from the simple visual information.

What was found is that by putting layers and layers of neurons after one another, even very complex patterns can be identified accurately. There is a hierarchy of features, as the facial recognition example below shows. The first hidden layer recognises simple geometries and blobs at different parts of the zone. The second hidden layer fires if it detects particular manifestations of parts of the face – noses, eyes, mouths. Finally, the third layer fires if it ‘sees’ a particular combination of these. Much like an identikit image, a face is recognised because it contains parts of a face, which in turn are recognised because they contain a characteristic spatial alignment of simple geometries.

There’s much more to deep learning than what I have tried to convey in a few paragraphs. The applications are endless.
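The ‘less than 50 lines of Python’ claim from earlier is not an exaggeration. Below is a rough sketch of what such a cat/dog classifier can look like – written against today’s tf.keras API rather than the keras/tflearn versions current when this post was written, with hypothetical data directories, so treat it as a flavour of the approach rather than the post’s actual code:

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# A small convolutional network: each Conv2D/MaxPooling2D pair extracts
# increasingly abstract features, echoing the V1 -> V6 hierarchy above.
model = models.Sequential([
    layers.Input(shape=(128, 128, 3)),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(1, activation="sigmoid"),  # cat vs dog: a single probability
])

model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Hypothetical directory layout: data/train/cats, data/train/dogs.
train = tf.keras.utils.image_dataset_from_directory(
    "data/train", image_size=(128, 128), batch_size=32, label_mode="binary"
)
model.fit(train, epochs=5)
```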
With the cost of computing decreasing rapidly, deep learning applications have now become feasible in just about all spheres where they can be applied. And they excel everywhere, outpacing not only other machine learning approaches (which makes me absolutely stoked about the future!) but, at times, also humans.

Which leads me back to Miyazaki. You see, deep learning can’t just classify things or predict stock prices. It can also create stuff. To put an old misunderstanding to rest quite early: generative neural networks are genuinely creating new things. Rather than merely combining pre-programmed elements, they come as close as anything non-human can come to creativity. The pinnacle of it all, generating enjoyable music, is still some ways off, and we have yet to enjoy a novel written by a deep learning engine. But to anyone who has been watching the rapid development of deep learning and especially generative algorithms based on deep learning, these are literally just questions of time. Or perhaps, as Miyazaki said, questions of the ‘end of times’.

What sets a computer-generated piece apart from a human’s composition? Someday, they will be, as far as quality is concerned, indistinguishable. Yet something that will always set them apart is the absence of a creator. In what is probably one of the worst written essays in 20th century literary criticism, a field already overflowing with bad prose for bad prose’s sake, Roland Barthes’s 1967 essay La mort de l’auteur posited a sort of separation between the author and the text, countering centuries of literary criticism that sought to explain the meaning of the latter by reference to the former. According to Barthes, texts (and so compositions, paintings &c.) have a life and existence of their own. To liberate works of art of an ‘interpretive tyranny’ that is almost self-explanatorily imposed on them, they must be read, interpreted and understood by reference to their audience and not their author. Indeed, Barthes eschews the term ‘author’ in favour of ‘scriptor’, the latter hearkening back to the Medieval monks who copied manuscripts: like them, the scriptor is not in control of the narrative or work of art that he or she composes. Devoid of the author’s authority, the work of art is now free to exist in a liberated state that allows you – the recipient – to establish its essential meaning.

Oddly, that’s not entirely what post-modernism seems to have created. If anything, there is now an increased focus on the author, at the very least in one particular sense. Consider the curious case of Wagner’s works in Israel. Because of his anti-Semitic views, arguably as well as due to the favour his music found during the tragic years of the Third Reich, Wagner’s works – even those that do not even remotely express a political position – are rarely played in Israel. Even in recent years, other than Holocaust survivor Mendi Roman’s performance of Siegfried in 2000, there have been very few instances of Wagner played in Israel – despite the curious fact that Theodor Herzl, founder of Zionism, admired Wagner’s music (if not his vile racial politics).

Rather than the death of the author, we more often witness the death of the work. The taint of the author’s life comes to haunt the chords of his composition and the stanzas of his poetry, every brush-stroke of theirs forever imbued with the often very human sins and mistakes of their lives.
Less dramatic, perhaps, than Wagner's case are the increasingly frequent boycotts, outbursts and protests against works of art based solely on the character of the author or composer. One need only look at the recent past to see protests, for instance, against the works of H. P. Lovecraft – works that have more to do with eldritch horrors than racist horridness – on account of the author's admittedly reprehensible views on matters of race. Outrages about one author or another, one artist or the next, are commonplace, acted out on a daily basis on the Twitter gibbets and the Facebook pillory. Rather than the death of the author, we experience the death of art, amidst a culture increasingly intolerant of the works of flawed or sinful creators.

This is, of course, not to excuse any of those sins or flaws. They should not, and cannot, be excused. Rather, perhaps, it is to suggest that part of a better understanding of humanity is recognising that artists are a cross-section of us as a species, equally prone to being misled and deluded into adopting positions that, as the famous German anti-Fascist and children's book author Erich Kästner said, 'feed the animal within man'. Nor is this to condone or justify art that actively expresses those reprehensible views – an entirely different issue. Rather, I seek merely to draw attention to the increased tendency to condemn works of art for the artist's political sins. In many cases, these sins are far from as straightforward as Lovecraft's bigotry or Wagner's anti-Semitism; they can be as subtle as going against the drift of public opinion – the Orwellian sin of 'wrongthink'. With the internet having become a haven of mob mentality (something I personally was subjected to a few years ago), the threshold at which the sins of the creator are visited upon their creations has dropped significantly. It's not the end of days, but you can see it from here.

In which case perhaps Miyazaki is right. Perhaps what we need is art produced by computers. As Miyazaki-san said, we are losing faith in ourselves. Not in our ability to create wonderful works of art, but in our ability to measure up to some flawless ethos, to some expectation of the artist as a flawless being. We are losing faith in our artists, in our creators, our poets and painters and sculptors and playwrights and composers, because we fear that the inevitable revelation of greater – or perhaps lesser – misdeeds or wrongful opinions from their past will not merely taint them: it will no less taint us, the fans and aficionados and cognoscenti. Put not your faith in earthly artists, for they are fickle, and prone to having opinions that might be unacceptable, or be seen as such someday. Is it not a straightforward response, then, to declare one's love for the intolerable synthetic Baroque of Stanford machine learning genius Cary Kaiming Huang's research? In a society where the artist's sins taint the work of art and, through it, all those who confess to enjoying it, there's no other safe bet. Only the AI can cast the first stone.

And if the cost of that is truly the chirps of Cary's synthetic Baroque generator, Miyazaki is right on the other point, too. It truly is the end of days.

References

1. Least of all because I know how rudimentary and lame his work is. I've built evolutionary models of locomotion where the first stages look like this. There's no cutting-edge science here.
2. There's a whole aspect of the story called feature extraction, which I will ignore for the sake of simplicity and assume that it just happens. It doesn't, of course, and it plays a huge role in identifying things, but this story is complex enough as it is.
3. McCulloch, W. and Pitts, W. (1943). A Logical Calculus of the Ideas Immanent in Nervous Activity. Bulletin of Mathematical Biophysics 5 (4): 115–133. doi:10.1007/BF02478259.
4. Rosenblatt, F. (1958). The Perceptron: A Probabilistic Model for Information Storage and Organization in the Brain. Psychological Review 65 (6): 386–408. doi:10.1037/h0042519.
5. Or five, depending on whether you consider the dorsomedial area a separate area of the extrastriate cortex.

If you develop for Amazon's Alexa-powered devices, you must at some point have come across Flask-Ask, a project by John Wheeler that lets you quickly and easily build Python-based Skills for Alexa. It's so easy, in fact, that John's quickstart video, showing the creation of a Flask-Ask-based Skill from zero to hero, takes less than five minutes! How awesome is that? Very awesome.

Bootstrapping a Flask-Ask project is not difficult – in fact, it's pretty easy, but also pretty repetitive. And so, being the ingenious lazy developer I am, I've come up with a (somewhat opinionated) cookiecutter template for Flask-Ask.

## Usage

Using the Flask-Ask cookiecutter should be trivial. Make sure you have cookiecutter installed, either in a virtualenv you have activated or in your system installation of Python. Then, simply use `cookiecutter gh:chrisvoncsefalvay/cookiecutter-flask-ask` to get started. Answer the friendly assistant's questions, and voilà! You have the basics of a Flask-Ask project all scaffolded. Once you have scaffolded your project, you will have to create a virtualenv for it and install dependencies by invoking `pip install -r requirements.txt`. You will also need ngrok to test your skill from your local device.

## What's in the box?

The cookiecutter has been configured with my Flask-Ask development preferences in mind, which in turn borrow heavily from John Wheeler's. It provides a scaffold of a Flask application, including not only session start handlers and an example intent but also a number of handlers for built-in Alexa intents, such as `Yes`, `No` and `Help`. There is also a folder structure you might find useful, including an intent schema for some basic Amazon intents and a corresponding empty `sample_utterances.txt` file, as well as a gitkeep'd folder for custom slot types. Because I'm a huge fan of Sphinx documentation and strongly believe that voice apps need to be assiduously documented to live up to their potential, there is also a `docs/` folder with a `Makefile` and an opinionated `conf.py` configuration file.

## Is that all?!

Blissfully, yes, it is. Thanks to John's extremely efficient and easy-to-use Flask-Ask project, you can discourse with your very own skill less than twenty minutes after starting the scaffolding! You can find the cookiecutter-flask-ask project here. Issues, bugs and other woes are welcome, as are contributions (simply raise a pull request). For help and advice, you can find me on the Flask-Ask Gitter a lot during daytime CET.
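To give a flavour of what such a skill looks like, here is a minimal hand-rolled sketch of a Flask-Ask app – an approximation of the kind of handlers the scaffold produces, not the template's actual output; `HelloIntent` is a made-up custom intent name:

```python
from flask import Flask
from flask_ask import Ask, question, statement

app = Flask(__name__)
ask = Ask(app, '/')

@ask.launch
def launch():
    # Fires when the user opens the skill without naming an intent.
    return question("Hi! Ask me to say hello.")

@ask.intent('HelloIntent')
def hello():
    # A custom intent -- 'HelloIntent' is an illustrative name only.
    return statement("Hello from Flask-Ask!")

@ask.intent('AMAZON.HelpIntent')
def help_intent():
    # One of the built-in Amazon intents the scaffold handles for you.
    return question("You can just ask me to say hello.")

if __name__ == '__main__':
    app.run(debug=True)
```

Run it locally, point ngrok at the Flask port, and paste the resulting HTTPS URL into your skill's endpoint configuration to test it from a real device.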
# In which my awesome father-in-law has taken care of my bedtime reading

I only had an hour or so to go through them, but I can already see the difference between these books and the rest of the SDR literature out there. Instead of clobbering the reader with heavy maths out of the gate, or reading like something written by radio anoraks for radio anoraks, the Clarks' books read easy while going deep. If you have any interest in software-defined radio, or are as lucky as I was to have picked up a HackRF for Christmas, you MUST get these books!

# Nostoi

One of the best things about my job is travelling to new (or, in this case, old) places. And yet it wasn't until I had a home to go home to that I began to appreciate the wide world. Always on the road, existence is a sort of fleeting limbo. But if you have an Ithaca to yearn for, a Penelope whose arms await you, you suddenly understand. It's in being away that we discover our home. It is in home that we discover away.

# "Next stop: Leiden University Faculty of Law!"

Returning to a place from one's old life is always a complex experience. I spent a year studying in this town a mere decade ago, yet today it feels like an eternity, or a past life. So much has changed since then, and I barely recognise the man I was. Back when I lived in Leiden, I was attached to the law faculty, housed in Kamerlingh Onnes's old lab. If you had told me then that a decade and a bit later I would be back, but as a data scientist working with a client nearby, I would have laughed. Data science wasn't even a thing back then, and while I was always into statistics and maths, I never saw myself doing it as a career until relatively recently. And so, to return to a town that holds all these memories from a past life is strange, to say the least. Strange – but not necessarily unpleasant!
2017-12-14 22:50:37
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 6, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.285055935382843, "perplexity": 1772.7591358381997}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948551162.54/warc/CC-MAIN-20171214222204-20171215002204-00089.warc.gz"}
https://www.sparrho.com/item/the-arrow-of-time-in-the-collapse-of-collisionless-self-gravitating-systems-non-validity-of-the-vlasov-poisson-equation-during-violent-relaxation/11284ae/
# The Arrow of Time in the collapse of collisionless self-gravitating systems: non-validity of the Vlasov-Poisson equation during violent relaxation

Research paper by Leandro Beraldo e Silva, Walter de Siqueira Pedra, Laerte Sodré, Eder Perico, Marcos Lima

Indexed on: 21 Mar '17. Published on: 21 Mar '17. Published in: arXiv - Astrophysics - Astrophysics of Galaxies

#### Abstract

The collapse of a collisionless self-gravitating system, with the fast achievement of a quasi-stationary state, is driven by violent relaxation, with a typical particle interacting with the time-changing collective gravitational potential. It is traditionally assumed that this evolution is described by the (time-reversible) Vlasov-Poisson equation, in which case entropy must be conserved. We use N-body simulations to follow the evolution of an isolated self-gravitating system, estimating the (fine-grained) distribution function and the corresponding Shannon entropy. We do this with three different codes: NBODY-6 (direct summation without softening), NBODY-2 (direct summation with softening) and GADGET-2 (tree code with softening), for different numbers of particles and initial conditions. We find that during violent relaxation entropy increases in a way that cannot be described by 2-body relaxation as modeled by the Fokker-Planck approximation. On the other hand, the long-term evolution is very well described by this model. Our results imply that the violent relaxation process must be described by a kinetic equation other than the Vlasov-Poisson, even if the system is collisionless. Our estimators provide a general method for testing any proposed kinetic equation. We also study the dependence of the 2-body relaxation time-scale $\tau_{col}$ on the number of particles $N$, obtaining $\tau_{col}\propto \sqrt{N}$, and the dependence of $\tau_{col}$ on the softening length $\varepsilon$, which can be fit by a function of the form $\tau_{col} \propto \sqrt{\varepsilon}\cdot e^{c\varepsilon}$, for a fixed number of particles.
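The fitted form for the softening-length dependence is easy to play with numerically. Here is a minimal sketch of such a fit in Python, under stated assumptions: the data points are synthetic stand-ins (not values from the paper), and the parameter values are made up for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

# Fitted form for the 2-body relaxation time-scale as a function of
# the softening length eps: tau = a * sqrt(eps) * exp(c * eps).
def tau_model(eps, a, c):
    return a * np.sqrt(eps) * np.exp(c * eps)

# Synthetic "measurements" standing in for time-scales estimated from
# N-body runs at different softening lengths (values are invented).
eps = np.array([0.01, 0.02, 0.05, 0.1, 0.2, 0.5])
noise = np.random.default_rng(0).normal(1.0, 0.05, eps.size)
tau = tau_model(eps, a=120.0, c=1.5) * noise

# Recover the parameters of the functional form from the noisy data.
params, cov = curve_fit(tau_model, eps, tau, p0=(100.0, 1.0))
print("a = %.1f, c = %.2f" % tuple(params))
```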
2021-08-06 03:23:15
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7316601276397705, "perplexity": 1240.1097947550693}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046152112.54/warc/CC-MAIN-20210806020121-20210806050121-00436.warc.gz"}
https://amathew.wordpress.com/2010/11/01/bleg-talk-ideas-for-high-schoolers/
So I signed up to give a talk at HMMT some time back, which will be this Saturday. As expected, I procrastinated on preparing it until now. The problem is, I'm not sure what to talk about. In high school, I wasn't really into math contests such as HMMT — my mind was never able to find creative solutions with the necessary speed, and I'd consistently turn in abysmal performances. As a result, I was never exposed to much of the culture of high school math contests (whose existence I only found out about not that long ago). Anyway, I'm not completely sure how to prep this talk, or even what to talk about. Some topics that I consider talk-worthy and interesting are:

1. Lecture one of an algebraic geometry class. Define varieties and algebraic sets, and state (or even prove) the Nullstellensatz. But I suspect this will use more commutative algebra than I should assume. I understand that plenty of extremely accomplished HMMTers may not know what a ring is.
2. The p-adic numbers. This has the benefit of my being able to recycle an old talk. But I might have to re-tool it.
3. Quadratic reciprocity. Perhaps the proof via Gauss sums, for instance. But this is something that people will tend to know, right? (See the sketch below.)
4. A brief intro to computability theory (as in — Turing machines, unsolvability of the halting problem, complexity classes; maybe say something about Kolmogorov complexity).

The basic problem is that such topics essentially amount to picking your favorite textbook on subject X, choosing five or six pages, and reading them aloud to the students — in short, a normal class. Which is probably not what they're looking for. But some of you readers have better ideas than I. So, any thoughts? Pretend, or not, that you were in high school. What would you wish to know that I could cover in an hour? (If I end up using your topic, I'll mention you in the talk!)
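For what it's worth, topic 3 lends itself to a quick computational teaser. Here is a minimal sketch (my addition, not part of the original post) that computes Legendre symbols via Euler's criterion and checks quadratic reciprocity for a pair of odd primes:

```python
# Euler's criterion: for an odd prime p and a not divisible by p,
# a^((p-1)/2) mod p is +1 if a is a quadratic residue mod p,
# and p-1 (i.e. -1) otherwise.
def legendre(a, p):
    ls = pow(a, (p - 1) // 2, p)
    return -1 if ls == p - 1 else ls

# Quadratic reciprocity for distinct odd primes p, q:
# (p|q) * (q|p) == (-1)^(((p-1)/2) * ((q-1)/2))
p, q = 11, 19
lhs = legendre(p, q) * legendre(q, p)
rhs = (-1) ** (((p - 1) // 2) * ((q - 1) // 2))
print(legendre(p, q), legendre(q, p), lhs == rhs)  # 1 -1 True
```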
2023-03-21 23:15:19
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 2, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4973200857639313, "perplexity": 647.6965584541304}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296943747.51/warc/CC-MAIN-20230321225117-20230322015117-00744.warc.gz"}