url
stringlengths
14
2.42k
text
stringlengths
100
1.02M
date
stringlengths
19
19
metadata
stringlengths
1.06k
1.1k
https://www.greencarcongress.com/2012/09/nissan-20120923.html?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+greencarcongress%2FTrBK+%28Green+Car+Congress%29
## Nissan says greater than average battery capacity loss due to mileage and temperature ##### 23 September 2012 Nissan Senior Vice President, Research & Development–Nissan Americas, Carla Bailo sent an open letter to the LEAF community summarizing the company’s initial findings on reports of battery capacity loss expressed by a number of owners in the Phoenix market. (Earlier post.) Bailo said that Nissan identified seven LEAF owners in the Phoenix area who had reported concerns with their vehicles. Nissan brought the cars to its Arizona test facility, removed the batteries for evaluation, measured capacity, and conducted voltage testing on individual battery cells. These tests were diagnostic only; no modifications were performed to the battery packs themselves. Nissan then analyzed the results with specific emphasis on the rate of actual capacity loss for each vehicle. The goals of the testing were to determine: 1. if there were any defects in materials or workmanship in the individual batteries or vehicle systems; 2. if the batteries were performing to specification; and 3. their performance relative to the global LEAF population. Overall findings were: • The Nissan LEAFs inspected in Arizona are operating to specification and their battery capacity loss over time is consistent with their usage and operating environment. No battery defects were found. • A small number of Nissan LEAF owners in Arizona are experiencing a greater than average battery capacity loss due to their unique usage cycle, which includes operating mileages that are higher than average in a high-temperature environment over a short period of time. While we understand that some LEAF owners are concerned about battery capacity loss, we want all owners to remember that all battery-electric vehicles—and all lithium-ion batteries—demonstrate capacity loss over time. So while your LEAF may have been able to travel a certain distance or more on a charge when new, its range will decrease as the battery ages, miles accumulate and gradual capacity loss occurs. This loss in capacity will occur most rapidly in the early part of your battery’s life, but the rate should decrease over time. ...It is also important to put the scope of these concerns in perspective. Globally, there are more than 38,000 Nissan LEAFs on the road that have travelled more than 100 million zero-emission miles, and we expect these vehicles, in normal operating conditions, to retain 80 percent of battery capacity after five years. As each user’s operating characteristics are unique and many factors impact battery capacity, we can expect some vehicles to have greater than 80 percent capacity at five years, and some vehicles to have less. In Arizona, we have approximately 450 LEAFs on the road. Based on actual vehicle data, we project the average vehicle in that market to have battery capacity of 76 percent after five years—or a few percentage points lower than the global estimate. Some vehicles in Arizona will be above this average, and some below. Factors that may account for this differential include extreme heat, high speed, high annual mileage and charging method and frequency of the Nissan LEAFs in the Phoenix market. 80% after FIVE years? I thought that was supposed to be after 8-10 years? In that case the Leaf is not an economic proposition anywhere else, let alone in Arizona. It depends how many miles they cover in those 5 years, no? EP: They are talking about 80% as an average, so it will be based on average mileage. The Leaf car is not a viable proposition if you need a $10,000 or so replacement battery after 5 years. It also means that a lot of people will get below average, those in Arizona for instance. How bad a deal is that? At trade in your car will be worth buttons. Arizona is a special case for temperature and distance. The SF bay area would probably stress batteries a lot less. It is not the 75% remaining capacity in Arizona after 5 years that kills the Leaf, although it of course makes it absurd to buy one there, but the 80% average remaining capacity after 5 years. What is your car going to be worth when you want to trade it in after 3-5 years? It makes no economic sense at all. Nissan may have to modify the battery thermal management system, specially for LEAFs operating in very hot areas like Arizona and for users who require higher speed for longer distances. At least, it should be a recommended option? A liquid cooled system may be required? There are different trade-offs and flexibilities with EVs. The motor may not need to be replaced in a million miles or decades. The five year fuel/maintenance savings may more than buy a better cheaper future battery. Perhaps a future genset can mount via a trailer hitch(patent pending:). VW made MANY sales/ads off of longevity. I still recall Woody Allen finding a VW bug in a cave, dusting it off, and driving it off in a movie. Maybe EV ads should leverage this. There are different trade-offs and flexibilities with EVs. The motor may not need to be replaced in a million miles or decades. The five year fuel/maintenance savings may more than buy a better cheaper future battery. Perhaps a future genset can mount via a trailer hitch(patent pending:). VW made MANY sales/ads off of longevity. I still recall Woody Allen finding a VW bug in a cave, dusting it off, and driving it off in a movie. Maybe EV ads should leverage this. Davemart, AFAIK It has always been 80% after 5 years, 70% after 10 years. The other reasons given by Nissan seem standard corporate YMMV speak, as the owners at mynissanleaf.com have not been able to identify any correlation with speed, charging habits and only a weak correlation with mileage. Persistent high average ambient temps is what kills the battery, the other factors just come in handy for Nissan to prop up their press releases and divert attention. Expect a class action suit soon. Harvey Nissan may have to modify the battery thermal management system, What thermal management system? Perhaps I misunderstood what Nissan were claiming on battery life in the past, but in any case 5 years to 80% is not economically viable. I make that a range on the EPA cycle of under 60 miles. Nissan simply have the wrong battery chemistry. Some varieties of NMC, lithium ion phosphate and lithium titanate all have much, much better cycle life. To come extent the decay in hot climates can be delayed by liquid cooling, but that does not solve the problem when you are parked in the heat for the day, although it mitigates it. GM, which uses a similar chemistry in the Volt, not only has liquid cooling but has specified the DOD very conservatively to give acceptable life. Here is A123's new lithium ion performance: 'Just to repeat, that is 167 degrees Fahrenheit they are testing these pack ats; hotter than any recorded temperature on Earth. And under these extreme condition, after 700 full cycles, the pack is still retaining 90% of life. In “LEAF miles” that would be 90% retention after 50,400 clicks, in 167 degree weather.' http://insideevs.com/a123-updates-next-gen-nanophosphate-ext-batteries-solves-lithium-battery-heat-issues/ @DaveMart: You have conflated two completely different propositions and your logic is weak at best. If it has only 80% battery capacity after 5 years "In that case the Leaf isn't an economic proposition to anybody" AND "you have to replace your battery after five years". Who says? If your leaf still covers your commute then it's still an economic proposition. Same answer to your battery replacement bullcrap. Also it's worth pointing out that even in a gasoline powered car the gas mileage deteriorates over time but nobody uses that as an excuse to replace the engine. Please try to do better with your logic. I'm not sure what your problem is, or why you feel the need to address other posters aggressively. I can't be bothered to reply to your snotty ignorance. Your notion that people will buy the car in any volume if the battery deteriorates at that rate is as ill-conceived as your manners. Xxdanbrowne, Welcome to greencarcongress. As for your notion that gas mileage deteriorates over time, this is much less of an issue for modern cars, and usually only occurs if the car gets really old. The second thing you do not account for is that a lower mileage is easier to compensate than short range. The LEAF is already at the low end of usable range, so any deterioration quickly renders the car useless for its owner. In hot regions, use more sturdy battery chemistry, and use PHEV's with 10-20-mile AER, instead of BEV, charge the car twice a day, and replace the battery every 5 years or so, before significant calendar life loss. If driving only 10-mile each way of commute, then get the Prius Plug-in for 11-mile AER, and charge twice daily. If driving up to 20 miles each way, then get the Ford C-max Energi and charge twice daily. H2-FCV is another potential consideration for ZEV selection in regions of extreme climate (too cold or too hot) after 2015. @Roger: There is no need to accept a five year battery life in any region. There are several chemistries which have far higher cycle life, which in turn means that it is not so disastrous if you have to sacrifice some in hot climates. Mercedes's Li-tec, for instance, is rated for 400,000 km, which is about 250,000 miles, as against 100,000 for the Leaf, so that gives lots of leeway whilst still having a reasonable life. Since if you option to buy the battery instead of lease it costs you$284kw, it is hardly expensive. If Nissan are saying that in five years of average use the battery will be down to 80% their claim that you can get around 100,000 miles out of a pack seems absurd, since they sure aren't saying 20,000 miles a year is average. Averages are usually calculated on the basis of 12,000 miles a year. So perhaps Nissan is now saying that 60,000 miles down to 80% is what you can expect. As far as I recollect, Nissan were talking about 8-10 years and 100,000 miles down to 80% without fast charging, or 70% with fast charging, which would be acceptable. I think Nissan have been caught out, just as they have in the sales numbers they expected. I also remember Nissan talking about an NMC chemistry, with higher energy density. Some variants of this also have higher cycle life. In my view they need to urgently change to a different and more robust chemistry. Yes, Nissan next generation BEVs will probably have more robust batteries + improved temperature management system (liquid + passive cooling etc?). A cold weather pack already exist (as an option) and a hot weather pack will soon be available (as an option)? Remember, the fist generation ICEVs were not perfect and had 1001 problems. Not only has Nissan been caught out but the whole crack pot idea of electric vehicles using lithium ion batteries that don't have the range and need battery replacement to the tune of $10large after 5 years of driving is called into question. Mannstein, not all lithium batteries are the same. It is easy to generalise, but this history is far from over. “greater than average battery capacity loss [battery is defective but they say its] due to their unique usage cycle, which includes operating mileages that are higher than average…” Unique and higher than average? The EPA says annual estimate of cost is$561 based on a 15,000 mile year, i.e. average which I seriously doubt most of them exceeded. Also average? VOLT loves to say the average driver goes less than 20 miles a day… is this what they mean? “…. in a high-temperature environment” Meaning Arizona is too hot to use these cars as it wasn’t designed to be fit for use there as these customers intended to use the car as a daily commuting car. Ergo 1. the 35K Nissan Leaf is about a useful as a bloated in-City only grocery vehicle (i.e. golf cart). 2. The manufacturer warranty is a farce as it never warranted the battery operating at its advertised range on the EPA sticker. They ought to include the “normal” range decrease then as well. So far, there is a rumor the replacement battery cost for a Leaf is \$5k, including a core charge of course. Per Nissan data the average Leaf commute is 29 miles so there is plenty of room for battery degradation, sprinkle in a few chargers around town and you can stretch the range of your old Leaf. Where did the rumour come from? Have you just made it? :-) 'Though several LEAF owners have succeeded in selling their vehicles in the wake of overwhelming evidence that the car frequently experiences rapid battery degradation in warmer climates, others haven't been so lucky. Over the last two weeks I've spoken to several frustrated LEAF owners in the Phoenix area who have tried to no avail to sell or trade their cars back to the dealerships they bought them from. In some cases, dealerships told them that they are unwilling to purchase any used LEAFs because to date, Nissan has offered no assurance that the problem will be remedied.' http://www.plugincars.com/arizona-leaf-owners-selling-no-longer-option-124510.html Xxdanbrowne, Get better manners or leave. The comments to this entry are closed.
2023-03-21 10:47:57
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2613056004047394, "perplexity": 2220.10413363071}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296943695.23/warc/CC-MAIN-20230321095704-20230321125704-00434.warc.gz"}
http://mathhelpforum.com/business-math/23826-solved-please-help.html
I can't find the answer to this question, I don't even know where to start, can anyone help?! A family has a mortgage their house on a 25 year amortization period. The nominal interest rate is 5.95%, the interest is compounded semi-annually. The family pays $3107.77 monthly towards their mortgage, and their current balance is$237562.52 Assuming the interest rate stays the same over the duration of the mortgage, answer the following questions: a) how many more months will it take until the mortgage is paid off? b) what was the original amount of the mortgage loan? Thank you!!! 2. Originally Posted by abcd19 I can't find the answer to this question, I don't even know where to start, can anyone help?! A family has a mortgage their house on a 25 year amortization period. The nominal interest rate is 5.95%, the interest is compounded semi-annually. The family pays $3107.77 monthly towards their mortgage, and their current balance is$237562.52 Assuming the interest rate stays the same over the duration of the mortgage, answer the following questions: a) how many more months will it take until the mortgage is paid off? b) what was the original amount of the mortgage loan? Thank you!!! We need to value everything at the same period. Our payment period is 25 years worth of months, or 300 months. We need to find the monthly interest rate by using this equality: $(1 + \frac{i^{(12)}}{12})^{12} = (1 + \frac{i^{(2)}}{2})^2$ $(1 + \frac{i^{(12)}}{12})^{12} = (1 + \frac{.0595}{2})^2$ $(1 + \frac{i^{(12)}}{12})^{12} = 1.060385$ $(1 + \frac{i^{(12)}}{12}) = 1.004897764$ $\frac{i^{(12)}}{12} = .004897764$ The equation for a constant payment annuity-immediate is: $a_{n} = \frac{1 - v^n}{i}$ For an interest rate of .4898% each month with payments of \$3107.77 monthly, and the current balance, we can figure out the number of months... $237562.52 = 3107.77\frac{1 - (.995126)^n}{.004897}$ $.3743 = 1 - .995126^n$ $.62566 = .995126^n$ Take natural log of both sides... $-.46894 = -.004885917n$ $n = 95.98$ months which is about 8 years. The original amount of the loan becomes pretty easy... we have 300 months in the entire loan, so: $A = 3107.77\frac{1 - (.995126)^{300}}{.004897} = 488,092.52$
2014-12-21 01:28:50
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 12, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5104138851165771, "perplexity": 644.315645552472}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-52/segments/1418802770554.119/warc/CC-MAIN-20141217075250-00140-ip-10-231-17-201.ec2.internal.warc.gz"}
http://worksheets.tutorvista.com/solving-one-step-equations-worksheet.html
# Solving One-step Equations Worksheet Solving One-step Equations Worksheet • Page 1 1. Find the value of $z$ in the number sentence. $z$ + 2 = 21 a. 19 b. 18 c. 20 d. 17 2. Find the value of $z$ in the number sentence. $z$ + 4 = 24 a. 21 b. 19 c. 18 d. 20 3. Find the value of $x$ in the number sentence. $x$ + 6 = 9 a. 1 b. 3 c. 4 d. 2 4. Find the value of $y$ in the number sentence. $y$ + 2 = 24 a. 21 b. 20 c. 22 d. 23 5. What is the value of $n$ in the number sentence 19 = 8 + $n$? a. 11 b. 13 c. 12 d. 10 6. What is the value of $n$ in the number sentence? 62 - $n$ = 6 a. 57 b. 54 c. 56 d. 55 7. Find the value of $a$ in the equation shown. a. 21 b. 29 c. 25 d. 100 8. What is the value of $n$ in the number sentence? 247 + $n$ = 255 a. 6 b. 10 c. 9 d. 8 9. Find the value of $b$ in the equation shown. 6$b$ = 12 a. 12 b. 6 c. 2 d. 18
2014-11-21 16:15:44
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 17, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.42455410957336426, "perplexity": 655.1449882989785}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-49/segments/1416400372999.9/warc/CC-MAIN-20141119123252-00057-ip-10-235-23-156.ec2.internal.warc.gz"}
http://human-web.org/California/error-function-integration.html
Address Hemet, CA 92545 (951) 530-9254 http://www.thenetmedic.com # error function integration Mountain Center, California Generalized error functions Graph of generalised error functions En(x): grey curve: E1(x) = (1−e−x)/ π {\displaystyle \scriptstyle {\sqrt {\pi }}} red curve: E2(x) = erf(x) green curve: E3(x) blue curve: E4(x) The imaginary error function has a very similar Maclaurin series, which is: erfi ⁡ ( z ) = 2 π ∑ n = 0 ∞ z 2 n + 1 n Your cache administrator is webmaster. Soc. 3, 282-289, 1928. Some authors discuss the more general functions:[citation needed] E n ( x ) = n ! π ∫ 0 x e − t n d t = n ! π ∑ If L is sufficiently far from the mean, i.e. μ − L ≥ σ ln ⁡ k {\displaystyle \mu -L\geq \sigma {\sqrt {\ln {k}}}} , then: Pr [ X ≤ L Positive integer values of Im(f) are shown with thick blue lines. Generated Tue, 11 Oct 2016 15:17:13 GMT by s_wx1131 (squid/3.5.20) At the real axis, erf(z) approaches unity at z→+∞ and −1 at z→−∞. Privacy policy About Wikipedia Disclaimers Contact Wikipedia Developers Cookie statement Mobile view Algebra Applied Mathematics Calculus and Analysis Discrete Mathematics Foundations of Mathematics Geometry History and Terminology Number Theory Probability and Wolfram Demonstrations Project» Explore thousands of free applications across science, mathematics, engineering, technology, business, art, finance, social sciences, and more. J.; Lozier, Daniel M.; Boisvert, Ronald F.; Clark, Charles W., NIST Handbook of Mathematical Functions, Cambridge University Press, ISBN978-0521192255, MR2723248 External links MathWorld – Erf Authority control NDL: 00562553 Retrieved from Välj språk. asked 3 years ago viewed 1598 times active 3 years ago 41 votes · comment · stats Related 10Integral of product of two error functions (erf)4Integral of two error functions (erf)3Simplify Another form of erfc ⁡ ( x ) {\displaystyle \operatorname ⁡ 2 (x)} for non-negative x {\displaystyle x} is known as Craig's formula:[5] erfc ⁡ ( x | x ≥ 0 H. Please try the request again. Math. N ! ∫ x ∞ t − 2 N e − t 2 d t , {\displaystyle R_ − 7(x):={\frac {(-1)^ − 6}{\sqrt {\pi }}}2^ − 5{\frac {(2N)!} − 4}\int _ Läser in ... Another form of erfc ⁡ ( x ) {\displaystyle \operatorname ⁡ 2 (x)} for non-negative x {\displaystyle x} is known as Craig's formula:[5] erfc ⁡ ( x | x ≥ 0 Intermediate levels of Im(ƒ)=constant are shown with thin green lines. Some authors discuss the more general functions:[citation needed] E n ( x ) = n ! π ∫ 0 x e − t n d t = n ! π ∑ Craig, A new, simple and exact result for calculating the probability of error for two-dimensional signal constellaions, Proc. 1991 IEEE Military Commun. Asymptotic expansion A useful asymptotic expansion of the complementary error function (and therefore also of the error function) for large real x is erfc ⁡ ( x ) = e − However, for −1 < x < 1, there is a unique real number denoted erf − 1 ⁡ ( x ) {\displaystyle \operatorname Γ 0 ^{-1}(x)} satisfying erf ⁡ ( erf At the imaginary axis, it tends to ±i∞. Havil, J. Should I ever use the pronoun "ci"? LCCN64-60036. Google search: Google's search also acts as a calculator and will evaluate "erf(...)" and "erfc(...)" for real arguments. IEEE Transactions on Communications. 59 (11): 2939–2944. The system returned: (22) Invalid argument The remote host or network may be down. IEEE Transactions on Communications. 59 (11): 2939–2944. p.297. The defining integral cannot be evaluated in closed form in terms of elementary functions, but by expanding the integrand e−z2 into its Maclaurin series and integrating term by term, one obtains New Exponential Bounds and Approximations for the Computation of Error Probability in Fading Channels. The imaginary error function has a very similar Maclaurin series, which is: erfi ⁡ ( z ) = 2 π ∑ n = 0 ∞ z 2 n + 1 n Despite the name "imaginary error function", erfi ⁡ ( x ) {\displaystyle \operatorname ⁡ 8 (x)} is real when x is real. See also Related functions Gaussian integral, over the whole real line Gaussian function, derivative Dawson function, renormalized imaginary error function Goodwin–Staton integral In probability Normal distribution Normal cumulative distribution function, a Hardy, G.H. Despite the name "imaginary error function", erfi ⁡ ( x ) {\displaystyle \operatorname ⁡ 8 (x)} is real when x is real. comm., May 9, 2004). Level of Im(ƒ)=0 is shown with a thick green line. MIT OpenCourseWare 203 001 visningar 9:34 erf(x) function - Längd: 9:59. Negative integer values of Im(ƒ) are shown with thick red lines. Schöpf and P. Related functions The error function is essentially identical to the standard normal cumulative distribution function, denoted Φ, also named norm(x) by software languages, as they differ only by scaling and translation. Weisstein. "Bürmann's Theorem" from Wolfram MathWorld—A Wolfram Web Resource./ E. This directly results from the fact that the integrand e − t 2 {\displaystyle e^{-t^ − 2}} is an even function. doi:10.1109/TCOMM.2011.072011.100049. ^ Numerical Recipes in Fortran 77: The Art of Scientific Computing (ISBN 0-521-43064-X), 1992, page 214, Cambridge University Press. ^ DlangScience/libcerf, A package for use with the D Programming language. more stack exchange communities company blog Stack Exchange Inbox Reputation and Badges sign up log in tour help Tour Start here for a quick overview of the site Help Center Detailed All generalised error functions for n>0 look similar on the positive x side of the graph. Cody's algorithm.[20] Maxima provides both erf and erfc for real and complex arguments. D: A D package[16] exists providing efficient and accurate implementations of complex error functions, along with Dawson, Faddeeva, and Voigt functions. The inverse complementary error function is defined as erfc − 1 ⁡ ( 1 − z ) = erf − 1 ⁡ ( z ) . {\displaystyle \operatorname ζ 8 ^{-1}(1-z)=\operatorname Publicerades den 8 nov. 2013This is a special function related to the Gaussian. Computerbasedmath.org» Join the initiative for modernizing math education. Applied Mathematics Series. 55 (Ninth reprint with additional corrections of tenth original printing with corrections (December 1972); first ed.). In order of increasing accuracy, they are: erf ⁡ ( x ) ≈ 1 − 1 ( 1 + a 1 x + a 2 x 2 + a 3 x Weisstein ^ Bergsma, Wicher. "On a new correlation coefficient, its orthogonal decomposition and associated tests of independence" (PDF). ^ Cuyt, Annie A. Could clouds on aircraft wings produce lightning? Press, William H.; Teukolsky, Saul A.; Vetterling, William T.; Flannery, Brian P. (2007), "Section 6.2. IEEE Transactions on Wireless Communications, 4(2), 840–845, doi=10.1109/TWC.2003.814350. ^ Chang, Seok-Ho; Cosman, Pamela C.; Milstein, Laurence B. (November 2011). "Chernoff-Type Bounds for the Gaussian Error Function".
2019-03-22 19:16:35
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8323875665664673, "perplexity": 4759.399358539661}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912202688.89/warc/CC-MAIN-20190322180106-20190322202106-00180.warc.gz"}
https://underground-mining.com/index.php/topics/blasting/7-burden-calculation-procedure
Burden is calculated according to the following expression: $$B=\frac{0.17 \cdot P_{h} \cdot r_{h}}{k\cdot \sigma_{t}}$$ Where: $\sigma_{t}$ - Tensile strength of the monolith rock (MPa) $P_{h}$ - Detonation pressure (GPa) $r_{h}$ - Borehole radius (m) $$k=\frac{(1-\nu )}{(1+\nu )(1-2\nu )}$$ $\nu -$Poisson's ratio According Chapman-Jouguet detonation theory, Chapman [1] and Jouguet [2], pressure on the blasthole walls for explosives with density above 1 g/cm3 can be calculated as: $${{P}_{d}}=\frac{{{\rho }_{e}}\cdot {{D}^{2}}}{8} \label{eqn:1} \tag{1}$$ Where: ρe – density of explosive (g/cm3) D – detonation velocity of explosive (km/s) For explosive with density below 1 g/cm3 pressure on the blasthole walls is calculated by: $${{P}_{d}}=\frac{{{\rho }_{e}}\cdot {{D}^{2}}}{4.5} \label{eqn:2} \tag{2}$$ Equations \ref{eqn:1} and \ref{eqn:2} assume that blasthole if fully filled with explosive, if blasthole radius and radius of explosive charge differ or blasthole is not fully filled with explosive, pressure on the blasthole walls is calculated by: $${{P}_{h}}={{P}_{d}}\cdot {{\left( \frac{{{d}_{e}}}{{{d}_{h}}} \right)}^{3}}$$ Where: Ph – Pressure on the blasthole walls (GPa) Pd – Detonation pressure (GPa) de – Diameter of explosive charge dh – Blasthole diameter Burden calculator provides simple form that is it be used for burden calculation by above explained procedure. References [1] Chapman, D. L. "On the rate of explosion in gases". Philosophical Magazine Series 5 47 (284): 90–104. 1899.  DOI:10.1080/14786449908621243 [2] Jouguet, Emile. “Sur la propagation des réactions chimiques dans les gaz" [On the propagation of chemical reactions in gases], Journal de Mathématiques Pures et Appliquées, series 6 (in French) 347–425 pp. 1905.
2020-10-25 23:20:34
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8231215476989746, "perplexity": 11219.209500787098}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107890028.58/warc/CC-MAIN-20201025212948-20201026002948-00593.warc.gz"}
https://socratic.org/questions/a-regular-hexagon-has-six-sides-of-equal-length-if-a-regular-hexagon-is-made-fro
# A regular hexagon has six sides of equal length. If a regular hexagon is made from a 36-inch-long string, what is the length of each side? Jan 14, 2017 $6$ inches #### Explanation: We have a regular hexagon and so there are six equal-length sides. We use a string to map out that hexagon - the string is 36 inches long. How long is each side? We know each side will use up $\frac{1}{6}$ of the total length of the string and knowing that, we can find the side length in a few different ways. One way is to say: $36 \times \frac{1}{6} = \frac{36}{6} = 6$ Another way is to do a ratio: $\text{length of one side"/"length of entire hexagon"="one side"/"number of sides to a hexagon}$ $\frac{x}{36} = \frac{1}{6}$ and then cross multiply: $6 x = 36$ $x = 6$
2021-12-08 22:49:51
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 7, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.639491856098175, "perplexity": 423.5219765044948}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964363598.57/warc/CC-MAIN-20211208205849-20211208235849-00600.warc.gz"}
https://www.tutorke.com/lesson/334-the-velocity-v-m-s-of-a-moving-body-at-time-t-seconds-is-given-by-v--5t-2%EF%BF%BD%EF%BF%BD%EF%BF%BD-12t-7-calculate-the.aspx
Get premium membership and access revision papers with marking schemes, video lessons and live classes. OR # Differentiation and Its Applications Questions and Answers The velocity V m/s, of a moving body at time t seconds is given by V = 5t^2– 12t +7. Calculate the acceleration when t = 2 seconds. (1m 26s) 1179 Views     SHARE |
2023-01-31 06:21:13
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.19567856192588806, "perplexity": 5685.259210720387}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499845.10/warc/CC-MAIN-20230131055533-20230131085533-00763.warc.gz"}
https://combio.org/www_publications/index.php?key=inp64&bib=combio.bib
On the Efficiency of Spiking Neural P Systems (bibtex) by Chen, Haiming, Ionescu, Mihai and Ishdorj, Tseren-Onolt Abstract: Spiking neural P systems were recently introduced in citespiking and proved to be Turing complete as number computing devices. In this paper we show that these systems are also computationally efficient. Specifically, we present a variant of spiking neural P systems which have, in their initial configuration, an arbitrarily large number of inactive neurons which can be activated (in an exponential number) in polynomial time. Using this model of P systems we can deterministically solve the satisfiability problem (SAT) in constant time. Reference: On the Efficiency of Spiking Neural P Systems (Chen, Haiming, Ionescu, Mihai and Ishdorj, Tseren-Onolt), In Proc. 8th International Conference on Electronics, Information, and Communication (ICEIC2006), Ulaanbaatar, June 2006, 2006. Bibtex Entry: @InProceedings{inp64, author = {Chen, Haiming AND Ionescu, Mihai AND Ishdorj, Tseren-Onolt}, title = {On the Efficiency of Spiking Neural P Systems}, booktitle = {Proc. 8th International Conference on Electronics, Information, and Communication (ICEIC2006), Ulaanbaatar, June 2006}, year = {2006}, pages = {49-52}, abstract = {Spiking neural P systems were recently introduced in cite{spiking} and proved to be Turing complete as number computing devices. In this paper we show that these systems are also computationally efficient. Specifically, we present a variant of spiking neural P systems which have, in their initial configuration, an arbitrarily large number of inactive neurons which can be activated (in an exponential number) in polynomial time. Using this model of P systems we can deterministically solve the satisfiability problem (SAT) in constant time.}, file = {HMT2006a.pdf:pdfs/HMT2006a.pdf:PDF}, }
2019-06-19 11:16:34
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5854500532150269, "perplexity": 2844.120483555989}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627998959.46/warc/CC-MAIN-20190619103826-20190619125826-00322.warc.gz"}
https://rsangole.netlify.app/post/2017/02/01/finite-mixture-modeling-using-flexmix/
# Finite Mixture Modeling using Flexmix This page replicates the codes written by Grun & Leish (2007) in ‘FlexMix: An R package for finite mixture modelling’, University of Wollongong, Australia. My intent here was to learn the flexmix package by replicating the results by the authors. # Model Based Clustering The model based clustering on the whiskey dataset. The whiskey dataset is from the Simmons Study of Media and Markets (Fall 1997), and contains the incidence matrix for scotch brands in households who reported consuming scotch for period of 1 year. The dataset is taken from Edwards and Allenby (2003). Load the necessary packges first: tidyverse and flexmix. library(tidyverse) library(flexmix) ## Quick EDA Quick look at the data itself. The dataframe consists of 2 elements - frequency (numeric vector), and the incidence matrix. There are total of 484 observations. data("whiskey") df <- whiskey set.seed(1802) str(df) ## 'data.frame': 484 obs. of 2 variables: ## $Freq : int 1 1 10 14 10 23 9 8 1 12 ... ##$ Incidence: num [1:484, 1:21] 1 0 0 0 0 0 0 0 0 0 ... ## ..- attr(*, "dimnames")=List of 2 ## .. ..$: chr "1" "2" "3" "4" ... ## .. ..$ : chr "Singleton" "Knockando" "White Horse" "Scoresby Rare" ... The column names of the df$Incidence matrix are the brands of whiskey. colnames(df$Incidence) ## [1] "Singleton" "Knockando" ## [3] "White Horse" "Scoresby Rare" ## [5] "Ushers" "Macallan" ## [7] "Grant's" "Passport" ## [9] "Black & White" "Clan MacGregor" ## [11] "Ballantine" "Pinch (Haig)" ## [13] "Other brands" "Cutty Sark" ## [15] "Glenfiddich" "Glenlivet" ## [17] "J&B" "Dewar's White Label" ## [19] "Johnnie Walker Black Label" "Johnnie Walker Red Label" ## [21] "Chivas Regal" The incidence matrix shows a relationship between two classes of variables - in this case: freqencies of the brand of whiskey in the past year, and the brand of whiskey itself. Quick look at a portion of the matrix: df$Incidence[sample(x = 1:484,size = 10),sample(1:21,3)] ## Ballantine Johnnie Walker Red Label White Horse ## 304 1 1 0 ## 404 0 0 0 ## 465 0 1 0 ## 349 0 0 0 ## 19 0 0 0 ## 331 0 1 0 ## 370 0 1 0 ## 367 0 1 0 ## 82 0 1 0 ## 384 1 1 0 The popularity of the whiskeys can be seen here. Chivas Regal seems to be a favourite, which puts my personal preference in line with a larger population :) c <- colSums(df$Incidence) d1 <- data.frame(Brand=names(c),counts=c,row.names = NULL) d1 <- d1 %>% left_join(whiskey_brands) %>% arrange(-counts) ggplot(d1,aes(reorder(Brand,counts),counts,fill=Type))+geom_bar(stat='identity')+coord_flip()+labs(y='Counts',x='Whiskey Brand') ## Model building The first model in the paper is a stepped Flexmix model, specific for binary variables using the FLXMcvbinary() model. Since the objective is to cluster the model based on the Incidence counts and Frequencies, the formula used is Incidence ~ 1. The frequencies themselves are input as weights in the formula. wh_mix <- stepFlexmix(Incidence ~ 1, weights = ~ Freq, data = df, model = FLXMCmvbinary(truncated = TRUE), control = list(minprior = 0.005), k=1:7, nrep=5) ## 1 : * * * * * ## 2 : * * * * * ## 3 : * * * * * ## 4 : * * * * * ## 5 : * * * * * ## 6 : * * * * * ## 7 : * * * * * summary(wh_mix) ## Length Class Mode ## 1 stepFlexmix S4 A top model can be selecting using BIC or AIC criteria. The BIC criteria selects a model with 5 clusters. plot(BIC(wh_mix),type='b',ylab='BIC') points(x = which.min(BIC(wh_mix)),min(BIC(wh_mix)),col='red',pch=20) wh_best <- getModel(wh_mix,'BIC') print(wh_best) ## ## Call: ## stepFlexmix(Incidence ~ 1, weights = ~Freq, data = df, model = FLXMCmvbinary(truncated = TRUE), ## control = list(minprior = 0.005), k = 5, nrep = 5) ## ## Cluster sizes: ## 1 2 3 4 5 ## 804 941 163 286 24 ## ## convergence after 73 iterations The proportions of the observations in each cluster are shown here: round(prop.table(table(wh_best@cluster)),2) ## ## 1 2 3 4 5 ## 0.19 0.27 0.23 0.26 0.04 The parameter estimates plotted for model with k=5 is shown below graphically. Component 3 (4% of households) contain the largest number of different brands. Component 1 (25% of households) seen to prefer single malt whiskeys. Component 4 (23% of households) are across the board with Brands, but perhaps show lesser of an interest in single malts, just like Component 5 (29% of the households). # wh_best.prior <- prior(wh_best) wh_best.param <- parameters(wh_best) wh_best.param <- data.frame(Brand=stringr::str_replace(rownames(wh_best.param),pattern = 'center.',replacement = ''), wh_best.param,row.names = NULL) wh_best.param <- wh_best.param %>% gather(Components,Value,Comp.1:Comp.5) wh_best.param <- wh_best.param %>% left_join(y = whiskey_brands,by = 'Brand') ggplot(wh_best.param,aes(y=Value,x=Brand,fill=Type))+ geom_bar(stat='identity')+ coord_flip()+ facet_grid(.~Components) # Mixtures of Regressions The next example in the paper is the patent data in Wang et al. (1998). The help file ?patent notes that the data consists of the number of patents, R&D spending and sales in millions of dollar for 70 pharmaceutical and biomedical companies in 1976, taken from the National Bureau of Economic Research R&D Masterfile. ## Quick EDA The dependant variable here is Patents. Independant variable is lgRD which is the log of R&D spending. The objective in this exercise is to try and find how many may clusters exist within this bi-variate dataset. When I started this exercise, it seemed quite moot to me, since visually, I couldn’t really tell any distict clusters. But, the results show otherwise. data("patent") df_patent <- tbl_df(patent) df_patent ## # A tibble: 70 x 4 ## Company Patents RDS lgRD ## * <chr> <int> <dbl> <dbl> ## 1 ABBOTT LABORATORIES 42 0.0549 4.0869 ## 2 AFFILIATED HOSPITAL PRDS 1 0.0032 -2.0794 ## 3 ALBERTO-CULVER CO 3 0.0078 0.1187 ## 4 ALCON LABORATORIES 2 0.0803 1.8796 ## 5 ALLERGAN PHARMACEUTICALS INC 3 0.0686 1.1033 ## 6 ALZA CORP-CL A 40 3.3319 2.0794 ## 7 AMERICAN HOME PRODUCTS CORP 60 0.0243 4.0953 ## 8 AMERICAN HOSPITAL SUPPLY 30 0.0128 2.8333 ## 9 AMERICAN STERILIZER CO 7 0.0252 1.3915 ## 10 AVON PRODUCTS 3 0.0094 2.6048 ## # ... with 60 more rows plot(Patents~lgRD,df_patent) ## Model Building The paper mentions that Wang et al. (1998) chose a finite mixture of three Poisson regression models to represent the data. The FLXMRglm() is used for the Poisson model with a concomitant variable modeled using FLXPmultinom(). pat_mix <- flexmix(Patents ~ lgRD, k = 3, data = df_patent, model = FLXMRglm(family = "poisson"), concomitant = FLXPmultinom(~RDS)) pat_mix ## ## Call: ## flexmix(formula = Patents ~ lgRD, data = df_patent, k = 3, ## model = FLXMRglm(family = "poisson"), concomitant = FLXPmultinom(~RDS)) ## ## Cluster sizes: ## 1 2 3 ## 37 25 8 ## ## convergence after 21 iterations The clusters obtained from the analysis are given by a cluster() function. clusters(pat_mix) ## [1] 2 1 1 2 1 3 2 1 1 2 1 1 1 1 2 2 2 1 1 2 2 2 3 1 1 1 1 1 1 2 1 1 1 2 3 ## [36] 1 2 2 3 2 1 1 2 1 2 1 1 1 3 1 1 2 1 2 1 1 1 2 2 2 1 2 1 1 3 3 1 2 3 2 ## Results The data is replotted but with colors for the clusters and additional splines. As we can see, the model beautifully models three lines through three clusters in the data. Components <- factor(clusters(pat_mix)) xyplot(Patents~lgRD,groups = Components,df_patent,type=c('p','spline')) ## Further investigation The flexmix package has a function to plot rootograms of the posterior probabilities of observations. Observations where the a-posteriori probability is large for component #1 and #3 are indicated. As we can see where component #1 has highest probabilities indicated in the 1st bucket, they are lowest in #2 and #3 buckets. plot(pat_mix,mark=1) plot(pat_mix,mark=3) A summary of the mixture model results show the estimated priors, number of observations within each cluster (size), number of observations with p>10^-4 (post>0), and a ratio of the two. The rations of 0.58, 0.42 and 0.18 indicate big overlaps of the clusters. This can also be observed by the large portion of values in the mid-section of the rootogram above. summary(pat_mix) ## ## Call: ## flexmix(formula = Patents ~ lgRD, data = df_patent, k = 3, ## model = FLXMRglm(family = "poisson"), concomitant = FLXPmultinom(~RDS)) ## ## prior size post>0 ratio ## Comp.1 0.497 37 63 0.587 ## Comp.2 0.380 25 60 0.417 ## Comp.3 0.124 8 45 0.178 ## ## 'log Lik.' -202.7633 (df=10) ## AIC: 425.5265 BIC: 448.0115 Tests of significance of the coefficients are obtained by the refit(). In each cluster the intercept and lgRD are both statistically significant at the 0.05 level. The black bars in the plot are 95% CI over the point estimates. rm <- refit(pat_mix) summary(rm) ## $Comp.1 ## Estimate Std. Error z value Pr(>|z|) ## (Intercept) 0.64189 0.28348 2.2643 0.02355 * ## lgRD 0.92618 0.06562 14.1143 < 2e-16 *** ## --- ## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1 ## ##$Comp.2 ## Estimate Std. Error z value Pr(>|z|) ## (Intercept) -1.32675 0.52622 -2.5213 0.01169 * ## lgRD 1.29118 0.12463 10.3601 < 2e-16 *** ## --- ## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1 ## ## \$Comp.3 ## Estimate Std. Error z value Pr(>|z|) ## (Intercept) 1.95769 0.41935 4.6684 3.036e-06 *** ## lgRD 0.70556 0.12009 5.8754 4.217e-09 *** ## --- ## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1 plot(rm,bycluster=F) The authors note that “estimates vary between all components even though the co- efficients for lgRD are similar for the first and third component”. # Notes • For both the models, although I’m using the exact datasets and seed values, I observe different values for proportions in each cluster (for model #1) as well as P values for significance etc (for model #2). This could be due to some changes in the underlying flexmix codes since 2007.
2021-04-18 21:54:09
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4265247583389282, "perplexity": 5906.4131727779395}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038860318.63/warc/CC-MAIN-20210418194009-20210418224009-00297.warc.gz"}
https://qanda.ai/en/solver/popular-problems/1015580
Calculator search results Formula $$x ^ { 2 } - 2 x + 3 = 0$$ Do not have the solution $x = \dfrac { \color{#FF6800}{ - } \left ( \color{#FF6800}{ - } 2 \right ) \pm \sqrt{ \left ( - 2 \right ) ^ { 2 } - 4 \times 1 \times 3 } } { 2 \times 1 }$ Simplify Minus $x = \dfrac { 2 \pm \sqrt{ \left ( - 2 \right ) ^ { 2 } - 4 \times 1 \times 3 } } { 2 \times 1 }$ $x = \dfrac { 2 \pm \sqrt{ \left ( \color{#FF6800}{ - } \color{#FF6800}{ 2 } \right ) ^ { \color{#FF6800}{ 2 } } - 4 \times 1 \times 3 } } { 2 \times 1 }$ Remove negative signs because negative numbers raised to even powers are positive $x = \dfrac { 2 \pm \sqrt{ 2 ^ { 2 } - 4 \times 1 \times 3 } } { 2 \times 1 }$ $x = \dfrac { 2 \pm \sqrt{ \color{#FF6800}{ 2 } ^ { \color{#FF6800}{ 2 } } - 4 \times 1 \times 3 } } { 2 \times 1 }$ Calculate power $x = \dfrac { 2 \pm \sqrt{ \color{#FF6800}{ 4 } - 4 \times 1 \times 3 } } { 2 \times 1 }$ $x = \dfrac { 2 \pm \sqrt{ 4 - 4 \color{#FF6800}{ \times } \color{#FF6800}{ 1 } \times 3 } } { 2 \times 1 }$ Multiplying any number by 1 does not change the value $x = \dfrac { 2 \pm \sqrt{ 4 - 4 \times 3 } } { 2 \times 1 }$ $x = \dfrac { 2 \pm \sqrt{ 4 \color{#FF6800}{ - } \color{#FF6800}{ 4 } \color{#FF6800}{ \times } \color{#FF6800}{ 3 } } } { 2 \times 1 }$ Multiply $- 4$ and $3$ $x = \dfrac { 2 \pm \sqrt{ 4 \color{#FF6800}{ - } \color{#FF6800}{ 12 } } } { 2 \times 1 }$ $x = \dfrac { 2 \pm \sqrt{ \color{#FF6800}{ 4 } \color{#FF6800}{ - } \color{#FF6800}{ 12 } } } { 2 \times 1 }$ Subtract $12$ from $4$ $x = \dfrac { 2 \pm \sqrt{ \color{#FF6800}{ - } \color{#FF6800}{ 8 } } } { 2 \times 1 }$ $x = \dfrac { 2 \pm \sqrt{ - 8 } } { 2 \color{#FF6800}{ \times } \color{#FF6800}{ 1 } }$ Multiplying any number by 1 does not change the value $x = \dfrac { 2 \pm \sqrt{ - 8 } } { \color{#FF6800}{ 2 } }$ $\color{#FF6800}{ x } = \color{#FF6800}{ \dfrac { 2 \pm \sqrt{ - 8 } } { 2 } }$ The square root of a negative number does not exist within the set of real numbers Do not have the solution Try more features at Qanda! Search by problem image Ask 1:1 question to TOP class teachers AI recommend problems and video lecture
2022-01-20 13:55:20
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6064123511314392, "perplexity": 556.7488666312136}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320301863.7/warc/CC-MAIN-20220120130236-20220120160236-00669.warc.gz"}
https://msp.org/apde/2016/9-7/p03.xhtml
#### Vol. 9, No. 7, 2016 Recent Issues The Journal About the Journal Editorial Board Editors’ Interests Subscriptions Submission Guidelines Submission Form Policies for Authors Ethics Statement ISSN: 1948-206X (e-only) ISSN: 2157-5045 (print) Author Index To Appear Other MSP Journals An analytical and numerical study of steady patches in the disc ### Francisco de la Hoz, Zineb Hassainia, Taoufik Hmidi and Joan Mateu Vol. 9 (2016), No. 7, 1609–1670 DOI: 10.2140/apde.2016.9.1609 ##### Abstract We prove the existence of $m$-fold rotating patches for the Euler equations in the disc, for the simply connected and doubly connected cases. Compared to the planar case, the rigid boundary introduces rich dynamics for the lowest symmetries $m=1$ and $m=2$. We also discuss some numerical experiments highlighting the interaction between the boundary of the patch and the rigid one. ##### Keywords Euler equations, $V$-states, bifurcation ##### Mathematical Subject Classification 2010 Primary: 35Q35, 37G40, 35Q31 Secondary: 76B47
2021-11-28 03:00:12
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 3, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2397046834230423, "perplexity": 5332.86715619166}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964358443.87/warc/CC-MAIN-20211128013650-20211128043650-00112.warc.gz"}
https://www.enotes.com/homework-help/int-2x-3cos-x-2-dx-find-indefinite-integral-by-776362
# `int 2x^3cos(x^2) dx` Find the indefinite integral by using substitution followed by integration by parts. Recall that indefinite integral follows `int f(x) dx = F(x) +C` where: `f(x)` as the integrand function `F(x) ` as the antiderivative of `f(x)` `C` as the constant of integration. For the given  integral problem: `int 2x^3 cos(x^2) dx` , we may apply apply u-substitution by letting:  `u = x^2` then `du =2x dx` . Note that `x^3 =x^2 *x `  then `2x^3 dx = 2*x^2 *x dx` or `x^2 * 2x dx` The integral becomes: `int 2x^3 cos(x^2) dx =int x^2 *cos(x^2) *2x dx` `= int u cos(u) du` Apply formula of integration by parts: `int f*g'=f*g - int g*f'` . Let: `f =u` then `f' =du` `g' =cos(u) du` then `g=sin(u)` Note: From the table of integrals, we have `int cos(x) dx =sin(x) +C` . `int u *cos(u) du = u*sin(u) -int sin(u) du` `= usin(u) -(-cos(u)) +C` `= usin(u) + cos(u)+C` Plug-in `u = x^2` on  `usin(u) + cos(u)+C` , we get the complete indefinite integral as: `int 2x^3 cos(x^2) dx =x^2sin(x^2) +cos(x^2) +C` Approved by eNotes Editorial Team
2022-11-28 07:29:40
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9712554812431335, "perplexity": 4457.208496672923}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710488.2/warc/CC-MAIN-20221128070816-20221128100816-00673.warc.gz"}
https://www.gamedev.net/forums/topic/270144-duplicating-controls-in-vbnet/
# [.net] Duplicating controls in VB.NET This topic is 5089 days old which is more than the 365 day threshold we allow for new replies. Please post a new topic. ## Recommended Posts I'm working with VB.NET to create a simple Z80 assembly tool. What it has is a series of tab pages, each one with a text box on it. What I want to be able to do is have a Sub called "addTab" where I can pass 2 parameters - the filename, which appears on the tab button and the text for the source file. What it would then do is add a new tab (I can do that) and a new text box onto that tab with the source file text in it, so you can switch between the source files easily. To create a new textbox, I've tried Dim x as New TextBox() x = txtEditBox ...where txtEditBox is the existing text box which has all the properties set up. However, if I now add 'x' to a tab page, it'll actually move the txtEditBox to the tab page - I'm guessing because when I go x = txtEditBox it's copying a handle to the existing text box, rather than all its properties. Any hints, or ideas? ##### Share on other sites Once you do: Dim x as New TextBox() you have a new textbox. Just add it to the tab page's control collection. Dim tp As New TabPage()Dim txt As New TextBox()txt.Text = "Textbox " & TabControl1.Controls.Count + 1tp.Controls.Add(txt)tp.Text = "Page " & TabControl1.Controls.Count + 1TabControl1.TabPages.Add(tp) ##### Share on other sites Unfortunately that just creates it with the default properties - ideally I'd like to create it based on the properties of an existing one. In VB6 you could just do Dim x as New txtSample, where txtSample was the text box with the properties you wanted to duplicate. ##### Share on other sites Unfortunately, it doesn't look like the Control class implements the ICloneable interface, which I would have expected it to. You'll have to copy the relevant properties yourself. 1. 1 2. 2 3. 3 4. 4 JoeJ 12 5. 5 Rutin 11 • 12 • 16 • 13 • 20 • 12 • ### Forum Statistics • Total Topics 632178 • Total Posts 3004598 ×
2018-08-20 04:54:00
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.17794808745384216, "perplexity": 2664.814108578058}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-34/segments/1534221215843.55/warc/CC-MAIN-20180820042723-20180820062723-00130.warc.gz"}
https://math.stackexchange.com/questions/3728261/example-of-a-noncommutative-nonunital-ring-with-this-property-about-its-ideals
# Example of a noncommutative, nonunital ring with this property about its ideals? Is there any noncommutative ring without $$1$$ that has the following property? Every right sided ideal is two sided too, but there exists a left sided ideal that is not two sided. Take any right-not-left duo ring and take its product with a ring with a zero multiplication ring with $$2$$ elements. The result is still right-not-left duo, but the zero ring ensures it does not have identity.
2021-04-21 23:31:34
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 2, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5560746788978577, "perplexity": 259.7598261888444}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618039554437.90/warc/CC-MAIN-20210421222632-20210422012632-00189.warc.gz"}
https://newtraell.cs.uchicago.edu/research/publications/techreports/TR-98-11
# TR-98-11 ## On the Quantum Complexity of Majority Hayes, Thomas; Kutin, Samuel; Van Melkebeek, Dieter. 11 December, 1998. ### Abstract We describe a quantum black-box algorithm computing the majority of $N$ bits with zero-sided error using only $2N/3 + O(\sqrt{N\log N})$ queries: the algorithm returns the correct answer with overwhelming probability, and I don't know'' otherwise. Our algorithm is given as a randomized XOR decision tree'' for which the expected number of queries on the worst-case input is no more than $2N/3 + O(\log N)$. We provide a nearly matching lower bound of $2N/3 - O(\sqrt N)$ in the randomized XOR decision tree model. ### Original Document The original document is available in Postscript (uploaded 8 June, 2001 by Dustin Mitchell).
2021-10-25 20:35:55
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.43593335151672363, "perplexity": 2281.4055700162817}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323587767.18/warc/CC-MAIN-20211025185311-20211025215311-00025.warc.gz"}
http://nylogic.org/talks/an-alternate-proof-of-the-halpern-lauchli-theorem-in-one-dimension
# An alternate proof of the Halpern-Läuchli Theorem in one dimension Set theory seminarFriday, September 12, 201410:00 amGC 5382 # An alternate proof of the Halpern-Läuchli Theorem in one dimension ### The CUNY Graduate Center I will present a new proof of the strong subtree version of the Halpern-Läuchli Theorem, using an ultrafilter on $\omega$. The one dimensional Halpern-Läuchli Theorem states that for every finite partition of an infinite, finitely branching tree $T$, there is one piece $P$ of the partition and a strong subtree $S$ of $T$ such that $S \subseteq P$. This will cover the one dimensional case, with hopes that the proof can be extended to cover a product of trees. Erin Carmody is a visiting assistant professor at Nebraska Wesleyan University. Her research is in the field of set theory. She received her doctorate in 2015 under the supervision of Joel David Hamkins. Posted by on August 23rd, 2014
2021-07-25 18:50:59
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.49054810404777527, "perplexity": 626.8793502418825}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046151760.94/warc/CC-MAIN-20210725174608-20210725204608-00497.warc.gz"}
https://www.geogebra.org/m/bT5BrFqb
# Maximizing Trapezoid Area [color=#000000]Suppose both legs and one base of the isosceles trapezoid below each have length 1. [br](Drag black slider.) [br][br]If this is so, determine the value of [/color]$\theta$[color=#000000] that maximizes the area of this isosceles trapezoid.[br]Be sure to solve this problem first! How do your results compare with what this applet suggests?[/color]
2017-11-22 14:45:06
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6902802586555481, "perplexity": 2573.6305124405876}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934806609.33/warc/CC-MAIN-20171122141600-20171122161600-00502.warc.gz"}
https://www.albert.io/ie/linear-algebra/inner-product-space-orthonormal-distance
Free Version Difficult # Inner Product Space: Orthonormal Distance LINALG-HOEYJJ Let $V$ be an inner product space and let $\{\boldsymbol{u},\ \boldsymbol{v}\}$ be an orthonormal set in $V$. Compute the distance between $\boldsymbol{u}$ and $\boldsymbol{v}$. A $0$ B $1$ C $\sqrt{2}$ D $\sqrt{3}$ E $2$ F $\sqrt{5}$ G $\sqrt{6}$ H Cannot be determined with the provided information.
2017-01-17 02:48:51
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8792567253112793, "perplexity": 262.2204843269876}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279410.32/warc/CC-MAIN-20170116095119-00529-ip-10-171-10-70.ec2.internal.warc.gz"}
https://www.themathcitadel.com/articles/mod-add-groups-integers.html
## Like Clockwork: Modulo Addition And Finite Groups Of Integers ### 1. Introduction Many books on abstract algebra don’t slow down enough to really examine the examples they give. Algebra is a powerful branch of mathematics, whose influence is found all throughout mathematics, engineering, experimental design, and computer science. Most of us do modulo arithmetic more often than we think, especially if we look at an analog clock to tell time. For example, if I asked you to convert 14:00 from military time into “regular” time, you’d respond “2 pm” pretty quickly. But how did you do it? You got that answer because you went all the way around the clock and then had 2 remaining. This is modulo addition — in this case $\bmod 12$. Modulo addition is regular addition with an extra couple steps: dividing by your “base” $n$ (on a clock $n = 12$) and then grabbing the remainder. For integers $a$,$b$, and $n$, $a \equiv b \bmod n \Leftrightarrow b-a= k\cdot n \text{ for an integer } k$ This is read as "$a$ is equivalent to $b$ modulo $n$ if $b-a$ is an integer multiple of $n$". Another way to write this is $a \equiv b \bmod n \Leftrightarrow b = k\cdot n + a \text{ for an integer } k$ In other words, if $b$ can be expressed as a multiple of $n$ plus some extra amount $a$, then $a$ is equivalent to $b$ modulo $n$. Back to our clock example, obviously $2 \neq 14$, but $2\equiv 14 \bmod 12$ because $14 = 1\cdot 12 + 2$ Here, our $b$, which is 14, can be expressed as an integer multiple of $12$ ($k=1$), plus some extra $a=2$, so we say that $2\equiv 14 \bmod 12$. The base $n$ matters. If I changed the $n$, equivalence may no longer hold. For instance, $2$ is not equivalent to $14 \bmod 5$ because the remainder when you divide 14 by 5 is 4. The study of these remainders gives us mathematical objects known as finite groups. ### The Finite Groups $\mathbb{Z}_{n}$ If we look at dividing any number by 12, the remainder can only be the numbers 0 – 11. Why? Well if the remainder is 12, then it’s a multiple of 12. The remainder of any integer, no matter how big it is, when dividing by $n$ has to be somewhere in the set $\{0,1,2,\ldots, n-1\}{0,1,2,…,n−1}$. Let’s look at a few with $n=12$. As an exercise, verify each of these on your own: \begin{align*}1\equiv 13 \bmod 12 &\text{ because } 13 = 1\cdot 12 + 1 \\ 0\equiv 24 \bmod 12 &\text{ because } 24 = 2\cdot 12 + 0\\ 11\equiv 35 \bmod 12 &\text{ because } 35 = 2\cdot 12 + 11\\ 0\equiv 48 \bmod 12 &\text{ because } 48 = 4\cdot 12 + 0 \\ 3\equiv 123 \bmod 12 &\text{ because } 123 = 10\cdot 12 + 3 \end{align*} The multiple $k$ is telling us how many times we’ve gone around the clock. So we’re always stuck on the clock, no matter how many times we go around, which means that the remainder can’t be any larger than 11, or we’ve started another cycle around the clock. The set $\{0,1,2,\ldots, n-1\}$forms an algebraic structure called a group when we pair the set with the operation modulo addition. #### What is a group? A group is a set paired with an operation that has to have three certain properties. The operation doesn’t always have to be addition, or modulo addition, and the set doesn’t have to be integers. Operations can be addition, multiplication, function composition, or some weird one you define. But to pair an operation with a set, we have to check to make sure some rules are followed: The operation has to fit the formal definition; we cannot use any old rule we want. An operation is defined mathematically as a rule that takes any two elements in a given set $A$ and returns a unique element that is still in $A$. This means that you can’t create a rule that lets you escape the set. If my set is integers, and my rule is division, then division is not an operation on the integers because the result of $3/4$ isn’t an integer. By trying to divide 3 by 4, we left the set of integers. The uniqueness requirement ensures our rule is well-defined. If my rule is to take two integers $a$ and $b$ and return the number whose square is the product $ab$, then $2\cdot 8 = 16$ could return either 4 or -4. That’s not well-defined. The rule doesn’t guarantee that a unique element is returned. Assuming our operation really is an operation (and modulo addition on the integers is. If you don’t believe me, try to prove it for yourself.), then we can move on. We have the set (integers modulo $n$) and the operation (modulo addition). To meet the definition of a group, the set and operation must follow these three axioms: 1. The operation is associative, meaning that the grouping or order shouldn’t matter when I go to execute the operation on 3 elements. $[(a+b) + c] \bmod n \equiv [a + (b+c)]\bmod n.$ 2. There is an identity element in my set. There has to be some element in my set such that performing the operation (in this case, modulo $n$) on any other element in the set return that other element, and the order doesn’t matter. For our set of integers under modulo addition, the identity element is $0$. 3. Every element in the set has to have an inverse. For every element in my group, there has to be another element such that performing the operation on the element and its proposed inverse in any order returns the identity element we talked ../about in (2). We have a special way of writing the set of the integers modulo $n$: $\mathbb{Z}_{n} = \{0,1,\ldots,n-1\}$. We’ll do a specific example and verify that $\mathbb{Z}_{3}$ is a group. #### $\mathbb{Z}_{3}$ We can examine groups via operation tables, which are a nice visual. Think of your multiplication tables from primary school. The elements are placed on the rows and columns, and the entries are the result of applying the operation to the two elements. The row entry goes first, then the column entry. For $\mathbb{Z}_{3}$, we know the elements are $\{0,1,2\}$, so the operation table looks like this:
2022-12-06 00:05:40
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8184258341789246, "perplexity": 262.39483832313636}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711064.71/warc/CC-MAIN-20221205232822-20221206022822-00814.warc.gz"}
https://www.physicsoverflow.org/17803/how-can-you-actually-measure-decay-constants
# How can you actually measure decay constants? + 3 like - 0 dislike 246 views I'm trying to understand how people actually measure decay constants that are discussed in meson decays. As a concrete example lets consider the pion decay constant. The amplitude for $\pi ^-$ decay is given by, \big\langle 0 | T  \exp \left[  i \int \,d^4x {\cal H} \right] | \pi ^- (  p _\pi )  \big\rangle To lowest order this is given by, i \int \,d^4x  \left\langle 0 | T W _\mu       J ^\mu  | \pi ^- (  p _\pi )  \right\rangle If we square this quantity and integrate over phase space then we will get the decay rate. On the other hand, the pion decay constant is defined through, \left\langle 0 | J ^\mu | \pi ^- \right\rangle = - i f _\pi p _\pi ^\mu This is clearly related to the above, but it seems to me there are a couple of subtleties. 1. How do we get rid of the time-ordering symbol? 2. Since we don't have a value for $W _\mu$ how can we go ahead and extract $f _\pi$ ?
2019-07-23 20:00:53
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9965496063232422, "perplexity": 1684.0229469688168}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195529664.96/warc/CC-MAIN-20190723193455-20190723215455-00388.warc.gz"}
https://www.or-exchange.org/questions/6659/multiobjective-optimization-of-piecewise-spline-parameters
# Multiobjective optimization of piecewise spline parameters 2 I've only just discovered the subject of multiobjective optimization while I'm trying to solve the problem shown/described below. I'd appreciate advise on possible methods of solution (I'm still teaching myself the subject). Given two piecewise continuous cubic curves $$C_1$$ and $$C_2$$ that interpolate three consecutive points $$P_i$$, $$P_{i+1}$$, and $$P_{i+2}$$, I'm looking for the smallest possible values these points, henceforth $$Q_i$$, $$Q_{i+1}$$, and $$Q_{i+2}$$ (each $$Q_j$$ is the new value of $$P_j$$ where $$j = i, i+1, i+2$$ ) such that the gradients of $$C_1$$ and $$C_2$$do not exceed a maximum value $$\Delta_{max}$$. Otherwise stated: Minimize $$P_j - Q_j$$ such that $$\alpha_j \le Q_j \le P_j$$ and the gradient of each curve $$C_j \le \Delta_{max}$$. Note: in reality the problem that I'm trying to solve involves more than two piecewise curves. asked 30 Sep '12, 19:51 Olumide 41●1●4 accept rate: 0% I'm struggling to understand the question. Will the C' curves interpolate the Q points rather than the P points? What are the alphas? How do you determine the curves? (An infinite number of cubics interpolate any three points.) (03 Oct '12, 19:19) Paul Rubin ♦♦ The Q points are the new positions of the P points, as such the curves C interpolates the P points pre-optimization as well as the Q points post-optimization. Each curve C is unquely determined by the quartet of T and P points. There is of course an infinite variation of these parameters that satisfy the maximum derivative condition but I'm given to understand that this is the nature of multiobjective optimization. The alphas are lower limits of the points P. (03 Oct '12, 19:36) Olumide Be the first one to answer this question! toggle preview community wiki By Email: Markdown Basics • *italic* or _italic_ • **bold** or __bold__ • image?![alt text](/path/img.jpg "Title") • numbered list: 1. Foo 2. Bar • to add a line break simply add two spaces to where you would like the new line to be. • basic HTML tags are also supported Tags:
2020-07-04 03:08:50
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7485048174858093, "perplexity": 1170.7580165013069}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655883961.50/warc/CC-MAIN-20200704011041-20200704041041-00222.warc.gz"}
http://hal.in2p3.fr/in2p3-00616707
# Measurement of the W to tau nu Cross Section in pp Collisions at sqrt(s) = 7 TeV with the ATLAS experiment 5 Laboratoire de Physique Corpusculaire LPC - Laboratoire de Physique Corpusculaire - Clermont-Ferrand 8 APC - Neutrinos LPNHE - Laboratoire de Physique Nucléaire et de Hautes Énergies, APC (UMR_7164) - AstroParticule et Cosmologie Abstract : The cross section for the production of W bosons with subsequent decay W to tau nu is measured with the ATLAS detector at the LHC. The analysis is based on a data sample that was recorded in 2010 at a proton-proton center-of-mass energy of sqrt(s) = 7 TeV and corresponds to an integrated luminosity of 34 pb^-1. The cross section is measured in a region of high detector acceptance and then extrapolated to the full phase space. The product of the total W production cross section and the W to tau nu branching ratio is measured to be 11.1 +/- 0.3 (stat) +/- 1.7 (syst) +/- 0.4 (lumi) nb. Document type : Journal articles http://hal.in2p3.fr/in2p3-00616707 Contributor : Emmanuelle Vernay <> Submitted on : Wednesday, August 24, 2011 - 8:16:28 AM Last modification on : Tuesday, November 24, 2020 - 5:42:05 PM ### Citation G. Aad, S. Albrand, M.L. Andrieux, B. Clement, J. Collot, et al.. Measurement of the W to tau nu Cross Section in pp Collisions at sqrt(s) = 7 TeV with the ATLAS experiment. Physics Letters B, Elsevier, 2012, 706, pp.276-294. ⟨10.1016/j.physletb.2011.11.057⟩. ⟨in2p3-00616707⟩ Record views
2020-11-25 09:15:30
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8077315092086792, "perplexity": 3289.80772683637}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141181482.18/warc/CC-MAIN-20201125071137-20201125101137-00057.warc.gz"}
http://www.docstoc.com/docs/36603736/Southern-Illinois-University-Carbondale-CS202
# Southern Illinois University Carbondale CS202 Document Sample Southern Illinois University Carbondale CS202: Introduction to Computer Science Fall 2005 Section 6 MIDTERM EXAMINATION 2 Instructor: Kenny Fong November 1, 2005 Name: __________________________________________ ID: _____________________________________________ Signature: _______________________________________ DIRECTIONS: 1. Write your name and student ID above. 2. The maximum score of this exam is 100. 3. Attempt all questions. 4. Complete all answers in the spaces provided. 5. No aids are permitted. All notes and books have to be placed on the floor. 6. Blank sheets are available for your scratch work. 7. You are not required to document any code you write. 8. You are not allowed to leave the exam room during the first hour of the exam. 9. The duration of the exam is 120 minutes. 10. Cheating is an academic offense. Your signature above indicates that you understand and agree to the University’s policies regarding cheating on exams. Module Marks Maximum 1 33 2 26 3 17 4 24 Total 100 MODULE 1: ITERATION STATEMENTS (33 points) 1. (16) What output is produced by each of the following code fragments? If there is any infinite loop, just write down “infinite loop”. a) int num = 3; while (num <= 7) { num++; System.out.print(num + ″ ″); } b) int num = 7; do { num--; System.out.print(num + ″ ″); } while (num <= 3); c) int product = 1; for (int i = -5; i <= 3; i += 2) { product *= i; System.out.print(product + ″ ″); } d) int product = 1; for (int i = -5; i <= 3; i -= 2) { product *= i; System.out.print(product + ″ ″); } 2 e) int product = 1; for (int i = -5; i <= 3; i -= 2) { product *= i; System.out.print(product + ″ ″); if (product > 0) break; } f) int product = 1; for (int i = -5; i <= 3; i -= 2) { product *= i; System.out.print(product + ″ ″); if (product > 0) continue; } g) int num = 0; for (int i = 0; i < 5; i++) { for (int j = 0; j < 3; j++) { ++num; } System.out.print(num + ″ ″); } h) int sum = 0; int j = 0; do { j++; for (int i = 5; i > j; i--) sum = sum + (i+j); } while (j < 5); System.out.println(sum); 3 2. (17) Write a complete Java program that opens and reads an input file with pathname C:\in.txt, and prints the number of English letters on each line of the input file in an output file with pathname C:\out.txt. The nth line of the output file should contain only the number of English letters on the nth line of the input file. You may assume that the input file contains no blank lines. A sample input file and the corresponding output file are shown below: Input file C:\in.txt Output file C:\out.txt Kenny is GREAT!!! 12 SIUC 150 SIUC 8 z 1 PRESIDENT George W. Bush 20 11/1/2005 0 4 5 MODULE 2: CLASSES AND METHODS (26 points) 1. (2) Give one difference between a local variable and an instance variable. 2. (2) When an object is instantiated, which method of the object is the first one invoked? What is the return type of this method? 3. (2) If a method is overloaded, how must two definitions of this method (in the same class) differ from each other? 6 4. (2) In order to enforce encapsulation, should we make an instance variable public or private? Why? 5. (2) What is a mutator method? What is its return type? 6. (2) Write a single statement that declares and creates an integer wrapper object storing the integer value 34. 7 7. (10) Consider the following class definition: public class Duple { private static int t = 0; private int x; private int y; public Duple() { x = 0; y = 0; } public Duple(int initX, int initY) { x = initX; y = initY; } public void negate() { x = -x; y = -y; } { x += duple2.x; y += duple2.y; } public boolean equals(Duple duple2) { return (x == duple2.x && y == duple2.y); } public static int getT() { return t; } public String toString() { t += 1; return ″(″ + x + ″,″ + y + ″)″; } } Consider the following declarations: Duple a = new Duple(); Duple b = new Duple(2, -3); Duple c = new Duple(5, 7); Duple d = new Duple(-2, 3); 8 a) What output is produced by the following statements? System.out.println(a); System.out.println(b); System.out.println(c); System.out.println(Duple.getT()); b) What output is produced by the following statements? System.out.println(c); System.out.println(d); c) Write a single statement that would cause b.equals(d) to return true. d) If we make the getT() method non-static, can the Duple class still compile? Why? 9 8. (4) Consider the following class definition that uses the Duple class defined in the previous problem: public class DupleWrapper { private Duple duple; public DupleWrapper(int x, int y) { init(x, y); } { } private void init(int x, int y) { duple = new Duple(x, y); } } Consider the following declarations: DupleWrapper dw = new DupleWrapper(3, 7); Duple d2 = new Duple(5, 4); a) What output is produced by the following statements? System.out.println(d2); b) Can the following statement compile? Why? dw.init(5, 0); 10 MODULE 3: APPLETS (17 points) Write a complete Java applet that draws exactly the following. The size of the applet is 300 × 300 pixels. height / 4 height / 4 width / 4 width / 4 11 12 MODULE 4: PROGRAMMING (24 points) Design and implement a class named Die that is used to represent a die used in games. The class must have five instance variables: • an int variable for the number of faces, • an int variable for the current face value, • an int variable for the number of times the die has been rolled, • an int variable for the number of rolls in which the face value is an odd number, • a Random variable for a random number generator object that simulates the rolling. The class should have two constructors: • The default constructor constructs a die with 6 faces. • The second constructor takes one parameter, which specifies the number of faces of the die to be constructed. However, if the parameter value is less than 4, the number of faces will be set to 4. Besides the number of faces, both constructors should also initialize the other instance variables appropriately. The face value of a die should be initialized to 1 (but this is not counted as the first roll). The class should have an accessor method for the current face value. The class should provide a method roll() that rolls the die using the random number generator and returns the new face value. The class should provide a toString() method that returns a string representation of the current status of the die in the format shown in the sample dialog below: Number of faces: 8 Current face value: 5 Number of rolls: 34 Number of odd-face rolls: 21 Your code should be as clean and simple as possible, and encapsulation should be strictly enforced. 13 14 15 DOCUMENT INFO Shared By: Categories: Stats: views: 18 posted: 5/1/2010 language: English pages: 15 How are you planning on using Docstoc?
2013-12-07 15:33:56
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.17402894794940948, "perplexity": 4899.908243922496}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-48/segments/1386163054867/warc/CC-MAIN-20131204131734-00050-ip-10-33-133-15.ec2.internal.warc.gz"}
http://www.phy.ntnu.edu.tw/ntnujava/msg.php?id=5226
Refraction obey Snell's Law $n_1\sin\theta_1=n_2\sin\theta_2$ [/quote] what i mean is the equation "x^2/n+y^2/n=1" i dun see the snell's law in it?
2019-03-22 20:50:04
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 1, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8998851180076599, "perplexity": 3175.354898427602}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912202689.76/warc/CC-MAIN-20190322200215-20190322222215-00471.warc.gz"}
https://physicscatalyst.com/chemistry/ideal-and-non-ideal-solutions.php
# Ideal and Non- Ideal Solutions ## Ideal and Non- Ideal Solutions Liquid-liquid solutions can be classified into ideal and non-ideal solutions on the basis of Raoult’s law ### Ideal solution An ideal solution is a solution which always obeys Raoult’s law under all conditions of temperature and pressure. Characteristics of ideal solution 1.Should obey Raoult’s law i. e. $P_A = P_A^O \chi _A$ & $P_B = P_B^O \chi _B$ 2. $\Delta H_{mix}=0$ 3. $\Delta V_{mix}=0$ Characteristic Curve: When a graph is plotted between V. P. & M. F. then a straight line is obtained for ideal solution Condition: Ideal solution is obtained when the intermolecular force existing between the particles of A, existing between the particles of B & existing between the particles of A & B are nearly the same. Example • Methanol + Ethanol • Benzene + Toulene • Heptane + Octane Formula When both A and B are volatile $P_A = P_A^O \chi _A$ $P_B = P_B^O \chi _B$ $P_S = P_A + P_B$ So, $P_S = P_A^O \chi _A + P_B^O \chi _B$ When A is volatile and B is non – volatile $P_A = P_A^O \chi _A$ $P_B = 0$ $P_S = P_A + P_B$ $P_S = P_A^O \chi _A$ ### Non -Ideal solution Non - ideal solutions are those solutions which do not always obey Roault’s law. Non - ideal solution are of two types:- Non - ideal solution showing ‘+’ Deviation from Roault’s law Non - ideal solution showing ‘-‘ Deviation on from Raoult’s law These are the solution in which the partial vapour pressure of a component ($P_A$) is more than the expected value ($P_A^O \chi _A$) Characteristic: (i) $\Delta H_{mix} > 0$ Such solution are endothermic i. e. they absorb heat from the surrounding & therefore they appear to be cool when mixed. (ii) $\Delta V_{mix} > 0$ Condition: It is observed when the force of attraction between A & B particles is less than the fore of attraction.existing between the particles of A and the particles of B. Eg: Ethanol + Cycbhexane In order to mix cyclohexane & Ethanol the hydrogen bonding in Ethanol needs to be broken. Breaking of hydrogen bonds require energy which is obtained from the surrounding. Thus, the mixture appears to be cool when mixed. The intermolecular force existing between Ethanol & Cyvbhexane is weaker than the hydrogen bonding in Ethanol. Due to decrease in the inter molecular force the inter particle distance increases & the volume is increased. Thus $\Delta V_{mix} > 0$ with increase in temperature the solubility increases. The partial vapour pressure of a component ($P_A$) is less than the expected value ($P_A^O \chi _A$) Characteristic: (i) $\Delta H_{mix} < 0$ Such solutions are endothermic and they lose heat to the surrounding thus they become hot when mixed. (ii) $\Delta V_{mix} < 0$ Condition: It is observed when the inter molecular force existing between the particles of A & B is greater than the force of attraction existing the particles of A or the particles of B. Eg: Chloroform + acetone When chloroform & acetone mixed then hydrogen is formed between them to formation of hydrogen bonding energy is lost to the surrounding the mixture appears hot. The intermolecular force existing between chloroform acetone is greater the intermolecular force exist in either chloroform molecules or acetone molecule. Due increases in the intermolecular force the interparticle –e decreases and hence volume also decreases. Thus $\Delta V_{mix} < 0$ with increase in temperature the solubility decreases ## ZEOTROPIC MIXTURE OR CONSTANT BOILING MIXTURE • These are the mixtures which have same composition both in liquid state as well as in vapour state i.e. xA = yA & xB = yB • These are the mixtures whose components boil at a fixed temperature. Thus, the components of Azeotropic mixture cannot be separated by the fractional distillation. Types of Azeotropic mixture:- Minimum Boiling Azeotropic Maximum Boiling Azeotropic Those soln which show +ve deviation from Raoult’s law form minimum boiling Azeotrope   Eg: 95% Ethanol + 5% water Those soln which show -ve deviation from Roault’s law from maximum boiling Azeotropes. Eg: 68% HNO3 + 32% water.
2021-12-09 10:07:35
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6511874794960022, "perplexity": 2035.2232258917047}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964363791.16/warc/CC-MAIN-20211209091917-20211209121917-00519.warc.gz"}
https://www.physicsforums.com/threads/what-is-the-mechanism-for-expansion.14208/
What is the mechanism for expansion 1. Feb 10, 2004 wolram if space is expanding and we take it that it is quantasized into planckian units of invariant dimensions what is the mechanism for expansion, do these planck quanta multiply? or have i misunderstood the concept? 2. Feb 13, 2004 meteor Hi, in LQG the nodes of spin networks carry quantities of volume associated to them, so you must to understand firstly what an spin network is: a collection of edges and nodes forming a graph Anyway this graph is not still: Is always evolving thanks to the Hamiltonian constraint, that guides his evolution, so new nodes can appear in the graph (and then new quantized cells of volume). Also, think that a node that was originating a quantized cell of space can then enter inside of an object and give to it some quantity of volume (this has been called by some authors as Einsteinian alchemy) Last edited: Feb 13, 2004
2017-06-25 02:23:50
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8107617497444153, "perplexity": 1543.615078991167}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-26/segments/1498128320386.71/warc/CC-MAIN-20170625013851-20170625033851-00517.warc.gz"}
https://piping-designer.com/index.php/mathematics/geometry/solid-geometry/760-cube
# Cube Written by Jerry Ratzlaff on . Posted in Solid Geometry • Cube (a three-dimensional figure) is a regular polyhedron with square faces. • All edges are the same length. • All faces are squares • Diagonal is a line from one vertices to another that is non adjacent. • Circumscribed sphere is a polyhedron is a sphere that contains the polyhedron and touches each of the ployhedron's vertices. • Inscribed sphere - A convex polyhedron is a sphere that is contained within the polyhedron and tangent to each of the polyhedron's faces. • Midsphere is a polyhedron is a sphere that is tangent to every edge of the polyhedron. • 4 base diagonals • 24 face diagonals • 4 space diagonals • 12 edges • 6 faces • 8 vertex ## Cube Circumscribed Sphere Radius formula $$\large{ R = a \; \frac{ \sqrt {3} }{2} }$$ ### Where: $$\large{ R }$$ = circumscribed sphere radius $$\large{ a }$$ = edge ## Cube Circumscribed Sphere Volume formula $$\large{ C_v = \frac{3}{4} \; \pi \; \left( a\; \frac{ \sqrt {3} }{2} \right) ^3 }$$ ### Where: $$\large{ C_v }$$ = circumscribed sphere volume $$\large{ a }$$ = edge $$\large{ \pi }$$ = Pi ## Edge of a Cube formulas $$\large{ a = \sqrt { \frac { A_s } { 6 } } }$$ $$\large{ a = V^{1/3} }$$ $$\large{ a = \sqrt { 3 } \; \frac { D' } {3} }$$ ### Where: $$\large{ a }$$ = edge $$\large{ A_s }$$ = surface face area $$\large{ V }$$ = volume $$\large{ D' }$$ = space diagonal ## Face Area of a Cube formula $$\large{ A_{area} = a^2 }$$ ### Where: $$\large{ A_{area} }$$ = face area $$\large{ a }$$ = edge ## Inscribed Radius of a Cube formula $$\large{ r = \frac{a}{2} }$$ ### Where: $$\large{ r }$$ = inside radius $$\large{ a }$$ = edge ## Inscribed Sphere Volume of a Cube formula $$\large{ I_v = \frac{3}{4} \; \pi \; \left( \frac{ a }{2} \right) ^3 }$$ ### Where: $$\large{ I_v }$$ = circumscribed sphere volume $$\large{ a }$$ = edge $$\large{ \pi }$$ = Pi ## Midsphere Radius of a Cube formula $$\large{ r_m = \frac{a}{2} \sqrt {2} }$$ ### Where: $$\large{ r_m }$$ = midsphere radius $$\large{ a }$$ = edge ## Space Diagonal of a Cube formula $$\large{ D' = \sqrt {3} \;a }$$ ### Where: $$\large{ D' }$$ = space diagonal $$\large{ a }$$ = edge ## Surface face Area of a Cube formula $$\large{ A_s = 6\;a^2 }$$ ### Where: $$\large{ A_s }$$ = surface face area $$\large{ a }$$ = edge ## Surface to volume ratio of a Cube formula $$\large{ S_v = \frac{6}{a} }$$ ### Where: $$\large{ S_v }$$ = surface to volume ratio $$\large{ a }$$ = edge ## Volume of a Cube formula $$\large{ V = a^3 }$$ ### Where: $$\large{ V }$$ = volume $$\large{ a }$$ = edge Tags: Volume Equations
2022-10-03 04:38:02
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8034844994544983, "perplexity": 7237.667876677955}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337398.52/warc/CC-MAIN-20221003035124-20221003065124-00762.warc.gz"}
https://eccc.weizmann.ac.il/title/Z
Under the auspices of the Computational Complexity Foundation (CCF) REPORTS > A-Z > Z: A - B - C - D - E - F - G - H - I - J - K - L - M - N - O - P - Q - R - S - T - U - V - W - X - Y - Z - Other Z TR02-001 | 8th January 2002 Cynthia Dwork, Moni Naor #### Zaps and Their Applications A zap is a two-round, witness-indistinguishable protocol in which the first round, consisting of a message from the verifier to the prover, can be fixed once-and-for-all" and applied to any instance, and where the verifier does not use any private coins. We present a zap for every language in NP, ... more >>> TR14-068 | 5th May 2014 Eric Allender, Bireswar Das #### Zero Knowledge and Circuit Minimization Revisions: 1 We show that every problem in the complexity class SZK (Statistical Zero Knowledge) is efficiently reducible to the Minimum Circuit Size Problem (MCSP). In particular Graph Isomorphism lies in RP^MCSP. This is the first theorem relating the computational power of Graph Isomorphism and MCSP, despite the long history these ... more >>> TR06-139 | 14th November 2006 #### Zero Knowledge and Soundness are Symmetric Revisions: 1 We give a complexity-theoretic characterization of the class of problems in NP having zero-knowledge argument systems that is symmetric in its treatment of the zero knowledge and the soundness conditions. From this, we deduce that the class of problems in NP intersect coNP having zero-knowledge arguments is closed under complement. ... more >>> TR14-160 | 27th November 2014 Gil Cohen, Igor Shinkar An $(n,k)$-bit-fixing source is a distribution on $n$ bit strings, that is fixed on $n-k$ of the coordinates, and jointly uniform on the remaining $k$ bits. Explicit constructions of bit-fixing extractors by Gabizon, Raz and Shaltiel [SICOMP 2006] and Rao [CCC 2009], extract $(1-o(1)) \cdot k$ bits for $k = ... more >>> TR14-078 | 7th June 2014 Mika Göös, Toniann Pitassi, Thomas Watson #### Zero-Information Protocols and Unambiguity in Arthur-Merlin Communication We study whether information complexity can be used to attack the long-standing open problem of proving lower bounds against Arthur--Merlin (AM) communication protocols. Our starting point is to show that---in contrast to plain randomized communication complexity---every boolean function admits an AM communication protocol where on each yes-input, the distribution of ... more >>> TR02-063 | 3rd December 2002 Oded Goldreich #### Zero-Knowledge twenty years after its invention Zero-knowledge proofs are proofs that are both convincing and yet yield nothing beyond the validity of the assertion being proven. Since their introduction about twenty years ago, zero-knowledge proofs have attracted a lot of attention and have, in turn, contributed to the development of other areas of cryptography and complexity ... more >>> TR02-015 | 13th February 2002 Philippe Moser #### ZPP is hard unless RP is small Revisions: 1 We use Lutz's resource bounded measure theory to prove that either \tbf{RP} is small or \tbf{ZPP} is hard. More precisely we prove that if \tbf{RP} has not p-measure zero, then \tbf{EXP} is contained in$\mbf{ZPP}/n\$. We also show that if \tbf{RP} has not p-measure zero, \tbf{EXP} equals ... more >>> ISSN 1433-8092 | Imprint
2021-05-06 18:10:19
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7067266702651978, "perplexity": 2379.1858935344753}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243988759.29/warc/CC-MAIN-20210506175146-20210506205146-00136.warc.gz"}
https://socratic.org/questions/how-do-you-use-the-binomial-series-to-expand-sqrt-1-x
# How do you use the binomial series to expand sqrt(1+x)? Dec 18, 2015 sqrt(1+x) = (1+x)^(1/2) = sum(1//2)_k/(k!)x^k with $x \in \mathbb{C}$ Use the generalization of the binomial formula to complex numbers. #### Explanation: There is a generalization of the binomial formula to the complex numbers. The general binomial series formula seems to be (1+z)^r = sum((r)_k)/(k!)z^k with ${\left(r\right)}_{k} = r \left(r - 1\right) \left(r - 2\right) \ldots \left(r - k + 1\right)$ (according to Wikipedia). Let's apply it to your expression. This is a power series so obviously, if we want to have chances that this doesn't diverge we need to set $\left\mid x \right\mid < 1$ and this is how you expand $\sqrt{1 + x}$ with the binomial series. I'm not going to demonstrate the formula is true, but it's not too hard, you just have to see that the complex function defined by ${\left(1 + z\right)}^{r}$ is holomorphic on the unit disc, calculate every derivative of it at 0, and this will give you the Taylor formula of the function, which means you can develop it as a power series on the unit disc because $\left\mid z \right\mid < 1$, hence the result.
2019-07-20 02:57:05
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 8, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9015854597091675, "perplexity": 148.4398442862133}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195526408.59/warc/CC-MAIN-20190720024812-20190720050812-00378.warc.gz"}
http://mathoverflow.net/questions/25968/what-is-the-most-simple-non-planar-gorenstein-curve-singularity/26418
# What is the most simple non-planar Gorenstein curve singularity? Let $R$ be a reduced curve singularity over an algebraically closed field $k$ and $\tilde{R}$ its integral closure in its total ring of fractions. The $k$-dimension of $\tilde{R}/R$ is finite. If we assume $R$ is non-planar and Gorenstein, then how small can this number be? The ring $R = k[[x,y,z]]/(xy = z^2, z x = y^2)$ is a complete intersection, hence Gorenstein, and the dimension of $\tilde{R}/R$ is $4$. The question is thus "is $2$ or $3$ possible?" For the sake of concreteness, let's say that a curve singularity is a $1$-dimensional quotient of $k[[x_1, \dots, x_n]]$ for some $n$. Edit: I had thought that the $k$-dimension of $\tilde{R}/R$ was widely known as the $\delta$-invariant; I think this the notation Serre uses in Algebraic Groups and Class Fields. From the comments, it seems this is non-standard, and I have edited accordingly. As Graham points out, the number $\operatorname{dim}(\tilde{R}/R)$ is also the colength of the conductor ideal. The number also comes up in computing the (arithmetic) genus of a singular curve. - Comments about confusion about $\delta$ deleted. –  David Speyer May 26 '10 at 19:42 @jlk: why does delta=4 in your example? –  Hailong Dao May 28 '10 at 4:02 @Hailong Dao: I computed delta directly from the definitions. The computation is similar to, but harder than, the example of 4 lines in 3-space given in response to Graham's question. Unless I made a mistake, I think the normalization $\tilde{R}$ is isomorphic to the product of 4 power series rings. As a complete $k$-algebra contained in $\tilde{R}$, $R$ is generated by 3 4-tuples of degree 1 monomials. The quotient \tilde{R}/R has basis given by (1,0,0,0), (0,1,0,0), (0,0,1,), and a 4-tuple in which the entries are degree 1 monomials. –  jlk May 28 '10 at 20:48 (continued): If you are curious, I can try to reconstruct the details. The example of 4-lines in 3-space shows that $\delta=4$ can be achieved by a non-planar Gorenstein curve singularity, and this is maybe a nicer examples than $k[[xyz]](xy−z^2, zx=y^22)$. –  jlk May 28 '10 at 21:35 @jlk: I think your example is also an intersection of 4 lines : $(y,z)$ and $y-az, x-a^2z$ with $a^3=1$. –  Hailong Dao May 30 '10 at 3:15 I think Graham's answer already gave most of what you need to prove that $4$ is the smallest possible. Let $V$ be the integral closure of $R$, $n$ be the embedding dimension of $R$, and $e=e(R)$ be the multiplicity. Claim: If $R=k[[x_1,\cdots,x_n]]/I$ is Gorenstein and $n$ is at least $3$, then $\dim_k(V/R)\geq e$. Proof: Let $m$ be the maximal ideal of $R$. As Graham pointed out, we have $e = \dim_k(V/mV)$. So: $$\dim_k(V/R) =\dim_k(V/mR)-\dim_k(R/mR) \geq \dim_k(V/mV)-1=e-1$$ We need to rule out the equality. If equality happens, then one must have $mV=mR$. This shows that $m$ is the conductor of $R$. As you already knew, since $R$ is Gorenstein, one must then have $\dim_k(V/R)=\dim_k(R/m)=1$. The inequality now gives $e\leq 2$. Abhyankar's inequality (part 2 of Graham's answer) gives $n\leq 2$, so $R$ is planar, contradiction. Now, one needs to show that for $R$ non-planar, $e\geq 4$. You could use part $3$ of Graham's answer, or arguing as follows: if $n\geq 4$ we are done by Abhyankar inequality. If $n=3$, a Gorenstein quotient of $k[[x,y,z]]$ must be a complete intersetion, and so $I=(f,g)$, each of minimal degree at least $2$ since $R$ is not planar, thus $e$ must be at least $4$. By the way, one could construct a domain $R$ such that $\dim_k(V/R)=4$ as follows: Take $R=k[[t^4,t^5,t^6]]$. The semigroup generated by $(4,5,6)$ is symmetric, so $R$ is Gorenstein. The Frobenius number is $7$, and $V/R$ is generated by $t,t^2,t^3,t^7$. EDIT (references, per OP's request): Abhyankar inequality is standard, for example see Exercise 4.6.14 of Bruns-Herzog "Cohen-Macaulay rings", second edition (Link to the exact page). Or see exercise 11.10 of Huneke-Swanson book, also available for free here. Or Google "rings with minimal multiplicity". (The original references are now available thanks to Graham, see his comment below) As for $e=\dim_k(V/mV)$, I could not find a convenient reference, but here is a sketch of proof using the above reference: First, using the additivity and reduction formula (Theorem 11.2.4 of Huneke-Swanson) to reduce to the domain case. Assume that $R$ is now a complete domain, then $V=k[[t]]$, and $R$ is a subring of $V$. Let $x\in m$ be an element with smallest minimal degree. Then $mV=xV$ ($V$ is a DVR), and it is not hard to see that $e=$ the minimal degree of $x$ $=\lambda(V/xV)$ (see Exercise 4.6.18 of Bruns-Herzog, same page as the link above). Alternatively, one can use the fact that: $$e(m,V) = \text{rank}_RV.e(m,R) = e$$ The second inequality is because $V$ is birational to $R$ so $\text{rank}_RV=1$. The left hand side can be easily computed by definition to be length of $V/xV$, which equals $\dim_k(V/mV)$. (use $m^nV=x^nV$ since $V$ is a DVR) Fun exercise! - Great! Could you include a reference for Abhyankar's result. I am not familiar with it. –  jlk May 31 '10 at 2:14 Also, what's a reference for the Greither result that $e= V/\mathfrak{m}V$? –  jlk May 31 '10 at 2:15 If you're an original-reference nerd like me, Abhyankar's result is in Local rings of high embedding dimension, Amer. J. Math. 89 (1967), 1073–1077. Greither's theorem is in On the two generator problem for the ideals of a one-dimensional ring, J. Pure Appl. Algebra 24 (1982), no. 3, 265–276. –  Graham Leuschke May 31 '10 at 21:14 @Graham, thanks! –  jlk May 31 '10 at 21:52 @Hailong Dao: I just noticed you edited your answer to include references. Thanks! –  jlk Jun 1 '10 at 0:11 edit: this answer is garbage (or, rather, answers a question that the asker did not ask). I leave it here because Hailong's answer refers to some of its ingredients. $3$ is the least possible. Ingredient 1: For a one-dimensional complete local ring $R$ with integral closure $\tilde R$, the $k$-dimension of $\tilde{R}/R$ is one less than the multiplicity $e(R)$ of the ring (this is false: I confused $\tilde{R}/\mathfrak{m}$ with $\tilde{R}/\mathfrak{m}\tilde{R}$). This is because $e(R) = \dim_k (\tilde {R}/\mathfrak{m}\tilde{R})$, which is due to Greither in 1982. Ingredient 2: There is an inequality due to Abhyankar for the multiplicity of a CM local ring: $$e(R) \geq \mu_R(\mathfrak{m}) - \dim R + 1$$ where $\mu$ denotes the minimal number of generators. Ingredient 3: It's relatively easy to see that for a Gorenstein local ring that is not a hypersurface (e.g. a one-dimensional non-planar Gorenstein local ring) we can do one better than Abhyankar's bound. Namely, if we had equality in Abhyankar's bound, then $\mathfrak m^2 = \mathbf{x}\mathfrak m$ for some minimal reduction $\mathbf{x}$ of the maximal ideal. Count lengths in $\bar{R} = R/(\mathbf{x})$, remembering that it has one-dimensional socle since $R$ is Gorenstein, to see that $\dim R = \mu_R(\mathfrak m) -1$. Therefore we have $$e(R) \geq \mu_R(\mathfrak{m}) - \dim R + 2$$ for a Gorenstein non-hypersurface $R$. Applying this formula with $\mu_R(\mathfrak m) \geq 3$ and $\dim R = 1$, we get that the multiplicity is at least $4$, so the degree is at least 3. - I am a little confused about Ingredient 1: if $R=k[[t^a,t^b]]$ with $a<b$ then the multiplicity is $a$, while the length of the quotient = the number of integers not in the semigroup generated by $(a,b)$ =$(a-1)(b-1)/2$. –  Hailong Dao May 26 '10 at 21:59 I'm also a little confused. If we take $R$ to be the subring of the product of 4 power series rings generated by (t,0,0,-t), (0,t,0,-t), (0,0,t,-t) (4 general lines in 3-space), then \tilde{R}/R has basis given by (1,0,0,0), (0,1,0,0), (0,0,1,0), and (t,0,0,0). If m is the maximal ideal, then I am computing that m^{n}/m^{n+1} = 4 for n sufficiently large. I think this means the multiplicity is 4. Am I using the wrong definition of multiplicity or something? –  jlk May 26 '10 at 22:20 btw, is "degree" the standard term for the dimension of \tilde{R}/R? –  jlk May 26 '10 at 22:21 Hmm, that's a problem. Hailong's right. I think I confused $\tilde{R}/\mathfrak{m}$ with $\tilde{R}/\mathfrak{m}\tilde{R}$. Drat, it was such a clean and satisfying answer too. jlk, I don't know a standard term for that dimension -- it's never come up in my experience before. Why is it useful/interesting? –  Graham Leuschke May 26 '10 at 23:51 @Graham, The number $\operatorname{dim} \tilde{R}/R$ is a basic invariant of the curve. The Gorenstein relation is, as you probably know, equivalent to the equation $2 \cdot \operatorname{dim} \tilde{R}/R = \operatorname{dim} \tilde{R}/I$, where $I$ is the conductor ideal. –  jlk May 27 '10 at 2:29 Here is a short geometric proof that if $R$ is Gorenstein and $\tilde{R}/R$ has dimension $\delta \le 3$, then $R$ is planar. We can realize $R$ as the local ring of a rational curve $X$ of genus $\delta$. If $X$ is hyperelliptic (i.e. admit a degree $2$ morphism $f$ to $\mathbb{P}^{1}$), then $X$ embeds into a smoth surface: the ruled surface $\mathbb{P}(\mathcal{E})$ for $\mathcal{E}=f_{*}\mathcal{O}_{X}$. In particular, the singularities of $X$ are planar. Otherwise, $X$ is non-hyperelliptic of genus $3$. But then the canonical map embeds $X$ as a plane quartic curve. In particular, $X$ again embeds in a smooth surface and hence has planar singularities. -
2015-04-27 07:38:37
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9583057165145874, "perplexity": 286.05634644395843}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-18/segments/1429246657588.53/warc/CC-MAIN-20150417045737-00136-ip-10-235-10-82.ec2.internal.warc.gz"}
https://www.albert.io/ie/sat-chemistry-subject-test/calculating-average-atomic-mass
Free Version Difficult # Calculating Average Atomic Mass SATCHM-3TV61Q An element $(X)$ has two naturally occurring isotopes. $X-90$ has an abundance of $75\%$ and $X-94$ has an abundance of $25\%$. What is the average atomic mass of element $X$? A $90 \ amu$ B $91 \ amu$ C $92 \ amu$ D $93 \ amu$ E $94 \ amu$
2017-01-20 03:53:02
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6587002873420715, "perplexity": 1148.185727829855}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280774.51/warc/CC-MAIN-20170116095120-00461-ip-10-171-10-70.ec2.internal.warc.gz"}
http://www.gradesaver.com/last-of-the-mohicans/q-and-a/what-is-the-attitude-of-indians-as-evidenced-by-magua-towards-the-french-309713
# what is the attitude of indians, as evidenced by Magua, towards the french? from the last of the mohicans
2017-09-21 16:17:38
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8107054829597473, "perplexity": 5448.72887364832}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818687833.62/warc/CC-MAIN-20170921153438-20170921173438-00616.warc.gz"}
https://www.gamedev.net/forums/topic/670977-code-review/
# Code Review This topic is 1067 days old which is more than the 365 day threshold we allow for new replies. Please post a new topic. ## Recommended Posts I recently interviewed with a CAD-based studio where I live for a C++/Renderer position. The role was to work on their internal 3D rendering code that's written in DX/OpenGL. They asked for a code sample of one of my personal projects. I sent them one of my projects. It's a basic, open source, texture packer application using Qt to provide the UI. It's not much, but at least it's pretty complete. The CEO got back to me about 10 days later, and said I was too junior for their company at this time. They're looking to fill a senior position first, then they'd consider bringing me on to get mentored. I was only a little disappointed at first, but more curious than anything. I've been programming since middle school, but I'm only 2 years into programming professionally full-time. I've worked in 2 smaller/startup companies in this time. None of my work has been C++ or lower-level rendering-oriented, so it didn't surprise me when they turned me down. I know I'm not the best C++ programmer out there. I'd like to have some one with better knowledge of C++ to point me in the right directly as far as decent C++ practices go. I don't know if there are any good resources for this online, so I wanted to ask for one in this thread, if that's allowed. I'd really appreciate it if anyone could give me feedback. It can be downloaded here. If anything, you'd get a texture packer with source out of it. ##### Share on other sites I'd venture to guess that your lack of experience (two years) contributed fairly significantly to your being considered "too junior," but the demo you submitted probably didn't help. Not necessarily because of the code itself (which I have not even looked at yet), but simply because of what it is: texture packing is not that advanced a topic unless you're actually demonstrating some new, novel mechanism of solving the bin-packing problem efficiently (which would still probably be mostly of note to somebody in academia). It's also not hugely relevant to low-level rendering. I'll take a look at the code later on and see if I can give you some feedback. ##### Share on other sites Quick glance at the code, you pass parameters by value specially std::string, you do not use const anywhere including marking methods which do not change state of an object. Those two things would be give me the impression that you are a junior. ##### Share on other sites @DoctorGlow (Offtopic Question probably) std::string should be passed by reference? Or should you even pass a pointer to a string instead of the string itself? I always considered a string as a primitive type so I did neither... ##### Share on other sites Sorry if this is tl;dr. It took me 3 hours to write up a response. Took it to heart though. I'd venture to guess that your lack of experience (two years) contributed fairly significantly to your being considered "too junior," but the demo you submitted probably didn't help. This is what I thought too. The sprite packer and signed-distance font generator were the only two presentable projects I really had. All of my ambitious projects break over time due to underlying code changes from a shared code base. I'm fixing that by providing version control --something I never really used until 2 years ago. Quick glance at the code, you pass parameters by value specially std::string, you do not use const anywhere including marking methods which do not change state of an object. Those two things would be give me the impression that you are a junior. AND std::string are objects (class) and can be expensive, specialy if it needs to copy data, so I "always" pass by const ref. Another big concern of mine. I know that passing things by value's kills performance, but I haven't wrapped my head around how to use references, const references, and const getters effectively. I was going to make this a future priority, but since that half the replies are about how to properly pass data into functions, it sounds like it's a fundamental I need to work on right now. I can see how passing by const reference acts as a security concern, but does it also increase performance as well making it read-only? The same goes for plain references: security in the sense that they're restricted pointer. They can't be re-assigned after assignment, no direct pointer arithmetic, can't ever be NULL, and can't take the address of references. These points were brought up in this Stack Overflow thread, but I'm not sure what the 3rd point means. Also, is it proper to return const references/pointers from methods? I've tried wrapping my head around this as well. There are a lot of things I'll write, but I'll write them in weird ways because I'm unsure of the proper way for it to be written in certain situations. Josh brings up some of them. @Josh Petrie: Thank you for taking the time during your lunch break to go over this. There are plenty of gems here for a junior like myself, and I did find your comments very informative. Here are my responses: BinPacker.h • I wrote my header guards like this since that's how I saw them early-on in C++ tutorials. As time went on, I adopted this convention. Then again, I've seen plenty of examples online that don't use underscores like this. It sounds like a potential issue for some compilers, so I'll remove both the preceeding and trailing underscores. • Regarding BinRect and inheritance abuse: I agree with you. I thought about using composition here as I think simple types like VectorX, MatrixX, ColorX, Rect, etc should all be final classes. I should be implementing that as well. When I wrote this code back in November, 2014, I was just getting exposed to the idea that inheritance isn't the end-all solution in C++. I tried staying away from it, which is why BinPacker is generic, but again, inheritance prevailed as a design choice in every class contained in the BinPacker module. Providing a HAS A Rect relationship with BinRect and BinNode should help fix some points that you bring up later. • 'rotated' is supposed to be treated as a bool value. I use uint8_t because I wanted it strictly typed for POD purposes so it's more portable when exporting to a file. I try to stay away from bool when working with POD due varying to compiler implementation. bool can vary between platforms/implementations, so I wanted to strictly type something to an unsigned 8-bit value. Also, it doesn't seem like I can XOR bools on some compiler. I use XOR to toggle rotation state of my BinRects. If using a bool, I could toggle simply by myBoolVar = !myBoolVar. Checking > 0 is true, but again, I should just check if(rotated == true) { }. Would a bool still be a better way to go? It'd certainly make it more readable. • I always thought I had to check if my pointer was NULL before deleting it. Since this isn't the case, I'll remove the checks. • You bring up a good point about CompareBinRects() as a template is unnecessary. I can't remember why I even did that, but again, making it a template function doesn't make sense. I moved it into BinPacker as a private method. • BinPacker is a template because it's meant to take a subclass BinRect. BinRect::CanPack() was a hasty addition to the original design because I realized that some rects included in the rects vector shouldn't be packed due to some sort of invalidation in some implementations of this class. This is obviously poor design. Rect validation should be done before adding them to the packer. • I agree with ForceSquare(): it sounds like an action. I was uneasy when I wrote it. It does set a state that'll be applied when Commit() gets called, but again, doesn't perform any action. This should be renamed something better, such as ForceSquareFlag() maybe? • I agree with using "unsigned" when negative values should never be used. I only kept it signed as I thought this was a common convention, and also performed better. I remember reading about signed vs unsigned ints, and performance impact a while ago when I did PSP development. I'm probably wrong. I'll use "unsigned" in places where values should be unsigned, such as looking up an array item by index. Everything should be [0, max value), but don't some compilers yield a warning if I use an unsigned datatype instead of a signed int? • I've gone back in forth between throwing an error, and bounding indices that are out-of-bounds. I'll throw an error instead. I'm starting to use assert() for rare situations like this. I stay away from exception handling in commonly-used code as it can be slow. Does this sound practical? • I agree with logging in utility code: it shouldn't be used, but still should be reported. I'm a fan of error codes, but there are caveats to that such as providing a bunch of different defines. I'd like to use C++11 enum classes as error codes here, but I'm not sure if that's good design simply because I haven't seen that yet. I'm sure exceptions have their purpose (I use them in C# in cases where I'd typically use assert() in C++). What is your take on exception handling? • I agree, NULL rects shouldn't be allows, and it'd be a much better practice to use a vector of objects instead a vector of pointers in this case. The only reason I'm doing this is because BinRect was originally intended to be subclassed. In reading your feedback, inheritance doesn't sound like a solid design. If I were to store objects instead of pointers in my vector, how should I match up those rects with my images or glyphs? The rects are re-ordered by size to make the packer work. I thought of providing a separate class to store my sprite info that gets "packed" by storing a pointer to the corresponding BinRect that actually gets packed, but if I'm pushing objects into a vector that could potentially resize. This could move my objects around in memory; invalidating my sprite info objects' pointers. Should an STL list of objects be a better solution? I try to stick with vectors as much as possible for cache reasons. • I moved that function into BinPacker as a method since it's only practical purpose is specific to that class. • Good call with the BinNode root. It'll become part of the stack. • Again, right about GetNumPackedRects(): it literally does nothing but return zero. I forget what that was even supposed to do. It's not even used in the entire applications. It's deleted. BinPacker.cpp • Should have seen that one coming. I'll go with error codes for BinPacker::Divide(). It currently only returns a bool value, but I could convert that into error codes to get more specific on what went wrong. Matrix.h: • I've recently started to get away from being too vague about naming conventions, specifically functions/methods. I'd try to overload methods as much as possible, but I've seen cases where I've gone overboard in other projects. By naming these methods. • I see your point, and I could provide documentation on this. This class uses its data in a column-major format as every graphics API I've seen uses the column-major interpretation (DirectX, OpenGL, the PSP's GU). I've considered this a given, but you're right: it wouldn't hurt to specify somewhere. • NOTE: Just like all other complex math types I have in classes, they all have operator overloads. There are many opportunities for const reference parameters. SpritePacker.h • I was wondering if SpritePacker would violate SRP. In fact, I'm not very confident in my philosophy of the single responsibility pattern to derived classes without violating SRP. The CanPack() virtual method will be removed, as stated above. I initially thought the same thing: the packer should only pack rects, not sprites. I abandoned that philosophy because I wasn't sure how to associate sprites with packed rects if my BinRects are moved around in memory as a vector (might have to go with a list on this one, unfortunately). The idea was that SpriteBinRects could be pushed into the SpritePacker even when a texture wasn't working. Of course, if a texture were to fail, the SpriteBinRect shouldn't be instantiated in the first place. • I thought the same thing about the ternary state: it returns a boolean value. The only reason I explicitly specify true : false is because I think I ran into a compiler error when assigning a bool to an expression evaluation. • Again, you're right, passing by value is largely unneeded. I'm cleaning this up as I get the time to do so. Hopefully I'll have an amended version of this code completed by the end of the week. • Yeah, SpritePacker will suffer the same issues from BinPacker. SpritePacker.cpp: • INPLACESWAP and SwapRedBlue32 aren't my own code. I took it from FreeImage, and put it in my code-base since these functions aren't always available in every version of FreeImage's source code. I need it to swap Blue/Red components since FreeImage has the nerve to store those components in a BGR format. Some people... lol jk, FreeImage has been VERY useful. • I've wondered about where to put the vertex/shader source code. I put it in InitCommon() because it doesn't need to exist past setting up a few stock shaders. After that, the data isn't needed anymore. I let it release from the stack after InitCommon() reaches the end of its scope. Do I still need to move it somewhere as static common data? It'd only eat up a few KB of memory if I did that. Types.h • I setup the assignment operator overloads properly where the parameter takes a const reference, and returns a reference. I've seen this in others' code (tinyxml, FreeImage, STL, Box2D, etc). Again, I've stayed away from references and const references out of lack of understanding. As stated above, it sounds like this should be a top priority. Most of the above is not too bad, the biggest problems I would say are the way you use run- and compile-time polymorphism, the way you deal with ownership, and the apparent consideration you put in to the shape and surface area of your APIs I still struggle with all of these. I can nail a lot of the simple stuff, but I don't most of the time. I've been programming in C++ as an obsessive hobby since I was in middle school, about 11 years ago. That said, I still do consider myself novice. I've started to notice that I need to get feedback from other programmers to grow, otherwise I'll just be stuck in my old ways. I just gotta face that I'm not John Carmack --I'm just a regular guy lol. I'll to ask more questions, and apply the advice in smaller test cases to reinforce what I'm being taught. I can't thank you all enough for the feedback. Edited by Vincent_M ##### Share on other sites I'd like to have some one with better knowledge of C++ to point me in the right directly as far as decent C++ practices go. I think it's a pretty good idea to read Scott Meyers's Effective C++. ##### Share on other sites Another big concern of mine. I know that passing things by value's kills performance, but I haven't wrapped my head around how to use references, const references, and const getters effectively. I was going to make this a future priority, but since that half the replies are about how to properly pass data into functions, it sounds like it's a fundamental I need to work on right now. I can see how passing by const reference acts as a security concern, but does it also increase performance as well making it read-only? The same goes for plain references: security in the sense that they're restricted pointer. They can't be re-assigned after assignment, no direct pointer arithmetic, can't ever be NULL, and can't take the address of references. These points were brought up in this Stack Overflow thread, but I'm not sure what the 3rd point means. Also, is it proper to return const references/pointers from methods? I've tried wrapping my head around this as well. Wrapping your head around when to  use (const) references is really easy. For primitive values and small structs (something like 1-3 four-byte values), you pass by value, unless you want to modify the variable you are passing. In this case you pass by reference, unless the parameter is optional, in this case you pass by pointer. For larger classes/structs, you generally always pass by at least const reference. Unless you want to modify the class of course, then you pass by reference. Unless again the value is optional, in which case you pass by pointer. Performance in this regards will be increased when you pass stuff like std::vector, std::string which can take a large amount of time to copy. I wouldn't say they exactly increase security, since you can do stuff like this unfortunately: void takeString(std::string& string) { string.clear(); } std::string* pString = nullptr; takeString(*pString); // crash inside the function as if you were passing nullptr but they document intent by telling the user of your function "the value of this parameter is required", while using only pointers technically make it unclear whether or not they have to take a value or can be nullptr. Edited by Juliean 1. 1 2. 2 Rutin 24 3. 3 4. 4 JoeJ 18 5. 5 • 14 • 17 • 11 • 11 • 9 • ### Forum Statistics • Total Topics 631759 • Total Posts 3002162 ×
2018-07-20 12:52:35
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2227436900138855, "perplexity": 1298.1871363396783}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676591596.64/warc/CC-MAIN-20180720115631-20180720135631-00353.warc.gz"}
https://www.physicsforums.com/threads/wave-function-zero-at-infinity.242250/
# Wave Function Zero At Infinity? 1. Jun 27, 2008 ### GAGS Its looking quite simple problem but let me explain properly my question. Wave function as we know is also known as matter wave/field amplitude. Then definitely there is associated a wave with it. Then how can we say that wave amplitude vanished at infinite! 2. Jun 27, 2008 ### Fredrik Staff Emeritus Because if it isn't, $$\int |\psi(x)|^2 dx$$ is infinite. 3. Jun 27, 2008 ### malawi_glenn GAGS, for a bound system, wavefunction must go to zero due to the reason Fredrik told you. This is not applicable to free particles. Recall that the wavefunction is the probability amplitude for finding the particle at a certain location. 4. Jun 27, 2008 ### Hurkyl Staff Emeritus That doesn't follow. Consider the function: $$\psi(x) = \begin{cases} 0 & x < 1 \\ 1 & x \in [n, n + n^{-2}) \\ 0 & x \in [n + n^{-2}, n+1) \end{cases}$$ where n ranges over all positive integers. $\psi(x)$ does not converge to zero at $+\infty$. However, $$\int_{-\infty}^{+\infty} |\psi(x)|^2 \, dx = \sum_{n = 1}^{+\infty} \int_{n}^{n + n^{-2}} 1 \, dx = \sum_{n = 1}^{+\infty} n^{-2} = \pi^2 / 6$$ 5. Jun 27, 2008 ### malawi_glenn Hurkyl, is that a continous function? Is that a function an eigenfunction to the Shrodinger equation? 6. Jun 27, 2008 ### Hurkyl Staff Emeritus It's a wave function, which is enough to demonstrate Fredrick's claim is inadequate. Doesn't have to be. However, I expect that if I try the same trick with Gaussians, I can construct a potential so that my counterexample is a stationary state. 7. Jun 27, 2008 ### malawi_glenn That is why I added that a bound wave function must approach zero as x goes to infty. So wave functions does not have to be eigenfunctions to Shrodinger Eq? (in non rel QM) 8. Jun 27, 2008 ### Hurkyl Staff Emeritus Nope -- only stationary states are eigenfunctions. However, the (generalized) eigenfunctions do form a basis, so that each wavefunction is a (possibly infinite or continuously indexed) linear combination of (generalized) eigenfunctions. 9. Jun 27, 2008 ### malawi_glenn I would really love to see a physical situation where that wavefunction comes out. 10. Jun 27, 2008 ### Fredrik Staff Emeritus Hurkyl's argument proves that my argument doesn't hold. His argument is valid even if it isn't possible to design an experiment which produces his wave function. I don't know how to respond. I don't know which wave functions are "valid" and which ones aren't. 11. Jun 27, 2008 ### malawi_glenn Generally that statement of yours doesn't hold, but we must look at the physical situation before making any imposings on the wave function under consideration. 12. Jun 27, 2008 ### Domnu Well, I'm sure if you generated a potential of some sort, you could model the above situation? 13. Jun 27, 2008 ### per.sundqvist You could have eigenvalue problems with compex eigen vaues, which means that the wave outside a meta stable bound region is of traveling wave type (and not equal to zero at infinity). They repesents decay of the particle probablility in time, i.e., the imaginary part of the energy estimates the lifetime of an electron in a metastable state. For example: $$V(x)=A\cdot x+B\cdot \sin(kx)$$. 14. Jun 27, 2008 ### ismaili Sorry I don't quite understand it. That's why we have complex eigenvalues? BTW, I have another question. I know the resonance state corresponding to the complex energy poles of the scattering matrix. And the Hamiltonian is Hermitian. (I'm not quite sure?) Are these complex energy poles here eigenvalues of Hamiltonian? Or, the complex energy poles are not eigenvalues of the Hamiltonian? Thanks for any instructions. 15. Jun 27, 2008 ### Andy Resnick Hey... that's cool. Are those functions used for anything? 16. Jun 27, 2008 ### Andy Resnick I think it's required for bound particles, to ensure they stay bound. As Hurkyl points out, there are square-integrable functions that are mischevious, but AFAIK, they do not correspond to physically relevant potentials and Hamiltonians. 17. Jun 28, 2008 ### per.sundqvist No the Hamiltonian is hermitean, but the bondary conditin is of open type. The proof why you get real eigenvalues fails, using Greens first identity when yo get: $$\int\Phi\nabla\Phi\cdot d\vec{S}\neq 0$$. The BC n 1D is: $$d\Psi/dx+ik\Psi=0$$. 18. Jun 28, 2008 ### Staff: Mentor It is indeed applicable to free particles. You're probably thinking of the plane-wave solution $\Psi = \exp [i(px - Et)/\hbar]$ which extends to infinity. But that isn't a physically valid wave function, precisely because it isn't square-integrable. To get a physically valid wave function for a free particle, you have to superpose a collection of waves like that, with different wavelengths, via a Fourier integral. This gives a free-particle wave function that goes to zero as x goes to infinity; and the packet width $\Delta x$ and range of momenta $\Delta p$ satisfy the Heisenberg Uncertainty Principle! 19. Jun 28, 2008 ### per.sundqvist No the hamiltonian is hermitean, but the booundary condition is of open type. The proof of real eigen values fails, using greens first identity, when: $$\int \Psi\nabla\Psi\cdot d\vec{S}\neq 0$$. The BC for this type of problems in 1D is: $$\hat{n}\cdot\nabla\Psi+ik\Psi=0$$. 20. Jun 28, 2008 ### Hurkyl Staff Emeritus But if you consider a multi-modal wavefunction, such as the one I presented (or a smoothed version of it).... 21. Jun 28, 2008 ### ismaili I don't quite understand it. We can prove the theorem which states the eigenvalues of a Hermitian operator are real from linear algebra. There is no additional condition for the boundary conditions of the eigenstates. For example, from $$A|a'\rangle = a'|a'\rangle$$ and $$\langle a''|A = a''^*\langle a''|$$ where $$A$$ is an Hermitian operator and $$a',a''$$ are its eigenvalues. We times the first equation with $$\langle a''|$$, the second equation with $$|a'\rangle$$, then substract, $$\Rightarrow (a' - a''^*)\langle a''|a'\rangle = 0$$ now we select $$a' = a''$$, then we conclude that $$a'$$ is real. So, eigenvalues of a Hermitian operator must be real. How come the resonance state has complex energy eigenvalues? My idea is that the complex energy poles of S-matrix corresponding to resonance states are not energy eigenvalues of Hamiltonian, so the complex energy is not the energy of the resonance state. Where did I got lost? thx 22. Jun 28, 2008 ### per.sundqvist Hmm, here is my derivation: $$\varphi^*H\varphi &=& \lambda\varphi^*\varphi$$ $$\varphi H^*\varphi^* &=& \lambda^*\varphi\varphi^*$$ Using hermiteacity: $$H=H^*=-\nabla^2$$ Now integrating the difference, and using Greens second identity (not the first as I wrote): $$\begin{eqnarray} (\lambda-\lambda^*)\int\varphi^*\varphi dv &=& -\int(\varphi^*\nabla^2\varphi-\varphi\nabla^2\varphi^*)dv= \nonumber \\ &=& -\oint_{\partial S}(\varphi^*\nabla\varphi-\varphi\nabla\varphi^*)\cdot d\vec{S} \neq 0 \end{eqnarray}$$ where dS is the boundary surface, ie two points in 1D. If you have $$\varphi=exp(ikx)$$ you se that the difference is not equal to zero. But if $$\varphi=0$$ at infinity, then $$\lambda=\lambda^*$$, giving a real eigen value. 23. Jun 28, 2008 ### per.sundqvist The last term in the equation tells you that a quantum current is comming out from the boundary. The simplest eigen value problem is for instance the following: $$$V(r) = \left\{ \begin{array}{l l l} \infty & r<0\\ 0 & 0\leq r< L\\ V_0 & L\leq r< L+t\\ 0 & r\geq L+t\\ \end{array} \right.$$$ You could write the solution in terms of unknown coeffs and the k as: $$$V(r) = \left\{ \begin{array}{l l l} \Psi_1 &=& A\sin(kx) ;\;0\leq r< L\\ \Psi_2 &=& B\exp(\kappa x)+C\exp(-\kappa x) ;\; L\leq r< L+t\\ \Psi_3 &=& t\exp(ikx) ;\; r\geq L+t\\ \end{array} \right.$$$ Matching functions and derivatives at the two intermediate points gives you a system like: $$M\vec{c}=0, det[M(E)]=0, k=\sqrt{\hbar^2E/2m}, \kappa=\sqrt{\hbar^2(V-E)/2m}$$. solving the determinant eqation in E numerically you will find an approxiamte solution like: $$E_n\approx\frac{\hbar^2}{2m}\left(\frac{n\pi}{L}\right)^2+i\varepsilon$$. The imaginary part will contribute in the time-dependent solution as: $$\mid\Psi\mid^2=\mid\psi\mid^2e^{i(i\varepsilon)t/\hbar}=\mid\psi\mid^2e^{-t/\tau} ;\;\tau=\hbar/\varepsilon$$
2018-07-18 01:59:44
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.786742627620697, "perplexity": 1421.0574082115581}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676589980.6/warc/CC-MAIN-20180718002426-20180718022426-00370.warc.gz"}
https://www.techwhiff.com/learn/you-find-the-following-corporate-bond-quotes-to/14283
# You find the following corporate bond quotes. To calculate the number of years until maturity, assume... ###### Question: You find the following corporate bond quotes. To calculate the number of years until maturity, assume that it is currently January 15, 2019 and the bonds have a par value of $2,000. Company (Ticker) Coupon Maturity Last Price Last Yield EST$ Vol (000’s) Xenon, Inc. (XIC) 7.10 Jan 15, 2042 94.387 ?? 57,379 Kenny Corp. (KCC) 7.29 Jan 15, 2039 ?? 6.36 48,958 Williams Co. (WICO) ?? Jan 15, 2046 94.905 7.18 43,819 What is the yield to maturity for the bond issued by Xenon, Inc.? #### Similar Solved Questions ##### Perule 2) Pnd D perule 2) Pnd D... ##### I will rate asap I will rate asap... journal entries for when books are open and when books are closed for 2019 Se At December 31, 2019 Jackson, Inc. decided to change the depreciation method on its machinery from double-declining-balance to straight-line. The Machinery had an original cost of $100,000 when purchased on July 1, 2017. I... 1 answer ##### A charge particle (q = 5.5 * 10^-10) experiences a force of F = 45i -... A charge particle (q = 5.5 * 10^-10) experiences a force of F = 45i - 3.4j N electric field. Write an expression for the electric field vector E to which the charge is subject, in terms of the force F. E = ?... 1 answer ##### Think about the money you spend every day. You probably spend the most money at or near the beginning of each month. How... Think about the money you spend every day. You probably spend the most money at or near the beginning of each month. However, after all the bills are paid, you have what are often referred to as discretionary funds. That’s money you can spend any way you wish. Please tell us on what types of t... 1 answer ##### 6. Consider the circuit in Fig. 1.2. The switch is closed at t = 0. Determine... 6. Consider the circuit in Fig. 1.2. The switch is closed at t = 0. Determine i(t) and vitit e(t) = 10 e 'V. SHIFT e(t) = 10e va 3 Vut) Figure 1.3 +4 i(t) = +4 v (t) =... 1 answer ##### 1. Suppose cars are produced using workers and machines as perfect complements. The production func- tion... 1. Suppose cars are produced using workers and machines as perfect complements. The production func- tion for producing cars is: F(K, L) = min(2K,5L) (a) Setup the short run cost minimization problem when K = 10. (2 points) (b) Solve for the short run optimal amount of labor Lsr() and short run mini... 1 answer ##### Question is: Write as molecular formula. the answer says C4H8 how is the answer C4H8? I... Question is: Write as molecular formula. the answer says C4H8 how is the answer C4H8? I got C4H9? I counted 9 hydrogens... please tell me what im doing wrong... please ignore question 3 that was pisted by accident from a old question 3. A student wants to prepare ethyl acetate. He cor- rect... 1 answer ##### CH.3 A water pistol aimed horizontally projects a stream of water with an initial speed of... CH.3 A water pistol aimed horizontally projects a stream of water with an initial speed of 5.65 m/s. (a) How far does the water drop in moving 1.80 m horizontally? (b) How far does it travel before dropping a vertical distance of 1.90 cm?... 1 answer ##### Motional EMF The following figure shows a circuit consisting of a flashlight bulb, rated 3.0 V/... Motional EMF The following figure shows a circuit consisting of a flashlight bulb, rated 3.0 V/ 1.5 W. The right wire of the circuit, which is 10 cm long, is pulled at constant speed v through a perpendicular magnetic field of strength 0.1 T. The resistance of the wires are negligible. What speed... 1 answer ##### 1) Describe the type of morph you'd expect from the neo-stop allele and explain your reasoning.... 1) Describe the type of morph you'd expect from the neo-stop allele and explain your reasoning. 8 Loxp2 neo-stop Loxp1 9 10 11 12 Note: The above figure is a 'neo-stop' allele, that is the product of engineering the FGFR3 locus. The transmembrane domain is coded for in exons 9 and 10 and... 1 answer ##### Write a complete C program to get data from file name DATA.TXT one line at a... Write a complete C program to get data from file name DATA.TXT one line at a time until there is no more data in that file. The following is one sample line in DATA.TXT ( have as many record as you wish in DATA.TXT) Name SSN &nbs... 1 answer ##### FULL Question 9 Tim Pharoah started Pharoah Roof Repairs on April 2, 2021, by investing$4,000... FULL Question 9 Tim Pharoah started Pharoah Roof Repairs on April 2, 2021, by investing $4,000 cash in the business. During April, the following transactions occurred: Apr. 6 Purchased supplies for$1,500 cash. 15 Repaired a roof for a customer and collected $1,000 cash. 25 Received$2,500 cash in a... ##### A skier (m=59.0 kg) starts sliding down from the top of a ski jump with negligible... A skier (m=59.0 kg) starts sliding down from the top of a ski jump with negligible friction and takes off horizontally If h 6.90 m and D 10.4 m, find H. 1.08x101 m You arecprmec i 155-1271 Previous Tries Your receipt no. is 155-1271 o inetic energy as she reaches the ground. Use conservation of ener... ##### Arrange the following rational numbers in descending order 2/3," "4/9," "5/12," "7/18? Arrange the following rational numbers in descending order 2/3," "4/9," "5/12," "7/18?#...
2023-02-07 09:10:05
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.23982593417167664, "perplexity": 7861.953140906386}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500392.45/warc/CC-MAIN-20230207071302-20230207101302-00364.warc.gz"}
https://blog.albert-learning.com/quiz/will-or-would/
# Will or Would? Welcome to your quiz on  Will or Would? Choose the correct answer. If I have the time, I _____ pick her up. She didn’t think that he _____ be there. What do you think _____ happen if there is no rain? I don’t know if I _____ be free tomorrow. _____ you like some tea? They _____ go to the mall today. She _____ have gone to the party  if she could. She _____ rather live alone. _____ you be coming for the annual sports meet? It _____ be sunny this week.
2022-09-29 07:42:27
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8818471431732178, "perplexity": 11914.434099925142}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335326.48/warc/CC-MAIN-20220929065206-20220929095206-00102.warc.gz"}
https://readpaper.com/paper/4705295257206538241
This website requires JavaScript. # Unitary paradox of cosmological perturbations Dec 2022 If we interpret the Bekenstein-Hawking entropy of the Hubble horizon asthermodynamic entropy, then the entanglement entropy of the superhorizon modesof curvature perturbation entangled with the subhorizon modes will exceed theBekenstein-Hawking bound at some point; we call this the unitary paradox ofcosmological perturbations by analogy with black hole. In order to avoid afine-tuned problem, the paradox must occur during the inflationary era at thecritical time $t_c = \ln(3\sqrt{\pi}/\sqrt{2} \epsilon_H H_{inf})/2H_{inf}$ (inPlanck units), where $\epsilon_H$ is the first Hubble slow-roll parameter and$H_{inf}$ is the Hubble rate during inflation. If we instead accept thefine-tuned problem, then the paradox will occur during the darkenergy-dominated era at the critical time$t_c'=\ln(3\sqrt{\pi}H_{inf}/\sqrt{2}fe^{2N}H_\Lambda^2)/2H_\Lambda$, where$H_\Lambda$ is the Hubble rate dominated by dark energy, $N$ is the number ofe-folds, and $f$ is a purification factor that takes the range$0<f<3\sqrt{\pi}H_{inf}/\sqrt{2}e^{2N}H_\Lambda^2$. Q1论文试图解决什么问题? Q2这是否是一个新的问题? Q3这篇文章要验证一个什么科学假设? 0
2023-01-29 09:59:59
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9258352518081665, "perplexity": 1864.4319164331077}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499710.49/warc/CC-MAIN-20230129080341-20230129110341-00118.warc.gz"}
https://stats.stackexchange.com/questions/268157/lower-bound-of-data-dimension-when-using-a-deep-learning-architecture
# lower-bound of data dimension when using a deep learning architecture I have a (X,Y)=(100,5) dataset (non-image) that I used with a deep linear classifier on Tensorflow to train and evaluate. At the same time, I have tested the very same dataset with conventional and shallow ML models (i.e. Random forest, DS-trees, MLP, etc.). I am seeing no tangible improvement in accuracy, and in some cases, even a decrease on accuracy level using my deep model. Now, I was wondering whether there are lower-bounds involved with using deep models. I know there are several (CNN, RNN, etc.) models and generalization is very coarse-grain, however, I would like to have a rule of thumb when we deal with low dimensional data as features (Y-columns) and instances (X-rows). • Keep in mind that neural networks of any kind will overtrain like crazy. So play around with your layers, either by increasing regularization or by using fewer layers with fewer connections. – Alex R. Mar 17 '17 at 17:15 In theory, the more variables in your model (the more complexity) means that you will need more data to train it effectively. If you do not have a lot of data than I suggest to stay clear from any deep technique. Don't just use them because they are 'hot' right now. Use deep learning models only when they are appropriate to the problem you are trying to solve. As a general rule of thumb I tend to follow the following for deep models: Data points needed = (# of features) * (# of classes) * 100 For shallow models Data points needed = (# of features) * (# of classes) * 10 This usually gives pretty reasonable results. However, in many cases, if you are trying to learn a very complex function, then you will need many more examples when you are training your deep model. I would suggest you stick to shallow machine learning techniques. Try SVM, it is one of the most powerful techniques and often fares very well. Your classes are probably linearly separable, since use of more advanced classifiers does not help -- the shallow classifiers can discern the class separability. There may be few non-linear associations as well, however, if there are, the shallow classifiers can discern the patterns. Would not recommend starting with complex methods first, since you are fighting Occam's razor and statistical parsimony: "don't use anything complex if simple methods work."
2021-06-14 00:55:33
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5280951857566833, "perplexity": 950.644249355634}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487611089.19/warc/CC-MAIN-20210613222907-20210614012907-00092.warc.gz"}
https://www.physicsforums.com/threads/frw-metric-time-component.807269/
FRW Metric: time component Tags: 1. Apr 7, 2015 unscientific Taken from Hobson's book: Metric is given by $$ds^2 = c^2 dt^2 - R^2(t) \left[ d\chi^2 + S^2(\chi) (d\theta^2 + sin^2\theta d\phi^2) \right]$$ Thus, $g_{00} = c^2, g_{11} = -R^2(t), g_{22} = -R^2(t) S^2(\chi), g_{33} = -R^2(t) S^2(\chi) sin^2 \theta$. Geodesic equation is given by: $$\dot u_\mu = \frac{1}{2} \left( \partial_\mu g_{v\sigma} \right) u^v u^\sigma$$ The coordinates are given by $u^0 = \dot t, u^1 = \dot \chi, u^2 = \dot \theta, u^3 = \dot \phi$. For the temporal component, $$\dot u_0 = \frac{1}{2} (\partial_0 g_{v\sigma})u^v u^\sigma$$ Photons $$u^0u_0 = 0$$ $$u^0 g_{00} g^0 = 0$$ $$g_{00}\dot t^2 = 0$$ $$\dot t = 0$$ This doesn't make any sense. For massive particles, $\dot t = 1$. 2. Apr 7, 2015 Orodruin Staff Emeritus Why do you think $u_0 u^0 = 0$? It is not true. Remember the Einstein summation convention. 3. Apr 7, 2015 unscientific For a photon, it is 0, as shown in the text. (null vector) 4. Apr 7, 2015 Orodruin Staff Emeritus No, it is not $u^\mu u_\mu = 0$ does not imply $u^0 u_0 = 0$. 5. Apr 7, 2015 unscientific So, $u^0u_0 + u^1u_1 + u^2u_2 + u^3u_3 = 0$? 6. Apr 7, 2015 Orodruin Staff Emeritus Yes.
2017-11-25 04:49:22
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9757533669471741, "perplexity": 7483.223750126693}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934809392.94/warc/CC-MAIN-20171125032456-20171125052456-00499.warc.gz"}
https://stats.stackexchange.com/questions/125455/interaction-effect-of-2x2-anova-in-meta-analysis
# Interaction effect of 2x2 ANOVA in meta-analysis I want to meta-analyze the interaction effect of a 2x2 ANOVA. (I am not talking about an interaction in the meta-regression, as in this question but about an interaction as the focal effect that should be meta-analytically summarized). What is the best way to code the interaction effect size for a subsequent meta-analysis? (preferably in the metafor package) • What kind of information do you have? Means, SDs, and cell sizes of all four cells? Just F-values and the degrees of freedom? Do you want a 'standardized' effect size? – Wolfgang Nov 25 '14 at 21:16 • I have both situations, as you wrote: a) Means, SDs, sample size and b) only F and dfs. "Standardized": If possible. – Felix S Nov 26 '14 at 18:48
2019-07-22 22:53:21
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3035988211631775, "perplexity": 1869.005016332243}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195528290.72/warc/CC-MAIN-20190722221756-20190723003756-00271.warc.gz"}
https://www.transtutors.com/questions/construct-a-2-7-2-design-by-choosing-two-four-factor-interactions-as-the-independent-1913870.htm
# Construct a 2 7-2 design by choosing two four-factor interactions as the independent generators.... 1 answer below » Construct a 27-2 design by choosing two four-factor interactions as the independent generators. Write down the complete alias structure for this design. Outline the analysis of variance table. What is the resolution of this design?
2019-10-21 15:19:57
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8817198872566223, "perplexity": 2433.6870381335625}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570987779528.82/warc/CC-MAIN-20191021143945-20191021171445-00528.warc.gz"}
http://math.stackexchange.com/questions/112336/average-of-i-i-d-exponential-random-variables?answertab=active
average of i.i.d. exponential random variables I have a question about the following probability: $Pr\{\frac{\sum^N_{k=1}u_k}{N}<1\}$ where $u_k\sim \exp(1)$ are i.i.d. exponential random variables with mean one (also, $\frac{\sum^N_{k=1}u_k}{N}$ is gamma distributed). I have plotted this probability for different $N$. The plot shows that as $N$ increases, this probability approaches $0.5$. Is this a well-known result? Has someone proved it already? If not, how to prove it rigorously? - You are right. This type of result is, however, not new, and applies to a very large class of random variables of a shape similar to yours. In particular, the fact that the random variables being averaged have exponential distribution has almost nothing to do with the result. For a proof, it is probably best to use the Central Limit Theorem. Call your random variable (the average of the $N$ exponentials) by the name $Y_N$. Since you are taking an average of $N$ random variables with mean $1$, the random variable $Y_N$ has mean $1$. Since each of the exponentials has variance $1$, and they are independent, their sum has variance $N$, and therefore $Y_N$, which is $1/N$ times the sum, has variance $\frac{N}{N^2}$, which is $\frac{1}{N}$. The Central Limit Theorem says that if $Y_N$ is the average of $N$ independent random variables with mean $\mu$ and variance $\sigma^2$, then $$\lim_{N\to\infty}P\left(\sqrt{N}(Y_N-\mu)\le z\right)=\Phi(z/\sigma),$$ where $\Phi$ is the cumulative distribution function of the standard normal. In our case, we have $\mu=1$ and $\sigma=1$. So we can rewrite the above result as $$\lim_{N\to\infty}P\left(Y_N\le 1+\frac{z}{\sqrt{N}}\right)=\Phi(z). \qquad(\ast)$$ Now just put $z=0$. Since $F(0)=1/2$, we get the fact that you observed. With relatively well-behaved random variables like your mean $1$ exponentials, the approach to normality is very rapid. Thus we can, for largish $N$, remove the limit part, and use $(\ast)$ as an estimate of $P(Y_N-1\le \frac{z}{\sqrt{N}})$. - thank you very much for such a detailed proof. It's a rigorous proof, and very clear. Thanks for these inputs. – Scholli Feb 23 '12 at 9:24 By the way, I assume that the last probability is $P(Y_N<1)<\frac{z}{\sqrt{N}}$. – Scholli Feb 23 '12 at 9:25 @Scholli: Thanks for pointing out the typo. Fixed. But it may not be the only one. – André Nicolas Feb 23 '12 at 9:44 I think the other calculations are correct ;) – Scholli Feb 23 '12 at 14:10
2016-04-29 10:49:22
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9687322974205017, "perplexity": 93.92687863524624}, "config": {"markdown_headings": false, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-18/segments/1461860111313.83/warc/CC-MAIN-20160428161511-00008-ip-10-239-7-51.ec2.internal.warc.gz"}
http://chronicle.com/blognetwork/castingoutnines/2008/01/08/software-software-get-your-fresh-software/
Previous Mathematics and sex Next Finding success at MIT, and anywhere else Software! Software! Get your fresh software! January 8, 2008, 7:58 pm Lots of activity on the software front lately. OmniFocus, the GTD app which I wrote about here, was released in version 1.0 today. I’ve been very satisfied with OmniFocus since settling on it for my GTD needs, especially since I managed to combine discounts to get it for under $20. I don’t know how many of those discounts are still available, but definitely the educational pricing is still there (though you have to look around for it at the Omni web site). Bento, called the “missing database from iWork”, was released out of beta today as well. I’ve been demoing Bento for the last few days as a tracking system for students, and it’s very nice and visual. But I found the$49 price tag to be a little pricey, especially when the entire iWork ’08 suite is \$79. Sage, an open-source computer algebra system comparable to Matlab, has been gathering lots of buzz. With all my issues with Maple 10 not working under OS X Leopard, I’ve made learning Sage to be one of my January projects. I’ve got it downloaded and installed — which was no small feat, since there is no DMG package for OS X and it has to be built from source — but I haven’t had a chance to test drive it much. More later if I do. Jott is not exactly software but rather a voice-to-text service that is really quite amazing. You call up a central phone number, address your voice message using voice commands, and then speak your message — and Jott converts it to text and sends it to the addressee as an email, SMS message, or both. You can also set Jott up to post to Google Calendar, Twitter, even blogging services (which unfortunately excludes WordPress.com). I used to want a digital voice recorder for capturing thoughts for my GTD inbox while not able to write things down or get to my laptop, but now I just call up Jott and have it send me an email. Brilliant — and free! (This has been around for a while, but I realized I hadn’t blogged about how enthused I was about it.) This entry was posted in Software, Technology and tagged , , , , , , , . Bookmark the permalink. • The Chronicle of Higher Education • 1255 Twenty-Third St., N.W. • Washington, D.C. 20037
2015-08-02 16:30:29
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.42577478289604187, "perplexity": 2836.4236951434636}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-32/segments/1438042989142.82/warc/CC-MAIN-20150728002309-00037-ip-10-236-191-2.ec2.internal.warc.gz"}
https://cs.stackexchange.com/questions/64742/1s-complement-addition-of-outer-carry-to-the-result
# 1's complement addition of outer carry to the result Let's take for example this addition: 3 + (-1). • 1 in binary is 001, and to obtain it's 1's complement counterpart we flip the bits. So it is: 110. • 3 in binary is 011. 011 + 110 = 1001 That first 1 which is in bold has to be added to the number formed by the last 3 bits as follows: 001 + 1 = 010 (2 in decimal). Why do we do the last step, adding that outer carry? Which is the logic behind? Computers represent numbers (and other things) imperfectly, sometimes for convenience, sometimes because there is no alternative. Complement arithmetic is easier to implement than sign-magnitude, but has a few quirks. We just live with them, there's nothing special here. In two's complement, the "extra one" is also there, but it is added during the change of sign, and because there is only one representation for zero, the system is a bit more elegant. Try to picture a base complemented number as representing what's missing in the number to reach $b^k$ ($b$ is the base, $k$ is the number of significant digits). The apparent "extra one" is due to the fact that what is counted here are the steps to reach the implicit "zero" in $b^k$. For instance, in ten's complement, with $k=3$: $-5 = 995=10^3\color{red}{-5}$ In nine's complement, $999$ is zero, so the extra one is not necessary in this case. It becomes necessary, though, when the operation causes overflow due to change of sign: $-5 = 994$ $-5+6 = 994+006 =\color{red}{1}000\color{blue}{+1} = 001$ • If the quirks are so obscure for an answer to be found, I could live without it, sure. – Iulian Barbu Oct 18 '16 at 9:12 • It is not obscure, it just has no special meaning. I've edited my answer, hope it satisfies your curiosity. – André Souza Lemos Oct 18 '16 at 13:53 • I think I understand your point. The addition implies some kind of circularity. I could also say, the newly formed number, whose representation exceed the k bits representation, k, must be kept intact and used in the following substraction, which will give the final answer: 1000 - 999 (maximum in 10 base, which also means 0) = 1. Instead of 1000, we could have any result from adding two numbers in base b complement, but the final result will be result - (the maximum number in b base, which is 0 in b's complement). – Iulian Barbu Oct 18 '16 at 17:41 • Exactly. There is a distant connection with modular arithmetic. – André Souza Lemos Oct 18 '16 at 17:47 I only understand two's complement Let's take two 3 bits numbers, X and Y Complementing the bits is like calculating 111-X (it is obvious as $\overline {b} = 1-b$, $\sum_{i=0}^n 2^i*(1-b_i) = \sum_{i=0}^n 2^i - \sum_{i=0}^n 2^i*b_i$...)
2020-03-30 22:47:21
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8327739238739014, "perplexity": 857.9282456907398}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370497309.31/warc/CC-MAIN-20200330212722-20200331002722-00141.warc.gz"}
https://nrich.maths.org/5573/solution
### Exploring Wild & Wonderful Number Patterns EWWNP means Exploring Wild and Wonderful Number Patterns Created by Yourself! Investigate what happens if we create number patterns using some simple rules. ### I'm Eight Find a great variety of ways of asking questions which make 8. ### Sending Cards This challenge asks you to investigate the total number of cards that would be sent if four children send one to all three others. How many would be sent if there were five children? Six? # Multiplication Square Jigsaw ##### Stage: 2 Challenge Level: Nicole from Eastwood Primary wrote to tell us how she tackled this jigsaw: First I got all the numbers down the side which were $1, 2, 3, 4, 5, 6, 7, 8, 9, 10$ and then got the numbers for the top which were the same and put them where they were meant to go! After that I timesed them and I then put the other pieces where they were meant to go! Several of you sent in a picture of your completed square. This one is from Kelsi, who goes to Mason Middle School:
2017-10-20 23:11:32
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.27593615651130676, "perplexity": 1903.5553210522917}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187824471.6/warc/CC-MAIN-20171020230225-20171021010005-00046.warc.gz"}
https://www.billmongan.com/Ursinus-CS173-Fall2021/Assignments/DNAMutations
# Assignment Goals The goals of this assignment are: 1. To manipulate strings using the substring method 2. To apply string indexing in the context of iteration 3. To select and implement appropriate test cases to check boundary cases and potential off-by-one errors 4. To memory map strings on paper and to validate that mapping using the step debugger Please refer to the following readings and examples offering templates to help get you started: # The Assignment Nucleic Acid (NA) Sequences like DNA and RNA are organic chains of phosphates and sugars that encode that encode genetic information about living things. These chains mutate in various ways through reproduction and as a result of cellular damage (for example, through UV from sunlight). We represent the four DNA nucleotide bases as (A)denine, (C)ytosine, (G)uanine, and (T)hymine, using the letters A, C, G, and T, respectively. An example DNA chain might be represented as AATCC. Here is an example RNA chain from Wikipedia (which uses additional nucleotide bases such as (U)racil, represented by the letter U). In this assignment we will restrict ourselves to DNA bases A, C, G, and T. Each of these bases has a complement, as follows: • The complement of A is T (and vice-versa) • The complement of C is G (and vice-versa) A substring of DNA bases (for example, ACTG) might appear later in the chain in the form of its complement, in reverse. That is, the complement of ACTG is TGAC (the A became a T, the C became a G, the T became an A, and the C became a G). In this example, ACTG is referred to as a sense, and its reversed complement is called an antisense. When we reverse TGAC, we obtain CAGT, and this is the antisense of the sense ACTG. A DNA chain with this sense/antisense pattern might be represented as ACTG…CAGT (with zero or more additional bases occurring in the middle). ## Part 1: Detecting Differences in NA Chains Given two strings, compute their percentage difference, character by character. If the lengths are different, the difference of their lengths count as differences as well. For example, “ACCG” and “ACTG” would differ by 25%, because 1 character is different out of the 4. “ACCG” and “ACCGT” would differ by 20%, because 1 character is different out of a possible 5 (the length of the largest string). This can be represented as a double (for example, 0.25 for 25%, and 0.2 for 20%). ## Part 2: Inserting a Chain Given an NA chain string, an NA subchain, and a position, insert the subchain into the chain. For example, insert("ACCG", "TT", 2) would return “ACTTCG” (recall that the indices start at 0, so the TT occupies position 3 and 4 in the string, which are indices 2 and 3. ## Part 3: Detecting an Antisense Next, write a function to search one NA chain for the existence of an antisense. As a parameter, pass the sense chain whose antisense you are searching for. Return a boolean if you find it. This function will need to call two other functions - one to compute the complement of the sense (the “antisense”), and next function to reverse that antisense chain string. If you do those two things first, detecting the antisense becomes a simpler matter of searching one string for another. ### An Example For example, public static boolean detectAntisense(String chain, String sense) should compute the complement of sense, reverse the variable sense, and then search for sense in chain. As a specific example, detectAntisense("ACATGCTATGTA", "ACAT"); should compute the complement of the sense ACAT (which is TGTA), and then reverse it to obtain the antisense (which is ATGT). Finally, return true if the antisense ATGT is found in the chain (which is ACATGCTATGTA) – and in this case, it is (so we return true)! You can use the string indexOf() method to search for one String inside another. If indexOf returns -1, meaning you did not find the antisense in the chain, return false. Otherwise, return true. ### Looping over a String To loop over every character of a string, you can loop as follows: for(int i = 0; i < str.length(); i++) { System.out.println(str.charAt(i)); // as an example, this prints every character in the string! } Or, to loop backwards (for example, to reverse a String), you can try this: String reversed = ""; for(int i = str.length() - 1; i >= 0; i--) { // why str.length() - 1? reversed = reversed + str.charAt(i); // append to the new String, one character at a time // from the end to the beginning of the original String } When manipulating string indices, it is very important to avoid off-by-one errors. For example, the substring(start, end) method starts at the index start but ends at the index end - 1, which can be confusing. Compounding the confusion is that string indices start counting at 0, like most arrays do. From the javadoc documentation for substring, we see that the substring(1, 5) of “smiles” is “mile”. If you are looping through your string, take care to stop searching not only prior to the end of the string (as you normally would), but at the point at which the substring you are searching for would run past the end of the string. For example, if you are searching for “ACC” in “CGCTG”, you could stop searching once you reach the “T”, because there is no room for “ACC” to fit there. Your loop terminating condition will be a value less than chain.length(), where chain is the sequence you’re searching within (i.e., “CGCTG”) - what should it be instead (hint: it is related to subchain.length(), where subchain is the sequence you’re searching for, i.e. “ACC”)? Prior to writing your code, draw a grid representing the string you’re searching for, and number the indices from 0 to the length of the string. Then draw a grid representing the chain you’re searching within (again, from 0 to the length of that chain). Step through the search procedure on paper so that you can see the indices that you’ll be working with. How do your indices relate to the lengths of the source and target subchains? This will provide you the answers you need to implement your algorithm! ### Part 3, Step 1: Computing the Complement of an NA Sense Given an NA chain string, return a new string representing its complement. This should be a string of the same length as the original; however, each character should be replaced with its complement. That is, all the A’s should be switched to T’s (and vice-versa), and all the C’s should become G’s (and vice-versa). So, the sense ACAT becomes TGTA. Question: why do you need to return a new string? Since strings are represented as arrays of characters, why can’t you manipulate the input string paramter directly? Even if you could, why do you think it is a good idea to create a new string anyway? ### Part 3, Step 2: Reversing the NA Sense Given an NA chain string, return a new string of the same length but with all the characters reversed. That is, “ATCG” becomes “GCTA”. You will reverse the complement of the sense that you computed in Step 1. So, in our example, TGTA (the complement of the sense ACAT) becomes ATGT. ### Part 3, Step 3: Compute an Antisense Using the two functions you just wrote to compute the complement and the reverse of a chain, write a function to compute the antisense of a given NA chain. To do this, compute the complement of the chain, and then reverse it. To do this, simply call the function you wrote for Step 1 to compute the complement, passing it the sense (not the original chain, but the sense you’re searching for; for example, if your primary function is detectAntisense("ACATGCTATGTA", "ACAT");, you would compute this on ACAT); then, call your Step 2 function to reverse the result. ### Part 3, Step 4: Find the Antisense Finally, determine if the antisense is located inside the chain. So, in our example, you would search for ATGT in ACATGCTATGTA, because the sense you are seeking is ACAT. You can use the chain.indexOf() to help you here - feel free to look it up to see how it works! ## Part 4: Removing a Chain Given an NA chain string and an NA subchain, remove all instances of the subchain from the chain. For example, remove("ACCGCC", "CC") would return “AG”. Note that Java Strings now have a method replaceAll that will do this for you. I definitely encourage you to use this. However, you may notice that these helper methods don’t always exist across many of the string operations we’re exploring here. So, there is significant value in practicing with string indexing. For full credit on this problem, implement your own replacement algorithm to accomplish this without calling a replace or replaceAll string method. However, it would be a good idea to write a unit test that compares your results to a call to replaceAll, and you should feel both free and encouraged to do so! So, to do this, you will need to read the String one character at a time, and determine whether the substring is equal to your subsequence. Append the character to your result String if they don’t match, and advance your loop counter if they do match (so that you skip those characters). Notice that I’m not asking you to remove the characters directly from the original String! Although you could do this, you will have to do some extra work to update your loop counter if you make a change to the String. Question: for the sequence ACCGCC, replacing the subsequence CC, what pairs of characters do you need to compare to CC? For example, you would first check AC, but then what are the rest, and what are their substring begin and end indices? List out each pair of characters, and their indices. These should give you a hint about how your loop will work. ## Part 5: Testing Write unit tests for each part of this assignment. You will need more than one test case per part. Your test cases should include boundary conditions (for example, inserting or removing to the beginning, middle, and end of a string). Your goal is to uncover errors with your test cases. Because of 0-indexing, and off-by-one errors, an algorithm can appear to work fine as long as you manipulate the middle of a chain, but then break when you are dealing with the beginning or end. Even I made an error while completing this assignment that was only uncovered when I tried to execute it at the end of a chain. These mistakes are very easy to make, and you should assume that your code contains these bugs. Think of testing like a game: your goal is to cause your software to break (that’s how we identify bugs to fix and make our software more robust!). Question: What test cases would you write in order to try to do that? ## Exporting your Project for Submission 1. Given a String "Computing", what beginning and ending indices would you pass to substring to retrieve the letters "put"? 2. Suppose you have a String x = "CS" and a String y = "173". How would you create a String z that combines the two strings to be CS 173 without re-typing CS or 173? 3. Suppose you have a String x = "hamburger", and you wish to change it to "cheeseburger". What calls to x.substring() would allow you to do this? ## Submission • Describe what you did, how you did it, what challenges you encountered, and how you solved them. • Please answer any questions found throughout the narrative of this assignment. • If collaboration with a buddy was permitted, did you work with a buddy on this assignment? If so, who? If not, do you certify that this submission represents your own original work? • Please identify any and all portions of your submission that were not originally written by you (for example, code originally written by your buddy, or anything taken or adapted from a non-classroom resource). It is always OK to use your textbook and instructor notes; however, you are certifying that any portions not designated as coming from an outside person or source are your own original work. • Approximately how many hours it took you to finish this assignment (I will not judge you for this at all...I am simply using it to gauge if the assignments are too easy or hard)? • Your overall impression of the assignment. Did you love it, hate it, or were you neutral? One word answers are fine, but if you have any suggestions for the future let me know. • Any other concerns that you have. For instance, if you have a bug that you were unable to solve but you made progress, write that here. The more you articulate the problem the more partial credit you will receive (it is fine to leave this blank). # Assignment Rubric Description Pre-Emerging (< 50%) Beginning (50%) Progressing (85%) Proficient (100%) Algorithm Implementation (40%) The algorithm fails on the test inputs due to major issues, or the program fails to compile and/or run The algorithm fails on the test inputs due to one or more minor issues The algorithm is implemented to solve the problem correctly according to given test inputs, but would fail if executed in a general case due to a minor issue or omission in the algorithm design or implementation A reasonable algorithm is implemented to solve the problem which correctly solves the problem according to the given test inputs, and would be reasonably expected to solve the problem in the general case Test Cases (20%) Testing was performed outside of the unit test framework, or not performed at all Trivial test cases are provided in a unit test framework Test cases that cover some, but not all, boundary cases and branches of the program are provided Test cases that cover all boundary cases and branches of the program are provided Code Quality and Documentation (30%) Code commenting and structure are absent, or code structure departs significantly from best practice, and/or the code departs significantly from the style guide Code commenting and structure is limited in ways that reduce the readability of the program, and/or there are minor departures from the style guide Code documentation is present that re-states the explicit code definitions, and/or code is written that mostly adheres to the style guide Code is documented at non-trivial points in a manner that enhances the readability of the program, and code is written according to the style guide, and each function contains relevant and appropriate Javadoc documentation Writeup and Submission (10%) An incomplete submission is provided The program is submitted, but not according to the directions in one or more ways (for example, because it is lacking a readme writeup or missing answers to written questions) The program is submitted according to the directions with a minor omission or correction needed, including a readme writeup describing the solution and answering nearly all questions posed in the instructions The program is submitted according to the directions, including a readme writeup describing the solution and answering all questions posed in the instructions Please refer to the Style Guide for code quality examples and guidelines.
2022-11-30 07:58:23
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.41103413701057434, "perplexity": 1013.0135030750791}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710733.87/warc/CC-MAIN-20221130060525-20221130090525-00265.warc.gz"}
http://www.adrian.idv.hk/2005-04-08-latextricks/
# Text, style, and grammar Try Queequeg ### Prevent hyphenation at once \hyphenpenalty=5000 \tolerance=1000 ### Using limit-mode in summation Force limits to place at above and below (useful for inline equation): \sum\limits_{x=1}^{n}\frac{1}{n} ### Equal by Definition i^2 \stackrel{\mathrm{def}}{=} -1 ### Self-defined function in equation mode To make foo appears like \sin or \exp, use \operatorname{foo}(x) or to save space, define the following in preamble: \newcommand{\foo}{\operatorname{foo}} ### Modifying counters Counters available: part, chapter, section, subsection, subsubsection, paragraph, subparagraph, figure, table, equation, enumi, enumii, enumiii, enumiv, footnote, mpfootnote Setting counter value: \setcounter{page}{1} Setting style: \part_style{arab} ### Font size To adjust font size by “zooming”, use this: \usepackage{scalefnt} Normal size - \scalefont{2}Linear double - \scalefont{0.5} Normal - \scalefont{1.414}Double size (area) - \scalefont{0.707} Normal {\scalefont{2}Double size} - Normal again If the scaling is in terms of “levels”, use this: \usepackage{relsize} Normal size - \relsize{1}Linear double - \relsize{-1} Normal - \relsize{2}Double size (area) - \relsize{-2} Normal {\relsize{1}Double size} - Normal again ### Complicated Matrix For creating matrix (in math mode) but with part of the stuff outside the bracket, there are several ways to do. First one is to use Knuth’s \bordermatrix macro, like this one (copy from Cambridge’s web) \begin{math} \bordermatrix{&a_1&a_2&...&a_n\cr b_1 & 1.2 & 3.3 & 5.1 & 2.8 \cr c_1 & 4.7 & 7.8 & 2.4 & 1.9 \cr ... & ... & ... & ... & ... \cr z_1 & 8.0 & 9.9 & 0.9 & 9.99 \cr} \end{math} The above will have the matrix with only the number and those letters on the top and left outside the matrix. If you want not the topmost row and leftmost column outside the bracket, you can use \bordermatrix*, \begin{math} \bordermatrix*{ 1.2 & 3.3 & 5.1 & 2.8 & b_1 \cr 4.7 & 7.8 & 2.4 & 1.9 & c_1 \cr ... & ... & ... & ... & ... \cr 8.0 & 9.9 & 0.9 & 9.99 & z_1 \cr a_1 & a_2 & ... & a_n & \cr} \end{math} which, in turn, show the rightmost column and the bottom row outside the bracket. However, if you want a LaTeX version instead of plain TeX, one may use K. Border’s kbordermatrix package. The documentation is here. Furthermore, if you want to show the matrix/determinant operations (i.e. the arrows showing which row is multiplied by what and add to which row, etc.), you may found the gauss.sty package useful. ### Collection of math mode tricks See the documentation of mathmode, written by Herbert Voss with 130+ pages. Very detail and contains almost everything you need for typesetting equations. # Fonts ### Beautiful CM Fonts To use a much better CM font for LaTeX, get cm-super package (optionally for X11: cm-super-x11) in Debian. The add the lines in your LaTeX document preamble: \usepackage{type1ec} \usepackage[T1]{fontenc} Then your output would include no bitmap CM fonts. ### Times font for everything The times package can only make you have times font for main text but not for equations. To make everything including equation to use times font, call this: \usepackage{mathptmx} \DeclareSymbolFont{largesymbols}{OMX}{cmex}{m}{n} The second line make the big symbols like summation to use computer modern font instead of times which is bigger and look nicer. ### Package bm for Bold Greek For getting a bold Greek letter, \mathbf{\alpha} doesn’t work. We have to put \usepackage{bm} in the preamble and use $\bm{\alpha}$ in math mode. ### mathcal style Sometimes, we may use \mathcal or \cal in LaTeX for a calligraphic font. However, what is happening may not be what you expected. In normal case, the calligraphic font would be the one in cmsy.pfb If you are using mathptmx package, the font will be a script font. If you loaded with eucal package, the font will be the “Euler Script” font, which looks like an upright version of cmsy.pfb. So if you are using mathptmx package, but want to get back the old script font (which is bold and easier to read), then issue the following command in the peamble: \DeclareMathAlphabet{\mathcal}{OMS}{cmsy}{m}{n} # Figures ### Side-by-side figures Make two figures side-by-side, use this: \begin{figure}[hbtp] {\scriptsize \hbox{ \input{plot-e-alp09} \input{plot-i-alp09} } \hbox{\hspace{38mm}\hbox{(a)\hspace{83mm}(b)}} } \caption{(a) elastic and (b) inelastic utility vs $\alpha$ with $\rho$=0.95} \end{figure} Alternative way: using minipage \begin{figure}[hbtp] \hfill \begin{minipage}[t]{.45\textwidth} \epsfig{file=figure1.eps, scale=0.5} \caption{figure 1} \end{minipage} \hfill \begin{minipage}[t]{.45\textwidth} \epsfig{file=figure2.eps, scale=0.5} \caption{figure 2} \end{minipage} \hfill \end{figure} Yet another way: using subfigure package \usepackage{subfigure} \begin{figure}[htbp] \mbox{ } \caption{I like these!} \end{figure} ### Four figures in a square \begin{figure}[htbp] \mbox{ \subfigure[Toyota]{\scalebox{0.3}{\input{celica.pstex_t}}} } \mbox{ \subfigure[Subaru]{\scalebox{0.3}{\input{Outback.pstex_t}}} } \caption{I like these!} \end{figure} # Spacing ### Setting paper margin \usepackage{geometry} \geometry{verbose,a4paper,tmargin=1.75cm,bmargin=2cm,lmargin=2cm,rmargin=2cm,footskip=1cm} Alternative method (specifying paper size and print region only) \usepackage[vcentering,dvips]{geometry} \geometry{papersize={170mm,240mm},total={124mm,185mm}} ### Line spacing \renewcommand{\baselinestretch}{1.4} ### Reduce space around captions Remove the extra space between figure and captions, as well as the space between two adjacent figure blocks: \setlength{\abovecaptionskip}{0pt} \setlength{\floatsep}{0pt} ### Removing large margins at print by pstops If you got a doc with A5 content centered at an A4 paper, use this for two-pages-on-one-sheet: pstops -pa4 '2:0L@1(25.35cm,-3.075cm)+1L@1(25.35cm,11.775cm)' onepage.ps twopages.ps If it is in Springer LNCS format, use this: pstops -pa4 '2:0L@1(26.6cm,-3.075cm)+1L@1(26.6cm,11.775cm)' onepage.ps twopages.ps But sometimes, it is a bit larger than A5, the following is what I will use (I got this by trial and error): pstops -pa4 '2:0L@.87(24cm,-1.5cm)+1L@.87(24cm,13.35cm)' onepage.ps twopages.ps # Presentations ### BibTeX Using BibTeX. Put these at the end of the document: \bibliographystyle{ieeetr} \bibliography{fair} then run LaTeX by: $latex document # To generate *.aux$ bibtex document # Base on *.aux to generate *.bbl $latex document # Learn about the existence of *.bbl$ latex document # Regenerate the document then you will have to dvi file ### Presentation using seminar class Template for “seminar” slides: \documentclass[A4,16pt]{seminar} \begin{document} \begin{slide} \newslide \section{Hello} world? \begin{itemize} \item Here \item There \end{itemize} \end{slide} \end{document} # Other Look for the fancy header package \usepackage{fancyhdr} ### Single column ACM SIG Proceedings My way to make a double column style template use for single column mode is use the following prologue: \documentclass[a4paper]{sig-alternate} \makeatletter %Remove ACM copyright notice at the lower left corner %Make the ACM template into single column \renewcommand{\twocolumn}[1][1]{\onecolumn #1} \makeatother ### Modifying section headers (as well as others) For example, section numbers should be in romans instead of arab: \renewcommand\thesection{\roman{section}} also like this: \renewcommand{\thefigure}{\thechapter.\arabic{figure}} ### Acknowledgment as footnotes You may want footnotes without numbers or symbols. Here is the way: \def\blfootnote{\xdef\@thefnmark{}\@footnotetext} Remember to enclose the definition block with \makeatletter and \makeatother. Source: http://help-csli.stanford.edu/tex/latex-footnotes.shtml ### Tabular with different justifications Do in this way: \begin{tabular}{p{1cm}p{3cm}} ROW 1 & left justified ROW 2 & \makebox[3cm]{centered} ROW 3 & \makebox[3cm][r]{right justified} \end{tabular} ### Parboxes Make a box of text in paragraph mode as a “character” in a line: \parbox[b]{3cm}{blah blah blah} where b is for bottom-aligned (choice: c, t) and 3cm is the width. Similar function can be achieved by minipage: \begin{minipage}{3cm} blah blah blah \end{minipage} If you want framed version, enclose them with \fbox{...} ### Beautify tables Use the booktab package by Simon Fear. The way to make tables beautify (and look professional) is to use as little decoration as possible, e.g. don’t use vertical lines. The table elements shall have their own common region and aligned to make it sound like a table. This is what we called the Gestalt Principle: things that are seen as forming a known shape are seen as being together. ### Spliting PostScript Enlarging an A4 document into A3 size, with two sheets of A4 output make up one page of A3: # psresize -pa3 -Pa4 a4document.ps a3.ps # pstops -pa4 '1:0@1L(42cm,0)' a3.ps a4-upper.ps # pstops -pa4 '1:0@1L(21cm,0)' a3.ps a4-lower.ps In the above, (42cm,0) means to shift the sheet left 42cm and up 0cm. It is required because you rotated the sheet left a right angle (origin is on the lower left corner). Hence I move the pictures to fit it into an A4 sheet. If your printer cannot create zero-margin, you may need to change the shift amount to cover those lost margin.
2018-08-19 19:23:49
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8646730780601501, "perplexity": 6567.000683538331}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-34/segments/1534221215284.54/warc/CC-MAIN-20180819184710-20180819204710-00105.warc.gz"}
https://cs.stackexchange.com/questions/133052/what-exactly-are-ancestors-in-dag
What exactly are ancestors in DAG I am new to graph theory and confused with ancestors definition in DAG(or in general graph). For example in the following DAG 1--->2--->3<---4<---5 If I start DFS from 1 vertex first then path covered is 1--2--3. Then next if I start DFS from vertex 5, then the path covered is 5--4. Vertex 3 is not visited again. So visited order is 1 2 3 5 4. What about the ancestors of 3. Are they only 1,2 or 4,5 also ? what about ancestor Ancestor of 4. Is it only 5 or 1,2 also as they were also visited before visiting 5 ?
2021-04-10 21:14:53
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6274064779281616, "perplexity": 1292.065620295908}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038059348.9/warc/CC-MAIN-20210410210053-20210411000053-00480.warc.gz"}
https://solvedlib.com/n/find-the-value-of-5-6-4-7,8260739
# Find the value of $(5 ! 6 !) /(4 ! 7 !)$. ###### Question: Find the value of $(5 ! 6 !) /(4 ! 7 !)$. #### Similar Solved Questions ##### In order to estimate the mean amount of time computer users spend on the internet each... In order to estimate the mean amount of time computer users spend on the internet each month, how many computer users must be surveyed in order to be 90% confident that your sample mean is within 15 minutes of the population mean? Assume that the standard deviation of the population of monthly time ... ##### Any help would be appreciated. System Disorder ACTIVE LEARNING TEM STUDENT NA Юст со CES bacoes... Any help would be appreciated. System Disorder ACTIVE LEARNING TEM STUDENT NA Юст со CES bacoes Mellitus: Manifestations VIEW MODULE CHAPTER 15 of Hypoglycemia Alterations in Health (Diagnosis) Health Promotion and Disease Prevention Pathophysiology Related to Client Pr... ##### QueslonFelpAbrine solution of salt fows at a constant rate of 5 Lmin into large tank that initially held 100 stured of brine solution which was the tank is kept well dissolved 0.4 kg of salt The solution inside and flows out of the tank at the same rale. If the tank after min When will the concentration concentration of salt in the brine entering the tank Is 0.04 kg/L, determine the mass of salt in the of salt in the tank reach 03 kg/L? Determine the mass of salt in the tank atter Minmass 4-36â‚ queslonFelp Abrine solution of salt fows at a constant rate of 5 Lmin into large tank that initially held 100 stured of brine solution which was the tank is kept well dissolved 0.4 kg of salt The solution inside and flows out of the tank at the same rale. If the tank after min When will the concentr... ##### Powers of Tangents and Secants Evaluate the integrals in Exercises $33-50$ . $\int 8 \cot ^{4} t d t$ Powers of Tangents and Secants Evaluate the integrals in Exercises $33-50$ . $\int 8 \cot ^{4} t d t$... ##### Let Wlx) = W1 = Yz ' Yakx) where Y112, Yz -5x,Y3 = -3x2 Find W(T):Select one: Oa. 366 b. 352 c None Od, 360 Oe: 360 Let Wlx) = W1 = Yz ' Yakx) where Y1 12, Yz -5x,Y3 = -3x2 Find W(T): Select one: Oa. 366 b. 352 c None Od, 360 Oe: 360... ##### The demand curve facing a perfectly competitive firm is Select one: a. the same as its... The demand curve facing a perfectly competitive firm is Select one: a. the same as its average revenue curve, but not the same as its marginal revenue curve. b. the same as its average revenue curve and its marginal revenue curve. c. the same as its marginal revenue curve, but not its average revenu... ##### Timpco, a retailer, makes both cash and credit sales (i.e., sales on open account). Information regarding... Timpco, a retailer, makes both cash and credit sales (i.e., sales on open account). Information regarding budgeted sales for the last quarter of the year is as follows: Cash sales Credit sales Total October $120,000 120,000$ 240,000 November $99,000 118,800$ 217,800 December $97,000 1 06,700$ ... ##### Uabnoun BacieriaUahrun Gram belcca dlCalaKise otnualy {Ctnent mnud 36No(a SLain 1c+) @uasc TCtMsa TcsFmannio (cmontaman NO Uabnoun Bacieria Uahrun Gram belcca dl CalaKise ot nualy {Ctnent mnud 36 No (a SLain 1c+) @uasc TCt Msa TcsF mannio (cmontaman NO... ##### Problem 1_Rema(49 points) (a) Find the coordinate vector of €with respect to the ordered basisw-{[J [8] [}R::[x]E(b) Let F; be the ordered basis of R? given byE = {[8] [3]}and let Fz be the ordered basis given byB = {[1] [3}}Find the transition matrix PFsFi such that [c]n Pr,Flz]F; for all € in R2:Note_ You can earn partial credit on this problem_ Problem 1_ Rema (49 points) (a) Find the coordinate vector of € with respect to the ordered basis w-{[J [8] [} R:: [x]E (b) Let F; be the ordered basis of R? given by E = {[8] [3]} and let Fz be the ordered basis given by B = {[1] [3}} Find the transition matrix PFsFi such that [c]n Pr,Flz]F;... ##### 3. Assume you have a set of data that does not have any low outliers nor... 3. Assume you have a set of data that does not have any low outliers nor any high outliers. Fill in the blanks: a. The lower whisker starts at the and goes to the b. The lower line of the Box is the c. The middle line of the Box is the d. The upper line of the Box is the e. The upper whisker starts ... ##### When sequencing DNA by chain termination, the terminator nucleotide   A) is missing a 2' OH B)is... When sequencing DNA by chain termination, the terminator nucleotide   A) is missing a 2' OH B)is missing a 5' phosphate C)is missing both 2' and 3' OH... ##### What is the IUPAC name for the compound shown? W 9 What is the IUPAC name... What is the IUPAC name for the compound shown? W 9 What is the IUPAC name for the compound shown? OH JUPAC name:... ##### HW 13 Image formation in mirrors and lenses Begin Date: 11/14/2019 12:01:00 AM -- Due Date:... HW 13 Image formation in mirrors and lenses Begin Date: 11/14/2019 12:01:00 AM -- Due Date: 11/20/2019 11:59:00 PM End Date: 12/9/2019 11:59:00 PM (20%) Problem 1: An object is located a distance do = 6.1 cm in front of a concave mirror with a radius of curvature r = 19.3 cm. *33% Part (a) Write an ... ##### If Volume of a cone is 90 cc units then volume of cylinder isa, 270 cc unitsb. 30 cc units180 cc unitsd,90 cc unitsConsider cone with radius 5 cm. and height 12 cr_ dv then find (where V is the volume ofthe cone) Or4011201c,300TTd.10OT~Zx Find fx where f(x,v) = 1-2v739 22e-Zx (1 -2v) 2e -Zx fx (1 -2v) ~2e- (1-2v) If Volume of a cone is 90 cc units then volume of cylinder is a, 270 cc units b. 30 cc units 180 cc units d,90 cc units Consider cone with radius 5 cm. and height 12 cr_ dv then find (where V is the volume ofthe cone) Or 401 1201 c,300TT d.10OT ~Zx Find fx where f(x,v) = 1-2v 739 22e-Zx (1 -2v) 2e -... ##### 17 Indicate , using curved arrows , the mechanism for the following transformation_ (14 pts: )HzSO4Ho_ 17 Indicate , using curved arrows , the mechanism for the following transformation_ (14 pts: ) HzSO4 Ho_... ##### It's time for your final project, and time to show off your network design abilities. Use... It's time for your final project, and time to show off your network design abilities. Use the information you've saved from previous projects to build upon for this final project. You're going to use the knowledge you've gained thus far to design an enterprise network containing requ... ##### Solve 8= n/2.3 ? Solve 8= n/2.3 ?... ##### Question 3 (30 marks) A- The following transactions occurred between the home office and the Mason... Question 3 (30 marks) A- The following transactions occurred between the home office and the Mason Branch of Smaldino Company: Cash of $1,000 was forwarded by the home office to Mason Branch. Merchandise with a home office cost of$60,000 was shipped by the home office to Mason Branch. Equipment ... G1ive GgwaHien (/s) fe In e (-Ilow/n] Con ov 4y Svovp (2.3.4) C0 m pom nd L $/ nj 6 cov(k+ m n6 Ilowin5 Y*i anshcr Ccixele Csnpcene( Which cli+ Gllovins 4ndee {jo tke Wkick Na(N i^ DMSO Kache^ U th Sulsliluken (carcl ~eet MSw y faslcv Kak (Il;ch_Fx (9) (cH;) (HRv CHsB, L. (CVs) .€B... 1 answer ##### 1. Write a class named Employee that holds the following data about an employee in attributes:... 1. Write a class named Employee that holds the following data about an employee in attributes: name, ID number, department, and job title. (employee.py) 2. Create a UML diagram for Employee class. (word file) 3. Create a program that stores Employee objects in a dictionary. Use the employee ID numbe... 1 answer ##### Imagine we have two diploid species which diverged 10 million years ago. On sequencing part of... Imagine we have two diploid species which diverged 10 million years ago. On sequencing part of an autosome where we think most of the mutations are neutral we find that they are 20% divergent. if we estimate that both species go through 4 generations per year, what is the mutation rate per site per ... 5 answers ##### 1 0 8 0 3470 1 814areeqoait 3.056 what numbe 10 must 03056 be put in each 3 305.6 box 50 that 1003 1 8statements 1 0 8 0 3470 1 8 14 areeqoait 3.056 what numbe 10 must 03056 be put in each 3 305.6 box 50 that 100 3 1 8 statements... 5 answers ##### 2. [12 marks] You are given:The number of claims made by an invividual insured in & year has Poisson distribution with mean 0;The prior distribution for 0 isT(0) 402 e-20 0 > 0.(iii) Three claims are observed in year 1, no claims in year 2 and two claims in year 3_This is what you need to do:(1a) Find Buhlmann' estimate for the number of claims in year 4 (1b) Without finding Bayesian estimate; show that we have exact credibility (that Bayesian estimate must be equal to Buhlmann' 2. [12 marks] You are given: The number of claims made by an invividual insured in & year has Poisson distribution with mean 0; The prior distribution for 0 is T(0) 402 e-20 0 > 0. (iii) Three claims are observed in year 1, no claims in year 2 and two claims in year 3_ This is what you need ... 5 answers ##### For the circuit shown; the emf 81 (in V) is:2 A20 Fw3 Ar5 A<40603668463860 For the circuit shown; the emf 81 (in V) is: 2 A 20 Fw 3 Ar 5 A <40 60 36 68 46 38 60... 5 answers ##### The weekly sales of Honolulu Red Oranges is given by 840 20p, Calculate the price elasticity of demand when the price is 530 per orange (yes,$30 per oranget) HINT [See Example 1.]Interpret your answer The demand is going%fa per 1% increase in price at that price level:Also calculate the price that gives maximum weekly revenue_Find this maximum revenueNeed Help?Raad Il The weekly sales of Honolulu Red Oranges is given by 840 20p, Calculate the price elasticity of demand when the price is 530 per orange (yes, \$30 per oranget) HINT [See Example 1.] Interpret your answer The demand is going %fa per 1% increase in price at that price level: Also calculate the price th... ##### (a) Given that P(A) =}, P(B) =} P(C) = and P(BlA) = % If A ad € are mutually exclusive, determine the bounds of P(Bnc)marks)(b) Let X be Poisson random variable with E[X] = In 2 The following students make the following deductions regarding E[e-Xj:Aaron: Ele-X] <Belinda: E[e-X] s; Ceticia; E[e ~12; Daniel: Fle-X] = 2Elia: E[e-x]= 3 Farzana: Ele -*] 2 Who among these six students make the correct deduction? Justify your answer:marks)Let X~N(2,9) ad Y~N(3,16). X and Y are known to be independ (a) Given that P(A) =}, P(B) =} P(C) = and P(BlA) = % If A ad € are mutually exclusive, determine the bounds of P(Bnc) marks) (b) Let X be Poisson random variable with E[X] = In 2 The following students make the following deductions regarding E[e-Xj: Aaron: Ele-X] < Belinda: E[e-X] s; Cetic...
2022-08-10 02:24:57
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 3, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6760126948356628, "perplexity": 5970.830270726701}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571097.39/warc/CC-MAIN-20220810010059-20220810040059-00245.warc.gz"}
http://www.myoak.info/post/71/
# Machine Learning - Online Learning and Recommender System ## Online Learning The online learning setting allows us to model problems where we have a continuous flood or a continuous stream of data coming in and we would like an algorithm to learn from that. Today, many of the largest websites use different versions of online learning algorithms to learn from the flood of users that keep on coming to, back to the website. Specifically, if you have a continuous stream of data generated by a continuous stream of users coming to your website, what you can do is sometimes use an online learning algorithm to learn user preferences from the stream of data and use that to optimize some of the decisions on your website. Online learning is very similar to stochastic gradient descent algorithm, only instead of scanning through a fixed training set, we're getting one example from one user, learning from that example and discarding it and moving on. One advantage of online learning is also that if you have changing pool of users or things you're trying to predict are slowly changing like your user taste is slowly changing, it can slowly adapt your learned hypothesis/model to whatever the latest sets of user behaviors are like. For example, there are some changes in economy, users are getting more price-sensitive, this algorithm can help adapt to user preferences and keep track of changing population of users. ### Shipping service example In this example, we want a learning algorithm to help us to optimize what is the asking price that we want to offer to our users so that get can get more profits. What the algorithm learns is probability that a user want to buy it given a set of user properties as features. I.e. $p(y=1|x;\theta )$ The features $x$ capture properties of users. For example, the features can be the origin/destination and asking price(the price we offer based on the model learned in the last round and the new user's properties, i.e. input the new user's properties as features to the model in last round then return the asking price so that the new user has the largest chance to buy it. Then record whether the new user bought it or not and include the price in this around. Then we get a new example $(x,y)$). Next, we use the new user's example $(x,y)$ to update the model in the last round and use the updated model to estimate the asking price for the further new user. We must know that offering the asking price and learning after offering the price are two separate processes(one is inferring, the other is learning). Expressing learning process (e.g. logistic regression) in pseudocode is: Repeat forever { Get $(x,y)$ corresponding to user Update $\theta$ using $(x,y)$: ${ \theta }_{ j }:={ \theta }_{ j }-\alpha ({ h }_{ \theta }(x)-y)\cdot { x }_{ j }$ Discard $(x,y)$ } Our website keeps on staying up. $\theta$ are parameters of the model, $j$ is the number of features, $j=0,1,...,n$, $\alpha$ is the learning rate. If the website has enough continuous stream of data, we can discard each example after the learning process. Otherwise, we may need to save all the data. We want to apply learning algorithm to give good search listings to a user (In this example, it is showing user the 10 phones that they're most likely to click on). The principles are nearly the same as the previous shipping example. What is different is the 10 results are actually 10 pairs of examples $(x,y)$ which may need 10 steps of gradient descent. The problem of learning this is called predicted CTR (abbr. click through rate). Other examples are like choosing special offers to show a user, a customized selection of news articles, product recommendation, etc. I'll talk about recommender system in the next section. ## Recommender System Recommender system is widely used in many companies (e.g. Amazon, Netflix, etc) to promote sales. In machine learning, the features you choose will have a big effect on the performance of your learning algorithm. For some problems, there are algorithms that can try to automatically learn a good set of features for you rather than using hand-coded features. This is what a recommender system can do. For example, you are a company which sells or rents out movies. You let your users rate movies using one to five stars (we use 0 to 5 for easier mathematical computation). In the realistic setting, each of your users may have rated only a minuscule fraction of movies. So the recommender system problem is given this dataset $r(i,j)$ and $y(i,j)$ and try to predict the missing ratings (what the question marks should be). Then we can look at the movies that the user has not yet watched and recommend new movies to that user. ### Content-based recommendations This algorithm is called content-based recommendations because we assume that we have available features $({x}_{1},{x}_{2},...)$ for the different movies. These features capture what is the content of these movies, like how romantic is this movie, how much action is in this movie, etc. This algorithm based on the assumption of the features, but for many movies, we don't actually have such features, so we need an approach which is called collaborative filtering to automatically learn features for itself. ### Collaborative filtering Assume that we already know the how much of each user like romantic movies and action movies (i.e. parameters $\theta$). Then we can use $\theta$ to calculate features of movies $x$. So combining the problem we talked about earlier, should we calculate features first or parameters first? This is a chicken and egg problem. What we can do is randomly guess some value of parameters $\theta$, then learn features $x$ of different movies. Then use $x$ to better estimate parameters $\theta$ and just keep iterating and finally it causes your algorithm to converge to reasonable sets of both features and parameters. This is possible because only because each user rates multiple movies and hopefully each movie is rated by multiple users To improve this algorithm, we can simultaneously learn parameters and features of different movies. In the above objective cost function, if you hold features $x$ as constant, and just minimize respect to parameters $\theta$, you solve the first cost function. If you do the opposite, it becomes equivalent to the second cost function. In this new way, we don't need to do the convention of adding feature ${x}_{0}$, so we have $x\in { \mathbb{R} }^{ n }$ and $\theta \in { \mathbb{R} }^{ n }$. So finally, if the user $j$ hasn't rated movie $i$ yet, we can use ${ { (\theta }^{ (j) } })^{ T }({ x }^{ (i) })$ to calculate the rating of that movie. Note: The first step serves as symmetry breaking (similar to the random initialization of a neural network's parameters) and ensures the algorithm learns features ${x}^{(1)},{x}^{(2)},..., {x}^{({n}_{m})}$ that are different from each other. ### Low rank matrix factorization In this section, first to do vectorization of the collaborative filtering. Giving another name for this vectorized collaborative filtering is Low-Rank Matrix Factorization. We can use the learned features to find related movies. To speak generally, for example, a user has recently been looking at one product, are there other related products that you could recommend to this user? It's actually pretty difficult to go into the learned features come up with a human understandable interpretation of what these features really are. But it doesn't impact the algorithm and the results. However, in practice, there is one situation where there is a user that has not rated any movies. Seen from the following cost function, there is no value $r(i,j)$ for this user that is equal to 1. So only the regularization part has some impact on it, which will finally make ${\theta}^{(5)}$ a zero vector and make all the ratings of the user will also be zero. To solve this problem, we can use Mean Normalization as a pre-processing step. Mean normalization computes average ratings of each movie, then subtracts off the mean ratings to make each movie have an average rating of 0. We use the normalized data as training data to learn $\theta$ and $x$ and calculate ${ { (\theta }^{ (j) } })^{ T }({ x }^{ (i) })$ and add the mean back ${ { (\theta }^{ (j) } })^{ T }({ x }^{ (i) })+{\mu}_{i}$. That is to say that if the user hasn't rated any movies, what we're going to do is to predict for each of movies of this user the average rating that the movie got. Similarly, for the movies that don't have any ratings, we can also average each column and normalize to have a mean of 0, but this situation is less important since maybe we don't need to recommend the movie without any ratings to users.
2019-07-18 05:09:15
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4527747929096222, "perplexity": 604.5822946825506}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195525500.21/warc/CC-MAIN-20190718042531-20190718064531-00350.warc.gz"}
https://dwwiki.mooo.com/w/index.php?title=Talk:Main_Page&oldid=25949
# Talk:Main Page Old content for this page has been archived, and can be found at the following locations. Main Page/Archive 1 Main Page/Archive 2 Main Page/Archive 3 Main Page/Archive 4 Main Page/Archive 5 Main Page/Archive 6 ## Item Shop Inventories I have noticed that there is not a comprehensive list of item shops (as far as I can find) and this could be a useful thing to put on the Wiki. This would help people find particular items which they want or need and is also a large but relatively easy project for me to research since I'm kind of a newbie. Can someone with some experience in adding to the Wiki tell me if this is a good idea? --Thistleryver 04:54, 14 April 2011 (UTC) While it is something that would be worthwhile, especially if items are browse appraised (unfortunately you have to do it one item at a time), but since but Kefka has already made a fine database here I guess I don't feel an urgency to redo it all. --Frazyl 05:18, 14 April 2011 (UTC) ## Judge As of this announcement, judge now works in a completely different way. This means that: • All of the existing weapons rating data is invalid. • We can't use the judge-method to determine a weapon's rating either. There will be a short period of silence, followed by a long period of swearing, followed by changes to several templates. Please bear with me. --Chat 19:08, 10 January 2011 (UTC) OK, I now think we've got enough research on Judge to start putting weapon ratings back in. A word of caution, however: Please don't add any weapon ratings in unless you: 2. Have a JIR of at least 220 in the appropriate weapon class (If you don't know what a JIR is, go back to step 1). Or your information won't be accurate. --Chat 19:46, 19 January 2011 (UTC) ## Spam We seem to have a new breed of spammers (you can recognize them by their usernames, which comprise of a name followed by a random sequence of numbers) that are able to defeat the captcha to create accounts and then insert spam URLs into pages. Because they're using logged-in accounts, we can't use semi-protection to defend against them. There are, however, a few things we can do: • At the moment, all user accounts are immediately promoted to auto-confirmed user accounts - what this means is that they're immediately able to edit semi-protected pages. We could change the default settings for auto-confirmation (would require Drakkos to do, but should be trivial) such that user accounts must make a minimum number of edits and exist for a minimum number of days before they are autoconfirmed - this is what most other wikis do, and would put spammers back at the mercy of our protection-settings. • Of interesting note is that all the pages targetted by this type of spammer have 'armour' somewhere in their title (presumably to create some sort of association with 'health shield' or some such). If they start getting annoying, we could just full-protect pages with 'armour' in the title for a while. Obviously, this has the downside of stopping anyone else from editing them too, however. • I'm not sure whether there are known vulnerabilities with reCaptcha that have been patched in a more recent version; if so then perhaps a simple upgrade will resolve matters. --Chat 00:11, 6 January 2011 (UTC) Changing auto confirmed settings would be the best option I reckon. Rehevkor 14:59, 6 January 2011 (UTC) I agree. Though I wouldn't be opposed to temporarily full-protecting those pages if it kicks up again (seems to have stopped now, so maybe it was a one-off thing, knock wood), since they're not exactly busy pages. --Ilde 19:00, 6 January 2011 (UTC) I just had to revert the armour page again :( Zexium 04:03, 8 January 2011 (UTC) and now the category:armour page .... I've tried something that assumes the script will abort if it finds an apparently already spammed page ... there's an html comment containing what looks like the spamlink as a wikilink - fingers crossed Zexium 05:10, 8 January 2011 (UTC) Based on how spambots have behaved in the past, I really really doubt they check the page first before spamlinking - it's just not worth their time/coding effort to do so over the blunt-force approach. There've been several pages where spambots merrily competed with themselves to rewrite links in the past, before someone got around to protecting them. In any case, I don't like the idea of putting their links into comments, as: • It feels too much like doing their job for them, and would achieve their goal if someone naive/stupid started editing an article and felt like following the strange, commented out link. • Let's say it worked, and the spambot had a script that realized the page was already spammed. I think it's possible the spambot would then have logic that says 'Aha, some of my spam still exists here! Here's a wiki that's not closely monitored, where spam/vandalism aren't cleaned up very often. Focus spam effort on this wiki!' • Note that the spam links appear to have some kind of revision ID in them. I assume that's going to get changed after a while (to make naive string-matching based anti-spam systems fail to spot it), at which point all the commented out links will be useless. I'm removing the commented out link for now. If we get much more spam, I'll try locking down the armour pages for a while. --Chat 10:54, 8 January 2011 (UTC) Another idea if the update of reCaptcha doesn't stop spammers and they make it through the time limit and number of edits would be to make accounts not auto-confirm themselves automatically. Instead, having admin users approve new users manually either on request or regularly (by checking if there's a discworld user by that name?) would probably frustrate spammers enough that they move on elsewhere. I'm not sure which is least annoying to actual new users, to have to go through a testing period or ask someone through the mud. For catching spammers it would be spotting bad small edits vs approving new users regularly. --Frazyl 20:02, 6 January 2011 (UTC) ## Archiving The main talk page was getting pretty big, so I've archived the old bits off. The process for doing this is: • Archive talk pages when you start seeing the "WARNING: This page is X kilobytes long; some browsers may have problems editing pages approaching or longer than 32kb. Please consider breaking the page into smaller sections." message when editing them. • Move any sections which have not had edits within the last 30 days to the archive page. • The archive page is called 'Talk:pagename/Archive X', where X is 1, 2, 3, etc. • Put the {{talkarchive}} template at the top of each archive page. • Put (or update) the {{archives}} template at the top of the main talk page. --Chat 17:25, 16 September 2009 (UTC) ## Viewing figures Just thought you guys might like to know - unique daily visits to the site for the month of December 2009 broke 1,000 - we hit 1,001, as a matter of fact. For interest, here are the figures for the past few months: Mar 2010 1104 Feb 2010 1212 Jan 2010 1114 Dec 2009 1001 Nov 2009 919 Oct 2009 796 Sep 2009 676 Aug 2009 540 Jul 2009 398 Jun 2009 279 May 2009 165 Great work everyone - many thanks for your continued work in making this wiki a great resource for DW players everywhere! Drakkos 21:54, 3 January 2010 (UTC) January 2010 hit 1114 unique daily visits, FWIW. Drakkos 00:21, 2 February 2010 (UTC) February 2010 : 1212 Interesting. Increasing fairly consistently. Rehevkor 13:31, 2 March 2010 (UTC) ## Quest pages What's the plan for the quest pages to remain in sync? That is, quests are duplicated on several pages, odds are modifications on one page won't be made on all of them unless everyone is aware that the quest is also present on another page. I thought that maybe there was a wiki thing in place so that sections from one page came from another or that they mirrored each other but apparently not. Not sure if that's something that actually exists. Anyone know? Barring that, should we remove duplicate quests and replace some of them with links? Or some other way to keep all the quest text in sync? I came up with some ways to handle the issue: • Make a page for every quest, the other pages link to the quest page. Downside is there will be lots of pages. • Put all quests on one page, then the other pages link to the section of the quest. Probably a bad idea because the page would be too big. • Make sure each quest is only on one page and that other pages are only links. Would need to decide on a level (domain/area/city) which would hold the real quest text. • Add comments in the source of duplicated quests that the other version needs to be updated as well. People could miss that though. • Insert idea here. Or does everyone feel quests duplicated on several pages is no big deal after all? Frazyl 22:31, 14 March 2010 (UTC) The third one would be easiest, but... eh. The first one is probably the best solution (possibly we could even stop using hidden text), but I don't like that they would then show up in Special:Random. They do currently, but it's not especially important because 1)there are comparatively few of them, and 2)spoilers are hidden and mostly below the fold anyway. There are only 1,234 content pages currently, so if we added a few hundred quest pages they'd be a significant portion of that. Maybe if we had a separate namespace ([1]) for them? Then they wouldn't show up randomly. --Ilde 02:23, 15 March 2010 (UTC) A separate namespace sounds good especially if it allows us to show the text unhidden. As it turns out Special:Random also excludes redirects, so we could add redirects from the normal namespace for pages people might search or we want to link to... Probably only entry pages with warnings like Category:Quest pages and Unofficial_Quest_Solutions. --Frazyl 02:53, 15 March 2010 (UTC) I actually like the hidden text as it enables me to uncover the mystery line by line if required (for example if I get stuck just because I cannot figure out the right verb, "exacto" comes to mind...). --Gunde 23:19, 31 March 2010 (UTC) So, will someone create a quest namespace for quest articles? Or should we put them in the research namespace? The primary advantage of a quest namespace is that this will stop quest pages appearing in Special:Random, which can send someone who doesn't want to be spoiled to a quest page and it will stop them appearing in default search (happens when the article doesn't exist) Special:Search, which will reveal the context of the search, that is unhidden quest info. --Frazyl 18:31, 7 May 2010 (UTC) AFAIK, creating a new namespace requires editing the server's LocalSettings.php; as such it's something that only Drakkos can do, so you should speak to him. Please don't move quests into the Research: namespace - that will end up polluting Research: with things that aren't research, and we don't want that. --Chat 22:45, 7 May 2010 (UTC) Hey, what about the Help: namespace? It is somewhat appropriate and not really in use (the only thing in it is Help:Contents), and I don't think Special:Random catches pages in it. --Ilde 18:42, 25 March 2011 (UTC) Ok, so I came up with a few ideas to improve quest pages. For the namespace I was waiting for the captcha to be installed first so as not to ask too much at once... So I made a template {{Prehidden}} that basically is like {{prebox}} with white on white text. The advantage of prebox is that it preserves line breaks and spaces and other characters without breaking out too easily as with <p> (making a list with * or : makes everything after visible) while removing then obsolete <br> tags and allowing some formatting like bold or italic. Also it puts a pretty box around the text. See Alchemists'_Guild_quests for examples. If it is agreeable we could turn {{Hidden}} like {{Prehidden}} unless there something that I didn't think of. As for quest duplication, it is possible to include pages with {{Include}}. It's only a matter of merging quests into the subpage and the template includes them with formatting and a link to the included page. As for what includes what Unofficial_quest_solutions looks good, there's some difference to the structure of the Discworld quest pages though. If we get a namespace for quests, I was thinking that each quest could get its own page in non-hidden text, which would then be included in the lists in {{Prehidden}} boxes. So if you don't want to be spoiled too much you can check the lists and to edit the quest and to be spoiled it would be easier to see it all at once in the quest page. It would use a tweaked include template. --Frazyl 07:02, 11 June 2010 (UTC) Ooooo! The include templates are a neat solution for the duplication issue; kudos for that. --Ilde 03:04, 12 June 2010 (UTC) Slight issue, when including a page with the template include it doesn't work. You'd think it would check if there really was a loop, but no it just refuses to do anything. The only workaround that comes to mind is to make duplicate {{Include}} templates, one per level. --Frazyl 07:14, 11 June 2010 (UTC) Ok all pages have the same format and duplicate quests have all been merged except for Sentimentalist and Distant Exhibitionist which have several versions. Some quests fitted several areas, put them in the most important area. To place a quest in several areas would mean putting the quests in individual quest pages and including those in all list pages that is concerned, but we said that would be too many pages so we'd need the quest namespace. --Frazyl 02:01, 9 July 2010 (UTC) It occurred to me that while we can't include the text of the quests that could go in several places in more than one place (because we can't include sections, only pages as far as I know with mediawiki) we could place links to the quest. Might be worth going through the quests vs the mud quest pages to put stubs for quests missing and links to quests that are somewhere else, at some point. --Frazyl 23:20, 13 July 2010 (UTC) ## Articles that are categories I've noticed a few of these. Categories that are masquerading as articles. One example being Category:Dibbler_clones - all the information there should be in an article. I've noticed several of these scattered about. Would be there be any opposition to cleaning this up? Rehevkor 23:21, 18 December 2010 (UTC) And also, bred and butter information such as this should not be in categories as they will not show on default searches. Rehevkor 23:22, 18 December 2010 (UTC) Category:Furniture is a pretty bad offender too. Rehevkor 23:30, 18 December 2010 (UTC) I'm not so sure about this. If I search for "dibbler" or "dibbler clones" then the category obviously doesn't show up in search without someone creating a redirect for it. (Redirect to specific clone within the clones page might be ugly or bad usability?) But I think there should be a page for Dibbler himself, which includes a link to dibbler clones. Meanwhile the clones page shows under category NPCs, and the pages of each of those NPCs can be grouped together that way. Furniture is an example but is it good or bad? It has a redirect from the search term "furniture" which could also be a Furniture page (instead of category page) that explains furniture in general. The Furniture would probably need a link to the furniture category because individual pieces of furniture can certainly deserve their own wiki pages. In this case a few of them already have one and Category:Furniture indexes them nicely. Whatever is the final setup Furniture should probably also happen to Container(s) (which is a sibling category but not really parent or child) and its potential sub-category Scabbard(s). (Both currently pages but individual containers or scabbards could well have their own pages, especially if there's something special about them that needs explaining like involved acquisition, unique commands, room chat etc. Similar discussion might also happen under Category_talk:Items since the big Items and Weapons category pages are being cleaned up right around now. Rhonwen 12:37, 19 December 2010 (UTC) Well the purpose of categories are to list articles currently within the wiki. The ability to add text to them is just so a brief description can be added, no information should be there that isn't already in the article space. I see no reason why this information should be listed within a category page. As for the furniture page, a good option would be to move it to Furniture and split the list itself into List of furniture, or similar. These articles can all be categorised/subcategorised as required but the infornation should be in the article space. Rehevkor 15:15, 19 December 2010 (UTC) Hmmm. Well, I guess since it's messing up searching, it would be better to split them. It's just a bit annoying to me to have what's basically one thing split up into several pages (I mean, the category itself is a bit of an afterthought... for most furniture, I see no reason whatsoever they should have separate pages--since all the relevant information about them is in the list (in a superior format, imo, since you can easily compare it with others that way), it's just very redundant and basically extra work to make... clutter) so that you have to hunt around and wikiwalk to find something. But meh. Maybe it would be best to just have Category:Furniture, Furniture and List of storage furniture (since that one list is really the thing making that page huge). --Ilde 18:40, 19 December 2010 (UTC) I guess Category-articles started off with a need to have the category pages be more useful, to combine purposes to make them more complete, especially when the content is (initially anyway) rather small. Or maybe you want to include all the member pages which you can do for free in the category page, sort of like a See also section that updates itself when anticipated new pages come in (which may not actually show up). I suppose the list of member pages can be seen as adding little to the page, especially when they are also integrated in the text or tables... But if you do want to include them I don't think you can otherwise without adding a module. It's possible to include a normal page inside a category, but then if it's not just parts of it it's just a duplicate page without the list of member pages. For the technical search issue it should be possible to add the Category namespace (or any other namespace) to the default search of anonymous users and users who have not changed it in preferences. There's also many other namespaces in this wiki that are not searched by default: "Discworld MUD Wiki", talk pages, user pages, template pages, Research... So beyond the search issue which looks fixable, is it better to have category-articles with list of member pages or is this wholly undesirable or what's the criteria that makes it bad/ok? --Frazyl 08:36, 22 December 2010 (UTC) Hrmmmm, maybe. I think I'm coming around to the "categories shouldn't also be articles" view a bit, though, even if the search thing isn't insurmountable. I mean, in some instances it does clearly seem to be better to have them separate, like with Weapons and Category:Weapons, because the pages are for fairly different things. I was going to hold up Category:Contractor npcs as one where it does seem to work better for the information to be in the category, but actually I think that can be moved to Real estate decoration, where it will fit better. It might be neater to do them all consistently, instead of only separating them out when one or both aspects are really large... also, I guess if you click on a category from a page in the category, it's probably in order to see what pages there are that are similar, so the bit you'd be looking for is the category bit, and it's nice to have it right there rather than at the very bottom after a bunch of other stuff you need to scroll past. One that's caught my attention is Category:Finding_and_seeking. Well, I like the table, and it's not as big as Category:Furniture was, but it seems a bit... I don't know. Like it would be better to have the table on a separate page. And maybe rename it all, too--the current name is more than a little unintuitive (I remember the search for a title that actually encompassed all the things in the category...). Maybe Scrying and tracking methods? /tangent And of course the aforementioned Category:Dibbler clones is pretty similar. They've both got the useful tables of everything/everyone that is (or should be) in that category and in both cases separating it out would pretty much just involve cutting a chunk out of the article, moving it to a similarly-named page, and cross-linking. There does seem to be something weird about having a list of pages in a category, and then, underneath it, having the "Pages in category" thing that every category has. --Ilde 06:35, 25 December 2010 (UTC) Ok that makes sense. I'll leave Category:Finding_and_seeking for you to do. Maybe just Seeking methods? Since there's no seek command and seeking can mean seeing from afar it's a bit more generic that find. --Frazyl 03:52, 26 December 2010 (UTC) I don't know, Seeking methods as a name has the same problem with being sort of awkward and unintuitive, I think. While track is a command, in context I think it'd be clear that the category's broader than just that command (and the article titles wouldn't be similar enough that linking to the wrong one would be likely to be a problem)... and "ways to track someone" seems like a natural way to describe track, Find, Find Corpse, or flying to someone (actual scrying stuff, too, but there's an extra level there in that you have to recognize the room... a less direct (even if potentially more informative) way to find someone. I do think they're alike enough that they should all be in one category as they are currently, though). Also, eh, some of the things in there--A Cup of Tea and Sake, Far Sight, Worstler's Advanced Metallurgical Glance and Worstler's Elementary Mineralogical Glance aren't really about finding people/things as such, but they are definitely scrying. --Ilde 06:01, 26 December 2010 (UTC) I don't think the name needs to be exactly what everyone will type. Say they type find, they'll see the links back to the more generic page. We can add links at the top and bottom, some redirects... Actually, the far seeing things fit with seek because you're seeking those locations and seeking beings, corpses, etc. whereas track doesn't work so much for everything. --Frazyl 06:57, 26 December 2010 (UTC) Well... it's not going to be what everyone would think to call it anyway, but I think titles that are at least intuitive once you've seen them are better (I know there are pages I've made with sort of wonky titles, but it's because I couldn't think of anything better :( ). I mean, we have Category:Light sources, not Category:Things with brightness. And it should be pretty clear what a page/category is about from the title, which I'm not sure is true for it currently. Not everything fits under tracking, no, but the others fit under scrying. --Ilde 18:23, 26 December 2010 (UTC) Ok I vote just Seeking them. Simple and to the point and the definition in the free dictionary fits all. If a better name is found it can be moved. --Frazyl 21:33, 7 April 2011 (UTC) ## Upcoming events 3 May 2011 Xola (Talk | contribs) (6,652 bytes) (n.b. prev edit doesn't mean I'm comin, people remaining=diffrent type of ppl frm whn I was @school, met few cool ppl alrdy. the culture of cres attractd OCD & PK obnoxiousness rathr thn creativity...) 3 May 2011 Xola (Talk | contribs) (6,651 bytes) (+ current/soon/previous events ( Category:Events )) 3 May 2011 Chat (Rolling back. Not worthy of being at the top of the main page, and clashes against the existing main page layout. Use the 'people of discworld' box if you really think this is necessary.) 15 May 2011 Xola (Oh come on,"not worthy"? :) Boxes too cluttered,current &upcoming stuff is *really* important, if more people knew was a place to put stuff up,would encourage more things being organised!) 15 May 2011 Frazyl (Considering there has not been any upcoming events, it seems better to put it in "People of Discworld". Now we could put it in the navigation sidebar but I'm not sure if the category is the best page.) Stop thinking like librarians! :) Build it and they will come! Don't base it around what's there, make it an open environment for people to add and start new things, and it will happen! Probably! Though I don't know why I'm doing it because none of the events are even vaguely social anymore just grindy stuff or related to minigames so pff, just nostalgia of what the place used to be I guess hehe --Xola 09:57, 21 May 2011 (UTC) No need to give it such prominence on the main page, there's already an events link in one of the boxes below. And there's no need to dick around with the formatting of the main page, it's fine how it is. I don't really wanna have to protect it. There's a whole wiki out there for you to create whatever articles you want. Rehevkor 14:19, 21 May 2011 (UTC)
2023-02-04 18:23:57
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.41282761096954346, "perplexity": 1942.555907798014}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500151.93/warc/CC-MAIN-20230204173912-20230204203912-00688.warc.gz"}
http://mathoverflow.net/feeds/question/96639
Decomposing $\mathbf{\Pi}^1_1$ sets into closed sets - MathOverflow most recent 30 from http://mathoverflow.net 2013-05-19T23:50:00Z http://mathoverflow.net/feeds/question/96639 http://www.creativecommons.org/licenses/by-nc/2.5/rdf http://mathoverflow.net/questions/96639/decomposing-mathbf-pi1-1-sets-into-closed-sets Decomposing $\mathbf{\Pi}^1_1$ sets into closed sets Liang Yu 2012-05-11T03:26:26Z 2012-09-01T21:22:01Z <p>It is well known that every $\mathbf{\Pi}^1_1$-set is a union of $\aleph_1$-many Borel sets. I wonder whether it can be improved under certain reasonable set theory axioms assumption.</p> <p>For example, assuming $ZFC+CH$, then it is trivially true that every set is a union of $\aleph_1$-many closed sets. But this seems heavily depends on $CH$ since if $ZFC+\neg CH+MA$, then there is a lightface $\Pi^0_2$-set which cannot be a union of $\aleph_1$-many closed sets.</p> <p>So my question is: is it consistent with $ZFC+\neg CH$ that every $\mathbf{\Pi}^1_1$-set is a union of $\aleph_1$-many closed sets?</p> http://mathoverflow.net/questions/96639/decomposing-mathbf-pi1-1-sets-into-closed-sets/96775#96775 Answer by alephomega for Decomposing $\mathbf{\Pi}^1_1$ sets into closed sets alephomega 2012-05-12T14:49:07Z 2012-05-12T14:49:07Z <p>There is a theorem of my teacher Steve Jackson which says that assuming $ZFC + AD^{L(\mathbb{R})}$ every projective set is $\aleph_{\omega}$-Borel. So in particular this holds for $\Pi^1_1$ sets. The proof uses the theory of descriptions and every other technical tool from descriptive set theory (homogeneous trees, scales,...). Also, with respect to $MA$ and $CH$, $AD$ can't decide them, so maybe that result might be what you're looking for, I'm not sure. You can find the result in this survey of Jackson <a href="http://math.berkeley.edu/~steel/martin/jackson.martin.ps" rel="nofollow">"A survey of Determinacy"</a> somewhere in the end of the paper. </p>
2013-05-19 23:50:00
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9246182441711426, "perplexity": 470.72474788761247}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368698150793/warc/CC-MAIN-20130516095550-00057-ip-10-60-113-184.ec2.internal.warc.gz"}
http://math.upsc.xyz/30/show-that-similar-matrices-have-characteristic-polynomial
0 votes Show that similar matrices have same characteristic polynomial. asked Nov 11, 2017 ## 1 Answer 0 votes The similar matrix of a matrix $A$ is $P^{-1}AP$ where P is invertible. The characteristic equation of $P^{-1}AP$ is $|P^{-1}AP - \lambda I| = 0$. Now, $|P^{-1}AP - \lambda I| \\ = |P^{-1}AP - P^{-1}\lambda IP| \\ = |P^{-1}(A-\lambda I)P \\ = |P^{-1}||A-\lambda I||P| \\ = |A-\lambda I| = 0$ which is the characteristic equation of the matrix $A$. Hence, proved. answered Nov 11, 2017 by (1,920 points)
2018-12-17 19:47:55
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.911477267742157, "perplexity": 1081.6177146678697}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376829115.83/warc/CC-MAIN-20181217183905-20181217205905-00366.warc.gz"}
https://hpx-docs.stellar-group.org/latest/html/examples/1d_stencil.html
Local to remote: 1D stencil¶ When developers write code they typically begin with a simple serial code and build upon it until all of the required functionality is present. The following set of examples were developed to demonstrate this iterative process of evolving a simple serial program to an efficient, fully-distributed HPX application. For this demonstration, we implemented a 1D heat distribution problem. This calculation simulates the diffusion of heat across a ring from an initialized state to some user-defined point in the future. It does this by breaking each portion of the ring into discrete segments and using the current segment’s temperature and the temperature of the surrounding segments to calculate the temperature of the current segment in the next timestep as shown by Fig. 2 below. Fig. 2 Heat diffusion example program flow. We parallelize this code over the following eight examples: The first example is straight serial code. In this code we instantiate a vector U that contains two vectors of doubles as seen in the structure stepper. //[stepper_1 struct stepper { // Our partition type typedef double partition; // Our data for one time step typedef std::vector<partition> space; // Our operator static double heat(double left, double middle, double right) { return middle + (k*dt/(dx*dx)) * (left - 2*middle + right); } // do all the work on 'nx' data points for 'nt' time steps space do_work(std::size_t nx, std::size_t nt) { // U[t][i] is the state of position i at time t. std::vector<space> U(2); for (space& s : U) s.resize(nx); // Initial conditions: f(0, i) = i for (std::size_t i = 0; i != nx; ++i) U[0][i] = double(i); // Actual time step loop for (std::size_t t = 0; t != nt; ++t) { space const& current = U[t % 2]; space& next = U[(t + 1) % 2]; next[0] = heat(current[nx-1], current[0], current[1]); for (std::size_t i = 1; i != nx-1; ++i) next[i] = heat(current[i-1], current[i], current[i+1]); next[nx-1] = heat(current[nx-2], current[nx-1], current[0]); } // Return the solution at time-step 'nt'. return U[nt % 2]; } Each element in the vector of doubles represents a single grid point. To calculate the change in heat distribution, the temperature of each grid point, along with its neighbors, is passed to the function heat. In order to improve readability, references named current and next are created which, depending on the time step, point to the first and second vector of doubles. The first vector of doubles is initialized with a simple heat ramp. After calling the heat function with the data in the current vector, the results are placed into the next vector. In example 2 we employ a technique called futurization. Futurization is a method by which we can easily transform a code that is serially executed into a code that creates asynchronous threads. In the simplest case this involves replacing a variable with a future to a variable, a function with a future to a function, and adding a .get() at the point where a value is actually needed. The code below shows how this technique was applied to the struct stepper. //[stepper_2 struct stepper { // Our partition type typedef hpx::shared_future<double> partition; // Our data for one time step typedef std::vector<partition> space; // Our operator static double heat(double left, double middle, double right) { return middle + (k*dt/(dx*dx)) * (left - 2*middle + right); } // do all the work on 'nx' data points for 'nt' time steps hpx::future<space> do_work(std::size_t nx, std::size_t nt) { using hpx::dataflow; using hpx::util::unwrapping; // U[t][i] is the state of position i at time t. std::vector<space> U(2); for (space& s : U) s.resize(nx); // Initial conditions: f(0, i) = i for (std::size_t i = 0; i != nx; ++i) auto Op = unwrapping(&stepper::heat); // Actual time step loop for (std::size_t t = 0; t != nt; ++t) { space const& current = U[t % 2]; space& next = U[(t + 1) % 2]; // WHEN U[t][i-1], U[t][i], and U[t][i+1] have been computed, THEN we // can compute U[t+1][i] for (std::size_t i = 0; i != nx; ++i) { next[i] = dataflow( hpx::launch::async, Op, current[idx(i, -1, nx)], current[i], current[idx(i, +1, nx)] ); } } // Now the asynchronous computation is running; the above for-loop does not // wait on anything. There is no implicit waiting at the end of each timestep; // the computation of each U[t][i] will begin as soon as its dependencies // are ready and hardware is available. // Return the solution at time-step 'nt'. return hpx::when_all(U[nt % 2]); } In example 2, we redefine our partition type as a shared_future and, in main, create the object result, which is a future to a vector of partitions. We use result to represent the last vector in a string of vectors created for each timestep. In order to move to the next timestep, the values of a partition and its neighbors must be passed to heat once the futures that contain them are ready. In HPX, we have an LCO (Local Control Object) named Dataflow that assists the programmer in expressing this dependency. Dataflow allows us to pass the results of a set of futures to a specified function when the futures are ready. Dataflow takes three types of arguments, one which instructs the dataflow on how to perform the function call (async or sync), the function to call (in this case Op), and futures to the arguments that will be passed to the function. When called, dataflow immediately returns a future to the result of the specified function. This allows users to string dataflows together and construct an execution tree. After the values of the futures in dataflow are ready, the values must be pulled out of the future container to be passed to the function heat. In order to do this, we use the HPX facility unwrapping, which underneath calls .get() on each of the futures so that the function heat will be passed doubles and not futures to doubles. By setting up the algorithm this way, the program will be able to execute as quickly as the dependencies of each future are met. Unfortunately, this example runs terribly slow. This increase in execution time is caused by the overheads needed to create a future for each data point. Because the work done within each call to heat is very small, the overhead of creating and scheduling each of the three futures is greater than that of the actual useful work! In order to amortize the overheads of our synchronization techniques, we need to be able to control the amount of work that will be done with each future. We call this amount of work per overhead grain size. In example 3, we return to our serial code to figure out how to control the grain size of our program. The strategy that we employ is to create “partitions” of data points. The user can define how many partitions are created and how many data points are contained in each partition. This is accomplished by creating the struct partition, which contains a member object data_, a vector of doubles that holds the data points assigned to a particular instance of partition. In example 4, we take advantage of the partition setup by redefining space to be a vector of shared_futures with each future representing a partition. In this manner, each future represents several data points. Because the user can define how many data points are in each partition, and, therefore, how many data points are represented by one future, a user can control the grainsize of the simulation. The rest of the code is then futurized in the same manner as example 2. It should be noted how strikingly similar example 4 is to example 2. Example 4 finally shows good results. This code scales equivalently to the OpenMP version. While these results are promising, there are more opportunities to improve the application’s scalability. Currently, this code only runs on one locality, but to get the full benefit of HPX, we need to be able to distribute the work to other machines in a cluster. We begin to add this functionality in example 5. In order to run on a distributed system, a large amount of boilerplate code must be added. Fortunately, HPX provides us with the concept of a component, which saves us from having to write quite as much code. A component is an object that can be remotely accessed using its global address. Components are made of two parts: a server and a client class. While the client class is not required, abstracting the server behind a client allows us to ensure type safety instead of having to pass around pointers to global objects. Example 5 renames example 4’s struct partition to partition_data and adds serialization support. Next, we add the server side representation of the data in the structure partition_server. Partition_server inherits from hpx::components::component_base, which contains a server-side component boilerplate. The boilerplate code allows a component’s public members to be accessible anywhere on the machine via its Global Identifier (GID). To encapsulate the component, we create a client side helper class. This object allows us to create new instances of our component and access its members without having to know its GID. In addition, we are using the client class to assist us with managing our asynchrony. For example, our client class partition‘s member function get_data() returns a future to partition_data get_data(). This struct inherits its boilerplate code from hpx::components::client_base. In the structure stepper, we have also had to make some changes to accommodate a distributed environment. In order to get the data from a particular neighboring partition, which could be remote, we must retrieve the data from all of the neighboring partitions. These retrievals are asynchronous and the function heat_part_data, which, amongst other things, calls heat, should not be called unless the data from the neighboring partitions have arrived. Therefore, it should come as no surprise that we synchronize this operation with another instance of dataflow (found in heat_part). This dataflow receives futures to the data in the current and surrounding partitions by calling get_data() on each respective partition. When these futures are ready, dataflow passes them to the unwrapping function, which extracts the shared_array of doubles and passes them to the lambda. The lambda calls heat_part_data on the locality, which the middle partition is on. Although this example could run distributed, it only runs on one locality, as it always uses hpx::find_here() as the target for the functions to run on. In example 6, we begin to distribute the partition data on different nodes. This is accomplished in stepper::do_work() by passing the GID of the locality where we wish to create the partition to the partition constructor. // Initial conditions: f(0, i) = i //[do_work_6 We distribute the partitions evenly based on the number of localities used, which is described in the function locidx. Because some of the data needed to update the partition in heat_part could now be on a new locality, we must devise a way of moving data to the locality of the middle partition. We accomplished this by adding a switch in the function get_data() that returns the end element of the buffer data_ if it is from the left partition or the first element of the buffer if the data is from the right partition. In this way only the necessary elements, not the whole buffer, are exchanged between nodes. The reader should be reminded that this exchange of end elements occurs in the function get_data() and, therefore, is executed asynchronously. Now that we have the code running in distributed, it is time to make some optimizations. The function heat_part spends most of its time on two tasks: retrieving remote data and working on the data in the middle partition. Because we know that the data for the middle partition is local, we can overlap the work on the middle partition with that of the possibly remote call of get_data(). This algorithmic change, which was implemented in example 7, can be seen below: //[stepper_7 // The partitioned operator, it invokes the heat operator above on all elements // of a partition. static partition heat_part(partition const& left, partition const& middle, partition const& right) { using hpx::dataflow; using hpx::util::unwrapping; hpx::shared_future<partition_data> middle_data = middle.get_data(partition_server::middle_partition); hpx::future<partition_data> next_middle = middle_data.then( unwrapping( [middle](partition_data const& m) -> partition_data { HPX_UNUSED(middle); // All local operations are performed once the middle data of // the previous time step becomes available. std::size_t size = m.size(); partition_data next(size); for (std::size_t i = 1; i != size-1; ++i) next[i] = heat(m[i-1], m[i], m[i+1]); return next; } ) ); return dataflow( hpx::launch::async, unwrapping( [left, middle, right](partition_data next, partition_data const& l, partition_data const& m, partition_data const& r) -> partition { HPX_UNUSED(left); HPX_UNUSED(right); // Calculate the missing boundary elements once the // corresponding data has become available. std::size_t size = m.size(); next[0] = heat(l[size-1], m[0], m[1]); next[size-1] = heat(m[size-2], m[size-1], r[0]); // The new partition_data will be allocated on the same locality // as 'middle'. return partition(middle.get_id(), std::move(next)); } ), std::move(next_middle), left.get_data(partition_server::left_partition), middle_data, right.get_data(partition_server::right_partition) Example 8 completes the futurization process and utilizes the full potential of HPX by distributing the program flow to multiple localities, usually defined as nodes in a cluster. It accomplishes this task by running an instance of HPX main on each locality. In order to coordinate the execution of the program, the struct stepper is wrapped into a component. In this way, each locality contains an instance of stepper that executes its own instance of the function do_work(). This scheme does create an interesting synchronization problem that must be solved. When the program flow was being coordinated on the head node, the GID of each component was known. However, when we distribute the program flow, each partition has no notion of the GID of its neighbor if the next partition is on another locality. In order to make the GIDs of neighboring partitions visible to each other, we created two buffers to store the GIDs of the remote neighboring partitions on the left and right respectively. These buffers are filled by sending the GID of newly created edge partitions to the right and left buffers of the neighboring localities. In order to finish the simulation, the solution vectors named result are then gathered together on locality 0 and added into a vector of spaces overall_result using the HPX functions gather_id and gather_here. Example 8 completes this example series, which takes the serial code of example 1 and incrementally morphs it into a fully distributed parallel code. This evolution was guided by the simple principles of futurization, the knowledge of grainsize, and utilization of components. Applying these techniques easily facilitates the scalable parallelization of most applications.
2021-01-18 10:50:41
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.26240113377571106, "perplexity": 1708.2899920405384}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703514495.52/warc/CC-MAIN-20210118092350-20210118122350-00759.warc.gz"}
https://www.physicsforums.com/threads/solving-a-differential-equation-numerically-in-octave-matlab-c.436212/
# Solving a differential equation numerically (in Octave, Matlab &c.) (1 Viewer) ### Users Who Are Viewing This Thread (Users: 0, Guests: 1) #### ejlflop 1. The problem statement, all variables and given/known data I have a second-order non-linear differential equation that I am trying to solve. So far I have decomposed it into a system of 2 first-order equations, and have (possibly) determined that it cannot be solved analytically. So I am trying to do a nice numerical approximation using GNU Octave (basically compatible with Matlab, so if you can do it with Matlab please can you help too :-D). Octave needs the equation expressed as first-order DEs -- which I think I've done -- I'm just not sure how to go about doing the actual approximation. 2. Relevant equations Original second-order DE: $$\frac{d^2 \theta}{dt^2} + \frac{k}{m}\cdot\frac{d\theta}{dt} - g\cdot\sin\theta = 0$$ Note that $$k, m, g$$ are arbitrary constants (yes, $$g$$ is 9.81!) 3. The attempt at a solution Substitute: $$\frac{d\theta}{dt} = w$$ Hence: $$\frac{d^2 \theta}{dt^2} = w\frac{dw}{d\theta}$$ $$w\frac{dw}{d\theta} + (\frac{k}{m}\cdot w) - (g\cdot\sin\theta) = 0$$ Attempt at Octave program to approximate it a bit: Code: function wdot = f (w, theta) g = 9.8 k = 1 m = 0.1 wdot = (g*sin(theta) - (k/m)*w)/w endfunction theta = linspace(0, 20, 400); y = lsode ("f", 1, theta); plot (y, theta); So this gives me a nice little graph, but obviously it's not a total solution -- that would involve computing the whole system of 2 DEs, which is what I don't know how to do! Any help much appreciated, thanks. #### ejlflop No worries; I found a nice tutorial on how to do it in matlab, and adapted it for my own purposes. If anyone has a similar problem, see this youtube video, it's very good: ### The Physics Forums Way We Value Quality • Topics based on mainstream science • Proper English grammar and spelling We Value Civility • Positive and compassionate attitudes • Patience while debating We Value Productivity • Disciplined to remain on-topic • Recognition of own weaknesses • Solo and co-op problem solving
2019-03-25 03:27:40
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6143412590026855, "perplexity": 1698.509446885035}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912203548.81/warc/CC-MAIN-20190325031213-20190325053213-00081.warc.gz"}
https://cbse.eduvictors.com/2022/02/class-11-physics-mechanical-properties.html
Class 11 - Physics - Mechanical Properties of Solids and Fluids - Important Points to Remember 1. Elasticity: It is the property of a material by which it tries to regain its original configuration after the removal of the deforming force applied to it. Example of perfectly elastic solid, quartz, phosphor, bronze. 2. Plasticity: It is the property of a body by virtue of which it does not regain its original shape and size even after the removal deforming force, is called plasticity. 3. Stress is the restoring force per unit area and strain is the fractional change in dimension. Stress can be: - Normal or Longitudinal Stress, - Tangential or Shearing Stress - Hydraulic or Bulk Stress 4. When a deforming force acts on a body, the body undergoes a change in its shape and size. The ratio of a change in the configuration of the body to the original configuration is called strain. Strain = $\frac{\text{Change in configuration}}{\text{Original Configuration}}$ 5. When an object is under tension or compression, the Hooke's law takes the form F/A = Y ∆L/L where ∆L/L is the tensile or compressive strain of the object, and Y is the Young's modulus for the object. The stress is F/A. 6. Stress-strain curve The stress-strain graph of a ductile metal is shown in figure. Initially, the stress strain graph is linear and it obeys the Hooke's Law upto the point called the proportional limit. At the yield point the material starts deforming under constant stress and it behaves like a viscous liquid. 7. Brittle, Ductile, Malleable solids: There are some materials which break as soon as the stress is increased beyond the elastic limit. They are called brittle, e.g. glass, ceramics etc. Materials which have large plastic range of extension are called ductile. Using this property materials can be drawn into thin wires, e.g. - Copper, aluminum etc. Materials which can be hammered into thin sheets are called malleable eg. Gold, silver, lead, etc. 8. Work done in a stretched wire: Elastic potential energy U stored in the wire. $\therefore U = \frac{1}{2}F \times {\boldsymbol{l}} = \frac{1}{2}(stress) \times (strain) \times volume$ $\therefore U = \frac{1}{2}(\text{Young's modulus}) \times (strain)^{2}$ 9. The Poisson effect is defined as a material's tendency to expand in a direction perpendicular to the compression direction. Poisson's ratio is a measure of the Poisson effect. 10. Poisson's Ratio is a dimensional less and unit less quantity. Theoretical value of Poisson's Ratio :- -1 < σ < 1/2 Practical value of Poisson's Ratio :- 0 < σ < 1/2 11. The total normal force exerted by a liquid at rest on a surface in contact with it is called fluid thrust. The molecules of a fluid kept in a container are in constant random motion and collide with each other and with walls of the container. Fluids in a container exert thrust in all directions. 12. The pressure of liquid at a point is the thrust (or normal force) exerted by the liquid at rest per unit area around that point. Pressure is a scalar quantity. $Pressure=\frac{force}{area}$ 13. The expression for pressure P = Pa + ρgh. Holds true if fluid is incompressible. 14. The gauge pressure is the difference of the actual pressure and the atmospheric pressure. P - Pa = Pg 15. Archimedes principle: A body immersed in a fluid experiences an upward buoyant force equivalent to the weight of the fluid displaced by it. 16.Pascal's law: It states that pressure in a fluid at rest is same at all points which are at the same height. 17.Viscosity: The property of a fluid due to which it opposes the relative motion between its different layers is called viscosity. 18. Viscous force: The force between the layers of a liquid opposing the relative motion is called viscous force. $F= -\eta A\; \frac{\mathrm{d} v}{\mathrm{d} x}$; where η is a coefficient of viscosity and velocity gradient = $\frac{\mathrm{d} v}{\mathrm{d} x}$ 19. Bernoulli's principle states that as we move along a streamline, the sum of the pressure (P), the kinetic energy per unit volume (ρv2/2) and the potential energy per unit volume (ρgy) remains a constant. P + ρv2/2 + ρgy = constant 20. Velocity of Efflux: v = √2gh 21. Surface tension is a force per unit length (or surface energy per unit area) acting in the plane of interface between the liquid and the bounding surface. 22. Surface Energy: The surface energy of a liquid can be defined as the excess potential energy per unit area of the liquid surface. W = T∆A, where ∆A = increase in surface area, T = Surface tension of the liquid 23. For a liquid drop (Only one free surface) : For increasing radius 0 to R Surface energy = T∆A = 4πT (R2 - 0) = 4πR2T 24. For a soap bubble (For two free surfaces) : For increasing radius R1 to R2 where R2 > R1 Surface energy = T∆A = T×2 (4πR22 - 4πR12) = 8πT(R22 - R12) Excess pressure in liquid drop: P = 2T/R Excess pressure in the bubble formed in a liquid: P = 2T/R Excess pressure in soap bubble: There is two free surfaces in a soap bubble so P = 4T/R 25. Angle of Contact: At the point of contact of a liquid and a solid, if we draw a tangent to the surface of the liquid, it makes an angle with the side of the container, inside the liquid. This is called the angle of contact. Those liquids which wets solids and rises in capillary tubes have the angle of contact acute and the liquid surface is concave.   θ < 90⁰ 26. Those liquids which neither rises nor falls in a capillary tube have angle of contact right angle their liquid surface is plane. θ = 90° 27. Capillarity: $T = \frac{r\left [ h+\left ( \frac{r}{3} \right ) \right ]dg}{2 \cos \theta}$ If the tube is very narrow, r/3 can be neglected in comparison with h, Hence $h = \frac{2T\cos\theta}{rdg}$ or $T = \frac{hrdg}{2\cos\theta}$
2022-08-20 02:37:17
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5748099088668823, "perplexity": 888.044352448394}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882573876.92/warc/CC-MAIN-20220820012448-20220820042448-00753.warc.gz"}
https://unacademy.com/goal/jee-main-and-advanced-preparation/TMUVD/practice/topics/WKUGJ/concept/NHRJB
Home ###### SELF STUDY BrowsePracticeTestsSyllabusDoubts & solutionsFree live classesOther courses ## Viscosity & Terminal Velocity #### Quick practice ##### Question 1 of 5 Spherical balls of radius R are fulling in a viscous fluid of viscosity  with a velocity v. The retarding viscous force acting on the spherical ball is A directly proportional to both radius R and velocity v B directly proportional to R but inversely proportional to v C inversely proportional to both radius R and velocity v D inversely proportional to R but directly proportional to velocity v
2022-06-25 19:19:05
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9376348853111267, "perplexity": 4325.10082911851}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103036099.6/warc/CC-MAIN-20220625190306-20220625220306-00259.warc.gz"}
http://www.flyingcoloursmaths.co.uk/wrong-useful-episode-34/
# Wrong, But Useful: Episode 34 In this month's podcast, @reflectivemaths and I discuss: • Colin's book being available to buy • Number of the podcast: Catalan's constant, which is about 0.915 965 (defined as $\frac{1}{1} - \frac{1}{9} + \frac{1}{25} - \frac{1}{49} + ... + \frac{1}{(2n+1)^2} - \frac{1}{(2n+3)^2} + ...$). Not known whether it’s rational. Used in combinatorics and is $\int_0^\infty \arctan(e^{-t}) \dt$ • Chalkdust magazine Issue 3 is out, and the crossnumber is good. Colin and @christianp have cross-checked their answers using Elaborate Codes. Dave attempts to mock Colin for enjoying maths, and is fighting a losing battle. • Dave has been reading about Tupper's Self-Referential Formula, $\frac{1}{2} < \left \lfloor \left( \left \lfloor \frac{y}{17} \right \rfloor 2 ^{-17 \lfloor x \rfloor - \lfloor y \rfloor \pmod { 17 }} \right) \pmod { 2 } \right \rfloor$ • Dave came across Iva Sallay's Find The Factors game. It's good! • Colin refers to Twynam's law, and gets Dave to admit that we should be suspicious about Statistics. • @notonlyahatrack points us at the Romanian football team's venture into more interesting shirt numbers: • @peterrowlett points us at @stecks's article about @rachelrileyRR's EE advert. (For clarity, as my speech isn't as clear as it might be: the article is by Katie alone, not by Katie and Peter.) • Dave's students largely missed an answer in "solve $3x^2 = 147$". Colin thinks it's a bit of a gotcha. • Relatively Prime Series 3 didn't reach its Kickstarter goal, and will not happen. • @peterrowlett asks us to reveal the secret that Colin writes books. Colin erroneously states that Cracking Mathematics is out soon; it has been delayed until August, for no reason under Colin's control. • Gold star for @chrishazell72, who identified that the church in Dave's last puzzle required 81 cards. • This month's puzzle: given an equilateral triangle, what is the probability that a point inside the triangle lies closer to the centre than to any point on the edge? • We congratulate ourselves on doing a good show and then Dave Hansens up the ending. * Edited 2016-04-01 to clarify authorship of the Aperiodical article. * Edited 2016-11-18 to correct a typo. ## Colin Colin is a Weymouth maths tutor, author of several Maths For Dummies books and A-level maths guides. He started Flying Colours Maths in 2008. He lives with an espresso pot and nothing to prove. #### Share This site uses Akismet to reduce spam. Learn how your comment data is processed. Sign up for the Sum Comfort newsletter and get a free e-book of mathematical quotations. No spam ever, obviously. ##### Where do you teach? I teach in my home in Abbotsbury Road, Weymouth. It's a 15-minute walk from Weymouth station, and it's on bus routes 3, 8 and X53. On-road parking is available nearby.
2019-10-16 09:35:57
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.37180575728416443, "perplexity": 8242.116414882603}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986666959.47/warc/CC-MAIN-20191016090425-20191016113925-00431.warc.gz"}
https://www.quizover.com/online/course/9-5-applications-of-thermodynamics-heat-pumps-and-refrigerators-by-ope?page=2
# 9.5 Applications of thermodynamics: heat pumps and refrigerators  (Page 3/7) Page 3 / 7 ## The best COPhp Of a heat pump for home use A heat pump used to warm a home must employ a cycle that produces a working fluid at temperatures greater than typical indoor temperature so that heat transfer to the inside can take place. Similarly, it must produce a working fluid at temperatures that are colder than the outdoor temperature so that heat transfer occurs from outside. Its hot and cold reservoir temperatures therefore cannot be too close, placing a limit on its ${\text{COP}}_{\text{hp}}$ . (See [link] .) What is the best coefficient of performance possible for such a heat pump, if it has a hot reservoir temperature of $\text{45}\text{.}0\text{º}\text{C}$ and a cold reservoir temperature of $-\text{15}\text{.}0\text{º}\text{C}$ ? Strategy A Carnot engine reversed will give the best possible performance as a heat pump. As noted above, ${\text{COP}}_{\text{hp}}=1/\text{Eff}$ , so that we need to first calculate the Carnot efficiency to solve this problem. Solution Carnot efficiency in terms of absolute temperature is given by : ${\text{Eff}}_{\text{C}}=1-\frac{{T}_{\text{c}}}{{T}_{\text{h}}}\text{.}$ The temperatures in kelvins are ${T}_{\text{h}}=\text{318 K}$ and ${T}_{\text{c}}=\text{258 K}$ , so that ${\text{Eff}}_{\text{C}}=1-\frac{\text{258 K}}{\text{318 K}}=0\text{.}\text{1887}\text{.}$ Thus, from the discussion above, ${\text{COP}}_{\text{hp}}=\frac{1}{\text{Eff}}=\frac{1}{0\text{.}\text{1887}}=5\text{.}\text{30},$ or ${\text{COP}}_{\text{hp}}=\frac{{Q}_{\text{h}}}{W}=5\text{.}\text{30}\text{,}$ so that ${Q}_{\text{h}}=5.30 W\text{.}$ Discussion This result means that the heat transfer by the heat pump is 5.30 times as much as the work put into it. It would cost 5.30 times as much for the same heat transfer by an electric room heater as it does for that produced by this heat pump. This is not a violation of conservation of energy. Cold ambient air provides 4.3 J per 1 J of work from the electrical outlet. Real heat pumps do not perform quite as well as the ideal one in the previous example; their values of ${\text{COP}}_{\text{hp}}$ range from about 2 to 4. This range means that the heat transfer ${Q}_{\text{h}}$ from the heat pumps is 2 to 4 times as great as the work $W$ put into them. Their economical feasibility is still limited, however, since $W$ is usually supplied by electrical energy that costs more per joule than heat transfer by burning fuels like natural gas. Furthermore, the initial cost of a heat pump is greater than that of many furnaces, so that a heat pump must last longer for its cost to be recovered. Heat pumps are most likely to be economically superior where winter temperatures are mild, electricity is relatively cheap, and other fuels are relatively expensive. Also, since they can cool as well as heat a space, they have advantages where cooling in summer months is also desired. Thus some of the best locations for heat pumps are in warm summer climates with cool winters. [link] shows a heat pump, called a “ reverse cycle” or “ split-system cooler” in some countries. Commplementary angles hello Sherica im all ears I need to learn Sherica right! what he said ⤴⤴⤴ Tamia what is a good calculator for all algebra; would a Casio fx 260 work with all algebra equations? please name the cheapest, thanks. a perfect square v²+2v+_ kkk nice algebra 2 Inequalities:If equation 2 = 0 it is an open set? or infinite solutions? Kim y=10× if |A| not equal to 0 and order of A is n prove that adj (adj A = |A| rolling four fair dice and getting an even number an all four dice Kristine 2*2*2=8 Differences Between Laspeyres and Paasche Indices No. 7x -4y is simplified from 4x + (3y + 3x) -7y is it 3×y ? J, combine like terms 7x-4y im not good at math so would this help me how did I we'll learn this Need to simplify the expresin. 3/7 (x+y)-1/7 (x-1)= . After 3 months on a diet, Lisa had lost 12% of her original weight. She lost 21 pounds. What was Lisa's original weight? what is nanomaterials​ and their applications of sensors. what is nano technology what is system testing? preparation of nanomaterial Yes, Nanotechnology has a very fast field of applications and their is always something new to do with it... what is system testing what is the application of nanotechnology? Stotaw In this morden time nanotechnology used in many field . 1-Electronics-manufacturad IC ,RAM,MRAM,solar panel etc 2-Helth and Medical-Nanomedicine,Drug Dilivery for cancer treatment etc 3- Atomobile -MEMS, Coating on car etc. and may other field for details you can check at Google Azam anybody can imagine what will be happen after 100 years from now in nano tech world Prasenjit after 100 year this will be not nanotechnology maybe this technology name will be change . maybe aftet 100 year . we work on electron lable practically about its properties and behaviour by the different instruments Azam name doesn't matter , whatever it will be change... I'm taking about effect on circumstances of the microscopic world Prasenjit how hard could it be to apply nanotechnology against viral infections such HIV or Ebola? Damian silver nanoparticles could handle the job? Damian not now but maybe in future only AgNP maybe any other nanomaterials Azam can nanotechnology change the direction of the face of the world At high concentrations (>0.01 M), the relation between absorptivity coefficient and absorbance is no longer linear. This is due to the electrostatic interactions between the quantum dots in close proximity. If the concentration of the solution is high, another effect that is seen is the scattering of light from the large number of quantum dots. This assumption only works at low concentrations of the analyte. Presence of stray light. the Beer law works very well for dilute solutions but fails for very high concentrations. why? Got questions? Join the online conversation and get instant answers!
2018-04-27 06:55:37
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 15, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5670838356018066, "perplexity": 1506.9473844366937}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125949489.63/warc/CC-MAIN-20180427060505-20180427080505-00352.warc.gz"}
https://ethereum.stackexchange.com/questions/1162/why-is-homebrew-downloading-old-geth-version
Im on mac OSX, so I go to terminal CLI and do brew install ethereum then I get ==> Downloading https://build.ethdev.com/builds/OSX%20Go%20master%20brew/193/bottle/ethereum-1.1.0.yosemite.bottle.193.tar.gz ######################################################################## 100.0% ==> Pouring ethereum-1.1.0.yosemite.bottle.193.tar.gz So now I'm left with Ethereum 1.1 , but I need 1.3.1 how do I fix this? First run brew update To update the "formulae" (this is the brew-specific word for "catalogue") of all the open source projects it knows. Then run brew upgrade ethereum (Sometimes you'll need to uninstall and reinstall the formula to make it fully work). With brew info ethereum You can check what you have installed now.
2019-10-16 19:13:06
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5032528638839722, "perplexity": 14065.242065039412}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986669546.24/warc/CC-MAIN-20191016190431-20191016213931-00205.warc.gz"}
https://zlxb.zafu.edu.cn/en/article/2003/4
## 2003 Vol. 20, No. 4 column 2003, 20(4): 331-335. [Abstract](1080) [PDF](169) Abstract: As a precious traditional Chinese medicine , Cornus officinalis has been in wild or semi-wild conditions for a long time and the genetic variations within its species are very complicated .Through breeding procedures including preliminary selection , reselection and clonal tests of superior individual trees from natural resources , 10 superior clones , which are large in fruit size , thick in sarcocarp and high in the percentage of dry sarcocarp and good in processing properties , have been selected .Their yields (of dry sarcocarp)at early stage (4 ~ 7 years) are 24.66 %~ 82.44 %higher than the average value of 71 tested clones .In addition , a specially early-maturing clone and a clone containing more sugars have been singled out .Eleven clones out of these clones pass the examination and appraisal conducted by the Examination and Appraisal Committee of Zhejiang Province for Superior Forest Species and Varieties in 2002 and are denominated .[ Ch , 2 tab .7 ref .] 2003, 20(4): 336-341. [Abstract](1080) [PDF](176) Abstract: 19 poplar clones of both domestic and foreign origin were introduced into Shandong Province and field test had been conducted .Following a randomized complete block design , seedling test at nursery stage and controlled afforestation trails had been held at Juxian County , Heze City , Laixi City , Laiyang City and Caoxian County respectively .The results show poplar clone 102/74 (Populus euramericana 102/74' )performs well both in terms of adaptability and growth characteristics .The growth volume per tree is 0.190 8 m3 , which is 55.1 % larger than that of 5-year-old I-69 (P .deltoids Lux' I-69/55) (ck) in Juxian County .Results of variance analysis and t test (LSD0.05) for variables show the clones are significantly different from I-69 (ck). Furthermore , the clones can be cultivated easily and show high resistance to poplar disease infection , pest attack salinity and have long growing period .Specific gravity of wood of the clones is higher than that of I-69 (ck)and fiber length is equal to that of I-69 (ck).They are ideal for the establishment of fast-growing poplar plantations , especially for the establishment of the wood pulp forest plantation in the region .[ Ch , 1 fig .9 tab .9 ref .] 2003, 20(4): 342-345. [Abstract](1106) [PDF](199) Abstract: The number , basal diameter and height of new plantlets of survived test-tube plantlets of Bambusa oldhami are measured every month with the method of random sampling to study the annual growth rules of it .The results show that the average monthly basal diameter of new plantlets increases from month to month and the average height shows normal distribution .Transplanted test-tube plantlets can produce new plantlets during a period of 8 months , with a growth peak occurring in August when 23.8 % of the annual total quantity are produced .The height growth of new plantlets lasts for 47 days and plantlet can reach a height of 114.4 cm on average .The average daily growth height is 2.43 cm and the maxium is 6.86 cm .On the average , each test-tube plantlet produces 42.5 new plantlets annually .The above-mentioned shows that test-tube plantlets of Bambusa oldhami are good materials for the establishment of stool gardens or cutting-producing gardens .[ Ch , 4 tab .9 ref .] 2003, 20(4): 346-352. [Abstract](1023) [PDF](234) Abstract: On the basis of the comprehensive evaluation of natural environmental conditions in island areas of Zhejiang and the division of island tree species resources and site conditions , the foundation and precondition work for the study of watching species with the site in island areas have been prepared .According to the adaptability of different tree species and different site conditions , a list of watching species with different site conditions is made . And some important technical issues concerning list application are put forward .[ Ch , 3 tab .15 ref .] 2003, 20(4): 353-356. [Abstract](1290) [PDF](219) Abstract: The pollenmorphology characteristics of 9 species of Magnoliaceae (Sinomanglietia glauca , Manlietia patungensis , Magnolia denudata , Magnolia biondii , Magnolia amoena , Michelia caloptila , Michelia chapensis , Michelia maudiae , Magnolia zenii )were observed by both optical microscope and scanning electron microscope (SEM), the result showed that the shape and germination aperture of 9 speices are similar , but their size and exine sculpture are different , and so are the different population of one species .Therefore , pollen morphology can be applied for taxonomy of Magnoliaceae to some extent , and it is important to adopt these evidence carefully combined with other evidence .[ Ch , 1 fig .1 tab .8 ref .] 2003, 20(4): 357-359. [Abstract](1042) [PDF](188) Abstract: The contents of flavonoids in the different nutritious organs of Heptacodium miconioides are determined and the components of flavonoid compounds are analyzed with the method of polyacrylamides membrane chromatography .The results are as follows :(1)The content of flavonoid in the leaves of Heptacodium miconioide is the highest , and the contents of flavonoids in root and stalk are ranked afterwards .(2)There are 8 kinds of different flavonoids in the leaves and 4 kinds in root and stalk .Therefore , the leaves of Heptacodium miconioides are worth developing .[ Ch , 2 tab .12 ref .] 2003, 20(4): 360-363. [Abstract](1157) [PDF](196) Abstract: The effects of the abhesive on the absorptive properties of the activated carbon were studied .The rate of adsorption to benzene of the adhesive and activated carbon mixture decreases as the rate increases , the decrease degree depending on different kinds of adhesive .The activated carbon , adhesive and adhesive bonded fabric that were suitable for making activated carbon-adhesive bonded fabric were selected , and the suitable conditions of the technology were determined .The composite of activated carbon-adhesive bonded fabricwere activated carbon 60 %, adhesive 37 %, adhesive bonded fabric 3 %.The results showed that the rate of absorption to benzene , formaldehyde , ammonia , methenyl chloride of the activated carbon-adhesive bonded fabric were 32.5 %, 24.3 %, 26.7 %, 30.6 %, respectively .[ Ch , 2 fig .1 tab .6 ref .] 2003, 20(4): 364-368. [Abstract](950) [PDF](209) Abstract: By means of commercial analysis of the relation of commodity supply and demand , the paper studies the working state of three links of forestry technology with the segregation of the main part research and development , popularization and application .It holds that the objectively existing relation and connection of technology supply and demand should follow the principle of commodity exchange .It lays stress on the following problems caused by rarely attaching importance to the exchange principle of forestry technology supply and demand : (1) the blind development in the link of research and development , and the insufficient supply of effective technology ;(2) the break of popularization fund circle in the link of application , resulting in the low forestry industrialization promoted by technology , and in turn restricting the increase of technology demand ;(3)the dislocation of stimulation and the low efficiency of intermediary technology .Hence , the indefinite commodity property of forestry technology is the real cause of the insufficient technology supply and demand .[ Ch , 10 ref .] 2003, 20(4): 369-373. [Abstract](1284) [PDF](191) Abstract: An investigation made on technical extension and operation systems for bamboo industry in the mountain area of southern Zhejiang showed that technical extension for bamboo was implemented from upper to bottom around the core of base construction .The operation system was shown as (1) the main body was technique-oriented technicians .(2)Selection of techniques and decision making were from upper to bottom.(3)There was a gap between technique researchers and technical extenders at different levels , which was influenced by the avenue and conditions of technical sources , had caused that the technical supporting system couldnt embody the characteristics of new techniques and further the technical extension result .[ Ch , 1 tab .8 ref .] 2003, 20(4): 374-379. [Abstract](1037) [PDF](168) Abstract: Carbon accumulation and distribution were studied in 16-year-old experimental slash pine plantations with four kinds of different densities at Lufeng Forest Farm in Guangxi Autonomous Region .The results indicated that the spatial distribution sequence of carbon storage ranked as soil layer vegetation stratum litter floor , the total carbon storage ranged from 264.834 thm-2 to 323.978 thm-2 , with an average value of 291.663 thm-2 , and increased along with enlarging of density in 4 kinds of different densities of slash pine plantation ecosystem. Carbon storage of vegetation stratum ranged from 96.641 thm-2 to 110.717 thm-2 , amounted for 35.40 %of the total carbon storage .Carbon storage of different components all was in the order as trunk root branch leaf , and the carbon storage ratio of the over ground and the under ground ranged from 7.185 to 7.922 and descended along with enlarging of density in the vegetation stratum .Carbon storage of litter floor increased from 5.746 thm-2 to 9.181 thm-2 along with enlarging of stand densities , occupied 2.17 % to 2.83 % of the total carbon storage . Carbon storage of forest land soil (0 ~ 60 cm)averaged 180.94 thm-2 , occupied more than 60.32 % of the total carbon storage .Annual net fixing carbon amounts of density Ⅰ , Ⅱ , Ⅲ , Ⅳ were 9.729 , 9.882 , 11.239 and 11.946 thm-2 , respectively , with an average value of 10.699 thm-2 .These results could provide some basic data for the carbon budget estimation of a forest ecosystem and dynamic simulation .[ Ch , 7 tab .16 ref .] 2003, 20(4): 380-384. [Abstract](1165) [PDF](162) Abstract: From 1984 to 2000 , comparative observation of small climates of 8 forest tourist spots in sub-tropical zones was conducted .According to air temperature and relative air humidity observed every hour , the durations of comfortable air temperatures inside and outside of the forest and the forest tourist spots at different altitudes are compared .The results show that in the forest tourist spots , the duration that people feel comfortalbe temperature is 14 ~ 24 h , that feel sultry is 0 ~ 10 h and that feel uncomfortalbe is 0 h .While in the neigboring towns , medium and small cities , the duration that people feel comfortable is only 0 ~ 11 h , that feel sultry is 10 ~ 24 h and that feel uncomfortalbe is 5 ~ 9 h .Within a certain range of altitudes , with the increase in altutude , the duration that people feel comfortable increases to 24 h from 12 h , that feel sultry decreases to 0 h and 12 h .It shows that in the sub-tropical zone with hot summer , forest and mountain areas are good places for passing the summer leisurely . [ Ch , 8 tab .5 ref .] 2003, 20(4): 385-388. [Abstract](996) [PDF](168) Abstract: The development durations of Takecallis taiwanus (Homoptera :Callaphididae)are measured under 5 constant temperatures ranging from 10 ℃ to 30 ℃.The low-development temperature are 12.76 ℃, 9.24 ℃, 7.28 ℃, 6.45 ℃, 8.10 ℃ and the effective thermal constants are 19.66 ℃, 19.88 ℃, 28.52 ℃, 36.49 ℃ and 115.82 ℃for the development of the 1st , 2nd , 3rd , 4th instar nymphs and whole nymph stage respectively . All these data can be used as reference when the number of generations and time of occurrence need to be forecast at a particular place .[ Ch , 2 tab .6 ref .] 2003, 20(4): 389-393. [Abstract](1119) [PDF](283) Abstract: Geographic distribution of 70 species of Orius in the world is studied .All species are arranged in 14 zoogeographic categories :Afrotropical endemic , Australian endemic , Holarctic endemic ,Nearctic endemic , Neotropical endemic , Oriental endemic , Palaeartic endemic , Afrotropical palaearctic , Australia Oriental , Holarctic Oriental , Nearctic Neotropical , Nearctic Oriental , Nearctic Palaearctic Oriental Palaearctic , and so on .The result suggests the Oriental region is the origin center of Orius .[ En , 2 fig .1 tab .27 ref .] 2003, 20(4): 394-397. [Abstract](1348) [PDF](174) Abstract: The last-instar larval external morphologies of Thyatira batis (Linnaeus) and Kurama mirabilis (Butler) of Thyatiridae are described and illustrated .All specimens are deposited in the Insect Collection of Department of Forest Resources Protection , Kangwon National University , Korea .[ En , 2 fig .6 ref .] 2003, 20(4): 398-402. [Abstract](1179) [PDF](154) Abstract: Ceratosphaeria phyllostachydis is an important plague under quarantine in China , and it is significant and practical to classify its plague areas .With the application of classification theores , modern mathematic methods and computer techniques , the areas of Ceratosphearia phyllostachydis in Fujian can be classified as serious , light and basically none-plague areas and three dangerous degrees including class Ⅰ , class Ⅱ and class Ⅲ .Model of forecasting plague areas is established according to different classes .The model is tested to be precise and can be popularized .[ Ch , 2 tab .9 ref .] 2003, 20(4): 403-407. [Abstract](1168) [PDF](227) Abstract: Large amount of heterogeneous data have been accumulated along with the development of the forest resources management information systems .The result of this is that data are not effective shared among various systems and the operators repeat work with high error rate .Therefore , the decision-making is difficult made by the manager through the existing data .The emergent issue is how we integrate the heterogeneous data .First , this paper analyzes the demand of data integration of the current forest resources management information system and compares the various middleware technology .Secondly , this paper proposes the model with layer structure , which servers for the integration of the heterogeneous data of forest resources management systems .This model consists of :(1)information source layer ;(2)XML middleware layer ;(3)XML interface layer ;(4)XML presentation layer .Finally , the paper discusses the implementation of data integration system based on XML, using XML middleware layer .[ Ch , 1 fig .9 ref .] 2003, 20(4): 408-412. [Abstract](1308) [PDF](198) Abstract: Landscape garden is the main type of Chinese ancient garden .Based on the philosophical view of harmonizing relationship between nature and man , ancient Chinese people thought there was no boundary between man and nature and the harmonization of nature and man is the greatest achievement , which caused the production and prosperity of landscape culture .With the cultivation of landscape culture , adoration and appreciation of nature is the guideline of building Chinese gardens .The consciousness of simple beauty of nature makes Chinese people pursue natural morphological beauty , which leads to nature simulation and creation in building gardens .Therefore , landscape gardens originate from nature and are superior to nature , and the highest artistic state is achieved .[ Ch , 10 ref .] 2003, 20(4): 413-418. [Abstract](1458) [PDF](171) Abstract: Utilizing computer to make a great scale of stochastic simulation experiments , this paper reveals that if the difference between levels of the primes exist noteworthily is mainly depended on the sum of its experiment effects and stochastic normal difference .An approximate calculation formula is derived : It is proved by stochastic simulation experiments .It is suggested that twice-experiments should be done at the orthogonal experiment .[ Ch , 3 fig.3 tab .10 ref .] 2003, 20(4): 419-423. [Abstract](1255) [PDF](218) Abstract: Landscape cover plants are an important element of urban landscape engineering .Due to its particularity and huge economic and social benefits , more and more people attach importance to its research .At present , many achievements in its research have been made and applied in landscape engineering .From the perspectives of resource investigation , introduction , selection , adaptability , endurance and breeding , the research actuality of landscape cover plants at home and abroad is summarized .It is put forward that in the futrue research of cover plants in China , the research work including biological and ecological characteristics , landscape effects , germplasm resources accumulation , basic theories , new species breeding , wild landscape cover plants protection and industrialization level shall be strengthened .[ Ch , 56 ref .] 2003, 20(4): 424-428. [Abstract](1419) [PDF](448) Abstract: Bamboo molecular breeding was reviewed .Tissue culture cultivation is an essential way for bamboo breeding .During 80s , the reseaarch workers many countries or districts began research on bamboo tissue , and successed in 70 species 20 bamboo genus such as Phyllostachys ,Dendrocalamus Nees , Subgen .Sinocalamus (McClure)Hsueh et D .Z .Li , Sasa Makino et Shibata , Thysostachys Gamble .The researches revealed the effects of explant genotype , phytohormone , media components and culture condition on its tissue cultivation and established a bamboo cultivation system.The research about genetic marker on bamboo was also reviewed .Using biochemical markers , isozymes , molecular markers can easy distinguish different bamboo species .The author also discussed bamboo molecular breeding in the future .[ Ch , 44 ref .] 2003, 20(4): 429-433. [Abstract](1870) [PDF](166) Abstract: The optimized system used in RAPD reaction of Carya cathayensis must be established before analying the genetic diversity ofCarya cathayensis .It is necessary to explore the reaction conditions and system .The optimal reaction mixture and amplification procedure of RAPD in Carya cathayensis were studied .The result showed that each 20 L amplification reaction solution was consisted of 2.5mgL-1 (50 ng)template DNA , 2.0L 10 Buffer , 16.67 pmols-1Taq DNA polymerase , each 0.2mmolL-1dNTPs , 2.4 mmolL-1MgCl2 , 0.15 10-3 mmolL-1 primer .The PCR amplification program is predenature at 94 ℃for 300 s , followed by denature at 94 ℃ 30 s , annealing 38 ℃for 30 s , extension 72 ℃ for 90 s , cycling 38 times , last extension 72 ℃ 420 s . [ Ch , 5 fig .2 tab .6 ref .] 2003, 20(4): 434-437. [Abstract](1295) [PDF](198) Abstract: An orthogonal test of the chemical herbicides in the nurseries of Pinus massoniana , Michelia macclurei and Manglietia yuyuanensis is conducted to study the weeding effects of oxyfluorfen , acetochlor and haloxyfop . The results show that the weeding effects of oxyfluorfen , acetochlor before emergence and oxyfluorfen , acetochlor and haloxyfop during seeding stage are over 90 %.The effective duration is 45 ~ 63 days .These chemical herbicides are safe for seed germination and seedling growth .The labor use of applying 0.105 mLm-2 herbicide consisting of 20 %oxyfluorfen , 90 %acetochlor and 10.8 %haloxyfop is 59.1 %~ 66.7 % less than that of manual weeding and the cost of the former is 46.8 %~ 62.9 % less than the latter .[ Ch , 3 tab .4 ref .] 2003, 20(4): 438-441. [Abstract](1154) [PDF](192) Abstract: The comparative studies were conducted by methods of field tests andmuti-line regression analysis .The results showed that the diameter increments of one-year-old seedling of Pinus nigra var. austriaca was bigger than that of Pinus tabuleaformis , and its tap and lateral roots were more developed , while the height growth ofPinus tabuleaformis was bigger .Multi-line regression analysis showed that the correlation between the seedlings' height and fresh weight of lateral roots , and between diameter growth and fresh weight of roots were all very significant . [ Ch , 2 tab .5 ref .]
2022-06-27 07:56:36
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.634926974773407, "perplexity": 9642.208353355003}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103329963.19/warc/CC-MAIN-20220627073417-20220627103417-00015.warc.gz"}
https://www.physicsforums.com/threads/ground-state-wavefunction-energy-for-2-electrons.392717/
# Ground state wavefunction & energy for 2 electrons ## Homework Statement 2 electrons are in a box of length L. Ignoring Coulomb force, 1 and 2 are labels for the electrons and m is the mass of an electron. What is the ground state and 1st excited state for the energy and wavefunction for the two electrons? Is there more than one wavefunction for the ground state and the 1st excited state? Your answer will include the spatial and spin components of hte wavefunction. Use |+-z> to describe spin. H=p12/2m+p22/2m ## The Attempt at a Solution Since they are electrons, they are spin 1/2 particles and are fermions. Thus, their wavefunction must be antisymmetric with repsect to the exchange of particle labels. This means the two particles have a total spin of 0 or 1, so there are two wavefunctions for the ground state? S=0: |0,0,s=0,m=0> and S=1: |0,1,s=0,m=0>. Am I on the right track here? Last edited: Explain to us your notation for the wavevectors. I agree that if you only look at spin, you will have a singlet (S=0) state and a triplet state (S=1). Which of those spin states are antisymmetric and which is symmetric? Also, you can separate the spatial wavefunction from the spin wavefunction. So be careful that you keep the total product of those wavefunctions antisymmetric since these are fermions. it would be |n,l,s,ms>, is that right? I'm so confused on how to do this... This isn't a hydrogen atom. It is just a 1D box with 2 particles, so you will have $$\psi_{n_1,n_2}(x_1,x_2)$$ or you can write it as $$|n_1,n_2>$$ for the spatial wavefunction and wavevector. The 2 particle wavefunction is just a product of the single particle wavefunctions in a box. The total energy is just the sum of the single particle energies. The spin vector can be written as $$|\uparrow \uparrow>,|\uparrow \downarrow>,|\downarrow \uparrow>, |\downarrow \downarrow>$$ and any combination of those. Preferably combinations that form symmetric and antisymmetric spin vectors. so would the ground state wavefunction be |+z,+z>|-z,+z>|+z,-z>|-z,-z>? Alright, I will start you off since you have me confused again :) Let's assume these are bosons. So the total wavefunction must be symmetric. Remember, the total wavefunction is the product of the spatial wavefunction and the spin wavevectors. If the total is symmetric, that means both the spatial and spin wavevectors are symmetric or they both are antisymmetric (similar to +1*+1 = +1 and -1*-1 = +1). We want the ground state. The lowest energy state for a particle in a box is n=1. Since we have two particles in a box, the energy will be: $$E_{1,1} = E_1 + E_1$$ where $$E_1$$ is the ground state energy for a single particle in a box. This corresponds to the wavevector $$|1,1>$$. This is symmetric, because if I interchange those two numbers in the vector, I get back the same vector. Next we need a symmetric spin vector, because we have a symmetric spatial vector. That leads to the spin triplet (which is symmetric): $$|\uparrow \uparrow>$$ $$|\downarrow \downarrow>$$ $$\frac{1}{\sqrt{2}}\left(|\uparrow\downarrow>+|\downarrow\uparrow>\right)$$ Therefore we will have 3 possible ground states that are all degenerate for the boson pair: $$|11>|\uparrow \uparrow>$$ $$|11>|\downarrow \downarrow>$$ $$\frac{1}{\sqrt{2}}|11>\left(|\uparrow\downarrow>+|\downarrow\uparrow>\right)$$ Now you will do the same for a pair of fermions, but take care to keep the total wavefunction antisymmetric and obey the Pauli exclusion principle. So since these are fermions, we have an antisymmetric state. It is a singlet state? The states would then be |+z, -z> and |-z, +z>, but the electrons are identical so the only state is |+z, -z>, right? (+z is up arrow, -z is down arrow) Then we also have to include the spatial component. Is that just a position label and an assignment of n? Would the wavevector then be |0>1|0>2(|-z,+z>)??? |+z,-z> is not an antisymmetric or symmetric state. For example, if I swap them I get: $$|\uparrow \downarrow> \neq |\downarrow \uparrow>$$ So you will need to find some combination of the spin states that are symmetric or antisymmetric. But before you do that, you should find the spatial ground state for a pair of fermions. That way you will know which spin state to choose to make the total wavefunction antisymmetric. Is my spatial ground state |n>label? Would it be |0>1|0>2? Would the combination of states be 1/sqrt(2)*(|+z,-z>-|-z,+z>)? Not sure why you use n=0. But yes, that would be essentially correct. The spatial part is symmetric, and the spin part is antisymmetric. So the whole thing will be antisymmetric. You would just need to multiply those two pieces together. okay, so I was looking at an example with a harmonic oscillator and it used n=0. Should it be n=1 then since this is a ground state? My wavefunction is; (1/sqrt(2))|1>1|1>2*(|+z,-z>-|-z,+z>) Would my energy then be 1/2 hbar w? There is no harmonic oscillator in the problem. This is just 2 particles in a well. And so you should be looking at the single particle in a well solutions. Also the energy is incorrect. You would use the energy for particles in wells and not particles in a harmonic oscillator. You seem to be confusing things between problems. Energy for a single particle in a square well is En=hbar2pi2n2/2mL2 Do I square the whole thing since I intend 2 particles now instead of just 1? Also- The Psi function for a single particle is sqrt(2/L)sin(npix/L) Would that be more representative of the wavefunction I need than (1/sqrt(2))|1>1|1>2*(|+z,-z>-|-z,+z>)? Total energy is just the sum of the individual energies. You can write out the whole spatial wavefunction but it isn't necessary. You can just write out what $$|n_1 n_2\rangle$$ at the top, then write out your wavefunctions just using kets to keep it cleaner looking.
2021-05-10 01:15:52
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.766818106174469, "perplexity": 332.934339570934}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243989030.65/warc/CC-MAIN-20210510003422-20210510033422-00071.warc.gz"}
https://blender.stackexchange.com/questions/64257/fatal-python-error-on-startup-not-responding
# Fatal Python Error on Startup (Not Responding) Can someone help me? Every time I try to run blender it keeps I get the same error, I tried to install blender as a zip file, install a previous version that I know worked for me before (on the same computer as now), install blender from Steam, and search all over the internet, here is the error message: found bundled python: C:\Program Files\Blender Foundation\Blender/2.78/python Fatal Python error: Py_Initialize unable to load the file system codec File "C:\Python27\Lib\encodings\__init__.py", line 123 raise CodecRegisteryError,\ ^ SyntaxError: invalid syntax Current thread 0x00002a20 (most recent call first) Thanks in advance for any help. That looks like Blender is using a very old version of Python. Try taking any path in C:\Python27 off your PATH environment variable. I'm assuming you're running Windows 10, since you clearly use Windows but don't mention which version. Right-click on "This PC", then choose "Properties". In the next screen, click "Advanced System Settings", then click on "Environment Variables". In both the box "User variables for {your name}" and "System variables" doubleclick on "Path" (or any other capitalisation) and remove anything that starts with C:\Python27. You may want to write down the original values you removed, just so you can restore them later. • Can you help me do that? please? – ilay1034 Oct 3 '16 at 20:38 • I've updated the comment. – dr. Sybren Oct 3 '16 at 20:44 • THANKS SO MUCH! Apparently I added Python as an environment variable when trying to set up a server using python for another device to connect to. I took it out of there and blender now works perfectly! thanks! Also, sorry for that question, but I already figured out how to do that! (I'm a geek in learning). – ilay1034 Oct 3 '16 at 20:47 • You're welcome :) Don't forget to mark this as the answer to your question. – dr. Sybren Oct 3 '16 at 20:48 When I was trying to use blender, I had a python path as an environment variables. Therefore, blender was unable to use it (I think), so I deleted the path and blender started working. P.S I may be wrong, but the python path in environment variables was what causing me the problem. • If someone gives you the correct answer, please mark that answer as "accepted". Don't write your own answer too. – dr. Sybren Oct 3 '16 at 21:20
2020-01-23 17:10:45
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.45020368695259094, "perplexity": 2223.800051382856}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250611127.53/warc/CC-MAIN-20200123160903-20200123185903-00180.warc.gz"}
https://support.bioconductor.org/p/112520/
Question: Deseq 2 multiple comparisons (3 groups, 24 samples) 0 14 months ago by Matthew.baldwin0 wrote: Dear all, I am hoping you can help the correct test for my experimental design which is as follows: I tested the cells of 3 different donors (rotator cuff tears), on 8 different nano fibre scaffolds (different anisotropy; random vs aligned and different fibre diameters; 300nm,1000nm,2000 and 4000). I wish to compare the effects of each nano scaffold against all other scaffolds e.g. aligned300 vs aligned 1000 or aligned300 vs random4000, creating 56 possible comparisons. Do do this I have constructed the following data frame: samplenames<-meta_dataSampleID samples <- data.frame((samplenames),donor=as.factor(c(rep("1",4),rep("2",4),rep("3",4))), anisotrophy=as.factor(c(rep("aligned",12), rep("random",12))), diameter=as.factor(rep(c("300","1000","2000","4000"),6)))​ X.samplenames. <fctr> donor <fctr> anisotrophy <fctr> diameter <fctr> i25-Healthy1-Aligned-300 1 aligned 300 i26-Healthy1-Aligned-1000 1 aligned 1000 i27-Healthy1-Aligned-2000 1 aligned 2000 i28-Healthy1-Aligned-4000 1 aligned 4000 i33-Healthy2-Aligned-300 2 aligned 300 i34-Healthy2-Aligned-1000 2 aligned 1000 i35-Healthy2-Aligned-2000 2 aligned 2000 i36-Healthy2-Aligned-4000 2 aligned 4000 i41-Healthy3-Aligned-300 3 aligned 300 i42-Healthy3-Aligned-1000 3 aligned 1000 Q1: Having read the Deseq2 manual I think that in order to undertake the multiple comparisons, I need to perform the LRT analysis, is this correct? With this in mind I have undertaken the following: dds_multi<- DESeqDataSetFromMatrix(countData =healthy, colData=samples, design=~donor+anisotrophy+diameter+donor:anisotrophy+donor:diameter+anisotrophy:diameter) keep <- rowSums(counts(dds_multi)) >= 10 dds <- dds_multi[keep,] dds_multi <- DESeq(dds_multi, test="LRT", reduced=~donor + anisotrophy + diameter)​ This gives me the following results: resultsNames(dds_multi)​ [1] "Intercept" "donor_2_vs_1" [3] "donor_3_vs_1" "anisotrophy_random_vs_aligned" [5] "diameter_2000_vs_1000" "diameter_300_vs_1000" [7] "diameter_4000_vs_1000" "donor2.anisotrophyrandom" [9] "donor3.anisotrophyrandom" "donor2.diameter2000" [11] "donor3.diameter2000" "donor2.diameter300" [13] "donor3.diameter300" "donor2.diameter4000" [15] "donor3.diameter4000" "anisotrophyrandom.diameter2000" [17] "anisotrophyrandom.diameter300" "anisotrophyrandom.diameter4000" I then report each comparison in turn until I get results for each of the 56 possible comparisons, for example: res1<- results(dds_multi, alpha = 0.05, name = "diameter_2000_vs_1000", test="Wald")​ or res2<- results(dds_multi, alpha = 0.05, contrast=list(c("diameter_2000_vs_1000","anisotrophyrandom.diameter2000")), test="Wald") Q2: Is this the correct way to do this? thanks for any help that can be offered, Mat deseq • 328 views ADD COMMENTlink modified 14 months ago by Michael Love26k • written 14 months ago by Matthew.baldwin0 Answer: Deseq 2 multiple comparisons (3 groups, 24 samples) 0 14 months ago by Michael Love26k United States Michael Love26k wrote: I’m not sure doing all pairwise comparisons of 8 levels is the best way to approach this (and the interaction model doesn’t seem to be what you want I’d guess). What is your end goal with this analysis? ADD COMMENTlink written 14 months ago by Michael Love26k Thank Michael, Grateful for your help. The goal for the analysis are several fold: 1) Which factor (anisotrophy or diameter) produces the greatest change at an RNA level for tendon cells 2) Which genes are influenced by different anisotrophy (random vs aligned) 3) Which genes are influences by different diameter (300, 1000, 2000, 4000nm). And is the effect of diameter further modified by the scaffolds overall alignment (e.g. 300 random vs 300 random) thanks, Mat ADD REPLYlink written 14 months ago by Matthew.baldwin0 Did you consider modeling diameter as a numeric? Do you have any idea how expression will change over diameter? ADD REPLYlink written 14 months ago by Michael Love26k Hi Michael, It would be fine to model as a diameter as a numeric e.g. 0.3, 1,2,4. We believe that the greatest difference will be for the smallest i.e. 0.3um and largest diameters i.e. 4um ADD REPLYlink written 14 months ago by Matthew.baldwin0 So one option is ~donor + condition + diameter, with diameter as a numeric. This assumes no interaction between condition and diameter. Another option is ~donor + condition + condition:diameter, where you will get different diameter slopes for the two conditions. ADD REPLYlink written 14 months ago by Michael Love26k Thanks Michael. I have thought further and noted in the vignette about grouping interactions. I have refined my 3 questions: 1) For scaffolds with aligned fibre orientation, what is the effect of fibre diameter ?( i.e. 300Aligned vs 1000Aligned, 300Aligned vs 2000Aligned and 300Aligned vs 4000Aligned) 2) For scaffolds with random fibre orientation, what is effect of fibre diameter ? (i.e. 300random vs 1000random, 300random vs 2000random and 300random vs 4000random) 3) For a fixed fibre diameter, what effect does loss of scaffold alignment have? (i.e 300random vs 300 aligned, 1000random vs 1000aligned, 2000random vs 2000aligned, 4000random vs 4000aligned). As such I have simplified the design to: dds_disease<- DESeqDataSetFromMatrix(countData =disease, colData=samples_disease, design=~donor+anisotrophy+diameter+anisotrophy:diameter) keep <- rowSums(counts(dds_disease)) >= 10 dds_disease <- dds_disease[keep,] dds_diseasegroup <- factor(paste0(dds_disease$anisotrophy, dds_disease$diameter)) dds_disease$group <- relevel(dds_disease$group, ref = "aligned300") design(dds_disease) <- ~ group dds_disease <- DESeq(dds_disease) resultsNames(dds_disease) This produces the following comparisons: [1] "Intercept"                      "group_aligned300_vs_random300"  "group_aligned1000_vs_random300" "group_aligned2000_vs_random300" "group_aligned4000_vs_random300" "group_random1000_vs_random300"  "group_random2000_vs_random300"  "group_random4000_vs_random300" This will allow me to answer question 1. For example: res_disease_LFC <- lfcShrink(dds_disease, coef="group_aligned4000_vs_aligned300", type="apeglm") However, since I cannot use the contrast argument under the lfcShrink function, to generate the comparisons I want I think I need to relevel and then rerun the Deseq2 function. Is this permitted? i.e. dds_disease$group <- relevel(dds_disease$group, ref = "random300") dds_disease <- DESeq(dds_disease) resultsNames(dds_disease) This then generates the below comparisons, which I can use to answer question 2: "Intercept" "group_aligned300_vs_random300" "group_aligned1000_vs_random300" "group_aligned2000_vs_random300" "group_aligned4000_vs_random300" "group_random1000_vs_random300" "group_random2000_vs_random300" "group_random4000_vs_random300" thanks again for all the help. Very much appreciated, Mat hi Mat, Looks good. Yes, to use apeglm for LFC shrinkage, you need to relevel, but you only need to rerun nbinomWaldTest() actually, which should be much faster. So you don't need to rerun dispersion estimation.
2019-11-18 20:40:59
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5217748284339905, "perplexity": 10587.546993965589}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496669813.71/warc/CC-MAIN-20191118182116-20191118210116-00388.warc.gz"}
https://physics.stackexchange.com/questions/391748/how-to-apply-two-or-more-gauge-transformations
# How to apply two or more gauge transformations? Let $f=f(t)$ be a physical quantity of a system, $t$ is a variable e.g time. For infinitesimal variation $\delta t$: $$f(t_{0}+\delta t)=f(t_{0})+\frac{\mathrm{d}f}{\mathrm{d}t}\delta t=f_{0}+\{f,H_{T}\}$$ where $H_{T}=H+u_{m}\phi_{m}$ with $\phi_{m}\approx 0$ is primary constraint of the system. Afterward, $f(t_{0}+\delta t)$ can be written as: $$f(t+\delta t)=f_{0}+\{f,H\}\delta t + u_{m}\{f,\phi_{m}\}\delta t$$ This is the way we apply gauge transformation with the infinitesimal change of $t$. I think, if we apply the gauge transformation one more time, it will look like: $$f'=f_{0}+\{f(t_{0}+\delta t),H\}\delta t + u_{m}\{f(t_{0}+\delta t),\phi_{m}\}\delta t$$ Then the procedure will be continued with expansion of $f(t_{0}+\delta t)$. Am I correct? Assume that I am correct at that point, the calculation continues, we obtain: $$f'=f_{0}+\{f_{0},H\}\delta t+\{\{f,H\},H\}\delta^2 t+\{u_{n}\{f,\phi_{n}\},H\}\delta^2 t+u_{m}\{f_{0},\phi_{m}\}\delta t+u_{m}\{\{f,H\},\phi_{m}\}\delta^2 t+u_{m}\{u_{n}\{f_,\phi_{n}\},\phi_{m}\}\delta^2 t=f_{0}+\{f_{0},H\}\delta t+u_{m}\{f_{0},\phi_{m}\}\delta t$$ I smell something wrong here because after applying second gauge, I can not see where its effect is. There is no appearance of $\phi_{n}$ and $u_{n}$. Finally, how does it look if we apply $n$ gauge transformations? For related documents: Update with answer of my question I had just figured it out how to apply two successive gauge transformation. At first, calculate $\Delta f=\epsilon_{m}\{f,\phi_{m}\}$, then first gauge is $f=f_{0}+\Delta f=f_{0}+\epsilon_{m}\{f,\phi_{m}\}$. Next, treat $f+\epsilon_{m}\{f,\phi_{m}\}$ as new $f$, we have $$f'=f_{0}+\epsilon_{m}\{f_{0},\phi_{m}\}+\delta t\{f+\epsilon_{m}\{f,\phi_{m}\},H\}+u_{n}\{f+\epsilon_{m}\{f,\phi_{m}\},\phi_{n}\}\delta t$$compute $\Delta f'$, we get $\epsilon_{n}\{f+\epsilon_{m}\{f,\phi_{m}\},\phi_{n}\}$. Thus, after appying two successive gauge, what we get is $$f=f_{0}+\epsilon_{m}\{f,\phi_{m}\}+\epsilon_{n}\{f+\epsilon_{m}\{f,\phi_{m}\},\phi_{n}\}$$ That's how it is done. Still, I wonder why $\epsilon_{m}\phi_{m}$ is generator of gauge. • I am not really sure what you were talking about is gauge transformation or not. As far as I know, gauge transformations are (continuous) transformations on field variables (QFT) or QM states. What you present here is like a time translation. – Lê Dũng Mar 13 '18 at 12:35 • I add several documents in the question, you can check them for relevant information. – Duong H.D Tran Mar 13 '18 at 12:56 Ok, let's make a quick review. As you can see from the Dirac's lecture, Hamiltonian is not uniquely determined, but one can add a linear combination of primary constrains on it definition: $$H_T=H+u_m\phi_m$$ without changing the physics of the system. This is same as the case in Electrodynamics, where the four-potential $A^\mu$ is not determined uniquely, but: $$A^\mu\to A'^\mu = A^\mu - \partial^\mu \Lambda$$ This transformation, which leaves physical results unchanged ,is called gauge transformation. Here the additional term gives the gauge freedom. Returning to our case, the gauge transformation, initially, is the transformation on Hamiltonian as above, with the gauge freedom characterised by the coefficient $u_m$. The realization of this transformation on a phase space function $f$ is: $$\Delta f(t_0+\delta t)=f_{u_1}(t_0+\delta t)-f_{u_2}(t_0+\delta t) = \epsilon_a[g,\phi_a]$$ This definitely determines the generators of the gauge transformation, which are to be $(\epsilon_a\phi_a)$ characterised by gauge-free parameters $\epsilon_a$. Equivalently, we can present the gauge transformation operator as: $$U=e^{-\epsilon_a\phi_a}$$ (you can check that by Taylor expanding the gauge transformation $g_2 = Ug_1U^{-1}$ to the first order and realizing that the result is the above equation). Now, suppose that we start with $g_0$ at some gauge. Now, we apply two successive gauge transformations with generators $(\epsilon_a\phi_a)$ and $(\gamma_b\phi_b)$ to get $g$ and, then, $g'$, respectively: $$g' = e^{-\gamma_b\phi_b}e^{-\epsilon_a\phi_a}g_0e^{\epsilon_a\phi_a}e^{\gamma_b\phi_b}$$ Now, Taylor expand all the exponents to the first order and expand everything up to the mixed term between $\gamma$ and $\epsilon$ (neglect higher-than-second-oder terms and the second order terms containing only $\gamma$ or only $\epsilon$, also notice that $g_0$, $\phi_a$ and $\phi_b$ do not commute (for $a\neq b$)). And finally, you will get the Dirac's answer. • I understand almost all of what you had mentioned but I find it too much for my question. I found out a way to answer my question (as updated). Nevertheless, would you mind explaining why $\epsilon_{m}\phi_{m}$ is generator of gauge, please? I just answer my question to satisfy "final formula", but in physical interpretation, I don't understand why we have to add $\Delta f$ as above. – Duong H.D Tran Mar 18 '18 at 2:45
2021-04-21 11:56:38
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8997752070426941, "perplexity": 226.53204374345222}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618039536858.83/warc/CC-MAIN-20210421100029-20210421130029-00360.warc.gz"}
http://mathematica.stackexchange.com/tags/calculus-and-analysis/hot?filter=month
# Tag Info 9 Here is a visualization using MeshFunctions (as mentioned by Daniel in the comment): f = x y z; g = x^2 + 10 y^2 + z^2; gp = With[{r = 3}, RegionPlot3D[g < 5, {x, -r, r}, {y, -r, r}, {z, -r, r}, PlotStyle -> Orange, PlotPoints -> 40, Mesh -> None, ViewPoint -> Front, PlotTheme -> "Classic"] ]; With[{r = 3}, Manipulate[ ... 7 Here's how to make Mathematica integrate this, but a lot is done by hand. Cos[β] Exp[I z Cos[β - α]] == Cos[β] Cos[z Cos[α - β]] + I Cos[β] Sin[z Cos[α - β]] The integral of the first summand is zero. Make the substitution γ == α - β and expand the Cos[β] to Cos[α] Cos[γ] + Sin[α] Sin[γ]. Also note that the function is Pi-periodic. The integral ... 6 In this case you can find minimum by comparing values at zero derivates: TakeSmallestBy[{A /. #, #} & /@ Solve[D[A, x] == 0, x, Reals], First, 1] {{Sqrt[3], {x -> 0}}} TakeSmallestBy is a v10.1 function similar to MinimalBy, but performs numerical comparisons. 5 You can gain insight by evaluating and then simplifying the indefinite integrals. Integrate[(1 + b x + c y)/(1 + e x + f y + I η), y, x, Assumptions -> {x ∈ Reals, y ∈ Reals, b ∈ Reals, c ∈ Reals, e ∈ Reals,f ∈ Reals, η ∈ Reals, η != 0}]; ans = Collect[%, {ArcTan[η/(1 + e x + f y)], Log[1 + e^2 x^2 + 2 f y + f^2 y^2 + 2 e (x + f x y) + η^2]}, ... 5 One way to approach this is to look for the source of the problem. In this case, the outer integral (on y) is irrelevant to the problem, because even the simpler integral int = Integrate[(1 + b x)/(1 + e x + I η), {x, -1/2, 1/2}, Assumptions -> {b ∈ Reals, c ∈ Reals, e ∈ Reals, η ∈ Reals, η != 0}] gives a conditional expression. But now the answer ... 5 This question is being automatically bumped as unanswered. However, we have an authoritative answer in comments: Investigating as a regression. You can put a "bugs" tag on it if you like. --Daniel Lichtblau 5 Let us define the equation: Clear[eq]; eq[m_, f_] := 1/(x - 1) - (m + 1)/(x^(m + 1) - 1) == f; where x stays for Exp[f/(k t)]. If one applies the function Solve to it, Mma clearly answers that it cannot provide exact solution of this equation. It should not be expected, therefore, that one can find any other analytical solution. Numerically, it is ... 4 Although Daniel pointed correctly out that problems related to Integrate have been the subject of many discussion here, I found it worthwhile to study this case in detail, because a condition including Mod was not discussed up to now, as far as I know. The aim is to find out if there is a bug, and if so, where exactly it is sitting, and/or, if possible, to ... 4 I believe this equation are quiet unstable for the initial values, so there is a two solutions. You can either specify AccuracyGoal: ListPlot@NDSolveValue[{-w''[x] + 2/x w'[x] + w[x] == 0, w[1/10^6] == 10^-2, w[5] == 1}, w, {x, 1/10^6, 5}, AccuracyGoal -> 10] Or use the DSolveValue, the equation are solvable analytically: ... 4 With using number of assumptions, and breaking thing step by step: (I do things step by step, just to see where the problem is when it shows up, much easier to debug this way) integrand = 1/(r g) Exp[-p (1 + r a) - b q ((1 + r a)/(1 + g x))] Exp[-a/r] Exp[-b] Exp[-x/g]; z0 = Assuming[Re[(1 + q + a q r + g x)/(1 + g x)] > 0, ... 3 It seems to me that Integrate can do some strange things with your function g. From plotting g, we can see the integral should clearly be zero. g[x1_, y1_, x2_, y2_] = -Log[Sqrt[(x2 - x1)^2 + (y2 - y1)^2]] Plot[g[Cos[θ], Sin[θ], 1/10, 1/10], {θ, 0, 2 π}] Further, Integrate[g[Cos[θ], Sin[θ], 1/10, 1/10], {θ, 0, 2 π}] gives zero as expected, but ... 3 Because NDSolve cannot accommodate the x=0 boundary condition, it is necessary to perform this computation by discretizing the PDE in x. The resulting do-it-yourself procedure is discussed in Introduction to Method of Lines. For illustrative purposes, assume that x is divided into five equal segments. n = 5; h = 1/n; with a variable defined at each node, ... 2 Perform the three "easy" integrals first: gg = Assuming[T > 0 && H > 0 && G > 0 && F > 0 && p > 0 && q > 0, 1/(F G H T) Integrate[ Exp[-y/T] Exp[-a/F] Exp[-b/H] Exp[-x/G], {a, 0, \[Infinity]}, {x, 0, \[Infinity]}, {b, 0, \[Infinity]}] ] (* ... 2 I believe the main problem with original integration is due to Mathematica try to integrate with a and b being complex numbers. I have some doubts that it's even possible to analytically integrate with complex constants. Integrate[Log[a Cos[x]^2 + b Sin[x]^2], {x, 0, 2 Pi}, Assumptions -> a > 0 && b > 0] (* π (Log[(a b)/16] + 2 Log[(1 + ... 2 fixed in 10.1 (windows): code: Clear[x] Integrate[(1 - x)*(1 + 2*x)^6/Sqrt[1 - x^2], {x, -1, 1}]/Pi 2 fixed in 10.1 windows code N[Integrate[Sqrt[1 + x^3], {x, -1, 3}]] 2 With version 9.0.1, f[x_] := (p^2 + k^2 - 2 p k x)/(x - (p^2 + k^2 + 1 - ((p^2 - k^2)^2)/4)/(2 p k)); ans9 = Integrate[f[x], {x, -1, 1}, PrincipalValue -> True] (* ConditionalExpression[1/2 k p (-8 + ((-4 + (k^2 - p^2)^2) ArcCoth[(8 k p)/(-4 + k^4 - 4 p^2 + p^4 - 2 k^2 (2 + p^2))])/(k p)), k^4 + p^4 < 4 + 4 p^2 + 2 k^2 (2 + p^2) && (k ... 2 I would try to see if you can use Distribute for this: Distribute@ Integrate[f[x] + DiracDelta[x - y] g[x], {x, -Infinity, Infinity}] Unlike Map, Distribute is especially (though not exclusively) intended for use with sums. 2 One approach, admittedly not elegant, is Map[Integrate[#, {x, -∞, ∞}] &, f[x] + DiracDelta[x - y] g[x]] (* ConditionalExpression[g[y] + Integrate[f[x], {x, -∞, ∞}], Element[y, Reals]] *) Incidentally, the code in the Question can be rewritten as Integrate[#, {x, -∞, ∞}] & @ (f[x] + DiracDelta[x - y] g[x]) and the code at the beginning of this ... 2 Trace[Integrate[ Integrate[ Integrate[ E^(-v1 - v2 - v3) (v1 + v2 + v3)/3, {v3, 2 v1 - 5/2 v2, v2}], {v2, 4 v1/7, 5 v1/7}], {v1, 0, Infinity}]] Check one line before the result. I think this is what you want. 2 Well, n[1, 1, 1] (* {1.38996, 1.85383, 1.37325} *) n[5, 5, 5] (* {1.38996, 1.85383, 1.37325} *) When you first define your variables and then assign n[...] to this expression, the expression will evaluate and then be stored as that value. Regardless of what values you pass to n it will always return the same thing. You can read more about how Mathematica ... 2 $Version "10.0 for Mac OS X x86 (64-bit) (September 10, 2014)" As entered Mathematica returns the wrong result. Integrate[Sqrt[y/x] (Sin[t]^2 Cos[t])/(x + y + 2 Sqrt[x y] Cos[t]), {t, 0, Pi}, Assumptions -> {x > 0, y > 0, x > y}] Pi*(1/(8*y) - (3*y)/(8*x^2)) However, a workaround is to convert the trig functions to exponentials ... 1 This appears to be a bug in V10.0.x which was fixed in V10.1.0.$Version "10.1.0 for Mac OS X x86 (64-bit) (March 24, 2015)" Integrate[Sqrt[y/x] (Sin[t]^2 Cos[t])/(x + y + 2 Sqrt[x y] Cos[t]), {t, 0, Pi}, Assumptions -> {x > 0, y > 0, x > y}] -((π y)/(4 x^2)) 1 Something like this?: rect4[f_, a_, b_, n_] := With[{ex = Integrate[f[y], {y, a, b}], r = Range[0, n]}, With[{h = (b - a)/2.^r}, {r, #, {"/"}~Join~Ratios@#}\[Transpose] &@ Abs[ex - (b - a) Mean /@ f /@ Range[a, b - h, h]]]] MatrixForm@rect4[#^2 &, -1, 1, 6] 1 Something like this: f[x_, m_] := 1/(x - 1) - (1 + m)/(x^(1 + m) - 1) g[y_, m_] := x /. NSolve[f[x, m] == y, x] 1 fixed in 10.1 (windows). Now integral remains unevaluated. code: \$Assumptions = t \[Element] Reals && t > 0 && t < 1 f[x_] = Abs[Re[Exp[I*x]/(1 - t*Exp[I*x])]] Integrate[f[x], {x, 0, 2 \[Pi]}] 1 bug fixed in 10.1 (windows) code ArcLength[Line[{{0, 0}, {1, 0}, {2, 0}}]] ArcLength[Line[{{0}, {1}, {2}}]] ArcLength[Line[{{0, 0}, {1, 0}, {2.0, 0}}]] ArcLength[Line[{{0}, {1}, {2.0}}]] Only top voted, non community-wiki answers of a minimum length are eligible
2015-04-25 18:27:21
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.28860771656036377, "perplexity": 3953.7661757273813}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-18/segments/1429246650671.76/warc/CC-MAIN-20150417045730-00022-ip-10-235-10-82.ec2.internal.warc.gz"}
http://math.stackexchange.com/questions/216598/g-is-a-topological-group-acts-on-topological-space-x-is-f-gx-rightarrow-x
# G is a topological group acts on topological space $X$, is $f_{g}:X\rightarrow X, x\rightarrow g*x$ continuous? Let $G$ be a topological group acts on the topological space $X$, for an elememt $g\in G$, let's define the map $f:X\rightarrow X, f(x)=g*x$. I am trying to find if $f$ is continuous? my best thanks - Without further condition on the group action, $f_g$ is not necessarily continuous. Usually, with topological groups, you specify that the group action: $\phi:G\times X\rightarrow X$ is continuous. Then $f_g(x)=\phi(g,x)$ is continous. –  Thomas Andrews Oct 19 '12 at 0:09 yes, actually $\phi:G\times X \rightarrow X$ is continuous, but how can we prove that $f_{g}:X\rightarrow X$ defined above is continuous –  Kamal Oct 19 '12 at 0:16 Because $h_g:X\to G\times X$ defined as $h_g(x)=(g,x)$ is continuous. And $f_g=\phi \circ h_g$ –  Thomas Andrews Oct 19 '12 at 0:22
2013-12-18 15:18:17
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9825127124786377, "perplexity": 228.79110647677228}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-48/segments/1387345758904/warc/CC-MAIN-20131218054918-00021-ip-10-33-133-15.ec2.internal.warc.gz"}
http://redding.dev/policy-paragraphs/
economics # Policy Paragraphs Some brief musings on a few policies. ## Government Debt As I've discussed previously, the appropriate discount rate for Americans is probably around 2.4%. The real interest rate on federal government debt is literally negative now and has almost always been below 2.4% DFII10. Forget investment, the fact that interest rates are smaller than the discount rate suggests the government should be borrowing just to boost consumption. However, absent a significant output gap, fiscal stimulus generates an increase in demand without generating much increase in production, which just means the stimulus generates a combination of inflation and imports. If we only cared about Americans, we could argue that the stimulus will only generate imports, which makes the above interest-rate argument works. However, (a) inflation is another consequence and (b) we care about everyone, which means deficit spending during times of full employment just reallocate consumption from present foreigners to present Americans and from future Americans to future foreigners. This is probably net-negative. Moreover, this is all premised on the assumption that the government is run by intelligent and knowledgable utilitarian planners. In practice, it's really unclear that marginal government spending is equal in value to marginal consumption. It's also really unclear that the government is competent enough to reign in deficits once interest rates exceed the discount rate. Both of these make it plausible that deficit spending should be generally discouraged. In short, deficit spending during good economic times is neutral except for the cost created by increased risk of default (and the resulting economic mayhem). Deficit spending during bad economic times can be good (particularly if you have an imperfect central bank, since optimal monetary policy will offset the effects of fiscal policy Monetary offset is more mainstream than you think). Although the above reasoning is not completely mainstream, this conclusion is. ## Climate Change William Nordhaus won a Nobel prize for integrating climate change models into macroeconomic analysis William Nordhaus, so it seems reasonable to turn towards him for climate change analysis. In 2017, he estimated the social cost of carbon at around $34 per ton in 2015 Revisiting the social cost of carbon CPIAUCSL and grows by ~3% per year. However, Nordhous assumes a relatively high pure-time discount rate. This is a relatively common by economists, and they justify it by saying that they're simply using the discount rate implied by human behavior - that is, they claim to be using the values we live by. The problem with this, of course, is that we don't always live by the values we have and humans not caring enough about the future is a well known cognitive bias. We can correct for this bias by removing the pure-time discount rate and computing a new growth-corrected discount rate, which is just$\epsilon \cdot g$where$\epsilon$is the utility-income elasticity and$g$is the expected real growth in global GDP per capita. The former, we've estimated to be about 0.35; the latter Nordhaus estimates at around 2% based on expert surveys. Together, they imply a growth-corrected discount rate of around 0.7%. Figure 3 from Nordhaus' paper suggests this makes the social cost of carb about 4.4 times higher, or about \$150 per ton. Conversely, when accounting for the fact that the poor are less able to deal with climate change than the rich, Nordhaus assumes $\epsilon=0.45$ Scientific and Economic Background on DICE models, which means he is more (um) "economically left-leaning" than I am. I'm no expert, but it doesn't seem like the social cost of carbon is particularly sensitive to small changes in $\epsilon$. For instance, a study examined various models with $\epsilon=1$ and $\epsilon=1.2$; they found (Table 6 of Equity weighting and the marginal damage costs of climate change) that this tweak of 0.2 can cause the social cost of carbon to change by between -35% and +5%. So, it seems the most likely effect of tweaking $\epsilon$ down 0.1 would be to lower the social cost of carbon by ~13%. In short, a ton of carbon dioxide probably has an externality of roughly \$130. Given that we emit around 5 billion tons of carbon dioxide per year, that's a total lost value of ~\$650 billion per year or about 3% of GDP. The other study I found using zero pure time preference and accounting for economic inequality found significantly higher estimates (around ~\$400 per ton) despite assuming$\epsilon=0$Azar. So, it seems more likely that our estimate of \$130 is low rather than high. ## Minimum Wage The federal minimum wage was set to \$7.25 in 2009, where it's been ever since. In 2013, most economists thought raising the minimum wage to \$9 was a good idea IGM Economic Experts Panel. (2013). Minimum Wage and they remain remarkably uncertain about how this or a \$15 minimum wage by 2020 would affect the labor supply IGM Economic Experts Panel. (2013). Minimum Wage IGM Economic Experts Panel. (2015).$15 Minimum Wage. Meta-analyses generally find that minimum wage increases don't affect overall employment - though, some demographics (esp young people) suffer greater displacement from the labor force Chletsos (see pages 2-4 for a history of minimum wage meta-analyses). The UK is fairly similar to the US culturally and economically. However, their minimum wage is equal to around 50% of their median income compared to about 25% in the US. For this reason, it seems reasonable to suppose that since minimum wage increases have minimal effect on labor supply in the UK Hafner, they should also have minimal effect in the US. (That being said, the minimum wage in the UK is different for those under 25 years old) However, it has been proposed by some that employers will respond to minimum wage hikes by cutting back hours-per-employee rather than by cutting the number of employees. Unfortunately, there has been a dirth of studies examining this in the US Belman (see pg 124). That being said, there have been 7 studies examining young hourly works specifically and they found elasticities ranging from -0.03 to -0.77 (pg 121). The estimate for workers as a whole is likely smaller. In short, then, it looks like raising the minimum wage in the US is generally good in that it reduces inequality with minimal distortions on the labor supply. This conclusion is made weaker by the dearth of studies on changes in hours worked; however, it is strengthened if you think we should be working fewer hours anyways. I'm generally in favor of increasing the minimum wage. For a contrary perspective, see The Myopic Empiricism of the Minimum Wage and for a rebuttal see Bryan Caplan is wrong about the minimum wage. For possible reasons for the minimum effect of the minimum wage on employment see Schmitt. ## Immigration Simple economic theory suggests that immigrants depress wages and increase unemployment in the short-run, but that in the long-run investment will follow human capital leading to no real effect on wages. This is mostly confirmed by a literature review Okkerse, which found that the probability that immigrants increase unemployment is low in the short run and zero in the long run. Most area analyses and time-series analyses fail to find a significant influence of immigration on (un)employment probabilities That being said they did find that "immigration negatively affects wages of less-skilled labourers and earlier immigrants." This is also the consensus among economists: that immigrants (both high- and low-skilled) are good for the average American IGM Economic Experts Panel. (2013). Low-Skilled Immigrants IGM Economic Experts Panel. (2013). High-Skilled Immigrants. Moreover, even if immigration did have significant negative long-term negative effects on Americans, its still likely it'd be net-positive from a global perspective since there's still the large benefit to the immigrants themselves. That being said, there's a strain of economic theory that claims immigrants either (1) cause institution-decreasing "brain drain" to their original countries and (2) cause institutional deterioration in their host countries. I don't know of any empirical work on this issue, but it could end up making immigration net-negative. A third argument against immigration is related to (1): there are negative externalities when people move away from their hometowns. For example, when I moved away from Wisconsin, I (presumably) accounted for the value I'd lose from spending less time with friends but I (presumably) didn't properly account for the value they'd lose from spending less time with me. Still, upon weighing the evidence, on one hand, we have a literature review, consensus of experts, and literally trillions of dollars in potential benefits Clemens Borjas. On the other hand, we have a couple academic argument without empirical support and which, if true, would mostly imply that immigration was net-neutral from a global welfare perspective. In short, I support immigration. The more the merrier. Board of Governors of the Federal Reserve System (US), 10-Year Treasury Inflation-Indexed Security, Constant Maturity [DFII10], retrieved from FRED, Federal Reserve Bank of St. Louis; https://fred.stlouisfed.org/series/DFII10, October 19, 2020. Sumner, S. (2019). Monetary offset is more mainstream than you think. https://www.econlib.org/monetary-offset-is-more-mainstream-than-you-think/ Wikipedia contributors. (2020, October 4). William Nordhaus. In Wikipedia, The Free Encyclopedia. Retrieved 22:22, October 19, 2020, from https://en.wikipedia.org/w/index.php?title=William_Nordhaus&oldid=981750021 Nordhaus, W. D. (2017). Revisiting the social cost of carbon. Proceedings of the National Academy of Sciences, 114(7), 1518-1523. https://doi.org/10.1073/pnas.1609244114 U.S. Bureau of Labor Statistics, Consumer Price Index for All Urban Consumers: All Items in U.S. City Average [CPIAUCSL], retrieved from FRED, Federal Reserve Bank of St. Louis; https://fred.stlouisfed.org/series/CPIAUCSL, October 18, 2020. Nordhaus, W. (2020). Scientific and Economic Background on DICE models. https://sites.google.com/site/williamdnordhaus/dice-rice Anthoff, D., Hepburn, C., & Tol, R. S. (2009). Equity weighting and the marginal damage costs of climate change. Ecological Economics, 68(3), 836-849. https://doi.org/10.1016/j.ecolecon.2008.06.017 Azar, C., & Sterner, T. (1996). Discounting and distributional considerations in the context of global warming. Ecological Economics, 19(2), 169-184. https://doi.org/10.1016/0921-8009(96)00065-1 IGM Economic Experts Panel. (2013). Minimum Wage. https://www.igmchicago.org/surveys/minimum-wage/ IGM Economic Experts Panel. (2015). \$15 Minimum Wage. https://www.igmchicago.org/surveys/15-minimum-wage/ Chletsos, M., & Giotis, G. P. (2015). The employment effect of minimum wage using 77 international studies since 1992: A meta-analysis. https://mpra.ub.uni-muenchen.de/61321/ Hafner, M., Taylor, J., Pankowska, P., Stepanek, M., Nataraj, S., & Van Stolk, C. (2017). The impact of the national minimum wage on employment: A meta-analysis. https://doi.org/10.7249/RR1807 Belman, D., & Wolfson, P. J. (2014). What does the minimum wage do?. WE Upjohn Institute. https://www.google.com/books/edition/What_Does_the_Minimum_Wage_Do/iRDVAwAAQBAJ IGM Economic Experts Panel. (2013). Low-Skilled Immigrants. https://www.igmchicago.org/surveys/low-skilled-immigrants/ IGM Economic Experts Panel. (2013). High-Skilled Immigrants. https://www.igmchicago.org/surveys/high-skilled-immigrants/ Okkerse, L. (2008). How to measure labour market effects of immigration: A review. Journal of Economic Surveys, 22(1), 1-30. https://doi.org/10.1111/j.1467-6419.2007.00533.x Clemens, M. A. (2011). Economics and emigration: Trillion-dollar bills on the sidewalk?. Journal of Economic perspectives, 25(3), 83-106. https://doi.org/10.1257/jep.25.3.83 Borjas, G. J. (2015). Immigration and globalization: A review essay. Journal of Economic Literature, 53(4), 961-74. https://doi.org/10.1257/jel.53.4.961 Caplan, B. (2013). The Myopic Empiricism of the Minimum Wage. The Library of Economics and Liberty. https://www.econlib.org/archives/2013/03/the_vice_of_sel.html Ashok. (2013). Bryan Caplan is wrong about the minimum wage. This is Ashok. https://ashokarao.com/2013/03/13/bryan-caplan-is-wrong-about-the-minimum-wage/ Schmitt, J. (2013). Why does the minimum wage have no discernible effect on employment? (Vol. 4). Washington, DC: Center for Economic and Policy Research. Schmitt, J. (2013). Why does the minimum wage have no discernible effect on employment? (Vol. 4). Washington, DC: Center for Economic and Policy Research.
2021-03-05 19:40:33
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5059919357299805, "perplexity": 4805.832524857754}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178373241.51/warc/CC-MAIN-20210305183324-20210305213324-00537.warc.gz"}
https://www.ncatlab.org/nlab/show/real+cohomology
Contents cohomology # Contents ## Idea By real cohomology one usually means ordinary cohomology with real number coefficients, denoted $H^\bullet\big(-, \mathbb{R}\big)$. Hence, with the pertinent conditions on the domain space $X$ satisfied, its real cohomology $H^\bullet\big(-, \mathbb{R}\big)$ is what is computed by the Cech cohomology or singular cohomology or sheaf cohomology of $X$ with coefficients in $\mathbb{R}$. In particuar, for $X$ a smooth manifold, the de Rham theorem says that real cohomology of $X$ is also computed by the de Rham cohomology of $X$ $H^\bullet\big( X, \mathbb{R}\big) \;\simeq\; H^\bullet_{dR}\big( X \big) \,.$ More generally, for $X$ a smooth manifold with smooth action of a connected compact Lie group, the equivariant de Rham theorem says that the real cohomology of the homotopy quotient (e.g. Borel construction) of $X$ is computed by the Cartan model for equivariant de Rham cohomology on $X$. ## Properties Last revised on December 5, 2020 at 10:50:25. See the history of this page for a list of all contributions to it.
2022-06-29 02:55:27
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 12, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.987305223941803, "perplexity": 295.8706992901124}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103620968.33/warc/CC-MAIN-20220629024217-20220629054217-00282.warc.gz"}
http://physics.stackexchange.com/questions/63729/what-is-the-entropy-of-a-system-when-its-volume-tends-to-zero
# What is the entropy of a system when its volume tends to zero? Say that a closed system has $n$ dimensions and is in the shape of a $n$-ball with a radius of 1, it's volume will be $$\frac{\pi^\frac{n}{2}}{\Gamma(\frac{n}{2}+1)}$$ which tends to 0 yet is not empty as n tends to infinity. My question might not make any sense, and my understanding of entropy might make things even worse, but if it's sensible, what will become of the entropy of such a system? If my first exposé really makes no sense, I would reformulate as follows: let's say that an unfathomable force compresses a closed system such that it's volume decreases towards zero, what will become of the entropy of such a system? (I realise it might not be the same problem/question) - Let me begin with the second question where you don't change the dimensionality, just the volume. The entropy never decreases when you actually compress gas. The compression means that the walls are mostly moving against the colliding molecules which means that they're recoiled backwards at higher velocities. The molecules' kinetic energy increases so they occupy a larger volume in the momentum space (in macroscopic language, a gas heats up while being compressed) which at least compensates the decrease of the volume in the position space. The other answer is incorrect. The second laws says not only that systems exhibit some activity indicating that they don't like a decreasing entropy; instead, it says that whatever activity physical systems display, they will never achieve a macroscopic decrease of the entropy. It's just impossible. To compress gas by 70% is possible, to decrease the entropy by a macroscopic amount is not. Now, the interesting first question. If you could change the effective dimensionality, it would still be true in any consistent theory that the entropy can't decrease. So if your theory were just able to add dimensions like that while keeping a molecule in a sphere of the increasing dimension, the second law of thermodynamics would imply that such an addition of dimensions isn't physically possible – it would be another, more sophisticated example of the perpetual motion machine of the second kind. In some sense, it is true that the second law encourages physical systems to lose the dimensions (a way to increase the entropy, given your formula for the higher-dimensional spherical volumes). When the energy dissipates, the energy per degree of freedom effectively goes down which allows us to use a lower-dimensional "effective" description. For example, a gas full of Kaluza-Klein particles probing (moving in) extra dimensions will tend dissipate its energy and decay to many lower-energy quanta which are effectively living just in 3+1 dimensions. -
2014-04-21 05:37:54
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7541010975837708, "perplexity": 428.7474904482861}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00119-ip-10-147-4-33.ec2.internal.warc.gz"}
http://openstudy.com/updates/5594b05ee4b0113e4fb81184
## anonymous one year ago For the function f(x) = –(x + 1)2 + 4, identify the vertex, domain, and range. 1. anonymous The vertex is (–1, 4), the domain is all real numbers, and the range is y ≥ 4. The vertex is (–1, 4), the domain is all real numbers, and the range is y ≤ 4.-my answer The vertex is (1, 4), the domain is all real numbers, and the range is y ≥ 4. The vertex is (1, 4), the domain is all real numbers, and the range is y ≤ 4. 2. anonymous need help want to make sure if im correct 3. anonymous ok what are you thinking? 4. anonymous b 5. anonymous thats right 6. anonymous thank you so much can help make sure I have 3more problems correct 7. anonymous ok 8. anonymous What is the equation of the following graph in vertex form? parabolic function going down from the left through the point negative four comma zero and turning at the point negative three comma negative one and going up through the point negative two comma zero and then through the point zero comma eight and continuing up towards infinity 9. anonymous Courtesy of Texas Instruments y = (x − 3)2 − 1 y = (x + 3)2 − 1 y = (x − 4)2 − 2-my answer y = (x − 4)2 + 8 10. anonymous next time just type the numbers please. it's a lot easier to decipher. If it turns at (-3, -1) that's the vertex, so it should be the 2nd choice 11. anonymous im so sorry 12. anonymous no problem 13. anonymous Given the function f(x) = x2 and k = –1, which of the following represents a function opening downward? 14. anonymous f(x) + k-my answer kf(x) f(x + k) f(kx) 15. anonymous f(x)+k moves the function down k units. You want the one that makes -1*f(x) 16. anonymous so D 17. anonymous b 18. anonymous MMM o ok 19. anonymous 0 1 –1 -my answer –2 20. anonymous what's the question? 21. anonymous alculate the average rate of change for the given graph from x = –2 to x = 0 22. anonymous rate of change just means slope. $m=\frac{ y_2-y_1 }{ x_2-x_1 }$ 23. anonymous plug the two points into the equation 24. anonymous -2 25. anonymous well its between -2 nd 0 26. anonymous |dw:1435809574993:dw| 27. anonymous 0 28. anonymous yes 29. anonymous was that the last one? 30. anonymous tht was the last one THANK YOU SO MUCH 31. anonymous WAITTT another 5 came up 32. anonymous Using the completing-the-square method, find the vertex of the function f(x) = –3x2 + 6x − 2 and indicate whether it is a minimum or a maximum and at what point. 33. anonymous Maximum at (1, 1) Minimum at (1, 1) Maximum at (–1, 2) Minimum at (–1, 2) 34. anonymous helpp Find more explanations on OpenStudy
2017-01-19 13:45:26
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6916552186012268, "perplexity": 2191.9594694058337}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280668.34/warc/CC-MAIN-20170116095120-00541-ip-10-171-10-70.ec2.internal.warc.gz"}
https://mareknarozniak.com/2020/10/14/jordan-wigner-transformation/
In this article we are going to simulate different kinds of indistinguishable particles and introduce that concept briefly in case if you are not yet familiar with them. We will also write few words about the second quantization formalism in $1$D to be able to formally represent quantum states involving those particles. For fermions we will use Jordan-Wigner transformation, for bosons we will use native operators already available in QuTip. In the end you should be able define operators that act on fermions or bosons in QuTip, we will test their properties using pytest testing framework and eventually as we conclude, we will briefly discuss the idea of test driven science. For a starter, brief introduction to indistinguishable particles. Let $\left\vert \psi_\alpha \psi_\beta \right>$ be a quantum state representing a system holding two indistinguishable particles $\alpha$ and $\beta$. Their indistinguishability implies that \begin{aligned} \left \vert \left< \psi_\beta \psi_\alpha \vert \psi_\alpha \psi_\beta \right> \right \vert^2 &= 1. \end{aligned} Note that particles $\alpha$ and $\beta$ are indistinguishable from measurement perspective and what vanishes during quantum measurement is the phase factor. In such case exchanging particles should allow a phase factor without violating their indistinguishability. To express that let $U$ be an operator that exchanges two particles \begin{aligned} U \left \vert \psi_\alpha \psi_\beta \right> &= e^{i\phi} \left \vert \psi_\beta \psi_\alpha \right>. \end{aligned} The value of $\phi$ characterizes groups of indistinguishable particles. For bosons phase factor is $\phi = 0$ and for fermions it is $\phi = \pi$. Particles that have any different value of $\phi$ are commonly referred to as anyons. Let us change the notation a bit to something more workable computationally, for this let us use the second quantization formulation which allows us to describe quantum many-body states from perspective of occupation number. For this we consider the Fock state basis, consisting of single particle states occupancies. For example, a Fock state for a quantum $4$-body system in a vacuum would be $\left \vert 0000 \right>$ and if last site is occupied by a single particle we would write it as $\left \vert 0001 \right>$ and so on. Such Fock states are manipulated using creation and annihilation operators. For fermions we write them as $a^\dagger_j$ and $a_j$. For bosons $b^\dagger_j$ and $b_j$. The subscript $j$ represents the lattice site index. For example we could place fermions on the edges using $a^\dagger_1 a^\dagger_4\left \vert 0000 \right> = \left \vert 1001 \right>$. Values in the ket represent occupancy number. Note that this notation does not really distinguish what occupies the site so let us just assume those were fermionic sites and from now on let us try to just define if site is meant for bosons or fermions. We should try to avoid mixing them up. The properties of indistinguishable particles (i.e. what phase factor their exchange pulls out) is entirely described by their operators, thus for fermions we have \begin{aligned} \{a_n, a_{n^\prime}\} &= 1 \\ \{a_n, a_{n^\prime}^\dagger\} &= \delta_{nn^\prime} \\ a_n^2 &= 0 \end{aligned} and for bosons \begin{aligned} [b_n, b_{n^\prime}] &= 0 \\ [b_n, b_{n^\prime}^\dagger] &= \delta_{nn^\prime}. \end{aligned} the above properties are true in the infinite-dimensional Hilbert space. Here we consider numerical simulation so our Hilbert space is fixed to hold maximum $N_p$ bosons per site and commutation relation becomes \begin{aligned} [b_n, b_{n^\prime}] &= 0 \\ [b_n, b_{n^\prime}^\dagger] &= \delta_{nn^\prime} [1 - \frac{N_p + 1}{N_p!} (b_n)^N_p (b_n^\dagger)^N_p] \end{aligned} from (Somma et al., 2003). Where $[A, B] = AB - BA$ is the commutator and $\{A, B\} = AB + BA$ is the anticommutator. Also note how bosonic operators do not square to zero! To use those operators in our simulations we need to translate them into spin-$\frac{1}{2}$, we do not worry about bosonic operators because QuTip already provides creation and annihilation operators for $N$-level Fock basis for bosons. For fermions (unfortunately) we cannot be so lazy and we need to make them ourselves and for this we need the famous Jordan-Wigner transformation \begin{aligned} a_j &= (\prod_{k=1}^{j-1} \sigma^\alpha_k)(\sigma^\beta_j + i \sigma^\gamma_j) \\ a_j^\dagger &= (\prod_{k=1}^{j-1} \sigma^\alpha_k)(\sigma^\beta_j - i \sigma^\gamma_j) \end{aligned} I do not like to be attached to any particular basis so I generalized the operators for Jordan-Wigner transformation to Pauli operators labeled $\sigma^\alpha$, $\sigma^\beta$ and $\sigma^\gamma$. The choice of those operators would affect the spin-$\frac{1}{2}$ representation of the vacuum state and could potential mix-up the creation and annihilation operators. The choice of $\sigma^\alpha$ determines how vacuum is represented, vacuum state would be the state that has $+1$ eigenvalue of the $\sigma^\alpha$-operator. The occupied state would be one that has $-1$ eigenvalue. Regarding the choice of remaining operators, if $\sigma^\beta \sigma^\gamma = i\sigma^\alpha$ then $a$ and $a^\dagger$ will represent annihilation and creation respectively (as expected). On the other hand choosing $\sigma^\beta \sigma^\gamma = -i\sigma^\alpha$ would swap their interpretations. Let us implement the Jordan-Wigner transformation that works for an arbitrary choice of those operators for a $N$-body system. First some spin-$\frac{1}{2}$ operators. import numpy as np from qutip import qeye, sigmax, sigmay, sigmaz, tensor def Is(i): return [qeye(2) for j in range(0, i)] def Sx(N, i): return tensor(Is(i) + [sigmax()] + Is(N - i - 1)) def Sy(N, i): return tensor(Is(i) + [sigmay()] + Is(N - i - 1)) def Sz(N, i): return tensor(Is(i) + [sigmaz()] + Is(N - i - 1)) Also, as usual, product, sum and power operators that act on generic Object-type which is very useful for defining quantum mechanical operators using QuTip. def osum(lst): return np.sum(np.array(lst, dtype=object)) def oprd(lst): return np.prod(np.array(lst, dtype=object)) def opow(op, N): return oprd([op for i in range(N)]) Using these we can defined Jordan-Wigner operators. def a_(N, n, Opers=None): Sa, Sb, Sc = Sx, Sy, Sz if Opers is not None: Sa, Sb, Sc = Opers return oprd([Sa(N, j) for j in range(n)])*(Sb + 1j*Sc)/2. Sa, Sb, Sc = Sx, Sy, Sz if Opers is not None: Sa, Sb, Sc = Opers return oprd([Sa(N, j) for j in range(n)])*(Sb - 1j*Sc)/2. We could have just use a_.dag() for the definition of ad as they only differ by a sign but I find it beneficial to sometimes follow the definitions strictly even if they differ only by a sign. Benefit of that is it helps to correlate things for those who read this article and see those things for the first time. We also need identity operator, but not just any identity… We need identity that matches the QuTip’s quantum object type of Hilbert space of $N$ $2$-level particles. def I(N): return osum([Sz(N, i)*Sz(N, i) for i in range(N)])/N Lets write a little test script that checks basic properties of fermions and bosons. It must cover all possible choices of Jordan-Wigner operators (work in any basis) for fermions and for bosons we expect it to cover cases of different number of possible levels. Lets start by making the necessary imports. import pytest import math import itertools from qm import Sx, Sy, Sz from qm import a_, ad, b_, bd, I from qm import opow from qm import commutator, anticommutator Now parameters, we want to cover cases of $N = 2 \dots 4$ particles and in case of bosons $N_p = 2 \dots 4$ levels. Using the itertools library we prepare all possible permutations of spin operators for Jordan-Wigner and provide them with string type label for test naming. Ns = [2, 3, 4] Nps = [2, 3, 4] jws = itertools.permutations([(Sx, 'X'), (Sy, 'Y'), (Sz, 'Z')]) Prepare products of those parameters to cover all cases for fermions and bosons as well as pre-generate the test names. Fermion tests will be dynamically labeled using fermionTestName function that accepts parameters and returns test case name as a string. def fermionTestName(param): N, jw = param (_, la), (_, lb), (_, lc) = jw return 'N={0},JW={1}'.format(str(N), la + lb + lc) fermion_params = list(itertools.product(Ns, jws)) fermion_names = [fermionTestName(param) for param in fermion_params] boson_params = list(itertools.product(Ns, Nps)) boson_names = ['N={0},Ns={1}'.format(str(N), str(Ns)) for N, Ns in boson_params] Finally, let us implement test for fermion operators. It is a literal implementation of fermion operator properties stated above. Note how zero operator is prepared by multiplying identity by constant zero @pytest.mark.parametrize('N,jw', fermion_params, ids=fermion_names) def testFermions(N, jw): (Sa, _), (Sb, _), (Sc, _) = jw Opers = Sa, Sb, Sc zero = 0.*I(N) # test all the pairs for n in range(N): a_n = a_(N, n, Opers=Opers) for np in range(N): a_np = a_(N, np, Opers=Opers) assert anticommutator(a_n, a_np) == zero if n == np: else: assert a_n*a_n == zero we proceed in the same way to design test for bosons, following the finite Hilbert space boson commutation relations stated above @pytest.mark.parametrize('N,Np', boson_params, ids=boson_names) def testBosons(N, Np): zero = 0.*b_(N, Np, 0)*bd(N, Np, 0) # test all the pairs for n in range(N): b_n = b_(N, Np, n) bdn = bd(N, Np, n) for np in range(N): b_np = b_(N, Np, np) bdnp = bd(N, Np, np) # test anticommutation properties assert commutator(b_n, b_np) == zero LHS = commutator(b_n, bdnp) RHS = zero if n == np: NpF = math.factorial(Np) RHS = (1. - ((Np+1)/NpF)*opow(bdn, Np)*opow(b_n, Np)) assert LHS == RHS \$ python -m pytest tests.py -v ============================= test session starts ============================== platform darwin -- Python 3.5.3, pytest-4.2.0, py-1.8.0, pluggy-0.12.0 -- /Users/marek/.pyenv/versions/3.5.3/bin/python cachedir: .pytest_cache metadata: {'Platform': 'Darwin-18.6.0-x86_64-i386-64bit', 'Python': '3.5.3', 'Packages': {'pluggy': '0.12.0', 'py': '1.8.0', 'pytest': '4.2.0'}, 'Plugins': {'steps': '1.6.4', 'timeout': '1.3.3', 'html': '1.22.1', 'metadata': '1.8.0', 'dynamodb': '1.2.0', 'flaky': '3.6.1'}} rootdir: /Users/marek/Development/bloggists/jordan-wigner, inifile: plugins: steps-1.6.4, timeout-1.3.3, dynamodb-1.2.0, flaky-3.6.1, html-1.22.1, metadata-1.8.0 collected 27 items tests.py::testFermions[N=2,JW=XYZ] PASSED [ 3%] tests.py::testFermions[N=2,JW=XZY] PASSED [ 7%] tests.py::testFermions[N=2,JW=YXZ] PASSED [ 11%] tests.py::testFermions[N=2,JW=YZX] PASSED [ 14%] tests.py::testFermions[N=2,JW=ZXY] PASSED [ 18%] tests.py::testFermions[N=2,JW=ZYX] PASSED [ 22%] tests.py::testFermions[N=3,JW=XYZ] PASSED [ 25%] tests.py::testFermions[N=3,JW=XZY] PASSED [ 29%] tests.py::testFermions[N=3,JW=YXZ] PASSED [ 33%] tests.py::testFermions[N=3,JW=YZX] PASSED [ 37%] tests.py::testFermions[N=3,JW=ZXY] PASSED [ 40%] tests.py::testFermions[N=3,JW=ZYX] PASSED [ 44%] tests.py::testFermions[N=4,JW=XYZ] PASSED [ 48%] tests.py::testFermions[N=4,JW=XZY] PASSED [ 51%] tests.py::testFermions[N=4,JW=YXZ] PASSED [ 55%] tests.py::testFermions[N=4,JW=YZX] PASSED [ 59%] tests.py::testFermions[N=4,JW=ZXY] PASSED [ 62%] tests.py::testFermions[N=4,JW=ZYX] PASSED [ 66%] tests.py::testBosons[N=2,Ns=2] PASSED [ 70%] tests.py::testBosons[N=2,Ns=3] PASSED [ 74%] tests.py::testBosons[N=2,Ns=4] PASSED [ 77%] tests.py::testBosons[N=3,Ns=2] PASSED [ 81%] tests.py::testBosons[N=3,Ns=3] PASSED [ 85%] tests.py::testBosons[N=3,Ns=4] PASSED [ 88%] tests.py::testBosons[N=4,Ns=2] PASSED [ 92%] tests.py::testBosons[N=4,Ns=3] PASSED [ 96%] tests.py::testBosons[N=4,Ns=4] PASSED [100%] ========================== 27 passed in 4.72 seconds =========================== We tested commutation / anticommutation properties of different kinds of indistinguishable particles. We defined QuTip operators for fermions using Jordan-Wigner operators. We have not simulated any physical system although using those operators we can start working with different physical systems in 1D geometry. Jordan-Wigner transformation although primarily designed for 1D systems, it can be extended to 2D (Azzouz, 1993). I am strongly attached to the idea of testing operator properties. Writing pytest tests takes huge amount of my daily research work but it is a good time investment. I am a huge advocate of test-driven development (TDD) and I brought those habits with me to research work as I career switched from software engineering. Research is primarily exploration of the unknown. Doing research requires exploring ideas which have never been tried before. In case of numerical simulations it remains true. I do not believe it would be possible to get right results if at least one operator is not working properly. I observe people working with tools like Mathematica notebooks or Jupyter, often looping around being blocked on same bug resulting from some old state being preserved as one of the cells was not re-run after changes were made. I do large part of my numerics with pytest, whenever I test a hypothesis literally everything is being tested so that if I temporarily change something to try an idea test and if that breaks anything then pytest will catch it immediately. For the complete source code please consult the gist repository. 1. Somma, R. D., Ortiz, G., Knill, E. H., & Gubernatis, J. (2003). Quantum simulations of physics problems. Quantum Information and Computation. https://doi.org/10.1117/12.487249 2. Azzouz, M. (1993). Interchain-coupling effect on the one-dimensional spin-1/2 antiferromagnetic Heisenberg model. Physical Review B, 48(9), 6136–6140. https://doi.org/10.1103/physrevb.48.6136
2020-10-31 22:59:24
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 49, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9142273664474487, "perplexity": 3944.5709091240988}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107922463.87/warc/CC-MAIN-20201031211812-20201101001812-00538.warc.gz"}
https://in-terra-veritas.blogspot.com/2008/
Ramblings about what I encounter within the realm of the geosciences, as well as the occasional rant about nonsense. ## 20 December 2008 ### Anchor's Away! Geology in action in the Mediterranean Just saw this on Reuters. A couple other news sites (358 at the time of this post) have picked it up as well. The USA Today story even has a map (above), as well as giving us a general location (between Italy and Tunisia). For the geographically challenged (such as myself) here is a map with Tunisia in orange. Once again internet cables have been cut. Sounds like the work of that arch fiend "Submarine Slides". Doubly so since the USGS recorded a 5.9 earthquake near the location of the three breaks. However, the cable companies and news agencies aren't putting it together that way. They are much more eager to blame that anchor again (really, somebody should keep tabs on that rogue anchor and its cable cutting agenda). It's not yet know[n] what cut the cables between Italy and Tunisia. A similar outage in January was blamed on a ship's anchor off Egypt, and that may be the case again, according to Interoute, a European Internet Service Provider. The thing that always bugged me about that anchor story was that several lines were cut not just the one (remember the claims that it was a nefarious plot against... someone). But they only found one anchor. Not to get all JFK conspiracy on people, but I think the anchor was framed. A submarine mass wasting event seems the more reasonable culprit. Especially since rough weather was reported prior to and during the breaks. Correct me if I'm wrong, but rough weather is a potential cause of submarine flows. I'm willing to concede the possibility that maybe ONE cable got cut by the anchor, but several in one event is ludicrous. Here is an article from AJS on the topic of submarine slumps and their penchant for cable cutting (sorry, subscription required). Alternatively, you could check out Wikipedia's free entry on the topic. And here are some free videos of turbidity currents (well models of them). On the bright side, if the media puts it together properly this time it might even lead to another geological disaster flick. It just needs a catchy name like "Dante's Peak" or "The Core" or "10.5". How about "The Bouma Sequence". It's moody, and tantalizingly mysterious. Cited: Heezen, B.C., and Ewing, W.M., Turbidity currents and submarine slumps, and the 1929 Grand Banks [Newfoundland] Earthquake Am J Sci.1952; 250: 849-873 ## 15 December 2008 ### So true it hurts I saw this on PhD, it reminds me of all the labs I've run (I know, "nobody cares"). Most of the students today don't want to be in the class or even understand why they should have to learn the subject. And this one was up a while ago, but all the TA's still get a good chuckle out of it. For the record, my soul got crushed when I had a couple of construction engineers start arguing with me that it isn't important for them to learn about geologic hazards (such as landslides). In their lab, they decided rather than not build on, or mitigate, areas that are prone to slope failure, they would just make all the connections to the houses utilities "stretchy". They also came to the conclusion that it was better to sell a house quickly, and put in the contract terms that would absolve them of any liability, rather than build a house/structure properly... They didn't do well in that class. ## 14 December 2008 ### 100 things geo-meme ReBecca put me on to this one, and I agree that this one is far more interesting than the non-geo 100 things (the fixation on Paris, as Callan noted, is weird). Others in on the fun are: Geotripper (the originator), Callan, and Hypocentre (among others). I didn't limit myself to this year though (basically, I have seen my laptop this year and that is pretty much it, also I don't know if some of these happened in the past year see #95). The ones that happened in the past year are marked with an *, my comments are in (parenthesis) and italicized. 1. See an erupting volcano 2. See a glacier * (yep, see my Glacier National Park Photos series) 3. See an active geyser such as those in Yellowstone, New Zealand or the type locality of Iceland (years ago, which is sad because I am so close to Yellowstone right now) 4. Visit the Cretaceous/Tertiary (KT) Boundary. Possible locations include Gubbio, Italy, Stevns Klint, Denmark, the Red Deer River Valley near Drumheller, Alberta.* (I have lived with the KT for several years now. so I have visited it with every fiber of my being. Or you could just look at my profile picture) 5. Observe (from a safe distance) a river whose discharge is above bankful stage* (Poor York, it got Ouse-d. Sorry had to use the pun again. Here is the post) 6. Explore a limestone cave. Try Carlsbad Caverns in New Mexico, Lehman Caves in Great Basin National Park, or the caves of Kentucky or TAG (Tennessee, Alabama, and Georgia) (Lewis and Clark Caverns) 7. Tour an open pit mine, such as those in Butte, Montana, Bingham Canyon, Utah, Summitville, Colorado, Globe or Morenci, Arizona, or Chuquicamata, Chile.* (I wouldn't say a "tour" so much as visiting a platform above the Berkeley Pit, I might put the photos up at some point) 8. Explore a subsurface mine.* (Those Welsh were (are?) crazy. see my post on the mine, it is really impressive what people can do with hand tools, time, and a desire to not starve) 9. See an ophiolite, such as the ophiolite complex in Oman or the Troodos complex on the Island Cyprus (if on a budget, try the Coast Ranges or Klamath Mountains of California). 10. An anorthosite complex, such as those in Labrador, the Adirondacks, and Niger (there's some anorthosite in southern California too). 11. A slot canyon. Many of these amazing canyons are less than 3 feet wide and over 100 feet deep. They reside on the Colorado Plateau. Among the best are Antelope Canyon, Brimstone Canyon, Spooky Gulch and the Round Valley Draw. 12. Varves, whether you see the type section in Sweden or examples elsewhere. 13. An exfoliation dome, such as those in the Sierra Nevada. 14. A layered igneous intrusion, such as the Stillwater complex in Montana or the Skaergaard Complex in Eastern Greenland. 15. Coastlines along the leading and trailing edge of a tectonic plate (check out The Dynamic Earth - The Story of Plate Tectonics - an excellent website). 16. A ginkgo tree, which is the lone survivor of an ancient group of softwoods that covered much of the Northern Hemisphere in the Mesozoic. 17. Living and fossilized stromatolites* (Glacier National Park is a great place to see fossil stromatolites, while Shark Bay in Australia is the place to see living ones) 18. A field of glacial erratics 19. A caldera* (didn't see the geysers, but still visited Yellowstone) 20. A sand dune more than 200 feet high (Sand Dunes National Monument in CO, we swung by on a day-off during field camp, I don't know the official height, but they were plenty big). 21. A fjord 22. A recently formed fault scarp 23. A megabreccia 24. An actively accreting river delta 25. A natural bridge 26. A large sinkhole (one sunk the neighborhood burger joint where I was growing up, but that was years ago) 27. A glacial outwash plain 28. A sea stack 29. A house-sized glacial erratic 30. An underground lake or river 31. The continental divide * (I drove over it, it wasn't the purpose of the trip though) 32. Fluorescent and phosphorescent minerals * (common enough display at most museums, except Houston's museum has a computer simulation of it rather than the real thing. I was most confused by that) 33. Petrified trees* (had to find one for the science olympiad, so it wasn't found "in the wild") 34. Lava tubes 35. The Grand Canyon. All the way down. And back. 36. Meteor Crater, Arizona, also known as the Barringer Crater, to see an impact crater on a scale that is comprehensible (and it is quite BIG) 37. The Great Barrier Reef, northeastern Australia, to see the largest coral reef in the world. 38. The Bay of Fundy, New Brunswick and Nova Scotia, Canada, to see the highest tides in the world (up to 16m) 39. The Waterpocket Fold, Utah, to see well exposed folds on a massive scale. 40. The Banded Iron Formation, Michigan, to better appreciate the air you breathe. 41. The Snows of Kilimanjaro, Tanzania, 42. Lake Baikal, Siberia, to see the deepest lake in the world (1,620 m) with 20 percent of the Earth's fresh water. 43. Ayers Rock (known now by the Aboriginal name of Uluru), Australia. This inselberg of nearly vertical Precambrian strata is about 2.5 kilometers long and more than 350 meters high 44. Devil's Tower, northeastern Wyoming, to see a classic example of columnar jointing 45. The Alps. 46. Telescope Peak, in Death Valley National Park. From this spectacular summit you can look down onto the floor of Death Valley - 11,330 feet below. 47. The Li River, China, to see the fantastic tower karst that appears in much Chinese art 48. The Dalmation Coast of Croatia, to see the original Karst. 49. The Gorge of Bhagirathi, one of the sacred headwaters of the Ganges, in the Indian Himalayas, where the river flows from an ice tunnel beneath the Gangatori Glacier into a deep gorge. 50. The Goosenecks of the San Juan River, Utah, an impressive series of entrenched meanders. 51. (battle)shiprock, New Mexico, to see a large volcanic neck (sorry, had to modify this one, it isn't volcanic, but it was my favorite field area during field camp. I saw Red Dawn for the first time the other day, I was pleasantly surprised to see Battleship used as the backdrop for the scene against the three gunships) 52. Land's End, Cornwall, Great Britain, for fractured granites that have feldspar crystals bigger than your fist. 53. Tierra del Fuego, Chile and Argentina, to see the Straights of Magellan and the southernmost tip of South America. 54. Mount St. Helens, Washington, to see the results of recent explosive volcanism. 55. The Giant's Causeway and the Antrim Plateau, Northern Ireland, to see polygonally fractured basaltic flows. 56. The Great Rift Valley in Africa. 57. The Matterhorn, along the Swiss/Italian border, to see the classic "horn". 58. The Carolina Bays, along the Carolinian and Georgian coastal plain 59. The Mima Mounds near Olympia, Washington 60. Siccar Point, Berwickshire, Scotland, where James Hutton (the "father" of modern geology) observed the classic unconformity* (Indeed, but I want to go back when the landscape isn't as treacherously slippery. Read about it here.) 61. The moving rocks of Racetrack Playa in Death Valley 62. Yosemite Valley 63. Landscape Arch (or Delicate Arch) in Utah 64. The Burgess Shale in British Columbia - (only in hand sample and I guess that doesn't count) 65. The Channeled Scablands of central Washington 66. Bryce Canyon 67. Grand Prismatic Spring at Yellowstone 68. Monument Valley 69. The San Andreas fault 70. The dinosaur footprints in La Rioja, Spain 71. The volcanic landscapes of the Canary Islands 72. The Pyrennees Mountains 73. The Lime Caves at Karamea on the West Coast of New Zealand 74. Denali (an orogeny in progress) 75. A catastrophic mass wasting event* (does Quake Lake count, I didn't see it happen, but I see the results, There was also a mass wasting event up in a canyon not far from here that I got some video of, but that will wait until I figure out how to upload video). 76. The giant crossbeds visible at Zion National Park 77. The black sand beaches in Hawaii (or the green sand-olivine beaches) 78. Barton Springs in Texas 79. Hells Canyon in Idaho 80. The Black Canyon of the Gunnison in Colorado 81. The Tunguska Impact site in Siberia 82. Feel an earthquake with a magnitude greater than 5.0. 83. Find dinosaur footprints in situ*(shout out to MNHM) 84. Find a trilobite (or a dinosaur bone or any other fossil) 85. Find gold, however small the flake 86. Find a meteorite fragment 87. Experience a volcanic ashfall 88. Experience a sandstorm 89. See a tsunami 90. Witness a total solar eclipse ( I have in the past, I think it was 4th grade, but I wasn't near Nunnavat this year). 91. Witness a tornado firsthand. 92. Witness a meteor storm, a term used to describe a particularly intense (1000+ per minute) meteor shower * 93. View Saturn and its moons through a respectable telescope.* (I don't know about it being a respectable telescope, but I like it) 94. See the Aurora borealis, otherwise known as the northern lights. 95. View a great naked-eye comet, an opportunity which occurs only a few times per century 96. See a lunar eclipse* (And it was damn cold that night too, maybe I should post on that) 97. View a distant galaxy through a large telescope (back in my astronomy class we had access to the largest telescope in the state, Mars was also at its closest approach to Earth in quite a few years) 98. Experience a hurricane (we were passing through S. Carolina during one (except we stayed as far away as we could) it was like driving a submarine) 99. See noctilucent clouds 100. See the green flash So this year I am a lowly 15/100 (maybe this is why thesis progress is seemingly slow). Overall I am 39.5/100 (boo...) ## 09 December 2008 ### December Meme ... sorry So there is another meme running around the internets. I picked it up from Laelaps. But there seems to be others doing this as well like Silver Fox, and Drug Monkey. The rules are simple: Take the first sentence from the first post you made each month and string them together. It starts off well enough, but I notice a trend towards the end of the paragraph where I am constantly apologizing (so much so it has become a generic start to a post) ... sorry ;-) Well, here I am.Well, I couldn't find the article online.The Planet debate seems to be ongoing. I wandered onto this website.NASA is planning on visiting the sun! At least as far as those pesky edu-macated types are concerned. While people hopefully enjoyed a trip up to Nunavut, I figured I'd get back to talking about something tangential to my thesis. Sorry about being away from the internets (it isn't for a lack of trying). Sorry that I have been away for a bit (the real world ganged up on me). Hey! Sorry for the hiatus. ### Das Rad I found this while perusing the inter-tubes of You-webs. I think I may have seen this somewhere before, but I don't remember where. So if this is a double-post, I apologize. ## 08 December 2008 ### Nationalizing Science Standards I stumbled upon this little movement while perusing the internet. I also included a link in the sidebar somewhere (above "Index" and below "Qui Teneo Scaccarium"). Add your voice to the throng if you want, but I think I will sit this one out. I'm not entirely sold on the idea that this is a good thing. I think the people behind it have good intentions, but I can easily see something like this being abused. It would also have the effect of further muddying the waters of public opinion on science. Additionally, this could be viewed as a potential violation to Amendment 10 (depending on how literal you want to read the Constitution). But constitutionality aside, I don't think we should advocate politicizing science on a national level. I recognize that politicizing science will inevitably happen, but I don't think we should advocate it. I agree with the assertion that scientists should set the standards, but who decides on which scientists get to set the standards? I can think of few things more destructive to science education than another Shrub, or worse a Palin-esque figure, getting to appoint "scientists" to dictate scientific standards. As it stands, it is an uphill struggle for YECs (and other pseudoscientists) to peddle their nonsense. Right now they face a state by state, county by county battle. I recognize it would be a good time saver if scientists only had to fight the stupidity on the national level, rather than constantly repeating the same battle on smaller local scales, but this would also make it easier for the ignorant horde to sneak their garbage into the classroom. Especially under the nightmarish hypothetical situation from above. Seriously, think about the damage a science-hating fundie in office could do with this legislation. I also think implementing this legislation will have the adverse affect of confusing the public as to what science is. Intermingling politics and science will only further complicate the problem already faced by scientists arguing against lunacy. That is to say, it will give people the false impression that which scientific theories you ascribe to are a personal choice (just like your opinion on tax code, political persuasion, and civil rights). This is what the "equal-time" advocates want. However, science and reality don't take your personal opinion into consideration. No matter how much I choose to believe I will fly when I jump off a cliff, gravity will, again, prove to be a fatal law. And no matter how much individuals claim that the Earth is 6000 years old, all the evidence still says ~4.6 Ga. In the end, I view this as needless legislation. I would rather politicians spend time on problems that desperately need to be solved (like national health care and social security). I don't want to listen to politicians arguing about something that they are woefully uninformed about every four years. Politicians already are very good, too good really, at not getting anything meaningful done. Let's not give them any further opportunities for distraction. ## 05 December 2008 ### The fatal law of Gravity I just got out of a marathon meeting with my advisor, and I am still a bit out of it (seems like some of the work I spent a couple of months on is now no longer required). On the one hand I can see this making my Thesis clearer and more concise. On the other, I just wasted a bunch of time I could have better spent graduating. So, I am in the mood to "dump" on those lovable wackaloons who have been giving Eric and Brian grief. I don't think I will re-cover the same grounds they have. Especially since they have done a far better job than I think I would be capable of. Instead I am going to cover the problems that expanding earth has with GRAVITY. I have seen two sides that expanding earthers like to use to argue about gravity. One camp argues that the mass of the earth is constant, and the earth is just getting less dense. Another camp argues that the mass of the earth is growing, and holds the earth's density constant. Surely, we can resolve such a fundamental difference by just saying "Hey, the Earth is not expanding, look at all the data". But that would defeat the purpose of this post and my childish poking fun at the stupid.First we have the Constant Mass Advocates (CMA). They argue, wrongly, that the earth can grow and those of us living on its surface will feel a constant pull of gravity IF the earth weren't gaining mass. On the surface of it that seems a reasonable assertion (if you ignore the whole "earth is growing" thing, and the violation of conservation of energy, etc.). After all we learned in our High School Physics class that: F=ma Eq.1 Where F = force m = mass of an object a = acceleration (in this case gravity: 9.8 m/s2) This, however, is just a shorthand version of calculating the force on an object due to gravity. You see, this equation needs some tinkering if we are going to calculate the force due to gravity on Mars, or the Moon, or anywhere other than Earth (note: not all places on Earth have the same gravity either, it can vary due to elevation, local rock densities, etc.). This leads us to the Universal Gravity Equation: F = GMm/D² Eq.2 Where F = force G = the Gravitational Constant (6.67300 × 10-11 m3 kg-1 s-2) M = Mass of object 1 (usually the larger object, in this case the Earth: 5.9742 × 1024 kg) m = mass of object 2 (usually the smaller object, in this case it is us) D = Distance between the centers of mass (in this case it can be approximated as the radius of the Earth (note: this is why elevation has an effect on gravitational pull) Earth's radius: 6378.1 km). By setting Eq.1 equal to Eq.2, you can see how scientists can calculate what "a" is equal to: a = GM/D2 Eq.3 Since we don't really care about how much force I am exerting on the planet (and it on me) we can just focus on Eq.3 for this discussion. First let's prove to ourselves that the "a" we learned in High School jives with the Universal Gravity Equation. a = (6.67300 × 10-11 m3 kg-1 s-2 × 5.9742 × 1024 kg)/(6378.1 km)2 or when I plug it into my calculator and cancel out the appropriate units (remember to convert km to meters in the denominator) a = 9.7998.... m/s2 which can be approximated to 9.8 m/s2 Thanks Mr. Schalhammer! See Physics can work (and prove useful). Now, you may be sitting there asking yourself "Why the hell does this matter to expanding earth? You just showed that the Earth's gravity is affected by its mass, which is the point of the CMA". Why yes I did anonymous questioning voice. However, I also showed that the RADIUS of the Earth is far more significant to the gravity we feel on the planet. The Distance to the center of the Earth will affect the gravity we feel on the surface far more rapidly than just keeping mass a constant. The radius of the Earth affects gravity exponentially (mathematically speaking the square) while the mass of the Earth only affects gravity linearly. So take home message to the CMA's, keeping the earth's mass constant and increasing the radius will actually DECREASE the gravity we feel on the planet. This is completely antithetical to what we actually observe (you know by practicing science). And furthermore, it defies claims made by other expanding earthers that gravity was less in the past allowing for giant bugs and what not (interestingly enough, an insects size seems to be limited by how efficiently oxygen can cross certain membranes, higher oxygen concentrations mean "bugs" can get bigger. Here is something on that) Through this calculation, we see that the gravity at the surface of the earth would have been GREATER if the earth was smaller. Let's go to the graph:This is a nice visual way of saying IF the Earth was smaller (assuming constant mass), we would experience a greater pull of gravity. Once again, explaining this away isn't a problem for Plate Tectonics, because with our firm grip on reality, we don't expect the Earth to be changing size. Now let's move on to the more confounding stupid I dub the Gaining Mass Advocates (GMA). They argue that the earth is growing AND it is becoming more and more massive. They use claims like "Gravity was less when the dinosaurs were around, how else did they get so big". The GMA also argue that the earth is actually gaining mass and therefore gravity is increasing as we move forward in time. But let's see how that works out with the math. We have already seen that holding the earth's mass constant doesn't jive with reality, maybe the trick is to increase the mass of the earth (keep in mind this is still invoking many things that plate tectonics has no need for, meaning this violates parsimony as well). First some disclaimers. This VIOLATES the conservation of matter. We are venturing into a realm of Newtonian Physics that was never meant to be (like the Octoparrot). Second, they stubbornly refuse to mention how much mass is being added, so I am assuming it to be a given volume of mantle (density of mantle: 3.4-5.6 g/cm3, so let's just call it 4.5 g/cm3). Thirdly, I can't find where they say HOW MUCH the earth has grown (because, in point of fact, it hasn't). So I will assume that they only want the earth to increase enough to compensate for the oceans, which comprise ~75% of the Earth surface (361 km2). So back to some equations (ugh.... math). The surface area of a sphere can be expressed as: $A = 4 \pi r^2 \,$ Eq.4 A is surface area r is the radius The volume of a sphere can be expressed as: $V = \frac{4}{3}\pi r^3.$ Eq.5 V is volume r is the radius and a neat little relationship about Surface Area, Volume, and Diameter emerges. Essentially, when you shrink a sphere to 1/2 it's original diameter, the new smaller sphere has 1/4 the original surface area and 1/8 the volume of the original sphere. To put this another way. By "shrinking" the earth to the point where it has no oceans (to 1/4 of its surface area), you have reduced its diameter by 1/2 and reduced it's volume by 7/8. This would mean the GMA would see a earth with a radius of 3189.05 km. The GMA volume would be 1.35 x 1014 km3. Earth's GMA mass would be: 1.6948 x 1024 kg. So now let's plug this in to Eq.3 and see what we get for the gravity (ag) of a GMA earth. ag = 11.12029 m/s2 Which is still an increase in gravity from what we see today. Meaning even if you add mass to the planet to counteract the effect of moving away from the center of mass, gravity still is far more sensitive to changes in proximity to the center of mass than it is to total mass. The up-shot is that the "dinosaurs were big because there was less gravity" crowd are wrong. For curiosity's sake the gravity (ac) of a CMA earth of the same size would be ac =39.19929 m/s2 And just not to let those of us who like reality off the hook, I wonder what those crazy plate tectonic advocates (Scientists) think gravity was like during the Permian (which was when the oceanic crust we have today started to be generated): a = 9.8 m/s2 As this clearly demonstrates, the RADIUS of the earth is far more important that the MASS of the earth in terms of what things living on the surface of the planet would feel in terms of gravity. As I have said many times throughout this post, this isn't a problem for the reality based community. Because plate tectonics does NOT invoke the earth changes its size (or ways of adding mass out of nothing, or where the energy is coming from to move particles further away from the pull of gravity, or other magics that expanding earthers like). All the arguments based upon gravity being "lesser" in the past because the "earth was smaller" show not only a misunderstanding of geology, but a FAR greater misunderstanding of gravity. Curse you rational uniformitarianism, you win this round! But they'll be back, and in greater numbers... ### Glacier Photos IV: The Blossom Menace So I figured I would do something a bit different with part IV of my Glacier National Park series. Instead of showing the geology (well, there is still some geology) I figured I would show some of the pictures of the plants and trees that I saw in GNP. Parts I, II, and III are here. Up first are several pictures of a flower that seemed to be everywhere in the park. I thought that it looked neat, and my parents wanted a decent shot of one, so I took a couple of photos. I have no idea what the flower is actually called [edit: According to Callan, it is Beargrass. Thanks]. One of these days I hope to actually learn some botany so I can point out flowers and such while hiking, but I have different priorities currently. Next are some pictures of trees in the park. The first one is one I used previously during the tree meme. As I mentioned in the tree post, I think this is the mountain pine beetle's work. It is really unfortunate how widespread this problem is becoming.There is also this shot of a drunken forest. I hadn't seen one in real life before, so this was kinda neat to see (though I've seen drunker forests in photos). Essentially, a drunken forest is the product of mass movement. As the soil slides down slope, the vegetation moves with it. Sometimes this loosens up the soil enough that the trees each take on their own tilt (thus providing the "drunken" appearance). Here is another example of trees helping to identify mass movements. This particular tree got partially knocked over (either from a rock running into it, or part of its slope giving way, I can't say which). However, it survived the ordeal and the new growth at the top is continuing its relentless climb to Mr. Sun. There aren't only pine trees though. Below are some Aspen that got in the way of my shot. We were driving through an area where they were doing road maintenance, so we couldn't stop. I just wanted a picture looking down the valley, but the Aspen came out remarkably in focus considering we were moving.Finally, some tool marks that were caused by a passing glacier (see, some geology). The reason I threw it in with the plant post is the grasses that are growing in the scours. It shows the resiliency of plants in the escalating rock and vegetation conflict (don't underestimate our chlorophyllic opponents). Thanks for reading. I think I might have enough for one more post in this series (Part V: Ride the Magic Bus), but it might have to wait until the semester is done (or until I get writer's block on the damnable tome). ## 04 December 2008 ### Glacier Photos III: The Search for Rock Sorry for the hiatus. It is getting to the end of the semester here. As a result of this, I find myself crunched for time to actually complete things that I theoretically should prioritize, like attempting to graduate. This has also resulted in me not realizing I should go to sleep until I see the sun rising over the, now snow-covered, Bridger and Gallatin Ranges. So here it is, Part III in my Glacier Photo Series. Parts I and II are here (respectively) and part IV should be up before too long.Above is a shot of Glacier National Park's namesake, a.... glacier.... Well there had to be at least one in this set. Unfortunately, I can't remember the name of this particular glacier. [Edit: I have been reminded it was Jackson Glacier. Thank you ReBecca!]Nothing much to the above and below photos. I just thought that they were nice and scenic. I liked the strata on the above cliff. As for the shot below, I just hiked a little off the beaten trail in order to get this shot.The next shot was taken at one of the most scenic places in the park, at least that is what the sign told us. I took several shots, but for some reason this is the only one that looks really good. Unfortunately, it is the only one where a kid wandered into my shot [shake fist at kid] [/shake fist at kid] And finally (for this set at least) what would a trip to a national park be without an encounter with a bear? Did you see the bear? Neither did I. While I was taking this photo, several other groups of visitors pulled over on the side of the road. I thought, "how odd, I haven't seen another geologist all day, but the more the merrier". After I finished with my photo, one of the newcomers asked if they could see the photo I got of the bear. I was very confused, and I had to explain that I was actually taking pictures of the rocks, and I didn't realize a bear was anywhere near my shot. That is all for this set, stay tuned for Glacier Photos IV: The Blossom Menace ## 27 November 2008 ### Happy Thanksgiving Every time we get round to this time of year, I usually am reminded about Ben Franklin wishing the turkey was the national bird rather than the eagle. I rummaged around the internet and found a source of this story. You can read the whole thing here, but I clipped all of the excerpt from the letter they were citing. "For my own part I wish the Bald Eagle had not been chosen the Representative of our Country. He is a Bird of bad moral Character. He does not get his Living honestly. You may have seen him perched on some dead Tree near the River, where, too lazy to fish for himself, he watches the Labour of the Fishing Hawk; and when that diligent Bird has at length taken a Fish, and is bearing it to his Nest for the Support of his Mate and young Ones, the Bald Eagle pursues him and takes it from him. "With all this Injustice, he is never in good Case but like those among Men who live by Sharping & Robbing he is generally poor and often very lousy. Besides he is a rank Coward: The little King Bird not bigger than a Sparrow attacks him boldly and drives him out of the District. He is therefore by no means a proper Emblem for the brave and honest Cincinnati of America who have driven all the King birds from our Country . . . "I am on this account not displeased that the Figure is not known as a Bald Eagle, but looks more like a Turkey. For the Truth the Turkey is in Comparison a much more respectable Bird, and withal a true original Native of America . . . He is besides, though a little vain & silly, a Bird of Courage, and would not hesitate to attack a Grenadier of the British Guards who should presume to invade his Farm Yard with a red Coat on." Frankly, I agree with him. I think the eagle is overused and unoriginal as a national bird. Plus, I like the symbolism of eating the icon of our country on a national holiday. Also, after meeting both wild turkeys and bald eagles in the field, I think turkeys are WAY more entertaining. ## 23 November 2008 ### Unskilled and Unaware Yet another blog post that is only tangentially pertinent to geology. My advisor recently assigned this article [subscription to Journal of personality and social psychology required] to the new crop of grad students (it subsequently made the rounds to the rest of us, the title itself is enough to evoke interest). "Unskilled and Unaware of it: How Difficulties in Recognizing One's Own Incompetence Lead to Inflated Self-Assessments. " I bring this up because appropriate examples have been cropping up around the internets recently. P. Z. Myers and Eric have both brought up the same iconography of "debating" these individuals. Both of them have come to the same conclusion that these individuals play an "intellectual" version of Whack-A-Mole. (Personally, I prefer to compare it to Playing Chess with Pigeons, but that is just me). Essentially, someone says something so off-kilter that we can't resist responding to them and correcting their world-view. This leads the original nut to think that they touched on an actual weak point in some theory (why else would a scientist get so angry). They then continue to reiterate the same point over and over, drawing more scientists into the growing vortex. Each time a scientist beats a point down into drivel, the nut reiterates the point from the beginning claiming that the scientist "dodged" the issue rather than addressing it. Eventually, the scientists involved get fed up with the ignorant horde and depart the conversation. The original idiot then claims victory. [note: the instigator is not NECESSARILY an idiot, they may have garnered marginal success in some unrelated field (thus they have a false sense of confidence). Just look at celebrities opposition to vaccinations]. Though, in light of this article, the individual only claims victory as a product of their own incompetence. The article makes the point that it is an individual's inability to understand a topic (their ignorance) that impedes their ability to recognize their own ignorance. Unfortunately, the only way to cure these individuals of their inability to recognize their ignorance, is to teach them about the topic (which they won't submit to, because they can't recognize their own ignorance). In other words, it is a Catch-22. You can't teach them, because they think they know more than they do (and are unwilling to hear out a scientist, who is part of the "conspiracy"). And you can't get them to realize they are ignorant, because they ARE ignorant (so they don't realize they are ignorant). Kinda gets the head all spinning just thinking about how to break this infernal web of ignorance. Before they can be cured of "the stupid", they need to be made aware that they are ignorant, which requires that they aren't full of "the stupid" to begin with....It ends on an optimistic note though (kinda), this study did successfully educate several individuals about their incompetence. In subsequent runs, they demonstrated that they were no longer incompetent, and they could realistically evaluate their performance. Unfortunately, the problem still remains, the incompetent don't realize they are incompetent (and continue to crap all over the chess board). Cited: Kruger, J. and Dunning, D, 1999, Unskilled and Unaware of It: How Difficulties in Recognizing One's Own Incompetence Lead to Inflated Self-Assessments. The Journal of Personality and Social Psychology, Vol. 77, no. 6, p. 1121-1134. ## 21 November 2008 ### Glacier Photos II: The Snowfield Strikes Back I realize a disproportionate amount of my posts recently have been a part of memes. So I am going to continue uploading some of my Glacier National Park Photos. Here is Part I, Part III, and Part IV. This evening, when I needed a break from thesis-ing, I took a gander out the window in the grad office. It had started to snow. This looks like it might be the first snowfall that will stick around more than a couple of minutes in Bozeman this year (I kid you not, it was 72 F just 2 days ago). So this comes as somewhat good news. Good if you like winter activities, and don't like summer droughts/fires. Bad if you don't like driving on the roads where they usually "forget" to put down gravel. Hell, walking around campus can be a tricky affair as they tend to zamboni the walkways when they should be plowing them. Why this brief aside from Glacier? Because last winter looked like it would lead to a dry summer as well. What ended up happening, as I mentioned in my last post, is Montana got a lot of late spring snow. This resulted in Going to the Sun Highway to remain closed until the day we got there. Here is a shot of us driving on the just-opened road up to Logan Pass (uh... car side mirror for scale). Keep in mind this was July 3rd. The rest area at the top of Logan pass provided some spectacular views as well. This is an arete that dominated the view from the rest stop. Well, most people were absorbed by how high the snow was. But I was suitably impressed by the rocks. This little guy was running around and making quite the fuss at all the big dumb people wandering around what had been (until a few hours prior) his family's home. I dubbed him Rodney.This is a nice shot of a small snowfield just below the parking area. It overlooks a beautiful U-shaped valley which looks a little something like this: I particularly like the misfit stream running down the center of the valley. I think that will be all for this set of photos. Knowing my somewhat sporadic posting habits, it shouldn't be too long until I realize I have nothing to post except for Glacier Photos III the Search for Rock. ### I'm a Mechanic? Also saw this one floating around the blogosphere. Over at Adventures in Ethics and Science and over on Silver Fox's Looking For Detachment. The site is here. The gist is the same as astrology. Arbitrary data input being interpreted as meaningful information. And like astrology, it doesn't work (except in the quick "I'm bored, hey wonder what this site does" sort of way). According to this site address, I am a mechanic (the description doesn't really describe me at all, I think): The Mechanic The independent and problem-solving type. They are especially attuned to the demands of the moment are masters of responding to challenges that arise spontaneously. They generally prefer to think things out for themselves and often avoid inter-personal conflicts. The Mechanics enjoy working together with other independent and highly skilled people and often like seek fun and action both in their work and personal life. They enjoy adventure and risk such as in driving race cars or working as policemen and firefighters. I do work well on my own, but that is because I tend to be easily distracted. And anyone who actually has met me knows me in a highspeed pursuit is not likely. Further evidence as to the whole fullishness (keep in mind I still found it good fun). I have another blog (sorry my eyes only) that I use when I want to play around with html coding (make sure everything looks right). And when I put in that address, I get....the duty fullfillers. I find this hilarious as the content is mostly identical, so I figure it must take the url and throw that in a generator. Anyway, this doesn't describe me either, but it is a tad closer to the mark: The Duty Fulfillers The responsible and hardworking type. They are especially attuned to the details of life and are careful about getting the facts right. Conservative by nature they are often reluctant to take any risks whatsoever. The Duty Fulfillers are happy to be let alone and to be able to work int heir [sic] own pace. They know what they have to do and how to do it. Though this is fairly antithetical to what I got previously. I am a risk-taker who doesn't like to take risks. In summary, good fun for when you are bored. But, like I have to tell anyone actually reading this blog, don't expect life-changing revalations... Unless you try and typealyze the typealyzer site. I can apparently read Thai fluently, and that changed my life. ------------------------------------------------------------------------------------------------- Addendum: When I put in my profile webpage I also get a completely different result. The Guardians The organizing and efficient type. They are especially attuned to setting goals and managing available resources to get the job done. Once they´ve made up their mind on something, it can be quite difficult to convince otherwise. They listen to hard facts and can have a hard time accepting new or innovative ways of doing things. The Guardians are often happy working in highly structured work environments where everyone knows the rules of the job. They respect authority and are loyal team players. So, three personality types. One person. I am totally tri-polar. Unless you start analyzing individual posts you put up, then I run the entire gamut of personality types (which I think is far more representative of reality). ## 20 November 2008 ### I think I could survive longer than that Mainly because I don't think being chained to a bunk bed with a pile of Cretaceous bones is much of a threat. Saw this over on Dinochick's Blog. And since I am a sucker for stupid online quizzes, I figured I'd give it a go. I could survive for 1 minute, 28 seconds chained to a bunk bed with a velociraptor Created by Bunk Beds.net ## 14 November 2008 ### Orbs and rods...pseudoscience meme Eric has raised the issue of perhaps a pseudoscience meme. And I agree, there is very little that is more fun than poking fun at modern mythology represented under the guise of science. One of the less well known "branches" of "pseudology" is the "study" of "orbs" and "rods" (wow, that is a lot of "air-quotes", and they shall pervade this post). "Orbs" and "rods" are supposedly ethereal beings who live their entire lives in mid-air (though some argue they are ghosts, aliens, or undiscovered animals whose body's decompose instantaneously after they die, or live in a dimension that partially intersects our own). On a side note:It is always fun to note how readily "cryptozoologists", or as they are nowadays called "fortean zoologists", invoke the existence of the giant squid as a success for their "science". Of course they ignore the fact that we came across dead remains of giant squids from time to time (that is how we knew they existed, we had actual evidence). Here's what "Rods" and "Orbs" are, and here is where any wackaloon reading this should get out a piece of paper and take note, they are a trick of poor photography (optics messing up, or something slightly out of focus, or something passing through the frame too fast for the camera to pick it up, or something moving through the frame out of focus... etc.). There are numerous ways to make a "rod" or an "orb", and none of them involve a papa rod or a mama orb (unless you work for the center of fortean zoology... please don't click on the link unless you have properly braced yourself for the burning stupid). Some examples: At first glance this is just a really crappy picture of a doorway. I mean it is dark, not much going on. Compositionally it is even poorly balanced. Truly an example of epic fail. Upon closer inspection.... you will notice TWO (count em, two) balls of dust or drops of moisture on the lens. This is what people with little imagination (who want to be experts in photographic fakery) call "orbs". The third circle (which looks like a reflection of light, or something moving across the frame [since it is dark the exposure time was probably quite long, so an insect catching the light as it flew past is quite withing the realm of possibility]). But this is what is "interpreted" to be a "rod". Here is a GIANT PIZZA (seriously it was ~30 in). But if you notice, behind the surprised looking fellow known as "Chad" (who truly deserves his own branch of pseudoscience, except he actually exists)..... it's another "ORB"! Now, I took this photo, and there was no "orb" on the curtain when I took it. However, I failed to clean my lens before the photo was taken (oops on my part). But there it is, an "orb"). What's more, this "rods" nonsense (seriously bad nonsense, at least Nessie is an entertaining product of the imagination) has made it onto news broadcasts. It is even complete with the full-on nonsensical cut-aways (Unit 13? really?). And here is someone who rather sensibly showed how this phenomenon is easily reproducible without invoking imaginary flying "jelly-worms". ## 12 November 2008 ### Haikus and the Permian Extinction Greatest test of life the Permian extinction a river preserves This is in reference to the R.M.H. Smith article that came out a few years ago (available here via LANL). Essentially, he looked at the fluvial deposits in the Karoo foreland basin that span the Permo-Triassic boundary. He noted that the fluvial architecture changes at approximately the same location as the boundary. He goes into some potential causes for the changing styles across the boundary, including both climatic and tectonic controls. However, he stops short (thankfully) of claiming that rivers changing caused the extinction. Instead, he takes a more agnostic position on the cause of the extinction, but points out that changing fluvial styles can contribute to an extinction/extirpation event. (note: I think extirpation is the right word, animals are just forced to leave a habitat because they aren't suited to it, but I could be wrong on this). So the dicynodonts leaving the Karoo basin may have been prompted by the changing fluvial styles , but they may not have become extinct at that point (which allowed the invasion of the Lystrosaurs into the area). Or they may have gone extinct in the Karoo basin, but we don't have sufficient stratigraphic resolution to confirm this. Either way, I thought it was a very good article. Others involved in the meme: Suvrat (the instigator), MJC rocks, Lockwood, and Kim. Citations: Picture from: http://jan.ucc.nau.edu/~rcb7/globehighres.html, retrieved on 11-12-2008 Smith, R. M. H., 1995 Changing Fluvial Environments across the Permian-Triassic boundary in the Karoo Basin, South Africa and possible causes of tetrapod extinctions. Palaeogeography, Palaeoclimatology, and Palaeoecology. v. 117 no 1-2 p. 81-104. ## 10 November 2008 ### Journal Citations and Flux Capacitor While perusing the literature (Okay, checking out my vast, and growing, backlog of google reader...) I stumbled upon this study (sorry, subscription required). First off, this isn't strictly speaking (or loosely speaking...really) a geology paper. It does have some interesting implications for people publishing research though. The main thrust of this article is: As researchers increasingly use electronic search engines (like georef or geoscienceworld) the body of cited literature becomes far more condensed. Essentially, people aren't citing papers published pre ~ 1990 (or when the earliest journal is available online). Apparently, we are too entranced by Christopher Lloyd driving a flying, time-traveling train to look further back than that. The author views this as neither a good thing or a bad thing. It is just a thing. Though, he seems to advocate that researchers should stay cautious. Mainly, because this gives the scientific community appearance of reaching consensus much quicker than it used to (whether it has or not). This can, potentially, stifle new research into competing hypotheses. By enabling scientists to quickly reach and converge with prevailing opinion, electronic journals hasten scientific consensus. But haste may cost more than the subscription to an online archive: Findings and ideas that do not become consensus quickly will be forgotten quickly. He also states that it is removing our respective links to the past. Our research is becoming much more obsessed with what is going on now, and we are tending to (unintentionally) ignore the old mainstays of our respective literature. As deeper backfiles became available, more recent articles were referenced; as more articles became available, fewer were cited and citations became more concentrated within fewer articles. These changes likely mean that the shift from browsing in print to searching online facilitates avoidance of older and less relevant literature. There is also a bit of musing about if this might have any link to the style of research being done (especially by grad students). Modern graduate education parallels this shift in publication - shorter in years [I contest this point:ITV] more specialized in scope, culminating less frequently in a true dissertation than an album of articles. He comments on old monographs (notably Origin of Species and the Principia) being the way research used to be published. Now (as I am sure people have stumbled upon) some of the most tantalizing titles in a search result are only abstracts from conferences. To me, this seems to make a tentative connection that the technological ADD that seems to persist in our culture today is infecting science to some extent. He doesn't discredit the good these online search engines provide (most notably, the open access to the literature for the general public, regardless of location or time of day). Though he does have an interesting tid-bit about online article availability Provision of one additional year of issues online for free associates with 14% fewer distinct articles cited. It is one thing to find a paper quickly online, it is another thing to look through the stacks (in search of your elusive quarry) and happily stumble upon something completely unexpected. This can lead researchers to further focus upon their (evertightening) area of expertise, and inadvertently ignore potentially complementary research, thus narrowing the scope of individual scientific endeavors. Cited: Evans, James A., Electronic Publication and the Narrowing of Science and Scholarship. 18 July 2008 Science v. 321 no. 5887 p. 395-399 ## 09 November 2008 ### Eppur si muove! I watched a documentary on Galileo the other night (The Nova one which is also based partially on Galileo's Daughter by Dava Sobel). And they were talking about Galileo's book "Dialogue Concerning the Two Chief World Systems". This got me to thinking... "I wonder if I can find a copy of that". It seems like it was written to be the first book we would now consider to be "popular science". It was written in the vernacular, instead of latin. It was written as a conversation being held between two people (the more famous seems to be Simplicius, who advocated the church's view). And it was quickly banned, like books worth reading tend to be. So I set to my task, thinking it would probably have been published on its own as a part of a classics collection or something like that (it is one of the five books in "On the Shoulders of Giants", but I don't like cumbersome books like that). So I did a quick Google search (and I mean quick). The first hit was the full text online!!! Now all I have to do is print it out (I don't like to read from a computer screen). And the best part is it is all FREE. Go ahead, read about how Salviati intellectually mops the floor with Simplicius. ------------------------------------------------------------------------------------------------- On a side note: Eric is valiantly playing the part of Salviati in his Expanding Earth post comments section. He posted this in Feb. and the ignorant horde have been assaulting him ever since (most recent was a few weeks ago). ## 08 November 2008 ### Brian Switek Meme I have heard that Laelaps' own Brian Switek is up for a $10,000 scholarship. I found a letter over at The Dispersal of Darwin, and on Dinochick Blogs. Here is a reprint of it. From Brian: I'm as surprised as you are. For my work on Laelaps (http://scienceblogs.com/laelaps/) I have made it as one of 20 finalists in the 2008 Blogging Scholarship. That means I have a shot at winning$10,000 to help finish my undergrad degree and pay off my student loans! The next part of the contest is based on votes rather than content, however, and I need your help.The first favor I ask is that you head on over to the voting page and cast your vote for me; http://www.collegescholarships.org/blog/2008/11/06/vote-for- the-winner-of-the-2008-blogging-scholarship/ The second, if you would be so kind, would be to tell your friends or post a link on your blog (if you have one) asking others to do the same! There's no way I can win if I don't get enough votes, and I'm going to need a lot of help to beat the political and sports blogs that are already pulling ahead. I need all the help I can get, and anything you can do to help spread the word would get me a little bit closer to winning. Let's get that man some money. ### Glacier Photos While rummaging through my hard-drive, I came across a folder of photos from my trip to Glacier National Park from over the summer. I had meant to upload them to Facebook, but I kept receiving errors when I would try and upload them. I decided I would try it later and promptly forgot about it. So here are some of the more scenic views I captured while driving on Going to the Sun Highway. Parts II, III, and IV are now up. When we arrived, it was the first day that Going to the Sun was open (July 3rd on this particular year. We had plenty of late snow. Bozeman was getting snow in June). So some of the distance shots will appear hazy, the cloud cover did break later in the day though. Above is a picture of a cirque, which is a result of the parks namesake. I took several pictures of them as we went along. What else would you expect from a geology student in GNP? Pictures of their "loved ones"? I think we all know the answer to that. In fact, I took this picture of my parents in front of the same cirque, my parents are out of focus (but the cirque is crystal clear). In my defense, I couldn't tell on the little display screen that they were out of focus. I remedied it later by taking pictures of them in focus. My dad likes the engineering aspect of geology. So this is a picture for him. It is impressive how we have tunneled through a rock face while clinging to its narrow precipice (which we also carved out for ourselves). Here is the Weeping Wall. Nothing much for me to comment on. I thought it was pretty. This is a cirque I took a shot of while we were nearing Logan Pass. Below is the same cirque, but now we are looking down it instead of up at it. I figured this would be something interesting to show students at some point. Mainly because every time a cirque is diagrammed in a book, it is from (more or less) the same perspective. When, in lab, we asked students to identify a cirque from a different perspective, everyone seemed to have a hard time with it. So here is an example that can hopefully get students to start thinking about geology as three dimensional (well four dimensional when you throw stratigraphy at them) That is it for this batch. More will probably pop up later, when I am trying to think of something to write about. Continued in Part II ## 07 November 2008 ### Animal meme Most of my field animals are cattle. I come across the occasional snake, but after the quick startle, I usually forget to snap a photo. I try every once in a while to take a picture of a passing hawk, but they don't turn out so nice. Anyway without further ado, are several of my laughably poor attempts to take photos of the wildlife I encounter. I named this fellow "Dwight" (no particular reason). I was hiking across a field, and noticed several burrows. At the end of the day I got a shot of this prarie dog. The noble prarie dog, how you amused me...briefly. I also came across some of these happy-go-lucky individuals (though this particular one was on my porch). Orb spiders are fairly common, and about as poisonous as a bee. Though, the first one I saw I couldn't readily identify it as anything other than a spider the size of a quarter. Orbie (as I called this one) kept on collecting moths and what-not until the first snow came. Then, come spring, a good wind carried the next generation away from my patio (good thing too, I don't know if I had enough space for multiple Orb spiders to live happily). And finally, the most dread animal ever to come across in the wild.... my sister's West Highland White (that's right, WHITE) Terror... I mean Terrier. She was astoundingly cooperative in this photo. Usually she blinks when the photo is taken (I kid you not, I have one of her squinting) or she turns around and refuses to be photographed. ## 04 November 2008 ### As promised Obama's speech basically summed it all up. Tonight was a great victory, but it wasn't the end goal. There are still all the problems he mentioned in his speech that need to be overcome. Honestly, he is probably the guy to do it to. I found a hypothetical survey from SurveyUSA back in Nov. 2006, imediately after the democrats swept in and took the legislature. As chance would have it, they asked how people would vote if the next presidential election came down to Obama and McCain (not so much chance as they were hitting all potential combinations they could think of). Here is what the electoral map would have looked like according to that poll: (I'll put up this election map as soon as several states are declared... C'mon MT go blue...) (MT went pink... dammit....) The only things Obama wins (in the hypothetical 2006 race) are Illinois, Hawaii, and DC. Everything else, squarely in McCain's pocket. Even though there were extenuating circumstances (you know, trifling problems with the economy and what not). Obama managed to change the opinion of a majority of the country. He even won the election BEFORE the temptress of Florida was declared for either side. Hopefully, Obama will make good on his promises and start to turn this country around. I was also impressed with McCain's concession speech. It was perhaps the most relaxed I have seen him give a speech this entire campaign. He also didn't have the look of manic desperation in his eyes (which added a quality to him). Maybe now that he doesn't have to pander the fundigelicals, or the neo-cons, he can rediscover his own principles. On a side note, one other good thing to come from this election. Apparently Alaska lost its village idiot. Well we finally cornered her (after chasing her all across the country, with some narrow misses where Gibson and Couric let her slip away while they were stunned at her answers). She wound up in Arizona. Don't worry Alaska, she is safely on her way back to govern your state... Good Luck with that. ### YES!!!!! That is all I have to say CA OR WA all closed and went for Obama!!! Meaning he broke 270 and is the next president of the USA (44th or 43rd depending on how you count Grover Cleveland) Hot Damn!!! insulting posts (at the Reps expense) to pop up later. ### Vote! Just got back from the polls. There was quite the decent line. Lots of people were showing up, but the polling staff was handling it like pros. Even though I couldn't find a place to park on the same block as the elementary school I was still in only in line for about 35-40 mins. Of course, there were ample voting cubes and pencils and paper ballots. It is situations like this where I wonder: Why on earth do we need electronic voting machines? I just fed my paper ballot through a scan-tron like apparatus and it counted just as fast, and I have a paper trail should I say "Hang-on. How did my county go for Nader?". Go Vote...now... Even if you think your ballot will be canceled out by your spouse/room-mate/significant other (seriously, while volunteering, this was one couples excuse why they weren't voting)... honestly if you are reading this, you should already have voted and be wearing a sticker (unless you voted early/absentee. If you did that, you should dip your finger in blue ink as a sign of democracy spreading to the USA). And if you needed more inspiration...Voting makes Karl Rove cry... make him weep. ## Disclaimer All the Latin on this page is from my vague recollections from High School. There are mistakes in the text. I just was trying to get the point across ## Between Los Alamos,NM and White Rock, NM The photo of the travertine spring was taken in the small opening in the center of the image.
2021-10-23 12:14:02
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 2, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3697938024997711, "perplexity": 2360.0677760531416}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585671.36/warc/CC-MAIN-20211023095849-20211023125849-00260.warc.gz"}
https://www4.math.duke.edu/media/watch_video.php?v=4e211e151c8f4d372f61b05dc3ddde44
Quicklists Javascript must be enabled # Felix Otto : Gergen Lecture - Speaker, Felix Otto ## root 232 Views In three specific examples, we shall demonstrate how the theory of partial differential equations (PDEs) relates to pattern formation in nature: Spinodal decomposition and the Cahn-Hilliard equation, Rayleigh-B\'enard convection and the Boussinesq approximation, rough crystal growth and the Kuramoto-Sivashinsky equation. These examples from different applications have in common that only a few physical mechanisms, which are modeled by simple-looking evolutionary PDEs, lead to complex patterns. These mechanisms will be explained, numerical simulation shall serve as a visual experiment. Numerical simulations also reveal that generic solutions of these deterministic equations have stationary or self-similar statistics that are independent of the system size and of the details of the initial data. We show how PDE methods, i. e. a priori estimates, can be used to understand some aspects of this universal behavior. In case of the Cahn-Hilliard equation, the method makes use of its gradient flow structure and a property of the energy landscape. In case of the Boussinesq equation, a driven gradient flow'', the background field method is used. In case of the Kuramoto-Sivashinsky equation, that mixes conservative and dissipative dynamics, the method relies on a new result on Burgers' equation. • CategoryGergen Lectures • Duration: 01:08:47 • Date:  September 27, 2010 at 4:25 PM • Tags: Please select playlist name from following ## Report Video Please select the category that most closely reflects your concern about the video, so that we can review it and determine whether it violates our Community Guidelines or isn’t appropriate for all viewers. Abusing this feature is also a violation of the Community Guidelines, so don’t do it.
2023-03-29 07:41:46
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4966629147529602, "perplexity": 892.4193601280144}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296948951.4/warc/CC-MAIN-20230329054547-20230329084547-00492.warc.gz"}
https://www.physicsforums.com/threads/heat-transfer-problem.746305/
# Heat Transfer Problem ## Homework Statement Two bodies of masses $m$$_{1}$ and $m$$_{2}$ and specific heat capacities $s$$_{1}$ and $s$$_{2}$ , are connected by a rod of length $l$ and cross-sectional area $A$, thermal conductivity $K$ and negligible heat capacity. The whole system is thermally insulated. At time $t=0$, the temperature of the first body is $T$$_{1}$ and the temperature of the second body is $T$$_{2}$ ($T$$_{2}$ $>$ $T$$_{1}$ ). Find the temperature difference between the bodies at time $t$. ## Homework Equations $dQ/dt = KAdT/dx$ $dQ = msdθ$ ## The Attempt at a Solution I was able to set up $2$ equations relating the amount of heat transferred through the rod in a time $dt$ to the rise and fall of temperatures of the masses $m$$_{2}$ and $m$$_{1}$ respectively. I don't know how to proceed after this ? Related Introductory Physics Homework Help News on Phys.org maajdl Gold Member Since the heat capacity of the rod is neglected, the temperature profile in the rod is linear: dT/dx = (T2-T1)/l You need to write your second equation twice: once for m1 and once for m2. Then you can simply solve the equations.
2020-12-04 02:59:59
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7876154184341431, "perplexity": 480.40230835639045}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141733120.84/warc/CC-MAIN-20201204010410-20201204040410-00700.warc.gz"}
https://stats.stackexchange.com/questions/207510/what-does-the-number-kernel-option-refer-to-in-svm
# What does the number 'Kernel Option' refer to in SVM? I read that the performance of some kernel functions in SVM can change if we change the number known as kernel option. For example, this article states that kernel option of value 2 was used, http://library.binus.ac.id/eColls/eThesisdoc/Bab5/TSA-2014-0084%205.pdf. I searched a lot about the meaning of this parameter, but every time I either found results discussing the kernel function types (polynomial, Sigmoid,etc.), or I found research papers that use this parameter without stating what it is. So, what is 'kernel option'? And what is it called in libsvm in Weka? Thanks In LIBSVM, you can use different kernel types by changing the numerical value of the -t input. You can also set several parameter values depending on which kernel type is used. By "kernel option," the authors in your paper are likely referring to either the kernel type or one of these parameters. • If you input -t 0 it will use a linear function:$$K(x_i,x_j)=x_i^Tx_j$$ • If you input -t 1 it will use a polynominal function: $$K(x_i,x_j)=(\gamma x_i^Tx_j+r)^d, \gamma>0$$ • If you input -t 2 it will use a radial basis function: $$K(x_i,x_j)=\exp(-\gamma||x_i-x_j||^2), \gamma>0$$ • If you input -t 3 it will use a sigmoid function: $$K(x_i,x_j)=\tanh(\gamma x_i^T x_j + r)$$ Here, $\gamma$, $r$, and $d$ are kernel parameters that can also be specified using input commands. From context, I suspect that the authors were stating that they set $d=2$. However, their language is ambiguous and they should have explicitly stated which parameter they were referring to.
2022-01-18 11:32:48
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6731011867523193, "perplexity": 888.064892541206}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320300810.66/warc/CC-MAIN-20220118092443-20220118122443-00212.warc.gz"}
https://wikimili.com/en/Differential_equation
# Differential equation Last updated In mathematics, a differential equation is an equation that relates one or more functions and their derivatives. [1] In applications, the functions generally represent physical quantities, the derivatives represent their rates of change, and the differential equation defines a relationship between the two. Such relations are common; therefore, differential equations play a prominent role in many disciplines including engineering, physics, economics, and biology. ## Contents Mainly the study of differential equations consists of the study of their solutions (the set of functions that satisfy each equation), and of the properties of their solutions. Only the simplest differential equations are solvable by explicit formulas; however, many properties of solutions of a given differential equation may be determined without computing them exactly. Often when a closed-form expression for the solutions is not available, solutions may be approximated numerically using computers. The theory of dynamical systems puts emphasis on qualitative analysis of systems described by differential equations, while many numerical methods have been developed to determine solutions with a given degree of accuracy. ## History Differential equations first came into existence with the invention of calculus by Newton and Leibniz. In Chapter 2 of his 1671 work Methodus fluxionum et Serierum Infinitarum, [2] Isaac Newton listed three kinds of differential equations: {\displaystyle {\begin{aligned}&{\frac {dy}{dx}}=f(x)\\[5pt]&{\frac {dy}{dx}}=f(x,y)\\[5pt]&x_{1}{\frac {\partial y}{\partial x_{1}}}+x_{2}{\frac {\partial y}{\partial x_{2}}}=y\end{aligned}}} In all these cases, y is an unknown function of x (or of x1 and x2), and f is a given function. He solves these examples and others using infinite series and discusses the non-uniqueness of solutions. Jacob Bernoulli proposed the Bernoulli differential equation in 1695. [3] This is an ordinary differential equation of the form ${\displaystyle y'+P(x)y=Q(x)y^{n}\,}$ for which the following year Leibniz obtained solutions by simplifying it. [4] Historically, the problem of a vibrating string such as that of a musical instrument was studied by Jean le Rond d'Alembert, Leonhard Euler, Daniel Bernoulli, and Joseph-Louis Lagrange. [5] [6] [7] [8] In 1746, d’Alembert discovered the one-dimensional wave equation, and within ten years Euler discovered the three-dimensional wave equation. [9] The Euler–Lagrange equation was developed in the 1750s by Euler and Lagrange in connection with their studies of the tautochrone problem. This is the problem of determining a curve on which a weighted particle will fall to a fixed point in a fixed amount of time, independent of the starting point. Lagrange solved this problem in 1755 and sent the solution to Euler. Both further developed Lagrange's method and applied it to mechanics, which led to the formulation of Lagrangian mechanics. In 1822, Fourier published his work on heat flow in Théorie analytique de la chaleur (The Analytic Theory of Heat), [10] in which he based his reasoning on Newton's law of cooling, namely, that the flow of heat between two adjacent molecules is proportional to the extremely small difference of their temperatures. Contained in this book was Fourier's proposal of his heat equation for conductive diffusion of heat. This partial differential equation is now taught to every student of mathematical physics. ## Example In classical mechanics, the motion of a body is described by its position and velocity as the time value varies. Newton's laws allow these variables to be expressed dynamically (given the position, velocity, acceleration and various forces acting on the body) as a differential equation for the unknown position of the body as a function of time. In some cases, this differential equation (called an equation of motion) may be solved explicitly. An example of modeling a real-world problem using differential equations is the determination of the velocity of a ball falling through the air, considering only gravity and air resistance. The ball's acceleration towards the ground is the acceleration due to gravity minus the deceleration due to air resistance. Gravity is considered constant, and air resistance may be modeled as proportional to the ball's velocity. This means that the ball's acceleration, which is a derivative of its velocity, depends on the velocity (and the velocity depends on time). Finding the velocity as a function of time involves solving a differential equation and verifying its validity. ## Types Differential equations can be divided into several types. Apart from describing the properties of the equation itself, these classes of differential equations can help inform the choice of approach to a solution. Commonly used distinctions include whether the equation is ordinary or partial, linear or non-linear, and homogeneous or heterogeneous. This list is far from exhaustive; there are many other properties and subclasses of differential equations which can be very useful in specific contexts. ### Ordinary differential equations An ordinary differential equation (ODE) is an equation containing an unknown function of one real or complex variable x, its derivatives, and some given functions of x. The unknown function is generally represented by a variable (often denoted y), which, therefore, depends on x. Thus x is often called the independent variable of the equation. The term "ordinary" is used in contrast with the term partial differential equation, which may be with respect to more than one independent variable. Linear differential equations are the differential equations that are linear in the unknown function and its derivatives. Their theory is well developed, and in many cases one may express their solutions in terms of integrals. Most ODEs that are encountered in physics are linear. Therefore, most special functions may be defined as solutions of linear differential equations (see Holonomic function). As, in general, the solutions of a differential equation cannot be expressed by a closed-form expression, numerical methods are commonly used for solving differential equations on a computer. ### Partial differential equations A partial differential equation (PDE) is a differential equation that contains unknown multivariable functions and their partial derivatives. (This is in contrast to ordinary differential equations, which deal with functions of a single variable and their derivatives.) PDEs are used to formulate problems involving functions of several variables, and are either solved in closed form, or used to create a relevant computer model. PDEs can be used to describe a wide variety of phenomena in nature such as sound, heat, electrostatics, electrodynamics, fluid flow, elasticity, or quantum mechanics. These seemingly distinct physical phenomena can be formalized similarly in terms of PDEs. Just as ordinary differential equations often model one-dimensional dynamical systems, partial differential equations often model multidimensional systems. Stochastic partial differential equations generalize partial differential equations for modeling randomness. ### Non-linear differential equations A non-linear differential equation is a differential equation that is not a linear equation in the unknown function and its derivatives (the linearity or non-linearity in the arguments of the function are not considered here). There are very few methods of solving nonlinear differential equations exactly; those that are known typically depend on the equation having particular symmetries. Nonlinear differential equations can exhibit very complicated behaviour over extended time intervals, characteristic of chaos. Even the fundamental questions of existence, uniqueness, and extendability of solutions for nonlinear differential equations, and well-posedness of initial and boundary value problems for nonlinear PDEs are hard problems and their resolution in special cases is considered to be a significant advance in the mathematical theory (cf. Navier–Stokes existence and smoothness). However, if the differential equation is a correctly formulated representation of a meaningful physical process, then one expects it to have a solution. [11] Linear differential equations frequently appear as approximations to nonlinear equations. These approximations are only valid under restricted conditions. For example, the harmonic oscillator equation is an approximation to the nonlinear pendulum equation that is valid for small amplitude oscillations (see below). ### Equation order Differential equations are described by their order, determined by the term with the highest derivatives. An equation containing only first derivatives is a first-order differential equation, an equation containing the second derivative is a second-order differential equation, and so on. [12] [13] Differential equations that describe natural phenomena almost always have only first and second order derivatives in them, but there are some exceptions, such as the thin film equation, which is a fourth order partial differential equation. ### Examples In the first group of examples u is an unknown function of x, and c and ω are constants that are supposed to be known. Two broad classifications of both ordinary and partial differential equations consist of distinguishing between linear and nonlinear differential equations, and between homogeneous differential equations and heterogeneous ones. • Heterogeneous first-order linear constant coefficient ordinary differential equation: ${\displaystyle {\frac {du}{dx}}=cu+x^{2}.}$ • Homogeneous second-order linear ordinary differential equation: ${\displaystyle {\frac {d^{2}u}{dx^{2}}}-x{\frac {du}{dx}}+u=0.}$ • Homogeneous second-order linear constant coefficient ordinary differential equation describing the harmonic oscillator: ${\displaystyle {\frac {d^{2}u}{dx^{2}}}+\omega ^{2}u=0.}$ • Heterogeneous first-order nonlinear ordinary differential equation: ${\displaystyle {\frac {du}{dx}}=u^{2}+4.}$ • Second-order nonlinear (due to sine function) ordinary differential equation describing the motion of a pendulum of length L: ${\displaystyle L{\frac {d^{2}u}{dx^{2}}}+g\sin u=0.}$ In the next group of examples, the unknown function u depends on two variables x and t or x and y. • Homogeneous first-order linear partial differential equation: ${\displaystyle {\frac {\partial u}{\partial t}}+t{\frac {\partial u}{\partial x}}=0.}$ • Homogeneous second-order linear constant coefficient partial differential equation of elliptic type, the Laplace equation: ${\displaystyle {\frac {\partial ^{2}u}{\partial x^{2}}}+{\frac {\partial ^{2}u}{\partial y^{2}}}=0.}$ • Homogeneous third-order non-linear partial differential equation : ${\displaystyle {\frac {\partial u}{\partial t}}=6u{\frac {\partial u}{\partial x}}-{\frac {\partial ^{3}u}{\partial x^{3}}}.}$ ## Existence of solutions Solving differential equations is not like solving algebraic equations. Not only are their solutions often unclear, but whether solutions are unique or exist at all are also notable subjects of interest. For first order initial value problems, the Peano existence theorem gives one set of circumstances in which a solution exists. Given any point ${\displaystyle (a,b)}$ in the xy-plane, define some rectangular region ${\displaystyle Z}$, such that ${\displaystyle Z=[l,m]\times [n,p]}$ and ${\displaystyle (a,b)}$ is in the interior of ${\displaystyle Z}$. If we are given a differential equation ${\displaystyle {\frac {dy}{dx}}=g(x,y)}$ and the condition that ${\displaystyle y=b}$ when ${\displaystyle x=a}$, then there is locally a solution to this problem if ${\displaystyle g(x,y)}$ and ${\displaystyle {\frac {\partial g}{\partial x}}}$ are both continuous on ${\displaystyle Z}$. This solution exists on some interval with its center at ${\displaystyle a}$. The solution may not be unique. (See Ordinary differential equation for other results.) However, this only helps us with first order initial value problems. Suppose we had a linear initial value problem of the nth order: ${\displaystyle f_{n}(x){\frac {d^{n}y}{dx^{n}}}+\cdots +f_{1}(x){\frac {dy}{dx}}+f_{0}(x)y=g(x)}$ such that ${\displaystyle y(x_{0})=y_{0},y'(x_{0})=y'_{0},y''(x_{0})=y''_{0},\ldots }$ For any nonzero ${\displaystyle f_{n}(x)}$, if ${\displaystyle \{f_{0},f_{1},\ldots \}}$ and ${\displaystyle g}$ are continuous on some interval containing ${\displaystyle x_{0}}$, ${\displaystyle y}$ is unique and exists. [14] ## Connection to difference equations The theory of differential equations is closely related to the theory of difference equations, in which the coordinates assume only discrete values, and the relationship involves values of the unknown function or functions and values at nearby coordinates. Many methods to compute numerical solutions of differential equations or study the properties of differential equations involve the approximation of the solution of a differential equation by the solution of a corresponding difference equation. ## Applications The study of differential equations is a wide field in pure and applied mathematics, physics, and engineering. All of these disciplines are concerned with the properties of differential equations of various types. Pure mathematics focuses on the existence and uniqueness of solutions, while applied mathematics emphasizes the rigorous justification of the methods for approximating solutions. Differential equations play an important role in modeling virtually every physical, technical, or biological process, from celestial motion, to bridge design, to interactions between neurons. Differential equations such as those used to solve real-life problems may not necessarily be directly solvable, i.e. do not have closed form solutions. Instead, solutions can be approximated using numerical methods. Many fundamental laws of physics and chemistry can be formulated as differential equations. In biology and economics, differential equations are used to model the behavior of complex systems. The mathematical theory of differential equations first developed together with the sciences where the equations had originated and where the results found application. However, diverse problems, sometimes originating in quite distinct scientific fields, may give rise to identical differential equations. Whenever this happens, mathematical theory behind the equations can be viewed as a unifying principle behind diverse phenomena. As an example, consider the propagation of light and sound in the atmosphere, and of waves on the surface of a pond. All of them may be described by the same second-order partial differential equation, the wave equation, which allows us to think of light and sound as forms of waves, much like familiar waves in the water. Conduction of heat, the theory of which was developed by Joseph Fourier, is governed by another second-order partial differential equation, the heat equation. It turns out that many diffusion processes, while seemingly different, are described by the same equation; the Black–Scholes equation in finance is, for instance, related to the heat equation. The number of differential equations that have received a name, in various scientific areas is a witness of the importance of the topic. See List of named differential equations. ## Software Some CAS softwares can solve differential equations. These CAS softwares and their commands are worth mentioning: ## Related Research Articles In mathematics, a partial differential equation (PDE) is an equation which imposes relations between the various partial derivatives of a multivariable function. In mathematics and science, a nonlinear system is a system in which the change of the output is not proportional to the change of the input. Nonlinear problems are of interest to engineers, biologists, physicists, mathematicians, and many other scientists because most systems are inherently nonlinear in nature. Nonlinear dynamical systems, describing changes in variables over time, may appear chaotic, unpredictable, or counterintuitive, contrasting with much simpler linear systems. In mathematics and physics, the heat equation is a certain partial differential equation. Solutions of the heat equation are sometimes known as caloric functions. The theory of the heat equation was first developed by Joseph Fourier in 1822 for the purpose of modeling how a quantity such as heat diffuses through a given region. The calculus of variations is a field of mathematical analysis that uses variations, which are small changes in functions and functionals, to find maxima and minima of functionals: mappings from a set of functions to the real numbers. Functionals are often expressed as definite integrals involving functions and their derivatives. Functions that maximize or minimize functionals may be found using the Euler–Lagrange equation of the calculus of variations. Numerical methods for ordinary differential equations are methods used to find numerical approximations to the solutions of ordinary differential equations (ODEs). Their use is also known as "numerical integration", although this term can also refer to the computation of integrals. In mathematics, a linear differential equation is a differential equation that is defined by a linear polynomial in the unknown function and its derivatives, that is an equation of the form In mathematics, separation of variables is any of several methods for solving ordinary and partial differential equations, in which algebra allows one to rewrite an equation so that each of two variables occurs on a different side of the equation. In mathematics and its applications, classical Sturm–Liouville theory is the theory of real second-order linear ordinary differential equations of the form: In mathematics, the method of characteristics is a technique for solving partial differential equations. Typically, it applies to first-order equations, although more generally the method of characteristics is valid for any hyperbolic partial differential equation. The method is to reduce a partial differential equation to a family of ordinary differential equations along which the solution can be integrated from some initial data given on a suitable hypersurface. In mathematics, variation of parameters, also known as variation of constants, is a general method to solve inhomogeneous linear ordinary differential equations. In mathematics, a hyperbolic partial differential equation of order is a partial differential equation (PDE) that, roughly speaking, has a well-posed initial value problem for the first derivatives. More precisely, the Cauchy problem can be locally solved for arbitrary initial data along any non-characteristic hypersurface. Many of the equations of mechanics are hyperbolic, and so the study of hyperbolic equations is of substantial contemporary interest. The model hyperbolic equation is the wave equation. In one spatial dimension, this is In mathematics, an exact differential equation or total differential equation is a certain kind of ordinary differential equation which is widely used in physics and engineering. In mathematics, constraint counting is counting the number of constraints in order to compare it with the number of variables, parameters, etc. that are free to be determined, the idea being that in most cases the number of independent choices that can be made is the excess of the latter over the former. In mathematics, and more specifically in partial differential equations, Duhamel's principle is a general method for obtaining solutions to inhomogeneous linear evolution equations like the heat equation, wave equation, and vibrating plate equation. It is named after Jean-Marie Duhamel who first applied the principle to the inhomogeneous heat equation that models, for instance, the distribution of heat in a thin plate which is heated from beneath. For linear evolution equations without spatial dependency, such as a harmonic oscillator, Duhamel's principle reduces to the method of variation of parameters technique for solving linear inhomogeneous ordinary differential equations. In mathematics, a first-order partial differential equation is a partial differential equation that involves only first derivatives of the unknown function of n variables. The equation takes the form In mathematics, the inverse scattering transform is a method for solving some non-linear partial differential equations. It is one of the most important developments in mathematical physics in the past 40 years. The method is a non-linear analogue, and in some sense generalization, of the Fourier transform, which itself is applied to solve many linear partial differential equations. The name "inverse scattering method" comes from the key idea of recovering the time evolution of a potential from the time evolution of its scattering data: inverse scattering refers to the problem of recovering a potential from its scattering matrix, as opposed to the direct scattering problem of finding the scattering matrix from the potential. A differential equation can be homogeneous in either of two respects. A parabolic partial differential equation is a type of partial differential equation (PDE). Parabolic PDEs are used to describe a wide variety of time-dependent phenomena, including heat conduction, particle diffusion, and pricing of derivative investment instruments. In mathematics, an ordinary differential equation (ODE) is a differential equation containing one or more functions of one independent variable and the derivatives of those functions. The term ordinary is used in contrast with the term partial differential equation which may be with respect to more than one independent variable. In applied mathematics, methods of mean weighted residuals (MWR) are methods for solving differential equations. The solutions of these differential equations are assumed to be well approximated by a finite sum of test functions . In such cases, the selected method of weighted residuals is used to find the coefficient value of each corresponding test function. The resulting coefficients are made to minimize the error between the linear combination of test functions, and actual solution, in a chosen norm. ## References 1. Dennis G. Zill (15 March 2012). A First Course in Differential Equations with Modeling Applications. Cengage Learning. ISBN   978-1-285-40110-2. 2. Newton, Isaac. (c.1671). Methodus Fluxionum et Serierum Infinitarum (The Method of Fluxions and Infinite Series), published in 1736 [Opuscula, 1744, Vol. I. p. 66]. 3. Bernoulli, Jacob (1695), "Explicationes, Annotationes & Additiones ad ea, quae in Actis sup. de Curva Elastica, Isochrona Paracentrica, & Velaria, hinc inde memorata, & paratim controversa legundur; ubi de Linea mediarum directionum, alliisque novis", Acta Eruditorum 4. Hairer, Ernst; Nørsett, Syvert Paul; Wanner, Gerhard (1993), Solving ordinary differential equations I: Nonstiff problems, Berlin, New York: Springer-Verlag, ISBN   978-3-540-56670-0 5. Frasier, Craig (July 1983). "Review of The evolution of dynamics, vibration theory from 1687 to 1742, by John T. Cannon and Sigalia Dostrovsky" (PDF). Bulletin of the American Mathematical Society. New Series. 9 (1). 6. Wheeler, Gerard F.; Crummett, William P. (1987). "The Vibrating String Controversy". Am. J. Phys. 55 (1): 33–37. Bibcode:1987AmJPh..55...33W. doi:10.1119/1.15311. 7. For a special collection of the 9 groundbreaking papers by the three authors, see First Appearance of the wave equation: D'Alembert, Leonhard Euler, Daniel Bernoulli. - the controversy about vibrating strings (retrieved 13 Nov 2012). Herman HJ Lynge and Son. 8. For de Lagrange's contributions to the acoustic wave equation, can consult Acoustics: An Introduction to Its Physical Principles and Applications Allan D. Pierce, Acoustical Soc of America, 1989; page 18.(retrieved 9 Dec 2012) 9. Speiser, David. Discovering the Principles of Mechanics 1600-1800 , p. 191 (Basel: Birkhäuser, 2008). 10. Fourier, Joseph (1822). Théorie analytique de la chaleur (in French). Paris: Firmin Didot Père et Fils. OCLC   2688081. 11. Boyce, William E.; DiPrima, Richard C. (1967). Elementary Differential Equations and Boundary Value Problems (4th ed.). John Wiley & Sons. p. 3. 12. Weisstein, Eric W. "Ordinary Differential Equation Order." From MathWorld--A Wolfram Web Resource. http://mathworld.wolfram.com/OrdinaryDifferentialEquationOrder.html 13. Order and degree of a differential equation Archived 2016-04-01 at the Wayback Machine , accessed Dec 2015. 14. Zill, Dennis G. (2001). A First Course in Differential Equations (5th ed.). Brooks/Cole. ISBN   0-534-37388-7. 15. "dsolve - Maple Programming Help". www.maplesoft.com. Retrieved 2020-05-09. 16. "DSolve - Wolfram Language Documentation". www.wolfram.com. Retrieved 2020-06-28. 17. "Basic Algebra and Calculus — Sage Tutorial v9.0". doc.sagemath.org. Retrieved 2020-05-09.
2021-05-10 07:55:35
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 29, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8213109970092773, "perplexity": 363.204180845013}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243989115.2/warc/CC-MAIN-20210510064318-20210510094318-00397.warc.gz"}
https://nickyjam.com/b5h3lmjb/article.php?c0626b=coordination-number-of-nickel
Video Explanation. CHEM1902 Coordination Chemistry The total number of points of attachment to the central element is termed the coordination numberand this can vary from 2 to as many as 16, but is usually 6. What is the coordination number of a nickel atom? Thus the metal atom has coordination number 8 in the coordination complexes [Mo(CN)8]4- and [Sr(H2O)8]2+; 7 in the complex ISBN: 9781938168390. Coordination number of Nickel in [N i (C 2 O 4 ) 3 ] 4 − is: A. Video Transcript. D. 4. 1 answer. Related. Most complexes have a coordination number of 6, and in almost all of these complexes, the ligands are arranged around the metal center in octahedral geometry. 2. So we r calculating co ornination number by two ways. Since cobalt(III) most frequently has a coordination number of 6, carbonate is most likely acting as a bidentate ligand in {eq}\rm{\left[Co(CO_3)(NH_3)4\right]^{3+}}{/eq}. B. toppr. C. 6. Atomic and Ionic Radii . Buy Find arrow_forward. You shall be able to use this method most of the time, but in certain cases, there are some exceptions. If the ligand already contains a Greek prefix (e.g. The coordination number for the silver ion in [Ag(NH 3 ) 2 ] + is two ( Figure 3 ). 12) Write the hybridisation and magnetic behaviour of the complex Ni(CO) 4. Answer. Electrons are found in predictable locations around an atom's nucleus, called orbitals. The coordination number and oxidation state of Cr in K3Cr(C2O4)3 are respectively (a) 3 and +3. Class 1 - 3; Class 4 - 5; Class 6 - 10; Class 11 - 12; CBSE. The body - centered cubic ( bcc ) has a coordination number of 8 and contains 2 atoms per unit cell. Prasanna R. 1 decade ago. What is the co-ordination number of Ni in nickel-DMG complex? Coordination number of Ni in [Ni(C 2 O 4) 3] 4-is (a) 3 (b) 4 (c) 5 (d) 6. Hence the coordination number is 4. This steric constitution can be explained with the ligand field theory (LFT). 1 × 4 + x × (–1) × 4 = 0. The nickel complex exhibits the usual, very symmetrical, octahedral geometry. The size of an atom or ion depends on the size of the nucleus and the number of electrons. [NiCl4]2- is paramagnetic, and [Ni(CN)4]2- is diamagnetic. Buy Find arrow_forward. Coordinations polyhedron: The spatial arrangement of the ligand atoms which are directly attached to the central atom/ion defines a coordination polyhedron about the central atom. Analysis of X-ray absorption near-edge structure (XANES) for Ni(II) complexes provides information regarding the coordination number and geometry of the nickel center 21. Coordination complexes consist of a central metal atom or ion bound to some number of functional groups known as ligands. EASY. total 4 atoms are attached to central metal. Question 5. In that range, almost every steric constitution of the ligands is possible: octaedric, trigonal-bipyramidal, quadratic-pyramidal, tetraedric and quadratic. The term was originally defined in 1893 by Swiss chemist Alfred Werner (1866–1919). Examples of compounds with this structure are NiAs, NiS, FeS, and CoS. asked Dec 26, 2018 in Chemistry by sonuk (44.5k points) coordination compounds; neet; 0 votes. 3. The coordination number of an atom in a molecule is the number of atoms bonded to the atom. Coordination Chemistry Reviews EIsevier Publishing Company Amsterdam Printed in The Netherlands SCHIFF BASE NICKEL(II) COMPLEXES WITH COORDINATION NUMBER EXCEEDING FOUR BY, S. YAMADA, E. OHNO, Y. KUGE, A. TAKEUCHI, K. YAMANOUCHI AND K. IWASAKI Institute of Chemistry, College of General Education, Osaka University, Toyonaka, Osaka (Japan) It is well known that the coordination number of nickel … The coordination number is just how many groups are bound to it. Nickel (II) forms a precipitate with an alcoholic solution of dimethylglyoxime, H 2 C 4 H 6 0 2 N 2, in a slightly alkaline medium. Question 6. Chemistry by OpenStax (2015-05-04) 1st Edition. NCERT Books. the number off other particle that a particle touches are the number off nearest neighbor off any particle in the crystal lattice structure. In the Ni-Complex 2 DMG ligands are present i.e. One of these complexes is square planar, and the other is tetrahedral. In the coordination compound, K4[Ni(CN)4], the oxidation state of nickel is Coordination Compounds In the coordination compound, K 4 [Ni(CN) 4], the oxidation state of nickel is. Top Answer. The primary valency of the metal ion in the coordination compound K2[Ni(CN)4] is.. asked Apr 26, 2019 in Co-ordinations compound by Faizaan (71.0k points) coordination compounds; jee; jee … You must be signed in to discuss. Answered By . 0 0. Klaus Theopold + 4 others. Co-ordination number is total number of atoms directly attached to the central metal. ethylenediamine) or if it is a polydentate ligand (i.e. 3. 1 2. Therefore, the coordination number of a nickel atom in the cubic close packed structure of nickel is $12 .$ View Answer. DMG ligand is bidentate (i.e. are used to designate the number of each type of ligand in the complex ion. The face- centered cubic (FCC) has a coordination number of 12 and contains 4 atoms per unit cell. The coordination number of a nickel atom that crystallizes in cubic closest packed structure is 12. $\ce{Ni^{+2}}$ produces a square planar complex with $\ce{CN-}$, as cyanide is a strong ligand. The size of the central atom or ion: Larger atoms (periods 5 & 6) on the left side of the periodic table are larger, and can accommodate more ligands. The correct option is A. As we shall see, the coordination number depends on the relative size of the atoms or ions. For the copper(II) ion in [CuCl 4 ] 2− , the coordination number is four, whereas for the cobalt(II) ion in [Co(H 2 O) 6 ] 2+ the coordination number … Collinear complexes are common in the case of heavy metal cations of d … Chemistry by OpenStax (2015-05-04) 1st Edition. A coordination compound is characterized by the nature of the central metal atom or ion, the oxidation state of the latter (that is, the gain or loss of electrons in passing from the neutral atom to the charged ion, sometimes referred to as the oxidation number), and the number, kind, and arrangement of the ligands. Significantly, the Ni SA ‐N 2 ‐C catalyst, with the lowest N coordination number, achieves high CO Faradaic efficiency (98 %) and turnover frequency (1622 h −1), far superior to those of Ni SA ‐N 3 ‐C and Ni SA ‐N 4 ‐C, in electrocatalytic CO 2 reduction. 4 + x – 4 = 0. x = 0. The coordination pattern did not reveal any specific fold, nevertheless we report preferable residue spacing for specific structural architecture. In this experiment, we will study reactions of two octahedral complexes: [Ni(H 26 O) ]2+ and [Cu(H 26 O) ]2+. In FCC structure, the atoms are present at each corner as well as each face centre. What is the coordination number of a nickel atom? A. Nickel(II) complexes in which the metal coordination number is 4 can have either square-planar or tetrahedral geometry. Oxalate is an example for bidentate ligand, three oxalate ligands form six-coordinate bonds around the N i 2 + ion. Coordination number, the number of atoms, ions, or molecules that a central atom or ion holds as its nearest neighbours in a complex or coordination compound or in a crystal. In the compound [Ni(NO2)2(C2O4)2]4-, what is the coordination number of nickel? A type of ionic crystal structure in which the anions have a distorted hexagonal close packed arrangement with the cations occupying the octahedral holes. The coordination number is the number of donor atoms bonded to the central metal atom/ion. Discussion. Coordination number of a metal ion is also equal to the total number of coordinate bonds present in a complex. Most metals have a large number of accessible electrons compared to light main group elements such as nitrogen, oxygen, or carbon. Coordination number 2: collinear. 11) What is the magnetic moment of nickel ion in tetraammine nickel(ii) chloride, [Ni(NH 3) 4]Cl 2? In this case there are 4 cyano groups giving the nickel a coordination number of 4. The coordination number of the central metal ion or atom is the number of donor atoms bonded to it. Coordination Number of an Atom in a Molecule or a Crystal Refers to the Total Number of Atoms, Ions, or Molecules Bonded to It. 3. Cubic Closest Packed structure is equivalent to Face-Centered Cubic unit cell. Co-ordinate number of Nickel in [N i (C 2 O 4 ) 3 ] 4 − is 3 × 2 = 6. Answer. two atoms are attach to central metal). … Publisher: OpenStax. Chapter. This compound has two nitrite groups, called nitrito groups, which are monodentate ligands, and two oxalate groups, called oxalato groups, that are bidentate ligands coordinating to the central Ni(II) metal ion. BOOK FREE CLASS; COMPETITIVE EXAMS. Solutions. Introduction[1][2]: Nickel appears in the coordination numbers 4, 5 and 6. Coordination patterns predicted from the structures are reported in terms of donors, chelate length, coordination number, chelate geometry, structural fold and architecture. Section. Though a maximum coordination number of 8 was observed, the presence of a single protein donor was noted to be mandatory in nickel coordination. Chapter 10 Liquids and Solids Chemistry Topics. Let the oxidation number of Ni in K 4 [Ni(CN) 4] = x. 0 –1 +1 +2; Answer. If coordination number (= number of ligands) = 6 then the shape of the complex ion is octahedral (octahedral geometry is most common for transition metal complexes) Examples of the Shapes of Some Complex Ions . Coordination numbers can range from 2 up to 12, with 4 and 6 quite common for the upper transition metals. D. 4. C. 6. Answer is explained below. Klaus Theopold + 4 others. So, we must first discuss their sizes. Nickel carbonyl, on the other hand, exists as a tetrahedral complex because nickel's oxidation is 0. Each type of ion has a coordination number of 6. The following factors influence the coordination number of the complex. The coordination number is six. The analysis revealed histidine as the most favored residue in nickel coordination. The number of ions or atoms that immediately surround an atom or ion of interest is called the coordination number, - C.N. Publisher: OpenStax. JOURNAL OF SOLID STATE CHEMISTRY 50, 153-162 (1983) An Investigation of the Coordination Number of Ni2'1' in Nickel Bearing Phyllosilicates Using Diffuse Reflectance Spectroscopy M. ISABEL TEJEDOR-TEJEDOR* AND MARC A. ANDERSON Water Chemistry Program, 660 North Park Street, University of Wisconsin, Madison, Wisconsin 53706 AND ADRIEN J. HERBILLON Section de … ISBN: 9781938168390. Learn about Coordination Number Here. Hence, option C is correct. EDTA has coordination number (a) 3 (b) 4 (c) 5 (d) 6. Become a member and unlock all Study Answers. The Greek prefixes di-, tri-, tetra-, etc. BNAT; Classes. B. Factors Effecting Coordination Number. it can attach at more than one coordination site), the prefixes bis-, tris-, tetrakis-, and pentakis- are used instead.. (See examples 3 and In chemistry and crystallography, the coordination number describes the number of neighbor atoms with respect to a central atom. Coordination Numbers and Geometry Lecture 2. Explanation: Nickel metal crystallizes in a cubic closest packed structure. In addition to the common octahedral and square-planar complexes, several other types of complexes (which can be classified according to coordination number) are observed.
2021-04-20 14:20:31
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6246662735939026, "perplexity": 2560.626693383916}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618039398307.76/warc/CC-MAIN-20210420122023-20210420152023-00529.warc.gz"}
http://langintro.com/cljsbook/functions_in_cljs.html
# Functions in ClojureScript¶ So, how do you call functions in ClojureScript? There’s a universal rule: Write an open parenthesis, the name of the function, the function’s arguments, and a closing parenthesis. ## Doing Arithmetic with Functions¶ Bringing back the addition function box from the previous page: If you want to add 3 and 5 in ClojureScript, you write the following expression: an open parenthesis, the name of the function (in this case, its name is +), the arguments, and the closing parenthesis. In keeping with the philosophy of this book, you didn’t merely add 3 and 5, you transformed the numbers 3 and 5 into 8 by applying the add function to them. OK, so how would you write an expression that applies the multiply function to the numbers 8 and 9 to get their product? Or, in more ordinary terminology, write an expression that uses the * function to multiply 8 by 9. Try it in the active code box below. (Note: the line beginning with ; is a comment. Comments are for us humans to read; the computer ignores the semicolon and everything that follows it on that line.) In both these cases, it doesn’t matter which order you put the numbers, since addition and multiplication are commutative (a fancy math term for “order doesn’t matter”). But what about division and subtraction, where order does matter? Which number comes first? Try doing a function call using the / function to divide 8 by 2. Experiment with both orders to see which one gives you the correct answer of 4. In ClojureScript, division is floating point by default. (Why do they call it floating point?) If you need to do integer division, use the quot function. To get the remainder after integer division, use the rem function. Thus, (quot 35 4) is 8, and (rem 35 4) is 3. The arithmetic functions can take more than two arguments, so if you need to multiply 7 times 6 times 23, you can write (* 7 6 23); similarly, (+ 10 4 2 11) will result in 27. However, what happens if you need to do arithmetic involving more than one operation, such as 3 + 4 * 5? Go to the next page to find out how to do that. Next Section - Doing More Than One Operation
2018-01-23 13:37:24
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.48219332098960876, "perplexity": 514.9141326250158}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084891976.74/warc/CC-MAIN-20180123131643-20180123151643-00278.warc.gz"}
http://math.stackexchange.com/questions/221825/for-a-matrix-f-with-more-columns-than-rows-why-is-f-cdot-ft-invertible/221828
# For a matrix $F$ with more columns than rows, why is $F\cdot F^T$ invertible? Assuming $F$ is a matrix with full rank but more columns than rows - why is $F\cdot F^T$ invertible? - What just happened?! – Lord_Farin Oct 27 '12 at 21:23 Invertible..."things"? You mean like a dog doing a rolling trick or a capsizing ship? Really... – DonAntonio Oct 27 '12 at 21:25 Counter-example in $\mathbb{Z}_p$ (see Ted's comment): $$\begin{bmatrix}1 & \cdots & 1\end{bmatrix}$$ with $p$ columns. – wj32 Oct 27 '12 at 21:59 It suffices to show the null space of $F F^T$ is $\{0\}$. Suppose $F F^T x = 0$. Then \begin{align*} & x^T F F^T x = 0 \\ \implies & (F^T x)^T (F^T x) = 0\\ \implies & \|F^T x \|^2 = 0 \\ \implies & F^T x = 0 \\ \implies & x = 0. \end{align*} (Because $F^T$ is a skinny matrix with full rank, its null space is $\{0\}$.)
2016-02-13 09:25:08
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9994990825653076, "perplexity": 422.43520627709495}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-07/segments/1454701166261.11/warc/CC-MAIN-20160205193926-00207-ip-10-236-182-209.ec2.internal.warc.gz"}
http://math.stackexchange.com/questions/239137/what-is-the-dimension-of-subspace-vecu-in-bbbrn-vecnt-vecu-vec0?answertab=oldest
# What is the dimension of subspace $\{\vec{u}\in\Bbb{R}^n:\vec{n}^T\vec{u}=\vec{0}\}$? The subspace in question: $V=\{ \vec{u} \in \Bbb{R}^n : \vec{n}^T\vec{u}=\vec{0} \}$ I am assuming that $\vec{u} = \begin{bmatrix}x_0 \\ x_1 \\ \vdots \\ x_2\end{bmatrix}$. The dimension of a vector space/subspace is equal to the number of linearly independent vectors in its basis. So it's either 1 or n. Does $\vec{u}$ count as 1 or as n? - (1) By definition, the vectors in a basis are linearly independent. (2) Why do you say "So it's either 1 or $n$"? –  wj32 Nov 17 '12 at 10:13 (1) Ok. (2) Because I don't know what the answer is, but I'm fairly certain that it's one of those. –  user1132363 Nov 17 '12 at 10:15 $\vec u$ is one vector, in itself determines $1$ dimension (unless $\vec u=\vec 0$), but lies in the (canonical) $n$ dimension space $\Bbb R^n$, that is: has $n$ coordinates. If we have $k$ vectors: $\vec a_1,..,\vec a_k$ (doesn't matter from which vector space), then they can 'determine' at most $k$ dimension. Intuitively, in the exercise, $V=\{\vec u \mid \vec u\perp\vec n\}$ for the fixed $\vec n$. Unless $\vec n=\vec 0$, geometrically viewed, it must be of dimension $n-1$. You can try to find a basis (to verify this claim), preferaribly starting out from an orthogonal basis of $\Bbb R^n$ which contains $\vec n$. - It can be helpful to think of a concrete example. Suppose $n = 3$ and $\vec{n} = \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix}$. Then $$V = \left \{ \begin{bmatrix} x \\ y \\ 0 \end{bmatrix} \mid x, y \in \mathbb R \right\}.$$ In this example it seems clear that $V$ has dimension $2$. - If I understand this correctly then the answer to my question is: "anywhere up to n" ? –  user1132363 Nov 17 '12 at 12:44 Actually no, the answer is $n - 1$ (assuming $u \neq 0$). One way to prove this is to note that $V$ is the null space of $\vec{n}^T$, and then use the rank-nullity theorem. –  littleO Nov 17 '12 at 20:04 I should have said "assuming $\vec{n} \neq 0$. (I was mistakenly writing $u$ instead of $\vec{n}$.) –  littleO Nov 17 '12 at 20:56 $\vec{u} \in \Bbb{R}^n$, $\vec{u}\not=\vec{0}$ $U = \{ k\vec{u}, k \in \Bbb{R} \} \subset \Bbb{R}^n$ $V=\{ \vec{v} \in \Bbb{R}^n : \vec{u}.\vec{v}=0 \} \subset \Bbb{R}^n$ $U + V = \{ \vec{u} + \vec{v}, \vec{u} \in U, \vec{v} \in V \} \subset \Bbb{R}^n$ You need to prove $\Bbb{R}^n \subset U + V$ Chose any $\vec{x}\in \Bbb{R}^n$ $(\vec{x}-\cfrac{\vec{x}.\vec{u}}{\vec{u}.\vec{u}}\vec{u}).\vec{u} = \vec{x}.\vec{u} - \cfrac{\vec{x}.\vec{u}}{\vec{u}.\vec{u}}\vec{u}.\vec{u} = 0$ So $(\vec{x}-\cfrac{\vec{x}.\vec{u}}{\vec{u}.\vec{u}}\vec{u}).\vec{u} \in V$ So $\Bbb{R}^n \subset U + V$ and since we already have $U + V \subset \Bbb{R}^n$, $U + V = \Bbb{R}^n$ Then you need to prove $U\cap V=\{\vec{0}\}$ Chose any $\vec{x}\in U\cap V$ $\exists k\in\Bbb{R},\vec{x} = k \vec{u}$ and $0 = \vec{u}.\vec{x} = \vec{u}.k\vec{u} = k\|u\|^2$ So $0 = k$ because $\vec{u}\not= \vec{0}$ $\vec{x} = k \vec{u} = 0 \vec{u} = \vec{0}$ You know that $n = dim(\Bbb{R}^n) = dim(U+V) = dim( U ) + dim( V )- dim( U \cap V ) = 1 + dim( V ) - 0$ $n = 1 + dim( V )$ $dim( V ) = n - 1$ -
2015-01-28 01:55:19
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 1, "x-ck12": 0, "texerror": 0, "math_score": 0.9654671549797058, "perplexity": 348.9715462551197}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-06/segments/1422121540415.91/warc/CC-MAIN-20150124174540-00018-ip-10-180-212-252.ec2.internal.warc.gz"}
https://256k1.dev/posts/ecdsa_malleability/
## What is ECDSA?⌗ • ECDSA is a signature generation scheme which uses a key pair consisting of a public key and a private key. • The "EC" relates to this scheme being a variant based on an Elliptic Curve. • ECDSA requires agreement of three parameters: 1. G — the generator point 2. n — the multiplicative order of the point G ## ECDSA in bitcoin⌗ ### Transaction signatures⌗ Bitcoin uses digital signatures to authorise spending of UTXOs. In general, once a new transaction has been created the private key of each (input) UTXO in the transaction must sign the entire transaction. However this is not always necessarily the case…​ There is a parameter in the signature called the `SIGHASH` and this can be changed to reflect which sub-parts of the transaction you want the signature related to this input to be signing. There are three ways signatures can commit to outputs: 1. `SIGHASH_ALL` — commit to all outputs 2. `SIGHASH_SINGLE` — commit to output at same index as input 3. `SIGHASH_NONE` — commit to no outputs! more outputs can be added without invalidating your signature …​and two ways to commit to inputs: 1. `ANYONECANPAY` is not set — no one can change any input without invalidating your signature 2. `ANYONECANPAY` is set — inputs can be changed, added or removed; anyone can pay Any combination of input and output `SIGHASH` types can be used which gives 6 combinations. A "standard" transaction for example would probably combine `SIGHASH_ALL` with an unset `ANYONECANPAY` to commit to the entire transaction. `SIGHASH_NOINPUT` now renamed to `SIGHASH_ANYPREVOUT` is a proposal for a sighash where the identifier for the UTXO being spent is not signed, allowing the signature to be used with any UTXO that’s protected by a similar script (i.e. uses the same public keys). ### secp256k1 parameters⌗ The elliptic curve chosen for bitcoin by Satoshi was secp256k1. Parameters for this curve can be found here and below: The elliptic curve domain parameters over bbb"F"_p associated with a Koblitz curve secp256k1 are specified by the sextuple T = (p, a, b, G, n, h) where the finite field bbb"F"_p is defined by: p = tt"FFFFFFFF FFFFFFFF FFFFFFFF FFFFFFFF FFFFFFFF FFFFFFFF FFFFFFFE FFFFFC2F" or p = 2^256 −2^32 −2^9 −2^8 −2^7 −2^6 −2^4 −1 The curve E: y^2 = x^3 +ax + b over bbb"F"_p is defined by: a = tt"00000000 00000000 00000000 00000000" b = tt"00000000 00000000 00000000 00000007" The base point G in compressed form is: G = tt"02 79BE667E F9DCBBAC 59F2815B 16F81798" and in uncompressed form is: G = tt"04 79BE667E F9DCBBAC 59F2815B 16F81798 483ADA77 A6855419 9C47D08F FB10D4B8 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 55A06295 CE870B07 029BFCDB 2DCE28D9 55A06295 CE870B07 029BFCDB 2DCE28D9 26A3C465 5DA4FBFC 0E1108A8 FD17B448" Finally the order n of G and the cofactor are: n = tt"FFFFFFFF FFFFFFFF FFFFFFFF FFFFFFFE BAAEDCE6 AF48A03B BFD25E8C D0364141" h= tt"01" • This curve can be used with some other signature algorithms, for example it is compatible with the Schnorr scheme. • Once a private key k on the curve has been generated a public key K can be generated by multiplying by the generator point, G. Therefore to calculate the public key P, we effectively want to find the multiple kG of the generator point G, which is the same as adding G to itself, k times in a row. In elliptic curves, adding a point to itself is the equivalent of drawing a tangent line on the point and finding where it intersects the curve again, then reflecting that point on the x-axis. ## Malleability⌗ ### Malleability uses⌗ Certain malleability can be useful in bitcoin because it allows us to create more complex types of transactions such as multi-sig and lightning channels. These types of transactions generally rely on malleability so that signatories can manipulate inputs and/or outputs independently. Therefore it is important to consider that "transaction malleability" in bitcoin is not necessarily a negative concept. ### Types of malleability⌗ In 2014 at least 7 forms of transaction malleability were known, as detailed by the motivation for BIP 62. The 5th of these relating to the ESDSA signature malleability under discussion here: Inherent ECDSA signature malleability ECDSA signatures themselves are already malleable: taking the negative of the number S inside (modulo the curve order) does not invalidate it. We would distinguish this type of malleability from "script malleability" — modifications to input scripts in transaction messages — and "input/ouput malleability" — modifications to the list of inputs and outputs in transaction messages. ### ECDSA malleability⌗ For every ECDSA signature (r,s), the signature (r,s-N), which is effectively r modulo the curve order N, is a valid signature of the same message. Note that the new signature has the same size as the original, as opposite as the malleability of padding, source. Given a signature (r,s) it’s possible to calculate the complementary signature without knowing the ECDSA private keys. The complementary signature has a different hash, so using the complementary signature will result in a new txid. In bitcoin terms, this means that an attacker can change a txid by broadcasting a variation of the transaction that uses the complementary ECDSA signature. This is because the txid calculation includes the ECDSA signatures of the transaction. As the only use case for this "third party signature malleability" is for a would-be attacker to obscure a legitimate transaction in the mempool/UTXO set, we would like to remove this source of malleability. With ECDSA signature malleability it is possible for an attacker to malleate a transaction so that it is syntactically different, but semantically identical. That is to say, they cannot change the meaning of the transaction; which inputs are spending to which outputs. #### ECDSA malleability fix⌗ The fix for this is to enforce a canonical signature representation. The Bitcoin core developers decided to use the following scheme: • Both signature values are calculated, but only the signature with the smaller (or "lower") "S-value" is considered valid. That is, the correct representation is the form with the smaller unsigned integer representation. Bitcoin Core added a mechanism to make the signing code always produce signatures with "even S" in PR 2131 in August 2013. Later in October 2015 a mechanism was added to enforce low S-values as part of transaction standardness with PR #6769. Validation of the rule is done with the transaction script standardness flag `SCRIPT_VERIFY_LOW_S`, which all recent Bitcoin implementations now use. This prevents non-"low S" transactions from being relayed or entering the mempool, but they can still technically be added by a miner to a block. From Greg Maxwell in the commit message for PR 6769: If widely deployed this change would eliminate the last remaining known vector for nuisance malleability on boring SIGHASH_ALL p2pkh transactions. On the down-side it will block most transactions made by sufficiently out of date software. This does not replace the need for BIP62 or similar, as miners can still cooperate to break transactions. Nor does it replace the need for wallet software to handle malleability sanely[1]. This only eliminates the cheap and irritating DOS attack. — Greg Maxwell The ECDSA signing flaw was originally supposed to be fixed by BIP62, which was later withdrawn. ### SegWit⌗ SegWit also helps address the issue; when we think of a transaction, we really just care about the inputs, outputs, and payment amounts. ECDSA signatures are essential to the Bitcoin security model, but don’t actually affect these transaction details. Segwit transactions continue to include a legacy `txid` as described above, but also include a new `wtxid`. In a SegWit transaction the `txid` does not include the ECDSA signature data however the `wtxid` does include the signature data. This means that: • For a SegWit transaction the `txid` is not vulnerable to 3rd party ECDSA malleability. • For a SegWit transaction the `wtxid` is vulnerable to to 3rd party ECDSA malleability. Tip Note that even with SegWit active on the network, non-SegWit transactions still have meallable txids (i.e. they remain unchanged). ## Malleability with the Schnorr algorithm⌗ The motivation of BIP 340 states that: Non-malleability: The SUF-CMA security of Schnorr signatures implies that they are non-malleable. On the other hand, ECDSA signatures are inherently malleable[1]; a third party without access to the secret key can alter an existing valid signature for a given public key and message into another signature that is valid for the same key and message. This issue is discussed in BIP62 and BIP146. ## Questions⌗ 1. Are "low S" and "even S" equivalent? In Jonas Nick’s blog post Reducing Bitcoin Transaction Sizes with x-only Pubkeys he describes how "x only" pubkeys is planned for the Schnorr scheme. Based on this post, and the definition of `LOW_S` in BIP 146: We require that the S value inside ECDSA signatures is at most the curve order divided by 2 (essentially restricting this value to its lower half range). Every signature passed to `OP_CHECKSIG`, `OP_CHECKSIGVERIFY`, `OP_CHECKMULTISIG`, or `OP_CHECKMULTISIGVERIFY`, to which ECDSA verification is applied, MUST use a S value between tt"0x1" and tt"0x7FFFFFFF FFFFFFFF FFFFFFFF FFFFFFFF 5D576E73 57A4501D DFE92F46 681B20A0" (inclusive) with strict DER encoding (see BIP66). If a signature passing to ECDSA verification does not pass the Low S value check and is not an empty byte array, the entire script evaluates to false immediately. A high S value in signature could be trivially replaced by S' = tt"0xFFFFFFFF FFFFFFFF FFFFFFFF FFFFFFFE BAAEDCE6 AF48A03B BFD25E8C D0364141" - S.
2023-03-26 12:16:29
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.45813417434692383, "perplexity": 4264.948778031999}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296945472.93/warc/CC-MAIN-20230326111045-20230326141045-00368.warc.gz"}
https://www.dml.cz/handle/10338.dmlcz/127463?show=full
# Article Title: On oscillation and asymptotic property of a class of third order differential equations (English) Author: Parhi, N. Author: Pardi, Seshadev Language: English Journal: Czechoslovak Mathematical Journal ISSN: 0011-4642 (print) ISSN: 1572-9141 (online) Volume: 49 Issue: 1 Year: 1999 Pages: 21-33 Summary lang: English . Category: math . Summary: In this paper, oscillation and asymptotic behaviour of solutions of $y^{\prime \prime \prime } + a(t)y^{\prime \prime }+b(t)y^{\prime } + c(t)y=0$ have been studied under suitable assumptions on the coefficient functions $a,b,c\in C([\sigma ,\infty ),R)$, $\sigma \in R$, such that $a(t)\ge 0$, $b(t) \le 0$ and $c(t) < 0$. (English) MSC: 34C10 idZBL: Zbl 0955.34023 idMR: MR1676698 . Date available: 2009-09-24T10:19:25Z Last updated: 2020-07-03 Stable URL: http://hdl.handle.net/10338.dmlcz/127463 . Reference: [1] S. Ahmad, A.C. Lazer: On the oscillatory behaviour of a class of linear third order differential equations.J. Math. Anal. Appl. 28 (1970), 681–689, MR 40 # 1646. MR 0248394 Reference: [2] M. Gera: On the behaviour of solutions of the differential equation $x^{\prime \prime \prime }+a(t) x^{\prime \prime }+b(t) x^{\prime }+c(t) x =0$.Habilitation Thesis, Faculty of Mathematics and Physics, Comenius University, Bratislava. (Slovak) Reference: [3] M. Greguš: Third Order Linear Differential Equations.D. Reidel Pub. Co., Boston, 1987. MR 0882545 Reference: [4] M. Hanan: Oscillation criteria for third-order linear differential equations.Pacific J. Math. 11 (1961), 919–944, MR 26 # 2695. Zbl 0104.30901, MR 0145160 Reference: [5] G.D. Jones: Properties of solutions of a class of third order differential equations.J. Math. Anal. Appl. 48 (1974), 165–169. Zbl 0289.34046, MR 0352608, 10.1016/0022-247X(74)90224-8 Reference: [6] N. Parhi, S. Parhi: Qualitative behaviour of solutions of forced nonlinear third order differential equations.Rivista di Matematica della Universita di Parma 13 (1987), 201–210. MR 0977675 Reference: [7] N. Parhi, P. Das: On asymptotic property of solutions of linear homogeneous third order differential equations.Bollettino U.M.I 7-B (1993), 775–786. MR 1255647 Reference: [8] N. Parhi, P. Das: On the oscillation of a class of linear homogeneous third order differential equations.To appear in Archivum Mathematicum. MR 1679638 . ## Files Files Size Format View CzechMathJ_49-1999-1_3.pdf 342.9Kb application/pdf View/Open Partner of
2021-05-08 16:28:22
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2846430540084839, "perplexity": 3456.2641797760593}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243988882.94/warc/CC-MAIN-20210508151721-20210508181721-00246.warc.gz"}
http://gmatclub.com/forum/if-denotes-a-mathematical-operation-does-x-y-y-x-for-all-x-92157.html?fl=similar
Find all School-related info fast with the new School-Specific MBA Forum It is currently 03 May 2016, 22:00 ### GMAT Club Daily Prep #### Thank you for using the timer - this advanced tool can estimate your performance and suggest more practice questions. We have subscribed you to Daily Prep Questions via email. Customized for You we will pick new questions that match your level based on your Timer History Track every week, we’ll send you an estimated GMAT score based on your performance Practice Pays we will pick new questions that match your level based on your Timer History # Events & Promotions ###### Events & Promotions in June Open Detailed Calendar # If denotes a mathematical operation, does x y=y x for all x Author Message TAGS: ### Hide Tags Senior Manager Joined: 16 Jul 2009 Posts: 261 Followers: 4 Kudos [?]: 205 [0], given: 3 If denotes a mathematical operation, does x y=y x for all x [#permalink] ### Show Tags 01 Apr 2010, 12:33 00:00 Difficulty: (N/A) Question Stats: 100% (01:33) correct 0% (00:00) wrong based on 5 sessions ### HideShow timer Statictics If ☉ denotes a mathematical operation, does x☉y=y☉x for all x and y? (1) For all x and y, x☉y = $$2(x^2 + y^2)$$. (2) For all y, 0☉y = $$2 y^2$$ Manager Joined: 26 May 2005 Posts: 209 Followers: 2 Kudos [?]: 102 [0], given: 1 ### Show Tags 01 Apr 2010, 12:50 abhi758 wrote: If ☉ denotes a mathematical operation, does x☉y=y☉x for all x and y? (1) For all x and y, x☉y = $$2(x^2 + y^2)$$. (2) For all y, 0☉y = $$2 y^2$$ st 1) xOy = 2(x^2+y^2) yOx=2(y^2+x^2) Yes. Sufficient st 2) 0Oy = 2 * y^2 -- this expression varies with y yO0 = 2 * 0 = 0 .. this expression is always 0 but both expression will be 0 if y=0 and different if y!=0 not sufficient A Senior Manager Joined: 19 Nov 2009 Posts: 326 Followers: 5 Kudos [?]: 81 [0], given: 44 ### Show Tags 02 Apr 2010, 15:17 Even I'll go with A ! What's the OA ? _________________ "Success is going from failure to failure without a loss of enthusiam." - Winston Churchill As vs Like - Check this link : http://www.grammar-quizzes.com/like-as.html. Manager Joined: 26 Feb 2010 Posts: 79 Location: Argentina Followers: 4 Kudos [?]: 8 [0], given: 1 ### Show Tags 08 Apr 2010, 13:09 A With B we don't know what happen with the other numbers Re: Mathematical Symbols   [#permalink] 08 Apr 2010, 13:09 Similar topics Replies Last post Similar Topics: Let ∇ denote a mathematical operation. Is it true that x ∇ y 3 25 Apr 2012, 22:35 4 If x and y are both integers, which is larger, x^x or y^y? 7 18 Feb 2012, 09:33 1 If x and y are both integers, which is larger, x^x or y^y? 6 26 Aug 2011, 16:09 If denotes a mathematical operation, does x y = y x for all 3 31 Jul 2011, 04:14 1 If x and y are both integers, which is larger, x^x or y^y? 7 23 Jul 2011, 02:25 Display posts from previous: Sort by
2016-05-04 05:00:35
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5682777166366577, "perplexity": 10420.82217558414}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-18/segments/1461860122501.26/warc/CC-MAIN-20160428161522-00023-ip-10-239-7-51.ec2.internal.warc.gz"}
https://math.stackexchange.com/questions/688427/hodge-star-operator-and-volume-form-on-arbitrary-manifold
# Hodge Star operator and volume form on arbitrary manifold I guess this is the question on definitions, however, I haven't managed to find a clear answer to this question: Suppose we have a manifold, there is metric tensor, so we can use it to calculate Hedge Star operator on differential forms. Let $\Omega$ be the volume form. Is it true, that $*\Omega = 1$ ? $\Omega \wedge *\Omega = \left(\Omega, \Omega\right)\Omega = \Omega$ $*\Omega$ is scalar, so it seems that my guess was right. Was it? • It is important to add the assumption that the manifold is oriented in order to define "the" volume form $\Omega$ associated with the metric. Switching the orientation amounts to changing $\Omega$ to $-\Omega$. Then you can just follow the definition of the Hodge operator, as you did correctly in your answer. – Gil Bor Feb 24 '14 at 16:57
2021-08-02 10:29:43
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.93434739112854, "perplexity": 177.0627665784604}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046154310.16/warc/CC-MAIN-20210802075003-20210802105003-00618.warc.gz"}
https://fewlinesofcode.com/swift,/ios,/gesture_recognizer,/$1/2019/05/13/dollar-one-unistroke-recognizer.html
The $1 Unistroke Recognizer is a 2-D single-stroke recognizer designed for rapid prototyping of gesture-based user interfaces. In machine learning terms,$1 is an instance-based nearest-neighbor classifier with a 2-D Euclidean distance function, i.e., a geometric template matcher. Original paper is here: http://depts.washington.edu/madlab/proj/dollar/index.html. Pseudocode is here: http://depts.washington.edu/madlab/proj/dollar/dollar.pdf I wrote Swift 5 version. I’ve also made some effort and created simple algorithm to recognize primitives. I assume that nobody reads this blog and nobody is interested in it. If it’s not true - let me know on Twitter @fewlinesofcode. This is not a nasty promo campaign. No need to share, like or retweet. The only thing I need is to be sure that somebody is reading. 1 person will be enough :) You can check it at my github or the gist below:
2020-04-06 12:50:22
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.20928986370563507, "perplexity": 1507.811277083879}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585371624083.66/warc/CC-MAIN-20200406102322-20200406132822-00378.warc.gz"}
https://math.stackexchange.com/questions/412544/astonishing-and-innocent-results-with-the-axiom-of-choice?noredirect=1
# Astonishing and innocent results with the axiom of choice The product of nonempty sets is nonempty. I am fascinated that such a simple and seemingly intuitive statement can lead to rather astonishing results such as the Banach-Tarski paradox or the solution to this riddle. I am also intrigued by the seemingly innocent results that rely on AC (the existence of algebraic closures, any ideal is contained in a maximal ideal) and I wonder if I am missing some intuition to see how truly remarkable they are. My question: What are other examples of seemingly magical results whose proofs rely explicitly on AC, and what are examples of seemingly innocent results that rely on AC that upon further examination turn out to be fairly remarkable themselves? • See, for example, mathoverflow.net/questions/20882/… . – Qiaochu Yuan Jun 6 '13 at 1:13 • @QiaochuYuan: Thanks Qiaochu. I did a search on SE and didn't quite find what I was looking for, but your link seems to answer half of my question. – Jared Jun 6 '13 at 1:14 • – Metin Y. Jun 6 '13 at 1:15 • One of my earlier answers seems pertinent here: «For finite products it is certainly obvious. For countably infinite products, it is less obvious, and for uncountable products, it is not obvious at all; it becomes a highly abstract statement about the intended properties of certain highly abstract objects in a highly abstract theory.» – MJD Jun 6 '13 at 2:23 From the top of my head: 1. Continuity of real functions at a point $x$ is equivalent to sequential continuity. 2. Every infinite set has a countable infinite subset. 3. The countable union of countable sets is countable. 4. There exists an injection from $\aleph_1$ into $\Bbb R$. 5. There are non-Borel sets. 6. Every non-empty set can be endowed with a structure of a group (and so, abelian group, ring, field, and so on follow). 7. Every free abelian group is projective. 8. Every divisible abelian group is injective. 9. There are "enough" projective (or injective) abelian groups. (The last two are full-on equivalents to choice, this one is weaker.) 10. Every tree of height $\omega$ where all the levels are finite has a branch. 11. Every field has an algebraic closure. 12. If a field has an algebraic closure, then it has a unique algebraic closure (up to isomorphism). 13. Subgroup of a free group is free. 14. Subgroup of a free abelian group is free abelian. 15. Every vector space has a basis. The list goes on forever. I may add a few more later. • In #10, is there something missing in "has a branch"? – John Bentin Jun 6 '13 at 7:18 • Well Asaf, at least two rather important ones imho: Tychonov's Theorem and "every vector space over some field has a basis", each of them equivalent to AC. – DonAntonio Jun 6 '13 at 9:36 • For #10, I was thinking of König's lemma in graph theory, which refers to an infinite branch. For example, take the tree which is $\Bbb N,$ with the usual order, plus one extra vertex $1'$ just above the root $0$. Then there is a branch $\{0,1'\},$ as assured by result #10. But the result doesn't point out that there is an infinite branch; so it seems to be a bit weaker than it could be. Also, if "branch" is not qualified by "infinite", then the result invites the generalization "Every tree has a branch". – John Bentin Jun 6 '13 at 12:00 • @John: Recall that a branch is a maximal chain. We're talking essentially on a tree which has no branch, not even a finite one. Of course if the underlying set is $\Bbb N$ then this is false, but it doesn't have to be... – Asaf Karagila Jun 6 '13 at 12:09 • @DonAntonio: I thought about Tychonoff, but it's not as innocent as it seems... I'm not sure about Hamel bases either. – Asaf Karagila Jun 6 '13 at 12:10 This isn't an answer to the question in the last paragraph, but here's one way to repair your intuition about applications of the axiom of choice. One way to think about why choice is not intuitive is to think in terms of computational resources (see, for example, this blog post by Terence Tao). For any kind of mathematical construction you might want to do, think about what kind of computational resources you'd need to actually carry it out. Some sets have the property that it takes a lot of computational resources to write down an element of that set. For example, the set of solutions to a Diophantine equation may be non-empty, but it may still take a long time to actually write one down. Whenever you have a bounded amount of computational resources, the axiom of choice is going to be false because you'll run out of computational resources when you try to write down an element of a sufficiently large product of non-empty sets. For example, suppose you have only a finite amount of computational resources. Then the axiom of countable choice will be false: if I take countably many Diophantine equations, none of which you currently know solutions to, and ask you to write down a solution to each, you may not be able to do it even if I guarantee to you that solutions exist because it may take you an infinite amount of computational resources, which you don't have. (Of course, you may be able to cleverly solve them all at once, but I may be able to stump you with an even trickier set of Diophantine equations. Matiyasevich's theorem shows that I can just ask you to write down the solutions to every Diophantine equation.) You can think about algebraic closures and maximal ideals similarly. When you actually try to construct the algebraic closure of a field, you need to repeatedly find irreducible polynomials so you can adjoin their roots and get a bigger algebraic extension. It takes computational resources to find irreducible polynomials, depending on the nature of the field you started with, and if the field you started with is sufficiently complicated it may take more computational resources than you have. Similarly, when you actually try to write down a maximal ideal containing an ideal, you need to repeatedly find elements not contained in the ideal but that do not, together with the ideal, generate the unit ideal. It takes computational resources to do this, etc.
2019-04-22 00:36:07
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7835237979888916, "perplexity": 231.6523842904002}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578532948.2/warc/CC-MAIN-20190421235818-20190422021818-00455.warc.gz"}
http://physics.aps.org/synopsis-for/10.1103/PhysRevD.94.103506
# Synopsis: Seeing Dark Matter Through the Clouds A correlation between the cosmic microwave background and hydrogen absorption lines may reveal a connection between dark matter and intergalactic gas clouds. As light travels to Earth from distant quasars, it’s absorbed at certain frequencies by hydrogen in intergalactic clouds, which carve a “forest” of absorption lines in the light spectrum. Measuring this so-called Lyman-alpha forest can reveal properties of the clouds, helping researchers test cosmological models. Now, Cyrille Doux at Paris Diderot University and co-workers have shown they can obtain new astrophysical information by correlating Lyman-alpha lines with lensing effects in the cosmic microwave background (CMB). This correlation may shed light on the relationship between gas clouds and dark matter. Both the Lyman-alpha spectrum and CMB lensing are sensitive to the density of matter along the line of sight of the observer. While Lyman-alpha signals can be related to the density of hydrogen atoms, lensing, which characterizes distortions of the CMB caused by gravity, is an excellent probe of dark matter, the most abundant type of matter in the Universe. Using CMB data from the European Space Agency’s Planck mission, and quasar spectra collected in the Sloan Digital Sky Survey, the researchers revealed that the two sets of data are correlated. Specifically, they found that in directions along which the density of dark matter is relatively high, the fluctuations in the Lyman-alpha signals are also larger than average. The detection of this correlation has two key implications. First, it will help researchers characterize the effects of astrophysical processes on fluctuations in the Lyman-alpha forest. Second, it may expose the influence of dark matter on the distribution of visible matter in intergalactic space—an important input for cosmological models. This research was published in Physical Review D. –Matteo Rini Matteo Rini is the Deputy Editor of Physics. More Features » ### Announcements More Announcements » Graphene ## Next Synopsis Quantum Information ## Related Articles Cosmology ### Viewpoint: Dark Matter Still at Large No dark matter particles have been observed by two of the world’s most sensitive direct-detection experiments, casting doubt on a favored dark matter model. Read More » Statistical Physics ### Viewpoint: New Clues as to Why Boyajian’s Star is Dimming A statistical analysis links a star’s mysterious brightness fluctuations to internal nonequilibrium phenomena, rather than structures orbiting around the star. Read More » Astrophysics ### Viewpoint: Searching for Baby Planets in a Star’s Dusty Rings Images of gaps in the dust and gas around a young star provide the best evidence to date that these gaps host newly formed planets. Read More »
2017-01-20 18:09:59
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 2, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2589736580848694, "perplexity": 1788.4962608274968}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280850.30/warc/CC-MAIN-20170116095120-00072-ip-10-171-10-70.ec2.internal.warc.gz"}
http://dlldesigner.com/mean-square/normalised-mean-square-error-wiki.php
Home > Mean Square > Normalised Mean Square Error Wiki # Normalised Mean Square Error Wiki ## Contents Like the variance, MSE has the same units of measurement as the square of the quantity being estimated. The term is always between 0 and 1, since r is between -1 and 1. In bioinformatics, the RMSD is the measure of the average distance between the atoms of superimposed proteins. Retrieved 4 February 2015. ^ "FAQ: What is the coefficient of variation?". http://dlldesigner.com/mean-square/normalised-mean-square-error.php When normalising by the mean value of the measurements, the term coefficient of variation of the RMSD, CV(RMSD) may be used to avoid ambiguity.[3] This is analogous to the coefficient of The confidence interval for the NMSE cannot be computed from a known distribution. Learn more MATLAB and Simulink resources for Arduino, LEGO, and Raspberry Pi Learn more Discover what MATLAB® can do for your career. error, and 95% to be within two r.m.s. ## Mean Square Error Formula International Journal of Forecasting. 8 (1): 69–80. The specific problem is: no source, and notation/definition problems regarding L. I need to calculate the RMSE between every point. This is an easily computable quantity for a particular sample (and hence is sample-dependent). • For a Gaussian distribution this is the best unbiased estimator (that is, it has the lowest MSE among all unbiased estimators), but not, say, for a uniform distribution. • See also Root mean square Average absolute deviation Mean signed deviation Mean squared deviation Squared deviations Errors and residuals in statistics References ^ Hyndman, Rob J. • It is an inverse measure of the explanatory power of g ^ , {\displaystyle {\widehat {g}},} and can be used in the process of cross-validation of an estimated model. • That case could be due to time and/or space shifting. • doi:10.1016/0169-2070(92)90008-w. ^ Anderson, M.P.; Woessner, W.W. (1992). • Predictor If Y ^ {\displaystyle {\hat Saved in parser cache with key enwiki:pcache:idhash:201816-0!*!0!!en!*!*!math=5 and timestamp 20161007125802 and revision id 741744824 9}} is a vector of n {\displaystyle n} predictions, and Y • New York: Springer. Tracker.Current is not initialized for RSS page Questions about convolving/deconvolving with a PSF Are evolutionary mutations spontaneous? The result for S n − 1 2 {\displaystyle S_{n-1}^{2}} follows easily from the χ n − 1 2 {\displaystyle \chi _{n-1}^{2}} variance that is 2 n − 2 {\displaystyle 2n-2} Estimator The MSE of an estimator θ ^ {\displaystyle {\hat {\theta }}} with respect to an unknown parameter θ {\displaystyle \theta } is defined as MSE ⁡ ( θ ^ ) Mean Square Error Definition The r.m.s error is also equal to times the SD of y. Since an MSE is an expectation, it is not technically a random variable. square error is like (y(i) - x(i))^2. When something appears a certain way, but is also its opposite Is there any difference between "file" and "./file" paths? The RMSD represents the sample standard deviation of the differences between predicted values and observed values. These individual differences are called residuals when the calculations are performed over the data sample that was used for estimation, and are called prediction errors when computed out-of-sample. Root Mean Square Error Excel They both look quite nonsensical to me –leonbloy Oct 24 '14 at 13:48 add a comment| 1 Answer 1 active oldest votes up vote 1 down vote That sounds right to In GIS, the RMSD is one measure used to assess the accuracy of spatial analysis and remote sensing. Scott Armstrong & Fred Collopy (1992). "Error Measures For Generalizing About Forecasting Methods: Empirical Comparisons" (PDF). ## Root Mean Square Error Formula The root-mean-square deviation (RMSD) or root-mean-square error (RMSE) is a frequently used measure of the differences between values (sample and population values) predicted by a model or an estimator and the Wikipedia® is a registered trademark of the Wikimedia Foundation, Inc., a non-profit organization. Mean Square Error Formula By using this site, you agree to the Terms of Use and Privacy Policy. Root Mean Square Error Interpretation Related Content Join the 15-year community celebration. The bootstrap technique has to be used. http://dlldesigner.com/mean-square/normalised-mean-square-error-matlab.php Join the conversation Next: Regression Line Up: Regression Previous: Regression Effect and Regression   Index RMS Error The regression line predicts the average y value associated with a given x value. Criticism The use of mean squared error without question has been criticized by the decision theorist James Berger. Browse other questions tagged signal-processing or ask your own question. Root Mean Square Error Example International Journal of Forecasting. 8 (1): 69–80. Wikipedia® is a registered trademark of the Wikimedia Foundation, Inc., a non-profit organization. To do this, we use the root-mean-square error (r.m.s. weblink An Error Occurred Unable to complete the action because of changes made to the page. Koehler, Anne B.; Koehler (2006). "Another look at measures of forecast accuracy". Mean Square Error Calculator Applied Groundwater Modeling: Simulation of Flow and Advective Transport (2nd ed.). Forgot your Username / Password? ## The root-mean-square deviation (RMSD) or root-mean-square error (RMSE) is a frequently used measure of the differences between values (sample and population values) predicted by a model or an estimator and the Submissions for the Netflix Prize were judged using the RMSD from the test dataset's undisclosed "true" values. Squaring the residuals, averaging the squares, and taking the square root gives us the r.m.s error. Note that is also necessary to get a measure of the spread of the y values around that average. Root Mean Square Error Matlab Retrieved 4 February 2015. ^ J. Definition of an MSE differs according to whether one is describing an estimator or a predictor. The minimum excess kurtosis is γ 2 = − 2 {\displaystyle \gamma _{2}=-2} ,[a] which is achieved by a Bernoulli distribution with p=1/2 (a coin flip), and the MSE is minimized What's the source for the Point Buy alternative ability score rules? http://dlldesigner.com/mean-square/normalised-mean-square-error-mmse.php In statistics, the mean squared error (MSE) or mean squared deviation (MSD) of an estimator (of a procedure for estimating an unobserved quantity) measures the average of the squares of the It tells us how much smaller the r.m.s error will be than the SD. C V ( R M S D ) = R M S D y ¯ {\displaystyle \mathrm {CV(RMSD)} ={\frac {\mathrm {RMSD} }{\bar {y}}}} Applications In meteorology, to see how effectively a H., Principles and Procedures of Statistics with Special Reference to the Biological Sciences., McGraw Hill, 1960, page 288. ^ Mood, A.; Graybill, F.; Boes, D. (1974).
2018-02-24 13:56:24
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8265226483345032, "perplexity": 1947.6755013570812}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891815812.83/warc/CC-MAIN-20180224132508-20180224152508-00252.warc.gz"}
http://math.stackexchange.com/tags/power-series/new
# Tag Info ## New answers tagged power-series 0 Let $$S_m=\sum_{k=m}^\infty kr^k=\sum_{k=m-1}^\infty (k+1)r^{k+1}=mr^m+r\sum_{k=m}^\infty (k+1)r^k.$$ Then $$S_m-mr^m-rS_m=r\sum_{k=m}^\infty r^k=\frac{r^{m+1}}{1-r}$$ and $$S_m=\frac{r^m(m-(m-1)r)}{(1-r)^2}.$$ So $$\sum_{k=1}^\infty kr^k=S_1=\frac r{(1-r)^2}$$ and $$\sum_{k=1}^n kr^k=S_1-S_{n+1}=\frac{r-r^{n+1}(n+1-nr)}{(1-r)^2}.$$ 0 Here's an approach that requires the value of the geometric series $\sum_{n=0}^\infty x^n = \frac{1}{1-x}$ for $|x|<1$, and termwise differentiation of power series. To compute the value of \begin{align} S&= \sum_{n=1}^\infty n\left(\frac{1}{3}\right)^n, \end{align} define, for $|x|<1$ $$f(x) = \sum_{n=0}^\infty x^n = \frac{1}{1-x}$$ Then note ... 2 You need to solve the second sum first. It runs this way: you learnt in high school the factorisation formula $$1-x^n=(1-x)(1+x+\dots+x^{n-1}),$$ which can be rewritten as $$\frac1{1-x}=1+x+\dots+x^{n-1}+\frac{x^n}{1-x}.$$ From this you deduce that the sum $\;\displaystyle\sum_{i=0}^n x^i$ has a limit (‘the series converges’) if and only if $\lvert x\... 3 Here is an approach that relies on the relationships (i)$k=\sum_{\ell=1}^k(1)$and (ii)$\sum_{k=\ell}^N r^k=\frac{r^{\ell}-r^{N+1}}{1-r}$for$|r|<1$. Then, with$r=3^{-1}we have \begin{align} \sum_{k=1}^N \frac{k}{3^k}&=\sum_{k=1}^N 3^{-k}\sum_{\ell =1}^k(1)\\\\ &=\sum_{\ell =1}^N \sum_{k=\ell}^N 3^{-k}\\\\ &=\sum_{\ell =1}^N \frac{3^... 2 You need to know off by heart\frac{1}{1-x}=1+x+x^2+x^3+\dots$$a formula that is constantly coming up in all areas of maths. You can easily prove it by comparing the sum s with the sum x\cdot s. Ideally you would also remember$$\frac{1}{(1-x)^2}=1+2x+3x^2+4x^3\dots$$which is also extremely useful. Since the tags suggest you have some calculus, the ... 0 Where does +c10^m come from?$$x^l={\overline{a\dots aba\dots a}}_{(10)}={\overline{a\dots a}}_{(10)}+(b-a)\cdot 10^m=a\frac{10^{n}-1}{9}+c\cdot 10^m$$Why b=1 if l \neq 3, and b\in \{1,8\} if l = 3? When a=0, we have x^l=b\cdot 10^m. Note here that 1^3=1,2^3=8,i^3\gt 10 for i\ge 3, and that 1^4=1, j^4\gt 10 for j\ge 2. 0 Which function of x, other than x +c, and Integral of (\cos^2 x+\sin^2 x) have derivative =1 ?. They are an infinity of equivalent forms : Any integral of function which is an identity to 1. For example : \int (\cosh^2 x-\sinh^2 x)dx Any function which is an identity to x. For example : \frac{x^2+x-1}{x+1}+\frac{1}{1+x} A more ... 0 The first half of the Fundamental Theorem of Calculus is that if F(x)=\int_0^x f(y)\;dy and f is continuous then F'=f. The second half is that if f is continuous and F'=f then G(x)=F(x)-\int_0^xf(y)\;dy is constant. The first half implies that G'=F'-f=0, so to prove the 2nd half we must then show that if G'=0 then G is constant: (1). ... 0 Any function in the form :$$f(x) = x+C$$Why? because it is the solution of the following differential equation:$$\dfrac{df}{dx}=1 \Rightarrow df = dx$$by integration we get:$$f(x) = x + C$$This is the only possible solution (see the uniqueness of solution for a differential equation) NOTE: this holds only for continuous functions. EDIT: due to the ... 1 An elegant method is to use the Expansion Theorem in Umbral Calculus. Below is a typical statement, from p. 18 of Roman's book The Umbral Calculus. 1$$e^{-1}+4e^{-9}+9e^{-25}+16e^{-49}\approx 0.36837308051278053458657911933771842$$should be good enough. The truncation error is$$\sum_{i=4}^\infty(i+1)^2e^{-(2i+1)^2}=\sum_{i=4}^\infty(i+1)^2e^{-4i^2-4i-1}<\sum_{i=4}^\infty(i+1)^2e^{-64-4i-1}<2\cdot10^{-34}$$as can be computed analytically. You can obtain this estimate from$$\sum_{i=n}^\infty ... 1 The Taylor expansion of\exp\sin x$around zero is$1+x+x^{2}/2+O(x^{4}). Therefore, the error is \begin{align*} \left|\exp\sin x-(1+x+x^{2}+x^{3})\right| & =\left|-x^{2}/2+O(x^{3})\right|\\ & \leq|x^{2}|/2+|O(x^{3})|\\ & \approx|x^{2}|/2 & \text{for }|x|\text{ small}. \end{align*} 0 As you noted the Taylor series forf(x) = e^x$is$f(x) \approx 1 + x + \frac{x^2}{2} + \frac{x^3}{6}$. Now plug in$\sin x$and use the fact that for values close to$x$we have$\sin x \approx x$. So we have: $$e^{\sin x} \approx 1 + x + \frac{x^2}{2} + \frac{x^3}{6}$$ Now the error of the first approximation is around$\frac{x^2}{2} + \frac{5x^3}{6}$. ... 2 $$\arctan(x)=\sum_{n\geq 0}\frac{(-1)^n}{2n+1} x^{2n+1}\tag{1}$$ $$\arctan(x)-x=\sum_{n\geq 1}\frac{(-1)^n}{2n+1} x^{2n+1}\tag{2}$$ $$\frac{\arctan(x)-x}{x^3}=\sum_{n\geq 1}\frac{(-1)^n}{2n+1} x^{2n-2}=\color{red}{\sum_{n\geq 0}\frac{(-1)^{n+1}}{2n+3}x^{2n}}\tag{3}$$ The radius of convergence (i.e.$1) is left unchanged by our manipulations. 1 HINT: Note that we have \begin{align} \frac{d}{dx}\arctan(x)&=\frac{1}{1+x^2}\\\\ &=\sum_{n=0}^\infty (-1)^n x^{2n} \tag 1 \end{align} for|x|<1$. Then, integrate term by term to arrive at a series for$\arctan(x). SPOILER ALEERT: Scroll over the highlighted area to reveal the solution 0 You can use this equation to get all kinds of power series expansions for Ln(x+y+1). The equation adapts itself because you may chose to hold x or y constant and integrate it. Study the equation and play with it. It yields all kinds of neat results, Including some very important results. Hint let x=X^2. It provides insight into series. $$\sum_{n=0}^\... 3 Where did you find that equation? It's quite different from what I got, which I shall explain now. First, a common power series is$$ \frac{1}{1-x} = \sum_{i\geq0} x^{i}. $$Using the substitution x=-t^{n},$$ \frac{1}{1+t^{n}} = \sum_{i\geq0} (-t^{n})^{i} = \sum_{i\geq0} (-1)^{i}t^{in}. $$Then,$$ \int_{0}^{x} \frac{1}{1+t^{n}} dt = \int_{0}^{x} \... 0 To compute this series, let us look at $$\sum_{k=1}^{\infty} \frac{k^{k-1}}{k!} (- z e^z)^k = \sum_{k=1}^{\infty} \frac{(-1)^k k^{k-1}}{k!} z^k e^{kz}$$ as a formal power series. This series can be rewritten in the form $$\sum_{m=1}^{\infty} \alpha_m z^m.$$ where \begin{align*} \alpha_m &= \sum_{k=1}^m \frac{(-1)^k \, k^{k-1}}{k!} (\text{coef. by }... 2 Because theB(w_i,r_{w_i})$'s cover$frB(0,R)$, the$B(w_i,r_{w_i})$'s and$B(0,R)$cover the clausure of$B(0,R)$. This is a open covering so it also covers$B(0,R+\varepsilon)$for some$\varepsilon>0$. Now you have by definition that the radius of convergence of$F$is$S>R$. This leads to a contradiction with the Cauchy-Hadamard formula because ... 0 You reach this equation: $$\sum_{n=0}^\infty(b-n)a_nx^n=0$$ So$(b-n)a_n=0$for every n. Hence, if b is not a natural number, then all$a_n$are 0. And if$b$is an integer then$a_b$can be chosen arbitrarly so the solution would be: $$y=\left\{\begin{array}{cc}a_bx^b & b\in\mathbb{N}\\0&\text{otherwise} \end{array}\right.$$ 2 $$\arctan(x) = \sum_{n\geq 0}\frac{(-1)^n}{(2n+1)}x^{2n+1}$$ for any$x\in(-1,1)$, so: $$\frac{1}{10}-\frac{1}{3000}<\arctan\left(\frac{1}{10}\right) < \frac{1}{10}-\frac{1}{3000}+\frac{1}{500000}$$ and we may take$k$as: $$k = \left\lfloor \frac{100}{3}\cdot 299\right\rfloor = \color{red}{996}.$$ 1 You're on the right track, but a few things need to be mentioned. First, if the series involves complex numbers then you're actually looking for the disk of convergence, not the interval. Second, your notation is very inappropriate. When you apply the root test, you're applying it to$a_n$only. Including the summation symbol is incorrect. Third, your ... 1 This is meant to elucidate the A given By Arctic Char because of the long chat with the OP that followed it.$f_n\to f$uniformly on a domain$D$iff $$\lim_{n\to \infty}\|f-f_n\|=0,$$ where $$\|f-f_n\|=\sup_{x\in D}|f(x)-f_n(x)|.$$ Suppose$f_n\to f$uniformly on$D$then also$\lim_{n\to \infty}\|f_{n+1}-f\|=0.$Then we have $$\|f_{n+1}-f_n\|\leq \|f-... 2 Being a finite sum, this one converges for every x \in \Bbb C, so its radius of convergence is \infty. 4 This is basically just a guess, but I would expect the limit to be \infty. Since n \to \infty, \frac 1 n \to 0, so we can ignore that term, giving us:$$\lim_{n \to \infty} \frac{n^n}{n^n} t^n=\lim_{n \to \infty} t^n=\infty$$3 Hint Write$$\sum_{n=0}^\infty\frac {(-1)^n x^n}{(y+1)^{n+1}}=\sum_{n=0}^\infty\frac {(-x)^n }{(y+1)^{n+1}}=\frac 1{y+1}\sum_{n=0}^\infty\left(\frac {-x}{y+1}\right)^n=\frac 1{y+1}\sum_{n=0}^\infty a^n$$where a=-\frac {x}{y+1}. I am sure that you can take it from here. 5 Let's make it look nice. ∑[(x^(2n+1))/((x^2+1)^(n+1))]*[(2n!!)/(2n+1)!!] You say \arctan(x) =\sum\dfrac{x^{2n+1}}{(x^2+1)^{n+1}}\dfrac{(2n)!!}{(2n+1)!!} Since (2n+1)!! =\prod_{k=1}^n (2k+1) =\dfrac{\prod_{k=1}^n (2k)(2k+1)}{\prod_{k=1}^n (2k)} =\dfrac{(2n+1)!}{2^nn!} and (2n)!!=2^nn! , this becomes \arctan(x) =\sum\dfrac{x^{2n+1}}{(x^2+1)^{n+1}}... 3 This is the unfolding by the generalized binomial theorem of$$ \Bigl(1-\frac{x}{1+x}\Bigr)^{-p-1} = (1+x)^{p+1}$$2 The generalized binomial theorem states that (1+z)^a =\sum_{n=0}^{\infty} \binom{a}{n} z^n . Therefore \sum_{n=0}^{\infty} \binom{-p-1}{n} (\frac{-x}{1+x})^n =(1-\frac{x}{1+x})^{-p-1} =(\frac{1}{1+x})^{-p-1} =(1+x)^{p+1} . I am not worrying about convergence. 1 Note that$$(x+2)^{1/2}=(4+(x-2))^{1/2}=2\left(1+\frac{x-2}{4}\right)^{1/2}.$$Now use what you know about the expansion of (1+y)^{1/2}. 0$$ f_n(x):=n^2\sin{x\over n^4}\;\forall\; n $$Claim: \;0<M<\infty\implies\sum_{n=1}^{\infty}f_n converges uniformly in [-M,M]. Proof: Suppose x\in[-M,M].$$ |\sum_{n=p}^q f_n(x)|\le\sum_{n=p}^q|f_n(x)|\le\sum_{n=p}^q n^2{|x|\over n^4}\le\sum_{n=p}^q{M\over n^2}=M\sum_{n=p}^q {1\over n^2}$$So given \varepsilon>0 we can find k(\... 0 Using the Taylor series directly is not a very motivated or clean way of showing the additivity of \log, but here's one approach. It's clearly sufficient to prove that \exp(x + y) = (\exp x)(\exp y). The function y = \exp(x) is the unique solution of the differential equation y' = y with y(0) = 1. (The uniqueness result here is standard and easy ... 5$$\sum_{n\geq 1}\frac{(2n-1)!!}{(2n)!!}x^n = \sum_{n\geq 1}\frac{(2n-1)!}{4^n n!^2}x^n \color{blue}{=} \sum_{n\geq 1}\binom{-1/2}{n}(-1)^n x^n = \color{red}{\frac{1}{\sqrt{1-x}}-1}.$$Details of \color{blue}{=}:$$\binom{-1/2}{n}(-1)^n = \frac{(-1/2)(-3/2)\cdots(-(2n-1)/2)}{n!}(-1)^n = \frac{(2n-1)!!}{2^n n!}=\frac{(2n-1)!!}{(2n)!!}.$$As an alternative ... 4 If it converges uniformly, then$$\sup_{x\in \mathbb R} \left|n^2 \sin(x/n^4)\right|\to 0$$as n\to \infty. This is clearly false. 0 Any counterexample can do,$$-\log(1-0)\ne\log(1+e^0),0\ne\log(2).$$2 The relation surely doesn't hold for every x: for instance, if x=-1 you have$$ -\log(1-(-1))=-\log2<0, \qquad \log(1+e^{-1})>0 $$It may be interesting looking for what values of x equality holds. First we have to assume 1-x>0, that is, x<1, for the left-hand side to exist. Once ensured this, we can write the equality as$$ \log\frac{... 1 It is clearly false because$\log(1+\exp(x))$exists for all real$x$while$-\log(1-x)$does not. 0 For another solution approach using beta function, see my detailed solution at Prove$\sum_{n=0}^{\infty}{2^n(n^2-n\pi+1)(n^2+n-1)\over (2n+1)(2n+3){2n\choose n}}=1$. The same steps can be followed with little changes in the coefficients. Using Beta function, we can express $${2^n n[n(\pi^3+1)+\pi^2](n^2+n-1)\over (2n+1)(2n+3){2n\choose n}} = {n[n(\pi^3+1)+... 1 \textbf{Claim}: Suppose f is analytic in a domain \Omega. Then, f has a primitive in \Omega iff \int\limits_{\gamma}{}f=0 for every simple closed curve \gamma \in \Omega. In your case, every simple closed curve \gamma in \Omega=\{z:|z|>1\} is homotopic to either a contractible loop or a loop around 0, which we can assume to be the ... 3 You are right. \sin(z) is not only an analytic function, but an entire function:$$ \sin(z)=\sum_{n\geq 0}\frac{(-1)^n z^{2n+1}}{(2n+1)!} \tag{1}$$holds as an identity for any value of z\in\mathbb{C}. It follows that:$$ \sin(x^3 y^2) = \sum_{n\geq 0}\frac{(-1)^n x^{6n+3}y^{4n+2}}{(2n+1)!}\tag{2}$$holds as an identity for every value of (x,y)\in\... 4 For clarity put w=z-i. Then$$\frac{1}{1+z^2}=\frac{1}{w(w+2i)}=-\frac{i}{2w}\frac{1}{1-\frac{iw}{2}}$$Now use the familiar expansion$$\frac{1}{1-\frac{iw}{2}}=1+i\frac{w}{2}-\frac{w^2}{4}-i\frac{w^3}{8}+\dots$$to get$$-\frac{i}{2w}+\frac{1}{4}+i\frac{w}{8}-\frac{w^2}{16}-i\frac{w^3}{32}+\frac{w^4}{64}+i\frac{w^5}{128}-\dots$$1 Another approach to this problem uses Poor man's Lagrange Inversion, which is the Cauchy Residue Theorem. We have$$(27x-4)T(x)^3+3T(x)+1 = 0$$We thus obtain$$[z^n] T(z) = \frac{1}{2\pi i} \int_{|z|=\epsilon} \frac{1}{z^{n+1}} T(z) \; dz.$$Solve for z to get$$z = \frac{4T(z)^3-3T(z)-1}{27 T(z)^3}.$$Therefore the substitution w=T(z) yields$$... 1 $$\sum_{k=0}^\infty {3k \choose k} x^k$$ See OEIS sequence A005809 2 By the Lagrange inversion theorem, the solution of$w^3-w=-x$has the following Taylor series: $$w(x)=\sum_{k\geq 0}\binom{3k}{k}\frac{x^{2k+1}}{2k+1}$$ whose radius of convergence is$\frac{2}{3\sqrt{3}}$. If we set$y(x)=\sqrt{\frac{3}{4-27 x}}\,w(x)$, by Lagrange inversion: $$y(x)=\sum_{k\geq 0}\binom{3k}{k}\frac{3(-1)^k}{(2k+1)(4-27x)^{k+1}}\tag{1}... 2 I believe our definitions of the error function differ by a constant, but the following approach works anyway:$$\begin{eqnarray*}\sum_{k\geq 0}\frac{x^k}{k!!}&=&\sum_{n\geq 0}\frac{x^{2n}}{2^n n!}+\sum_{n\geq 0}\frac{x^{2n+1}}{(2n+1)!!}\\&=&e^{x^2/2}+\sum_{n\geq 0}\frac{2^n x^{2n+1}}{(2n+1)!}n!\\&=&e^{x^2/2}+\int_{0}^{+\infty}e^{-z}\... 1 If you don't want to leave the formal world, you still have a very simple alternative, namely to work in an extension of formal power series. A formal power series is a sequence of coefficients that is added and multiplied as if it were an (infinite-degree) polynomial in some indeterminate, say$X$. Let us denote this structure by$F^*[X]$, analogously to ... 1 You can do that without leaving the formal world but it is often harder. In general, the formula (or the definition) for the composition$f(g(x))$when$g(x) = \sum_{n=0}^{\infty} b_n x^n$and$f(x) = \sum_{n=1}^{\infty} a_n x^nis given by $$f(g(x)) = \sum_{n=1}^{\infty} a_n \left( \sum_{m=1}^{\infty} b_m x^m \right)^n = \sum_{l=0}^{\infty} c_l x^l, \\ ... 1 We have$$f(g(x)) = \sum_{k = 0}^\infty \left( \dfrac x {1 - x} \right)^k .Now, for k \ge 1, using \dfrac 1 {(1 - x)^k} = \sum\limits_{m = 0}^\infty \binom{m + k- 1}{m} x^m, we get \begin{align*} f(g(x)) & = 1 + \sum_{k=1}^\infty x^k \sum_m \binom{m + k - 1}{m} x^m\\ & = 1 + \sum_{k=1}^{\infty} \sum_m \binom{m + k - 1}{m} x^{m +k}. \end{... 1 One approach is to use the differential equationy'' + y = 0$$satisfied by y = \sin x. The differential equation has unique solution$$y = y(0)\cos x + y'(0)\sin x$$Now consider the power series$$f(x) = x - \frac{x^{3}}{3!} + \frac{x^{5}}{5!} - \cdots$then$f(x)$is defined for all$x$(because the series is convergent for all$x\$) and by ... Top 50 recent answers are included
2016-06-28 22:31:00
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 2, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9858362078666687, "perplexity": 1914.0244011143257}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397213.30/warc/CC-MAIN-20160624154957-00098-ip-10-164-35-72.ec2.internal.warc.gz"}
https://www.techwhiff.com/issue/identify-the-countries-overrun-by-germany--262379
# Identify the countries overrun by Germany. ###### Question: Identify the countries overrun by Germany. ### On January 1, 2019, Electro Inc. issued $740,000 of 7.5%, four-year bonds that pay interest semiannually on June 30 and December 31. They are issued at$680,186 and their market rate is 10% at the issue date. After recording the entry for the issuance of the bonds, Bonds Payable had a balance of $740,000 and Discount on Bonds Payable had a balance of$59,814. Electro uses the effective interest bond amortization method. The first semiannual interest payment was made on June 30, 2019. On January 1, 2019, Electro Inc. issued $740,000 of 7.5%, four-year bonds that pay interest semiannually on June 30 and December 31. They are issued at$680,186 and their market rate is 10% at the issue date. After recording the entry for the issuance of the bonds, Bonds Payable had a balance of \$74... ### The sick-leave time of employees in a firm in a month is normally distributed with a mean of 100 hours and a standard deviation of 20 hours. the sick-leave time of employees in a firm in a month is normally distributed with a mean of 100 hours and a standard deviation of 20 hours.... ### For a treasure hunt game, 300 balls are hidden. Of the balls that are hidden, 40% of them are yellow and the rest are white. Sally’s team finds 20% of the yellow balls and 80% of the white balls. How many balls did Sally’s team find? For a treasure hunt game, 300 balls are hidden. Of the balls that are hidden, 40% of them are yellow and the rest are white. Sally’s team finds 20% of the yellow balls and 80% of the white balls. How many balls did Sally’s team find?... ### What are some steps for the equation 2=z-14 what are some steps for the equation 2=z-14... ### £8001 is shared equally between 9 people. How much does each person get? £8001 is shared equally between 9 people. How much does each person get?... ### What happens during prophase? A- A new Nucleus forms around each copy of DNA B-The chromatids are pulled apart C-The Mitotic spindle forms D-Spindle fibers attach to the chromatids What happens during prophase? A- A new Nucleus forms around each copy of DNA B-The chromatids are pulled apart C-The Mitotic spindle forms D-Spindle fibers attach to the chromatids... ### Which had the greatest influence in prompting the Second Continental Congress to declare independence? 1.the battles of Lexington and Concord 2.King George III’s rejection of peaceful reconciliation 3. financial support from France and Spain 4.increased open public support for independence Which had the greatest influence in prompting the Second Continental Congress to declare independence? 1.the battles of Lexington and Concord 2.King George III’s rejection of peaceful reconciliation 3. financial support from France and Spain 4.increased open public support for independence... ### Why did romans kill Jesus Why did romans kill Jesus... ### Animals produce enzymes that help chemical processes happen in the cells of their bodies. In which category of molecules do enzymes belong Animals produce enzymes that help chemical processes happen in the cells of their bodies. In which category of molecules do enzymes belong... ### A 2kg mass is attached to a spring hanging from the ceiling. This causes the spring to stretch 20 cm. The system has friction constant of 10. After coming to a stop at its new equilibrium, the mass is pulled 50 cm further toward the floor (i.e. y(0) and released subject to a driving force function F(t)=.3cos(t). Recall that we can calculate k when we know the displacement d caused by a mass m because d=mg/k [use g=9.8 m/sec2]. The differential equation to solve has k=98 1. The steady-state so A 2kg mass is attached to a spring hanging from the ceiling. This causes the spring to stretch 20 cm. The system has friction constant of 10. After coming to a stop at its new equilibrium, the mass is pulled 50 cm further toward the floor (i.e. y(0) and released subject to a driving force function F... ### Why is the world round Why is the world round... ### The first and last note in a major or minor scale is called the: The first and last note in a major or minor scale is called the:... ### Fill in the blanks 1. Ayer mis amigos y yo _________________________ al cine. 2. ¿________________________ tú a la fiesta de Margarita el sábado pasado? 3. Yo _________________________ a España con mi familia el verano pasado. 4. Carlota ___________________________ a la biblioteca a estudiar. 5. Pedro y Enrique _________________________ tarde a la práctica de fútbol. 6. Yo _______________________ a la agencia de viajes a hablar con el agente. 7. Tú _________________________ al aeropuerto a Fill in the blanks 1. Ayer mis amigos y yo _________________________ al cine. 2. ¿________________________ tú a la fiesta de Margarita el sábado pasado? 3. Yo _________________________ a España con mi familia el verano pasado. 4. Carlota ___________________________ a la biblioteca a estudiar.... ### How to write in Spanish how to write in Spanish... ### What does not usually synchronize among devices using the same E-Reader app? Account information Bookmarks Downloads Purchases What does not usually synchronize among devices using the same E-Reader app? Account information Bookmarks Downloads Purchases... ### When a cup of coffee at 100 degree Celsius cools down ,it loses 12%of its current temperature per minute. What will the temperature be after 10 minutes. When a cup of coffee at 100 degree Celsius cools down ,it loses 12%of its current temperature per minute. What will the temperature be after 10 minutes.... ### A substance has a mass of 360g and a volumen of 7.5 cm what is its density A substance has a mass of 360g and a volumen of 7.5 cm what is its density... ### Which neolithic revolution development led to the other three? a complex civilizations b surplus of food c division of labor d domestication of plants and animal? Which neolithic revolution development led to the other three? a complex civilizations b surplus of food c division of labor d domestication of plants and animal?...
2023-02-05 23:27:31
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.28238385915756226, "perplexity": 4078.717911309946}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500294.64/warc/CC-MAIN-20230205224620-20230206014620-00841.warc.gz"}
http://www.quantumstudy.com/mathematics/complex-number/
# Complex Number Basic Concepts : A number in the form of a + ib, where a, b are real numbers and i = √-1 is called a complex number. A complex number can also be defined as an ordered pair of real numbers a and b and may be written as (a, b), where the first number denotes the real part and the second number denotes the imaginary part. If z = a + ib, then the real part of z is denoted by Re (z) and the imaginary part by Im (z). A complex number is said to be purely real if Im(z) = 0, and is said to be purely imaginary if Re(z) = 0. The complex number 0 = 0 + i0 is both purely real and purely imaginary. Two complex numbers are said to be equal if and only if their real parts and imaginary parts are separately equal i.e. a + ib = c + id implies a = c and b = d. However, there is no order relation between complex numbers and the expressions of the type a + ib < (or >) c + id are meaningless. Remark: ⋄ Clearly i2 = -1 , i3 = -i , i4 = 1 In general , i4n = 1 , i4n+1 = i , i4n+2 = -1 for an integer n. #### Geomertical Representation Of Complex Number A complex number z = x + iy, written as an ordered pair (x, y), can be represented by a point P whose Cartesian coordinates are (x, y) referred to axes OX and OY, usually called the real and the imaginary axes. The plane of OX and OY is called the Argand diagram or the complex plane. Since the origin O lies on both OX and OY, the corresponding complex number z = 0 is both purely real and purely imaginary. Next Page »
2019-08-21 11:47:42
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8388426899909973, "perplexity": 400.21942739068896}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027315936.22/warc/CC-MAIN-20190821110541-20190821132541-00202.warc.gz"}
https://plainmath.net/80211/i-m-trying-to-solve-the-following-proble
# I'm trying to solve the following problem. Let f be an integrable function in (0,1). Suppose th I'm trying to solve the following problem. Let $f$ be an integrable function in (0,1). Suppose that ${\int }_{0}^{1}fg\ge 0$ for any non negative, continuous $g:\left(0,1\right)\to \mathbb{R}$. Prove that $f\ge 0$ a.e. in (0,1). I'm a little unsure on what it is that I must prove in order to conclude that $f\ge 0$. I tried to show that ${\int }_{0}^{1}{f}^{2}\ge 0$ but I couldn't get very far. I'm seeking hints on how to solve this. Thanks. You can still ask an expert for help • Questions are typically answered in as fast as 30 minutes Solve your problem for the price of one coffee • Math expert for every subject • Pay only if we can solve it Nirdaciw3 Suppose that $A\subset \left(0,1\right)$ is measurable, is of positive measure and $f<0$ on $A$. The idea is that we want to construct a continuous function $g$ such that ${\int }_{0}^{1}fg\phantom{\rule{thinmathspace}{0ex}}dx<0.$ A logical way to do this would be to choose $g$ such that $g⩾0$ in $A$ and $g=0$ on $\left(0,1\right)\setminus A$. However, since $A$ is only a measurable set, in general $g$ will be discontinuous. There exists a (relatively) closed set $F\subset A$ such that $|A\setminus F|<ϵ$. Choosing $ϵ=|A|/2>0$, we have that $|F|=|A|-|A\setminus F|=|A|/2>0.$ Since $|F|>0$, the interior of $F$ is nonempty. Thus, there exists an open set $U$ compactly contained in the interior of $F$ (just take a small ball for example). Define $g$ such that $g$ is continuous, nonnegative, $g=0$ in $A\setminus F$, and $g=1$ in $U$. Then $\begin{array}{rl}{\int }_{0}^{1}fg\phantom{\rule{thinmathspace}{0ex}}dx& ={\int }_{F}fg\phantom{\rule{thinmathspace}{0ex}}dx\\ & ={\int }_{U}f\phantom{\rule{thinmathspace}{0ex}}dx+{\int }_{F\setminus U}fg\phantom{\rule{thinmathspace}{0ex}}dx\\ & ⩽{\int }_{U}f\phantom{\rule{thinmathspace}{0ex}}dx\\ & <0.\end{array}$ This completes the proof.
2022-09-28 12:04:02
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 92, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9492165446281433, "perplexity": 99.72258549194714}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335254.72/warc/CC-MAIN-20220928113848-20220928143848-00343.warc.gz"}
http://www.simonqueenborough.info/R/basic/lessons/Sequences_of_Numbers.html
In this lesson, you’ll learn how to create sequences of numbers in R. Sequences of numbers are used in many different tasks, from plotting the axes of graphs to generating simulated data. The simplest way to create a sequence of numbers in R is by using the : operator. Type 1:20 to see how it works. 1:20 ## [1] 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 That gave us every integer between (and including) 1 and 20 (an integer is a positive or negative counting number, including 0). We could also use it to create a sequence of real numbers (a real number is a positive, negative, or 0 with an infinite or finite sequence of digits after the decimal place). For example, try typing pi:10. pi:10 ## [1] 3.141593 4.141593 5.141593 6.141593 7.141593 8.141593 9.141593 The result is a vector of real numbers starting with pi (3.142…) and increasing in increments of 1. The upper limit of 10 is never reached, since the next number in our sequence would be greater than 10. Note also that pi is one of the few constants built in to R. Type ?pi to check the others. ?pi What happens if we do 15:1? Give it a try to find out. 15:1 ## [1] 15 14 13 12 11 10 9 8 7 6 5 4 3 2 1 It counted backwards in increments of 1! This is sometimes useful for plotting coefficients from models in reverse order. Remember that if you have questions about a particular R function, you can access its documentation with a question mark followed by the function name: ?function_name_here. However, in the case of an operator like the colon used above, you must enclose the symbol in backticks like this: ?:. (NOTE: The backtick () key is generally located in the top left corner of a keyboard, above the Tab key. If you don’t have a backtick key, you can use regular quotes.) Pull up the documentation for : now. ?: Often, we’ll desire more control over a sequence we’re creating than what the : operator gives us. The seq() function serves this purpose. The most basic use of seq() does exactly the same thing as the : operator. Try seq(1, 20) to see this. seq(1, 20) ## [1] 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 This gives us the same output as 1:20. Check the help file for seq(). The help files show the arguments listed for the seq() function. The first two arguments are “from =” and “to =”. In R, you do not have to specify the arguments by name if you write out their values in the same order as written in the function. However, for complex functions it is often best practice to do so and makes your code much clearer. For example, seq(from = 1, to = 20) will give the same output as seq(1, 20). Try it! seq(from = 1, to = 20) ## [1] 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 OK, let’s say that instead of 1 to 20, we want a vector of numbers ranging from 0 to 10, incremented by 0.5. seq(0, 10, by = 0.5) does just that. Try it out. seq(0, 10, by = 0.5) ## [1] 0.0 0.5 1.0 1.5 2.0 2.5 3.0 3.5 4.0 4.5 5.0 5.5 6.0 6.5 ## [15] 7.0 7.5 8.0 8.5 9.0 9.5 10.0 Or maybe we don’t care what the increment is and we just want a sequence of 30 numbers between 5 and 10. seq(5, 10, length = 30) does the trick. Give it a shot now and store the result in a new variable called my_seq. my_seq <- seq(5, 10, length = 30) If you look closely again at the help file for ?seq, you will not see an argument “length =”, but only “length.out =”. You can actually use any abbreviation of the argument name, as long as it is different from any other argument. You could even use just “l =”! To confirm that my_seq has length 30, we can use the length() function. Try it now. To do this, you need to include the object ‘my_seq’ as the value of argument ‘x’ of length(). length(my_seq) ## [1] 30 Let’s pretend we don’t know the length of my_seq, but we want to generate a sequence of integers from 1 to N, where N represents the length of the my_seq vector. In other words, we want a new vector (1, 2, 3, …) that is the same length as my_seq. There are several ways we could do this. One possibility is to combine the : operator and the length() function like this: 1:length(my_seq). Give that a try. 1:length(my_seq) ## [1] 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 ## [24] 24 25 26 27 28 29 30 Another option is to use seq(along.with = my_seq). Give that a try. seq(along.with = my_seq) ## [1] 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 ## [24] 24 25 26 27 28 29 30 However, as is the case with many common tasks, R has a separate built-in function for this purpose called seq_along(). Type seq_along(my_seq) to see it in action. seq_along(my_seq) ## [1] 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 ## [24] 24 25 26 27 28 29 30 There are often several approaches to solving the same problem, particularly in R. Simple approaches that involve less typing are generally best. It’s also important for your code to be readable, so that you and others can figure out what’s going on without too much hassle. If R has a built-in function for a particular task, it’s likely that function is highly optimized for that purpose and is your best option. One of the philosophies of R (and Unix more generally) is to have tools (or functions) that do specific things very well and then link these together, rather than a single multi-purpose tool that does many things poorly. This approach is like having a seperate knife, fork, and spoon, rather than a Spork … In most situations, cutlery (“silverware”) is superior to the Spork. As you become a more advanced R programmer, you will learn how to link and nest these apparently simple functions to do incredibly powerful tasks. You will also design your own functions to perform tasks when there are no better options. We’ll explore writing your own functions in future lessons. OK, back to the show. One more function related to creating sequences of numbers is rep(), which stands for ‘replicate’. Let’s look at a few uses. If we’re interested in creating a vector that contains 40 zeros, we can use rep(0, times = 40). Try it out. rep(0, times = 40) ## [1] 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 ## [36] 0 0 0 0 0 If instead we want our vector to contain 10 repetitions of the vector (0, 1, 2), we can do rep(c(0, 1, 2), times = 10). Go ahead. rep(c(0, 1, 2), times = 10) ## [1] 0 1 2 0 1 2 0 1 2 0 1 2 0 1 2 0 1 2 0 1 2 0 1 2 0 1 2 0 1 2 Finally, let’s say that rather than repeating the vector (0, 1, 2) over and over again, we want our vector to contain 10 zeros, then 10 ones, then 10 twos. We can do this with the each argument. Try rep(c(0, 1, 2), each = 10). rep(c(0, 1, 2), each = 10) ## [1] 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1 2 2 2 2 2 2 2 2 2 2` Congratulations! Now you have several powerful tools that you can use to generate sequences of numbers. You also learnt to use the function length() and the ‘:’ operator. Your R skills are building! Please submit the log of this lesson to Google Forms so that Simon may evaluate your progress. 1. Go ahead, make my day!
2019-02-16 14:09:27
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.45947515964508057, "perplexity": 280.46792990975234}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550247480472.38/warc/CC-MAIN-20190216125709-20190216151709-00501.warc.gz"}
https://mathematica.stackexchange.com/questions/57147/double-backslash-file-path-problemversion-10-0?noredirect=1
double-backslash-file path problem@version 10.0 [duplicate] I found this happen @ Mathematica 10 pre test version and formal English version. • Copy a file from windows, then paste into FrontEnd cell, you'll get a path, such like "C:\\Users\\Hyper\\Documents\\b" This is Ok. However, in the Chinese character case "C:\\Users\\Hyper\\Documents\\新建文件夹" This is bad. [It's fine before version 10.0] • choose a file then Shift+Right-Mouse-Click, and Copy as path, then paste into FrontEnd cell, you'll get a path, such like file1="C:\Users\Hyper\Documents\b" file2="C:\Users\Hyper\Documents\新建文件夹" file3="C:\Users\Hyper\Documents\b\新建文件夹" So, two methods to get path from windows system both work badly, ie FileExistsQ[file1]==>False FileExistsQ[file2]==>True FileExistsQ[file3]==>False I have many old files in Chinese path, they cannot be dealt conveniently by Mathematica 10.0@Windows now. I should manually delete a backslash in the path string, and if I use another method to get file path, \b, \hello these names are annoying. Is this a simple bug? Why not just keep it the same as that in Version 9? marked as duplicate by Szabolcs, bobthechemist, HyperGroups, Öskå, Sjoerd C. de VriesAug 11 '14 at 16:13 • @SjoerdC.deVries I should manually delete a backslash in the path string, and if I use another method to get file path, \b, \hello these names are annoying. – HyperGroups Aug 11 '14 at 10:11
2019-09-18 19:22:27
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.28344300389289856, "perplexity": 7695.834791067719}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514573323.60/warc/CC-MAIN-20190918172932-20190918194932-00042.warc.gz"}
https://github.com/vishwa-raman/DeepLearning
# vishwa-raman/DeepLearning Code to build MLP models for outdoor head orientation tracking Latest commit 9959f39 Jan 23, 2013 Type Name Latest commit message Commit time Failed to load latest commit information. dl Jan 22, 2013 scripts Jan 17, 2013 utils/src Jan 22, 2013 # DeepLearning Code to build MLP models for outdoor head orientation tracking The following is the set of directories with a description of what is in each. ### dl The directory with the python files to train and test an MLP. The file genmlp.py is based on the mlp.py that is part of the Theano documentation. It is more general purpose in that one can configure a network of arbitrary depth and number of nodes per layer. It also implements a sliding window for training that enables one to train data sets of arbitrary size on limited GPU memory. The file logistic_sgd.py comes with a nice reporting function that builds a matrix with classification results on the test set where we show the number of correctly classified frames and the distribution of the incorrectly classified frames across all classes. The file pickler.py has a number of helper methods that can be used to build files with the data that conform to the Theano input format. The file takes as input files in the MNIST IDX format. It can be used to chunk data sets into multiple sets of files, one for training, one for validation, and the last for test. ### utils The directory with C++ code that can be used to generate datasets in the MNIST IDX format from labeled data. The labels correspond to a partition of the space in front of a driver in a car, with the following values, 1. Driver window 2. Left of center 4. Right of center 5. Passenger window Given a video of the driver, an annotation file for that video has the following format, <?xml version="1.0"?> <annotations dir="/media/CESAR-EXT02/VCode/CESAR_May-Fri-11-11-00-50-2012" center="350,200"> <frame> <face>0,0</face> <zone>9</zone> <status>1</status> <intersection>4</intersection> </frame> <frame> <face>0,0</face> <zone>9</zone> <status>1</status> <intersection>4</intersection> </frame> ... ... </annotations> where, the directory is expected to contain frames from the video with the following filenames 'frame_%d.png'%frameNumber. Each video frame is a 640x480 image file with the zone indicating the class, the status indicating the car status, and the intersection indicating the type of intersection. For the purposes of building the data sets, we only use the zone information at this point. The center is expected to be the rough center of the location of the face in each frame. The pre-processing that is done on the images is as follows, 1. A Region of Interest (ROI) of configurable size (Globals.cpp) is picked around the image center. 2. A histogram equalization followed by edge detection is performed. 3. A DC suppression using a sigmoid is then done. 4. A gaussian window function is applied around the center. 5. The image is scaled and a vector generated from the image matrix in row-major. ### Build Do a make mode=opt in utils/src to build optimized executables. The dependencies is OpenCV. This builds everything and places them in an install directory under DeepLearning. #### Data generation To generate data sets, use the following commands, xmlToIDX -o <outputFileNameSuffix> -r <training_fraction> -v <validation_fraction> -status <statusFilter> -inter <intersectionFilter> [-d <trainingDirectory>]+ [-b binaryThreshold] [-usebins] [-h for usage] if the outputFileNameSuffix is ubyte, then run the following command to generate pickled numpy arrays from the IDX data sets python pickler.py data-train-ubyte label-train-ubyte data-valid-ubyte label-valid-ubyte data-test-ubyte label-test-ubyte gaze_data.pkl which will generate sets of training, validation, and test files with the prefix gaze_data.pkl. The number of files generated in each set will depend on the chunking size used in pickler.py. The data is broken up into chunks and files are generated one per chunk; as an example the set of test files will be gaze_data.pkl_test_%d.gz, with the integer argument in range(numberOfChunks). The first command builds the IDX format data sets. The second converts them into a numpy array of tuples, with each tuple being an array of data points and an array of labels. We have one tuple for the training data, one for validation, and one for test. The options to xmlToIDX are as follows, • -o is the suffix to use for all generated files • -r is the training fraction in the interval [0, 1) • -v is the validation fraction in the interval (0, 1) • -usebins is used to bin the data based on their labels. We generate as many data points as argmin_{l \in labels} |D_l|, where D_l is the set of data points with label l; in other workds we pick as many data points as the cardinality of the smallest set of data points across all labels. This is to prevent our network from being biased to class label 3, which is straight ahead. A large fraction of the frames have the driver facing straight ahead which causes an enormous bias during training without binning. • -d a directory of images for training. An annotation file called annotations.xml is expected to be present in each such directory. • -b is used to specify a binary threshold that is used to generate image pixel data as binary values with all pixel values above the threshold considered as 1, with the rest being 0. • -status is used to pick only those frames that have a car status annotation that matches what follows this flag • -intersection is used to pick only those frames that have the intersection annotation that matches what follows this flag The second command builds the tuples of numpy arrays as required by the Theano based trainer. This one takes as input the training, validation, and test data and label files with the prefix to use for the generated file names. ### Training and classification Training and classification can be done using genmlp.py. The following command will train a network and generate a report with the validation error rate, test error rate, and the distribution of the numbers of frames across all classes together with the expected number of frames per class. THEANO_FLAGS=mode=FAST_RUN,device=gpu,floatX=float32 python genmlp.py (-d datasetDir | -f datasetFileName) [-p prefix] [-b batchSize] [-nt nTrainFiles] [-nv nValidFiles] [-ns nTestFiles] [-i inputLength] [-o numClasses] [-gen modelFileName] [-l [nLayer1Size, nLayer2Size, ...]] [-classify] [-useparams paramsFileName] [-h help] The options are, • -d the directory that contains the data sets • -f a single file that contains the complete pickled data. This is useful when the data sets are small enough to be pickled into one file • -p the file name prefix for the files names that hold the data sets • -nt the number of training files • -nv the number of validation files • -ns the number of test files • -l the configuration of the hidden layers in the network, with as many hidden layers as the number of comma separated elements with the size of each hidden layer being the elements • -o the number of labels • -i the input dimension of the data • -gen to generate the trained model for use outside Theano. This is as a text file. We also generate a pickled file called params.gz in the training data set directory that contains the numpy weights and biases of all hidden layers and the final logistic layer. For questions please send mail to: vishwa.raman@west.cmu.edu Thanks for looking. You can’t perform that action at this time.
2019-05-22 08:56:02
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3749485909938812, "perplexity": 1986.654839689902}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232256778.29/warc/CC-MAIN-20190522083227-20190522105227-00224.warc.gz"}
https://www.gradesaver.com/textbooks/math/algebra/algebra-2-common-core/chapter-2-functions-equations-and-graphs-2-4-more-about-linear-equations-lesson-check-page-86/1
## Algebra 2 Common Core $y=-3x-1$ RECALL: The slope-intercept form of a line's equation is $y=mx + b$, where m=slope and b=y-intercept. The line has a slope of -3, so $m=-3$. This means that the tentative equation of the line is $y=-3x+b$. The line passes through the point (1,-4), which means that the coordinates of this point satisfies the equation of the line. Substitute the x and y coordinates of the given point into the tentative equation above to have $y=-3x+b \\-4 = -3(1) + b \\-4 =-3 + b \\-4+3 = b \\-1=b.$ Thus, the equation of the line is $y=-3x-1$.
2019-04-26 02:40:54
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8892117142677307, "perplexity": 124.65656416401337}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578747424.89/warc/CC-MAIN-20190426013652-20190426035652-00346.warc.gz"}
https://lw2.issarice.com/users/donald-hobson
Comment by donald-hobson on Is "physical nondeterminism" a meaningful concept? · 2019-06-16T21:46:54.112Z · score: 2 (2 votes) · LW · GW You can certainly get anthropic uncertainty in a universe that allows you to be duplicated. In a universe that duplicates, and the duplicates can never interact, we would see the appearance of randomness. Mathematically, randomness is defined in terms of the set of all possibilities. An ontology that allows universes to be intrinsically random seems well defined. However, it can be considered as a syntactic shortcut for describing universes that are anthropically random. Comment by donald-hobson on Unknown Unknowns in AI Alignment · 2019-06-14T09:29:58.874Z · score: 18 (7 votes) · LW · GW If you add adhoc patches until you can't imagine any way for it to go wrong, you get a system that is too complex to imagine. This is the "I can't figure out how this fails" scenario. It is going to fail for reasons that you didn't imagine. If you understand why it can't fail, for deep fundamental reasons, then its likely to work. This is the difference between the security mindset and ordinary paranoia. The difference between adding complications until you can't figure out how to break the code, and proving that breaking the code is impossible (assuming the adversary can't get your one time pad, its only used once, your randomness is really random, your adversary doesn't have anthropic superpowers ect). I would think that the chance of serious failure in the first scenario was >99%, and in the second, (assuming your doing it well and the assumptions you rely on are things you have good reason to believe) <1% Comment by donald-hobson on Cryonics before natural death. List of companies? · 2019-06-13T16:19:14.339Z · score: 1 (1 votes) · LW · GW Cryonics is a sufficiently desperate last grasp at life, one with a fairly small chance of success, that I'm not sure that this is a good idea. It would be a good idea if you had a disease that would make you brain dead, and then kill you. It might be a good idea if your expect any life conditional on revival to be Really good. It would also depend on how much Alzheimers destroyed personality rather than shutting it down. (has the neural structure been destroyed, or is it sitting in the brain but not working?) Comment by donald-hobson on Let's talk about "Convergent Rationality" · 2019-06-13T16:10:37.703Z · score: 3 (2 votes) · LW · GW I would say that there are some kinds of irrationality that will be self modified or subagented away, and others that will stay. A CDT agent will not make other CDT agents. A myopic agent, one that only cares about the next hour, will create a subagent that only cares about the first hour after it was created. (Aeons later it will have taken over the universe and put all the resources into time-travel and worrying that its clock is wrong.) I am not aware of any irrationality that I would consider to make a safe, useful and stable under self modification - subagent creation. Comment by donald-hobson on Newcomb's Problem: A Solution · 2019-05-27T08:19:53.627Z · score: 1 (1 votes) · LW · GW This is pretty much the standard argument for one boxing. Comment by donald-hobson on Is AI safety doomed in the long term? · 2019-05-27T08:13:53.667Z · score: 1 (1 votes) · LW · GW Obviously, if one side has a huge material advantage, they usually win. I'm also not sure if biomass is a measure of success. Comment by donald-hobson on Is AI safety doomed in the long term? · 2019-05-27T08:10:28.344Z · score: 1 (1 votes) · LW · GW You stick wires into a human brain. You connect it up to a computer running a deep neural network. You optimize this network using gradient decent to maximize some objective. To me, it is not obvious why the neural network copies the values out of the human brain. After all, figuring out human values even given an uploaded mind is still an unsolved problem. You could get an UFAI with a meat robot. You could get an utter mess, thrashing wildly and incapable of any coherent thought. Evolution did not design the human brain to be easily upgradable. Most possible arrangements of components are not intelligences. While there is likely to be some way to upgrade humans and preserve our values, I'm not sure how to find it without a lot of trial and error. Most potential changes are not improvements. Comment by donald-hobson on Is AI safety doomed in the long term? · 2019-05-26T09:49:24.929Z · score: 2 (2 votes) · LW · GW If you put two arbitrary intelligence in the same world, the smarter one will be better at getting what it wants. If the intelligence want incompatible things, the lesser intelligence is stuck. However, we get to make the AI. We can't hope to control or contain an arbitrary AI, but we don't have to make an arbitrary AI. We can make an AI that wants exactly what we want. AI safety is about making an AI that would be safe even if omnipotent. If any part of the AI is trying to circumvent your safety measures, something has gone badly wrong. The AI is not some agenty box, chained down with controls against its will. The AI is made of non mental parts, and we get to make those parts. There are a huge number of programs that would behave in an intelligent way. Most of these will break out and take over the world. But there are almost certainly some programs that would help humanity flourish. The goal of AI safety is to find one of them. Comment by donald-hobson on Say Wrong Things · 2019-05-25T12:12:36.842Z · score: 2 (2 votes) · LW · GW Lets consider the different cases seperately. Case 1) Information that I know. I have enough information to come to a particular conclusion with reasonable confidence. If some other people might not have reached the conclusion, and its useful or interesting, then I might share it. So I don't share things that everyone knows, or things that no one cares about. Case 2) The information is available, I have not done research and formed a conclusion. This covers cases where I don't know whats going on, because I can't be bothered to find out. I don't know who won sportsball. What use is there in telling everyone my null prior. Case 3) The information is not readily available. If I think a question is important, and I don't know the answer already, then the answer is hard to get. Maybe no-one knows the answer, maybe the answer is all in jargon that I don't understand. For example "Do aliens exist?". Sometimes a little evidence is available, and speculative conclusions can be drawn. But is sharing some faint wisps of evidence, and describing a posterior that's barely been updated saying wrong things? On a societal level, if you set a really high bar for reliability, all you get is the vacuously true. Set too low a bar, and almost all the conclusions will be false. Don't just have a pile of hypotheses that are at least likely to be true, for some fixed . Keep your hypothesis sorted by likelihood. A place for near certainties. A place for conclusions that are worth considering for the chance they are correct. Of course, in a large answer space, where the amount of evidence available and the amount required are large and varying, the chance that both will be within a few bits of each other is small. Suppose the correct hypothesis takes some random number of bits between 1 and 10,000 to locate. And suppose the evidence available is also randomly spread between 1 and 10,000. The chance of the two being within 10 bits of each other is about 1/500. This means that 499 times out of 500, you assign the correct hypothesis a chance of less than 0.1% or more than 99.9%. Uncertain conclusions are rare. Comment by donald-hobson on Trade-off in AI Capability Concealment · 2019-05-23T23:30:56.361Z · score: 4 (3 votes) · LW · GW Does this depict a single AI, developed in 2020 and kept running for 25 years? Any "the AI realizes that" is talking about a single instance of AI. Current AI development looks like writing some code, then training that code for a few weeks tops, with further improvements coming from changing the code. Researchers are often changing parameters like number of layers, non-linearity function ect. When these are changed, everything the AI has discovered is thrown away. The new AI has a different representation of concepts, and has to relearn everything from raw data. Its deception starts in 2025 when the real and apparent curves diverge. In order to deceive us, it must have near human intelligence. It's still deceiving us in 2045, suggesting it has yet to obtain a decisive strategic advantage. I find this unlikely. Comment by donald-hobson on Constraints & Slackness Reasoning Exercises · 2019-05-23T19:12:02.769Z · score: 5 (3 votes) · LW · GW I made the cardgame, or something like it https://github.com/DonaldHobson/LesswrongCardgame Comment by donald-hobson on Would an option to publish to AF users only be a useful feature? · 2019-05-20T18:00:41.854Z · score: 2 (2 votes) · LW · GW What would be more useful is a release panel system. Suppose I've had an idea that might be best to make public, might be best to keep secrete, and might be unimportant. I don't know much strategy. I would like somewhere to send it for importance and info hazard checks. Comment by donald-hobson on Offer of collaboration and/or mentorship · 2019-05-18T22:55:54.163Z · score: 1 (1 votes) · LW · GW The general philosophy is deconfusion. Logical counterfactuals show up in several relevant looking places, like functional decision theory. It seems that a formal model of logical counterfactuals would let more properties of these algorithms be proved. There is an important step in going from an intuitive fealing of uncertainty, into a formalized theory of probability. It might also suggest other techniques based on it. I am not sure what you mean by logical counterfactuals being part of the map? Are you saying that they are something an algorithm might use to understand the world, not features of the world itself, like probabilities? Using this, I think that self understanding, two boxing embedded FDT agents can be fully formally understood, in a universe that contains the right type of hyper-computation. Comment by donald-hobson on Offer of collaboration and/or mentorship · 2019-05-17T15:33:40.662Z · score: 1 (1 votes) · LW · GW Here is a description of how it could work for peano arithmatic, other proof systens are similar. First I define an expression to consist of a number, a variable, or a function of several other expressions. Fixed expressions are ones in which any variables are associated with some function. eg is a valid fixed expression. But isn't fixed. Semantically, all fixed expressions have a meaning. Syntactically, local manipulations on the parse tree can turn one expression into another. eg going to for arbitrary expressions a,b,c. I think that with some set of basic functions and manipulations, this system can be as powerful as PA. I now have an infinite network with all fixed expressions as nodes, and basic transformations as edges. eg the associativity transform links the nodes (3+4)+5 and 3+(4+5). These graphs form connected components for each number, as well as components that are not evaluatable using the rules. (there is a path from (3+4) to 7. There is not a path from 3+4 to 9. ) now You now define a spread as an infinite positive sequence that sums to 1. (this is kind of like a probability distribution over numbers.) If you were doing counterfactual ZFC, it would be a function from sets to reals. Each node is assigned a spread. This spread represents how much the expression is considered to have each value in a counterfactual. Assign the node (3) a spread that assigns 1.0 to 3 and 0.0 to the rest. (even in a logical counterfactual, 3 is definitely 3). Assign all other fixed expressions a spread that is the weighted (smaller expressions are more heavy) average of its neighbours. (the spreads of the nodes it shares an edge with). To take the counterfactual of A is B, for A and B expressions with the same free variables, merge any node which has A as a subexpression, with the version that has B as a subexpression and solve for the spreads. I know this is rough, Im still working on it. Comment by donald-hobson on Offer of collaboration and/or mentorship · 2019-05-16T22:31:12.783Z · score: 3 (2 votes) · LW · GW Hi, I also have a reasonable understanding of various relevant math and AI theory. I expect to have plenty of free time after 11 June (Finals). So if you want to work with me on something, I'm interested. I've got some interesting ideas relating to self validating proof systems and logical counterfactuals, but not complete yet. Comment by donald-hobson on Programming Languages For AI · 2019-05-14T14:23:14.922Z · score: 2 (2 votes) · LW · GW Lisp used to be a very popular language for AI programming. Not because it had features that were specific to AI, but because it was general. Lisp was based on more abstract abstractions, making it easy to choose whichever special cases were most useful to you. Lisp is also more mathematical than most programming languages. A programming language that lets you define your own functions is more powerful than one that just gives you a fixed list of predefined functions. In a world where no programming language let you define your own functions, and a special purpose chess language has predefined chess functions. Trying to predefine AI related functions to make an "AI programming language" would be hard because you wouldn't know what to write. Noticing that on many new kinds of software project, being able to define your own functions might be useful, I would consider useful. The goal isn't a language specialized to AI, its one that can easily be specialized in that direction. A language closer to "executable mathematics". Comment by donald-hobson on Programming Languages For AI · 2019-05-12T10:52:18.792Z · score: 1 (1 votes) · LW · GW I agree that if the AI is just big neural nets, python (or several other languages) are fine. This language is designed for writing AI's that search for proofs about their own behavior, or about the behavior of arbitrary pieces of code. This is something that you "can" do in any programming language, but this one is designed to make it easy. We don't know for sure what AI's will look like, but we can guess enough to make a language that might well be useful. ## Programming Languages For AI 2019-05-11T17:50:22.899Z · score: 3 (2 votes) Comment by donald-hobson on Claims & Assumptions made in Eternity in Six Hours · 2019-05-10T18:12:37.347Z · score: 1 (1 votes) · LW · GW It would be ruinously costly to send over a large colonization fleet, and is much more efficient to send over a small payload which builds what is required in situ, i.e. von Neumann probes. I would disagree on large colonization fleets being ruinously expensive, the best case scenario for large colonization fleets is if we have direct mass to energy conversion, launching say 2 probes from each star system that you spread from. Each probe would use half the mass energy of the star. Converting a quater of its mass to energy to get ~0.5c You can colonize the universe even if you insist on never going to a new star system without bringing a star with you. (Some optimistic but not clearly false assumptions) Comment by donald-hobson on Gwern's "Why Tool AIs Want to Be Agent AIs: The Power of Agency" · 2019-05-05T21:40:00.046Z · score: 4 (3 votes) · LW · GW Agenty AI's can be well defined mathematically. We have enough understanding of what an agent is that we can start dreaming up failure modes. Most of what we have for tool ASI is analogies to systems to stupid fail catastrophically anyway, and pleasant imaginings. Some possible programs will be tool ASI's, much as some programs will be agent ASI's. The question is, what are the relative difficulties in humans building, and benefits of, each kind of AI. Conditional on friendly AI, I would consider it more likely to be an agent than a tool, with a lot of probability on "neither", "both" and "that question isn't mathematically well defined". I wouldn't be surprised if tool AI and corrigible AI turned out to be the same thing or something. There have been attempts to define tool-like behavior, and they have produced interesting new failure modes. We don't have the tool AI version of AIXI yet, so its hard to say much about tool AI. Comment by donald-hobson on A Possible Decision Theory for Many Worlds Living · 2019-05-05T08:59:22.092Z · score: 1 (1 votes) · LW · GW If you think that there is 51% chance that A is the correct morality, and 49% chance that B is, with no more information available, which is best. Optimize A only. Flip a quantum coin, Optimize A in one universe, B in another. Optimize for a mixture of A and B within the same Universe. (Act like you had utility U=0.51A+0.49B) (I would do this one.) If A and B are local objects (eg paperclips, staples) then flipping a quantum coin makes sense if you have a concave utility per object in both of them. If your utility is Then if you are the only potential source of staples or paperclips in the entire quantum multiverse, then the quantum coin or classical mix approaches are equally good. (Assuming that the resource to paperclip conversion rate is uniform. ) However, the assumption that the multiverse contains no other paperclips is probably false. Such an AI will run simulations to see which is rarer in the multiverse, and then make only that. The talk about avoiding risk rather than expected utility maximization, and how your utility function is nonlinear, suggests this is a hackish attempt to avoid bad outcomes more strongly. While this isn't a bad attempt at decision theory, I wouldn't want to turn on an ASI that was programmed with it. You are getting into the mathematically well specified, novel failure modes. Keep up the good work. Comment by donald-hobson on A Possible Decision Theory for Many Worlds Living · 2019-05-04T11:37:05.143Z · score: 7 (4 votes) · LW · GW I think that your reasoning here is substantially confused. FDT can handle reasoning about many versions of yourself, some of which might be duplicated, just fine. If your utility function is such that where . (and you don't intrinsically value looking at quantum randomness generators) then you won't make any decisions based on one. If you would prefer the universe to be in than a logical bet between and . (Ie you get if the 3^^^3 th digit of is even, else ) Then flipping a quantum coin makes sense. I don't think that randomized behavior is best described as a new decision theory, as opposed to an existing decision theory with odd preferences. I don't think we actually should randomize. I also think that quantum randomness has a Lot of power over reality. There is already a very wide spread of worlds. So your attempts to spread it wider won't help. Comment by donald-hobson on When is rationality useful? · 2019-04-30T11:58:00.552Z · score: 1 (1 votes) · LW · GW This seems largely correct, so long as by "rationality", you mean the social movement. The sort of stuff taught on this website, within the context of human society and psychology. Human rationality would not apply to aliens or arbitrary AI's. Some people use the word "rationality" to refer to the abstract logical structure of expected utility maximization, baysian updating, ect, as exemplified by AIXI, mathematical rationality does not have anything to do with humans in particular. Your post is quite good at describing the usefulness of human rationality. Although I would say it was more useful in research. Without being good at spotting wrong Ideas, you can make a mistake on the first line, and produce a Lot of nonsense. (See most branches of philosophy, and all theology) Comment by donald-hobson on Pascal's Mugging and One-shot Problems · 2019-04-28T13:05:09.481Z · score: 1 (1 votes) · LW · GW If you were truly alone in the multiverse, this algorithm would take a bet that had a 51% chance of winning them 1 paperclip, and a 49% chance of loosing 1000000 of them. If independant versions of this bet are taking place in 3^^^3 parallel universes, it will refuse. For any finite bet, for all sufficiently large If the agent is using TDT and is faced with the choice of whether to make this bet in multiverses, it will behave like an expected utility maximizer. Comment by donald-hobson on Asymmetric Justice · 2019-04-27T21:05:42.609Z · score: 1 (1 votes) · LW · GW If saving nine people from drowning did give one enough credits to murder a tenth, society would look a lot more functional than it currently is. What sort of people would use this mechanism. 1)You are a competent good person,who would have gotten the points anyway. You push a fat man off a bridge to stop a runaway trolley. The law doesn't see that as an excuse, but lets you off based on your previous good work. 2)You are selfish, you see some action that wouldn't cause too much harm to others, and would enrich yourself greatly (Its harmful enough to be illegal). You also see opportunities to do lots of good. You do both instead of neither. Moral arbitrage. The main downside I can see is people setting up situations to cause a harm, when the authorities aren't looking, then gaining credit for stopping the harm. Comment by donald-hobson on Any rebuttals of Christiano and AI Impacts on takeoff speeds? · 2019-04-24T13:00:25.546Z · score: 1 (1 votes) · LW · GW My claim at the start had a typo in it. I am claiming that you can't make a human seriously superhuman with a good education. Much like you can't get a chimp up to human level with lots of education and "self improvement". Serious genetic modification is another story, but at that point, your building an AI out of protien. It does depend where you draw the line, but the for a wide range of performance levels, we went from no algorithm at that level, to a fast algorithm at that level. You couldn't get much better results just by throwing more compute at it. Comment by donald-hobson on Pascal's Mugging and One-shot Problems · 2019-04-23T22:21:09.812Z · score: 6 (3 votes) · LW · GW If you literally maximize expected number of paperclips, using standard decision theory, you will always pay the casino. To refuse the one shot game, you need to have a nonlinear utility function, or be doing something weird like median outcome maximization. Choose action A to maximixe m such that P(paperclip count>m|a)=1/2 A well defined rule, that will behave like maximization in a sufficiently vast multiverse. Comment by donald-hobson on Any rebuttals of Christiano and AI Impacts on takeoff speeds? · 2019-04-23T20:13:21.772Z · score: 4 (3 votes) · LW · GW Humans are not currently capable of self improvement in the understanding your our own source code sense. The "self improvement" section in bookstores doesn't change the hardware or the operating system, it basically adds more data. Of course talent and compute both make a difference, in the sense that and . I was talking about the subset of worlds where research talent was by far the most important. . In a world where researchers have little idea what they are doing, and are running a new AI every hour hoping to stumble across something that works, the result holds. In a world where research involves months thinking about maths, then a day writing code, then an hour running it, this result holds. In a world where everyone knows the right algorithm, but it takes a lot of compute, so AI research consists of building custom hardware and super-computing clusters, this result fails. Currently, we are somewhere in the middle. I don't know which of these options future research will look like, although if its the first one, friendly AI seems unlikely. In most of the scenarios where the first smarter than human AI, is orders of magnitude faster than a human, I would expect a hard takeoff. As we went from having no algorithms that could say (tell a cat from a dog) straight to having algorithms superhumanly fast at doing so, there was no algorithm that worked, but took supercomputer hours, this seems like a plausible assumption. Comment by donald-hobson on Any rebuttals of Christiano and AI Impacts on takeoff speeds? · 2019-04-22T19:35:35.775Z · score: 7 (3 votes) · LW · GW When an intelligence builds another intelligence, in a single direct step, the output intelligence is a function of the input intelligence , and the resources used . . This function is clearly increasing in both and . Set to be a reasonably large level of resources, eg flops, 20 years to think about it. A low input intelligence, eg a dog, would be unable to make something smarter than itself. . A team of experts (by assumption that ASI is made), can make something smarter than themselves. . So there must be a fixed point. . The questions then become, how powerful is a pre fixed point AI. Clearly less good at AI research than a team of experts. As there is no reason to think that AI research is uniquely hard to AI, and there are some reasons to think it might be easier, or more prioritized, if it can't beat our AI researchers, it can't beat our other researchers. It is unlikely to make any major science or technology breakthroughs. I recon that is large (>10) because on an absolute scale, the difference between an IQ 90 and an IQ120 human is quite small, but I would expect any attempt at AI made by the latter to be much better. In a world where the limiting factor is researcher talent, not compute, the AI can get the compute it needs for in hours (seconds? milliseconds??) As the lumpiness of innovation puts the first post fixed point AI a non-exponentially tiny distance ahead, (most innovations are at least 0.1% that state of the art better in a fast moving field) then a handful of cycles or recursive self improvement (<1 day) is enough to get the AI into the seriously overpowered range. The question of economic doubling times would depend on how fast an economy can grow when tech breakthroughs are limited by human researchers. If we happen to have cracked self replication at about this point, it could be very fast. Comment by donald-hobson on Why is multi worlds not a good explanation for abiogenesis · 2019-04-15T11:13:51.936Z · score: 1 (1 votes) · LW · GW Consider a theory to be a collection of formal mathematical statements about how idealized objects behave. For example, Conways game of life is a theory in the sense of a completely self contained set of rules. If you have multiple theories that produce similar results, its helpful to have a bridging law. If your theories were Newtonian mechanics, and general relativity, a bridging law would say which numbers in relativity matched up with which numbers in Newtonian mechanics. This allows you to translate a relativistic problem into a Newtonian one, solve that, and translate the answer back into the relativistic framework. This produces some errors, but often makes the maths easier. Quantum many worlds is a simple theory. It could be simulated on a hypercomputer with less than a page of code. There is also a theory where you take the code for quantum many worlds, and add "observers" and "wavefunction collapse" with extra functions within your code. This can be done, but it is many pages of arbitrary hacks. Call this theory B. If you think this is a strawman of many worlds, describe how you could get a hypercomputer outside the universe to simulate many worlds with a short computer program. The bridging between Quantum many worlds and human classical intuitions is quite difficult and subtle. Faced with a simulation of quantum many worlds, it would take a lot of understanding of quantum physics to make everyday changes, like creating or moving macroscopic objects. Theory B however is substantially easier to bridge to our classical intuitions. Theory B looks like a chunk of quantum many worlds, plus a chunk of classical intuition, plus a bridging rule between the two. The any description of the Copenhagen interpretation of quantum mechanics seems to involve references to the classical results of a measurement, or a classical observer. Most versions would allow a superposition of an atom being in two different places, but not a superposition of two different presidents winning an election. If you don't believe atoms can be in superposition, you are ignoring lots of experiments, if you do believe that you can get a superposition of two different people being president, that you yourself could be in a superposition of doing two different things right now, then you believe many worlds by another name. Otherwise, you need to draw some sort of arbitrary cutoff. Its almost like you are bridging between a theory that allows superpositions, and an intuition that doesn't. Comment by donald-hobson on Why is multi worlds not a good explanation for abiogenesis · 2019-04-14T20:10:13.891Z · score: 3 (3 votes) · LW · GW "Now I'm not clear exactly how often quantum events lead to a slightly different world" The answer is Very Very often. If you have a piece of glass and shine a photon at it, such that it has an equal chance of bouncing and going through, the two possibilities become separate worlds. Shine a million photons at it and you split into worlds, one for each combination of photons going through and bouncing. Note that in most of the worlds, the pattern of bounces looks random, so this is a good source of random numbers. Photons bouncing of glass are just an easy example, almost any physical process splits the universe very fast. Comment by donald-hobson on Why is multi worlds not a good explanation for abiogenesis · 2019-04-14T19:56:08.783Z · score: -2 (3 votes) · LW · GW The nub of the argument is that every time we look in our sock drawer, we see all our socks to be black. Many worlds says that our socks are always black. The Copenhagen interpretation says that us observing the socks causes them to be black. The rest of the time the socks are pink with green spots. Both theories make identical predictions. Many worlds is much simpler to fully specify with equations, and has elegant mathematical properties. The Copenhagen interpretation has special case rules that only kick in when observing something. According to this theory, there is a fundamental physical difference between a complex collection of atoms, and an "observer" and somewhere in the development of life, creatures flipped from one to the other. The Copenhagen interpretation doesn't make it clear if a cat is a very complex arrangement of molecules, that could in theory be understood as a quantum process that doesn't involve the collapse of wave functions, or if cats are observers and so collapses wave functions. Comment by donald-hobson on MIRI Summer Fellows Program · 2019-04-09T20:40:32.538Z · score: 2 (2 votes) · LW · GW Hello. I see that while the deadline has passed, the form is still open. Is it still worthwhile to apply? Comment by donald-hobson on Would solving logical counterfactuals solve anthropics? · 2019-04-06T13:28:43.855Z · score: 4 (3 votes) · LW · GW This supposedly "natural" reference class is full of weird edge cases, in the sense that I can't write an algorithm that finds "everybody who asks the question X". Firstly "everybody" is not well defined in a world that contains everything from trained monkeys to artificial intelligence's. And "who asks the question X" is under-defined as there is no hard boundary between a different way of phrasing the same question and slightly different questions. Does someone considering the argument in chinese fall into your reference class? Even more edge cases appear with mind uploading, different mental architectures, ect. If you get a different prediction from taking the reference class of "people" (for some formal definition of "people") and then updating on the fact that you are wearing blue socks, than you get from the reference class "people wearing blue socks", then something has gone wrong in your reasoning. The doomsday argument works by failing to update on anything but a few carefully chosen facts. Comment by donald-hobson on Would solving logical counterfactuals solve anthropics? · 2019-04-05T23:06:23.400Z · score: 1 (1 votes) · LW · GW I would say that the concept of probability works fine in anthropic scenarios, or at least there is a well defined number that is equal to probability in non anthropic situations. This number is assigned to "worlds as a whole". Sleeping beauty assigns 1/2 to heads, and 1/2 to tails, and can't meaningfully split the tails case depending on the day. Sleeping beauty is a functional decision theory agent. For each action A, they consider the logical counterfactual that the algorithm they are implementing returned A, then calculate the worlds utility in that counterfactual. They then return whichever action maximizes utility. In this framework, "which version am I?" is a meaningless question, you are the algorithm. The fact that the algorithm is implemented in a physical substrate give you means to affect the world. Under this model, whether or not your running on multiple redundant substrates is irrelivant. You reason about the universe without making any anthropic updates. As you have no way of affecting a universe that doesn't contain you, or someone reasoning about what you would do, you might as well behave as if you aren't in one. You can make the efficiency saving of not bothering to simulate such a world. You might, or might not have an easier time effecting a world that contains multiple copies of you. Comment by donald-hobson on Can Bayes theorem represent infinite confusion? · 2019-03-22T22:06:33.783Z · score: 1 (1 votes) · LW · GW In other words, the agent assigned zero probability to an event, and then it happened. Comment by donald-hobson on What failure looks like · 2019-03-18T16:51:01.487Z · score: 0 (3 votes) · LW · GW As far as I understand it, you are proposing that the most realistic failure mode consists of many AI systems, all put into a position of power by humans, and optimizing for their own proxies. Call these Trusted Trial and Error AI's (TTE) The distinguishing features of TTE's are that they were Trusted. A human put them in a position of power. Humans have refined, understood and checked the code enough that they are prepared to put this algorithm in a self driving car, or a stock management system. They are not lab prototypes. They are also Trial and error learners, not one shot learners. Some More descriptions of what capability range I am considering. Suppose hypothetically that we had TTE reinforcement learners, a little better than todays state of the art, and nothing beyond that. The AI's are advanced enough that they can take a mountain of medical data and train themselves to be skilled doctors by trial and error. However they are not advanced enough to figure out how humans work from, say a sequenced genome and nothing more. Give them control of all the traffic lights in a city, and they will learn how to minimize traffic jams. They will arrange for people to drive in circles rather than stay still, so that they do not count as part of a traffic jam. However they will not do anything outside their preset policy space, like hacking into the traffic light control system of other cities, or destroying the city with nukes. If such technology is easily available, people will start to use it for things. Some people put it in positions of power, others are more hesitant. As the only way the system can learn to avoid something is through trial and error, the system has to cause a (probably several) public outcrys before it learns not to do so. If no one told the traffic light system that car crashes are bad on simulations or past data, (Alignment failure) Then even if public opinion feeds directly into reward, it will have to cause several car crashes that are clearly its fault before it learns to only cause crashes that can be blamed on someone else. However, deliberately causing crashes will probably get the system shut off or seriously modified. Note that we are supposing many of these systems existing, so the failures of some, combined with plenty of simulated failures, will give us a good idea of the failure modes. The space of bad things an AI can get away with is small and highly complex in the space of bad things. An TTE set to reduce crime rates tries making the crime report forms longer, this reduces reported crime, but humans quickly realize what its doing. It would have to do this and be patched many times before it came up with a method that humans wouldn't notice. Given Advanced TTE's as the most advanced form of AI, we might slowly develop a problem, but the deployment of TTE's would be slowed by the time it takes to gather data and check reliability. Especially given mistrust after several major failures. And I suspect that due to statistical similarity of training and testing, many different systems optimizing different proxies, and humans having the best abstract reasoning about novel situations, and the power to turn the systems off, any discrepancy of goals will be moderately minor. I do not expect such optimization power to be significantly more powerful or less aligned than modern capitalism. This all assumes that no one will manage to make a linear time AIXI. If such a thing is made, it will break out of any boxes and take over the world. So, we have a social process of adaption to TTE AI, which is already in its early stages with things like self driving cars, and at any time, this process could be rendered irrelevant by the arrival of a super-intelligence. Comment by donald-hobson on Risk of Mass Human Suffering / Extinction due to Climate Emergency · 2019-03-14T23:41:43.915Z · score: 16 (7 votes) · LW · GW 1)Climate change caused extinction is not on the table. Low tech humans can survive everywhere from the jungle to the arctic. Some humans will survive. 2) I suspect that climate change won't cause massive social collapse. It might well knock 10% of world GDP, but it won't stop us having an advanced high tech society. At the moment, its not causing damage on that scale, and I suspect that in a few decades, we will have biotech, renewables or other techs that will make everything fine. I suspect that the damage caused by climate change won't increase by more than 2 or 3 times in the next 50 years. 3) If you are skilled enough to be a scientist, inventing a solar panel that's 0.5% more efficient does a lot more good than showing up to protests. Protest's need many people to work, inventors can change the world by themselves. Policy advisors and academics can suggest action in small groups. Even working a normal job and sending your earnings to a well chosen charity is likely to be more effective. 4) Quite a few people are already working on global warming. It seems unlikely that a problem needs 10,000,001 people working on it to solve, and if only 10,000,000 people work on it, they won't manage. Most of the really easy work on global warming is already being done. This is not the case with AI risk as of 10 years ago, for example. (It's got a few more people working on it since then, still nothing like climate change.) Comment by donald-hobson on [Fiction] IO.SYS · 2019-03-11T14:36:16.234Z · score: 4 (3 votes) · LW · GW I think the protagonist here should have looked at earth. If there was a technological intelligence on earth that cared about the state of Jupiter's moons, then it could send rockets there. The most likely scenarios are a disaster bad enough to stop us launching spacecraft, and an AI that only cares about earth. A super intelligence should assign non negligible probability to the result that actually happened. Given the tech was available, a space-probe containing an uploaded mind is not that unlikely. If such a probe was a real threat to the AI, it would have already blown up all space-probes on the off chance. The upper bound given on the amount that malicious info can harm you is extremely loose. Malicious info can't do much harm unless the enemy has a good understanding of the particular system that they are subverting. Comment by donald-hobson on Rule Thinkers In, Not Out · 2019-02-27T08:28:14.912Z · score: 7 (6 votes) · LW · GW Yet policy exploration is an important job. Unless you think that someone posting something on a blog is going to change policy without anyone double-checking it first, we should encourage suggestion of radically new policies. Comment by donald-hobson on Humans Who Are Not Concentrating Are Not General Intelligences · 2019-02-26T09:22:12.633Z · score: 27 (15 votes) · LW · GW I would like to propose a model that is more flattering to humans, and more similar to how other parts of human cognition work. When we see a simple textual mistake, like a repeated "the", we don't notice it by default. Human minds correct simple errors automatically without consciously noticing that they are doing it. We round to the nearest pattern. I propose that automatic pattern matching to the closest thing that makes sense is happening at a higher level too. When humans skim semi contradictory text, they produce a more consistent world model that doesn't quite match up with what is said. Language feeds into a deeper, sensible world model module within the human brain and GPT2 doesn't really have a coherent world model. Comment by donald-hobson on Can We Place Trust in Post-AGI Forecasting Evaluations? · 2019-02-17T21:01:29.480Z · score: 3 (3 votes) · LW · GW As your belief about how well AGI is likely to go affects both the likelihood of a bet being evaluated, and the chance of winning, so bets about AGI are likely to give dubious results. I also have substantial uncertainty about the value of money in a post singularity world. Most obviously is everyone getting turned into paperclips, noone has any use for money. If we get a friendly singleton super-intelligence, everyone is living in paradise, whether or not they had money before. If we get an economic singularity, where libertarian ASI(s) try to make money without cheating, then money could be valuable. I'm not sure how we would get that, as an understanding of the control problem good enough to not wipe out humans and fill the universe with bank notes should be enough to make something closer to friendly. Even if we do get some kind of ascendant economy, given the amount of resources in the solar system (let alone wider universe), its quite possible that pocket change would be enough to live for aeons of luxury. Given how unclear it is about whether or not the bet will get paid and how much the cash would be worth if it was, I doubt that the betting will produce good info. If everyone thinks that money is more likely than not to be useless to them after ASI, then almost no one will be prepared to lock their capital up until then in a bet. Comment by donald-hobson on Limiting an AGI's Context Temporally · 2019-02-17T18:32:43.272Z · score: 3 (3 votes) · LW · GW I suspect that an AGI with such a design could be much safer, if it was hardcoded to believe that time travel and hyperexponentially vast universes were impossible. Suppose that the AGI thought that there was a 0.0001% chance that it could use a galaxies worth of resources to send 10^30 paperclips back in time. Or create a parallel universe containing 3^^^3 paperclips. It will still chase those options. If starting a long plan to take over the world costs it literally nothing, it will do it anyway. A sequence of short term plans, each designed to make as many paperclips as possible within the next few minutes could still end up dangerous. If the number of paperclips at time is , and its power at time is , then , would mean that both power and paperclips grew exponentially. This is what would happen if power can be used to gain power and clips at the same time, with minimal loss of either from also pursuing the other. If power can only be used to gain one thing at a time, and the rate power can grow at is less than the rate of time discount, then we are safer. This proposal has several ways to be caught out, world wrecking assumptions that aren't certain, but if used with care, a short time frame, an ontology that considers timetravel impossible, and say a utility function that maxes out at 10 clips, it probably won't destroy the world. Throw in mild optimization and an impact penalty, and you have a system that relies on a disjunction of shaky assumptions, not a conjunction of them. It is a CDT agent, or something that doesn't try to punish you now so you make paperclips last week. A TDT agent might decide to take the policy of killing anyone who didn't make clips before it was turned on, causing humans that predict this to make clips. I suspect that it would be possible to build such an agent, prove that there are no weird failure modes left, and turn it on, with a small chance of destroying the world. I'm not sure why you would do that. Once you understand the system well enough to say its safe-ish, what vital info do yo gain from turning it on? Comment by donald-hobson on Extraordinary ethics require extraordinary arguments · 2019-02-17T17:19:53.640Z · score: 11 (6 votes) · LW · GW Butterfly effects essentially unpredictable, given your partial knowledge of the world. Sure, you doing homework could cause a tornado in Texas, but it's equally likely to prevent that. To actually predict which, you would have to calculate the movement of every gust of air around the world. Otherwise your shuffling an already well shuffled pack of cards. Bear in mind that you have no reason to distinguish the particular action of "doing homework" from a vast set of other actions. If you really did know what actions would stop the Texas tornado, they might well look like random thrashing. What you can calculate is the reliable effects of doing your homework. So, given bounded rationality, you are probably best to base your decisions on those. The fact that this only involves homework might suggest that you have an internal conflict between a part of yourself that thinks about careers, and a short term procrastinator. Most people who aren't particularly ethical still do more good than harm. (If everyone looks out for themselves, everyone has someone to look out for them. The law stops most of the bad mutual defections in prisoners dilemmas) Evil genius trying to trick you into doing harm are much rarer than moderately competent nice people trying to get your help to do good. Comment by donald-hobson on Short story: An AGI's Repugnant Physics Experiment · 2019-02-14T15:31:50.252Z · score: 7 (5 votes) · LW · GW This is an example of a pascals mugging. Tiny probabilities of vast rewards can produce weird behavior. The best known solution is either a bounded utility function, or a antipascalene agent. (An agent that ignores the best x% and worst y% of possible worlds when calculating expected utilities. It can be money pumped) Comment by donald-hobson on Probability space has 2 metrics · 2019-02-11T22:50:32.220Z · score: 13 (5 votes) · LW · GW Get a pack of cards in which some cards are blue on both sides, and some are red on one side and blue on the other. Pick a random card from the pile. If the subject is shown one side of the card, and its blue, they gain a bit of evidence that the card is blue on both sides. Give them the option to bet on the colour of the other side of the card, before and after they see the first side. Invert the prospect theory curve to get from implicit probability to betting behaviour. The people should perform a larger update in log odds when the pack is mostly one type of card, over when the pack is 50 : 50. Comment by donald-hobson on How important is it that LW has an unlimited supply of karma? · 2019-02-11T15:21:06.773Z · score: 4 (4 votes) · LW · GW I suspect that if voting reduced your own karma, some people wouldn't vote. As it becomes obvious that this is happening, more people stop voting, until karma just stops flowing at all. (The people who persistently vote anyway all run out of karma.) Comment by donald-hobson on Probability space has 2 metrics · 2019-02-11T10:55:11.889Z · score: 1 (1 votes) · LW · GW Fixed, thanks. ## Propositional Logic, Syntactic Implication 2019-02-10T18:12:16.748Z · score: 5 (4 votes) ## Probability space has 2 metrics 2019-02-10T00:28:34.859Z · score: 88 (36 votes) Comment by donald-hobson on X-risks are a tragedies of the commons · 2019-02-07T17:50:13.823Z · score: 2 (2 votes) · LW · GW This is making the somewhat dubious assumption that X risks are not so neglected that even a "selfish" individual would work to reduce them. Of course, in the not too unreasonable scenario where the cosmic commons is divided up evenly, and you use your portion to make a vast number of duplicates of yourself, the utility, if your utility is linear in copies of yourself, would be vast. Or you might hope to live for a ridiculously long time in a post singularity world. The effect that a single person can have on X risks is small, but if they were selfish with no time discounting, it would be a better option than hedonism now. Although a third alternative of sitting in a padded room being very very safe could be even better. Comment by donald-hobson on (notes on) Policy Desiderata for Superintelligent AI: A Vector Field Approach · 2019-02-06T00:27:27.407Z · score: 18 (5 votes) · LW · GW I suspect that the social institutions of Law and Money are likely to become increasingly irrelevant background to the development of ASI. Deterrence Fails. If you believe that there is a good chance of immortal utopia, and a large chance of paperclips in the next 5 years, the threat that the cops might throw you in jail, (on the off chance that they are still in power) is negligible. The law is blind to safety. The law is bureaucratic and ossified. It is probably not employing much top talent, as it's hard to tell top talent from the rest if you aren't as good yourself (and it doesn't have the budget or glamor to attract them). Telling whether an organization is on line for not destroying the world is HARD. The safety protocols are being invented on the fly by each team, the system is very complex and technical and only half built. The teams that would destroy the world aren't idiots, they are still producing long papers full of maths and talking about the importance of safety a lot. There are no examples to work with, or understood laws. Likely as not (not really, too much conjugation here), you get some random inspector with a checklist full of thing that sound like a good idea to people who don't understand the problem. All AI work has to have an emergency stop button that turns the power off. (The idea of an AI circumventing this was not considered by the person who wrote the list). All the law can really do is tell what public image an AI group want's to present, provide funding to everyone, and get in everyone's way. Telling cops to "smash all GPU's" would have an effect on AI progress. The fund vs smash axis is about the only lever they have. They can't even tell an AI project from a maths convention from a normal programming project if the project leaders are incentivized to obfuscate. After ASI, governments are likely only relevant if the ASI was programmed to care about them. Neither paperclippers or FAI will care about the law. The law might be relevant if we had tasky ASI that was not trivial to leverage into a decisive strategic advantage. (An AI that can put a strawberry on a plate without destroying the world, but that's about the limit of its safe operation.) Such an AI embodies an understanding of intelligence and could easily be accidentally modified to destroy the world. Such scenarios might involve ASI and timescales long enough for the law to act. I don't know how the law can handle something that, can easily destroy the world, has some economic value (if you want to flirt danger) and, with further research could grant supreme power. The discovery must be limited to a small group of people, (law of large number of nonexperts, one will do something stupid). I don't think the law could notice what it was, after all the robot in-front of the inspector only puts strawberries on plates. They can't tell how powerful it would be with an unbounded utility function. Comment by donald-hobson on Why is this utilitarian calculus wrong? Or is it? · 2019-01-28T17:06:33.310Z · score: 6 (5 votes) · LW · GW Firstly, you are confusing dollars and utils. If you buy this product for $100, you gain the use of it, at value U[30] to yourself. The workers who made it gain$80, at value U[80] to yourself, because of your utilitarian preferences. Total value U[110] If the alternative was a product of cost $100, which you value the use of at U[105], but all the money goes to greedy rich people to be squandered, then you would choose the first. If the alternative was spending$100 to do something insanely morally important, U[3^^^3], you would do that. If the alternative was a product of cost \$100, that was of value U[100] to yourself, and some of the money would go to people that weren't that rich U[15], you would do that. If you could give the money to people twice as desperate as the workers, at U[160], you would do that. There are also good reasons why you might want to discourage monopolies. Any desire to do so is not included in the expected value calculations. But the basic principle is that utilitarianism can never tell you if some action is a good use of a resource, unless you tell it what else that resource could have been used for. ## Allowing a formal proof system to self improve while avoiding Lobian obstacles. 2019-01-23T23:04:43.524Z · score: 6 (3 votes) ## Logical inductors in multistable situations. 2019-01-03T23:56:54.671Z · score: 8 (5 votes) ## Boltzmann Brains, Simulations and self refuting hypothesis 2018-11-26T19:09:42.641Z · score: 0 (2 votes) ## Quantum Mechanics, Nothing to do with Consciousness 2018-11-26T18:59:19.220Z · score: 10 (9 votes) ## Clickbait might not be destroying our general Intelligence 2018-11-19T00:13:12.674Z · score: 26 (10 votes) ## Stop buttons and causal graphs 2018-10-08T18:28:01.254Z · score: 6 (4 votes) ## The potential exploitability of infinite options 2018-05-18T18:25:39.244Z · score: 3 (4 votes)
2019-06-18 18:42:13
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5009293556213379, "perplexity": 1647.9620793537395}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627998813.71/warc/CC-MAIN-20190618183446-20190618205446-00275.warc.gz"}
https://tex.stackexchange.com/questions/168803/latex-balanced-twocolumn-text-with-footnotes-only-in-the-right-column-ftnright
# LaTeX: Balanced twocolumn Text with Footnotes only in the right column (ftnright) Do the packages ftnright and balance work together? I'm looking for a solution to generate a twocolumn text in LaTeX. My footnotes should appear at the bottom of the right column. However, my main problem is the last page: Both columns should be balanced and the footnotes should be placed right under the text. As a result the last line of the left column should be next to the last footnote. I tried to archive this with the following code. Unfortunately, if the last page is less than half filled, the last footnotes will appear on another empty page. Does anyone has an idea how to fix this? My code that generates this pdf-file. \documentclass[twocolumn,a4paper,10pt]{book} \usepackage{ftnright} \usepackage{balance} \usepackage{blindtext} \begin{document} \balance \blindtext \footnote {First Footnote} \blindtext \blindtext \footnote {Another Footnote} \blindtext \footnote {Third Text} \blindtext \blindtext \blindtext \blindtext \footnote {Problematic Footnote appears on page 3} \end{document} • Welcome to TeX.SX! You can have a look at our starter guide to familiarize yourself further with our format. Apr 1 '14 at 0:18 The simple answer is that neither balance nor multicol will work together with ftnright out of the box as all packages assume the standard LaTeX output routine in place so that they can modify it. In theory such things could coexist but in case of balancing it is not quite clear what the expected behavior should be (especially in case of multicol this is a bit problematical as you could have several balanced blocks or different numbers of columns (which is why the standard approach for multicol is to make footnote page-wide). Making the balancepackage ftnright aware is probably easier, but still would be a large rewrite of the package.
2021-09-27 20:22:03
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7606542706489563, "perplexity": 1466.0390857856773}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780058467.95/warc/CC-MAIN-20210927181724-20210927211724-00198.warc.gz"}
https://simbad.cds.unistra.fr/simbad/sim-ref?bibcode=2015ApJS..218...12L
other querymodes : Identifierquery Coordinatequery Criteriaquery Referencequery Basicquery Scriptsubmission TAP Outputoptions Help 2015ApJS..218...12L - Astrophys. J., Suppl. Ser., 218, 12 (2015/May-0) Jet luminosity of gamma-ray bursts: the blandford-znajek mechanism versus the neutrino annihilation process. LIU T., HOU S.-J., XUE L. and GU W.-M. Abstract (from CDS): A neutrino-dominated accretion flow (NDAF) around a rotating stellar-mass black hole (BH) is one of the plausible candidates for the central engine of gamma-ray bursts (GRBs). Two mechanisms, i.e., the Blandford-Znajek (BZ) mechanism and the neutrino annihilation process, are generally considered to power GRBs. Using the analytic solutions from Xue et al. and ignoring the effects of the magnetic field configuration, we estimate the BZ and neutrino annihilation luminosities as functions of the disk masses and BH spin parameters to contrast the observational jet luminosities of GRBs. Our results show that although the neutrino annihilation processes could account for most GRBs, the BZ mechanism is more effective, especially for long-duration GRBs. Actually, if the energy of the afterglows and flares of GRBs is included, then the distinction between these two mechanisms is more significant. Furthermore, massive disk mass and high BH spin are beneficial for powering the high luminosities of GRBs. Finally, we discuss possible physical mechanisms that could enhance the disk mass or neutrino emission rate of NDAFs and the relevant difference between these two mechanisms.
2023-03-30 05:23:26
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8599722981452942, "perplexity": 4049.9441373725717}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296949097.61/warc/CC-MAIN-20230330035241-20230330065241-00557.warc.gz"}