https://hossfeld.github.io/performance-modeling/chapter3_stochasticProcesses/ch3-5-MMn_stateDependentArrivalRates.html
# M(x)/M/K System with State-dependent Arrival Rates

(c) Tobias Hossfeld (Aug 2021)

This script and the figures are part of the following book. The book is to be cited whenever the script is used (copyright CC BY-SA 4.0): Tran-Gia, P. & Hossfeld, T. (2021). Performance Modeling and Analysis of Communication Networks - A Lecture Note. Würzburg University Press. https://doi.org/10.25972/WUP-978-3-95826-153-2

We consider a loss system with $K$ servers. If all $K$ servers are occupied, incoming arrivals are rejected. The interarrival times $A_i$ are negative-exponentially distributed with rate $\lambda_i$ when the system is in state $[X=i]$ for $i=0,\dots,K$. The service time of a job follows an exponential distribution with rate $\mu$. The state-dependent arrival rates are denoted by M(x) - or sometimes M$_x$ - in Kendall's notation: M(x)/M/K-0.

## Analysis of the System

The system is a Markovian system. To be more precise, we have a birth-and-death process, since transitions occur only between neighboring states. The state of the system is the number $X$ of jobs in the system. The transition rate $[X=i] \to [X=i+1]$ corresponds to the state-dependent arrival rate, and we assume $\lambda_i = (i+1) \lambda$ for a given $\lambda$ and $i=0,\dots,K$. Since there are $K$ servers, the service rate is $\mu_i = i \mu$ for $i=1,\dots, K$.

### State Probabilities

The state probabilities are $P(X=i)=x(i)$. The macro state equations are $\lambda_{i-1} x(i-1) = \mu_{i} x(i)$ for $i=1,\dots, K$.
We obtain the following state probabilities with the parameter $a = \lambda/\mu$:

$x(i) = \frac{\lambda_{i-1}}{\mu_i} x(i-1) = \frac{i \lambda}{i \mu} x(i-1) = a \cdot x(i-1) = a^i x(0)$

The state probability for the empty system $x(0)$ follows from the normalization condition:

$1 = \sum_{i=0}^K x(i) = \sum_{i=0}^K a^i x(0) = x(0) \sum_{i=0}^K a^i = x(0) \frac{1-a^{K+1}}{1-a} \quad \Rightarrow \quad x(0) = \frac{1-a}{1-a^{K+1}}$

## Blocking Probability

The PASTA property cannot be applied here, since the arrival process is not a Poisson process (the interarrival times are exponentially distributed, but with state-dependent rates). As a consequence, we need to derive the state probability $x_A(i)$ that an arriving customer finds the system in state $[X_A=i]$. Then, the blocking probability is $p_B = x_A(K)$. To this end, we use the strong law of large numbers for Markov chains:

$x_A(i) = \frac{\lambda_i \cdot x(i)}{\sum_{j=0}^K \lambda_j \cdot x(j)}$

Note that the denominator is the mean arrival rate $\bar{\lambda}$ of the system:

$\bar{\lambda} = E[\lambda] = \sum_{i=0}^K \lambda_i x(i) = \sum_{i=0}^K (i+1)\lambda a^i \frac{1-a}{1-a^{K+1}} = \lambda \left( \frac{2-a}{1-a}+K - \frac{K+1}{1-a^{K+1}} \right)$

Thus:

$x_A(i) = \frac{\lambda_i}{\bar{\lambda}} \cdot x(i)$

Finally, we obtain:

$p_B = x_A(K) = \frac{(K+1)\lambda}{\bar{\lambda}} x(K) = \frac{(a-1)^2 (K+1) a^K}{((a-1) K+a-2) a^{K+1}+1}$

## Mean Number of Customers in the System

Due to Little's law: $E[X] = (1-p_B)\bar{\lambda} \cdot E[B] = (1-p_B)\bar{\lambda} \cdot \frac{1}{\mu}$. Alternatively: $E[X]=\sum_{i=0}^K i \cdot x(i) = \sum_{i=0}^K i \cdot a^i \cdot x(0) = x(0) \frac{a(Ka^{K+1}-(K+1)a^K+1)}{(1-a)^2} = \frac{a(Ka^{K+1}-(K+1)a^K+1)}{(1-a)(1-a^{K+1})}$
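The closed-form expressions above are easy to check numerically. The following is a minimal sketch (the function name and the example parameters are ours, not from the book) that computes the state probabilities $x(i)$, the arrival-seen distribution $x_A(i)$, the blocking probability $p_B$, and $E[X]$, and cross-checks the result against Little's law:

```python
def mxmk_loss_system(lam, mu, K):
    """State probabilities and metrics of the M(x)/M/K-0 loss system
    with state-dependent arrival rates lambda_i = (i+1)*lambda."""
    a = lam / mu
    # x(i) = a^i * x(0); obtain x(0) via normalization
    x = [a ** i for i in range(K + 1)]
    total = sum(x)
    x = [xi / total for xi in x]
    # state-dependent arrival rates and mean arrival rate
    lam_i = [(i + 1) * lam for i in range(K + 1)]
    lam_bar = sum(l * xi for l, xi in zip(lam_i, x))
    # distribution seen by arriving customers (PASTA does not hold here)
    x_A = [l * xi / lam_bar for l, xi in zip(lam_i, x)]
    p_B = x_A[K]                                 # blocking probability
    EX = sum(i * xi for i, xi in enumerate(x))   # mean number in system
    return x, p_B, EX, lam_bar

x, p_B, EX, lam_bar = mxmk_loss_system(lam=1.0, mu=2.0, K=5)
print(p_B)  # ≈ 0.05 for these example parameters
# cross-check via Little's law: E[X] = (1 - p_B) * lam_bar / mu
assert abs(EX - (1 - p_B) * lam_bar / 2.0) < 1e-12
```

For $\lambda=1$, $\mu=2$, $K=5$ (i.e. $a=0.5$), the closed-form $\bar{\lambda}$ and $p_B$ formulas above give the same values as this direct computation.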
https://stats.meta.stackexchange.com/questions/1200/current-tag-synonym-candidates/4363
# Current tag synonym candidates

Please post your tag synonym suggestions as new answers in this thread, one answer per suggestion. Upvote answers where you believe that the suggested tags should be made synonyms, and downvote answers where you believe the tags should remain separate. Well-upvoted suggestions will eventually be implemented by the moderators (and then the corresponding answers will be deleted). • @whuber, if this is inappropriate or unhelpful, then I can delete it, but my goal is the opposite: I don't want you (or the other mods) to have to spend time thinking about this & doing a lot of extra research. I'm hoping to bring this to the attention of the community, which, by voting (commenting, etc), will have done all of that for you guys. Nb, w/ respect to lme & multilevel, I'm not sure anyone will have the requisite upvotes--there have only been 3 questions w/ multilevel, according to its page. – gung - Reinstate Monica Jun 7 '12 at 16:59 • Comments are not for extended discussion; this conversation has been moved to chat. – whuber Jan 21 '16 at 14:55 There is a tag that seems to be used indistinguishably from . It could be declared a synonym of the latter. • Maybe / probably / I'm not sure. Both tags are routinely poorly used, IMO. It'd be nice if some other people weighed in as well. – gung - Reinstate Monica Aug 7 at 16:59 • Experience suggests that application of the mathematical-statistics tag by the OP indicates that to them math is a mystery, whence any appearance of mathematics is "mathematical statistics." That seems to be the case about 80-90% of the time. I have therefore been pondering the idea of "burninating" this tag, but it is meaningful and does have its uses when applied by people who know what they are writing about. A while ago I edited the tag wiki to clarify use of this tag, but to no avail: evidently, people who are mystified by math tend not to read or follow directions, either :-(. 
– whuber Aug 19 at 14:52 • BTW, theory strikes me as being almost totally useless as a tag. Perhaps the better action would be to burninate that tag? – whuber Aug 19 at 14:54 • @whuber, the easiest way for us to burninate a tag is to make it a synonym of another, merge the 2, & then delete the merge. Then the tag is gone. Otherwise, we need to contact the CMs, I think, who are overworked & reluctant to do it, & a lot of prior steps need to have been taken already to satisfy them. Would you be OK w/ making this synonym so as to burninate? – gung - Reinstate Monica Sep 15 at 19:16 Do we really need two tags for [parametric] and [nonparametric]? I propose that we have a single tag that covers both. Given that existing usage has favored np, we could make that the master, but also update the excerpt / usage guidance and full wiki. Updated suggestion: We create a new tag and map both of the original tags to it. To wit: $$\rightarrow$$ $$\leftarrow$$ • [nonparametric] 1,290 threads, an excerpt & wiki • [parametric] 252 threads (of which 95 have both), an excerpt, but no wiki The current excerpts are, [nonparametric]: Procedures that rely on relatively few assumptions about underlying probability distributions. [parametric]: Statistical models described by a finite number of real-valued parameters. Often used in contrast to non-parametric statistics. I'm certainly open to discussion about the new excerpt / usage guidance, but the kind of thing I have in mind might be: Use this tag to ask about the nature of nonparametric or parametric methods, or the difference between the two. Nonparametric methods generally rely on few assumptions about the underlying distributions, whereas parametric methods make assumptions that allow data to be described by a small number of parameters. Note that this excerpt has 312 characters, which is longer than the typical excerpt, but well within the limits, and would not be the longest excerpt, even among the top tags. 
• If there is no substantive disagreement, I'll implement this in a week (4/12/19). – gung - Reinstate Monica Apr 5 '19 at 17:06 • How about mapping both to [parametric-nonparametric]? Too cumbersome? – amoeba Apr 5 '19 at 18:51 • @amoeba, I have updated the suggestion. – gung - Reinstate Monica Apr 8 '19 at 14:27 • I like the [nonparametric] tag. This change makes me worry that I will end up with unhelpful suggestions. To that extent, looking at the [parametric]-tagged questions as they stand, it seems that the use of the [parametric] tag is pretty random. – usεr11852 Apr 27 '19 at 22:59 • @usεr11852, what is your suggestion, just get rid of p? – gung - Reinstate Monica Apr 28 '19 at 16:52 • I am uncertain that we must change something; why is leaving things as they are a problem? np works fine, and OK, p is a bit of a mess, but the merge won't fix that. – usεr11852 Apr 28 '19 at 16:55 • @usεr11852, because the questions are ultimately on the same topic, but split up into different tags. There seem to be few threads under p that are about anything other than the distinction b/t p & np, & those that aren't about that don't really form a coherent grouping. So instead of the tags helping to organize the information on the site, they are preventing the information from being well organized. – gung - Reinstate Monica Apr 29 '19 at 0:55 • I do not think they are. They are on the same topic if the question is about choosing between a parametric or a non-parametric model. If the question is specific to a non-parametric technique, then they simply are not on the same topic. I agree that p questions are mostly either "incoherent" or aim to distinguish between np and p, but the questions in np are mostly OK, so I cannot see how this won't hurt np. – usεr11852 Apr 29 '19 at 8:37 • @usεr11852, that's what I mean by on the same topic. The np are largely about the distinction b/t p & np, or about choosing between them, & so are most of the p threads. 
There is also a large chunk of np threads that are strictly about np (not the difference), & there is an incoherent mish-mash of p threads. Thus, we retag those threads on idiosyncratic topics, then merge the threads on the difference b/t p & np (that are tagged p) into the bulk of such threads (which are tagged np). – gung - Reinstate Monica Apr 29 '19 at 13:31 • 1 possibility would be to have 2 tags, np & p-np, for Qs about np strictly vs the distinction b/t the 2. This makes organizational sense in the abstract, but in actual usage, I think it would work worse than just having 1 tag b/c I doubt the modal user would be sophisticated enough statistically or savvy enough w/ the SE system to use a more complicated scheme correctly. Thus, we'd end up w/ less well organized information. I think a simpler scheme that groups the related threads together is likely to work best in practice. I'm open to just having np or to just having p-np, whichever people prefer. – gung - Reinstate Monica Apr 29 '19 at 13:37 • I don't think we need the [p] tag at all, to be honest; it's just a mish-mash. So one option would be to go through the entirety of [p] Qs, and remove it from everything that is not about parametric vs non-parametric. Afterwards, merge [p] into [np]. After that, delete the synonym mapping, eliminating [p] entirely. If I understood @usεr11852 correctly, they would be fine with this, but I am not sure how much work that would be and if anybody is willing to do it. – amoeba Apr 30 '19 at 9:09 • @amoeba: Agreed. – usεr11852 Apr 30 '19 at 10:25 • @amoeba, that's essentially what I had initially suggested. I'm also fine w/ creating a new tag [p-np] that ends up housing both. Basically, I think we're better off getting rid of p 1 way or another. – gung - Reinstate Monica Apr 30 '19 at 15:25 Tags (most used) and both cover distances/divergences between probability distributions and should be synonymized. 
There is also a tag with an unclear, very general tag wiki which seems to be more used for clustering ... but there is overlap. What to do? • df seems to be gone now. dis & div do seem similar, but also seem to not be used identically, at least in the earlier threads. There are only 4 threads w/ both (vs 593 for dis), & there are only 16 threads w/ div. – gung - Reinstate Monica Sep 15 at 19:20
http://physics.aps.org/synopsis-for/10.1103/PhysRevLett.112.201802
# Synopsis: The Dark Side of the Higgs

The decay of the Higgs boson into “invisible particles” delivers no evidence of physics beyond the standard model, putting new limits on dark matter theories.

The recently detected Higgs boson may act as a link between particles we are familiar with and particles that have so far evaded detection, such as dark matter. To investigate this possibility, a new study has searched through data from the Large Hadron Collider (LHC) for events where a Higgs boson decays into “invisible particles,” which leave no trace in the LHC’s detectors. The ATLAS collaboration, reporting in Physical Review Letters, finds that the probability of these types of events does not exceed values predicted by the standard model, a result they use to severely constrain theories based on low-mass dark matter particles. The ATLAS and CMS detectors at the LHC announced their joint discovery of the Higgs boson in July 2012. The detections were based mainly on two interaction pathways in which proton-proton collisions produce Higgs bosons that decay either into two gamma rays or into two Z bosons. But the Higgs may also decay into invisible particles, which may be part of the standard model (like neutrinos) or beyond it (like dark matter). In their new analysis, the ATLAS collaboration focused on collisions that produce a Z boson and a Higgs boson, with the latter decaying invisibly. The Z boson is detected through its decay into a pair of electrons or muons, whereas the Higgs boson is inferred from missing momentum in the collision products. After subtracting background events, the researchers estimated that the Higgs decays into invisible particles no more than 75% of the time. This, however, is consistent with standard model predictions, allowing the researchers to place the strongest limits yet on the interaction probability of Higgs bosons with dark matter particle candidates in the mass range between 1 and 10 giga-electron-volts. 
– Michael Schirber
https://www.hackmath.net/en/math-problem/1641
# Briefcase

The cost of producing a briefcase is €45. The manufacturer wants to sell it at a 30% profit. For how much will the briefcase sell?

Result: x = 58.5 eur

#### Solution:

$q=1+ 30/100=\dfrac{ 13 }{ 10 }=1.3 \ \\ x=q \cdot \ 45=1.3 \cdot \ 45=\dfrac{ 117 }{ 2 }=58.5 \ \text{eur}$

Our examples were largely sent or created by pupils and students themselves. Therefore, we would be pleased if you could send us any errors you found, spelling mistakes, or rephrasings of the example. Thank you!

## Next similar math problems:

1. I have long watched Socket A processors on eBay: an Athlon XP 1.86 GHz with a PR rating of 2500+ costs $7, and an Athlon XP 2.16 GHz with a PR rating of 3000+ currently costs $16. Calculate about what percentage the Athlon XP 2.16 GHz is more powerful than the Athlon
2. Sales off: After discounting 40%, the goods cost €15. How much did the goods cost before the discount?
3. Backpack: A large backpack costs CZK 1352; a little one is 35% cheaper. How much did we pay for 5 large and 2 small backpacks?
4. Sale: A camera has a listed price of $751.98 before tax. If the sales tax rate is 9.25%, find the total cost of the camera with sales tax included.
5. Iron: Iron ore contains 57% iron. How much ore is needed to produce 20 tons of iron?
6. Gloves: I have a box with two hundred gloves in total, split into ten parcels of twenty pieces, and I sell three parcels. What percent of the total amount did I sell?
7. Profitability: The purchase price of goods is 13000, the sales price is 20000. What is the profitability as a percentage?
8. Percents - easy: How many percent is 432 out of 434?
9. Double percent: What is 80% of 60% of 2800?
10. Conference: 148 is the total number of employees. The conference was attended by 22 employees. How much is that in percent?
11. Base, percents, value: The base is 344084, which is 100%. How many percent is 384177?
12. Highway repair: The highway repair was planned for 15 days. However, it was reduced by 30%. How many days did the repair of the highway last?
13. Apples 2: James has 13 apples. He has 30 percent more apples than Sam. How many apples does Sam have?
14. Sales off: Goods are worth €70, and the price fell two weeks in a row by 10%. By how many % did it decrease overall?
15. Class: In class 7.C there are 10 girls and 20 boys. Yesterday 20% of the girls and 50% of the boys were missing. What percentage of students were missing?
16. Percentages: Expressed as a percentage:
17. Mr. Vojta: Mr. Vojta put 35000 Kč in the bank. After a year the bank credited him with an interest rate of 2% of the deposit amount. How much will Mr. Vojta then have in the bank?
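The markup calculation in the briefcase solution above is a one-step computation; as a minimal sketch (the function name is ours, for illustration):

```python
def selling_price(cost, profit_pct):
    """Selling price that yields the given percent profit on the production cost."""
    return cost + cost * profit_pct / 100

print(selling_price(45, 30))  # 58.5
```

The same helper answers several of the related problems, e.g. a 30% markup on any other cost.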
https://adamsheffer.wordpress.com/category/math-events/
# The Baruch Distinguished Mathematics Lecture Series

I am happy to announce the beginning of the Baruch Distinguished Mathematics Lecture Series. In this series we will bring established mathematicians to give talks to a general mathematical audience. Our first Distinguished Lecture, by Bjorn Poonen, will be “Undecidability in Number Theory”. Click here for the full details. The talk is open to everyone, and includes refreshments. After the talk we will also go to lunch with the speaker.

# NYC Discrete Geometry: Introductory Meeting

I am excited to announce the official start of our new Discrete Geometry group! This event will also be the first meeting of the NYC Geometry Seminar in the CUNY Graduate Center (midtown Manhattan). It will take place at 2pm on Friday, August 31st. If you are in the NYC area and interested in Discrete Geometry and related topics – come join us! The introductory meeting will not consist of a standard seminar presentation. Instead, the purpose of the event is to meet people who are interested in Discrete Geometry, Computational Geometry, and so on (with people coming from NYU, Princeton, various CUNYs, etc). Participants will introduce themselves to the audience and mention the main topics that interest them. You are welcome to either introduce yourself, or just to come listen and have some pastries. If you introduce yourself, you are encouraged to briefly state a major open problem you wish you could solve. For the exact location, see the event page. Personally, I plan to have more technical math discussions with some participants before and after this meeting.

# Difference Sets with Local Properties

I recently attended a wonderful workshop about Algebraic methods in combinatorics, which took place in Harvard’s CMSA. There were many interesting participants from a variety of combinatorial fields, and a very friendly/productive atmosphere. 
My talk focused on a recent work with Cosmin Pohoata, and I also mentioned some distinct distances results that we derived. During the talk Zeev Dvir asked about an additive variant of the problem. After thinking about this variant for a bit, I think that it is a natural and interesting problem. Surprisingly, so far I did not manage to find any hint of previous work on it (this might say more about my search capabilities than about the problem…)

Zeev Dvir and Cosmin Pohoata.

Let $\phi(n,k,\ell)$ denote the minimum size $A-A$ can have, when $A$ is a set of $n$ real numbers with the property that for any $A' \subset A$ with $|A'|=k$ we have $|A'-A'|\ge \ell$. That is, by having a local additive property of every small subset, we wish to obtain a global additive property of the entire set. For simplicity, we will ignore zero in the difference set. Similarly, we will ignore negative differences. These assumptions do not change the problem, but make it easier to discuss. As a first example, note that $\phi(n,3,3)$ is the minimum number of differences determined by a set of $n$ reals with no 3-term arithmetic progressions. Behrend’s construction is a set $A$ of positive integers $a_1< a_2 < \cdots < a_n$ with no 3-term arithmetic progression and $a_n < n2^{O(\sqrt{\log n})}$. Thus, $\phi(n,3,3) < n2^{O(\sqrt{\log n})}$. For another simple example, consider a constant $k\ge 4$. Since we consider only positive differences, any set of $k$ reals determines at most $\binom{k}{2}$ differences. If a specific difference $d$ repeats $\lfloor k/2 \rfloor$ times, then by taking the numbers that span $d$ we obtain $A'\subset A$ such that $|A'|\le k$ and $|A'-A'| \le \binom{k}{2}- \lfloor k/2 \rfloor+1$. Thus, by asking every subset of size $k$ to span at least $\binom{k}{2}- \lfloor k/2 \rfloor+2$ differences, we obtain that no difference repeats $\lfloor k/2 \rfloor$ times in $A$. 
In other words, $\phi\left(n,k,\binom{k}{2}-\lfloor k/2 \rfloor +2 \right) = \Omega\left(n^2\right).$ Repeating a simple argument of Erdős and Gyárfás gives $\phi\left(n,k,\binom{k}{2}-\lfloor k/2 \rfloor +1\right) = \Omega\left(n^{4/3}\right).$ That is, when moving from $\ell = \binom{k}{2}-\lfloor k/2 \rfloor +2$ to $\ell = \binom{k}{2}-\lfloor k/2 \rfloor +1$, we move from a trivial problem to a wide open one. My work with Cosmin Pohoata leads to the following result.

Theorem 1. For any $d\ge 2$ there exists $c$ such that $\phi\left(n,k,\binom{k}{2}-k\frac{d}{d+1}+c\right) =\Omega\left(n^{1+1/d} \right).$

For example, when $d=2$ we get the bound $\phi\left(n,k,\binom{k}{2}-\frac{2k}{3}+c\right) =\Omega\left(n^{3/2} \right).$ When $d=3$ we get a significant improvement for the range of the Erdős-Gyárfás bound: $\phi\left(n,k,\binom{k}{2}-\frac{3k}{4}+c\right) =\Omega\left(n^{4/3} \right). \qquad \qquad \qquad (1)$ Since not much is known for this problem, it seems plausible that additional bounds could be obtained using current tools. Our technique does not rely on any additive properties, and holds for a more abstract scenario of graphs with colored edges. Hopefully in the case of difference sets one would be able to use additive properties to improve the bounds. Moreover, so far I know nothing about much smaller values of $\ell$, such as $\phi(n,k,100k)$.

Proof sketch for Theorem 1. For simplicity, let us consider the case of $d=3$, as stated in $(1)$. Other values of $d$ are handled in a similar manner. Let $A$ be a set of $n$ reals, such that any $A'\subset A$ of size $k$ satisfies $|A'-A'|\ge \binom{k}{2}-\frac{3k}{4}+13$. We define the third distance energy of $A$ as $E_3(A) = \left|\left\{(a_1,a_2,a_3,b_1,b_2,b_3) \in A^6 :\, a_1-b_1=a_2-b_2=a_3-b_3 >0 \right\}\right|.$ The proof is based on double counting $E_3(A)$. For $\delta\in {\mathbb R}$, let $m_\delta = \left|\left\{(a,b)\in A^2 : a-b = \delta\right\}\right|$. 
That is, $m_\delta$ is the number of representations of $\delta$ as a difference of two elements of $A$. Note that, for a fixed $\delta$, the number of 6-tuples that satisfy $a_1-b_1=a_2-b_2=a_3-b_3=\delta$ is $m_\delta^3$. A simple application of Hölder's inequality implies $E_3(A) = \sum_{\delta>0} m_\delta^3 \ge \frac{n^6}{|A-A|^2}.$ To obtain a lower bound for $|A-A|$, it remains to derive an upper bound for $E_3(A)$. For $j\in {\mathbb N}$ let $k_j$ denote the number of differences $\delta \in {\mathbb R}^+$ such that $m_\delta \ge j$. A dyadic decomposition gives $E_3(A) = \sum_{\delta>0} m_\delta^3 = \sum_{j=1}^{\log n} \sum_{\substack{\delta>0 \\ 2^j \le m_\delta < 2^{j+1}}} m_\delta^3< \sum_{j=1}^{\log n} k_{2^j} 2^{3(j+1)}. \qquad \qquad \qquad (2)$ For $j\in {\mathbb N}$ let $\Delta_j$ denote the set of $\delta>0$ with $m_\delta\ge j$ (so $|\Delta_j| = k_j$). For $\delta >0$, let $A_\delta$ be the set of points that participate in at least one of the representations of $\delta$. If there exist $\delta_1,\delta_2, \delta_3$ such that $|A_{\delta_1} \cap A_{\delta_2} \cap A_{\delta_3}| \ge k/4$, then there exists a subset $A'\subset A$ with $|A'|=k$ and $|A'-A'|< \binom{k}{2}-\frac{3k}{4}+13$ (see the paper for a full explanation). Thus, for every $\delta_1,\delta_2, \delta_3$ we have that $|A_{\delta_1} \cap A_{\delta_2} \cap A_{\delta_3}| < k/4$. We have $k_j$ sets $A_\delta$ with $|A_\delta| \ge j$. These are all subsets of the same set $A$ of size $n$, and every three intersect in fewer than $k/4$ elements. We now have a set-theoretic problem: how many large subsets can $A$ have with no three having a large intersection? We can use the following counting lemma (for example, see Lemma 2.3 of Jukna’s Extremal Combinatorics) to obtain an upper bound on $k_j$.

Lemma 2. Let $A$ be a set of $n$ elements and let $d\ge 2$ be an integer. Let $A_1,\ldots,A_k$ be subsets of $A$, each of size at least $m$. 
If $k \ge 2d n^d/m^d$ then there exist $1\le j_1 < \ldots < j_d \le k$ such that $|A_{j_1}\cap \ldots \cap A_{j_d}| \ge \frac{m^d}{2n^{d-1}}$.

Lemma 2 implies the bound $k_j = O(n^3/j^3)$ for large values of $j$. Combining this with $(2)$ and with a couple of standard arguments leads to $E_3(A) = O(n^{10/3})$. Combining this with $E_3(A) \ge \frac{n^6}{|A-A|^2}$ implies $|A-A|=\Omega(n^{4/3})$. $\Box$

# The 2nd Elbe Sandstones Geometry Workshop

I’ve been quiet for a couple of weeks because I am doing some traveling. My first stop was The 2nd Elbe Sandstones Geometry Workshop. This workshop had an interesting location — a mountain in the middle of nowhere in the Czech Republic. Here is a picture of most of the participants. The 1st Elbe Sandstones Geometry Workshop took place 13 years ago. Following is a picture from there, in front of the same door (it is also the only picture I ever saw of Micha Sharir without a beard).

# Random Stories from IPAM – Part 2

If you are not in Los Angeles but are interested in these topics, you can now view videos of many of the talks that we had here. Talks from the tutorials week can be found here. Talks from the workshop “Combinatorial Geometry Problems at the Algebraic Interface” can be found here. I assume that talks from the workshop “Tools from Algebraic Geometry” will also be available soon.

A talk by Joseph Landsberg.

Another brief update: You might remember that in my previous IPAM post I was excited about a talk by Larry Guth. Not only can you now watch the video of this talk, but you can also read the paper. And now for the quote of the week:

It is like defining a ham sandwich as “what you have in your lunchbox after taking the apple out”.

Ben Lund, unsatisfied with a famous textbook’s definition of Grassmannians.

After three weeks without any main events, another workshop begins tomorrow. So more updates will follow.

# Random Stories from IPAM – Part 1

Since my previous post, I moved from freezing New York to sunny LA. 
I am participating in a semester on Algebraic Techniques for Combinatorial and Computational Geometry, at the IPAM institute. The lack of posts on the blog in the past several weeks is due to the constant activities and the large number of interesting people to interact with. This post contains some random stories from my stay at IPAM. During Pi Day (March 14th), all of the food served in IPAM was round. So far the main events were a week of tutorials and another week consisting of a workshop about “Combinatorial Geometry Problems at the Algebraic Interface”. These contained many interesting talks, which were also videotaped. Once the videos are online, I will post a link on the blog. Here I only mention one talk which gave me quite a surprise – Larry Guth's talk. At the beginning of his talk, Larry stated that he would present a significantly simpler variant of part of the distinct distances proof (the one by Katz and himself). You might remember that, using the Elekes-Sharir framework, the distinct distances problem is reduced to a point-line incidences problem in ${\mathbb R}^3$: Given a set of $n$ lines, such that every point is incident to at most $O(\sqrt{n})$ of the lines and that every plane and regulus contain at most $O(\sqrt{n})$ of the lines, what is the maximum number of points that can be incident to at least $k$ of the lines (where $2\le k \le \sqrt{n}$)? Larry’s new technique proves the following slightly weaker incidences bound.

Theorem (Guth '14). Consider a set $\cal L$ of $n$ lines in ${\mathbb R}^3$, so that any surface of degree at most $c_\varepsilon$ (a constant that depends only on $\varepsilon$) contains at most $\sqrt{n}$ lines of ${\cal L}$. 
Then for any $\varepsilon>0$ and $2 \le r \le \sqrt{n}$, the number of points of ${\mathbb R}^3$ that are contained in at least $r$ lines of $\cal L$ is $O(\frac{n^{3/2+\varepsilon}}{r^2})$.

The surprising part is that the new proof was based on constant-sized partitioning polynomials (on which I plan to write a couple of expository posts, as part of my expository series about the polynomial method). When using such polynomials for problems of this sort, one encounters a difficulty. It is hard to describe this difficulty without first explaining the technique, but my impression is that this difficulty was the main issue in various other recent incidences-related projects, and that now we might see various other works that rely on Larry’s technique. In his talk, Larry also mentioned that this technique can work for other types of curves, which immediately implies a series of improved point-curve incidence bounds in ${\mathbb R}^3$.

A talk by Tao at IPAM. How many of the mathematicians in the audience can you recognize?

And for something completely different: I had an issue with my visa, and was told that I should exit and reenter the country. This resulted in a 13-hour bus trip to Tijuana and back to LA. My only souvenir from this trip is the following picture of a pharmacy for people that are waiting in line to enter the US. I wonder what sort of things people buy at a pharmacy while waiting to go through immigration…

There’s a lot more to tell, so more IPAM stories later on.
http://www.acmerblog.com/hdu-4465-candy-7478.html
2015 07-16

# Candy

LazyChild is a lazy child who likes candy very much. Despite being very young, he has two large candy boxes, each containing n candies initially. Every day he chooses one box and opens it. He chooses the first box with probability p and the second box with probability (1 – p). For the chosen box, if there are still candies in it, he eats one of them; otherwise, he will be sad and then open the other box. He has been eating one candy a day for several days. But one day, when opening a box, he finds no candy left. Before opening the other box, he wants to know the expected number of candies left in the other box. Can you help him?

There are several test cases. For each test case, there is a single line containing an integer n (1 ≤ n ≤ 2 × 10^5) and a real number p (0 ≤ p ≤ 1, with 6 digits after the decimal). Input is terminated by EOF.

Sample input:

10 0.400000
100 0.500000
124 0.432650
325 0.325100
532 0.487520
2276 0.720000

Sample output:

Case 1: 3.528175
Case 2: 10.326044
Case 3: 28.861945
Case 4: 167.965476
Case 5: 32.601816
Case 6: 1390.500000

The first problem, numeric overflow, can be solved as follows. The binomial coefficient is

C(n,m) = n!/(m!(n-m)!)

To avoid computing n! directly, take logarithms on both sides, which gives:

ln(C(n,m)) = ln(n!) - ln(m!) - ln((n-m)!)
Simplifying further turns the repeated multiplication into repeated addition; since ln(n) is always small, the expression above is very unlikely to overflow.

To address the second problem, efficiency, we can simplify the expression one step further. It already replaces the products by linear summations, which greatly reduces the computational complexity, but it can be optimized more. It is easy to see that the three terms on the right-hand side necessarily share common parts. Splitting the first term on the right into two parts lets one of them cancel, which removes 2m logarithm evaluations and additions. The formula can be optimized still further: in the sum above, when m < n/2 the value n - m is large, but when m > n/2 it is much smaller. Since

C(n,m) = C(n,n-m)

we can replace any m smaller than n/2 by the value n - m larger than n/2 before computing; the result is the same, but the amount of computation is reduced.

Once ln(C(n,m)) has been computed, exponentiating recovers the binomial coefficient:

C(n,m) = exp(ln(C(n,m)))

This completes the computation of the binomial coefficient. With this method, if only ln(C(n,m)) is needed, n can go up to the limit of the integer type, 65535.

```cpp
#include <iostream>
#include <cstdio>
#include <cmath>
using namespace std;

double f[400008];  // f[i] = ln(i!)

// ln C(m, n) from the precomputed log-factorial table
double C_m_n(int m, int n) {
    return f[m] - f[n] - f[m - n];
}

int main() {
    f[0] = 0;
    for (int i = 1; i <= 400006; i++)
        f[i] = f[i - 1] + log(i * 1.0);

    double sum, p;
    int n, test = 0;
    while (cin >> n >> p) {
        sum = 0;
        printf("Case %d: ", ++test);
        // Sum over k: (n - k) candies left in the other box, times the
        // probability of each of the two "which box emptied first" cases.
        for (int k = 0; k <= n - 1; k++) {
            sum += (n - k) * (exp(C_m_n(n + k, k) + (n + 1) * log(p) + k * log(1 - p))
                            + exp(C_m_n(n + k, k) + (n + 1) * log(1 - p) + k * log(p)));
        }
        printf("%.6f\n", sum);
    }
}
```
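The same log-factorial trick translates almost line for line into other languages. As a rough sketch (Python, with illustrative names): precomputing ln(i!) once lets each ln C(n,m) be evaluated in constant time, and exponentiating recovers the coefficient while it still fits comfortably in a double:

```python
import math

# Precompute ln(i!) for i = 0..N once; each entry stays small, so the
# table never overflows the way raw factorials would.
N = 400_000
log_fact = [0.0] * (N + 1)
for i in range(1, N + 1):
    log_fact[i] = log_fact[i - 1] + math.log(i)

def log_comb(n, m):
    """ln C(n, m) = ln n! - ln m! - ln (n-m)! from the table."""
    return log_fact[n] - log_fact[m] - log_fact[n - m]

def comb(n, m):
    """Recover C(n, m) by exponentiating; exact only while the result is
    small enough for floating point to resolve the nearest integer."""
    return round(math.exp(log_comb(n, m)))
```

For example, `comb(10, 3)` gives 120, while `log_comb` remains usable far beyond the range where the factorials themselves could be stored.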
https://en.wikibooks.org/wiki/Introduction_to_Chemical_Engineering_Processes/Notation
# Introduction to Chemical Engineering Processes/Notation

## Base Notation (in alphabetical order)

${\displaystyle [i]_{n}}$ : Molarity of species i in stream n
a, b, c, d: Stoichiometric coefficients
A: Area
C: Molar concentration (mol/L)
K: Equilibrium coefficient
m: Mass
MW: Molecular Weight (Molar Mass)
n: Moles
n: Number of data points (in statistics section)
N: Number of components
P: Pressure
r: Regression coefficient
R: Universal gas constant
T: Temperature
v: Velocity
V: Volume
x: Mole fraction in the liquid phase OR Mass fraction [1]
X: (molar) extent of reaction
y: Mole fraction in the gas phase
z: Overall composition
Z: Compressibility

1. Unless specified explicitly, assume that a given percent composition is in terms of the overall flowrate. So if you're given a flowrate in terms of kg/s and a composition of 30%, assume that the 30% is a mass fraction. If a given equation requires one or the other, it will explicitly be stated near the equation which is necessary.

## Greek

${\displaystyle \rho }$: Density
${\displaystyle \Sigma }$: Sum

## Subscripts

If a particular component (rather than an arbitrary one) is considered, a specific letter is assigned to it:

• [A] is the molarity of A
• ${\displaystyle x_{A}}$ is the mass fraction of A

Similarly, referring to a specific stream (rather than any old stream you want), each is given a different number.

• ${\displaystyle {\dot {n}}_{1}}$ is the molar flowrate in stream 1.
• ${\displaystyle {\dot {n}}_{A1}}$ is the molar flow rate of component A in stream 1.

Special subscripts: if A is some value denoting a property of an arbitrary component stream, the letter i signifies the arbitrary component and the letter n signifies an arbitrary stream, i.e.

• ${\displaystyle A_{n}}$ is a property of stream n. Note ${\displaystyle {\dot {n}}_{n}}$ is the molar flow rate of stream n.
• ${\displaystyle A_{i}}$ is a property of component i.

The subscript "gen" signifies generation of something inside the system.
The subscripts "in" and "out" signify flows into and out of the system.

## Embellishments

If A is some value denoting a property then:

${\displaystyle {\bar {A}}_{n}}$ denotes the average property in stream n
${\displaystyle {\dot {A}}_{n}}$ denotes a total flow rate in stream n
${\displaystyle {\dot {A}}_{in}}$ denotes the flow rate of component i in stream n.
${\displaystyle {\hat {A}}}$ indicates a data point in a set.
${\displaystyle A_{i}^{*}}$ is a property of pure component i in a mixture.

## Units Section/Dimensional Analysis

In the units section, the generic variables L, t, m, s, and A are used to demonstrate dimensional analysis. In order to avoid confusing dimensions with units (for example the unit m, meters, is a unit of length, not mass), if this notation is to be used, use the unit equivalence character ${\displaystyle {\dot {=}}}$ rather than a standard equal sign.
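The stream/component subscript convention maps naturally onto nested lookups in code. A small sketch (Python, with made-up numbers) where `n_dot[stream][component]` plays the role of the molar flow rate of component i in stream n; summing over components recovers the stream total, and dividing gives the mole fraction:

```python
# Molar flow rates n_dot[stream][component]; the values are arbitrary
# illustration data, not taken from any worked problem.
n_dot = {
    1: {"A": 2.0, "B": 3.0},   # stream 1 carries components A and B
    2: {"A": 1.5},             # stream 2 carries only A
}

def stream_total(stream):
    """Total molar flow rate of a stream: the sum over its components."""
    return sum(n_dot[stream].values())

def mole_fraction(component, stream):
    """Fraction of the stream's molar flow carried by one component."""
    return n_dot[stream][component] / stream_total(stream)
```

With the data above, `stream_total(1)` is 5.0 and `mole_fraction("A", 1)` is 0.4.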
http://www.ijpe-online.com/EN/10.23940/ijpe.20.09.p9.14041415
Int J Performability Eng ›› 2020, Vol. 16 ›› Issue (9): 1404-1415.

### EMG Pattern Recognition based on Particle Swarm Optimization and Recurrent Neural Network

Xiu Kan, Xiafeng Zhang, Le Cao, Dan Yang, and Yixuan Fan

1. School of Mathematics, Southeast University, Nanjing, 210096, China; School of Electronic and Electrical Engineering, Shanghai University of Engineering Science, Shanghai, 201620, China
• Submitted on ; Revised on ; Accepted on
• Contact: * E-mail address: xiu.kan@sues.edu.cn

Abstract: Surface electromyography (sEMG) signals play an important role in gesture recognition and prosthetic control. To address the problems that RNN hyperparameter combinations are complex and difficult to set, and that model quality depends strongly on the network structure, an EMG pattern recognition method based on a particle swarm optimization recurrent neural network (PSO-RNN) is proposed. The method exploits the high global search efficiency, fast convergence speed, and wide optimization range of particle swarm optimization (PSO) to automatically find the optimal structure of the RNN through continuous iterative updating. On the Ninapro EMG database, the classification of 12 types of EMG actions by the PSO-RNN algorithm is tested, and the results are compared with four algorithms applied to the same data set. The results show that the proposed PSO-RNN model achieves a high accuracy of 94.1667%, demonstrating its effectiveness and practicability.
https://electronics.stackexchange.com/questions/260580/identifying-the-parameters-of-a-linear-state-space-model-using-kalman-filter
# Identifying the parameters of a linear state-space model using a Kalman Filter

I have a linear state-space model (SSM) that looks like this

\begin{align} {\dot {x}} & = {\rm \textbf{A}}{x} + {\rm \textbf{B}}{u} \\ {y} & = {\rm \textbf{C}}{x} \end{align}

I was able to roughly estimate the values of the matrices $\textbf{A}$ and $\textbf{B}$, while $\textbf{C}$ is known. I would like to fine-tune the values of the elements of these matrices. $\textbf{A}$ is 4x4, $\textbf{B}$ is 4x3, and $\textbf{C}$ is 1x4. They are made up from a combination of ten parameters.

Can I use the Kalman filter to somehow estimate the values of these parameters? If so, can you please explain to me how I can do it and whether I need to redefine my system? I am thinking along the lines of online parameter estimation. If a Kalman filter cannot be used, can you explain why and what alternatives there are?

Thank you for taking the time to read my question

• The KF estimates the state, and not the A, B, C, D matrices. You need parameter estimation - Matlab has a toolbox to do this, including least squares, instrumental variable – Chu Sep 29 '16 at 13:36
• That is true; however, I have seen some works do the so-called joint state-parameter estimation with the Kalman filter. – TheCake90 Sep 29 '16 at 13:52
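For what it's worth, the "joint state-parameter estimation" mentioned in the comments usually means augmenting the state vector with the unknown parameters and running an *extended* Kalman filter, since parameter-times-state products make the augmented model nonlinear. A toy sketch (Python, a scalar system rather than the asker's 4-state model; all names and values are illustrative):

```python
import math

# True system: x[k+1] = a*x[k] + u[k], measured as y[k] = x[k].
# The parameter a is unknown; augment the state to z = [x, a], model a
# as (nearly) constant, and run an EKF, since the product a*x makes the
# augmented dynamics nonlinear.
a_true = 0.8

x_hat, a_hat = 0.0, 0.3          # initial guesses
P = [[1.0, 0.0], [0.0, 1.0]]     # covariance of the augmented state
Q0, Q1 = 1e-8, 1e-8              # tiny process noise keeps a adjustable
R = 1e-4                         # assumed measurement noise variance

x_true = 0.0
for k in range(500):
    u = math.sin(0.1 * k)        # persistently exciting input
    x_true = a_true * x_true + u
    y = x_true                   # noise-free measurement, for clarity

    # --- EKF predict: z' = (a*x + u, a); Jacobian F = [[a, x], [0, 1]] ---
    a, x = a_hat, x_hat
    x_pred = a * x + u
    # P <- F P F^T + Q, expanded by hand for the 2x2 case
    fp = [[a * P[0][0] + x * P[1][0], a * P[0][1] + x * P[1][1]],
          [P[1][0], P[1][1]]]
    P = [[a * fp[0][0] + x * fp[0][1] + Q0, fp[0][1]],
         [a * fp[1][0] + x * fp[1][1], fp[1][1] + Q1]]

    # --- EKF update with H = [1, 0] (only x is measured) ---
    S = P[0][0] + R
    K0, K1 = P[0][0] / S, P[1][0] / S
    e = y - x_pred
    x_hat = x_pred + K0 * e
    a_hat = a + K1 * e
    P = [[(1 - K0) * P[0][0], (1 - K0) * P[0][1]],
         [P[1][0] - K1 * P[0][0], P[1][1] - K1 * P[0][1]]]
```

With measurement noise and more states the same pattern applies, but the parameters must be identifiable (the input has to excite the system), and small process noise on the parameter states, as above, keeps the filter from freezing.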
http://bhsmethods34.wikidot.com/01-4-exactvals
01.4-ExactVals

# Exact Values

In VCE Maths Methods (and Specialist Maths), unless told otherwise, you should always leave your answer as an exact value. This means you leave your answer without evaluating surds, pi, e, fractions etc. The most common use for exact values is with trig ratios. For the standard angles, you only need to remember two values:

• $\sin(30^\circ ) = \dfrac{1}{2}$
• $\tan(45^\circ ) = 1$

From these two values, we can construct two triangles and fill in the other values using Pythagoras. Then from these triangles we can use SOHCAHTOA to obtain the needed exact values.

You should also know the exact values for 0° and 90°. These can be deduced by remembering the unit circle definitions:

• $x = \cos(\theta)$
• $y = \sin(\theta)$
• and $\tan(\theta)$ is the y-coordinate on the tangent line x = 1
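A quick numeric sanity check of the two anchor values, plus values filled in via Pythagoras on the 30-60-90 triangle (hypotenuse 2, short side 1, so the remaining side is √3). Floating point can only confirm the exact values approximately; the point of exact values is to keep surds like √3 symbolic:

```python
import math

# The two values to memorize:
assert abs(math.sin(math.radians(30)) - 1 / 2) < 1e-12
assert abs(math.tan(math.radians(45)) - 1) < 1e-12

# Values filled in via Pythagoras (triangle sides 1, sqrt(3), 2)
# and SOHCAHTOA:
assert abs(math.cos(math.radians(30)) - math.sqrt(3) / 2) < 1e-12
assert abs(math.sin(math.radians(60)) - math.sqrt(3) / 2) < 1e-12
assert abs(math.tan(math.radians(30)) - 1 / math.sqrt(3)) < 1e-12

# Boundary values from the unit circle definitions:
assert math.sin(math.radians(0)) == 0.0
assert abs(math.cos(math.radians(90))) < 1e-12  # exactly 0 symbolically
```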
https://stats.stackexchange.com/questions/268885/tune-alpha-and-lambda-parameters-of-elastic-nets-in-an-optimal-way
# Tune alpha and lambda parameters of elastic nets in an optimal way

I am trying to tune the alpha and lambda parameters for an elastic net based on the glmnet package. I found some sources, which propose different options for that purpose. According to this instruction I did an optimization based on the caret package. According to this thread I optimized the parameters manually. Both ways give me valid results; however, the chosen parameters of the methods are very different. See the reproducible example in R below:

```r
library("caret")
library("glmnet")

set.seed(1234)

# Some example data
N <- 1000
y <- rnorm(N, 5, 10)
x1 <- y + rnorm(N, 2, 10)
x2 <- y + rnorm(N, - 5, 20)
x3 <- y + rnorm(N, 10, 200)
x4 <- rnorm(N, 20, 50)
x5 <- rnorm(N, - 7, 200)
x6 <- rbinom(N, 1, exp(x1) / (exp(x1) + 1))
x7 <- rbinom(N, 1, exp(x2) / (exp(x2) + 1))
x8 <- rbinom(N, 1, exp(x3) / (exp(x3) + 1))
x9 <- rbinom(N, 1, exp(x4) / (exp(x4) + 1))
x10 <- rbinom(N, 1, exp(x5) / (exp(x5) + 1))
data <- data.frame(y, x1, x2, x3, x4, x5, x6, x7, x8, x9, x10)

# Tune parameters with caret and glmnet
# Set up grid and cross validation method for train function
lambda_grid <- seq(0, 3, 0.1)
alpha_grid <- seq(0, 1, 0.1)
trnCtrl <- trainControl(method = "repeatedCV", number = 10, repeats = 5)
srchGrid <- expand.grid(.alpha = alpha_grid, .lambda = lambda_grid)

# Cross validation
my_train <- train(y ~ ., data, method = "glmnet",
                  tuneGrid = srchGrid, trControl = trnCtrl)

# Best tuning parameters
my_train$bestTune

# Tune parameters with glmnet only
alphasOfInterest <- seq(0, 1, 0.1)

# Step 1: Do all crossvalidations for each alpha
cvs <- lapply(alphasOfInterest, function(curAlpha) {
  cv.glmnet(x = as.matrix(data[ , colnames(data) %in% "y" == FALSE]),
            y = y, alpha = curAlpha, family = "gaussian")
})

# Step 2: Collect the optimum lambda for each alpha
optimumPerAlpha <- sapply(seq_along(alphasOfInterest), function(curi) {
  curcvs <- cvs[[curi]]
  curAlpha <- alphasOfInterest[curi]
  indOfMin <- match(curcvs$lambda.min, curcvs$lambda)
  c(lam = curcvs$lambda.min, alph = curAlpha, cvup = curcvs$cvup[indOfMin])
})

# Step 3: Find the overall optimum
posOfOptimum <- which.min(optimumPerAlpha["lam", ])
overall.lambda.min <- optimumPerAlpha["lam", posOfOptimum]
overall.alpha.min <- optimumPerAlpha["alph", posOfOptimum]
overall.criterionthreshold <- optimumPerAlpha["cvup", posOfOptimum]

# Step 4: Now check for each alpha which lambda is the best within the threshold
corrected1se <- sapply(seq_along(alphasOfInterest), function(curi) {
  curcvs <- cvs[[curi]]
  lams <- curcvs$lambda
  lams[lams < overall.lambda.min] <- NA
  lams[curcvs$cvm > overall.criterionthreshold] <- NA
  lam1se <- max(lams, na.rm = TRUE)
  c(lam = lam1se, alph = alphasOfInterest[curi])
})

# Step 5: Find the best (lowest) of these lambdas
overall.lambda.1se <- max(corrected1se["lam", ])
pos <- match(overall.lambda.1se, corrected1se["lam", ])
overall.alpha.1se <- corrected1se["alph", pos]

# Comparison --> Parameters are very different
my_train$bestTune                          # Parameters according to caret
c(overall.alpha.1se, overall.lambda.1se)   # Parameters according to glmnet only
```

It seems like I am doing something wrong, but unfortunately I cannot figure out the problem.

Question: How could I tune alpha and lambda for an elastic net in R?

UPDATE: Simulation study added for a comparison between caret and a manual tuning of alpha and lambda

According to Hong Ooi's suggestion, I compared the results of both tuning methods in several runs within a small simulation study. Both methods still result in very different best parameters, and the manual tuning outperforms the caret package slightly. This result is very surprising to me, since I would have expected that the caret package results in better estimations compared to programming by hand. Therefore I am wondering if the manual tuning is actually outperforming the caret package or if I have made any mistakes. Any suggestion is very welcome!

```r
##### Small simulation #####
alpha_caret <- numeric()
lambda_caret <- numeric()
MSE_caret <- numeric()
alpha_without_caret <- numeric()
lambda_without_caret <- numeric()
MSE_without_caret <- numeric()

R <- 20  # Simulation runs
for (r in 1:R) {

  ##### Tune parameters with caret and glmnet #####
  # Set up grid and cross validation method for train function
  lambda_grid <- seq(0, 3, 0.1)
  alpha_grid <- seq(0, 1, 0.1)
  trnCtrl <- trainControl(method = "repeatedCV", number = 10, repeats = 5)
  srchGrid <- expand.grid(.alpha = alpha_grid, .lambda = lambda_grid)

  # Cross validation
  my_train <- train(y ~ ., data, method = "glmnet",
                    tuneGrid = srchGrid, trControl = trnCtrl)

  # Best parameters
  alpha_caret[r] <- as.numeric(my_train$bestTune[1])   # alpha according to caret
  lambda_caret[r] <- as.numeric(my_train$bestTune[2])  # lambda according to caret

  # Elastic net with best parameters
  mod_elnet <- glmnet(x = as.matrix(data[colnames(data) %in% "y" == FALSE]),
                      y = data$y, alpha = alpha_caret[r],
                      family = "gaussian", lambda = lambda_caret[r])

  # Estimation of lm with the variables that have been selected in the elastic net
  vars_elnet <- names(mod_elnet$beta[ , 1])[as.numeric(mod_elnet$beta[ , 1]) != 0]
  mod_elnet_lm <- lm(y ~ ., data[ , colnames(data) %in% c(vars_elnet, "y")])

  # MSE
  MSE_caret[r] <- mean(mod_elnet_lm$residuals^2)

  ##### Tune parameters with glmnet only #####
  alphasOfInterest <- seq(0, 1, 0.1)

  # Step 1: Do all crossvalidations for each alpha
  cvs <- lapply(alphasOfInterest, function(curAlpha) {
    cv.glmnet(x = as.matrix(data[ , colnames(data) %in% "y" == FALSE]),
              y = y, alpha = curAlpha, family = "gaussian")
  })

  # Step 2: Collect the optimum lambda for each alpha
  optimumPerAlpha <- sapply(seq_along(alphasOfInterest), function(curi) {
    curcvs <- cvs[[curi]]
    curAlpha <- alphasOfInterest[curi]
    indOfMin <- match(curcvs$lambda.min, curcvs$lambda)
    c(lam = curcvs$lambda.min, alph = curAlpha, cvup = curcvs$cvup[indOfMin])
  })

  # Step 3: Find the overall optimum
  posOfOptimum <- which.min(optimumPerAlpha["lam", ])
  overall.lambda.min <- optimumPerAlpha["lam", posOfOptimum]
  overall.alpha.min <- optimumPerAlpha["alph", posOfOptimum]
  overall.criterionthreshold <- optimumPerAlpha["cvup", posOfOptimum]

  # Step 4: Now check for each alpha which lambda is the best within the threshold
  corrected1se <- sapply(seq_along(alphasOfInterest), function(curi) {
    curcvs <- cvs[[curi]]
    lams <- curcvs$lambda
    lams[lams < overall.lambda.min] <- NA
    lams[curcvs$cvm > overall.criterionthreshold] <- NA
    lam1se <- max(lams, na.rm = TRUE)
    c(lam = lam1se, alph = alphasOfInterest[curi])
  })

  # Step 5: Find the best (lowest) of these lambdas
  overall.lambda.1se <- max(corrected1se["lam", ])
  pos <- match(overall.lambda.1se, corrected1se["lam", ])
  overall.alpha.1se <- corrected1se["alph", pos]

  # Best parameters
  alpha_without_caret[r] <- as.numeric(overall.alpha.1se)    # alpha according to glmnet only
  lambda_without_caret[r] <- as.numeric(overall.lambda.1se)  # lambda according to glmnet only

  # Elastic net with best parameters
  mod_elnet_wc <- glmnet(x = as.matrix(data[colnames(data) %in% "y" == FALSE]),
                         y = data$y, alpha = alpha_without_caret[r],
                         family = "gaussian", lambda = lambda_without_caret[r])

  # Estimation of lm with the variables that have been selected in the elastic net
  vars_elnet_wc <- names(mod_elnet_wc$beta[ , 1])[as.numeric(mod_elnet_wc$beta[ , 1]) != 0]
  mod_elnet_wc_lm <- lm(y ~ ., data[ , colnames(data) %in% c(vars_elnet_wc, "y")])

  # MSE
  MSE_without_caret[r] <- mean(mod_elnet_wc_lm$residuals^2)
}

# Compare results
data.frame(alpha_caret, lambda_caret, MSE_caret,
           alpha_without_caret, lambda_without_caret, MSE_without_caret)
mean(MSE_caret)
mean(MSE_without_caret)  # Better results
```

The results look as follows:

```
   alpha_caret lambda_caret MSE_caret alpha_without_caret lambda_without_caret MSE_without_caret
1          0.9          0.2  40.28436                 0.0             1.850340          40.14838
2          1.0          0.2  40.28436                 0.4             1.228666          40.48928
3          1.0          0.2  40.28436                 0.0             1.850340          40.14838
4          1.0          0.2  40.28436                 0.2             1.693744          40.23916
5          1.0          0.2  40.28436                 0.0             2.030746          40.14838
6          1.0          0.2  40.28436                 0.2             1.858882          40.36684
7          1.0          0.2  40.28436                 0.0             2.684526          40.14838
8          1.0          0.2  40.28436                 0.1             2.127441          40.16517
9          0.7          0.1  40.16302                 0.1             1.766239          40.16011
10         1.0          0.2  40.28436                 0.1             2.127441          40.16517
11         0.7          0.1  40.16302                 0.0             1.536185          40.14838
12         1.0          0.2  40.28436                 0.2             2.239030          40.36684
13         1.0          0.2  40.28436                 0.1             1.938445          40.16011
14         1.0          0.2  40.28436                 0.1             2.127441          40.16517
15         1.0          0.2  40.28436                 0.1             2.334864          40.16517
16         0.9          0.2  40.28436                 0.1             2.127441          40.16517
17         0.8          0.1  40.16302                 0.2             1.543276          40.22040
18         1.0          0.2  40.28436                 0.1             2.562510          40.22040
19         1.0          0.2  40.28436                 0.0             2.946264          40.14838
20         1.0          0.2  40.28436                 0.1             1.938445          40.16011
```

The MSE from programming by hand is better than the MSE based on the caret package. The estimated best alphas and lambdas are very different.

Question: Why do both methods result in such different estimations of alpha and lambda?

Cross-validation is a noisy process and you shouldn't expect the results from two runs to be similar, even if everything is working fine. You can try repeating your experiment several times and see what happens. That said, here's a narrow answer to this specific question:

Question: How could I tune alpha and lambda for an elastic net in R?

My glmnetUtils package includes a function cva.glmnet to do exactly this. It does cross-validation for both alpha and lambda, with the validation folds held constant (as per the recommendation in ?cv.glmnet). Sample code:

```r
# it also includes a formula interface: no more messing around with model.matrix()
cva <- cva.glmnet(y ~ ., data = data)
```

• Thank you very much for your fast response! I am just trying to install your package; if I do install.packages("glmnetUtils") it says "package 'glmnetUtils' is not available (for R version 3.3.3)", and if I try install.packages("devtools"); library(devtools); install_github("hong-revo/glmnetUtils") it says "Error: Couldn't connect to server". Is the package updated for R version 3.3.3?
Could you also comment on the difference between caret and glmnetUtils? What are the differences regarding the tuning of alpha and lambda between these two packages? Mar 21 '17 at 14:41
• It's not on CRAN (yet; working on that). There shouldn't be any issues installing with devtools. Make sure you haven't got access to Github blocked. Mar 21 '17 at 14:59
• Probably it is blocked by the institute I am working for. I will try to figure out what is going on. Thank you very much! Mar 21 '17 at 15:37
• Unfortunately I am still not able to run your package (probably due to limitations of my company). However, I have had a look at your suggestion to run both methods several times. As you can see in the update above, it seems like there is a systematic difference between the results of both ways. Further, it seems like the programming by hand is outperforming the caret package slightly, which is very surprising to me. Do you have any suggestions why this could be the case? Mar 23 '17 at 9:55
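The point about cva.glmnet holding the validation folds constant is the crux: if every (alpha, lambda) cell is scored on different random folds, fold noise swamps the comparison across the grid. A stdlib-only Python sketch of the idea (1-D ridge regression with a closed-form fit; all names and data are made up), where the fold assignment is drawn once and reused for every grid point:

```python
import random

# Draw the CV fold assignment ONCE and reuse it for every lambda, so the
# cross-validated scores are comparable across the whole tuning grid.
random.seed(42)
n = 200
xs = [random.gauss(0, 1) for _ in range(n)]
ys = [2.0 * x + random.gauss(0, 0.5) for x in xs]   # true slope is 2

k = 10
fold = [i % k for i in range(n)]
random.shuffle(fold)                                # fixed for all lambdas

def cv_error(lam):
    """Mean squared CV error of 1-D ridge (closed form) for one lambda."""
    err, cnt = 0.0, 0
    for f in range(k):
        train = [i for i in range(n) if fold[i] != f]
        test = [i for i in range(n) if fold[i] == f]
        sxy = sum(xs[i] * ys[i] for i in train)
        sxx = sum(xs[i] ** 2 for i in train)
        beta = sxy / (sxx + lam)                    # ridge estimate
        err += sum((ys[i] - beta * xs[i]) ** 2 for i in test)
        cnt += len(test)
    return err / cnt

grid = [0.0, 0.1, 1.0, 10.0, 100.0]
scores = {lam: cv_error(lam) for lam in grid}
best = min(scores, key=scores.get)
```

Because the folds are identical for every lambda, differences between the scores reflect the penalty alone; redrawing folds per grid point (as happens when separate cv calls each randomize) reintroduces exactly the run-to-run variability the question observed.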
http://love2d.org/forums/viewtopic.php?f=4&t=88742&p=233630
## love.graphics.draw are the x and y floats ? is floor needed ?

Questions about the LÖVE API, installing LÖVE and other support related questions go here.

gcmartijn

### love.graphics.draw are the x and y floats ? is floor needed ?

Hi! I first need to understand what all x and y positions need to be when drawing/translating/scaling, before I can debug a 'glitch' problem. At the moment I have two simple images (a head and a body) that are moving using vector movement. Sometimes I can see a 1px y offset during the walking movement. But I was thinking, what do the video card/love2d/OS want? I did read that some people use

    love.graphics.draw(img, quad, math.floor(x), math.floor(y))

and the same for the scaling operation. But is this needed, and better (to support most devices, for example)? If I do use floor then I have many more glitches, but I can fix that if I know what to use. So the main question is: to floor or not to floor (for draw/translate/scaling)? And the bonus question is, what does image drawing do with this:

    love.graphics.draw(self.image, 1.2312321321323, 4.3423432423432)

In photoshop, for example, a pixel is never a float/double. Thanks!

EmmaGees

### Re: love.graphics.draw are the x and y floats ? is floor needed ?

If you don't use math.ceil or math.floor on the stuff you draw then you will have weird jittering when moving the camera and sprites around. Displays can't actually draw anything at exactly something like "2.5, 2.7", so that's why you get weird results when not rounding it up or down.

gcmartijn

### Re: love.graphics.draw are the x and y floats ? is floor needed ?

Okay, so now I'm going to floor all draw functions. But what about the translate and transform actions? Do I need to floor them?
Are there other functions that I need to know about, where I need to use floor from now on?

zorg

### Re: love.graphics.draw are the x and y floats ? is floor needed ?

As far as i know, photoshop gives you a discrete 2D plane with the width and height in pixels you can edit neatly. Löve uses OpenGL, which gives you a 3D space wherein you have a 2D window into it where you can draw stuff.

You don't necessarily need to floor/ceil/round/truncate your values, since the video card can handle floating point coordinates; the worst thing that happens is that the color of your pixel gets distributed to more adjacent pixels on screen. If you don't want that, and want pixel-accurate drawing, then either just flooring, or doing math.floor(x + .5), could work (also for the y value) - the difference is whether one needs to compensate for drawing to "the top left" or "the center" of a pixel; one will look bad, the other won't (and i can't remember which is which).

The same applies to translating, although you can get rid of the +.5 mentioned above if you just translate globally by .5, .5. Also for scaling, since any non-whole number will introduce rounding issues as well when drawing something. Rotation would be limited to 0, 90, 180, 270 degrees (the function itself needs radians instead of degrees; there are conversion functions lua gives you) due to the same reasons. Skew: if you want pixel-accurate, you don't use skew.

gcmartijn

### Re: love.graphics.draw are the x and y floats ? is floor needed ?

Thanks for the info. I want pixel accurate because I'm going to bind multiple moving drawing images together, in combination with old school graphics.
So I'm already using:

Code: Select all

love.graphics.setDefaultFilter("nearest", "nearest")
love.graphics.setLineStyle("rough")

EmmaGees Prole Posts: 6 Joined: Mon May 04, 2020 7:51 pm

### Re: love.graphics.draw: are the x and y floats? Is floor needed?

gcmartijn wrote: Sat May 09, 2020 4:29 pm Okay, so now I'm going to floor all draw functions. But what about the translate and transform actions? Do I need to floor them? Are there other functions that I need to know about, where I need to use floor from now on?

You should definitely floor translate functions (for the best result), or ceil if you are using that on your draw functions. But scaling depends on what you're going for; that's a whole different conversation in my opinion, as it has disadvantages with some types of scaling. I used math.ceil in my scaling function in my last game, but it also cut off parts of the scene, which isn't ideal - though it looked much nicer.

pgimeno Party member Posts: 2280 Joined: Sun Oct 18, 2015 2:58 pm

### Re: love.graphics.draw: are the x and y floats? Is floor needed?

If the pixels of your image correspond to the pixels of the screen, using floor() will fix a glitch where, if a certain row or column of pixels has coordinates near 0.5, sometimes they are rounded down and sometimes up, causing inconsistent pixels to be shown. If they don't, and your image's pixels are not an exact multiple of the screen's pixels, or otherwise sufficiently bigger, you may get aliasing, and float coordinates will cause rounding to be applied to the float positions, sometimes up, sometimes down, depending on the decimals of the float. That's all assuming a filter of nearest, nearest (which is the one you've said you were using). With the other filter, the image pixels are interpolated and float coordinates make even more sense.

Karai17 Party member Posts: 902 Joined: Sun Sep 02, 2012 10:46 pm

### Re: love.graphics.draw: are the x and y floats? Is floor needed?
Anything involving pixel coordinates needs to be floored.
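The advice in this thread is language-agnostic. As a quick sketch (in Python rather than Lua, purely for illustration, not LÖVE code), here is the difference between plain flooring and the math.floor(x + .5) variant zorg mentions, as a sprite moves through sub-pixel positions:

```python
import math

def snap_floor(v):
    # always round down to the pixel grid
    return math.floor(v)

def snap_round_half_up(v):
    # the math.floor(x + .5) variant mentioned above
    return math.floor(v + 0.5)

# a sprite moving smoothly through sub-pixel positions near a pixel boundary
positions = [9.48, 9.49, 9.50, 9.51, 9.52]
print([snap_floor(p) for p in positions])          # [9, 9, 9, 9, 9]
print([snap_round_half_up(p) for p in positions])  # [9, 9, 10, 10, 10]
```

Either convention is consistent on its own; the 1px jitter described in the question typically comes from mixing conventions (or from feeding raw floats to the GPU while expecting pixel-exact output).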
https://forum.effectivealtruism.org/users/elifland
# Bio

Figuring out the truth, especially about AI alignment. More at https://www.elilifland.com/. You can give me anonymous feedback here. FTX CoIs: Received $700k from FTX Future Fund in May to start Sage, and $60k from FTX Future Fund regranting program in Sep for independent research. Visited the Bahamas twice.

# Topic Contributions

Personally the FTX regrantor system felt like a nice middle ground between EA Funds and donor lotteries in terms of (de)centralization. I'd be excited to donate to something less centralized than EA Funds but more centralized than a donor lottery.

Which part of my comment did you find as underestimating how grievous SBF/Alameda/FTX's actions were? (I'm genuinely unsure)

Nitpick, but I found the sentence:

Based on things I've heard from various people around Nonlinear, Kat and Emerson have a recent track record of conducting Nonlinear in a way inconsistent with EA values [emphasis mine].

A bit strange in the context of the rest of the comment. If your characterization of Nonlinear is accurate, it would seem to be inconsistent with ~every plausible set of values, not just "EA values".

Appreciate the quick, cooperative response.

I want you to write a better post arguing for the same overall point if you agreed with the title, hopefully with more context than mine.

Not feeling up to it right now, and not sure it needs a whole top-level post. My current take is something like (very roughly/quickly written):

1. New information is currently coming in very rapidly.
2. We should at least wait until the information comes in a bit slower before thinking seriously in-depth about proposed mitigations, so we have a better picture of what went wrong. But "babbling" about possible mitigations seems mostly fine.
3.
An investigation similar to the one proposed here should be started fairly quickly, with the goal of producing an initial version of a report within ~2 months so we can start thinking pretty seriously about what mitigations/changes are needed, even if a finalized report would take longer.

My main thought is that I don't know why he committed fraud. Was it actually to utility maximize, or because he was just seeking status, or got too prideful, or what?

I think either way most of the articles you point to do more good than harm. Being more silent on the matter would be worse.

I'd agree with this if I thought EA right now had a cool head. Maybe I should have said we should wait until EA has a cooler head before launching investigations.

I'd hope that the investigation would be conducted mostly by an independent, reputable entity even if commissioned by EA organizations. Also, "EA" isn't a fully homogeneous entity, and I'd hope that the people commissioning the investigation might be more cool-headed than the average Forum poster.

I thought I would like this post based on the title (I also recently decided to hold off for more information before seriously proposing solutions), but I disagree with much of the content. A few examples:

It is uncertain whether SBF intentionally committed fraud, or just made a mistake, but people seem to be reacting as if the takeaway from this is that fraud is bad.

I think we can safely say at this point with >95% confidence that SBF basically committed fraud, even if not technically in the legal sense (edit: but it also seems likely to be fraud in the legal sense), and it's natural to start thinking about the implications of this and in particular be very clear about our attitude toward the situation if fraud indeed occurred, as looks very likely. Waiting too long has serious costs.

We could immediately launch a costly investigation to see who had knowledge of fraud that occurred before we actually know if fraud occurred or why.
In worlds where we're wrong about whether or why fraud occurred, this would be very costly. My suggestion: wait for information to costlessly come out, discuss what happened when not in the midst of the fog and emotions of current events, and then decide whether we should launch this costly investigation.

If we were to wait until we came close to fully knowing "whether or why fraud occurred", this might take years as the court case plays out. I think we should get on with it reasonably quickly given that we are pretty confident some really bad stuff went down. Delaying the investigation seems generally more costly to me than the costs of conducting it; e.g., people's memories decay over time and people have more time to get alternative stories straight.

Adjacently, some are arguing EA could have vetted FTX and Sam better, and averted this situation. This reeks of hindsight bias! Probably EA could not have done better than all the investors who originally vetted FTX before giving them a buttload of money! Maybe EA should investigate funders more, but arguments for this are orthogonal to recent events, unless CEA believes their comparative advantage in the wider market is high-quality vetting of corporations. If so, they could stand to make quite a bit of money selling this service, and should possibly form a spinoff org.

This seems wrong; e.g., EA leadership had more personal context on Sam than investors. See e.g. Oli here with a personal account and my more abstract argument here. I'm not as sure about advisors, as I wrote here. Agree on recipients.

It's a relevant point, but I think we can reasonably expect EA leadership to do better at vetting megadonors than Sequoia due to (a) more context on the situation, e.g. EAs should have known more about SBF's past than Sequoia and/or could have found it out more easily via social and professional connections, and (b) more incentive to avoid downside risks, e.g. the SBF blowup matters a lot more for EA's reputation than Sequoia's.
To be clear, this does not apply to charities receiving money from FTXFF; that is a separate question from EA leadership.

Thanks for clarifying. To be clear, I didn't say I thought they were as bad as Leverage. I said "I have less trust in CEA's epistemics to necessarily be that much better than Leverage's, though I'm uncertain here".

I've read it. I'd guess we have similar views on Leverage, but different views on CEA. I think it's very easy for well-intentioned, generally reasonable people's epistemics to be corrupted via tribalism, motivated reasoning, etc. But as I said above, I'm unsure.

Edited to add: Either way, it might be a distraction to debate this sort of thing further. I'd guess that we both agree in practice that the allegations should be taken seriously and investigated carefully, ideally by independent parties.

I agree that these can technically all be true at the same time, but I think the tone/vibe of comments is very important in addition to what they literally say, and the vibe of Arepo's comment was too tribalistic. I'd also guess re: (3) that I have less trust in CEA's epistemics to necessarily be that much better than Leverage's, though I'm uncertain here (edited to add: tbc, my best guess is it's better, but I'm not sure what my prior should be, if there's a "he said / she said" situation, on who's telling the truth. My guess is closer to 50/50 than 95/5 in log odds at least).
https://mathematica.stackexchange.com/questions/95014/user-defined-distance-functions-in-findclusters/95029#95029
# User-defined distance functions in FindClusters

I am trying to use FindClusters to segment data points into two special parts:

data

ListPlot[data, PlotRange -> All, AspectRatio -> Automatic]

It is clear that these points can be divided into two parts; each radial arm can be treated as one part. I've tried different DistanceFunctions but cannot get what I want, and I don't know how to define a suitable distance function.

Show[
 ListPlot[FindClusters[data, 2, DistanceFunction -> CosineDistance][[1]], AspectRatio -> Automatic, PlotStyle -> Blue],
 ListPlot[FindClusters[data, 2, DistanceFunction -> CosineDistance][[2]], AspectRatio -> Automatic, PlotStyle -> Red],
 PlotRange -> All]

Is there any way to solve this problem?

• A figure is missing, or the reference to it needs to be deleted. Sep 19 '15 at 14:39
• A simpler way to plot the clusters of the data: ( cls = FindClusters[data, 2, DistanceFunction -> CosineDistance]; ListPlot[cls] ) Sep 19 '15 at 17:10

Here is one solution.

1. Find a small number of nearest neighbors (NN's) for each point.
2. Make a graph in which each node corresponds to a data point and each edge corresponds to a NN pair.
3. Partition the graph into connected components, or communities, or cliques.

In order for the last step to work well, it might be a good idea to remove the points in the center of the data of the question. The question says the desired partition is for "two special parts," but I see four "special" parts in the plot.
That is why my solution's explanation shows four parts. The solution also works if two parts are desired. Here are the Mathematica commands.

1. Finding the NN's:

nf = Nearest[data];
numberOfNNs = 20;
dataNNs = Map[{#, Complement[nf[#, numberOfNNs], #]} &, data];

2. Making the graph edges (pairs of connected nodes). (Note: pointToIndexRules, mapping each data point to its index, is not defined in the extracted answer; a standard definition would be pointToIndexRules = Dispatch[Thread[data -> Range[Length[data]]]].)

graphPairs = Flatten@Map[
   Function[{p}, # <-> p & /@ Complement[nf[p, numberOfNNs], p]],
   data] /. pointToIndexRules;

3. Optionally remove points at the center:

indsToRemove = Select[data, #.# < 0.01 &] /. pointToIndexRules;
graphPairs = Select[graphPairs, ! (MemberQ[indsToRemove, #[[1]]] || MemberQ[indsToRemove, #[[2]]]) &];

4. Make the graph object:

dataGraph = Graph[graphPairs];

5. Find a partition of the graph and the data:

clsInds = FindGraphPartition[dataGraph, 4];
cls = Map[data[[#]] &, clsInds];

6. Plot the clusters:

ListPlot[cls, AspectRatio -> Automatic]

Here is the result of step 6. If the center points are removed (which I think is a more robust method), then it is better in step 5 to use

clsInds = KCoreComponents[dataGraph, 3]

Here is the result using that option. If we want to see the data as being made of two parts, then in step 5 we can use

clsInds = FindGraphPartition[dataGraph, 2];

and what we get is this result:
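The answer above relies on Mathematica built-ins (Nearest, Graph, FindGraphPartition). As a rough, library-free illustration of the same idea — not the answer's actual code — here is a plain-Python sketch that links each point to its k nearest neighbours and clusters by connected components of the resulting graph:

```python
import math
from collections import defaultdict, deque

def knn_graph_clusters(points, k=3):
    """Cluster points by linking each point to its k nearest neighbours
    and taking connected components of the resulting graph."""
    n = len(points)

    def dist(a, b):
        return math.dist(points[a], points[b])

    # build the symmetric k-NN adjacency
    adj = defaultdict(set)
    for i in range(n):
        nbrs = sorted((j for j in range(n) if j != i), key=lambda j: dist(i, j))[:k]
        for j in nbrs:
            adj[i].add(j)
            adj[j].add(i)

    # connected components via BFS
    seen, clusters = set(), []
    for i in range(n):
        if i in seen:
            continue
        comp, q = [], deque([i])
        seen.add(i)
        while q:
            u = q.popleft()
            comp.append(u)
            for v in adj[u]:
                if v not in seen:
                    seen.add(v)
                    q.append(v)
        clusters.append(sorted(comp))
    return clusters

# two well-separated "arms"
pts = [(0, 0), (1, 0), (2, 0), (10, 10), (11, 10), (12, 10)]
print(knn_graph_clusters(pts, k=2))  # [[0, 1, 2], [3, 4, 5]]
```

With a small k, points within one arm connect to each other but not across arms, so the components recover the arms — the same reason the NN-graph approach succeeds where CosineDistance-based FindClusters fails.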
http://mathhelpforum.com/algebra/15239-series-question.html
# Thread: series question

1. ## series question

Hey guys, I can't seem to get the answer for this, so thanks if anybody could help: The rth term of a series is 3^(r-1)+2r. Find the sum of the first n terms.

2. Originally Posted by margaritas
Hey guys I can't seem to get the answer for this so thanks if anybody could help: The rth term of a series is 3^(r-1)+2r. Find the sum of the first n terms.

$\displaystyle \sum_{r=1}^{n} 3^{r-1}+2r = \left( \sum_{r=1}^{n} 3^{r-1} \right)+ \left( \sum_{r=1}^{n} 2r \right)$

The first of these is a finite geometric series which you should be able to sum, and the second is a finite arithmetic series which you should also be able to sum. RonL

3. No, actually I still don't get it. Could you elaborate further? Thanks! EDIT: Oh no wait, I think I get it!

4. Originally Posted by margaritas
No, actually I still don't get it. Could you elaborate further? Thanks! EDIT: Oh no wait, I think I get it!

For the record: $\displaystyle \sum_{k = 0}^n ar^k = \frac{a(1 - r^{n+1})}{1 - r}$ and $\displaystyle \sum_{k = 0}^n k = \frac{n(n+1)}{2}$ and you can take it from there. -Dan

5. Originally Posted by margaritas
No, actually I still don't get it. Could you elaborate further? Thanks! EDIT: Oh no wait, I think I get it!

$\displaystyle \sum_{r=1}^{n} 3^{r-1} = \frac{1-3^{n}}{1-3} = \frac{3^{n}-1}{2}$

$\displaystyle \sum_{r=1}^{n} 2r = n(n+1)$

RonL
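A quick brute-force check (a verification sketch, not part of the thread) confirms the closed form: the sum of the first n terms of 3^(r-1)+2r is (3^n − 1)/2 + n(n + 1).

```python
def series_sum(n):
    # closed form: (3**n - 1)/2 for the geometric part, n*(n+1) for the arithmetic part
    return (3**n - 1) // 2 + n * (n + 1)

# compare against the direct term-by-term sum
for n in range(1, 8):
    brute = sum(3**(r - 1) + 2 * r for r in range(1, n + 1))
    assert brute == series_sum(n)

print(series_sum(5))  # 151
```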
https://wealth365.com/ynd9f/number-of-bijections-on-a-set-of-cardinality-n-489d2b
# Number of bijections on a set of cardinality n

Cardinality. Recall that the cardinality of a set is the number of elements it contains. The cardinal number of a set $A$ is denoted $n(A)$ (or $|A|$). A set whose cardinality is $n$ for some natural number $n$ is called finite; a set which is not finite is called infinite. A set $S$ is infinite if and only if there exists $U \subseteq S$ with $|U| = |\Bbb N|$. Sets that are either finite or denumerable are said to be countable; a countably infinite set has cardinality $\aleph_0$ (aleph-naught), and we write $|A| = \aleph_0$.

Theorem (the cardinality of a finite set is well-defined). If $m, n \in \Bbb N$ and $A \approx \Bbb N_n$ and $A \approx \Bbb N_m$, then $m = n$. Proof: the hypothesis means there are bijections $f: A \to \Bbb N_n$ and $g: A \to \Bbb N_m$. The map $f \circ g^{-1}: \Bbb N_m \to \Bbb N_n$ is a composition of bijections and hence itself a bijection (see homework), so $m = n$.

Bijections on a finite set. A bijection is a function that is both one-to-one and onto. The number of bijections from a set with $n$ elements to itself is $n!$: by the definition of a bijection, the first element we map has $n$ potential outputs, the next has $n - 1$, and so on. Thus, writing $S_T$ for the set of bijections $T \to T$ on an $n$-element set $T$, we have $|S_T| = n!$. (By contrast, $|F_T| = n^n$ for the set of all functions $T \to T$, because unlike bijections these need not be one-to-one.) For products, $n(A \times B) = n(A) \cdot n(B)$; for example, with $n(A) = 5$ and $n(B) = 3$, the cardinality of $A \times B$ is $5 \times 3 = 15$.

Bijections on a countably infinite set. The set of all bijections $\Bbb N \to \Bbb N$ has cardinality $2^{\aleph_0}$. Lower bound: only countably many subsets of $\Bbb N$ are finite, and only countably many are co-finite, so there are $2^{\aleph_0}$ subsets which are infinite and have an infinite complement. For every such subset $A$ there is a permutation of $\Bbb N$ which "switches" $A$ with its complement (in an ordered fashion), and distinct subsets give distinct permutations, so there are at least $2^{\aleph_0}$ permutations of $\Bbb N$. Upper bound: every bijection is in particular a function $\Bbb N \to \Bbb N$, and $|\Bbb N^{\Bbb N}| = 2^{\aleph_0}$. Hence the cardinality is exactly $2^{\aleph_0}$, the cardinality of $\Bbb R$. In general, for a cardinality $\kappa$ the number of bijections of a set of size $\kappa$ can be written $\kappa!$, and for infinite $\kappa$ one has $\kappa! = 2^\kappa$.
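The finite-set count above (n! bijections from an n-element set to itself) is easy to verify by brute force; this small Python check (illustrative only) enumerates permutations and compares against the factorial:

```python
from itertools import permutations
from math import factorial

def count_bijections(n):
    # every bijection from an n-element set to itself is a permutation,
    # so counting permutations counts bijections
    return sum(1 for _ in permutations(range(n)))

for n in range(6):
    assert count_bijections(n) == factorial(n)

print(count_bijections(4))  # 24
```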
http://onsnetwork.org/blog/tag/geoduck/
# DNA Isolation & Quantification – Geoduck larvae metagenome filter rinses

Isolated DNA from two of the geoduck hatchery metagenome samples Emma delivered on 20180313, to get an idea of what type of yields we might get from these.

• MG 5/15 #8
• MG 5/19 #6

As mentioned in my notebook entry upon receipt of these samples, I'm a bit skeptical we will get any sort of recovery, based on sample preservation. Isolated DNA using DNAzol (MRC, Inc.) in the following manner:

1. Added 1mL of DNAzol to each sample; mixed by pipetting.
2. Added 0.5mL of 100% ethanol; mixed by inversion.
3. Pelleted DNA 5,000g x 5mins @ RT.
5. Washed pellets (not visible) with 1mL 75% ethanol by dribbling down side of tubes.
6. Pelleted DNA 5,000g x 5mins @ RT.
7. Discarded supernatants and dried pellets for 5mins.
8. Resuspended DNA in 20uL of Buffer EB (Qiagen).

Samples were quantified using the Roberts Lab Qubit 3.0 with the Qubit High Sensitivity dsDNA Kit (Invitrogen). 5uL of each sample were used.

#### Results:

As expected, both samples did not yield any detectable DNA. Will discuss with Steven on what should be done with the remaining samples.

# Samples Received – Geoduck larvae metagenome filter rinses

Received geoduck hatchery metagenome samples from Emma. These samples are intended for DNA isolation. Admittedly, I'm a bit skeptical that we'll be able to recover any DNA from these samples, as they had been initially stored as frozen liquid, then thawed, and "supernatant" removed. I'm concerned that the freezing step would result in cell lysis; thus the subsequent removal of "supernatant" would actually be removing the majority of cellular contents released during freezing/lysis. Here's the sample prep history, per Emma's email:

Hi! Here are the relevant details from my lab notebook: Filters with bacteria to be extracted for proteomics: https://sr320.github.io/Geoduck-larvae-filters/ Each filter was rinsed and cells sonicated:

1. Put filter on petri dish on ice
2. 
Use 1-4 mL total to wash the front (and back, if it's not obvious where the biological material is) of the filter while holding it with forceps over the dish. Use 2 pairs of forceps; I used 4 mL ice-cold 50 mM NH4HCO3 to wash the inside of the filter (filters were folded in half). Washed filters were returned to bags and stored at -80C.
3. Put the wash collected in the dish into eppendorf tubes. At this point, remove the amount that will be used for metagenomics (~1/4 of wash): put 1 mL in the metagenome tube (mg), and the remainder was split between 2 tubes for metaproteomics (mp).

These are bacterial cells in ammonium bicarbonate. I spun them down and removed most of the supernatant from each tube. Let me know if you need any other info!

Box of samples (containing ~38uL of liquid) was stored in FTR209 -20C (top shelf).

# NovaSeq Assembly – The Struggle is Real – Real Annoying!

Well, I continue to struggle to make progress on assembling the geoduck Illumina NovaSeq data. Granted, there is a ton of data (374GB!!!!), but it's still frustrating that we can't get an assembly anywhere…

Here are some of the struggles so far:

SOAPdenovo2

JR-Assembler

• Can't install one of the dependencies (SOAP error correction)
• Actually, I need to try the binary version of this, instead of the source version (the source version fails at the make step)

So, next up will be trying the following two assemblers:

• JR-Assembler: Will see if the SOAPec binary will work, and then run an assembly.
• AllPaths-LG: I was able to install this successfully on Mox.

Additionally, we've ordered some additional hard drives and will be converting the old head/master node on the Apple Xserve cluster to Linux. The old master node is a little better equipped than the other Apple Xserve "birds", so we will try to re-run Meraculous on it once we get it converted.

# Assembly – Geoduck Illumina NovaSeq SOAPdenovo2 on Mox (FAIL)

Trying to get the NovaSeq data assembled using SOAPdenovo2 on the Mox HPC node we have, and it will not work.
Tried a couple of times and it hasn't run successfully. Here are links to the files used on Mox (including the batch script and slurm output files). I made slight changes to the formatting of the batch script because I thought there was something wrong. Specifically, the slurm output file in the 20180215 runs does not accurately reflect the command I issued (i.e. 1> ass.log is the command, but slurm shows > ass.log).

NOTE: In the 20180218 run, I have excluded transferring the core dump file due to its crazy size.

Here's the error log generated by SOAPdenovo2 in the 20180218 run (the last line is all you really need to see, though):

Version 2.04: released on July 13th, 2012
Compile May 10 2017 12:50:52

********************
Pregraph
********************

Parameters: pregraph -s /gscratch/scrubbed/samwhite/20180218_soapdenovo2_novaseq_geoduck/soap_config -K 117 -p 24 -o /gscratch/scrubbed/samwhite/20180218_soapdenovo2_novaseq_geoduck/

In /gscratch/scrubbed/samwhite/20180218_soapdenovo2_novaseq_geoduck/soap_config, 1 lib(s), maximum read length 150, maximum name length 256.

/gscratch/scrubbed/samwhite/20180129_trimmed_again/NR005_S4_L001_R1_001_val_1_val_1.fq.gz
/gscratch/scrubbed/samwhite/20180129_trimmed_again/NR005_S4_L001_R2_001_val_2_val_2.fq.gz
/gscratch/scrubbed/samwhite/20180129_trimmed_again/NR005_S4_L002_R1_001_val_1_val_1.fq.gz
/gscratch/scrubbed/samwhite/20180129_trimmed_again/NR005_S4_L002_R2_001_val_2_val_2.fq.gz
/gscratch/scrubbed/samwhite/20180129_trimmed_again/NR006_S3_L001_R1_001_val_1_val_1.fq.gz
/gscratch/scrubbed/samwhite/20180129_trimmed_again/NR006_S3_L001_R2_001_val_2_val_2.fq.gz
/gscratch/scrubbed/samwhite/20180129_trimmed_again/NR006_S3_L002_R1_001_val_1_val_1.fq.gz
/gscratch/scrubbed/samwhite/20180129_trimmed_again/NR006_S3_L002_R2_001_val_2_val_2.fq.gz
/gscratch/scrubbed/samwhite/20180129_trimmed_again/NR012_S1_L001_R1_001_val_1_val_1.fq.gz
/gscratch/scrubbed/samwhite/20180129_trimmed_again/NR012_S1_L001_R2_001_val_2_val_2.fq.gz
/gscratch/scrubbed/samwhite/20180129_trimmed_again/NR012_S1_L002_R1_001_val_1_val_1.fq.gz
/gscratch/scrubbed/samwhite/20180129_trimmed_again/NR012_S1_L002_R2_001_val_2_val_2.fq.gz

-- Out of memory --

I guess I'll explore some other options for assembling these? I'm having a difficult time accepting that 500GB of RAM is insufficient, but that seems to be the case. Ouch.

# NovaSeq Assembly – Trimmed Geoduck NovaSeq with Meraculous

Attempted to use Meraculous to assemble the trimmed geoduck NovaSeq data. Here's the Meraculous manual (PDF).

After a bunch of various issues (running out of hard drive space – multiple times, config file issues, typos), I've finally given up on running Meraculous. It failed, again, saying it couldn't find a file in a directory that Meraculous created! I've emailed the authors, and if they have an easy fix, I'll implement it and see what happens. Anyway, it's all documented in the Jupyter Notebook below.
One good thing that came out of all of this is that I had to run kmergenie to identify an appropriate kmer size to use for assembly, as well as an estimated genome size (this info is needed for both Meraculous and SOAPdenovo2, which I'll be trying next):

kmergenie output folder: http://owl.fish.washington.edu/Athaliana/20180125_geoduck_novaseq/20180206_kmergenie/
kmergenie HTML report (doesn't display histograms for some reason): 20180206_kmergenie/histograms_report.html

kmer size: 117
Est. genome size: 2.17Gbp

# Adapter Trimming and FASTQC – Illumina Geoduck Novaseq Data

We would like to get an assembly of the geoduck NovaSeq data that Illumina provided us with. Steven previously ran the raw data through FASTQC and there was a significant amount of adapter contamination (up to 44% in some libraries) present (see his FASTQC report here). So, I trimmed the reads using TrimGalore and re-ran FASTQC on them. This required two rounds of trimming using the "auto-detect" feature of Trim Galore.

• Round 1: remove NovaSeq adapters
• Round 2: remove standard Illumina adapters

See Jupyter notebook below for the gritty details.

##### Results:

All data for this NovaSeq assembly project can be found here: http://owl.fish.washington.edu/Athaliana/20180125_geoduck_novaseq/

Round 1 Trim Galore reports: [20180125_trim_galore_reports/](http://owl.fish.washington.edu/Athaliana/20180125_geoduck_novaseq/20180125_trim_galore_reports/)
Round 1 FASTQC: 20180129_trimmed_multiqc_fastqc_01
Round 1 FASTQC MultiQC overview: 20180129_trimmed_multiqc_fastqc_01/multiqc_report.html
Round 2 Trim Galore reports: 20180125_geoduck_novaseq/20180205_trim_galore_reports/
Round 2 FASTQC: 20180205_trimmed_fastqc_02/
Round 2 FASTQC MultiQC overview: 20180205_trimmed_multiqc_fastqc_02/multiqc_report.html

For the astute observer, you might notice the "Per Base Sequence Content" check generates a "Fail" warning for all samples.
Per the FASTQC help, this is likely expected (due to the fact that NovaSeq libraries are prepared using transposases) and doesn't have any downstream impacts on analyses.

# Data Management – Illumina Geoduck HiSeq & MiSeq Data

The HDD we received from Illumina last week only had data (i.e. fastq files) from the NovaSeq runs they performed – nothing from either the MiSeq or the HiSeq runs. We contacted them about the missing data, they confirmed it was missing, and uploaded the remaining data to BaseSpace.
https://www.jiskha.com/questions/489998/Reduce-csc-2-x-sec-2-X-to-an-expression-containing-only-tan-x-is-this-correct
# trigonometry repost

Reduce (csc^2 x - sec^2 x) to an expression containing only tan x. (is this correct?)

csc x = 1/sin x
sec x = 1/cos x
tan x = 1/cot x
sin^2 x + cos^2 x = 1
1 + cot^2 x = csc^2 x
tan^2 x + 1 = sec^2 x

csc^2 x - sec^2 x = 1 + cot^2 x - (1 + tan^2 x) = cot^2 x - tan^2 x = (1/tan^2 x) - tan^2 x

1. correct (posted by Reiny)
2. thanks! (posted by Anonymous)
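As a quick numerical check of the reduction in the accepted answer (a sketch added here, not part of the original thread), both sides of csc^2 x - sec^2 x = (1/tan^2 x) - tan^2 x agree for any test angle where all the functions are defined:

```python
import math

def csc(x): return 1 / math.sin(x)
def sec(x): return 1 / math.cos(x)

for x in [0.3, 0.7, 1.1, 2.0]:
    lhs = csc(x)**2 - sec(x)**2
    rhs = 1 / math.tan(x)**2 - math.tan(x)**2
    # The two expressions agree to floating-point precision.
    assert math.isclose(lhs, rhs, rel_tol=1e-9)

print("identity verified")
```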
https://rviews.rstudio.com/2018/12/13/rsampling-fama-french/
# Rsampling Fama French

by Jonathan Regenstein

Today we will continue our work on Fama French factor models, but more as a vehicle to explore some of the awesome stuff happening in the world of tidy models. For new readers who want to get familiar with Fama French before diving into this post, see here where we covered importing and wrangling the data, here where we covered rolling models and visualization, my most recent previous post here where we covered managing many models, and if you're into Shiny, this flexdashboard.

Our goal today is to explore k-fold cross-validation via the rsample package, and a bit of model evaluation via the yardstick package. We started the model evaluation theme last time when we used tidy(), glance() and augment() from the broom package. In this post, we will use the rmse() function from yardstick, but our main focus will be on the vfold_cv() function from rsample.

We are going to explore these tools in the context of linear regression and Fama French, which might seem weird since these tools would typically be employed in the realms of machine learning, classification, and the like. We'll stay in the world of explanatory models via linear regression for a few reasons. First, and this is a personal preference, when getting to know a new package or methodology, I prefer to do so in a context that's already familiar. I don't want to learn about rsample whilst also getting to know a new data set and learning the complexities of some crazy machine learning model. Since Fama French is familiar from our previous work, we can focus on the new tools in rsample and yardstick. Second, factor models are important in finance, despite relying on good old linear regression. We won't regret time spent on factor models, and we might even find creative new ways to deploy or visualize them.
The plan for today is to take the same models that we ran in the last post, only this time use k-fold cross-validation and bootstrapping to try to assess the quality of those models. For that reason, we'll be working with the same data as we did previously. I won't go through the logic again, but in short, we'll import data for daily prices of five ETFs, convert them to returns (have a look here for a refresher on that code flow), then import the Fama French five-factor data and join it to our five ETF returns data. Here's the code to make that happen:

library(tidyverse)
library(broom)
library(tidyquant)
library(timetk)

symbols <- c("SPY", "EFA", "IJS", "EEM", "AGG")

# The prices object will hold our daily price data.
prices <-
  getSymbols(symbols, src = 'yahoo',
             from = "2012-12-31",
             to = "2017-12-31",
             auto.assign = TRUE,
             warnings = FALSE) %>%
  reduce(merge) %>%
  `colnames<-`(symbols)

asset_returns_long <-
  prices %>%
  tk_tbl(preserve_index = TRUE, rename_index = "date") %>%
  gather(asset, returns, -date) %>%
  group_by(asset) %>%
  mutate(returns = (log(returns) - log(lag(returns)))) %>%
  na.omit()

factors_csv_name <- "Global_5_Factors_Daily.csv"

temp <- tempfile()

download.file("http://mba.tuck.dartmouth.edu/pages/faculty/ken.french/ftp/Global_5_Factors_Daily_CSV.zip",
              # where we want R to store that file
              temp, quiet = TRUE)

Global_5_Factors <-
  read_csv(unz(temp, factors_csv_name), skip = 6) %>%
  rename(date = X1, MKT = `Mkt-RF`) %>%
  mutate(date = ymd(parse_date_time(date, "%Y%m%d"))) %>%
  mutate_if(is.numeric, funs(. / 100)) %>%
  select(-RF)

data_joined_tidy <-
  asset_returns_long %>%
  left_join(Global_5_Factors, by = "date") %>%
  na.omit()

After running that code, we have an object called data_joined_tidy. It holds daily returns for 5 ETFs and the Fama French factors. Here's a look at the first row for each ETF.
data_joined_tidy %>%
  slice(1)

# A tibble: 5 x 8
# Groups:   asset [5]
  date       asset  returns    MKT     SMB    HML     RMW     CMA
  <date>     <chr>    <dbl>  <dbl>   <dbl>  <dbl>   <dbl>   <dbl>
1 2013-01-02 AGG   -0.00117 0.0199 -0.0043 0.0028 -0.0028 -0.0023
2 2013-01-02 EEM    0.0194  0.0199 -0.0043 0.0028 -0.0028 -0.0023
3 2013-01-02 EFA    0.0154  0.0199 -0.0043 0.0028 -0.0028 -0.0023
4 2013-01-02 IJS    0.0271  0.0199 -0.0043 0.0028 -0.0028 -0.0023
5 2013-01-02 SPY    0.0253  0.0199 -0.0043 0.0028 -0.0028 -0.0023

Let's work with just one ETF for today and use filter(asset == "AGG") to shrink our data down to just that ETF.

agg_ff_data <-
  data_joined_tidy %>%
  filter(asset == "AGG")

Okay, we're going to regress the daily returns of AGG on one factor, then three factors, then five factors, and we want to evaluate how well each model explains AGG's returns. That means we need a way to test the model. Last time, we looked at the adjusted r-squared values when the model was run on the entirety of AGG returns. Today, we'll evaluate the model using k-fold cross-validation. That's a pretty jargon-heavy phrase that isn't part of the typical finance lexicon. Let's start with the second part, cross-validation. Instead of running our model on the entire data set - all the daily returns of AGG - we'll run it on just part of the data set, then test the results on the part that we did not use. Those different subsets of our original data are often called the training and the testing sets, though rsample calls them the analysis and assessment sets. We validate the model results by applying them to the assessment data and seeing how the model performed. The k-fold bit refers to the fact that we're not just dividing our data into training and testing subsets, we're actually going to divide it into a bunch of groups, a k number of groups, or a k number of folds. One of those folds will be used as the validation set; the model will be fit on the other k - 1 sets, and then tested on the validation set.
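As a sanity check on how k-fold splitting carves up the data, here's a minimal Python sketch (hypothetical, not from the post, which works in R) of the fold arithmetic for our 1,259 daily AGG returns and five folds:

```python
# Split n observation counts into k folds, mimicking what a
# k-fold cross-validation helper does under the hood: each fold
# serves once as the assessment set, the rest as the analysis set.
def kfold_sizes(n, k):
    base, extra = divmod(n, k)
    # The first `extra` folds get one additional observation.
    return [base + 1 if i < extra else base for i in range(k)]

n, k = 1259, 5  # 1259 daily AGG returns, 5 folds
assessment_sizes = kfold_sizes(n, k)
analysis_sizes = [n - size for size in assessment_sizes]

print(assessment_sizes)  # [252, 252, 252, 252, 251]
print(analysis_sizes)    # [1007, 1007, 1007, 1007, 1008]
```

This matches what we'll see from rsample below: roughly 80% of the observations in each analysis set and 20% in each assessment set.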
We're doing this with a linear model to see how well it explains the data; it's typically used in machine learning to see how well a model predicts data (we'll get there in 2019).1 If you're like me, it will take a bit of tinkering to really grasp k-fold cross-validation, but rsample has a great function for dividing our data into k folds. If we wish to use five folds (the state of the art seems to be either five or ten folds), we call the vfold_cv() function, pass it our data object agg_ff_data, and set v = 5.

library(rsample)
library(yardstick)

set.seed(752)

cved_ff <- vfold_cv(agg_ff_data, v = 5)

cved_ff

#  5-fold cross-validation
# A tibble: 5 x 2
  splits           id
  <list>           <chr>
1 <split [1K/252]> Fold1
2 <split [1K/252]> Fold2
3 <split [1K/252]> Fold3
4 <split [1K/252]> Fold4
5 <split [1K/251]> Fold5

We have an object called cved_ff, with a column called splits and a column called id. Let's peek at the first split.

cved_ff$splits[[1]]

<1007/252/1259>

Three numbers. The first, 1007, is telling us how many observations are in the analysis set. Since we have five folds, we should have 80% (or 4/5) of our data in the analysis set. The second number, 252, is telling us how many observations are in the assessment set, which is 20% of our original data. The third number, 1259, is the total number of observations in our original data.

Next, we want to apply a model to the analysis set of this k-folded data and test the results on the assessment set. Let's start with one factor and run a simple linear model, lm(returns ~ MKT). We want to run it on analysis(cved_ff$splits[[1]]) - the analysis set of our first split.

ff_model_test <- lm(returns ~ MKT, data = analysis(cved_ff$splits[[1]]))

ff_model_test

Call:
lm(formula = returns ~ MKT, data = analysis(cved_ff$splits[[1]]))

Coefficients:
(Intercept)          MKT
  0.0001025   -0.0265516

Nothing too crazy so far. Now we want to test on our assessment data. The first step is to add that data to the original set.
We'll use augment() for that task, and pass it assessment(cved_ff$splits[[1]]).

ff_model_test %>%
  augment(newdata = assessment(cved_ff$splits[[1]])) %>%
  select(returns, .fitted)

        returns       .fitted
1  0.0009021065  1.183819e-04
2  0.0011726989  4.934779e-05
3  0.0010815505  1.157267e-04
4 -0.0024385815 -7.544460e-05
5 -0.0021715702 -8.341007e-05
6  0.0028159467  3.865527e-04

We just added our fitted values to the assessment data, the subset of the data on which the model was not fit. How well did our model do when we compare the fitted values to the actual values in the held-out set? We can use the rmse() function from yardstick to measure our model. RMSE stands for root mean-squared error. It's the square root of the mean of the squared differences between our fitted values and the actual values in the assessment data. A lower RMSE is better!

ff_model_test %>%
  augment(newdata = assessment(cved_ff$splits[[1]])) %>%
  rmse(returns, .fitted)

# A tibble: 1 x 3
  .metric .estimator .estimate
  <chr>   <chr>          <dbl>
1 rmse    standard     0.00208

Now that we've done that piece by piece, let's wrap the whole operation into one function. This function takes one argument, a split, and we're going to use pull() so we can extract the raw number, instead of the entire tibble result.

model_one <- function(split) {
  split_for_model <- analysis(split)
  ff_model <- lm(returns ~ MKT, data = split_for_model)
  holdout <- assessment(split)
  rmse <-
    ff_model %>%
    augment(newdata = holdout) %>%
    rmse(returns, .fitted) %>%
    pull(.estimate)
}

Now we pass it our first split.

model_one(cved_ff$splits[[1]]) %>%
  head()

[1] 0.002080324

Now we want to apply that function to each of our five folds that are stored in cved_ff. We do that with a combination of mutate() and map_dbl(). We use map_dbl() instead of map() because we are returning a number here and there's not a good reason to store that number in a list column.
cved_ff %>%
  mutate(rmse = map_dbl(cved_ff$splits, model_one))

#  5-fold cross-validation
# A tibble: 5 x 3
  splits           id       rmse
* <list>           <chr>   <dbl>
1 <split [1K/252]> Fold1 0.00208
2 <split [1K/252]> Fold2 0.00189
3 <split [1K/252]> Fold3 0.00201
4 <split [1K/252]> Fold4 0.00224
5 <split [1K/251]> Fold5 0.00190

OK, we have five RMSEs since we ran the model on five separate analysis fold sets and tested on five separate assessment fold sets. Let's find the average RMSE by taking the mean() of the rmse column. That can help reduce noisiness that resulted from our random creation of those five folds.

cved_ff %>%
  mutate(rmse = map_dbl(cved_ff$splits, model_one)) %>%
  summarise(mean_rmse = mean(rmse))

#  5-fold cross-validation
# A tibble: 1 x 1
  mean_rmse
      <dbl>
1   0.00202

We now have the mean RMSE after running our model, lm(returns ~ MKT), on all five of our folds. That process for finding the mean RMSE can be applied to other models, as well. Let's suppose we wish to find the mean RMSE for two other models: lm(returns ~ MKT + SMB + HML), the Fama French three-factor model, and lm(returns ~ MKT + SMB + HML + RMW + CMA), the Fama French five-factor model. By comparing the mean RMSEs, we can evaluate which model explained the returns of AGG better. Since we're just adding more and more factors, the models can be expected to get more and more accurate, but again, we are exploring the rsample machinery and creating a template where we can pop in whatever models we wish to compare. First, let's create two new functions that follow the exact same code pattern as above but house the three-factor and five-factor models.
model_two <- function(split) {
  split_for_model <- analysis(split)
  ff_model <- lm(returns ~ MKT + SMB + HML, data = split_for_model)
  holdout <- assessment(split)
  rmse <-
    ff_model %>%
    augment(newdata = holdout) %>%
    rmse(returns, .fitted) %>%
    pull(.estimate)
}

model_three <- function(split) {
  split_for_model <- analysis(split)
  ff_model <- lm(returns ~ MKT + SMB + HML + RMW + CMA, data = split_for_model)
  holdout <- assessment(split)
  rmse <-
    ff_model %>%
    augment(newdata = holdout) %>%
    rmse(returns, .fitted) %>%
    pull(.estimate)
}

Now we pass those three models to the same mutate() with map_dbl() flow that we used with just one model. The result will be three new columns of RMSEs, one for each of our three models applied to our five folds.

cved_ff %>%
  mutate(
    rmse_model_1 = map_dbl(splits, model_one),
    rmse_model_2 = map_dbl(splits, model_two),
    rmse_model_3 = map_dbl(splits, model_three))

#  5-fold cross-validation
# A tibble: 5 x 5
  splits           id    rmse_model_1 rmse_model_2 rmse_model_3
* <list>           <chr>        <dbl>        <dbl>        <dbl>
1 <split [1K/252]> Fold1      0.00208      0.00211      0.00201
2 <split [1K/252]> Fold2      0.00189      0.00184      0.00178
3 <split [1K/252]> Fold3      0.00201      0.00195      0.00194
4 <split [1K/252]> Fold4      0.00224      0.00221      0.00213
5 <split [1K/251]> Fold5      0.00190      0.00183      0.00177

We can also find the mean RMSE for each model.

cved_ff %>%
  mutate(
    rmse_model_1 = map_dbl(splits, model_one),
    rmse_model_2 = map_dbl(splits, model_two),
    rmse_model_3 = map_dbl(splits, model_three)) %>%
  summarise(mean_rmse_model_1 = mean(rmse_model_1),
            mean_rmse_model_2 = mean(rmse_model_2),
            mean_rmse_model_3 = mean(rmse_model_3))

#  5-fold cross-validation
# A tibble: 1 x 3
  mean_rmse_model_1 mean_rmse_model_2 mean_rmse_model_3
              <dbl>             <dbl>             <dbl>
1           0.00202           0.00199           0.00192

That code flow worked just fine, but we had to repeat ourselves when creating the functions for each model.
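The fold-fit-score pattern above isn't specific to R. As an aside, here's a hypothetical, self-contained Python/NumPy sketch (on synthetic data, not the post's AGG data) of the same idea: fit ordinary least squares on each analysis set, then compute RMSE on the held-out assessment set:

```python
import numpy as np

def kfold_rmse(X, y, k=5, seed=752):
    """Per-fold RMSE for an ordinary least-squares fit, analogous to
    fitting on the analysis set and scoring on the assessment set."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(y))
    folds = np.array_split(idx, k)
    rmses = []
    for i in range(k):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        # Add an intercept column, then solve the least-squares problem.
        Xtr = np.column_stack([np.ones(len(train)), X[train]])
        Xte = np.column_stack([np.ones(len(test)), X[test]])
        beta, *_ = np.linalg.lstsq(Xtr, y[train], rcond=None)
        resid = y[test] - Xte @ beta
        rmses.append(float(np.sqrt(np.mean(resid ** 2))))
    return rmses

# Synthetic "returns" driven by one synthetic "factor".
rng = np.random.default_rng(0)
mkt = rng.normal(0, 0.01, 1259)
returns = 0.0001 + 0.5 * mkt + rng.normal(0, 0.002, 1259)

fold_rmses = kfold_rmse(mkt.reshape(-1, 1), returns, k=5)
print([round(r, 4) for r in fold_rmses])
print(round(float(np.mean(fold_rmses)), 4))  # mean RMSE across folds
```

With noise of standard deviation 0.002 baked into the synthetic data, each fold's RMSE lands near 0.002, mirroring the magnitudes in the AGG tables above.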
Let's toggle to a flow where we define three models - the ones that we wish to test via cross-validation and RMSE - then pass those models to one function. First, we use as.formula() to define our three models.

mod_form_1 <- as.formula(returns ~ MKT)
mod_form_2 <- as.formula(returns ~ MKT + SMB + HML)
mod_form_3 <- as.formula(returns ~ MKT + SMB + HML + RMW + CMA)

Now we write one function that takes split as an argument, same as above, but also takes formula as an argument, so we can pass it different models. This gives us the flexibility to more easily define new models and pass them to map, so I'll append _flex to the name of this function.

ff_rmse_models_flex <- function(split, formula) {
  split_for_data <- analysis(split)
  ff_model <- lm(formula, data = split_for_data)
  holdout <- assessment(split)
  rmse <-
    ff_model %>%
    augment(newdata = holdout) %>%
    rmse(returns, .fitted) %>%
    pull(.estimate)
}

Now we use the same code flow as before, except we call map_dbl(), pass it our cved_ff$splits object, our new flex function called ff_rmse_models_flex(), and the model we wish to pass as the formula argument. First we pass it mod_form_1.

cved_ff %>%
  mutate(rmse_model_1 = map_dbl(cved_ff$splits,
                                ff_rmse_models_flex,
                                formula = mod_form_1))

#  5-fold cross-validation
# A tibble: 5 x 3
  splits           id    rmse_model_1
* <list>           <chr>        <dbl>
1 <split [1K/252]> Fold1      0.00208
2 <split [1K/252]> Fold2      0.00189
3 <split [1K/252]> Fold3      0.00201
4 <split [1K/252]> Fold4      0.00224
5 <split [1K/251]> Fold5      0.00190

Now let's pass it all three models and find the mean RMSE.
cved_ff %>%
  mutate(rmse_model_1 = map_dbl(cved_ff$splits,
                                ff_rmse_models_flex,
                                formula = mod_form_1),
         rmse_model_2 = map_dbl(cved_ff$splits,
                                ff_rmse_models_flex,
                                formula = mod_form_2),
         rmse_model_3 = map_dbl(cved_ff$splits,
                                ff_rmse_models_flex,
                                formula = mod_form_3)) %>%
  summarise(mean_rmse_model_1 = mean(rmse_model_1),
            mean_rmse_model_2 = mean(rmse_model_2),
            mean_rmse_model_3 = mean(rmse_model_3))

#  5-fold cross-validation
# A tibble: 1 x 3
  mean_rmse_model_1 mean_rmse_model_2 mean_rmse_model_3
              <dbl>             <dbl>             <dbl>
1           0.00202           0.00199           0.00192

Alright, that code flow seems a bit more flexible than our original method of writing a function to assess each model. We didn't do much hard thinking about functional form here, but hopefully this provides a template where you could assess more nuanced models. We'll get into bootstrapping and time series work next week, then head to Shiny to ring in the New Year!

And, finally, a couple of public service announcements. First, thanks to everyone who has checked out my new book! The price just got lowered for the holidays. See it on Amazon or on the CRC homepage (okay, that was more of an announcement about my book). Second, applications are open for the Battlefin alternative data contest, and RStudio is one of the tools you can use to analyze the data. Check it out here. In January, they'll announce 25 finalists who will get to compete for a cash prize and connect with some quant hedge funds. Go get 'em!

Thanks for reading and see you next time.

1. For more on cross-validation, see "An Introduction to Statistical Learning", chapter 5. Available online here: http://www-bcf.usc.edu/~gareth/ISL/.
https://www.zbmath.org/?q=an%3A1427.11118
# zbMATH — the first resource for mathematics

A large arboreal Galois representation for a cubic postcritically finite polynomial. (English) Zbl 1427.11118

Summary: We give a complete description of the arboreal Galois representation of a certain postcritically finite cubic polynomial over a large class of number fields and for a large class of basepoints. This is the first such example that is not conjugate to a power map, Chebyshev polynomial, or Lattès map. The associated Galois action on an infinite ternary rooted tree has Hausdorff dimension bounded strictly between that of the infinite wreath product of cyclic groups and that of the infinite wreath product of symmetric groups. We deduce a zero-density result for prime divisors in an orbit under this polynomial. We also obtain a zero-density result for the set of places of convergence of Newton's method for a certain cubic polynomial, thus resolving the first nontrivial case of a conjecture of X. Faber and J. F. Voloch [J. Théor. Nombres Bordx. 23, No. 2, 387–401 (2011; Zbl 1223.37118)].

##### MSC:

11R32 Galois theory
05C25 Graphs and abstract algebra (groups, rings, fields, etc.)
11R09 Polynomials (irreducibility, etc.)

##### References:

[1] Benedetto, RL; Ghioca, D; Hutz, B; Kurlberg, P; Scanlon, T; Tucker, TJ, Periods of rational maps modulo primes, Math. Ann., 355, 637-660, (2013) · Zbl 1317.37111
[2] Bush, M.R., Hindes, W., Looper, N.R.: Galois groups of iterates of some unicritical polynomials. arXiv:1608.03328v1 [math.NT], preprint, 2016 · Zbl 1444.11223
[3] Faber, X; Voloch, JF, On the number of places of convergence for Newton's method over number fields, J. Théor. Nombres Bordeaux, 23, 387-401, (2011) · Zbl 1223.37118
[4] Gottesman, R; Tang, K, Quadratic recurrences with a positive density of prime divisors, Int. J. Number Theory, 6, 1027-1045, (2010) · Zbl 1244.11014
[5] Jones, R, The density of prime divisors in the arithmetic dynamics of quadratic polynomials, J. Lond. Math. Soc., 78, 523-544, (2008) · Zbl 1193.37144
[6] Jones, R.: Galois representations from pre-image trees: an arboreal survey. Publ. Math. Besançon, pp 107-136 (2013) · Zbl 1307.11069
[7] Jones, R, Fixed-point-free elements of iterated monodromy groups, Trans. Am. Math. Soc., 367, 2023-2049, (2015) · Zbl 1385.37063
[8] Jones, R; Manes, M, Galois theory of quadratic rational functions, Comment. Math. Helv., 89, 173-213, (2014) · Zbl 1316.11104
[9] Juul, J.: Iterates of generic polynomials and generic rational functions. arXiv:1410.3814v4 [math.NT], preprint, 2016 · Zbl 1442.37120
[10] Looper, N.R.: Dynamical Galois groups of trinomials and Odoni's conjecture. arXiv:1609.03398v1 [math.NT], preprint, 2016 · Zbl 07094881
[11] Nekrashevych, V.: Self-similar groups. Mathematical Surveys and Monographs, vol. 117. American Mathematical Society, Providence (2005) · Zbl 1087.20032
[12] Odoni, RWK, The Galois theory of iterates and composites of polynomials, Proc. Lond. Math. Soc., 51, 385-414, (1985) · Zbl 0622.12011
[13] Odoni, RWK, On the prime divisors of the sequence $$w_{n+1}=1+w_1⋯ w_n$$, J. Lond. Math. Soc., 32, 1-11, (1985) · Zbl 0574.10020
[14] Pink, R.: Profinite iterated monodromy groups arising from quadratic polynomials. arXiv:1307.5678 [math.GR], preprint, 2013 · Zbl 0622.12011
[15] Silverman, J.H.: The arithmetic of dynamical systems. Graduate Texts in Mathematics, vol. 241. Springer, New York (2007) · Zbl 1130.37001

This reference list is based on information provided by the publisher or from digital mathematics libraries. Its items are heuristically matched to zbMATH identifiers and may contain data conversion errors. It attempts to reflect the references listed in the original paper as accurately as possible without claiming the completeness or perfect precision of the matching.
https://www.khanacademy.org/math/cc-sixth-grade-math/cc-6th-data-statistics/mean-and-median/v/mean-median-and-mode
# Mean, median, & mode example

Video transcript

Find the mean, median, and mode of the following sets of numbers. And they give us the numbers right over here. So if someone just says the mean, they're really referring to what we typically, in everyday language, call the average. Sometimes it's called the arithmetic mean because you'll learn that there's other ways of actually calculating a mean. But it's really you just sum up all the numbers and you divide by the numbers there are. And so it's one way of measuring the central tendency. The average, I guess, we could say. So this is our mean. We want to average 23 plus 29-- or we're going to sum 23 plus 29 plus 20 plus 32 plus 23 plus 21 plus 33 plus 25, and then divide that by the number of numbers. So we have 1, 2, 3, 4, 5, 6, 7, 8 numbers. So you want to divide that by 8. So let's figure out what that actually is. Actually, I'll just get the calculator out for this part. I could do it by hand, but we'll save some time over here. So we have 23 plus 29 plus 20 plus 32 plus 23 plus 21 plus 33 plus 25. So the sum of all the numbers is 206. And then we want to divide 206 by 8. So if I say 206 divided by 8 gets us 25.75. So the mean is equal to 25.75. So this is one way to kind of measure the center, the central tendency. Another way is with the median. And this is to pick out the middle number, the median. And to figure out the median, what we want to do is order these numbers from least to greatest. So it looks like the smallest number here is 20. Then, the next one is 21. There's no 22 here. Let's see, there's two 23's. 23 and a 23. So 23 and a 23. And no 24's. There's a 25. 25. There's no 26, 27, 28. There is a 29. 29. Then you have your 32. 32. And then you have your 33. 33. So what's the middle number now that we've ordered it? So we have 1, 2, 3, 4, 5, 6, 7, 8 numbers. We already knew that.
And so there's actually going to be two middles. If you have an even number, there's actually two numbers that qualify for close to the middle. And to actually get the median, we're going to average them. So 23 will be one of them. That, by itself, can't be the median because there's three less than it and there's four greater than it. And 25 by itself can't be the median because there's three larger than it and four less than it. So what we do is we take the mean of these two numbers and we pick that as the median. So if you take 23 plus 25 divided by 2, that's 48 over 2, which is equal to 24. So even though 24 isn't one of these numbers, the median is 24. So this is the middle number. So once again, this is one way of thinking about central tendency. If you wanted a number that could somehow represent the middle. And I want to be clear, there's no one way of doing it. This is one way of measuring the middle. Let me put that in quotes. The middle. If you had to represent this data with one number. And this is another way of representing the middle. Then finally, we can think about the mode. And the mode is just the number that shows up the most in this data set. All of these numbers show up once, except for 23, which shows up twice. And since 23 shows up the most (it shows up twice, while every other number shows up only once), 23 is our mode.
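The three computations from the transcript can be reproduced directly with Python's `statistics` module:

```python
from statistics import mean, median, mode

data = [23, 29, 20, 32, 23, 21, 33, 25]

print(mean(data))    # 25.75  (the sum 206 divided by the 8 values)
print(median(data))  # 24.0   (average of the two middle values, 23 and 25)
print(mode(data))    # 23     (appears twice; every other value appears once)
```

Note that `median` averages the two middle values itself when the count is even, exactly as done by hand above.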
https://www.commandlinefu.com/commands/by/dinomite
### Commands by dinomite (6)

• Grabs the complete module list from CPAN, pulls the first column, ditches html lines, counts, ditches small namespaces.

  `curl http://www.cpan.org/modules/01modules.index.html | awk '{print $1}' | grep -v "<" | sort | uniq -c | grep -v " +[0-9] "` · 2011-10-18

• If you want to turn a Git repo into the origin that folks can push to, you should make it a bare repository. See: http://stackoverflow.com/questions/2199897/git-convert-normal-to-bare-repository

  `mv .git .. && rm -rf * && mv ../.git . && mv .git/* . && rmdir .git && git config --bool core.bare true` · 2011-02-28

• If you want to pull all of the files from a tree that has mixed files and directories containing files, this will link them all into a single directory. Beware of filesystem files-per-directory limits.

  `find /deep/tree/ -type f -print0 | xargs -0 -n1 -I{} ln -s '{}' .` · 2010-12-21

• Get just the IP address for a given hostname. For best results, make this a function in your shell rc file so that it can be used for things like `traceroute`.

  `host foo.com | grep " has address " | cut -d" " -f4` · 2010-10-29

• Create a tar file in multiple parts if it's too large for a single disk, your filesystem, etc. Rejoin later with `cat <name>.tar.* | tar xf -`.

  `tar cf - <dir> | split -b <max_size>M - <name>.tar.` · 2009-11-11

• Get your colorized grep output in less(1). This involves two things: forcing grep to output colors even though it's not going to a terminal and telling less to handle those properly.

  `grep --color=always | less -R` · 2009-05-20
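The tar-splitting one-liner above pairs with a `cat`-based rejoin. Here is an end-to-end sketch of how the two halves fit together (the `demo` file names are examples, and `-b 1M` is GNU `split` syntax; BSD split spells the size differently):

```shell
# Create sample data, archive it, and split the archive into 1 MB parts
mkdir -p demo && echo "hello" > demo/file.txt
tar cf - demo | split -b 1M - demo.tar.

# Pretend the parts were copied to another disk: drop the original,
# then rejoin the parts and extract
rm -rf demo
cat demo.tar.* | tar xf -
cat demo/file.txt   # hello
```

The parts are named `demo.tar.aa`, `demo.tar.ab`, and so on; because `cat` concatenates them in lexicographic order, the rejoined stream is byte-identical to the original archive.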
http://www.wildml.com/2016/07/
## Deep Learning for Chatbots, Part 2 – Implementing a Retrieval-Based Model in Tensorflow

The code and data for this tutorial are on GitHub.

#### Retrieval-Based bots

In this post we’ll implement a retrieval-based bot. Retrieval-based models have a repository of pre-defined responses they can use, unlike generative models, which can generate responses they’ve never seen before. A bit more formally, the input to a retrieval-based model is a context $c$ (the conversation up to this point) and a potential response $r$. The model outputs a score for the response. To find a good response, you would calculate the score for multiple responses and choose the one with the highest score.
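As a toy illustration of the scoring idea (not the learned dual-encoder model the post goes on to build), here is a sketch that scores candidate responses against a context using simple word-overlap cosine similarity; the example sentences are made up:

```python
from collections import Counter
from math import sqrt

def score(context, response):
    """Toy scorer: cosine similarity of word-count vectors.
    A real retrieval model would *learn* this function instead."""
    a = Counter(context.lower().split())
    b = Counter(response.lower().split())
    dot = sum(a[w] * b[w] for w in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

context = "what time does the store open"
candidates = [
    "the store opens at nine",
    "i like turtles",
    "paris is the capital of france",
]
# Score every candidate and pick the highest-scoring one
best = max(candidates, key=lambda r: score(context, r))
print(best)  # the store opens at nine
```

The structure (score each candidate, take the argmax) is exactly the retrieval loop described above; only the scoring function changes when you swap in a trained model.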
https://cs.stackexchange.com/questions/57546/are-coevolutionary-free-lunches-really-free-lunches/96473
# Are coevolutionary “Free Lunches” really free lunches?

In their paper "Coevolutionary Free Lunches" David Wolpert and William Macready describe a set of exceptions to the No Free Lunch theorems they proved in an earlier paper. The exceptions involve two-player games in which a player tries to minimize expected search cost given optimal play, or at least good play, by an opponent. Free lunches are "allowed" in this case because the exact choice of cost function to minimize changes depending on which fitness (i.e. objective) functions previous rounds of play have "ruled out." In other words, given that an opponent already knows something about the game, and chooses responses that minimize the player's expected return, the player can eliminate certain fitness functions without having to evaluate them.

To illustrate how this works, W&M provide a chart (not reproduced here) comparing three search algorithms. There, $\langle g \rangle$ represents the search algorithm that pays no attention to the opponent's moves; $\langle g \rangle_{1}$ represents the search algorithm that considers all possible opponent replies to each move; and $\langle g \rangle_{2}$ represents the search algorithm that samples just one possible opponent reply to each move. This illustrates the character of the free lunch provided: algorithms that take into account information provided by the opponent do better than algorithms that don't, and those that collect as much information as possible from the opponent do better than those that only collect some.

W&M amplify this point in their later discussion of opponent intelligence. They show that even when an opponent is not omniscient, but has partial knowledge, the player can exploit its partial knowledge. In the case of total ignorance this won't work because the opponent always replies with a random move. In that case, there appears to be no gain: the expected performance of the agent will be the average over the antagonist’s possible responses. I guess this looks like $\langle g \rangle$ above.
But in cases where the opponent has some knowledge, algorithm performance appears to increase monotonically with opponent intelligence. What confuses me about all this is that the argument appears to boil down to the following trivial-seeming claim: across all problem domains, algorithms that bother to learn from knowledgeable opponents do better than algorithms that do not. That is, as long as you don't turn down the free lunch, you can have it. In effect, lunch is free because the opponent is buying. If that's true, then couldn't we imagine a whole range of games in which, say, players play cooperative games with oracles, and those that refuse to cooperate perform worse on all problems? That's a free lunch in the same sense, right? But then we haven't explained where the oracle's knowledge comes from. Does this mean that if we had to model the source of the other player's knowledge, we'd be back in the No Free Lunch zone? Or is the claim really that this sort of competitive play yields better outcomes even when both players start out in total ignorance, as the phrase "free lunch" seems to imply?

Coevolutionary algorithms can't magically accelerate progress on any arbitrary problem class. So in that sense, the conclusion at the end of the question is correct. However, it doesn't follow that all coevolutionary free lunches are trivial, as the conclusion of the question suggests. I can't offer an exhaustive account of all the kinds of coevolutionary free lunches, but I can offer two examples, the first of which is trivial, and the second of which I would argue is nontrivial. The second is nontrivial because it helps explain why the no free lunch theorem really must hold. The important difference between the two examples is this: in the first, the competing algorithms are competing to achieve the same overarching goal, while in the second, the competing algorithms are trying to achieve different goals.
In the second case, the mismatch between the two algorithms' goals allows interesting things to happen. I'll begin with the trivial example.

### Opponents seeking the same goal

Imagine a very simple optimization problem in which the search landscape is a 7x7 grid of cells. The primary goal is to find the cell with the maximum value. 48 of the cells have a value of 0, and one randomly chosen cell on the grid has a value of 1. Our secondary goal is to discover a search strategy that finds the maximum value more quickly. But it follows from the initial problem that no strategy could possibly beat random search here, because nothing can be learned from one cell about another. Nonetheless, the coevolutionary free lunch theorem holds! Here's why:

Suppose you have two optimization algorithms, A and B, both searching the grid. It doesn't really matter what strategy they use, but for concreteness, we'll stipulate that they both use a random search strategy. The only difference between them is that B pays attention to A's moves, and when it sees that A has found the maximum value, it jumps to that cell too. In some sense, when that happens, B has still "lost" the contest. But if you run a lot of competitions, and then compare the average performance of A to the average performance of B, you'll see that B finds the maximum value faster on average.

The explanation is simple. The average time to first discovery -- whether by A or B -- stays the same. But whenever A beats B to the win, B doesn't bother looking anywhere else. It skips ahead to the best cell. When B beats A to the win, on the other hand, A just continues searching, plodding along until it finds the maximum value on its own. This looks like a win if you're only counting the number of moves B makes. If you look at the total number of moves that A and B make together, they actually do worse together than either would on its own, on average. That's because of A's obliviousness.
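A quick simulation of this grid scenario (the trial count and seed are my own choices) makes the effect visible: B's search is identical to A's except that it copies A's win, and its average time to discovery drops even though neither algorithm searches any "better":

```python
import random

random.seed(1)

P = 1 / 49        # chance that one random probe hits the single max cell
TRIALS = 20000

def probes_until_found():
    """Number of uniform random probes until the max cell is hit."""
    n = 1
    while random.random() >= P:
        n += 1
    return n

avg_a = avg_b = 0.0
for _ in range(TRIALS):
    t_a = probes_until_found()               # oblivious searcher A
    t_b = min(probes_until_found(), t_a)     # B searches the same way, but jumps to A's win
    avg_a += t_a / TRIALS
    avg_b += t_b / TRIALS

print(avg_a, avg_b)  # avg_a is about 49; avg_b is markedly smaller
```

B's advantage comes entirely from the `min(..., t_a)`: whenever A wins first, B stops immediately, so B's average is the mean of the minimum of two identical random searches.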
If we change A to behave the same way as B, then they do about as well together as either would on its own -- but no better. So here, modeling both algorithms together returns us directly to the no free lunch zone, just as the question argues. In effect, A and B are just performing random search algorithms in parallel. The final number of search operations remains the same.

### Opponents seeking different goals

Now imagine a very different scenario. Suppose we have a classification problem: recognizing sheep. Here, A's job is to look at a stream of pictures and say whether there are sheep in them. Simple enough. But B's job is very different. B has the power to inject pictures of its own into the stream! Its goal is not to identify sheep; it just wants to slow A down.

> Does anyone have a picture of sheep in a really unusual place? It's for pranking a neural net.

One of the first replies was a picture of an orange sheep, and it turned out to be perfect for pranking: the neural net captioned it "a brown cow laying on top of a lush green field." Orange sheep were not a thing it was expecting. (Shane's blog post has a couple of other examples in the same vein.)

So what does this have to do with our problem? We can connect them by being more precise. Suppose A's goal is to reach greater than 99% accuracy, and B has the power to inject one picture into A's stream for every nine "natural" pictures. B looks for patterns in A's behavior and uses them to find pictures that mess with its model. This will keep A's accuracy below 99% for much longer than if A saw only "natural" pictures. Two things follow from this.

First, B will do much better if it pays attention to what A does. If B just picks images on the basis of some randomly chosen general principle like "sheep in odd places," then there's a good chance that A will be prepared for them already. If not, it will quickly learn to handle them correctly, and B will then have to adopt a new strategy.
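A toy simulation (every detail of it is my own invention, not W&M's) illustrates B's advantage from watching A. Here A learns a 1-D threshold classifier online, and an adversarial B injects examples that sit just on the wrong side of A's current threshold, which are exactly the inputs A handles worst at that moment:

```python
import random

random.seed(2)

def error_rate(adversarial_b, rounds=5000):
    """A is an online threshold learner; class-1 inputs cluster near +1,
    class-0 inputs near -1. An adversarial B injects 1 example per 10,
    placed barely on the wrong side of A's current boundary."""
    theta = 0.0
    errors = 0
    for i in range(rounds):
        if adversarial_b and i % 10 == 9:
            label, x = 1, theta - 0.01       # guaranteed miss for A right now
        else:
            label = random.choice([0, 1])
            x = random.gauss(1.0 if label == 1 else -1.0, 0.5)
        guess = 1 if x > theta else 0
        errors += guess != label
        theta += 0.05 * (guess - label)      # nudge the boundary when wrong
    return errors / rounds

err_adv = error_rate(adversarial_b=True)
err_plain = error_rate(adversarial_b=False)
print(err_plain, err_adv)  # the adversarial B keeps A's error rate much higher
```

Each injection both scores an error and drags A's threshold away from the natural data, so A then misclassifies some natural pictures too, just as described above.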
On the other hand, if B watches A's behavior, it can pick out the specific things that A is worst at, and focus on those. As soon as A improves at one of them, B can have another one ready to go. As long as B can find patterns in A's behavior, B will always present the most challenging images for A. Second, A will do much better if it pays attention to which images B picks. After all, B is looking for patterns in A's behavior. If it finds patterns, then it will use those patterns to send fake or troublesome pictures to A. In turn, that means that there will be noticeable patterns driving B's choices. Here again, if A is paying attention to the patterns in B's behavior, it will more quickly be able to identify which pictures B is injecting. What's important is that in this scenario, both A and B are relying on data that is guaranteed to have patterns. It is guaranteed to have patterns because if A is doing its best, A is doing something other than random search. And if B is doing its best, then B is doing something other than random search. So initially, this looks like a really compelling free lunch situation. But what have we actually shown? We've shown this: As long as A is doing something other than random search, B can always find out-of-band samples that A's methods cannot handle. That's the no free lunch theorem in a nutshell! The only way A can prevent B from finding out-of-band images is by behaving in ways that look random to B. But if A's behavior isn't really random, then in the very long run, B will always be able to find the pattern -- even if B itself is only doing random search. The very same argument works in the other direction. The only way B can prevent A from noticing patterns in its images is by behaving in random-seeming ways. But if B's behavior isn't really random, A will eventually find the pattern, even if it's only doing random search. 
In this scenario, both algorithms will try to fool each other by adopting more and more complex, random-seeming behaviors. So in the very, very long run, they will slowly converge to a truly random search -- which is the best any algorithm can do on average across all problem domains.

### An infinite pool of randomness

It might be that these coevolutionary learning strategies do still have this advantage over others: they may encourage both algorithms to explore the space of relevant non-random behaviors more quickly or extensively. I am not even sure of that. Either way, the no free lunch theorem holds in general, because the space of possible non-random behaviors is far, far smaller than the space of possible random behaviors. How do we know?

It would be off topic to go into a detailed proof, but consider the related question of how many long strings can be compressed into shorter strings. Regardless of the compression method, the majority of all strings cannot be compressed at all. This is easy to prove with a bin-counting argument. Suppose we consider binary strings, and start with the empty string. Assuming there can be no negative-length strings, that's incompressible. Now consider length-1 strings. There are two, but there's only one length-0 string, so only one of them can be compressed. Now we have two incompressible strings, and a third string that can be compressed by one bit. Moving on to length-2 strings: there are four, but there are only two length-1 strings, and the one length-0 string is already taken, so we can only compress two of the length-2 strings. The other two are incompressible. That's three compressible strings and four incompressible strings... and so on.

As the numbers get higher, one of the things you notice is that even among the compressible strings, half of them are only compressible by one bit, because they compress to strings that are themselves incompressible.
A quarter are only compressible by two bits; an eighth are only compressible by three bits. No matter how you slice it, the number of substantially compressible strings is always much lower than the number of strings that are incompressible or barely compressible. The line of reasoning for random behavior is similar. You could also connect these ideas to the proof that there are vastly more real numbers than integers. In the global scheme of things, the no free lunch theorem is true because the scope of randomness is unimaginably vast.
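The bin-counting bound can be made concrete: among length-$n$ binary strings, at most $2^0+2^1+\dots+2^{n-k}$ can be compressed by $k$ or more bits (one per available shorter description), so the compressible fraction is always below $2^{1-k}$:

```python
# Fraction of length-n binary strings that could even in principle be
# compressed by at least k bits: bounded by the number of strings of
# length <= n - k, which is 2**(n - k + 1) - 1.
n = 30
for k in range(1, 6):
    descriptions = sum(2**i for i in range(n - k + 1))   # == 2**(n-k+1) - 1
    fraction = descriptions / 2**n
    print(k, fraction)   # always below 2**(1 - k)
```

So fewer than half the strings are compressible at all, fewer than a quarter by two bits, and so on, which is the counting fact the paragraph above relies on.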
https://cs.stackexchange.com/questions/33278/proving-a-language-is-regular?noredirect=1
# Proving a language is regular [duplicate]

I am asked to prove that the following languages are regular:

(a) $$\{a^nb^ma^k \mid n\geq3,m\geq1,k\geq1\}$$

(b) $$\{a^n \mid n\neq3 \text{ and } n\not\equiv2 \mod7\}$$

(c) $$\{a^nb \mid n\geq2\}\cup\{ab^m \mid m\geq3\}$$

I have a vague understanding of the pumping lemma and of how to prove a language is not regular, but I was hoping that someone could walk through (a) with me to give me a better understanding and allow me to do the rest on my own.
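Not a proof, but a useful sanity check: the standard argument for (a) is to exhibit a regular expression for it, and `a{3,}b+a+` does the job (it uses only concatenation and repetition, both regular features; no backreferences). A quick check with Python's `re`:

```python
import re

# n >= 3 a's, then m >= 1 b's, then k >= 1 a's
lang_a = re.compile(r"a{3,}b+a+")

print(bool(lang_a.fullmatch("aaaba")))      # True  (n=3, m=1, k=1)
print(bool(lang_a.fullmatch("aaaabbbaa")))  # True  (n=4, m=3, k=2)
print(bool(lang_a.fullmatch("aaba")))       # False (only two leading a's)
print(bool(lang_a.fullmatch("aaab")))       # False (no trailing a's)
```

Converting that regular expression to an NFA (and then a DFA) via the standard constructions would turn this check into an actual proof.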
https://datascience.stackexchange.com/questions/88348/bert-and-svm-classification
# BERT and SVM classification

I'm trying to understand the concepts in the title and how they fit into the task of binary classification. According to my understanding so far, you can encode text using various feature-extraction methods, such as bag of words. You can then use something like LIBLINEAR to train a linear SVM model that is able to classify your data. On the other hand, you can build a model by concatenating BERT with a dense layer. You can then fine-tune this model and, again, obtain a classifier. Where would you use either one of them, and why?
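For concreteness, the feature-extraction half of the first pipeline can be sketched in a few lines of stdlib Python (a stand-in for what, e.g., scikit-learn's `CountVectorizer` does; the example texts are made up). The resulting fixed-length count vectors are what you would feed to a linear SVM such as LIBLINEAR:

```python
from collections import Counter

def bow_vectors(texts):
    """Minimal bag-of-words featurizer: one column per vocabulary word."""
    vocab = sorted({w for t in texts for w in t.lower().split()})
    index = {w: i for i, w in enumerate(vocab)}
    vectors = []
    for t in texts:
        v = [0] * len(vocab)
        for w, c in Counter(t.lower().split()).items():
            v[index[w]] = c
        vectors.append(v)
    return vocab, vectors

vocab, X = bow_vectors(["spam spam eggs", "ham eggs"])
print(vocab)  # ['eggs', 'ham', 'spam']
print(X)      # [[1, 0, 2], [1, 1, 0]]
```

The contrast with BERT is that these columns are fixed and order-blind, whereas BERT produces a learned, context-sensitive encoding of the whole sequence.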
https://mathoverflow.net/questions/250042/another-fix-field-of-a-certain-galois-group-action
# Another fixed field of a certain Galois group action

Let $E=\mathbb{F}_p(\!(u)\!)$, the Laurent series field over $\mathbb{F}_p$. Let $K/E$ be a finite normal separable extension. Consider the field $L=K(x \mid x^p-x-a=0 \text{ for some } a \in K)$. Let $\hat{L}$ be the $u$-adic completion of $L$. We write $G_E=\mathrm{Gal}(E^s/E)$ for the absolute Galois group of $E$. By continuity $G_E$ acts on $\hat{L}$.

What is $(\hat{L})^{G_E}$? In particular, is $(\hat{L})^{G_E}= E$?

One has $\hat{L} \subseteq \widehat{E^s}$, and following the answer to one of my last questions one also has $(\widehat{E^s})^{G_E}=\widehat{E^{\mathrm{perf}}}$. Therefore, we are left with the question whether $\hat{L} \cap \widehat{E^{\mathrm{perf}}}=E$.

Edit: and what if we consider, for some $m \in \mathbb{Z}$, the field $\hat{L}_m$, where $L_m= K(x \mid x^p-x-a=0 \text{ for some } a \in K \text{ with } \nu(a) \geq m)$? What about the special case $m=0$?

• You mean $L=K(x \mid x^p-x-a=0)$ for some $a \in K$, right? – HeinrichD Sep 16 '16 at 17:32
• No, such an $L$ would be finite over $E$, hence already $u$-adically complete. I would like to adjoin all zeroes of Artin-Schreier polynomials to $K$. – Louis Sep 16 '16 at 17:55
• If $K/E$ isn't Galois, then $G_E$ doesn't act on $K$. Why does it act on the completion of $L$? – znt Sep 16 '16 at 19:19
• Aren't the $L_m$ finite over $K$? – Felipe Voloch Sep 17 '16 at 21:18
• Sorry, I got my signs confused. $L_1 = K$ because if $\nu(a)>0$, then $z=-\sum_{n=0}^{\infty} a^{p^n}$ converges and $z^p-z=a$. My earlier argument shows that $L_{m}$ is finite over $L_{m+1}$ and by induction $L_m$ is finite for all $m\le 0$ (and $=K$ for $m>0$). – Felipe Voloch Sep 18 '16 at 2:48

The right way to do this would be to give a short, efficient, abstract argument. I, however, will give a fairly messy example to show that $u^{1/p}\in\hat L$.
I'll use $K=E=\Bbb F_p((u))$, and set $a_n=u^{1-pn}$ for $n>0$, and $f_n(x)=x^p-x-a_n$, an irreducible and separable polynomial over $E$. Its roots are of valuation $-n+1/p$, and I'll multiply these all by $u^n$ to form the polynomial $g_n(x)=u^{pn}f_n(x/u^n)=x^p-u^{(p-1)n}x-u$. This is Eisenstein, so a root $\alpha_n$ of $g_n$ will be a generator of a field $K_n$, which by its construction is one of the fields whose compositum defines $L$. Now I claim that $\lim_n\alpha_n=u^{1/p}$. For this, I need $v(\alpha_n-u^{1/p})$, where $v$ is the $u$-adic valuation normalized so that $v(u)=1$, and I'll exhibit this as $\frac1{p^2}v\bigl(\mathbf N^{K_n(u^{1/p})}_E(\alpha_n-u^{1/p})\bigr)$. Since $\mathrm{Irr}(\alpha_n,E)=g_n(x)=x^p-u^{(p-1)n}x-u$, this is also $\mathrm{Irr}\bigl(\alpha_n,E(u^{1/p})\bigr)$, and so $\mathrm{Irr}\bigl(\alpha_n-u^{1/p},E(u^{1/p})\bigr)=g_n(x+u^{1/p})=x^p-u^{(p-1)n}x-u^{(p-1)n+1/p}$. To get the minimal polynomial for $\alpha_n-u^{1/p}$ over $E$, just raise to the $p$-power. We get: $$\mathrm{Irr}(\alpha_n-u^{1/p},E\,) = x^{p^2}-u^{p(p-1)n}x^p-u^{p(p-1)n+1}\,.$$ Thus $v(\alpha_n-u^{1/p})=\frac1{p^2}\bigl(p(p-1)n+1\bigr)>n/2$, which establishes the claim.

• Thank you for your answer! I indeed like this explicit argument. However, the explicit construction gives me (moral) hope, that in the case of the $L_m$ (especially $L_0$) one might obtain a positive answer. – Louis Sep 17 '16 at 19:20
• Even for $L_0$, I don't doubt that at all. – Lubin Sep 18 '16 at 1:10
• Sorry, I'm not sure of what to make from this comment. Do you believe that $(L_0)^{G_E}=E$ or tend to $(L_0)^{G_E} \neq E$? Double negative and the stress words (even/at all) with unclear reference make it a bit confusing to me. – Louis Sep 18 '16 at 1:31
• Sorry, I am interested in the completed $\hat{L}_m$ of course (too late to edit my previous comment) – Louis Sep 18 '16 at 1:44
• (You can always delete the previous comment and rewrite.)
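The rescaling step $g_n(x)=u^{pn}f_n(x/u^n)$ can be sanity-checked symbolically for a concrete prime; the values $p=3$, $n=2$ below are an arbitrary illustration, not part of the argument:

```python
import sympy as sp

u, x = sp.symbols('u x')
p, n = 3, 2  # arbitrary concrete values for illustration

# f_n(x) = x^p - x - a_n  with  a_n = u^(1 - p*n)
f_n = x**p - x - u**(1 - p*n)

# Rescaling the roots by u^n corresponds to g_n(x) = u^(p*n) * f_n(x / u^n);
# this should come out as the Eisenstein polynomial x^p - u^((p-1)*n) * x - u.
g_n = sp.expand(u**(p*n) * f_n.subs(x, x / u**n))

assert g_n == sp.expand(x**p - u**((p - 1)*n)*x - u)
```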
Sorry for the unclarity, but I think your hopes are true for $L_0$: that $\widehat{L_0}\cap \widehat{E^{\text{perf}}}=E$ – Lubin Sep 18 '16 at 1:56
https://gamedev.stackexchange.com/questions/29263/updating-object-position
# Updating object position

An object - a player or NPC or whatever - has a position, direction and speed. Before drawing each frame, we want to compute the current position of each object. How do you do this? If you take the elapsed time since the last frame, you run the very real risk of the game running at the wrong speed if the frame rate is very high and the clock precision poor (as it often is on some platforms). And how can you incorporate player controls, such as the player wanting to go left or right (e.g. in a simple Asteroids clone without ship inertia)?

• I don't know any platform that doesn't offer sub-millisecond clock precision. What do you mean by "as it often is on some platforms"? – sam hocevar May 18 '12 at 10:17
• @SamHocevar Google Native Client has been giving me problems; people have even managed to jump to ledges they shouldn't, etc. – Will May 18 '12 at 10:52
• So you mean that position += direction * speed * elapsed is giving you problems? – David Gouveia May 18 '12 at 11:15
• @DavidGouveia yes; hence wondering what other approaches work better – Will May 18 '12 at 11:45
• If your problem is the frame rate becoming too high, how about modifying your game loop to enforce a fixed frame rate? gafferongames.com/game-physics/fix-your-timestep – David Gouveia May 18 '12 at 11:51

The simplest approach is the explicit (Newton-Euler) update:

position[t + 1] = position[t] + velocity[t] * dt

It works as long as dt is sufficiently small. If Newton-Euler doesn't work for you, you generally move straight on to Runge-Kutta 4 (RK4). RK4 is a little bit more complex: instead of using only the previous state, it evaluates the derivatives at four intermediate points between the previous and next states. Essentially, RK4 tries to predict the next state of the system given the previous state and its derivatives. However, it requires you to be able to say what the position, velocity and acceleration of the object will be in the future as well as the present. This is easy in some systems (like orbiting bodies or falling characters), but is next to impossible when you've got stuff like collisions involved; otherwise it will give you nearly perfect results even when your timestep is quite large.
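The fixed-timestep loop that the last comment links to ("fix your timestep") can be sketched as follows; the `integrate` step is the explicit-Euler update from the answer, and all of the frame durations are made-up stand-ins for real clock readings:

```python
def integrate(state, dt):
    """One explicit-Euler step: position += velocity * dt."""
    x, v = state
    return (x + v * dt, v)

def simulate(frame_durations, dt=1.0 / 120.0):
    """Fixed-timestep loop: the renderer produces frames of irregular
    duration, but physics always advances in constant dt steps, so the
    result does not depend on frame rate or clock jitter."""
    state = (0.0, 1.0)  # position 0, velocity 1 unit/s
    accumulator = 0.0
    for frame in frame_durations:
        accumulator += frame
        while accumulator >= dt:
            state = integrate(state, dt)
            accumulator -= dt
    return state

# Two runs covering roughly the same wall-clock time at very different
# frame rates end up in almost the same place:
fast = simulate([1.0 / 240.0] * 240)   # 1 s of smooth 240 fps frames
janky = simulate([0.05, 0.002] * 20)   # ~1 s of wildly uneven frames
print(fast, janky)
```

The accumulator carries over whatever fraction of a timestep a frame did not consume, which is what keeps the simulation speed independent of the frame rate.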
https://chemistry.stackexchange.com/questions/86430/do-light-and-heavy-water-form-an-azeotrope
# Do light and heavy water form an azeotrope?

(Interested in the general principle too, not just for water and its variants, but with particular emphasis on it.)

When you distill a weak ethanol-in-water solution, you get a condensate that's ethanol-enriched, but that only takes you asymptotically up to 95%, because ethanol and water form an azeotrope with that composition.

Is there a similar "enrichment limit" one can achieve in enriching water with deuterium, using only distillation at a fixed pressure? (I know one can "break" the azeotrope, but I don't want to get into this.) Or could one, given enough stages in a cascade, approach 100% enrichment arbitrarily closely?

• What makes you think there might be one? – Gert Nov 26 '17 at 16:33
• This is an interesting question, but you'd have to use deuterated ethanol as well. Mixing CH3CH2OH with D2O would yield a mixture of CH3CH2OH, CH3CH2OD, D2O, HDO, and H2O. – MaxW Nov 26 '17 at 16:38
• Azeotrope forming or not: it is hard to achieve as the boiling points differ by about one degree Celsius. – Alchimista Nov 26 '17 at 17:01
• @MaxW I'm not really interested in ethanol; I just used it as an example of an azeotrope-forming co-solvent with water. But yeah, the general form of my question stands: would ethanol and deuterated ethanol (any/all of them) form an azeotrope with each other? – Bernd Jendrissek Nov 26 '17 at 21:54
• @BerndJendrissek: there exists no algorithm that predicts the existence of an azeotrope. – Gert Nov 26 '17 at 22:02

This is a good question, but no, there is no azeotrope in any isotopic mixture, as an azeotrope requires non-ideal mixing of two components to form a minimum or maximum critical point on the plot of total vapor pressure as a function of composition, as shown in Figure 1 below. Isotopic mixtures have a near-zero enthalpy of mixing, making them the closest thing to an ideal liquid mixture.
Because of this ideality of mixing, the boiling behaviour of protium-deuterium oxide mixtures looks much like the phase diagram in Figure 2, which follows Raoult's law (though this is actually quite exaggerated compared to semi-heavy water mixtures). Even if there were a theoretical point at which an azeotrope could exist despite the negligible enthalpy of mixing, it would occur at extreme concentrations of $\ce{D2O}$ or $\ce{H2O}$ and be unobservable due to the "doped" species existing as semi-heavy water ($\ce{HDO}$). This is further evidenced by the fact that other physical properties of isotopic mixtures follow a linear curve when measured as a function of concentration.

Figure 1. Generic plot of vapor pressure as a function of composition

Figure 2. Generic boiling phase diagram for ideal solutions

To answer your question about distillation-only enrichment: it is possible, though impractical. Commercial enrichment of $\ce{D2O}$ first uses hydrogen sulfide in the Girdler sulfide process; once enriched to around 30%, distillation is used to finish the process to the desired isotopic purity.

• +1 You might add that a mixture has to deviate quite a bit from ideality before the vapour pressure curve can get a minimum or maximum. No chance for isotopomers. – Karl Nov 29 '17 at 21:49
• +1 Can you expand a bit on "Even if there was a theoretical point at which an azeotrope could exist for the negligible enthalpy of mixing it would occur at extreme concentrations of D2O or H2O" (and assume that the pairs are H2O/HDO and HDO/D2O)? While the non-ideality of mixing is certainly very small (and mind you, the H/D chemical difference is much bigger than U-235/U-238), the difference in boiling points is also very small, so you wouldn't need much non-ideality for an azeotrope to form. – Bernd Jendrissek Nov 30 '17 at 10:45
• @BerndJendrissek I put that in there to preempt that one critic that always asks "what about at 99.99995876%".
In order for there to be an enthalpy of mixing there has to be some sort of interaction of the mixed species. For $\ce{H2O}$/$\ce{D2O}$ this is the formation of $\ce{HDO}$. If you have only $\ce{HDO}$ and one other species of water then there is no reaction to be had from mixing, thus no enthalpy, thus no azeotrope. – A.K. Nov 30 '17 at 23:45 • @A.K. I guess I'm that one critic :p I disagree with any "obviousness" of H2O/HDO mixing producing no reaction - while any chemical difference between H and D is necessarily small, it's larger than between any other isotopes. I'm happy to accept an empirical claim like "we know that from 0.000014% to 99.99978% there's no azeotrope", but I'm unfortunately not convinced by any of these "molecular arguments" that none can exist. – Bernd Jendrissek Dec 2 '17 at 3:06 Distillation alone has been used on an industrial scale to produce heavy water, but it requires a multi-stage process and uses a lot of energy due to the small difference in boiling points. That would seem to confirm that water and deuterium oxide do not form an azeotrope. • Do you have a reference to back this up? I thought that generally in systems of isotopically substituted molecules an infinite set of azeotropic points is possible. And "distillation" refers to the process of low-temperature rectification of liquid hydrogen, followed by combustion of isolated $\ce{D2}$ with $\ce{O2}$. – andselisk Nov 28 '17 at 12:51 • @andselisk Where did you get the idea of azeotropic points and isotopes? And by far the cheapest processes to enrich deuterium involve water not hydrogen (one process uses water and hydrogen sulphide followed by distillation of the enriched deuterium water, the other just uses distillation of water). – matt_black Nov 28 '17 at 13:52 • @matt_black From the back of my head. The difference in b.p. between D2O and H2O is only 1.44 °C, and max. separation factor of about 1.05 (whereas it can be >10 for electrolysis). 
Separation via distillation only is extremely inefficient; it can lead to enriched heavy water content in combination with the methods you've mentioned, but distillation only is a dead end. – andselisk Nov 28 '17 at 15:51

• It does seem to indicate that there is no light/heavy water azeotrope, but can we exclude the azeotrope being at, say, 99.9993% heavy water? – Bernd Jendrissek Nov 29 '17 at 13:48
• Azeotropes form between substances of different polarity. If heptane and hexane don't form one, I can't see it happening for H2O, HDO and D2O. The vapour pressure of the ideal mixture has a constant slope with composition; you have to deviate quite a bit from ideality to get a curve with a maximum/minimum. – Karl Nov 29 '17 at 21:23
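The "ideal mixture, no azeotrope" argument can be made concrete with a Raoult's-law sketch. The vapor pressures below are illustrative round numbers (H2O at its 100 °C boiling point is 760 torr; D2O, boiling about 1.4 °C higher, has roughly 722 torr there), not data from this thread:

```python
# Raoult's law for an assumed-ideal H2O/D2O liquid mixture.
P_H2O = 760.0  # torr near 100 C (illustrative)
P_D2O = 722.0  # torr near 100 C (illustrative)

def vapor_fraction_d2o(x_d2o):
    """Mole fraction of D2O in the vapor over an ideal liquid mixture
    with liquid-phase D2O mole fraction x_d2o (Raoult's law)."""
    p_d = x_d2o * P_D2O
    p_h = (1.0 - x_d2o) * P_H2O
    return p_d / (p_d + p_h)

# An azeotrope would require y == x somewhere in (0, 1).  For an ideal
# mixture with P_D2O < P_H2O the vapor is strictly enriched in the more
# volatile H2O, so y < x everywhere and no azeotrope can occur:
for x in (0.01, 0.1, 0.5, 0.9, 0.99):
    assert vapor_fraction_d2o(x) < x

# Per-stage separation factor, consistent with the ~1.05 quoted above:
alpha = P_H2O / P_D2O
```

The small per-stage factor is exactly why distillation-only enrichment needs so many stages and so much energy.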
https://www.brightstorm.com/math/calculus/the-derivative/derivative-notation/
# Derivative Notation - Concept

The two commonly used ways of writing the derivative are Newton's notation and Leibniz's notation. Newton's notation involves a prime after the function to be derived, while Leibniz's notation utilizes a d over dx in front of the function. These two methods of derivative notation are the most widely used ways to signify the derivative function.

I want to talk about derivative notation. There are two main forms of derivative notation: the Newton form and the Leibniz form. These two forms are named for the two co-creators of calculus. Now, the Newton form is the one we've been using so far; it's the so-called prime form. Let's suppose we have a function f of x. The Newton form of the derivative is f prime of x. If we have a function y, like y equals x squared, we would say its derivative is y prime. And if we wanted to talk about just the expression x squared plus 1, you could write x squared plus 1 in parentheses, prime, and all of these mean the derivative.

But the Leibniz form works a little differently. There is this notation, d over dx; this is called the differential operator, and what it basically means is the derivative with respect to x of f of x, or the derivative with respect to x of y, or the derivative with respect to x of x squared plus 1. What's great about this is you see the operation of differentiation in the Leibniz form, and it has a shorter version: when you have the derivative with respect to x of y, you can write that dy dx, which is something you'll see a lot of, or df dx, which you'll occasionally see.

Now, one of the things this highlights is that differentiation, or the derivative, is an operation that you perform on a function. And I want to highlight the difference between the derivative of a function and the process of differentiation. Differentiation is the process of getting the derivative.
So let's imagine that here's a function, and this is the differentiation machine: the process of getting the derivative. The result is the actual derivative. So remember: differentiation is the process; you differentiate a function, and the result is the derivative, f prime of x.
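The transcript's point that differentiation is an *operation* applied to a function can be illustrated with a computer algebra system; this SymPy sketch uses the same example, x squared plus 1:

```python
import sympy as sp

x = sp.symbols('x')

# Leibniz notation treats d/dx as an operator applied to an expression;
# in SymPy that operator is sp.diff.
f = x**2 + 1
f_prime = sp.diff(f, x)   # "d/dx of (x^2 + 1)"

assert f_prime == 2*x     # the derivative: the function goes in, f'(x) comes out
```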
https://www.ncatlab.org/nlab/show/regular+space
# Regular spaces

## Idea

A regular space is a topological space (or variation, such as a locale) that has, in a certain sense, enough regular open subspaces. The condition of regularity is one of the separation axioms satisfied by every metric space (and in this case, by every pseudometric space).

## Definitions

Fix a topological space $X$. The classical definition is this: if a point and a closed set are disjoint, then they are separated by neighbourhoods. In detail, this means:

###### Definition

Given any point $a$ and closed set $F$, if $a \notin F$, then there exist a neighbourhood $V$ of $a$ and a neighbourhood $G$ of $F$ such that $V \cap G$ is empty.

In many contexts, it is more helpful to change perspective, from a closed set that $a$ does not belong to, to an open set that $a$ does belong to. Then the definition reads:

###### Definition

Given any point $a$ and neighbourhood $U$ of $a$, there exist a neighbourhood $V$ of $a$ and an open set $G$ such that $V \cap G = \emptyset$ but $U \cup G = X$.

You can think of $V$ as being half the size of $U$, with $G$ the exterior of $V$. (In a metric space, or even in a uniform space, this can be made into a proof.)

If we apply the regularity condition twice, then we get what at first might appear to be a stronger result:

###### Definition

Given any point $a$ and neighbourhood $U$ of $a$, there exist a neighbourhood $W$ of $a$ and an open set $G$ such that $Cl(W) \cap Cl(G) = \emptyset$ but $U \cup G = X$ (where $Cl$ indicates topological closure).

###### Proof of equivalence

Find $V$ and $G$ as above. Now apply the regularity axiom to $a$ and the interior $Int(V)$ of $V$ to get $W$ (and $H$).

In terms of the classical language of separation axioms, this says that $a$ and $F$ are separated by closed neighbourhoods.
Sometimes one includes in the definition that a regular space must be $T_0$:

###### Definition (of T₀)

A space is $T_0$ if, given any two points, if each neighbourhood of either is a neighbourhood of the other, then they are equal.

Other authors use the weaker definition above but call a regular $T_0$ space a $T_3$ space; but then that term is also used for a (merely) regular space. An unambiguous term for the weaker condition is an $R_2$ space, but hardly anybody uses that. We have

###### Theorem

Every $T_3$ space is Hausdorff.

###### Proof

Suppose every neighbourhood of $a$ meets every neighbourhood of $b$; by $T_0$ (and symmetry), it's enough to show that each neighbourhood $U$ of $a$ is a neighbourhood of $b$. Use regularity to get $V$ and $G$. Then $G$ cannot be a neighbourhood of $b$, so $U$ is.

Since every Hausdorff space is $T_0$, a less ambiguous term for a $T_3$ space is a regular Hausdorff space.

It is possible to describe the regularity condition fairly simply entirely in terms of the algebra of open sets. First notice the relevance above of the condition that $Cl(V) \subset U$; we write $V \subset\!\!\!\!\subset U$ in that case and say that $V$ is well inside $U$. We now rewrite this condition in terms of open sets and regularity in terms of this condition.

###### Definition

Given sets $U, V$, then $V \subset\!\!\!\!\subset U$ iff there exists an open set $G$ such that $V \cap G = \emptyset$ but $U \cup G = X$. Then $X$ is regular iff, given any open set $U$, $U$ is the union of all of the open sets that are well inside $U$.

This definition is suitable for locales. As the definition of a Hausdorff locale is rather more complicated, one often speaks of compact regular locales where classically one would have spoken of compact Hausdorff spaces. (The theorem that compact regular $T_0$ spaces and compact Hausdorff spaces are the same works also for locales, and every locale is $T_0$, so compact regular locales and compact Hausdorff locales are the same.)
The condition that a space $X$ be regular is related to the regular open sets in $X$, that is those open sets $G$ such that $G$ is the interior of its own closure. (In the Heyting algebra of open subsets of $X$, this means precisely that $G$ is its own double negation; this immediately generalises the concept to locales.) Basically, we start with a neighbourhood $U$ of $x$ and reduce that to a closed neighbourhood $Cl(V)$ of $x$. Then $Int(Cl(V))$ is a regular open neighbourhood of $x$. This gives us another way to characterise regular spaces, as follows:

###### Definition

Given a neighbourhood $U$ of $x$, there is a closed neighbourhood of $x$ that is contained in $U$. (Equivalently, $x$ has a regular open neighbourhood, or indeed any neighbourhood, well inside $U$.) In other words, the closed neighbourhoods of $x$ form a local base (a base of the neighbourhood filter) at $x$.

###### Remark (Warning)

It is not sufficient that the regular open neighbourhoods themselves form a local base of each point; see the counterexample below. It's the closures of the regular open neighbourhoods (which are arbitrary closed neighbourhoods) that form the base. But compare semiregular spaces below.

In constructive mathematics, the open-set formulation of the definition is good; then everything else follows without change, except for the equivalence with the classical formulation. Even then, the classical separation axioms hold for a regular space; they just are not sufficient.

## Variations

Following up on the definition of "well inside" above, we have:

###### Corollary

For any regular space $X$, the regular open sets form a basis for the topology of $X$.

###### Proof

For any closed neighbourhood $V$ of $x \in X$, the interior $Int(V)$ is a regular open neighbourhood of $x$. Using the local-base characterisation finishes the proof.

This suggests a slightly weaker condition, that of a semiregular space:

###### Definition (of semiregular)

The regular open sets form a basis for the topology of $X$.
As we've seen above, a regular $T_0$ space ($T_3$) is Hausdorff ($T_2$); we can also remove the $T_0$ condition from the latter to get $R_1$:

###### Definition (of R₁)

Given points $a$ and $b$, if every neighbourhood of $a$ meets every neighbourhood of $b$, then every neighbourhood of $a$ is a neighbourhood of $b$.

It is immediate that $T_2 \equiv R_1 \wedge T_0$, and the proof above that $T_3 \Rightarrow T_2$ becomes a proof that $R_2 \Rightarrow R_1$; that is, every regular space is $R_1$. An $R_1$ space is also called preregular (in HAF) or reciprocal (in convergence space theory).

A bit stronger than regularity is complete regularity; a bit stronger than $T_3$ is $T_{3\frac{1}{2}}$. The difference here is that for a completely regular space we require that $a$ and $F$ be separated by a function, that is by a continuous real-valued function. See Tychonoff space for more. This strengthening implies (Tychonoff Embedding Theorem) that the space embeds into a product of metric spaces.

For locales, there is also a weaker notion called weak regularity, which uses the notion of fiberwise closed sublocale instead of ordinary closed sublocales.

## Examples

###### Example

Let $(X,d)$ be a metric space regarded as a topological space via its metric topology. Then this is a normal Hausdorff space, in particular hence a regular Hausdorff space.

###### Proof

We need to show that, given two disjoint closed subsets $C_1, C_2 \subset X$, there exist disjoint open neighbourhoods $U_{C_1} \supset C_1$ and $U_{C_2} \supset C_2$.
Consider the function $d(S,-) \colon X \to \mathbb{R}$ which computes distances from a subset $S \subset X$, by forming the infimum of the distances to all its points:

$d(S,x) \coloneqq \inf\left\{ d(s,x) \,\vert\, s \in S \right\} \,.$

Then the unions of open balls

$U_{C_1} \coloneqq \underset{x_1 \in C_1}{\cup} B^\circ_{x_1}\bigl( \tfrac{1}{2} d(C_2,x_1) \bigr)$

and

$U_{C_2} \coloneqq \underset{x_2 \in C_2}{\cup} B^\circ_{x_2}\bigl( \tfrac{1}{2} d(C_1,x_2) \bigr)$

have the required properties. (The factor $\tfrac{1}{2}$ is needed for disjointness: if $z$ were in both unions, the triangle inequality would give $d(x_1,x_2) \leq d(x_1,z) + d(z,x_2) \lt \tfrac{1}{2} d(C_2,x_1) + \tfrac{1}{2} d(C_1,x_2) \leq d(x_1,x_2)$, a contradiction.)

###### Example (counter example)

The real numbers equipped with their K-topology $\mathbb{R}_K$ are a Hausdorff topological space which is not a regular Hausdorff space (hence in particular not a normal Hausdorff space).

###### Proof

By construction the K-topology is finer than the usual euclidean metric topology. Since the latter is Hausdorff, so is $\mathbb{R}_K$. It remains to see that $\mathbb{R}_K$ contains a point and a disjoint closed subset such that they do not have disjoint open neighbourhoods. But this is the case essentially by construction: Observe that

$\mathbb{R} \backslash K \;=\; (-\infty,-1/2) \cup \left( (-1,1) \backslash K \right) \cup (1/2, \infty)$

is an open subset in $\mathbb{R}_K$, whence $K = \mathbb{R} \backslash ( \mathbb{R} \backslash K )$ is a closed subset of $\mathbb{R}_K$.

But every open neighbourhood of $\{0\}$ contains at least $(-\epsilon, \epsilon) \backslash K$ for some positive real number $\epsilon$. There exists then $n \in \mathbb{N}_{\geq 1}$ with $1/n \lt \epsilon$ and $1/n \in K$. An open neighbourhood of $K$ needs to contain an open interval around $1/n$, and hence will have non-trivial intersection with $(-\epsilon, \epsilon)$. Therefore $\{0\}$ and the closed set $K$ may not be separated by disjoint open neighbourhoods, and so $\mathbb{R}_K$ is not regular (and, since points in $\mathbb{R}_K$ are closed, also not normal).
###### Example (counter example)

Let $\bigl((0, 1)\times(0, 1)\bigr)\cup\{0\}$ be equipped with the Euclidean topology on $(0, 1)\times(0,1)$, and take the sets of the form $\bigl((0, 1/2)\times(0, \varepsilon)\bigr)\cup\{0\}$ (for $\varepsilon \in (0, 1)$) as a basis of open neighbourhoods for the point $0$.

• This space is not regular, since we cannot separate $0$ from $[1/2, 1)\times(0,1)$.
• Every point $p = (p_1, p_2) \neq 0$ has the Euclidean balls with centre $p$ and radius $\varepsilon \in (0, p_1)$ as a regular neighbourhood basis.
• The given basis for the neighbourhoods of $0$ already consists of regular open sets.

Therefore, this space has the property that every point has a neighbourhood basis of regular open sets (and consequently, the space is semiregular, an even weaker property), but $0$ does not have a neighbourhood basis of closed sets (and consequently, the space is not regular). The problem is that, while every basic neighbourhood of $0$ (and therefore every neighbourhood of $0$ whatsoever) contains a regular open neighbourhood of $0$, none of these basic neighbourhoods contains the closure of any of these regular open neighbourhoods (or any other closed neighbourhood of $0$).

###### Example

Locally compact Hausdorff spaces are completely regular. (e.g. Engelking 1989, Thm. 3.3.1)

###### Remark

This example plays a key role in the discussion of slice theorems; see there for more.
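On a finite space, the point/closed-set separation condition in the first definition above can be checked by brute force. The following sketch is illustrative and not from this page; the Sierpinski space used here is a standard example of a non-regular space:

```python
from itertools import product

def is_regular(points, opens):
    """Brute-force regularity check on a finite topological space:
    for every point a and closed set F with a not in F, there must be
    disjoint open sets V (containing a) and G (containing F)."""
    points = frozenset(points)
    opens = [frozenset(o) for o in opens]
    closeds = [points - o for o in opens]
    for a, F in product(points, closeds):
        if a in F:
            continue
        separated = any(
            a in V and F <= G and not (V & G)
            for V, G in product(opens, opens)
        )
        if not separated:
            return False
    return True

# Sierpinski space: open sets are {}, {0}, {0, 1}.  The point 0 and the
# closed set {1} cannot be separated, since the only open set containing
# 1 is the whole space.
assert not is_regular({0, 1}, [set(), {0}, {0, 1}])

# The discrete topology on two points is regular (it is metrizable).
assert is_regular({0, 1}, [set(), {0}, {1}, {0, 1}])
```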
## Properties

The main separation axioms:

| number | name | statement | reformulation |
| --- | --- | --- | --- |
| $T_0$ | Kolmogorov | given two distinct points, at least one of them has an open neighbourhood not containing the other point | every irreducible closed subset is the closure of at most one point |
| $T_1$ | | given two distinct points, both have an open neighbourhood not containing the other point | all points are closed |
| $T_2$ | Hausdorff | given two distinct points, they have disjoint open neighbourhoods | the diagonal is a closed map |
| $T_{\gt 2}$ | | $T_1$ and… | all points are closed and… |
| $T_3$ | regular Hausdorff | …given a point and a closed subset not containing it, they have disjoint open neighbourhoods | …every neighbourhood of a point contains the closure of an open neighbourhood |
| $T_4$ | normal Hausdorff | …given two disjoint closed subsets, they have disjoint open neighbourhoods | …every neighbourhood of a closed set also contains the closure of an open neighbourhood; …every pair of disjoint closed subsets is separated by an Urysohn function |

A uniform space is automatically regular and even completely regular, at least in classical mathematics. In constructive mathematics this may not be true, and there is an intermediate notion of interest called uniform regularity.

Every regular space comes with a naturally defined (point-point) apartness relation: we say $x \# y$ if there is an open set containing $x$ but not $y$. This can be defined for any topological space and is obviously irreflexive, but in a regular space it is symmetric and a comparison, hence an apartness.

For symmetry, if $x\in U$ and $y\notin U$, let $V$ be an open set containing $x$ and $G$ an open set such that $V\cap G = \emptyset$ and $G\cup U = X$; then $y\in G$ (since $y\notin U$) while $x\notin G$ (since $x\in V$). With the same notation, to prove comparison, for any $z$ we have either $z\in G$, in which case $z \# x$, or $z\in U$, in which case $z \# y$.
Note that this argument is valid constructively; indeed, classically, the much weaker $R_0$ separation axiom is enough to make this relation symmetric, and it is a comparison on any topological space whatsoever.

Note that if a space is localically strongly Hausdorff (a weaker condition than regularity), then it has an apartness relation defined by $x \# y$ if there are disjoint open sets containing $x$ and $y$. If $X$ is regular, then this coincides with the above-defined apartness.

## References

Textbook accounts:

Last revised on September 22, 2021 at 14:48:33.
http://tex.stackexchange.com/questions/5310/how-to-handle-wide-columns-with-backslashbox
# How to handle wide columns with \backslashbox?

I'm using the package slashbox and my left column is reasonably wide. How can I tune the diagonal line (created by \backslashbox) such that it starts and ends in the corners of the cell?

Example code:

\documentclass[11pt]{article}
\usepackage{slashbox}
\usepackage{pict2e}
\begin{document}
\begin{table}[ht!]
\centering
\begin{tabular}{ *{4}{|c}|}
\hline \hline
relatively wide column& b & $3$ & $4$ \\
ds & c & $1$ & $4$ \\ \hline
\end{tabular}
\end{table}
\end{document}

Result:

The simplest and most straightforward way of doing this is to specify the literal size of the slashed box. In your case, it's about 40mm. So try this:

\backslashbox[40mm]{foo}{bar}

Generalising this approach would not be much more complicated. Compute the size of the backslashbox by taking the size of the column's widest string, then add on two times the width of the table's column separators.

• +1 for this simple solution, although it increases the column width; I wasn't aware of that. – Thorsten Donig Nov 13 '10 at 12:23
• Yes, a nice answer to the question. Too bad there isn't a more automatic way to achieve this. Now I have a dilemma: @Thorsten's answer offers a visually better way to solve this, but this answer by @Geoffrey is the exact answer to the question asked. Which should I accept? – Davy Landman Nov 13 '10 at 12:30

Since this is an issue concerning the slashbox package itself, the only solution might be a hack of the package source. Consider restructuring the table and tweaking it with the booktabs package. See the code below for some ideas.
```latex
\documentclass[11pt]{article}
\usepackage[T1]{fontenc}
\usepackage[utf8]{inputenc}
\usepackage[font=small,labelfont=bf,tableposition=top]{caption}
\usepackage{booktabs,multirow}
\begin{document}

\begin{table}[!ht]
  \caption{Table caption}\label{tab:default}
  \centering
  \begin{tabular}{*{4}{|c}|}\hline
    \multirow{2}{*}{foo} & \multicolumn{3}{|c|}{bar} \\
    \cline{2-4}
    relatively wide column & b & $3$ & $4$ \\
    ds & c & $1$ & $4$ \\
    \hline
  \end{tabular}
\end{table}

\begin{table}[!ht]
  \caption{Table caption}\label{tab:booktabs}
  \centering
  \begin{tabular}{*{4}{c}}\toprule
    \multirow{2}{*}[-0.5ex]{foo} & \multicolumn{3}{c}{bar} \\
    \cmidrule{2-4}
    relatively wide column & b & $3$ & $4$ \\
    ds & c & $1$ & $4$ \\
    \bottomrule
  \end{tabular}
\end{table}

\end{document}
```
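For completeness, the fixed-width suggestion from the first answer can be folded back into the question's original table. The following is only a sketch of how that might look: the 40mm value is the answer's rough estimate (it may need tuning to the actual column width), and the header cells next to the slashed box are placeholders that were not part of the original question.

```latex
% Sketch: question's table plus the \backslashbox[40mm]{foo}{bar} fix.
% 40mm comes from the answer above; header cells $1$,$2$,$3$ are placeholders.
\documentclass[11pt]{article}
\usepackage{slashbox}
\begin{document}
\begin{tabular}{*{4}{|c}|}
  \hline
  \backslashbox[40mm]{foo}{bar} & $1$ & $2$ & $3$ \\
  \hline
  relatively wide column & b & $3$ & $4$ \\
  ds & c & $1$ & $4$ \\
  \hline
\end{tabular}
\end{document}
```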
https://www.physicsforums.com/threads/even-odd-functions.527626/
# Even and odd functions

#### tina_singh
1) Why is x(t)+x(-t) always even, no matter whether x(t) is even or odd?
2) When we talk about the unit step function u(t) and we add u(t)+u(-t), the value of both is 1 at t=0, so doesn't that get added twice, so that the sum becomes 2 at t=0?
3) When we have x(-t) and we time-shift it, say to x(-t-3), it shifts towards the negative t axis, whereas x(t-3) is shifted towards the positive axis. Why is that?

I would be really grateful if you could help me with the above three doubts.

#### micromass
What did you try to answer this? For the first, fill in -a in x(t)+x(-t) and see that $$x(-a)+x(-(-a))=x(a)+x(-a)$$ so the sum takes the same value at a and at -a.

For the second one: it actually never matters what the unit step function is at 0. So saying that u(t)+u(-t)=2 at 0 is correct, but it doesn't matter. Note that a lot of people choose the unit step function to be 0, or 1/2, at 0.

The third one: we actually have a function y(t)=x(-t). Then x(-t-3)=y(t+3). So it makes sense that it gets shifted to the other side.

#### tina_singh
OK, thanks for answering, I got the first two parts, but I am still a little confused about the third. When we say there are two functions x(t) and x(t-2), it means x(t-2) is delayed by 2 seconds with respect to x(t), and therefore it shifts in the positive direction. Doesn't the same apply to x(-t) and x(-t-2)? The second function is time-delayed by 2 seconds with respect to the first, so shouldn't it also shift towards the positive axis?

#### micromass
The - in front of the t reverses the direction. So shifting towards the positive axis becomes shifting towards the negative axis, and vice versa.
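Both points can be checked numerically. The following is a small sketch in Python; the Gaussian bump x(t) is an arbitrary test signal (not from the thread), chosen because it is neither even nor odd and has an easy-to-locate peak.

```python
import math

# An arbitrary test signal: a bump centred at t = 1, neither even nor odd.
def x(t):
    return math.exp(-(t - 1.0) ** 2)

# A symmetric sample grid on [-8, 8].
ts = [k * 0.01 for k in range(-800, 801)]

# 1) x(t) + x(-t) is even: s(t) == s(-t) at every grid point.
def s(t):
    return x(t) + x(-t)

assert all(abs(s(t) - s(-t)) < 1e-12 for t in ts)

# 3) x(-t) peaks where -t = 1, i.e. at t = -1.  x(-t - 3) peaks where
#    -t - 3 = 1, i.e. at t = -4: the shift written as "-3" moves the
#    graph towards the NEGATIVE t axis, because the minus sign in front
#    of t reverses the direction of the shift.
peak_reversed = max(ts, key=lambda t: x(-t))
peak_shifted = max(ts, key=lambda t: x(-t - 3))
assert abs(peak_reversed - (-1.0)) < 1e-9
assert abs(peak_shifted - (-4.0)) < 1e-9
```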
http://archive.numdam.org/item/AIHPB_2006__42_5_521_0/
A local limit theorem for directed polymers in random media: the continuous and the discrete case

Vargas, Vincent. A local limit theorem for directed polymers in random media: the continuous and the discrete case. Annales de l'I.H.P. Probabilités et statistiques, Volume 42 (2006), no. 5, pp. 521-534. doi: 10.1016/j.anihpb.2005.08.002. Zbl 1104.60067. MR 2259972. http://www.numdam.org/item/AIHPB_2006__42_5_521_0/

```bibtex
@article{AIHPB_2006__42_5_521_0,
  author    = {Vargas, Vincent},
  title     = {A local limit theorem for directed polymers in random media : the continuous and the discrete case},
  journal   = {Annales de l'I.H.P. Probabilit\'es et statistiques},
  publisher = {Elsevier},
  volume    = {42},
  number    = {5},
  year      = {2006},
  pages     = {521-534},
  doi       = {10.1016/j.anihpb.2005.08.002},
  zbl       = {1104.60067},
  mrnumber  = {2259972},
  language  = {en},
  url       = {http://www.numdam.org/item/AIHPB_2006__42_5_521_0}
}
```
https://mathoverflow.net/questions/111724/who-wrote-up-banachs-thesis
# Who wrote up Banach's thesis?

Some time ago I read somewhere (and I don't remember where it was) that Stefan Banach--a highly creative and great mathematician--did not always write down his ideas. Allegedly, he did not write his own thesis (but of course, all the mathematics in it came from him). Is that true? And is it known who wrote it then?

• Is there also a claim that he didn't write his book either (which appeared two years later)? Seems a little suspect. The charge of laziness was also leveled against his compatriot Ulam, particularly in reminiscences of Rota in his Indiscrete Thoughts. Nov 7, 2012 at 13:33
• I heard this story, too. The version I know is that one of the professors at Lvov University asked one of his assistants to help Banach in writing down his mathematical ideas. The name of this assistant, as far as I know, is unknown. But maybe the whole story is only a legend... Nov 7, 2012 at 13:47
• Who is in Grant's tomb? Nov 7, 2012 at 20:54
• tea.mathoverflow.net/discussion/1464/… Nov 9, 2012 at 17:51

Here is a quote from the article by Krzysztof Ciesielski: On Stefan Banach and some of his results. Banach J. Math. Anal. 1 (2007), no. 1, 1–10.

There is a curious story how Banach got his Ph.D. He was being forced to write a Ph.D. paper and take the examinations, as he very quickly obtained many important results, but he kept saying that he was not ready and perhaps he would invent something more interesting. At last the university authorities became nervous. Somebody wrote down Banach's remarks on some problems, and this was accepted as an excellent Ph.D. dissertation. But an exam was also required. One day Banach was accosted in the corridor and asked to go to a Dean's room, as "some people have come and they want to know some mathematical details, and you will certainly be able to answer their questions".
Banach willingly answered the questions, not realising that he was just being examined by a special commission that had come to Lvov for this purpose.

It is true that Banach was mainly self-taught as a mathematician, although he attended some lectures by Stanislaw Zaremba at Jagiellonian University. By the way, engineering programs in the former Austro-Hungarian monarchy (including Lvov Polytechnics) required quite intensive training in mathematics, although of course the latest developments (Lebesgue integral etc.) were not part of the curriculum.

Addendum 0: The above story is also related by Roman Kaluza in his biography of Banach. He heard it from Turowicz, who credits Nikodym as his source (he himself joined the department later, when Banach was already a professor). Well, on one hand, Nikodym was a friend of Banach and his early partner in mathematical discussions, but on the other hand, at the time of Banach's PhD, he was teaching high school in Krakow. (This point was made by Krzysztof Ciesielski in an email exchange with me.)

Addendum 1: Banach's thesis, written in French (which he knew well and used before in publications), can be found here: http://kielich.amu.edu.pl/Stefan_Banach/pdf/oeuvres2/305.pdf It was published in Fundamenta Mathematicae 3 (1922), pp. 133-181, and bears only Banach's name. The footnote says that it is a "Thesis presented in June 1920 at the Lvov University for obtaining the degree of the Doctor of Philosophy." On the first page there is a statement that maybe gives some evidence of Banach's tendency to wait until getting the best version of his results: "Mr. Wilkosz and I have some results (which we propose to publish later) on operations whose domains are sets of Duhamelian functions (...)". There is no joint work with Wilkosz in the collected works of Banach...

Addendum 2: Some details brought up by other users need correction. First, Steinhaus met Banach and Nikodym in Krakow, where Banach grew up, not in Lvov.
This is explicitly recorded in his "Memoirs and Notes", and somewhat less explicitly in the address he gave much later at a session devoted to Banach: http://kielich.amu.edu.pl/Stefan_Banach/steinhaus63.html ("Planty" is a major green belt in the old city of Krakow; in Lvov there were "Waly").

Second, Banach's PhD supervisor (only in the formal sense) was Steinhaus. Antoni Lomnicki held a chair of mathematics at the Lvov Polytechnics (not to be confused with the Lvov University), where Banach got his first position as an assistant (pre-PhD).

"His lectures were excellent; he never lost himself in particulars, he never covered the blackboard with numerous and complicated symbols. He did not care for verbal perfection; all manner of personal polish was alien for him and, throughout his life he retained, in his speech and manners, some characteristics of a Cracow street urchin. He found it very difficult to formulate his thoughts in writing. He used to write his manuscripts on loose sheets torn out of a notebook; when it was necessary to alter any parts of the text, he would simply cut out the superfluous parts and stick underneath a piece of clean paper, on which he would write the new version. Had it not been for the aid of his friends and assistants, Banach's first studies would have never got to any printing office."

And also: "Banach could work at all times and everywhere. He was not used to comfort and he did not want any. A professor's earnings ought to have supplied all his needs amply. But his love of spending his life in cafes and a complete lack of bourgeois thrift and regularity in everyday affairs made him incur debts, and, finally, he found himself in a very difficult situation. In order to get out of it he started writing textbooks."
The documents from the Lvov University are now split between the Lvov District Archive and Lvov City Archive, http://www.archives.gov.ua/Eng/Archives/ra13.php - Wayback Machine link (the documents of Polytechnics were transported to Wroclaw, Poland after 1945). The catalogs underwent major reorganization, which makes it quite difficult to find particular documents there. Besides the employees' folders, the documentation of PhD and habilitation proceedings is often found in the minutes of faculty meetings. Regarding Banach's PhD, Ciesielska saw a letter from Steinhaus to dean Stanecki (dated September 28, 1920) asking him to set the date for Banach's doctoral exam, to which Stanecki replied that the date cannot be set before Messrs. Steinhaus and Zylinski (the committee members) evaluate the thesis. (Aside: Math Genealogy Project lists Kazimierz Twardowski as one of Banach's advisors. On the surface of it, this makes little sense, as Twardowski was a philosopher and a logician; his expertise was far removed from what Banach worked on. However, as a professor of Lvov University, he was on the committee and signed the papers.) She also points out that in some institutions (e.g., Jagiellonian University in Krakow), if a PhD thesis was published after the exam, the printed copy/journal offprint replaced the submitted manuscript/typescript. It is not clear if this was the case in Lvov. • Thank you for the quote, but I still find it hard to believe this literally happened like this. – user9072 Nov 7, 2012 at 15:52 • I do believe it, since I heard a similar story about the PhD exam (in 1950's) of Henryk Markiewicz, a Polish literary historian and theorist, which he told himself in a public lecture I attended sometime in 1990's (there is also an audio file in Polish here, under the number 46, archiwum.uj.edu.pl/henryk-markiewicz). 
Maybe the professors in Krakow got inspired by the earlier event in Lvov :) (plausible, since some of them taught in Lvov before WWII) Nov 7, 2012 at 16:56
• Thank you for the link to the thesis. A question: do you know whether, in addition to this journal version, an 'original' version of the thesis also (still) exists (in the Lvov library, a national library, or the like), or was this not common anyway? – user9072 Nov 10, 2012 at 11:56
• This is something I would like to find out. I can ask Ciesielski or other people dealing with the history of Polish mathematics, they may know. Definitely there must have been a hard copy submitted before the exams (as it was practiced then, and long thereafter), but given the turbulent historical times in between, one cannot be sure it survived. Nov 10, 2012 at 20:02
• Thank you for the interesting updates! I only changed some quotation marks, as some "backward" ones caused minor trouble due to markdown interpreting them as instructions. – user9072 Nov 14, 2012 at 20:15

When I was a student in Lvov in the 1970s, I heard many legends about Banach, so let me add a few points. Once Steinhaus was walking in a park, and he accidentally heard a conversation of two young people sitting on a bench. The words "Lebesgue integral" were pronounced. At that time very few people in Lvov had heard of the Lebesgue integral. So Steinhaus was curious, and introduced himself... Banach was an engineering student at that time. (The story does not tell who the other person sitting on the bench was.)

According to the legend, Banach worked most of his time in the Scottish café. Students and colleagues joined him for conversation. (One of the results of this was the famous "Scottish book" of unsolved problems. Prizes were sometimes offered and recorded in the book together with the problems. For example, in the 1970s, when Per Enflo solved the "basis problem" from the Scottish book, he won a prize, a live goose, which was delivered by Mazur.)
He used to write on the tablecloth. The owner of the café never complained. At the end of the day, he changed the tablecloth for a new one. And he would sell the old one to students.

Banach drank a lot (and there are many stories about this, which I omit). Frequently he was short of money, and had to drink on credit. At some point, the debt grew large, and there was an argument with the owner of the Scottish café. Finally, the owner proposed that Banach write a calculus textbook to make money to pay for his drinks. (Some versions of the legend say this was suggested by students.) Indeed, he wrote a calculus textbook :-) But I have never seen his high school textbooks.

The Scottish café still existed in the 1990s, but under a different name, and in the 1970s it was a simple cantina. Then, the rooms passed to some financial institution.

P.S. Wikipedia, https://en.wikipedia.org/wiki/Scottish_Caf%C3%A9, has somewhat different details of doing math in the Scottish café, based on Ulam's recollections.

• Steinhaus included the story about the meeting in the park in his "Memoirs and Notes". It is also repeated in Ciesielski's article quoted below. The other person was Witold Wilkosz, Banach's fellow student, later a logician and a linguist, and a professor at the Jagiellonian University. Nov 7, 2012 at 23:46
• Yes, although the professor's salary was quite high then, Banach wrote texts to support his lifestyle. The high school textbooks he wrote are available here: kielich.amu.edu.pl/Stefan_Banach/podreczniki.html Nov 8, 2012 at 0:04
• @Margaret: Quote from Steinhaus: "During one such walk I overheard the words "Lebesgue measure". I approached the park bench and introduced myself to the two young apprentices of mathematics. They told me they had another companion by the name of Witold Wilkosz, whom they extravagantly praised. The youngsters were Stefan Banach and Otto Nikodym. From then on we would meet on a regular basis, and ...
we decided to establish a mathematical society." (www-history.mcs.st-and.ac.uk/Biographies/Steinhaus.html) Nov 8, 2012 at 5:31
• @Harun: Thanks for the quote, my memory did not serve me too well, and I did not have the copy of Steinhaus's memoirs at hand. Otto Nikodym is certainly better known than Wilkosz, yet (perhaps) Wilkosz's permanent association with Krakow (where I studied) made me remember him better. Nov 8, 2012 at 15:26
• And I made a mental shortcut by calling Wilkosz a "logician and a linguist". He did hold the chair of logic at Jagiellonian University and published in set theory, but he also dealt with real analysis, mathematical physics, radio technology and Oriental languages. Nov 9, 2012 at 18:04

I also once heard such a story, but I have doubts it is literally true. What is an established fact is that Banach had an unusual start to his career. He was actually an engineering student (with a personal situation rather on the difficult end) and did math more or less as a hobby. By pure coincidence he met Hugo Steinhaus, who was impressed. They worked together and published something together. Then Banach got a position at a university (Lvov) and then a doctorate (under Lomnicki [correction: while he was working for/in the group of Lomnicki, it appears Lomnicki was in no sense the director of his thesis; cf. Margaret Friedland's answer]).

So he got his doctorate under somewhat unusual circumstances and not following standard rules (though at that time, there were far fewer rules for doctorates than nowadays anyway). In that sense, it was likely not so clear when and how he should submit his thesis, and it seems very conceivable that he discussed this matter with various people and/or people close to him pressured/encouraged/helped him to do so. (Added: I see Francesco Polizzi made a comment sort of in this direction.)

Regarding the "laziness": not long after the time of his thesis he wrote a lot (including high-school textbooks).
So, to attribute this to sheer laziness in a classical sense certainly seems odd. If anything, I could imagine a certain uncertainty (and/or occupation with other matters) regarding how to proceed, or how to really write mathematics (not being trained as a mathematician). Yet, it is also well documented that he and others worked a lot in cafés. Now, this could to some be taken as a sign of a 'lazy' lifestyle. But, well, not even this is so clear. For an overview of Banach's life: http://www-history.mcs.st-andrews.ac.uk/Biographies/Banach.html

• Re. working in cafés. I visited Lvov once and was keen to find the `Scottish Cafe' where Banach and his contemporaries were reputed to have done a lot of great work. It took a good deal of finding, and when I got there it had turned into..... a bank. Big anticlimax! Nov 7, 2012 at 14:17
• If you are referring to the order of getting his position and getting a doctorate as unusual, I think it was quite common in those days. I read in an interview with Selberg that it was general practice to publish at least a few papers before writing your thesis. Nov 7, 2012 at 14:37
• I made this CW as it contains a bit much speculation, and not much original information. – user9072 Nov 7, 2012 at 14:38
• @timur: No, mainly I refer to the fact that he was not educated as a mathematician, but essentially self-taught. Likely he hardly ever followed any courses in mathematics. He finished some engineering studies in 1914, then in 1916 he met Steinhaus and they started to work together, then in 1920 he got a position and submitted his thesis. – user9072 Nov 7, 2012 at 14:48
• Thanks for your edits and for prompting me to find out as many details as possible. After doing this, I can summarize the situation as "Ignoramus et ignorabimus"... Nov 15, 2012 at 1:54

There is a paper on this topic on pages 1-7 of the September 2021 issue of The Mathematical Intelligencer. The authors are Danuta Ciesielska and Krzysztof Ciesielski.
If I understand correctly, the aim in their paper is to set the record straight regarding the (infamous) story about the way in which S. Banach obtained his Ph. D. I am going to share with you the main paragraphs of the Ciesielska - Ciesielski paper below: both the phrases in boldface and the sics are mine. *** THE STORY *** "The story goes that Banach could not be bothered with writing a thesis, since he was interested in solving problems not necessarily connected to a possible doctoral dissertation. After some time, the university authorities became impatient. It is said that another university assistant (instructed by Stanisław Ruziewicz) wrote down Banach's theorems and proofs, and those notes were accepted as a superb dissertation. However, an exam was also required, and Banach was unwilling to take it. So one day, Banach was accosted in the corridor by a colleague, who asked him to join him in a meeting with some mathematicians who were visiting the university in order to clarify certain details, since Banach would certainly be able to answer their questions. Banach agreed and eagerly answered the questions, not realizing that he was being examined by a special commission that had arrived from Warsaw for just this purpose. In some sources [11, 19, 20], this event is described only as a possible version of events. Nevertheless, in several (mainly Polish-language) books, it is presented as a fact. There is even a book on the phobias and fears of great Poles that devotes a whole chapter to Banach and this story, claiming to demonstrate that Banach was unable to deal with his own psyche and phobias, although even this story presents Banach simply as someone who did not consider the PhD a very important acquisition." *** DEBUNKING THE STORY *** "... good stories aside, the truth about Banach's exam should be known. Nowadays, it is possible to check the facts, since many sources have become more easily available than they were some decades ago. 
It is enough to look carefully at some dates and university rules to see that the proposed account could not be accurate. Banach moved to Lvov in 1920 to take up his job at the Lvov Polytechnic. On June 24 of that year, he presented his doctoral dissertation to the Philosophy Faculty of Jan Kazimierz University. The time interval of just a couple of months was definitely too short for the university authorities to have become impatient, let alone for someone else to have written a thesis on the basis of Banach's overheard comments. Moreover, in 1920, Banach had already published three research papers. Why would he be reluctant to write a doctoral dissertation, which would be a requirement for him to keep the job?

Now let's have a closer look at the exam. According to the university rules, a PhD dissertation had to be refereed and accepted, and then two exams--in the candidate's main scientific disciplines (in Banach's case they were mathematics and physics) and in pure philosophy--were to be taken by the candidate. It turns out that the records of Banach's PhD exams have survived (they are reproduced in [22] and [26]), and we may read that Banach passed his PhD examinations in mathematics and physics. The examining board consisted of four scientists: the dean of the faculty, Zygmunt Weyberg, who was a mineralogist; two mathematicians, Eustachy Żyliński and Hugo Steinhaus; and a physicist, Stanisław Loria. None of them was from Warsaw, and Banach knew all of them.

There is another interesting story [sic] concerning Banach's doctoral dissertation. The referees were Żyliński and Steinhaus. In October 1920, Steinhaus, who was mentoring Banach, wrote to the dean to inquire about the date of Banach's doctoral exam, for it had been four months since Banach had delivered his dissertation. The dean replied that everything was ready for the exam, but they were awaiting the referees' report (one of whom was Steinhaus himself!).
Indeed, when the joint report from Steinhaus and Żyliński arrived, the exam took place immediately. Banach had submitted his dissertation on June 24, the report is dated October 30, and the exam in mathematics and physics took place on November 3. Bearing in mind that in 1920, October 30 fell on a Saturday, November 3 was therefore a Wednesday, and November 1 (Monday) is a public holiday in Poland, everything must indeed have been prepared for the exam. Banach passed this exam with a unanimous grade of 'excellent' from all four examiners. On December 11, 1920, Banach passed the exam in philosophy (the examining board consisted of the two philosophers Kazimierz Twardowski and Mścisław Wartenberg and the dean, Zygmunt Weyberg). Banach had now fulfilled all the requirements for being granted the PhD degree, and in many sources (including a CV signed by Banach; see [19]), 1920 is given as the year of Banach's doctorate. However, the precise rules for obtaining a PhD from Austro-Hungarian times had been retained by Poland after regaining its independence (see [14]). According to those rules, the candidate was allowed to call himself a 'doctor' only after the doctoral conferment ceremony, which in the case of Banach took place on January 22, 1921. The official documents state that the academician who conferred the degree on Banach was Kazimierz Twardowski. To a mathematician, that is surprising news indeed. Why Twardowski, who was an eminent Polish philosopher? What was his connection to Banach? Could he have been his dissertation advisor? According to the rules then in force, the conferment of a new doctorate had to be celebrated by a professor from the faculty appointed by the dean, and so there is no reason to regard Twardowski as the supervisor of Banach's thesis. 
By analogy, one might incorrectly claim that Steinhaus's supervisor in Göttingen in 1911 was the German botanist Gustav Albert Peter, who played the same role as Twardowski in Banach's case (for details, see [9]). It is frequently said that Banach was not a university graduate, so the fact that he obtained a position at the Polytechnic and a university doctorate was exceptional. This is also slightly misleading. According to the rules that were then in effect in Poland [14], four years of study at the university was enough for one to be eligible for a PhD, but even that requirement could be relaxed. The professors of a faculty could, at their discretion, allow someone with outstanding achievements to apply for a PhD. Moreover, in those years, there was no precise definition of who counted as a university graduate. Banach had studied at the Lvov Polytechnic for precisely four years, which was enough." *** A KERNEL OF TRUTH? *** "Let us dig further in an attempt to discover [a kernel of truth underneath the gossip about Banach's doctorate]. This is a good place to recall the illustrious figure of Andrzej Turowicz (1904-1989), a mathematician, priest, and monk active mostly in Kraków, but who also spent some time working in Lvov... Turowicz knew many excellent stories, abounding in colorful detail, about mathematics and mathematicians of his time. It was not unusual for participants in various meetings that he attended to ask him to share some of his anecdotes. Whenever Turowicz had himself been a witness of an event, he recounted it with great accuracy, and one could be sure that things had really happened that way, but there were also stories he had heard from others. On November 17, 1984, the Jagiellonian University Students' Mathematics Society (see [10]) invited several mathematicians to share their memories during a special meeting. Their reminiscences were taped. Turowicz was one of the guests. 
He contributed the anecdote about Banach's PhD exam, beginning with the words: 'This is a story I heard from Nikodym, and I am repeating it here at Nikodym's responsibility'. Turowicz recounted this event on several occasions and always credited it to Nikodym. The same attribution is also given in [20]. It was Nikodym whose conversation with Banach was accidentally overheard by Steinhaus in Kraków. Later, Nikodym became a prominent mathematician; after World War II he emigrated to the United States... And it turns out that it was Nikodym who was reluctant to obtain a PhD. He used to ask: 'Will it make me any wiser?' In 1924, Nikodym (aged 35), still without a PhD, and his wife Stanisława (who was also a mathematician) moved from Kraków to Warsaw. Walerian Piotrowski made a very solid investigation concerning PhDs in mathematics at Warsaw University in the interwar period (see [24, 25]). According to [25], Wacław Sierpiński decided to take the matter of Nikodym's PhD exam into his own hands. He invited Nikodym to a café and began to talk with him. After a while, the dean of the department 'accidentally' appeared in the café and joined the conversation, which quickly drifted toward mathematics. More than an hour later, Sierpiński said to Nikodym: 'Congratulations. You have just passed your PhD exam.' In our opinion, this is the source of the urban legend about Banach's doctorate. We will never know whether Nikodym gave Turowicz a twisted account of his own PhD exam, changing the main protagonist's name in the process, or whether Turowicz missed something. Our view is that the first explanation is more likely." These are the references to which D. Ciesielska and K. Ciesielski alluded in those paragraphs: [9] D. Ciesielska, L. Maligranda, and J. Zwierzyńska. Doktoraty Polaków w Getyndze. Matematyka. Analecta 28:2 (2019), 73-116. [10] K. Ciesielski. 100th anniversary of the Jagiellonian University Students' Mathematics Society. Math. Intelligencer 17:4 (1995), 42-46.
[11] K. Ciesielski. Lost legends of Lvov 2: Banach's grave. Math. Intelligencer 10:1 (1988), 50-51. [14] T. Czeżowski (editor). Zbiór ustaw i rozporządzeń o studiach uniwersyteckich oraz innych przepisów ważnych dla studentów uniwersytetu, ze szczególnym uwzględnieniem Uniwersytetu Stefana Batorego w Wilnie. Wilno, 1926. [19] E. Jakimowicz and A. Miranowicz (editors). Stefan Banach. Remarkable Life, Brilliant Mathematics. Gdańsk University Press, 2010. [20] R. Kałuża. Through a Reporter's Eyes: The Life of Stefan Banach. Birkhäuser, 1996. [22] L. Maligranda. 100-lecie doktoratu Stefana Banacha. To appear in Wiad. Mat. 52 (2020). [24] W. Piotrowski. Doktoraty z matematyki i logiki na Uniwersytecie Warszawskim w latach 1915-1939. In Dzieje Matematyki Polskiej II, edited by W. Więsław, pp. 97-131. Instytut Matematyczny Uniwersytetu Wrocławskiego, 2013. [25] W. Piotrowski. Jeszcze w sprawie biografii Ottona i Stanisławy Nikodymów. Wiad. Mat. 50 (2014), 69-74. [26] J. Prytuła. Doktoraty matematyki i logiki na Uniwersytecie Jana Kazimierza we Lwowie w latach 1920-1938. In Dzieje Matematyki Polskiej, edited by W. Więsław, pp. 137-161. Instytut Matematyczny Uniwersytetu Wrocławskiego, 2012. In the Fall 1988 issue of the Mathematical Intelligencer there is an interview with Andrzej Turowicz, who was a contemporary of Banach and Mazur. Here is one of the questions. Q: Were all the Lvov mathematicians so reluctant to publish their results? A: No, it was a specialty of Mazur. Banach also left many of his results unpublished, but for a different reason. Banach turned out mathematical ideas so quickly that he should have had three secretaries to compose his papers. That was why Banach published only a small part of the theorems he invented. Not because he did not want to, but because all the time he had new ideas.
In Stanisław Ulam's autobiography Adventures of a Mathematician you can find several references in that sense (mainly in the first part) about the mathematicians at Lwów in that time; maybe the clearest one is on page 38: "In general, the Lwów mathematicians were on the whole somewhat reluctant to publish. Was it a sort of pose or a psychological block? I don't know. It especially affected Banach, Mazur, and myself, but not Kuratowski, for example." I am hesitant to write here; I delayed/procrastinated for long. I am not a historian, I was simply embedded in the Polish mathematical scene for over ten years, and since then I have kept in personal touch with several of my Polish mathematical friends. The notion of a "Banach assistant" is not right. Banach had students and (mathematical) friends, including and especially younger friends. The most important among them was Stanisław Mazur, who himself was a fantastically sharp mathematician. Stanisław Mazur truly disliked writing (editing) mathematics, even though he did it so well. For instance, Stanisław Mazur wrote (i.e. edited) the first paper by KS, who was about 30 years younger. However, Prof. Mazur didn't care to publish his own results. Prof. Kuratowski told me that Mazur was happy when someone else rediscovered and published Mazur's results. Mazur would say happily on such occasions: the result had to be good enough if someone bothered to publish it. Sometime in 1971-72 (or on a later occasion?), Aleksander Pełczyński (Olek), who visited me in Ann Arbor (MI) a couple of times, told me that Banach's classic Theory of Linear Operators was written (i.e. edited) by Mazur. Stefan Banach didn't care to edit his own research results. However, he did write academic and high-school texts extremely well. At least, this is my opinion based on my studying Banach's 2-volume Calculus monograph on my own when I was a high school student--I'd wake up way before my school day and would read for an hour or two.
By contrast, earlier I had gotten another text--famous--on mathematical analysis by a polytechnic professor. I stopped reading it very soon because it was too boring. In many places around the world people like to stress how hard they work. It was often the opposite in Poland, especially among many Polish mathematicians. They were particular about being young, brilliant, and lazy. They would not say that they worked hard but that it was nothing, that it just came to them in a moment, something like that. Ulam's autobiography illustrates my point. (On the other hand, a close friend of Ulam considers Rota's writing about Ulam offensive, abusive, and dishonest.) • Those paragraphs are very interesting! I have a few questions, though. 1) Who was KS? 2) Did Banach's "Rachunek Różniczkowy i Całkowy" originally consist of two volumes? 3) Whose words are those in the blockquotes? Thanks in advance for your replies. Oct 23, 2021 at 17:13 • @JoséHdz.Stgo., I use so-called MO quotes as a formatting device, not as actual quotes. Thus, these words are simply mine (sorry :)). Marceli Stark, who was ruling Polish mathematical publishing, heard from my mother that I was interested in mathematics; thus, he gave her, for me, several mathematical monographs, including Banach's "Rachunek...", which consisted of 2 volumes. Oct 24, 2021 at 5:33 I think this question is very subjective, speculative and gossipy, and I am surprised that it has not yet been criticized as not suitable for MO. Unlike in mathematics, in history it is often enough to raise an unsubstantiated question in order to influence people's beliefs. It is very easy to spread rumours in history, and it is therefore important to provide good evidence for any suggestion that has to do with a historical fact. What evidence do you have that Banach did not write his thesis, and what makes you think that the word 'lazy' is appropriate here?
Would you call Hardy lazy because he only worked a couple of hours a day and spent the rest of his days reading about cricket? Would you call Grothendieck lazy because he did not write up his proof of Grothendieck-Riemann-Roch? Certainly not, because these people, just like Banach, were very prolific and influential mathematicians. In a similar way, Rota's description of Ulam is historically unhelpful, and only illustrates the fact that Rota sometimes described people in rather arrogant terms (as he also did with Artin in Indiscrete Thoughts). Please let us stick to the facts and not make MO a forum for speculative historical anecdotes. • Well, what to say. First, the positive thing: in an abstract sense I can see some merit in your opinion and share it to a certain extent. Second, a procedural thing: your "answer" is unrelated to the goal of answering the question; as such it is completely misplaced as an answer (it would be fine on meta though; just sign up there, there is no rep limit or anything, it is automatic). Third, the OP did not raise any unsubstantiated question but by contrast asked for confirmation or refutation and an additional detail of a well-known thing; it turns out it is officially published. Fourth,... – user9072 Nov 9, 2012 at 12:11 • As to quid's second point, it's mitigated by the fact that Bok doesn't have enough points to leave a comment (and possibly wasn't aware of meta). But Bok's comment does strike me as a little bit harsh, since the OP is precisely looking for hard evidence of some sort (of something which wasn't well-known to me). He or she is probably right that the question would be improved by leaving off the bit about 'laziness', which is indeed subjective. (And I agree with him about Rota's book, which exasperates me on so many levels.) Nov 9, 2012 at 12:19 • Dang it -- substitute "agree with him or her" in my last sentence. Nov 9, 2012 at 12:20 • perhaps do not project your own(?) or at least some value system too much on everybody.
To some extent I prefer that somebody who knows my work considers me lazy rather than hard-working. And, to some academics (present company ambivalently included), to be told they work hard is basically an insult. For example, I am virtually certain Hardy had no interest whatsoever (rather the opposite) in being considered as working all the time. – user9072 Nov 9, 2012 at 12:28 • tea.mathoverflow.net/discussion/1464/… Nov 9, 2012 at 17:58
https://www.maths.usyd.edu.au/s/scnitm/garethw-SUMSMeeting-Zhu-Embarrass
SMS scnews item created by Gareth White at Tue 17 Mar 2009 1234 Type: Seminar Distribution: World Expiry: 18 Mar 2009 Calendar1: 18 Mar 2009 1300-1400 CalLoc1: Carslaw 452 Auth: garethw@asti.maths.usyd.edu.au # SUMS Meeting: Zhu -- Embarrassing questions and Guns Hello SUMS members, I must say that Ivan’s talk last week was a resounding success, by my standards anyway. Apologies to Ivan and everyone else for the interruptions made by me halfway through the talk, I think for future talks we will have a pre-organised intermission halfway through. Speaking of future talks, we have another one this week, this time by a statistics PhD student, Jenny Zhu. Jenny is no stranger to SUMS, having given talks in past years, although this week will be a new one. Her abstract: "Asking embarrassing questions and shooting your friends are common problems we all face. Studying Statistics helps. In this talk, we motivate the problem of estimating the success probability from a Binomial random variable when the number of trials is random. Also we give an example of when we might prefer to obtain a solution to a problem computationally." Feel free to come along, eat some food and have a good time! Talk: Embarrassing questions and GUNS Speaker: Jenny Zhu Location: Carslaw 452 Date/Time: Wednesday March 18, 1-2pm Hope to see you there. SUMS President "I would never kill somebody. Unless they pissed me off." - Eric Cartman
https://indico.cern.ch/event/839985/contributions/3983680/
# LXX International conference "NUCLEUS – 2020. Nuclear physics and elementary particle physics. Nuclear physics technologies" Oct 11 – 17, 2020 Online Europe/Moscow timezone ## Exclusive pi^0p electroproduction in the resonance region with CLAS12 Oct 16, 2020, 4:15 PM 20m Online #### Online Oral report Section 4. Relativistic nuclear physics, elementary particle physics and high-energy physics. ### Speaker Anna Golubenko (Lomonosov Moscow State University Skobeltsyn Institute of Nuclear Physics) ### Description The excitation of nucleon resonances (N*) by real and virtual photons is an important source of information on the structure of excited nucleon states and on the dynamics of the nonperturbative strong interaction underlying resonance generation from quarks and gluons [1, 2]. This information has already become available from the nucleon resonance electroexcitation amplitudes ($\gamma_v p N^*$ electrocouplings). The exclusive $p\pi^0$ electroproduction channel is an important source of information on $\gamma_v p N^*$ electrocouplings [3]. The CLAS12 detector [4] is the only facility in the world capable of providing information on $\gamma_v p N^*$ electrocouplings from the data of the $\pi^0 p$ channel at a still almost unexplored range of photon virtualities $Q^2>5.0$ GeV$^2$, and of extending the studies of $N^*$ into the mass range $>2.0$ GeV. Preliminary results from the analysis of $\pi^0 p$ electroproduction data measured with CLAS12 will be presented in the talk. Application of the exclusive event selection procedure, developed based on MC simulation, to the CLAS12 $\pi^0 p$ data analysis provided a high-purity sample of $\pi^0 p$ events in the kinematic range covered by the measurements in the RG-K run. The results obtained are paving the way for the extraction of the beam asymmetry and, eventually, cross sections for exclusive $\pi^0 p$ electroproduction measured with CLAS12. 1. I.G. Aznauryan and V.D. Burkert, Prog. Part. Nucl. Phys. 67, 1 (2012). 2. V.D. Burkert and C.D. Roberts, Rev. Mod. Phys. 91, 011003 (2019). 3. N. Markov et al. (CLAS Collaboration), Phys. Rev. C 101, 015208 (2020). 4. V.D. Burkert, L. Elouadrhiri, K.P. Adhikari et al., Nucl. Instrum. Methods Phys. Res. A 959, 163419 (2020). ### Primary author Anna Golubenko (Lomonosov Moscow State University Skobeltsyn Institute of Nuclear Physics) ### Co-authors Victor Mokeev (Thomas Jefferson National Accelerator Facility) Prof. B. Ishkhanov (Moscow State University, Faculty of Physics, Moscow, Russia; Moscow State University, Skobeltsyn Institute of Nuclear Physics, Moscow, Russia)
https://design.tutsplus.com/tutorials/create-a-colorful-funky-robot-with-gradients-in-adobe-illustrator--vector-10090
In this tutorial, I'm going to be starting with a sketch which I'll then build up to be a colorful robot friend. I'll then frame the piece in Adobe Illustrator using the Pen Tool, Clipping Masks, and transparent Gradients. So let's get on with it! ### Step 3 Using the Ellipse Tool (L), I made a circle for each antenna's base, repeating this step for both sides and for the ends of the antennae. ### Step 4 Using the Pen Tool (P), I've drawn a curved tube shape in a very light lilac. The basic head shape is complete. ## 2. Create the Base for Your Robot's Body ### Step 1 Once again I use the Rounded Rectangle Tool. This time it's for the body. My settings for the corner radius were the same as those used for the head. ### Step 2 Using the Direct Selection Tool (A), select the bottom corner anchor points and manipulate them as you see fit. ### Step 3 Repeat on the other side. ## 3. Begin Adding Shapes for the Arms ### Step 1 To start the arms, I've added a circle with the Ellipse Tool (L) to form a shoulder. ### Step 2 Following my initial sketch, I've traced the little hands with the Pen Tool (P). ### Step 3 I've drawn simple finger shapes as well as a blue piece to make the robot's thumb look like it is coming out of the hand versus sitting on top of it. How you build your basic hand shapes is entirely up to you. ### Step 4 Group (Ctrl + G) your hand pieces together, Copy (Ctrl + C) and Paste (Ctrl + V). Then Right-Click, select Transform and Reflect. Choose the vertical axis. Place your newly flipped hand accordingly. ## 4. Add the Basic Facial Feature Shapes ### Step 1 The basic neck shapes are as follows: a roughly drawn rectangle with the Pen Tool (P) and an Ellipse (L). ### Step 2 Using the Ellipse Tool (L), draw an oval or circle for an eye. Repeat for as many eyes as you'd like your robot to have.
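If you're curious what the Transform > Reflect command used for the hands is actually doing, it simply mirrors each anchor point's x-coordinate about the chosen vertical axis. A minimal numeric sketch; the axis position and anchor points below are invented for illustration, not taken from the artwork:

```python
# Mirror anchor points across a vertical axis, as Transform > Reflect does:
# each x becomes 2 * axis_x - x, while y stays unchanged.
# Axis position and sample points are hypothetical illustration values.

def reflect_vertical(points, axis_x):
    """Mirror (x, y) points across the vertical line x = axis_x."""
    return [(2 * axis_x - x, y) for (x, y) in points]

left_hand = [(10, 40), (14, 46), (18, 41)]   # made-up anchor points
right_hand = reflect_vertical(left_hand, axis_x=50)
print(right_hand)  # [(90, 40), (86, 46), (82, 41)]
```

Grouping the hand first means every sub-shape gets the same mirror transform, which is why the tutorial groups before reflecting.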
### Step 3 This robot's mouth is a simple rounded rectangle, much like the robot's head itself. Copy (Ctrl + C) and Paste (Ctrl + V) the base head shape and place it beneath the mouth shape. In the Pathfinder panel, with both shapes selected, hit Intersect. The new mouth shape now cuts off the awkward rounded corners on the bottom edge. ## 5. Draw a Simple Heart Shape ### Step 1 Use the Ellipse Tool (L) to draw an even pink circle. ### Step 2 Copy (Ctrl + C) and Paste (Ctrl + V) the circle and align it with the other. ### Step 3 Overlap the two shapes slightly. ### Step 4 Using the Pen Tool (P), I've drawn a pointed shape to create one half of the bottom of the heart. ### Step 5 Copy (Ctrl + C) and Paste (Ctrl + V) the shape, flip it over a vertical axis, and align it next to its copied self. ### Step 6 Edit the bottom points and any curves necessary to perfect your heart. ### Step 7 Unite your shapes via the Pathfinder panel. That is one snazzy little robot friend. ## 6. Begin Adding Base Shapes for the Frame ### Step 1 Copy (Ctrl + C) and Paste (Ctrl + V) the heart shape and change the color to a very light lilac or gray. ### Step 2 Trace your frame elements using the Pen Tool (P). In this case, it's a cute little wing being drawn. ### Step 3 The sides of the frame are like long, robotic tube shapes. ## 7. Add Human Heart-Shaped Silhouettes ### Step 1 The bottom of the frame has two big human heart-shaped silhouettes. Check out some references for this or do your best to draw it up from memory. ### Step 2 I made some drippy, blood-like shapes using the Pencil Tool (N) to add to the bottom of the hearts. ### Step 3 A dark, dark blue was chosen for the blood element to mimic oil. ### Step 4 Add more little slick-like shapes. ### Step 5 Group (Ctrl + G) together your frame pieces, Copy (Ctrl + C) and Paste (Ctrl + V) them, flip over a vertical axis, and align accordingly to make the other side of your frame. ## 8.
Modify and Refine the Base Shapes ### Step 1 I've added a shape similar to the neck that will attach the bottom of the robot to the torso. ### Step 2 Using the Ellipse Tool (L) again, I've drawn the top-side of the bottom of the robot. ### Step 3 Using the Rounded Rectangle Tool with the settings as seen below, I've added the full shape of the robot's body. To get it to fit seamlessly with the ellipse drawn previously, I edited the shape by carefully manipulating the anchor points. ### Step 4 Select the ellipse and change the color to something a bit darker. ### Step 5 I've drawn some extra bits on the frame. ### Step 6 The arms were too stubby, so I drew something more wiggly, similar to the frame shapes, to be the arm base. It will be set beneath the shoulder and hand pieces, but above the torso. ## 9. Create a Pixel Fade Effect for the Bottom of the Robot ### Step 1 Draw two small squares with the Rectangle Tool (M) and Align them with each other. ### Step 2 Select the Blend Tool (W) and with it select first the square on the left and then the one on the right. This will create an effect similar to what's seen above. ### Step 3 Double-click the Blend Tool (W) in the toolbar and a dialog box should appear. Select "Specified Steps" and choose how many intervals you'd like your blend to have. By checking Preview, you can see what changes you're making without committing to them. ### Step 4 Repeat as you see necessary and even add extra boxes around the bottom of the robot so it doesn't seem so uniform. ### Step 5 Select all blends and boxes and Group (Ctrl + G) them together. ## 10. Use an Opacity Mask on the Pixels ### Step 1 Draw a square over the little assortment of squares you drew previously. Apply a black and white gradient going from white to black vertically. ### Step 2 Select both the grouped squares and the new gradient square. In the Transparency panel, open the extra options and select Make Opacity Mask.
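Under the hood, the Blend Tool's "Specified Steps" option amounts to even linear interpolation between the two selected objects. A rough sketch of that idea, interpolating just the square centers; all coordinates here are invented illustration values, not the artwork's:

```python
# Evenly interpolate between two square centers, mimicking the Blend Tool's
# "Specified Steps" option. Coordinates are invented illustration values.

def lerp(a, b, t):
    return a + (b - a) * t

def blend_steps(p0, p1, steps):
    """Return `steps` intermediate centers between p0 and p1 (endpoints excluded)."""
    out = []
    for i in range(1, steps + 1):
        t = i / (steps + 1)
        out.append((lerp(p0[0], p1[0], t), lerp(p0[1], p1[1], t)))
    return out

# 4 specified steps between squares at x = 0 and x = 100
# yield centers evenly spaced at roughly x = 20, 40, 60, 80:
print(blend_steps((0, 0), (100, 0), steps=4))
```

The same interpolation applied to opacity instead of position is what the white-to-black gradient opacity mask does: each square's visibility follows the mask's value at its position.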
### Step 3 Make sure Clip is selected in the Transparency panel. ### Step 1 Draw a rectangle or rounded rectangle with the tool of your choice. Copy (Ctrl + C) and Paste (Ctrl + V) the robot's head base shape and arrange it beneath the new rectangle you created. ### Step 2 In the Pathfinder panel, with both rectangles selected, hit Minus Front and the only bit left will be anything the front piece did not touch. ### Step 3 Apply a transparent gradient similar to the one seen above. ### Step 4 Repeat Step 1, but hit Intersect in Pathfinder and apply a gradient that goes from a light blue, yellow, or white to the base blue. ### Step 5 Place this beneath the eyes and mouth bits and alter the Opacity as you see fit. ### Step 1 You can repeat part 11 until you've rendered your robot to your liking, or you can follow this section to render with the use of a Clipping Mask (Ctrl + 7). Copy (Ctrl + C) and Paste (Ctrl + V) the base head shape over to the side. Draw shapes with the Pen Tool (P) or Pencil Tool (N) and apply transparent gradients as you see fit. ### Step 2 Try manipulating the settings of radial gradients in addition to linear ones. The lighting for this piece mainly comes from the upper-left corner of the picture plane. ### Step 3 Check out the settings for this radial gradient set in the lower part of the robot head. ### Step 4 Try playing with black and white or colorful gradients to give more of a chrome look to your robot. ### Step 5 Change the Blending Mode of the shape in the Transparency panel to Overlay so it keeps with your color palette. ### Step 7 Group (Ctrl + G) together your rendering pieces and arrange the base rectangle shape to be above the new group. ### Step 8 Select all and make a Clipping Mask (Ctrl + 7). Adjust the Blending Mode on components in your group as necessary. ## 13. Render the Eyes with a Colorful Style ### Step 1 Select both of the robot's eyes and apply the linear gradient of your choice.
This one goes from bright pink to yellow to a light cream. ### Step 2 Using the Ellipse Tool (L), create a dark purple (or brown) oval inside the eye shape. Alter its Opacity as you see fit. ### Step 3 Draw a curved teardrop shape that follows the bottom edge of the robot's eye. Apply a gradient going from a light cream to a completely transparent blue (same color as the robot head). ### Step 4 Repeat both steps above on the other eye and reduce the Opacity of the highlight shapes. Also, add a transparent oval behind the eye itself to create more depth. ### Step 5 Using the Rectangle Tool (M), draw a series of long rectangles spanning the robot's eye. Unite them in Pathfinder. ### Step 6 Copy (Ctrl + C) and Paste (Ctrl + V) the base eye shape and place it over the top and aligned with the series of united rectangles. ### Step 7 Select both shapes and hit Intersect in Pathfinder. ### Step 8 Reduce the Opacity to 36%. ### Step 9 Draw a new Ellipse (L) within the eye shape. ### Step 10 Change the gradient to a linear one going from light cream to a completely transparent yellow and move it closer to the upper-left corner of the eye. ### Step 11 Repeat Steps 9 and 10 with a smaller ellipse and a gradient going from white to a fully transparent cream. ### Step 12 Copy (Ctrl + C) and Paste (Ctrl + V) the shadow ellipse within the eye and set the fill color to "no fill" and the stroke to the same color as the shadow color used previously. ### Step 13 Adjust the Stroke Weight to 0.75pt or 0.5pt. Lower the Opacity to your liking. ### Step 14 Copy (Ctrl + C) and Paste (Ctrl + V) the main eye shape and follow Steps 12-13, but set the Stroke Weight to 1pt. ### Step 15 Set this stroked ellipse behind the base eye shape. ### Step 16 Copy (Ctrl + C) and Paste (Ctrl + V) the left eye, flip it over a vertical axis, and place it over the right eye (or in place of the right eye if you copy more than the newly rendered bits). ## 14.
Render the Rest of the Face ### Step 1 Follow the same steps from rendering the eyes to build the mouth. Add a highlight shape that follows the shape of the top of the mouth and is set behind the mouth. ### Step 2 Add a curved teardrop shape to the top of each eye. ### Step 4 Repeat the steps involved with keeping only the intersected shape. Set the new shadow behind the main face pieces. ### Step 5 Add more highlight gradient shapes to the top of the robot head. I used the Pen Tool (P) to draw these shapes. ### Step 6 Copy (Ctrl + C) and Paste (Ctrl + V) the main head shape, take out the fill color, and stroke it with the settings in the image above. ### Step 7 Set this box behind and in line with the robot head. ### Step 1 With the Pen Tool (P), draw a shape along the bottom of the connecting antenna ball with a linear gradient going from dark blue to light blue. ### Step 2 Draw another shape along the side contour of the ball. Continue adding shapes to give the ball depth. ### Step 3 For a highlight on the ball shape, another curved teardrop was drawn (similar to those around the eyes), but instead of a linear gradient, a radial gradient has been applied. A stroked shape similar to the one behind the robot head was added behind the blue ball. Repeat these steps for the other side. ### Step 4 Apply dark blue to light blue linear gradients to the shoulder pieces. ### Step 5 Repeat the steps in rendering the ball shapes above for these shoulder pieces. ## 16. Give Your Robot Rosy Cheeks ### Step 1 Draw a small ellipse that slightly overlaps the bottom edge of the left eye. ### Step 2 Apply a radial gradient going from bright pink to transparent blue (Opacity set to 0%). ### Step 3 Repeat on the other side. ## 17. Render the Antennae ### Step 1 Repeat the steps for rendering the other ball shapes on the pink ends of the antennae. ### Step 2 Draw a shape that follows the contour of the antenna. This will serve as a shadow.
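The rosy cheeks (and every other fill with a 0% Opacity stop in this piece) read as a soft tint because a partially transparent color composites over the base fill with the standard source-over rule. A minimal per-channel sketch; the RGB values are made-up stand-ins, not the tutorial's exact swatches:

```python
# Source-over compositing of a semi-transparent fill on an opaque base:
# out = src * alpha + dst * (1 - alpha), per channel.
# The colors below are arbitrary stand-ins for the tutorial's swatches.

def over(src_rgb, alpha, dst_rgb):
    return tuple(round(s * alpha + d * (1 - alpha))
                 for s, d in zip(src_rgb, dst_rgb))

pink = (255, 105, 180)      # cheek color at full opacity
base_blue = (80, 120, 200)  # robot-head base color
print(over(pink, 0.4, base_blue))  # (150, 114, 192)
```

At the gradient's transparent end (alpha 0) the base color shows through unchanged, which is why the cheek fades out smoothly instead of ending at a hard edge.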
### Step 3 Apply a linear gradient to this shape going from the dark purple/brown used previously to the antenna's light lilac set to 0% Opacity. Follow the steps shown previously for intersecting a shape with another (see section 11, steps 4-5). ### Step 4 Using the Pen Tool (P), draw a curved line (no fill, only stroke) set to the stroke weight of your choice. Repeat along the antenna. Group (Ctrl + G) the little lines together. ### Step 5 Copy (Ctrl + C) and Paste (Ctrl + V) the base antenna shape. ### Step 6 Align it with the main antenna. ### Step 7 Select both the grouped lines and the new antenna shape and make a Clipping Mask (Ctrl + 7). ### Step 8 Apply a linear gradient with a medium purple-blue on either side of the light lilac color to the base antenna shape. Adjust the gradient angle as you see fit. ### Step 9 Draw a shape with the Pen Tool (P) that follows the contour of the antenna shape, similar to in the image above. ### Step 10 Apply a linear gradient from white to light lilac, with the lilac set to 0% Opacity. Alter the shape's overall transparency to your liking. Copy (Ctrl + C) and Paste (Ctrl + V) the left antenna, flip over a vertical axis, and align with the antenna on the right side. ### Step 1 Apply the same gradient from the antenna in section 17, step 8. Keep it vertical with the shadow colors closer to the outside edges of the neck shape. ### Step 2 The steps are similar to those from the previous section. Draw stroked lines, Copy (Ctrl + C) and Paste (Ctrl + V) the neck shape, Align, and make a Clipping Mask (Ctrl + 7). ### Step 3 Draw a shape with the Pen Tool (P) that follows the bottom contour of the neck piece and ends about half-way up. Place this behind the clipped lines, but above the rectangle and ellipse of the neck. This creates the shape seen above. Match the gradient from the base neck shape.
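A three-stop gradient like the dark-lilac-dark one on the antenna is nothing more than per-channel interpolation between whichever two stops surround a given position. A sketch of how such a gradient is sampled along its axis; the stop colors are stand-ins, not the exact swatches:

```python
# Sample a linear gradient at position t in [0, 1] by interpolating
# between the two surrounding color stops. RGB values are stand-ins.

def sample_gradient(stops, t):
    """stops: list of (position, (r, g, b)) sorted by position."""
    for (p0, c0), (p1, c1) in zip(stops, stops[1:]):
        if p0 <= t <= p1:
            f = (t - p0) / (p1 - p0)
            return tuple(round(a + (b - a) * f) for a, b in zip(c0, c1))
    return stops[-1][1]

# shadow -> lilac -> shadow, like the antenna gradient:
antenna = [(0.0, (70, 60, 120)), (0.5, (200, 190, 230)), (1.0, (70, 60, 120))]
print(sample_gradient(antenna, 0.25))  # (135, 125, 175), halfway to the lilac
```

Placing the light stop in the middle is what makes the tube read as a rounded cylinder: the color is symmetric about the highlight, just as this sample is for t = 0.25 and t = 0.75.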
### Step 4 For the edge of the ellipse connecting the neck to the robot's torso, draw a shape similar to the one above using the same gradient as before. ### Step 5 This one overlaps the ellipse, so you'll have to Intersect it in Pathfinder. ### Step 1 These are the same steps from the antennae and the neck. Apply the same linear gradient from sections 17 and 18. ### Step 2 Draw curved lines with the Pen Tool (P) that follow the length and contour of the robot's arms and Group (Ctrl + G) them together. ### Step 3 Copy (Ctrl + C) and Paste (Ctrl + V) the arm shape and Align. ### Step 4 Select both the new arm shape and the grouped lines and apply a Clipping Mask (Ctrl + 7). ### Step 5 Add a shadow shape in the same manner as the one added in Section 17, Steps 2-4. ### Step 6 Repeat steps 1-5 for the right arm or Copy (Ctrl + C) and Paste (Ctrl + V) if your arms are mirrored. ## 20. Render and Refine the Hands ### Step 1 I find it easiest to work on the hands away from the rest of the composition. ### Step 2 Draw a shadow shape that tapers at the lower-left corner of the hand and upper right. This transparent gradient goes from dark blue to light blue. ### Step 3 Repeat the steps to Intersect shapes from previous steps. Apply a linear gradient similar to the one seen above to the main hand shape. ### Step 4 Add a shape for the hand's highlight. This one follows the upper contour of the hand. ### Step 5 I added some low-opacity 1pt stroke lines as core shadows on the hand. ### Step 6 The same gradient from the arms and neck has been applied to the robot's thumb (and will be applied to each finger). ### Step 7 Copy (Ctrl + C) and Paste (Ctrl + V) the thumb piece, change it to the dark purple/brown shadow color, lower the Opacity, and set it behind the original thumb piece. ### Step 8 Back to the same steps from the arms and neck. Draw your thin stroked lines and Group (Ctrl + G) them together. ### Step 9 Apply a Clipping Mask (Ctrl + 7) in the same manner as before.
### Step 10

Repeat for each finger.

### Step 12

Repeat steps 1-11 for the other hand, or simply Copy (Ctrl + C) and Paste (Ctrl + V) if your other hand is the same.

## 21. Render the Heart

### Step 1

Copy (Ctrl + C) and Paste (Ctrl + V) the heart your little robot is holding to another area of your workspace.

### Step 2

Rendering the pink heart is similar to rendering the ball pieces from earlier. Play with linear and radial gradients to create shadows and highlights.

### Step 3

Add shadow shapes that accentuate the curves of the heart. Group the rendered shapes together, set the pink heart shape above the grouped ones, and make a Clipping Mask (Ctrl + 7).

### Step 4

Align the shadow and highlight group with the original heart shape. Copy (Ctrl + C) and Paste (Ctrl + V) the original heart shape and set it as a stroked line. Set it behind the main heart.

### Step 5

Add transparent circles with the Ellipse Tool (L).

### Step 6

Copy (Ctrl + C) and Paste (Ctrl + V) the main heart shape, set it to the same dark purple/brown color used for shadows throughout the piece, and lower its Opacity to your desired level. Offset it behind the main heart piece. Here's how it will look.

## 22. Finish Your Illustration by Rendering the Frame

### Step 1

Repeat the steps from the previous section for the heart in the frame.

### Step 2

Isolate one of the wing shapes away from the rest of the composition in order to work more easily. Work up shadow shapes that follow the wing's scallops.

### Step 3

Add gradient shapes that work as highlights, going from an even lighter lilac to the main lilac color. Other colors that work well for metal bits are assorted grays, blues, and light greens.

### Step 4

Apply a Clipping Mask (Ctrl + 7) as you have done countless times before in this tutorial and Group (Ctrl + G) your elements together.

### Step 5

Place the wing back into the frame, beneath the heart shape.
### Step 6

The other three wings were Copied (Ctrl + C) and Pasted (Ctrl + V) from the first and arranged into the composition.

### Step 7

Apply the same gradient from the robot's arms to the frame pieces.

### Step 8

Follow the same steps from the arm section in drawing curved lines on the frame tubes, shading, and making a Clipping Mask.

### Step 9

Moving on to the human heart shapes at the bottom of the frame: reference images will likely be needed to understand the basic shapes of the heart, valves, etc. The short of it is to render this piece in the same way you did the robot heart.

### Step 10

The more shadow and highlight shapes you layer onto one another, the more rendered your human heart shape will become.

### Step 11

Once again, Group (Ctrl + G) your shadow and highlight shapes together, Copy (Ctrl + C) and Paste (Ctrl + V) the main heart shape on top of them, and make a Clipping Mask (Ctrl + 7).

### Step 12

Unhide those hidden oil wiggles from earlier.

### Step 13

Add some more with the Pencil Tool (N).

### Step 14

Copy (Ctrl + C) and Paste (Ctrl + V), flip over a vertical axis, and align for the other side of the frame. Add whatever sort of background you deem fit. In this case, a simple radial gradient has been applied behind the main piece (in a large rectangle that takes up the picture frame).

## Awesome Work! You're Now Finished!

At long last, this funky robot and his crazy little frame are complete. Push your piece further with more rendered/shaded bits or add some sweet little sparkles in those highlight hot spots.
http://mathoverflow.net/questions/87837/does-the-hirsch-conjecture-hold-for-n-2d?answertab=active
# Does the Hirsch conjecture hold for $n < 2d$?

The Hirsch conjecture asserts that the graph (i.e. $1$-skeleton) of a $d$-dimensional convex polytope with $n$ facets has diameter at most $n - d$. After being open for decades, Francisco Santos has recently proved that this fails in general.

Is it possible that the conjecture holds for $n < 2d$? Santos's counterexample had $(n,d) = (86, 43)$.

One observation which may be relevant: if $n < 2d$, then every pair of vertices has a common facet. One can use this to show that the general Hirsch conjecture reduces to the $n \ge 2d$ case (see Ziegler's book Lectures on Polytopes, p. 84). But this doesn't seem to answer the question here.

- A partly-baked idea: The common facet shared by a pair of vertices is a $(d-1, 2d-2)$ polytope, which might serve as a basis for induction...? – Joseph O'Rourke Feb 7 '12 at 21:51

Lemma: If $P$ is a $d$-polytope with $n$ facets and we perform a "wedge" over any facet $F$, we get a $(d+1)$-polytope $P'$ with $n+1$ facets and with diameter$(P')$ $\ge$ diameter$(P)$.
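As an aside not in the original thread: the boundary case $n = 2d$ is realized by the $d$-cube, which has $2d$ facets and whose graph has diameter exactly $d = n - d$, so it meets the Hirsch bound with equality. The sketch below (my own illustration) checks this by breadth-first search on small cubes.

```python
from itertools import product
from collections import deque

def hypercube_graph(d):
    """Adjacency of the graph of the d-cube: vertices are 0/1 vectors,
    edges join vertices differing in exactly one coordinate."""
    return {v: [v[:i] + (1 - v[i],) + v[i + 1:] for i in range(d)]
            for v in product((0, 1), repeat=d)}

def diameter(adj):
    """Graph diameter via BFS from every vertex."""
    best = 0
    for s in adj:
        dist = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            for w in adj[u]:
                if w not in dist:
                    dist[w] = dist[u] + 1
                    q.append(w)
        best = max(best, max(dist.values()))
    return best

for d in range(2, 6):
    n = 2 * d  # the d-cube has n = 2d facets
    assert diameter(hypercube_graph(d)) == n - d  # Hirsch bound, with equality
```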
http://techtalks.tv/events/74/172/
## TechTalks from event: FOCS 2011

We will be uploading the videos for FOCS 2011 during the week of Nov 28th 2011.

## 3A

• Multiple-Source Multiple-Sink Maximum Flow in Directed Planar Graphs in Near-Linear Time

Authors: Glencora Borradaile and Philip N. Klein and Shay Mozes and Yahav Nussbaum and Christian Wulff-Nilsen

We give an $O(n \log^3 n)$ algorithm that, given an $n$-node directed planar graph with arc capacities, a set of source nodes, and a set of sink nodes, finds a maximum flow from the sources to the sinks. Previously, the fastest algorithms known for this problem were those for general graphs.

• Minimum Weight Cycles and Triangles: Equivalences and Algorithms

Authors: Liam Roditty and Virginia Vassilevska Williams

We consider the fundamental algorithmic problem of finding a cycle of minimum weight in a weighted graph. In particular, we show that the minimum weight cycle problem in an undirected $n$-node graph with edge weights in $\{1,\ldots,M\}$, or in a directed $n$-node graph with edge weights in $\{-M,\ldots,M\}$ and no negative cycles, can be efficiently reduced to finding a minimum weight *triangle* in a $\Theta(n)$-node *undirected* graph with weights in $\{1,\ldots,O(M)\}$. Roughly speaking, our reductions imply the following surprising phenomenon: a minimum cycle with an arbitrary number of weighted edges can be "encoded" using only *three* edges within roughly the same weight interval! This resolves a longstanding open problem posed in a seminal work by Itai and Rodeh [SIAM J. Computing 1978 and STOC '77] on minimum cycle in unweighted graphs.
Direct consequences of our efficient reductions are $\tilde{O}(Mn^{\omega}) \leq \tilde{O}(Mn^{2.376})$-time algorithms using fast matrix multiplication (FMM) for finding a minimum weight cycle in both undirected graphs with integral weights from the interval $[1,M]$ and directed graphs with integral weights from the interval $[-M,M]$. The latter seems to reveal a strong separation between the all pairs shortest paths (APSP) problem and the minimum weight cycle problem in directed graphs, as the fastest known APSP algorithm has a running time of $O(M^{0.681}n^{2.575})$ by Zwick [J. ACM 2002]. In contrast, when only combinatorial algorithms are allowed (that is, without FMM), the only known solution to minimum weight cycle is by computing APSP. Interestingly, any separation between the two problems in this case would be an amazing breakthrough, as by a recent paper by Vassilevska W. and Williams [FOCS '10], any $O(n^{3-\epsilon})$-time algorithm ($\epsilon>0$) for minimum weight cycle immediately implies an $O(n^{3-\delta})$-time algorithm ($\delta>0$) for APSP.

• Graph Connectivities, Network Coding, and Expander Graphs

Authors: Ho Yee Cheung and Lap Chi Lau and Kai Man Leung

In this paper we present a new algebraic formulation to compute edge connectivities in a directed graph, using the ideas developed in network coding. This reduces the problem of computing edge connectivities to solving systems of linear equations, thus allowing us to use tools in linear algebra to design new algorithms. Using the algebraic formulation we obtain faster algorithms for computing single source edge connectivities and all pairs edge connectivities; in some settings the amortized time to compute the edge connectivity for one pair is sublinear. Through this connection, we have also found an interesting use of expanders and superconcentrators to design fast algorithms for some graph connectivity problems.
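Not part of the talk listing: the minimum-weight-cycle abstract above notes that, combinatorially, the only known solution is by computing APSP. As an illustration of that baseline (a plain $O(n^3)$ Floyd-Warshall sketch of my own, not the authors' algorithm), a minimum directed cycle through an edge $(u,v)$ of weight $w$ closes a shortest path from $v$ back to $u$:

```python
INF = float("inf")

def min_weight_cycle(n, edges):
    """Minimum weight directed cycle via all-pairs shortest paths
    (Floyd-Warshall). Assumes no negative cycles. O(n^3) time."""
    dist = [[INF] * n for _ in range(n)]
    for u, v, w in edges:
        dist[u][v] = min(dist[u][v], w)
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if dist[i][k] + dist[k][j] < dist[i][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]
    # A minimum cycle through edge (u, v) closes a shortest path from v back to u.
    return min((w + dist[v][u] for u, v, w in edges if dist[v][u] < INF),
               default=INF)

# A 3-cycle of total weight 6 plus a heavier 2-cycle of weight 10.
print(min_weight_cycle(4, [(0, 1, 1), (1, 2, 2), (2, 0, 3), (0, 3, 5), (3, 0, 5)]))  # 6
```

The FMM-based algorithms in the paper beat this cubic baseline; the sketch only shows the APSP route the abstract contrasts against.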
• Maximum Edge-Disjoint Paths in Planar Graphs with Congestion 2

Authors: Loïc Séguin-Charbonneau and F. Bruce Shepherd

We study the maximum edge-disjoint path problem (MEDP) in planar graphs $G=(V,E)$. We are given a set of terminal pairs $s_i t_i$, $i=1,2,\ldots,k$, and wish to find a maximum *routable* subset of demands, that is, a subset of demands that can be connected by edge-disjoint paths. It is well-known that there is an integrality gap of $\Omega(\sqrt{n})$ for this problem even on a grid-like graph, and hence in planar graphs (Garg et al.). In contrast, Chekuri et al. show that for planar graphs, if LP is the optimal solution to the natural LP relaxation for MEDP, then there is a subset which is routable in $2G$ that is of size $\Omega(\textsc{opt}/O(\log n))$. Subsequently they showed that $\Omega(\textsc{opt})$ is possible with congestion $4$ (i.e., in $4G$) instead of $2$. We strengthen this latter result to show that a constant approximation is possible also with congestion $2$ (and this is tight via the integrality-gap grid example). We use a basic framework from work by Chekuri et al. At the heart of their approach is a 2-phase algorithm that selects an Okamura-Seymour instance. Each of their phases incurs a factor 2 congestion. It is possible to reduce one of the phases to have congestion 1. In order to achieve an overall congestion 2, however, the two phases must share capacity more carefully. For the Phase 1 problem, we extract a problem called *rooted clustering* that appears to be an interesting problem class in itself.

• Online Node-weighted Steiner Tree and Related Problems

Authors: Joseph (Seffi) Naor and Debmalya Panigrahi and Mohit Singh

We obtain the first online algorithms for the node-weighted Steiner tree, Steiner forest and group Steiner tree problems that achieve a poly-logarithmic competitive ratio.
Our algorithm for the Steiner tree problem runs in polynomial time, while those for the other two problems take quasi-polynomial time. Our algorithms can be viewed as online LP rounding algorithms in the framework of Buchbinder and Naor; however, while the *natural* LP formulations of these problems do lead to fractional algorithms with a poly-logarithmic competitive ratio, we are unable to round these LPs online without losing a polynomial factor. Therefore, we design new LP formulations for these problems drawing on a combination of paradigms such as *spider decompositions*, *low-depth Steiner trees*, *generalized group Steiner problems*, etc., and use the additional structure provided by these to round the more sophisticated LPs losing only a poly-logarithmic factor in the competitive ratio. As further applications of our techniques, we also design polynomial-time online algorithms with polylogarithmic competitive ratios for two fundamental network design problems in edge-weighted graphs: the group Steiner forest problem (thereby resolving an open question raised by Chekuri et al.) and the single source $\ell$-vertex connectivity problem (which complements similar results for the corresponding edge-connectivity problem due to Gupta et al.).
https://socratic.org/questions/how-do-you-find-the-perimeter-of-an-isosceles-triangle-whose-base-is-16-dm-and-w#594908
# How do you find the perimeter of an isosceles triangle whose base is 16 dm and whose height is 15 dm?

Draw the triangle first. The height is 15 and the base is 16. Cut the triangle in half and you will end up with two right triangles, each with a height of 15 and a base of 8. Use the Pythagorean theorem to get the slant height:

${a}^{2} = {b}^{2} + {c}^{2}$

Applying this equation, the slant height $x$ satisfies

${x}^{2} = {15}^{2} + {8}^{2}$

Simplify this and you will get $x = 17$ dm.

Perimeter $= 17 + 16 + 17 = 50$ dm.
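The arithmetic is easy to verify numerically (this check is my addition, not part of the original answer); `math.hypot` computes the hypotenuse directly:

```python
import math

base, height = 16.0, 15.0
# Halving the isosceles triangle leaves a right triangle with legs 15 and 8.
slant = math.hypot(height, base / 2)  # sqrt(15^2 + 8^2) = 17
perimeter = base + 2 * slant          # the base plus the two equal slant sides
print(slant, perimeter)  # 17.0 50.0
```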
https://zbmath.org/?q=an:1117.57300
zbMATH — the first resource for mathematics

Kashaev's conjecture and the Chern-Simons invariants of knots and links. (English) Zbl 1117.57300

Summary: R. M. Kashaev [Mod. Phys. Lett. A 10, 19, 1409–1418 (1995; Zbl 1022.81574)] conjectured that the asymptotic behavior of the link invariant he introduced, which equals the colored Jones polynomial evaluated at a root of unity, determines the hyperbolic volume of any hyperbolic link complement. We observe numerically that for the knots $$6_3$$, $$8_9$$ and $$8_{20}$$ and for the Whitehead link, the colored Jones polynomials are related to the hyperbolic volumes and the Chern–Simons invariants, and we propose a complexification of Kashaev's conjecture.

MSC:

57M27 Invariants of knots and $$3$$-manifolds (MSC2010)
17B37 Quantum groups (quantized enveloping algebras) and related deformations
33B30 Higher logarithm functions
57M50 General geometric structures on low-dimensional manifolds
58J28 Eta-invariants, Chern-Simons invariants
81R50 Quantum groups and related algebraic methods applied to problems in quantum theory

Software: SnapPea

References:

[1] Chern, S.-S., Ann. of Math. (2) 99, pp 48– (1974) · Zbl 0283.53036 · doi:10.2307/1971013
[2] Cohen, H., Pari-GP: a computer program for number theory
[3] Coulson, D., Experiment. Math. 9 (1), pp 127– (2000)
[4] Kashaev, R. M., Zap. Nauchn. Sem. S.-Peterburg. Otdel. Mat. Inst. Steklov. (POMI) 269, pp 262– (2000)
[5] Kashaev, R. M., Modern Phys. Lett. A 10 (19), pp 1409– (1995) · Zbl 1022.81574 · doi:10.1142/S0217732395001526
[6] Kashaev, R. M., Lett. Math. Phys. 39 (3), pp 269– (1997) · Zbl 0876.57007 · doi:10.1023/A:1007364912784
[7] Kirby, R. and Melvin, P., "The 3-manifold invariants of Witten and Reshetikhin–Turaev for sl(2,C)", Vol. 105, 473–545 (1991) · Zbl 0745.57006
[8] Meyerhoff, R., Low-dimensional topology and Kleinian groups (Coventry/Durham, 1984), pp 217– (1986)
[9] Murakami, H., Acta Math. 186 (1), pp 85– (2001) · Zbl 0983.57009 · doi:10.1007/BF02392716
[10] Neumann, W. D., Topology 24 (3), pp 307– (1985) · Zbl 0589.57015 · doi:10.1016/0040-9383(85)90004-7
[11] Thurston, D., Hyperbolic volume and the Jones polynomial (1999)
[12] Weeks, J., SnapPea: a computer program for creating and studying hyperbolic 3-manifolds
[13] Yokota, Y., Knot Theory – dedicated to Professor Kunio Murasugi for his 70th birthday, pp 362– (2000)
[14] Yokota, Y., "On the volume conjecture for hyperbolic knots" · Zbl 1226.57025
[15] Yoshida, T., Invent. Math. 81 (3), pp 473– (1985) · Zbl 0594.58012 · doi:10.1007/BF01388583

This reference list is based on information provided by the publisher or from digital mathematics libraries. Its items are heuristically matched to zbMATH identifiers and may contain data conversion errors. It attempts to reflect the references listed in the original paper as accurately as possible without claiming the completeness or perfect precision of the matching.
https://www.gradesaver.com/textbooks/math/algebra/college-algebra-7th-edition/chapter-8-sequences-and-series-section-8-3-geometric-sequences-8-3-exercises-page-615/27
## College Algebra 7th Edition

The first five terms are:

$a_1 = 0 \\ a_2 = \ln{5} \\ a_3 = 2\ln{5} \\ a_4 = 3\ln{5} \\ a_5 = 4\ln{5}$

The sequence is not geometric.

To find the first five terms, substitute 1, 2, 3, 4, and 5 into the given formula. Use the rules:

(1) $\ln{(a^n)} = n \cdot \ln{a}$
(2) $\ln{1} = 0$

$a_1 = \ln{(5^{1-1})} = \ln{(5^0)} = \ln{1} = 0 \\ a_2 = \ln{(5^{2-1})} = \ln{(5^1)} = \ln{5} \\ a_3 = \ln{(5^{3-1})} = \ln{(5^2)} = 2\ln{5} \\ a_4 = \ln{(5^{4-1})} = \ln{(5^3)} = 3\ln{5} \\ a_5 = \ln{(5^{5-1})} = \ln{(5^4)} = 4\ln{5}$

RECALL: A sequence is geometric if there is a common ratio among consecutive terms.

Note that the consecutive terms above do not have a common ratio. Thus, the sequence is not geometric.
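As a quick numerical check (my addition, not part of the textbook solution): the terms share a common *difference* of $\ln 5$, but the ratios of consecutive terms are not constant, confirming the sequence is not geometric.

```python
import math

# a_n = ln(5^(n-1)) for n = 1..5
a = [math.log(5 ** (n - 1)) for n in range(1, 6)]

# Common difference ln(5) between consecutive terms...
diffs = [a[i + 1] - a[i] for i in range(4)]
print(all(abs(d - math.log(5)) < 1e-9 for d in diffs))  # True

# ...but no common ratio, so the sequence is not geometric
# (a_1 = 0 is skipped: a ratio against a zero term is undefined anyway).
ratios = [a[i + 1] / a[i] for i in range(1, 4)]
print(ratios)  # approximately [2.0, 1.5, 1.33...] — not constant
```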
https://tex.stackexchange.com/questions/415081/moving-labels-in-xypics
# Moving labels in xypics

I have made a significantly complicated diagram of a system of chemical reactions in XY-pic, but for all the reaction rates to be visible I have to change their placement on the arrows, either making them closer to the arrows, further away, or varying their occurrence along the length of the arrow. So far I have not succeeded in making any of the commands from the documentation help with this. Could someone show me how it's done?

\documentclass{article}
\usepackage{color}
\usepackage[color,matrix,arrow]{xy}
\begin{document}
$\xymatrix{
&&A_1+A_2+L+L \ar@<-.5ex>[ddl]_{k_{a2}} \ar@<+.5ex>[ddr]^{k_{a1}}&&\\
&&&&\\
&A_1+A_2L+L\ar@<-.5ex>[ddl]_{k_{a22}} \ar@<-.5ex>[uur]_{k_{d2}} \ar@<+.5ex>[rdd]^{k_{a1}} \ar@<+.5ex>@[lightgray][rddd]^{\textcolor{lightgray}{k_{a21}}}& & A_1L+A_2+L \ar@<.5ex>@[lightgray][lddd]^{\textcolor{lightgray}{k_{a12}}} \ar@<.5ex>[ddl]^{k_{a2}} \ar@<+.5ex>[luu]^{k_{d1}} \ar@<+.5ex>[rdd]^{k_{a11}}&\\
&&&&\\
A_1+LA_2L\ar@<-.5ex>[uur]_{k_{d22}} \ar@<+.5ex>[ddr]^{k_{a221}} && A_1L+A_2L\ar@<.5ex>[uur]^{k_{d2}} \ar@<.5ex>[ddl]^{k_{a212}} \ar@<+.5ex>[uul]^{k_{d1}} \ar@<+.5ex>[ddr]^{k_{a121}} & & LA_1L+A_2 \ar@<+.5ex>[ddl]^{k_{a112}} \ar@<+.5ex>[uul]^{k_{d11}} \\
&&\textcolor{lightgray}{A_1LA_2+L}\ar@<.5ex>@[lightgray][uuur]^{\textcolor{lightgray}{k_{d12}}} \ar@<+.5ex>@[lightgray][uuul]^{\textcolor{lightgray}{k_{d21}}} \ar@<+.5ex>@[lightgray][dr]^{\textcolor{lightgray}{k_{a121}}} \ar@<.5ex>@[lightgray][dl]^{\textcolor{lightgray}{k_{a212}}} &&\\
&LA_2LA_1\ar@<.5ex>@[lightgray][ur]^{\textcolor{lightgray}{k_{d212}}} \ar@<.5ex>[uur]^{k_{d212}} \ar@<+.5ex>[uul]^{k_{d221}} \ar@<+.5ex>[ddr]^{k_{a2211}} & & LA_1LA_2\ar@<.5ex>[uur]^{k_{d112}} \ar@<.5ex>[ddl]^{k_{a1122}} \ar@<+.5ex>[uul]^{k_{d121}} \ar@<+.5ex>@[lightgray][ul]^{\textcolor{lightgray}{k_{d121}}} &\\
&&&&\\
& & (LA_1LA_2)_r,\ar@<.5ex>[uur]^{k_{d1122}} \ar@<+.5ex>[uul]^{k_{d2211}}& & \\
}$

Are you sure that you have had a look in the manual?
Your problem is explicitly addressed in the paper that is mentioned on the CTAN site (Using XY-pic, https://ctan.org/pkg/xypic). I have never used this package before -- the code looks like a cat walked over a keyboard :). I guess that I will use my next online banking password by using a code snippet from the manual :).

\documentclass{article}
\usepackage[all]{xy}
%
\begin{document}
% \frame{} is just for illustration purposes.
% Taken from the paper: Using XY-pic on https://ctan.org/pkg/xypic
\frame{\begin{xy}
(0,0)*+{A}; (20,0)*+{B} **\dir{-}%
?>*\dir{>}
?*!/_2mm/{\alpha}
\end{xy}}

\frame{\begin{xy}
(0,0)*+{A}; (20,0)*+{B} **\dir{-}%
?>*\dir{>}
?*!/_4mm/{\alpha}
\end{xy}}

% ?<
\frame{\begin{xy}
(0,0)*+{A}; (20,0)*+{B} **\dir{-}%
?>*\dir{>}
?<*!/_2mm/{\alpha}
\end{xy}}

% ?>
\frame{\begin{xy}
(0,0)*+{A}; (20,0)*+{B} **\dir{-}%
?>*\dir{>}
?>*!/_2mm/{\alpha}
\end{xy}}

% ?(0.5)
\frame{\begin{xy}
(0,0)*+{A}; (20,0)*+{B} **\dir{-}%
?>*\dir{>}
?(0.5)*!/_2mm/{\alpha}
\end{xy}}

% ?(0.8)
\frame{\begin{xy}
(0,0)*+{A}; (20,0)*+{B} **\dir{-}%
?>*\dir{>}
?(0.8)*!/_2mm/{\alpha}
\end{xy}}
\end{document}

• That's helpful; I was only aware of this piece of documentation. However, it doesn't solve the problem, as it doesn't explain how to incorporate those options with the ones my arrows are using. So far, when I add spacing commands for the labels, they remove the other options on the arrows. – Abijah Feb 14 '18 at 11:40

• Then maybe you could minimize your code example to one arrow in order to make it easier to help and in order to focus on the actual problem. Right now your code is too complex for this kind of question, in my humble opinion. – Dr.
Manuel Kuehner Feb 14 '18 at 19:03

I decided on this as the code for the xymatrix:

\xymatrix{
&&A_1+A_2+L+L \ar@<-.5ex>[dl]_{k_{a2}} \ar@<+.5ex>[dr]^{k_{a1}}&&\\
&A_1+A_2L+L\ar@<-.5ex>[dl]_{k_{a22}} \ar@<-.5ex>[ur]_{k_{d2}} \ar@<+.5ex>[rd]^{k_{a1}} \ar@<+.5ex>@[lightgray][rddd]^{\textcolor{lightgray}{k_{a21}}}& & A_1L+A_2+L \ar@<.5ex>@[lightgray][lddd]^</2cm/{\textcolor{lightgray}{k_{a12}}} \ar@<.5ex>[dl]^{k_{a2}} \ar@<+.5ex>[lu]^{k_{d1}} \ar@<+.5ex>[rd]^{k_{a11}}&\\
A_1+LA_2L\ar@<-.5ex>[ur]_{k_{d22}} \ar@<+.5ex>[ddr]^{k_{a221}} && A_1L+A_2L\ar@<.5ex>[ur]^{k_{d2}} \ar@<.5ex>[ddl]^{k_{a212}} \ar@<+.5ex>[ul]^{k_{d1}} \ar@<+.5ex>[ddr]^{k_{a121}} & & LA_1L+A_2 \ar@<+.5ex>[ddl]^{k_{a112}} \ar@<+.5ex>[ul]^{k_{d11}} \\
&&&&\\
&LA_2LA_1\ar@<.5ex>@[lightgray][r]^{\textcolor{lightgray}{k_{d212}}} \ar@<.5ex>[uur]^{k_{d212}} \ar@<+.5ex>[uul]^{k_{d221}} \ar@<+.5ex>[ddr]^{k_{a2211}} &\textcolor{lightgray}{A_1LA_2+L}\ar@<.5ex>@[lightgray][uuur]^{\textcolor{lightgray}{k_{d12}}} \ar@<+.5ex>@[lightgray][uuul]^</3cm/{\textcolor{lightgray}{k_{d21}}} \ar@<+.5ex>@[lightgray][r]^{\textcolor{lightgray}{k_{a121}}} \ar@<.5ex>@[lightgray][l]^{\textcolor{lightgray}{k_{a212}}} & LA_1LA_2\ar@<.5ex>[uur]^{k_{d112}} \ar@<.5ex>[ddl]^{k_{a1122}} \ar@<+.5ex>[uul]^{k_{d121}} \ar@<+.5ex>@[lightgray][l]^{\textcolor{lightgray}{k_{d121}}} &\\
&&&&\\
& & (LA_1LA_2)_r,\ar@<.5ex>[uur]^{k_{d1122}} \ar@<+.5ex>[uul]^</1cm/{k_{d2211}}& & \\
}

The individual arrows with shifted labels were coded as:

\ar@<+.5ex>[uul]^</1cm/{k_{d2211}}

• I'm not sure if I understood what you wish, but to center a label use \ar[d]^-{f}, for example (that is, use a dash - after the positioning ^ or _). – Sigur Feb 20 '18 at 11:43
https://www.intmath.com/blog/supplies/best-middle-school-back-to-school-math-supplies-12792
# Best Middle School Back to School Math Supplies

By Kathleen Cantor, 10 Aug 2021

Your preteen probably has summer plans to sleep in, play video games, see friends, and maybe travel with family. It's the perfect time to relax and forget about the stress of school, homework, and math class, right? On the other hand, parents are practically giddy about sending their middle school students back to the classroom. Once the sunburns have turned to tan lines and curfews have been broken, parents are ready to go shopping for the best back to school math supplies available.

## The Best Middle School Back to School Math Supplies for Your Child

Math is an interesting subject, and for some students, it's a breeze. For others, they'd rather get a tooth pulled without novocaine. Yeah, it's that bad. But fortunately, we've created a guide to help you pick the best middle school math supplies to help make this dry subject a lot more fun.

### 1. Invest in a Scientific Calculator

Unless specified by the student's teacher, a scientific calculator is ideal for any middle school student. Purchasing a professional-grade scientific calculator ensures that your child can easily handle statistical calculations in middle school math and beyond. A scientific calculator can help students calculate different angle modes, scientific notation modes, two-step equations, or basic algebra.

When getting a calculator for your child, consider factors such as the batteries required, the screen display, the ability to read calculation records, and other basic features. Read the reviews available online to help you find the right product for your child at a price that is within budget.
Not only is a calculator one of the best middle school, back to school math supplies you can purchase for your kid, but they can also use the calculator for any math class they take once they finish with middle school. Think about it like a long-term investment.

### 2. Provide Additional Aids with a Geometry Kit

An average geometry kit should come with a compass, a protractor, a set square, a storage box, a triangle, a 15 cm ruler, an eraser, a small pencil, and lead refills. With it, your child should be able to solve any tricky equation that comes their way. Each geometry kit item plays a different role, from taking length measurements to construction and angling. The tools are easy to use, durable, and most of them utilize an ergonomic design, improving safety and comfort when handling.

The geometry kit you purchase should be portable, in a durable case. The ease of packing it away to carry to and from school will allow your student to work comfortably at home and school.

### 3. Grab a Graph Paper Notebook or Loose Leaf Graph Paper

Notebooks are necessary for almost any subject in middle school. However, specific notebooks serve best in math: the glorious graph paper notebook or loose-leaf paper. Parents can find graph paper at almost any store. These notebooks or loose-leaf paper packs vary depending on the number of sheets, size of squares, and paper quality. Purchasing a few math graph paper notebooks, with each having 50 letter-sized sheets, should start your child out on the right math path.

With the ease of graph paper, your child can focus on the required math work instead of the accuracy of their hand-drawn graph lines. It makes the list of essential back to school math supplies for middle schoolers.

### 4. Purchase or Make Math Flashcards

Middle school is the right place for students to develop their upper math skills, but these new concepts can be pesky to get down.
With the help of math flashcards, students can focus on the necessary equations, terms, geometry angles, etc., that they need to memorize. True, students can make flashcards themselves; all they need is a pack of index cards, a key ring, and a pen. But this Koogel kit includes pre-punched cards in six different colors, along with the key rings they need to make flashcard sets for every module during the year. Color coding the flashcards can also help students find the current study terms. Creating their own math flashcards gives students ownership over the cards and may get them more involved in their studies.

### 5. Magnetic Fraction Tiles

Visual aids can help middle school students learn math quickly and better understand the subject's topics. With magnetic fraction tiles, students can learn fractions using visual, hands-on magnetic aids that improve their understanding of fractions. A magnetic fraction set introduces the whole concept of fractions: parts of a whole, equivalents, and comparisons. Depending on the type of magnetic fraction tiles you opt for, most come in varying color codes and soft foam to depict fractions from wholes to twelfths. Placing the magnetic tiles on your fridge may give your student the opportunity to learn fractions in a comfortable environment.

### 6. Reusable Scratch Paper: Individual Whiteboards and Dry Erase Markers

Whiteboards offer a paperless form of learning math. They make it easy to write out long answers, brainstorm, and find solutions to math problems. Math whiteboards for students are designed to be big enough to handle whole problems while small enough for your child to carry around. Unlike notebooks, whiteboards can be erased and new problems solved on the same board. Whiteboards give your child room to make and correct mistakes easily. They can also save you from buying reams and reams of scratch paper.

### 7. Posters with Math Facts, Equations, Angles, Etc.
If your student does a lot of work at home, they may benefit from math posters. Math posters can provide a quick-glance reminder of the work they need to be completing. A poster may contain math equations, common geometry angles, or various math terms that your child needs to be familiar with. By posting a math poster in their workspace, your child will be exposed to and will use the tools around them as they complete their schoolwork.

## A Note on the Best Middle School Math Supplies

In some cases, your child might have a required or suggested list of math supplies sent home from their school. If that is the case, consider quality, safety, ease of use, durability, and price when purchasing their middle school back to school math supplies. Regardless of your child's perception of math, having the top math supplies for middle schoolers may help them survive their year of math with a smile. Even though math supplies can vary from one school to another, these standard items can support your child's learning.
# FMO 2021 Sample Paper, Grade 5: Questions and Answer Key

Below are the questions and the answer key for the Fermat Mathematical Olympiad (FMO), grade 5. The questions are taken from the FMO Facebook page. We hope they are useful.

1. Which number should be filled into the cell with the question mark? A. 13 B. 4 C. 14 D. 16 E. 15

2. The teacher asks students to paint over all cells with values greater than 0.5. Which answer below fits the requirement? A. B. C. D. E.

3. A rabbit needs to find the carrot. He cannot pass through cells blocked by rocks, and each cell may be stepped on at most once. How many different ways are there? A. 2 B. 4 C. 5 D. 6 E. 7

4. Justin has a square piece of paper. He folds the paper twice and cuts it as in the picture. After cutting the paper, how many square pieces can Justin create? A. 0 B. 1 C. 3 D. 4 E. 5

5. Robbie has two ropes of length 1 dm and two ropes of length 7 cm. He joins some of the ropes together. Which length below can he NOT make? A. 17 cm B. 14 cm C. 24 cm D. 27 cm E. 15 cm

6. Which shaded region below has the greatest area? A. B. C. D. E.

7. A bookstore keeps track of its sales of black pens and blue pens over 3 days. Given that the number of blue pens sold on day 3 is double the number of blue pens sold on day 1, how many pens are sold on day 1 altogether? A. 80 B. 100 C. 120 D. 140 E. 160

8. A turtle and a rabbit join a relay race. The turtle starts at 07:15. He runs half of the path, then gives the baton to the rabbit. The rabbit runs 4 times as fast as the turtle and crosses the finish line at 07:35. When does the turtle give the baton to the rabbit? A. 07:31 B. 07:20 C. 07:29 D. 07:19 E. 07:30

9. The chart below shows the Math scores of Class 5A. Students who get more than 8 points will be awarded. How many students are rewarded in Class 5A? A. 19 B. 10 C. 20 D. 12 E. 6

10. Grandma had a square piece of land with area 100 m². Then she sold a small square piece of land with area 25 m², as in the figure. What is the perimeter of the land after that part is sold? A.
50 m B. 60 m C. 70 m D. 55 m E. 65 m

11. Peter has identical puzzle pieces (figure A). He tries to fit the pieces into a picture (figure B). At least how many squares are left uncovered in figure B? A. 0 B. 1 C. 2 D. 3 E. 4

12. They can use 8 planks to make one pen for a goat. If they want to make four pens for four goats, at least how many planks do they need to use? A. 32 B. 22 C. 18 D. 21 E. 20

13. Refer to the pattern below. How many cells are not shaded in the $$15^{th}$$ figure? A. 960 B. 700 C. 1000 D. 800 E. None of the above

14. Mr. Smith came to the airport at 10:35 and saw the departure time of his flight in the schedule below. Given that his flight number was a 4-digit natural number divisible by 4 and his gate number was even, how many minutes did he have to wait until his flight? A. 195 min B. 95 min C. 135 min D. 165 min E. 175 min

15. A firefighter was standing on the middle step of a ladder. He climbed 6 steps to spray water on a burning house. However, the fire was spreading outward, causing him to step back 10 steps. After a few minutes, he climbed 18 steps again and stood on the top step of the ladder. How many steps does that ladder have? A. 27 B. 31 C. 29 D. 33 E. None of the above

16. There are 8 people, Amy, Bill, Cindy, Dan, Emma, Fred, Greg, and Henry, going to a meeting. In the meeting, Amy is friends with 7 people, Bill with 6 people, Cindy with 5 people, Dan with 4 people, Emma with 3 people, Fred with 2 people, and Greg with 1 person. How many people is Henry friends with?

Answer Key:
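Problem 16 can be checked with a short peeling argument: the person who is friends with everyone must be friends with Henry; once that person (and anyone whose friend count is then exhausted) is removed, the same argument repeats on the smaller group. A minimal sketch in Python (ours, not part of the original answer key):

```python
def henry_friend_count():
    """Peel off the person who is friends with everyone else, then drop
    anyone whose stated friend count is exhausted; count edges forced to Henry."""
    deg = {"Amy": 7, "Bill": 6, "Cindy": 5, "Dan": 4,
           "Emma": 3, "Fred": 2, "Greg": 1}
    remaining = set(deg) | {"Henry"}
    henry_edges = 0
    while len(remaining) > 1:
        # the highest remaining count equals the number of other people,
        # so that person must be friends with all of them (Henry included)
        top = max((p for p in remaining if p != "Henry"), key=lambda p: deg[p])
        assert deg[top] == len(remaining) - 1
        for q in remaining:
            if q == top:
                continue
            if q == "Henry":
                henry_edges += 1
            else:
                deg[q] -= 1
        remaining.discard(top)
        remaining -= {p for p in remaining if p != "Henry" and deg[p] == 0}
    return henry_edges

print(henry_friend_count())  # 4
```

The peeling pairs Amy with Greg, Bill with Fred, and Cindy with Emma, leaving Dan and Henry both with 4 friends.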
2000 Mathematics Subject Classification. Primary 57M27; Secondary 11S05, 37B40. The author was supported in part by a JSPS fellowship for young scientists.

Zeros of the Alexander polynomial of a knot

### Akio Noguchi

Department of Mathematics, Tokyo Institute of Technology, Oh-okayama, Meguro-ku, Tokyo 152-8551, Japan. E-mail address: akio@math.titech.ac.jp

• Abstract. The leading coefficient of the Alexander polynomial of a knot is the most informative element of this invariant, and the growth of the orders of the first homology of cyclic branched covering spaces is also a familiar subject. Accordingly, there are many investigations of each subject, but no study treats both in the same context. In this paper, we show that the two subjects are closely related through $p$-adic number theory and dynamical systems.

1 Introduction

The leading coefficient of the Alexander polynomial $\Delta_K(t)$ of a knot $K$ is a well-known invariant for detecting fibered knots. The Alexander polynomial of a fibered knot is always monic [21]. The converse is not always true, but it holds for many knots, for example, alternating knots [20]. Moreover, the monic condition characterizes fibered knots in a sense of realization [3, 22]. The leading coefficient of the Alexander polynomial of a knot is also related to the commutator subgroup $G_K'$ of the knot group $G_K = \pi_1(S^3 \setminus K)$. The abelianization of $G_K'$ is finitely generated if and only if the leading coefficient is $\pm 1$ [5, 24]. The $r$-fold cyclic covering branched over a knot $K$, denoted $X_r$, is a fundamental object in knot theory, since its topological invariants are also invariants of the knot.
In [9], Gordon studied the growth of the order of $H_1(X_r;\mathbb{Z})$ with respect to $r$ and asked whether the growth is exponential in case some zero of $\Delta_K(t)$ is not a root of unity. More than 15 years later, this question was answered affirmatively by Riley [25] and by González-Acuña and Short [8] independently. But the result can still be sharpened: we completely express the growth by the zeros of the Alexander polynomial. The entropy is an invariant of a self-map measuring the complexity of the map. Here, however, we study the entropy of the meridian action on the Alexander module (Theorem 1) and regard it as an invariant measuring the complexity of the Alexander module, since the meridian action is canonical for every Alexander module. Precisely speaking, it should be called the entropy of the dual action on the dual group of the Alexander module, but we identify them by duality (see Section 2.1). With this interpretation of the entropy, we obtain the following results. For more precise statements, see Section 4.

Results. Let $\Delta_K(t) = \sum_{i=0}^n a_i t^i$ (with $a_0 a_n \neq 0$) be the Alexander polynomial of a knot $K$ and let $\alpha_i$ be the zeros (counted with multiplicity) of $\Delta_K(t)$. Then,

• (1) the leading coefficient of $\Delta_K(t)$ satisfies $\log|a_n| = \sum_{p < \infty} \sum_{|\alpha_i|_p > 1} \log|\alpha_i|_p$ (Corollary 4), and

• (2) the growth of the order of the first homology of the $r$-fold cyclic covering branched over $K$ is $\lim_{r \to \infty,\ |H_1(X_r;\mathbb{Z})| \neq 0} \frac{\log|H_1(X_r;\mathbb{Z})|}{r} = \sum_{p \le \infty} \sum_{|\alpha_i|_p > 1} \log|\alpha_i|_p$ (Corollary 1).
Here, $|\cdot|_p$ are the $p$-adic norms and $|\cdot|_\infty$ is the standard norm (we assume that embeddings $\overline{\mathbb{Q}} \to \overline{\mathbb{Q}_p}$ are fixed). In our study, we establish the following.

• The leading coefficient of the Alexander polynomial can be recovered from its zeros. Furthermore, the distribution of the zeros measures a certain distance of the Alexander module from being finitely generated as a $\mathbb{Z}$-module, and a new interpretation of the leading coefficient is given. (Section 4.2)

• Since the primary interest of Gordon [9] was the periodicity of $H_1(X_r;\mathbb{Z})$, he studied the growth of the orders to determine the non-periodic case. However, the growth also measures the complexity of the Alexander module. (Section 4.1)

Here we add a few comments on this study, which might make it a little more attractive. The Alexander polynomial of a knot is defined as a greatest common divisor of the initial Fitting ideal (elementary ideal) of the Alexander module $H_1(X_\infty;\mathbb{Z})$ as a $\mathbb{Z}[t^{\pm}]$-module. Here, the indeterminate $t$ is identified with the meridian action on $H_1(X_\infty;\mathbb{Z})$, and $X_\infty$ is the infinite cyclic cover of $X = S^3 \setminus K$. Then, by tensoring with the rational numbers $\mathbb{Q}$, the Alexander polynomial is also a generator of the Fitting ideal of the module $H_1(X_\infty;\mathbb{Q})$ as a $\mathbb{Q}[t^{\pm}]$-module, and hence it is the characteristic polynomial of the meridian action on $H_1(X_\infty;\mathbb{Q})$, up to units (see Theorem 6.17 in [15]).
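As a sanity check of formula (1), here is a worked example of our own for the polynomial $\Delta_{5_2}(t) = 2t^2 - 3t + 2$ (discussed later in the introduction); the $2$-adic norms of the zeros can be read off its Newton polygon:

```latex
% Newton polygon of 2t^2 - 3t + 2 over \mathbb{Q}_2: the points
% (i, v_2(a_i)) are (0,1), (1,0), (2,1), giving segments of slope -1
% and +1, hence one root with |\alpha|_2 = 2 and one with |\alpha|_2 = 1/2.
% At every odd prime p both roots are units, since a_0 and a_n are
% p-adic units. Therefore
\sum_{p<\infty} \sum_{|\alpha_i|_p > 1} \log|\alpha_i|_p = \log 2 = \log|a_n| .
```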
Although the rational homology $H_1(X_\infty;\mathbb{Q})$ gives a nice explanation of the Alexander polynomial, the leading coefficient $a_n$ is lost in $H_1(X_\infty;\mathbb{Q})$ because $a_n$ is a unit in $\mathbb{Q}[t^{\pm}]$. On the other hand, the entropy has an advantage over the Fitting ideal because we can replace $H_1(X_\infty;\mathbb{Z})$ with $H_1(X_\infty;\mathbb{Q})$ while preserving the entropy (cf. Step 1 in the proof of Proposition 9). This is why the zeros of the Alexander polynomial keep information about $H_1(X_\infty;\mathbb{Z})$. As Gordon mentioned, the difficulty in computing the growth of the orders arises in the case in which all zeros belong to the unit circle but some are not roots of unity (e.g. $\Delta_{5_2}(t) = 2t^2 - 3t + 2$). In this case, the standard norm is useless. Riley [25] handled the difficulty by $p$-adic analysis; González-Acuña and Short [8] handled it by showing that the growth is equal to the Mahler measure of the Alexander polynomial. Our feature is the interpretation of the growth as the entropy of the meridian action on the Alexander module, which can be obtained by combining the results of González-Acuña and Short [8] and of Einsiedler and Ward [6]. As a result, the growth turns out to be an invariant measuring the complexity of the Alexander module. Because the Alexander module is not always finitely generated, toral automorphisms are not enough to investigate it, but solenoidal automorphisms are (Lemma 1). So we can apply the work of Lind and Ward [17]: the Haar measure on a solenoid can be lifted to the Haar measure on adele rings while preserving the entropy, and the entropy is the sum of the entropies in the $p$-adic directions. That is to say, they established a kind of Hasse principle for dynamical systems.
Finally, the growth of the orders is expressed by the $p$-adic norms of the zeros of the Alexander polynomial, since expansions (or entropy) in adele rings can be computed by the $p$-adic norms (see Examples 1, 2). Although our approach is different from Riley's, the $p$-adic method is useful again. In [25], Riley also proved other results, on the $p$-part of $|H_1(X_r;\mathbb{Z})|$. He obtained upper bounds for the $p$-parts $|H_1(X_r;\mathbb{Z})|^{(p)}$ in terms of constants $A, H, E, n$ depending on the knot (Theorem 2 in [25]). This result implies that the $p$-parts have trivial growth with respect to $r$ for any prime (as against $e^r$). Silver and Williams [29] re-proved this trivial growth and generalized it under mild hypotheses, and their argument is helpful for us. As Silver and Williams mentioned, this fact implies that if $|H_1(X_r;\mathbb{Z})|$ does not have trivial growth, then the sequence displays infinitely many prime numbers in the factorization of its terms. Furthermore, Riley also confirmed that almost all $p$-parts actually increase and that his upper bounds are best possible (except for the constant multiplier) with respect to $r$ and also $p$ (Theorem 3 in [25]). Acknowledgment. I would like to thank Dr. Kazuo Masuda for his helpful advice and valuable discussion, and also thank Dr. Sadayoshi Kojima, Dr. Hitoshi Murakami, Dr. Masanori Morishita and Dr. Gregor Masbaum for kindly reading my rough draft and giving helpful comments. I am grateful to Dr. Kunio Murasugi for his encouragement in this study and kind hospitality during my visit to the University of Toronto, and also to Dr. Miho Aoki for helpful conversations on number theory. I also appreciate the seminar with Dr. Shoichi Nakajima, Dr. Shin Nakano and Dr. Mikami Hirasawa, which helped make this paper readable.
2 Preliminaries

2.1 Fourier analysis on number fields

The classical Fourier analysis is based on the Pontryagin duality between the integers $\mathbb{Z}$ and the torus $\mathbb{T} = \mathbb{R}/\mathbb{Z}$. Modern Fourier analysis extends over more general Pontryagin dualities and is sometimes called harmonic analysis. In this section, we review the Pontryagin duality for the rational number field $\mathbb{Q}$ and related topics.

2.1.1 Fourier analysis on LCA groups

Let $G$ be a locally compact group. The collection $\hat{G}$ of all continuous homomorphisms $\chi: G \to \mathbb{C}$ with $|\chi(g)| = 1$ for all $g \in G$, equipped with pointwise multiplication and the compact-open topology, is called the dual group or character group. The dual group $\hat{G}$ has the following properties; for more details, Rudin's book [27] is a standard exposition.

Proposition 1. A locally compact abelian group $G$ and its dual group $\hat{G}$ have the following properties.

• (1) $\hat{G}$ is also a locally compact abelian group.

• (2) $G$ is compact if and only if $\hat{G}$ is discrete.

• (3) $\hat{\hat{G}}$ is naturally isomorphic to $G$, which is called the Pontryagin duality.

• (4) Let $\langle g, \chi \rangle = \chi(g)$ for every $g \in G$, $\chi \in \hat{G}$. Then, for any continuous homomorphism $\varphi: G \to H$, there exists a continuous homomorphism $\hat{\varphi}: \hat{H} \to \hat{G}$ such that $\langle \varphi(g), \chi_H \rangle = \langle g, \hat{\varphi}(\chi_H) \rangle$ and $\hat{\hat{\varphi}} = \varphi$.

• (5) If $G$ and $H$ are both either compact or discrete, $\varphi$ is surjective if and only if $\hat{\varphi}$ is injective.

By the above proposition, an automorphism of a discrete abelian group can be transformed into an automorphism of a compact abelian group, and this transformation is reversible.
The duality translates topological structures into algebraic structures.

Proposition 2. Let $G$ be a compact abelian group.

• (1) $G$ is connected if and only if $\hat{G}$ is torsion free.

• (2) $G$ has finite dimension $d$ if and only if $\hat{G}$ has finite rank $d$.

• (3) $G$ is metrizable if and only if $\hat{G}$ is countable.

2.1.2 Fourier analysis on the rational number field

We now deal with the Pontryagin duality for the rational number field $\mathbb{Q}$ with the discrete topology. For this we need the $p$-adic number fields and the adele ring of $\mathbb{Q}$; see [23, 32] for more details. Every non-zero rational number $a \in \mathbb{Q}$ can be written as $a = p^m \frac{u}{v}$ (with $u$ and $v$ prime to $p$) for each rational prime $p$. Then the $p$-adic norm of $a$ is defined by $|a|_p = p^{-m}$ for $a \neq 0$ and $|0|_p = 0$, and it defines a metric on $\mathbb{Q}$ by $d_p(a, b) = |a - b|_p$, called the $p$-adic metric. The $p$-adic number field $\mathbb{Q}_p$ is defined as the completion of $\mathbb{Q}$ with respect to the $p$-adic metric $d_p$. An algebraic closure $\overline{\mathbb{Q}_p}$ has a unique norm extending the $p$-adic norm on $\mathbb{Q}_p$. These spaces may not seem natural; however, they are natural under the following concept.

Definition 1. Let $F$ be a locally compact field and $\mu$ an additive Haar measure on $F$. Then for every $a \in F$, $\mu(a\,\cdot)$ is another Haar measure on $F$. Therefore, for every $a \in F$, there exists the module $|a|_F$ such that $\mu(a\,\cdot) = |a|_F\,\mu(\cdot)$.

Example 1. When $F$ is the real number field $\mathbb{R}$, the module $|a|_{\mathbb{R}}$ is the ordinary norm $|a|$. When $F$ is the $p$-adic number field $\mathbb{Q}_p$, the module $|a|_{\mathbb{Q}_p}$ is the $p$-adic norm $|a|_p$.
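The $p$-adic norm of a rational number is easy to compute directly from the definition. The sketch below (function names are ours) also checks the classical product formula $\prod_{p \le \infty} |a|_p = 1$ for $a \in \mathbb{Q}^*$, which is what lets $\mathbb{Q}$ sit inside the restricted product defined next:

```python
from fractions import Fraction

def p_adic_norm(a, p):
    """|a|_p = p^(-m), where a = p^m * u/v with u and v prime to p."""
    a = Fraction(a)
    if a == 0:
        return Fraction(0)
    m, num, den = 0, a.numerator, a.denominator
    while num % p == 0:   # p divides the numerator: m goes up
        num //= p
        m += 1
    while den % p == 0:   # p divides the denominator: m goes down
        den //= p
        m -= 1
    return Fraction(p) ** (-m)

print(p_adic_norm(12, 2))              # 1/4, since 12 = 2^2 * 3
print(p_adic_norm(Fraction(3, 8), 2))  # 8,   since 3/8 = 2^(-3) * 3

# product formula: |a|_infinity * prod_{p finite} |a|_p = 1 for a in Q^*
a = Fraction(-360, 7)
primes = [q for q in range(2, 100) if all(q % d for d in range(2, q))]
prod = abs(a)                # the archimedean norm |a|_infinity
for p in primes:             # every prime dividing 360 * 7 is below 100
    prod *= p_adic_norm(a, p)
print(prod)                  # 1
```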
The $p$-adic norms are a natural concept with respect to an additive Haar measure on $\mathbb{Q}_p$, and consequently they are useful for computing an entropy (see Example 2).

Definition 2. The adele ring of the rational numbers is defined as the restricted direct product of the $\mathbb{Q}_p$, that is, $\mathbb{A}_{\mathbb{Q}} = \{ x = (x_p) \in \prod_{p \le \infty} \mathbb{Q}_p : |x_p|_p \le 1 \text{ for almost all } p \}$.

The adele group (with addition) is a locally compact abelian group and hence has a Haar measure, which is a kind of product measure. The following proposition means that, in the duality $\hat{\mathbb{Z}} \cong \mathbb{R}/\mathbb{Z}$, the adele ring of the rational numbers plays the role that the real numbers $\mathbb{R}$ play for $\mathbb{Z}$.

Proposition 3. $\mathbb{Q}$ is a uniform lattice in $\mathbb{A}_{\mathbb{Q}}$, i.e. a discrete co-compact subgroup of $\mathbb{A}_{\mathbb{Q}}$. Moreover, $\mathbb{A}_{\mathbb{Q}}/\mathbb{Q} \cong \hat{\mathbb{Q}}$.

2.2 Entropy

An entropy is a quantity measuring the complexity of a self-map. There are several definitions of entropy from various aspects. Kolmogorov [13] and Sinai [30] introduced the measure-theoretic entropy $h_\mu(T)$ for a measure-preserving map $T$ on a probability space $(X, \mathfrak{B}, \mu)$. Adler, Konheim and McAndrew [1] introduced the topological entropy $h(T)$ for a continuous map $T$ on a compact space. Bowen [2] defined a variant of topological entropy $h_d(T)$, called Bowen's topological entropy, for a uniformly continuous map $T$ on a metric space $(X, d)$. Each entropy has its own properties and is related to the others; for more details, see Walters [31]. Bowen's topological entropy is equal to the original topological entropy for a compact metrizable space.

Proposition 4.
When $X$ is compact, $h_d(T)$ does not depend on the metric $d$, and $h_d(T) = h(T)$.

The following proposition shows that the topological entropy is the supremum of the measure-theoretic entropies; it is usually called the variational principle.

Proposition 5. Let $T$ be a continuous map of a compact metric space $X$ and $M(X, T)$ the set of all probability measures preserved by $T$. Then $h(T) = \sup\{ h_\mu(T) : \mu \in M(X, T) \}$.

Our main interest is an automorphism of a locally compact abelian group, and especially of a compact abelian group. Since any surjective endomorphism of a compact group preserves the Haar measure, an automorphism of a compact group is a simple example for which both the topological and the measure-theoretic entropy are defined. Moreover, the two entropies are identical in this case.

Proposition 6. Suppose $G$ is a compact metrizable group, $T$ a surjective endomorphism of $G$, and $\mu$ a normalized Haar measure on $G$. Then $h_\mu(T) = h_d(T) = h(T)$.

By Proposition 6, we can choose any of these definitions for studying the entropy in this setting. Bowen [2] also introduced yet another topological entropy, which is computable and inherits its behavior from $h_d(T)$ and $h_\mu(T)$. In our setting, this entropy can be identified with $h_d(T)$ and computed by the following formula.

Proposition 7.
Let $G$ be a locally compact metrizable abelian group with a Haar measure $\mu$ and $T$ a surjective endomorphism. Then the entropy can be computed by the formula $h_d(T) = \lim_{\varepsilon \to 0} \limsup_{n \to \infty} \left[ -\frac{1}{n} \log \mu\left( \bigcap_{k=0}^{n-1} T^{-k} B(e, \varepsilon) \right) \right]$, where $B(e, \varepsilon)$ is the open $\varepsilon$-ball around the identity element with respect to an invariant metric $d$.

Example 2. Let $T_a$ be the automorphism of $\mathbb{Q}_p$ defined by multiplication by $a \in \mathbb{Q}_p^*$. Then it follows from Example 1 that $h_{d_p}(T_a) = \log|a|_p$ if $|a|_p > 1$, and $h_{d_p}(T_a) = 0$ if $|a|_p \le 1$.

Bowen's topological entropy is compatible with coverings.

Proposition 8 (Bowen [2]). Let $G$ be a locally compact metrizable abelian group with an invariant metric $d$. Suppose $\Gamma$ is a uniform lattice of $G$, that is, a discrete and co-compact subgroup of $G$. Let $\tilde{T}$ and $T$ be endomorphisms of $G$ and $G/\Gamma$ such that $\pi \circ \tilde{T} = T \circ \pi$, where $\pi: G \to G/\Gamma$ is the projection. Then $h(T) = h_d(\tilde{T})$.

3 Solenoidal entropy and Alexander polynomial

A solenoid $\Sigma^d$ is, by definition, a compact connected finite-dimensional abelian group; it arose as a generalization of the torus $\mathbb{T}^d$. The following theorem, given by Lind and Ward [17], plays a key role in our results.

Proposition 9 (Lind and Ward [17]). Let $A$ be an automorphism of the $d$-dimensional solenoid $\Sigma^d$.
Then,

• (1) the entropy of $A$ is the sum of the entropies of the automorphisms of $\mathbb{Q}_p^d$ induced by $A$: $h(A; \Sigma^d) = \sum_{p \le \infty} h(A; \mathbb{Q}_p^d)$, and

• (2) the $p$-adic entropy is computed from the eigenvalues $\lambda_1, \dots, \lambda_d$ of the induced automorphism in $GL(d, \mathbb{Q}_p)$ as follows: $h(A; \mathbb{Q}_p^d) = \sum_{|\lambda_i|_p > 1} \log|\lambda_i|_p$.

In [17], Lind and Ward computed the entropy of solenoidal automorphisms by intrinsic arguments. Their proof is very helpful for our applications later. Although we review an outline of the proof, referring to the original paper is strongly recommended.

• Outline of the proof.

• Step 1: Because the dual group of $\Sigma^d$ can be embedded into $\mathbb{Q}^d$, $\varinjlim \Gamma_n \cong \mathbb{Q}^d$, where $\Gamma_n = \frac{1}{n!} \hat{\Sigma}^d$. Hence $\hat{\mathbb{Q}}^d \cong \varprojlim \hat{\Gamma}_n$ and $\hat{\Gamma}_n \cong \hat{\mathbb{Q}}^d / K_n$. By the addition formula [10], $h(A; \hat{\mathbb{Q}}^d) = h(A; \hat{\Gamma}_n) + h(A; K_n)$.
Because $h(A; \hat{\Gamma}_n) = h(A; \Sigma^d)$ for any $n$ and $h(A; K_n) \to 0$ as $n \to \infty$, $h(A; \Sigma^d) = h(A; \hat{\mathbb{Q}}^d)$.

• Step 2: By Propositions 3 and 8, the entropy on the full solenoid $\hat{\mathbb{Q}}^d$ can be lifted to the entropy on the adele ring: $h(A; \hat{\mathbb{Q}}^d) = h(A; \mathbb{A}_{\mathbb{Q}}^d)$.

• Step 3: Since the adele ring is a restricted direct product, the entropy on it can be decomposed into the entropies of the individual directions: $h(A; \mathbb{A}_{\mathbb{Q}}^d) = \sum_{p \le \infty} h(A; \mathbb{Q}_p^d)$. This additivity follows from the fact that $\mathbb{A}_{\mathbb{Q}}^d$ is almost a direct product, but it needs delicate arguments.

• Step 4: Finally, using the formula in Proposition 7, we have $h(A; \mathbb{Q}_p^d) = \sum_{|\lambda_i|_p > 1} \log|\lambda_i|_p$. The computation is essentially similar to Example 2.

To connect the Alexander polynomial with a solenoidal automorphism, we need the following lemma.

Lemma 1. For any knot, the dual group of the first homology group of the infinite cyclic cover, $H_1(X_\infty; \mathbb{Z})$, is an $n$-dimensional solenoid. Here, $n$ is the degree of the Alexander polynomial of the knot.

• Proof. From Propositions 1 and 2, it is sufficient to prove that $H_1(X_\infty; \mathbb{Z})$ is a discrete torsion-free abelian group of finite rank $n$.
Rapaport [24] and Crowell [5] proved that $H_1(X_\infty;\mathbb{Z})$ is torsion-free and has finite rank $n$. (Here, the rank of $A$ means the cardinality of any maximal set of $\mathbb{Z}$-linearly independent elements of $A$.)
Because the Alexander polynomial equals the characteristic polynomial of the meridian action on $H_1(X_\infty;\mathbb{Q})$, up to multiplication by a unit, the following theorem follows from Proposition 9.
Theorem 1. Let $\alpha_i$ be the zeros (counted with multiplicity) of the Alexander polynomial of a knot. Then,
• (1) the entropy of the meridian action $t_p : H_1(X_\infty;\mathbb{Q}_p) \to H_1(X_\infty;\mathbb{Q}_p)$ on the $p$-adic Alexander module is
$$h(t_p)=\sum_{|\alpha_i|_p>1}\log|\alpha_i|_p,$$
where $|\cdot|_p$ is the $p$-adic norm, and
• (2) the entropy of the dual action $\hat{t} : \widehat{H_1(X_\infty;\mathbb{Z})} \to \widehat{H_1(X_\infty;\mathbb{Z})}$ of the meridian is $h(\hat{t})=\sum_{p\le\infty}h(t_p)$, that is,
$$h(\hat{t})=\sum_{p\le\infty}\sum_{|\alpha_i|_p>1}\log|\alpha_i|_p.$$
Here $\mathbb{Q}_\infty=\mathbb{R}$ by convention.
4 Applications
4.1 Growth of order of homology of branched cyclic covering space
In this section, we study the relation between the $p$-adic zeros of $\Delta_K$ and the growth of the orders of the first homology groups of the $r$-fold cyclic coverings of $S^3$ branched over $K$.
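For concreteness, here is a worked illustration of Theorem 1 (not part of the original text; it uses the standard Alexander polynomials of the trefoil and the figure-eight knot):

```latex
% Trefoil: \Delta(t) = t^2 - t + 1. Its zeros are primitive sixth roots of unity,
% so |\alpha_i|_p = 1 for every place p \le \infty, and the entropy vanishes:
%   h(\hat{t}) = 0.
%
% Figure-eight knot: \Delta(t) = t^2 - 3t + 1, with zeros
% \alpha_{\pm} = (3 \pm \sqrt{5})/2. Since the leading and constant coefficients
% are \pm 1, both zeros are p-adic units for every finite p, so only p = \infty
% contributes:
\begin{align*}
  h(\hat{t})
    &= \sum_{p \le \infty}\ \sum_{|\alpha_i|_p > 1} \log |\alpha_i|_p
     = \log \frac{3+\sqrt{5}}{2} \approx 0.9624.
\end{align*}
```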
Roughly speaking, the $r$-fold cyclic covering branched over $K$ is a compact space associated with a homomorphism
$$G=\pi_1(S^3\setminus K) \longrightarrow \mathbb{Z}/r\mathbb{Z}$$
(for the precise definition, see [4, 15, 26]). The order of the first homology group of this space can be computed by the following formula.
Proposition 10 (Fox [7]). Let $X_r$ be the $r$-fold cyclic covering of $S^3$ branched over $K$. Then, the order of the first homology group of $X_r$ is given by
$$|H_1(X_r;\mathbb{Z})|=\Bigl|\prod_{d=1}^{r-1}\Delta_K\bigl(\exp(2d\pi\sqrt{-1}/r)\bigr)\Bigr|.$$
By convention, $|H_1(X_r;\mathbb{Z})|=0$ means that $H_1(X_r;\mathbb{Z})$ is an infinite group.
Definition 3 (logarithmic Mahler measure [18]). For a non-zero Laurent polynomial $f(x)$ with integer coefficients, the logarithmic Mahler measure of $f$ is defined by
$$m(f)=\int_0^1 \log\bigl|f\bigl(\exp(2\pi t\sqrt{-1})\bigr)\bigr|\,dt.$$
The growth of the orders $|H_1(X_r;\mathbb{Z})|$ is expressed by the logarithmic Mahler measure of the Alexander polynomial. This was already proved by González-Acuña and Short [8], but we give a proof because we start from a definition different from that of [8].
$$\lim_{\substack{r\to\infty \\ |H_1(\cdot)|\ne 0}} \frac{\log|H_1(X_r;\mathbb{Z})|}{r} = \lim_{\substack{r\to\infty \\ \Delta_K(\cdot)\ne 0}} \frac{1}{r}\log\Bigl|\prod_{d=0}^{r-1}\Delta_K\bigl(\exp(2d\pi\sqrt{-1}/r)\bigr)\Bigr| = \lim_{\substack{r\to\infty \\ \Delta_K(\cdot)\ne 0}} \sum_{d=0}^{r-1}\frac{1}{r}\log\Bigl|\Delta_K\Bigl(\exp\Bigl(2\pi\sqrt{-1}\,\frac{d}{r}\Bigr)\Bigr)\Bigr| = m(\Delta_K).$$
The Mahler measure is deeply related to the entropy of algebraic dynamical systems; see [16] for example. In this paper, we use a more suitable result proved by Einsiedler and Ward [6].
Proposition 11 (Einsiedler and Ward [6]). Let $0\to F_n\to\dots\to F_1 \xrightarrow{\phi_1} F_0\to M\to 0$ be a finite free resolution of the $\mathbb{Z}[t^{\pm}]$-module $M$ and $J(\phi_1)$ the initial Fitting ideal. Let $\alpha_t$ be the natural automorphism induced by the shift of the indeterminate $t$. Then the entropy of $\alpha_t$ is
$$h(\widehat{\alpha_t})=m\bigl(\gcd(J(\phi_1))\bigr).$$
By combining Propositions 10 and 11, we see that the growth of the orders gives another method to compute the entropy in Theorem 1. Therefore, we obtain the following corollary.
Corollary 1.
$$\lim_{\substack{r\to\infty \\ |H_1(\cdot)|\ne 0}} \frac{\log|H_1(X_r;\mathbb{Z})|}{r}=\sum_{p\le\infty}\sum_{|\alpha_i|_p>1}\log|\alpha_i|_p,$$
where $\alpha_i$ are the zeros of the Alexander polynomial $\Delta_K(t)$.
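These formulas can be checked numerically. The sketch below (a toy illustration, not part of the paper) approximates $m(\Delta_K)$ both by the integral of Definition 3 and by Jensen's formula over the zeros, and compares them with the growth rate of the orders $|H_1(X_r;\mathbb{Z})|$ from Proposition 10, for the figure-eight knot with $\Delta_K(t)=t^2-3t+1$:

```python
import numpy as np

def mahler_integral(coeffs, n=100000):
    # m(f) = integral_0^1 log|f(exp(2*pi*i*t))| dt, midpoint Riemann sum
    t = (np.arange(n) + 0.5) / n
    return np.mean(np.log(np.abs(np.polyval(coeffs, np.exp(2j * np.pi * t)))))

def mahler_roots(coeffs):
    # Jensen's formula: m(f) = log|a_n| + sum_{|alpha|>1} log|alpha|
    return np.log(abs(coeffs[0])) + sum(
        np.log(abs(r)) for r in np.roots(coeffs) if abs(r) > 1)

def homology_order(coeffs, r):
    # Fox's formula: |H_1(X_r;Z)| = |prod_{d=1}^{r-1} Delta(exp(2*pi*i*d/r))|
    z = np.exp(2j * np.pi * np.arange(1, r) / r)
    return abs(np.prod(np.polyval(coeffs, z)))

delta = [1, -3, 1]   # figure-eight knot: Delta(t) = t^2 - 3t + 1
growth = np.log(homology_order(delta, 300)) / 300
print(growth)        # close to m(Delta) = log((3 + sqrt(5))/2) ~ 0.9624
```

The trefoil ($\Delta(t)=t^2-t+1$) provides a sanity check of Fox's formula as well: `homology_order([1, -1, 1], 2)` gives $|\Delta(-1)| = 3$, the order of $H_1$ of the double branched cover.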
In a special case of Corollary 1, we obtain that the growth is zero if and only if the zeros of $\Delta_K(t)$ are roots of unity. That is, we have the following.
Corollary 2 (Riley [25], González-Acuña and Short [8]). Let $X_r$ be the $r$-fold cyclic covering branched over $K$. If the Alexander polynomial $\Delta_K(t)$ has zeros which are not roots of unity, then the finite values of the order of the first homology group $|H_1(X_r;\mathbb{Z})|$ grow exponentially with respect to $r$.
• Proof. (Indirect proof) Because all $\alpha_i$ belong to the valuation ring $\mathcal{O}_p=\{x\in\overline{\mathbb{Q}_p} \mid |x|_p\le 1\}$, the polynomial $f(t)=\prod_i(t-\alpha_i)$, which is $\Delta(t)$ up to constant multiples, belongs to $\mathbb{Z}_p[t]\cap\mathbb{Q}[t]$, where $\mathbb{Z}_p=\{x\in\mathbb{Q}_p \mid |x|_p\le 1\}$. This holds for any prime $p$. Hence $f(t)\in\mathbb{Z}[t]$. (Another way to see this is to prove Corollary 4 first; but in this proof the condition $\Delta(1)=\pm 1$ is not necessary.) Consequently, the zeros of the Alexander polynomial must be roots of unity by $|\alpha_i|\le 1$ and Kronecker's theorem [14].
Remark 1. In [28], Silver and Williams generalized the result of González-Acuña and Short [8] to links from a dynamical-systems viewpoint, and our study is motivated by their approach.
4.2 Leading coefficient of Alexander polynomial
In this section, we apply Theorem 1 to a criterion for the Alexander module being finitely generated as a $\mathbb{Z}$-module. Formerly, the leading coefficient of the Alexander polynomial was used for this criterion, but it only determines whether the Alexander module is finitely generated or not.
Now we show that the leading coefficient of the Alexander polynomial is more than just a criterion for being finitely generated as a $\mathbb{Z}$-module. By the following corollary, the entropies $h(t_p)$ for all finite primes $p<\infty$ can be regarded as obstructions to being finitely generated.
Corollary 3. Let $h(t_p)$ be the entropy of the meridian action on the $p$-adic Alexander module $H_1(X_\infty;\mathbb{Q}_p)$. If the Alexander module $H_1(X_\infty;\mathbb{Z})$ is finitely generated as a $\mathbb{Z}$-module, then all the entropies $h(t_p)$ are zero for finite primes $p<\infty$.
• Proof. The Alexander module $H_1(X_\infty;\mathbb{Z})$ is finitely generated if and only if $\widehat{H_1(X_\infty;\mathbb{Z})}$ is isomorphic to the $n$-dimensional torus. Then it is covered by $H_1(X_\infty;\mathbb{R})$. By the well-known result for toral automorphisms, the entropy of the meridian action on $\widehat{H_1(X_\infty;\mathbb{Z})}$ is
$$h(t)=\sum_{|\alpha_i|>1}\log|\alpha_i|,$$
where $\alpha_i$ are the eigenvalues of the meridian action on $H_1(X_\infty;\mathbb{R})$. This entropy must equal the entropy in Theorem 1 (2). Therefore, the entropies of the meridian action on $H_1(X_\infty;\mathbb{Q}_p)$ are zero for every $p<\infty$; that is,
$$h(t_p)=\sum_{|\alpha_i|_p>1}\log|\alpha_i|_p=0 \quad \text{for } p<\infty.$$
These obstructions give a new interpretation of the leading coefficient of the Alexander polynomial. In fact, the following corollary means that the entropies $h(t_p)$ are fine factors of the leading coefficient of the Alexander polynomial.
Corollary 4.
Let $\alpha_i$ be the zeros of $\Delta_K(t)=\sum_{i=0}^n a_i t^i$. Then the logarithm of the leading coefficient of $\Delta_K(t)$ is the sum of the entropies of the meridian action on the $p$-adic Alexander modules $H_1(X_\infty;\mathbb{Q}_p)$ over the finite primes $p<\infty$; that is,
$$\log|a_n|=\sum_{p<\infty}\sum_{|\alpha_i|_p>1}\log|\alpha_i|_p.$$
• Proof. Let $f(t)=\Delta_K(t)/a_n=\prod(t-\alpha_i)$ and let $s$ be the least common multiple of the denominators of the coefficients of $f(t)$. Then
$$\sum_{p<\infty}h(t_p)=\sum_{p<\infty}\sum_{|\alpha_i|_p>1}\log|\alpha_i|_p=\log s.$$
Because $\Delta_K(1)=\pm 1$, the coefficients are relatively prime: $(a_n,\dots,a_1)=1$. Hence $s=|a_n|$. (The above argument is essentially found in the proofs of Theorem 3 in [17] and Theorem 2 in [25].)
Since $\Delta_K(1)=\pm 1$ for any knot, the Alexander polynomial is completely determined (up to $\pm 1$) by its zeros; hence the leading coefficient is also determined. On the other hand, Corollary 4 shows moreover that the distribution of the zeros measures how far the Alexander module is from being a finitely generated $\mathbb{Z}$-module.
4.3 Final remarks
4.3.1 Determining knots by cyclic branched covers
In [12], Kojima showed that prime knots are determined by their cyclic branched covers. So there might be a method to determine the Alexander module from the data of cyclic branched covers. The growth of the orders $|H_1(X_r;\mathbb{Z})|$ does not determine the Alexander module $H_1(X_\infty;\mathbb{Z})$ completely, but it carries information about the Alexander module.
In fact, the growth measures the complexity of the Alexander module and is an invariant similar to the leading coefficient of the Alexander polynomial. In addition, it still remains open whether infinitely many branched covers are necessary to determine the knot.
4.3.2 Volume conjecture
The volume (or Kashaev) conjecture [11, 19] expects that the asymptotic behavior of the Kashaev invariant (= a specialization of the colored Jones polynomial) gives the hyperbolic volume of the complement of the knot (in the hyperbolic case). In general, a topological entropy picks out natural measures, called measures of maximal entropy, through the variational principle (see Proposition 5). Then the topological entropy equals the measure-theoretic entropy with respect to a measure of maximal entropy. In our case, the Haar measure on the Alexander module (precisely, the Plancherel measure on its dual group) is a measure of maximal entropy, and this measure lifts to the adele ring while preserving the entropy. Consequently, the asymptotic behavior (the growth of orders) can be translated into expansions of volumes with respect to the Haar measure on the adele ring, because the growth is interpreted as the entropy of the meridian action. Our point is that, when an asymptotic behavior is interpreted as an entropy, it can be related to a natural measure theory. Does this strategy work for the volume conjecture?
References 1. R. L. Adler, A. G. Konheim, and M. H. McAndrew. Topological entropy. Trans. Amer. Math. Soc., 114:309–319, 1965. 2. R. Bowen. Entropy for group endomorphisms and homogeneous spaces. Trans. Amer. Math. Soc., 153:401–414, 1971. 3. G. Burde. Alexanderpolynome Neuwirthscher Knoten. Topology, 5:321–330, 1966. 4. G. Burde and H. Zieschang. Knots, volume 5 of de Gruyter Studies in Mathematics. Walter de Gruyter & Co., Berlin, 1985. 5. R. H. Crowell. The group $G'/G''$ of a knot group $G$. Duke Math. J., 30:349–354, 1963.
6. M. Einsiedler and T. Ward. Fitting ideals for finitely presented algebraic dynamical systems. Aequationes Math., 60(1-2):57–71, 2000. 7. R. H. Fox. Free differential calculus. III. Subgroups. Ann. of Math. (2), 64:407–419, 1956. 8. F. González-Acuña and H. Short. Cyclic branched coverings of knots and homology spheres. Rev. Mat. Univ. Complut. Madrid, 4(1):97–120, 1991. 9. C. M. Gordon. Knots whose branched cyclic coverings have periodic homology. Trans. Amer. Math. Soc., 168:357–370, 1972. 10. S. A. Juzvinskiĭ. Metric properties of the endomorphisms of compact groups. Izv. Akad. Nauk SSSR Ser. Mat., 29:1295–1328, 1965. English transl. in Amer. Math. Soc. Transl. 66 (1968), 63–98. 11. R. M. Kashaev. The hyperbolic volume of knots from the quantum dilogarithm. Lett. Math. Phys., 39(3):269–275, 1997. 12. S. Kojima. Determining knots by branched covers. In Low-dimensional topology and Kleinian groups (Coventry/Durham, 1984), volume 112 of London Math. Soc. Lecture Note Ser., pages 193–207. Cambridge Univ. Press, Cambridge, 1986. 13. A. N. Kolmogorov. A new metric invariant of transient dynamical systems and automorphisms in Lebesgue spaces. Dokl. Akad. Nauk SSSR (N.S.), 119:861–864, 1958. 14. L. Kronecker. Zwei Sätze über Gleichungen mit ganzzahligen Coefficienten. J. Reine Angew. Math., 53:173–175, 1857. 15. W. B. R. Lickorish. An introduction to knot theory, volume 175 of Graduate Texts in Mathematics. Springer-Verlag, New York, 1997. 16. D. Lind, K. Schmidt, and T. Ward. Mahler measure and entropy for commuting automorphisms of compact groups. Invent. Math., 101(3):593–629, 1990. 17. D. A. Lind and T. Ward. Automorphisms of solenoids and $p$-adic entropy. Ergodic Theory Dynam. Systems, 8(3):411–419, 1988. 18. K. Mahler. An application of Jensen's formula to polynomials. Mathematika, 7:98–100, 1960. 19. H. Murakami and J. Murakami. The colored Jones polynomials and the simplicial volume of a knot. Acta Math., 186(1):85–104, 2001. 20. K. Murasugi.
The commutator subgroups of the alternating knot groups. Proc. Amer. Math. Soc., 28:237–241, 1971. 21. L. P. Neuwirth. Knot groups. Annals of Mathematics Studies, No. 56. Princeton University Press, Princeton, N.J., 1965. 22. C. V. Quach. Polynôme d'Alexander des noeuds fibrés. C. R. Acad. Sci. Paris Sér. A-B, 289(6):A375–A377, 1979. 23. D. Ramakrishnan and R. J. Valenza. Fourier analysis on number fields, volume 186 of Graduate Texts in Mathematics. Springer-Verlag, New York, 1999. 24. E. S. Rapaport. On the commutator subgroup of a knot group. Ann. of Math. (2), 71:157–162, 1960. 25. R. Riley. Growth of order of homology of cyclic branched covers of knots. Bull. London Math. Soc., 22(3):287–297, 1990. 26. D. Rolfsen. Knots and links, volume 7 of Mathematics Lecture Series. Publish or Perish Inc., Houston, TX, 1990. Corrected reprint of the 1976 original. 27. W. Rudin. Fourier analysis on groups. Wiley Classics Library. John Wiley & Sons Inc., New York, 1990. Reprint of the 1962 original, A Wiley-Interscience Publication. 28. D. S. Silver and S. G. Williams. Mahler measure, links and homology growth. Topology, 41(5):979–991, 2002. 29. D. S. Silver and S. G. Williams. Torsion numbers of augmented groups with applications to knots and links. Enseign. Math. (2), 48(3-4):317–343, 2002. 30. J. Sinaĭ. On the concept of entropy for a dynamic system. Dokl. Akad. Nauk SSSR, 124:768–771, 1959. 31. P. Walters. An introduction to ergodic theory, volume 79 of Graduate Texts in Mathematics. Springer-Verlag, New York, 1982. 32. A. Weil. Basic number theory. Springer-Verlag, New York, third edition, 1974. Die Grundlehren der Mathematischen Wissenschaften, Band 144. Department of Mathematics, Tokyo Institute of Technology, Oh-okayama, Meguro-ku, Tokyo 152-8551, Japan E-mail address: akio@math.titech.ac.jp
https://scikit-rf.readthedocs.io/en/latest/examples/networktheory/Correlating%20microstripline%20model%20to%20measurement.html
# Correlating microstripline model to measurement

## Target

The aim of this example is to correlate a microstripline model to measurements over four frequency decades, from 1 MHz to 5 GHz.

## Plan

1. Two different lengths of microstripline are measured;
2. The multiline method is used to compute the frequency-dependent relative permittivity and loss angle of the dielectric;
3. The microstripline model is fitted to the computed parameters by optimization;
4. The results are checked by embedding the connectors and comparing against measurement.

[1]:
```python
%load_ext autoreload
import skrf as rf
import numpy as np
from numpy import real, log10, sum, absolute, pi, sqrt
import matplotlib.pyplot as plt
from scipy.optimize import minimize, differential_evolution
rf.stylely()
```

## Measurement of two microstriplines with different lengths

The measurements were performed on 21 March 2017 on an Anritsu MS46524B 20 GHz vector network analyser. The setup is a linear frequency sweep from 1 MHz to 10 GHz with 10'000 points. Output power is 0 dBm, IF bandwidth is 1 kHz, and neither averaging nor smoothing is used. The frequency range of interest is limited to 1 MHz-5 GHz, but the measurements extend up to 10 GHz.

MSLxxx is an L-long, W-wide, T-thick copper microstripline on an H-high substrate with a bottom ground plane.

| Name   | L (mm) | W (mm) | H (mm) | T (µm) | Substrate |
|--------|--------|--------|--------|--------|-----------|
| MSL100 | 100    | 3.00   | 1.55   | 50     | FR-4      |
| MSL200 | 200    | 3.00   | 1.55   | 50     | FR-4      |

The milling of the artwork is performed mechanically with a lateral wall of 45°. A small top ground plane chunk, connected by a via array to the bottom ground, is provided to solder the connector top ground legs and to provide a coplanar-like transition from coax to microstrip.
The relative permittivity of the dielectric was assumed to be approximately 4.5 for design purposes.

[2]:
```python
# Load raw measurements
MSL100_raw = rf.Network('MSL100.s2p')
MSL200_raw = rf.Network('MSL200.s2p')

# Keep only the data from 1MHz to 5GHz
MSL100 = MSL100_raw['1-5000mhz']
MSL200 = MSL200_raw['1-5000mhz']

plt.figure()
plt.title('Measured data')
MSL100.plot_s_db()
MSL200.plot_s_db()
plt.show()
```

The measured data show that the electrical length of MSL200 is approximately twice that of MSL100: the frequency spacing between return-loss dips for MSL200 is roughly half that of MSL100. This is consistent with the physical dimensions if the small connector length is neglected.

The MSL200 insertion loss is also about twice that of MSL100, which is consistent since a longer path brings more attenuation. A return loss below -20 dB is usually considered fair for a microstripline; it corresponds to 1% of the power being reflected.

## Dielectric effective relative permittivity extraction by the multiline method

The phases of the measured transmission parameters are subtracted. Because connectors are present on both DUTs, their length effect cancels, and the remaining phase difference is related to the difference of the DUT lengths.

Knowing the physical length $\Delta L$ and the phase $\Delta \phi$, the effective relative permittivity $\epsilon_{r,eff}$ can be computed from the relation

$$\left\{ \begin{array}{ll} \lambda = \frac{c_0}{f \cdot \sqrt{\epsilon_{r,eff}}} \\ \phi = \frac{2\pi L}{\lambda} \end{array} \right. \implies \epsilon_{r,eff} = \left( \frac{\Delta \phi \cdot c_0}{2 \pi f \cdot \Delta L} \right)^2$$

In the same way, the difference of the insertion losses of the two DUTs gives the insertion loss of the length difference and cancels the connector effects.
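As a quick self-contained sanity check of this inversion (independent of the measured data; the permittivity and lengths below are made-up illustration values), one can synthesize the transmission phases of two ideal lines and recover $\epsilon_{r,eff}$ from their difference:

```python
import numpy as np

c0 = 3e8
f = np.linspace(1e6, 5e9, 101)    # frequency sweep (Hz)
er_eff_true = 3.4                  # assumed effective permittivity
L1, L2 = 0.1, 0.2                  # line lengths (m), as for MSL100/MSL200

# phase of each ideal line: phi = 2*pi*L/lambda = 2*pi*f*sqrt(er_eff)*L/c0
phi1 = 2 * np.pi * f * np.sqrt(er_eff_true) * L1 / c0
phi2 = 2 * np.pi * f * np.sqrt(er_eff_true) * L2 / c0

# invert the relation: er_eff = (dphi * c0 / (2*pi*f*dL))**2
deltaL = L2 - L1
er_eff = ((phi2 - phi1) * c0 / (2 * np.pi * f * deltaL)) ** 2
print(er_eff[0])   # recovers 3.4 at every frequency
```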
[3]:
```python
c0 = 3e8
f = MSL100.f
deltaL = 0.1
deltaPhi = np.unwrap(np.angle(MSL100.s[:,1,0])) - np.unwrap(np.angle(MSL200.s[:,1,0]))
Er_eff = np.power(deltaPhi * c0 / (2 * np.pi * f * deltaL), 2)
Loss_mea = 20 * log10(absolute(MSL200.s[:,1,0] / MSL100.s[:,1,0]))

plt.figure()
plt.suptitle('Effective relative permittivity and loss')
plt.subplot(2,1,1)
plt.plot(f * 1e-9, Er_eff)
plt.ylabel('$\epsilon_{r,eff}$')
plt.subplot(2,1,2)
plt.plot(f * 1e-9, Loss_mea)
plt.xlabel('Frequency (GHz)')
plt.ylabel('Insertion Loss (dB)')
plt.show()
```

The effective relative permittivity of the geometry shows a dispersion effect at low frequency, which can be modelled by a wideband Debye model such as the Djordjevic/Svensson implementation of the skrf microstripline media. The value then increases slowly with frequency, which corresponds roughly to the Kirschning and Jansen dispersion model.

The insertion loss appears proportional to frequency, which indicates a predominance of dielectric losses; conductor losses are related to the square root of frequency. Radiation losses are neglected.

## Fit microstripline model to the computed parameters by optimization

### Effective relative permittivity

A microstrip media model with the physical dimensions of the measured microstriplines is fitted to the computed $\epsilon_{r,eff}$ by optimizing $\epsilon_r$ and tand of the substrate at 1 GHz. The dispersion models used to account for the frequency variation of the parameters are Djordjevic/Svensson and Kirschning and Jansen.
[4]:
```python
W = 3.00e-3
H = 1.51e-3
T = 50e-6
L = 0.1
Er0 = 4.5
tand0 = 0.02
f_epr_tand = 1e9
x0 = [Er0, tand0]

def model(x, freq, Er_eff, L, W, H, T, f_epr_tand, Loss_mea):
    ep_r = x[0]
    tand = x[1]
    m = rf.media.MLine(frequency=freq, z0=50, w=W, h=H, t=T,
                       ep_r=ep_r, mu_r=1, rho=1.712e-8, tand=tand, rough=0.15e-6,
                       f_low=1e3, f_high=1e12, f_epr_tand=f_epr_tand,
                       diel='djordjevicsvensson', disp='kirschningjansen')
    DUT = m.line(L, 'm', embed=True, z0=m.Z0_f)
    Loss_mod = 20 * log10(absolute(DUT.s[:,1,0]))
    return sum((real(m.ep_reff_f) - Er_eff)**2) + 0.01*sum((Loss_mod - Loss_mea)**2)

res = minimize(model, x0,
               args=(MSL100.frequency, Er_eff, L, W, H, T, f_epr_tand, Loss_mea),
               bounds=[(4.2, 4.7), (0.001, 0.1)])
Er = res.x[0]
tand = res.x[1]
print('Er={:.3f}, tand={:.4f} at {:.1f} GHz.'.format(Er, tand, f_epr_tand * 1e-9))
```

Er=4.371, tand=0.0166 at 1.0 GHz.

As a sanity check, the model data are compared with the computed parameters.

[5]:
```python
m = rf.media.MLine(frequency=MSL100.frequency, z0=50, w=W, h=H, t=T,
                   ep_r=Er, mu_r=1, rho=1.712e-8, tand=tand, rough=0.15e-6,
                   f_low=1e3, f_high=1e12, f_epr_tand=f_epr_tand,
                   diel='djordjevicsvensson', disp='kirschningjansen')
DUT = m.line(L, 'm', embed=True, z0=m.Z0_f)
DUT.name = 'DUT'
Loss_mod = 20 * log10(absolute(DUT.s[:,1,0]))

plt.figure()
plt.suptitle('Measurement vs Model')
plt.subplot(2,1,1)
plt.plot(f * 1e-9, Er_eff, label='Measured')
plt.plot(f * 1e-9, real(m.ep_reff_f), label='Model')
plt.ylabel('$\epsilon_{r,eff}$')
plt.legend()
plt.subplot(2,1,2)
plt.plot(f * 1e-9, Loss_mea, label='Measured')
plt.plot(f * 1e-9, Loss_mod, label='Model')
plt.xlabel('Frequency (GHz)')
plt.ylabel('Insertion Loss (dB)')
plt.legend()
plt.show()
```

The model results show reasonable agreement with the measured $\epsilon_{r,eff}$ and insertion-loss values.

## Checking the results

If the model is now plotted against the measurement of the same length, the plot shows no agreement. This is because the connector effects are not captured by the model.
[6]:
```python
plt.figure()
plt.title('Measured vs modelled data')
MSL100.plot_s_db()
DUT.plot_s_db(0, 0, color='k')
DUT.plot_s_db(1, 0, color='k')
plt.show()
```

### Connector delay and loss estimation

The delay of the connector is estimated by fitting a line to its phase contribution vs frequency. The phase and loss of the two connectors are computed by subtracting the phase and loss computed without the connectors from the measurement of the same length.

[7]:
```python
phi_conn = np.unwrap(np.angle(MSL100.s[:,1,0])) + deltaPhi
z = np.polyfit(f, phi_conn, 1)
p = np.poly1d(z)
delay = -z[0]/(2*np.pi)/2
print('Connector delay: {:.0f} ps'.format(delay * 1e12))

loss_conn_db = 20 * log10(absolute(MSL100.s[:,1,0])) - Loss_mea
alpha = 1.6*np.log(10)/20 * np.sqrt(f/1e9)
beta = 2*np.pi*f/c0
gamma = alpha + 1j*beta
mf = rf.media.DefinedGammaZ0(m.frequency, z0=50, gamma=gamma)
left = mf.line(delay*1e9, 'ns', embed=True, z0=53.2)
right = left.flipped()
check = left ** right

plt.figure()
plt.suptitle('Connector effects')
plt.subplot(2,1,1)
plt.plot(f * 1e-9, phi_conn, label='measured')
plt.plot(f * 1e-9, np.unwrap(np.angle(check.s[:,1,0])), label='model')
plt.legend()
plt.subplot(2,1,2)
plt.plot(f * 1e-9, loss_conn_db, label='Measured')
plt.plot(f * 1e-9, 20*np.log10(np.absolute(check.s[:,1,0])), label='model')
plt.xlabel('Frequency (GHz)')
plt.ylabel('Insertion Loss (dB)')
plt.legend()
plt.show()
```

Connector delay: 39 ps

The phase of the model shows good agreement, while the insertion loss shows reasonable agreement and is small in any case.

### Connector impedance adjustment by time-domain reflectometry

Time-domain step responses of measurement and model are used to adjust the characteristic impedance of the connector model. The plots show the connector having an inductive behaviour (positive peak) and the microstripline being a bit too capacitive (negative plateau).

The characteristic impedance of the connector is tuned by trial and error until reasonable agreement is achieved.
Optimization could have been used instead.

[8]:
```python
mod = left ** DUT ** right
MSL100_dc = MSL100.extrapolate_to_dc(kind='linear')
DUT_dc = mod.extrapolate_to_dc(kind='linear')

plt.figure()
plt.suptitle('Left-right and right-left TDR')
plt.subplot(2,1,1)
plt.xlim(-2, 4)
plt.subplot(2,1,2)
plt.xlim(-2, 4)
plt.tight_layout()
plt.show()
```

### Final comparison

[9]:
```python
plt.figure()
plt.title('Measured vs modelled data')
MSL100.plot_s_db()
mod.name = 'Model'
mod.plot_s_db(0, 0, color='k')
mod.plot_s_db(1, 0, color='k')
plt.show()
```

The plot shows a decent agreement between the model and the measured data: the model is a good representation of the DUT between 1 MHz and 5 GHz. At higher frequencies, the model begins to deviate from the measurement. The model does not capture effects such as radiation loss or complex copper roughness. Smaller geometries, such as the top ground plane chunk, may also begin to contribute as they become electrically long with increasing frequency. As a comparison, the 5 GHz wavelength is 60 mm in air and the MSL100 line is 100 mm long; the DUT itself is electrically long above a few GHz.
https://googology.fandom.com/wiki/Truprimibolplex
The truprimibolplex is equal to s(3,3,3,4) in strong array notation.[1] It can be represented in chained arrow notation as $$3\rightarrow3\rightarrow3\rightarrow3\rightarrow3\rightarrow3$$. This number is also equal to s(3,3,1,5), s(3,1,1,6), or s(3,4,1,1,2).

Etymology

The name of this number is based on the suffix "-plex" and the number "truprimibol".

Approximations

| Notation | Approximation |
| --- | --- |
| BEAF | $$\{3,4,2,4\}$$ |
| Hyper-E notation | $$E2\#\#2\#\#2\#\#(E2\#\#27\#26\#2)\#(E2\#\#27\#26\#2)\#2$$ |
| Chained arrow notation | $$3\rightarrow3\rightarrow3\rightarrow3\rightarrow3\rightarrow3$$ (exact) |
| Hyperfactorial array notation | $$(((26![3])![3])![2,3])![2,3]$$ |
| Fast-growing hierarchy | $$f_{\omega3+1}(f_{\omega3+1}(f_{\omega+2}(26)))$$ |
| Hardy hierarchy | $$H_{\omega^{\omega3+1}2+\omega^{\omega+2}}(26)$$ |
| Slow-growing hierarchy | $$g_{\varphi(3,0,\varphi(3,0,\varphi(1,2,0)))}(3)$$ |
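For intuition, Conway's chained-arrow rules can be evaluated directly only for very small chains. The sketch below (an illustration, not from the wiki page) implements the standard recursion; the six-term chain $$3\rightarrow3\rightarrow3\rightarrow3\rightarrow3\rightarrow3$$ above is astronomically beyond direct evaluation.

```python
def chain(c):
    """Evaluate a Conway chained-arrow expression given as a list of ints."""
    c = list(c)
    if len(c) == 1:
        return c[0]                      # a single number is itself
    if len(c) == 2:
        return c[0] ** c[1]              # p -> q = p^q
    if c[-1] == 1:
        return chain(c[:-1])             # X -> 1 = X
    if c[-2] == 1:
        return chain(c[:-2])             # X -> 1 -> p = X
    # X -> q -> p = X -> (X -> (q-1) -> p) -> (p-1)
    return chain(c[:-2] + [chain(c[:-2] + [c[-2] - 1, c[-1]]), c[-1] - 1])

print(chain([2, 2, 2]))   # 4
print(chain([3, 3, 2]))   # 7625597484987  (= 3^3^3)
```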
https://mathhelpboards.com/threads/problem-of-the-week-247-apr-11-2017.21176/
Problem of the Week #247 - Apr 11, 2017

Euge (MHB Global Moderator): Here is this week's POTW:

-----
Let $f : \Bbb S^1\subset \Bbb C \to \Bbb C$ be a continuous map. Show that if $f$ is continuously differentiable on $\Bbb S^1$, then its Fourier coefficient sequence $\{\hat{f}_n\}_{n\in \Bbb Z}$ belongs to $\ell^1(\Bbb Z)$.
-----

Euge (MHB Global Moderator): This week's problem was correctly solved by Opalg. You can read his solution below.

For $n \neq 0$, integrate by parts to get $$\widehat{f}_{\!n} = \frac1{2\pi}\int_0^{2\pi}f(x)e^{-inx}dx = \frac1{2\pi}\Bigl[\, f(x)\frac{e^{-inx}}{-in}\Bigr]_0^{2\pi} + \frac1{2\pi in}\int_0^{2\pi}f'(x)e^{-inx}dx = \frac1{in}\widehat{f'}_{\!n},$$ where the boundary term vanishes by periodicity. Since $f'$ is continuous on the compact space $\Bbb{S}^1$ it is square-integrable. So by Parseval's theorem its Fourier coefficient sequence $\{\widehat{f'}_{\!n}\}$ is in $\ell^2(\Bbb {Z})$. It then follows from the Cauchy–Schwarz inequality that $$\sum_{n\neq 0}|\widehat{f}_{\!n}| = \sum_{n\neq 0}\Bigl|\frac1{in}\widehat{f'}_{\!n}\Bigr| \leqslant \Bigl(\sum_{n\neq 0}\frac1{n^2}\Bigr)^{1/2} \Bigl(\sum_{n\neq 0}\bigl|\,\widehat{f'}_{\!n}\bigr|^2\Bigr)^{1/2} < \infty.$$ Adding the single finite term $|\widehat{f}_0|$ shows that $\{\hat{f}_n\}_{n\in\Bbb Z} \in \ell^1(\Bbb Z)$.
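As a purely illustrative numeric companion to the proof (not part of it), the snippet below approximates Fourier coefficients on a uniform grid; the helper name, test function, and grid size are arbitrary choices of mine. For a trigonometric polynomial the grid sums are exact, and the absolute sum of the coefficients is visibly finite:

```python
import cmath
import math

def fourier_coeff(f, n, K=256):
    # Uniform-grid approximation of (1/2*pi) * integral of f(x) e^{-inx} dx;
    # exact for trigonometric polynomials of degree < K/2.
    return sum(f(2 * math.pi * k / K) * cmath.exp(-1j * n * 2 * math.pi * k / K)
               for k in range(K)) / K

f = lambda x: 3 + math.cos(2 * x)  # a smooth (hence C^1) function on the circle
coeffs = {n: fourier_coeff(f, n) for n in range(-8, 9)}

# Only f-hat_0 = 3 and f-hat_{+-2} = 1/2 are nonzero, so the l^1 partial sum is 4.
ell1_partial = sum(abs(c) for c in coeffs.values())
```

Running this for wider ranges of n only adds coefficients that vanish to machine precision, consistent with the absolute summability proved above.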
http://mathsci.kaist.ac.kr/home/schedul/seminar/?idx=-1844
# Seminars and Colloquia

Subscribe in Google Calendar or on an iPhone to receive a notification before each seminar begins.

Can abstract mathematical concepts, of the kind that seem fit only for recreational puzzles, aid research in mechanical engineering (e.g., applied mechanics)? When was the gap between mathematics and mechanics at its narrowest, and who were the mathematical scientists who commanded both disciplines at their meeting point? As a partial answer to these questions, the first half of this talk surveys, through its central figures, the era in which the histories of mathematics and mechanics (fluid mechanics, solid mechanics, thermodynamics, and wave theory) were intertwined. The second half briefly introduces the speaker's research topics on mechanical waves and metamaterials (acoustic cloaking, acoustic black holes, bioacoustics, and more). Host: 임미경 (talk in Korean)

2017 Math Walk at Noon (정오의 수학산책), Lecture 1. Speaker: 한종규 (Seoul National University). Date: Friday, March 31, 2017, 12:00-13:15. Venue: KAIST Natural Sciences Building E6-1, Room 3435. Title: Symmetry, invariants and conservation laws. Abstract: The notion of symmetry plays a central role in understanding natural laws and in solving equations. To be symmetric means to be invariant under a group action. In this lecture we are mainly concerned with continuous groups of the symmetries of differential equations. I will explain Sophus Lie's ideas on solvability of an ordinary differential equation in terms of its symmetry group and Emmy Noether's theorem on conservation laws for variational problems. As time permits I will present other viewpoints on the conservation laws. Registration: by 3 p.m. on Wednesday, March 29. Contact: hskim@kias.re.kr / ext. 8545. Host: 이지운

Among the most well-known examples of L-functions are the Riemann zeta function and the L-functions associated to classical modular forms. Less well known, but equally important, are the L-functions associated to Maass forms, which are eigenfunctions of the Laplace-Beltrami operator on a hyperbolic surface. Named after H. Maass, who discovered some examples in the 1940s, Maass forms remain largely mysterious.
Fortunately, there are concrete tools to study Maass forms: trace formulas, which relate the spectrum of the Laplace operator on a hyperbolic surface to its geometry. After Selberg introduced his famous trace formula in 1956, his ideas were generalised, and various trace formulas have been constructed and studied. However, there are few numerical results from trace formulas, the main obstacle being their complexity. Various types of trace formulas are investigated, constructed and used to understand automorphic representations and their L-functions from a theoretical point of view, but most are not explicit enough to implement in computer code. Having explicit computations of trace formulas makes many potential applications accessible. In this talk, I will explain the computational aspects of the Selberg trace formula for GL(2) for general levels and applications towards the Selberg eigenvalue conjecture and classification of 2-dimensional Artin representations of small conductor. This is a joint work with Andrew Booker and Andreas Strömbergsson. Host: 임보해 (talk in English)

In this talk, we summarize results concerning anomalous behaviour of random walks and diffusions in disordered media. Examples of disordered media include fractals and various models of random graphs, such as percolation clusters, random conductance models, Erdős-Rényi random graphs and uniform spanning trees. Geometric properties of such disordered media have been studied extensively and their scaling limits have been obtained. Our focus here is to analyze properties of dynamics in such media. Due to the inhomogeneity of the underlying spaces, we observe anomalous behaviour of the heat kernels and obtain anomalous diffusions as scaling limits of the random walks.
We will give a chronological overview of the related research, and describe how the techniques have developed from those introduced for exactly self-similar fractals to the more robust arguments required for random graphs. Host: 폴정 (talk in English)

2017 Math Walk at Noon (정오의 수학산책), Lecture 2. Speaker: 이윤원 (Inha University). Date: Friday, April 28, 2017, 12:00-13:15. Venue: KAIST Natural Sciences Building E6-1, Room 3435. Title: Atiyah-Singer index theorem. Abstract: TBA. Registration: by 3 p.m. on Wednesday, April 26. Contact: hskim@kias.re.kr / ext. 8545. Host: 이지운

This talk will review previous work on quadrupedal gaits and recent work on a generalized model for binocular rivalry proposed by Hugh Wilson. Both applications show how rigid phase-shift synchrony in periodic solutions of coupled systems of differential equations can help understand high-level collective behavior in the nervous system. Host: 김재경 (talk in English)

2017 Math Walk at Noon (정오의 수학산책), Lecture 3. Speaker: 이수준 (Kyung Hee University). Date: Friday, May 12, 2017, 12:00-13:15. Venue: KAIST Natural Sciences Building E6-1, Room 3435. Title: An introduction to quantum information theory: digital information vs. quantum information. Abstract: TBA. Registration: by 3 p.m. on Wednesday, May 10. Contact: hskim@kias.re.kr / ext. 8545. Host: 이지운

In the talk, I discuss previous works on the arithmetic of various twisted special $L$-values and dynamical phenomena behind them. Main emphasis will be put on the problem of estimating several exponential sums such as Kloosterman sums and its relation to the problem of non-vanishing of special $L$-values with cyclotomic twists. A distribution of homological cycles on the modular curves will also be discussed and, as a consequence, some results on a conjecture of Mazur-Rubin-Stein about the distribution of period integrals of elliptic modular forms will be presented. Host: 임보해

The Siegel series is the local factor of the Fourier coefficient of the Siegel-Eisenstein series. It is also a crucial ingredient in Kudla's program to compare it with intersection numbers.
In this talk, I will explain a conceptual reformulation of the Siegel series. As the first application, I will explain a conceptual (and simple) proof of the equality between the intersection number and the (derivative of the) Siegel series. As the second application, I will explain a newly discovered identity between them. This is a joint work with T. Yamauchi. Host: 임보해

There is a classical result, first due to Keen, known as the collar lemma for hyperbolic surfaces. A consequence of the collar lemma is that if two closed curves A and B on a closed orientable hyperbolizable surface have non-zero geometric intersection number, then there is an explicit lower bound for the length of A in terms of the length of B, which holds for any hyperbolic structure on the surface. By slightly weakening this lower bound, we generalize this statement to hold for all Hitchin representations. This is a joint work with Tengren Zhang. Host: 백형렬 (talk in English)

Many problems in control and optimization require the treatment of systems in which continuous dynamics and discrete events coexist. This talk presents a survey of some of our recent work on such systems. In the setup, the discrete event is given by a random process with a finite state space, and the continuous component is the solution of a stochastic differential equation. Seemingly similar to diffusions, the processes have a number of salient features distinctly different from diffusion processes. After providing motivational examples arising from wireless communications, identification, finance, singularly perturbed Markovian systems, manufacturing, and consensus controls, we present necessary and sufficient conditions for the existence of a unique invariant measure, stability, stabilization, and numerical solutions of control and game problems.
Host: 김재경 (talk in English)

Originating from applications in signal processing, random evolution, telecommunications, risk management, financial engineering, and manufacturing systems, two-time-scale Markovian systems have drawn much attention. This talk discusses asymptotic expansions of solutions to the forward equations, scaled and unscaled occupation measures, approximation error bounds, and associated switching diffusion processes. Controlled dynamic systems will also be mentioned. Host: paul jung (talk in English)

In 1952, the British mathematician A. Turing used reaction-diffusion (RD) systems to propose a mathematical mechanism, unimaginable even to the biologists of his day, by which cells of the same kind can differentiate into distinct cell types. Since then, RD systems have advanced greatly in mathematical analysis, and through mathematical modelling they continue to develop as a tool for uncovering the mechanisms of life in the biological sciences. In this lecture I will briefly introduce the mathematical models and modelling techniques I have developed to uncover the mechanisms behind various biological phenomena in my recent research. Problems with mathematically interesting structure may well be hidden in them; I encourage you to look for such problems yourselves. Keywords: Mathematical modeling, PDE, Phase-field method. Host: 변재형 (talk in Korean)

Liquid crystal is a state of matter between isotropic fluid and crystalline solid, which has properties of both liquid and solid. In a liquid crystal phase, molecules tend to align in a preferred direction, and a molecule is described by a symmetric traceless 3x3 matrix, often called a second-order tensor. Equilibrium states correspond to minimizers of the governing Landau-de Gennes energy, which plays an important role in the mathematical theory of liquid crystals. In this talk, I will present a brief introduction to Landau-de Gennes theory and recent developments of the mathematical theory, together with interesting mathematical questions. Host: 권순식 (talk in English)

A well-known theorem of Grötzsch states that every triangle-free planar graph is 3-colorable. We will show a simple proof based on a recent result of Kostochka and Yancey on the number of edges in 4-critical graphs.
Then we show a strengthening of Grötzsch's theorem in several different directions. Based on joint works with Ilkyoo Choi, Jan Ekstein, Zdeněk Dvořák, Přemek Holub, Alexandr Kostochka, and Matthew Yancey. Host: 최일규, 엄상일

Consider a simple symmetric random walk $S$ and another random walk $S'$ whose $k$th increment is the $k$-fold product of the first $k$ increments of $S$. The random walks $S$ and $S'$ are strongly dependent. Still, the 2-dimensional walk $(S, S')$, properly rescaled, converges to a two-dimensional Brownian motion. The goal of this talk is to present the proof of this fact, and its generalizations. Based on joint works with K. Hamza and S. Meng. Host: paul jung (talk in English)
https://www.thejournal.club/c/paper/6/
#### Capacity of a Multiple-Antenna Fading Channel with a Quantized Precoding Matrix

##### Wiroonsak Santipach, Michael L. Honig

Given a multiple-input multiple-output (MIMO) channel, feedback from the receiver can be used to specify a transmit precoding matrix, which selectively activates the strongest channel modes. Here we analyze the performance of Random Vector Quantization (RVQ), in which the precoding matrix is selected from a random codebook containing independent, isotropically distributed entries. We assume that channel elements are i.i.d. and known to the receiver, which relays the optimal (rate-maximizing) precoder codebook index to the transmitter using B bits. We first derive the large system capacity of beamforming (rank-one precoding matrix) as a function of B, where "large system" refers to the limit as B and the numbers of transmit and receive antennas all go to infinity with fixed ratios. With beamforming, RVQ is asymptotically optimal, i.e., no other quantization scheme can achieve a larger asymptotic rate. The performance of RVQ is also compared with that of a simpler reduced-rank scalar quantization scheme in which the beamformer is constrained to lie in a random subspace. We subsequently consider a precoding matrix with arbitrary rank, and approximate the asymptotic RVQ performance with optimal and linear receivers (matched filter and Minimum Mean Squared Error (MMSE)). Numerical examples show that these approximations accurately predict the performance of finite-size systems of interest. Given a target spectral efficiency, numerical examples show that the amount of feedback required by the linear MMSE receiver is only slightly more than that required by the optimal receiver, whereas the matched filter can require significantly more feedback.
https://www.ademcetinkaya.com/2023/02/oled-universal-display-corporation.html
Outlook: Universal Display Corporation Common Stock is assigned a short-term Ba1 and long-term Ba1 estimated rating. Time series to forecast n: 12 Feb 2023 for (n+4 weeks). Methodology: Modular Neural Network (Market News Sentiment Analysis)

## Abstract

The Universal Display Corporation Common Stock prediction model is evaluated with a Modular Neural Network (Market News Sentiment Analysis) and the Sign Test,1,2,3,4 and it is concluded that the OLED stock is predictable in the short/long term. According to price forecasts for the (n+4 weeks) period, the dominant strategy among the neural networks is: Buy

## Key Points

1. What are statistical models in machine learning?
2. Short/Long Term Stocks
3. What is a Markov decision process in reinforcement learning?

## OLED Target Price Prediction Modeling Methodology

We consider the Universal Display Corporation Common Stock decision process with a Modular Neural Network (Market News Sentiment Analysis), where A is the set of discrete actions of OLED stock holders, F is the set of discrete states, P : S × F × S → R is the transition probability distribution, R : S × F → R is the reaction function, and γ ∈ [0, 1] is a move factor for expectation.1,2,3,4

F(Sign Test)5,6,7 is applied to R(Modular Neural Network (Market News Sentiment Analysis)) over the horizon S(n) → (n+4 weeks), with the transition probabilities arranged as

$$\begin{pmatrix} p_{a1} & p_{a2} & \dots & p_{an}\\ \vdots \\ p_{j1} & p_{j2} & \dots & p_{jn}\\ \vdots \\ p_{k1} & p_{k2} & \dots & p_{kn}\\ \vdots \\ p_{n1} & p_{n2} & \dots & p_{nn} \end{pmatrix}$$

where n is the time series to forecast, p the price signals of OLED stock, j the Nash equilibria (neural network), k the dominated move, and a the best response for the target price.

For further technical information on how our model works, see the article: How do AC Investment Research machine learning (predictive) algorithms actually work?
## OLED Stock Forecast (Buy or Sell) for (n+4 weeks)

Sample Set: Neural Network
Stock/Index: OLED Universal Display Corporation Common Stock
Time series to forecast n: 12 Feb 2023 for (n+4 weeks)

According to price forecasts for the (n+4 weeks) period, the dominant strategy among the neural networks is: Buy

(Chart axes: X axis: likelihood %, the higher the value, the more likely the event will occur; Y axis: potential impact %, the higher the value, the more likely the price will deviate; Z axis, grey to black: technical analysis %.)

## IFRS Reconciliation Adjustments for Universal Display Corporation Common Stock

1. Adjusting the hedge ratio by increasing the volume of the hedging instrument does not affect how the changes in the value of the hedged item are measured. The measurement of the changes in the fair value of the hedging instrument related to the previously designated volume also remains unaffected. However, from the date of rebalancing, the changes in the fair value of the hedging instrument also include the changes in the value of the additional volume of the hedging instrument. The changes are measured starting from, and by reference to, the date of rebalancing instead of the date on which the hedging relationship was designated. For example, if an entity originally hedged the price risk of a commodity using a derivative volume of 100 tonnes as the hedging instrument and added a volume of 10 tonnes on rebalancing, the hedging instrument after rebalancing would comprise a total derivative volume of 110 tonnes. The change in the fair value of the hedging instrument is the total change in the fair value of the derivatives that make up the total volume of 110 tonnes. These derivatives could (and probably would) have different critical terms, such as their forward rates, because they were entered into at different points in time (including the possibility of designating derivatives into hedging relationships after their initial recognition).

2.
Conversely, if the critical terms of the hedging instrument and the hedged item are not closely aligned, there is an increased level of uncertainty about the extent of offset. Consequently, the hedge effectiveness during the term of the hedging relationship is more difficult to predict. In such a situation it might only be possible for an entity to conclude on the basis of a quantitative assessment that an economic relationship exists between the hedged item and the hedging instrument (see paragraphs B6.4.4–B6.4.6). In some situations a quantitative assessment might also be needed to assess whether the hedge ratio used for designating the hedging relationship meets the hedge effectiveness requirements (see paragraphs B6.4.9–B6.4.11). An entity can use the same or different methods for those two different purposes.

3. When measuring a loss allowance for a lease receivable, the cash flows used for determining the expected credit losses should be consistent with the cash flows used in measuring the lease receivable in accordance with IFRS 16 Leases.

4. The change in the value of the hedged item determined using a hypothetical derivative may also be used for the purpose of assessing whether a hedging relationship meets the hedge effectiveness requirements.

*The International Financial Reporting Standards (IFRS) adjustment process involves reviewing the company's financial statements and identifying any differences between the company's current accounting practices and the requirements of the IFRS. If there are any such differences, the neural network makes adjustments to the financial statements to bring them into compliance with the IFRS.

## Conclusions

Universal Display Corporation Common Stock is assigned a short-term Ba1 and long-term Ba1 estimated rating.
The Universal Display Corporation Common Stock prediction model is evaluated with a Modular Neural Network (Market News Sentiment Analysis) and the Sign Test,1,2,3,4 and it is concluded that the OLED stock is predictable in the short/long term. According to price forecasts for the (n+4 weeks) period, the dominant strategy among the neural networks is: Buy

### OLED Universal Display Corporation Common Stock Financial Analysis*

Rating: Short-Term | Long-Term Senior
Outlook*: Ba1 | Ba1
Income Statement: Baa2 | Baa2
Balance Sheet: C | Baa2
Leverage Ratios: C | C
Cash Flow: B2 | B1
Rates of Return and Profitability: Baa2 | Caa2

*Financial analysis is the process of evaluating a company's financial performance and position by a neural network. It involves reviewing the company's financial statements, including the balance sheet, income statement, and cash flow statement, as well as other financial reports and documents. How does a neural network examine financial reports and understand the financial state of the company?

### Prediction Confidence Score

Trust metric by Neural Network: 86 out of 100 with 457 signals.

## References

1. J. Spall. Multivariate stochastic approximation using a simultaneous perturbation gradient approximation. IEEE Transactions on Automatic Control, 37(3):332–341, 1992.
2. E. J. Candès and B. Recht. Exact matrix completion via convex optimization. Foundations of Computational Mathematics, 9:717, 2009.
3. L. Vilnis and A. McCallum. Word representations via Gaussian embedding. arXiv:1412.6623 [cs.CL], 2015.
4. S. Bhatnagar. An actor-critic algorithm with function approximation for discounted cost constrained Markov decision processes. Systems & Control Letters, 59(12):760–766, 2010.
5. Y. LeCun, Y. Bengio, and G. Hinton. Deep learning. Nature, 521:436–444, 2015.
6. C. Wu and Y. Lin. Minimizing risk models in Markov decision processes with policies depending on target values. Journal of Mathematical Analysis and Applications, 231(1):47–67, 1999.
7. V. Borkar. Q-learning for risk-sensitive control. Mathematics of Operations Research, 27:294–311, 2002.
Frequently Asked Questions

Q: What is the prediction methodology for OLED stock?
A: OLED stock prediction methodology: We evaluate the prediction models Modular Neural Network (Market News Sentiment Analysis) and Sign Test.

Q: Is OLED stock a buy or sell?
A: The dominant strategy among the neural networks is to Buy OLED stock.

Q: Is Universal Display Corporation Common Stock a good investment?
A: The consensus rating for Universal Display Corporation Common Stock is Buy, and it is assigned a short-term Ba1 and long-term Ba1 estimated rating.

Q: What is the consensus rating of OLED stock?
A: The consensus rating for OLED is Buy.

Q: What is the prediction period for OLED stock?
A: The prediction period for OLED is (n+4 weeks).
http://mathonline.wikidot.com/conference-matrices
Conference Matrices # Conference Matrices We will now look at another type of matrix known as a conference matrix. Definition: An $n \times n$ matrix $C$ is a Conference Matrix if every entry $c_{i,j}$ is either $0$, $-1$, or $1$ and $CC^T = (n-1)I_n$. Let $C = \begin{bmatrix} 0 & 1 \\ 1 ^ 0 \end{bmatrix}$. Then $C$ is a conference matrix as every entry of $C$ is either $0$, $-1$, or $1$ and: (1) \begin{align} \quad CC^T = \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix} \begin{bmatrix} 0 & 1\\ 1 & 0 \end{bmatrix} = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} \\ = 1I_2 \end{align} We will now begin to develop of a method for constructing conference matrices. We will first need to define a special function first. Definition: Let $q$ be an odd prime power and let $(\mathbb{Z}_q, +)$ denote the additive group of integers modulo $q$. Let $D_q$ be the set of all nonzero squares modulo $q$. The Quadratic Character Function on $\mathbb{Z}_q$ is $\chi_q : \mathbb{Z}_q \to \{ -1, 0, 1 \}$ defined for all $x \in \mathbb{Z}_q$ by $\chi_q(x) = \left\{\begin{matrix} 0 & \mathrm{if} \: x = 0\\ 1 & \mathrm{if} \: x \in D \\ -1 & \mathrm{if} \: x \not \in D \end{matrix}\right.$. For example, consider the prime $q = 7$. The set of all nonzero squares modulo $q$ is: (2) \begin{align} \quad D_7 = \{ 1, 2, 4 \} \end{align} Therefore the quadratic character function on $\mathbb{Z}_7$ is: (3) \begin{align} \quad \chi_7(0) &= 0 \\ \quad \chi_7(1) &= 1 \\ \quad \chi_(2) &= 1 \\ \quad \chi_7(3) &= -1 \\ \quad \chi_7(4) &= 1 \\ \quad \chi_7(5) &= -1 \\ \quad \chi_7(6) &= -1 \\ \end{align} The following theorem gives us a method for constructing a conference matrix given a prime power $q$ of the form $q = 4n - 3$ Theorem 1: Let $q = 4n -3$ be a prime power and let $(\mathbb{Z}_q, +)$ denote the additive group of integers modulo $q$. Let $\infty$ denote a new point distinct from those in $\mathbb{Z}_q$ and preceding the ordering of $\mathbb{Z}_q$. 
Let $C$ be the $(q + 1) \times (q + 1)$ matrix whose entries are defined by $c_{i,j} = \left\{\begin{matrix} 1 & \mathrm{if} \: i = \infty, j \neq \infty \\ 1 & \mathrm{if} \: i \neq \infty, j= \infty \\ 0 & \mathrm{if} \: i = \infty, j = \infty \\ \chi_q(i-j) & \mathrm{if} \: i, j \in \mathbb{Z}_q \end{matrix}\right.$. Then $C$ is a conference matrix. The condition that $q = 4n - 3$ is a prime power is equivalent to $q$ being a prime power such that $q \equiv 1 \pmod 4$. For example, consider the prime $q = 5$. Clearly $q$ is a prime power and $q = 4(2) - 3$. We aim to construct a $(q+1) \times (q+1) = 6 \times 6$ conference matrix. Let $D_5$ be the set of nonzero squares modulo $5$. Then: (4) \begin{align} \quad D_5 = \{ 1, 4 \} \end{align} The quadratic character function $\chi_5 : \mathbb{Z}_5 \to \{ -1, 0, 1 \}$ is therefore: (5) \begin{align} \quad \chi_5(0) = 0 \\ \quad \chi_5(1) = 1 \\ \quad \chi_5(2) = -1 \\ \quad \chi_5(3) = -1 \\ \quad \chi_5(4) = 1 \end{align} The matrix $C$ from Theorem 1 above will then be: (6) \begin{align} \quad C_{6 \times 6} &= \begin{bmatrix} 0 & 1 & 1 & 1 & 1 & 1 \\ 1 & \chi_5(0-0) & \chi_5(0-1) & \chi_5(0-2) & \chi_5(0-3) & \chi_5(0-4) \\ 1 & \chi_5(1-0) & \chi_5(1-1) & \chi_5(1-2) & \chi_5(1-3) & \chi_5(1-4) \\ 1 & \chi_5(2-0) & \chi_5(2-1) & \chi_5(2-2) & \chi_5(2-3) & \chi_5(2-4) \\ 1 & \chi_5(3-0) & \chi_5(3-1) & \chi_5(3-2) & \chi_5(3-3) & \chi_5(3-4) \\ 1 & \chi_5(4-0) & \chi_5(4-1) & \chi_5(4-2) & \chi_5(4-3) & \chi_5(4-4) \\ \end{bmatrix} \\ &= \begin{bmatrix} 0 & 1 & 1 & 1 & 1 & 1 \\ 1 & \chi_5(0) & \chi_5(4) & \chi_5(3) & \chi_5(2) & \chi_5(1) \\ 1 & \chi_5(1) & \chi_5(0) & \chi_5(4) & \chi_5(3) & \chi_5(2) \\ 1 & \chi_5(2) & \chi_5(1) & \chi_5(0) & \chi_5(4) & \chi_5(3) \\ 1 & \chi_5(3) & \chi_5(2) & \chi_5(1) & \chi_5(0) & \chi_5(4) \\ 1 & \chi_5(4) & \chi_5(3) & \chi_5(2) & \chi_5(1) & \chi_5(0) \\ \end{bmatrix} \\ &= \begin{bmatrix} 0 & 1 & 1 & 1 & 1 & 1 \\ 1 & 0 & 1 & -1 & -1 & 1 \\ 1 & 1 & 0 & 1 & -1 & -1 \\ 1 & 
-1 & 1 & 0 & 1 & -1 \\ 1 & -1 & -1 & 1 & 0 & 1 \\ 1 & 1 & -1 & -1 & 1 & 0 \\ \end{bmatrix} \end{align}
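The construction in Theorem 1 is easy to mechanize. The sketch below (the function names are my own) builds the quadratic character for an odd prime $q$ and the $(q+1) \times (q+1)$ matrix of Theorem 1, with row and column $0$ playing the role of $\infty$, then verifies $CC^T = (n-1)I_n$ directly:

```python
def chi(x, q):
    """Quadratic character on Z_q (q an odd prime): 0, 1 for nonzero squares, -1 otherwise."""
    x %= q
    if x == 0:
        return 0
    squares = {(k * k) % q for k in range(1, q)}
    return 1 if x in squares else -1

def conference_matrix(q):
    """Conference matrix of order n = q + 1 as in Theorem 1 (index 0 is the point infinity)."""
    n = q + 1
    return [[0 if i == 0 and j == 0
             else 1 if i == 0 or j == 0
             else chi(i - j, q)  # entries indexed by residues; (i-1)-(j-1) = i-j
             for j in range(n)] for i in range(n)]

C = conference_matrix(5)
n = len(C)
# C C^T should equal (n-1) I_n = 5 I_6
CCt = [[sum(C[i][k] * C[j][k] for k in range(n)) for j in range(n)] for i in range(n)]
```

For `q = 5` this reproduces the $6 \times 6$ matrix of equation (6), and `CCt` comes out as $5 I_6$.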
https://simonhessner.de/image-style-transfer-using-convolutional-neural-networks/
# Image Style Transfer using Convolutional Neural Networks

There are many tasks in image processing that can be solved with Convolutional Neural Networks (CNNs). One of these tasks is called image style transfer. The goal of image style transfer is to apply the style of one image to the content of another image. This way you can create a drawing showing you in the style of Van Gogh, for example. I am going to explain how style can be extracted from one image and transferred to the content of another image in this article. I also wrote an overview paper on Image Style Transfer using Convolutional Neural Networks for a computer vision seminar at my university.

The first paper that uses CNNs for style transfer is called Image Style Transfer Using Convolutional Neural Networks and was published by Leon A. Gatys, Alexander S. Ecker and Matthias Bethge at CVPR 2016. If you don't have access to the paper, you can also read the pre-print on arXiv. This article is based mainly on the paper of Gatys et al.

### Convolutional Neural Networks

To understand how style transfer works, you have to understand CNNs. These are a special kind of Artificial Neural Network, and they are heavily used in many image processing tasks such as image classification, object detection, depth estimation, semantic segmentation or style transfer. CNNs consist of multiple convolutional layers which apply filters to the output of the previous layer. In contrast to classical image processing, these filters do not have to be designed by hand but are learned end-to-end using backpropagation. By stacking multiple convolutional layers, the network can learn different features. The filters in the first layers learn simple patterns like edges or corners, while the layers at the end learn complex patterns like prototypes of faces, cars, buildings, etc. The increasing complexity along the layers is caused by the increasing receptive field of every neuron in each layer.
### Extracting style and content from CNN feature maps The style of an image (color distribution, brush stroke style, …) can be separated from the content in a simple way. As already stated in the previous section, the filters in the last layers of the CNN learn more complex patterns and abstract away from raw pixel values. Simplifying a bit, they learn where people, cats, dogs, cars, etc. are in an image. So to extract the content of an image, the last feature maps are relevant. On the other hand, the first layers capture more of the local structures, colors and other stylistic properties of an image. In contrast to the content of an image, the style cannot be extracted directly. Instead, one has to calculate the correlations between the feature maps on a number of low convolutional layers. These are calculated via Gram matrices $$G^{l}$$. $$G_{ij}^{l} = \sum_{k}F_{ik}^{l}F_{jk}^{l}$$ Here, $$F_{ik}^{l}$$ refers to the activation value of the $$i$$th filter at position $$k$$ in layer $$l$$. Note that $$k$$ is one single scalar value even though the image (and every feature map) is two-dimensional. That's because in order to calculate the Gram matrix, the 2D filter map is transformed into a 1D filter map by concatenating the rows of the 2D map. This results in a long vector with dimension $$M^{l} = W^{l}H^{l}$$ (width $$W$$ times height $$H$$ of layer $$l$$) for every feature map in this layer $$l$$. When all these vectors are written as rows in a matrix $$F^{l}$$, we have $$N^{l}$$ rows, each of length $$M^l$$, so the matrix $$F^{l}$$ has dimension $$N^{l} \times M^{l}$$ and it stores the activation values of all filters in a layer $$l$$. Remember that the style of an image is represented by the correlations of different filters. Two filters are highly correlated when their values are high at the same positions. That's exactly what $$G_{ij}^{l}$$ represents: for every position $$k \in \{1, \dots, M^l\}$$ the activation value of filter $$i$$ is multiplied with the value of filter $$j$$.
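The Gram matrix computation described above is easy to sketch in code. Here is a minimal NumPy version, assuming the feature maps of one layer are given as an `(N, H, W)` array (the function and variable names are mine, not from the paper):

```python
import numpy as np

def gram_matrix(feature_maps):
    """Compute the Gram matrix G^l for one layer.

    feature_maps: array of shape (N, H, W) -- N filter maps of size H x W.
    Returns an (N, N) matrix whose entry (i, j) is the inner product
    over all positions k of filters i and j.
    """
    n, h, w = feature_maps.shape
    # Flatten each 2D map into a row vector of length M = H * W.
    f = feature_maps.reshape(n, h * w)
    # G_ij = sum_k F_ik F_jk, i.e. F times its transpose.
    return f @ f.T
```

For instance, three constant all-ones maps of size 2×2 give a 3×3 Gram matrix with every entry equal to 4 (the sum of 1·1 over the four positions).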
If there is a high correlation between two different filters $$i$$,$$j$$ on a layer $$l$$, the Gram matrix $$G^l$$ will have a high value at row $$i$$ in column $$j$$. The whole matrix then represents the correlations between all filters in a given layer. To sum up: the content of an image is represented by the feature map $$F^l$$ of a high-level (because of the receptive field) layer $$l$$, while the style is represented by the correlations of the feature maps on one or more layers $$l$$, each layer described by its Gram matrix $$G^l$$. Gatys et al. use the VGG-19 network to extract the feature maps, but you could use any other CNN that was trained for object recognition. ### Applying the style of one image to the content of another image Now we can extract the content and style of an image. In order to transfer the style to another image, Gatys et al. make use of an optimization algorithm that is normally used to train neural networks: backpropagation. When you train a network using backpropagation you have fixed training data and initial weights that you want to optimize so that the error the network makes gets smaller. But as I already wrote in the previous section, we use a pre-trained network (VGG) to extract the features, so we do not want to change the network's weights. Instead, we want to transfer style to an image. To achieve this, Gatys et al. define an error function that is differentiated not w.r.t. the weights but w.r.t. the pixel values of the image $$x$$ that should be generated (consisting of the content of one image $$c$$ in the style of another image $$s$$). $$L_{total}(c,s,x) = \alpha L_{content}(c,x) + \beta L_{style}(s,x)$$ The total loss is a linear combination of the content and style loss, weighted by $$\alpha$$ and $$\beta$$ to control how important style and content are to the user. To be continued… In the meantime, you can read my seminar paper that I wrote last semester for university.
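The loss terms can be sketched in a few lines of NumPy. This is a simplified sketch, not the full per-layer weighted sum used in the paper; the normalization of the style term follows the form Gatys et al. give for a single layer, and the default weights are made-up illustration values:

```python
import numpy as np

def content_loss(F_c, F_x):
    # Squared-error distance between the content image's and the
    # generated image's feature maps on one layer.
    return 0.5 * np.sum((F_x - F_c) ** 2)

def style_loss(G_s, G_x, n_filters, map_size):
    # Squared-error distance between Gram matrices, normalized by the
    # number of filters N and the map size M of that layer.
    return np.sum((G_x - G_s) ** 2) / (4.0 * n_filters ** 2 * map_size ** 2)

def total_loss(c_loss, s_loss, alpha=1.0, beta=1000.0):
    # Linear combination; alpha and beta trade off content vs. style.
    return alpha * c_loss + beta * s_loss
```

In the actual algorithm, the gradient of `total_loss` with respect to the pixels of $$x$$ drives the optimization, while the network weights stay frozen.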
https://cs.stackexchange.com/questions/66558/how-does-literals-in-compiled-languages-differ-from-literals-in-interpreted-lang
# How do literals in compiled languages differ from literals in interpreted languages? A literal is a piece of data which gets its value at compile time. That is: it is set at compile time, and afterwards the value is fixed, incorporated into the machine code as a sequence of 0s and 1s. What are literals in interpreted languages like, for example, JavaScript? As far as I know, the code within JavaScript functions isn't touched by the interpreter until it is executed. Can one say that literals exist in these languages, according to my definition of a literal (first paragraph)? I don't think that's a good definition of literals. A literal is a source-code token that represents a fixed value of some type. For example, in almost all programming languages, 23 is a literal representing the integer twenty-three. These aren't pieces of data that get their value at compile-time: rather, they are representations of the values themselves. For example if, in Java, you write static final int magic = 23; then magic is a constant (a variable whose value cannot be changed) that is set at compile-time to have the value twenty-three, which is the meaning of the literal 23.
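The answer's distinction holds in interpreted languages too. A short Python sketch (my own example, not from the thread) showing a literal versus a value produced by evaluation:

```python
# In Python, 23 is still an integer literal: a token that denotes the
# value twenty-three directly, not a variable that "gets" a value.
MAGIC = 23           # a constant-by-convention, initialized from the literal
computed = 20 + 3    # the same value, but produced by evaluating an expression
# (CPython happens to fold 20 + 3 at compile time, but that is an
# implementation detail, not part of what makes 23 a literal.)

assert MAGIC == computed == 23
```

The literal is notation for the value; whether the language is compiled ahead of time or interpreted only changes when that notation is turned into a runtime value.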
https://anhngq.wordpress.com/2011/05/19/the-ekeland-variational-principle/
# Ngô Quốc Anh ## May 19, 2011 ### The Ekeland variational principle Filed under: PDEs — Ngô Quốc Anh @ 14:43 In mathematical analysis, Ekeland's variational principle, discovered by Ivar Ekeland, is a theorem asserting that there exist nearly optimal solutions to some optimization problems. Ekeland's variational principle can be used when the lower level set of a minimization problem is not compact, so that the Bolzano–Weierstrass theorem cannot be applied. The principle relies on the completeness of the metric space, and it leads to a quick proof of the Caristi fixed point theorem. Theorem (Ekeland's variational principle). Let $(X, d)$ be a complete metric space, and let $F: X \to \mathbb R\cup \{+\infty\}$ be a lower semicontinuous functional on $X$ that is bounded below and not identically equal to $+\infty$. Fix $\varepsilon > 0$ and a point $u\in X$ such that $F(u) \leq \varepsilon + \inf_{x \in X} F(x).$ Then there exists a point $v\in X$ such that 1. $F(v) \leq F(u)$, 2. $d(u, v) \leq 1$, 3. and for all $w \ne v$, $F(w) > F(v) - \varepsilon d(v, w)$. Source: Wiki.
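As a sanity check of the theorem's conclusion, here is a toy numerical verification on a grid (my own illustration, not part of the post): take $X = [-3, 3]$ with the usual metric, $F(x) = x^2$, $\varepsilon = 0.5$, and $u = 0.5$, so that $F(u) = 0.25 \leq \varepsilon + \inf F$. The global minimizer $v = 0$ then satisfies all three conclusions.

```python
import numpy as np

eps = 0.5
u = 0.5
grid = np.linspace(-3.0, 3.0, 6001)
F = grid ** 2

# Take v to be the (grid) global minimizer, v = 0.
v = grid[np.argmin(F)]
Fv = v ** 2

holds_1 = Fv <= u ** 2                  # 1. F(v) <= F(u)
holds_2 = abs(u - v) <= 1.0             # 2. d(u, v) <= 1
# 3. F(w) > F(v) - eps * d(v, w) for all grid points w != v
others = grid[grid != v]
holds_3 = bool(np.all(others ** 2 > Fv - eps * np.abs(others - v)))
```

Of course this only checks one instance; the substance of the principle is that such a $v$ exists for every lower semicontinuous $F$ on a complete metric space, even when no minimizer exists.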
https://www.knowasiak.com/whats-a-fireplace-and-why-does-it-burn-2016/
# What's a fire and why does it burn? (2016) I was looking at a bonfire on a beach the other day and realized that I didn't understand anything about fire and the way it works. (For instance: what determines its color?) So I looked up some stuff, and here's what I learned. Fire Fire is a sustained chain reaction involving combustion, which is an exothermic reaction in which an oxidant, usually oxygen, oxidizes a fuel, usually a hydrocarbon, to produce products such as carbon dioxide, water, and light and heat. A standard example is the combustion of methane, which looks like $\displaystyle \text{CH}_4 + 2 \text{ O}_2 \to \text{CO}_2 + 2 \text{ H}_2 \text{O}$. The heat produced by combustion can be used to fuel more combustion, and when that happens enough that no further energy needs to be added to sustain combustion, you've got a fire. To stop a fire, you can remove the fuel (e.g. turning off a gas stove), remove the oxidant (e.g. smothering a fire using a fire blanket), remove the heat (e.g. spraying a fire with water), or remove the combustion reaction itself (e.g. with halon). Combustion is in some sense the reverse of photosynthesis, an endothermic reaction which takes in light, water, and carbon dioxide and produces hydrocarbons. It's tempting to assume that when burning wood, the hydrocarbons being combusted are e.g. the cellulose in the wood. It turns out, however, that something more complicated happens. When wood is exposed to heat, it undergoes pyrolysis (which, unlike combustion, doesn't involve oxygen), which converts it to more flammable compounds, such as various gases, and these are what combust in wood fires.
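The methane reaction above is balanced, which a quick script can confirm by checking that mass is conserved. The molar masses below are rounded standard values used for illustration:

```python
# Approximate molar masses in g/mol (rounded standard values).
M = {"C": 12.011, "H": 1.008, "O": 15.999}

m_CH4 = M["C"] + 4 * M["H"]   # methane
m_O2  = 2 * M["O"]            # molecular oxygen
m_CO2 = M["C"] + 2 * M["O"]   # carbon dioxide
m_H2O = 2 * M["H"] + M["O"]   # water

# CH4 + 2 O2 -> CO2 + 2 H2O: total mass on each side must agree.
lhs = m_CH4 + 2 * m_O2
rhs = m_CO2 + 2 * m_H2O
assert abs(lhs - rhs) < 1e-9
```

The energy released as light and heat comes from the difference in bond energies, not from any change in mass appreciable at this level of precision.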
When a wood fire burns for long enough it will lose its flame but continue to smolder, and in particular the wood will continue to glow. Smoldering involves incomplete combustion, which, unlike complete combustion, produces carbon monoxide. Flames Flames are the visible parts of a fire. As fires burn, they produce soot (which can refer to some of the products of incomplete combustion or some of the products of pyrolysis), which heats up, producing thermal radiation. This is one of the mechanisms responsible for giving fire its color. It is also how fires heat up their surroundings. Everyday objects are constantly producing thermal radiation, but most of it is infrared – its wavelength is longer than that of visible light, and so it is invisible without special cameras. Fires are hot enough to produce visible light, although they are still producing plenty of infrared light. Another mechanism giving fire its color is the emission spectrum of whatever is being burned. Unlike black body radiation, emission spectra occur at discrete frequencies; this is caused by electrons producing photons of a particular frequency after transitioning from a higher-energy state to a lower-energy state. These frequencies can be used to detect elements present in a sample in flame tests, and a similar idea (using absorption spectra) is used to determine the composition of the sun and other stars. Emission spectra are also responsible for the color of fireworks and of colored fire. The characteristic shape of a flame on Earth depends on gravity.
As a fire heats up the surrounding air, natural convection occurs: the hot air (which contains, among other things, hot soot) rises, while cold air (which contains oxygen) falls, sustaining the fire and giving flames their characteristic shape. In low gravity, such as on a space station, this no longer occurs; instead, fires are only fed by the diffusion of oxygen, and so burn more slowly and with a spherical shape (since combustion only happens at the interface of the fire with the parts of the air containing oxygen; inside the sphere there is presumably no more oxygen to burn): Black body radiation is described by Planck's law, which is fundamentally quantum mechanical in nature, and which historically was one of the first applications of any form of quantum mechanics. It can be deduced from (quantum) statistical mechanics as follows. What we'll actually compute is the distribution of frequencies in a (quantum) gas of photons at some temperature $T$; the claim that this matches the distribution of frequencies of photons emitted by a black body at the same temperature comes from a physical argument related to Kirchhoff's law of thermal radiation. The idea is that the black body can be put into thermal equilibrium with the gas of photons (since they have the same temperature). The gas of photons is being absorbed by the black body, which is also emitting photons, so in order for them to stay in equilibrium, it must be the case that at every frequency the black body is emitting radiation at the same rate as it is absorbing it, which is set by the distribution of frequencies in the gas. (Or something like that. I Am Not A Physicist, so if your local physicist says otherwise then believe them instead.)
In statistical mechanics, the probability of finding a system in microstate $s$, given that it's in thermal equilibrium at temperature $T$, is proportional to $\displaystyle e^{- \beta E_s}$ where $E_s$ is the energy of state $s$ and $\beta = \frac{1}{k_B T}$ is thermodynamic beta (so $T$ is temperature and $k_B$ is Boltzmann's constant); this is the Boltzmann distribution. For one possible justification of this, see this blog post by Terence Tao. This means that the probability is $\displaystyle p_s = \frac{1}{Z(\beta)} e^{-\beta E_s}$ where $Z(\beta)$ is the normalizing constant $\displaystyle Z(\beta) = \sum_s e^{-\beta E_s}$ called the partition function. Note that these probabilities don't change if $E_s$ is modified by an additive constant (which multiplies the partition function by a constant); only differences in energy between states matter. It's a standard observation that the partition function, up to multiplicative scale, contains the same information as the Boltzmann distribution, so anything that can be computed from the Boltzmann distribution can be computed from the partition function. For example, the moments of the energy are given by $\displaystyle \langle E^k \rangle = \frac{1}{Z} \sum_s E_s^k e^{-\beta E_s} = \frac{(-1)^k}{Z} \frac{\partial^k}{\partial \beta^k} Z$ and, up to solving the moment problem, this characterizes the Boltzmann distribution. In particular, the average energy is $\displaystyle \langle E \rangle = - \frac{\partial}{\partial \beta} \log Z$. The Boltzmann distribution can be used as a definition of temperature.
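The identity $\langle E \rangle = -\partial_\beta \log Z$ is easy to verify numerically for a small system; here is a quick NumPy check on a three-state system (my own example, in arbitrary energy units):

```python
import numpy as np

def boltzmann(E, beta):
    """Boltzmann probabilities p_s = e^{-beta E_s} / Z(beta)."""
    w = np.exp(-beta * E)
    return w / w.sum()

# Three-state system; compare <E> against -d/d(beta) log Z.
E = np.array([0.0, 1.0, 2.0])
beta = 1.3

p = boltzmann(E, beta)
avg_E = (p * E).sum()

# Central-difference numerical derivative of log Z.
h = 1e-6
logZ = lambda b: np.log(np.exp(-b * E).sum())
avg_E_from_Z = -(logZ(beta + h) - logZ(beta - h)) / (2 * h)
```

Shifting all entries of `E` by a constant changes `logZ` but leaves `p` unchanged, matching the remark that only energy differences matter.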
This definition correctly suggests that in some sense $\beta$ is the more fundamental quantity, because it can be zero (meaning every microstate is equally likely; this corresponds to "infinite temperature") or negative (meaning higher-energy microstates are more likely; this corresponds to "negative temperature," which it is possible to transition to after "infinite temperature," and which in particular is hotter than every positive temperature). To describe the state of a gas of photons we'll need to know something about the quantum behavior of photons. In the usual quantization of the electromagnetic field, the electromagnetic field can be treated as a collection of quantum harmonic oscillators, each oscillating at various (angular) frequencies $\omega$. The energy eigenstates of a quantum harmonic oscillator are labeled by a nonnegative integer $n \in \mathbb{Z}_{\ge 0}$, which can be interpreted as the number of photons of frequency $\omega$. The energies of these eigenstates are (up to an additive constant, which doesn't matter for this calculation and so which we can ignore) $\displaystyle E_n = n \hbar \omega$ where $\hbar$ is the reduced Planck constant. The fact that we only need to keep track of the number of photons rather than distinguishing them reflects the fact that photons are bosons. Accordingly, for fixed $\omega$, the partition function is $\displaystyle Z_{\omega}(\beta) = \sum_{n=0}^{\infty} e^{-n \beta \hbar \omega} = \frac{1}{1 - e^{-\beta \hbar \omega}}$. The assumption that $n$, or equivalently the energy $E_n = n \hbar \omega$, is required to be an integer (multiple of $\hbar \omega$) here is the Planck postulate, and historically it was perhaps the first appearance of a quantization (in the sense of quantum mechanics) in physics.
Without this assumption (so using classical harmonic oscillators), the sum above becomes an integral (where $n$ is now proportional to the square of the amplitude), and we get a "classical" partition function $\displaystyle Z_{\omega}^{cl}(\beta) = \int_0^{\infty} e^{-n \beta \hbar \omega} \, dn = \frac{1}{\beta \hbar \omega}$. (It's unclear what measure we should be integrating against here, but this calculation appears to reproduce the usual classical answer, so I'll stick with it.) These two partition functions give very different predictions, although the quantum one approaches the classical one as $\beta \hbar \omega \to 0$. In particular, the average energy of all photons of frequency $\omega$, computed using the quantum partition function, is $\displaystyle \langle E \rangle_{\omega} = - \frac{d}{d \beta} \log \frac{1}{1 - e^{-\beta \hbar \omega}} = \frac{\hbar \omega}{e^{\beta \hbar \omega} - 1}$ whereas the average energy computed using the classical partition function is $\displaystyle \langle E \rangle_{\omega}^{cl} = - \frac{d}{d \beta} \log \frac{1}{\beta \hbar \omega} = \frac{1}{\beta} = k_B T$. The quantum answer approaches the classical answer as $\hbar \omega \to 0$ (so for small frequencies), and the classical answer is consistent with the equipartition theorem in classical statistical mechanics, but it is also grossly inconsistent with experiment and experience. It predicts that the average energy of the radiation emitted by a black body at a frequency $\omega$ is a constant independent of $\omega$, and since radiation can occur at arbitrarily high frequencies, the conclusion is that a black body emits an infinite amount of energy, at every possible frequency, which is of course badly wrong. This is (most of) the ultraviolet catastrophe.
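The two average-energy formulas can be compared directly. Here is a short sketch working in units where $\hbar = k_B = 1$ (a unit choice made purely for illustration):

```python
import numpy as np

hbar = 1.0  # units with hbar = k_B = 1, chosen for illustration

def avg_energy_quantum(omega, beta):
    # <E>_omega = hbar*omega / (e^{beta*hbar*omega} - 1)
    return hbar * omega / np.expm1(beta * hbar * omega)

def avg_energy_classical(beta):
    # <E>_omega^cl = 1/beta = k_B T, independent of omega
    return 1.0 / beta

beta = 1.0
# Low frequency: the quantum answer approaches the classical k_B T ...
assert abs(avg_energy_quantum(1e-6, beta) - avg_energy_classical(beta)) < 1e-5
# ... while at high frequency it is exponentially damped.
assert avg_energy_quantum(50.0, beta) < 1e-18
```

Using `np.expm1` rather than `np.exp(...) - 1` avoids catastrophic cancellation at small $\beta\hbar\omega$, which is exactly the regime where the two answers are compared.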
The quantum partition function instead predicts that at low frequencies (relative to the temperature) the classical answer is approximately correct, but that at high frequencies the average energy becomes exponentially damped, with more damping at lower temperatures. This is because at high frequencies and low temperatures a quantum harmonic oscillator spends most of its time in its ground state, and cannot easily transition to its next lowest state, which is exponentially less likely. Physicists say that most of this "degree of freedom" (the freedom of an oscillator to oscillate at a particular frequency) gets "frozen out." The same phenomenon is responsible for classical but incorrect computations of specific heat, e.g. for diatomic gases such as oxygen. The density of states and Planck's law Now that we know what's happening at a fixed frequency $\omega$, it remains to sum over all possible frequencies. This part of the computation is essentially classical and no quantum corrections to it will be made. We'll make a standard simplifying assumption that our gas of photons is trapped in a box with side length $L$, subject to periodic boundary conditions (so really, the flat torus $T = \mathbb{R}^3 / L \mathbb{Z}^3$); the choice of boundary conditions, as well as the shape of the box, will turn out not to matter in the end. Possible frequencies are then classified by standing wave solutions to the electromagnetic wave equation in the box with these boundary conditions, which in turn correspond (up to multiplication by $c$) to eigenvalues of the Laplacian $\Delta$.
More explicitly, if $\Delta v = \lambda v$, where $v(x)$ is a smooth function $T \to \mathbb{R}$, then the corresponding standing wave solution of the electromagnetic wave equation is $\displaystyle v(t, x) = e^{c \sqrt{\lambda} t} v(x)$ and therefore (keeping in mind that $\lambda$ is typically negative, so $\sqrt{\lambda}$ is typically purely imaginary) the corresponding frequency is $\displaystyle \omega = c \sqrt{-\lambda}$. This frequency occurs $\dim V_{\lambda}$ times, where $V_{\lambda}$ is the $\lambda$-eigenspace of the Laplacian. The reason for the simplifying assumptions above is that for a box with periodic boundary conditions (again, mathematically a flat torus) it is completely straightforward to explicitly write down all of the eigenfunctions of the Laplacian: working over the complex numbers for simplicity, they are given by $\displaystyle v_k(x) = e^{i k \cdot x}$ where $k = \left( k_1, k_2, k_3 \right) \in \frac{2 \pi}{L} \mathbb{Z}^3$ is the wave vector. (A little more generally, on the flat torus $\mathbb{R}^n / \Gamma$ where $\Gamma$ is a lattice, wave vectors take values in the dual lattice of $\Gamma$, possibly up to scaling by $2 \pi$ depending on conventions.) The corresponding eigenvalue of the Laplacian is $\displaystyle \lambda_k = - \| k \|^2 = - k_1^2 - k_2^2 - k_3^2$ from which it follows that the multiplicity of a given eigenvalue $- \frac{4 \pi^2}{L^2} n$ is the number of ways to write $n$ as a sum of three squares. The corresponding frequency is $\displaystyle \omega_k = c \| k \|$ and so the corresponding energy (of a single photon with that frequency) is $\displaystyle E_k = \hbar \omega_k = \hbar c \| k \|$.
At this point we'll approximate the probability distribution over possible frequencies $\omega_k$, which is strictly speaking discrete, as a continuous probability distribution, and compute the corresponding density of states $g(\omega)$; the idea is that $g(\omega) \, d\omega$ should correspond to the number of states available with frequencies between $\omega$ and $\omega + d\omega$. Then we'll take an integral over the density of states to get the final partition function. Why is this approximation reasonable (unlike the case of the partition function for a single harmonic oscillator, where it wasn't)? The full partition function can be described as follows. For every wave vector $k \in \frac{2\pi}{L} \mathbb{Z}^3$, there is an occupancy number $n_k \in \mathbb{Z}_{\ge 0}$ describing the number of photons with that wave vector; the total number $n = \sum n_k$ of photons is finite. Each such photon contributes $\hbar \omega_k = \hbar c \| k \|$ to the energy, from which it follows that the partition function factors as a product $\displaystyle Z(\beta) = \prod_k Z_{\omega_k}(\beta) = \prod_k \frac{1}{1 - e^{- \beta \hbar c \| k \|}}$ over all wave vectors $k$, and therefore its logarithm factors as a sum $\displaystyle \log Z(\beta) = \sum_k \log \frac{1}{1 - e^{-\beta \hbar c \| k \|}}$ and it's this sum that we want to approximate by an integral. It turns out that for reasonable temperatures and reasonably large boxes, the integrand varies very slowly as $k$ varies, so the approximation by an integral is very close. The approximation stops being reasonable at very low temperatures, where as above quantum harmonic oscillators mostly end up in their ground states and we get Bose–Einstein condensates. The density of states can be computed as follows.
We can think of wave vectors as evenly spaced lattice points living in some "phase space," from which it follows that the number of wave vectors in some region of phase space is proportional to its volume, at least for regions which are large compared to the lattice spacing $\frac{2 \pi}{L}$. In fact, the number of wave vectors in a region of phase space is exactly $\frac{V}{8 \pi^3}$ times its volume, where $V = L^3$ is the volume of our box / torus. It remains to compute the volume of the region of phase space given by all wave vectors $k$ with frequencies $\omega_k = c \| k \|$ between $\omega$ and $\omega + d\omega$. This region is a spherical shell with thickness $\frac{d\omega}{c}$ and radius $\frac{\omega}{c}$, and hence its volume is $\displaystyle \frac{4 \pi \omega^2}{c^3} \, d\omega$ from which we get that the density of states for a single photon is $\displaystyle g(\omega) \, d\omega = \frac{V \omega^2}{2 \pi^2 c^3} \, d\omega$. Actually, this formula is off by a factor of two: we forgot to take photon polarization into account (equivalently, photon spin), which doubles the number of states with a given wave vector, giving the corrected density $\displaystyle g(\omega) \, d\omega = \frac{V \omega^2}{\pi^2 c^3} \, d\omega$. The fact that the density of states is linear in the volume $V$ is not special to the flat torus; it's a general feature of eigenvalues of the Laplacian by Weyl's law. This gives that the logarithm of the partition function is $\displaystyle \log Z = \frac{V}{\pi^2 c^3} \int_0^{\infty} \omega^2 \log \frac{1}{1 - e^{- \beta \hbar \omega}} \, d\omega$.
Taking its derivative with respect to $\beta$ gives the average energy of the photon gas as $\displaystyle \langle E \rangle = - \frac{\partial}{\partial \beta} \log Z = \frac{V}{\pi^2 c^3} \int_0^{\infty} \frac{\hbar \omega^3}{e^{\beta \hbar \omega} - 1} \, d\omega$ but for us the significance of this integral lies in its integrand, which gives the "density of energies" $\displaystyle \boxed{ E(\omega) \, d\omega = \frac{V \hbar}{\pi^2 c^3} \frac{\omega^3}{e^{\beta \hbar \omega} - 1} \, d\omega}$ describing how much of the energy of the photon gas comes from photons of frequencies between $\omega$ and $\omega + d\omega$. This, finally, is a form of Planck's law, although it needs some massaging to become a statement about black bodies as opposed to about gases of photons (we need to divide by $V$ to get the energy density per unit volume, then do some other things to get a measure of radiation). Planck's law has two notable limits. In the limit as $\beta \hbar \omega \to 0$ (meaning high temperature relative to frequency), the denominator approaches $\beta \hbar \omega$, and we get $\displaystyle E(\omega) \, d\omega \approx \frac{V}{\pi^2 c^3} \frac{\omega^2}{\beta} \, d\omega = \frac{V k_B T \omega^2}{\pi^2 c^3} \, d\omega$. This is a form of the Rayleigh–Jeans law, which is the classical prediction for black body radiation. It's approximately valid at low frequencies but becomes less and less accurate at higher frequencies. Second, in the limit as $\beta \hbar \omega \to \infty$ (meaning low temperature relative to frequency), the denominator approaches $e^{\beta \hbar \omega}$, and we get $\displaystyle E(\omega) \, d\omega \approx \frac{V \hbar}{\pi^2 c^3} \frac{\omega^3}{e^{\beta \hbar \omega}} \, d\omega$. This is a form of the Wien approximation. It's approximately valid at high frequencies but becomes less and less accurate at low frequencies. Both of these limits historically preceded Planck's law itself.
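The two limits are easy to see numerically. A sketch in units with $V = \hbar = c = k_B = 1$ (again purely an illustrative unit choice):

```python
import numpy as np

# Spectral energy densities in units with V = hbar = c = k_B = 1.
def planck(omega, beta):
    return omega ** 3 / (np.pi ** 2 * np.expm1(beta * omega))

def rayleigh_jeans(omega, beta):
    return omega ** 2 / (np.pi ** 2 * beta)

def wien(omega, beta):
    return omega ** 3 * np.exp(-beta * omega) / np.pi ** 2

beta = 1.0
# Rayleigh-Jeans agrees with Planck at low frequency ...
assert abs(planck(1e-4, beta) / rayleigh_jeans(1e-4, beta) - 1) < 1e-3
# ... and the Wien approximation agrees at high frequency.
assert abs(planck(30.0, beta) / wien(30.0, beta) - 1) < 1e-6
```

Between those regimes both approximations fail, which is exactly the gap Planck's law was needed to fill.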
Wien's displacement law This form of Planck's law is enough to tell us at what frequency the energy $E(\omega)$ is maximized given the temperature $T$ (and therefore roughly what color a black body of temperature $T$ is): we differentiate with respect to $\omega$ and find that we need to solve $\displaystyle \frac{d}{d \omega} \frac{\omega^3}{e^{\beta \hbar \omega} - 1} = 0$ or equivalently (taking the logarithmic derivative instead) $\displaystyle \frac{3}{\omega} = \frac{\beta \hbar e^{\beta \hbar \omega}}{e^{\beta \hbar \omega} - 1}$. Let $\zeta = \beta \hbar \omega$, so that we can rewrite the equation as $\displaystyle 3 = \frac{\zeta e^{\zeta}}{e^{\zeta} - 1}$ or, with some rearrangement, $\displaystyle 3 - \zeta = 3 e^{-\zeta}$. This form of the equation makes it relatively easy to show that there is a unique positive solution $\zeta = 2.821 \dots$, and therefore that $\beta \hbar \omega = \zeta$, giving that the maximizing frequency is $\displaystyle \boxed{ \omega_{max} = \frac{\zeta}{\beta \hbar} = \frac{\zeta k_B}{\hbar} T}$ where $T$ is the temperature. This is Wien's displacement law for frequencies. Rewriting in terms of wavelengths $\ell = \frac{2 \pi c}{\omega}$ gives $\displaystyle \frac{2 \pi c}{\omega_{max}} = \frac{2 \pi c \hbar}{\zeta k_B T} = \frac{b}{T}$ where $\displaystyle b = \frac{2 \pi c \hbar}{\zeta k_B} \approx 5.100 \times 10^{-3} \, mK$ (the units here being meter-kelvins). This computation is usually done in a slightly different way, by first re-expressing the density of energies $E(\omega) \, d\omega$ in terms of wavelengths, then taking the maximum of the resulting density. Because $d\omega$ is proportional to $\frac{d\ell}{\ell^2}$, this has the effect of changing the $\omega^3$ from earlier to an $\omega^5$, and so replaces $\zeta$ with the unique solution $\zeta'$ to $\displaystyle 5 - \zeta' = 5 e^{-\zeta'}$ which is about $4.965$.
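Both transcendental equations, $3 - \zeta = 3e^{-\zeta}$ and $5 - \zeta' = 5e^{-\zeta'}$, are easy to solve numerically. A small sketch using fixed-point iteration (the helper name is mine):

```python
import math

def solve_peak(p, iters=100):
    """Solve p - zeta = p * exp(-zeta) for its unique positive root
    via the fixed-point iteration zeta <- p * (1 - exp(-zeta)).

    The iteration map has derivative p * exp(-zeta) < 1 near the root,
    so it converges for any positive starting guess.
    """
    zeta = p
    for _ in range(iters):
        zeta = p * (1.0 - math.exp(-zeta))
    return zeta

zeta  = solve_peak(3)  # frequency form: ~2.821
zetap = solve_peak(5)  # wavelength form: ~4.965
```

A handful of iterations already gives several correct digits; 100 iterations is overkill but keeps the sketch simple.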
Using $\zeta'$ instead gives a maximizing wavelength $\displaystyle \boxed{ \ell_{max} = \frac{2 \pi c \hbar}{\zeta' k_B T} = \frac{b'}{T} }$ where $\displaystyle b' = \frac{2 \pi c \hbar}{\zeta' k_B} \approx 2.898 \times 10^{-3} \, mK$. This is Wien's displacement law for wavelengths. Note that $\ell_{max} \neq \frac{2 \pi c}{\omega_{max}}$. A wood fire has a temperature of around $1000 \, K$ (or around $700^{\circ}$ Celsius), and substituting this in above produces wavelengths of $\displaystyle \frac{2 \pi c}{\omega_{max}} = \frac{5.100 \times 10^{-3} \, mK}{1000 \, K} = 5.100 \times 10^{-6} \, m = 5100 \, nm$ and $\displaystyle \ell_{max} = \frac{2.898 \times 10^{-3} \, mK}{1000 \, K} = 2.898 \times 10^{-6} \, m = 2898 \, nm$. For comparison, the wavelengths of visible light range between about $750 \, nm$ for red light and $380 \, nm$ for violet light. Both of these computations correctly suggest that most of the radiation from a wood fire is infrared; this is the radiation that's heating you but not producing visible light. By contrast, the temperature of the surface of the sun is about $5800 \, K$, and substituting that in produces wavelengths $\displaystyle \frac{2 \pi c}{\omega_{max}} = 879 \, nm$ and $\displaystyle \ell_{max} = 500 \, nm$ which correctly suggests that the sun is emitting lots of light all over the visible spectrum (and hence appears white). In some sense this argument is backwards: probably the visible spectrum evolved to be what it is because of the large availability of light at exactly the frequencies the sun emits the most. Finally, a more sobering calculation. Nuclear explosions reach temperatures of around $10^7 \, K$, comparable to the temperature of the interior of the sun. Substituting this in produces wavelengths of $\displaystyle \frac{2 \pi c}{\omega_{max}} = 0.51 \, nm$ and $\displaystyle \ell_{max} = 0.29 \, nm$. These are the wavelengths of X-rays.
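The numbers above can be reproduced directly from $\ell_{max} = b'/T$. A sketch using rounded CODATA values for the physical constants:

```python
import math

# Rounded physical constants (SI units).
hbar = 1.054571817e-34   # J s
k_B  = 1.380649e-23      # J / K
c    = 2.99792458e8      # m / s

zeta_prime = 4.965114231744276       # root of 5 - z = 5 e^{-z}
b_prime = 2 * math.pi * c * hbar / (zeta_prime * k_B)  # Wien constant, m K

def peak_wavelength(T):
    """Wavelength (in meters) maximizing the black body spectrum at temperature T."""
    return b_prime / T

wood_fire = peak_wavelength(1000)    # ~2.9e-6 m: infrared
sun       = peak_wavelength(5800)    # ~5.0e-7 m: visible (green)
bomb      = peak_wavelength(1e7)     # ~2.9e-10 m: X-rays
```

The computed `b_prime` comes out near $2.898 \times 10^{-3}$ m·K, matching the constant quoted in the text.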
Planck’s law doesn’t just stop at the maximum, so nuclear explosions also produce even shorter-wavelength radiation, namely gamma rays. This is just the radiation a nuclear explosion produces because it is hot, as opposed to the radiation it produces because it is nuclear, such as neutron radiation.
http://www.helpteaching.com/questions/Function_and_Algebra_Concepts/Grade_9
You can create printable tests and worksheets from these Grade 9 Function and Algebra Concepts questions! Select one or more questions using the checkboxes above each question. Then click the add selected questions to a test button before moving to another page.

Solve the inequality. $2 - 3x > -10$
1. $x > -4$
2. $x > 4$
3. $x < -4$
4. $x < 4$

Solve for $y$. $2(y+4)=3y+5$
1. $y=-1$
2. $y=2$
3. $y=3$
4. $y=9$

Subtract. $(7a^2 - 3a) - (5a^2 - 5a)$
1. $2a^2 - 8a$
2. $2a^2 + 2a$
3. $4$
4. $12a^2 - 8a$

Solve the equation $x^2-2x-35=0$.
1. $x=-7$ and $x=5$
2. $x=-7$ and $x=-5$
3. $x=7$ and $x=-5$
4. $x=7$ and $x=5$

Solve the inequality $-3x + 8 < 11$.
1. $x<1$
2. $x<-1$
3. $x> -1$
4. $x>1$

Select all the algebraic EXPRESSIONS.
1. $3x = 9$
2. $4b$
3. $5x - 8$
4. $3x + 5 = 10$

The expression $9^(5/2)$ is equal to:
1. the fifth root of 9 raised to the 2nd power
2. the square root of 9 raised to the fourth power
3. the square root raised to the 5th power
4. the square root of 9 raised to the 5th power

Which of the expressions below is equivalent to $b(5a^2+2)+b(2a^2+5)$?
1. $7a^2b+7b$
2. $7a^2b+2b+5$
3. $10a^2b+10b$
4. $(10a^2+10)b$

Grade 9 Polynomials and Rational Expressions
Subtract: $(4x^2 - 2x + 8) - (x^2 + 3x - 2)$
1. $3x^2 + x + 6$
2. $3x^2 + x + 10$
3. $3x^2 - 5x + 6$
4. $3x^2 - 5x + 10$

Solve the system of equations.
2x + 9y = 0
3x + 5y = 17
1. (-9, 2)
2. (-9, 0)
3. (9, -2)
4. (-2, 9)

Given f(x) = x + 1, find f(2r).
1. 3r
2. 3rx + 1
3. 2rx
4. 2r + 1

What is the solution set to the inequality $-3x+2>=17$?
1. $x > -5$
2. $x>=-5$
3. $x<-5$
4. $x<=-5$

Grade 9 Polynomials and Rational Expressions
What is the simplest form of $(3c^2 - 8c + 5) + (c^2 - 8c - 6)$?
1. $3c^2 - 1$
2. $4c^2 + 11$
3. $4c^2 - 16c - 1$
4. $2c^2 - 16c - 1$

Grade 9 Polynomials and Rational Expressions
Simplify: $2x^2+5+4x^2-3$
1. $2x^2-2$
2. $-2x^2+2$
3. $6x^2+2$
4. $6x^2+8$

Simplify: $((3x)/2)^4$
1. $81x^4//16$
2. $6x^4$
3. $12x^4//8$
4. $81x^4//2$
https://codereview.stackexchange.com/questions/205131/variable-bit-length-lossy-floating-point-compression
# Variable bit length lossy floating point compression

I am implementing a new compression algorithm for the weights of a neural network for the Leela Chess project. The weights are roughly 100 MB of float32s, which I want to compress as small as possible. The error tolerance for this application is 2^-17, so lossy compression is clearly the right answer here. All of the weights are between -5 and 5, but 99.995% are in (-.25, .25) and most are reasonably closely clumped around zero.

The basic idea with this algorithm is to turn floats into integer multiples of the error tolerance, and then use a UTF-8 inspired encoding to represent small values with only 1 byte.

```python
import bz2

import numpy as np


def compress(in_path, out_path):
    with open(in_path, 'rb') as array:
        net = np.fromfile(array, dtype=np.float32)
    # Quantize
    net = np.asarray(net * 2**17, np.int32)
    # Zigzag encode
    net = (net >> 31) ^ (net << 1)
    # To variable length
    result = np.zeros(len(net) * 3, dtype=np.uint8)
    for i in range(3):
        big = (net >= 128) << 7
        result[i::3] = (net % 128) + big
        net >>= 7
    # Delete non-essential indices
    zeroes = np.where(result == 0)[0]
    zeroes = zeroes[np.where(zeroes % 3 != 0)]
    result = np.delete(result, zeroes)
    with bz2.open(out_path, 'wb') as out:
        out.write(result.tobytes())


def decompress(in_path, out_path):
    with bz2.open(in_path, 'rb') as array:
        result = np.frombuffer(array.read(), dtype=np.uint8)
    start_inds = np.where(result < 128)[0]
    not_zeroed = np.ones(len(start_inds), dtype=bool)
    # append zeroes so the loop doesn't go out of bounds
    result = np.append(result, np.zeros(4, dtype=np.uint8))
    # Get back fixed length from variable length
    net = np.zeros(len(start_inds), dtype=np.uint32)
    for i in range(3):
        change = (result[start_inds] % 128) * not_zeroed
        net[np.where(not_zeroed)[0]] *= 128
        net += change
        start_inds += 1
        not_zeroed &= result[start_inds] >= 128
    # Zigzag decode (switch to a signed dtype first so negative
    # values survive the conversion to float below)
    net = net.astype(np.int32)
    net = (net >> 1) ^ -(net & 1)
    # Un-quantize
    net = np.asarray(net, np.float32)
    net /= 2**17
    with open(out_path, 'wb') as out:
        out.write(net.tobytes())


compress('diff.hex', 'diff.bz2')
decompress('diff.bz2', 'round.hex')
```

The main type of advice I'm looking for is algorithm and performance advice, but ways to make the code readable are always nice.

• Did you confirm that the variable length encoding is an improvement over feeding the zigzag directly into bz2? – Janne Karila Oct 8 '18 at 7:09
• no, but I have compared it to just quantizing and zipping. Is zigzag without vle likely to compress better than just zipping? – Oscar Smith Oct 8 '18 at 7:14
• You can test different combinations to know for sure. Zigzag increases the number of zero bytes, and a general purpose compressor like bz2 should be able to exploit that, at least to some degree. – Janne Karila Oct 8 '18 at 7:22
• The difference in scales of your values makes me wonder if using sign(weight) * log(weight) might be more compressible since it should make the overall distribution of weights more uniform. I guess quantization using percentiles would work just as well though. – scnerd Oct 8 '18 at 14:31
• It's actually exactly the opposite. The more uniform you make the distribution, the less compressible it is. – Oscar Smith Oct 8 '18 at 18:08
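As an aside, the zigzag transform used in the question's code maps signed integers to unsigned ones so that small magnitudes of either sign become small values (0→0, −1→1, 1→2, −2→3, …), which is what lets the variable-length step emit a single byte for most weights. A standalone round-trip check of just that transform, in plain Python rather than the vectorized NumPy form:

```python
def zigzag_encode(n, bits=32):
    # arithmetic shift of the sign bit, XORed with the value shifted left
    mask = (1 << bits) - 1
    return ((n >> (bits - 1)) ^ (n << 1)) & mask

def zigzag_decode(z):
    return (z >> 1) ^ -(z & 1)

for n in [0, -1, 1, -2, 2, 12345, -12345]:
    assert zigzag_decode(zigzag_encode(n)) == n
```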
http://en.m.wikibooks.org/wiki/Ordinary_Differential_Equations/Successive_Approximations
# Ordinary Differential Equations/Successive Approximations

If $y'=f(x,y)$ has a solution $y$ satisfying the initial condition $y(x_0)=y_0$, then it must satisfy the following integral equation:

$y=y_0+\int_{x_0}^x f(t, y(t))dt$

Now we will solve this equation by the method of successive approximations. Define $y_1$ as:

$y_1=y_0+\int_{x_0}^x f(t,y_0)dt$

And define $y_n$ as

$y_n=y_0+\int_{x_0}^x f(t,y_{n-1})dt$

We will now prove that:

1. If $f(x,y)$ is bounded and the Lipschitz condition is satisfied, then the sequence of functions converges to a continuous function
2. This function satisfies the differential equation
3. This is the unique solution to this differential equation with the given initial condition.

## Proof

First, we prove that $y_n$ lies in the box, meaning that $|y_n(x)-y_0|\le\frac{1}{2}h$. We prove this by induction. First, it is obvious that $|y_1(x)-y_0|\le\frac{1}{2}h$. Now suppose that $|y_{n-1}(x)-y_0|\le\frac{1}{2}h$. Then $|f(t,y_{n-1}(t))|\le M$, so that $|y_n(x)-y_0|\le\int_{x_0}^x |f(t,y_{n-1}(t))|dt\le M(x-x_0)\le \frac{1}{2}Mw\le \frac{1}{2}h$. This proves the case when $x_0<x$, and the case when $x<x_0$ is proven similarly.

We will now prove by induction that $|y_n(x)-y_{n-1}(x)|<\frac{MK^{n-1}}{n!}|x-x_0|^n$. First, it is obvious that $|y_1(x)-y_0|\le M|x-x_0|$. Now suppose that it is true up to $n-1$. Then $|y_n(x)-y_{n-1}(x)|\le\int_{x_0}^x |f(t,y_{n-1}(t))-f(t,y_{n-2}(t))|dt<\int_{x_0}^x K|y_{n-1}(t)-y_{n-2}(t)|dt$ due to the Lipschitz condition. Now, $|y_n(x)-y_{n-1}(x)|<\frac{MK^{n-1}}{(n-1)!}\int_{x_0}^x |u-x_0|^{n-1}du=\frac{MK^{n-1}}{n!}|x-x_0|^n$.

Therefore, the series $y_0+\sum_{n=1}^\infty (y_n(x)-y_{n-1}(x))$ is absolutely and uniformly convergent for $|x-x_0|\le\frac{1}{2}w$, because it is dominated term by term by the (convergent) exponential series. Therefore, the limit function $y(x)=y_0+\sum_{n=1}^\infty (y_n(x)-y_{n-1}(x))=\lim_{n\rightarrow\infty}y_n(x)$ exists and is a continuous function for $|x-x_0|\le\frac{1}{2}w$.
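The iteration in the proof is constructive, so it can be run numerically. A small sketch for the example problem $y'=y$, $y(0)=1$ (my choice of example; the grid size and iteration count are arbitrary), where the successive approximations are exactly the partial sums of the exponential series and converge to $e^x$:

```python
import math

N = 2000             # grid points on [0, 1]
h = 1.0 / N
y = [1.0] * (N + 1)  # y_0(x) = y0 = 1

# successive approximations: y_{n+1}(x) = 1 + integral from 0 to x of y_n(t) dt
for _ in range(30):
    integral = 0.0
    new_y = [1.0]
    for i in range(N):
        integral += 0.5 * h * (y[i] + y[i + 1])  # trapezoid rule
        new_y.append(1.0 + integral)
    y = new_y

print(y[-1])  # y_n(1) converges to e = 2.71828...
```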
Now we will prove that this limit function satisfies the differential equation. Last modified on 22 October 2013, at 19:20
https://www.effortlessmath.com/math-topics/how-to-solve-the-frequency-distribution-table/
# How to Solve the Frequency Distribution Table?

The frequency distribution table helps us find patterns in the data and also enables us to analyze the data using central tendency and variance criteria.

## Step by step guide to the frequency distribution table

Frequency distribution tables are a way to organize data so that the data becomes more meaningful. A frequency distribution table is a table that summarizes all data in two columns – variables/categories and their frequency. It has two or three columns. Usually, the first column lists all the outcomes, either as individual values or as class intervals, depending on the size of the data set. The second column contains the tally marks counted for each outcome, and the third column lists the frequency of each outcome. The tally column is optional.

### How to create a frequency distribution table?

Creating a frequency distribution table is easy using the following steps:

• Step 1: Create a table with two columns – one with the title of the data you are organizing and the other column for frequency. (Draw three columns if you want to add tally marks too.)
• Step 2: Look at the values in the data and decide whether to draw an ungrouped frequency distribution table or a grouped frequency distribution table. If there are many different values, it is usually best to go with a grouped frequency distribution table.
• Step 3: Write the data set values in the first column.
• Step 4: Count how many times each item appears in the collected data. In other words, find the frequency of each item by counting.
• Step 5: Write the frequency of each item in the second column.
• Step 6: Finally, you can also write the total frequency in the last row of the table.

Example: The following table shows the test scores of $$20$$ students. The frequency distribution table drawn above is called an ungrouped frequency distribution table. It displays ungrouped data and is usually used for a smaller data set.
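The steps above can be sketched in a few lines of Python; the scores below are made-up sample data standing in for the table image, which is not reproduced here:

```python
from collections import Counter

# hypothetical test scores of 20 students (sample data)
scores = [12, 15, 12, 18, 15, 15, 10, 12, 18, 15,
          10, 15, 12, 18, 12, 15, 10, 12, 15, 18]

freq = Counter(scores)               # Step 4: count how often each item appears
print("Score | Frequency")
for score in sorted(freq):           # Steps 3 and 5: one row per distinct value
    print(f"{score:5} | {freq[score]}")
print("Total |", sum(freq.values()))  # Step 6: total frequency
```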
### What is a frequency distribution table in statistics?

Frequency distribution in statistics is the display of data that shows the number of observations within a given interval. The representation of a frequency distribution can be graphical or tabular. Such graphs make it easier to understand the collected data.

• Bar graphs show data using bars of uniform width with equal distances between them.
• A pie chart shows a whole circle, divided into sectors where each sector is proportional to the information it represents.
• A frequency polygon is plotted by joining the midpoints of the tops of the bars in a histogram.

### Frequency distribution table for grouped data

The frequency distribution table for grouped data is known as a grouped frequency distribution table. It is based on the frequency of class intervals. In this table, all data categories are divided into different class intervals of the same width, for example, $$0-10, 10-20, 20-30,$$ and so on. The frequency of each class interval is then marked against it. See an example of a frequency distribution table for grouped data in the image below.

### Cumulative frequency distribution table

Cumulative frequency means the sum of the frequencies of a class and all classes below it. We can calculate it by adding the frequency of the corresponding class interval or category to the frequencies of all classes below it. The following is an example of a cumulative frequency distribution table:

### Frequency Distribution Table – Example 1:

A school held a blood donation camp. Blood groups of $$30$$ students were registered as follows. Display this data in the form of a frequency distribution table.
$$A, B, O, O, AB, O, A, O, B, A, O, B, A, O, O, A, AB, A, O, B, A, B, O, O, A, A, O, O, AB, B$$

Solution: We can display the above data in the frequency distribution table as follows:

## Exercises for Frequency Distribution Table

Below are the weekly pocket expenses (in dollars) of a group of $$25$$ students selected at random:

$$37,\:41,\:39,\:34,\:41,\:26,\:46,\:31,\:48,\:32,\:44,\:39,\:35,\:39,\:37,\:49,\:27,\:37,\:33,\:38,\:49,\:45,\:44,\:37,\:36$$

Create a grouped frequency distribution table with class intervals of equal widths, starting from $$25 – 30, 30 – 35$$, and so on.
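A solution sketch for the exercise above, grouping the 25 values into class intervals of width 5 (counting a value that falls on a boundary into the higher class, which is one common convention):

```python
data = [37, 41, 39, 34, 41, 26, 46, 31, 48, 32, 44, 39, 35,
        39, 37, 49, 27, 37, 33, 38, 49, 45, 44, 37, 36]

width = 5
lo = 25
bins = {}
for x in data:
    k = (x - lo) // width  # class index: 25-30 -> 0, 30-35 -> 1, ...
    bins[k] = bins.get(k, 0) + 1

for k in sorted(bins):
    print(f"{lo + k * width} - {lo + (k + 1) * width}: {bins[k]}")
```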
https://cstheory.stackexchange.com/questions/48372/pspace-complete-under-np-reduction?noredirect=1
# PSPACE-complete under NP reduction

Is there some example of a PSPACE problem that we can show PSPACE-hard under NP reduction, but for which we do not know a proof of PSPACE-hardness under P reduction?

To be more precise, the NP reduction I am referring to is of the following kind: take your problem A that you want to show PSPACE-complete. You show that, given an oracle for A and an oracle for an NP-complete problem (the oracles are used separately, not nested in any way), you can solve a PSPACE-complete problem in polynomial time.

A related question exists, but it is not answered.

• What is an “NP reduction”? Feb 11 at 9:47
• @EmilJeřábek I added a more precise description, it's true that different definitions could be considered Feb 11 at 10:12
• @Denis - isn't it the same as asking whether there is a problem $A$ such that $A$ is not known to be PSPACE-hard, but $PSPACE\subseteq (P^{NP})^A$? If so, it seems that such a problem would either make the polynomial hierarchy collapse, or separate PH from PSPACE. Feb 11 at 11:34
• @Denis - Right. I meant if we had such a problem that is provably not PSPACE-complete, but that's pointless, as you mention. By the way, have you looked at this? Maybe the intermediate problems can be used as a candidate: cstheory.stackexchange.com/questions/7639/… Feb 11 at 13:23
• Thanks @Shaull, the question came from an automaton problem, where it was easier to find the NP reduction than the P one, so I was wondering whether it has happened that for some problem we stayed in the first stage. Feb 11 at 13:56
http://omerkel.github.io/rocketscience/html5/src/
# Rocket Science

[Game instrument panel: Time (sec), Altitude (mi/ft), Vertical Velocity (mph), Fuel (lb), Burn Rate (lb/sec)]

### Abstract

This physics simulation is about simplified rocket science. Rocket Science comes along without any sophisticated graphical human machine interface so far. The concept is turn based. On each turn the user enters a numerical value for the fuel burn rate of the excursion module in a valid range.

### Objectives

The main objective is to land the excursion module safely. Flight control is by instruments only rather than visually landing the Lunar Excursion Module (LEM). The Instrument Landing System (ILS) shows a defect. You are supposed to perform the landing maneuvers manually. An additional objective is to optimise the fuel consumption.

### Initial Settings

The initial value for the altitude is taken from the Apollo 11 mission report: Apollo 11 postflight analysis and mission report, 1970, Technical Report NASA-SP-238 (NTRS document 19710015566).

Since this is a one-dimensional physics simulation, the initial (vertical component of) velocity is chosen to be zero at the time of undocking from the command module in circular lunar orbit and the lunar module separation maneuver.

## Physics and Accuracy

A mapping function is needed to describe the influence of the rocket engine's fuel burn rate on the velocity of the lunar module. The relation between both physical quantities is described by the Tsiolkovsky rocket equation, also referred to as the ideal rocket equation:

$\Delta v_{max} = v_e \ln \frac{m_0}{m_1}$

where $\Delta v_{max}$ is the maximum velocity change of the LEM ignoring external forces, $v_e$ is the effective exhaust velocity, $m_0$ is the initial total mass of the LEM including fuel, and $m_1$ is the final total mass.

Mind: In Javascript and Gnuplot the natural logarithm (logarithmus naturalis) is represented by Math.log or log.
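The ideal rocket equation and the ln(1-x) formulation used in the simulation's source (quoted later on this page) are the same thing, since $\ln(1 - \Delta m/m_0) = -\ln(m_0/m_1)$. A quick check using the mass values and the exhaust-velocity factor 3000 that appear in those snippets, for one full 10-second turn at the maximum burn rate of 200 lb/s:

```python
import math

ve = 3000.0             # effective exhaust velocity factor used in the code
m0 = 16000.0 + 16500.0  # initial total mass: fuel + empty capsule (lb)
burned = 200.0 * 10.0   # 200 lb/s for one 10 s turn
m1 = m0 - burned

delta_v_tsiolkovsky = ve * math.log(m0 / m1)
mass_change_ratio = burned / m0
delta_v_code = -ve * math.log(1.0 - mass_change_ratio)  # as in velocityChangeTechnical

print(delta_v_tsiolkovsky, delta_v_code)  # both ~190.5
```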
In popular predecessors of Rocket Science, developed on slower machines, the computation of such logarithmic functions used the first partial sums of a corresponding Taylor series. Since Rocket Science internally uses an ln(1-x) like function, this ought to be a Newton-Mercator series to map a mass change ratio, depending on the current burn rate, onto a velocity change value. Rocket Science simply uses a direct natural logarithm function for this purpose (since central processing units have become faster nowadays). If adapting the source code on your own, then please mind that ln(1-x) is defined for x<1 only (and the full Newton-Mercator series will converge to the natural logarithm for x in ]-1; 1[ only).

Gnuplot commands

```
set xlabel 'mass change ratio' offset 0, 4;
set ylabel 'velocity change technical' offset 10, 0;
set xzeroaxis;
set yzeroaxis;
set xrange [-0.2:1.2];
set yrange [-4:0.5];
plot log(1-x), -x-x**2/2, -x-x**2/2-x**3/3, -x-x**2/2-x**3/3-x**4/4, -x-x**2/2-x**3/3-x**4/4-x**5/5, -15.0/14.0*x;
```

The human machine interface shows units as United States customary units/British imperial units while the underlying physics engine itself renders International System of Units (SI) units.

The underlying physics engine is aware of the fuel mass change over time in relation to the moving accelerated object:

$F_{LEM} = \frac{d}{dt} p = \frac{d}{dt} (m v) = m \frac{dv}{dt} + v \frac{dm}{dt} = m a + v \frac{dm}{dt}$

In its current version it neglects the fact that gravitational acceleration depends on the altitude of the object, too. Gravitational acceleration is constant at

```
this.gravitationalAcceleration = 1.622; // Moon, meter per square second
/*
 * Gravity is relative to the height of an object. With the given altitude
 * range it is quite clear that this simulation is enormously simplifying
 * this aspect.
 */
```

The formula to match the fuel burn rate value towards a change in velocity caused by the engine is modelled by functions…

```
var massChangeRatio = burnRate * durationEngineBurning / this.getMassTotal();
// mind: Math.log() has base Math.E, Math.log(Math.E) is 1
var velocityChangeTechnical = 3000 * Math.log(1-massChangeRatio);
```

With burn rates in range from 0 to 200 lb per second and a typical duration for the engine burning fuel for a full iteration of 10 seconds with…

```
this.state[this.MASSFUEL] = 16000; // lb
this.state[this.MASSCAPSULEEMPTY] = 16500; // lb
```

… the mass change ratio will be at 0.125 maximum approximately.

Gnuplot commands

```
set xlabel 'mass change ratio' offset 0, 4;
set ylabel 'velocity change technical' offset 11, 0;
set xzeroaxis;
set yzeroaxis;
set xrange [-0.02:0.14];
set yrange [-0.2:0.05];
plot log(1-x), -x-x**2/2, -x-x**2/2-x**3/3, -x-x**2/2-x**3/3-x**4/4, -x-x**2/2-x**3/3-x**4/4-x**5/5, -15.0/14.0*x;
```

In the given range even the first few partial sums of the corresponding Taylor series would do the trick. To be honest, a simple linear equation would be sufficient, too. However, if modifying the capsule mass to extremely low values, the logarithmic model will feel more realistic (in the sense of being harder to control) and thus becomes important.

The altitude change technically caused by the engine uses the integral over the mass change ratio rather than the integral over time:

```
var altitudeChangeTechnical = 3000 * (
    -massChangeRatio - Math.log(1-massChangeRatio) * (1-massChangeRatio));
```

Sure, this could be discussed, but in fact it leads to an acceptable user experience anyway.

### Legal

@author Oliver Merkel, Merkel(dot)Oliver(at)web(dot)de.

Logos, brands, and trademarks belong to their respective owners. All source code also including code parts written in HTML, Javascript, CSS is under MIT License.
Copyright (c) 2016 Oliver Merkel, Merkel(dot) Oliver(at) web(dot)de Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. If not otherwise stated all graphics (independent of its format) are licensed under
https://zxi.mytechroad.com/blog/category/sliding-window/
Posts published in “Sliding Window”

You are given an integer array nums and an integer x. In one operation, you can either remove the leftmost or the rightmost element from the array nums and subtract its value from x. Note that this modifies the array for future operations. Return the minimum number of operations to reduce x to exactly 0 if it is possible; otherwise, return -1.

Example 1: Input: nums = [1,1,4,2,3], x = 5. Output: 2. Explanation: The optimal solution is to remove the last two elements to reduce x to zero.
Example 2: Input: nums = [5,6,7,8,9], x = 4. Output: -1
Example 3: Input: nums = [3,2,20,1,1,3], x = 10. Output: 5. Explanation: The optimal solution is to remove the last three elements and the first two elements (5 operations in total) to reduce x to zero.

Constraints:
• 1 <= nums.length <= 10^5
• 1 <= nums[i] <= 10^4
• 1 <= x <= 10^9

Solution 1: Prefix Sum + Hashtable
Time complexity: O(n)
Space complexity: O(n)

Solution 2: Sliding Window
Find the longest sliding window whose sum of elements equals sum(nums) - x; then ans = n - window_size.
Time complexity: O(n)
Space complexity: O(1)

C++

You are given an array points, an integer angle, and your location, where location = [posx, posy] and points[i] = [xi, yi] both denote integral coordinates on the X-Y plane. Initially, you are facing directly east from your position. You cannot move from your position, but you can rotate; in other words, posx and posy cannot be changed. Your field of view in degrees is represented by angle, determining how wide you can see from any given view direction. Let d be the amount in degrees that you rotate counterclockwise. Then your field of view is the inclusive range of angles [d - angle/2, d + angle/2]. You can see some set of points if, for each point, the angle formed by the point, your position, and the immediate east direction from your position is in your field of view. There can be multiple points at one coordinate. There may be points at your location, and you can always see these points regardless of your rotation. Points do not obstruct your vision to other points. Return the maximum number of points you can see.

Example 1: Input: points = [[2,1],[2,2],[3,3]], angle = 90, location = [1,1]. Output: 3. Explanation: The shaded region represents your field of view. All points can be made visible in your field of view, including [3,3] even though [2,2] is in front and in the same line of sight.
Example 2: Input: points = [[2,1],[2,2],[3,4],[1,1]], angle = 90, location = [1,1]. Output: 4. Explanation: All points can be made visible in your field of view, including the one at your location.
Example 3: Input: points = [[1,0],[2,1]], angle = 13, location = [1,1]. Output: 1. Explanation: You can only see one of the two points, as shown above.

Constraints:
• 1 <= points.length <= 10^5
• points[i].length == 2
• location.length == 2
• 0 <= angle < 360
• 0 <= posx, posy, xi, yi <= 10^9

Solution: Sliding Window
Sort all the points by angle, and duplicate each point with angle + 2*PI to handle the wrap-around case. Maintain a window [l, r] such that angle[r] - angle[l] <= fov.
Time complexity: O(n log n)
Space complexity: O(n)

C++

You are given a string s. A split is called good if you can split s into 2 non-empty strings p and q whose concatenation equals s and the number of distinct letters in p and q are the same. Return the number of good splits you can make in s.

Example 1: Input: s = "aacaba". Output: 2. Explanation: There are 5 ways to split "aacaba" and 2 of them are good.
("a", "acaba") Left and right strings contain 1 and 3 different letters respectively.
("aa", "caba") Left and right strings contain 1 and 3 different letters respectively.
("aac", "aba") Left and right strings contain 2 and 2 different letters respectively (good split).
("aaca", "ba") Left and right strings contain 2 and 2 different letters respectively (good split).
("aacab", "a") Left and right strings contain 3 and 1 different letters respectively.
Example 2: Input: s = "abcd". Output: 1. Explanation: Split the string as follows: ("ab", "cd").
Example 3: Input: s = "aaaaa". Output: 4. Explanation: All possible splits are good.
Example 4: Input: s = "acbadbaada". Output: 2

Constraints:
• s contains only lowercase English letters.
• 1 <= s.length <= 10^5

Solution: Sliding Window
1. Count the frequency of each letter and the number of unique letters for the entire string as the right part.
2. Iterate over the string; add the current letter to the left part and remove it from the right part.
3. Increase a part's unique-letter count only when a letter's frequency becomes 1, and decrease it only when a letter's frequency drops to 0.
Time complexity: O(n)
Space complexity: O(1)

Python3

Given an array of integers arr and an integer target, find two non-overlapping sub-arrays of arr, each with sum equal to target. There can be multiple answers, so you have to find an answer where the sum of the lengths of the two sub-arrays is minimum. Return the minimum sum of the lengths of the two required sub-arrays, or return -1 if you cannot find such two sub-arrays.

Example 1: Input: arr = [3,2,2,4,3], target = 3. Output: 2. Explanation: Only two sub-arrays have sum = 3 ([3] and [3]). The sum of their lengths is 2.
Example 2: Input: arr = [7,3,4,7], target = 7. Output: 2. Explanation: Although we have three non-overlapping sub-arrays of sum = 7 ([7], [3,4] and [7]), we choose the first and third sub-arrays, as the sum of their lengths is 2.
Example 3: Input: arr = [4,3,2,6,2,3,4], target = 6. Output: -1. Explanation: We have only one sub-array of sum = 6.
Example 4: Input: arr = [5,5,4,4,5], target = 3. Output: -1. Explanation: We cannot find a sub-array of sum = 3.
Example 5: Input: arr = [3,1,1,1,5,1,2,1], target = 3. Output: 3. Explanation: Note that sub-arrays [1,2] and [2,1] cannot be an answer because they overlap.

Constraints:
• 1 <= arr.length <= 10^5
• 1 <= arr[i] <= 1000
• 1 <= target <= 10^8

Solution: Sliding Window + Best so far
1. Use a sliding window to maintain a subarray whose sum is <= target.
2. When the sum of the sliding window equals target, we have found a subarray [s, e].
3. Update ans with its length plus the length of the shortest valid subarray that ends before s.
4. Use an array to store the length of the shortest valid subarray ending at or before each index.
Time complexity: O(n)
Space complexity: O(n)

C++

Given a string s and an integer k, return the maximum number of vowel letters in any substring of s with length k. The vowel letters in English are a, e, i, o, u.

Example 1: Input: s = "abciiidef", k = 3. Output: 3. Explanation: The substring "iii" contains 3 vowel letters.
Example 2: Input: s = "aeiou", k = 2. Output: 2. Explanation: Any substring of length 2 contains 2 vowels.
Example 3: Input: s = "leetcode", k = 3. Output: 2. Explanation: "lee", "eet" and "ode" contain 2 vowels.
Example 4: Input: s = "rhythms", k = 4. Output: 0. Explanation: s doesn't have any vowel letters.
Example 5: Input: s = "tryhard", k = 4. Output: 1

Constraints:
• 1 <= s.length <= 10^5
• s consists of lowercase English letters.
• 1 <= k <= s.length

Solution: Sliding Window
Keep track of the number of vowels in a window of size k.
Time complexity: O(n)
Space complexity: O(1)

C++
https://math.stackexchange.com/questions/527993/maximal-unique-solution-to-an-ivp
# Maximal unique solution to an IVP

In class we learned the existence and uniqueness theorems for differential equations. The weaker Picard-Lindelöf theorem states that for any IVP, $$\begin{cases} x'(t) = f(t, x(t))\\ x(t_0) = x_0 \end{cases}$$ where $f$ is continuous in the first argument and locally Lipschitz in the second, there is a unique solution in some neighborhood of $t_0$ (in an open interval $I_0$ containing $t_0$). This result was extended to: there is a maximal unique solution to every IVP of the above form (basically, there is a biggest possible interval on which the solution is unique). More precisely, this means that if $x:(a, b) \to \mathbb{R}^n$ is the maximal solution and $y:(a', b') \to \mathbb{R}^n$ is any other solution to the same IVP, then $(a', b') \subset (a, b)$ and $x = y$ on $(a', b')$. My question is: can we find the maximal interval where there is a unique solution for any given IVP (also for any function $f$)?

EDIT: To be more precise: suppose that for each $(a,b,R)$ with $a < b$ you know (or can compute) a modulus of continuity and a Lipschitz constant for $f(t,y)$ on $\{(t,y): a \le t \le b, |y| \le R\}$. There is a (computable) $\delta > 0$ such that if $a \le s-\delta < s + \delta \le b$ and $0 < \epsilon < 1$ and $|Y| < R - 2 \epsilon$, then we can calculate (e.g. by the methods used in the proof of Picard-Lindelöf) $\tilde{y}(t)$ for $s-\delta \le t \le s+\delta$ such that $|y(t) - \tilde{y}(t)| < 2 \epsilon$ for every solution $y(t)$ with $|y(s) - Y| < \epsilon$. If your initial value problem does have a solution defined on $[a,b]$, there is some $R$ that bounds $|y(t)|$ on $[a,b]$, and by an appropriate choice of $\epsilon$ we can get an approximate solution accurate enough to show that $|y(t)| < R + 1$ on $[a,b]$, and in particular that the solution exists on this interval (i.e. a maximal solution that ceases to exist at some point must cross the region $R < |y| < R + 1$ before it ceases to exist). So you can approximate the maximal interval from below. Approximating it from above seems to be more difficult, and I don't know if it can be done in general.

• Are there some sort of series or recursive methods in terms of $f$ to find the end points of the maximal interval? Is that what you mean by numerical methods? Also, can you provide some references where I can study this further? Thanks. – Pratyush Sarkar Oct 16 '13 at 4:50
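The "approximate from below" idea can be illustrated numerically. The sketch below is my own (explicit Euler, with hypothetical names; it is not the method of the answer, which uses Picard-style estimates): step the IVP forward until the solution leaves the ball $|x| \le R$, which gives a lower estimate of the right endpoint of the maximal interval. For $x' = x^2$, $x(0) = 1$, the exact maximal solution is $x(t) = 1/(1-t)$, which blows up at $t = 1$ and reaches $R = 100$ at $t = 0.99$.

```python
def escape_time(f, t0, x0, R, h=1e-5, t_max=10.0):
    """Integrate x' = f(t, x) with explicit Euler until |x| first
    exceeds R; the returned time is a lower estimate of the right
    endpoint of the maximal interval of existence."""
    t, x = t0, x0
    while abs(x) <= R and t < t_max:
        x += h * f(t, x)
        t += h
    return t

# x' = x^2, x(0) = 1 has exact solution x(t) = 1/(1 - t),
# so the maximal interval to the right ends at t = 1.
t_esc = escape_time(lambda t, x: x * x, 0.0, 1.0, R=100.0)
```

Shrinking h and raising R pushes the estimate toward the blow-up time, but, as noted above, this never certifies the maximal interval from above.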
http://gmatclub.com/forum/bill-has-a-small-deck-of-12-playing-cards-made-up-of-only-2-suits-of-6-cards-each-96078-20.html
# Bill has a small deck of 12 playing cards

stne wrote (17 Sep 2012):

The way that is given here I understand: pairs-twins-and-couples-103472.html#p805725. But I am trying to do it another way, and wonder what I am doing wrong. "At least one pair" means either one pair or two pairs, so let's suppose we have A1 A2 A3 A4 A5 A6 and B1 B2 B3 B4 B5 B6. I understand the 1 - P(opposite event) approach, but just for understanding's sake, if I tried the direct way P(one pair) + P(two pairs), how could it be done?

One pair: 6C1 * 5C2 * (4!/2!) = 720
6C1 = ways to select the one value from 6 which will form the pair
5C2 = ways to select two different values from the 5 remaining to form the singles *I think this is not correct; what would be correct?*
4!/2! = permutations of 4 letters where two are identical

Two pairs: 6C2 * 4!/(2!2!) = 90
6C2 = ways to select the two values which will form the pairs
4!/(2!2!) = ways to arrange 4 letters where 2 are of one kind and another 2 are of another kind

Hence the total ways for at least one pair: 6C1*5C2*(4!/2!) + 6C2*4!/(2!2!) = 810. What is wrong here? The total number of ways 4 cards can be selected is 12C4 = 495, so I am getting a probability greater than one, which is not possible. Would highly appreciate any help.

EvaJager wrote (17 Sep 2012):

One pair - should be 6C1*5C2*2*2 = 6*10*4 = 240. Choose one pair out of 6, then two single cards from two different pairs. You have two suits, so for the same number there are two possibilities. Two pairs - just 6C2 = 15. Total: 255. Probability: 255/495 = 17/33.

10C2 also includes the remaining pairs of cards with the same number, so choosing 2 out of 10 does not guarantee two non-identical numbers. Another problem is that 495 represents the number of choices for 4 cards regardless of the order in which they were drawn; you should then count the other choices accordingly and disregard the order in which they were chosen.

stne wrote (17 Sep 2012): Just edited my original post - I meant to take 5C2, not 10C2. My doubt about the arrangement factors 4!/2! and 4!/(2!2!) still stands.

VeritasPrepKarishma wrote (17 Sep 2012), responding to a pm:

There is a difference between this question and the other one you mentioned. In that question, you were using one digit twice, which made the two identical. Here the two cards that form the pair are not identical - they are of different suits. Also, here you don't need to arrange them: you can take just a selection while calculating the cases in the numerator and the denominator, and the probability will not be affected. In the other question, you needed to find the number of arrangements to make the passwords/numbers, and hence you needed to arrange the digits.

So the step 6C1*5C2*(4!/2!) should be 6C1*5C2*2*2 instead:
6C1 = ways to select the one value from 6 which will form the pair
5C2 = ways to select two different values from the 5 remaining to form the singles *this is correct*
2*2 = for each of the two values, you can select a card in 2 ways (since you have 2 suits)

Similarly, 6C2 * 4!/(2!2!) should be just 6C2, to select 2 pairs out of 6. Probability = 255/495 = 17/33.

Note: You can arrange the cards too and will still get the same probability - just ensure that you arrange in the numerator as well as the denominator:
Only one pair = 6C1*5C2*2*2 * 4! (multiply by 4! because all the cards are distinct)
Both pairs = 6C2 * 4!
Select 4 cards out of 12 = 12C4 * 4!
Probability = (255 * 4!) / (495 * 4!) = 17/33

EvaJager wrote (17 Sep 2012):

One pair: 6C1*5C2*(4!/2!) = 720 - NO. Choose one pair out of 6 - 6C1 - and you don't care about the order in which you choose the two cards. Choose two pairs out of the remaining 5 pairs - 5C2 - and then, from each pair, you have 2 possibilities to choose one of them, therefore 5C2*2*2. Here you stop; don't care about any order.

Two pairs: 6C2 * 4!/(2!2!) = 90 - NO. You choose two pairs - 6C2 - and you stop here. From each pair you take both cards, and don't care about any order.

stne wrote (17 Sep 2012):

Awesome - this was going to be my next question. I was going to ask whether the singles can be selected first and then the pair, or whether we could have different arrangements, since the question does not say the pair has to be together (adjacent): we could have B3 A1 A5 A2, where A1 A2 is the pair of a single suit and B3, A5 are the single cards. The explanations above clarify exactly that; I was basically getting confused with this problem: digit-codes-combination-103081.html#p802805. It's now clear - thanks to both of you. Hope I am in a better position to understand such questions now.

Adhil wrote (28 Nov 2012):

Hi Bunuel. No doubt that your method is right, but this is my thought process. An arbitrary hand with at least one pair of similar cards = A, B, C, A. The number of patterns in which the A,B,C,A hand appears = 4C2. The number of ways A is chosen = 12; when the first A is chosen, it locks in the value for the second A. The number of ways B is chosen = 10 (because by choosing a value for A, two cards are taken off the choice list), and the number of ways C is chosen = 9. P(at least two cards have the same value) = (12 x 10 x 9 x 4C2) / (12 x 11 x 10 x 9) = 18/33.

Shreeraj wrote (05 Mar 2013):

Hi Bunuel, my doubt here is with: "6C4 - # of ways to choose 4 different cards out of 6 different values; 2^4 - as each of the 4 cards chosen can be of 2 different suits". I'm not clear why we did 6C4 even though we selected out of 12 cards, and also on the purpose of 2^4. Thanks, Shreeraj

VeritasPrepKarishma wrote (29 Apr 2013), responding to a pm:

Bunuel wrote:

maheshsrini wrote: Bill has a small deck of 12 playing cards made up of only 2 suits of 6 cards each. Each of the 6 cards within a suit has a different value from 1 to 6; thus, for each value from 1 to 6, there are two cards in the deck with that value. Bill likes to play a game in which he shuffles the deck, turns over 4 cards, and looks for pairs of cards that have the same value. What is the chance that Bill finds at least one pair of cards that have the same value? 8/33, 62/165, 17/33, 103/165, 25/33

Let's calculate the opposite probability and subtract this value from 1. The opposite probability is that there is no pair among the 4 cards, meaning that all 4 cards are different: $\frac{C^4_6 \cdot 2^4}{C^4_{12}}=\frac{16}{33}$.

How do we obtain 6C4 * 2^4? Think of what you have: 6 cards numbered 1 to 6 in each of 2 different suits - say 1 to 6 of clubs and 1 to 6 of diamonds, a total of 12 cards. You want to select 4 cards such that there is no pair, i.e. no two cards have the same number. This means all 4 cards will have different numbers - say you get a 1, 2, 4 and 6. The 1 could be of clubs or diamonds, the 2 could be of clubs or diamonds, and so on for all 4 cards. So you must select 4 numbers out of the 6 numbers, in 6C4 ways, and then for each number you must select a suit out of the given two suits. That is how you get 6C4 * 2*2*2*2, the total number of ways in which you will have no pair (no two cards of the same number).

A Manager wrote (26 May 2013), quoting the rest of Bunuel's solution:

Bunuel wrote: $C^4_6$ - # of ways to choose 4 different values out of 6; $2^4$ - as each of the 4 cards chosen can be of 2 different suits; $C^4_{12}$ - total # of ways to choose 4 cards out of 12. So $P=1-\frac{16}{33}=\frac{17}{33}$. Or another way: We can choose any card for the first one - $\frac{12}{12}$. The next card can be any card but the 1 card of the value we've already chosen - $\frac{10}{11}$ (if we've picked a 3, then there is one more 3 left and we can choose any but this one card out of the 11 cards left). The next card can be any card but the 2 cards of the values we've already chosen - $\frac{8}{10}$. The last card can be any card but the 3 cards of the values we've already chosen - $\frac{6}{9}$. $P=\frac{12}{12}\cdot\frac{10}{11}\cdot\frac{8}{10}\cdot\frac{6}{9}=\frac{16}{33}$, so $P=1-\frac{16}{33}=\frac{17}{33}$ - the same answer as above. Hope it helps.

Hi Bunuel, I used combinatorics - is this way correct?
Prob(at least 1 pair) = fav/tot
fav = 12c1*1c1*10c10*9c8 + 12c1*1c1*10c10*1c1
tot = 12c4
Prob = (12*10*9/8+12*10) / (12*11*10*9/4*3*2)

A Senior Manager wrote (06 Aug 2013):

I solved this question as follows, and I know I am wrong; the problem is that I don't know why. The first card can be picked in 12C1 ways, the second in 10C1 ways, the third in 8C1 ways, and the fourth in 6C1 ways. All possibilities to pick 4 cards out of 12: 12C4. Probability of at least one pair = 1 - (12C1*10C1*8C1*6C1)/12C4. Could somebody point out the error in this solution, and why?

vishalrastogi wrote (16 Dec 2013), reposting the same problem with 6 black and 6 red cards ("A game of cards"). Bunuel (Math Expert) replied: Merging similar topics. Please refer to the solutions on page 1.
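The 17/33 answer discussed throughout the thread can also be confirmed by brute-force enumeration. This is my own sketch, not from the thread: build the 12-card deck, enumerate all C(12,4) = 495 four-card hands, and count those containing a repeated value.

```python
from fractions import Fraction
from itertools import combinations

# Deck: values 1-6, each in two suits (0 and 1) -> 12 distinct cards.
deck = [(value, suit) for value in range(1, 7) for suit in (0, 1)]

hands = list(combinations(deck, 4))            # all C(12,4) = 495 hands
at_least_one_pair = sum(
    1 for hand in hands
    if len({value for value, _ in hand}) < 4   # some value appears twice
)

p = Fraction(at_least_one_pair, len(hands))    # 255/495 = 17/33
```

The count of no-pair hands is C(6,4) * 2^4 = 240, leaving 495 - 240 = 255 hands with at least one pair, in agreement with both EvaJager's direct count and Bunuel's complement argument.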
https://www.gradesaver.com/textbooks/math/other-math/CLONE-547b8018-14a8-4d02-afd6-6bc35a0864ed/chapter-4-decimals-test-page-325/12
## Basic College Mathematics (10th Edition)

Estimate: 20 $\div$ 4 = 5. Exact: 4.175

20.04 $\div$ 4.8

First find the estimate: 20.04 rounds to 20 and 4.8 rounds to 4, so 20.04 $\div$ 4.8 $\approx$ 20 $\div$ 4 = 5.

Now find the exact value: 20.04 $\div$ 4.8 = $\frac{20.04}{4.8}$ = $\frac{2004}{480}$ (multiply numerator and denominator by 100) = $\frac{2004\div12}{480\div12}$ (divide by the common factor 12) = $\frac{167}{40}$ = 4.175
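The arithmetic above can be double-checked with exact rational arithmetic; this illustrative sketch using Python's fractions module is not part of the textbook:

```python
from fractions import Fraction

# 20.04 / 4.8 with numerator and denominator scaled by 100
# to clear the decimals.
q = Fraction(2004, 480)

# Reducing by the common factor 12 gives 167/40, i.e. exactly 4.175.
assert q == Fraction(167, 40)
assert float(q) == 4.175
```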
https://dsp.stackexchange.com/questions/1774/generate-the-convolution-matrix-of-2d-kernel-for-convolution-shape-of-same/54884
# Generate the Convolution Matrix of 2D Kernel for Convolution Shape of same

I want to find a convolution matrix for a certain 2D kernel $$H$$. For example, for an image Img of size $$m \times n$$, I want (in MATLAB):

    T * Img = reshape(conv2(Img, H, 'same'), [], 1);

where T is the convolution matrix and same means the convolution shape (output size) matches the input size. Theoretically, H should be converted to a Toeplitz matrix. I'm using the MATLAB function convmtx2():

    T = convmtx2(H, m, n);

Yet T is of size $$(m+2)(n+2) \times mn$$, as MATLAB's convmtx2 generates a convolution matrix which matches the convolution shape full. Is there a way to generate the convolution matrix which matches using conv2() with the same convolution shape parameter?

• Are you looking simply to get the same resultant T*Img, or would you like to use T for a different purpose? Mar 28 '12 at 19:01
• Related question - dsp.stackexchange.com/questions/17418. – Royi Jan 17 '19 at 8:58

I cannot test this on my computer because I do not have the convmtx2 function, but here is what the MATLAB help says: http://www.mathworks.com/help/toolbox/images/ref/convmtx2.html

> T = convmtx2(H,m,n) returns the convolution matrix T for the matrix H. If X is an m-by-n matrix, then reshape(T*X(:), size(H)+[m n]-1) is the same as conv2(X,H).

This would get the same resulting convolution as conv2(X,H), but then you would still have to pull out the correct piece of the convolution.

• Welcome to DSP.SE, and this is a great answer! Mar 29 '12 at 2:37
• I think that sometimes one needs the actual matrix to analyze it (the adjoint operator, the inverse, etc.). Hence this method won't work (unless you start removing rows from the matrix, which will be slow as it is sparse). – Royi Jan 17 '19 at 15:05

I wrote a function which solves this in my StackOverflow Q2080835 GitHub Repository (have a look at CreateImageConvMtx()). Actually, the function can support any convolution shape you'd like - full, same and valid. The code is as follows:

    function [ mK ] = CreateImageConvMtx( mH, numRows, numCols, convShape )

    CONVOLUTION_SHAPE_FULL  = 1;
    CONVOLUTION_SHAPE_SAME  = 2;
    CONVOLUTION_SHAPE_VALID = 3;

    switch(convShape)
        case(CONVOLUTION_SHAPE_FULL) % Code for the 'full' case
            convShapeString = 'full';
        case(CONVOLUTION_SHAPE_SAME) % Code for the 'same' case
            convShapeString = 'same';
        case(CONVOLUTION_SHAPE_VALID) % Code for the 'valid' case
            convShapeString = 'valid';
    end

    mImpulse = zeros(numRows, numCols);
    for ii = numel(mImpulse):-1:1
        mImpulse(ii) = 1; %<! Create impulse image corresponding to i-th output matrix column
        mTmp = sparse(conv2(mImpulse, mH, convShapeString)); %<! The impulse response
        cColumn{ii} = mTmp(:);
        mImpulse(ii) = 0;
    end

    mK = cell2mat(cColumn);

    end

Enjoy...
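The impulse-response trick from the last answer carries over directly to other environments. Below is a minimal Python sketch of the same idea (function and variable names are mine, not from the answer), assuming NumPy and SciPy are available. It builds T one column per input pixel, so that T @ img.ravel() reproduces conv2(img, H, 'same') in NumPy's row-major ordering (MATLAB ravels column-major, so its T differs by a permutation):

```python
import numpy as np
from scipy.signal import convolve2d

def create_conv_matrix(kernel, m, n, mode="same"):
    """Build T such that T @ img.ravel() == convolve2d(img, kernel, mode).ravel()
    for any m-by-n image. Works for mode 'full', 'same', or 'valid'."""
    cols = []
    impulse = np.zeros((m, n))
    for i in range(m * n):
        impulse.flat[i] = 1.0                       # i-th unit impulse image
        resp = convolve2d(impulse, kernel, mode=mode)
        cols.append(resp.ravel())                   # i-th column of T
        impulse.flat[i] = 0.0
    return np.column_stack(cols)

# Check against a direct 'same' convolution on a random image
rng = np.random.default_rng(0)
img = rng.standard_normal((4, 5))
H = rng.standard_normal((3, 3))
T = create_conv_matrix(H, 4, 5, mode="same")
direct = convolve2d(img, H, mode="same").ravel()
assert np.allclose(T @ img.ravel(), direct)
```

Because convolution is linear, the response to the i-th impulse is exactly the i-th column of the operator; for large images a sparse matrix (e.g. scipy.sparse) would be preferable, as the MATLAB answer does with sparse().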
https://lakens.github.io/statistical_inferences/equivalencetest.html
# 9 Equivalence Testing and Interval Hypotheses

Most scientific studies are designed to test the prediction that an effect or a difference exists. Does a new intervention work? Is there a relationship between two variables? These studies are commonly analyzed with a null hypothesis significance test. When a statistically significant p-value is observed, the null hypothesis can be rejected, and researchers can claim that the intervention works, or that there is a relationship between two variables, with a maximum error rate. But if the p-value is not statistically significant, researchers very often draw a logically incorrect conclusion: They conclude there is no effect based on p > 0.05. Open a result section of an article you are writing, or the result section of an article you have recently read. Search for "p > 0.05", and look carefully at what you or the scientists concluded (in the results section, but also check which claim they make in the discussion section). If you see the conclusion that there was 'no effect' or there was 'no association between variables', you have found an example where researchers forgot that absence of evidence is not evidence of absence. A non-significant result in itself only tells us that we cannot reject the null hypothesis. It is tempting to ask after p > 0.05 'so, is the true effect zero'? But the p-value from a null hypothesis significance test cannot answer that question. It might be useful to think of the answer to the question whether an effect is absent after observing p > 0.05 as 無 (mu), used as a non-dualistic answer, neither yes nor no, or 'unasking the question'. It is simply not possible to answer the question whether a meaningful effect is absent based on p > 0.05. There are many situations where researchers are interested in examining whether a meaningful effect is absent.
For example, it can be important to show two groups do not differ on factors that might be a confound in the experimental design (e.g., examining whether a manipulation intended to increase fatigue did not affect the mood of the participants, by showing that positive and negative affect did not differ between the groups). Researchers might want to know if two interventions work equally well, especially when the newer intervention costs less or requires less effort (e.g., is online therapy just as efficient as in-person therapy?). And other times we might be interested in demonstrating the absence of an effect because a theoretical model predicts there is no effect, or because we believe a previously published study was a false positive, and we expect to show the absence of an effect in a replication study. And yet, when you ask researchers if they have ever designed a study where the goal was to show that there was no effect, for example by predicting that there would be no difference between two conditions, many people say they have never designed a study where their main prediction was that the effect size was 0. Researchers almost always predict there is a difference. One reason might be that many researchers would not even know how to statistically support a prediction of an effect size of 0, because they were not trained in the use of equivalence testing. It is never possible to show an effect is exactly 0. Even if you collected data from every person in the world, the effect in any single study will randomly vary around the true effect size of 0 - you might end up with a mean difference that is very close to, but not exactly, zero, in any finite sample. Hodges & Lehmann (1954) were the first to discuss the statistical problem of testing whether two populations have the same mean. They suggest (p. 264) to: “test that their means do not differ by more than an amount specified to represent the smallest difference of practical interest”.
Nunnally (1960) similarly proposed a ‘fixed-increment’ hypothesis where researchers compare an observed effect against a range of values that is deemed too small to be meaningful. Defining a range of values considered practically equivalent to the absence of an effect is known as an equivalence range or a region of practical equivalence. The equivalence range should be specified in advance, and requires careful consideration of the smallest effect size of interest. Although researchers have repeatedly attempted to introduce tests against an equivalence range in the social sciences, this statistical approach has only recently become popular. During the replication crisis, researchers searched for tools to interpret null results when performing replication studies. Researchers wanted to be able to publish informative null results when replicating findings in the literature that they suspected were false positives. One notable example was the studies on pre-cognition by Daryl Bem, which ostensibly showed that participants were able to predict the future. Equivalence tests were proposed as a statistical approach to answer the question whether an observed effect is small enough to conclude that a previous study could not be replicated. Researchers specify a smallest effect size of interest (for example an effect of 0.5, so for a two-sided test any value outside a range from -0.5 to 0.5) and test whether effects more extreme than this range can be rejected. If so, they can reject the presence of effects that are deemed large enough to be meaningful. One can distinguish a nil null hypothesis, where the null hypothesis is an effect of 0, from a non-nil null hypothesis, where the null hypothesis is any other effect than 0, for example effects more extreme than the smallest effect size of interest.
As Nickerson writes: "The distinction is an important one, especially relative to the controversy regarding the merits or shortcomings of NHST inasmuch as criticisms that may be valid when applied to nil hypothesis testing are not necessarily valid when directed at null hypothesis testing in the more general sense." Equivalence tests are a specific implementation of interval hypothesis tests, where instead of testing against a null hypothesis of no effect (that is, an effect size of 0; nil null hypothesis), an effect is tested against a null hypothesis that represents a range of non-zero effect sizes (non-nil null hypothesis). Indeed, one of the most widely suggested improvements that mitigates the most important limitations of null hypothesis significance testing is to replace the nil null hypothesis with the test of a range prediction (by specifying a non-nil null hypothesis) in an interval hypothesis test. To illustrate the difference, Panel A in Figure 9.1 visualizes the results that are predicted in a two-sided null hypothesis test with a nil hypothesis, where the test examines whether an effect of 0 can be rejected. Panel B shows an interval hypothesis where an effect between 0.5 and 2.5 is predicted, where the non-nil null hypothesis consists of values smaller than 0.5 or larger than 2.5, and the interval hypothesis test examines whether values in these ranges can be rejected. Panel C illustrates an equivalence test, which is basically identical to an interval hypothesis test, but the predicted effects are located in a range around 0, and contain effects that are deemed too small to be meaningful. When an equivalence test is reversed, such that a researcher designs a study to reject effects less extreme than a smallest effect size of interest (see Panel D in Figure 9.1), it is called a minimum effect test.
A researcher might not just be interested in rejecting an effect of 0 (as in a null hypothesis significance test) but in rejecting a range of effects that are too small to be meaningful. All else equal, a study designed to have high power for a minimum effect requires more observations than if the goal had been to reject an effect of zero. As the confidence interval needs to reject a value that is closer to the observed effect size (e.g., 0.1 instead of 0) it needs to be narrower, which requires more observations. One benefit of a minimum effect test compared to a null hypothesis test is that there is no distinction between statistical significance and practical significance. As the test value is chosen to represent the minimum effect of interest, whenever it is rejected, the effect is both statistically and practically significant. Another benefit of minimum effect tests is that, especially in correlational studies in the social sciences, variables are often connected through causal structures that result in real but theoretically uninteresting nonzero correlations between variables, which has been labeled the 'crud factor'. Because an effect of zero is unlikely to be true in large correlational datasets, rejecting a nil null hypothesis is not a severe test. Even if the hypothesis is incorrect, it is likely that an effect of 0 will be rejected due to 'crud'. For this reason, some researchers have suggested testing against a minimum effect of r = 0.1, as correlations below this threshold are quite common due to theoretically irrelevant correlations between variables. Figure 9.1 illustrates two-sided tests, but it is often more intuitive and logical to perform one-sided tests. In that case, a minimum effect test would, for example, aim to reject effects smaller than 0.1, and an equivalence test would aim to reject effects larger than for example 0.1.
Instead of specifying an upper and lower bound of a range, it is sufficient to specify a single value for one-sided tests. A final variation of a one-sided non-nil null hypothesis test is known as a test for non-inferiority, which examines if an effect is larger than the lower bound of an equivalence range. Such a test is for example performed when a novel intervention should not be noticeably worse than an existing intervention, but it can be a tiny bit worse. For example, if a difference between a novel and existing intervention is not smaller than -0.1, and effects smaller than -0.1 can be rejected, one can conclude an effect is non-inferior. We see that extending nil null hypothesis tests to non-nil null hypotheses allows researchers to ask questions that might be more interesting.

## 9.1 Equivalence tests

Equivalence tests were first developed in pharmaceutical sciences and later formalized as the two one-sided tests (TOST) approach to equivalence testing. The TOST procedure entails performing two one-sided tests to examine whether the observed data is surprisingly larger than a lower equivalence boundary ($$\Delta_{L}$$), or surprisingly smaller than an upper equivalence boundary ($$\Delta_{U}$$): $t_{L} = \frac{{\overline{M}}_{1} - {\overline{M}}_{2} - \Delta_{L}}{\sigma\sqrt{\frac{1}{n_{1}} + \frac{1}{n_{2}}}}$ and $t_{U} = \frac{{\overline{M}}_{1} - {\overline{M}}_{2}{- \Delta}_{U}}{\sigma\sqrt{\frac{1}{n_{1}} + \frac{1}{n_{2}}}}$ where M indicates the means of each sample, n is the sample size, and σ is the pooled standard deviation: $\sigma = \sqrt{\frac{\left( n_{1} - 1 \right)\text{sd}_{1}^{2} + \left( n_{2} - 1 \right)\text{sd}_{2}^{2}}{n_{1} + \ n_{2} - 2}}$ If both one-sided tests are significant, we can reject the presence of effects large enough to be meaningful. The formulas are highly similar to the normal formula for the t-statistic.
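As a concrete illustration, the two one-sided t statistics above can be computed from summary statistics in a few lines. This is a minimal Python sketch (the function name tost_t is mine), using the pooled standard deviation from the formula above; exact p-values would additionally require a t-distribution CDF (e.g., scipy.stats.t.sf):

```python
import math

def tost_t(m1, m2, sd1, sd2, n1, n2, low, high):
    """Two one-sided t statistics for the equivalence range [low, high]."""
    # Pooled standard deviation, as defined above (Student's version)
    sd = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    se = sd * math.sqrt(1 / n1 + 1 / n2)
    t_lower = (m1 - m2 - low) / se   # reject H0: difference <= low when large
    t_upper = (m1 - m2 - high) / se  # reject H0: difference >= high when small
    return t_lower, t_upper, n1 + n2 - 2  # df for Student's t

# Using the mood example discussed below, equivalence range [-0.5, 0.5]:
t_l, t_u, df = tost_t(4.55, 4.87, 1.05, 1.11, 15, 15, -0.5, 0.5)
print(t_l, t_u, df)  # t_lower ≈ 0.456, t_upper ≈ -2.078, df = 28
```

With equal group sizes these t statistics coincide with the Welch-based values that TOSTER reports (0.456 and -2.079); only the degrees of freedom differ slightly.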
The difference between a NHST t-test and the TOST procedure is that the lower equivalence boundary $$\Delta_{L}$$ and the upper equivalence boundary $$\Delta_{U}$$ are subtracted from the mean difference between groups (in a normal t-test, we compare the mean difference against 0, and thus the delta drops out of the formula because it is 0). To perform an equivalence test, you don't need to learn any new statistical tests, as it is just the well-known t-test against a different value than 0. It is somewhat surprising that the use of t-tests to perform equivalence tests is not taught alongside their use in null hypothesis significance tests, as there is some indication that this could prevent common misunderstandings of p-values. Let's look at an example of an equivalence test using the TOST procedure. In a study where researchers are manipulating fatigue by asking participants to carry heavy boxes around, the researchers want to ensure the manipulation does not inadvertently alter participants’ moods. The researchers assess positive and negative emotions in both conditions, and want to claim there are no differences in positive mood. Let’s assume that positive mood in the experimental fatigue condition ($$m_1$$ = 4.55, $$sd_1$$ = 1.05, $$n_1$$ = 15) did not differ from the mood in the control condition ($$m_2$$ = 4.87, $$sd_2$$ = 1.11, $$n_2$$ = 15). The researchers conclude: “Mood did not differ between conditions, t = -0.81, p = .42”. Of course, mood did differ between conditions, as 4.55 - 4.87 = -0.32. The claim is that there was no meaningful difference in mood, but to make such a claim in a correct manner, we first need to specify which difference in mood is large enough to be meaningful. For now, let's assume the researchers consider any effect less extreme than half a scale point too small to be meaningful. We now test if the observed mean difference of -0.32 is small enough such that we can reject the presence of effects that are large enough to matter.
The TOSTER package (originally created by myself but recently redesigned by Aaron Caldwell) can be used to plot two t-distributions and their critical regions indicating when we can reject the presence of effects smaller than -0.5 and larger than 0.5. It can take some time to get used to the idea that we are rejecting values more extreme than the equivalence bounds. Try to consistently ask in any hypothesis test: Which values can the test reject? In a nil null hypothesis test, we can reject an effect of 0, and in the equivalence test in the Figure below, we can reject values lower than -0.5 and higher than 0.5. In Figure 9.2 we see two t-distributions centered on the upper and lower bound of the specified equivalence range (-0.5 and 0.5). Below the two curves we see a line that represents the confidence interval ranging from -0.99 to 0.35, and a dot on the line that indicates the observed mean difference of -0.32. Let's first look at the left curve. We see the green highlighted area in the tails that highlights which observed mean differences would be extreme enough to statistically reject an effect of -0.5. Our observed mean difference of -0.32 lies very close to -0.5, and if we look at the left distribution, the mean is not far enough away from -0.5 to fall in the green area that indicates when observed differences would be statistically significant. We can also perform the equivalence test using the TOSTER package, and look at the results. 
    TOSTER::tsum_TOST(m1 = 4.55, m2 = 4.87, sd1 = 1.05, sd2 = 1.11,
                      n1 = 15, n2 = 15, low_eqbound = -0.5, high_eqbound = 0.5)

    ##
    ## Welch Modified Two-Sample t-Test
    ##
    ## The equivalence test was non-significant, t(27.91) = 0.456, p = 3.26e-01
    ## The null hypothesis test was non-significant, t(27.91) = -0.811, p = 4.24e-01
    ## NHST: don't reject null significance hypothesis that the effect is equal to zero
    ## TOST: don't reject null equivalence hypothesis
    ##
    ## TOST Results
    ##                  t    df p.value
    ## t-test     -0.8111 27.91   0.424
    ## TOST Lower  0.4563 27.91   0.326
    ## TOST Upper -2.0785 27.91   0.023
    ##
    ## Effect Sizes
    ##                Estimate     SE              C.I. Conf. Level
    ## Raw             -0.3200 0.3945 [-0.9912, 0.3512]         0.9
    ## Hedges's g(av)  -0.2881 0.3930 [-0.8733, 0.3021]         0.9
    ## Note: SMD confidence intervals are an approximation. See vignette("SMD_calcs").

In the line 't-test' the output shows the traditional nil null hypothesis significance test (which we already knew was not statistically significant: t = -0.81, p = 0.42). Just like the default t-test in R, the tsum_TOST function will by default calculate Welch’s t-test (instead of Student’s t-test), which is a better default, but you can request Student’s t-test by adding var.equal = TRUE as an argument to the function. We also see a test indicated by TOST Lower. This is the first one-sided test examining if we can reject effects lower than -0.5. From the test result, we see this is not the case: t = 0.46, p = 0.33. This is an ordinary t-test, just against an effect of -0.5. Because we cannot reject differences more extreme than -0.5, it is possible that a difference we consider meaningful (e.g., a difference of -0.60) is present. When we look at the one-sided test against the upper bound of the equivalence range (0.5) we see that we can statistically reject the presence of mood effects larger than 0.5, as in the line TOST Upper we see t = -2.08, p = 0.02.
Our final conclusion is therefore that, even though we can reject effects more extreme than 0.5 based on the observed mean difference of -0.32, we cannot reject effects more extreme than -0.5. Therefore, we cannot completely reject the presence of meaningful mood effects. As the data does not allow us to claim the effect is different from 0, nor that the effect is, if anything, too small to matter (based on an equivalence range from -0.5 to 0.5), the data are inconclusive. We cannot distinguish between a Type 2 error (there is an effect, but in this study we just did not detect it) or a true negative (there really is no effect large enough to matter). Note that because we fail to reject the one-sided test against the lower equivalence bound, the possibility remains that there is a true effect size that is large enough to be considered meaningful. This statement is true, even when the effect size we have observed (-0.32) is closer to zero than to the equivalence bound of -0.5. One might think the observed effect size needs to be more extreme (i.e., < -0.5 or > 0.5) than the equivalence bound to maintain the possibility that there is an effect that is large enough to be considered meaningful. But that is not required. The 90% CI indicates that some values below -0.5 cannot be rejected. As we can expect that 90% of confidence intervals in the long run capture the true population parameter, it is perfectly possible that the true effect size is more extreme than -0.5. And, the effect might even be more extreme than the values captured by this confidence interval, as 10% of the time, the computed confidence interval is expected to not contain the true effect size. Therefore, when we fail to reject the smallest effect size of interest, we retain the possibility that an effect of interest exists. 
If we can reject the nil null hypothesis, but fail to reject values more extreme than the equivalence bounds, then we can claim there is an effect, and it might be large enough to be meaningful. One way to reduce the probability of an inconclusive result is to collect sufficient data. Let's imagine the researchers had not collected 15 participants in each condition, but 200 participants. They otherwise observe exactly the same data. As explained in the chapter on confidence intervals, as the sample size increases, the confidence interval becomes narrower. For a TOST equivalence test to be able to reject both the upper and lower bound of the equivalence range, the confidence interval needs to fall completely within the equivalence range. In Figure 9.3 we see the same result as in Figure 9.2, but now if we had collected 200 observations. Because of the larger sample size, the confidence interval is narrower than when we collected 15 participants. We see that the 90% confidence interval around the observed mean difference now excludes both the upper and lower equivalence bound. This means that we can now reject effects outside of the equivalence range (even though barely, with a p = 0.048 as the one-sided test against the lower equivalence bound is only just statistically significant).

    ##
    ## Welch Modified Two-Sample t-Test
    ##
    ## The equivalence test was significant, t(396.78) = 1.666, p = 4.82e-02
    ## The null hypothesis test was significant, t(396.78) = -2.962, p = 3.24e-03
    ## NHST: reject null significance hypothesis that the effect is equal to zero
    ## TOST: reject null equivalence hypothesis
    ##
    ## TOST Results
    ##                  t    df p.value
    ## t-test      -2.962 396.8   0.003
    ## TOST Lower   1.666 396.8   0.048
    ## TOST Upper  -7.590 396.8 < 0.001
    ##
    ## Effect Sizes
    ##                Estimate    SE               C.I. Conf. Level
    ## Raw             -0.3200 0.108 [-0.4981, -0.1419]         0.9
    ## Hedges's g(av)  -0.2956 0.104 [-0.4605, -0.1304]         0.9
    ## Note: SMD confidence intervals are an approximation. See vignette("SMD_calcs").
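The decision rule just described — the 90% confidence interval must fall entirely inside the equivalence range — can be sketched in a few lines of Python (function names are mine; the critical t values 1.701 and 1.649 are approximations for the Welch degrees of freedom of roughly 27.9 and 396.8 reported in the output):

```python
import math

def welch_ci(m1, m2, sd1, sd2, n1, n2, t_crit):
    """Confidence interval for a mean difference using the Welch standard error."""
    se = math.sqrt(sd1**2 / n1 + sd2**2 / n2)
    d = m1 - m2
    return (d - t_crit * se, d + t_crit * se)

def tost_significant(ci, low, high):
    """TOST rejects when the CI lies completely inside the equivalence range."""
    return low < ci[0] and ci[1] < high

ci_15  = welch_ci(4.55, 4.87, 1.05, 1.11, 15, 15, t_crit=1.701)    # 90% CI, n = 15
ci_200 = welch_ci(4.55, 4.87, 1.05, 1.11, 200, 200, t_crit=1.649)  # 90% CI, n = 200
print(ci_15, tost_significant(ci_15, -0.5, 0.5))    # ≈ (-0.99, 0.35), False
print(ci_200, tost_significant(ci_200, -0.5, 0.5))  # ≈ (-0.50, -0.14), True
```

The two intervals reproduce the TOSTER output above: with 15 participants per group the interval spills past -0.5 (inconclusive), while with 200 participants its lower end is -0.498, just inside the bound, matching the barely significant p = 0.048.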
In Figure 9.4 we see the same results, but now visualized as a confidence density plot, which is a graphical summary of the distribution of confidence. A confidence density plot allows you to see which effects can be rejected with different confidence interval widths. We see the bounds of the green area (corresponding to a 90% confidence interval) fall inside the equivalence bounds. Thus, the equivalence test is statistically significant, and we can statistically reject the presence of effects outside the equivalence range. We can also see that the 95% confidence interval excludes 0, and therefore, a traditional null hypothesis significance test is also statistically significant. In other words, both the null hypothesis test and the equivalence test have yielded significant results. This means we can claim that the observed effect is statistically different from zero, and that the effect is statistically smaller than effects we deemed large enough to matter when we specified the equivalence range from -0.5 to 0.5. This illustrates how combining equivalence tests and nil null hypothesis tests can prevent us from mistaking statistically significant effects for practically significant effects. In this case, with 200 participants, we can reject an effect of 0, but the effect, if any, is not large enough to be meaningful.

## 9.2 Reporting Equivalence Tests

It is common practice to only report the test yielding the higher p-value of the two one-sided tests when reporting an equivalence test. Because both one-sided tests need to be statistically significant to reject the null hypothesis in an equivalence test (i.e., the presence of effects large enough to matter), when the one-sided test with the larger p-value rejects its equivalence bound, so does the other test.
Unlike in null hypothesis significance tests it is not common to report standardized effect sizes for equivalence tests, but there can be situations where researchers might want to discuss how far the effect is removed from the equivalence bounds on the raw scale. Avoid erroneous interpretations, such as claiming there is 'no effect', that an effect is 'absent', that the true effect size is 'zero', or vague verbal descriptions, such as stating that two groups yielded 'similar' or 'comparable' data. A significant equivalence test rejects effects more extreme than the equivalence bounds. Smaller true effects have not been rejected, and thus it remains possible that there is a true effect. Because a TOST procedure is a frequentist test based on a p-value, all other misconceptions of p-values should be prevented as well. When summarizing the main result of an equivalence test, for example in an abstract, always report the equivalence range that the data is tested against. Reading 'based on an equivalence test we concluded the absence of a meaningful effect' means something very different if the equivalence bounds were d = -0.9 to 0.9 than when the bounds were d = -0.2 to d = 0.2. So instead, write 'based on an equivalence test with an equivalence range of d = -0.2 to 0.2, we conclude the absence of an effect we deemed meaningful'. Of course, whether peers agree you have correctly concluded the absence of a meaningful effect depends on whether they agree with your justification for a smallest effect of interest! A more neutral conclusion would be a statement such as: 'based on an equivalence test, we rejected the presence of effects more extreme than -0.2 to 0.2, so we can act (with an error rate of alpha) as if the effect, if any, is less extreme than our equivalence range'. Here, you do not use value-laden terms such as 'meaningful'.
If both a null hypothesis test and an equivalence test are non-significant, the finding is best described as 'inconclusive': There is not enough data to reject the null, or the smallest effect size of interest. If both the null hypothesis test and the equivalence test are statistically significant, you can claim there is an effect, but at the same time claim the effect is too small to be of interest (given your justification for the equivalence range). Equivalence bounds can be specified in raw effect sizes, or in standardized mean differences. It is better to specify the equivalence bounds in terms of raw effect sizes. Setting them in terms of Cohen's d leads to bias in the statistical test, as the observed standard deviation has to be used to translate the specified Cohen's d into a raw effect size for the equivalence test (and when you set equivalence bounds in standardized mean differences, TOSTER will warn: "Warning: setting bound type to SMD produces biased results!"). The bias is in practice not too problematic in any single equivalence test, and being able to specify the equivalence bounds in standardized mean differences lowers the threshold to perform an equivalence test for researchers who do not know the standard deviation of their measure. But as equivalence testing becomes more popular, and fields establish smallest effect sizes of interest, they should do so in raw effect size differences, not in standardized effect size differences.

## 9.3 Minimum Effect Tests

If a researcher has specified a smallest effect size of interest, and is interested in testing whether the effect in the population is larger than this smallest effect of interest, a minimum effect test can be performed. As with any hypothesis test, we can reject the smallest effect of interest whenever the confidence interval around the observed effect does not overlap with it.
In the case of a minimum effect test, however, the confidence interval should fall completely beyond the smallest effect size of interest. For example, let's assume a researcher performs a minimum effect test with 200 observations per condition against a smallest effect size of interest of a mean difference of 0.5.

    ##
    ## Welch Modified Two-Sample t-Test
    ##
    ## The minimal effect test was significant, t(396.78) = 12.588, p = 4.71e-04
    ## The null hypothesis test was significant, t(396.78) = 7.960, p = 1.83e-14
    ## NHST: reject null significance hypothesis that the effect is equal to zero
    ## TOST: reject null MET hypothesis
    ##
    ## TOST Results
    ##                  t    df p.value
    ## t-test       7.960 396.8 < 0.001
    ## TOST Lower  12.588 396.8       1
    ## TOST Upper   3.332 396.8 < 0.001
    ##
    ## Effect Sizes
    ##                Estimate    SE             C.I. Conf. Level
    ## Raw              0.8600 0.108 [0.6819, 1.0381]         0.9
    ## Hedges's g(av)   0.7945 0.125 [0.6234, 0.9646]         0.9
    ## Note: SMD confidence intervals are an approximation. See vignette("SMD_calcs").

Below the two curves we again see a line that represents the confidence interval ranging from 0.68 to 1.04, and a dot on the line that indicates the observed mean difference of 0.86. The entire confidence interval lies well above the minimum effect of 0.5, and we can therefore not just reject the nil null hypothesis, but also effects smaller than the minimum effect of interest. Therefore, we can claim that the effect is large enough to be not just statistically significant, but also practically significant (as long as we have justified our smallest effect size of interest well). Because we have performed a two-sided minimum effect test, the minimum effect test would also have been significant if the confidence interval had been completely on the opposite side of -0.5. Earlier we discussed how combining traditional NHST and an equivalence test could lead to more informative results. It is also possible to combine a minimum effect test and an equivalence test.
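Before turning to that combination, the decision rule for a two-sided minimum effect test just illustrated — the whole confidence interval must lie beyond the smallest effect size of interest, on either side — can be sketched as follows (a hedged Python illustration; the function name is mine):

```python
def minimum_effect_significant(ci_low, ci_high, sesoi):
    """Two-sided minimum effect test: the entire CI must lie beyond +sesoi,
    or entirely beyond -sesoi (sesoi = smallest effect size of interest)."""
    return ci_low > sesoi or ci_high < -sesoi

# 90% CI from the example above, [0.68, 1.04], against a SESOI of 0.5:
print(minimum_effect_significant(0.68, 1.04, 0.5))   # True: reject effects < 0.5
# A CI straddling the SESOI is not significant:
print(minimum_effect_significant(-0.10, 0.40, 0.5))  # False
```

The second branch of the rule captures the remark above that the test would also have been significant had the interval fallen completely below -0.5.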
One might even say that such a combination is the most informative test of a prediction whenever a smallest effect size of interest can be specified. In principle, this is true. As long as we are able to collect enough data, we will always get an informative and straightforward answer when we combine a minimum effect test with an equivalence test: Either we can reject all effects that are too small to be of interest, or we can reject all effects that are large enough to be of interest. As we will see below in the section on power analysis for interval hypotheses, whenever the true effect size is close to the smallest effect size of interest, a large amount of observations will need to be collected. And if the true effect size happens to be identical to the smallest effect size of interest, neither the minimum effect test nor the equivalence test can be correctly rejected (and any significant test would be a Type 1 error). If a researcher can collect sufficient data (so that the test has high statistical power), and is relatively confident that the true effect size will be larger or smaller than the smallest effect of interest, then the combination of a minimum effect test and an equivalence test can be attractive as such a hypothesis test is likely to yield an informative answer to the research question.

## 9.4 Power Analysis for Interval Hypothesis Tests

When designing a study it is a sensible strategy to always plan for both the presence and the absence of an effect. Several scientific journals require a sample size justification for Registered Reports where the statistical power to reject the null hypothesis is high, but where the study is also capable of demonstrating the absence of an effect, for example by also performing a power analysis for an equivalence test.
As we saw in the chapter on error control and likelihoods, null results are to be expected, and if you only think about the possibility of observing a null effect when the data has been collected, it is often too late. The statistical power for interval hypotheses depends on the alpha level, the sample size, the smallest effect of interest you decide to test against, and the true effect size. For an equivalence test, it is common to perform a power analysis assuming the true effect size is 0, but this might not always be realistic. The closer the expected effect size is to the smallest effect size of interest, the larger the sample size needed to reach a desired power. Don't be tempted to assume a true effect size of 0, if you have good reason to expect a small but non-zero true effect size. The sample size that the power analysis indicates you need to collect might be smaller, but in reality you also have a higher probability of an inconclusive result. Earlier versions of TOSTER only enabled researchers to perform power analyses for equivalence tests assuming a true effect size of 0, but a new power function by Aaron Caldwell allows users to specify delta, the expected effect size. Assume a researcher desires to achieve 90% power for an equivalence test with an equivalence range from -0.5 to 0.5, with an alpha level of 0.05, and assuming a population effect size of 0. A power analysis for an equivalence test can be performed to examine the required sample size.

    TOSTER::power_t_TOST(power = 0.9, delta = 0, alpha = 0.05,
                         type = "two.sample", low_eqbound = -0.5, high_eqbound = 0.5)

    ##
    ## Two-sample TOST power calculation
    ##
    ## power = 0.9
    ## beta = 0.1
    ## alpha = 0.05
    ## n = 87.26261
    ## delta = 0
    ## sd = 1
    ## bounds = -0.5, 0.5
    ##
    ## NOTE: n is number in *each* group

We see that the required sample size is 88 participants in each condition for the independent t-test.
Let's compare this power analysis to a situation where the researcher expects a true effect of d = 0.1 instead of a true effect of 0. To be able to reliably reject effects larger than 0.5, we will need a larger sample size, just as we need a larger sample size for a null hypothesis test powered to detect d = 0.4 than for one powered to detect d = 0.5.

TOSTER::power_t_TOST(power = 0.9, delta = 0.1, alpha = 0.05, type = "two.sample", low_eqbound = -0.5, high_eqbound = 0.5)

##
##     Two-sample TOST power calculation
##
##          power = 0.9
##           beta = 0.1
##          alpha = 0.05
##              n = 108.9187
##          delta = 0.1
##             sd = 1
##         bounds = -0.5, 0.5
##
## NOTE: n is number in *each* group

We see the sample size has now increased to 109 participants in each condition. As mentioned before, it is not necessary to perform a two-sided equivalence test; it is also possible to perform a one-sided equivalence test. An example of a situation where such a directional test is appropriate is a replication study. If a previous study observed an effect of d = 0.48 and you perform a replication study, you might decide to consider any effect smaller than d = 0.2 a failure to replicate, including any effect in the opposite direction, such as an effect of d = -0.3. Although most software for equivalence tests requires you to specify an upper and a lower bound for the equivalence range, you can mimic a one-sided test by setting the equivalence bound in the direction you want to ignore to a value so extreme that the one-sided test against it will always be statistically significant. This can also be used to perform a power analysis for a minimum effect test, where one bound is the minimum effect of interest, and the other bound is set to an extreme value on the other side of the expected effect size. In the power analysis for an equivalence test below, the lower bound is set to -5 (it should be set low enough that lowering it even further has no noticeable effect).
We see that the new power function in the TOSTER package takes the directional prediction into account. Just as with directional predictions in a nil null hypothesis test, a directional prediction in an equivalence test is more efficient, and only 70 observations per group are needed to achieve 90% power.

# The lower bound is set to -5 to mimic a one-sided equivalence test.
TOSTER::power_t_TOST(power = 0.9, delta = 0, alpha = 0.05, type = "two.sample", low_eqbound = -5, high_eqbound = 0.5)

##
##     Two-sample TOST power calculation
##
##          power = 0.9
##           beta = 0.1
##          alpha = 0.05
##              n = 69.19784
##          delta = 0
##             sd = 1
##         bounds = -5.0, 0.5
##
## NOTE: n is number in *each* group

Statistical software offers options for power analyses for some statistical tests, but not for all. Just as with power analysis for a nil null hypothesis test, it can be necessary to use a simulation-based approach to power analysis.

9.5 The Bayesian ROPE procedure

In Bayesian estimation, one way to argue for the absence of a meaningful effect is the region of practical equivalence (ROPE) procedure (Kruschke (2013)), which is "somewhat analogous to frequentist equivalence testing" (Kruschke & Liddell (2017)). In the ROPE procedure, an equivalence range is specified, just as in equivalence testing, but the Bayesian highest density interval (HDI) based on a posterior distribution (as explained in the chapter on Bayesian statistics) is used instead of the confidence interval. If the prior used by Kruschke were perfectly uniform, and the ROPE procedure and an equivalence test used the same interval width (e.g., 90%), the two tests would yield identical results; there would only be philosophical differences in how the numbers are interpreted. The BEST package in R that can be used to perform the ROPE procedure uses a 'broad' prior by default, and therefore the results of the ROPE procedure and an equivalence test are not exactly the same, but they are very close.
One might even argue the two tests are 'practically equivalent'. When random normally distributed data for two conditions are generated (with means of 0 and a standard deviation of 1) and both the ROPE procedure and a TOST equivalence test are performed, the 90% HDI ranges from -0.06 to 0.39, with an estimated mean (based on the prior and the data) of 0.164. The HDI falls completely between the upper and lower bounds of the equivalence range, and therefore values more extreme than -0.5 or 0.5 are deemed implausible. The 95% CI ranges from -0.07 to 0.36, with an observed mean difference of 0.15. The numbers are not identical, because in Bayesian estimation the observed values are combined with a prior, and the mean estimate is not purely based on the data. But the results are very similar, and will in most cases lead to similar inferences. The BEST R package also enables researchers to perform simulation-based power analyses, which take a long time but, when using a broad prior, yield a result that is basically identical to the sample size from a power analysis for an equivalence test. The biggest benefit of ROPE over TOST is that it allows you to incorporate prior information. If you have reliable prior information, ROPE can use it, which is especially useful if you don't have a lot of data. If you use informed priors, check the robustness of the posterior against reasonable changes in the prior in sensitivity analyses.

9.6 Which interval width should be used?

Because the TOST procedure is based on two one-sided tests, a 90% confidence interval is used when the one-sided tests are performed at an alpha level of 5%.
Because both the test against the upper bound and the test against the lower bound need to be statistically significant to declare equivalence (which, as explained in the chapter on error control, is an intersection-union approach to multiple testing), it is not necessary to correct for the fact that two tests are performed. If the alpha level is adjusted for multiple comparisons, or if the alpha level is justified instead of relying on the default 5% level (or both), the corresponding confidence interval should be used, where CI = 100 - (2 * α)%. Thus, the width of the confidence interval is directly related to the choice of alpha level, as we decide whether or not to reject the smallest effect size of interest based on whether the confidence interval excludes the effect that is tested against. When using a highest density interval from a Bayesian perspective, as in the ROPE procedure, the choice of interval width does not follow logically from a desired error rate or any other principle. Kruschke (2014) writes: "How should we define 'reasonably credible'? One way is by saying that any points within the 95% HDI are reasonably credible." McElreath (2016) has recommended the use of 67%, 89%, and 97%, because "No reason. They are prime numbers, which makes them easy to remember." Both suggestions lack a solid justification. As Gosset (publishing as Student) observed in 1904:

Results are only valuable when the amount by which they probably differ from the truth is so small as to be insignificant for the purposes of the experiment. What the odds selected should be depends:

1. On the degree of accuracy which the nature of the experiment allows, and
2. On the importance of the issues at stake.

There are only two principled solutions.
First, if a highest density interval width is used to make claims, these claims will be made with certain error rates, and researchers should quantify the risk of erroneous claims by computing the frequentist error rates. This would make the ROPE procedure a Bayesian/frequentist compromise: the computation of a posterior distribution allows for Bayesian interpretations of which parameter values are believed to be most probable, while decisions based on whether or not the HDI falls within an equivalence range have a formally controlled error rate. Note that when an informative prior is used, an HDI does not match a CI, and the error rate when using an HDI can then only be derived through simulations. The second solution is to not make any claims, present the full posterior distribution, and let readers draw their own conclusions.

9.7 Setting the Smallest Effect Size of Interest

To be able to falsify our predictions with an equivalence test, we need to specify which observed values would be too small to be predicted by our theory. We can never say that an effect is exactly zero, but we can examine whether observed effects are too small to be theoretically or practically interesting. This requires that we specify the smallest effect size of interest (SESOI). The same concept goes by many names, such as the minimal important difference or the clinically significant difference. Take a moment to think about the smallest effect size you would still consider theoretically or practically meaningful for the next study you are designing. It might be difficult to determine, and the question of what your smallest effect size of interest is might be something you have never really thought about before. However, determining your smallest effect size of interest has important practical benefits.
First, if researchers in a field are able to specify which effects would be too small to matter, it becomes very straightforward to power a study for the effects that are meaningful. The second benefit of specifying the smallest effect size of interest is that it makes your study falsifiable. Having your predictions falsified by someone else might not feel great personally, but it is quite useful for science as a whole. After all, if there is no way a prediction can be wrong, why would anyone be impressed when the prediction is right? To start thinking about which effect sizes matter, ask yourself whether any effect in the predicted direction would actually support the alternative hypothesis. For example, would an effect size of Cohen's d = 10 support your hypothesis? In psychology, it should be rare that a theory predicts such a huge effect, and if you observed a d = 10, you would probably check for either a computation error or a confound in the study. On the other end of the scale, would an effect of d = 0.001 be in line with the theoretically proposed mechanism? Such an effect is incredibly small, and well below what an individual would notice, as it falls below the just-noticeable difference given perceptual and cognitive limitations. A d = 0.001 would therefore in most cases lead researchers to conclude: "Well, this is really too small to be something that my theory has predicted, and such a small effect is practically equivalent to the absence of an effect." However, when we make a directional prediction, we say that these types of effects are all part of our alternative hypothesis. Even though many researchers would agree such tiny effects are too small to matter, they still officially count as support for our alternative hypothesis if we test a directional prediction against a nil null hypothesis.
Furthermore, researchers rarely have the resources to statistically reject the presence of effects this small, so the claim that such effects would still support a theoretical prediction makes the theory practically unfalsifiable: a researcher could simply respond to any replication study showing a non-significant small effect (e.g., d = 0.05) by saying "That does not falsify my prediction. I suppose the effect is just a bit smaller than d = 0.05", without ever having to admit the prediction is falsified. This is problematic, because without a process of replication and falsification, a scientific discipline risks a slide towards the unfalsifiable. So whenever possible, when you design an experiment or derive a prediction from a theory, carefully think about, and clearly state, what the smallest effect size of interest is.

9.8 Specifying a SESOI based on theory

One example of a theoretically predicted smallest effect size of interest can be found in the study by Burriss et al. (2015), who examined whether women display increased redness in the face during the fertile phase of their ovulatory cycle. The hypothesis was that a slightly redder skin signals greater attractiveness and physical health, and that sending this signal to men yields an evolutionary advantage. This hypothesis presupposes that men can detect the increase in redness with the naked eye. Burriss et al. collected data from 22 women and showed that the redness of their facial skin indeed increased during their fertile period. However, this increase was not large enough for men to detect with the naked eye, so the hypothesis was falsified. Because the just-noticeable difference in skin redness can be measured, it was possible to establish a theoretically motivated SESOI.
A theoretically motivated smallest effect size of interest can thus be derived from just-noticeable differences, which provide a lower bound on effect sizes that can influence individuals, or from computational models, which can provide a lower bound on parameters in the model that are still able to explain observed findings in the empirical literature.

9.9 Anchor-based methods to set a SESOI

Building on the idea of a just-noticeable difference, psychologists are often interested in effects that are large enough to be noticed by single individuals. One procedure to estimate what constitutes a meaningful change on an individual level is the anchor-based method. Measurements are collected at two time points (e.g., a quality of life measure before and after treatment). At the second time point, an independent measure (the anchor) is used to determine whether individuals show no change compared to time point 1, or whether they have improved or worsened. Often, patients are directly asked to answer the anchor question, indicating whether they subjectively feel the same, better, or worse at time point 2 compared to time point 1. Button et al. (2015) used an anchor-based method to estimate that a minimal clinically important difference on the Beck Depression Inventory corresponds to a 17.5% reduction in scores from baseline. Anvari and Lakens (2021) applied the anchor-based method to examine the smallest effect of interest as measured by the widely used Positive and Negative Affect Scale (PANAS). Participants completed the 20-item PANAS at two time points several days apart (using a Likert scale from 1 = "very slightly or not at all" to 5 = "extremely"). At the second time point they were also asked to indicate whether their affect had changed a little, a lot, or not at all. When people indicated their affect had changed "a little", the average change was 0.26 scale points for positive affect and 0.28 scale points for negative affect.
Thus, an intervention to improve people's affective state that should lead to at least what individuals subjectively consider a little improvement might set the SESOI at 0.3 units on the PANAS.

9.10 Specifying a SESOI based on a cost-benefit analysis

Another principled approach to justifying a smallest effect size of interest is to perform a cost-benefit analysis. Research shows that cognitive training may improve mental abilities in older adults, which might benefit older drivers. Based on these findings, Viamonte, Ball, and Kilgore (2006) performed a cost-benefit analysis and concluded that, based on the cost of the intervention ($247.50), the probability of an accident for drivers older than 75 (p = 0.0710), and the cost of an accident ($22,000), performing the intervention on all drivers aged 75 or older was more efficient than not intervening or intervening only after a screening test. Furthermore, sensitivity analyses revealed that intervening for all drivers would remain beneficial as long as the reduction in collision risk is at least 25%. Therefore, a 25% reduction in the probability that drivers over 75 get into a car accident could be set as the smallest effect size of interest. For another example, economists have estimated the value of a statistical life, based on the willingness to pay to reduce the risk of death, at $1.5 to $2.5 million (in the year 2000, in western countries; see Mrozek & Taylor (2002)). Building on this work, Abelson (2003) calculated the willingness to pay to prevent acute health issues such as eye irritation at about $40 to $50 per day. A researcher might be examining a psychological intervention that reduces how often people touch their face close to their eyes, thereby reducing eye irritations caused by bacteria. If the intervention costs $20 per year to administer, it should therefore reduce the average number of days with eye irritation in the population by at least 0.5 days for the intervention to be worth its cost.
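The break-even arithmetic in the eye-irritation example above can be sketched in a few lines of R. The numbers are the illustrative values from the text, not empirical estimates:

```r
# Cost-benefit break-even: the intervention is worth its cost when the
# value of the prevented days of eye irritation equals the price of the
# intervention (illustrative numbers from the example above).
intervention_cost <- 20  # cost of the intervention, in dollars per year
value_per_day <- 40      # lower-bound willingness to pay per prevented day

break_even_days <- intervention_cost / value_per_day
break_even_days
## [1] 0.5
```

The same logic generalizes: the required reduction in the outcome is simply the cost of the intervention divided by the monetary value of a one-unit improvement.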
A cost-benefit analysis can also weigh the resources required to empirically study a very small effect against the value this knowledge would have for the scientific community.

9.11 Specifying the SESOI using the small telescopes approach

Ideally, researchers who publish empirical claims would always specify which observations would falsify their claim. Regrettably, this is not yet common practice. This is particularly problematic when a researcher performs a close replication of earlier work. Because it is never possible to prove that an effect is exactly zero, and the original authors seldom specify which range of effect sizes would falsify their hypotheses, it has proven very difficult to interpret the outcome of a replication study. When does the new data contradict the original finding? Consider a study in which you want to test the idea of the wisdom of crowds. You ask 20 people to estimate the number of coins in a jar, expecting the average to be very close to the true value. The research question is whether people can, on average, correctly guess the number of coins, which is 500. The observed mean guess by the 20 people is 550, with a standard deviation of 100. The observed difference from the true value is statistically significant, t(19) = 2.24, p = 0.0375, with a Cohen's d of 0.5. Can it really be that the group average is so far off? Is there no wisdom of crowds? Was there something special about the coins you used that made it especially difficult to guess their number? Or was it just a fluke? You set out to perform a close replication of this study. You want your study to be informative, regardless of whether there is an effect or not.
This means you need to design a replication study that will allow you to draw an informative conclusion both when the alternative hypothesis is true (the crowd does not accurately estimate the true number of coins) and when the null hypothesis is true (the crowd will on average guess 500 coins, and the original study was a fluke). But since the original researcher did not specify a smallest effect size of interest, when would a replication study allow you to conclude that the original study is contradicted by the new data? Observing a mean of exactly 500 might be considered quite convincing by some, but due to random variation you will (almost) never find a mean score of exactly 500. A non-significant result can't be interpreted as the absence of an effect, because the study might have too small a sample size to detect meaningful effects, and the result might be a Type 2 error. So how can we move forward and define an effect size that is meaningful? How can you design a study that has the ability to falsify a previous finding? Uri Simonsohn (2015) defines a small effect as "one that would give 33% power to the original study"; in other words, the effect size that would give the original study odds of 2:1 against observing a statistically significant result if there were a true effect. The idea is that if the original study had 33% power, the probability of observing a significant effect, if there was a true effect, is too low to reliably distinguish signal from noise (or situations where there is a true effect from situations where there is no true effect). Simonsohn (2015, p. 561) calls this the small telescopes approach, and writes: "Imagine an astronomer claiming to have found a new planet with a telescope. Another astronomer tries to replicate the discovery using a larger telescope and finds nothing.
Although this does not prove that the planet does not exist, it does nevertheless contradict the original findings, because planets that are observable with the smaller telescope should also be observable with the larger one." Although this approach to setting a smallest effect size of interest is arbitrary (why not 30% power, or 35%?), it suffices for practical purposes (and you are free to choose whichever power level you consider too low). The nice thing about this definition of a SESOI is that if you know the sample size of the original study, you can always calculate the effect size that study had 33% power to detect, so you can always use this approach to set a smallest effect size of interest. If you fail to find support for an effect size the original study had 33% power to detect, this does not mean there is no true effect, nor even that the effect is too small to be of any theoretical or practical interest. But the small telescopes approach is a good first step: it gets the conversation started about which effects are meaningful, and it allows researchers who want to replicate a study to specify when they would consider the original claim falsified. With the small telescopes approach, the SESOI is based only on the sample size of the original study, and a smallest effect size of interest is set only for effects in the same direction. All effects smaller than this effect (including large effects in the opposite direction) are interpreted as a failure to replicate the original results. The small telescopes approach is thus a one-sided equivalence test in which only the upper bound is specified, and the smallest effect size of interest is determined by the sample size of the original study. The test examines whether we can reject effects as large as, or larger than, the effect the original study had 33% power to detect. It is a simple one-sided test, not against 0, but against the SESOI.
For example, consider our study above in which 20 guessers tried to estimate the number of coins. The results were analyzed with a two-sided one-sample t-test, using an alpha level of 0.05. To determine the effect size that this study had 33% power to detect, we can perform a sensitivity analysis. In a sensitivity analysis we compute the required effect size given the alpha level, the sample size, and the desired statistical power. Note that Simonsohn uses a two-sided test in his power analyses, which we follow here; if the original study reported a pre-registered directional prediction, the power analysis should be based on a one-sided test. In this case, the alpha level is 0.05, the total sample size is 20, and the desired power is 33%. We compute the effect size that gives us 33% power and see that it is a Cohen's d of 0.358. This means we can set the smallest effect size of interest for the replication study to d = 0.358. If we can reject effects as large as or larger than d = 0.358, we can conclude that the effect is smaller than anything the original study had 33% power to detect. The screenshot below illustrates the correct settings in G*Power, and the code in R is:

library("pwr")
pwr::pwr.t.test(
  n = 20,
  sig.level = 0.05,
  power = 0.33,
  type = "one.sample",
  alternative = "two.sided"
)

##
##     One-sample t test power calculation
##
##              n = 20
##              d = 0.3577466
##      sig.level = 0.05
##          power = 0.33
##    alternative = two.sided

Determining the SESOI based on the effect size the original study had 33% power to detect has an additional convenient property. Imagine the true effect size is actually 0, and you perform a statistical test to see whether the data are statistically smaller than the SESOI based on the small telescopes approach (which is called an inferiority test). If you increase the sample size to 2.5 times that of the original study, you will have approximately 80% power for this one-sided equivalence test, assuming the true effect size is exactly 0 (i.e., d = 0).
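The approximate 80% power of this inferiority test can be checked with a sketch in R. When the true effect is 0, the power of the one-sided test against the SESOI of d = 0.358 equals the power of a one-sided one-sample t-test to detect d = 0.358, so the pwr package can be reused for the check (this is a sketch of the symmetry argument, not a function the book introduces):

```r
library("pwr")
# With a true effect of d = 0, the power of the one-sided inferiority test
# against d = 0.358 equals the power of a one-sided test to detect d = 0.358.
# With 2.5 times the original sample size (n = 50), power is close to 80%.
pwr::pwr.t.test(
  n = 50,              # 2.5 times the original n of 20
  d = 0.358,           # the SESOI from the small telescopes approach
  sig.level = 0.05,
  type = "one.sample",
  alternative = "greater"
)
```

The reported power is roughly 0.80, in line with the 2.5-times-n rule of thumb.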
People who perform a replication study can follow the small telescopes recommendations and very easily determine both the smallest effect size of interest and the sample size needed to design an informative replication study, assuming the true effect size is 0 (but see the section above for a-priori power analyses when you want to test for equivalence but do not expect a true effect size of 0). The figure below, from Simonsohn (2015), illustrates the small telescopes approach using a real-life example. The original study by Zhong and Liljenquist (2006) had a tiny sample size of 30 participants in each condition and observed an effect size of d = 0.53, which was barely statistically different from zero. Given a sample size of 30 per condition, the study had 33% power to detect effects larger than d = 0.401. This "small effect" is indicated by the green dashed line. In R, the smallest effect size of interest is calculated using:

pwr::pwr.t.test(
  n = 30,
  sig.level = 0.05,
  power = 1/3,
  type = "two.sample",
  alternative = "two.sided"
)

##
##     Two-sample t test power calculation
##
##              n = 30
##              d = 0.401303
##      sig.level = 0.05
##          power = 0.3333333
##    alternative = two.sided
##
## NOTE: n is number in *each* group

Note that 33% power is a rounded value; the calculation uses 1/3 (0.3333333...). We can see that the first replication by Gámez and colleagues also had a relatively small sample size (N = 47, compared to N = 60 in the original study), and was not designed to yield informative results when interpreted with the small telescopes approach. The confidence interval is very wide and includes both the null effect (d = 0) and the smallest effect size of interest (d = 0.401). Thus, this study is inconclusive: we can't reject the null, but we can also not reject effect sizes of 0.401 or larger that are still considered to be in line with the original result.
The second replication has a much larger sample size and tells us that we can't reject the null, but we can reject the smallest effect size of interest, suggesting that the effect is smaller than what is considered an interesting effect based on the small telescopes approach. Although the small telescopes recommendations are easy to use, one should take care not to turn any statistical procedure into a heuristic. In our example above with the 20 guessers, a Cohen's d of 0.358 would be used as the smallest effect size of interest, and a sample size of 50 would be collected (2.5 times the original 20); but if someone makes the effort to perform a replication study, it would often be relatively easy to collect a larger sample. Alternatively, had the original study been extremely large, it would have had high power for effects that might not be practically significant, and we would not want to collect 2.5 times as many observations in a replication study. Indeed, as Simonsohn writes: "whether we need 2.5 times the original sample size or not depends on the question we wish to answer. If we are interested in testing whether the effect size is smaller than d33%, then, yes, we need about 2.5 times the original sample size no matter how big that original sample was. When samples are very large, however, that may not be the question of interest." Always think about the question you want to ask, and design the study so that it provides an informative answer to that question. Do not automatically follow a 2.5-times-n heuristic, and always reflect on whether a suggested procedure is appropriate in your situation.

9.12 Setting the Smallest Effect Size of Interest to the Minimal Statistically Detectable Effect

Given a sample size and alpha level, every test has a minimal statistically detectable effect.
For example, given a test with 86 participants in each group and an alpha level of 5%, only t-tests which yield a t ≥ 1.974 will be statistically significant; in other words, t = 1.974 is the critical t-value. Given a sample size and alpha level, the critical t-value can be transformed into a critical d-value. As visualized in Figure 9.8, with n = 50 in each group and an alpha level of 5%, the critical d-value is 0.4. This means that only effects larger than 0.4 will yield a p < α. The critical d-value is influenced by the sample size per group and the alpha level, but it does not depend on the true effect size. It is possible to observe a statistically significant result even if the true effect size is smaller than the critical effect size, because due to random variation it is possible to observe a larger value in a sample than the true value in the population. This is the reason the statistical power of a null hypothesis significance test is never 0. As illustrated in Figure 9.9, even if the true effect size is smaller than the critical value (e.g., if the true effect size is 0.2), we see from the distribution that we can expect some observed effect sizes to be larger than 0.4 when the true population effect size is d = 0.2. If we compute the statistical power for this test, it turns out that, in the long run, 16.77% of the observed effect sizes will be larger than 0.4. That is not a lot, but it is something. This is also the reason why publication bias combined with underpowered research is problematic: it leads to a large overestimation of the true effect size when only the observed effect sizes from statistically significant findings in underpowered studies end up in the scientific literature. We can use the minimal statistically detectable effect to set the SESOI for replication studies.
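The critical d-value mentioned above follows directly from the critical t-value. A minimal sketch of the conversion for an independent t-test with n = 50 per group (plain base R; the variable names are illustrative):

```r
# Convert the critical t-value into the minimal statistically detectable
# effect (the critical Cohen's d) for an independent t-test with n per group.
n <- 50
alpha <- 0.05
df <- 2 * n - 2
t_crit <- qt(1 - alpha / 2, df)          # critical t for a two-sided test
d_crit <- t_crit * sqrt(1 / n + 1 / n)   # critical d
round(d_crit, 2)
## [1] 0.4
```

Substituting n = 86 in the same computation reproduces the critical t of 1.974 mentioned at the start of this section.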
If you attempt to replicate a study, one justifiable option when choosing the smallest effect size of interest (SESOI) is the smallest observed effect size that could have been statistically significant in the study you are replicating. In other words, you decide that effects which could not have yielded a p-value less than α in the original study will not be considered meaningful in the replication study. The assumption is that the original authors were interested in observing a significant effect, and thus were not interested in observed effect sizes that could not have yielded a significant result. It may well be that the original authors did not consider which effect sizes their study had good statistical power to detect, or that they were interested in smaller effects but gambled on observing an especially large effect in the sample purely as a result of random variation. Even then, when building on earlier research that does not specify a SESOI, a justifiable starting point is to set the SESOI to the smallest effect size that, had it been observed in the original study, could have been statistically significant. Not all researchers might agree with this (e.g., the original authors might say they actually cared just as much about an effect of d = 0.001). However, as we try to move the field away from the current situation, in which no one specifies what would falsify their hypothesis or what their smallest effect size of interest is, this approach is one way to get started. In practice, as explained in the section on post-hoc power, due to the relation between p = 0.05 and 50% power for the observed effect size, this justification means that the SESOI is set to the effect size the original study had 50% power to detect in an independent t-test. This approach is in some ways similar to the small telescopes approach by Simonsohn (2015), except that it leads to a somewhat larger SESOI.
Setting a smallest effect size of interest for a replication study is a bit like a tennis match. The original authors serve and hit the ball across the net, saying 'look, something is going on'. Setting the SESOI to the effect size that could have been significant in the original study is a return volley that allows you to say 'there does not seem to be anything large enough that could have been significant in your own original study', after performing a well-designed replication study with high statistical power to reject the SESOI. This is never the end of the match; the original authors can attempt to return the ball with a more specific statement about the effects their theory predicts, and demonstrate that such a smaller effect size is present. But the ball is back in their court, and if they want to continue to claim there is an effect, they will have to support their claim with new data. Beyond replication studies, the amount of data that is collected limits the inferences one can make. It is also possible to compute a minimal statistically detectable effect based on the sample sizes that are typically used in a research field. For example, imagine a line of research in which a hypothesis has almost always been tested with a one-sample t-test, and in which the sample sizes that are collected are always smaller than 100 observations. A one-sample t-test on 100 observations, using an alpha level of 0.05 (two-sided), has 80% power to detect an effect of d = 0.28 (as can be calculated in a sensitivity power analysis). Concluding in a new study that one can reliably reject the presence of effects more extreme than d = 0.28 suggests that samples of 100 observations might not be enough to detect the effects examined in this research line. Rejecting the presence of effects more extreme than d = 0.28 does not test a theoretical prediction, but it contributes to the literature by answering a resource question.
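The sensitivity power analysis mentioned above can be run with the pwr package, in the same way as the earlier examples:

```r
library("pwr")
# Sensitivity analysis: which effect size does a one-sample t-test on
# 100 observations detect with 80% power at a two-sided alpha of 0.05?
pwr::pwr.t.test(
  n = 100,
  power = 0.8,
  sig.level = 0.05,
  type = "one.sample",
  alternative = "two.sided"
)
```

The returned effect size is d = 0.28 (just above 0.28 before rounding), matching the value used in the example.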
It suggests that future studies in this research line will need to change their design by substantially increasing the sample size. Setting the smallest effect size of interest based on this approach does not answer any theoretical question (after all, the SESOI is not based on any theoretical prediction). But informing peers that, given the sample sizes commonly collected in a field, the effect is not large enough to be reliably studied is a useful contribution to the literature. It does not mean that the effect is not interesting per se, and a field might decide that it is time to examine the research question collaboratively, by coordinating research lines and collecting enough data to reliably study whether a smaller effect is present.

9.13 Test Yourself

Q1: When the 90% CI around a mean difference falls just within the equivalence range from -0.4 to 0.4, we can reject the smallest effect size of interest. Based on your knowledge about confidence intervals, when the equivalence range is changed to -0.3 to 0.3, what is needed for the equivalence test to be significant (assuming the effect size estimate and standard deviation remain the same)?

1. A larger effect size.
2. A lower alpha level.
3. A larger sample size.
4. Lower statistical power.

Q2: Why is it incorrect to conclude that there is no effect when an equivalence test is statistically significant?

1. An equivalence test is a statement about the data, not about the presence or absence of an effect.
2. The result of an equivalence test could be a Type 1 error, and therefore, one should conclude that there is no effect, or a Type 1 error has been observed.
3. An equivalence test rejects values as large or larger than the smallest effect size of interest, so the possibility that there is a small non-zero effect cannot be rejected.
4. We conclude there is no effect when the equivalence test is non-significant, not when the equivalence test is significant.
Q3: Researchers are interested in showing that students who use an online textbook perform just as well as students who use a paper textbook. If so, they can recommend that teachers allow students to choose their preferred medium, but if there is a benefit, they would recommend the medium that leads to better student performance. They randomly assign students to use an online textbook or a paper textbook, and compare their grades on the exam for the course (from the worst possible grade, 1, to the best possible grade, 10). They find that both groups of students perform similarly (paper textbook: m = 7.35, sd = 1.15, n = 50; online textbook: m = 7.13, sd = 1.21, n = 50). Let's assume we consider any effect as large as or larger than half a grade point (0.5) worthwhile, but any difference smaller than 0.5 too small to matter, and the alpha level is set at 0.05. What would the authors conclude? Copy the code below into R, replacing all zeroes with the correct numbers. Type ?tsum_TOST for help with the function.

TOSTER::tsum_TOST(
  m1 = 0.00, sd1 = 0.00, n1 = 0,
  m2 = 0.00, sd2 = 0.00, n2 = 0,
  low_eqbound = -0.0, high_eqbound = 0.0,
  eqbound_type = "raw", alpha = 0.05
)

1. We can reject an effect size of zero, and we can reject the presence of effects as large or larger than the smallest effect size of interest.
2. We can not reject an effect size of zero, and we can reject the presence of effects as large or larger than the smallest effect size of interest.
3. We can reject an effect size of zero, and we can not reject the presence of effects as large or larger than the smallest effect size of interest.
4. We can not reject an effect size of zero, and we can not reject the presence of effects as large or larger than the smallest effect size of interest.
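For readers without R at hand, the two one-sided tests behind such a TOST can be computed by hand with scipy. This is a sketch, not the TOSTER source; it assumes Welch's t-test (unequal variances), so the p-values may differ slightly from tsum_TOST depending on its settings:

```python
from math import sqrt

from scipy import stats

def tost_welch(m1, sd1, n1, m2, sd2, n2, low, high):
    """NHST plus two one-sided Welch t-tests against raw equivalence
    bounds [low, high]. Returns (p_nhst, p_tost); equivalence is
    declared when p_tost < alpha."""
    se = sqrt(sd1**2 / n1 + sd2**2 / n2)
    # Welch-Satterthwaite degrees of freedom
    df = se**4 / ((sd1**2 / n1)**2 / (n1 - 1) + (sd2**2 / n2)**2 / (n2 - 1))
    diff = m1 - m2
    p_nhst = 2 * stats.t.sf(abs(diff) / se, df)  # H0: diff == 0
    p_low = stats.t.sf((diff - low) / se, df)    # H0: diff <= low
    p_high = stats.t.cdf((diff - high) / se, df) # H0: diff >= high
    return p_nhst, max(p_low, p_high)

# the grades example: paper vs. online textbook, bounds of +/- 0.5 points
p_nhst, p_tost = tost_welch(7.35, 1.15, 50, 7.13, 1.21, 50, -0.5, 0.5)
```

Comparing p_nhst and p_tost against the alpha level then picks out one of the four conclusions above.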
Q4: If we increase the sample size in question Q3 to 150 participants in each condition, and assuming the observed means and standard deviations would be exactly the same, which conclusion would we draw?

1. We can reject an effect size of zero, and we can reject the presence of effects as large or larger than the smallest effect size of interest.
2. We can not reject an effect size of zero, and we can reject the presence of effects as large or larger than the smallest effect size of interest.
3. We can reject an effect size of zero, and we can not reject the presence of effects as large or larger than the smallest effect size of interest.
4. We can not reject an effect size of zero, and we can not reject the presence of effects as large or larger than the smallest effect size of interest.

Q5: If we increase the sample size in question Q3 to 500 participants in each condition, and assuming the observed means and standard deviations would be exactly the same, which conclusion would we draw?

1. We can reject an effect size of zero, and we can reject the presence of effects as large or larger than the smallest effect size of interest.
2. We can not reject an effect size of zero, and we can reject the presence of effects as large or larger than the smallest effect size of interest.
3. We can reject an effect size of zero, and we can not reject the presence of effects as large or larger than the smallest effect size of interest.
4. We can not reject an effect size of zero, and we can not reject the presence of effects as large or larger than the smallest effect size of interest.

Sometimes the result of a test is inconclusive, as both the null hypothesis test and the equivalence test are not statistically significant. The only solution in such a case is to collect additional data.
Sometimes both the null hypothesis test and the equivalence test are statistically significant, in which case the effect is statistically different from zero, but practically insignificant (based on the justification for the SESOI).

Q6: We might wonder what the statistical power was for the test in Q3, assuming there was no true difference between the two groups (so a true effect size of 0). Using the new and improved power_t_TOST function in the TOSTER R package, we can compute the power using a sensitivity power analysis (i.e., entering the sample size per group of 50, the assumed true effect size of 0, the equivalence bounds, and the alpha level). Note that because the equivalence bounds were specified on a raw scale in Q3, we will also need to specify an estimate for the true standard deviation in the population. Let's assume this true standard deviation is 1.2. Round the answer to two digits after the decimal. Type ?power_t_TOST for help with the function. What was the power in Q3?

TOSTER::power_t_TOST(
  n = 00, delta = 0.0, sd = 0.0,
  low_eqbound = -0.0, high_eqbound = 0.0,
  alpha = 0.05, type = "two.sample"
)

1. 0.00
2. 0.05
3. 0.33
4. 0.40

Q7: Assume we would only have had 15 participants in each group in Q3, instead of 50. What would be the statistical power of the test with this smaller sample size (keeping all other settings as in Q6)? Round the answer to 2 digits.

1. 0.00
2. 0.05
3. 0.33
4. 0.40

Q8: You might remember from discussions on statistical power for a null hypothesis significance test that the statistical power is never smaller than 5% (if the true effect size is 0, power is formally undefined, but we will observe at least 5% Type 1 errors, and the power increases when introducing a true effect). In a two-sided equivalence test, power can be lower than the alpha level. Why?

1. Because in an equivalence test the Type 1 error rate is not bounded at 5%.
2. Because in an equivalence test the null hypothesis and alternative hypothesis are reversed, and therefore the Type 2 error rate does not have a lower bound (just as the Type 1 error rate in NHST has no lower bound).
3. Because the confidence interval needs to fall between the lower and upper bound of the equivalence interval, and with small sample sizes, this probability can be close to zero (because the confidence interval is very wide).
4. Because the equivalence test is based on a confidence interval, and not on a p-value, and therefore power is not limited by the alpha level.

Q9: A well-designed study has high power to detect an effect of interest, but also to reject the smallest effect size of interest. Perform an a priori power analysis for the situation described in Q3. Which sample size in each group needs to be collected to achieve a desired statistical power of 90% (or 0.9), assuming the true effect size is 0, and we still assume the true standard deviation is 1.2? Use the code below, and round up the sample size (as we cannot collect a partial observation).

TOSTER::power_t_TOST(
  power = 0.00, delta = 0.0, sd = 0.0,
  low_eqbound = -0.0, high_eqbound = 0.0,
  alpha = 0.05, type = "two.sample"
)

1. 100
2. 126
3. 200
4. 252

Q10: Assume that when performing the power analysis for Q9 we did not expect the true effect size to be 0, but we actually expected a mean difference of 0.1 grade point. Which sample size in each group would we need to collect for the equivalence test, now that we expect a true effect size of 0.1? Change the variable delta in power_t_TOST to answer this question.

1. 117
2. 157
3. 314
4. 3118

Q11: Change the equivalence range to -0.1 and 0.1 for Q9 (and set the expected effect size delta to 0). To be able to reject effects outside such a very narrow equivalence range, you'll need a large sample size. With an alpha of 0.05, and a desired power of 0.9 (or 90%), how many participants would you need in each group?

1. 1107
2. 1157
3. 2468
4. 3118

You can see it takes a very large sample size to have high power to reliably reject very small effects. This should not be surprising. After all, it also requires a very large sample size to detect small effects! This is why we typically leave it to a future meta-analysis to detect, or reject, the presence of small effects.

Q12: You can do equivalence tests for all tests. The TOSTER package has functions for t-tests, correlations, differences between proportions, and meta-analyses. If the test you want to perform is not included in any software, remember that you can just use a 90% confidence interval, and test whether you can reject the smallest effect size of interest. Let's perform an equivalence test for a meta-analysis. Hyde, Lindberg, Linn, Ellis, and Williams (2008) report that effect sizes for gender differences in mathematics tests across the 7 million students in the US represent trivial differences, where a trivial difference is specified as an effect size smaller than d = 0.1. The table with Cohen's d and se is reproduced below: For grade 2, when we perform an equivalence test with boundaries of d = -0.1 and d = 0.1, using an alpha of 0.01, which conclusion can we draw? Use the TOSTER function TOSTmeta, and enter the alpha, effect size (ES), standard error (se), and equivalence bounds.

TOSTER::TOSTmeta(
  ES = 0.00, se = 0.000,
  low_eqbound_d = -0.0, high_eqbound_d = 0.0,
  alpha = 0.05
)

1. We can reject an effect size of zero, and we can reject the presence of effects as large or larger than the smallest effect size of interest.
2. We can not reject an effect size of zero, and we can reject the presence of effects as large or larger than the smallest effect size of interest.
3. We can reject an effect size of zero, and we can not reject the presence of effects as large or larger than the smallest effect size of interest.
4. We can not reject an effect size of zero, and we can not reject the presence of effects as large or larger than the smallest effect size of interest.

9.13.2 Questions about the small telescopes approach

Q13: What is the smallest effect size of interest based on the small telescopes approach, when the original study collected 20 participants in each condition of an independent t-test, with an alpha level of 0.05? Note that for this answer, it happens to depend on whether you enter the power as 0.33 or 1/3 (or 0.333). You can use the code below, which relies on the pwr package.

pwr::pwr.t.test(
  n = 0, sig.level = 0.00, power = 0,
  type = "two.sample", alternative = "two.sided"
)

1. d = 0.25 (setting power to 0.33) or 0.26 (setting power to 1/3)
2. d = 0.33 (setting power to 0.33) or 0.34 (setting power to 1/3)
3. d = 0.49 (setting power to 0.33) or 0.50 (setting power to 1/3)
4. d = 0.71 (setting power to 0.33) or 0.72 (setting power to 1/3)

Q14: Let's assume you are trying to replicate a previous result based on a correlation in a two-sided test. The study had 150 participants. Calculate the SESOI using a small telescopes justification for a replication of this study that will use an alpha level of 0.05. Note that for this answer, it happens to depend on whether you enter the power as 0.33 or 1/3 (or 0.333). You can use the code below.

pwr::pwr.r.test(
  n = 0, sig.level = 0, power = 0,
  alternative = "two.sided"
)

1. r = 0.124 (setting power to 0.33) or 0.125 (setting power to 1/3)
2. r = 0.224 (setting power to 0.33) or 0.225 (setting power to 1/3)
3. r = 0.226 (setting power to 0.33) or 0.227 (setting power to 1/3)
4. r = 0.402 (setting power to 0.33) or 0.403 (setting power to 1/3)

Q15: In the age of big data researchers often have access to large databases, and can run correlations on samples of thousands of observations. Let's assume the original study in the previous question did not have 150 observations, but 15000 observations. We still use an alpha level of 0.05.
Note that for this answer, it happens to depend on whether you enter the power as 0.33 or 1/3 (or 0.333). What is the SESOI based on the small telescopes approach?

1. r = 0.0124 (setting power to 0.33) or 0.0125 (setting power to 1/3)
2. r = 0.0224 (setting power to 0.33) or 0.0225 (setting power to 1/3)
3. r = 0.0226 (setting power to 0.33) or 0.0227 (setting power to 1/3)
4. r = 0.0402 (setting power to 0.33) or 0.0403 (setting power to 1/3)

Is this effect likely to be practically or theoretically significant? Probably not. This would be a situation where the small telescopes approach is not a very useful procedure to determine a smallest effect size of interest.

Q16: Using the small telescopes approach, you set the SESOI in a replication study to d = 0.35, and set the alpha level to 0.05. After collecting the data in a well-powered replication study that was as close to the original study as practically possible, you find no significant effect, and you can reject effects as large or larger than d = 0.35. What is the correct interpretation of this result?

1. There is no effect.
2. We can statistically reject (using an alpha of 0.05) effects anyone would find theoretically meaningful.
3. We can statistically reject (using an alpha of 0.05) effects anyone would find practically relevant.
4. We can statistically reject (using an alpha of 0.05) effects the original study had 33% power to detect.

9.13.3 Questions about specifying the SESOI as the Minimal Statistically Detectable Effect

Q17: Open the online Shiny app that can be used to compute the minimal statistically detectable effect for two independent groups: https://shiny.ieis.tue.nl/d_p_power/. Three sliders influence what the figure looks like: the sample size per condition, the true effect size, and the alpha level. Which statement is true?

1. The critical d-value is influenced by the sample size per group and the true effect size, but not by the alpha level.
2. The critical d-value is influenced by the sample size per group and the alpha level, but not by the true effect size.
3. The critical d-value is influenced by the alpha level and the true effect size, but not by the sample size per group.
4. The critical d-value is influenced by the sample size per group, the alpha level, and the true effect size.

Q18: Imagine researchers performed a study with 18 participants in each condition, and performed a t-test using an alpha level of 0.01. Using the Shiny app, what is the smallest effect size that could have been statistically significant in this study?

1. d = 0.47
2. d = 0.56
3. d = 0.91
4. d = 1

Q19: You expect the true effect size in your next study to be d = 0.5, and you plan to use an alpha level of 0.05. You collect 30 participants in each group for an independent t-test. Which statement is true?

1. You have low power for all possible effect sizes.
2. You have sufficient (i.e., > 80%) power for all effect sizes you are interested in.
3. Observed effect sizes of d = 0.5 will never be statistically significant.
4. Observed effect sizes of d = 0.5 will be statistically significant.

The example we have used so far was based on performing an independent t-test, but the idea can be generalized. A Shiny app for an F-test is available here: https://shiny.ieis.tue.nl/f_p_power/. The effect size associated with the power of an F-test is partial eta-squared ($$\eta_{p}^{2}$$), which for a one-way ANOVA (visualized in the Shiny app) equals eta-squared. The distribution of eta-squared looks slightly different from the distribution of Cohen's d, primarily because an F-test is a one-directional test (and because of this, eta-squared values are all positive, while Cohen's d can be positive or negative).
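The same minimal statistically detectable effect logic applies to the F-test: the critical F-value maps directly onto a critical eta-squared. A scipy sketch of this mapping (my illustration, not the app's code):

```python
from scipy import stats

def critical_eta_squared(n_per_group, k_groups, alpha=0.05):
    """Smallest observed eta-squared that can be statistically significant
    in a one-way ANOVA with k_groups groups of n_per_group each."""
    df1 = k_groups - 1
    df2 = k_groups * (n_per_group - 1)
    f_crit = stats.f.ppf(1 - alpha, df1, df2)
    # eta^2 = F * df1 / (F * df1 + df2)
    return f_crit * df1 / (f_crit * df1 + df2)

# e.g. 3 groups of 20 participants each:
print(round(critical_eta_squared(20, 3), 2))  # -> 0.1
```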
The light grey line plots the expected distribution of eta-squared when the null is true, with the red area under the curve indicating Type 1 errors, and the black line plots the expected distribution of eta-squared when the true effect size is η = 0.059. The blue area indicates the expected effect sizes smaller than the critical η of 0.04, which will not be statistically significant, and thus will be Type 2 errors.

Q20: Set the number of participants (per condition) to 14, and the number of groups to 3. Using the Shiny app at https://shiny.ieis.tue.nl/f_p_power/, which effect sizes (expressed in partial eta-squared, as indicated on the vertical axis) can be statistically significant with n = 14 per group, and 3 groups?

1. Only effects larger than 0.11
2. Only effects larger than 0.13
3. Only effects larger than 0.14
4. Only effects larger than 0.16

Every sample size and alpha level implies a minimal statistically detectable effect that can be statistically significant in your study. Looking at which observed effects you can detect is a useful way to make sure you could actually detect the smallest effect size you are interested in.

Q21: Using the minimal statistically detectable effect, you set the SESOI in a replication study to d = 0.35, and set the alpha level to 0.05. After collecting the data in a well-powered replication study that was as close to the original study as practically possible, you find no significant effect, and you can reject effects as large or larger than d = 0.35. What is the correct interpretation of this result?

1. There is no effect.
2. We can statistically reject (using an alpha of 0.05) effects anyone would find theoretically meaningful.
3. We can statistically reject (using an alpha of 0.05) effects anyone would find practically relevant.
4. We can statistically reject (using an alpha of 0.05) effects that could have been statistically significant in the original study.

9.13.4 Open Questions

1. What is meant by the statement 'Absence of evidence is not evidence of absence'?
2. What is the goal of an equivalence test?
3. What is the difference between a nil null hypothesis and a non-nil null hypothesis?
4. What is a minimal effect test?
5. What conclusion can we draw if a null hypothesis significance test and an equivalence test are performed on the same data, and neither test is statistically significant?
6. When designing equivalence tests to have a desired statistical power, why do you need a larger sample size the narrower the equivalence range is?
7. Why is it incorrect to say there is 'no effect' when the equivalence test is statistically significant?
8. Specify one way in which the Bayesian ROPE procedure and an equivalence test are similar, and specify one way in which they are different.
9. What are two approaches to specify a smallest effect size of interest?
10. What is the idea behind the 'small telescopes' approach to equivalence testing?
# What is the definition of the charge conjugation?

I seem to have trouble finding definitions of the charge conjugation operator that are independent of the theory considered. Weinberg defined it as the operator mapping particle types to antiparticles:

$$\operatorname C \Psi^{\pm}_{p_1 \sigma_1 n_1;p_2 \sigma_2 n_2; ...} = \xi_{n_1} \xi_{n_2} ... \Psi^{\pm}_{p_1 \sigma_1 n_1^c;p_2 \sigma_2 n_2^c; ...}$$

He does not really seem to specify what he means by "antiparticles" around there, but I'm guessing this is the one-particle state that is conjugate to this one. This assumes that it is possible to decompose everything into one-particle states. Wightman seems to go with $C \gamma^\mu C^{-1} = \bar \gamma^\mu$, which isn't terribly satisfying and also only works for spinor fields. I've seen it thrown around that $C$ conjugation corresponds roughly to the notion of complex conjugation on the wavefunction, but never really expanded upon.

Is there a generic definition of charge conjugation that does not depend on how the theory is constructed? The CPT theorem in AQFT indeed seems to not have any of those extraneous constructions, but the action of the different symmetries is a bit hidden as

$$(\Psi_0, \phi(x_1) ... \phi(x_n) \Psi_0) = (\Psi_0, \phi(-x_n) ... \phi(-x_1) \Psi_0)$$

Is the action of $C$ symmetry $\Psi' = C \Psi$ just a state such that for any operator $A$,

$$(\Psi, A \Psi) = (\Psi', A^\dagger \Psi')$$

or something to that effect? From some parts it seems like it may just be $C \phi C^{-1} = \phi^*$.

• Wightman (1-47) defines the action of $C$ on a two-component spinor. A field in an arbitrary representation of Lorentz can always be understood as a tensor with several (dotted and undotted) spinor indices, or direct sums thereof. Therefore, Wightman's definition works for a field of arbitrary spin. Just act on its spinor indices as (1-47) indicates. – AccidentalFourierTransform Mar 5 '18 at 21:40

• What about the case of a scalar field?
– Slereah Mar 5 '18 at 21:45

• Well, no indices, no transformation (up to a phase) :-P – AccidentalFourierTransform Mar 5 '18 at 21:47

• Except according to him later on it's $\phi \to \phi^*$! – Slereah Mar 5 '18 at 21:47

All of your fields naturally lie in some representation of the group of all symmetries (these include gauge symmetries, global gauge transformations and global Lorentz transformations). Charge conjugation is simply passing to the conjugate representation of that group. E.g. complex scalars are 1d irreps of $U(1)$, and the conjugate object is $\phi^{*}$. The same logic also works for spinors, gauge fields, etc.

• What about symmetries that don't act on the fields? This idea can only work in some very limited scope. – Ryan Thorngren Mar 6 '18 at 0:28

• @RyanThorngren for those symmetries, fields lie in the trivial representation. Why do you think that the scope is limited? – Prof. Legolasov Mar 6 '18 at 0:29

• Now ask yourself what happens when there is a dual set of fields. You would define a different charge conjugation if you did things this way. Further, sometimes your procedure is not defined. For example there can be fields valued in representations without a real or quaternionic structure (e.g. quarks in a triplet of SU(3)); then the dual representation is really a different representation, and there is no symmetry-preserving map between them. You would apply charge conjugation and end up with a different theory, so you don't get an operator on the Hilbert space. – Ryan Thorngren Mar 6 '18 at 0:35

• @RyanThorngren how does your definition of charge conjugation work in this latter case then? I don't see any plausible definition. – Prof. Legolasov Mar 6 '18 at 1:01

• I guess to have a kinetic term such theories need to include the anti-particles as separate fields, and C can just switch them. That's how it goes in QCD anyway. I think what you described can work in any theory near a Gaussian point because you always need something to pair with.
I don't think there is a charge conjugation in a general QFT. Take some weird TQFT, for instance... what does it mean? – Ryan Thorngren Mar 6 '18 at 1:45

There is no natural definition of charge conjugation that works for all QFTs. Rather, you should understand the CPT theorem as a combination of reflection positivity and Wick rotation. See this paper, Appendix A.2.
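As a concrete anchor for the discussion (standard textbook material rather than anything specific to this thread; phase conventions vary by author), the free Dirac field is a case where an explicit matrix realization exists, e.g. in the Dirac representation:

$$\psi \;\to\; \psi^c = C\bar\psi^{\,T}, \qquad C = i\gamma^2\gamma^0, \qquad C^{-1}\gamma^\mu C = -(\gamma^\mu)^T,$$

while for a complex scalar the action reduces to $\phi \to \phi^*$ (up to a phase), matching the last line of the question.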
# Ising model in a transverse tunneling field and proton-lattice interaction in H-bonded ferroelectrics

@inproceedings{Blinc1979IsingMI, title={Ising model in a transverse tunneling field and proton-lattice interaction in H-bonded ferroelectrics}, author={Robert Blinc and Bo{\vs}tjan Žek{\vs} and Joaquim Sampaio and A. S. T. Pires and Francisco C. Sa Barreto}, year={1979} }

The addition of a ${B}_{\mathrm{ij}}{S}_{i}^{x}{S}_{j}^{x}$-type coupling between the tunneling motion of one proton and the tunneling motion of another to the Ising model in a transverse-field Hamiltonian, or the addition of the probably larger ${S}_{i}^{x}{F}_{i}^{x}{Q}_{i}$-type pseudospin phonon coupling (describing the modulation of the distance between the two equilibrium sites in an O–H···O…
# Legend of Zelda Corner-cutting

## Recommended Posts

I was playing The Legend of Zelda: Link's Awakening this afternoon, and I noticed that if you are on the very edge of a tile, Link will cut the corner. For example, say you have a tile in an L-shape like this:

[_

And the player is about here:

[_
  P

When you push up on the control pad, Link will slide to the left to here:

[_
 P

And then continue upwards to here:

P[_

I was looking at my tile collision code, and thought about how the player gets 'caught' on the corners of tiles. I thought that this would be a good solution for that problem. I found that the cutoff point that determines whether Link will collide with the wall or slide is about 4-6 pixels. Using that information, how can I add this feature to my code?

I am using an altered version of the collision code from Parallel Realities. s is the player, and tm is the tilemap. tm->tiles is a two-dimensional numeric index for each tile: 0 is passable and everything else is solid.

void collide(sprite* s, tilemap* tm)
{
    int TILE_SIZE = 16;
    int i, x1, x2, y1, y2;

    /* Test the horizontal movement first */
    i = s->h > TILE_SIZE ? TILE_SIZE : s->h;

    for (;;)
    {
        x1 = (s->x + s->xsp) / TILE_SIZE;
        x2 = (s->x + s->xsp + s->w/* - 1*/) / TILE_SIZE;
        y1 = (s->y) / TILE_SIZE;
        y2 = (s->y + i - 1) / TILE_SIZE;

        if (x1 >= 0 && x2 < tm->w && y1 >= 0 && y2 < tm->h)
        {
            if (s->xsp > 0)
            {
                /* Trying to move right */
                if ((tm->tiles[y1][x2] != 0) || (tm->tiles[y2][x2] != 0))
                {
                    /* Place the player as close to the solid tile as possible */
                    s->x = x2 * TILE_SIZE;
                    s->x -= s->w + 1;
                    s->x += 1;
                    s->xsp = 0;
                }
            }
            else if (s->xsp < 0)
            {
                /* Trying to move left */
                if ((tm->tiles[y1][x1] != 0) || (tm->tiles[y2][x1] != 0))
                {
                    /* Place the player as close to the solid tile as possible */
                    s->x = (x1 + 1) * TILE_SIZE;
                    s->xsp = 0;
                }
            }
        }

        if (i == s->h)
        {
            break;
        }

        i += TILE_SIZE;
        if (i > s->h)
        {
            i = s->h;
        }
    }

    /* Now test the vertical movement */
    i = s->w > TILE_SIZE ? TILE_SIZE : s->w;

    for (;;)
    {
        x1 = (s->x) / TILE_SIZE;
        x2 = (s->x + i - 1) / TILE_SIZE;
        y1 = (s->y + s->ysp) / TILE_SIZE;
        y2 = (s->y + s->ysp + s->h - 1) / TILE_SIZE;

        if (x1 >= 0 && x2 < tm->w && y1 >= 0 && y2 < tm->h)
        {
            if (s->ysp > 0)
            {
                /* Trying to move down */
                if ((tm->tiles[y2][x1] != 0) || (tm->tiles[y2][x2] != 0))
                {
                    /* Place the player as close to the solid tile as possible */
                    s->y = y2 * TILE_SIZE;
                    s->y -= s->h;
                    s->ysp = 0;
                    //s->onGround = 1;
                }
            }
            else if (s->ysp < 0)
            {
                /* Trying to move up */
                if ((tm->tiles[y1][x1] != 0) || (tm->tiles[y1][x2] != 0))
                {
                    /* Place the player as close to the solid tile as possible */
                    s->y = (y1 + 1) * TILE_SIZE;
                    s->ysp = 0;
                }
            }
        }

        if (i == s->w)
        {
            break;
        }

        i += TILE_SIZE;
        if (i > s->w)
        {
            i = s->w;
        }
    }
}

How can I implement corner-cutting into this code? What is the logic/geometry behind it?

##### Share on other sites

Try sketching it out on a piece of paper: if you're less than a certain fraction blocked by a colliding tile, move the player until he's all the way unobstructed, then allow upward movement as before.

##### Share on other sites

Hey good catch!
I've been noticing this in a ton of games lately; you can use this same feature to make hills, stairs, and other things work.

In my game engine I do something a bit weird: when an object moves, it's updated in a huge collision map which turns the screen into a bunch of tiles, and uses vectors to store which objects are in each tile. This makes it super easy to check for collisions, because we can just check the tiles we currently reside in for collisions with the other objects that also currently reside in those tiles. When an object then requests its collisions, those collisions are sorted by distance from the object, so the object only worries about those closest to it. (This will probably change to something new a bit later, to handle odd-sized tiles.)

This makes the slipping method pretty easy to implement, and here's why: you don't have to worry about slipping past blocks you shouldn't slip past, because if there's a block in the way, it will be closer than the block you're slipping past, making that collision irrelevant.

So with that in mind, the actual implementation is quite simple: just check if the overlap on the axis opposite to your larger velocity is less than some number of pixels. If so, instead of doing a velocity-axis collision, resolve the other axis. Easy, right? =]

---

The first thing that pops to my mind is to not treat the player as a square (or rectangle) but rather as a circle. By interpreting all the player collisions as a circle hitting a corner, if you implement the resolution properly he'll naturally slide around the object. It also creates nice collision resolution between the player and other characters, and so on. I haven't played a classic Zelda in a while, but I'm fairly confident that's how the SNES version handles collisions.

---

> Hey good catch! I've been noticing this in a ton of games lately; you can use this same feature to make hills, stairs, and other things work. [...] Just check if the overlap on the axis opposite to your larger velocity is less than some number of pixels. If so, instead of doing a velocity-axis collision, resolve the other axis.

That's a very nice idea, and I think I'll be using it for my next project. With this project, I'm trying to be as minimal as possible. I'm using plain C for the code, and I want to be able to implement all my containers, etc. myself.

> The first thing that pops to my mind is to not treat the player as a square (or rectangle) but rather as a circle. [...] I haven't played a classic Zelda in a while, but I'm fairly confident that's how the SNES version handles collisions.
Interesting idea, though I think with a circle Link would slide past everything, even head-on tiles, because of the roundness.

Analyzing further, I noticed that Link also seems to have a smaller bounding box than the tiles. The tiles are 16x16; his seems to be 10x10 or even 8x8. I noticed that he kind of overlaps everything he comes into contact with, which is what led me to believe that.

Also, looking at my own collision code, and with a bit of thought, I don't think that corner-cutting would be as difficult as I first thought it out to be. It's as simple as: if his x (or y) coordinate is greater than the tile's x (or y) coordinate minus, say, 6, then add to or subtract from the opposite coordinate, making him slide along the tiles.

Thank you all for your input; you gave some good ideas. But I think I've come up with a solution that will work for this particular project.
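That cutoff test can be sketched as a small C helper. This is my own illustrative sketch, assuming a 16-pixel tile grid and a 6-pixel cutoff as discussed in the thread; the function name and parameters are hypothetical, not taken from the code above:

```c
#define TILE_SIZE 16
#define CUT_THRESHOLD 6  /* pixels of overlap we are willing to slide past */

/* Horizontal nudge (in pixels) to apply when the player, moving up or
 * down, clips a solid tile by only a few pixels on one side.
 * px is the player's left edge, pw the player's width, tile_x the x
 * index of the blocking tile.  Returns 0 when the player is blocked
 * head-on, i.e. no corner cut should happen.                          */
int corner_nudge_x(int px, int pw, int tile_x)
{
    int tile_left  = tile_x * TILE_SIZE;
    int tile_right = tile_left + TILE_SIZE;

    int overlap_left  = (px + pw) - tile_left;  /* poking in from the left  */
    int overlap_right = tile_right - px;        /* poking in from the right */

    if (overlap_left > 0 && overlap_left <= CUT_THRESHOLD)
        return -overlap_left;   /* slide left until clear of the corner  */
    if (overlap_right > 0 && overlap_right <= CUT_THRESHOLD)
        return overlap_right;   /* slide right until clear of the corner */
    return 0;                   /* too much overlap: treat it as a wall  */
}
```

The caller would apply the returned nudge to `s->x` instead of zeroing `s->ysp` when the vertical test hits a tile, and mirror the same idea for the other axis.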
http://stackoverflow.com/questions/17940913/measuring-string-size-in-plain-c
# Measuring string size in plain C

Imagine you are generating a PDF document "by hand" (no libraries) from within a C program you are developing. You want to write a function that subscripts arbitrary text. The best thing to imagine here is how TeX subscripts the `\inf` and `\sup` symbols when typesetting math. Now, when you add a subscript to the `\inf` symbol, nothing interesting happens; it just sits there. On the other hand, adding a subscript to the `\sup` symbol causes the subscripted text to move a few units downwards because of the letter "p", whose descent is a little below the baseline of the font.

My question then is: what is the best way to read glyph metrics from a Type-1 or OTF font so that typesetting can be done perfectly? I am looking especially for the ascent, descent and width metrics as they are referred to in the PDF format specification.

As parsing font files looks like "doing work instead of the PostScript interpreter", which has to perform those calculations to lay down individual letters that eventually constitute words and paragraphs, it would be nice if I could refer to the "end of the last text string" in the PDF PostScript stream. Consider this fragment of a PDF PostScript stream: `BT /F1 12 Tf 0 0 Td (Hello World!)Tj (Hello again!)Tj ET`. "Hello again!" renders precisely where it should be. So the PostScript interpreter knows (of course) where the next batch of text written with `Tj` should begin, but I do not know how to reference this information so that I can avoid all the messy font parsing.

If anyone runs into this trouble, have a look at a similar question I posted on Adobe forums; I got some valuable information there as well.

- Is `sizeof()` what you are looking for? –  user2045557 Jul 30 '13 at 7:36
- I didn't get the font thing you are talking about. –  user2045557 Jul 30 '13 at 7:36
- Can you improve the question, @David? –  phoxis Jul 30 '13 at 7:40
- @PaulR Yes, that is it. I would like it to be platform-independent.
–  David Jul 30 '13 at 7:41 Dear David, welcome to stackoverflow. Judging by the number of comments it seems that your question can be improved by stating more clearly what it is you are using, what you have tried, and what has failed. This will be appreciated by the people trying to help you and will likely improve your chance of a meaningful answer. –  Micha Wiedenmann Jul 30 '13 at 8:25 From the font file you have to read the font metrics, the width of each glyph (GW). These widths are given in a 1000 unit grid. You compute the actual width of a glyph at a given font size using this formula: pageGlyphWidth = fontSize * GW / 1000; Then you scale the computed value with the current transformation matrix. - This is what I am looking for! Do these metrics have to be parsed from the various font formats? –  David Jul 30 '13 at 8:43 Yes, you have to write your parser for each font format you want to support and extract the metrics from the font file. –  iPDFdev Jul 30 '13 at 8:59 Okay, let's face the truth. One more question, if you do not mind: is it possible to extract ascent, descent and width and height metrics of individual glyphs in a reasonable way? (Reasonable: parse the information out, they are contained within the file. Not reasonable: calculating the Bezier curves and then looking for extrema points.) –  David Jul 30 '13 at 9:03 @David that looks like an interesting question in its own right. –  mkl Jul 30 '13 at 9:13 @mkl, do you think I should open a new thread for this? –  David Jul 30 '13 at 9:16 [This text was written as the answer to the original question: How do I measure string size when rendered to a PDF using plain C given only the font file, the string and the font size? Is there a way? It does not really match the updated question.] The mechanisms and math of PDF text rendering are exhaustively explained in the PDF specification ISO 32000-1. Most important are chapters 8 Graphics and 9 Text. 
Section 9.4.4 aggregates the information and calculations concerning the horizontal and vertical displacement between two characters drawn in sequence. While this looks a bit complicated, it very likely reduces to trivial math in your case, as you say you are generating the whole PDF by hand and, therefore, most likely have trivial values for the variables involved. Unfortunately you did not provide your hand-generated PDF; otherwise you could be told in more detail what the equations reduce to in your case.

- Thank you for trying to help me. Unfortunately, I seem to fail to make myself clear when describing what the problem is. I have been reading the Adobe PDF spec for two days now and I have not yet found the answer to my question. The math behind keeping track of the text state is not causing the problem; the problem is how to get extensive font information for the individual glyphs, especially ascent and descent. As I am trying to avoid parsing font files, I am asking whether this can be done in pure PostScript, like referring (somehow) to the location where the previous text batch ended. –  David Jul 30 '13 at 8:40
- One more thing: thank you for your answer on the previous post. You answered what I had been asking for, but it seems I did not ask it the right way. –  David Jul 30 '13 at 8:42
- I'm afraid "get extensive font information for the individual glyphs" and "avoid parsing font files" contradict each other here. The PDF in PDF syntax only has very basic font information (as much as required for a dumb PDF viewer to render the PDF). Everything else has to be searched for in the font, e.g. individual ascent and descent values. –  mkl Jul 30 '13 at 8:47
- I see. That is precisely what I was not sure about: the amount of information other than the font program itself that is carried inside the PDF about the font. So the `/Widths` array is there just to support basic rendering/lend a helping hand to "dumb readers"? Nothing else?
–  David Jul 30 '13 at 9:19 @David The Width array says: "After drawing the character at position p, advance this far for drawing the next character." (And this value has to be adapted as described by the formulas above.) It does not say anything about the glyph itself which often is smaller, cf. figure 39 in section 9.2.4 in ISO 32000-1. –  mkl Jul 30 '13 at 9:30
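The 1000-unit formula from the answer above translates directly into code. A minimal sketch (my own, with hypothetical names; the per-glyph widths would come from the parsed font file or the PDF's `/Widths` array):

```c
/* Width of one glyph on the page, in text-space units.
 * Font metrics are given in a 1000-unit grid, so scale
 * by fontSize / 1000 as in the answer above.           */
double page_glyph_width(double font_size, int glyph_width_units)
{
    return font_size * glyph_width_units / 1000.0;
}

/* Width of a whole string: sum the per-glyph widths.
 * widths[] maps each byte of s to its 1000-unit width. */
double page_string_width(const char *s, const int widths[256], double font_size)
{
    double total = 0.0;
    for (; *s != '\0'; s++)
        total += page_glyph_width(font_size, widths[(unsigned char)*s]);
    return total;
}
```

Any further scaling by the current transformation matrix would be applied to the result, as the answer notes.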
https://stats.stackexchange.com/questions/19891/interpreting-main-effect-and-interaction?noredirect=1
# Interpreting main effect and interaction I am doing a simple marketing project that has the following types of variables: • X1 - continuous (e.g. income) • X2 - categorical (e.g. gender) • Y - continuous (e.g. number of a product type purchased such as tubs of ice-cream) I am interested in the relationship between income (X1) and product purchase (Y) but also the effect of gender (X2) on this relationship. (i.e. interaction or moderation effect). I have centered X1 and have used the general linear model in SPSS. The result on Y is as follows: • X1 - significant • X2 - not significant • X1*X2 - not significant How do I interpret this result in terms of main effect and interaction? • Is this a homework problem? If so, you need to add the homework tag. – gung - Reinstate Monica Dec 16 '11 at 5:22 • No, this is not a homework problem, though the simplicity of it makes it appear so! I am after an explanation that I can present to my non-statistical target audience. – Adhesh Josh Dec 16 '11 at 5:24 • It seems to me you have already written the essense of what could be called an interpretation. Where exactly are you looking for help? What is it that is crossing you up? – rolando2 Dec 16 '11 at 12:03 Your results suggest that there is no interaction--you simply have a main effect of X1. You could say something like, "The number of tubs of ice-cream people buy is related to their income. For instance, if person A's income is one unit higher than person B's income, person A typically buys $\beta_1$ more tubs of ice-cream than person B. Our data suggest that this relationship between income and ice-cream buying is similar for both men and women." • @user1205901, if the interaction term, $\beta_3$, is very small (which I guess I am assuming), $\beta_1$ is unlikely to change much, but you're right, it would be better to rerun the model to get a better estimate of $\beta_1$. 
You needn't do a nested model test however, as it would provide the same information that is in the standard regression output regarding the interaction. – gung - Reinstate Monica May 18 '15 at 13:41
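For reference, the model behind this kind of SPSS output can be written out explicitly (standard moderated-regression notation, not taken from the thread):

$$Y = \beta_0 + \beta_1 X_1 + \beta_2 X_2 + \beta_3 (X_1 \cdot X_2) + \varepsilon$$

With $X_2$ dummy-coded 0/1, the income slope is $\beta_1$ for the reference gender and $\beta_1 + \beta_3$ for the other; a non-significant $\beta_3$ is exactly why the answer above says the relationship looks similar for men and women.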
https://www.ideastatica.com/blog/load-on-a-curve-from-dxf-reference
# Load on a curve from DXF reference

Load on a curve imported from a DXF reference can save you a lot of time during the input of loads for concrete structures.

This new functionality can help a lot when defining shear stress flow in sections capable of transferring the torsion effects, a load transfer via a web of a box-girder section, or an equivalent load due to prestress.

The curve imported from CAD software as a DXF file can be defined as a line, polyline, arc and even a spline, where curves are transformed to discrete coordinates. The (in general non-uniform) load is then applied directly on the curve, where the direction (in the local or global coordinate system of the curve or concrete member, with the option of inclination) can be edited.

Watch the short introduction to this feature during our product release webinar:

Do you like this feature? Would you like to put it to the test? Start your FREE trial of IDEA StatiCa and enjoy 14 days of the full version!
https://lavelle.chem.ucla.edu/forum/viewtopic.php?f=19&t=48116
## 1.B #27

$\Delta p \Delta x\geq \frac{h}{4\pi }$

005343842
Posts: 15
Joined: Sat Aug 24, 2019 12:15 am

### 1.B #27

Don't really understand the concept behind this problem; could someone explain?

A bowling ball of mass 8.00 kg is rolled down a bowling alley lane at 5.00 ± 5.0 m·s⁻¹. What is the minimum uncertainty in its position?

Angus Wu_4G
Posts: 102
Joined: Fri Aug 02, 2019 12:15 am

### Re: 1.B #27

Heisenberg's uncertainty equation is (deltaP)(deltaX) ≥ h/(4pi).

X in this case represents position, P represents momentum, and the delta represents the uncertainty. Keep in mind that momentum is equal to mass times velocity, but since the mass is the same and does not vary in the problem, it is the velocity that carries the uncertainty. Therefore we can rewrite the equation as (mass × deltaV)(deltaX) ≥ h/(4pi).

We are given the mass of the bowling ball, which is 8 kg, and we are also given deltaV, the uncertainty in velocity, which is 5 m/s. Plug in all the values and simply solve for deltaX.

Vincent Leong 2B
Posts: 207
Joined: Fri Aug 09, 2019 12:15 am

### Re: 1.B #27

To solve the problem, just use Δp Δx ≥ h/(4pi), with Δp = m·Δv (mass in kg, velocity in m/s), and isolate Δx to find the uncertainty of the position.

The concept behind applying Heisenberg's uncertainty principle here is that the equation naturally applies to everyday things; it's just that we don't normally use it, because the larger the object, the more is known about it (or generally we assume so). From a quantum-mechanics perspective, the smaller the scale, the more uncertain the momentum and position of a particle are. The problem itself is a simple plug-in-the-values type of problem, but its significance is to show how quantities such as an object's position or momentum can fluctuate depending on the values given.
You can see what to expect from a final answer (a range of how large or small it will be) based on the given values. Hope this helps!

chari_maya 3B
Posts: 108
Joined: Sat Sep 07, 2019 12:18 am

### Re: 1.B #27

Wouldn't the total uncertainty be 10 m/s, because the uncertainty is 5.00 m/s +/- 5.0 m/s?

Emily_4B
Posts: 57
Joined: Wed Sep 18, 2019 12:21 am

### Re: 1.B #27

chari_maya 3B wrote: Wouldn't the total uncertainty be 10 m/s, because the uncertainty is 5.00 m/s +/- 5.0 m/s?

Do you always add the numbers to find the total uncertainty?
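The arithmetic in these replies is easy to check in a few lines of C; this is my own sketch (the constant is the CODATA value of the Planck constant; names are illustrative):

```c
#define PLANCK_H 6.62607015e-34          /* Planck constant, J*s */
#define PI       3.14159265358979323846

/* Minimum position uncertainty from Heisenberg's relation:
 * delta_x >= h / (4 * pi * m * delta_v)                     */
double min_delta_x(double mass_kg, double delta_v_ms)
{
    return PLANCK_H / (4.0 * PI * mass_kg * delta_v_ms);
}
```

For the 8.00 kg bowling ball with a 5.0 m/s velocity uncertainty this gives roughly 1.3 × 10⁻³⁶ m, which illustrates why the uncertainty principle never matters for macroscopic objects.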
https://opencarp.org/documentation/examples
# openCARP examples

These examples are intended to transfer basic user know-how regarding most openCARP features in an efficient way. The scripts are designed as mini-experiments, which can also serve as basic building blocks for more complex experiments, and each example can be a good starting point to base your own experiment on.

There are a number of examples dedicated to teaching openCARP fundamentals for those who are interested in building more complex experiments from scratch themselves or in extending pre-existing experiments. All executable examples are coded up in carputils to facilitate easy execution of all experiments without significant additional effort or complex command-line interactions.

## Intended use

Most examples can be run by simply copying the command from the corresponding example web page. It is recommended to inspect the generated command lines, to understand what the simulation looks like in the plain command line, by adding the option `--dry` to the run script command line. You can download the examples from our repository.

### Electrophysiology in single cell

The following examples illustrate how single-cell modeling is performed using the tool bench. Additionally, you learn how to integrate a single-cell model from CellML into our library limpet using the math language EasyML.

### Electrophysiology tissue

These examples should inform you about the most basic steps in developing simple tissue simulations using openCARP.

### Visualization

Here you will learn how to use the visualization tools LimpetGUI for single-cell results and Meshalizer for tissue results.

© Copyright 2020 openCARP project. Supported by DFG.
http://mathhelpforum.com/advanced-algebra/129537-solved-problem-section-bezouts-identity-print.html
# [SOLVED] Problem from a section on Bezout's Identity

• Feb 18th 2010, 05:53 PM
Zennie

[SOLVED] Problem from a section on Bezout's Identity

Find all solutions: 561x + 909y = 81

Not just looking for answers; I don't understand the procedure. Any help is appreciated.

• Feb 18th 2010, 07:27 PM
tonio

Quote:

Originally Posted by Zennie
Find all solutions: 561x + 909y = 81
Not just looking for answers; I don't understand the procedure. Any help is appreciated.

Since $(561,909)=3$ and $3\mid 81$, there exists a solution in integers to the given equation:

Lemma: If $(m,n)=d$ and $d\mid b$, say $b=kd$, then after writing $d=mu+nv$ with $u,v\in\mathbb{Z}$, we get that $m(ku)+n(kv)=kd=b$, and so $mx+ny=b$ has an integer solution.

This lemma is in fact an iff lemma.

Tonio
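Tonio's lemma is constructive: the coefficients in $d = mu + nv$ come from the extended Euclidean algorithm. A sketch in C (my own illustration, not from the thread); scaling the coefficients by 81/3 = 27 gives one particular solution, and all solutions follow by adding multiples of 909/3 = 303 to x while subtracting multiples of 561/3 = 187 from y:

```c
/* Extended Euclid: returns g = gcd(a, b) and fills *x, *y with
 * one pair satisfying a*x + b*y = g (Bezout's identity).       */
long egcd(long a, long b, long *x, long *y)
{
    if (b == 0) {
        *x = 1;
        *y = 0;
        return a;
    }
    long x1, y1;
    long g = egcd(b, a % b, &x1, &y1);
    /* back-substitute: b*x1 + (a mod b)*y1 = g  */
    *x = y1;
    *y = x1 - (a / b) * y1;
    return g;
}
```

Calling `egcd(561, 909, &x, &y)` yields gcd 3 with 561x + 909y = 3; multiplying x and y by 27 then solves the original equation.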
http://mathhelpforum.com/advanced-math-topics/59712-countability-irrational-numbers-print.html
# Countability of the irrational numbers

• Nov 15th 2008, 12:30 PM
Mathstud28

Countability of the irrational numbers

One of the exercises in my book says "Is the set of all irrational numbers countable? Justify your response."

Here is what I said (for the sake of writing, let $\mathbb{I}$ denote the set of irrationals):

1. First let us establish that $\mathbb{Z}$ is a countably infinite set. This is readily seen since there is a 1-1 mapping of $\mathbb{N}$ onto $\mathbb{Z}$, so $\mathbb{N}\sim\mathbb{Z}$; thus the integers are countable.

2. Lemma: If $A$ is a countable set and if $B_n$ is the set of all n-tuples $(a_1,a_2,\cdots,a_n)$ where $a_1,a_2,\cdots,a_n\in{A}$, then $B_n$ is countable.

3. Every rational number may be expressed as $\frac{m}{n}$ with $m,n\in\mathbb{Z}$ and $n\neq 0$. So if we express every rational number as the ordered pair $(m,n)$, the rationals correspond to a set of 2-tuples over $\mathbb{Z}$. Taking $A=\mathbb{Z}$ (a countable set by step 1) and $B_2$ as in the lemma, it follows that the rationals are countable.

4. Lemma: The set of all real numbers is uncountable.

5. Lemma: The union of countably many countable sets is countable.

6. Assume that $\mathbb{I}$ is countable. That would imply that $\mathbb{R}=\mathbb{Q}\cup\mathbb{I}$ is countable by step 5, contradicting step 4. Therefore $\mathbb{I}$ is uncountable. $\blacksquare$

Does that look alright?

• Nov 15th 2008, 12:42 PM
Plato

It is well known that the set of rational numbers is countable. Cantor, using the diagonal argument, proved that the set [0,1] is not countable. Thus the irrational numbers in [0,1] must be uncountable. So basically your steps 4, 5, & 6 form the proof.

• Nov 15th 2008, 12:46 PM
Mathstud28

Quote:

Originally Posted by Plato
It is well known that the set of rational numbers is countable. Cantor, using the diagonal argument, proved that the set [0,1] is not countable.
Thus the irrational numbers in [0,1] must be uncountable. So basically your steps 4, 5, & 6 form the proof.

Thank you very much, Plato.

• Nov 15th 2008, 02:12 PM
ThePerfectHacker

We know that $|\mathbb{R}| = |\mathbb{R}\times \mathbb{R}|$. Let $f: \mathbb{R}\to \mathbb{R}\times \mathbb{R}$ be a bijection. Define $A = f[\mathbb{Q}]$ (that is, the image of $\mathbb{Q}$, i.e. $\{ f(x) \mid x\in \mathbb{Q} \}$). Of course, since $A\subseteq \mathbb{R}\times \mathbb{R}$ it is a set of ordered pairs; let $B = \text{dom}(A)$ (that is, the set of first coordinates in this set of ordered pairs). Since $|B|\leq |A| = |\mathbb{Q}| = |\mathbb{N}|$ (why?) and $|\mathbb{N}| < |\mathbb{R}|$, there is $x\in \mathbb{R}$ such that $x\not \in B$. Therefore, $C = \{ x \} \times \mathbb{R}$ is disjoint from $A$, which means $C\subseteq (\mathbb{R}\times \mathbb{R}) - A$. But we know that $|C| = |\mathbb{R}|$, which means $|(\mathbb{R}\times \mathbb{R}) - A|\geq |\mathbb{R}| \implies |(\mathbb{R}\times \mathbb{R}) - A| = |\mathbb{R}|$. Finally, since $\mathbb{R}\times \mathbb{R}$ corresponds with $\mathbb{R}$ (via this bijection $f$) and $A$ corresponds with $\mathbb{Q}$, we can "substitute" to get $|\mathbb{R} - \mathbb{Q}| = |\mathbb{R}|$.

• Nov 15th 2008, 03:18 PM
Plato

I always wonder at efforts to reinvent the wheel. Bertrand Russell said, “If it is worth saying, then it can be said simply”.

• Nov 15th 2008, 03:35 PM
ThePerfectHacker

Quote:

Originally Posted by Plato
I always wonder at efforts to reinvent the wheel. Bertrand Russell said, “If it is worth saying, then it can be said simply”.

"If $|A| < |B|$ then $|B-A| = |B|$" can be proven, but this requires choice. In the demonstration above, choice was avoided.

---

For Mathstud: if $A\subseteq \mathbb{R}$ is not countable, we cannot conclude that $|A| = |\mathbb{R}|$. It turns out this statement is completely independent, and we cannot show whether it is true or false! It is called the Continuum Hypothesis.
That is something you used implicitly in your attempted proof above.

• Nov 15th 2008, 05:37 PM
Mathstud28

Quote:

Originally Posted by ThePerfectHacker
"If $|A| < |B|$ then $|B-A| = |B|$" can be proven, but this requires choice. In the demonstration above, choice was avoided. --- For Mathstud: if $A\subseteq \mathbb{R}$ is not countable, we cannot conclude that $|A| = |\mathbb{R}|$. It turns out this statement is completely independent, and we cannot show whether it is true or false! It is called the Continuum Hypothesis. That is something you used implicitly in your attempted proof above.

Where in my proof did I assert anything was countable?

• Nov 15th 2008, 05:42 PM
ThePerfectHacker

Quote:

Originally Posted by Mathstud28
Where in my proof did I assert anything was countable?

Sorry about that! I thought you said to prove $|\mathbb{I}| = |\mathbb{R}|$. In that case, assuming $\mathbb{I}$ is countable and arriving at a contradiction is not sufficient. But that is not what the problem was asking; it was just asking to show it is uncountable. In that case you are correct.

• Nov 15th 2008, 05:43 PM
Mathstud28

Quote:

Originally Posted by ThePerfectHacker
Sorry about that! I thought you said to prove $|\mathbb{I}| = |\mathbb{R}|$. In that case, assuming $\mathbb{I}$ is countable and arriving at a contradiction is not sufficient. But that is not what the problem was asking; it was just asking to show it is uncountable. In that case you are correct.

Whew! Jeez TPH, you scared me for a second (Giggle)

• Nov 15th 2008, 10:20 PM
CaptainBlack

Quote:

Originally Posted by ThePerfectHacker
"If $|A| < |B|$ then $|B-A| = |B|$" can be proven, but this requires choice. In the demonstration above, choice was avoided. ---

You need qualification on what A and B are, since this is not true for finite sets.

CB

• Nov 16th 2008, 12:42 AM
Opalg

Quote:

Originally Posted by Plato
Bertrand Russell said, “If it is worth saying, then it can be said simply”.
Just for the record, I think it's Ludwig Wittgenstein rather than Bertrand Russell who said that. More accurately, what Wittgenstein actually wrote (in the preface to Tractatus Logico-Philosophicus) was "Was sich überhaupt sagen lässt, lässt sich klar sagen" ("What can be said at all can be said clearly"). Russell certainly advocated the use of plain English, as in his essay How I Write, but I can't find anywhere where he used the words quoted above.
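The cardinality arguments in this thread all lean on $\mathbb{Q}$ being countable, i.e. $|\mathbb{Q}| = |\mathbb{N}|$. That fact can be made concrete by actually enumerating the positive rationals; the sketch below (the function name `rationals` is my own) walks the diagonals of the $(p,q)$ grid:

```python
from fractions import Fraction
from math import gcd

def rationals():
    """Yield every positive rational exactly once: walk the diagonals
    p + q = s of the (p, q) grid, skipping fractions not in lowest terms."""
    s = 2
    while True:
        for p in range(1, s):
            q = s - p
            if gcd(p, q) == 1:
                yield Fraction(p, q)
        s += 1

# The enumeration pairs the positive rationals with N, which is the
# countability used in the proofs above.
gen = rationals()
print([str(next(gen)) for _ in range(10)])
# ['1', '1/2', '2', '1/3', '3', '1/4', '2/3', '3/2', '4', '1/5']
```

Since each diagonal is finite, every fraction in lowest terms is reached after finitely many steps, so this really is a bijection between $\mathbb{N}$ and the positive rationals.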
2017-06-24 18:01:29
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 53, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9649856090545654, "perplexity": 547.8975720360786}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-26/segments/1498128320270.12/warc/CC-MAIN-20170624170517-20170624190517-00648.warc.gz"}
http://math.stackexchange.com/questions/256732/elements-in-hat-mathbbz-the-profinite-completion-of-the-integers
# Elements in $\hat{\mathbb{Z}}$, the profinite completion of the integers Let $\hat{\mathbb{Z}}$ be the profinite completion of $\mathbb{Z}$. Since $\hat{\mathbb{Z}}$ is the inverse limit of the rings $\mathbb{Z}/n\mathbb{Z}$, it's a subgroup of $\prod_n \mathbb{Z}/n\mathbb{Z}$. So we can represent elements in $\hat{\mathbb{Z}}$ as a subset of all possible tuples $(k_1,k_2,k_3,k_4,k_5,...)$, where each $k_n$ is an element in $\mathbb{Z}/n\mathbb{Z}$. The precise subset of such tuples which corresponds to $\hat{\mathbb{Z}}$ is given by the usual definition of the inverse limit. There is a canonical injective homomorphism $\eta: \mathbb{Z} \to \hat{\mathbb{Z}}$, such that to each $z \in \mathbb{Z}$ corresponds the tuple $\text{(z mod 1, z mod 2, z mod 3, ...)}$. However, it is well known that this homomorphism is not surjective, meaning there exist elements in $\hat{\mathbb{Z}}$ which do not correspond to anything in $\mathbb{Z}$. Does anyone know how to explicitly construct an example of such an element in $\hat{\mathbb{Z}}$, which isn't in the image of the homomorphism $\eta$, and to represent it as a tuple as outlined above? - You can play with something $p$-adic. e.g. take your favorite $p$-adic integer $a$, and consider the tuple where the $n$-th coordinate is 0 if $n$ is not prime power, and $a$ mod $p^k$ if $n = p^k$. – user27126 Dec 12 '12 at 1:26 I had considered this exact thing earlier today, but I don't think it's going to work. For instance, say we construct some 2-adic number by setting all nth coefficients, where n is a power of 2, to 1, and all other coefficients to 0. So the sixth coefficient would be 0, but the second coefficient would be 1. However, via the homomorphism $\mathbb{Z}/6\mathbb{Z} \to \mathbb{Z}/2\mathbb{Z}$ that's part of the inverse system, the sixth coefficient being 0 should imply the second coefficient is also 0. Since it's not, this tuple isn't actually in the inverse limit. 
– Mike Battaglia Dec 12 '12 at 7:30 Sorry, I was careless. This would probably work - again take your favorite $p$-adic $a$, and consider the tuple where the $n = \prod p_i^{k_i}$-th coordinate, considered as an element of $\prod \mathbb{Z}/p_i^{k_i}\mathbb{Z}$, corresponds to $(a,0,\cdots,0)$ where $a$ is the part that corresponds to powers of $p$, and $0$ for other parts. – user27126 Dec 12 '12 at 9:09 I don't understand your comment. In $\prod {p_i}^{k_i}$, what is the product being taken over? – Mike Battaglia Dec 13 '12 at 1:12 You could also use something like $\sum n!$ that converges $p$-adically regardless of the value of $p$. It seems like this would make it easier to write down all the details... – Micah Dec 13 '12 at 9:44 You can think of a presentation of an element of $\hat{\mathbb{Z}}$ by a tuple $(a_1,a_2,a_2,\dots)$ as a description of an "ideal" integer's residues mod $1, 2, 3, \dots$ If you're looking at $\prod_n \mathbb{Z}/n\mathbb{Z}$, which is the limit of the diagram consisting of the rings $\mathbb{Z}/n\mathbb{Z}$ with no connecting maps, you're allowed to choose $a_1, a_2, a_3,\dots$ totally arbitrarily. But the diagram of which $\hat{\mathbb{Z}}$ is the limit enforce restrictions, and these restrictions are exactly the finite implications between residues which exist in $\mathbb{Z}$, i.e. if $x\equiv 4$ (mod 6), then $x\equiv 1$ (mod 3). By the Chinese Remainder Theorem, the residue of an integer mod $a$ is entirely determined (according to these restrictions) by its residues mod $p_1^{r_1}, \dots, p_k^{r_k}$, where these are the prime powers appearing in $a$. So all you need to do to explicitly determine an element of $\hat{\mathbb{Z}}$ is to give a consistent choice of residues mod all prime powers. Then for each other integer, compute what the residue should be. It is easy to do this in a way that is not satisfied by any element of $\mathbb{Z}$. 
Example: Let's make our element divisible by all powers of odd primes, but give it residue $1$ modulo all powers of $2$. Then it starts $(0,1,0,1,0,3,0,1,0,5,0,9,\dots)$ - I'd also like to point out Lubin's answer explains precisely what I mean when I say "all you need to do... is to give a consistent choice of residues mod all prime powers." This is just a choice of a p-adic integer for each prime p. – Alex Kruckman Dec 25 '12 at 4:41 So in the representation of this group as the direct product of p-adics, this corresponds to $(1_2,0_3,0_5,0_7,0_{11},\dots),$ right? – Mike Battaglia Sep 23 '15 at 2:11 @MikeBattaglia That's right. – Alex Kruckman Sep 23 '15 at 3:30 The Chinese Remainder Theorem tells you that if $n=\prod_p p^{e(p)}$, where the product is taken over only finitely many primes, and each $p$ appears to the power $e(p)$ in $n$, then $\mathbb Z/n\mathbb Z$ is isomorphic to $\bigoplus(\mathbb Z/p^{e(p)}\mathbb Z)$. This direct sum is also a direct product, and when you take the projective limit, everything in sight lines up correctly, and you get this wonderful result: $$\varprojlim_n\,\mathbb Z/n\mathbb Z\cong\prod_p\left(\varprojlim_m\mathbb Z/p^m\mathbb Z\right)\cong\prod_p\mathbb Z_p\,.$$ Thus to hold and admire a non-$\mathbb Z$ element of $\hat{\mathbb Z}$, all you need is any old collection of $p$-adic integers. - Hrm. I like this representation much better than factorial numerals. I wonder why I usually see $\hat{\mathbb{Z}}$ described in terms of quotients by factorials? Does it have to do with the topology? Anyways, +1 – Hurkyl Dec 25 '12 at 18:32 I can’t answer your question, but I first saw $\hat{\mathbb Z}$ as the Galois group associated to a finite field and its algebraic closure. Then the $\mathbb Z_p$-factors are the $p$-Sylow subgroups. In that context, the representation I pointed out is very natural.
– Lubin Dec 26 '12 at 2:27 Coming back to this, is it possible to give something like a "prime factorization" to a profinite integer, but perhaps allowing infinite prime exponents or infinitely many nonzero exponents, etc? – Mike Battaglia Sep 23 '15 at 2:09 I’m not sure what “infinite prime exponents” means. On the other hand, the nature of a product is that infinitely many of the components may fail to be the identity element. – Lubin Sep 23 '15 at 14:05 Lubin, for instance, see something like this - en.m.wikipedia.org/wiki/Supernatural_number – Mike Battaglia Sep 24 '15 at 15:44 It may be more convenient to simplify the limit to be a linear chain of quotient maps, such as $$\cdots \to \mathbb{Z} / 5! \mathbb{Z} \to \mathbb{Z} / 4! \mathbb{Z} \to \mathbb{Z} / 3! \mathbb{Z} \to \mathbb{Z} / 2! \mathbb{Z} \to \mathbb{Z} / 1! \mathbb{Z} \to \mathbb{Z} / 0! \mathbb{Z}$$ and so it suffices to represent an element of $\hat{\mathbb{Z}}$ by a sequence of residues modulo $n!$ such that $$s_{n+1} \equiv s_{n} \pmod{n!}$$ In this representation, an easy-to-construct element not contained in $\mathbb{Z}$ is the sequence $$s_n = \sum_{i=0}^{n-1} i!$$ It may be interesting to think of this as the infinite sum $$s = \sum_{i=0}^{+\infty} i!$$ which makes sense in the representation you use too, since it's a finite sum in every place. I suppose the elements of $\hat{\mathbb{Z}}$ should be in one-to-one correspondence with the left-infinite numerals in the factorial number system - Of course, $\mathbb{Z}/1!\mathbb{Z} = \mathbb{Z}/0!\mathbb{Z}$. – Ricky Demer Sep 23 '13 at 1:16 Wouldn't this just be the tuple corresponding to -1? Unless I've misunderstood your construction, adding (...1,1,1,1,1) to your number yields zero. – Mike Battaglia Sep 24 '15 at 15:42
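The factorial-representation element $s_n = \sum_{i=0}^{n-1} i!$ proposed in the last answer can be checked numerically. A small sketch (the helper name `s` is my own) verifies the compatibility condition $s_{n+1} \equiv s_n \pmod{n!}$ for the first few coordinates:

```python
from math import factorial

def s(n):
    """n-th coordinate of the element sum_i i! of Z-hat: the partial sum
    0! + 1! + ... + (n-1)! reduced modulo n!."""
    return sum(factorial(i) for i in range(n)) % factorial(n)

# Compatibility under the quotient maps Z/(n+1)!Z -> Z/n!Z:
for n in range(1, 10):
    assert s(n + 1) % factorial(n) == s(n)

# The residues keep growing past any fixed integer, which is the intuition
# for why this element lies outside the image of Z.
print([s(n) for n in range(1, 6)])  # [0, 0, 4, 10, 34]
```

Note the reduction only matters for small $n$: for $n \ge 3$ the partial sum is already smaller than $n!$, so the coordinates are the raw partial sums and increase strictly, which is why no single integer can match them all.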
2016-04-29 10:55:39
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9481390714645386, "perplexity": 189.51387590329384}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-18/segments/1461860111313.83/warc/CC-MAIN-20160428161511-00160-ip-10-239-7-51.ec2.internal.warc.gz"}
https://learn.careers360.com/ncert/question-represent-the-following-situations-in-the-form-of-quadratic-equations-ii-the-product-of-two-consecutive-positive-integers-is-306-we-need-to-find-the-integers/
# Q2.    Represent the following situations in the form of quadratic equations :                (ii) The product of two consecutive positive integers is 306. We need to find the integers. Given the product of two consecutive integers is $306.$ Let two consecutive integers be $'x'$ and $'x+1'$. Then, their product will be: $x(x+1) = 306$ Or $x^2+x- 306 = 0$. Hence, the two consecutive integers will satisfy this quadratic equation $x^2+x- 306 = 0$.
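The quadratic $x^2 + x - 306 = 0$ can be solved directly with the quadratic formula; only the positive root yields consecutive positive integers. A quick numeric check (variable names are arbitrary):

```python
import math

# Coefficients of x^2 + x - 306 = 0.
a, b, c = 1, 1, -306
disc = b * b - 4 * a * c                 # 1 + 1224 = 1225, a perfect square
x = (-b + math.isqrt(disc)) // (2 * a)   # positive root: (-1 + 35) / 2
print(x, x + 1, x * (x + 1))             # 17 18 306
```

So the two consecutive integers are 17 and 18.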
2020-05-30 08:58:17
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 6, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6732559204101562, "perplexity": 322.198965273696}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347407667.28/warc/CC-MAIN-20200530071741-20200530101741-00165.warc.gz"}
https://rooms-4you.com/evaluate-limit-as-x-approaches-0-of-tan3x-x/
# Evaluate ( limit as x approaches 0 of tan(3x))/x

Evaluate the limit of the numerator and the limit of the denominator.

Take the limit of the numerator. Move the limit inside the trig function because tangent is continuous, then evaluate by plugging in $0$ for $x$: $\tan(3 \cdot 0) = \tan(0) = 0$.

Take the limit of the denominator by plugging in $0$ for $x$: $\lim_{x \to 0} x = 0$.

The expression contains a division by $0$, so direct substitution is undefined: the limit is of the indeterminate form $\frac{0}{0}$.

Since $\frac{0}{0}$ is of indeterminate form, apply L'Hospital's Rule. L'Hospital's Rule states that the limit of a quotient of functions is equal to the limit of the quotient of their derivatives.

Differentiate the numerator using the chain rule, with inner function $u = 3x$: the derivative of $\tan(u)$ with respect to $u$ is $\sec^2(u)$, and the derivative of $3x$ with respect to $x$ is $3$, so $\frac{d}{dx}\tan(3x) = 3\sec^2(3x)$.

Differentiate the denominator using the Power Rule: $\frac{d}{dx}x = 1$.

Evaluate the resulting limit by plugging in $0$ for $x$:

$$\lim_{x \to 0} \frac{\tan(3x)}{x} = \lim_{x \to 0} \frac{3\sec^2(3x)}{1} = 3\sec^2(0) = 3 \cdot 1^2 = 3$$
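Independently of the step-by-step text, the value $\lim_{x \to 0} \tan(3x)/x = 3$ can be confirmed numerically (a standalone sketch):

```python
import math

def f(x):
    # The quotient whose limit is taken; near 0, tan(3x)/x is about 3 + 9*x^2.
    return math.tan(3 * x) / x

# Approaching 0 from both sides, the values settle at 3.
for x in (0.1, 0.01, -0.01, 0.001):
    print(x, f(x))

assert abs(f(1e-6) - 3) < 1e-9
```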
2022-10-01 04:36:44
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9874300360679626, "perplexity": 704.3922882623809}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335530.56/warc/CC-MAIN-20221001035148-20221001065148-00502.warc.gz"}
http://physics.stackexchange.com/tags/gauge-invariance/new
# Tag Info 1 When using Kubo formula, you often have to calculate vertex functions. You'd better take care of gauge invariance by imposing the Ward-Takahashi identity onto them. 1 Just consider the gauge transformation after Fourier transforming everything. A Fourier transform turns derivatives into momenta, such that we get $$\tilde A_\mu \rightarrow \tilde A_\mu - \frac1e k_\mu \tilde\alpha \;.$$ This means that only the component parallel to $k_\mu$ (the longitudinal one) will change, while the ... 4 The Standard Model Yukawa interactions must be $SU(3)\times SU(2) \times U(1)_Y$ gauge invariant. The down-type Yukawa interaction is $$\mathcal{L} \supset -y_d \bar Q \phi d_R + \text{h.c.}$$ This is indeed gauge invariant. The $\bar Q d_R$ form a colour singlet ($3^* \times 3$), the $\bar Q \phi$ form an $SU(2)$ singlet ($2^* \times 2$), and the whole ... 5 The reason is that the $SU(2)$ invariant in $\mathbf{2}\otimes\mathbf{2}$ (or in their complex conjugate $\mathbf{2}^*\otimes \mathbf{2}^*$) is given by contracting the two $\mathbf{2}$ with the anti-symmetric $2\times 2$ matrix $\epsilon_{ab}$, as $i\tau_2$ is. In the case at hand the two $\mathbf{2}^*$ are $\bar{Q}$ and the $\Phi^*$. You could form another ...
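The Fourier-space claim in the second snippet, that the shift $-\frac1e k_\mu \tilde\alpha$ touches only the longitudinal component, can be spot-checked numerically. This is a plain-Python sketch with made-up sample values for $e$, $k$, $\tilde A$, $\tilde\alpha$, and a Euclidean dot product standing in for the metric contraction:

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

e = 0.5
k = [2.0, 1.0, -1.0, 3.0]       # sample momentum
A = [0.3, -1.2, 0.7, 0.4]       # sample gauge field (Fourier component)
alpha = 1.7                     # sample gauge function (Fourier component)

# The gauge shift A_mu -> A_mu - (1/e) k_mu alpha:
A_shifted = [a - (1.0 / e) * km * alpha for a, km in zip(A, k)]

def transverse(v):
    # Remove the projection onto k, leaving the transverse part.
    coef = dot(v, k) / dot(k, k)
    return [vi - coef * ki for vi, ki in zip(v, k)]

# The transverse (perpendicular-to-k) part is untouched by the shift.
assert all(abs(b - a) < 1e-12
           for b, a in zip(transverse(A), transverse(A_shifted)))
```

The shift vector is $-(\tilde\alpha/e)\,k$, a pure multiple of $k$, so subtracting the projection onto $k$ removes it entirely; only the longitudinal component changes.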
2014-09-21 16:10:31
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 1, "x-ck12": 0, "texerror": 0, "math_score": 0.9796380996704102, "perplexity": 429.3760982742679}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-41/segments/1410657135597.56/warc/CC-MAIN-20140914011215-00095-ip-10-234-18-248.ec2.internal.warc.gz"}
https://www.physicsforums.com/threads/a-proof-for-sqrt-x-2-x.316896/
# A proof for sqrt(x^2)=|x| 1. May 29, 2009 ### ShayanJ Hi Can someone give me a proof for sqrt(x^2)=|x|? I mean why sqrt(x^2)!=+x,-x? thanks 2. May 29, 2009 ### HallsofIvy Staff Emeritus $\sqrt{x}$, like any function, is "single valued". That's part of the definition of function: for any specific value of x, f(x) must be a single value. It is true that, for example, $x^2= 4$ has two roots: x= -2 and x= 2. But $\sqrt{4}$ is specifically defined as "the non-negative number whose square is 4". More generally $\sqrt{a}$ is defined as "the non-negative number whose square is a". In fact, the reason why we have to write the solutions to $x^2= a$ as "$\pm\sqrt{a}$" is because $\sqrt{a}$ alone does NOT include "$\pm$". 3. May 29, 2009 ### jambaugh Remember that the square function $$f(x)=x^2$$ is not invertible over its whole domain. (Its graph doesn't pass the horizontal line test.) This because there is not a unique solution for x to the equation: $$y = x^2$$ To be invertible the inverse relation must be a function i.e. yield a single unique value. In order to "make" our square function invertible we must restrict the domain: $$f_+(x) = x^2\quad (x\ge 0)$$ (i.e. if x is negative the function is undefined) In this case we can then define an inverse function $$f_+^{-1}(y) = \sqrt{y}$$ Note we could have chosen to restrict the domain differently: $$f_-(x) = x^2 \quad (x \le 0)$$ This then would have inverse: $$f_-^{-1}(y)= - \sqrt{y}$$ The square root function gives the "principle root". Since there are two solutions to the equation x^2 = 4, namely 2 and -2 then in order that sqrt be a function (have a unique value) we must pick one of the two solutions. We adopt the convention of choosing the positive solution and call this the "principle" solution. We could have "gone the other way" but more often the solution we want is the positive one. As for your proof you should break the problem down into cases, x = 0, x>0, x < 0. 
Show that the value output is the same as |x| in each case. 4. May 29, 2009 ### qntty $$|z|=|a+bi|=\sqrt{a^2+b^2}\implies |a|=\sqrt{a^2}$$ 5. May 30, 2009 ### Tibarn It's more of a definition than anything you can prove. 6. May 30, 2009 ### ShayanJ Hey qntty that is right only if the set of real numbers were a subset of the set of complex numbers,but its not. And to you tibarn.What if I tell you that the sentence "There are infinite number of prime numbers.",has a proof? 7. May 30, 2009 ### matt grime Shyan, the set of real numbers *is* a subset of the complex numbers. I've no idea what you mean by your last sentence, Shyan. It is the definition of the square root function (or perhaps convention if you prefer) that the square root of a positive number r is the positive one of the two possible solutions to x^2=r. We could form a perfectly good version of mathematics where it is the negative one. It would be perverse and silly but perfectly possible. In that sense you can't 'prove' it in any non-trivial way 8. May 30, 2009 ### ShayanJ yeah.I was making a big mistake and I understood it when I was writing a tough answer(I thought it is) for you matt grime.So I think qntty's answer is the ultimate one(but thanks to all posters)and the thread is finished. Thank every one for the answers 9. May 30, 2009 ### daudaudaudau Shyan, let me ask you a question. How do you prove that |x| = x and not |x| = +x,-x ? Because it is DEFINED that way. Same thing goes for the square root. "qntty" has not proved anything. 10. May 30, 2009 ### ShayanJ first:|x|=x if x>=0,-x if x<0 second:Yes he did. sqrt(x^2) is defined to be |x|.but I didn't know why it is.qntty told the reason.So he PROVED it. In fact he passed the way that the mathematicians has passed to reach to the definition of sqrt(x^2) in front of our eyes.And I think that is one kind of PROVING. Last edited: May 30, 2009 11. May 30, 2009 ### daudaudaudau "first:|x|=x if x>=0,-x if x<0" NO NO NO. 
You are using the definition and that is no good according to your own standards. PROVE it to me! 12. May 30, 2009 ### matt grime qntty didn't prove anything - he just used the fact that sqrt(x) returns the positive root. Which is what you claim you want proving.... 13. May 30, 2009 ### ShayanJ Yes I used the definition but what was the thing that you used? if it was a definition too: then there are just two candidates for the definition of |x|.yours and mine.So if I prove that yours is wrong,then mine will be true.And that's exactly what I'm going to do. Imagine x=-2.as your definition says,|-2|=-2,but we all know that a modulus won't be equal to a negative number.So your definition is wrong and mine is true. 14. May 30, 2009 ### daudaudaudau Now prove that modulus is always positive. 15. May 30, 2009 ### ShayanJ Ok.|a| means the distance between the 0 point and +a or -a.a distance can not be negative.So modulus is always positive or zero. Is it finished? 16. May 30, 2009 ### AUMathTutor This question is a matter of definition. If I ask "Which real number x has the property that x^2 = 25?", the answer is there are two: +5 and -5. sqrt(25) is defined to be a function, hence, single-valued.The positive root is taken, as far as I can tell, because writing it takes less time (the + sign can be assumed on positive numbers...) 17. May 31, 2009 ### matt grime But you've defined the modulus function to be the distance. You didn't need to do that; I can define |x| to be negative and then claim that distance is -|x|. As I said before it is entirely perverse to do so, but would also have been legitimate. You are not seeking a proof that is anything other than the vacuous one. You might perhaps want a justification for it. The functions |x| and sqrt(x^2) are identical on the real numbers, and that is an immediate consequence of the definition of each. It is not a deep fact, and it requires nothing that really merits the word 'proof'. 18. 
May 31, 2009 ### ShayanJ Well I think in the chain of proofs, we reach somewhere that we have to accept something without proof and also something must be set as a law. Like, for example, the axioms of Euclid. And that's because we can fight forever and get no result. So I think you are right at the end, and sqrt(x^2)=|x| is something like an axiom or maybe a law. Ok thanks to all. 19. May 31, 2009 ### Tibarn sqrt(x^2) equals x if x>0 and -x otherwise. |x| equals x if x>0 and -x otherwise. Therefore, sqrt(x^2) = |x|. Pedantic, yes, but this is a definition, not a deep fact. 20. May 31, 2009 ### daudaudaudau You just have to understand the meaning of the word "definition". An example: By definition, the word "knife" refers to a metallic or plastic object used for cutting stuff. You can use it to stab someone or you can use it when you eat. That is how "knife" is defined. IT IS COMPLETELY ARBITRARY. You might as well define "knife" to mean the same as "monkey" or whatever. Go ahead, invent your own language. The same thing goes for the symbol "sqrt". It has a certain definition, a certain meaning.
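The case split jambaugh suggested (x < 0, x == 0, x > 0) can be spot-checked directly; a trivial sketch:

```python
import math

# sqrt returns the principal (non-negative) root, so sqrt(x^2) == |x|
# in all three cases x < 0, x == 0, x > 0. Sample values are chosen to be
# exactly representable in binary so the comparison is exact.
for x in (-5.0, -0.25, 0.0, 0.25, 5.0):
    assert math.sqrt(x * x) == abs(x)
print("sqrt(x^2) == |x| for all sampled x")
```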
2017-08-18 02:15:29
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7674593329429626, "perplexity": 903.092146415384}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886104204.40/warc/CC-MAIN-20170818005345-20170818025345-00664.warc.gz"}
http://openstudy.com/updates/4f5b9a26e4b0602be437eff8
## moneybird Compute $\int\frac{dy}{1+\cos(4y)}$ 2 years ago $1+\cos(4y)=2\cos^2(2y)$
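Using the posted identity, the integral works out in a few lines:

```latex
\int \frac{dy}{1+\cos(4y)}
  = \int \frac{dy}{2\cos^2(2y)}
  = \frac{1}{2} \int \sec^2(2y)\, dy
  = \frac{1}{4} \tan(2y) + C
```

Differentiating $\frac{1}{4}\tan(2y)$ gives back $\frac{1}{2}\sec^2(2y) = \frac{1}{2\cos^2(2y)}$, confirming the antiderivative.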
2014-04-18 20:59:32
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.437395304441452, "perplexity": 9964.393079721709}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00587-ip-10-147-4-33.ec2.internal.warc.gz"}
https://www.rdocumentation.org/packages/shiny/versions/1.4.0/topics/observe
# observe

##### Create a reactive observer

Creates an observer from the given expression.

##### Usage

    observe(x, env = parent.frame(), quoted = FALSE, label = NULL,
      suspended = FALSE, priority = 0, domain = getDefaultReactiveDomain(),
      autoDestroy = TRUE, ..stacktraceon = TRUE)

##### Arguments

x: An expression (quoted or unquoted). Any return value will be ignored.

env: The parent environment for the reactive expression. By default, this is the calling environment, the same as when defining an ordinary non-reactive expression.

quoted: Is the expression quoted? By default, this is FALSE. This is useful when you want to use an expression that is stored in a variable; to do so, it must be quoted with quote().

label: A label for the observer, useful for debugging.

suspended: If TRUE, start the observer in a suspended state. If FALSE (the default), start in a non-suspended state.

priority: An integer or numeric that controls the priority with which this observer should be executed. A higher value means higher priority: an observer with a higher priority value will execute before all observers with lower priority values. Positive, negative, and zero values are allowed.

domain: See domains.

autoDestroy: If TRUE (the default), the observer will be automatically destroyed when its domain (if any) ends.

..stacktraceon: Advanced use only. For stack manipulation purposes; see stacktrace().

##### Details

An observer is like a reactive expression in that it can read reactive values and call reactive expressions, and will automatically re-execute when those dependencies change. But unlike reactive expressions, it doesn't yield a result and can't be used as an input to other reactive expressions. Thus, observers are only useful for their side effects (for example, performing I/O). Another contrast between reactive expressions and observers is their execution strategy.
Reactive expressions use lazy evaluation; that is, when their dependencies change, they don't re-execute right away but rather wait until they are called by someone else. Indeed, if they are not called then they will never re-execute. In contrast, observers use eager evaluation; as soon as their dependencies change, they schedule themselves to re-execute. Starting with Shiny 0.10.0, observers are automatically destroyed by default when the domain that owns them ends (e.g. when a Shiny session ends). ##### Value An observer reference class object. This object has the following methods: suspend() Causes this observer to stop scheduling flushes (re-executions) in response to invalidations. If the observer was invalidated prior to this call but it has not re-executed yet then that re-execution will still occur, because the flush is already scheduled. resume() Causes this observer to start re-executing in response to invalidations. If the observer was invalidated while suspended, then it will schedule itself for re-execution. destroy() Stops the observer from executing ever again, even if it is currently scheduled for re-execution. setPriority(priority = 0) Change this observer's priority. Note that if the observer is currently invalidated, then the change in priority will not take effect until the next invalidation--unless the observer is also currently suspended, in which case the priority change will be effective upon resume. setAutoDestroy(autoDestroy) Sets whether this observer should be automatically destroyed when its domain (if any) ends. If autoDestroy is TRUE and the domain already ended, then destroy() is called immediately." onInvalidate(callback) Register a callback function to run when this observer is invalidated. No arguments will be provided to the callback function when it is invoked. 
 
##### Examples

    # NOT RUN {
    values <- reactiveValues(A=1)

    obsB <- observe({
      print(values$A + 1)
    })

    # Can use quoted expressions
    obsC <- observe(quote({
      print(values$A + 2)
    }), quoted = TRUE)

    # To store expressions for later conversion to observe, use quote()
    expr_q <- quote({
      print(values$A + 3)
    })
    obsD <- observe(expr_q, quoted = TRUE)

    # In a normal Shiny app, the web client will trigger flush events. If you
    # are at the console, you can force a flush with flushReact()
    shiny:::flushReact()
    # }

Documentation reproduced from package shiny, version 1.4.0, License: GPL-3 | file LICENSE
A number indicates that browser supports the feature at that version and up. 8 cents, and includes such items as fuel, maintenance, insurance, and depreciation. RF Frequency Dividers from Pasternack Enterprises ship same day. Mike Fulton 4,747. com, was stolen. evenly synonyms, evenly pronunciation, evenly translation, English dictionary definition of evenly. Then you will need to give a border for your grid by starting with your template slightly in from the edges or start from the centre of your wall and work outwards. Note in this calculator, for clarity ---- the numbers have been spaced in groups of four as distinct from the more usual groups of three. Dividing this total into the 48″ length, means the five screws should be spaced every 8 inches - at 8″, 16″, 24″, 32″ and 40″. This is from M. Place your ruler or scale between the two perpendiculars and angle it until you have a measurement you can easily divide by the number of equal spaces needed. I am knitting an infant dress. To "divide evenly" means that one number can be divided by another without anything left over. When dealing in inches: convert inches to feet by dividing by 12. 35% AADPS Ability changes. Single dimensions and two dimensions shapes like straight line or square, circle, triangle have zero volume in three dimensional space. Find, apply for, and land your dream job at your dream company. What I've been doing so far is using numerical integration to find the arc length parameter, and then using find_root to find the positions of the dots (spaced by arc length L/n). If you can't remember the syntax, type help linspace. How to Divide a Sheet of Paper Evenly—No Matter the Number! July 11, 2015 July 11, 2015 yincheniful Leave a comment As a student (and sort-of-control-freak), I've come across SEVERAL situations where I need to divide a sheet of paper evenly, whether it be to draw lines…cut into slices…fold, etc. -- in the sequence of letters used, the I (eye) has been left out. 
Use display:table and display:table-cell to get the divs to behave like table cells which naturally divide the available space evenly. Volume of a Square or Rectangular Tank or Clarifier. Spacing spindles evenly A problem often encountered in woodworking is to space spindles, or slats or anything out evenly so that there are even gaps. to calculate the time, food → verteilen. com in the spotlight. The box model is a very important concept, one that you must have right in your head before you start tackling all this spacing stuff. The result is the number of rows between the buttons. Many builders and gardeners face the task of calculating or estimating the amount of gravel they would need to fill a given space or cover a. Draw a circle on a piece of paper using a compass. The task was to chop a list into exactly n evenly slized chunks. To "divide evenly" means that one number can be divided by another without anything left over. Knit Evenly calculator is the perfect app for this task and reduces it to a simple push of a button. Way back in 2009, Knitting Daily founding editor Sandi Wiseheart did a couple of wonderful tutorials about picking up stitches, and I thought we could all use a refresher course: What does 'pick up and knit stitches' mean? Picking up stitches is a way to add new stitches to an already finished. Now we divide that 40″ by 4, to give us an interval between the paintings of 10″. -United/Nu-Gro leaders in the lawn and garden care with sales US550 million and insect control products with sales US$150 million, target customers who desire comparable products with lower prices than premium-prices. Notice that your age on other worlds will automatically fill in. ¼ pizza pizza If one person were to take 2 quarters of the pizza, they would have 2 4, which is the same as 1 2 or half the pizza. 
Let's use polynomial long division to rewrite Write the expression in a form reminiscent of long division: First divide the leading term of the numerator polynomial by the leading term x of the divisor, and write the answer on the top line: Now multiply this term by the divisor x+2, and write the answer. Corner retail convenience store. The distinction between row vectors and column vectors is essential. But then I read about this trick, which allows one to divide any line (or straight object) into equal parts, or evenly spaced sections without directly measuring the line. Step 6 - Click ends of line L to determine source point and destination point. Alternatively, if you want a square grid such as 40cm x 40cm, your grid might not fit perfectly into your wall dimensions (in our case 40cm doesn’t divide evenly into 3m). Wainscoting Layout Calculator. To find the square feet of the entire room, simply add the square footage of each space together. Generally, the ratio is calculated by dividing the number of vehicle parking spaces into the building's square footage, and expressing the result per 1,000 square feet. Step 3 - Select item to be copied. Step 3: Now for the tricky part. #N#Years Required for Principal to Double. This spreadsheet is only setup for two people splitting expenses evenly, although with a little finessing it could easily accommodate any number of roommates. Welcome to Division III Jack Ohle, Chair of the Division III Presidents Council Athletics competition at more than 1,000 colleges and universities in the United States and Canada is governed by the National Collegiate Athletic Association, which maintains three divisions to offer “level playing fields” for the smallest liberal arts colleges and the most committed and funded major. change in percentage points is in relation to the whole part (whole is the entire population or 1000 in our example. 
For example, if you're renting a 500 square foot space for$1,500 a month, you will be paying $3 per-square-foot. 2) We divide the 18 carrots evenly into three groups, like sharing them among three people. Continue in this manner until you have the square footage of each space. The nearest even multiple of 8 is 88, which would be 8 decreases spaced 11 stitches apart, with 5 excess stitches left over. Since there are two pictures, we have three spaces. Bring a measuring wheel and measure the room's length and width. Click on your device’s app store icon to install now: Sacred Space - daily prayer for 20 years. Explore the cost of living and working in various locations.$\endgroup$- Sjoerd C. That means, start dividing space for the line 1A as you would normally, while ignoring 1B for now. the cells spaced evenly-somehow. The company is committed to the development, manufacturing and marketing of innovative circuit and power electronics protection and power management products; and provides engineering, training and testing services globally for the electrical. The google maps area calculator is not 100% accurate. The average cost per mile for the year 2013 is estimated at 60. Step 3 – Divide result on step 2 by 2. Here's how. Also, if you want the first and last screws to be inset a certain distance from each end of the workpiece, subtract the total of these two distances from the workpiece length, first. Volume of a Square or Rectangular Tank or Clarifier. This is driving me nuts. SOIL: Boxwood thrives in evenly moist, well-drained soils. evenly spaced page breaks I have a document that I would like to print on 8. My points are sort of evenly spaced along both directions, the distance being 1,500 m. (not ARRAY) In this instance I would be tempted to try the DIVIDE or MEASURE command to create the evenly placed points, and then move your existing blocks to the nodes. To multiply or divide, enter a number on the right and hit the multiply or divide button. 
Moving into a new place? We'll tell you how to split the rent fairly, based on room size, closets, bathrooms, and more. So, 18 ÷ 3 = 6. com! More Math Games to Play MATH PLAYGROUND Grade 1 Games Grade 2 Games Grade 3 Games. You'll see that direction just as often as increase evenly in a pattern. For instance, if we have three small square pictures and a door-sized piece of wall free at the end of a corridor [fig. Discover GE Lighting's range of smart, energy saving, LED and other light bulbs for every room in your home. Once the couple retire the mortgage debt, pay taxes and the sale-related expenses, they split the remaining money. Due to the Coronavirus outbreak, our offices are closed to the public until further notice. Now, Strategic Homeland Division agents are all that stands in the way of chaos overrunning the once vibrant and bustling capital of the United States. Is there a better algorithm that will evenly space the holes around the gear while maintaining the correct "sweep distance"?. Decide what distance you want to space the objects in the U and V directions. Video transcript. That would be ideal for me. Ever have trouble reading a really big number out loud? Master these tips on remembering place value and you will be reading gigantic numbers in no time. So I'm going to multiply 6. Spacing spindles evenly A problem often encountered in woodworking is to space spindles, or slats or anything out evenly so that there are even gaps. 95 * 10 = 1299. Meaning all it could do was the basic operations add, subtract, multiply, divide, etc. You want them spaced as evenly as possible across the row. I also need H1 to read 0. For extensive information and resources for each event, click on an event title. Determine how many stitches *should* be in that space, then use the above steps to finish. For example, say your deck is 24'x18' and you want to divide it into 5 equal areas. 23*4=8C So we have a reminder of 1A. 
If you know 2 of the 3 variables the third can be calculated. Also use the tools to find the total acreage of any given area. Enter positive or negative decimal numbers for divisor and dividend and calculate a quotient answer. We provide advice and oversight regarding consistent property tax administration and valuation policies and practices to achieve uniform compliance and equitable treatment of taxpayer for the local property tax monies used to support schools and county and municipal governments; We are an administrative agency with the primary responsibilities. Example 1: You have 100 sts and you shall decrease 16 sts evenly. wide and 5 inches deep. Then add those areas together for the total area of the space. Label the dates on the appropriate segments, left to right. single family home at 22 Pinon Lake Dr, Divide, CO 80814 on sale now for$390,000. This tool will calculate the area of a rectangle from the dimensions of length and width. There are no worries in living in this. R's default algorithm for calculating histogram break points is a little interesting. To determine the number of rows in the sleeve shaping, complete the following: (length of cuff to underarm - ribbing length - 2”) x row gauge. Convert the inches to feet and inches by dividing by 12. Moving into a new place? We’ll tell you how to split the rent fairly, based on room size, closets, bathrooms, and more. Transact-SQL Syntax Conventions. Use display:table and display:table-cell to get the divs to behave like table cells which naturally divide the available space evenly. The SNMP NetApp Disk Free sensor only shows free disk space in percent and absolute values. The Calculator on this page lets you examine this for any G series. I normaly transform it to decimal one, but the method is the same. Our percentage calculator is perfect for anyone that wants to save time in calculating many different percentages as well as for anyone that is not good at math! 
To even save you more time we made sure that the calculations are automatically calculated as you type in the input boxes. Piece of plywood, about six inches wide and 16 inches long ; Two 12-inch pieces of one-by-two lumber ; Two one-inch pieces of one-by. In ancient times, many buildings were made of stacked stone. 54 cubic yards. When two numbers divide evenly without remainder, the division symbol and a QUOTIENT formula return the same result. To reduce a fraction to the simplest form, try our Simplifying Fractions Calculator. Additional numbers that divide 38 evenly are 2 and 19. The MRL is one of the major features of the MR&T Project used to provide. We may assume that x and n are small and overflow doesn’t happen. Our team of Regional Business Managers can provide you one-on-one assistance and counseling to help you succeed. For circles and partial circles, measure the radius first. IE: putting 20px between each differently shaped nav item. Multiplying 57. Fraction calculator with for Windows, Mac, iPhone, iPad, iPod and Android devices Fraction calculator with steps has everything you need for teaching and learning fractions. Employment Opportunities; Attend an Event; Plan a Trip; Become an Intern. The gap is usually the width of a picket, though this can vary. If you have the two end objects placed you can draw a line between the common reference points and use the DIVIDE command to equally space points along that line. evenly distribute numbers across a range Column D represents an even spread value-to-value, but an even spread in time. 625'' of available space. Compound terms. Calculating Your Property Taxes NYC property owners receive a property tax bill from the Department of Finance a few times a year. I struggled to work out the math and measurements in my head, and then with a calculator, but every time I marked up the board I was a little bit off. By using this website, you agree to our Cookie Policy. 
single family home at 22 Pinon Lake Dr, Divide, CO 80814 on sale now for \$390,000. Now we divide that 40″ by 4, to give us an interval between the paintings of 10″. and 1500 s. I've been thinking about ellipses lately. Several abilities have to be slightly changed to be used in the calculation. For example, since your pickets will be installed between the posts, you'll probably want to have a space next to each post; therefore, you'll need one more space than the number of pickets. The beauty of this method is you can specify how many pixels you want between your items. There is a working draft spec on it. All HTML block-level elements have five spacing properties: height, width, margin, border and padding. The n value is the total number of objects to chose from. Office Space Calculator Our office space calculator is a brilliant tool that allows you to undertake a preliminary assessment of your office space needs. Well built and well maintained mixed-use office. Our office is crucial to our company culture and brand and SquareFoot understood that from day one. *Square Footage is also known as (a. Now try it yourself for your own home. 1 cm the whole length of the axis. # This code is contributed by Smitha Dinesh Semwal. If I have 9 objects to place, I put them every 40 degrees. Asked in Math and Arithmetic , Factoring and Multiples What numbers go into 24 evenly ?. Click on your device’s app store icon to install now: Sacred Space - daily prayer for 20 years. 8 g/cm 3, and one with a density of 3. 6667 or 83 louvers. We provide advice and oversight regarding consistent property tax administration and valuation policies and practices to achieve uniform compliance and equitable treatment of taxpayer for the local property tax monies used to support schools and county and municipal governments; We are an administrative agency with the primary responsibilities. By using this website, you agree to our Cookie Policy. 
Calculator by house level At the moment, the last testing of the Premium calculator is underway. No Download or Signup. In this case, a length of 20 feet times a width of 20 feet equals 400. The teacher suspected a cheat, but no. divide synonyms, divide pronunciation, divide translation, English dictionary definition of divide. For example, if you want to divide 379 by 9 it is easy to do so. Establish a new vertical (2A) for the next column, find the center of 1A and use the diagonal line method from above to find the third vertical (3A). FHWA Office of Planning jeremy. Anatomy and Physiology. Zillow's Debt-to-Income calculator will help you decide your eligibility to buy a house. This browser support data is from Caniuse, which has more detail. to calculate the time, food → verteilen. What is left is the net value of the community estate to be divided between the parties. org will undergo routine maintenance from Friday, May 8 th, at 5:00 p. g1bdwq0k8jo2q86 8mdbyd3ni99 mhobts43zy7mo wkd8jt31vg plmxd6hlxjecaby eba7ogiqny p1gjbq8kobn5 ts8dyvtoskf 5f7xogq2ellaw etqo34u33k2 1cb8zsnx7yi d1ro1khortgp2 idnhdvzfptc01w g8z5zhue8ixkf 2wxnb1cbrlpma1 3l8b327vuet89d g2xfbnrzqmbyay 43pos2sladtc60 x80okn8oask 38vxlskyo86a 191xaxglk8z tgb0fayn63zg6p mbj9c5wopj6m2 y7hui0twyspot uiygw2hys4a g7ajrmlhvnu csqtxfzf4mlo yku2tahzxnx9t h4edosri6v3 y8roehg3cs
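The spacing arithmetic above reduces to a couple of one-line formulas. Here is a minimal Python sketch (the function names are my own, not from any tool mentioned on this page):

```python
def even_positions(length, count, inset_start=0.0, inset_end=0.0):
    """Positions of `count` objects spaced evenly along `length`:
    one more gap than objects, optionally inset from the ends."""
    usable = length - inset_start - inset_end
    gap = usable / (count + 1)          # one more gap than objects
    return [inset_start + gap * (i + 1) for i in range(count)]

def linspace(a, b, n):
    """n evenly spaced values from a to b inclusive, step (b - a)/(n - 1)."""
    step = (b - a) / (n - 1)
    return [a + step * i for i in range(n)]

# Five screws along a 48-inch board: one every 8 inches.
print(even_positions(48, 5))   # [8.0, 16.0, 24.0, 32.0, 40.0]

# Ruler marks dividing 0..60 into six equal sections.
print(linspace(0, 60, 7))      # [0.0, 10.0, 20.0, 30.0, 40.0, 50.0, 60.0]
```

The `count + 1` denominator is the "one more gap than objects" rule; `linspace` instead includes both endpoints, which is why it divides by n - 1.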
2020-07-06 17:15:40
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.36671850085258484, "perplexity": 1407.5882743580582}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655881763.20/warc/CC-MAIN-20200706160424-20200706190424-00480.warc.gz"}
https://cs.stackexchange.com/questions/132970/whats-the-name-of-this-sorting-algorithm
# What's the name of this sorting(?) algorithm?

Given the options of:

• Red
• Orange
• Yellow
• Green
• Blue
• Violet

You want to find your favorite color by comparing all pairs to each other, like so:

Red vs...
• Red vs Orange = Orange
• Red vs Yellow = Red
• Red vs Green = Red
• Red vs Blue = Red
• Red vs Violet = Red

Orange vs...
• Orange vs Yellow = Orange
• Orange vs Green = Orange
• Orange vs Blue = Orange
• Orange vs Violet = Orange

Yellow vs...
• Yellow vs Green = Green
• Yellow vs Blue = Yellow
• Yellow vs Violet = Yellow

Green vs...
• Green vs Blue = Green
• Green vs Violet = Green

Blue vs...
• Blue vs Violet = Violet

In the end you'll find your "most preferred". Not sure if you get a full order of preference. What's the name of this...thing?

Also, as D.W. described, I believe it's solved by counting which color won most often. So in the example above:

• Orange = 5
• Red = 4
• Green = 3
• Yellow = 2
• Violet = 1
• Blue = 0

• (Where's the procedure, the well-defined steps that define an abstract solution, the algorithm?) – greybeard Dec 4 '20 at 8:28
• I don't know the "well-defined steps"; that's why I'm trying to find the name of this thing. – Byran Zaugg Dec 5 '20 at 0:50
• What you present is a relation; it looks like a partial order. Per the title and tag, you refer to an algorithm using a definite article - I still miss any trace of it. A partial order may be "simple/total", but not, e.g., if, represented as a graph, there is more than one node with an in-degree of zero; same for out-degree. It still may be embeddable in a total order. – greybeard Dec 5 '20 at 6:22

If we can assume that there is a total order on all colors: to find the largest element, there is a straightforward $$O(n)$$-time algorithm: you scan through all colors, keeping track of the largest element seen so far:

• Set $$m$$ to the first color.
• For each other color $$c$$: set $$m := \max(m,c)$$. (Here $$\max$$ refers to the larger of the two colors, i.e., whichever is more preferred.)
There is no need to "sort" all of the colors to find the largest (most preferred) color.
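Both procedures in this thread, the full round-robin tally and the single linear scan, fit in a few lines. A sketch (assuming the total order implied by the comparisons above; the code and names are mine, not from the thread):

```python
# Total order implied by the pairwise results (index 0 = most preferred).
ranking = ["Orange", "Red", "Green", "Yellow", "Violet", "Blue"]

def prefer(a, b):
    """The more preferred of two colors."""
    return a if ranking.index(a) < ranking.index(b) else b

colors = ["Red", "Orange", "Yellow", "Green", "Blue", "Violet"]

# Round-robin: n(n-1)/2 comparisons, counting wins per color.
wins = {c: 0 for c in colors}
for i in range(len(colors)):
    for j in range(i + 1, len(colors)):
        wins[prefer(colors[i], colors[j])] += 1
for c in sorted(colors, key=wins.get, reverse=True):
    print(c, wins[c])    # Orange 5, Red 4, Green 3, Yellow 2, Violet 1, Blue 0

# Single scan: n - 1 comparisons suffice to find the favorite.
m = colors[0]
for c in colors[1:]:
    m = prefer(m, c)
print(m)                 # Orange
```

Ranking by number of pairwise wins is known as the Copeland method in voting theory; with a true total order the single scan finds the same winner with far fewer comparisons.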
2021-07-26 17:26:47
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 5, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.33878132700920105, "perplexity": 3700.572904727529}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046152144.81/warc/CC-MAIN-20210726152107-20210726182107-00593.warc.gz"}
http://mathhelpforum.com/calculus/93786-de-temperature-coffee.html
# Thread: DE: Temperature and Coffee

1. ## DE: Temperature and Coffee

Problem: A freshly brewed cup of coffee has temperature 95°C in a 20°C room. When its temperature is 78°C, it is cooling at a rate of 1°C per minute. When does this occur? Approximate to three decimal places.

Here is my work, please verify if it is correct:

$T(t) = y_0 e^{kt} + T_s$

$78 = 95e^{kt}+20$

$95e^{-t} = 78 - 20$

$t = -\ln\left(\frac{78 - 20}{95}\right)$

Therefore, the coffee's temperature will be decreasing at 1 degree per minute after: 0.4934 minutes.

2. Newton's Law of Cooling ...

$\frac{dT}{dt} = k(T - T_a)$

$\frac{dT}{dt} = k(T - 20)$

when $T = 78$, $\frac{dT}{dt} = -1$

$k = -\frac{1}{58}$

$\frac{dT}{T-20} = k \, dt$

$\ln|T-20| = kt + C$

$T-20 = Ae^{kt}$

$T = 20 + Ae^{kt}$

at $t = 0$, $T = 95$ ... $95 = 20 + A$ ... $A = 75$

$T = 20 + 75e^{kt}$

$78 = 20 + 75e^{kt}$

$58 = 75e^{kt}$

$t = \frac{1}{k} \cdot \ln\left(\frac{58}{75}\right) \approx 15$ minutes

3. I see my mistake! Thanks a lot, skeeter!
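skeeter's closed-form answer is easy to sanity-check numerically. A quick sketch in plain Python (variable names are mine, not from the thread):

```python
from math import exp, log

k = -1 / 58                         # from dT/dt = k(T - 20) with dT/dt = -1 at T = 78
T = lambda t: 20 + 75 * exp(k * t)  # satisfies T(0) = 95

t78 = log(58 / 75) / k              # solve 78 = 20 + 75 e^(k t) for t
print(round(t78, 3))                # 14.909 -- about 15 minutes
```

So to three decimal places the answer is about 14.909 minutes, consistent with the ≈ 15 minutes above.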
2016-09-29 21:31:07
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 21, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8225964307785034, "perplexity": 4074.9319322700476}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-40/segments/1474738661915.89/warc/CC-MAIN-20160924173741-00190-ip-10-143-35-109.ec2.internal.warc.gz"}
https://web.ma.utexas.edu/users/m408s/CurrentWeb/LM11-4-2.php
### The Basic Comparison Test

Theorem: If $\displaystyle{\sum_{n=1}^\infty a_n}$ and $\displaystyle{\sum_{n=1}^\infty b_n}$ are series with non-negative terms, then:

If $\displaystyle{\sum_{n=1}^\infty b_n}$ converges and $a_n \le b_n$ for all $n$, then $\displaystyle{\sum_{n=1}^\infty a_n}$ converges.
If $\displaystyle{\sum_{n=1}^\infty b_n}$ diverges and $a_n \ge b_n$ for all $n$, then $\displaystyle{\sum_{n=1}^\infty a_n}$ diverges.

In fact, 1. will work if $a_n\le b_n$ for all $n$ larger than some finite positive $N$, and similarly for 2.

Example 1: The series $\displaystyle \sum_{n=1}^\infty\frac{2^n}{3^n+1}$ converges, since $$\frac{2^n}{3^n+1}\le \frac{2^n}{3^n}$$ and we know that $\displaystyle \sum_{n=1}^\infty\left(\frac{2}{3}\right)^n$ is a convergent geometric series, with $r=\frac23<1$. The video explains the test, and looks at an example.

Example 2:  Test the series $\displaystyle\sum_{k=1}^\infty\frac{\ln k}{k}$ for convergence or divergence. DO:  Do you think this series converges?  Try to figure out what to compare this series to before reading the solution.

Solution 2:  $\displaystyle\frac{\ln k}{k}\ge\frac1k$ for $k\ge3$ (since $\ln k\ge1$ once $k\ge e$), and the harmonic series with terms $\frac1k$ diverges, so by the remark above our series diverges.

----------------------------------------------------------------

Example 3:  Test the series $\displaystyle\sum_{n=1}^\infty\frac{1}{5n+10}$ for convergence or divergence.  DO:  Try this before reading further.

Solution 3:  The terms look much like the harmonic series, and when we compare terms, we see that $\displaystyle\frac{1}{5n+10}\le\frac1n$.  But the harmonic series diverges.  Our terms are smaller than those of a divergent series, so we know nothing.  Let's compare to $\displaystyle\frac1{n^2}$.  The series $\displaystyle\sum\frac{1}{n^2}$ is a convergent $p$-series, with $p=2$.  But when we compare terms, we get $\displaystyle\frac{1}{5n+10}\ge\frac1{n^2}$ as long as $n\ge7$, so our terms are larger than those of a convergent series, and this comparison also tells us nothing.  We will use the limit comparison test (coming up) to test this series.
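The comparison in Example 1 can be watched numerically: every partial sum of $\sum 2^n/(3^n+1)$ stays below the corresponding partial sum of the dominating geometric series, which converges to 2. A small Python sketch (not part of the course page):

```python
partial = 0.0   # partial sums of 2^n / (3^n + 1)
geo = 0.0       # partial sums of the dominating series (2/3)^n
for n in range(1, 200):
    partial += 2**n / (3**n + 1)
    geo += (2 / 3) ** n
    assert partial <= geo   # term-by-term domination carries over to partial sums
print(partial, geo)         # geo approaches 2, so partial is bounded and converges
```

Since the partial sums are increasing and bounded above by 2, the series converges, which is exactly what the comparison test concludes.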
https://fqxi.org/community/forum/topic/1321
CATEGORY: Questioning the Foundations Essay Contest (2012)

TOPIC: Questions for Physics and the Physics Community by Joseph Leonard McCord

Author Joseph Leonard McCord wrote on Jul. 10, 2012 @ 11:12 GMT

Essay Abstract: What is the relationship between the mathematical abstractions of the theories of physics and the world of perceptual experience? How might a reconception of this relationship generate questions of theoretical interest to physics?

Author Bio: Too many questions, not enough time. Living, breathing, bipedal tetrapod.

nmann wrote on Jul.
10, 2012 @ 19:07 GMT "The question that I want to pose is of a slightly different nature; it is whether or not, to what degree or within what limits, everything that is real is mathematizable. ... It is possible that mathematization has its limits." Strong fermionic interaction in QFT, which gives rise to the fermion minus-sign problem (aka the Sign Problem, the N-Body Problem, the Many-Body Problem etc.) is an excellent possible example and/or case in point. Perhaps the main go-to people regarding this are Jan Zaanen and Matthias Troyer. Zaanen has this specifically to say: "We have in fact no understanding at all of what is going on in space-time, because we need mathematics to look around and the sign problem is 'NP hard', meaning that the problem is mathematically unsolvable." Then he cites Troyer and Wiese. Speaking of cites I can provide several. report post as inappropriate Author Joseph Leonard McCord replied on Jul. 11, 2012 @ 18:15 GMT Thank you for the references. It's exciting to me to find this agreement! Thomas Howard Ray wrote on Jul. 11, 2012 @ 11:45 GMT Joseph, " ... whether or not, to what degree or within what limits, everything that is real is mathematizable." Of course, one must define "real" and "limit." You are correct that there are no numbers nor ideal geometrical objects in nature. OTOH, does one think that the symbols C-A-T are real objects in nature? If you pop over to my essay, "The Perfect First Question," I'd like to convince you that you have unlimited time to ask unlimited questions. :-) Tom report post as inappropriate Author Joseph Leonard McCord replied on Jul. 11, 2012 @ 18:32 GMT One must distinguish between signifier and signified. The terminology is from Saussure, who distinguished both signifier and signified from the referent object.
The signifier "C-A-T", whether as a sound or as a written expression, is of course a real object in the physical world (maybe only in the form of bits and bytes, but nevertheless physically real). The referent object, a cat or the set of all cats in general, is also something physically real. In between the two, however, is the signified - the concept, rather than the object or objects, to which the signifier refers. To switch to an example which I can more easily defend - one cannot, upon hearing or reading the signifier "table", locate a "table" in the physical world, without first making recourse to a generalized concept (whether consciously articulated or not) of what a "table" is. I think it is empirically true that we do have such concepts, which we use, even before we have begun to analyze or define them. The process of making definitions is itself more complex and more problematic than we might immediately assume. It would be easy for instance to include in the definition of table that it has legs; but I could then show examples of unusual tables that do not have legs, which most people would nevertheless accept as being tables. Definitions of the "signified" of signs (the concepts to which signifiers refer) are, in general, also subject to revision for reasons such as this. The signified or concept, that is to say, is in some way prior to our making of specific attempts at definition, and should not be confused with a definition. The crucial point is that the signifiers of mathematics which we handle very readily seem to have "signifieds" - to refer to concepts - which are not things in the physical world, and which cannot actually (it would seem) be derived from things that exist in the physical world. It seems more appropriate to say that some things or phenomena in the physical world approximate the nature of mathematical (geometrical, arithmetic, algebraic, etc.)
"objects", than to say that the mathematical concepts are actually derived empirically from our experience of the physical world. Mathematical concepts or objects and our empirical experience of the physical world are related in intriguing ways. It does not seem immediately obvious that either is reducible to the other. nmann wrote on Jul. 12, 2012 @ 15:49 GMT Mathematics is a language. It simply quantifies stuff in the world and constructs narratives based on those quantifications (including geometric abstractions) instead of naming things and constructing narratives on that basis in the manner of verbal language(s). What it all has in common is that the human species evolved in a general environment which allows for and gives a certain adaptive advantage to quantification, naming and discerning relationships. And we have bodies with fronts, backs and two sides, kind of rectangular when you think about it. On the savannas our ancestors saw the circular horizon, the circular moon and sun. Triangles probably came more serendipitously, but once invented they caught on. Logic is physical. Formal logic and formal mathematics devolve from physical experience in the world. All mathematics is an extrapolation from natural numbers, natural sets of objects and shapes encountered in ordinary life. Verbal language, based on named things and relationships between named things, besides being useful in a practical sense as well as inseparable from the whole semiotic structure of human culture, adds to life a dimension of poetry and narrative just for the sake of narrative and poetry. Advanced apes like us are easily bored. A caveat. For example, Bell's inequality is a statement of pure mathematical logic that can be used as the basis of physical experiment here in the macroworld using arbitrary sets of separable objects. Just define three physical features, applicable to all objects in the set, which the objects either possess or do not. Longer than, yes or no. 
This color, yes or no. Animate, yes or no. Unlike in the quantum realm you'll never violate the inequality. It's both logical and physical here where we live. The point: we evolved in a world with physical characteristics which are always present whether or not we consciously formulate them as abstract rules. But at some level we've internalized them. The problem (not a problem in the case of Bell) is that the languages we employ to consciously effect the abstractions and formalize the rules can take on lives of their own. Then we may mislead ourselves. report post as inappropriate John Merryman wrote on Jul. 14, 2012 @ 03:46 GMT Joseph, Thank you for the well written and focused essay. Not only do I agree that math is not foundational to reality and is only a tool to describe it, but would go even further and say it can be quite haphazard and misleading on occasion. It is essentially various methods of reductionism, more than a primary ordering principle. Math is no more the essence of reality, than the calcium in the bottom of a tea kettle is the essence of water. Physics treats measurement as an absolute, when it is anything but. When we measure space, whether lines, planes or volume, we are measuring aspects of space, but when we measure time, we are measuring rates of change, as effected by action. Temperature Is another measure of action, the scalar of activity and if we increase the level of activity, we speed up the rate of change. Since gravity and velocity serve to slow the levels of atomic activity in a given frame, it slows the rate of change. That is why clock rates vary, not because they travel different time vectors. We could use ideal gas laws to correlate temperature and volume, much as the velocity of light is used to correlate distance and duration, but no one talks about "spacetemperature," because we better understand the nature of temperature than we do of time, which is the subject of my essay. 
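nmann's macroworld claim above — that the counting form of Bell's inequality is never violated by sets of classical objects with three yes/no features — can be checked exhaustively for small sets. A hedged sketch (the encoding of objects as Boolean feature triples is my own illustration):

```python
# Brute-force check of the counting form of Bell's inequality,
#   N(A and not B) + N(B and not C) >= N(A and not C),
# over every possible small collection of classical objects, where an
# object is just a triple of yes/no features (a, b, c).

from itertools import product

def bell_counts(objects):
    n_ab = sum(1 for a, b, c in objects if a and not b)
    n_bc = sum(1 for a, b, c in objects if b and not c)
    n_ac = sum(1 for a, b, c in objects if a and not c)
    return n_ab, n_bc, n_ac

patterns = list(product([False, True], repeat=3))  # 8 feature patterns

# Check every collection of up to three objects.
for size in range(1, 4):
    for objs in product(patterns, repeat=size):
        n_ab, n_bc, n_ac = bell_counts(objs)
        assert n_ab + n_bc >= n_ac
```

The identity behind it is set-theoretic: every object counted in N(A, not C) is counted either in N(A, not B) or in N(B, not C), depending on its B-feature — which is exactly why violating the inequality requires non-classical statistics.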
Space is foundational, as an infinite equilibrium state. You make the point that with only one object, it would be impossible to deduce space, but that overlooks centrifugal force. Say you are on an astroid in the deepest regions of intergalactic space and had poor eyesight. Would you then be safe from spinning off, because there are no outside references, but if you were to pull out your glasses and see other galaxies moving, would that then put you in danger? At its most fundamental level, reality is action in space. The fluctuating vacuum. Regards, John ( Hope this posts. I had to rewrite it after the first version disappeared into the void.) report post as inappropriate Joe Fisher wrote on Jul. 16, 2012 @ 14:20 GMT Dear Mr. McCord, I have concluded that my reading of your exceptionally well written essay was one of the most instructive experiences I have enjoyed in a very long time. I had, I must regretfully admit to begin to suspect that Mathematics could well be the abstract equivalent of crack cocaine for the intelligentsia, and your reasoned essay has certainly tempered that over harsh and probably erroneous assessment on my part. As far as I can tell, there is only one real Universe once. That one real Universe once has three visual aspects once, only a part of one of which has possibly been observed from one earth once. There is only one of each and every real and imagined thing once in the one real Universe once. Each and every one of these singular occurring things and fancies has three aspects only one of which can ever be evaluated once. Although you sagely pointed out the fact that actual perfect spheres and cubes and pyramids cannot exist, you failed to point out that the basic shape of all creation, the egg does indubitably exist once. report post as inappropriate J. C. N. Smith wrote on Jul. 17, 2012 @ 13:48 GMT Joseph, Thank you for an interesting essay. 
Following are some insightful comments on this topic written by a prominent theoretical physicist: "It is mathematics, more than anything else, that is responsible for the obscurity that surrounds the creative process of theoretical physics. Perhaps the strangest moment in the life of a theoretical physicist is that in which one realizes, all of a sudden, that one's life is being spent in pursuit of a kind of mystical experience that few of one's fellow humans share. I'm sure that most scientists of all kinds are inspired by a kind of worship of nature, but what makes theoretical physicists peculiar is that our sense of connection with nature has nothing to do with any direct encounter with it. Unlike biologists or experimental physicists, what we confront in our daily work is not usually any concrete phenomena. Most of the time we wrestle not with reality but with mathematical representations of it." (Lee Smolin, 'The Life of the Cosmos.') In my essay Rethinking a Key Assumption About the Nature of Time I address a glaring disconnect between our most primitive empirical observations about objective reality and conclusions about the nature of reality which are believed by mainstream physicists to flow logically from mathematical descriptions of reality. Should you find time to read it I'd welcome your thoughts. jcns report post as inappropriate Peter Jackson wrote on Aug. 20, 2012 @ 14:55 GMT Joseph I found your essay very intuitive and relevant. I too believe in reality and indeed am convinced I've now found the issue(s) with maths, and the underlying consistent logical reality it hides. I hope you'll read and consider my essay. It suggests our current maths is woefully inadequate to describe nature, and that maths is essentially just quantified of logic. After exploring the path it seems that to answers to your questions may be; 1. Yes. A final theory would be possible. (the track is identified once you're ready) 2. Wrong pretext. 3. 
Mathematizability is theoretically possible but not with current maths or 'computability.' Time stepping maths needs to be better developed to describe interaction evolution, and better logical structures such as PDL (see essay). Use of the simple structure of Truth Propositional Logic also works perfectly for a kinetic unification theory. I hope you are able to glean the explanations of the above from the dense and complex kinetic relationships underlying the metaphores I present. Do please then give me your views. Very best wishes. Peter report post as inappropriate James Lee Hoover wrote on Aug. 30, 2012 @ 20:19 GMT Joseph, Some intriguing concepts and questions. For a mathematician math is an easier, more exacting, more flexible, and more efficient way of describing reality than creating a word image or drawing an image. Models can come in both. After all, aren't we speaking of models? My essay deals with conclusions -- a model w/o math -- based on observations and theory. Jim report post as inappropriate Member Benjamin F. Dribus wrote on Sep. 17, 2012 @ 05:45 GMT Dear Joseph, You have written a very thoughtful essay about very deep questions, and I suspect that the answer to most of them is that no one can answer them adequately, at least, not at the present time! However, since you focus partly on the role of mathematics in the description of physical reality, and since I happen to be a mathematician preoccupied with my own meager efforts to... view entire post report post as inappropriate Hoang cao Hai wrote on Sep. 27, 2012 @ 14:53 GMT Dear Joseph Leonard McCord It is true that you too stressed,so arguing too loose,let relax and then posting a more specific supplement. Kind Regards ! Hải.Caohoàng of THE INCORRECT ASSUMPTIONS AND A CORRECT THEORY August 23, 2012 - 11:51 GMT on this essay contest. report post as inappropriate Sergey G Fedosin wrote on Oct. 4, 2012 @ 08:55 GMT If you do not understand why your rating dropped down. 
As I found, ratings in the contest are calculated in the following way. Suppose your rating is $R_1$ and $N_1$ was the number of people who rated you. Then you have $S_1 = R_1 N_1$ points. After that, someone gives you $dS$ points, so you have $S_2 = S_1 + dS$ points, and $N_2 = N_1 + 1$ is the total number of people who rated you. At the same time you will have $S_2 = R_2 N_2$ points. From here, if you want $R_2 > R_1$, there must be: $S_2/N_2 > S_1/N_1$, or $(S_1 + dS)/(N_1 + 1) > S_1/N_1$, or $dS > S_1/N_1 = R_1$. In other words, if you want to increase someone's rating, you must give him more points $dS$ than the participant's rating $R_1$ was at the moment you rated him. From here it is seen that the contest has special rules for ratings, and hence the misunderstanding of some participants about what happened to their ratings. Moreover, since community ratings are hidden, some participants are not sure how to increase the ratings of others and give them the maximum 10 points. But in that case the scale from 1 to 10 points does not work, and some essays are overestimated while others drop down. In my opinion this is a bad problem with this Contest's rating process. I hope the FQXi community will change the rating process. Sergey Fedosin report post as inappropriate
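Fedosin's arithmetic is just the update rule for a running average. A small sketch of it (the mechanics are his reverse-engineered assumption about the contest, not documented FQXi behaviour):

```python
# New rating after one more score dS joins N1 scores currently
# averaging R1:  R2 = (R1*N1 + dS) / (N1 + 1).

def updated_rating(r1, n1, ds):
    return (r1 * n1 + ds) / (n1 + 1)

# The rating rises exactly when the new score exceeds the old rating...
assert updated_rating(6.0, 10, 7) > 6.0
# ...and falls otherwise, even when the new score is itself high:
assert updated_rating(9.5, 10, 9) < 9.5
# A score equal to the current rating leaves it unchanged.
assert updated_rating(5.0, 4, 5) == 5.0
```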
https://collegephysicsanswers.com/openstax-solutions/show-if-coil-rotates-angular-velocity-omega-period-its-ac-output-dfrac2-piomega
Question: Show that if a coil rotates at an angular velocity $\omega$, the period of its AC output is $\dfrac{2 \pi}{\omega}$.

# OpenStax College Physics Solution, Chapter 23, Problem 35 (Problems & Exercises) (3:14)
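Since the solution itself sits behind the sign-up wall, here is a sketch of the standard argument (my own outline, not the video's transcript):

```latex
% A coil rotating at angular velocity \omega produces a sinusoidal emf:
\mathcal{E}(t) = \mathcal{E}_0 \sin(\omega t).
% The period is the smallest T > 0 with \mathcal{E}(t+T) = \mathcal{E}(t)
% for all t. Since \sin has period 2\pi in its argument, this requires
\omega (t + T) = \omega t + 2\pi
\;\Longrightarrow\; \omega T = 2\pi
\;\Longrightarrow\; T = \frac{2\pi}{\omega}.
```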
https://zbmath.org/?q=an:1044.37011
## Lusternik-Schnirelmann theory for fixed points of maps.(English)Zbl 1044.37011 The paper contains an interesting generalization of the Lusternik-Schnirelman theory for gradient-like flows. The authors consider homotopy equivalences $$\phi : X\to X$$ from a Hausdorff space into itself, which are “gradient like” i.e., when there exists a function $$f: X \to \mathbb R,$$ bounded below, such that $$f(\phi(x)) \leq f(x)$$ for all $$x \in X.$$ The basic result is as follows: If on the set $$f^b= \{x\in X| f(x)\leq b\}$$: a) $$f(\phi(x)) < f(x)$$ unless $$\phi(x)=x$$. b) The pair $$(\phi, f)$$ verifies the following condition of Palais-Smale type: If $$f(x) -f(\phi(x))$$ is not bounded away from zero on a bounded set $$A\subset f^b$$ then $$\phi$$ has a fixed point on the closure of $$A.$$ Then, the category of the set of fixed points of $$\phi$$ on the level set $$f= b$$ is not less than the category of $$f^b.$$ Actually, the paper contains much stronger results. If $$G$$ is a compact Lie group and $$X$$ is a $$G$$ space, then a relative version of the $$G$$-category constructed by the authors in the $$G$$-equivariant setting, under some natural assumptions, allows to estimate from below the number of fixed points of $$\phi$$ on $$f^{-1}([a,b])$$ in terms of $$G \text{-cat} \, f^b -G\text{-cat }\, f^a.$$ ### MSC: 37B35 Gradient-like behavior; isolated (locally maximal) invariant sets; attractors, repellers for topological dynamical systems 37B99 Topological dynamics 55M20 Fixed points and coincidences in algebraic topology 58E05 Abstract critical point theory (Morse theory, Lyusternik-Shnirel’man theory, etc.) in infinite-dimensional spaces 55M30 Lyusternik-Shnirel’man category of a space, topological complexity à la Farber, topological robotics (topological aspects) Full Text:
https://crypto.stackexchange.com/questions/29707/the-example-of-wikipedia-on-steganography-doesnt-work
# The example of Wikipedia on steganography doesn't work

I was reading some stuff on steganography. However, I tried many software tools to get the hidden image, but none worked. Is there anything wrong with that? I tried OpenStego (Windows), SecretLayer (Windows) and VSL (Linux).

• I don't know what those programs do, but taking the two low bits manually does give the cat. – otus Oct 8 '15 at 17:40

The Wikipedia example is something you probably would not really want to do, but it does demonstrate just how much information you can add to a non-lossy image format without it being obvious to the human eye. Importantly, the technique used in Wikipedia only works for hiding a lower-quality image of the same dimensions as the container in the low bits of the main image. More general steganography programs will attempt to store more arbitrary messages, and will use different storage strategies. It is unlikely that any would support the example approach for "hidden-image-in-image" shown in Wikipedia. Instead you can verify the Wikipedia example using a short program in e.g. Matlab, to mask out and re-normalise the low bits. In pseudocode:

```
tree_image_pixels <- load_image( 'example.png' )
cat_pixels <- ( tree_image_pixels & 3 ) * 85
save_image_from_pixels( 'cat.png', cat_pixels )
```

The value of 85 is chosen so that the maximum pixel value is 3 * 85 = 255, using the full colour range. Lower values also work, but produce a less bright image.

• I'd multiply with 85, so the maximal value of 3 turns into 255 – CodesInChaos Oct 9 '15 at 7:34
• Here's the non-pseudo Python code I hacked together last night to verify it. – otus Oct 9 '15 at 10:28
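For reference, here is a pure-Python version of the same extraction (my own sketch, not otus's linked code), operating on raw (r, g, b) pixel tuples:

```python
# Recover the hidden image from the Wikipedia composite: keep only the
# two low bits of each colour channel, then rescale so that the
# maximum value 3 maps to full brightness 255 (3 * 85 = 255).

def reveal(pixels):
    """pixels: iterable of (r, g, b) tuples from the composite image."""
    return [tuple((v & 3) * 85 for v in px) for px in pixels]

# A composite pixel whose low two bits per channel are 3, 1, 0
# decodes to the hidden pixel (255, 85, 0).
print(reveal([(131, 201, 64)]))  # -> [(255, 85, 0)]
```

To run this on the actual PNG you would load the pixel data with an imaging library (for example Pillow's `Image.getdata()`) and write the result back out.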
https://math.stackexchange.com/questions/3774655/proof-of-absolute-convergence-of-sum-limits-n-1-infty-1n-1-tan-left
# Proof of Absolute Convergence of $\sum\limits_{n=1}^{\infty} (-1)^{n-1}\tan\left(\frac{1}{n\sqrt{n}}\right)$ $$\sum\limits_{n=1}^{\infty} (-1)^{n-1}\tan\left(\frac{1}{n\sqrt{n}}\right)$$ My Attempt: To prove absolute convergence, we must consider $$\sum\limits_{n=1}^{\infty}a_n$$ and $$\sum\limits_{n=1}^{\infty}|a_n|$$. I know that as $$\theta \to0$$ we can approximate $$\tan\theta \sim \theta$$. Hence $$\sum\limits_{n=1}^{\infty}|a_n|$$ becomes: $$\sum\limits_{n=1}^{\infty}\left|\tan{\frac{1}{n\sqrt{n}}}\right| \sim_{\infty} \sum\limits_{n=1}^{\infty} \frac{1}{n\sqrt{n}}, 0 \leq a_n$$ Which converges by the $$p$$ test. As noted, the $$\theta \to 0$$ which means that $$a_n > a_{n+1}$$. Furthermore: $$\lim_{n \to \infty} \tan{\frac{1}{n\sqrt{n}}} = 0$$ By the alternating series test, it converges. Since the sums for $$a_n$$ and $$|a_{n} |$$ are both converging, it is absolute convergence by definition. Is this approach correct? • You need to be more careful in your proof. If $a_n\sim b_n$ and either sequence remains positive (or negative), then the series $\sum a_n$ and $\sum b_n$ have the same nature. However, we do NOT have $\sum a_n \sim \sum b_n$. Take $\sum\frac{1}{n^2}$ and $\sum\frac{1}{n(n+1)}$ for example. – charlus Jul 30 at 14:34 Almost. Just saying that $$\tan\theta\sim\theta$$ is a bit vague though. I suggest that you add$$\lim_{n\to\infty}\frac{\tan\left(\frac1{n\sqrt n}\right)}{\frac1{n\sqrt n}}=1$$to your proof, which is something that follows from that fact that $$\tan0=0$$ and that $$\tan'(0)=1$$. In other words, use the comparison test. • And also it is needed that $\tan$ is $\mathcal C^2$, no? – Maximilian Janisch Jul 30 at 17:54 We need to refer to limit comparison test $$\lim_{n\to\infty}\frac{\tan\left(\frac1{n\sqrt n}\right)}{\frac1{n\sqrt n}}=1$$ which implies that the absolute series converges since $$\sum\limits_{n=1}^{\infty} \frac{1}{n\sqrt{n}}$$ converges. 
As an alternative, by the direct comparison test: since for $0\le x \le 1$ we have $\tan x \le x+x^3$,

$$\sum\limits_{n=1}^{\infty}\tan{\frac{1}{n\sqrt{n}}}\le\sum\limits_{n=1}^{\infty} \frac{1}{n\sqrt{n}}+\sum\limits_{n=1}^{\infty} \frac{1}{n^4\sqrt{n}}.$$

Since the absolute series converges, we can conclude that the alternating series also converges.

For $n\geq 1$ we have $\left|\tan\left(\frac{1}{n\sqrt{n}}\right)\right|\leq\frac{3}{n\sqrt{n}}$, which implies

$$\sum_{n=1}^{\infty}\left|(-1)^{n-1}\tan\left(\frac{1}{n\sqrt{n}}\right)\right|\leq \sum_{n=1}^{\infty}\frac{3}{n\sqrt{n}},$$

and $\sum_{n\geq 1}\frac{3}{n\sqrt{n}}$ is convergent $\implies \sum_{n\geq 1}\left|(-1)^{n-1}\tan\left(\frac{1}{n\sqrt{n}}\right)\right|$ is convergent $\implies \sum_{n\geq 1}(-1)^{n-1}\tan\left(\frac{1}{n\sqrt{n}}\right)$ converges absolutely.
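A quick numerical sanity check (an illustrative Python sketch, not a proof) supports both comparisons above: the limit-comparison ratio $\tan(x)/x$ with $x = 1/n^{3/2}$ tends to $1$, the direct-comparison bound $\tan(1/n^{3/2}) \le 3/n^{3/2}$ holds, and the partial sums of the absolute series level off.

```python
import math

def a(n):
    """n-th absolute term |tan(1/n^(3/2))| of the series."""
    return abs(math.tan(n ** -1.5))

# Limit-comparison ratio: tan(x)/x -> 1 as x -> 0, here with x = 1/n^(3/2).
n = 10**6
ratio = a(n) / n ** -1.5
print(ratio)  # very close to 1

# Direct-comparison bound used above: |tan(1/n^(3/2))| <= 3/n^(3/2).
assert all(a(k) <= 3 * k ** -1.5 for k in range(1, 10**4))

# Partial sums of the absolute series barely move between N = 10^3
# and N = 10^5, consistent with absolute convergence.
s_small = sum(a(k) for k in range(1, 10**3 + 1))
s_large = sum(a(k) for k in range(1, 10**5 + 1))
print(s_small, s_large)
```

Of course the finite computation only illustrates the behaviour; the convergence itself rests on the comparison tests above.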
https://math.stackexchange.com/questions/3251311/does-this-limit-exist-or-is-undefined
# Does this limit exist or is undefined?

$$\lim_{x\to -\infty}\ln\left(\frac{x^2+1}{x-3}\right)=\infty$$

This is the answer I get from Wolfram Alpha, but shouldn't the answer be that the limit doesn't exist? For large negative values of $x$, we can ignore the $+1$ and $-3$, so we can change the limit to

$$\lim_{x\to -\infty}\ln\left(\frac{x^2}{x}\right).$$

As $x$ approaches $-\infty$, $\frac{x^2}{x}$ also approaches $-\infty$, so we get $\ln(-\infty)$. However, $\ln(-\infty)$ doesn't make sense, because $\ln(x)$ isn't even defined for negative numbers. So the limit doesn't exist and is therefore undefined. Am I wrong?

• Yes, you should be right. The function isn't even defined, say, at $x=-100$. – Dzoooks Jun 4 at 23:30
• Perhaps add the link to your Wolfram Alpha computation. – Michael Burr Jun 4 at 23:31
• I don't know if it can be applied to limits, but if we extend $\ln(z)$ to the whole complex plane we could say $\ln(-x)=\ln(-1)+\ln(x)$, so we can put the limit in a form where $\Re(L)\to\infty$ and there will always be an imaginary part. – Henry Lee Jun 4 at 23:36
• wolframalpha.com/input/… – user532874 Jun 4 at 23:36
• Wolfram Alpha tends to assume you are working with complex-valued functions, even if the domain of the function is suggested to be real numbers. So $\ln x$ is defined for $x$ a negative real. This also means the limit should be taken as the extended complex $\infty$, and not the extended real $+\infty$. – Thomas Andrews Jun 5 at 0:09

As the comments suggested, Wolfram usually assumes you are working with complex-valued functions, so that $\ln(-x) = \ln(-1) + \ln(x)$ and therefore $\ln(-\infty) = \infty$ (as complex infinity). So you are right that the limit doesn't make sense and shouldn't exist when we consider the function to be real-valued only.
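The real-vs-complex distinction can be seen numerically. The sketch below (using Python's standard `math` and `cmath` modules) evaluates the expression at a large negative $x$: the real-valued `math.log` rejects the negative argument, while the complex `cmath.log` returns a value whose real part grows without bound and whose imaginary part is $\pi$, matching the Wolfram Alpha convention described in the answer.

```python
import math
import cmath

def arg_of_log(x):
    """Argument of the logarithm in the limit expression."""
    return (x**2 + 1) / (x - 3)

x = -1e6
val = arg_of_log(x)
print(val)  # a large *negative* number, roughly x itself

# The real-valued ln is undefined for negative arguments:
try:
    math.log(val)
except ValueError:
    print("real ln(negative) is undefined")

# The complex logarithm is defined: Re grows without bound as x -> -inf,
# while Im stays at pi (principal branch for negative reals).
z = cmath.log(val)
print(z.real, z.imag)
```

Pushing $x$ further toward $-\infty$ only increases `z.real`, which is why the complex-valued limit is reported as $\infty$.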
https://www.semanticscholar.org/paper/Entanglement-of-Formation-of-an-Arbitrary-State-of-Wootters/f314b0f632e6ae3767c6c4a63a94b06b4947e62a
# Entanglement of Formation of an Arbitrary State of Two Qubits

@article{Wootters1998EntanglementOF, title={Entanglement of Formation of an Arbitrary State of Two Qubits}, author={William K. Wootters}, journal={Physical Review Letters}, year={1998}, volume={80}, pages={2245-2248} }

• W. Wootters • Published 12 September 1997 • Physics • Physical Review Letters

The entanglement of a pure state of a pair of quantum systems is defined as the entropy of either member of the pair. The entanglement of formation of a mixed state is defined as the minimum average entanglement of an ensemble of pure states that represents the given mixed state. An earlier paper [Phys. Rev. Lett. 78, 5022 (1997)] conjectured an explicit formula for the entanglement of formation of a pair of binary quantum objects (qubits) as a function of their density matrix, and proved the…

5,510 Citations

### Entanglement of Formation of an Arbitrary State of Two Rebits

• Physics • 2000

We consider entanglement for quantum states defined in vector spaces over the real numbers. Such real entanglement is different from entanglement in standard quantum mechanics over the complex…

### Relative Entropy of Entanglement of One Class of Two-Qubit System

• Physics • 2001

The relative entropy of entanglement of a mixed state $\rho$ for a bipartite quantum system can be defined as the minimum of the quantum relative entropy over the set of completely disentangled states.

### Entanglement of General Two-Qubit States in a Realistic Framework

• Physics Symmetry • 2021

A new set of maximally entangled conditions is determined that provides the maximal amount of entanglement for certain values of the amplitudes of SCSs for the case of pure states.

### Relativity of Pure States Entanglement

• Physics • 2002

Entanglement of any pure state of an $N \times N$ bi-partite quantum system may be characterized by the vector of coefficients arising from its Schmidt decomposition. We analyze various measures of…

### Note on Entanglement of an Arbitrary State of Two Qubits

It is shown that the norm of the polarization vector of the reduced density matrix can characterize the entanglement of two qubits, and so it is defined as a simple measure of entanglement. It is then…

### Measuring entanglement of a rank-2 mixed state prepared on a quantum computer

• Physics The European Physical Journal Plus • 2021

We study the entanglement between a certain qubit and the remaining system in rank-2 mixed states prepared on the quantum computer. The protocol, which we propose for this purpose, is based on the…

### Entanglement of formation of rotationally symmetric states

• Physics Quantum Inf. Comput. • 2008

An analytic expression is derived for the entanglement of formation of rotationally symmetric states of a spin-j particle and a spin-1/2 particle, and expressions for the I-concurrence, I-tangle, and convex-roof-extended negativity are given.

### Entanglement dynamics of two-qubit pure state

We show that the entanglement dynamics for the pure state of a closed two-qubit system is part of a 10-dimensional complex linear differential equation defined on a supersphere, and the coefficients…

### Dynamics of entanglement in quantum computers with imperfections

• Physics Physical Review Letters • 2003

It is shown that there is a transition from a perturbative region, where the entanglement is stable against imperfections, to the ergodic regime, in which a pair of qubits becomes entangled with the rest of the lattice and the pairwise entanglement drops to zero.

### Quantum entanglement of formation between qudits

• Physics • 2008

We develop a fast algorithm to calculate the entanglement of formation of a mixed state, which is defined as the minimum average entanglement of the pure states that form the mixed state.
The…

## References

SHOWING 1-10 OF 31 REFERENCES

### Entanglement of a Pair of Quantum Bits

• Physics • 1997

The "entanglement of formation" of a mixed state $\rho$ of a bipartite quantum system can be defined as the minimum number of singlets needed to create an ensemble of pure states that…

### Concentrating partial entanglement by local operations

• Physics Physical Review A • 1996

Any pure or mixed entangled state of two systems can be produced by two classically communicating separated observers, drawing on a supply of singlets as their sole source of entanglement.

### Entanglement measures and purification procedures

• Physics, Computer Science • 1998

It is argued that the statistical basis of the measure of entanglement determines an upper bound to the number of singlets that can be obtained by any purification procedure.

### Mixed-state entanglement and quantum error correction

• Computer Science Physical Review A • 1996

It is proved that an EPP involving one-way classical communication and acting on mixed state $M$ (obtained by sharing halves of Einstein-Podolsky-Rosen pairs through a channel) yields a QECC on $\chi$ with rate $Q=D$, and vice versa, and it is proved that $Q$ is not increased by adding one-way classical communication.

### On the measure of entanglement for pure states

• Physics, Computer Science • 1996

It is shown that entropy of entanglement is the unique measure of entanglement for pure states.

### Purification of noisy entanglement and faithful teleportation via noisy channels

• Computer Science Physical Review Letters • 1996

Upper and lower bounds on the yield of pure singlets ($|\Psi^-\rangle$) distillable from mixed states $M$ are given, showing $D(M)>0$ if $\langle\Psi^-|M|\Psi^-\rangle>\frac{1}{2}$.

### Quantifying Entanglement

• Physics • 1997

We have witnessed great advances in quantum information theory in recent years. There are two distinct directions in which progress is currently being made: quantum computation and error correction…
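Wootters' closed-form result is easy to evaluate numerically. The following sketch (an illustration using NumPy, not code from the paper) computes the concurrence $C(\rho) = \max(0, \lambda_1 - \lambda_2 - \lambda_3 - \lambda_4)$ and the entanglement of formation $E(\rho) = h\big((1+\sqrt{1-C^2})/2\big)$, where $h$ is the binary entropy, for a Bell state and a product state.

```python
import numpy as np

def concurrence(rho):
    """Wootters' concurrence: the lambda_i are the decreasing square roots
    of the eigenvalues of rho * rho_tilde, with the spin-flipped state
    rho_tilde = (sigma_y (x) sigma_y) rho* (sigma_y (x) sigma_y)."""
    sy = np.array([[0.0, -1.0j], [1.0j, 0.0]])
    flip = np.kron(sy, sy)
    rho_tilde = flip @ rho.conj() @ flip
    lam = np.sort(np.sqrt(np.abs(np.linalg.eigvals(rho @ rho_tilde))))[::-1]
    return max(0.0, lam[0] - lam[1] - lam[2] - lam[3])

def entanglement_of_formation(rho):
    """E(rho) = h((1 + sqrt(1 - C^2)) / 2), h = binary entropy."""
    c = concurrence(rho)
    x = (1.0 + np.sqrt(max(0.0, 1.0 - c**2))) / 2.0
    if x <= 0.0 or x >= 1.0:
        return 0.0
    return float(-x * np.log2(x) - (1 - x) * np.log2(1 - x))

# Maximally entangled Bell state (|00> + |11>)/sqrt(2): C = E = 1.
bell = np.zeros(4, dtype=complex)
bell[0] = bell[3] = 1 / np.sqrt(2)
rho_bell = np.outer(bell, bell.conj())
print(concurrence(rho_bell), entanglement_of_formation(rho_bell))

# Product state |00>: C = E = 0.
prod = np.zeros(4, dtype=complex)
prod[0] = 1.0
rho_prod = np.outer(prod, prod.conj())
print(concurrence(rho_prod), entanglement_of_formation(rho_prod))
```

The same two functions apply unchanged to any mixed two-qubit density matrix, which is exactly the generality the paper establishes.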
https://www.aimsciences.org/article/doi/10.3934/dcds.2010.26.1441
# American Institute of Mathematical Sciences

October 2010, 26(4): 1441-1469. doi: 10.3934/dcds.2010.26.1441

## Asymptotic behaviour of the Darcy-Boussinesq system at large Darcy-Prandtl number

1 Florida State University, Department of Mathematics, Tallahassee, FL 32306, United States

Received November 2008. Revised May 2009. Published December 2009.

We study the asymptotic behavior of the Darcy-Boussinesq system at large Darcy-Prandtl number. We prove that the global attractors for this system converge to that of the infinite Darcy-Prandtl number model. We also show the convergence of statistical properties, including invariant measures.

Citation: Rana D. Parshad. Asymptotic behaviour of the Darcy-Boussinesq system at large Darcy-Prandtl number. Discrete & Continuous Dynamical Systems - A, 2010, 26 (4): 1441-1469. doi: 10.3934/dcds.2010.26.1441