https://plainmath.net/31030/explain-why-z_9-not-isomorphic-z_3-times-z_3-times-isomorphic-times-times
Explain why. (a) Z_9 is not isomorphic to Z_3 × Z_3. (b) Z_9 × Z_9 is not isomorphic to Z_9 × Z_3 × Z_3.

liingliing8

a) Notice that 1 has order 9 in Z_9, while every element of Z_3 × Z_3 has order 1 or 3. Indeed, let (a,b) ∈ Z_3 × Z_3 with (a,b) ≠ (0,0) (so that its order is not 1). Then 3(a,b) = (a,b) + (a,b) + (a,b) = (3a,3b) = (0,0), so (a,b) has order 3. Now suppose the two groups were isomorphic, and let φ : Z_9 → Z_3 × Z_3 be an isomorphism. Then φ(3) = φ(1+1+1) = φ(1) + φ(1) + φ(1) = 3φ(1) = (0,0), since φ(1) is an element of Z_3 × Z_3. This means that 3 lies in the kernel of φ, so φ is not injective (φ is injective if and only if ker φ = {0}, and 3 ≠ 0 in Z_9). This contradicts φ being an isomorphism. Thus Z_9 and Z_3 × Z_3 are not isomorphic.

b) Notice that a ∈ Z_9 has order 9 if and only if gcd(a,9) = 1. Thus 1, 2, 4, 5, 7, 8 have order 9 — six elements in all. Now, the order of (a,b) ∈ Z_9 × Z_9 is lcm(|a|,|b|), so (a,b) has order 9 if and only if at least one of a, b has order 9. The elements for which neither component has order 9 are those with a, b ∈ {0, 3, 6}, and there are 3 · 3 = 9 of them; the remaining 81 − 9 = 72 elements have order 9. On the other hand, (a,b,c) ∈ Z_9 × Z_3 × Z_3 has order 9 if and only if a has order 9, so there are 6 · 3 · 3 = 54 such elements. Now suppose that φ : Z_9 × Z_9 → Z_9 × Z_3 × Z_3 is an isomorphism. Then a ∈ Z_9 × Z_9 has order 9 if and only if φ(a) ∈ Z_9 × Z_3 × Z_3 has order 9 (isomorphisms preserve the order of every element). However, this is impossible, since Z_9 × Z_3 × Z_3 has fewer elements of order 9 than Z_9 × Z_9. Thus there exists no isomorphism Z_9 × Z_9 → Z_9 × Z_3 × Z_3.
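The element-order counts in this argument are small enough to check by brute force. The following Python sketch is my own (not part of the original answer); it enumerates the product groups directly:

```python
from math import gcd
from itertools import product

def order(element, moduli):
    """Order of an element in Z_{m1} x ... x Z_{mk}: the lcm of componentwise orders."""
    result = 1
    for a, m in zip(element, moduli):
        comp_order = m // gcd(a, m)  # order of a in Z_m
        result = result * comp_order // gcd(result, comp_order)  # lcm
    return result

def count_of_order(moduli, n):
    """Count elements of order n in the product group Z_{m1} x ... x Z_{mk}."""
    return sum(1 for e in product(*(range(m) for m in moduli))
               if order(e, moduli) == n)

print(count_of_order((9, 9), 9))     # elements of order 9 in Z_9 x Z_9
print(count_of_order((9, 3, 3), 9))  # elements of order 9 in Z_9 x Z_3 x Z_3
# Largest element order in Z_3 x Z_3 (no element of order 9 exists there):
print(max(order(e, (3, 3)) for e in product(range(3), range(3))))
```

Since the two counts differ, no isomorphism can exist, matching the argument above.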
2021-11-28 11:42:01
https://physics.stackexchange.com/questions/123299/absorption-cross-section-and-absorption-coefficient
# Absorption cross section and absorption coefficient

What is the absorption cross section, and how is it measured? How can it be converted to the absorption coefficient (measured in cm$^{-1}$)?

The cross section $\sigma$ is related to the absorption coefficient $\alpha$ by: $$\sigma = \frac{\alpha}{N}$$ where $N$ is the number density of the scattering medium, i.e. the number of particles per unit volume. This is described in more detail in the Wikipedia article on the absorption cross section. If you want $\alpha$ in units of cm$^{-1}$ you need to express $\sigma$ in cm$^2$ and the density as the number of particles per cubic cm.

• Do you mean that the number of particles is the number of molecules, or is it the number of atoms? For example, what is the cross section for CO$_2$? – jokersobak Jul 15 '14 at 4:32
• @jokersobak: if you use the number density of molecules of CO$_2$ you get the scattering cross section for a CO$_2$ molecule. If you use the number density of atoms you get the cross section for an atom. So it depends on what cross section you want to calculate. – John Rennie Jul 15 '14 at 5:01
• I don't have access to HITRAN, but if you look on HITEMP you'll see data on molecules like CO$_2$, CO, NO, OH, etc. You need to use the number density of these molecules, e.g. the number of CO$_2$ molecules per unit volume. Note that while the answer I've given is generally true, it wouldn't hurt to check the FAQs on the database just to make sure they don't use some other convention. – John Rennie Jul 15 '14 at 6:40
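Numerically the conversion $\alpha = \sigma N$ is a one-liner. The sketch below uses illustrative numbers of my own (not from the thread) — a cross section of $10^{-19}$ cm$^2$ and the ideal-gas number density at standard conditions, roughly $2.5 \times 10^{19}$ cm$^{-3}$:

```python
def absorption_coefficient(sigma_cm2, number_density_per_cm3):
    """alpha = sigma * N, with sigma in cm^2 and N in cm^-3, giving alpha in cm^-1."""
    return sigma_cm2 * number_density_per_cm3

# Illustrative values only: sigma = 1e-19 cm^2, N ~ 2.5e19 molecules/cm^3
alpha = absorption_coefficient(1e-19, 2.5e19)
print(alpha)  # about 2.5 cm^-1
```

The only real work is keeping the units consistent: cm$^2$ times cm$^{-3}$ gives cm$^{-1}$ directly.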
2018-08-16 04:55:06
https://www.ademcetinkaya.com/2023/02/ifm-infomedia-ltd.html
Outlook: INFOMEDIA LTD is assigned short-term Ba1 & long-term Ba1 estimated rating. Dominant Strategy : Wait until speculative trend diminishes Time series to forecast n: 07 Feb 2023 for (n+1 year) Methodology : Modular Neural Network (CNN Layer) ## Abstract INFOMEDIA LTD prediction model is evaluated with Modular Neural Network (CNN Layer) and Wilcoxon Rank-Sum Test1,2,3,4 and it is concluded that the IFM stock is predictable in the short/long term. According to price forecasts for (n+1 year) period, the dominant strategy among neural network is: Wait until speculative trend diminishes ## Key Points 1. Understanding Buy, Sell, and Hold Ratings 2. What are the most successful trading algorithms? 3. Which neural network is best for prediction? ## IFM Target Price Prediction Modeling Methodology We consider INFOMEDIA LTD Decision Process with Modular Neural Network (CNN Layer) where A is the set of discrete actions of IFM stock holders, F is the set of discrete states, P : S × F × S → R is the transition probability distribution, R : S × F → R is the reaction function, and γ ∈ [0, 1] is a move factor for expectation.1,2,3,4 F(Wilcoxon Rank-Sum Test)5,6,7= $\begin{array}{cccc}{p}_{a1}& {p}_{a2}& \dots & {p}_{1n}\\ & ⋮\\ {p}_{j1}& {p}_{j2}& \dots & {p}_{jn}\\ & ⋮\\ {p}_{k1}& {p}_{k2}& \dots & {p}_{kn}\\ & ⋮\\ {p}_{n1}& {p}_{n2}& \dots & {p}_{nn}\end{array}$ X R(Modular Neural Network (CNN Layer)) X S(n):→ (n+1 year) $R=\left(\begin{array}{ccc}1& 0& 0\\ 0& 1& 0\\ 0& 0& 1\end{array}\right)$ n:Time series to forecast p:Price signals of IFM stock j:Nash equilibria (Neural Network) k:Dominated move a:Best response for target price For further technical information as per how our model work we invite you to visit the article below: How do AC Investment Research machine learning (predictive) algorithms actually work? 
## IFM Stock Forecast (Buy or Sell) for (n+1 year) Sample Set: Neural Network Stock/Index: IFM INFOMEDIA LTD Time series to forecast n: 07 Feb 2023 for (n+1 year) According to price forecasts for (n+1 year) period, the dominant strategy among neural network is: Wait until speculative trend diminishes X axis: *Likelihood% (The higher the percentage value, the more likely the event will occur.) Y axis: *Potential Impact% (The higher the percentage value, the more likely the price will deviate.) Z axis (Grey to Black): *Technical Analysis% ## IFRS Reconciliation Adjustments for INFOMEDIA LTD 1. If, at the date of initial application, it is impracticable (as defined in IAS 8) for an entity to assess whether the fair value of a prepayment feature was insignificant in accordance with paragraph B4.1.12(c) on the basis of the facts and circumstances that existed at the initial recognition of the financial asset, an entity shall assess the contractual cash flow characteristics of that financial asset on the basis of the facts and circumstances that existed at the initial recognition of the financial asset without taking into account the exception for prepayment features in paragraph B4.1.12. (See also paragraph 42S of IFRS 7.) 2. To make that determination, an entity must assess whether it expects that the effects of changes in the liability's credit risk will be offset in profit or loss by a change in the fair value of another financial instrument measured at fair value through profit or loss. Such an expectation must be based on an economic relationship between the characteristics of the liability and the characteristics of the other financial instrument. 3. Leverage is a contractual cash flow characteristic of some financial assets. Leverage increases the variability of the contractual cash flows with the result that they do not have the economic characteristics of interest. 
Stand-alone option, forward and swap contracts are examples of financial assets that include such leverage. Thus, such contracts do not meet the condition in paragraphs 4.1.2(b) and 4.1.2A(b) and cannot be subsequently measured at amortised cost or fair value through other comprehensive income. 4. An entity may retain the right to a part of the interest payments on transferred assets as compensation for servicing those assets. The part of the interest payments that the entity would give up upon termination or transfer of the servicing contract is allocated to the servicing asset or servicing liability. The part of the interest payments that the entity would not give up is an interest-only strip receivable. For example, if the entity would not give up any interest upon termination or transfer of the servicing contract, the entire interest spread is an interest-only strip receivable. For the purposes of applying paragraph 3.2.13, the fair values of the servicing asset and interest-only strip receivable are used to allocate the carrying amount of the receivable between the part of the asset that is derecognised and the part that continues to be recognised. If there is no servicing fee specified or the fee to be received is not expected to compensate the entity adequately for performing the servicing, a liability for the servicing obligation is recognised at fair value. *International Financial Reporting Standards (IFRS) adjustment process involves reviewing the company's financial statements and identifying any differences between the company's current accounting practices and the requirements of the IFRS. If there are any such differences, neural network makes adjustments to financial statements to bring them into compliance with the IFRS. ## Conclusions INFOMEDIA LTD is assigned short-term Ba1 & long-term Ba1 estimated rating. 
INFOMEDIA LTD prediction model is evaluated with Modular Neural Network (CNN Layer) and Wilcoxon Rank-Sum Test1,2,3,4 and it is concluded that the IFM stock is predictable in the short/long term. According to price forecasts for (n+1 year) period, the dominant strategy among neural network is: Wait until speculative trend diminishes ### IFM INFOMEDIA LTD Financial Analysis* Rating Short-Term Long-Term Senior Outlook*Ba1Ba1 Income StatementB3Caa2 Balance SheetBaa2C Leverage RatiosBaa2C Cash FlowBa2B3 Rates of Return and ProfitabilityCBaa2 *Financial analysis is the process of evaluating a company's financial performance and position by neural network. It involves reviewing the company's financial statements, including the balance sheet, income statement, and cash flow statement, as well as other financial reports and documents. How does neural network examine financial reports and understand financial state of the company? ### Prediction Confidence Score Trust metric by Neural Network: 93 out of 100 with 717 signals. ## References 1. Jiang N, Li L. 2016. Doubly robust off-policy value evaluation for reinforcement learning. In Proceedings of the 33rd International Conference on Machine Learning, pp. 652–61. La Jolla, CA: Int. Mach. Learn. Soc. 2. Abadir, K. M., K. Hadri E. Tzavalis (1999), "The influence of VAR dimensions on estimator biases," Econometrica, 67, 163–181. 3. Clements, M. P. D. F. Hendry (1995), "Forecasting in cointegrated systems," Journal of Applied Econometrics, 10, 127–146. 4. Hastie T, Tibshirani R, Wainwright M. 2015. Statistical Learning with Sparsity: The Lasso and Generalizations. New York: CRC Press 5. J. Z. Leibo, V. Zambaldi, M. Lanctot, J. Marecki, and T. Graepel. Multi-agent Reinforcement Learning in Sequential Social Dilemmas. In Proceedings of the 16th International Conference on Autonomous Agents and Multiagent Systems (AAMAS 2017), Sao Paulo, Brazil, 2017 6. Brailsford, T.J. R.W. 
Faff (1996), "An evaluation of volatility forecasting techniques," Journal of Banking and Finance, 20, 419–438. 7. Burgess, D. F. (1975), "Duality theory and pitfalls in the specification of technologies," Journal of Econometrics, 3, 105–121.

Frequently Asked Questions

Q: What is the prediction methodology for IFM stock? A: IFM stock prediction methodology: We evaluate the prediction models Modular Neural Network (CNN Layer) and Wilcoxon Rank-Sum Test.

Q: Is IFM stock a buy or sell? A: The dominant strategy among neural network models is to Wait until speculative trend diminishes for IFM stock.

Q: Is INFOMEDIA LTD stock a good investment? A: The consensus rating for INFOMEDIA LTD is Wait until speculative trend diminishes, and it is assigned a short-term Ba1 & long-term Ba1 estimated rating.

Q: What is the consensus rating of IFM stock? A: The consensus rating for IFM is Wait until speculative trend diminishes.

Q: What is the prediction period for IFM stock? A: The prediction period for IFM is (n+1 year).
2023-03-22 04:15:46
http://mathoverflow.net/questions/12621/are-there-universe-indexed-spectra-over-simplicial-sets
# Are there universe-indexed spectra over simplicial sets?

In "Rings, Modules, and Algebras in Stable Homotopy Theory" Elmendorf, Kriz, Mandell and May introduce a notion of spectra indexed on a universe $\mathcal{U}$ as a collection of pointed topological spaces indexed by finite-dimensional subspaces of the universe. Has anyone seen a definition using pointed simplicial sets instead? What would be a simplicial model for the one-point compactification $S^V$ of a real finite-dimensional vector space? Just its singular complex?

- Yes to both interpretations of your question. It is not clear to me where you want to put pointed simplicial sets. One interpretation of your question is that you want to replace pointed topological spaces with pointed simplicial sets, giving the notion of a spectrum as a functor from subspaces of U to pointed sSet. This is a very common thing to do, and in some circles in homotopy theory is the standard definition of spectrum. Often "space" is interpreted as meaning simplicial set. This is because of the standard Quillen equivalence between Top and sSet. The other possible interpretation of your question is that you are trying to replace vector spaces with simplicial sets. This is also (essentially) something which has been done. It gives a model of spectra known as W-spaces. This is one of the standard diagram category models of spectra. See the following paper for a comparison: Model categories of diagram spectra, by M. A. Mandell, J. P. May, S. Schwede, and B. Shipley

- I think one issue the original poster referred to with "universe-indexed" spectra is that in, say, the EKMM definition one has structure maps $S^V \wedge E(W) \to E(V \oplus W)$ for any orthogonal pair of subspaces of $\mathcal U$. The lack of simplicial structure on $S^V$ obstructs writing down a direct analogue. – Tyler Lawson Jan 22 '10 at 13:35

Thanks! This is already partially helpful, but Tyler is right.
As far as I see EKMM work with topological spaces only and I wonder how to handle the $S^V$ in the definition of structure maps, if I try to use simplicial sets and no topological spaces. – user2146 Jan 22 '10 at 14:29 Ahh I see. If you really must work with EKMM, then you need to do a two stage comparison. (1) Go from EKMM spectra to orthogonal spectra: sequences of spaces indexed on inner product spaces together with structure maps, $S^V \wedge E(W) \to E( V \oplus W)$. There is no (overt) operad in the game though. Then you want to pass to the simplicial version of orthogonal spectra: simplicial sets indexed by inner product spaces together with maps like the above, but where $S^V$ is the singular simp. set, as you guessed. I believe both of these transitions are explained in the paper I cited. – Chris Schommer-Pries Jan 22 '10 at 19:33 Chris, that is not actually what we did. Personally, I find indexing simplicial sets by inner product spaces to be unnecessary and unhelpful, and I've not coauthored any paper with such a construction. One can easily compare symmetric spectra in simplicial sets with symmetric spectra in topological spaces, and one can easily compare symmetric spectra in topological spaces with orthogonal spectra. I see no point in a hybrid. As a matter of detail, in defining orthogonal spectra one can perfectly well work with all finite dimensional inner product spaces, without choosing a universe, whereas the universe is needed to define the linear isometries operad used in the EKMM construction. It is nice to keep the S^V as they are: that makes generalization to G-spectra effortless, where G is a compact Lie group, and that works for both orthogonal spectra of spaces and EKMM S-modules. I prefer eclecticism: the different models have different advantages. Here is an eclectic correct definition: a map of symmetric spectra (of spaces) is a weak equivalence iff its pushforward map of orthogonal spectra induces an isomorphism of homotopy groups. 
(Proven in the paper MMSS Chris cites.)

- PS: I really don't like "if you really must ...". There are serious advantages to working in a model category in which every object is fibrant, and, related to that, both for theory and computations it is very helpful to have a clean zeroth-space functor from spectra to highly structured spaces. -
2016-05-30 09:09:48
https://www.physicsforums.com/threads/kinematic-equations-question.727262/
Kinematic Equations Question

1. Dec 8, 2013 Striders

1. The problem statement, all variables and given/known data
How long will an arrow be in flight if it is shot at an angle of 25 degrees above the horizontal and hits a target 50.0 m away at the same elevation?
Known:
displacement in x-axis: 50.0 m
acceleration in x-axis: 0 m/s²
displacement in y-axis: 0 m
acceleration in y-axis: -9.81 m/s²
theta = 25 degrees

2. Relevant equations
This question is at the end of a chapter that deals with kinematic equations, so I'm quite certain that one of the kinematic equations must be employed to solve the problem. Those are:
vf = vi + a∆t
∆d = (vf + vi)/2 * ∆t
∆d = vi∆t + 1/2·a(∆t)²
∆d = vf∆t - 1/2·a(∆t)²
vf² = vi² + 2a∆d

3. The attempt at a solution
Found the y-component of the displacement vector by doing 50 m · tan 25° = dy, getting a value of 23.315 m. I then used the Pythagorean theorem to get the length of the diagonal displacement, 55.169 m (this is assuming there is no acceleration due to gravity, which is obviously not the case). That was a dead end, so I then basically tried using some of the kinematic equations, just as a guess-and-check to see if they'd work. One example attempt is:
∆dy = viy∆t + 1/2·ay(∆t)²
The y-displacement is 0 m, so I moved the first term on the right side of the equation over to the left side to get
-viy∆t = 1/2·ay(∆t)²
Divide each side by ∆t to get
-viy = 1/2·ay∆t
but I only know one of the three variables and cannot solve. There are a few other, similar attempts to solve the question that ended up not working. Some help on this problem would be great; I really appreciate everyone taking the time to read and (hopefully!) solve this =)
Last edited: Dec 8, 2013

2. Dec 8, 2013 BOYLANATOR

Hi Striders, welcome to PF. What do you mean by the y-component of the vertical displacement vector? Do you mean the y-component of the initial velocity of the arrow? Remember the y-displacement overall is 0.
You need to consider the horizontal and vertical components of the motion separately. If the arrow was fired with a speed of v, what would be the horizontal and vertical components of its velocity initially?

3. Dec 8, 2013 Striders

Boylanator - Thanks for the welcome! "What do you mean by the y-component of the vertical displacement vector? Do you mean the y-component of the initial velocity of the arrow?" Sorry for the ambiguity. I meant that I assumed the arrow did not experience any acceleration due to gravity, travelling in a line rather than following a parabolic trajectory. Under that assumption I figured out how high the arrow would have climbed when the range was equal to 50 m, given an initial launch angle of 25 degrees. To figure that out I drew a right-angle triangle where one side (which represented the range) was equal to 50 m, and the angle between the 50 m side and the hypotenuse is 25 degrees. The vertical height that the arrow would have reached was represented by the line opposite the 25 degree angle, and is given by 50 m · tan 25°. That is not the y-component of the initial velocity of the arrow, as I currently do not know what its initial velocity was.

"You need to consider the horizontal and vertical components of the motion separately. If the arrow was fired with a speed of v, what would be the horizontal and vertical components of its velocity initially?"
vx = vcos25
vy = vsin25

4. Dec 8, 2013 BOYLANATOR

All objects with mass experience gravity; it's why we can't fire arrows or throw balls into space (not without engines, anyway). Remember as well that it is hitting a target on the ground. It IS following a parabolic arc. The components are correct. Now that you understand what is going on, can you make any progress?

5. Dec 8, 2013 Striders

I don't think I can make any progress because I don't know what the arrow's initial velocity was.
If I had 3 other values that comprise the kinematic equations (final velocity, acceleration, time, or displacement) I would be able to find the arrow's initial velocity and then solve for those components. But I am unable to do that because in either axis I only know two variables. In the x-axis I know that displacement is equal to 50 m and that acceleration is 0 m/s². In the y-axis I know that the displacement equals 0 m and acceleration is -9.81 m/s². It seems that I'm missing some prerequisite information - namely, a third variable included in the kinematic equations - which is needed to solve the problem.

6. Dec 8, 2013 BOYLANATOR

It can be solved. We assumed an initial speed of v, and you found the components. Can you come up with a simple expression for the time taken to travel 50 m in the horizontal?

7. Dec 8, 2013 Striders

speed = distance / time
time = distance / speed
Assume an initial speed of v:
vx = vcos25
time = 50 m / vcos25
Now I just need to find v, so I can in turn get the answer to 50 m / vcos25. Presumably v can be found by using one of the 5 KE's? To summarize my thought process at the moment: I'm looking for v (initial velocity), I know acceleration is -9.81 m/s² and displacement is 50 m. I know time as well, but only when expressed in terms of v, so I can't use the time value in order to find the initial velocity.

8. Dec 8, 2013 BOYLANATOR

Yes, your thought process is good. As you noticed earlier, there is not enough information to calculate the time or velocity from the horizontal component alone, nor the vertical component alone. You will need to find a link that combines the two; can you think of one?

9. Dec 8, 2013 Striders

A link that combines the horizontal and vertical components. I'm just typing out my thoughts, so my apologies for any irrelevant info that may come up: I know that v = √[(vcos25)² + (vsin25)²].
While that does take into account both the horizontal and vertical components, it doesn't seem very helpful because I don't have any concrete numbers to put into that equation. I already know the acceleration and displacement, so I must find one of the three remaining variables: initial velocity, time, or final velocity. Since the arrow starts and ends at the same height, a position-time graph of the arrow would have the same tangent slope magnitude at the beginning (when the arrow is fired) and the end (just before the arrow hits the target). This means that the initial and final velocity are equal in magnitude but opposite in direction. Keeping that in mind I can say that vi = vf, and just change the signs on them as needed once I figure out those values. I now have:
a = -9.81 m/s²
∆d = 50 m
vi = vf = ?
t = vicos25
Plugging those values into the kinematic equation vf² = vi² + 2a∆d:
vf² = vf² + 2a∆d
vf² - vf² = 2a∆d
0 = 2a∆d
0 = -9.81 m/s² * 50 m
0 = -490.5 m²/s²
That, um, that doesn't seem quite right.
Last edited: Dec 8, 2013

10. Dec 8, 2013 Striders

Oh wait, one second, I may be on to something. In the equations in my above post, it was incorrect to say vf² = vi². They have the same magnitude, but opposite directions. So from the equation:
vf² = vi² + 2a∆d
vf² - vi² = 2a∆d
vf² - vi² = vf² + vf²
2vf² = 2a∆d
2vf² = -490.5 m²/s²
vf² = -245.25 m²/s²
This is probably not great form, but the above line will give me a negative number under the square root sign once I root both sides. To avoid that I'm going to retroactively state that forwards and downwards will be represented by positive numbers, allowing gravity's acceleration to be written as 9.81 m/s². I now have:
vf² = 245.25 m²/s²
vf = 15.66045976 m/s
So I have displacement, acceleration, and final velocity. I could find time with the equation ∆d = vf∆t - 1/2·a(∆t)², but I don't know how to isolate for t when it's in both terms and neither term is equal to 0.
So instead I'll find vi and use that value to find the horizontal component of velocity, which is constant over time. 50 m / vcos25 and I should be good to go.
Last edited: Dec 8, 2013

11. Dec 8, 2013 BOYLANATOR

Vf isn't the same as Vi. They have different directions. Looking at the vertical velocities, Vf = -Vi. Correct this in your working and come up with an expression for the time.

12. Dec 8, 2013 Striders

I tried solving it my way and got the answer wrong. So back to the drawing board.
"Vf = -Vi. Correct this in your working and come up with an expression for the time."
I need a kinematic equation with both velocities and time.
∆d = ([vf + vi]/2) * ∆t
∆d = ([vf + (-vf)]/2) * ∆t
∆d = (vf - vf) * ∆t
∆d = 0 * ∆t
∆d/0 = ∆t
That can't be right either.

13. Dec 8, 2013 BOYLANATOR

This is the equation for motion with no acceleration; we have acceleration. So we need:
vf = vi + at
right? Remember that we're only considering the vertical motion here, so use the vertical velocity component you stated earlier.

14. Dec 8, 2013 Striders

vf = vi + at
vf - vi = at
vf - (-vf) = at
2vf = at
2vf/a = t
2(15.66045976 m/s)/9.81 m/s² = t
31.32091952 m/s * s²/9.81 m = t
3.192754283 s = t
That answer does make sense (right units, feasible time scale) but it disagrees with the answer in my textbook, which is 2.18 s. I double-checked to make sure I was reading the answer for the right question, and it's definitely (and disconcertingly!) listed as 2.18 s in my textbook. Has the textbook made an error? Or is there some mistake lurking in the text above?

15. Dec 8, 2013 BOYLANATOR

The final velocity you calculated is incorrect because delta d in the vertical direction is 0. From this step:
2vf/a = t
Swap vf for -vi and use the expression for vi in terms of v that you correctly stated at the beginning.

16. Dec 8, 2013 Striders

This elicits the answer that t = 3.19 s, which matches what I got in my post above, but differs from the textbook's listed answer.

17.
Dec 8, 2013 BOYLANATOR

Are you remembering these?
vx = vcos25
vy = vsin25
So what is t in terms of v? (not vy or vf or vi) Don't worry, you're on your way to 2.18 s.

18. Dec 8, 2013 Striders

vi = 15.66045976 m/s
vx = 15.66045976 m/s * cos25
vx = 14.193197 m/s
"So what is t in terms of v? (not vy or vf or vi)"
I'm a tad confused here because we said earlier that v is equal to vi. From post #6, "We assumed an initial speed of v, and you found the components." So assuming v = vi:
t = 50 m / vcos25
Since vcos25 = 14.193197 m/s,
t = 50 m / 14.193197 m/s = 3.5228 s
Edited to add: if by v you mean the horizontal velocity (vx), then t = 50 m / v

19. Dec 8, 2013 BOYLANATOR

The values for velocity you calculated earlier are incorrect. Thus far we have no numerical value for any velocity. At the beginning you derived a simple expression for the time taken to travel to the target in the horizontal frame. Since then you have been working with the vertical frame only. You got this far correctly in the vertical frame:
2vf/a = t
Now you should get rid of vf (vertical final velocity) and replace it with the more general v (initial speed of arrow). We want to do this because the horizontal time expression has v as an unknown, and we want a similar expression for the vertical time. Do you follow?

20. Dec 8, 2013 Striders

The expression for the time taken to travel to the target in the horizontal frame is
50 / vcos25
As for 2vf/a = t: the vertical final velocity = vsin25, so
2vsin25/a = t
I don't really follow, though; I'm not sure why my previously calculated velocities are incorrect. Nor do I understand why the equation 2vf/a = t doesn't suffice. Isn't the time the arrow takes the same in the vertical frame and the horizontal frame? Also, this method for solving the problem seems very convoluted and long. I acknowledge that's because of my tangents, but nonetheless I think it is not feasible to repeat this method during a test/exam when I don't have 2+ hours and somebody else to help me.
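The whole approach being developed in this thread condenses to equating the horizontal flight time 50/(v cos θ) with the vertical flight time 2v sin θ/g, which gives v² = gd/sin 2θ. A quick numerical check of that route (a sketch of my own, not a post from the thread):

```python
import math

g = 9.81                     # m/s^2
d = 50.0                     # horizontal range, m
theta = math.radians(25)     # launch angle

# Horizontal: d = v*cos(theta)*t        ->  t = d / (v*cos(theta))
# Vertical:   0 = v*sin(theta)*t - g*t^2/2  ->  t = 2*v*sin(theta)/g
# Equating the two expressions for t:  v^2 = g*d / sin(2*theta)
v = math.sqrt(g * d / math.sin(2 * theta))
t = d / (v * math.cos(theta))

print(f"launch speed v = {v:.2f} m/s")    # about 25.30 m/s
print(f"time of flight t = {t:.2f} s")    # about 2.18 s, matching the textbook answer
```

This confirms the textbook's 2.18 s and shows why the thread's intermediate value of 15.66 m/s was spurious: the vertical displacement is zero, so vf cannot be extracted from vf² = vi² + 2a∆d alone.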
https://ioover.net/learn/solution-log.2.md/
## Climbing Stairs

You are climbing a staircase. It takes n steps to reach the top. Each time you can climb either 1 or 2 steps. In how many distinct ways can you climb to the top? (This also appears in *Programming Pearls*, section 8.4.)
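The count obeys the Fibonacci recurrence: the ways to reach step n are the ways to reach step n−1 (then climb 1) plus the ways to reach step n−2 (then climb 2). A minimal iterative sketch:

```python
def climb_stairs(n):
    """Number of distinct ways to climb n steps taking 1 or 2 at a time."""
    a, b = 1, 1  # ways to stand on step 0 and step 1
    for _ in range(n - 1):
        a, b = b, a + b  # ways(k) = ways(k-1) + ways(k-2)
    return b

print(climb_stairs(5))  # → 8
```

This runs in O(n) time and O(1) space; the closed form via matrix exponentiation would give O(log n), but the linear scan is usually all you need.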
http://www.mathemafrica.org/?p=11427
We are about to show that you can get incredible structure from the simplest of algorithms when we use complex numbers. The equation we are going to look at is an iterative equation: $z_{i+1}=z_i^2+C$ with $z_0=0$. You simply get the next $z_i$ from plugging in the previous one, squaring it and adding a number $C$. I'm going to give you a value for $C$, then you're going to iterate this equation and see what happens. For instance, if I give you the number $C=3$: $\left( \begin{array}{ccc} i & z_i^2+C & \left| z_{i+1}\right| \\ 1 & 0^2+3 & 3 \\ 2 & 3^2+3 & 12 \\ 3 & 12^2+3 & 147 \\ 4 & 147^2+3 & 21612 \\ \end{array} \right)$ You can see that this number is just going to keep on increasing without end if we keep applying the algorithm. How about a smaller number, let's say $C=0.1$: $\left( \begin{array}{ccc} i & z_i^2+C & \left| z_{i+1}\right| \\ 1 & 0^2+0.1 & 0.1 \\ 2 & 0.1^2+0.1 & 0.11 \\ 3 & 0.11^2+0.1 & 0.1121 \\ 4 & 0.1121^2+0.1 & 0.112566 \\ \end{array} \right)$ It looks like this is tending to some value. In fact it has come to a fixed point where $z=z^2+0.1$. There are actually two solutions to this equation, but one of them is 0.112702, which is the value we are tending towards. I can clearly perform this procedure with $C$ being any number, and I can ask whether the $z_i$ diverge as $i\rightarrow \infty$, or whether they always stay small. In fact I can be a bit more strict: I can ask whether the magnitude of the $z_i$ always stays less than $2$ or whether it becomes greater than $2$. Let's ask this for all numbers which are real. It turns out that this is a pretty easy problem. If $-2\leq C\leq 0.25$ then the magnitude of $z$ always stays below 2, and if $C$ is outside this range, then it blows up. As an example, let's look at the value of the $z_i$ as we iterate for numbers just below and just above $C=0.25$. This is shown in the figure below.

Values of $|z_i|$ for $C=0.249$ (blue points) and $C=0.251$ (red points).
We see that for the blue points the value of $|z_i|$ converges to 0.5, whereas for $C=0.251$ it diverges. It is true that for all points between -2 and 0.25, $z_i$ converges, and for all points outside this set of numbers, it diverges. But we've only looked at values of $C$ on the real line, and frankly the results were not very interesting. Why should it be any more interesting if we let $C$ be any complex number? It turns out that the answer is both surprising and beautiful. We'll work out the set of complex numbers for which $z_i$ always stays below 2 using a technique of picking numbers in the complex plane at random and testing them – i.e. plugging them into the iterative algorithm and seeing what happens to $z_i$. We're only going to run the iterative algorithm a maximum number of times, but if during that time the magnitude of $z$ goes above 2 we will quit the algorithm and say that the value of $C$ that we used is not in the set that we are interested in. In fact, to get the set accurately we would have to be more sophisticated than this and run the algorithm in a more intelligent way, so that we're not including points in the set which should not be there. For instance, in the figure above, had we only run the algorithm 50 times, we would have thought that 0.251 was in the set, whereas we see that eventually the magnitude gets above 2, and so it's not in the set. However, for the purposes of this explanation, running the algorithm 100 times will be sufficient to see the complexity emerge. Let's try with a random complex number. Let's take $C=\frac{1+i}{4}$. This, together with $C=\frac{1+i}{2.5}$, is shown in the figure here:

Values of $|z_i|$ for $C=\frac{1+i}{4}$ (blue points) and $C=\frac{1+i}{2.5}$ (red points). For the blue points the values fluctuate but get closer and closer to a fixed value which is less than 2, whereas for the red points, they quickly diverge.

We can see immediately that the behaviour looks somewhat different.
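The iteration driving all of these tables and figures is only a few lines of code. A minimal Python sketch, reproducing the real-valued examples from the start:

```python
def iterate(C, n):
    """Return the first n iterates of z -> z**2 + C starting from z = 0."""
    z, seq = 0, []
    for _ in range(n):
        z = z * z + C
        seq.append(z)
    return seq

print(iterate(3, 4))         # → [3, 12, 147, 21612]  (blows up)
print(iterate(0.1, 30)[-1])  # ≈ 0.112702, the fixed point of z = z**2 + 0.1
```

The same function works unchanged for complex $C$, e.g. `iterate((1 + 1j) / 4, 100)`, since Python's arithmetic operators handle `complex` values natively.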
For real numbers the iterates either converged or diverged fairly obviously, but for complex numbers they seem to oscillate before they converge or diverge, and it's not altogether clear what will happen if you look at the values at any particular $i$. Let's now take random points in the complex plane (values of $C$): if the value of $|z_i|$ converges, we'll put a blue point there, and if it diverges (within 100 iterations of the algorithm) we'll put a red point there. What will this look like? This is shown in the figure here for 100,000 random points.

Graph of those values of $C$ (in the complex plane) for which the iterative values of $|z_i|$ diverge within 100 iterations (in red) and those for which they don't (in blue). This is a sample of around 100,000 points.

You can see that it looks rather like a random splotch of paint. In fact it turns out that this really isn't a very detailed view of the set at all (the word 'set' in 'Mandelbrot set' is there because we are looking for the set of numbers with a particular behaviour; in this case the blue points are in the set, and the red points are not). It turns out that there is an infinite amount of detail to be seen. What does this mean? Well, let's say that we take a small patch of the previous figure and try to look at it in more detail. Let's take the region of complex numbers $C$ with real parts between 0.2 and 0.5 and imaginary parts between 0.25 and 0.8. We sample this smaller region with 100,000 points and find the image here:

We zoom in on a small region of this graph to try to get down to the lowest level of detail. We've taken a small part of the previous figure and sampled 100,000 points in that small region.

But it looks like there might be more structure at even smaller levels. Let's zoom in on an even smaller region. In the next figure we look at a small region of the previous one.
In particular we look at complex numbers $C$ with real parts between 0.3 and 0.325 and imaginary parts between 0.55 and 0.62.

We now take a small region of the previous figure and zoom in on that, again sampling a very small area with 100,000 random points.

Again, we zoom in on a tiny region in this figure, again sampling 100,000 points, but this time in the tiny region of $C$ with real part between 0.315 and 0.316 and imaginary part between 0.576 and 0.578. This is shown here:

Sampling a small region in the previous figure with another 100,000 points.

We find that we can zoom in further and further and there will always be more detail at every level. This goes on for an infinite number of zooms! We could keep doing this forever and the images would never become smooth; we would never stop getting more detail. This is a fractal: you can zoom in infinitely far and keep seeing more structure. Think of this like a coastline. If you take a picture of a country from above, you will get a certain detail of the coastline, but as you zoom in more and more you will start to see the structure of individual beaches, then further in you will see the structure of individual curves in the beach, then structure in the rocks on the beach, then structure in the sand, then in smaller and smaller particles. However, you can only zoom in so far, because there seems to be a fundamental limit to the scale of the universe. A fractal is different: in the mathematical world you can zoom in more and more and always get more detail...always! This is a pretty amazing fact for such a simple equation, and it's because of the wonderful behaviour of complex numbers that we get this complexity! In fact, using a relatively small number of sample points (only 100,000) doesn't give us a very good picture of the intricacy of the image.
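The sampling procedure described above — pick random values of $C$, iterate at most 100 times, and classify by whether $|z|$ ever exceeds 2 — can be sketched as follows (the sampling window covering the set is an assumption; any region of the plane works):

```python
import random

def in_mandelbrot(C, max_iter=100):
    """True if |z| stays <= 2 for max_iter iterations of z -> z**2 + C."""
    z = 0
    for _ in range(max_iter):
        z = z * z + C
        if abs(z) > 2:
            return False  # diverged: C is not in the set
    return True

# Classify random points; 'inside' would be plotted blue, the rest red.
random.seed(1)
samples = [complex(random.uniform(-2, 1), random.uniform(-1.5, 1.5))
           for _ in range(10_000)]
inside = [C for C in samples if in_mandelbrot(C)]
```

Plotting `inside` against the rejected points (e.g. with matplotlib's `scatter`) reproduces the splotch-of-paint figures; raising `max_iter` sharpens the boundary, exactly as discussed for $C=0.251$.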
In the figure below there is an image taken from Wikipedia which shows the Mandelbrot set in much better detail.

A much better rendering of the Mandelbrot set, though it's not as obvious here how it is found as with our discrete sampling technique. Anything in black is in the set, anything in white is not in the set. Each of the nodes can be zoomed into infinitely far and you will never stop seeing new structure. https://en.wikipedia.org/wiki/Mandelbrot_set#/media/File:Mandelset_hires.png

What we have done here is very crude; we have only zoomed in a tiny bit (in fact, what does tiny mean when we can zoom in infinitely far, in theory – isn't anything tiny compared to that?). Here is a video of a Mandelbrot set zoom over 200 orders of magnitude. Note that the colours are related to those numbers which are not in the set: they can be colour-coded by the number of iterations it takes for the algorithm to give you a $z_i$ whose magnitude is greater than 2. What is truly amazing here is that from a truly simple algorithm we are able to create infinite complexity. We have seen a number of times now how the complex numbers help us build a bridge between seemingly unconnected areas, and now we see how they link us to the world of chaos, fractals, and complexity. Sadly in the first year we are only able to skim the surface of what complex numbers can really bring us, but for those who are going to go on and study more mathematics through university, you will see a whole world of possibilities open up to you...

How clear is this post?
https://eng.libretexts.org/Bookshelves/Computer_Science/Book%3A_Delftse_Foundations_of_Computation/4%3A_Sets%2C_Functions%2C_and_Relations/4.4_Functions/4.4.D_First-class_objects
# 4.4.D First-class objects

One difficulty that people sometimes have with mathematics is its generality. A set is a collection of entities, but an 'entity' can be anything at all, including other sets. Once we have defined ordered pairs, we can use ordered pairs as elements of sets. We could also make ordered pairs of sets. Now that we have defined functions, every function is itself an entity. This means that we can have sets that contain functions. We can even have a function whose domain and range are sets of functions. Similarly, the domain or range of a function might be a set of sets, or a set of ordered pairs.

Computer scientists have a good name for this. They would say that sets, ordered pairs, and functions are first-class objects or first-class citizens. Once a set, ordered pair, or function has been defined, it can be used just like any other entity. If they were not first-class objects, there could be restrictions on the way they can be used. For example, it might not be possible to use functions as members of sets. (This would make them 'second class.') One way that programming languages differ is by what they allow as first-class objects. For example, Java added a 'lambda syntax' to help with writing 'closures' in version 8.

For example, suppose that A, B, and C are sets. Then since A × B is a set, we might have a function f : A × B → C. If (a, b) ∈ A × B, then the value of f at (a, b) would be denoted f((a, b)). In practice, though, one set of parentheses is usually dropped, and the value of f at (a, b) is denoted f(a, b). As a particular example, we might define a function p : N × N → N with the formula p(n, m) = nm + 1. Similarly, we might define a function q : N × N × N → N × N by q(n, m, k) = (nm − k, nk − n).

Suppose that A and B are sets. There are, in general, many functions that map A to B. We can gather all those functions into a set. This set, whose elements are all the functions from $$A$$ to $$B,$$ is denoted $$B^{A} .$$ (We'll see later why this notation is reasonable.)
Using this notation, saying $$f : A \rightarrow B$$ is exactly the same as saying $$f \in B^{A} .$$ Both of these notations assert that f is a function from A to B. Of course, we can also form an unlimited number of other sets, such as the power set $$\mathcal{P}\left(B^{A}\right),$$ the cross product $$B^{A} \times A,$$ or the set $$A^{A \times A}$$, which contains all the functions from the set A × A to the set A. And of course, any of these sets can be the domain or range of a function. An example of this is the function $$\varepsilon : B^{A} \times A \rightarrow B$$ defined by the formula $$\varepsilon(f, a)=f(a) .$$ Let's see if we can make sense of this notation. Since the domain of $$\varepsilon$$ is $$B^{A} \times A,$$ an element in the domain is an ordered pair in which the first coordinate is a function from A to B and the second coordinate is an element of A. Thus, $$\varepsilon(f, a)$$ is defined for a function f : A → B and an element a ∈ A. Given such an f and a, the notation f(a) specifies an element of B, so the definition of $$\varepsilon(f, a)$$ as f(a) makes sense. The function $$\varepsilon$$ is called the 'evaluation function' since it captures the idea of evaluating a function at an element of its domain.

Exercises

1. Let A = {1, 2, 3, 4} and let B = {a, b, c}. Find the sets A × B and B × A.

2. Let A be the set {a, b, c, d}. Let f be the function from A to A given by the set of ordered pairs {(a, b), (b, b), (c, a), (d, c)}, and let g be the function given by the set of ordered pairs {(a, b), (b, c), (c, d), (d, d)}. Find the set of ordered pairs for the composition g ◦ f.

3. Let A = {a, b, c} and let B = {0, 1}. Find all possible functions from A to B. Give each function as a set of ordered pairs. (Hint: Every such function corresponds to one of the subsets of A.)

4. Consider the functions from Z to Z which are defined by the following formulas. Decide whether each function is onto and whether it is one-to-one; prove your answers.
a) $$f(n) = 2n$$        b) $$g(n) = n + 1$$        c) $$h(n)=n^{2}+n+1$$        d) $$s(n)=\left\{\begin{array}{ll}{n / 2,} & {\text { if } n \text { is even }} \\ {(n+1) / 2,} & {\text { if } n \text { is odd }}\end{array}\right.$$

5. Prove that composition of functions is an associative operation. That is, prove that for functions f : A → B, g : B → C, and h : C → D, the compositions (h ◦ g) ◦ f and h ◦ (g ◦ f) are equal.

6. Suppose that f : A → B and g : B → C are functions and that g ◦ f is one-to-one.
a) Prove that f is one-to-one. (Hint: use a proof by contradiction.)
b) Find a specific example that shows that g is not necessarily one-to-one.

7. Suppose that f : A → B and g : B → C, and suppose that the composition g ◦ f is an onto function.
a) Prove that g is an onto function.
b) Find a specific example that shows that f is not necessarily onto.
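The ideas in this section are easy to make concrete in a language where functions are first-class objects. A small Python sketch, using the sets A = {a, b, c} and B = {0, 1} from the exercises and representing each function in B^A as a dict of ordered pairs:

```python
from itertools import product

A = ['a', 'b', 'c']
B = [0, 1]

# Enumerate B^A: every function A -> B, as a dict {input: output}.
functions = [dict(zip(A, outputs)) for outputs in product(B, repeat=len(A))]
print(len(functions))  # → 8, since |B^A| = |B|**|A| = 2**3

# The evaluation function E : B^A x A -> B, with E(f, a) = f(a).
def evaluate(f, a):
    return f[a]
```

The enumeration also explains why the notation $$B^{A}$$ is reasonable: the set of functions from A to B has exactly |B|^|A| elements. And since `functions` is itself an ordinary list, functions really are first-class here — they sit inside a data structure and get passed to `evaluate` like any other value.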
http://images.planetmath.org/conjugatepoints
conjugate points

Are the poles the "conjugate points" from which the y-axis of an ellipse/spheroid is considered the conjugate axis/diameter? If so, then how can "ConjugateDiametersOfEllipse" be? Aren't those actually "oblique diameters"? In "A treatise on analytical geometry", on pg. 199, conjugate and transverse axes are noted regarding oblate and prolate spheroids. But, back on pg. 107, the above concept of "ConjugateDiametersOfEllipse" appears to be discussed. How can that be? Are these two different meanings of "conjugate diameter"?

~Kaimbridge~
https://discuss.codechef.com/questions/136440/match2-editorial
MATCH2 - EDITORIAL

Setter: Teja Vardhan Reddy
Tester: Hasan
Editorialist: Taranpreet Singh

Difficulty: Medium

PREREQUISITES:

Time complexity analysis, set data structure and non-trivial implementation skills. Merge sort tree, or binary search and value compression.

PROBLEM:

Given two arrays $A$ and $B$, define the similarity of the two arrays as the number of positions $i$ such that $A[i] == B[i]$. Perform $Q$ range updates and print the similarity of the arrays after each update. A range update assigns every element of $A$ in range $[L, R]$ the value $C$. Queries are online, i.e. we have to answer queries in the same order as the input.

SUPER QUICK EXPLANATION:

• Represent ranges of consecutive positions having the same value and store them in a balanced binary search tree. Calculate the initial similarity naively, and for every update, update the intervals for the new values, updating the similarity simultaneously.
• Utilize the fact that $B$ doesn't change: preprocess to answer queries of the type "count the number of positions in range $[L,R]$ such that $B[i] == C$".
• This solution works fast enough, as proved in the time complexity analysis part at the end.

EXPLANATION:

This problem has a relatively simple solution compared with the usual second-hardest problem, but proving that the solution runs in time is not trivial, earning this problem the spot of second-hardest problem. The time complexity of the solution will be analysed at the end, so please have patience when I make any assumption. :P

The solution for the first subtask is simple: perform updates naively and recalculate the similarity every time, giving $O(N^2)$ time complexity — too slow to pass the final subtask.

First of all, notice that if we have the similarity already calculated before the $i$-th query $[L,R]$, the similar positions change only within the range of the update, since at all other positions both arrays remain unchanged. This way, if we can calculate the change in similarity caused by the current update within the update range, we can calculate the answer.
Let us represent the array $A$ in a different format. Create tuples of the form $(l,r,c)$ representing that the elements of $A$ in range $[l, r]$ have value $c$, and store them in a set, ordered by the left end of the range (the right end will also work, with changes in implementation). Now, from this point, don't worry about complexity until I bring it up myself. :D

Talking about updates first: suppose the update range is $[L, R]$. Give a thought to how $A$ will look after the update. The ranges which partially coincide with the update range would be modified, the ranges lying completely inside the update range would be deleted, and one new range would be inserted. Got this much? Right. If not, re-read, because the next paragraph won't make sense without this.

Now, updating the answer. Suppose that before the update, we had $x$ positions in the update range which were similar to array $B$. The current answer gets reduced by $x$. Also suppose that after the update, $y$ positions in the update range are similar. The answer increases by $y$. So, after every update, the answer becomes $Ans-x+y$.

Let's calculate $y$ first. We know that after the update, all elements of the update range in array $A$ will be $c$. So, $y$ is the same as the number of positions in array $B$ within the update range which have value $c$. This problem is well known, and can be solved using binary search or a merge sort tree (though slower). This sub-problem will be explained soon; suppose for now that we are using a merge sort tree.

So, now that we have calculated $y$, all that remains is to calculate $x$. Let us represent $x$ as a sum over the ranges completely and partially covered by the update range. For every range $(l,r,c)$ completely inside the update range, we know all its elements have value $c$. So, we ask the merge sort tree for the number of positions in range $(l,r)$ in array $B$ having value $c$ — the same sub-problem as above. But we need to handle the boundary ranges carefully: make a new range for each border. Suppose we have array $A$ initially 1 1 2 1 2.
So its representation is the set $\{(0,1,1),(2,2,2),(3,3,1),(4,4,2)\}$ (zero-based indexing). Say the update is $[1,3]$ to 4. The final array after the update should look like $\{(0,0,1),(1,3,4),(4,4,2)\}$. Another case to be handled carefully is when a single range fully covers the update range itself. This is all implementation stuff, so it will depend on how you handle the cases. Refer to the setter's solution for implementation.

But we still need to solve the sub-problem: given a fixed array, count the number of elements in range $l$ to $r$ with value $c$. It can be solved using two methods, one faster, the other slower but simpler: binary search, or a merge sort tree.

The binary search idea involves compressing the values in the array first: make a sorted vector for each value, storing every position of that value in $B$. For every query $(l,r,c)$, if there doesn't exist a vector for value $c$, there isn't any position in $B$ which has $c$ as its value. Otherwise, the number of elements in range $l$ to $r$ is the same as the number of positions from the start up to $r$, less the number of positions from the start up to $l-1$. This can be easily done using binary search, and gives $O(\log N)$ per query (or even faster — exactly $O(\log(\text{number of elements in the vector}))$).

Using a merge sort tree, we maintain a map on every segment ($N \log N$ map entries in total). It is very simple to query, just like a segment tree, as explained here and here too.

So, we have solved the problem. The implementation is messy, but worth a try. I know you are all shouting at me that this solution would be inefficient — rest assured, this time I have a proof to proudly claim that it is efficient enough. Let's discuss it.

TIME COMPLEXITY ANALYSIS:

First of all, the set, at any point, cannot have more than $N$ ranges. That happens when no two consecutive values are the same. Second, let us find the maximum number of insertions possible. We see that each update can alter at most three ranges: one for $l$ to $r$, and two boundary ranges.
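The binary search sub-problem ("how many positions $i \in [l, r]$ have $B[i] == c$?") can be sketched in Python — per-value position lists play the role of the sorted vectors described above:

```python
from bisect import bisect_left, bisect_right
from collections import defaultdict

def preprocess(B):
    """Map each value to the sorted list of its positions in B."""
    pos = defaultdict(list)
    for i, b in enumerate(B):
        pos[b].append(i)  # appended in increasing i, so already sorted
    return pos

def count_eq(pos, l, r, c):
    """Number of positions i in [l, r] with B[i] == c."""
    p = pos.get(c)
    if not p:
        return 0  # value c never occurs in B
    return bisect_right(p, r) - bisect_left(p, l)

B = [1, 1, 2, 1, 2]       # the example array from above
pos = preprocess(B)
print(count_eq(pos, 1, 3, 1))  # → 2 (positions 1 and 3)
```

Each query costs two binary searches over one value's position list, matching the $O(\log N)$ bound claimed in the editorial.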
This way, we can insert at most 3 ranges per update, resulting in at most $N+3Q$ insertions, which is linear in $N+Q$. Finally, the total number of deletions can never exceed the initial ranges plus the inserted ranges, so the deletions are also linear in count. Every insertion and deletion takes only $O(\log N)$ time (binary search on $B$), so this solution has overall complexity $O((N+Q)\log N)$. Preprocessing on $B$ takes only $O(N \log N)$ time, and thus is dominated by the queries.

AUTHOR'S AND TESTER'S SOLUTIONS:

While the above links are not working, feel free to refer to the solution here.

EDIT: Found an interesting problem here. Feel free to share your approach if it differs. Suggestions are always welcome. :)

This question is marked "community wiki".

answered 01 Oct '18, 08:20 — inseder:
You can solve it with SQRT decomposition — https://www.codechef.com/viewsolution/20416795. The time complexity is $N + Q\sqrt{N}$. Also, you can play with the size of the decomposition to obtain better times.

Yes, there were many square root decomposition submissions which passed. (01 Oct '18, 12:34)

@inseder Can you explain your SQRT decomposition solution? (01 Oct '18, 23:23)

@tihorsharma123 here you go:
1) split the array into sqrt(N) blocks (smaller A and B)
2) for a block, compute the similarity of A and B & the map of frequencies for B
3) updates can affect 3 types of areas:
3.1) a part of the block containing left
3.2) some full blocks
3.3) a part of the block containing right
4) for partial block queries, just update A and compute the similarity at the same time
5) for a full block, just mark that block as having only value C and compute its new similarity as the count of C in the B of that block
6) a partial update on a block having only C -> need to fill in the A array
(04 Oct '18, 04:58) — inseder

I too have used a segment tree with lazy propagation and binary search (in a sorted array of pairs of (B[i], i)) for similarity. The complexity is O(N(log N)^2).
Algorithm: maintain a segment tree for similarity. In each query:
1. Propagate changes on the paths from the root to L and from the root to R, away from the root. O((log N)^2)
2. Calculate the similarity in the sections between L and R. O((log N)^2)
3. Propagate the changes we just made back towards the root. O(log N)

My code passes all given tests except one. I have used random tests but could not find any test on which it fails. Could someone find the bug in the code, a flaw in the algorithm, or a test on which it fails? Link to Code. Thanks

I have used the segment tree and lazy propagation approach but I am getting TLE. Can you please tell me what improvement can be done, as it also has the same time complexity? I think clearing the map every time is causing the TLE — how can I improve that? (01 Oct '18, 13:25)

@siddhant22 bro, I am also facing the same problem with a similar kind of logic — getting a runtime error in one case. Is your issue resolved? (02 Oct '18, 15:06)

Not yet... I got a very different result after resubmission (https://www.codechef.com/viewsolution/20436261) with minor changes. The one test that was not passed earlier is passed now, but other cases now have a wrong answer. So it seems like my code has some undefined behaviour (maybe something like a segmentation fault that is not detected). Also, another resubmission (https://www.codechef.com/viewsolution/20436365) with additional conditions 'and'ed into the assert converts the SIGABRT in my original submission to a WA, which is also weird. (03 Oct '18, 01:07)

@boost_insane:
1) Both build and update are recursive. You can try making them with loops instead, to save function call overhead.
2) (minor effect) change line 80 to: tree[right] = tree[pos] - tree[left];
(03 Oct '18, 06:35)

@siddhant22 There is surely some problem with the input file of this problem — changing the input format results in different outputs. If you get the answer correct, do comment your correct solution link.
(03 Oct '18, 13:06)

@siddhant22 @boost_insane I think the issue is resolved now — my same solution now got AC. Try resubmitting your solutions. (08 Oct '18, 06:04)

answered 01 Oct '18, 11:47:
I know it would be overkill, but I was wondering if this problem could be solved using a segment tree and lazy propagation? I haven't coded it yet; I just want to know if it's a valid approach or not.

Can you explain your approach a bit more? (01 Oct '18, 12:33)

Here's my solution with seg tree + lazy propagation — https://www.codechef.com/viewsolution/20402757 (03 Oct '18, 17:52) — chinmay2

answered 01 Oct '18, 16:22:
With sqrt decomposition, how do we handle updates for calculating x in similarity = similarity - x + y, where x is the similarity of the l to r range?

Say we have the similarity for a block calculated (for the initial position, we can calculate it manually). After an update, the two border blocks can be updated naively, and the middle blocks will all have the same value, so we can calculate each block's contribution using binary search or a merge sort tree as explained above. (01 Oct '18, 16:33)

Actually I am asking: do we have to store array A in (range, value) manner always, or can we handle updates in some other way? (thanks...) (01 Oct '18, 16:45)

We use coordinate compression on array B. (01 Oct '18, 16:56)

I used the full array but with some sort of lazy propagation (do not actually fill it until you get a partial query on that block). (04 Oct '18, 05:04) — inseder

answered 01 Oct '18, 19:01:
Link to my submission. @taran_1407 could you please tell me the fault in my logic? Or can anyone please tell me why I am getting TLE? I'm using a segment tree with lazy updates and binary search to look for similarities. Thanks in advance!

answered 02 Oct '18, 01:01:
I'm having some trouble calculating x. I can't seem to understand how to calculate the value of x in less than O(N).
Link to my code: Code. I have commented the portion (in the main function) where I'm having a problem with "TODO:". Can someone please help me out with this? Thank you. answered 02 Oct '18, 01:01

Why is the tester's solution getting a runtime error? answered 02 Oct '18, 11:35

My Solution / My Same Solution: the two submissions above have the same code, but their submission times are different. Until yesterday, the solution that was giving 100 pts is now showing Runtime Error for one subtask. @admin @taran_1407 @teja349, please look into this matter. How can the same code give two different outcomes? The tester's solution gives Runtime Error and the setter's solution gives TLE. answered 02 Oct '18, 15:02

As far as I know, the admin is looking into this matter, as it was already reported. (02 Oct '18, 17:59)

@huzaifa242 I am facing the same problem: the same solution submitted at different times shows different output behaviour, with runtime errors. Different attempts with the exact same lines of code give different outputs. (02 Oct '18, 20:19)

Can this problem be done in O(N^2) time? answered 03 Oct '18, 15:17

Similar code is getting different verdicts; can someone help me out? Code1 Code2 answered 04 Oct '18, 09:59

question asked: 01 Oct '18, 00:27; question was seen: 1,339 times; last updated: 08 Oct '18, 06:05
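Most of the answers in this thread lean on the same building block: a segment tree with lazy propagation. The sketch below is a generic illustration of that structure in Python (range assignment plus range sum, O(log N) per operation); it is not a solution to the contest problem itself, whose statement is not quoted in this thread.

```python
# Generic segment tree with lazy propagation: range-assign updates and
# range-sum queries, each in O(log N).  Illustration only; the actual
# CodeChef problem needs a problem-specific merge, not a plain sum.

class LazySegmentTree:
    def __init__(self, data):
        self.n = len(data)
        self.sum = [0] * (4 * self.n)
        self.lazy = [None] * (4 * self.n)   # pending assignment, or None
        self._build(1, 0, self.n - 1, data)

    def _build(self, node, lo, hi, data):
        if lo == hi:
            self.sum[node] = data[lo]
            return
        mid = (lo + hi) // 2
        self._build(2 * node, lo, mid, data)
        self._build(2 * node + 1, mid + 1, hi, data)
        self.sum[node] = self.sum[2 * node] + self.sum[2 * node + 1]

    def _push(self, node, lo, hi):
        # Move a pending assignment down to the two children.
        if self.lazy[node] is None or lo == hi:
            return
        mid = (lo + hi) // 2
        for child, clo, chi in ((2 * node, lo, mid), (2 * node + 1, mid + 1, hi)):
            self.lazy[child] = self.lazy[node]
            self.sum[child] = self.lazy[node] * (chi - clo + 1)
        self.lazy[node] = None

    def assign(self, l, r, value, node=1, lo=0, hi=None):
        if hi is None:
            hi = self.n - 1
        if r < lo or hi < l:
            return
        if l <= lo and hi <= r:
            self.lazy[node] = value
            self.sum[node] = value * (hi - lo + 1)
            return
        self._push(node, lo, hi)
        mid = (lo + hi) // 2
        self.assign(l, r, value, 2 * node, lo, mid)
        self.assign(l, r, value, 2 * node + 1, mid + 1, hi)
        self.sum[node] = self.sum[2 * node] + self.sum[2 * node + 1]

    def query(self, l, r, node=1, lo=0, hi=None):
        if hi is None:
            hi = self.n - 1
        if r < lo or hi < l:
            return 0
        if l <= lo and hi <= r:
            return self.sum[node]
        self._push(node, lo, hi)
        mid = (lo + hi) // 2
        return (self.query(l, r, 2 * node, lo, mid)
                + self.query(l, r, 2 * node + 1, mid + 1, hi))

tree = LazySegmentTree([1, 2, 3, 4, 5])
tree.assign(1, 3, 10)       # array becomes [1, 10, 10, 10, 5]
total = tree.query(0, 4)    # 36
```

The design choice worth noting: a pending assignment fully overwrites a node, so lazy tags never need to be composed; a newer assignment simply replaces an older one.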
https://almostsuremath.com/tag/stochastic-differential-equations/
# Zero-Hitting and Failure of the Martingale Property

For nonnegative local martingales, there is an interesting symmetry between the failure of the martingale property and the possibility of hitting zero, which I will describe now. I will also give a necessary and sufficient condition for solutions to a certain class of stochastic differential equations to hit zero in finite time and, using the aforementioned symmetry, infer a necessary and sufficient condition for the processes to be proper martingales. It is often the case that solutions to SDEs are clearly local martingales, but it is hard to tell whether they are proper martingales. So, the martingale condition, given in Theorem 4 below, is a useful result to know.

The method described here is relatively new to me, only coming up while preparing the previous post. Applying a hedging argument, it was noted that the failure of the martingale property for solutions to the SDE ${dX=X^c\,dB}$ for ${c>1}$ is related to the fact that, for ${c<1}$, the process hits zero. This idea extends to all continuous and nonnegative local martingales. The Girsanov transform method applied here is essentially the same as that used by Carlos A. Sin (Complications with stochastic volatility models, Adv. in Appl. Probab. Volume 30, Number 1, 1998, 256-268) and B. Jourdain (Loss of martingality in asset price models with lognormal stochastic volatility, Preprint CERMICS, 2004-267).

Consider nonnegative solutions to the stochastic differential equation

$\displaystyle \setlength\arraycolsep{2pt} \begin{array}{rl} &\displaystyle dX=a(X)X\,dB,\smallskip\\ &\displaystyle X_0=x_0, \end{array}$ (1)

where ${a\colon{\mathbb R}_+\rightarrow{\mathbb R}}$, B is a Brownian motion and the fixed initial condition ${x_0}$ is strictly positive. The multiplier X in the coefficient of dB ensures that if X ever hits zero then it stays there.
By time-change methods, uniqueness in law is guaranteed as long as a is nonzero and ${a^{-2}}$ is locally integrable on ${(0,\infty)}$. Consider also the following SDE,

$\displaystyle \setlength\arraycolsep{2pt} \begin{array}{rl} &\displaystyle dY=\tilde a(Y)Y\,dB,\smallskip\\ &\displaystyle Y_0=y_0,\smallskip\\ &\displaystyle \tilde a(y) = a(y^{-1}),\ y_0=x_0^{-1} \end{array}$ (2)

Being integrals with respect to Brownian motion, solutions to (1) and (2) are local martingales. It is possible for them to fail to be proper martingales though, and they may or may not hit zero at some time. These possibilities are related by the following result.

Theorem 1. Suppose that (1) and (2) satisfy uniqueness in law. Then, X is a proper martingale if and only if Y never hits zero. Similarly, Y is a proper martingale if and only if X never hits zero.

# Failure of the Martingale Property

In this post, I give an example of a class of processes which can be expressed as integrals with respect to Brownian motion, but are not themselves martingales. As stochastic integration preserves the local martingale property, such processes are guaranteed to be at least local martingales. However, this is not enough to conclude that they are proper martingales. Whereas constructing examples of local martingales which are not martingales is a relatively straightforward exercise, such examples are often slightly contrived and the martingale property fails for obvious reasons (e.g., double-loss betting strategies). The aim here is to show that the martingale property can fail for very simple stochastic differential equations which are likely to be met in practice, and it is not always obvious when this situation arises. Consider the following stochastic differential equation

$\displaystyle \setlength\arraycolsep{2pt} \begin{array}{rl} &\displaystyle dX = aX^c\,dB +b X dt,\smallskip\\ &\displaystyle X_0=x, \end{array}$ (1)

for a nonnegative process X.
Here, B is a Brownian motion and a,b,c,x are positive constants. This is a common SDE appearing, for example, in the constant elasticity of variance model for option pricing. Now consider the following question: what is the expected value of X at time t? The obvious answer seems to be that ${{\mathbb E}[X_t]=xe^{bt}}$, based on the idea that X has growth rate b on average. A more detailed argument is to write out (1) in integral form

$\displaystyle X_t=x+\int_0^t\,aX^c\,dB+ \int_0^t bX_s\,ds.$ (2)

The next step is to note that the first integral is with respect to Brownian motion, so has zero expectation. Therefore,

$\displaystyle {\mathbb E}[X_t]=x+\int_0^tb{\mathbb E}[X_s]\,ds.$

This can be differentiated to obtain the ordinary differential equation ${d{\mathbb E}[X_t]/dt=b{\mathbb E}[X_t]}$, which has the unique solution ${{\mathbb E}[X_t]={\mathbb E}[X_0]e^{bt}}$.

In fact this argument is false. For ${c\le1}$ there is no problem, and ${{\mathbb E}[X_t]=xe^{bt}}$ as expected. However, for all ${c>1}$ the conclusion is wrong, and the strict inequality ${{\mathbb E}[X_t]<xe^{bt}}$ holds. The point where the argument above falls apart is the statement that the first integral in (2) has zero expectation. This would indeed follow if it was known that it is a martingale, as is often assumed to be true for stochastic integrals with respect to Brownian motion. However, stochastic integration preserves the local martingale property and not, in general, the martingale property itself. If ${c>1}$ then we have exactly this situation, where only the local martingale property holds. The first integral in (2) is not a proper martingale, and has strictly negative expectation at all positive times. The reason that the martingale property fails here for ${c>1}$ is that the coefficient ${aX^c}$ of dB grows too fast in X. In this post, I will mainly be concerned with the special case of (1) with a=1 and zero drift.
$\displaystyle \setlength\arraycolsep{2pt} \begin{array}{rl} &\displaystyle dX=X^c\,dB,\smallskip\\ &\displaystyle X_0=x. \end{array}$ (3)

The general form (1) can be reduced to this special case, as I describe below. SDEs (1) and (3) do have unique solutions, as I will prove later. Then, as X is a nonnegative local martingale, if it ever hits zero then it must remain there (0 is an absorbing boundary). The solution X to (3) has the following properties, which will be proven later in this post.

• If ${c\le1}$ then X is a martingale and, for ${c<1}$, it eventually hits zero with probability one.
• If ${c>1}$ then X is a strictly positive local martingale but not a martingale. In fact, the following inequality holds $\displaystyle {\mathbb E}[X_t\mid\mathcal{F}_s]<X_s$ (4) (almost surely) for times ${s<t}$. Furthermore, for any positive constant ${p<2c-1}$, ${{\mathbb E}[X_t^p]}$ is bounded over ${t\ge0}$ and tends to zero as ${t\rightarrow\infty}$.

# SDEs Under Changes of Time and Measure

The previous two posts described the behaviour of standard Brownian motion under stochastic changes of time and equivalent changes of measure. I now demonstrate some applications of these ideas to the study of stochastic differential equations (SDEs). Surprisingly strong results can be obtained and, in many cases, it is possible to prove existence and uniqueness of solutions to SDEs without imposing any continuity constraints on the coefficients. This is in contrast to most standard existence and uniqueness results for both ordinary and stochastic differential equations, where conditions such as Lipschitz continuity are required.
For example, consider the following SDE for measurable coefficients ${a,b\colon{\mathbb R}\rightarrow{\mathbb R}}$ and a Brownian motion B

$\displaystyle dX_t=a(X_t)\,dB_t+b(X_t)\,dt.$ (1)

If a is nonzero, ${a^{-2}}$ is locally integrable and b/a is bounded, then we can show that this has weak solutions satisfying uniqueness in law for any specified initial distribution of X. The idea is to start with X being a standard Brownian motion and apply a change of time to obtain a solution to (1) in the case where the drift term b is zero. Then, a Girsanov transformation can be used to change to a measure under which X satisfies the SDE for nonzero drift b. As these steps are invertible, every solution can be obtained from a Brownian motion in this way, which uniquely determines the distribution of X. A standard example demonstrating the concept of weak solutions and uniqueness in law is provided by Tanaka's SDE

$\displaystyle dX_t={\rm sgn}(X_t)\,dB_t$ (2)

# SDEs with Locally Lipschitz Coefficients

In the previous post it was shown how the existence and uniqueness of solutions to stochastic differential equations with Lipschitz continuous coefficients follows from the basic properties of stochastic integration. However, in many applications, it is necessary to weaken this condition a bit. For example, consider the following SDE for a process X

$\displaystyle dX_t =\sigma \vert X_{t-}\vert^{\alpha}\,dZ_t,$

where Z is a given semimartingale and ${\sigma,\alpha}$ are fixed real numbers. The function ${f(x)\equiv\sigma\vert x\vert^\alpha}$ has derivative ${f^\prime(x)=\sigma\alpha {\rm sgn}(x)|x|^{\alpha-1}}$ which, for ${\alpha>1}$, is bounded on bounded subsets of the reals. It follows that f is Lipschitz continuous on such bounded sets. However, the derivative of f diverges to infinity as x goes to infinity, so f is not globally Lipschitz continuous.
Similarly, if ${\alpha<1}$ then f is Lipschitz continuous on compact subsets of ${{\mathbb R}\setminus\{0\}}$, but not globally Lipschitz. To be more widely applicable, the results of the previous post need to be extended to include such locally Lipschitz continuous coefficients. In fact, uniqueness of solutions to SDEs with locally Lipschitz continuous coefficients follows from the global Lipschitz case. However, solutions need only exist up to a possible explosion time. This is demonstrated by the following simple non-stochastic differential equation

$\displaystyle dX= X^2\,dt.$

For initial value ${X_0=x>0}$, this has the solution ${X_t=(x^{-1}-t)^{-1}}$, which explodes at time ${t=x^{-1}}$.

# Existence of Solutions to Stochastic Differential Equations

A stochastic differential equation, or SDE for short, is a differential equation driven by one or more stochastic processes. For example, in physics, a Langevin equation describing the motion of a point ${X=(X^1,\ldots,X^n)}$ in n-dimensional phase space is of the form

$\displaystyle \frac{dX^i}{dt} = \sum_{j=1}^m a_{ij}(X)\eta^j(t) + b_i(X).$ (1)

The dynamics are described by the functions ${a_{ij},b_i\colon{\mathbb R}^n\rightarrow{\mathbb R}}$, and the problem is to find a solution for X, given its value at an initial time. What distinguishes this from an ordinary differential equation are the random noise terms ${\eta^j}$ and, consequently, solutions to the Langevin equation are stochastic processes. It is difficult to say exactly how ${\eta^j}$ should be defined directly, but we can suppose that their integrals ${B^j_t=\int_0^t\eta^j(s)\,ds}$ are continuous with independent and identically distributed increments. A candidate for such a process is standard Brownian motion and, up to a constant scaling factor and drift term, it can be shown that this is the only possibility.
However, Brownian motion is nowhere differentiable, so the original noise terms ${\eta^j=dB^j_t/dt}$ do not have well defined values. Instead, we can rewrite equation (1) in terms of the Brownian motions. This gives the following SDE for an n-dimensional process ${X=(X^1,\ldots,X^n)}$

$\displaystyle dX^i_t = \sum_{j=1}^m a_{ij}(X_t)\,dB^j_t + b_i(X_t)\,dt$ (2)

where ${B^1,\ldots,B^m}$ are independent Brownian motions. This is to be understood in terms of the differential notation for stochastic integration. It is known that if the functions ${a_{ij}, b_i}$ are Lipschitz continuous then, given any starting value for X, equation (2) has a unique solution. In this post, I give a proof of this using the basic properties of stochastic integration as introduced over the past few posts.

First, in keeping with these notes, equation (2) can be generalized by replacing the Brownian motions ${B^j}$ and time t by arbitrary semimartingales. As always, we work with respect to a complete filtered probability space ${(\Omega,\mathcal{F},\{\mathcal{F}_t\}_{t\ge 0},{\mathbb P})}$. In integral form, the general SDE for a cadlag adapted process ${X=(X^1,\ldots,X^n)}$ is as follows,

$\displaystyle X^i = N^i + \sum_{j=1}^m\int a_{ij}(X)\,dZ^j.$ (3)
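The zero-hitting claim for ${c<1}$ made earlier can be illustrated numerically with a crude Euler-Maruyama discretization of ${dX=X^c\,dB}$. Everything in the sketch below (step size, horizon, number of paths, the absorb-at-zero rule) is an ad hoc choice for illustration and is not part of the argument in these posts.

```python
# Euler-Maruyama for dX = X^c dB, absorbing each path at zero.
# Rough Monte Carlo sketch; parameters are illustrative choices.

import random

def simulate_paths(c=0.5, x0=1.0, t=10.0, dt=0.01, n_paths=2000, seed=1):
    """Return the terminal values of n_paths discretized paths."""
    rng = random.Random(seed)
    n_steps = int(round(t / dt))
    sqrt_dt = dt ** 0.5
    finals = []
    for _ in range(n_paths):
        x = x0
        for _ in range(n_steps):
            x += (x ** c) * sqrt_dt * rng.gauss(0.0, 1.0)
            if x <= 0.0:          # zero is absorbing for this SDE
                x = 0.0
                break
        finals.append(x)
    return finals

finals = simulate_paths()
frac_absorbed = sum(1 for x in finals if x == 0.0) / len(finals)
mean_final = sum(finals) / len(finals)
# For c = 1/2 most paths are absorbed by t = 10, while the sample mean
# stays near x0 = 1: X is still a true martingale when c <= 1.
```

A naive run with ${c>1}$ instead produces rare, enormous spikes along individual paths, which hints at why the expectation argument can break down in that regime.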
https://ecologicalproblems.ru/blog/speed-change-of-cylinder-from-center.php
Speed change of cylinder from center

The basic geometry of a piston (reciprocating) internal combustion engine is defined by the following parameters: the compression ratio, the ratio of cylinder bore to piston stroke, and the ratio of connecting rod length to crank radius (offset). The compression ratio is calculated as the ratio between the maximum (total) volume of the cylinder (when the piston …

This system is sketched in basic form in Figure 2. When the valve is fully shifted to extend the cylinder, the extension speed will be greater than when the valve …

When the cylinder begins spinning very rapidly, the floor is removed from under the riders' feet. What effect does a doubling in speed have upon the centripetal …

Rotational inertia is a property of any object which can be rotated. It is a scalar value which tells us how difficult it is to change the rotational velocity. Any of the individual angular momenta can change as long as their sum remains constant. The cylinder is free to rotate about its axis through its center …

You do not need to remove the parts inside the cylinder center hole. This requires a special split-tip screwdriver to remove the internal retaining screw (left-hand threads, secured with thread lock).

T = required torque, lb-ft; WK² = mass moment of inertia of the load to be accelerated, lb-ft² (see mass moment of inertia calculations); change of speed, rpm; t = time to accelerate the load, seconds.

Let R be the distance between the cylinder and the center of the turntable. Now assume that the cylinder is moved to a new location R/2 from the center of the turntable. Which of the following statements accurately describe the motion of the cylinder at the new location? Check all that apply. The speed of the cylinder has decreased.

What is the new angular speed when the man walks to a point … m from the center? Consider the merry-go-round as a solid 11 kg cylinder of radius 1.5 m.

Express your answer in terms of the velocity of the center of the cylinder and the rotation rate of the cylinder.

… required to bring it up to an angular speed of … rev/min in … s, starting from rest? The flywheel of an engine has moment of inertia … about its rotation axis.

Let us say that we have a cylinder with a stroke of 20 in. A fast forward speed of 20 ft/min is needed for the first 10 inches, with a speed of 5 ft/min for the remainder of the stroke. The desired return speed is 25 ft/min. It is evident that we need three speed controls and at least one …

Maximum piston speed: you can calculate the maximum piston speed by multiplying the stroke value by π. Then divide the result by 12 to express it in feet per revolution. After that, multiply by the maximum engine speed to get the maximum speed in feet per minute.

Bones said: "There's this one: (1/2)Mv² + (1/2)I_cm ω² + Mgy. But I am not sure where to get all this information from just having the height of the incline." OK. You have your potential energy, and at the bottom, what has happened to that PE? It has become KE for this problem. So what you have is m·g·h = Σ KE = (1/2)mv² + (1/2)Iω².

An apparatus as set forth in claim 9, wherein the second means carries out the simultaneous fuel injection for all cylinders when an engine start switch is …

1. The moment of inertia for a cylinder has a factor of r² in it, while we can change ω to v using the equation ω = v/r. Thus, ω² = v²/r². Plugging this and the moment of inertia …

A cylinder whose blind-end piston area is twice the rod-end area will extend and retract at the same speed if regeneration is used. Without regeneration, the extend speed would be 50% of the retract speed. Cylinders with large rod diameters typically offer small amounts of oil for use in cylinder extension in regenerative circuits.

The radius of the cylinder is r. At what angular speed ω must this cylinder rotate to have the same total kinetic energy that it would have if it were moving …

A uniform solid sphere of mass M and radius R rotates with an angular speed ω about an axis through its center. A uniform solid cylinder of mass M, radius R, and length 2R rotates about an axis running through the central axis of …
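The maximum-piston-speed recipe quoted above (stroke times π, divided by 12 to convert inches to feet, multiplied by engine speed) can be written as a small helper. The function names and the worked numbers below are illustrative choices, not values taken from the page.

```python
# Encode the quoted recipe: stroke (in) * pi gives peak travel per
# revolution, dividing by 12 converts inches to feet, and multiplying
# by engine speed (rpm) gives feet per minute.
import math

def max_piston_speed_fpm(stroke_in, rpm):
    """Approximate peak piston speed in feet per minute."""
    feet_per_rev = stroke_in * math.pi / 12.0
    return feet_per_rev * rpm

def mean_piston_speed_fpm(stroke_in, rpm):
    """Mean piston speed in feet per minute: two strokes per revolution."""
    return 2.0 * stroke_in * rpm / 12.0
```

For example, a 4 in stroke at 6000 rpm gives a mean piston speed of 4000 ft/min and a peak of about 6283 ft/min; the peak-to-mean ratio is π/2, the usual simple-harmonic approximation.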
While the object undergoes slipping motion, the translational speed thus linearly decreases with time, whereas the rotational speed linearly increases. To find the …

Even straight-line motion may have vorticity if the speed changes normal to the flow axis. (ESS, Prof. Jin-Yi Yu)

This conversion occurs within the cylinders of the engine through the process of combustion. … two-speed supercharger, a lever or switch in the flight deck.

Step 2: Use the formulas to calculate the moment of inertia for the cylinder. Since the cylinder is rotating around the z-axis, the formula we must use to calculate its moment of inertia is I_z.

(a) What is the angular speed of the cylinder about its centre as it leaves the roof? (b) The roof's edge is at height H = … m. How far horizontally from …

Here, the angular speed is Ω̃ = 0 and the cylinder is balanced with its center of mass directly above the point of contact. Moving down vertically, θ will again …

Consider first the angular speed: ω is the rate at which the angle of rotation changes. In equation form, the angular speed is ω = Δθ/Δt.

We offer flow controls, speed-control mufflers, and quick exhaust valves to accomplish speed control. Double-acting cylinders can have both the out and in …

Notice that as the tool plunges closer to the workpiece's center, the same spindle speed will yield a decreasing surface (cutting) speed (because each rev …

Control air flow of cylinders with confidence and more reliability. The output force of a pneumatic cylinder is a function of air pressure, just as cylinder speed is a function of its air …

Solving for the velocity shows the cylinder to be the clear winner. The cylinder will reach the bottom of the incline with a speed that is 15% higher than the …

A rule of thumb says that for moderate cylinder speed, the flow area through piping and valving should be at least equal to the flow area through the cylinder ports, and perhaps a pipe size larger if very high speed is required. Usually, for cylinders up to and including 3-inch bore, a 1/4" or 3/8" valve is sufficient for normal cylinder speed.

The rotational speed in motors is measured in rotations per minute. Thus, in one rotation, distance and linear speed increase proportionally by 2…

Two heavy cylindrical masses are placed at opposite ends of a platform that rotates around its center. A torque is applied to the platform's axle by means …

A 3D cubic mesh is used to divide a problem. In 3 separate 3D arrays the cell center coordinates are stored (i.e., cx, cy, cz). A cylinder is upright either …

Figure: a 5-way, 3-position, spring-centered, pilot-operated solenoid valve, cylinder ports open-center condition, line mounted. The heavier the weight and the slower the cylinder speed, the longer the pause. The only way to change flow is to change the orifice size. This flow control valve is not pressure compensated. Many of …

Specifically because $\omega=\frac{v}{R}$, which would be the angular speed using the speed of a point on the outside of the circle. I'm specifically asking why we can assume $\omega=\frac{v_{cm}}{R}$ and still use the moment of inertia about the center of mass. This does not make sense to me.

The diameter of a right circular cylinder is increasing at a rate of 4 cm/sec, while the height is decreasing at a rate of 3 cm/sec. Finding the rate of change of the cylinder radius with changing height and diameter.

(2) Adjust the auto switch position so that the cylinder magnet comes to the center of the auto switch operation range. (3) Fix with mounting screws.

2) Three cylinders A, B, C, with a given speed ratio, are externally in contact with each other. If the center distance between …

(… km/h) × (… m/km) × (1 h / … s) = … m/s. Hence, ω = v/r = … radians/s, or, multiplying by 60/2π, … RPM. Somehow the rotational speed of the engine must …

What is the magnitude of the angular momentum of the cylinder about its axis? At the bottom of the inclined plane, the centre of mass of the cylinder has a speed of 5 m/s.

A ceiling fan consists of a small uniform cylindrical disk with 5 thin rods coming from the center. The disk has mass m_d = … kg and radius R = … m.
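The rolling-race snippet above ("the cylinder will reach the bottom of the incline with a speed that is 15% higher than the …") is cut off before naming the losing body. Energy conservation for rolling without slipping gives v = sqrt(2gh / (1 + k)) with k = I/(mR²); assuming the comparison is against a hoop, which is the pairing that reproduces the quoted 15% figure, the sketch below checks the ratio.

```python
# Speed at the bottom of an incline for a body rolling without slipping,
# from (1/2)mv^2 + (1/2)I*omega^2 = mgh with omega = v/R and k = I/(m R^2).
# The hoop comparison is an assumption; the original text is truncated.
import math

def rolling_speed(h, k, g=9.81):
    """k = I / (m R^2): 0 slider, 2/5 solid sphere, 1/2 solid cylinder, 1 hoop."""
    return math.sqrt(2.0 * g * h / (1.0 + k))

h = 1.0
v_cylinder = rolling_speed(h, 0.5)   # solid cylinder, I = (1/2) m R^2
v_hoop = rolling_speed(h, 1.0)       # hoop, I = m R^2
ratio = v_cylinder / v_hoop          # sqrt(4/3), about 1.155
```

A solid sphere (k = 2/5) would beat both, and a frictionless slider (k = 0) beats everything, since none of its energy is diverted into rotation.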
https://statsim.com/docs/models-and-projects/
# 2.1 Models and Projects in StatSim

Introduction to StatSim models

In [1]: var iframer = require('./iframer')

"All models are wrong but some are useful"

Models describe the relationship between variables and how output is generated. A statistical model represents a data generation process that approximates some real-life phenomenon. Creating models in StatSim amounts to programming in the browser, or, strictly speaking, probabilistic programming in the browser.

A new model 'Main' is initialized automatically when you create a new project or fresh-start the app. Usually you need only one model in a project; however, each StatSim project can include multiple models. It's possible to call one model from another, dividing the project into logical parts connected to each other.

Model inputs are declared as data blocks with the "Use as parameter" checkbox activated. Such parameters can be used later when calling the model as a function or using it as a probability distribution. If a model returns a scalar value in Deterministic simulation, it behaves as a function when called from elsewhere.

To create a new model, find the small icon with three dots ⦁ ⦁ ⦁ just next to the current model tab.

Example of two models: one of them, called lin, behaves as a linear function that takes x and returns a * x + b. It is then called from the main model as a function.
In [2]: iframer('https://statsim.com/app/?a=%5B%7B%22b%22%3A%5B%7B%22n%22%3A%22input%22%2C%22sh%22%3Afalse%2C%22t%22%3A2%2C%22u%22%3Atrue%2C%22dims%22%3A%22%22%2C%22v%22%3A%2210%22%7D%2C%7B%22n%22%3A%22output%22%2C%22h%22%3Afalse%2C%22sh%22%3Atrue%2C%22t%22%3A1%2C%22v%22%3A%22lin%28input%29%22%7D%5D%2C%22mod%22%3A%7B%22n%22%3A%22Main%22%2C%22e%22%3A%22Main%20model%20calls%20second%20model%20%60lin%60%20as%20a%20function%20lin%28input%29.%20Main%20also%20has%20its%20own%20input%20parameter%3A%22%2C%22s%22%3A1%2C%22m%22%3A%22deterministic%22%7D%2C%22met%22%3A%7B%22sm%22%3A1000%7D%7D%2C%7B%22b%22%3A%5B%7B%22n%22%3A%22x%22%2C%22sh%22%3Afalse%2C%22t%22%3A2%2C%22u%22%3Atrue%2C%22dims%22%3A%22%22%2C%22v%22%3A%220%22%7D%2C%7B%22n%22%3A%22result%22%2C%22h%22%3Afalse%2C%22sh%22%3Atrue%2C%22t%22%3A1%2C%22v%22%3A%224%20%2A%20x%20%2B%207%22%7D%5D%2C%22mod%22%3A%7B%22n%22%3A%22lin%22%2C%22e%22%3A%22Takes%20x%2C%20returns%20a%20%2A%20x%20%2B%20b.%22%2C%22m%22%3A%22deterministic%22%2C%22s%22%3A1%7D%2C%22met%22%3A%7B%22sm%22%3A1000%7D%7D%5D')

Out[2]:

Other simulation methods allow models to be exported as probability distributions. Such distributions will be loaded automatically into the list of default distributions in the Random Variable blocks of other models.

Let's change how the lin model behaves: instead of returning the scalar a * x + b, it will return a Gaussian distribution centered around a * x + b, with a standard deviation equal to x / 5 (as used in the embedded model definition).

Be careful with creating many interconnected probabilistic models: the number of samples to be generated grows exponentially.
In [3]: iframer('https://statsim.com/app/?a=%5B%7B%22b%22%3A%5B%7B%22n%22%3A%22input%22%2C%22sh%22%3Afalse%2C%22t%22%3A2%2C%22u%22%3Atrue%2C%22dims%22%3A%22%22%2C%22v%22%3A%2210%22%7D%2C%7B%22d%22%3A%22lin%22%2C%22n%22%3A%22output%22%2C%22o%22%3Afalse%2C%22p%22%3A%7B%22x%22%3A%22input%22%7D%2C%22sh%22%3Atrue%2C%22t%22%3A0%2C%22dims%22%3A%221%22%7D%5D%2C%22mod%22%3A%7B%22n%22%3A%22Main%22%2C%22e%22%3A%22Main%20model%20calls%20second%20model%20%60lin%60%20as%20a%20function%20lin%28input%29.%20Main%20also%20has%20its%20own%20input%20parameter%3A%22%2C%22s%22%3A1%2C%22m%22%3A%22HMC%22%7D%2C%22met%22%3A%7B%22sm%22%3A%222000%22%2C%22b%22%3A%22200%22%7D%7D%2C%7B%22b%22%3A%5B%7B%22n%22%3A%22x%22%2C%22sh%22%3Afalse%2C%22t%22%3A2%2C%22u%22%3Atrue%2C%22dims%22%3A%22%22%2C%22v%22%3A%221%22%7D%2C%7B%22d%22%3A%22Gaussian%22%2C%22n%22%3A%22output%22%2C%22o%22%3Afalse%2C%22p%22%3A%7B%22mu%22%3A%224%20%2A%20x%20%2B%207%22%2C%22sigma%22%3A%22x%20%2F%205%22%7D%2C%22sh%22%3Atrue%2C%22t%22%3A0%2C%22dims%22%3A%221%22%7D%5D%2C%22mod%22%3A%7B%22n%22%3A%22lin%22%2C%22e%22%3A%22Takes%20x%2C%20returns%20a%20%2A%20x%20%2B%20b.%22%2C%22m%22%3A%22MCMC%22%2C%22s%22%3A1%7D%2C%22met%22%3A%7B%22sm%22%3A%222000%22%2C%22l%22%3A%22200%22%7D%7D%5D') Out[3]: By Anton Zemlyansky in
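The pair of chained models in the embedded example can be mimicked in plain Python. This is only an analogue for illustration, not StatSim code: the app's own engine does the sampling. The parameters (mean 4 * x + 7, standard deviation x / 5) are taken from the embedded lin model definition.

```python
# Plain-Python analogue of the chained StatSim models: `lin` behaves as
# a probability distribution (Gaussian centred at 4*x + 7, sd x/5) and
# the Main model draws from it with its own input parameter.
import random

def lin(x, rng):
    """Sample the `lin` model: Gaussian(4 * x + 7, x / 5)."""
    return rng.gauss(4 * x + 7, x / 5)

rng = random.Random(0)
input_value = 10                       # Main's input parameter
samples = [lin(input_value, rng) for _ in range(2000)]
mean = sum(samples) / len(samples)     # close to 4 * 10 + 7 = 47
```

With input = 10 the sample mean lands close to 4 * 10 + 7 = 47, which is what the output histogram in the app should be centred on.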
https://git.rockbox.org/cgit/rockbox.git/tree/manual/main_menu/main.tex?id=42d9b1593dc78e903584b07b47659952e48f4ad7
% $Id$ \chapter{The Main Menu} \section{Introducing the Main Menu}
\screenshot{main_menu/images/ss-main-menu}{The main menu}{} The \setting{Main Menu} is the screen from which the rest of the Rockbox functions can be accessed. It is used for a variety of functions, which are detailed below. All options in Rockbox can be controlled via the \setting{Main Menu}. To enter the \setting{Main Menu}, \opt{IRIVER_H100_PAD,IRIVER_H300_PAD}{press the \ButtonMode\ button.} \opt{RECORDER_PAD}{press the \ButtonFOne\ button.} \opt{PLAYER_PAD,IPOD_4G_PAD,ONDIO_PAD}{press the \ButtonMenu\ button.} \opt{IAUDIO_X5_PAD}{press the \ButtonRec\ button.} All settings are stored on the unit. However, Rockbox does not spin up the disk solely for the purpose of saving settings. Instead, Rockbox will save settings when it spins up the disk the next time, for example when refilling the MP3 buffer or navigating through the file browser. Changes to settings may therefore not be saved unless the \dap\ is shut down safely (see page \pageref{ref:Safeshutdown}). \section{Navigating the Main Menu} \opt{RECORDER_PAD,ONDIO_PAD,IRIVER_H100_PAD,IRIVER_H300_PAD,IAUDIO_X5_PAD,IPOD_4G_PAD}{ \begin{table} \begin{btnmap}{}{} \opt{IPOD_4G_PAD,IPOD_VIDEO_PAD}{\ButtonScrollFwd} \opt{RECORDER_PAD,ONDIO_PAD,IRIVER_H100_PAD,IRIVER_H300_PAD,IAUDIO_X5_PAD}{\ButtonUp} & Moves up in the menu.\\ & Inside a setting, increases the value or chooses next option \\ % \opt{IPOD_4G_PAD,IPOD_VIDEO_PAD}{\ButtonScrollBack} \opt{RECORDER_PAD,ONDIO_PAD,IRIVER_H100_PAD,IRIVER_H300_PAD,IAUDIO_X5_PAD}{\ButtonDown} & Moves down in the menu.\\ & Inside a setting, decreases the value or chooses previous option \\ % \opt{RECORDER_PAD}{\ButtonPlay/\ButtonRight} \opt{IRIVER_H100_PAD,IRIVER_H300_PAD,IAUDIO_X5_PAD}{\ButtonSelect/\ButtonRight} \opt{ONDIO_PAD,IPOD_4G_PAD,IPOD_VIDEO_PAD}{\ButtonRight} & Selects option \\ % \opt{RECORDER_PAD,IRIVER_H100_PAD,IRIVER_H300_PAD}{\ButtonOff/\ButtonLeft} \opt{IAUDIO_X5_PAD,ONDIO_PAD,IPOD_4G_PAD,IPOD_VIDEO_PAD}{\ButtonLeft} & Exits menu, setting or moves to parent 
menu\\ \end{btnmap} \end{table} } \opt{PLAYER_PAD}{ \begin{table} \begin{btnmap}{}{} % \ButtonLeft & Selects previous option in the menu.\\ & Inside a setting, decreases the value or chooses previous option \\ % \ButtonRight & Selects next option in the menu.\\ & Inside a setting, increases the value or chooses next option \\ % \ButtonPlay & Selects item \\ % \ButtonStop & Exits menu, setting or moves to parent menu.\\ \end{btnmap} \end{table} } \section {Recent Bookmarks} \screenshot{main_menu/images/ss-list-bookmarks}% {The list bookmarks screen}{} If the \setting{Save a list of recently created bookmarks} option is enabled then you can view a list of several recent bookmarks here and select one to jump straight to that track. See page \pageref{ref:Bookmarkconfigactual} for more details on configuring bookmarking in Rockbox. \note{This option is off by default.} \section{Sound Settings} The \setting{Sound Settings} menu offers a selection of sound properties you may change to customize your listening experience. This menu is covered in detail starting on page \pageref{ref:configure_rockbox_sound}. \section{General Settings} The \setting{General Settings} menu allows you to customize the way Rockbox looks and the way it plays music. This menu is covered in detail starting on page \pageref{ref:configure_rockbox_general}. \section{Manage Settings} The \setting{Manage Settings} option allows you to save and re-load user configuration settings, browse the hard drive for alternate firmwares, and reset your \dap\ back to its initial configuration. This menu is covered in detail starting on page \pageref{ref:ManageSetting}. \section{Browse Themes} This option displays all the currently installed themes on the \dap; press \ButtonRight\ to load and apply the chosen theme.
A theme is basically a configuration file, stored in a specific directory, that typically changes the WPS \opt{h1xx,h300,x5}{and remote WPS}, the font used and, on some platforms, additional settings such as the background image and text colours. There are a number of themes that ship with Rockbox. If none of these suit your needs, many more can be downloaded from \opt{RECORDER_PAD}{\wikilink{WpsArchos}}% \opt{h1xx}{\wikilink{WpsIriverH100}}% \opt{h300,ipodcolor}{\wikilink{WpsIriverH300}}% \opt{ipodvideo}{\wikilink{WpsIpod5g}}% \opt{ipodnano}{\wikilink{WpsIpodNano}}% \opt{ipodmini}{\wikilink{WpsIpodMini}}% \opt{x5}{\wikilink{WpsIaudioX5}}% . Some of the downloads from this site will actually be standalone WPS files; others will be full-blown themes. \note{Themes do not have to be purely visual. It is quite possible to create a theme that switches between audio configurations for use in the car, with headphones and when connected to an external amplifier. See the section on ``Making Your Own Settings File'' on page \pageref{ref:CreateYourOwnWPS} for more details. } \opt{CONFIG_TUNER}{ \section{\label{ref:FMradio}FM Radio} \opt{x5}{\note{Not currently implemented on X5}} \screenshot{main_menu/images/ss-fm-radio-screen}{The FM radio screen}{} This menu option switches to the radio screen. The FM radio has the ability \opt{HAVE_RECORDING}{to record and } to remember station frequency settings (presets). \opt{recorderv2fm,ondio}{ \begin{table} \begin{btnmap}{}{} \ButtonLeft, \ButtonRight & Change frequency in 0.1 MHz steps.\\ & For automatic station seek, hold \ButtonLeft/\ButtonRight\ % for a little longer.
\\ % \ButtonUp, \ButtonDown & Change volume \\ % \opt{RECORDER_PAD}{ \ButtonPlay & \emph{(EXPERIMENTAL)}\\ & Freezes all screen updates. May enhance radio reception in some cases.\\ } \opt{RECORDER_PAD}{\ButtonOn}\opt{ONDIO_PAD}{\ButtonOff} & Leaves the radio screen with the radio playing \\ % \opt{RECORDER_PAD}{\ButtonOff}\opt{ONDIO_PAD}{hold \ButtonOff} & Back to Main Menu.\\ \end{btnmap} \end{table} } \fixme{Add Radio recording and Preset keys to FM Recorder and Ondio FM} \opt{h1xx,h300,x5}{ \begin{table} \begin{btnmap}{}{} \ButtonLeft, \ButtonRight & Change frequency in 0.1 MHz steps. \\ Hold \ButtonLeft, \ButtonRight & Seeks to next station or preset\\ % \ButtonUp, \ButtonDown & Change volume \\ % \opt{IRIVER_H100_PAD,IRIVER_H300_PAD}{\ButtonOn} \opt{IAUDIO_X5_PAD}{\fixme{TBD}} & Mutes radio playback \\ % \opt{IRIVER_H100_PAD,IRIVER_H300_PAD}{Hold \ButtonOn} \opt{IAUDIO_X5_PAD}{\fixme{TBD}} & Switches between SCAN and PRESET mode.\\ % \ButtonSelect & Opens a list of radio presets. You can view all the presets that you have, and switch to a station.\\ Hold \ButtonSelect & Displays the FM radio settings menu.\\ % \opt{IRIVER_H100_PAD,IRIVER_H300_PAD}{\ButtonMode} \opt{IAUDIO_X5_PAD}{\fixme{TBD}} % & Keeps radio playing and returns to the main menu. You can then press OFF/STOP to browse the file tree while listening to the radio\\ % \opt{IRIVER_H100_PAD,IRIVER_H300_PAD}{\ButtonOff} \opt{IAUDIO_X5_PAD}{\fixme{TBD}} & Stops the radio and returns to Main Menu.\\ \end{btnmap} \end{table} } \begin{description} \item[Saving a preset:] Up to 32 of your favourite stations can be saved as presets. Press \opt{RECORDER_PAD}{\ButtonFOne} \opt{ONDIO_PAD}{\ButtonMenu} \opt{IRIVER_H100_PAD,IRIVER_H300_PAD,IAUDIO_X5_PAD}{\ButtonSelect} to go to the menu, then select \opt{recorderv2fm,ondio}{``Save preset''.} \opt{IRIVER_H100_PAD,IRIVER_H300_PAD,IAUDIO_X5_PAD}{``Add preset''.} Enter the name (maximum 32 characters).
\opt{IRIVER_H100_PAD,IRIVER_H300_PAD}{Press \ButtonOn\ to save.} \opt{IAUDIO_X5_PAD}{Press \fixme{TBD} to save.} \item[Selecting a preset:] \opt{ONDIO_PAD,RECORDER_PAD} { Press \opt{RECORDER_PAD}{\ButtonFTwo}\opt{ONDIO_PAD}{\fixme{FixMe}} to go to the preset list. Use \ButtonUp\ and \ButtonDown\ to move the cursor and then press \opt{RECORDER_PAD}{\ButtonPlay}\opt{ONDIO_PAD}{\fixme{FixMe}} to select. Use \ButtonLeft\ to leave the preset list without selecting anything. } \opt{IRIVER_H100_PAD,IRIVER_H300_PAD,IAUDIO_X5_PAD} { Press \ButtonSelect\ to go to the preset list. Use \ButtonUp\ and \ButtonDown\ to move the cursor and then press \ButtonSelect\ to select. Use \ButtonLeft\ to leave the preset list without selecting anything. } \item[Removing a preset:] \opt{ONDIO_PAD,RECORDER_PAD}{ Press \opt{RECORDER_PAD}{\ButtonFOne}\opt{ONDIO_PAD}{\fixme{FixMe}} to go to the menu, then select ``Remove preset''. } \opt{IRIVER_H100_PAD,IRIVER_H300_PAD,IAUDIO_X5_PAD}{ Press \ButtonSelect\ to go to the preset list. Use \ButtonUp\ and \ButtonDown\ to move the cursor, hold \ButtonSelect\ on the preset that you wish to remove, then select ``Remove preset''. } \opt{RECORDER_PAD}{ \item[Recording:] Press \ButtonFThree\ to start recording the currently playing station. Press \ButtonOff\ to stop recording. Press \ButtonFThree\ again to seamlessly start recording to a new file. The settings for the recording can be changed in the \ButtonFOne\ menu before starting the recording. See page \pageref{ref:Recordingsettings} for details of recording settings. } \end{description} \note{The radio will turn off when starting playback of an audio file.} } \opt{HAVE_RECORDING}{ \section{\label{ref:Recording}Recording} \opt{x5}{\note{Not implemented on X5 yet}} \subsection{\label{ref:Whilerecordingscreen}While Recording Screen} \screenshot{main_menu/images/ss-while-recording-screen}{The while recording screen}{} Entering the ``Recording'' option in the Main Menu launches the recording application.
The screen shows the time elapsed and the size of the file being recorded. A peak meter is present to allow you to set Gain correctly. \opt{MASCODEC}{The frequency, channels and quality} \opt{SWCODEC}{The frequency and channels} settings are shown on the last line. The controls for this screen are: \begin{table} \begin{btnmap}{}{} \ButtonLeft & Decreases Gain \\ % \ButtonRight & Increases Gain \\ % \opt{RECORDER_PAD,IRIVER_H100_PAD,IRIVER_H300_PAD}{\ButtonOn} \opt{ONDIO_PAD,IAUDIO_X5_PAD,IPOD_4G_PAD}{\fixme{FixMe}} & Starts recording. \\ & While recording: closes the current file and opens a new one.\\ % \opt{RECORDER_PAD,IRIVER_H100_PAD,IRIVER_H300_PAD}{\ButtonOff} \opt{ONDIO_PAD,IAUDIO_X5_PAD,IPOD_4G_PAD}{\fixme{FixMe}} & Exits Recording Screen.\\ & While recording: Stops recording \\ % \opt{RECORDER_PAD}{\ButtonFOne} \opt{ONDIO_PAD}{\ButtonMenu} \opt{IPOD_4G_PAD,IAUDIO_X5_PAD}{Hold \ButtonSelect} \opt{IRIVER_H100_PAD,IRIVER_H300_PAD}{\ButtonMode} & Opens Recording Settings screen (see below) \\ % \opt{RECORDER_PAD}{ \ButtonFTwo & Quick menu for recording settings. A quick press will leave the screen up (press {\ButtonFTwo} again to exit), while holding it will close the screen when you release it. \\ } % \opt{IRIVER_H100_PAD,IRIVER_H300_PAD,IPOD_4G_PAD,IAUDIO_X5_PAD}{ \ButtonSelect & Quick menu for recording settings. \\ } % \opt{RECORDER_PAD}{ \ButtonFThree & Quick menu for source setting. \\ & Quick/hold works as for {\ButtonFTwo}. \\ & While recording: Starts a new recording file \\ } \end{btnmap} \end{table} \subsection{\label{ref:Recordingsettings}Recording Settings} \screenshot{main_menu/images/ss-recording-settings}{The recording settings screen}{} \opt{MASCODEC}{ \begin{description} \item[Quality:] Choose the quality here (0 to 7). Default is 5, best quality is 7, smallest file size is 0. This setting affects how much your sound sample will be compressed. Higher quality settings result in larger MP3 files.
The quality setting is just a way of selecting an average bit rate, or number of bits per second, for a recording. When this setting is lowered, recordings are compressed more (meaning worse sound quality), and the average bitrate changes as follows. \end{description} \begin{table}[h!] \begin{center} \begin{tabularx}{0.75\textwidth}{lX}\toprule \emph{Frequency} & \emph{Bitrate} (Kbit/s) -- quality 0$\rightarrow$7 \\\midrule 44100Hz stereo & 75, 80, 90, 100, 120, 140, 160, 170 \\ 22050Hz stereo & 39, 41, 45, 50, 60, 80, 110, 130 \\ 44100Hz mono & 65, 68, 73, 80, 90, 105, 125, 140 \\ 22050Hz mono & 35, 38, 40, 45, 50, 60, 75, 90 \\\bottomrule \end{tabularx} \end{center} \end{table} } \begin{description} \item[Frequency:] Choose the recording frequency (sample rate) -- 48kHz, 44.1kHz, 32kHz, 24kHz, 22.05kHz and 16kHz are available. Higher sample rates use up more disk space, but give better sound quality. This setting determines which frequency range can accurately be reproduced during playback. Lower frequencies produce smaller files. \opt{MASCODEC}{ The frequency setting also determines which version of the MPEG standard the sound is recorded using:\\ MPEG v1 for 48, 44.1 and 32\\ MPEG v2 for 24, 22.05 and 16\\ } \item[Source:] Choose the source of the recording. This can be \opt{recorder,recorderv2fm,h1xx}{SPDIF (digital),} microphone or line in. \opt{CONFIG_TUNER}{For recording from the radio see page \pageref{ref:FMradio}.} \opt{recorder,recorderv2fm,h100} {\note{You cannot change the sample rate for digital recordings.}} \item[Channels:] This allows you to select mono or stereo recording. Please note that for mono recording, only the left channel is recorded. Mono recordings are usually somewhat smaller than stereo. \item[Independent Frames:] The independent frames option tells the \dap\ to encode with the bit reservoir disabled, so the frames are independent of each other. This makes a file easier to edit.
\item[Time Split:] This option is useful when timing recordings. If active, it stops a recording at a given interval and then starts recording again with a new file, which is useful for long-term recordings. \newline The splits are seamless (frame accurate); no audio is lost at the split point. The break between recordings is only the time required to stop and restart the recording, on the order of 2 -- 4 seconds. \newline Options (hours:minutes between splits): off, 24:00, 18:00, 12:00, 10:00, 8:00, 6:00, 4:00, 2:00, 1:20 (80 minute CD), 1:14 (74 minute CD), 1:00, 00:30, 00:15, 00:10, 00:05. \item[Prerecord Time:] This setting buffers a small amount of audio so that when the record button is pressed, the recording will begin from that number of seconds earlier. This is useful for ensuring that a recording begins before a cue that is being waited for.\\ Options: Off, 1 -- 30 seconds \item[Directory:] Allows changing the location where the recorded files are saved. The default location is \fname{/recordings}. \item[Show recording screen on startup:] If set to yes, the \dap\ will start up with the While Recording Screen showing.\\ Options: Yes, No\\ \item[Clipping Light:] Causes the backlight to flash on when clipping has been detected.\\ Options: Off, Remote unit only, Main and remote unit, Main unit only. \end{description} } \section{\label{ref:playlistoptions}Playlist Options} This menu allows you to work with playlists. Playlists can either be created automatically by playing a file in a directory directly, which will cause all of the files in that directory to be placed in the playlist, or they can be created by hand using the \setting{File Menu} (see page \pageref{ref:Filemenu}) or using the \setting{Playlist Options} menu. Both automatic and manually created playlists can be edited using this menu. \begin{description} \item[Create Playlist:] Rockbox will create a playlist with all tracks in the current directory and all subdirectories.
The playlist will be created one folder level ``up'' from where you currently are. \item[View Current Playlist:] Displays the contents of the playlist currently stored in memory. \item[Save Current Playlist:] Saves the current dynamic playlist, excluding queued tracks, to the specified file. If no path is provided then the playlist is saved to the current directory (see page \pageref{ref:Playlistsubmenu}). \item[Recursively Insert Directories: ] If set to \setting{On}, then when a directory is inserted or queued into a dynamic playlist, all subdirectories will also be inserted. If set to \setting{Ask}, Rockbox will prompt the user about whether to include subdirectories. Options: \setting{Off}, \setting{Ask}, \setting{On} \item[Warn When Erasing Dynamic Playlist: ] If set to \setting{Yes}, Rockbox will provide a warning if the user attempts to take an action that will cause Rockbox to erase the current dynamic playlist. Options: \setting{Yes}, \setting{No} \end{description} \section{Browse Plugins} With this option you can load and run various plugins that have been written for Rockbox. A wide variety of these are supplied with Rockbox, including several games, some impressive demos and a number of utilities. A detailed description of the different plugins begins on page \pageref{ref:plugins}. \section{\label{ref:Info}Info} This option shows RAM buffer size, battery voltage level and estimated time remaining, disk total space and disk free space. \opt{player}{Use the MINUS and PLUS keys to step through several pages of information.} \begin{description} \item[Rockbox Info:] Displays some basic system information. This is, from top to bottom, the amount of memory Rockbox has available for storing music (the buffer), battery status, hard disk size and the amount of free space on the disk. \item[Version:] Software version and credits display. \item[Debug (Keep Out!):] This submenu is intended to be used \emph{only} by Rockbox developers.
It shows hardware, disk, battery status and other technical information. \warn{It is not recommended that users access this menu unless instructed to do so in the course of fixing a problem with Rockbox. If you think you have messed up your settings by use of this menu please try to reset \emph{all} settings before asking for help.} \end{description} \opt{player}{ \section{Shutdown} This menu option saves the Rockbox configuration and turns off the hard drive before shutting down the machine. For maximum safety this procedure is recommended when turning off the \dap. (There is a very small risk of hard disk corruption otherwise.) See page \pageref{ref:Safeshutdown} for more details. } \opt{RECORDER_PAD,IRIVER_H100_PAD,IRIVER_H300_PAD,IPOD_4G_PAD,IAUDIO_X5_PAD,IPOD_VIDEO_PAD} { \section{Quick Menu} Whilst not strictly part of the \setting{Main Menu}, it is worth noting that a few of the more commonly used settings are available from the \setting{Quick Menu}. The \setting{Quick Menu} screen is accessed by holding the \opt{RECORDER_PAD}{\ButtonFTwo} \opt{IRIVER_H100_PAD,IRIVER_H300_PAD}{\ButtonMode} \opt{IPOD_4G_PAD,IPOD_VIDEO_PAD}{\ButtonMenu} \opt{IAUDIO_X5_PAD}{\ButtonRec} key, and it allows rapid access to the \setting{Shuffle} and \setting{Repeat} modes (Page \pageref{ref:PlaybackOptions}) and the \setting{Show Files} option (Page \pageref{ref:ShowFiles}). }
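The bitrate table in the Recording Settings section above gives average bitrates rather than file sizes. A quick back-of-envelope script (not part of the manual; it assumes 1 kbit = 1000 bits and a constant average bitrate, which MP3 VBR only approximates) can estimate how large a recording will be:

```python
# Average bitrates (kbit/s) for quality levels 0..7, taken from the
# table in the Recording Settings section.
BITRATES = {
    (44100, "stereo"): [75, 80, 90, 100, 120, 140, 160, 170],
    (22050, "stereo"): [39, 41, 45, 50, 60, 80, 110, 130],
    (44100, "mono"): [65, 68, 73, 80, 90, 105, 125, 140],
    (22050, "mono"): [35, 38, 40, 45, 50, 60, 75, 90],
}

def recording_size_mb(freq_hz, channels, quality, minutes):
    # Size in megabytes, assuming 1 kbit = 1000 bits and a constant
    # average bitrate over the whole recording.
    kbps = BITRATES[(freq_hz, channels)][quality]
    return kbps * 1000 / 8 * minutes * 60 / 1_000_000

# One hour at the default quality (5), 44.1 kHz stereo: 140 kbit/s -> 63 MB.
print(recording_size_mb(44100, "stereo", 5, 60))
```

Together with the Time Split options this gives a feel for how often a long recording needs to be split to stay under a given file size.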
http://mathoverflow.net/revisions/48692/list
# Organizing principles of mathematics In his famous paper "The two cultures of Mathematics" (http://www.dpmms.cam.ac.uk/~wtg10/2cultures.pdf) T. Gowers gives examples of organizing principles in combinatorics. (i) Obviously if events $E_1, \cdots, E_n$ are independent and have non-zero probability, then with non-zero probability they all happen at once. In fact, this can be usefully true even if there is a very limited dependence. [EL,J] (ii) All graphs are basically made out of a few random-like pieces, and we know how those behave. [Sze] (iii) If one is counting solutions, inside a given set, to a linear equation, then it is enough, and usually easier, to estimate Fourier coefficients of the characteristic function of the set. (iv) Many of the properties associated with random graphs are equivalent, and can therefore be taken as sensible definitions of pseudo-random graphs. [CGW,T] (v) Sometimes, the set of all eventually zero sequences of zeros and ones is a good model for separable Banach spaces, or at least allows one to generate interesting hypotheses. (vi) Concentration of measure. More examples (by Tao and others) can be found at http://ncatlab.org/davidcorfield/show/Two+Cultures Do you know other examples in various areas? I mean, for example, globalization techniques in topology (the structure functor in Hirsch, "Differential Topology", $\S 2.11$, and the Mayer–Vietoris sequence in Bott & Tu, "Differential Forms in Algebraic Topology", $\S 5$). Many proofs look like "prove the local version of the theorem and globalize". Do you know such principles? They should be more specific than an undergraduate course, but commonly used in your branch and part of the "common wisdom" of mathematics.
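Principle (i) is just the product rule for independent events; a toy numeric check (illustrative, with made-up probabilities) makes the point concrete:

```python
from math import prod

# Independent events with individual probabilities p_i > 0:
# P(all happen at once) is the product of the p_i, hence still non-zero.
probs = [0.5, 0.3, 0.9, 0.2]
p_all = prod(probs)
print(p_all > 0)
```

Gowers's remark is that the useful version of this principle survives even mild dependence between the events, where the product formula itself no longer holds exactly.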
https://ghc.gitlab.haskell.org/ghc/doc/libraries/Cabal-syntax-3.9.0.0/Language-Haskell-Extension.html
Cabal-syntax-3.9.0.0: A library for working with .cabal files # Documentation data Language Source # This represents a Haskell language dialect. Language Extensions are interpreted relative to one of these base languages. List of known (supported) languages for GHC. data Extension Source # This represents language extensions beyond a base Language definition (such as Haskell98) that are supported by some implementations, usually in some special mode. Where applicable, references are given to an implementation's official documentation. Constructors EnableExtension KnownExtension Enable a known extension DisableExtension KnownExtension Disable a known extension UnknownExtension String An unknown extension, identified by the name of its LANGUAGE pragma. Known Haskell language extensions, including deprecated and undocumented ones. Check “Overview of all language extensions” in GHC User’s Guide for more information. Constructors OverlappingInstances Allow overlapping class instances, provided there is a unique most specific instance for each use. UndecidableInstances Ignore structural rules guaranteeing the termination of class instance resolution. Termination is guaranteed by a fixed-depth recursion stack, and compilation may fail if this depth is exceeded. IncoherentInstances Implies OverlappingInstances. Allow the implementation to choose an instance even when it is possible that further instantiation of types will lead to a more specific instance being applicable. DoRec (deprecated) Deprecated in favour of RecursiveDo. Old description: Allow recursive bindings in do blocks, using the rec keyword. See also RecursiveDo. RecursiveDo Allow recursive bindings in do blocks, using the rec keyword, or mdo, a variant of do. ParallelListComp Provide syntax for writing list comprehensions which iterate over several lists together, like the zipWith family of functions.
MultiParamTypeClasses Allow multiple parameters in a type class. MonomorphismRestriction Enable the dreaded monomorphism restriction. DeepSubsumption Enable deep subsumption, relaxing the simple subsumption rules, implicitly inserting eta-expansions when matching up function types with different quantification structures. FunctionalDependencies Allow a specification attached to a multi-parameter type class which indicates that some parameters are entirely determined by others. The implementation will check that this property holds for the declared instances, and will use this property to reduce ambiguity in instance resolution. Rank2Types (deprecated) A synonym for RankNTypes. Old description: Like RankNTypes but does not allow a higher-rank type to itself appear on the left of a function arrow. RankNTypes Allow a universally-quantified type to occur on the left of a function arrow. PolymorphicComponents (deprecated) A synonym for RankNTypes. Old description: Allow data constructors to have polymorphic arguments. Unlike RankNTypes, does not allow this for ordinary functions. ExistentialQuantification Allow existentially-quantified data constructors. ScopedTypeVariables Cause a type variable in a signature, which has an explicit forall quantifier, to scope over the definition of the accompanying value declaration. PatternSignatures Deprecated, use ScopedTypeVariables instead. ImplicitParams Enable implicit function parameters with dynamic scope. FlexibleContexts Relax some restrictions on the form of the context of a type signature. FlexibleInstances Relax some restrictions on the form of the context of an instance declaration. EmptyDataDecls Allow data type declarations with no constructors. CPP Run the C preprocessor on Haskell source code. KindSignatures Allow an explicit kind signature giving the kind of types over which a type variable ranges.
BangPatterns Enable a form of pattern which forces evaluation before an attempted match, and a form of strict let/where binding. TypeSynonymInstances Allow type synonyms in instance heads. TemplateHaskell Enable Template Haskell, a system for compile-time metaprogramming. ForeignFunctionInterface Enable the Foreign Function Interface. In GHC, implements the standard Haskell 98 Foreign Function Interface Addendum, plus some GHC-specific extensions. Arrows Enable arrow notation. Generics (deprecated) Enable generic type classes, with default instances defined in terms of the algebraic structure of a type. ImplicitPrelude Enable the implicit importing of the module Prelude. When disabled, when desugaring certain built-in syntax into ordinary identifiers, use whatever is in scope rather than the Prelude version. NamedFieldPuns Enable syntax for implicitly binding local names corresponding to the field names of a record. Puns bind specific names, unlike RecordWildCards. PatternGuards Enable a form of guard which matches a pattern and binds variables. GeneralizedNewtypeDeriving Allow a type declared with newtype to use deriving for any class with an instance for the underlying type. GeneralisedNewtypeDeriving ExtensibleRecords Enable the "Trex" extensible records system. RestrictedTypeSynonyms Enable type synonyms which are transparent in some definitions and opaque elsewhere, as a way of implementing abstract datatypes. HereDocuments Enable an alternate syntax for string literals, with string templating. MagicHash Allow the character # as a postfix modifier on identifiers. Also enables literal syntax for unboxed values. TypeFamilies Allow data types and type synonyms which are indexed by types, i.e. ad-hoc polymorphism for types. StandaloneDeriving Allow a standalone declaration which invokes the type class deriving mechanism. UnicodeSyntax Allow certain Unicode characters to stand for certain ASCII character sequences, e.g. keywords and punctuation.
- **UnliftedFFITypes**: Allow the use of unboxed types as foreign types, e.g. in foreign import and foreign export.
- **InterruptibleFFI**: Enable interruptible FFI.
- **CApiFFI**: Allow use of the CAPI FFI calling convention (foreign import capi).
- **LiberalTypeSynonyms**: Defer validity checking of types until after expanding type synonyms, relaxing the constraints on how synonyms may be used.
- **TypeOperators**: Allow the name of a type constructor, type class, or type variable to be an infix operator.
- **RecordWildCards**: Enable syntax for implicitly binding local names corresponding to the field names of a record. A wildcard binds all unmentioned names, unlike NamedFieldPuns.
- **RecordPuns**: Deprecated, use NamedFieldPuns instead.
- **DisambiguateRecordFields**: Allow a record field name to be disambiguated by the type of the record it's in.
- **TraditionalRecordSyntax**: Enable traditional record syntax (as supported by Haskell 98).
- **OverloadedStrings**: Enable overloading of string literals using a type class, much like integer literals.
- **GADTs**: Enable generalized algebraic data types, in which type variables may be instantiated on a per-constructor basis. Implies GADTSyntax.
- **GADTSyntax**: Enable GADT syntax for declaring ordinary algebraic datatypes.
- **MonoPatBinds** *(deprecated)*: Has no effect. Old description: Make pattern bindings monomorphic.
- **RelaxedPolyRec**: Relax the requirements on mutually-recursive polymorphic functions.
- **ExtendedDefaultRules**: Allow default instantiation of polymorphic types in more situations.
- **UnboxedTuples**: Enable unboxed tuples.
- **DeriveDataTypeable**: Enable deriving for classes Typeable and Data.
- **DeriveGeneric**: Enable deriving for Generic and Generic1.
- **DefaultSignatures**: Enable support for default signatures.
- **InstanceSigs**: Allow type signatures to be specified in instance declarations.
- **ConstrainedClassMethods**: Allow a class method's type to place additional constraints on a class type variable.
- **PackageImports**: Allow imports to be qualified by the package name the module is intended to be imported from, e.g. `import "network" Network.Socket`.
- **ImpredicativeTypes** *(deprecated)*: Allow a type variable to be instantiated at a polymorphic type.
- **NewQualifiedOperators** *(deprecated)*: Change the syntax for qualified infix operators.
- **PostfixOperators**: Relax the interpretation of left operator sections to allow unary postfix operators.
- **QuasiQuotes**: Enable quasi-quotation, a mechanism for defining new concrete syntax for expressions and patterns.
- **TransformListComp**: Enable generalized list comprehensions, supporting operations such as sorting and grouping.
- **MonadComprehensions**: Enable monad comprehensions, which generalise the list comprehension syntax to work for any monad.
- **ViewPatterns**: Enable view patterns, which match a value by applying a function and matching on the result.
- **XmlSyntax**: Allow concrete XML syntax to be used in expressions and patterns, as per the Haskell Server Pages extension language: http://www.haskell.org/haskellwiki/HSP. The ideas behind it are discussed in the paper "Haskell Server Pages through Dynamic Loading" by Niklas Broberg, from Haskell Workshop '05.
- **RegularPatterns**: Allow regular pattern matching over lists, as discussed in the paper "Regular Expression Patterns" by Niklas Broberg, Andreas Farre and Josef Svenningsson, from ICFP '04.
- **TupleSections**: Enable the use of tuple sections, e.g. `(, True)` desugars into `\x -> (x, True)`.
- **GHCForeignImportPrim**: Allow GHC primops, written in C--, to be imported into a Haskell file.
- **NPlusKPatterns**: Support for patterns of the form n + k, where k is an integer literal.
- **DoAndIfThenElse**: Improve the layout rule when if expressions are used in a do block.
- **MultiWayIf**: Enable support for multi-way if-expressions.
- **LambdaCase**: Enable support for lambda-case expressions.
- **RebindableSyntax**: Makes much of the Haskell sugar be desugared into calls to the function with a particular name that is in scope.
- **ExplicitForAll**: Make forall a keyword in types, which can be used to give the generalisation explicitly.
- **DatatypeContexts**: Allow contexts to be put on datatypes, e.g. the `Eq a` in `data Eq a => Set a = NilSet | ConsSet a (Set a)`.
- **MonoLocalBinds**: Local (let and where) bindings are monomorphic.
- **DeriveFunctor**: Enable deriving for the Functor class.
- **DeriveTraversable**: Enable deriving for the Traversable class.
- **DeriveFoldable**: Enable deriving for the Foldable class.
- **NondecreasingIndentation**: Enable non-decreasing indentation for do blocks.
- **SafeImports**: Allow imports to be qualified with a safe keyword that requires the imported module be trusted as according to the Safe Haskell definition of trust, e.g. `import safe Network.Socket`.
- **Safe**: Compile a module in the Safe, Safe Haskell mode -- a restricted form of the Haskell language to ensure type safety.
- **Trustworthy**: Compile a module in the Trustworthy, Safe Haskell mode -- no restrictions apply but the module is marked as trusted as long as the package the module resides in is trusted.
- **Unsafe**: Compile a module in the Unsafe, Safe Haskell mode so that modules compiled using Safe, Safe Haskell mode can't import it.
- **ConstraintKinds**: Allow type class, implicit parameter, and equality constraints to be used as types with the special kind Constraint. Also generalise the `(ctxt => ty)` syntax so that any type of kind Constraint can occur before the arrow.
- **PolyKinds**: Enable kind polymorphism.
- **DataKinds**: Enable datatype promotion.
- **TypeData**: Enable type data declarations, defining constructors at the type level.
- **ParallelArrays**: Enable parallel arrays syntax (`[:`, `:]`) for Data Parallel Haskell.
- **RoleAnnotations**: Enable explicit role annotations, like in `type role Foo representational representational`.
- **OverloadedLists**: Enable overloading of list literals, arithmetic sequences and list patterns using the IsList type class.
- **EmptyCase**: Enable case expressions that have no alternatives. Also applies to lambda-case expressions if they are enabled.
- **AutoDeriveTypeable** *(deprecated)*: Deprecated in favour of DeriveDataTypeable. Old description: Triggers the generation of derived Typeable instances for every datatype and type class declaration.
- **NegativeLiterals**: Desugars negative literals directly (without using negate).
- **BinaryLiterals**: Allow the use of binary integer literal syntax (e.g. 0b11001001 to denote 201).
- **NumDecimals**: Allow the use of floating literal syntax for all instances of Num, including Int and Integer.
- **NullaryTypeClasses**: Enable support for type classes with no type parameter.
- **ExplicitNamespaces**: Enable explicit namespaces in module import/export lists.
- **AllowAmbiguousTypes**: Allow the user to write ambiguous types, and the type inference engine to infer them.
- **JavaScriptFFI**: Enable `foreign import javascript`.
- **PatternSynonyms**: Allow giving names to and abstracting over patterns.
- **PartialTypeSignatures**: Allow anonymous placeholders (underscore) inside type signatures. The type inference engine will generate a message describing the type inferred at the hole's location.
- **NamedWildCards**: Allow named placeholders written with a leading underscore inside type signatures. Wildcards with the same name unify to the same type.
- **DeriveAnyClass**: Enable deriving for any class.
- **DeriveLift**: Enable deriving for the Lift class.
- **StaticPointers**: Enable support for 'static pointers' (and the static keyword) to refer to globally stable names, even across different programs.
- **StrictData**: Switches data type declarations to be strict by default (as if they had a bang using BangPatterns), and allow opt-in field laziness using ~.
- **Strict**: Switches all pattern bindings to be strict by default (as if they had a bang using BangPatterns); ordinary patterns are recovered using ~. Implies StrictData.
- **ApplicativeDo**: Allows do-notation for types that are Applicative as well as Monad. When enabled, desugaring do notation tries to use (<*>) and fmap and join as far as possible.
- **DuplicateRecordFields**: Allow records to use duplicated field labels for accessors.
- **TypeApplications**: Enable explicit type applications with the syntax `id @Int`.
- **TypeInType**: Dissolve the distinction between types and kinds, allowing the compiler to reason about kind equality and therefore enabling GADTs to be promoted to the type-level.
- **UndecidableSuperClasses**: Allow recursive (and therefore undecidable) super-class relationships.
- **MonadFailDesugaring**: A temporary extension to help library authors check if their code will compile with the new planned desugaring of fail.
- **TemplateHaskellQuotes**: A subset of TemplateHaskell including only quoting.
- **OverloadedLabels**: Allows use of the #label syntax.
- **TypeFamilyDependencies**: Allow functional dependency annotations on type families to declare them as injective.
- **DerivingStrategies**: Allow multiple deriving clauses, each optionally qualified with a strategy.
- **DerivingVia**: Enable deriving instances via types of the same runtime representation. Implies DerivingStrategies.
- **UnboxedSums**: Enable the use of unboxed sum syntax.
- **HexFloatLiterals**: Allow use of hexadecimal literal notation for floating-point values.
- **BlockArguments**: Allow do blocks etc. in argument position.
- **NumericUnderscores**: Allow use of underscores in numeric literals.
- **QuantifiedConstraints**: Allow forall in constraints.
- **StarIsType**: Have * refer to Type.
- **EmptyDataDeriving**: Liberalises deriving to provide instances for empty data types.
- **CUSKs**: Enable detection of complete user-supplied kind signatures.
- **ImportQualifiedPost**: Allows the syntax `import M qualified`.
- **StandaloneKindSignatures**: Allow the use of standalone kind signatures.
- **UnliftedNewtypes**: Enable unlifted newtypes.
- **LexicalNegation**: Use whitespace to determine whether the minus sign stands for negation or subtraction.
- **QualifiedDo**: Enable qualified do-notation desugaring.
- **LinearTypes**: Enable linear types.
- **RequiredTypeArguments**: Allow the use of visible forall in types of terms.
- **FieldSelectors**: Enable the generation of selector functions corresponding to record fields.
- **OverloadedRecordDot**: Enable the use of record dot-accessor and updater syntax.
- **OverloadedRecordUpdate**: Provides record `.` syntax in record updates, e.g. `x {foo.bar = 1}`.
- **UnliftedDatatypes**: Enable data types for which an unlifted or levity-polymorphic result kind is inferred.
- **AlternativeLayoutRule**: Undocumented parsing-related extension introduced in GHC 7.0.
- **AlternativeLayoutRuleTransitional**: Undocumented parsing-related extension introduced in GHC 7.0.
- **RelaxedLayout**: Undocumented parsing-related extension introduced in GHC 7.2.
http://ieeexplore.ieee.org/xpls/icp.jsp?reload=true&arnumber=6234403
SECTION I

## INTRODUCTION

Flash memory has gained a ubiquitous place in the computing landscape today. Virtually all mobile devices such as smartphones and tablets rely on Flash memory as their non-volatile storage. Flash memory is also moving into laptop and desktop computers, intending to replace the mechanical hard drive. Floating-gate non-volatile memory is even more broadly used in electronic applications with a small amount of non-volatile memory. For example, even 8-bit or 16-bit microcontrollers for embedded systems commonly have on-chip EEPROMs to store instructions and data. Many people also carry Flash memory as a standalone storage medium, as in USB memory sticks and SD cards.

In this paper, we propose to utilize analog behaviors of off-the-shelf Flash memory to enable hardware-based security functions in a wide range of electronic devices without requiring custom hardware. More specifically, we show that a standard Flash memory interface can be used to generate true random numbers from quantum and thermal noises and to produce device fingerprints based on manufacturing variations. The techniques can be applied to any floating-gate non-volatile memory in general, and do not require any hardware modifications to today's Flash memory chips, allowing them to be widely deployed.

Both hardware random number generators (RNGs) and device fingerprints provide important foundations in building secure systems. For example, true randomness is a critical ingredient in many cryptographic primitives and security protocols; random numbers are often required to generate secret keys or prevent replays in communications. While pseudo-random number generators are often used in today's systems, they cannot provide true randomness if a seed is reused or predictable. As an example, a recent study showed that reuse of virtual machine (VM) snapshots can break the Transport Layer Security (TLS) protocol due to predictable random numbers [1].
Given the importance of a good source of randomness, high-security systems typically rely on hardware RNGs. Instead of requiring custom hardware modules for RNGs, we found that analog noise in Flash memory bits can be used to reliably generate true random numbers. An interesting finding is that the standard Flash chip interface can be used to put a memory bit in a partially programmed state so that the internal noise can be observed through the digital interface.

There exist two sources of true randomness in Flash bits, Random Telegraph Noise (RTN) and thermal noise. While both sources can be leveraged for RNGs, our scheme focuses on RTN, which is quantum noise. Unlike thermal noise, which can be reduced significantly at extremely low temperatures, RTN behavior continues at all temperature ranges. Moreover, the quantum uncertainty nature of RTN provides a better entropy source than system-level noises, which rely on the difficulty of modeling complex yet deterministic systems. Our algorithm automatically selects bits with RTN behavior and converts RTN into random binary bits.

Experimental results demonstrate that the RTN behavior exists in Flash memory and can be converted into random numbers through the standard Flash interface. The Flash-based RNG is tested using the NIST test suite [2] and is shown to pass all tests successfully. Moreover, we found that the RNG works even at a very low temperature (-80°C). In fact, the RTN behavior is more visible at low temperatures. On our test platform, the Flash RNG generates about 1K to 10K bits per second. Overall, the experiments show that true random numbers can be generated reliably from off-the-shelf Flash memory chips without requiring custom circuits.

In addition to generating true random numbers, we also found that the standard Flash interface can be used to extract fingerprints (or signatures) that are unique for each Flash chip.
For this purpose, our technique exploits inherent random variations during Flash manufacturing processes. More specifically, we show that the distributions of transistor threshold voltages can be measured through the standard Flash interface using incremental partial programming. Experimental results show that these threshold voltage distributions can be used as fingerprints, as they are significantly different from chip to chip, or even from location to location within a chip. The distributions also stay relatively stable across temperature ranges and over time. Thanks to the large number of bits (often several gigabits) in modern Flash chips, this technique can generate a large number of independent fingerprints from each chip.

The Flash fingerprints provide an attractive way to identify and/or authenticate hardware devices and generate device-specific keys, especially when no cryptographic module is available or a large number of independent keys are desired. For example, at a hardware component level, the fingerprints can be used to distinguish genuine parts from counterfeit components without requiring cryptography to be added to each component. The fingerprinting technique can also be used for other authentication applications such as turning a Flash device into a two-factor authentication token, or identifying individual nodes in sensor networks.

While the notion of exploiting manufacturing process variations to generate silicon device fingerprints and secret keys is not new and has been extensively studied under the name of Physical Unclonable Functions (PUFs) [3], the Flash-based technique in this paper represents a unique contribution in terms of its practical applicability. Similar to true RNGs, most PUF designs require custom circuits to convert unique analog characteristics into digital bits. On the other hand, our technique can be applied to off-the-shelf Flash without hardware changes.
Researchers have recently proposed techniques to exploit existing bi-stable storage elements such as SRAMs [4] or Flash cells [5] to generate device fingerprints. Unfortunately, obtaining fingerprints from bi-stable elements requires a power cycle (power off and power on) of a device for every fingerprint generation. The previous approach to fingerprinting Flash only works for certain types of Flash chips and takes a long time (100 seconds for one fingerprint) because it relies on rare errors called program disturbs. As an example, we did not see any program disturbs in the SLC Flash chips that we used in experiments. To the best of our knowledge, the proposed device fingerprinting technique is the first that is fast (less than 1 second for a 1024-bit fingerprint) and widely applicable without interfering with normal operation or requiring custom hardware.

The following list summarizes the main strengths of the proposed security functions based on Flash memory over existing approaches for hardware random number generators and fingerprints.

• Widely applicable: Flash memory already exists in many electronic devices. The proposed techniques can often be implemented as system software or firmware updates without hardware changes.

• Non-intrusive: the techniques do not require a reboot and only have minimal interference with normal memory operations. Only a small portion of Flash needs to be used for security functions during security operations. There is minimal wear-out.

• High security: the Flash random number generator is based on quantum noise, which exists even at extremely low temperatures. Thanks to the high capacity of today's Flash memory, a very large number of independent signatures can be generated from Flash.

The rest of the paper is organized as follows. Section II provides the basic background on Flash memory.
Based on this understanding, Section III and Section IV explain the new techniques to generate random numbers and device fingerprints through standard Flash interfaces. Then, Section V studies the effectiveness and the security of the proposed methods through experimental results on real Flash chips. Section VI briefly discusses a few examples of application scenarios. Finally, Section VII discusses related work and Section VIII concludes the paper.

SECTION II

## FLASH MEMORY BASICS

This section provides background material on Flash memory and its operating principles in order to aid understanding of our Flash-based security schemes.

### A. Floating Gate Transistors

Flash memory is composed of arrays of floating-gate transistors. A floating-gate transistor is a transistor with two gates, stacked on top of each other. One gate is electrically insulated (floating). Figure 1 shows an example of a floating-gate device. The control gate is on top. An insulated conductor, surrounded by oxide, sits between the control gate and the channel. This conductor is the floating gate. Information is stored as the presence or absence of trapped charge on the floating gate. The trapped negative charge reduces the current flowing through the channel when the N-type MOS transistor is on. This current difference is sensed and translated into the appropriate binary value.

Figure 1. Flash memory cell based on a floating gate transistor.

Flash cells without charge on their floating gate allow full current flow in the channel and hence are read as a binary "1". The presence of charge on the floating gate will discourage the presence of current in the channel, making the cell store a "0". Effectively, the charge on the floating gate increases the threshold voltage $V_{th}$ of a transistor. Single-level cells store one bit of information; multi-level cells can store more than one bit by reading and injecting charge to adjust the current flow of the transistor.
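As a rough illustration of the sensing just described, the following sketch models a cell read as a comparison between the charge-dependent threshold voltage and a fixed read reference. This is a toy model, not the paper's method; all names and voltage values are illustrative assumptions, not datasheet numbers.

```python
# Toy model of a floating-gate cell read: stored charge raises the cell's
# threshold voltage (Vth), and the sense circuitry effectively compares the
# cell against a fixed read reference to decide between 1 and 0.
# All voltage values below are illustrative assumptions.

ERASED_VTH = 1.0        # native Vth with no charge on the floating gate (V)
PROGRAMMED_SHIFT = 3.0  # Vth shift added by a complete program operation (V)
READ_REF = 2.5          # reference level used when sensing the cell (V)

def read_cell(charge_fraction):
    """Read a cell programmed to charge_fraction
    (0.0 = fully erased, 1.0 = fully programmed)."""
    vth = ERASED_VTH + charge_fraction * PROGRAMMED_SHIFT
    # Below the reference the transistor conducts strongly -> binary 1;
    # above it the channel current is suppressed -> binary 0.
    return 1 if vth < READ_REF else 0

print(read_cell(0.0))   # erased cell reads 1
print(read_cell(1.0))   # fully programmed cell reads 0
# A partially programmed cell lands near the reference, where a small
# disturbance of Vth can flip the outcome between consecutive reads:
print(read_cell(0.49), read_cell(0.51))
```

The last line is the situation the rest of the paper exploits: a cell whose threshold sits near the read reference gives an unreliable, noise-sensitive result.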
Note that the threshold voltage without any charge on the floating gate is different for each transistor due to variations in manufacturing processes. As a result, the amount of charge that needs to be stored on the floating gate for a cell to reliably represent a "0" state varies from cell to cell. If the threshold voltage is not shifted sufficiently, a cell can be in an unreliable (partially programmed) state that can be interpreted as either 1 or 0. In this paper, we exploit the threshold voltage variations and the partially programmed state to extract fingerprints and random numbers.

### B. Flash Organization and Operation

At a high level, Flash memory provides three major operations: read, erase, and program (write). In order to read a bit in a Flash cell, the corresponding transistor is turned on and the amount of current is detected. A write to a Flash cell involves two steps. First, an erase operation pushes charge off the floating gate by applying a large negative voltage on the control gate. Then, a program (write) operation stores charge on the floating gate by selectively applying a large positive voltage if the bit needs to be zero.

An important concept in Flash memory operation is that of pages and blocks. Pages are the smallest unit in which data is read or written, and are usually 2KB to 8KB. Blocks are the smallest unit for an erase operation and are made up of several pages, usually 32-128 pages. Note that Flash does not provide bit-level program or erase. To read an address from a Flash chip, the page containing the address is read. To update a value, the block that includes the address must first be erased; then the corresponding page is written with the update and the other pages in the block are restored.

SECTION III

## RANDOM NUMBER GENERATION

### A. Random Telegraph Noise (RTN)

The proposed RNG uses a device effect called Random Telegraph Noise (RTN) as the source of randomness.
In general, RTN refers to the alternating capture and emission of carriers at a defect site (trap) of a very small electronic device, which generates discrete variation in the channel current [6]. The capture and emission times are random and exponentially distributed. RTN behavior can be distinguished from other noise using the power spectral density (PSD), which is flat at low frequencies and $1/f^{2}$ at high frequencies. In Flash memory, the defects that cause RTN are located in the tunnel-oxide near the substrate. The RTN amplitude is inversely proportional to the gate area and nearly temperature independent. As Flash memory cells shrink, RTN effects become relatively stronger and their impact on the threshold distribution of Flash memory cells, especially for multi-level cells, can be significant. Because RTN can be a major factor in Flash memory reliability, there have been a large number of recent studies on RTN in Flash memory from a reliability perspective [7] [8] [9].

While RTN is a challenge to overcome from the perspective of Flash memory operations, it can be an ideal source of randomness. RTN is caused by the capture and emission of an electron at a single trap, and is a physical phenomenon with random quantum properties. Quantum noise can be seen as the "gold standard" for random number generation because the output of quantum events cannot be predicted. As Flash memory cells scale to smaller technology nodes, the RTN effect will become stronger. Moreover, RTN behavior will still exist with increasing process variation and at extremely low temperatures.

### B. Noise Extraction from Digital Interface

As digital devices, Flash memory is designed to tolerate analog noise; noise should not affect normal memory operations. In order to observe the noise for random number generation, a Flash cell needs to be in an unreliable state between well-defined erase and program states.
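To make the two-state RTN behavior concrete, here is a toy simulation (not the paper's code) of continuous reads of a bit dominated by RTN: the dwell time in each state is drawn from an exponential distribution, and a 29-point moving average of the trace exposes the clustering that distinguishes RTN from white thermal noise. The dwell-time means and the 0.7/0.3 thresholds are illustrative assumptions.

```python
import random

def simulate_rtn_reads(n_reads, mean_up, mean_down, seed=0):
    """Simulate n_reads consecutive reads of a partially programmed bit
    dominated by RTN: the trap alternates between an emitted state (bit
    reads 1) and a captured state (bit reads 0), with exponentially
    distributed dwell times measured in read periods."""
    rng = random.Random(seed)
    trace, state = [], 1
    while len(trace) < n_reads:
        mean = mean_up if state else mean_down
        dwell = max(1, round(rng.expovariate(1.0 / mean)))  # dwell time in reads
        trace.extend([state] * dwell)
        state ^= 1  # the trap captures/emits an electron: the read value flips
    return trace[:n_reads]

def moving_average(trace, window=29):
    """Moving average over a sliding window (29 points, as in Figure 3(b))."""
    return [sum(trace[i:i + window]) / window for i in range(len(trace) - window + 1)]

trace = simulate_rtn_reads(1000, mean_up=20, mean_down=15)
avg = moving_average(trace)
# RTN shows up as clusters of 1s and 0s: the moving average swings between
# values near 1 (erased-looking) and near 0 (programmed-looking).
print(max(avg) > 0.7, min(avg) < 0.3)
```

Thermal noise, by contrast, would flip the bit independently on every read, so the moving average would stay pinned near its mean instead of swinging between the two extremes.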
Interestingly, we found that Flash cells can be put into the in-between state using the standard digital interface. At a high level, the approach first erases a page, issues a program command, and then issues a reset command after an appropriate time period to abort the program. This procedure leaves a page partially programmed so that noise can affect digital outputs. We found that the outcome of continuously reading a partially programmed bit oscillates between 1 and 0 due to noise.

For Flash memory in practice, experiments show that two types of noise coexist: thermal noise and RTN. Thermal noise is white noise that exists in nearly all electronic devices. RTN can be observed only if a surface trap exists, the RTN amplitude is larger than that of thermal noise, and the sampling frequency (speed of continuous reads) is high enough. If any of these three conditions is not satisfied, only thermal noise will be observed, as in Figure 2. In the case of thermal noise, a bit oscillates between the two states quickly, and the power spectral density (PSD) indicates white noise.

Figure 2. Thermal noise in Flash memory (time domain).

In the case that the RTN amplitude is comparable to thermal noise, a combination of RTN and thermal noise is observed, as shown in Figure 3. This is reflected by the density change of 1s in the continuous reading. A moving average on the time domain helps to visualize the density change. The PSD of the result shows a $1/f^{2}$ spectrum at low frequencies and becomes flat at high frequencies.

Figure 3. RTN with thermal noise in Flash memory. (a) Time domain. (b) Moving average of 29 points on the time domain.

In some cases, the RTN amplitude is very high and dominates thermal noise. As a result, only RTN behaviors are visible through digital interfaces for these bits. As shown in Figure 4, continuous reads show clear clusters of 1s and 0s in the time domain.
The power spectral density (PSD) of these bit sequences shows a clear RTN pattern of $1/f^{2}$.

Figure 4. RTN in Flash memory (time domain).

For a bit with nearly pure RTN behavior, we further validated that the error pattern corresponds to RTN by plotting the distributions of up and down periods. As shown in Figure 5, both up time and down time nicely fit an exponential distribution, as expected. Overall, our experiments show that both RTN and thermal noise exist in Flash memory and can be observed through a digital interface. While both noise types can be used for random number generation, we focus on RTN, which is more robust to temperature changes.

Figure 5. (a) Distribution of time in the programmed state. (b) Distribution of time in the erased state.

### C. Random Number Generation Algorithms

In Flash memory devices, RTN manifests as random switching between the erased state (consecutive 1s) and the programmed state (consecutive 0s). At a high level, our Flash random number generator (RNG) identifies bits with RTN behavior, either pure RTN or RTN combined with thermal noise, and uses a sequence of the time in the erased state (called up-time) and the time in the programmed state (called down-time) from those bits. In order to produce random binary outputs, the RNG converts the up-time and down-time sequence into a binary number sequence, and applies the von Neumann extractor for de-biasing. We found that thermal noise itself is random and does not need to be filtered out.

### Algorithm I. Overall Flash RNG algorithm

Algorithm I shows the overall RNG algorithm. To generate random numbers from RTN, the first step is to identify bits with RTN or both RTN and thermal noise. To do this, one block in Flash memory is erased and then multiple incomplete programs with a duration of T are applied. After each partial program, a part of the page is continuously read N times and the outcome is recorded for each bit.
In our experiments, we chose to read the first 80 bits (10 bytes) in a page 1,000 times. For each bit that has not been selected yet, the algorithm checks if RTN exists using CheckRTN() and marks the bit location if there is RTN. As an optimization, the algorithm also records the number of partial programs when a bit is selected. The algorithm repeats the process until all bits are checked for RTN. The second step is to partially program all of the selected bits to an appropriate level so that they will show RTN behavior. Finally, the algorithm reads the selected bits M times, records a sequence of up-time and down-time for each bit, and converts the raw data to a binary sequence.

### Algorithm II. Determine whether there is RTN in a bit

The function CheckRTN() in Algorithm II determines whether there is RTN in a bit based on a trace from N reads. The algorithm first filters out bits that almost always (more than 98%) produce one result, either 1 or 0. For the bits with enough noise, the algorithm uses the power spectral density (PSD) to distinguish RTN from thermal noise; the PSD for RTN has a form of $1/f^{2}$ at high frequencies. To check this condition, the algorithm computes the PSD, and converts it to a log scale on both the x and y axes. If the result has a slope less than T_slope (we use -1.5; the ideal value is -2) for all frequencies higher than T_freq (we use 200Hz), the algorithm categorizes the bit as RTN only. If the PSD has a slope less than T_slope for any interval larger than Invl (we use 0.2) at a high frequency, the bit is categorized as a combination of RTN and thermal noise.

### Algorithm III. Program selected bits to proper levels where RTN could be observed

The function ProgramSelectBits() in Algorithm III programs selected bits to a proper level where RTN can be observed. Essentially, the algorithm aims to take each bit to the point near where it was identified to have RTN.
The number of partial programs that were required to reach this point before was recorded in NumProgram[Bit]. For each selected bit, the algorithm first performs partial programs with a duration of T based on the number recorded earlier (NumProgram[Bit] - K). Then, the algorithm performs up to L more partial program operations until the bit shows RTN behavior. The RTN behavior is checked by reading the bit N times and seeing if the maximum of the moving averages is greater than a threshold (TMax = 0.7) and the minimum is less than another threshold (TMin = 0.3).

### Algorithm IV. Convert the raw data to a binary random sequence

Finally, the function ConvertToBinary() converts the raw data to a binary random sequence. For bits with both RTN and thermal noise, the up-time and down-time tend to be short, so only the LSBs of these numbers are used. Essentially, for every up-time and down-time, the algorithm produces 1 if the time is odd and 0 otherwise. Effectively, this is an even-odd scheme. For bits with perfect RTN behavior, up-time and down-time tend to be longer and we use more LSBs from the recorded up/down-time. In this case, we first produce a bit based on the LSB, then the second LSB, the third LSB, and so on until all extracted bits become 0. Finally, for both methods, we apply the von Neumann de-biasing method. The method takes two bits at a time, throws away both bits if they are identical, and takes the first bit if they differ. This process is described in Algorithm IV.

The stability of the bits in the partially programmed state is also important. We define the stability as how long a bit stays in the partially programmed state where RTN behavior can be observed. This is determined by the retention time of the Flash memory chip and the amplitude of the RTN compared to the designed noise margin.
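The conversion step of Algorithm IV can be sketched in a few lines: the single-LSB even-odd scheme followed by von Neumann de-biasing (the multi-LSB variant for pure-RTN bits is omitted here, and the dwell-time values are made up for illustration).

```python
def times_to_bits(durations):
    """Single-LSB even-odd scheme: each recorded up-time or down-time
    contributes 1 if it is odd and 0 if it is even."""
    return [d & 1 for d in durations]

def von_neumann(bits):
    """Von Neumann de-biasing: take bits two at a time, throw away
    identical pairs, and keep the first bit of each unequal pair."""
    return [a for a, b in zip(bits[0::2], bits[1::2]) if a != b]

# A made-up sequence of alternating up/down dwell times (in read periods).
durations = [7, 4, 12, 9, 3, 3, 8, 5]
raw = times_to_bits(durations)  # [1, 0, 0, 1, 1, 1, 0, 1]
print(von_neumann(raw))         # pairs (1,0) (0,1) (1,1) (0,1) -> [1, 0, 0]
```

The von Neumann step removes any constant bias in the raw bits at the cost of discarding at least half of them, which is why the output rate is lower than the raw sampling rate.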
Assume the amplitude of the RTN is Ar, the noise margin of the Flash memory is An, and the Flash retention time is 10 years; then the stable time for random number generation after partial programming will be roughly ${\tt Ts}=({\tt Ar}/{\tt An})\times 10$ years. This means that after time Ts, a bit needs to be reset and reprogrammed. In our experiments, the bit that is shown in Figure 5 was still showing ideal RTN behavior even after 12 hours. SECTION IV ## DEVICE FINGERPRINTS This section describes techniques to generate unique fingerprints from Flash memory devices. ### A. Sources of Uniqueness Flash memory is subject to random process variation like any other semiconductor device. Because Flash is fabricated for maximum density, small variations can be significant. Process variation can cause each bit of a Flash memory to differ from its neighbors. While variation may affect many aspects of Flash cells, our fingerprinting technique exploits threshold voltage variations. Variations in doping, floating-gate oxide thickness, and control-gate coupling ratio can cause the threshold voltage of each transistor to vary. Because of this threshold voltage variation, different Flash cells need different times to be programmed. ### B. Extracting Fingerprints In this paper, we introduce a fingerprinting scheme based on partial programming. We repeatedly partially program a page on a Flash chip. After each partial program, some bits will have been programmed enough to flip their states from 1 to 0. For each bit in the page, we record the order in which the bit flipped. Pseudo-code is provided in Algorithm V. In our experiments, T is chosen to be 29.3us. A short partial program time provides better resolution to distinguish different bits, at the cost of increased fingerprinting time. We do not require all bits to be programmed, in order to account for the possibility of faulty bits. ### C.
Comparing Fingerprints The fingerprints extracted from the same page on the same chip over time are noisy but highly correlated. To compare fingerprints extracted from the same page/chip and different pages/chips, we use the Pearson correlation coefficient [5], which is defined as $$P(X,Y)={E[(X-\mu_{X})(Y-\mu_{Y})]\over \sigma_{X}\sigma_{Y}}$$ where X is the vector of program orders extracted from one experiment and Y is another vector of program orders extracted from another experiment. $\mu_{X}$ and $\sigma_{X}$ are the mean and standard deviation of the X vector. $\mu_{Y}$ and $\sigma_{Y}$ are the mean and standard deviation of the Y vector. In this way, the vector of program orders is treated as a vector of realizations of a random variable. For vectors extracted from the same page, ${\rm Y}={\rm aX}+{\rm b}+{\rm noise}$ where a and b are constants and the noise is small. So X and Y are highly correlated and the correlation coefficient should be close to 1. For vectors extracted from different pages, X and Y should be nearly independent of each other, so the correlation coefficient should be close to zero. From another perspective, if both ${\rm X}[{\rm i}]$ and ${\rm Y}[{\rm i}]$ are smaller or larger than their means, $(X[i]-\mu_{X})(Y[i]-\mu_{Y})$ is a positive number; otherwise, it is negative. If X and Y are independent, positive and negative terms are equally likely, so the correlation coefficient approaches 0. The scatter plots of X and Y from the same page/chip and from different chips are shown in Figure 6. The figure clearly demonstrates a high correlation between fingerprints from the same chip over time and a low correlation between fingerprints from different chips. Figure 6. Scatter plot for fingerprints extracted on (a) the same page and (b) different chips. Therefore, this correlation metric can be used to compare fingerprints to determine whether they are from the same page/chip or from different pages/chips.
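As an illustration, the extraction loop described by Algorithm V and the comparison above can be sketched as follows. This is a sketch, not the authors' code: `partial_program` and `read_page` are hypothetical stand-ins for the device interface, and the pure-Python Pearson computation simply mirrors the formula above.

```python
import statistics

def extract_fingerprint(partial_program, read_page, num_bits, max_pulses=2000):
    """Record, for each bit, the partial-program pulse count at which it
    first flipped from 1 (erased) to 0 (programmed)."""
    order = [None] * num_bits
    for pulse in range(1, max_pulses + 1):
        partial_program()                      # one short program pulse on the page
        page = read_page()                     # num_bits values, 1 = still erased
        for i, bit in enumerate(page):
            if bit == 0 and order[i] is None:
                order[i] = pulse               # first pulse at which bit i flipped
        if all(o is not None for o in order):
            break
    # Faulty bits that never program keep a sentinel order past the last pulse.
    return [o if o is not None else max_pulses + 1 for o in order]

def pearson(x, y):
    """Pearson correlation coefficient between two program-order vectors."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)
```

Against a toy model in which three bits need 3, 1, and 2 pulses to program, `extract_fingerprint` returns `[3, 1, 2]`, and `pearson` applied to two order vectors related by $Y=aX+b$ is exactly 1, matching the discussion above.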
### D. Fingerprints in Binary Numbers The above fingerprints are in the form of the order in which each bit was programmed. If an application requires a binary number, such as in generating cryptographic keys, we need to convert the recorded ordering into a binary number. There are a couple of ways to generate unique and unpredictable binary numbers from the Flash fingerprints. First, we can use a threshold to convert a fingerprint based on the programming order into a binary number, as shown in Algorithm VI. In the algorithm, we produce 1 if the program order is high, or 0 otherwise. This approach produces one fingerprint bit for each Flash bit. Alternatively, we can obtain a similar binary fingerprint directly from Flash memory by partially programming (or erasing) a page and reading bits (1/0) from the Flash. SECTION V ## EXPERIMENTAL RESULTS This section presents evaluation results for the random number generation and fingerprinting techniques for Flash memory devices. ### A. Testbed Device Our experiments use a custom Flash test board as shown in Figure 7. The board is made entirely with commercial off-the-shelf (COTS) components on a custom PCB. There is a socket to hold a Flash chip under test, an ARM microcontroller to issue commands and receive data from the Flash chip, and a Maxim MAX-3233 chip to provide a serial (RS-232) interface. USB support is integrated into the ARM microcontroller. We also wrote the code to test the device. The setup represents typical small embedded platforms such as USB flash drives, sensor nodes, etc. This device shows that the techniques can be applied to commercial off-the-shelf devices with no custom integrated circuits (ICs). Figure 7. Flash test board. The experiments in this paper were performed with four types of Flash memory chips from Numonyx, Micron and Hynix, as shown in TABLE I. ### B. Random Number Generation The two main metrics for random number generation are randomness and throughput.
For security, the RNG must be able to reliably generate true random numbers across a range of environmental conditions over time. For performance, higher throughput is desirable. #### 1) Randomness Historically, there have been three main randomness test suites. The first is from Donald Knuth's book “The Art of Computer Programming (1st edition, 1969)” [10], which is the most quoted reference on statistical testing for RNGs in the literature. Although it was a standard for many decades, it appears outdated today, and it allows many “bad” generators to pass the tests. The second is the “diehard” test suite from Florida State University. The suite is stringent in the sense that its tests are difficult to pass. However, it has not been maintained in recent years and was therefore not selected for this study. The third was developed by the National Institute of Standards and Technology (NIST), a measurement standards laboratory and a non-regulatory agency of the United States Department of Commerce. The NIST Statistical Test Suite is a package consisting of 15 tests developed to test the randomness of arbitrarily long binary sequences produced by either hardware or software. The test suite makes use of both existing algorithms from past literature and newly developed tests. The most recent version, sts-2.1.1, released on August 11, 2010, is used in our randomness tests. TABLE II summarizes the 15 NIST tests [2]. TABLE I. TESTED FLASH CHIPS TABLE II. SUMMARY OF THE NIST TEST SUITE Figure 8 shows one test result for the even-odd scheme, which only uses the LSB of the up-time and down-time, when bits with both RTN and thermal noise are used. 10 sequences generated from multiple bits are tested, and each sequence consists of 600,000 bits. Note that some of the results are not shown here due to space constraints. NonOverlappingTemplate, RandomExcursions and RandomExcursionsVariant include many sub-tests.
In the result above, the proportion in the second column shows the proportion of the sequences that passed each test. If the proportion is greater than or equal to the threshold value specified at the bottom of the figure (8 out of 10 or 4 out of 5), then the data is considered random. The P-value in the first column indicates the uniformity of the P-values calculated in each test. If the P-value is greater than or equal to 0.0001, the sequences can be considered to be uniformly distributed [2]. The result indicates that the proposed RNG passes all the NIST tests. Figure 8. NIST test suite results for bits with RTN and thermal noise. We also tested random numbers from one bit with only RTN behavior, using multiple LSBs from the up-time and down-time. In this case, we generated ten 200,000-bit sequences from one bit. The data passed all NIST tests with results similar to the above case. For the Universal test, which requires a sequence longer than 387,840 bits, we used five 500,000-bit sequences. #### 2) Performance The throughput of the proposed RNG varies significantly depending on the switching rate of individual bits, the sampling speed, and environmental conditions. Typically, only a small fraction of bits show pure RTN behavior with minimal thermal noise. TABLE III shows the performance of Flash chips from four manufacturers. The average throughput ranges from 848 bits/second to 3.37 Kbits/second. Note that the fastest switching trap that can be identified is limited by the reading speed in our experiments. TABLE III. PERFORMANCE OF BITS WITH PURE RTN BEHAVIOR. If bits with both RTN and thermal noise are also used, the percentage of bits which can be used for RNG can be much higher. The performance of these bits from the same Flash chips as in the pure RTN case is shown in TABLE IV. The average throughputs are higher because thermal noise is high-frequency noise. TABLE IV. PERFORMANCE OF BITS WITH BOTH RTN AND THERMAL NOISE.
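The bit sequences tested above come from the conversion step of Section III (Algorithm IV). A minimal sketch of that conversion, based on the paper's description (the function names are ours):

```python
def even_odd_bits(dwell_times):
    """Even-odd scheme for bits with RTN plus thermal noise:
    emit 1 for an odd up/down-time, 0 for an even one (the LSB)."""
    return [t & 1 for t in dwell_times]

def multi_lsb_bits(dwell_times):
    """For pure-RTN bits with longer dwell times: emit the LSB, then the
    second LSB, and so on, until the remaining value becomes zero."""
    out = []
    for t in dwell_times:
        while t:
            out.append(t & 1)
            t >>= 1
    return out

def von_neumann(bits):
    """De-biasing: take non-overlapping pairs, drop identical pairs,
    and keep the first bit of each differing pair."""
    return [a for a, b in zip(bits[::2], bits[1::2]) if a != b]
```

For example, dwell times `[3, 4, 7, 2]` give the raw even-odd sequence `[1, 0, 1, 0]`, which von Neumann de-biasing reduces to `[1, 1]`.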
In our tests, the RNG throughput is largely limited by the timing of the asynchronous interface, which is controlled by an ARM microcontroller with a CPU frequency of 60MHz and the 8-bit bus to the Flash chip. We believe that the RNG performance can be much higher if data can be transferred more quickly through the interface. As an example, the average RTN transition time is reported to range from 1 microsecond to 10 seconds [11]. If 128 bytes can be read in 6 microseconds, which is the ideal random cache read speed for the Micron SLC chips, an RTN bit with a 0.1ms average transition time will give approximately 20 Kbits/second throughput. Note that one page could have multiple RTN bits, and our algorithm allows using multiple bits in parallel so that the aggregate throughput of an RNG can be much higher. For example, if N bits can be read at a time, in theory the throughput increases by a factor of N. #### 3) Temperature Variations For traditional hardware RNGs, low temperatures present a particular challenge because thermal noise, which they typically rely on, decreases with temperature. To study the effectiveness of the Flash-based RNG at low temperatures, we tested the scheme at two low-temperature settings: one in a freezer, at about −5°C, and the other in dry ice, at about −80°C. The generated random sequences were tested individually as well as combined with data from experiments at room temperature. All of them passed the NIST test suite without a problem, showing that our technique is effective at low temperatures. Note that the experiments for temperature variations and aging are performed with a setup where data from Flash memory are transferred from a testbed to a PC through a USB interface. The post-processing is performed on the PC. The USB interface limits the Flash read speed to 6.67KHz.
As a result, the throughput in this setup is noticeably slower than in the previous subsections, where the entire RNG operation is performed on a microcontroller. To understand the impact of temperature variations on the Flash-based RNG, we tested the first 80 bits of a page from a Numonyx chip. At room temperature, 62 of the 80 bits showed oscillations between the programmed state and the erased state. 14 of the 62 bits were selected by the selection algorithm, which identifies bits with pure RTN or both RTN and thermal components. The throughputs of the 14 bits are shown in Figure 9. Figure 9. Throughputs under room temperature. Figure 10 and Figure 11 show the performance of the RNG at −5°C and −80°C, respectively. At −5°C, 79 of the 80 bits showed noisy behavior and 20 of the 79 bits were selected by the RNG algorithm as ones with RTN. At −80°C, 72 of the 80 bits showed noise and 28 of the 72 bits were selected as ones with RTN. On average, we found that per-bit throughput decreases slightly at low temperatures, most likely because of reduced thermal noise and possibly because of slowed RTN switching. However, the difference is not significant. In fact, a previous study [12] claimed that RTN is temperature independent below 10 Kelvin. Interestingly, we found that the number of bits that are selected by our algorithm as ones with RTN behavior increases at low temperature. This trend is likely because the low temperature decreases the thermal noise amplitude while the RTN amplitude stays almost the same, and because the RTN traps slow down so that they become observable at our sampling frequency. Figure 10. Throughput at −5°C. Figure 11. Throughput at −80°C. #### 4) Aging Flash devices wear out over time as more program/erase (P/E) operations are performed. A typical SLC Flash chip has a lifetime of 1 million P/E cycles. In the context of RNGs, however, we do not think that wear-out is a concern.
In fact, aging can create new RTN traps and increase the number of bits with RTN. To check the impact of aging on the RNG, we tested the scheme after 1,000 P/E operations and 10,000 P/E operations, as shown in TABLE V. The RNG outputs passed the NIST test suite in both cases and did not show any degradation in performance. TABLE V. PERFORMANCE SUMMARY OF RTN IN STRESSED PAGES The table shows an interesting trend: more bits show RTN behavior after 10,000 P/E cycles. The increase in noisy bits can potentially increase the overall RNG throughput. One possible concern with aging is a decrease in the “stable time period” during which each bit shows noisy behavior. In our experiments, we found that a bit can be used for random number generation for over 12 hours after one programming (Algorithm III). If a bit is completely worn out, charge can leak out more quickly, requiring more frequent calibration. However, given that Flash memory is designed to have a retention time of 10 years within its lifetime, we do not expect the leakage to be a significant problem. We plan to perform larger-scale experiments to understand how often a bit needs to be re-programmed for reliable random number generation. In practice, a check can also be added to ensure that a bit oscillates between 1 and 0. ### C. Fingerprints For fingerprinting, we are interested in the uniqueness and robustness of fingerprints. A fingerprint should be unique, which means that fingerprints from different chips or different locations of the same chip must be significantly different - the correlation coefficient should be low. A fingerprint should also be robust, in the sense that fingerprints from a given location of a chip must stay stable over time and even under different environmental conditions - the correlation coefficient should be high. In the experiments detailed below, we used 24 chips (Micron 34nm SLC), and 24 pages (6 pages in 4 blocks) from each chip. 10 measurements were made from each page.
Each page has 16,384 bits. #### 1) Uniqueness To test uniqueness, we compared the fingerprint of a page to the fingerprints of the same page on different chips, and recorded their correlation coefficients. A total of 66,240 pairs were compared - (24 chips choose 2) * 24 pages * 10 measurements. The results are shown in Figure 12. The correlation coefficients are very low, with an average of 0.0076. A Gaussian distribution fits the data well, as shown in red. Figure 12. Histogram of correlation coefficients for pages compared to the same page on a different chip (total 66,240 comparisons). The correlation coefficients are also very low when a page is compared not only to the same page on different chips, but also to different pages on the same and different chips, as shown in Figure 13. There are 1,656,000 pairs in this comparison - ((24 pages * 24 chips) choose 2) * 10 measurements. This indicates that fingerprints from different parts (pages) of a chip can be considered two different fingerprints and do not have much correlation. Therefore, the fingerprinting scheme allows the generation of many independent fingerprints from a single chip. The average correlation coefficient in this case is 0.0072. Figure 13. Histogram of correlation coefficients for every page compared to every other page at room temp (total 1,656,000 comparisons). #### 2) Robustness To test robustness, we compared each measurement of a page to the 9 other measurements of the same page's fingerprint (an intra-chip comparison). The histogram of results for all pages is shown in Figure 14. The correlation coefficient for fingerprints from the same page is very high, with an average of 0.9673. The minimum observed coefficient is 0.9022. The results show that fingerprints from the same page are robust over multiple measurements, and can be easily distinguished from fingerprints of a different chip or page. Figure 14.
Histogram of correlation coefficients for all intra-chip comparisons (total 25,920 comparisons). To be used in an authentication scheme, one could set a threshold correlation coefficient $t$. If, when comparing two fingerprints, their correlation coefficient is above $t$, then the two fingerprints are considered to have come from the same page/chip. If their correlation coefficient is below $t$, then the fingerprints are assumed to be from different pages/chips. In such a scheme, there is a potential concern for false positives and false negatives. A false negative is defined as comparing fingerprints that are actually from two different pages/chips, but deciding that the fingerprints are from the same page/chip. A false positive occurs when comparing fingerprints from the same page/chip, yet deciding that the fingerprints came from two different pages/chips. The threshold $t$ can be selected to balance false negatives and positives. A high value of $t$ would minimize false negatives, but increase the chance of false positives, and vice versa. To estimate the chance of false positives and false negatives, we fit normal probability density functions to the correlation coefficient distributions. A false positive would arise from a comparison of two fingerprints from the same page falling below $t$. The normal distribution fitted to the intra-chip comparison data in Figure 14 has an average $\mu=0.9722$ and a standard deviation of 0.0095. For a threshold of $t=0.5$, the normal distribution estimates the cumulative probability of a pair of fingerprints having a correlation coefficient below 0.5 as $2.62\times 10^{-539}$. At $t=0.7$, the probability is estimated as $7.43\times 10^{-181}$. The normal distribution fitted to the inter-chip comparison data in Figure 13 has $\mu=0.0076$ and a standard deviation of 0.0083. The estimated chance of a pair of fingerprints from different chips exceeding $t=0.5$ is $4.52\times 10^{-815}$.
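Tail probabilities this small are hundreds of orders of magnitude below floating-point underflow, so they cannot come from a library CDF directly; they must be computed in the log domain. The following sketch uses the leading term of the standard asymptotic expansion of the Gaussian tail, $Q(z)\approx e^{-z^{2}/2}/(z\sqrt{2\pi})$, which is essentially exact this far from the mean; with the parameters quoted above it reproduces the $2.62\times 10^{-539}$ figure (how the authors computed their estimates is not stated, so this is an assumed reconstruction).

```python
import math

def log10_gaussian_tail(mu, sigma, t):
    """log10 of the probability that a N(mu, sigma) sample falls on the far
    side of threshold t, via Q(z) ~ exp(-z^2/2) / (z * sqrt(2*pi))."""
    z = abs(t - mu) / sigma                     # distance in standard deviations
    return -z * z / (2 * math.log(10)) - math.log10(z * math.sqrt(2 * math.pi))

# Intra-chip fit (mu = 0.9722, sigma = 0.0095) dropping below t = 0.5:
# log10 P comes out near -538.6, i.e. P is on the order of 2.6e-539.
print(log10_gaussian_tail(0.9722, 0.0095, 0.5))
```

The same call with `t=0.7` gives roughly −180.1, matching the $7.43\times 10^{-181}$ estimate in the text.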
At $t=0.3$, the probability is estimated as $6.14\times 10^{-301}$. The tight inter-chip and intra-chip correlations, along with the low probability estimates for false positives and negatives, suggest that the size of fingerprints can be reduced. Instead of using all 16,384 bits in a page, we can generate a fingerprint for a 1024-bit, 512-bit, or even only a 256-bit block. Experiments show that the averages of the observed correlation coefficients remain similar to those when using every bit in a page, while the standard deviation increases by a factor of 2-3. However, the worst-case false negative estimates remain low. When using 256-bit fingerprints with the threshold $t=0.3$, the estimate is $7.91\times 10^{-7}$. Under the same conditions, using 1024-bit fingerprints gives an estimated $3.20\times 10^{-22}$ chance of a false negative. #### 3) Temperature Variations and Aging To see how robust the fingerprints are across different temperatures, we extracted fingerprints from chips at two other ambient temperatures, 60°C and −5°C. We tested a subset of the chips tested at room temperature - 6 pages (3 pages in 2 blocks) in 6 chips. Of interest is how fingerprints from the same page/chip, but taken at different temperatures, compare. Figure 15 shows the results of the intra-chip comparison between each temperature pair. Correlations remain high for fingerprints from the same page/chip, indicating that fingerprints taken at different temperatures can still be identified as the same. The average correlation coefficient is lower than when compared without a temperature difference, but is still sufficiently high to yield very low false positive rates. Figure 15. Average, minimum, and maximum correlation coefficients for intra-chip comparisons between different ambient temperatures. Comparing fingerprints from the same page at the same temperature, at −5°C or 60°C, still yields high correlation coefficients, as expected.
Comparisons of fingerprints from different pages/chips at different temperatures give very low correlation coefficients. Flash chips have a limited lifetime, wearing out over many program/erase (P/E) cycles. For a page's fingerprint to be useful over time, fingerprints taken later in life should still correlate highly with younger fingerprints. Figure 16 shows the results of comparing fingerprints for the same page/chip taken when a Flash chip is new to fingerprints taken after different numbers of P/E cycles. While the average correlation coefficient goes down noticeably, we note that it appears to bend towards an asymptote as the chip wears out. Even after 500,000 P/E cycles, which is beyond the typical lifetime of Flash chips, the average coefficient is still high enough to distinguish fingerprints of the same page/chip from fingerprints acquired from a different page/chip. Figure 16. Average, minimum, and maximum correlation coefficients for comparisons between fresh and stressed Flash. However, we found that an extreme wear-out such as 500,000 P/E cycles can raise a non-negligible false positive concern $(10^{-4})$ for short 256 or 512-bit fingerprints. This result indicates that we need longer fingerprints if they are to be used over a long period of time without re-calibration. #### 4) Security An attacker could attempt to store the fingerprints of a Flash device and replay a fingerprint to convince a verifier that he has the Flash chip in question. If the attacker cannot predict which page(s) or parts of a page (for shorter signatures) will be fingerprinted, he would need to store the fingerprints for every page to ensure success. The Flash chips in our experiments required about 800 partial program cycles per fingerprint. As the fingerprint comprises the order in which each bit was programmed, each bit's ordering could be stored as a 10-bit number. Storing an entire chip's fingerprints would require 10x the chip's storage capacity.
Acquiring a single fingerprint is relatively fast. Our setup could record an entire page's fingerprint in about 10 seconds. However, there are 131,072 pages on our (relatively small) test chip; characterizing one chip would take about 2 weeks. The characterization time depends on the speed of the Flash interface, and we plan to further investigate the limit on how fast fingerprints can be characterized. ### D. Applicability to Multiple Flash Chips Most of the above experimental results were obtained from the Micron SLC Flash memory. In order to answer the question of whether the proposed techniques are applicable to Flash memory in general, we repeated both the RNG and fingerprinting tests on the four types of Flash memory chips in TABLE I, including an MLC chip. The experiments showed that RNG and fingerprinting both work on all four types of Flash chips, with comparable performance. Detailed results are not included as they do not add new information. While we found that the proposed algorithms work without any change in most cases, there was one exception: the fingerprinting algorithm needed to be slightly modified to compensate for systematic variations for certain manufacturers. For example, for the Hynix and Numonyx chips, we found that bits from the even bytes of a page tend to be programmed more quickly than bits from the odd bytes. Similarly, for the MLC chip, bits in a page divide into two groups: a quickly programmed group and a slowly programmed group. To accommodate such systematic behaviors, the fingerprinting algorithm was changed to only compare the programming order of bits within the same group. SECTION VI ## APPLICATION SCENARIOS This section briefly discusses how the Flash memory based security functions, namely RNGs and device fingerprints, can be used to improve the security of electronic devices. We first discuss where the techniques can be deployed and then present a few use cases. ### A.
Applicability The proposed Flash-based security techniques work with commercial off-the-shelf Flash memory chips using standard interfaces. For example, our prototype design is based on the Open NAND Flash Interface (ONFI) [13], which is used by many major Flash vendors including Intel, Hynix, Micron, and SanDisk. Other Flash vendors such as Samsung and Toshiba use similar interfaces for their chips. The proposed techniques can be applied to any Flash or other floating-gate non-volatile memory, as long as one can control read, program (write), and erase operations to specific memory locations (pages and blocks), issue the RESET command, and disable internal ECC. Embedded systems typically implement the Flash memory controller in software, exposing the low-level Flash chip interface to a software layer. Our prototype USB board in the evaluation section is an example of such a design. While we did not have a chance to study the details, the manual for the TI OMAP processor family [14], which is widely used in mobile phones, indicates that its External Memory Interface (EMI) requires software to control each phase of NAND Flash accesses. In such platforms, where Flash accesses are controlled by software, our techniques can be implemented with relatively simple software changes. For large memory components such as SSDs, the low-level interfaces to Flash memory chips may not be exposed to a system software layer. For example, SSD controllers often implement wear-leveling schemes that move data to a new location on writes. In such devices, the device vendor needs to either expose the Flash interfaces to higher-level software or implement the security functions in firmware. ### B. Random Number Generation The Flash-based random number generator (RNG) can either replace or complement software pseudo-random number generators in any application that needs a source of randomness.
For example, random numbers may be used as nonces in communication protocols to prevent replays, or used to generate new cryptographic keys. Effectively, the Flash memory provides the benefits of hardware RNGs without requiring custom RNG circuits. For example, with the proposed technique, low-cost embedded systems such as sensor network nodes can easily generate random numbers from Flash/EEPROM. Similarly, virtual machines on servers can obtain true random numbers even without hardware RNGs. ### C. Device Authentication One application of the Flash device fingerprints is to identify and/or authenticate hardware devices themselves, similar to the way that biometrics are used to identify humans. As an example, let us consider distinguishing genuine Flash memory chips from counterfeits in an untrusted supply chain. Recent articles report multiple incidents of counterfeit Flash devices in practice, such as chips from low-end manufacturers, defective chips, and ones harvested from thrown-away electronics [5] [15] [16]. Counterfeit chips are a serious concern for consumers in terms of reliability as well as security; counterfeits may contain malicious functions. Counterfeits also damage the manufacturer's brand name. Flash fingerprints can enable authentication of genuine chips without any additional hardware modifications to today's Flash chips. In a simple protocol, a Flash manufacturer can assign an identifier (ID) to a genuine chip (written to a location in Flash memory), generate a fingerprint from the chip, and store the fingerprint in a database along with the ID. To check the authenticity of a Flash chip from a supply chain, a customer can regenerate the fingerprint and query the manufacturer's database to see if it matches the saved fingerprint. In order to pass the check, a counterfeit chip needs to produce the same fingerprint as a genuine one.
Interestingly, unlike simple identifiers and keys stored in memory, device fingerprints based on random manufacturing variations cannot be controlled even when a desired fingerprint is known. For example, even legitimate Flash manufacturers cannot precisely control individual transistor threshold voltages, which we use to generate fingerprints. To produce specific fingerprints, one would need to create a custom chip that stores the fingerprints and emulates Flash responses. The authentication scheme can be strengthened against emulation attacks by exploiting the large number of bits in Flash memory. Figure 17 illustrates a modified protocol that utilizes the large number of fingerprints that can be generated from each Flash chip. Here, we consider a Flash chip as a function where the set of bits used to generate a fingerprint is a challenge, and the resulting fingerprint is a response. A device manufacturer, when in possession of a genuine IC, applies randomly chosen challenges to obtain responses. These challenge-response pairs (CRPs) are then stored in a database for future authentication operations. To check the authenticity of an IC later, a CRP that has been previously recorded but never used for a check is selected from the database, and the response re-generated by the device is checked against it. Figure 17. Device authentication through a challenge-response protocol. Unless an adversary can predict which CRPs will be used for authentication, the adversary needs to measure all (or at least a large fraction) of the possible fingerprints from an authentic Flash chip and store them in an emulator. On our prototype board, generating all fingerprints from a single page (16K bits) takes about 10 seconds and requires 10 bits of storage for each Flash bit. For a 16Gbit (2 GB) Flash chip, which is a moderate size by today's standards, this implies that fully characterizing the chip will take over a hundred days and 20 GB of storage.
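The time and storage estimates follow directly from the page count and the 10-bit ordering stored per Flash bit. A back-of-the-envelope sketch, using the chip and page sizes stated above (power-of-two sizes assumed):

```python
CHIP_BITS = 16 * 2**30        # 16 Gbit (2 GB) chip
PAGE_BITS = 16 * 2**10        # 16 Kbit page, as on the prototype board
SECONDS_PER_PAGE = 10         # measured time to fingerprint one page
BITS_PER_ORDER = 10           # each bit's program order stored as a 10-bit number

pages = CHIP_BITS // PAGE_BITS                       # 1,048,576 pages
days = pages * SECONDS_PER_PAGE / 86400              # ~121 days of measurement
storage_gb = CHIP_BITS * BITS_PER_ORDER / 8 / 2**30  # 20 GB of stored orderings
print(f"{days:.0f} days, {storage_gb:.0f} GB")
```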
In the context of counterfeiting, such costs are likely to be high enough to make producing counterfeits economically unattractive. The security of the authentication scheme based on Flash fingerprints can be further improved if additional control can be added to the Flash interface. For example, imagine using a USB Flash memory as a two-factor authentication token by updating its firmware to provide a challenge-response interface for Flash fingerprints. Given that authentication operations need only be infrequent, the USB stick can be configured to allow only one query every few seconds. If a fingerprint is based on 1024 Flash bits, fully characterizing an 8 GB USB stick can take tens of years. ### D. Cryptographic Keys In addition to device identification and authentication, Flash fingerprints can be used to produce many independent secret keys without additional storage. In effect, the proposed Flash fingerprints provide unpredictable and persistent numbers for each device. Previous studies such as fuzzy extractors [17] and Physical Unclonable Functions (PUFs) [3] have shown how symmetric keys (uniformly distributed random numbers) can be obtained from biometric data or IC signatures derived from manufacturing variations by applying hashing and error correction. The same approach can be applied to Flash fingerprints in order to generate reliable cryptographic keys. A typical Flash chip with a few GB of capacity can potentially produce tens of millions of 128-bit symmetric keys. SECTION VII ## RELATED WORK ### A. Hardware Random Number Generators Hardware random number generators generate random numbers from high-entropy sources in the physical world. In theory, some random physical processes are completely unpredictable. Therefore, hardware random number generators provide better randomness than software-based pseudo-random number generators.
Thermal noise and other system-level noise are the common entropy sources in recently proposed hardware random number generators. In [18], the phase noise of identical ring oscillators is used as the entropy source. In [19], the differences in path delays are used. In [20] and [21], the metastability of flip-flops or of two cross-coupled inverters is used. Essentially, the entropy source of these RNG designs is thermal noise together with circuit operating conditions. These hardware random number generators can usually achieve high throughput because the frequency of the entropy sources is high. One common characteristic of these designs is that they all need carefully designed circuits in which process variations are minimized so that noise from the entropy source dominates. By comparison, random number generation in Flash memory cells does not require specially designed circuits and is more robust against process variation. Moreover, our entropy source is based on quantum behavior, so in theory it should still work at extremely low temperatures, where thermal noise and other kinds of noise decrease dramatically.

### B. Hardware Fingerprint - Physical Unclonable Functions

Instead of conventional authentication based on a secret key and cryptographic computation, researchers have recently proposed using the inherent variation in the physical characteristics of a hardware device for identification and authentication. Process variation in semiconductor foundries is a common source of hardware uniqueness that is outside the designer's control [22] [23] [24]. A unique fingerprint can be extracted and used to identify a chip, but such a fingerprint alone cannot be used for security applications because it can simply be stored and replayed. We also take advantage of process variation for our fingerprinting scheme. For security applications, Physical Unclonable Functions (PUFs) have been proposed.
A PUF can generate many fingerprints per device by using complex physical systems whose analog characteristics cannot be perfectly replicated. Pappu initially proposed PUFs [25] using the light-scattering patterns of optically transparent tokens. In silicon, researchers have constructed circuits which, due to random process variation, emit unique outputs per device. Some silicon PUFs use ring oscillators [26] or race conditions between two identical delay paths [27]. These PUFs are usually implemented as custom circuits on the chip. Recently, PUFs have been implemented without additional circuitry by exploiting metastable elements such as SRAM cells, which have a unique value on start-up for each IC instance [28] [4], or by using Flash memories [5]. Our authentication scheme requires no new circuitry and can be realized with commercially available and ubiquitous Flash chips. Unlike schemes based on metastable elements, authentication does not require a power cycle. The scheme can generate many fingerprints by using more pages in the Flash chip. Acquiring a fingerprint is also faster and more widely applicable than in previous Flash authentication methods.

SECTION VIII

## CONCLUSION

In this work, we show that unmodified Flash chips are capable of providing two important security functions: high-quality true random number generation and the provision of many digital fingerprints. Using thermal noise and random telegraph noise, random numbers can be generated at up to 10 Kbit per second for each Flash bit and pass all NIST randomness tests. An authentication scheme with fingerprints derived from partial programming of pages on the Flash chip shows high robustness and uniqueness. The authentication scheme was tested over 24 pages with 24 different instances of a Flash chip and showed clear separation. A Flash chip can provide many unique fingerprints that remain distinguishable under various temperature and aging conditions.
Both random number generation and fingerprint generation require no hardware change to commercial Flash chips. Because Flash chips are ubiquitous, the proposed techniques have the potential to be widely deployed to many existing electronic devices through a firmware update or software change.

### ACKNOWLEDGEMENTS

This work was partially supported by the National Science Foundation grant CNS-0932069, the Air Force Office of Scientific Research grant FA9550-09-1-0131, and an equipment donation from Intel Corporation.
http://mathematica.stackexchange.com/questions/46180/initialize-table-in-a-module-but-break-if-criteria-not-met
# Initialize table in a Module but Break if criteria not met

I am initializing a table inside a function as follows:

    f[a1_, a2_, a3_] := Module[{RM = Table[a1 + a2 + a3 + i1, {i1, 1, 10}]}, Plus @@ RM]

I am calling the function f multiple times with different values of a1, a2, a3 and want the function to exit if 300 < a1 + a2 + a3 + i1 < 500. How can I do it elegantly, without initializing the table and later checking whether to proceed or Break? This is what I have started to do, but could not get very far:

    LL[a1_, a2_, a3_, x_] := 300 < a1 + a2 + a3 + i1 < 500;
    f[a1_, a2_, a3_] := Module[{RM = Table[If[LL[a1, a2, a3, i1] == True, LL[a1, a2, a3, i1], Break[]], {i1, 1, 10}]}, Plus @@ RM]

I think your own method can be refined into something useful:

    f1[a1_, a2_, a3_] := Module[{test, RM},
      test = If[300 < # < 500, #, Return[{}, Module]] &;
      RM = Table[test[a1 + a2 + a3 + i1], {i1, 1, 10}];
      Total @ RM
    ]

Now:

    f1[100, 101, 102]
    f1[1, 2, 3]
    f1[100, 195, 200]

    3085
    {}
    {}

Note that I used a special syntax of Return to exit without error. I am assuming that you cannot simply test the end points of your Table range, but rather need to test every value generated by some function. Otherwise use a condition as rasher did, in one formulation or another.

If you want the function to return unevaluated you can achieve it with two small modifications:

    f2[a1_, a2_, a3_] := Module[{test, RM},
      test = If[300 < # < 500, #, Return[{}, Table]] &;
      RM = Table[test[a1 + a2 + a3 + i1], {i1, 1, 10}];
      Total @ RM /; RM =!= {}
    ]

    f2[100, 101, 102]
    f2[1, 2, 3]
    f2[100, 195, 200]

    3085
    f2[1, 2, 3]
    f2[100, 195, 200]

Note that Return is changed to exit from Table rather than the entire Module. Then a special form of Condition is used. See: Using a PatternTest versus a Condition for pattern matching.

If failure to match is an uncommon event, it is better to write the function to be faster in the common case where it does not exit or return unevaluated.
For that you would leave the test until after the Table is generated, e.g.:

    f3[a1_, a2_, a3_] := Module[{RM},
      RM = Table[a1 + a2 + a3 + i1, {i1, 1, 10}];
      Total @ RM /; 300 < Min[RM] && Max[RM] < 500
    ]

Speed comparison within a matching range:

    Table[
      Do[fn[100, 100, x], {x, 100, 280, 0.01}] // Timing // First,
      {fn, {f1, f2, f3}}
    ]

    {0.374, 0.375, 0.249}

- This is perfect for me!! – brama Apr 15 '14 at 21:47
- @brama Glad I could help. Thanks for the Accept. – Mr.Wizard Apr 15 '14 at 21:54

The following

    f[a1_, a2_, a3_] /; (IntervalIntersection[Interval[{300, 500}],
        Interval[a1 + a2 + a3 + {1, 10}]] === Interval[]) :=
      Module[{RM = Table[a1 + a2 + a3 + i1, {i1, 1, 10}]}, Plus @@ RM]

is probably the most direct way to accomplish this. Called with non-satisfying values, it simply returns unevaluated.

- Thanks, but the condition has the cell index i1, as 300 < a1 + a2 + a3 + i1 < 500. This is a simplified condition, but the original condition is much more complex – brama Apr 15 '14 at 18:15
- @brama: Ah, missed that in OP. Will adjust or delete, give me a moment... – ciao Apr 15 '14 at 18:17
- @brama: OK - I think this does what you've asked for (completely misread OP first time). This only returns if no value of a1 + a2 + a3 + i1 is in the range of 300 to 500. Frankly, I think it more convoluted than your idea of just adding an If to your function definition... – ciao Apr 15 '14 at 18:49
- +1 for answering the question as written, but I'm guessing that brama cannot test only the end-points of the Table iterator. – Mr.Wizard Apr 15 '14 at 19:46
- @Mr.Wizard - yes, I think perhaps a clarification of the "much more complex" condition in the OP is warranted. Thanks much for +... – ciao Apr 15 '14 at 19:48

You cannot "break" from a table like that; try something like this:

    i = 0;
    First @ Last @ Reap[
      While[++i; !(300 < a1 + a2 + a3 + i1 < 500) && i <= 10, Sow[result]]
    ]

(Your condition and result are a bit vague in your question, by the way.) Also, I'm assuming you want to keep the values computed up to the exit condition.
If you wanted to just discard everything, then you can do:

    Catch[Table[If[!condition, Throw[{}], value], {i, 10}]]
https://kerodon.net/tag/02MW
# Kerodon

Example 4.6.6.4. Let $\operatorname{\mathcal{C}}$ and $\operatorname{\mathcal{D}}$ be $\infty$-categories, and let $\operatorname{\mathcal{C}}\star \operatorname{\mathcal{D}}$ denote their join (Construction 4.3.3.13). Then $\operatorname{\mathcal{C}}\star \operatorname{\mathcal{D}}$ is also an $\infty$-category (Corollary 4.3.3.24). It follows from Example 4.6.1.6 that if $X$ is an initial object of $\operatorname{\mathcal{C}}$, then it is also initial when regarded as an object of $\operatorname{\mathcal{C}}\star \operatorname{\mathcal{D}}$. Similarly, if $Y$ is a final object of $\operatorname{\mathcal{D}}$, then it is also final when regarded as an object of $\operatorname{\mathcal{C}}\star \operatorname{\mathcal{D}}$.
https://michaelgalloy.com/2006/05/02/an-introduction-to-programming-with-idl-by-kenneth-p-bowman.html
There aren't many third-party IDL books: David Fanning has one, Liam Gumley has one, and Ronn Kling has three. Now [Ken Bowman](http://www.met.tamu.edu/people/faculty/bowman.php), a professor at Texas A&M University in the Department of Atmospheric Sciences, enters the fray with *An Introduction to Programming with IDL*.

This book is geared towards a new user of IDL without programming experience. It covers the topics necessary to get started in IDL: basic variable concepts, analysis, file input/output, and direct graphics visualizations, including many exercises for these topics. That makes it a good fit for the academic market. From the Preface:

> This book is intended to be used in an introductory computer
> programming course for science and engineering students at
> either the undergraduate or graduate level.

I think it achieves this goal very well, but don't look here if you want to take your programming beyond basic analysis and visualization.

A major strength of the book is the downloadable example programs with their documentation, data files, and output. To try the examples, make sure to run @startup first to set up your IDL session. See the [book's website](http://idl.tamu.edu/) for more information, including the table of contents, errata, downloadable example programs, and even the Interpolation chapter.

The book takes the reader through a sequence of chapters covering basic concepts about IDL variables, file input/output, programming concepts, visualization, and analysis. The chapters of the book are:

1. Introduction
2. IDL Manuals and Books
3. Interactive IDL
4. IDL Scripts (Batch Jobs)
5. Integer Constants and Variables
6. Floating-Point Constants and Variables
7. Using Arrays
8. Searching and Sorting
9. Structures
10. Printing Text
11. Reading Text
12. Writing and Reading Binary Files
13. Reading NetCDF Files
14. Writing NetCDF Files
15. Procedures and Functions
16. Program Control
17. Line Graphs
18. Contour and Surface Plots
19. Mapping
20. Printing Graphics
21. Color and Image Display
22. Animation
23. Statistics and Pseudorandom Numbers
24. Interpolation
25. Fourier Analysis
26. Appendix A: An IDL Style Guide
27. Appendix B: Example Procedures, Functions, Scripts, and Data Files

The detailed table of contents lists the sections within each chapter.

### Summary

*Pros:*

- No prerequisites required.
- Exercises for most topics.
- Many example programs.
- Style guide appendix.
- Coverage of file input/output with NetCDF files.
- Clearly written with a nice layout.
- Color plates for some of the graphics examples.

*Cons:*

- No intermediate topics like widgets, object graphics, or object-oriented programming are covered.
- No use of the iTools.

If you're teaching a first course in programming for scientists, I would recommend this book for your class.
http://www.doe.mass.edu/mcas/student/2017/question.aspx?GradeID=8&SubjectCode=mth&QuestionID=59017
# Massachusetts Comprehensive Assessment System

Question 2: Constructed-Response

Jason is comparing the sizes of Earth, Saturn, and a lacrosse ball. The radius of Earth is approximately 6,378,100 meters.

#### Part A

What is the radius of Earth, in meters, written as a single-digit number multiplied by a power of 10?
#### Part B

The radius of Saturn is approximately $6 \times 10^7$ meters.

#### Part C

The radius of a lacrosse ball is approximately 0.032 meter. Estimate the radius of a lacrosse ball, in meters, by expressing the radius as a single-digit number multiplied by a power of 10.

#### Part D

Use your answers from Parts A and C to estimate how many times greater the radius of Earth is than the radius of a lacrosse ball. Show or explain how you got your answer.

### Scoring Guide and Sample Student Work

- **Score 4:** The student response demonstrates an exemplary understanding of the Expressions and Equations concepts involved in using numbers expressed in the form of a single digit times an integer power of 10 to estimate very large or very small quantities, and to express how many times as much one is than the other. The student estimates quantities as a single digit times a power of 10 using either positive or negative exponents, and expresses how many times greater one quantity is than the other.
- **Score 3:** The student response demonstrates a good understanding of the same concepts. Although there is significant evidence that the student was able to recognize and apply the concepts involved, some aspect of the response is flawed. As a result the response merits 3 points.
- **Score 2:** The student response demonstrates a fair understanding of the same concepts. While some aspects of the task are completed correctly, others are not. The mixed evidence provided by the student merits 2 points.
- **Score 1:** The student response demonstrates a minimal understanding of the same concepts.
- **Score 0:** The student response contains insufficient evidence of an understanding of the same concepts to merit any points.

Note: There are 2 sample student responses for Score Point 4.
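The arithmetic behind Part D can be checked with a quick computation (not part of the original assessment item). The single-digit-times-power-of-10 estimates used below follow from Parts A and C: 6,378,100 m rounds to $6 \times 10^6$ m, and 0.032 m rounds to $3 \times 10^{-2}$ m.

```python
earth_radius = 6e6   # Part A: 6,378,100 m is approximately 6 x 10^6 m
ball_radius = 3e-2   # Part C: 0.032 m is approximately 3 x 10^-2 m

# Part D: divide the coefficients and subtract the exponents:
# (6 / 3) x 10^(6 - (-2)) = 2 x 10^8
ratio = earth_radius / ball_radius
assert ratio == 2e8  # Earth's radius is about 200 million times greater
```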
https://learning.rc.virginia.edu/courses/fortran_introduction/linkers_libraries/
When the executable is created, any external libraries must also be linked. The compiler will search a standard path for libraries; on Unix this is typically /usr/lib, /usr/lib64, /usr/local/lib, /lib. If you need libraries in other locations, you must give the compiler the path: -L followed by a path adds a search directory, and each library must be named with the pattern libfoo.a or libfoo.so and referenced as -lfoo.

Example

    gfortran -o mycode -L/usr/lib64/foo/lib mymain.o mysub.o -lfoo

A library ending in .a is static. Its machine-language code will be physically incorporated into the executable. If the library ends in .so it is dynamic. It will be invoked by the executable at runtime.

Many libraries require include files, also called header files. These must be incorporated at compile time. As for libraries, there is a standard system search path, and if the headers are not located in one of those directories, the user must provide the path to the compiler with the -I flag.

Example

    gfortran -c -I/usr/lib64/foo/include mymain.f90

If the library, or your code, uses modules in addition to or in place of headers, the -I flag is also used to specify their location. We will learn about modules and how they interact with your build system later. The current working directory is included in the library and header paths, but not its subdirectories.

## Compiler Libraries

If the compiler is used to invoke the linker, as we have done for all our examples, it will automatically link several libraries, the most important of which for our purposes are the runtime libraries. An executable must be able to start itself, request resources from the operating system, assign values to memory, and perform many other functions that can only be carried out when the executable is run. The runtime libraries enable it to do this. As long as all the program files are written in the same language and the corresponding compiler is used for linking, this will be invisible to the programmer.
Sometimes, however, we must link runtime libraries explicitly, such as when we are mixing languages (a main program in Fortran and some low-level routines in C, or a main program in C++ with subroutines from Fortran, for instance). Fortran compilers generally include nearly all the language features in their runtime libraries. Input/output is implemented in the runtime libraries, for example. This can result in errors such as the following from the Intel compiler, when it could not read from a file (which was deliberately empty in this illustration):

    forrtl: severe (24): end-of-file during read, unit 10, file /home/mst3k/temp.dat

In this error, forrtl indicates that the message comes from the Fortran runtime library.

## Compiling and Linking Multiple Files with an IDE

Our discussion of building your code has assumed the use of a command line on Unix. An IDE can simplify the process even on that platform. We will use Geany for our example; more sophisticated IDEs have more capabilities, but Geany illustrates the basic functions.

We have two files in our project, example.f90 and adder.f90. The main program is example.f90. It needs adder.f90 to create the executable. We must open the two files in Geany. Then we must compile (not build) each one separately. Once we have successfully compiled both files, we open a terminal window (cmd.exe on Windows). We navigate to the folder where the files are located and type

    gfortran -o example example.o adder.o

Notice that we name the executable the same as the main program, minus the file extension. This follows the Geany convention for the executable. It is not a requirement, but if Geany is to execute it, that is the name for which it will look. You can run the executable either from the command line (./example may be required for Linux) or through the Geany execute menu or gears icon. If Geany is to run a multi-file executable, then the main program file must be selected as the current file and must match the name of the executable.
This process becomes increasingly cumbersome as projects grow in number and complexity of files. The most common way to manage projects is through the make utility, which we will examine next.
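As a preview of that approach, a minimal makefile for the two-file project above might look like the following. The file names match the Geany example; the compiler name and flags are assumptions you would adapt to your own setup.

```make
# Minimal makefile for the example/adder project (illustrative).
FC     = gfortran
FFLAGS = -O2

OBJS = example.o adder.o

# Link the objects into the executable.
example: $(OBJS)
	$(FC) -o $@ $(OBJS)

# Compile each .f90 source into an object file.
%.o: %.f90
	$(FC) $(FFLAGS) -c $<

clean:
	rm -f example $(OBJS)
```

Running `make` then rebuilds only the files whose sources have changed, replacing the manual compile-each-file workflow described above.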
https://stacks.math.columbia.edu/tag/0DXG
Lemma 52.3.5. Let $A$ be a ring. Let $f \in A$. Let $X$ be a scheme over $\mathop{\mathrm{Spec}}(A)$. Let $\ldots \to \mathcal{F}_3 \to \mathcal{F}_2 \to \mathcal{F}_1$ be an inverse system of $\mathcal{O}_ X$-modules. Assume

1. either there is an $m \geq 1$ such that the image of $H^1(X, \mathcal{F}_ m) \to H^1(X, \mathcal{F}_1)$ is an $A$-module of finite length or $A$ is Noetherian and the intersection of the images of $H^1(X, \mathcal{F}_ m) \to H^1(X, \mathcal{F}_1)$ is a finite $A$-module,

2. the equivalent conditions of Lemma 52.3.1 hold.

Then the inverse system $M_ n = \Gamma (X, \mathcal{F}_ n)$ satisfies the Mittag-Leffler condition.

Proof. Set $I = (f)$. We will use the criterion of Lemma 52.2.2 involving the modules $H^1_ n$. For $m \geq n$ we have $I^ n\mathcal{F}_{m + 1} = \mathcal{F}_{m + 1 - n}$. Thus we see that $H^1_ n = \bigcap \nolimits _{m \geq 1} \mathop{\mathrm{Im}}\left( H^1(X, \mathcal{F}_ m) \to H^1(X, \mathcal{F}_1) \right)$ is independent of $n$ and $\bigoplus H^1_ n = \bigoplus H^1_1 \cdot f^{n + 1}$. Thus we conclude exactly as in the proof of Lemma 52.3.4. $\square$
https://stats.stackexchange.com/questions/307377/same-results-for-both-acf-and-pacf-with-strange-plot-when-i-forecast
# Same results for both ACF and PACF with strange plot when I forecast

I have two questions. The first is related to the ACF and PACF, and the second to the resulting plot after using sarima.for.

I have downloaded the bitcoin price for the last 3 years as an example to learn how to do time-series analysis. Then, as you can see, I took the first difference and the lag in order to detrend and remove variance. How can I interpret the following case? This is my model:

    fit2 <- sarima(Ret.Bit, 2, 1, 2)

with

    $ttable
             Estimate     SE  t.value p.value
    ar1       -0.7751 0.0221 -35.1251  0.0000
    ar2       -0.9381 0.0175 -53.7412  0.0000
    ma1        0.7410 0.0158  46.9190  0.0000
    ma2        0.9672 0.0134  72.1050  0.0000
    constant   2.4933 1.3152   1.8957  0.0582

Using

    sarima.for(Ret.Bit, n.ahead = 4, 2, 1, 2)

I've got this strange plot. Both the dates (1560, etc.) and the fact that the line runs off the plot seem strange to me. Please, any help or suggestion?

- The "strange plot" bit is a question about software and off-topic here, and it is not answerable because you did not provide a reproducible example (including describing which package sarima.for is from, and the type of the variable Ret.Bit, which I'm guessing is wrong). – Chris Haug Oct 11 '17 at 17:36

## 1 Answer

If the ACF of the original series equals the ACF of the residual series, your model has self-cancelling structure and is thus equivalent to a (0,0,0)(0,0,0) model. This is a great example of using VERY BAD software to "identify" a model. Sometimes software is worth what you pay for it. Notice how the AR coefficients are eerily similar to the MA coefficients.

- The orders aren't identified by the software, they were supplied in the function call. – Chris Haug Oct 11 '17 at 17:31
- @Chris Haug ..Thanks for the correction. My faith in free software has been (partially!) restored. It would be interesting to me if the OP explained how they assumed the model or just saw it in a book/post somewhere and decided to go with it without understanding the consequences.
– IrishStat Oct 11 '17 at 18:28
https://academo.org/demos/quadrilateral-area-calculator/
# Quadrilateral Area Calculator A quick and simple tool to draw and calculate areas of quadrilaterals. Area Given 4 lengths and an angle, we can use this information to draw a quadrilateral. Then we can use Bretschneider's formula to calculate the area, $$K$$. The formula works for all convex quadrilaterals, which means none of the internal angles are greater than 180°. $K = \sqrt{(s-a)(s-b)(s-c)(s-d) - abcd \cos^2{\frac{\alpha + \gamma}{2}}}$ $$s$$ is the semi perimeter, (half of the sum of all the lengths) and $$\alpha$$ and $$\gamma$$ are two opposite angles. There are many cases in which it is useful to calculate the area of a quadrilateral. For example, if a farmer needs to distribute 100g of fertilizer per square meter of a field, they can use the calculator to calculate the area of the field. Alternatively, if you need to buy some tiles or new carpet for a room, the tool will tell you how much material you need to buy. To use the calculator, enter your lengths, and the angle $$\alpha$$ into the sidebar and hit calculate. The tool will automatically calculate the value of $$\gamma$$ that results in a convex quadrilateral and will then display the computed area. The resulting quadrilateral will also be drawn on the screen. The size will automatically be scaled, to fit the screen size. Please note some combinations of numbers cannot be used to make a quadrilateral. To try and visualize this, imagine three sides of length 1, and one side of length 100. There is no way that the side of length 100 can fit into the available space. If your inputs cannot be used to create a valid quadrilateral, we will display a note on the graph. If this is the case, please re-check your measurements and try again.
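As a quick numeric check of Bretschneider's formula, here is a minimal Python sketch (the function name `quad_area` and the sample values are ours, not taken from the calculator):

```python
import math

def quad_area(a, b, c, d, alpha, gamma):
    """Bretschneider's formula; alpha and gamma are two opposite angles (radians)."""
    s = (a + b + c + d) / 2  # semi-perimeter
    return math.sqrt((s - a) * (s - b) * (s - c) * (s - d)
                     - a * b * c * d * math.cos((alpha + gamma) / 2) ** 2)

# A 3-by-4 rectangle (all angles 90 degrees) should have area 12.
assert abs(quad_area(3, 4, 3, 4, math.pi / 2, math.pi / 2) - 12) < 1e-9
```

Note that for a cyclic quadrilateral, α + γ = 180°, so the cosine term vanishes and the expression reduces to Brahmagupta's formula.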
https://fr.mathworks.com/help/robotics/ref/rigidbodytree.massmatrix.html
# massMatrix

Joint-space mass matrix

## Syntax

`H = massMatrix(robot)`

`H = massMatrix(robot,configuration)`

## Description

`H = massMatrix(robot)` returns the joint-space mass matrix of the home configuration of a robot.

`H = massMatrix(robot,configuration)` returns the mass matrix for a specified robot configuration.

## Examples

Load a predefined KUKA LBR robot model, which is specified as a `RigidBodyTree` object.

`load exampleRobots.mat lbr`

Set the data format to `'row'`. For all dynamics calculations, the data format must be either `'row'` or `'column'`.

`lbr.DataFormat = 'row';`

Generate a random configuration for `lbr`.

`q = randomConfiguration(lbr);`

Get the mass matrix at configuration `q`.

`H = massMatrix(lbr,q);`

## Input Arguments

Robot model, specified as a `rigidBodyTree` object. To use the `massMatrix` function, set the `DataFormat` property to either `'row'` or `'column'`.

Robot configuration, specified as a vector with positions for all nonfixed joints in the robot model. You can generate a configuration using `homeConfiguration(robot)`, `randomConfiguration(robot)`, or by specifying your own joint positions. To use the vector form of `configuration`, set the `DataFormat` property for the `robot` to either `'row'` or `'column'`.

## Output Arguments

Mass matrix of the robot, returned as a positive-definite symmetric matrix with size n-by-n, where n is the velocity degrees of freedom of the robot.

### Dynamics Properties

When working with robot dynamics, specify the information for individual bodies of your manipulator robot using these properties of the `rigidBody` objects:

• `Mass` — Mass of the rigid body in kilograms.
• `CenterOfMass` — Center of mass position of the rigid body, specified as a vector of the form `[x y z]`. The vector describes the location of the center of mass of the rigid body, relative to the body frame, in meters.
The `centerOfMass` object function uses these rigid body property values when computing the center of mass of a robot.

• `Inertia` — Inertia of the rigid body, specified as a vector of the form `[Ixx Iyy Izz Iyz Ixz Ixy]`. The vector is relative to the body frame in kilogram square meters. The inertia tensor is a positive-definite symmetric matrix. The first three elements of the `Inertia` vector are the moments of inertia, which are the diagonal elements of the inertia tensor. The last three elements are the products of inertia, which are the off-diagonal elements of the inertia tensor.

For information related to the entire manipulator robot model, specify these `rigidBodyTree` object properties:

• `Gravity` — Gravitational acceleration experienced by the robot, specified as an `[x y z]` vector in m/s². By default, there is no gravitational acceleration.
• `DataFormat` — The input and output data format for the kinematics and dynamics functions, specified as `"struct"`, `"row"`, or `"column"`.

### Dynamics Equations

Manipulator rigid body dynamics are governed by this equation:

$$\frac{d}{dt}\begin{bmatrix}q\\ \dot{q}\end{bmatrix}=\begin{bmatrix}\dot{q}\\ M(q)^{-1}\left(-C(q,\dot{q})\,\dot{q}-G(q)-J(q)^{T}F_{Ext}+\tau\right)\end{bmatrix}$$

also written as:

$$M(q)\,\ddot{q}=-C(q,\dot{q})\,\dot{q}-G(q)-J(q)^{T}F_{Ext}+\tau$$

where:

• $M(q)$ — is the joint-space mass matrix based on the current robot configuration. Calculate this matrix by using the `massMatrix` object function.
• $C(q,\dot{q})$ — is the Coriolis terms, which are multiplied by $\dot{q}$ to calculate the velocity product. Calculate the velocity product by using the `velocityProduct` object function.
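The `[Ixx Iyy Izz Iyz Ixz Ixy]` ordering above maps onto the symmetric inertia tensor as in this short Python sketch (an illustration of the layout only; it is not part of the toolbox):

```python
def inertia_tensor(v):
    """Arrange [Ixx Iyy Izz Iyz Ixz Ixy] into the 3x3 symmetric inertia tensor."""
    ixx, iyy, izz, iyz, ixz, ixy = v
    return [[ixx, ixy, ixz],
            [ixy, iyy, iyz],
            [ixz, iyz, izz]]

# Moments of inertia land on the diagonal, products of inertia off-diagonal.
T = inertia_tensor([2.0, 3.0, 4.0, 0.1, 0.2, 0.3])
assert T[0][0] == 2.0 and T[0][1] == 0.3 and T[1][2] == 0.1
```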
• $G(q)$ — is the gravity torques and forces required for all joints to maintain their positions in the specified gravity `Gravity`. Calculate the gravity torque by using the `gravityTorque` object function.
• $J(q)$ — is the geometric Jacobian for the specified joint configuration. Calculate the geometric Jacobian by using the `geometricJacobian` object function.
• $F_{Ext}$ — is a matrix of the external forces applied to the rigid body. Generate external forces by using the `externalForce` object function.
• $\tau$ — are the joint torques and forces applied directly as a vector to each joint.
• $q, \dot{q}, \ddot{q}$ — are the joint configuration, joint velocities, and joint accelerations, respectively, as individual vectors. For revolute joints, specify values in radians, rad/s, and rad/s², respectively. For prismatic joints, specify in meters, m/s, and m/s².

To compute the dynamics directly, use the `forwardDynamics` object function. The function calculates the joint accelerations for the specified combinations of the above inputs.

To achieve a certain set of motions, use the `inverseDynamics` object function. The function calculates the joint torques required to achieve the specified configuration, velocities, accelerations, and external forces.

## Version History

Introduced in R2017a
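To make the dynamics equation concrete, here is a hedged Python sketch of forward dynamics for a single revolute joint (a 1-DOF pendulum, where $M(q)$ reduces to the scalar $ml^2$, the gravity term to $mgl\sin(q)$, a viscous damping coefficient stands in for the Coriolis term, and external forces are omitted):

```python
import math

def forward_dynamics(q, qd, tau, m=1.0, l=1.0, g=9.81, b=0.0):
    """1-DOF analogue of M(q)*qdd = -C(q,qd)*qd - G(q) + tau."""
    M = m * l ** 2                  # joint-space mass "matrix" (a scalar here)
    C = b                           # velocity-dependent term (damping stand-in)
    G = m * g * l * math.sin(q)     # gravity torque needed to hold position q
    return (-C * qd - G + tau) / M  # joint acceleration qdd

# Hanging straight down with no applied torque: no acceleration.
assert forward_dynamics(0.0, 0.0, 0.0) == 0.0
```

Calling `forward_dynamics(math.pi / 2, 0.0, 0.0)` gives $-g/l$, the familiar point-mass pendulum result at the horizontal.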
http://mathhelpforum.com/algebra/189000-calculating-value-exponent-print.html
# calculating value from exponent

• Sep 27th 2011, 11:51 AM
pranay

hi, if there is an expression: $x^{c*x+1}=y$ and if we know the values of c and y, then how can we calculate x? thanks.

• Sep 27th 2011, 01:48 PM
skeeter

Re: calculating value from exponent

Quote: Originally Posted by pranay
> hi, if there is an expression: $x^{c*x+1}=y$ and if we know the values of c and y, then how can we calculate x? thanks.

no simple algebraic manipulation will allow you to solve for x ... you'll need to use technology to calculate the value of x for given values of c and y
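Following skeeter's advice to "use technology", one option is bisection on $f(x)=(cx+1)\ln x - \ln y$, which has the same root. A hedged Python sketch (it assumes $c>0$ and $y>1$, so the root lies in $(1,\infty)$):

```python
import math

def solve_x(c, y):
    """Solve x**(c*x + 1) == y for x, assuming c > 0 and y > 1."""
    f = lambda x: (c * x + 1.0) * math.log(x) - math.log(y)
    lo, hi = 1.0, 2.0
    while f(hi) < 0:        # grow the bracket until it straddles the root
        hi *= 2.0
    for _ in range(200):    # plain bisection
        mid = (lo + hi) / 2.0
        if f(mid) < 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

# x = 2 with c = 1 gives y = 2**3 = 8, so solving with y = 8 should recover 2.
assert abs(solve_x(1.0, 8.0) - 2.0) < 1e-9
```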
https://faculty.engr.utexas.edu/dunn/dunn/publications/intra-organ-biodistribution-gold-nanoparticles-using-intrinsic-two-photon-induced
# Intra-organ Biodistribution of Gold Nanoparticles Using Intrinsic Two-photon Induced Photoluminescence

### Citation:

J. Park, Estrada, A., Schwartz, J. A., Diagaradjane, P., Krishnan, S., Dunn, A. K., and Tunnell, J. W., "Intra-organ Biodistribution of Gold Nanoparticles Using Intrinsic Two-photon Induced Photoluminescence," Lasers in Surgery and Medicine, vol. 42, pp. 630–639, 2010.

### Abstract:

BACKGROUND AND OBJECTIVES: Gold nanoparticles (GNPs) such as gold nanoshells (GNSs) and gold nanorods (GNRs) have been explored in a number of in vitro and in vivo studies as imaging contrast and cancer therapy agents due to their highly desirable spectral and molecular properties. While the organ-level biodistribution of these particles has been reported previously, little is known about the cellular level or intra-organ biodistribution. The objective of this study was to demonstrate the use of intrinsic two-photon induced photoluminescence (TPIP) to study the cellular level biodistribution of GNPs.

STUDY DESIGN/MATERIALS AND METHODS: Tumor xenografts were created in twenty-seven male nude mice (Swiss nu/nu) using HCT 116 cells (CCL-247, ATCC, human colorectal cancer cell line). GNSs and GNRs were systemically injected 24 hr prior to tumor harvesting. A skin flap with the tumor was excised and sectioned as 8 µm thick tissues for imaging GNPs under a custom-built multiphoton microscope. For multiplexed imaging, nuclei, cytoplasm, and blood vessels were demonstrated by hematoxylin and eosin (H&E) staining, YOYO-1 iodide staining and CD31-immunofluorescence staining.

RESULTS: Distribution features of GNPs at the tumor site were determined from TPIP images. GNSs and GNRs had a heterogeneous distribution with higher accumulation at the tumor cortex than tumor core. GNPs were also observed in unique patterns surrounding the perivascular region.
While most GNSs were confined within approximately 400 µm of the tumor edge, GNRs penetrated up to 1.5 mm inside the edge.

CONCLUSIONS: We have demonstrated the use of TPIP imaging in a multiplexed fashion to image both GNPs and nuclei, cytoplasm, or vasculature simultaneously. We also confirmed that TPIP imaging enabled visualization of GNP distribution patterns within the tumor and other critical organs. These results suggest that direct luminescence-based imaging of metal nanoparticles holds a valuable and promising position in understanding the accumulation kinetics of GNPs. In addition, these techniques will be increasingly important as the use of these particles progresses to human clinical trials, where standard histopathology techniques are used to analyze their effects.
https://softwareengineering.stackexchange.com/questions/115704/why-are-weakly-typed-languages-still-being-actively-developed
# Why are weakly-typed languages still being actively developed?

I wonder why weakly-typed languages are still being actively developed. For example, what benefit can one draw from being able to write

```
$someVar = 1;
(...) // Some piece of code
$someVar = 'SomeText';
```

instead of using the much different, strongly-typed version

```
int someInt = 1;
(...)
string SomeString = 'SomeText';
```

It is true that you need to declare an additional variable in the second example, but does that really hurt? Shouldn't all languages strive to be strongly-typed, since it enforces type safety at compile time, thus avoiding some pitfalls in type-casting?

• "Strongly typed" is not a well defined term. Mostly it means "you cannot subvert the type system". It's orthogonal to what you describe above, which might be latent versus manifest typing, or static versus dynamic typing. – Frank Shearar Oct 22 '11 at 8:46
• What am I missing that moves this from flame bait with several closely related, arguably even duplicate, questions (just search StackOverflow for questions mentioning static and dynamic typing in their tags) to legitimate question? – user7043 Oct 22 '11 at 10:03
• There are advantages and drawbacks to both statically typed and dynamically typed languages. Dynamically typed languages lend themselves nicely to rapid development or prototyping (hence the reason "scripting languages" are usually dynamically typed), whereas statically typed languages (arguably) are easier to maintain and extend as they grow into large, complicated projects. – Charles Salvia Oct 22 '11 at 10:15
• The first example is a bit like Python where variables have no declared type. However, Python is a very strongly typed language because the objects -- themselves -- have a type which is almost impossible to change or coerce. I think the misuse of the terminology makes this question very hard to answer.
– S.Lott Oct 22 '11 at 11:08
• @delnan The fact that this question got two reasonable answers and didn't devolve into a flame war helps. – Adam Lear Oct 22 '11 at 12:54

Strong / weak typing and static / dynamic typing are orthogonal.

Strong / weak is about whether the type of a value matters, functionally speaking. In a weakly-typed language, you can take two strings that happen to be filled with digits and perform integer addition on them; in a strongly-typed language, this is an error (unless you cast or convert the values to the correct types first). Strong / weak typing is not a black-and-white thing; most languages are neither 100% strict nor 100% weak.

Static / dynamic typing is about whether types bind to values or to identifiers. In a dynamically-typed language, you can assign any value to any variable, regardless of type; static typing defines a type for every identifier, and assigning from a different type is either an error, or it results in an implicit cast. Some languages take a hybrid approach, allowing for statically declared types as well as untyped identifiers ('variant'). There is also type inference, a mechanism where static typing is possible without explicitly declaring the type of everything, by having the compiler figure out the types (Haskell uses this extensively; C# exposes it through the var keyword).

Weak dynamic typing allows for a pragmatic approach; the language doesn't get in your way most of the time, but it won't step in when you're shooting yourself in the foot either. Strong static typing, by contrast, pushes the programmer to express certain expectations about values explicitly in the code, in a way that allows the compiler or interpreter to detect a class of errors. With a good type system, a programmer can define exactly what can and cannot be done to a value, and if, by accident, someone tries something undesired, the type system can often prevent it and show exactly where and why things go wrong.
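Python is a convenient illustration of the strong-but-dynamic quadrant described above (a sketch of ours, not part of the original answer):

```python
# Dynamic typing: the same name can be rebound to values of different types.
some_var = 1
some_var = "SomeText"   # allowed; types bind to values, not to identifiers

# Strong typing: values still carry types, so mixing them is an error.
try:
    result = "12" + 3   # a weakly-typed language might coerce this to 15 or "123"
except TypeError:
    result = None       # Python refuses instead of guessing
assert result is None

# Explicit conversion expresses the intent unambiguously.
assert int("12") + 3 == 15
assert "12" + str(3) == "123"
```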
• In a proper weakly typed language like HyperTalk, with separate operators for string concatenation and addition (e.g. assume & for concatenation), operations like "12"+3 or 45 & "6" pose no ambiguity (they compute 15 and "456", respectively). In a more strongly typed language, the "+" operator could be safely overloaded for both string concatenation and numerical addition without causing ambiguity, because mixed operations on strings and numbers would be forbidden. Problems arise when the language nails down neither the types nor the operations to be performed. – supercat Sep 29 '14 at 2:33

Weak typing is more along the lines of 1 == "TRUE". This section on Wikipedia nicely illustrates the difference. Please note that neither example from Wikipedia is statically typed, which is what you refer to in your second example. So if the question is why people use dynamically typed languages, then the answer is: static type systems put limitations on you. Many people simply have never worked with an expressive static type system, which leads them to the conclusion that the disadvantages of static typing outweigh the benefits.

Weakly-typed languages are still being developed because people use them and like them. If you don't like weak typing, don't use weakly-typed languages. Declaring that something is The One True Way and that Everybody Should Do It The One True Way ignores the complexity of the world.

> Shouldn't all languages strive to be strongly-typed since it enforces type-safety at compile time, thus avoiding some pitfalls in type-casting?

Not necessarily. Learning Objective-C: A Primer addresses that question directly in the context of Objective-C:

Weakly typed variables are used frequently for things such as collection classes, where the exact type of the objects in a collection may be unknown.
If you are used to using strongly typed languages, you might think that the use of weakly typed variables would cause problems, but they actually provide tremendous flexibility and allow for much greater dynamism in Objective-C programs.
http://openstudy.com/updates/4db600bfc08f8b0b46ef30c3
## anonymous 5 years ago

Find the geometric mean: 8, _, 72. Explain please.

If this thing is growing at a constant geometrical rate, then letting your unknown be x, you have

$\frac{x}{8}=\frac{72}{x} \rightarrow x^2=72 \times 8 =576 \rightarrow x=\sqrt{576}=24$

here. To find the geometric mean, you multiply every element in the sequence and raise it to the power of 1/(no. of elements), so here:

$\mu = (8 \times 24 \times 72)^{1/3}=24$

as required.
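The same computation as a short Python sketch (the function name is ours; `math.prod` requires Python 3.8+):

```python
import math

def geometric_mean(values):
    """Product of the values, raised to the power 1/(number of values)."""
    return math.prod(values) ** (1.0 / len(values))

# The missing middle term satisfies x/8 = 72/x, i.e. x = sqrt(8 * 72).
x = math.sqrt(8 * 72)
assert x == 24.0
assert abs(geometric_mean([8, x, 72]) - 24.0) < 1e-9
```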
https://practicaldev-herokuapp-com.global.ssl.fastly.net/azure/collaborate-on-research-papers-with-github-76e
Microsoft Azure

# Collaborate on research papers with GitHub

GitHub is well-known as a platform where software developers host their code and collaborate with their teams on a project. In this blog post, we'll show you how you can use the GitHub model to do the same thing and collaborate seamlessly on your research papers. This blog post is co-authored with Dmitry Soshnikov, because we believe that GitHub is a great technology and tool to be used beyond pure software development.

## Git, GitHub, and how it all works

If you've never worked with GitHub before, check out the Microsoft Learn module for a step-by-step introduction.

The first thing you'll want to do is set up Git. Git is the version control system that runs behind the scenes of any GitHub project -- it's what allows you to collaborate with others, go back to previous versions of your project, and view changes made by different members of your team. You may want to use Git from a command line, but in the beginning, it might be easier to use the GitHub Desktop client.

Projects on GitHub are organized in repositories. You'll create a new repository for your research paper, and choose who you want to have access. All your files, whether you're using Markdown, LaTeX, or another typesetting or markup language (more on that later!) will live in this repository. You'll want to clone the repository to your local machine, so that you have a copy of your files.

The source of truth for your paper will live on the main branch of your repository -- this branch is initialized when you create your repository. You can create multiple branches for different sections of your paper, and edit and merge them into your main branch when you're finished.
A commit is a snapshot of your repository at a given moment, and it might contain a set of changes that you've made to the information on a specific branch.

This is just a short introduction to all the features you can take advantage of when you use GitHub to collaborate on your research papers. Keep reading for more information, and a sample workflow that you can use to get started.

## What should and should not be stored in Git

It is important to understand that GitHub is not a replacement for file storage, or a convenient storage for binary files. It was originally designed to be used as a source code repository, and thus it allows you to track changes between text documents. If you are planning on collaborating on Word documents, setting up a shared OneDrive location is a much better choice. For this reason, many people don't consider GitHub to be a convenient collaboration platform for editing documents. However, scientists often write their papers in text format, most often TeX or LaTeX. This makes it very convenient to use GitHub as a collaboration platform. It is one of the reasons we believe that GitHub is a very beneficial collaboration platform for scientists.

## Why GitHub?

Using Git will give you many advantages:

• Tracking changes between different editions of a document. Text documents can be easily compared to each other using the GitHub interface. This is useful even when you are working on a paper alone, because all changes are tracked, and you can always roll back to any previous state.
• Working on different branches of the document and merging branches together. There are a few different styles of using Git for collaboration, so-called Git workflows. With branches, you and your collaborators can all work on specific parts of your project without conflicts, for prolonged periods of time.
• Accepting contributions to your paper/code from outside.
GitHub has a convenient mechanism of pull requests – suggestions from other users, that you can then approve and merge into the main content. For example, the Web Development for Beginners course was developed and hosted on GitHub originally by a group of around 10 people, and now it has more than 50 contributors, including people who are translating the course into different languages.

• If you are very advanced (or have some friends who are into DevOps), you can set up GitHub Actions to automatically create a new PDF version of your paper every time changes are made to the repository.

## LaTeX or Markdown?

Most scientists write their papers in LaTeX, mostly because it provides easy access to a lot of workflows in academia, like paper templates. There are also some good collaboration platforms specific to TeX, for example, Overleaf. However, it won't give you full control of your versioning and collaboration features like Git.

Writing in LaTeX also includes quite a bit of overhead, meaning that many layout features are quite verbose, for example:

```latex
\subsection{Section 1}
\begin{itemize}
  \item Item 1
  \item Item 2
\end{itemize}
```

In the world of software development, there is a perfect format for writing formatted text documents -- Markdown. Markdown looks just like a plain text document; for example, the text above would be formatted like this:

```markdown
## Section 1

* Item 1
* Item 2
```

This document is much easier to read as plain text, but it is also formatted into a nice-looking document by Markdown processors. There are also ways to include TeX formulae in Markdown using specific syntax. In fact, I've been writing all of my blog posts and most text content in Markdown for a few years, including posts with formulae. For scientific writing, a great Markdown processor (as well as live editing environment) integrated with TeX is Madoko – I highly recommend you check it out.
You can use it from the web interface (which has GitHub integration), and there's also an open-source command-line tool to convert your Markdown writing into either LaTeX, or directly to PDF. While you may continue using LaTeX with Git, I encourage you to look into Markdown-based writing options. By the way, if you have some writing in different formats, such as Microsoft Word documents, it can be converted to Markdown using a tool called Pandoc.

## Sample workflow

The main thing that Git does is allow you to structure your writing (whether it is code or a scientific paper) into chunks called commits. Your code is tracked in a local repository that lives on your computer, and once you have made some changes, you commit them to save. Then, you can also synchronize your commits with others by using a remote common repository, called upstream. Sound complicated? When using GitHub Desktop, most of the tasks are completely automated for you. Below, we describe the simplest way you can collaborate on a paper with your colleagues.

1. Create a new repository on GitHub. I set the visibility to Private so I can decide which collaborators I'd like to invite to contribute later.
2. Select Set up in Desktop to quickly set up your repository in GitHub Desktop.
3. Next, you'll need to create a local clone of the repository on your machine. You may be prompted to reauthenticate to GitHub during this step.
4. I already have a couple of Markdown files that I've started working on saved to my computer. I can select View the files of your repository in Finder to open the folder where my local copy of the repository is stored, and drag in the files for my Table of Contents, Section 1, and Bibliography from my computer.
5. Now, when I go back to GitHub Desktop, I can see those files have been added to my repository. I want to commit those files to the main branch. I can also publish my branch to push those changes to GitHub, and make them accessible to others who I'll collaborate with.
6. Next, I'm going to create a new branch so I can go off and work on Section 2 of my paper. I'll automatically end up on that branch after it has been created. There are a few options for making changes to your files in this branch:
   • You can create a Pull Request from your current branch. If I wanted my colleague to be able to review the changes I've made in this branch, I'd use this option and send them the PR for review.
   • You can open the repository in your external editor. I use VS Code to edit my files, so I can add Section 2 of my paper there, and then commit it to my section2 branch.
   • If I already have Section 2 of my paper saved somewhere on my computer, or if my colleague has sent me something they've worked on, I can follow the same workflow as above: check out the files in my repository on my machine, and add or remove files that way.
   • If I just need to make a small change, I'd open my repository in the browser and edit it from there.
7. I can open my repository on GitHub to check out all of the files and information. This is the link I'd send to a colleague if I wanted them to be able to clone the code onto their local machine and help me out with some sections. Since I've made my repository private, I'll need to add collaborators in the Settings pane.
8. Once I'm happy with Section 2 of my paper, I can go ahead and merge it into the main branch of my repository: I switch over to the main branch, then choose a branch to merge into main, and choose section2. Then I'll push my changes back up to GitHub so that the main branch is updated with the newest changes for any future collaborators.

This is one example of a Git workflow you can use in conjunction with GitHub Desktop to collaborate on a research paper with your colleagues. There are several other ways that may serve your needs better: you may want to use the command line with VS Code, or edit your files on GitHub in the browser.
Whatever method works for you is the best method, as long as you’re able to accomplish your goals.
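If you do prefer the command line, the GitHub Desktop steps above map onto a handful of git commands. The sketch below is self-contained for illustration: a local bare repository stands in for the GitHub remote, and all file and branch names are examples.

```shell
set -e
rm -rf /tmp/paper-remote.git /tmp/paper

# A local bare repository stands in for the GitHub remote in this sketch
git init --bare /tmp/paper-remote.git

# Clone the repository (step 3)
git clone /tmp/paper-remote.git /tmp/paper
cd /tmp/paper
git config user.email "you@example.com"
git config user.name "You"

# Add existing Markdown files and commit them to main (steps 4-5)
echo "# Table of Contents" > toc.md
echo "# Section 1" > section1.md
git add toc.md section1.md
git commit -m "Add table of contents and section 1"
git branch -M main
git push origin main

# Work on Section 2 in its own branch (step 6)
git checkout -b section2
echo "# Section 2" > section2.md
git add section2.md
git commit -m "Draft section 2"
git push origin section2

# Merge section2 back into main and publish it (step 8)
git checkout main
git merge section2
git push origin main
```

With a real GitHub repository you would replace the local bare-repo path with your repository's URL; everything else is identical.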
# Using Custom Domains With IIS Express

Traditionally I use custom domains with my localhost development server, something along the lines of:

    dev.example.com
    dev.api.example.com

This has provided me a ton of flexibility when working with external APIs such as Facebook. It worked great in the past with the built-in Visual Studio Development Server, because all I needed to do was add a CNAME to those DNS records pointing to 127.0.0.1.

However, I have not been able to get this to work with IIS Express. Everything I have tried seems to have failed. I have even added the correct XML config to the applicationHost.config file for IIS Express, but it doesn't seem to recognize the entries as valid the way a true install of IIS would:

    <binding protocol="http" bindingInformation="*:1288:dev.example.com" />

Whenever I enter this line and request http://dev.example.com:1288, I get the following message:

    Bad Request - Invalid Hostname

Does anybody know if I am missing something obvious? Or did the IIS Express team really lack the foresight to anticipate this type of use?

• Make sure you have the applicationPool attribute of the application node set to either "Clr2IntegratedAppPool" or "Clr4IntegratedAppPool". I got the Bad Hostname error you are seeing when using "Clr2ClassicAppPool" or "Clr4ClassicAppPool". Feb 18, 2011 at 18:46
• Confused: CNAME records do not accept IP addresses, only other host names. Did you mean an A record? Mar 3, 2012 at 1:17
• I had lots of issues with this and found it much easier to just use IIS instead of IIS Express. Mar 23, 2012 at 14:42
• I just ran into this as well. I was hoping it would just look at the port number and ignore the domain. No such luck. Wish I could enable "accept all" on the port. May 28, 2013 at 1:31
• Vote now! I've added a UserVoice suggestion: visualstudio.uservoice.com/forums/121579-visual-studio/… Apr 30, 2015 at 9:38

## 15 Answers

This is what worked for me (updated for VS 2013; see the revision history for 2010, and for VS 2015 see this: https://stackoverflow.com/a/32744234/218971):

1. Right-click your Web Application Project ▶ Properties ▶ Web, then configure the Servers section as follows:
   • Select IIS Express ▼ from the drop-down
   • Project Url: http://localhost
   • Override application root URL: http://dev.example.com
   • Click Create Virtual Directory (if you get an error here you may need to disable IIS 5/6/7/8, change IIS's Default Site to anything but port :80, make sure Skype isn't using port 80, etc.)
2. Optionally: set the Start URL to http://dev.example.com
3. Open %USERPROFILE%\My Documents\IISExpress\config\applicationhost.config (Windows XP, Vista, and 7) and edit the site definition in the <sites> config block to be along the lines of the following:

        <site name="DevExample" id="997005936">
          <application path="/" applicationPool="Clr2IntegratedAppPool">
            <virtualDirectory path="/" physicalPath="C:\path\to\application\root" />
          </application>
          <bindings>
            <binding protocol="http" bindingInformation=":80:dev.example.com" />
          </bindings>
          <applicationDefaults applicationPool="Clr2IntegratedAppPool" />
        </site>

4. If running MVC: make sure the applicationPool is set to one of the "Integrated" options (like "Clr2IntegratedAppPool").
5. Open your hosts file and add the line 127.0.0.1 dev.example.com.
6. ► Start your application!

Some great advice from the comments:

• You may need to run Visual Studio as Administrator.
• If you want to let other devs see your IIS, run netsh http add urlacl url=http://dev.example.com:80/ user=everyone
• If you want the site to resolve for all hosts, set bindingInformation="*:80:". Use any port you want; 80 is just convenient. To resolve all hosts you'll need to run Visual Studio as an administrator.

• This is exactly what I did, was about to write this up, but you beat me to the punch :-) As a side note, if you want to let other devs see your IIS you may want to do something like "netsh http add urlacl url=dev.example.com:80 user=everyone" Apr 7, 2011 at 16:00
• Also, if you want the site to resolve for all hosts, you can use bindingInformation="*:80:". I use this so localtunnel.com works. Jan 2, 2012 at 0:36
• I was getting an access denied error. Running VS 2012 as admin fixed it. Oct 6, 2012 at 21:28
• What version of IIS Express is this? I'm running VS2012 (which I believe is IIS Express 8.0), and when I click the "Use Local IIS Server" radio button under the "Servers" area of the Web properties page, check the Use IIS Express checkbox, and then try to put in "mydev.example.com" and hit the "Create Virtual Directory" button, I get this error: "Unable to create the virtual directory. Cannot create the Web site mydev.example.com. You must specify 'localhost' for the server name." May 1, 2013 at 15:40
• This is complicated. One must change settings in several different places. It is easy to make a real mess. Is there really no simpler way? May 27, 2013 at 8:10

---

For Visual Studio 2015 the steps in the answers above apply, but the applicationhost.config file is in a new location. In your solution folder follow this path (this is confusing if you upgraded, because you would have TWO versions of applicationhost.config on your machine):

    \.vs\config

Within that folder you will see your applicationhost.config file. Alternatively, you could just search your solution folder for the .config file and find it that way.
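Several answers here involve adding an entry like `127.0.0.1 dev.example.com` to the hosts file by hand. If you do this often, it can be scripted. Below is a minimal Python sketch of the idea; the function name, path, and domain are examples of mine, not part of any answer, and writing to the real hosts file requires administrator rights.

```python
def add_hosts_entry(hosts_path, ip, hostname):
    """Append an 'ip hostname' line to a hosts file unless the hostname is already mapped."""
    with open(hosts_path, "r", encoding="utf-8") as f:
        lines = f.read().splitlines()
    for line in lines:
        # Strip trailing comments, then check whether the hostname already appears
        parts = line.split("#", 1)[0].split()
        if len(parts) >= 2 and hostname in parts[1:]:
            return False  # entry already exists, nothing to do
    with open(hosts_path, "a", encoding="utf-8") as f:
        f.write(f"\n{ip} {hostname}\n")
    return True

# Example usage (on Windows the real file lives at C:\Windows\System32\drivers\etc\hosts):
# add_hosts_entry(r"C:\Windows\System32\drivers\etc\hosts", "127.0.0.1", "dev.example.com")
```

Run from an elevated prompt, this saves a trip to Notepad-as-administrator each time you add a development domain.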
---

I personally used the following configuration, with the following in my hosts file:

    127.0.0.1 jam.net
    127.0.0.1 www.jam.net

And the following in my applicationhost.config file:

    <site name="JBN.Site" id="2">
      <application path="/" applicationPool="Clr4IntegratedAppPool">
        <virtualDirectory path="/" physicalPath="C:\Dev\Jam\shoppingcart\src\Web\JBN.Site" />
      </application>
      <bindings>
        <binding protocol="http" bindingInformation="*:49707:" />
        <binding protocol="http" bindingInformation="*:49707:localhost" />
      </bindings>
    </site>

Remember to run your instance of Visual Studio 2015 as an administrator! If you don't want to do this every time, I recommend this: How to Run Visual Studio as Administrator by default.

I hope this helps somebody; I had issues when trying to upgrade to Visual Studio 2015 and realized that none of my configurations were carried over.

• The crucial part is to add the second *:49707: binding instead of replacing the default *:49707:localhost as suggested in many other answers. And you have to run VS as an administrator, indeed (sic!). Thank you. Jun 10, 2016 at 8:32
• You could also use a URL like http://jam.127.0.0.1.xip.io/ if you don't want to modify your hosts file. Plus, any collaborators won't need to modify theirs. See xip.io. Aug 11, 2016 at 16:12
• It is worth noting that the ".vs" folder is hidden and should be made visible first. Mar 15, 2017 at 10:53
• I have confirmed that this works in Visual Studio 2017 as well. Interestingly, I did not need both <binding> elements (just the first one, without "localhost"). Visual Studio, however, certainly does need to be running as Administrator, which is sort of annoying. Apr 30, 2017 at 23:13
• I just wish there was a way for those of us who are not allowed to run anything administratively to do this. Employers/clients who blanket ban admin privilege without any thought as to how they might safely enable this are incredibly short sighted. Jul 14, 2017 at 8:41

---

When using Visual Studio 2012 with IIS Express, changing an existing binding does not work permanently. (It may work until you close VS, but after that, things get really messed up.) The key is keeping the existing localhost binding and adding a new binding after it. Unless you're running as administrator, you'll also need to run netsh add urlacl (to give yourself permission to run a non-localhost site as a standard user). If you want to allow any host name, the full process is as follows:

1. Create your web application, and find out what port it is using (see project properties, Web tab, Project Url).
2. From an administrator prompt, run the following commands (replacing portnumber with the port number you found in step 1):

        netsh http add urlacl url="http://*:portnumber/" user=everyone
        netsh http add urlacl url="http://localhost:portnumber/" user=everyone

   You can also use your user name (DOMAIN\USER) instead of everyone for better security.
3. Open applicationhost.config (usually under My Documents\IIS Express\config), and find the element with your port number.
4. Add one more binding with the host name you want (in this case, *). For example:

        <site name="MvcApplication1" id="2">
          <application path="/" applicationPool="Clr4IntegratedAppPool">
            <virtualDirectory path="/" physicalPath="C:\sites\MvcApplication1" />
          </application>
          <bindings>
            <binding protocol="http" bindingInformation="*:12853:localhost" />
            <binding protocol="http" bindingInformation="*:12853:*" />
          </bindings>
        </site>

Note that if you want to open up all host names (*), you'll need two netsh commands (one for * and one for localhost). If you only want to open up a specific host name, you don't strictly need the second netsh command (localhost); just the one with your specific host name is sufficient.

• Did the same (1st approach); when browsing to mycustomhost:myportnr I get "Service Unavailable". VS2012. I can't try the 2nd as there is no IIS Express under "My Documents", and the applicationhost.config I found in c:\Program Files\IIS Express had no config for my web application... Apr 23, 2014 at 6:44
• With VS2013, I find that this configuration only works if I then run as admin! (Running as non-admin, IIS Express seems to ignore the 2nd binding: if I open its UI from the notification icon, IIS shows my site as running, but only shows the first binding, and I can only use localhost... If I'm running as admin, it shows both bindings in the UI, and both work.) I've added the urlacls with netsh, but this appears to make no difference; IIS Express seems not to even try to bind anything other than localhost if it's not elevated. Sep 29, 2014 at 6:41
• Adding a record with the * solved the "Service Unavailable" for me. Nov 2, 2017 at 11:26
• What about a scenario where you need to use HTTPS on port 443? May 9, 2018 at 16:16
• @Kixoka I am trying to run HTTPS. I managed to get it working, but the browser complains that the certificate does not match the domain. I get past that, but all resources then fail to load, so no images, scripts or CSS. Aug 12, 2018 at 8:32

---

The invalid hostname error indicates that the actual site you configured in the IIS Express configuration file is (most likely) not running. IIS Express doesn't have a process model like IIS does. For your site to run, it needs to be started explicitly (either by opening and accessing it from WebMatrix, or from the command line by calling iisexpress.exe (from its installation directory) with the /site parameter).

In general, the steps to allow fully qualified DNS names to be used for local access are as follows. Let's use your example of the DNS name dev.example.com:

1. Edit the %windir%\system32\drivers\etc\hosts file to map dev.example.com to 127.0.0.1 (admin privilege required). If you control the DNS server (as in Nick's case) then the DNS entry is sufficient and this step is not needed.
2. If you access the internet through a proxy, make sure dev.example.com will not be forwarded to the proxy: put it on the exception list in your browser (for IE that would be Tools/Internet Options/Connections/Lan Settings, then go to Proxy Server/Advanced and put dev.example.com on the exception list).
3. Configure the IIS Express binding for your site (e.g. Site1) to include dev.example.com. Administrative privilege will be needed to use the binding. Alternatively, a one-time URL reservation can be made with http.sys using netsh http add urlacl url=http://dev.example.com:<port>/ user=<user_name>
4. Start iisexpress /site:Site1, or open Site1 in WebMatrix.

• Hi Jaro, thanks for the help, but 127.0.0.1 is set in my DNS, not the hosts file, so that all the developers have the same access. Second, I cannot get it to start debugging in VS unless the binding is set as "*:1234:localhost", which is a totally useless setup, because it is bound to all IP addresses on my machine but only accepts host headers of "localhost". Why didn't they just do "127.0.0.1:1234:" and avoid all this crap? Host headers are not needed as long as the IP and port resolve. And running as Admin is just a huge pain in the ass for a large team. What a disaster IIS Express is. Jan 18, 2011 at 0:56
• Also, just as a reference: Bad Request - Invalid Hostname means that the hostname I am trying to use cannot resolve, because the IIS Express team in their infinite wisdom decided to specifically call out "localhost" as the only host header that would work. And VS won't let you run it any other way than "*:1234:localhost". Jan 18, 2011 at 0:58
• I hear you on the 127.0.0.1 issue. The "localhost" in the binding is definitely a pain, since it doesn't include traffic for 127.0.0.1. The design is limited by today's contract of http.sys (the http protocol handler), which only recognizes "localhost" for non-administrative use. The admin-only requirement can be mitigated by a one-time URL reservation with http.sys (will update the information above). Please let me know your exact use with Visual Studio (is it VS2010 with SP1 Beta?). Let's try to work on this some more and (hopefully) settle the issue. Jan 18, 2011 at 2:49
• Note: if you see netsh.exe error 87 ("the parameter is incorrect"), it may be because you omitted the URI path (/) after your hostname. Nov 28, 2012 at 22:58

---

On my WebMatrix IIS Express install, changing from "*:80:localhost" to "*:80:custom.hostname" didn't work ("Bad Hostname", even with proper etc\hosts mappings), but "*:80:" did work, and with none of the additional steps required by the other answers here. Note that "*:80:*" won't do it; leave off the second asterisk.

• Uuuum, so "*:80:*" worked for me, but then I restarted my computer and it suddenly wouldn't work until I changed it to "*:80:", and all was well! Thanks. Jul 13, 2013 at 4:59
• Finally!! I also followed all the steps elsewhere, and removing the * at the end made it work, thanks! Jun 13, 2014 at 13:04
• Just here to say that this worked for me. I tried the accepted answer and reverted all changes down to what you said, and all works great. I am using VS2013 Update 4. Sep 29, 2015 at 14:51
• This seems to work perfectly. When using VS2017, I'm not sure what to place in the Project Url to have it write that in the applicationhost.config file. Is it possible to set through the VS IDE? – Jim Jan 23, 2018 at 17:12

---

I was trying to integrate the public IP address into my workflow and these answers didn't help (I like to use the IDE as the IDE). But the above led me to the solution (after about 2 hours of beating my head against a wall to get this to integrate with Visual Studio 2012 / Windows 8); here's what ended up working for me.
applicationhost.config, generated by Visual Studio under C:\Users\usr\Documents\IISExpress\config:

    <site name="MySite" id="1">
      <application path="/" applicationPool="Clr4IntegratedAppPool">
        <virtualDirectory path="/" physicalPath="C:\Users\usr\Documents\Visual Studio 2012\Projects\MySite" />
      </application>
      <bindings>
        <binding protocol="http" bindingInformation="*:8081:localhost" />
        <binding protocol="http" bindingInformation="*:8082:localhost" />
        <binding protocol="http" bindingInformation="*:8083:192.168.2.102" />
      </bindings>
    </site>

• Set IIS Express to run as Administrator so that it can bind to outside addresses (not just localhost).
• Run Visual Studio as an Administrator so that it can start the process as an administrator, allowing the binding to take place.

The net result is that you can browse to 192.168.2.102 (in my case) and test, for instance, in an Android emulator. I really hope this helps someone else, as this was definitely an irritating process for me. Apparently it is a security feature, which I'd love to see disabled.

---

Like Jessa Flint above, I didn't want to manually edit .vs\config\applicationhost.config, because I wanted the changes to persist in source control. I also didn't want to have a separate batch file. I'm using VS 2015. Project Properties → Build Events → Pre-build event command line:

    ::The following configures IIS Express to bind to any address at the specified port
    ::remove the binding if it already exists
    "%programfiles%\IIS Express\appcmd.exe" set site "MySolution.Web" /-bindings.[protocol='http',bindingInformation='*:1167:'] /apphostconfig:"$(SolutionDir).vs\config\applicationhost.config"
    ::add the binding
    "%programfiles%\IIS Express\appcmd.exe" set site "MySolution.Web" /+bindings.[protocol='http',bindingInformation='*:1167:'] /apphostconfig:"$(SolutionDir).vs\config\applicationhost.config"

Just make sure you change the port number to your desired port.

• One minor addition here: make sure to create rules to open that port in your Windows Firewall. Then it worked like a charm! Aug 31, 2016 at 20:38
• I like your answer because editing the .vs\config\applicationhost.config is not feasible when multiple developers run IIS Express on their own machines. Thank you! Mar 7, 2017 at 2:02

---

The up-voted answer is valid, and this information helped me quite a bit. I know this topic has been discussed before, but I wanted to add some additional input. People are saying that you must "manually edit" the application.config file in the user's IISExpress/Config directory. This was a big issue for me because I wanted to distribute the configuration via source control to various developers. What I found is that you can automate the updating of this file using the "C:\Program Files\IIS Express\appcmd.exe" program. It took a while to find the control parameters, but I'll share my findings here. Essentially you can make a .bat file that runs both the NETSH command and APPCMD.EXE (and perhaps swaps out a hosts file if you like) to make host-header configuration easy with IIS Express. Your install bat file would look something like this:

    netsh http add urlacl url=http://yourcustomdomain.com:80/ user=everyone
    "C:\Program Files\IIS Express\appcmd.exe" set site "MyApp.Web" /+bindings.[protocol='http',bindingInformation='*:80:yourcustomdomain.com']

I also make an "uninstall" bat file that cleans up these bindings (because oftentimes I'm just faking out DNS so that I can work on code that is host-name sensitive):

    netsh http delete urlacl url=http://yourcustomdomain.com:80/
    "C:\Program Files\IIS Express\appcmd.exe" set site "MyApp.Web" /-bindings.[protocol='http',bindingInformation='*:80:yourcustomdomain.com']

I hope this information is helpful to someone. It took me a bit to uncover.

---

### This method has been tested and works with ASP.NET Core 3.1 and Visual Studio 2019

In .vs\PROJECTNAME\config\applicationhost.config, change "*:44320:localhost" to "*:44320:*":

    <bindings>
      <binding protocol="http" bindingInformation="*:5737:localhost" />
      <binding protocol="https" bindingInformation="*:44320:*" />
    </bindings>

Both links work. Now, if you want the app to work with a custom domain, just add the following line to the hosts file (C:\Windows\System32\drivers\etc\hosts):

    127.0.0.1 customdomain

Now: https://customdomain:44320

NOTE: If your app works without SSL, change the protocol="http" part instead.

• I get "Unable to connect to web server 'IIS Express'" because of that asterisk. Sep 24, 2020 at 11:17
• You unfortunately must run Visual Studio as administrator for this to work. I have tried the netsh thing and other stuff, and the only thing that has worked is running Visual Studio as administrator. Oct 20, 2020 at 15:28

---

Following Jaro's advice, I was able to get this working under Windows XP and IIS Express (installed via WebMatrix) with a small modification, and was not limited to only localhost. It's just a matter of setting the bindings correctly.

1. Use WebMatrix to create a new site from the folder in your web application root.
2. Close WebMatrix.
3. Open %USERPROFILE%\My Documents\IISExpress\config\applicationhost.config (Windows XP; Vista and 7 paths will be similar) and edit the site definition in the <sites> config block to be along the lines of the following:

        <site name="DevExample" id="997005936">
          <application path="/" applicationPool="Clr2IntegratedAppPool">
            <virtualDirectory path="/" physicalPath="C:\path\to\application\root" />
          </application>
          <bindings>
            <binding protocol="http" bindingInformation="*:80:dev.example.com" />
          </bindings>
          <applicationDefaults applicationPool="Clr2IntegratedAppPool" />
        </site>

If running MVC, keep the applicationPool set to one of the "Integrated" options.

---

David's solution is good, but I found that <script>alert(document.domain);</script> in the page still alerts "localhost", because the Project Url is still localhost even after it has been overridden with http://dev.example.com. Another issue I ran into is that it alerts me that port 80 is already in use, even though I have disabled Skype's use of port 80 as recommended by David Murdoch. So I figured out another solution that is much easier:

1. Run Notepad as administrator, open C:\Windows\System32\drivers\etc\hosts, add 127.0.0.1 mydomain, and save the file.
2. Open the web project with Visual Studio 2013 (note: it must also run as administrator), right-click the project -> Properties -> Web. Let's suppose the Project Url under the "IIS Express" option is http://localhost:33333/; change it to http://mydomain:33333/. Note: after this change, you should neither click the "Create Virtual Directory" button to the right of the Project Url box nor click the Save button in Visual Studio, as they won't succeed. You can save your settings after the next step.
3. Open %USERPROFILE%\My Documents\IISExpress\config\applicationhost.config, search for "33333:localhost", update it to "33333:mydomain", and save the file.
4. Save your settings as mentioned in step 2.
5. Right-click a web page in Visual Studio and click "View in Browser". Now the page will open under http://mydomain:33333/, and <script>alert(document.domain);</script> in the page will alert "mydomain".

Note: the port number above is assumed to be 33333; change it to the port number set by your Visual Studio.

Post edited: Today I tried with another domain name and got the following error: "Unable to launch the IIS Express Web server. Failed to register URL... Access is denied. (0x80070005)." I exited IIS Express by right-clicking the IIS Express icon in the corner of the Windows task bar, re-started Visual Studio as administrator, and the issue was gone.

• Now, this is the answer that should be considered the proper solution! Make sure you edit the applicationhost.config located in .vs/config if you are using Visual Studio 2015. Sep 2, 2016 at 14:48

---

I tried all of the above; nothing worked. What resolved the issue was adding IPv6 bindings in the hosts file. In step 5 of @David Murdoch's answer, add two lines instead of one, i.e.:

    127.0.0.1 dev.example.com
    ::1 dev.example.com

I figured it out by running ping localhost from the command line, which used to return:

    Reply from 127.0.0.1: bytes=32 time<1ms TTL=128

Instead, it now returns:

    Reply from ::1: time<1ms

I don't know why, but for some reason IIS Express started using IPv6 instead of IPv4.

---

Just in case someone may need it... My requirements were:

• SSL enabled
• Custom domain
• Running on the (default) port 443

To set up this URL in IIS Express: http://my.customdomain.com, I used the following settings:

    Project Url: http://localhost:57400
    Start URL: http://my.customdomain.com

/.vs/{solution-name}/config/applicationhost.config settings:

    <site ...>
      <application>
        ...
      </application>
      <bindings>
        <binding protocol="http" bindingInformation="*:57400:" />
        <binding protocol="https" bindingInformation="*:443:my.customdomain.com" />
      </bindings>
    </site>

• My IIS Express website fails to launch after making these changes. Can you share your launchSettings.json file? Nov 4, 2021 at 23:36
• My project also fails. Aug 2 at 9:47
• @Justin, mine was an ASP.NET Web Forms app using .NET Framework 4.x, so I didn't have a launchSettings.json. – dan Aug 3 at 4:36

---

I was using iisexpress-proxy (from npm) for this: https://github.com/icflorescu/iisexpress-proxy

• Does this still work for you? I ran into issues having to install openssl just to get it to run, and even then it gives errors about keys being too small. Nov 5, 2021 at 0:11

---

Leaving this here just in case anyone needs it... I needed custom domains for a WordPress Multisite setup in IIS Express, but nothing worked until I ran WebMatrix/Visual Studio as an Administrator. Then I was able to bind subdomains to the same application:

    <bindings>
      <binding protocol="http" bindingInformation="*:12345:localhost" />
      <binding protocol="http" bindingInformation="*:12345:whatever.localhost" />
    </bindings>

Then going to http://whatever.localhost:12345/ will run.
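Whichever answer you follow, most of them start with a hosts-file entry mapping your custom domain to the loopback address. A quick sanity check that the mapping is actually in effect can save debugging time before you blame IIS Express bindings. This is a small Python sketch of my own; the domain name is an example:

```python
import socket

def resolves_to_loopback(hostname):
    """Return True if hostname resolves to the IPv4 loopback address 127.0.0.1."""
    try:
        return socket.gethostbyname(hostname) == "127.0.0.1"
    except socket.gaierror:
        return False  # name does not resolve at all

# Example: before starting IIS Express, confirm the hosts entry took effect
# print(resolves_to_loopback("dev.example.com"))
```

If this returns False for your custom domain, fix the hosts file (or DNS) first; no binding configuration will help until the name resolves to 127.0.0.1.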
https://zbmath.org/?q=an%3A1204.03039
zbMATH — the first resource for mathematics

Enumerations and completely decomposable torsion-free abelian groups. (English) Zbl 1204.03039

The main results of this paper are the following. For any family $R$ of finite sets there exists a completely decomposable torsion-free abelian group $G_{R}$ of infinite rank such that $G_{R}$ has an $X$-computable copy if and only if $R$ has a $\Sigma_{2}^{X}$-computable enumeration (Theorem 4). There exists a completely decomposable torsion-free abelian group $G$ of infinite rank such that $G$ has an $X$-decomposable copy if and only if $X^{\prime}>_{T}0^{\prime}$ (Theorem 5). As a consequence of these results, it is proved that there exists a completely decomposable torsion-free abelian group $G$ of infinite rank with no jump degree.

MSC:
03C57 Computable structure theory, computable model theory
03D45 Theory of numerations, effectively presented structures
20K20 Torsion-free groups, infinite rank
https://en.wikipedia.org/wiki/Bijective
# Bijection

A bijective function, f: X → Y, where set X is {1, 2, 3, 4} and set Y is {A, B, C, D}. For example, f(1) = D.

In mathematics, a bijection, bijective function, one-to-one correspondence, or invertible function, is a function between the elements of two sets, where each element of one set is paired with exactly one element of the other set, and each element of the other set is paired with exactly one element of the first set. There are no unpaired elements. In mathematical terms, a bijective function f: X → Y is a one-to-one (injective) and onto (surjective) mapping of a set X to a set Y.[1][2] The term one-to-one correspondence must not be confused with one-to-one function (a.k.a. injective function) (see figures).

A bijection from the set X to the set Y has an inverse function from Y to X. If X and Y are finite sets, then the existence of a bijection means they have the same number of elements. For infinite sets, the picture is more complicated, leading to the concept of cardinal number—a way to distinguish the various sizes of infinite sets.

A bijective function from a set to itself is also called a permutation, and the set of all permutations of a set forms a symmetry group.

Bijective functions are essential to many areas of mathematics including the definitions of isomorphism, homeomorphism, diffeomorphism, permutation group, and projective map.

## Definition

For a pairing between X and Y (where Y need not be different from X) to be a bijection, four properties must hold:

1. each element of X must be paired with at least one element of Y,
2. no element of X may be paired with more than one element of Y,
3. each element of Y must be paired with at least one element of X, and
4. no element of Y may be paired with more than one element of X.

Satisfying properties (1) and (2) means that a pairing is a function with domain X.
It is more common to see properties (1) and (2) written as a single statement: Every element of X is paired with exactly one element of Y. Functions which satisfy property (3) are said to be "onto Y" and are called surjections (or surjective functions). Functions which satisfy property (4) are said to be "one-to-one functions" and are called injections (or injective functions).[3] With this terminology, a bijection is a function which is both a surjection and an injection, or using other words, a bijection is a function which is both "one-to-one" and "onto".[1][4]

Bijections are sometimes denoted by a two-headed rightwards arrow with tail (U+2916 ⤖ RIGHTWARDS TWO-HEADED ARROW WITH TAIL), as in f : X ⤖ Y. This symbol is a combination of the two-headed rightwards arrow (U+21A0 ↠ RIGHTWARDS TWO HEADED ARROW), sometimes used to denote surjections, and the rightwards arrow with a barbed tail (U+21A3 ↣ RIGHTWARDS ARROW WITH TAIL), sometimes used to denote injections.

## Examples

### Batting line-up of a baseball or cricket team

Consider the batting line-up of a baseball or cricket team (or any list of all the players of any sports team where every player holds a specific spot in a line-up). The set X will be the players on the team (of size nine in the case of baseball) and the set Y will be the positions in the batting order (1st, 2nd, 3rd, etc.) The "pairing" is given by which player is in what position in this order. Property (1) is satisfied since each player is somewhere in the list. Property (2) is satisfied since no player bats in two (or more) positions in the order. Property (3) says that for each position in the order, there is some player batting in that position, and property (4) states that two or more players are never batting in the same position in the list.

### Seats and students of a classroom

In a classroom there are a certain number of seats. A bunch of students enter the room and the instructor asks them to be seated.
After a quick look around the room, the instructor declares that there is a bijection between the set of students and the set of seats, where each student is paired with the seat they are sitting in. What the instructor observed in order to reach this conclusion was that:

1. Every student was in a seat (there was no one standing),
2. No student was in more than one seat,
3. Every seat had someone sitting there (there were no empty seats), and
4. No seat had more than one student in it.

The instructor was able to conclude that there were just as many seats as there were students, without having to count either set.

## More mathematical examples and some non-examples

• For any set X, the identity function 1X: X → X, 1X(x) = x is bijective.
• The function f: R → R, f(x) = 2x + 1 is bijective, since for each y there is a unique x = (y − 1)/2 such that f(x) = y. More generally, any linear function over the reals, f: R → R, f(x) = ax + b (where a is non-zero) is a bijection. Each real number y is obtained from (or paired with) the real number x = (y − b)/a.
• The function f: R → (−π/2, π/2), given by f(x) = arctan(x) is bijective, since each real number x is paired with exactly one angle y in the interval (−π/2, π/2) so that tan(y) = x (that is, y = arctan(x)). If the codomain (−π/2, π/2) was made larger to include an integer multiple of π/2, then this function would no longer be onto (surjective), since there is no real number which could be paired with the multiple of π/2 by this arctan function.
• The exponential function, g: R → R, g(x) = e^x, is not bijective: for instance, there is no x in R such that g(x) = −1, showing that g is not onto (surjective). However, if the codomain is restricted to the positive real numbers $\mathbb{R}^{+} \equiv (0, +\infty)$, then g would be bijective; its inverse (see below) is the natural logarithm function ln.
• The function h: R → R⁺, h(x) = x² is not bijective: for instance, h(−1) = h(1) = 1, showing that h is not one-to-one (injective). However, if the domain is restricted to $\mathbb{R}_{0}^{+} \equiv [0, +\infty)$, then h would be bijective; its inverse is the positive square root function.

## Inverses

A bijection f with domain X (indicated by f: X → Y in functional notation) also defines a converse relation starting in Y and going to X (by turning the arrows around). The process of "turning the arrows around" for an arbitrary function does not, in general, yield a function, but properties (3) and (4) of a bijection say that this inverse relation is a function with domain Y. Moreover, properties (1) and (2) then say that this inverse function is a surjection and an injection, that is, the inverse function exists and is also a bijection. Functions that have inverse functions are said to be invertible. A function is invertible if and only if it is a bijection.

Stated in concise mathematical notation, a function f: X → Y is bijective if and only if it satisfies the condition: for every y in Y there is a unique x in X with y = f(x).

Continuing with the baseball batting line-up example, the function that is being defined takes as input the name of one of the players and outputs the position of that player in the batting order. Since this function is a bijection, it has an inverse function which takes as input a position in the batting order and outputs the player who will be batting in that position.

## Composition

The composition $g \circ f$ of two bijections f: X → Y and g: Y → Z is a bijection, whose inverse is given by $(g \circ f)^{-1} = f^{-1} \circ g^{-1}$.

A bijection composed of an injection (left) and a surjection (right).
Conversely, if the composition $g \circ f$ of two functions is bijective, it only follows that f is injective and g is surjective.

## Bijections and cardinality

If X and Y are finite sets, then there exists a bijection between the two sets X and Y if and only if X and Y have the same number of elements. Indeed, in axiomatic set theory, this is taken as the definition of "same number of elements" (equinumerosity), and generalising this definition to infinite sets leads to the concept of cardinal number, a way to distinguish the various sizes of infinite sets.

## Properties

• A function f: R → R is bijective if and only if its graph meets every horizontal and vertical line exactly once.
• If X is a set, then the bijective functions from X to itself, together with the operation of functional composition (∘), form a group, the symmetric group of X, which is denoted variously by S(X), S_X, or X! (X factorial).
• Bijections preserve cardinalities of sets: for a subset A of the domain with cardinality |A| and subset B of the codomain with cardinality |B|, one has the following equalities: |f(A)| = |A| and |f⁻¹(B)| = |B|.
• If X and Y are finite sets with the same cardinality, and f: X → Y, then the following are equivalent:
  1. f is a bijection.
  2. f is a surjection.
  3. f is an injection.
• For a finite set S, there is a bijection between the set of possible total orderings of the elements and the set of bijections from S to S. That is to say, the number of permutations of elements of S is the same as the number of total orderings of that set—namely, n!.

## Bijections and category theory

Bijections are precisely the isomorphisms in the category Set of sets and set functions. However, the bijections are not always the isomorphisms for more complex categories.
For example, in the category Grp of groups, the morphisms must be homomorphisms since they must preserve the group structure, so the isomorphisms are group isomorphisms which are bijective homomorphisms.

## Generalization to partial functions

The notion of one-to-one correspondence generalizes to partial functions, where they are called partial bijections, although partial bijections are only required to be injective. The reason for this relaxation is that a (proper) partial function is already undefined for a portion of its domain; thus there is no compelling reason to constrain its inverse to be a total function, i.e. defined everywhere on its domain. The set of all partial bijections on a given base set is called the symmetric inverse semigroup.[5]

Another way of defining the same notion is to say that a partial bijection from A to B is any relation R (which turns out to be a partial function) with the property that R is the graph of a bijection f: A′ → B′, where A′ is a subset of A and B′ is a subset of B.[6]

When the partial bijection is on the same set, it is sometimes called a one-to-one partial transformation.[7] An example is the Möbius transformation simply defined on the complex plane, rather than its completion to the extended complex plane.[8]

## Notes

1. ^ "The Definitive Glossary of Higher Mathematical Jargon — One-to-One Correspondence". Math Vault. 2019-08-01. Retrieved 2019-12-07.
2. ^ "Injective, Surjective and Bijective". www.mathsisfun.com. Retrieved 2019-12-07.
3. ^ There are names associated to properties (1) and (2) as well. A relation which satisfies property (1) is called a total relation and a relation satisfying (2) is a single valued relation.
4. ^ "Bijection, Injection, And Surjection | Brilliant Math & Science Wiki". brilliant.org. Retrieved 2019-12-07.
5. ^ Christopher Hollings (16 July 2014). Mathematics across the Iron Curtain: A History of the Algebraic Theory of Semigroups. American Mathematical Society. p. 251.
ISBN 978-1-4704-1493-1.
6. ^ Francis Borceux (1994). Handbook of Categorical Algebra: Volume 2, Categories and Structures. Cambridge University Press. p. 289. ISBN 978-0-521-44179-7.
7. ^ Pierre A. Grillet (1995). Semigroups: An Introduction to the Structure Theory. CRC Press. p. 228. ISBN 978-0-8247-9662-4.
8. ^ John Meakin (2007). "Groups and semigroups: connections and contrasts". In C.M. Campbell, M.R. Quick, E.F. Robertson, G.C. Smith (eds.). Groups St Andrews 2005 Volume 2. Cambridge University Press. p. 367. ISBN 978-0-521-69470-4. Preprint citing Lawson, M. V. (1998). "The Möbius Inverse Monoid". Journal of Algebra. 200 (2): 428. doi:10.1006/jabr.1997.7242.
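For finite sets, the four defining properties, the inverse, and the composition rule above can all be checked mechanically. The sketch below is illustrative only; representing a function as a Python dict is our convention here, not the article's:

```python
def is_bijection(f, X, Y):
    """Check the four pairing properties for a finite function f: X -> Y."""
    if set(f) != set(X):                 # (1) + (2): defined exactly on X
        return False
    values = list(f.values())
    if len(set(values)) != len(values):  # (4): injective, no value hit twice
        return False
    return set(values) == set(Y)         # (3): surjective, all of Y is hit

def inverse(f):
    """Turn the arrows around; yields a function exactly when f is a bijection."""
    return {y: x for x, y in f.items()}

def compose(g, f):
    """g o f: apply f first, then g."""
    return {x: g[f[x]] for x in f}

# The example from the figure: X = {1, 2, 3, 4}, Y = {A, B, C, D}, f(1) = D.
f = {1: 'D', 2: 'B', 3: 'C', 4: 'A'}
assert is_bijection(f, {1, 2, 3, 4}, {'A', 'B', 'C', 'D'})
assert inverse(f)['D'] == 1

# (g o f)^-1 == f^-1 o g^-1, as stated in the Composition section.
g = {'A': 'w', 'B': 'x', 'C': 'y', 'D': 'z'}
assert inverse(compose(g, f)) == compose(inverse(f), inverse(g))
```

The injectivity test and the surjectivity test are independent here, matching the article's remark that for finite sets of equal cardinality either one alone already implies a bijection.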
https://eprint.iacr.org/2022/1328
### Revisiting Nearest-Neighbor-Based Information Set Decoding

##### Abstract

The syndrome decoding problem lies at the heart of code-based cryptographic constructions. Information Set Decoding (ISD) algorithms are commonly used to assess the security of these systems. The most efficient ISD algorithms rely heavily on nearest neighbor search techniques. However, the runtime result of the fastest known ISD algorithm by Both-May (PQCrypto '17) was recently challenged by Carrier et al. (Asiacrypt '22), who themselves introduce a new technique, called RLPN decoding, which yields improvements over ISD for codes with small rates $\frac{k}{n}\leq 0.3$. In this work we first revisit the Both-May algorithm, giving a clean exposition and a corrected analysis. We confirm the claim by Carrier et al. that the initial analysis is flawed. However, we find that the algorithm still (slightly) improves on time complexity and significantly improves on memory complexity over previous algorithms. Our first main contribution is therefore to set the correct baseline for further improvements. As a second main contribution we then show how to improve on the Both-May algorithm, lowering the worst-case running time in the full distance decoding setting to $2^{0.0948n}$. We obtain even higher time complexity gains for small rates, shifting the break-even point with RLPN decoding to rate $\frac{k}{n}=0.25$. Moreover, we significantly improve on memory for all rates $\frac{k}{n}<0.5$. We obtain our improvement by introducing a novel technique to combine the list construction step and the list filtering step commonly applied by ISD algorithms. Therefore we treat the nearest neighbor routine in a non-blackbox fashion, which allows us to embed the filtering into the nearest neighbor search. In this context we introduce the fixed-weight nearest neighbor problem, and propose a first algorithm to solve this problem.
Besides resulting in an improved decoding algorithm, this opens the direction for further improvements, since any faster algorithm for solving this fixed-weight nearest neighbor variant is likely to lead to further improvements of our ISD algorithm.

Category: Attacks and cryptanalysis. Publication info: Preprint (revised 2023-02-17).

Keywords: representation technique, syndrome decoding, nearest neighbor search, code-based cryptography.

Author: Andre Esser. Short URL: https://ia.cr/2022/1328
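For readers new to the area, the underlying syndrome decoding problem can be stated concretely: given a parity-check matrix H over GF(2), a syndrome s, and a weight bound w, find an error vector e of weight at most w with He⊤ = s. The sketch below is only the naive exponential-time baseline that ISD algorithms (Prange, Both-May, and the variants discussed above) are designed to beat; it is not the algorithm of this paper:

```python
from itertools import combinations

def syndrome(H, e):
    """H: list of rows over GF(2); e: error vector. Returns H e^T mod 2."""
    return tuple(sum(h * x for h, x in zip(row, e)) % 2 for row in H)

def brute_force_decode(H, s, w):
    """Exhaustive search for an error of weight <= w with syndrome s.

    Cost grows like sum_{i<=w} C(n, i) -- exponential in w. ISD algorithms
    exist precisely to beat this baseline.
    """
    n = len(H[0])
    for weight in range(w + 1):
        for support in combinations(range(n), weight):
            e = [1 if i in support else 0 for i in range(n)]
            if syndrome(H, e) == tuple(s):
                return e
    return None

# Toy instance (made up for illustration): a 2 x 4 parity-check matrix.
H = [[1, 0, 1, 0],
     [0, 1, 1, 1]]
e = [0, 0, 1, 0]
s = syndrome(H, e)                      # the syndrome of a weight-1 error
assert brute_force_decode(H, s, w=1) == e
```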
http://math.stackexchange.com/questions/295961/graphing-absolute-value-functions/295975
# Graphing Absolute Value Functions

Given: $y = -|2x + 1|-3$

I came up with the points $(1, -7)$, $(0, -5)$, $(-1, -3)$, $(-2, -1)$, $(-3, 1)$.

If you were to graph this, it would turn out to be an entirely straight line. This is an absolute value function, so should it not turn at the vertex? What am I missing?

That function cannot return a positive result. Check your computation for $x=-3$. Your value for $x=-2$ is also wrong. – Brian M. Scott Feb 6 '13 at 2:16

Hint: $$y=-|2x+1|-3=\begin{cases}-2x-1-3=-2x-4&\text{, if}\;\;\;2x+1\ge 0\Longleftrightarrow x\ge-\frac{1}{2}\\{}\\\;\;\;2x+1-3=\;\;\;2x-2&\text{, if}\;\;\;2x+1<0\Longleftrightarrow x<-\frac{1}{2}\end{cases}$$ so you can see all your values are wrong.

$$y = -|2x + 1|-3$$ This means: $$y=-|2x+1|-3=\begin{cases}-2x-1-3=-2x-4,&\text{ if}\;\; 2x+1\ge 0\iff x\ge-\frac12\\{}\\ \;\; 2x+1-3=\;\;\;2x-2,&\text{ if}\;\;2x+1<0\iff x<-\frac12\end{cases}$$

Note: $\,y \leq -3\,$ for all values $\,x\,.\,$ When $\,x = -\dfrac12,\; y = -3.\;$ So the "vertex" is at the point $\left(-\frac12, -3\right)$, and as you suggested in your question, the graph of the function $y = -|2x + 1|-3\,$ does, indeed, "turn" at this point: when $x \lt -\frac12,\;y$ is increasing (approaching $-3$ from the "left"). When $\,x \gt -\frac12,\;y\,$ is decreasing.

If $f(x)=-|2x+1|-3$ then we can compute points on the line: $$\tag{Points to plot}$$ $$x = 2 \implies f(2) = -|2\cdot 2 +1| - 3 = -|5| - 3 = -8;\tag{2, -8}$$ $$x=1 \implies f(1)=-|2\cdot 1+1|-3=-|3|-3=-6;\tag{1, -6}$$ $$x=0\implies f(0)=-|2\cdot 0+1|-3=-|1|-3 = -4;\quad\tag{0,-4}$$ $$x=-1\implies f(-1)=-|2\cdot(- 1)+1|-3=-|-1|-3=-4\tag{-1, -4}$$ $$x=-2 \implies f(-2)=-|2\cdot(-2)+1|-3=-|-3|-3=-6\tag{-2, -6}$$ $$x = -3\implies f(-3)=-|2\cdot (-3)+1|-3=-|-5|-3=-8\tag{-3, -8}$$

This should give you enough points, starting with the vertex $(-1/2, -3)$, to graph your equation. Perhaps seeing the graph will help you understand what is happening with this function (as you can see, it is NOT a straight line!)
$\quad\quad y = -|2x + 1|-3$

If $f(x)=-|2x+1|-3$ then: $$x=1\to f(1)=-|2\times 1+1|-3=-|3|-3=-6\neq -7\\x=0\to f(0)=-|2\times 0+1|-3=-|1|-3=-4\neq -5\\x=-1\to f(-1)=-|2\times(- 1)+1|-3=-|-1|-3=-4\neq -3\\x=-2\to f(-2)=-|2\times(-2)+1|-3=-|-3|-3=-6\neq -1\\x=-3\to f(-3)=-|2\times (-3)+1|-3=-|-5|-3=-8\neq 1$$ In fact the function $f$ does not pass through any of the pairs $(x,y)$ that you listed. This is what @Don's answer is trying to show.
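The values above are easy to double-check numerically; the function is one line of Python (a quick sanity check, not part of the original answers):

```python
def f(x):
    # y = -|2x + 1| - 3
    return -abs(2 * x + 1) - 3

# Tabulate points; the vertex sits where 2x + 1 = 0, i.e. x = -1/2.
points = {x: f(x) for x in [2, 1, 0, -1, -2, -3]}
print(points)  # {2: -8, 1: -6, 0: -4, -1: -4, -2: -6, -3: -8}

assert f(-0.5) == -3                          # the vertex (-1/2, -3)
assert f(-1) == f(0)                          # symmetry about x = -1/2
assert all(f(x) <= -3 for x in range(-5, 6))  # the graph never rises above -3
```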
https://www.gradesaver.com/textbooks/math/prealgebra/prealgebra-7th-edition/chapter-7-section-7-2-solving-percent-problems-with-equations-exercise-set-page-484/18
## Prealgebra (7th Edition)

0.22 = 44% $\times$ x
0.22 = 0.44 $\times$ x
$\frac{0.22}{0.44}$ = $\frac{0.44\times x}{0.44}$
$\frac{0.22}{0.44}$ $\times$ $\frac{100}{100}$ = x
$\frac{22}{44}$ = x
x = 0.5
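The same "percent equation" (amount = percent × base) can be checked in a couple of lines of code, just as a verification of the arithmetic above:

```python
# "0.22 is 44% of what number?"  ->  amount = percent * base
amount = 0.22
percent = 44 / 100      # 0.44
base = amount / percent
print(base)             # 0.5
```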
https://www.physicsforums.com/threads/normal-frequencies-and-normal-modes-of-a-multi-part-system.546817/
# Normal frequencies and normal modes of a multi-part system

## Homework Statement

***This is problem 11.29 in Taylor's Classical Mechanics***

A thin rod of length 2b and mass m is suspended by its two ends with two identical vertical springs (force constant k) that are attached to the horizontal ceiling. Assuming that the whole system is constrained to move in just the one vertical plane, find the normal frequencies and normal modes of small oscillations. [Hint: It is crucial to make a wise choice of generalized coordinates. One possibility would be r, φ, and $\alpha$, where r and φ specify the position of the rod's CM relative to an origin half way between the springs on the ceiling, and $\alpha$ is the angle of tilt of the rod. Be careful when writing down the potential energy.]

## The Attempt at a Solution

Right now, I'm just trying to set up the Lagrangian for this system, but the potential is giving me some problems. I recognize that there is a gravitational potential and a spring potential. I'm attempting to find the positions of the ends of the rod relative to some fixed point; however, I'm not sure what fixed point I should choose. Ultimately, I'm trying to find the lengths of the springs in terms of r, φ, and $\alpha$. I'm getting a little frustrated with the trig and trying things out. Could someone point me in the right direction?
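Not a solution, but one way to get unstuck on the trigonometry is to compute the spring lengths numerically under explicit conventions and sanity-check limiting cases. Everything below is an assumption chosen for illustration, not Taylor's setup: spring anchors at (±b, 0) on the ceiling, the CM at distance r from the origin at angle φ from the downward vertical, and rod ends at CM ± b(cos α, sin α):

```python
from math import cos, sin, hypot

b = 1.0  # half-length of the rod (assumed value for the check)

def spring_lengths(r, phi, alpha):
    """Lengths of the two springs for generalized coordinates (r, phi, alpha)."""
    xc, yc = r * sin(phi), -r * cos(phi)                 # CM below the ceiling midpoint
    xl, yl = xc - b * cos(alpha), yc - b * sin(alpha)    # left end of the rod
    xr, yr = xc + b * cos(alpha), yc + b * sin(alpha)    # right end of the rod
    # Anchors assumed at (-b, 0) and (+b, 0) on the ceiling.
    return hypot(xl + b, yl), hypot(xr - b, yr)

# Sanity check: rod hanging level and centered (phi = alpha = 0) gives two
# vertical springs of equal length r, so U_spring = 2 * (k/2) * (r - L0)**2.
L1, L2 = spring_lengths(2.0, 0.0, 0.0)
assert abs(L1 - 2.0) < 1e-12 and abs(L2 - 2.0) < 1e-12

# Tilting the rod (alpha > 0, raising the right end) should stretch the left
# spring and shorten the right one:
L1, L2 = spring_lengths(2.0, 0.0, 0.1)
assert L1 > L2
```

Once the lengths look right in these limiting cases, expanding them to second order in the small displacements gives the quadratic potential needed for the normal-mode analysis.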
https://www.nature.com/articles/s41467-021-26384-8?error=cookies_not_supported&code=1d814096-b39c-4442-90f2-ca2e39c546e0
## Introduction Satisfaction with the thermal environment for human body is significant, not merely due to the demand for comfort, but more importantly because thermal conditions are crucial for human body health1. Heat-resulted physiological and psychological problems not only can be threatening for human health2, but also negatively influence labor productivity and society economy3. Personal thermal management focusing on thermal conditions of human body and its local environment is emerging as an energy-efficient and cost-effective solution4,5. Without consuming excess energy on managing the temperature of the entire environment6,7, innovative textiles have been designed for controlling human body heat dissipation routes8,9. In general, human body dissipates heat via four different pathways: radiation, convection, conduction and evaporation10. Recently, textiles with engineered radiative properties11,12,13,14,15,16, convective and conductive properties17,18,19 have been demonstrated as promising approaches for personal thermal management especially for mild scenarios. However, for intense scenarios, textiles for ideal personal perspiration or evaporation management are still lacking. For the delicate human body system with a narrow temperature range (36–38 °C core temperature at rest and up to 41 °C for heavy exercise)20, evaporation plays an indispensable role in human body thermoregulation. Even at a mild state, about 20% of heat dissipation of the dry human body relies on the water vapor loss via insensible perspiration10,21. With further increase of heat load, liquid sweat evaporation contributes to more and more heat loss and becomes the major route for human body heat dissipation in intense scenarios such as heavy exercise and hot/humid environments, where excess heat cannot be dissipated efficiently by other pathways22,23. 
State-of-the-art textiles for daily use are usually sufficiently good at water vapor transmission to ensure comfort at the mild state (See Supplementary Note 1 and Supplementary Figs. 12 for more discussion)24. Nevertheless, the cooling performance of conventional textiles is to be improved when human body is in more intense scenarios, such as moderate/profuse perspiration situations in which liquid sweat is inevitably present. In order to avoid increased wettedness on the skin which causes less comfort in such cases25,26, state-of-the-art textiles, including moisture management fabrics, tend to focus on sweat removal. Textiles made of natural fibers, such as cotton, show strong water absorption capacity, which can help alleviate sense of wettedness quickly27. In spite of diminished absorbing ability, synthetic fibers (with profiled cross-section), such as polyester, are developed to possess enhanced moisture transportation compared with natural fibers, to deliver water to the textile surface for faster evaporation28,29. Microfibres are also explored for improved wicking30. Besides, strategies including surface hydrophilicity/hydrophobicity modification31,32,33, multiple-layer design with differential wettability34,35 and hierarchical design of multiscale interconnected pores with capillarity gradient36,37 are reported to realize better controlled directional water transportation. These textiles serve as a buffer absorbing water to provide dry sense for people and can potentially offer a comparatively larger surface area for evaporation. However, how to efficiently unlock the cooling power of sweat evaporation for human body thermoregulation and design textiles based on laws of human body perspiration process have not been taken into account.
Nevertheless, although sweat evaporation does happen on conventional textiles, the skin underneath is not effectively cooled, since the heat of vaporization is not efficiently drawn from the skin because of limited heat transfer38,39,40. In the extreme case, only the textile surface rather than the human skin is cooled. In other words, sweat absorbed in conventional textiles shows decreased evaporative cooling efficiency in cooling the human body, meaning that sweat is utilized less efficiently. Moreover, even the evaporation rate of conventional textiles is relatively restrained, because skin heat cannot be efficiently delivered to the evaporation interface to accelerate evaporation. The inefficient cooling effect leads to further perspiration, and meanwhile the slow sweat evaporation results in the accumulation of sweat in the textile. This process may undermine the buffer effect of the textile once the absorption limit of the fabric is reached, at which point the human body will get wet and sticky again. Excessive perspiration can also pose risks of dehydration, electrolyte disorder, physical and mental deterioration or even death41. Moreover, when people are in highly active scenarios, the maximum achievable cooling power of sweat evaporation actually limits the maximum activity level of the human body42. Accordingly, in addition to a decent wicking property, an optimal textile for perspiration scenarios should show high evaporation ability and, more importantly, high sweat evaporative cooling efficiency, so that sweat is utilized in a highly efficient manner and an adequate cooling effect is provided with a minimized amount of sweat. In this work, we propose a concept of an integrated cooling (i-Cool) textile that combines heat conduction and sweat transportation to achieve the aforementioned goals based on the human body perspiration process, as illustrated in Fig. 1a.
We introduce heat conductive components into the textile and divide the functionalities of heat conduction and sweat transport between two operational components. The heat conductive matrix and sweat transportation channels are integrated together in the i-Cool textile. The synergistic effect of the two components results in excellent performance in sweat wicking, fast evaporation, efficient evaporative cooling of the human body and reduced dehydration. As shown in Fig. 1b, the sweat transport channels can pull liquid water up from the skin and spread it out for evaporation. Meanwhile, the heat conductive matrix can efficiently transfer skin heat to the evaporation spots integrated on the matrix43,44. Therefore, combining a large evaporation area with efficient heat conduction from the skin, sweat absorbed in the water transportation channels can be evaporated quickly into air, taking a large amount of heat away from the skin. The efficient heat removal from the skin provides an improved evaporative cooling effect and decreases skin temperature effectively, which consequently reduces human body dehydration. As illustrated in Fig. 1c, compared to conventional textiles, the i-Cool textile functions not only to wick sweat but also to provide heat conduction paths that accelerate evaporation and efficiently take a great amount of heat away from the skin. Furthermore, the enhanced evaporation ability and high sweat evaporative cooling efficiency can prevent the i-Cool textile from flooding to a much greater extent and avoid excessive perspiration. The improved evaporative cooling effect does not mean that more sweat needs to be generated or even evaporated. Therefore, the i-Cool textile can help the human body achieve an enhanced cooling effect with greatly reduced sweat secretion, by using sweat in a highly efficient manner.
## Results and discussion

On the basis of the i-Cool functional structure design principles outlined above, we selected copper (Cu) and nylon 6 nanofibres for proof of concept. It is worthwhile to mention that Cu and nylon 6 nanofibres are not the only choices; other materials satisfying the design principles can be applied as well. Here, Cu is well-known for its extraordinary thermal conductivity (~400 W·m−1·K−1), and nylon 6 nanofibres are capable of water wicking. As illustrated in Supplementary Fig. 3, electrospinning was utilized to generate nylon 6 nanofibres, which were transferred to the heat conductive Cu matrix prepared by laser cutting. With press lamination, the i-Cool (Cu) textile with the desired functional structure design was fabricated. A photograph of the as-fabricated i-Cool (Cu) textile is displayed in Fig. 2a. Nylon 6 nanofibres not only cover the Cu top surface, but also fill the pores, as shown in the magnified photograph of the bottom side of the i-Cool (Cu) textile in the inset of Fig. 2b. Nanofibres on the skeleton of the Cu matrix are denser, with smaller void space among the nanofibres, than the ones in the pores of the Cu matrix, as can be clearly observed in the scanning electron microscope (SEM) images in Fig. 2b and Supplementary Fig. 4. The capillarity difference resulting from this morphology difference benefits one-way directional water transport from the inner surface to the outer surface. To evaluate the performance of the i-Cool (Cu) textile, we selected cotton as the main control textile, since it is arguably the most widely used and accepted textile in human history. We also chose other well-known activewear fabrics for comparison.

### Liquid water transport characterization

Textiles designed for perspiration scenarios must be able to wick sweat from the skin (in contact with the textile bottom) and spread it out.
Correspondingly, we tested in parallel the i-Cool (Cu) textile and commercial textiles including cotton, Dri-FIT, CoolMax and Coolswitch by mimicking the sweat transport process from the human skin to the outer surface of the textile. Each textile sample was placed over a fixed amount of liquid water on the platform, and the wicking rate was calculated for each sample by dividing the wicking area by the wicking time (Supplementary Fig. 5). It turned out that the interconnected nylon 6 nanofibres in the i-Cool (Cu) textile were able to quickly transport liquid water from bottom to top and spread it out, exhibiting a wicking rate comparable to or higher than that of the conventional textiles (Fig. 2c). Besides, owing to the unique structure design and the nanofibre morphology variation from the i-Cool (Cu) bottom to the outer surface, i-Cool (Cu) exhibits good one-way water transport. As displayed in Supplementary Fig. 6a, a water droplet added onto the inner side of i-Cool (Cu) was transported to the outer surface and spread out very quickly, while little water remained on the inner side. In reverse, water transport was limited when the droplet was added to the outer side. As a comparison, for cotton, the testing time on the outer side and inner side was almost the same no matter which side the water droplet was added onto (Supplementary Fig. 6b), which means the conventional cotton fabric shows no one-way transport capability. Also, in the scenario of adding water onto the inner side, the water spreading rates on the inner and outer surfaces ($${S}_{{{{{{\rm{inner}}}}}}}$$ and $${S}_{{{{{{\rm{outer}}}}}}}$$) and the one-way transport index (µ) were defined (See Methods for more details) and plotted in Supplementary Fig. 745.
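The wicking-rate and one-way transport metrics above (wicking rate as wicked area over wicking time; µ = S_outer/S_inner, per Methods) can be sketched in a few lines. The readings below are hypothetical, for illustration only:

```python
import math

def wicking_rate(radius_cm, time_s):
    """Wicking rate as the wicked (circular) area divided by wicking time, cm^2/s."""
    return math.pi * radius_cm ** 2 / time_s

def one_way_index(outer_area_cm2, inner_area_cm2, time_s=15.0):
    """Spreading rates on each face and the one-way index mu = S_outer / S_inner."""
    s_outer = outer_area_cm2 / time_s
    s_inner = inner_area_cm2 / time_s
    return s_outer, s_inner, s_outer / s_inner

# Hypothetical readings: water reaches the 1.5 cm-radius circle in 10 s, and
# spreads to 12 cm^2 (outer) vs 1.5 cm^2 (inner) within the 15 s window.
rate = wicking_rate(1.5, 10.0)
s_outer, s_inner, mu = one_way_index(12.0, 1.5)  # mu = 8, strongly one-way
```

A fabric with no one-way behaviour, like the cotton control, would give similar spreading areas on both faces and µ close to 1.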
The i-Cool (Cu) shows markedly different $${S}_{{{{{{\rm{inner}}}}}}}$$ and $${S}_{{{{{{\rm{outer}}}}}}}$$ and a very large µ, while $${S}_{{{{{{\rm{inner}}}}}}}$$ and $${S}_{{{{{{\rm{outer}}}}}}}$$ are very similar for cotton and its µ is very close to 1, which further demonstrates the one-way sweat transport advantage of i-Cool (Cu). This property also helps evaporation, because sweat can spread on the outer surface quickly and liquid water is preferentially transported to the nanofibres right on the heat conductive Cu matrix37.

### Thermal resistance measurement

To quantify the enhancement of the heat transport capability of the i-Cool (Cu) textile, we measured its thermal resistance using the cut bar method, as illustrated in Supplementary Fig. 8. Using this method, we measured the dry thermal resistance of the i-Cool (Cu) textile and other commercial textile samples, all under an additional contact pressure of ~15 psi (103 kPa). As exhibited in Fig. 2d, the i-Cool (Cu) textile shows about 14–20 times lower thermal resistance than the conventional textiles (See Supplementary Note 2 and Supplementary Fig. 8 for more details and discussion). A thermal resistor model was built to interpret the measured thermal resistance. It was found that the nylon 6 nanofibre layer contributes the major thermal resistance, and that increasing the thickness of the heat conductive matrix (Cu) causes only a minor increase in thermal resistance (Supplementary Fig. 9). This supports the possibility of extending the i-Cool concept to fabrics of various thicknesses.

### Transient droplet evaporation test

We further used a transient droplet evaporation test to compare the evaporation performance of the i-Cool (Cu) textile and the conventional textiles.
Figure 2e illustrates the experimental setup: a heater placed on an insulating foam was used to simulate human skin, with a thermocouple attached to the heater surface; we added liquid water at 37 °C onto the artificial skin to mimic sweat, then immediately covered the wet artificial skin with a textile sample; the power density of the artificial skin was kept constant during the measurement. Throughout the evaporation process, skin temperature was continuously monitored and recorded. A group of typical curves of skin temperature versus time is shown in Supplementary Fig. 10. Generally, the curves can be divided into three stages for every tested textile sample. Initially, when water was just added onto the artificial skin, skin temperature dropped sharply. Then, during the evaporation stage, skin temperature was relatively stable, fluctuating only within a small range. Eventually, skin temperature rose again quickly once the water was completely evaporated. Two pieces of important information can be obtained by comparing the curves of i-Cool (Cu) and the conventional textiles. Firstly, the evaporation time with i-Cool (Cu) was much shorter, which indicates that i-Cool (Cu) exhibits a higher evaporation rate. This conclusion is also verified by measuring the mass loss of liquid water over time during the evaporation test (Supplementary Fig. 11). Secondly, the skin temperature with the i-Cool (Cu) textile was lower than with the conventional textiles during evaporation, demonstrating that the human body can evaporate sweat faster at an even lower skin temperature when wearing the i-Cool textile. The summarized comparison of average skin temperature and average evaporation rate between the i-Cool (Cu) textile and the conventional textiles is displayed in Fig. 2f (0.1 mL initial water, 422.5 W/m2 power density, ambient temperature: ~22 °C).
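The three-stage skin-temperature trace described above lends itself to a simple programmatic reduction. A minimal sketch, with a hypothetical trace and an assumed plateau threshold (not the authors' actual analysis procedure):

```python
def plateau_stats(times, temps, drop=1.0):
    """Crude three-stage segmentation of a skin-temperature trace: the
    evaporation plateau is taken as the span where temperature sits at least
    `drop` degC below the initial reading. Returns (evaporation time,
    average plateau temperature)."""
    t0 = temps[0]
    idx = [i for i, temp in enumerate(temps) if temp <= t0 - drop]
    if not idx:
        return 0.0, t0
    start, end = idx[0], idx[-1]
    plateau = temps[start:end + 1]
    return times[end] - times[start], sum(plateau) / len(plateau)

# Hypothetical trace (minutes, degC): sharp drop, stable plateau, rise after dry-out.
times = [0, 1, 2, 3, 4, 5, 6, 7, 8]
temps = [36.0, 33.5, 33.0, 33.1, 32.9, 33.0, 33.2, 35.5, 36.2]
evap_time, avg_temp = plateau_stats(times, temps)
```

A shorter `evap_time` at a lower `avg_temp` is precisely the i-Cool signature described in the text.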
The i-Cool (Cu) shows a 2.3–4.5 °C lower average skin temperature and roughly twice the average evaporation rate of the conventional textiles. Furthermore, measurements under assorted skin power densities and initial liquid water amounts were performed for i-Cool (Cu) and cotton. With different experimental parameters, the average evaporation rate was calculated and plotted versus the average skin temperature during evaporation in Supplementary Fig. 12a and Supplementary Fig. 12b. In our measurement range, for a given amount of initial water, a linear relationship between the average evaporation rate and the average skin temperature was observed. Employing the linear fits and replotting the data from Supplementary Fig. 12, Fig. 2g shows the fitted relationship between the average evaporation rate and the initial water amount at different skin temperatures for i-Cool (Cu) and cotton. Generally, the average evaporation rate increases with the initial water amount and approaches saturation as the initial water amount reaches a certain level. This is likely consistent with how the average evaporation area during the drying process changes when the initial water amount is varied. It is evident that i-Cool (Cu) exhibits an overall higher evaporation rate than cotton. Besides, i-Cool (Cu) achieves this with a lower initial water amount and lower skin temperature, indicating the superiority of the i-Cool functional structure design in sweat evaporation. In order to further characterize the evaporation features of i-Cool (Cu) and analyze its advantages over conventional textiles, we performed a steady-state evaporation test. Compared to the transient droplet evaporation test above, the steady-state evaporation test helps derive additional useful indexes to differentiate the evaporation properties of textiles during human body perspiration. The measurement apparatus is illustrated in Fig. 3a.
Similarly, a heater placed on an insulating foam was used to simulate human skin. Thermocouples and a water inlet, sealed in a thin acrylic board, were attached to the artificial skin surface. Instead of adding a fixed initial amount of water, water heated to 37 °C was pumped onto the skin surface continuously at a specific rate, and the textile on it wicked the intake water. The power density of the skin was adjusted to keep the skin temperature stable at 35 °C. The system with a textile sample eventually reached a steady state. By changing the steady-state evaporation rate (i.e. the water pumping rate), the corresponding stable water mass gain and power density could be measured for different textiles. Figure 3b exhibits the measured water mass gain ratio (i.e. water mass gain / textile sample dry mass × 100%, denoted as W) of i-Cool (Cu), cotton and Dri-FIT versus increasing evaporation rate (denoted as v). Firstly, the water mass gain ratio of i-Cool (Cu) was always lower than that of cotton and Dri-FIT at the same evaporation rate, indicating that less sweat is required to “activate” i-Cool (Cu) to reach the same evaporation rate as the conventional ones. For example, when the steady-state evaporation rate was 1.1 mL/h, i-Cool (Cu) showed a water mass gain ratio of only about 20 percent, while W of cotton was approximately 130 percent. This phenomenon is also in accordance with the transient droplet evaporation test results. Furthermore, we fitted the curves in Fig. 3b and calculated the water mass gain ratio gradient (dW/dv), as shown in Fig. 3c. dW/dv of i-Cool (Cu) is clearly smaller than that of the conventional textiles, even though all of them show increasing water mass gain as the evaporation rate grows. Besides, dW/dv of cotton and Dri-FIT rises rapidly with increasing evaporation rate, especially for cotton. This means it becomes progressively more difficult for them to achieve a higher evaporation rate.
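The gradient dW/dv discussed above can be estimated numerically from steady-state readings. A sketch with partly hypothetical data (only the 1.1 mL/h values, ~20% versus ~130%, follow the text; the rest are invented for illustration):

```python
def gradient(vs, ws):
    """Central-difference estimate of dW/dv at the interior points of a
    measured (evaporation rate v, water mass gain ratio W) curve."""
    return [(ws[i + 1] - ws[i - 1]) / (vs[i + 1] - vs[i - 1])
            for i in range(1, len(vs) - 1)]

# Steady-state readings shaped like Fig. 3b (v in mL/h, W in %):
v = [0.5, 0.8, 1.1, 1.4]
w_icool = [9, 15, 20, 26]      # hypothetical except W(1.1 mL/h) ~ 20%
w_cotton = [40, 80, 130, 200]  # hypothetical except W(1.1 mL/h) ~ 130%

dwdv_icool = gradient(v, w_icool)    # nearly constant, i.e. flat dW/dv
dwdv_cotton = gradient(v, w_cotton)  # rises steeply with v
```

A flat dW/dv means higher evaporation rates cost little extra stored water, the behaviour the text attributes to i-Cool (Cu).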
Nevertheless, this index for i-Cool (Cu) stays almost unchanged over the measurement range. During real human body perspiration, these features enable i-Cool (Cu) to evaporate sweat quickly before much of it accumulates, and to retain a relatively dry state even during very profuse perspiration that requires a high evaporation rate. The measured power density (denoted as q) of the artificial skin in this test is shown in Fig. 3d. Overall, the skin power density with i-Cool (Cu) was higher than with the conventional textiles at the same evaporation rate, demonstrating that the cooling ability of i-Cool (Cu) during perspiration is stronger. It is worthwhile to mention that i-Cool (Cu) reaches a high evaporation rate more easily, so the cooling power difference between i-Cool (Cu) and conventional textiles in practical use can be further enlarged. Besides, the curves in Fig. 3d were fitted and the power density gradient (dq/dv) was derived, as displayed in Fig. 3e. This index (dq/dv) reflects the rate at which cooling power increases as the evaporation rate increases. Clearly, dq/dv of i-Cool (Cu) is much higher than that of cotton and Dri-FIT, which means i-Cool (Cu) provides much higher cooling power per unit of evaporated sweat. To be specific, dq/dv of i-Cool (Cu) is about 3 times higher than that of cotton and Dri-FIT. Furthermore, to some extent, dq/dv can be converted into a sweat evaporative cooling efficiency (denoted as η) (See Supplementary Note 3 for more discussion). Based on our estimation, the evaporative cooling efficiency of i-Cool (Cu) is 0.8–1, while η of cotton and Dri-FIT is only 0.2–0.4 (Supplementary Fig. 13). Therefore, we demonstrated that i-Cool (Cu) shows evident advantages in both evaporation ability and sweat evaporative cooling efficiency, which makes it promising for next-generation textiles for personal perspiration management.
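One hedged way to read the conversion from dq/dv to an efficiency η (the paper's actual conversion is in Supplementary Note 3, which we cannot reproduce here) is to normalize the measured gradient by the ideal case in which all latent heat is drawn from the skin. A sketch with an assumed latent heat and a hypothetical test-skin area:

```python
LATENT_HEAT_J_PER_G = 2430.0  # approximate latent heat of water near 35 degC (assumed)

def cooling_efficiency(dq_dv, skin_area_m2):
    """Ratio of the measured power-density gradient dq/dv (W m^-2 per mL h^-1)
    to the ideal gradient obtained if every unit of evaporated sweat drew all
    of its latent heat from the skin (1 mL of sweat taken as 1 g)."""
    ideal = LATENT_HEAT_J_PER_G / (3600.0 * skin_area_m2)  # W m^-2 per mL h^-1
    return dq_dv / ideal

# Hypothetical numbers: a 0.01 m^2 test skin gives an ideal gradient of 67.5,
# so a measured dq/dv of 60 corresponds to eta ~ 0.89.
eta = cooling_efficiency(60.0, 0.01)
```

Under this reading, η near 1 means essentially all evaporation heat comes from the skin, while η of 0.2–0.4 means most of it is drawn from elsewhere (e.g. the ambient or the fabric surface).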
### Artificial sweating skin platform with feedback control loop

The human body is capable of adjusting itself to maintain homeostasis by means of feedback control loops46. Taking perspiration as an example, when the body temperature exceeds a threshold, the sympathetic nervous system stimulates the eccrine sweat glands to secrete water to the skin surface. In reverse, water evaporation on the skin surface accelerates heat loss and thus body temperature decreases, which reduces or suspends perspiration (Fig. 4a)47,48. To mimic the human body perspiration situation and show the performance difference between the i-Cool (Cu) textile and the conventional textiles, we designed an artificial sweating skin platform with a feedback control loop, as illustrated in Fig. 4b. In this system, an artificial sweating skin that generates sweat uniformly from every fabricated perspiration spot was built and served as the test platform. Power was supplied to the platform to generate a heat flux simulating human metabolic heat. A syringe pump and a temperature controller were utilized to provide a continuous liquid water supply at a constant temperature (37 °C) for the artificial sweating skin. A thermocouple was attached to the platform surface, monitoring skin temperature via a thermocouple meter that transmitted the temperature data to the computer in real time. Subsequently, the internal program could instantly adjust the pumping rate of the syringe pump, which corresponds to the sweating rate of the artificial sweating skin, thereby realizing a feedback control loop imitating the human body’s feedback control mechanism. To achieve uniform water outflow through each artificial sweat pore, mimicking human skin sweating, we designed the artificial sweating skin platform as illustrated in Fig. 4c. At the bottom, an enclosed small cuboid cavity connected to the water inlet acted as a water reservoir.
When water was pumped in, the water in the reservoir was forced out upwards through the channels on the reservoir cap. On top of it, a perforated hydrophilic heater was attached to generate heat, while water could flow out through the perforations. Uniform “sweating” from each artificial sweat pore was realized by a fabricated Janus-type wicking layer with limited water outlets placed above the perforated heater (See Supplementary Notes 4–5 and Supplementary Figs. 14–16 for more details and discussion). We believe that the measurement results obtained with the as-built artificial sweating skin platform provide a reasonable parallel thermal comparison among the textile samples, even though this set-up cannot fully represent the human body due to the lack of some other feedback control mechanisms, such as blood flow feedback control, and the differences in size, shape, thermal capacity, etc. Once scale-up is realized, we expect to conduct human physiological wear experiments42 in the near future.

### Artificial sweating skin test

On the artificial sweating skin platform, we first performed a demonstrative experiment to intuitively show the difference in sweat evaporative cooling efficiency. In this experiment, the same power density was used for the i-Cool (Cu) textile and the cotton textile, while the sweating rate was varied for each to realize the same skin temperature (34.5 °C); we then observed the condition of the artificial skin device and the textile samples after 30 minutes of stabilization. As shown in Supplementary Fig. 17, the bare skin remained almost dry. The skin with the i-Cool (Cu) textile also remained dry, while a little water was absorbed in the sample. Nevertheless, a much larger amount of water remained on both the skin platform and the cotton textile. These results intuitively demonstrate that i-Cool (Cu) can cool down the skin more efficiently while consuming much less sweat.
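The temperature-feedback pumping scheme of the platform can be caricatured as a proportional controller; the gain, setpoint and pump limits below are illustrative assumptions, not the platform's actual program:

```python
def update_pump_rate(skin_temp_c, rate_ml_h, setpoint_c=35.0, gain=0.5,
                     min_rate=0.0, max_rate=5.0):
    """One step of a proportional feedback loop: raise the pumping (sweating)
    rate when skin temperature is above the setpoint, lower it when below,
    clamped to the pump's operating range. All parameters are illustrative."""
    rate = rate_ml_h + gain * (skin_temp_c - setpoint_c)
    return max(min_rate, min(max_rate, rate))

# Hypothetical skin-temperature readings as the textile cools the skin:
rate = 1.0
for temp_c in [36.2, 35.8, 35.1, 34.9]:
    rate = update_pump_rate(temp_c, rate)
# rate settles near 2.0 mL/h as the temperature converges on the setpoint
```

In the real platform the "sensor" is the thermocouple meter and the "actuator" is the syringe pump; the loop thereby mimics sweat secretion rising with body temperature and subsiding as evaporation cools the skin.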
Then, we performed measurements with constant skin power density for i-Cool (Cu) and other commercial textile samples, to mimic an exercise scenario of the human body (See Supplementary Note 6 and Supplementary Fig. 18 for more discussion of this measurement). All the measurements were started from the same initial state. The skin temperature and sweating rate (i.e. water pumping rate) after stabilization were measured. Figure 4d shows the experimental results when the skin power density was ~750 W/m2 and the ambient temperature was 22 °C. The cooling performance of i-Cool (Cu) is very similar to that of bare skin, which is recognized as the most efficient cooling approach since sweat evaporation can take heat directly from the skin. Compared to the conventional textiles, i-Cool (Cu) exhibited an evidently lower skin temperature (~2.8 °C lower than cotton, ~2 °C lower than Dri-FIT and Coolswitch, ~3.4 °C lower than CoolMax). The sweating rate supplied to the conventional textiles was over 2–3 times that of i-Cool (Cu). This shows that conventional textiles cannot achieve a better cooling effect even with much more sweat available. On the other hand, i-Cool (Cu) is able to unlock the cooling power of sweat more efficiently, delivering an improved cooling effect with reduced sweating dehydration. As a result, conventional textiles became highly wet after perspiration, whereas i-Cool (Cu) retained a much drier state (insets of Fig. 4d), which is a comprehensive effect of evaporation ability and sweat evaporative cooling efficiency. We also tested the Cu heat conductive matrix and the nylon 6 nanofibre film separately. Separating the heat conduction component from the water transport component makes both of them less efficient in evaporative cooling, as exhibited in Supplementary Fig. 19.
These tests illustrate that the key factor for achieving an effective cooling effect is the integrated functional design of heat conduction and sweat transportation. Different cotton samples with various area mass densities were also tested (See Supplementary Note 7 and Supplementary Fig. 20 for more details). In our experiments, even the thinnest cotton sample (26.5 g/m2), which is too transparent to be practically used, still exhibited an approximately 1.5 °C higher skin temperature than the i-Cool (Cu) textile. These results further validate the superiority of the i-Cool structure, which integrates both heat conduction and sweat transportation. Artificial sweating tests under different skin power densities, simulating varying human metabolic heat production, were also conducted. As displayed in Fig. 4e, the enhanced cooling performance, with lower skin temperature and reduced sweating rate compared to conventional textiles, still held when different skin power densities were applied. This verifies the advantages of i-Cool over a wide range of heat production. Besides, performance was evaluated under diverse ambient conditions, especially high-temperature and high-relative-humidity surroundings in which perspiration is more likely to happen. At an ambient temperature of 40 °C, the evaporative cooling performance of the i-Cool (Cu) textile and the conventional textiles is shown in Fig. 4f. The cooling performance distinction between i-Cool (Cu) and the conventional textiles remained very apparent. To take a step further, we decreased the skin power density of the artificial sweating skin to make the skin temperature lower than the ambient temperature, and compared bare skin, i-Cool (Cu) and cotton, to see whether the high thermal conductivity design of i-Cool (Cu) would adversely affect skin temperature.
Consequently, the skin temperature with i-Cool (Cu) was almost the same as with bare skin and better than with cotton, as shown in Supplementary Fig. 21, indicating that its evaporative cooling effect surpassed the opposing heat conduction from the ambient. In addition to high ambient temperature, we also investigated the performance of i-Cool (Cu) and the conventional textiles in a high relative humidity (RH) environment (Fig. 4g). As the relative humidity was raised, the skin temperature with all the textile swatches rose correspondingly. Nevertheless, the skin temperature with i-Cool (Cu) was still much lower than with the conventional textiles. Moreover, we performed measurements to see how the parameters of the i-Cool (Cu) functional structure design influence its performance (See Supplementary Note 8 and Supplementary Fig. 22 for more details). The results provide additional guidelines for the design of personal perspiration management textiles.

### i-Cool practical application demonstration

To further study the cooling effect of the i-Cool textile on the human body, we developed a thermal simulation considering the coupled heat transfer, moisture vapor and liquid water transfer processes, based on the actual human body with its complex structure and dynamic physiological responses (See Supplementary Note 9, Supplementary Dataset 1 and Supplementary Fig. 23 for more details)49,50,51. The simulation results show that the i-Cool textile, with improved evaporation ability and sweat evaporative cooling efficiency, can reduce both the skin temperature and the core temperature of the human body compared with conventional textiles (Supplementary Fig. 23), which further validates the potential of the i-Cool structure design for efficient evaporative cooling of the human body. To bridge the gap between the i-Cool (Cu) concept demonstration and practical use, we demonstrated feasibility by fabricating the i-Cool textile based on commercial fabrics.
Firstly, we verified the replacement of the Cu matrix by polymer materials with heat conductive coatings. As shown in Supplementary Fig. 24, i-Cool textiles using silver (Ag) coated polyester (PET) and nanoporous polyethylene (NanoPE) matrices exhibit almost the same performance as i-Cool (Cu) in the artificial sweating skin test (experimental parameters: same as Fig. 3d). Furthermore, we fabricated i-Cool textiles based on commercial knitted fabrics made of PET fibers. Here, we chose Dri-FIT and CoolMax, which were already tested as control samples, as the substrates. Figure 5a illustrates the fabrication process: holes were cut by laser cutting in the original fabric, after which it went through a facile electroless plating process. The Ag coating was deposited onto the surface of every fiber of the fabric. Next, cellulose fibers were filled into the holes of the fabric, and a prepared nylon 6 nanofibre film was transferred onto the fabric via press lamination to realize the i-Cool (Ag) textile with the desired i-Cool structure. It is worthwhile to point out that the fabrics we selected and the electroless plating method are not the only choices. Other textile materials and other methods offering heat conductive coatings can be utilized; alternatively, heat conductive fibers can be applied for the heat transport matrix. Figure 5b shows a photograph of the i-Cool (Ag) textile sample swatch (Dri-FIT as substrate). A photograph viewed from the i-Cool (Ag) bottom is exhibited in the inset of Fig. 5c, and the SEM images of the Ag coated PET fibers (Fig. 5c, Supplementary Fig. 25) show that the Ag coating is conformal and uniform. The branched structure formed in the electroless plating process can potentially enlarge the evaporation area as well. The photograph and SEM images of the i-Cool textile with the CoolMax substrate are shown in Supplementary Figs. 26 and 27.
Subsequently, we performed the same steady-state evaporation test and artificial sweating skin test on the i-Cool (Ag) textile. In the steady-state evaporation test, the curves of i-Cool (Ag), plotted together with those of i-Cool (Cu), cotton and Dri-FIT (Fig. 5d, e), show that i-Cool (Ag) performed very similarly to the i-Cool (Cu) textile. Compared to the original Dri-FIT textile acting as the substrate, i-Cool (Ag) offers significantly improved evaporation performance and evaporative cooling efficiency, owing to the i-Cool functional structure. Also, in the artificial sweating skin test, i-Cool (Ag) and i-Cool (Cu) presented comparable cooling performance for personal perspiration management, significantly improved in contrast to cotton and Dri-FIT. This is also true for the i-Cool textile prepared with the CoolMax substrate (Supplementary Fig. 28). With only sweat transportation channels, the modified Dri-FIT and CoolMax showed weaker cooling performance (Supplementary Fig. 28), which verifies that the i-Cool structure combining heat conduction with water transportation provides a superior strategy for personal perspiration management. These results demonstrate the feasibility of readily applying the i-Cool concept in practice. In summary, we report a novel concept of an i-Cool textile with a unique functional structure design for personal perspiration management. The innovative employment of integrated water transport and heat conductive functional components ensures not only wicking ability, but also a fast evaporation rate, an enhanced evaporative cooling effect and reduced dehydration for the human body by utilizing sweat in a highly efficient manner, as demonstrated by the transient and steady-state evaporation tests.
An artificial sweating skin platform with a feedback control loop simulating the human body perspiration situation was realized, on which the i-Cool (Cu) textile shows performance comparable to bare skin and an apparent cooling advantage with less supplied sweat compared to the conventional textiles. Also, the structural advantage is maintained under various conditions of exercise and ambient environment. Besides, the practical application feasibility of the i-Cool design principles was demonstrated, exhibiting decent performance. Therefore, we expect the i-Cool textile will open a new door and provide new insights for textiles for personal perspiration management.

## Methods

### Textile preparation

The Cu matrix used in the i-Cool (Cu) textile sample (main text) was prepared from Cu foil (~25 µm thickness, Pred Materials) laser cut with a DPSS UV laser cutter. A pore array (2 mm diameter, 3 mm pitch) was created in the Cu foil to realize the Cu matrix. Nylon 6 nanofibre film was prepared by electrospinning. The nylon 6 solution system used in this work was 20 wt% nylon-6 (Sigma-Aldrich) in formic acid (Alfa Aesar). The polymer solution was loaded in a 5 mL syringe with a 22-gauge needle tip, which was connected to a voltage supply (ES30P-5W, Gamma High Voltage Research). The solution was pumped out of the needle tip using a syringe pump (Aladdin). The nanofibres were collected on a grounded copper foil (Pred Materials). The applied potential was 15 kV, the pumping rate was 0.1 mL/h, and the distance between the needle tip and the collector was 20 cm. After collecting nylon 6 nanofibres of the desired mass, the nylon 6 nanofibre film (~4.5 g/m2, ~25 µm thickness) was transferred and laminated onto the Cu matrix. A hydraulic press (MTI) was used to press the nylon 6 nanofibres both into the holes and onto the top of the Cu matrix. The fabricated i-Cool (Cu) was ~45 µm thick and 107.7 g/m2. The varied parameters of the i-Cool (Cu) textile are shown in Supplementary Fig. 22.
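For the pore array above (2 mm diameter, 3 mm pitch), the open-area fraction of the Cu matrix follows from simple geometry, assuming a square array (the array geometry is our assumption; the dimensions are from the text):

```python
import math

def open_area_fraction(pore_diameter_mm, pitch_mm):
    """Open-area fraction of a square array of circular pores:
    one pore of area pi*(d/2)^2 per pitch^2 unit cell."""
    return math.pi * (pore_diameter_mm / 2.0) ** 2 / pitch_mm ** 2

frac = open_area_fraction(2.0, 3.0)  # pi/9, about 35% open area
```

Roughly a third of the matrix is thus pore area filled with loosely packed nanofibres, with the remaining two-thirds forming the heat conductive Cu skeleton.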
To fabricate the i-Cool (Ag) textile sample, the same pore array as above was cut with a laser cutter (Epilog Fusion M2) in the Dri-FIT or CoolMax textiles. The fabric was then cleaned and modified with a polydopamine (PDA) coating for 2 h in an aqueous solution consisting of 2 g/L dopamine hydrochloride (Sigma Aldrich) and 10 mM Tris-buffer solution (pH 8.5, Teknova)52. For electroless plating of silver (Ag), the PDA-coated fabrics were then dipped into a 25 g/L AgNO3 solution (99.9%, Alfa Aesar) for 30 min to form the Ag seed layer. After rinsing with deionized (DI) water, the fabric was immersed for 2 h in a plating bath containing 4.2 g L−1 Ag(NH3)2+ (made by adding 28% NH3·H2O dropwise into 5 g L−1 AgNO3 until the solution became clear again) and 5 g L−1 glucose (anhydrous, EMD Millipore Chemicals)53. Next, the fabric was turned over and placed into a fresh plating bath for another 2 h. After drying, the cut pores were filled with cellulose fibres by extraction filtration of paper pulp. Then, a nylon 6 nanofibre film (~2-2.5 g/m2) was added on top by the same process described above. The as-prepared i-Cool (Ag) (based on Dri-FIT) is ~175 g/m2; the one based on CoolMax is ~199 g/m2. For the i-Cool samples based on Ag-coated PET and NanoPE film matrices, the PET matrix (~50 µm thickness) and NanoPE matrix (~25 µm thickness) were prepared by laser cutting in the same way, and went through the same Ag coating process and nylon 6 nanofibre film lamination. The cotton textile sample was from a common short-sleeve T-shirt (100% cotton, single jersey knit, 135 g/m2, ~400 µm thickness, Dockers). The Dri-FIT textile sample was from a regular Dri-FIT T-shirt (100% PET, single jersey knit, 143 g/m2, ~400 µm thickness, Nike). The CoolMax textile sample was from a T-shirt made of 100% CoolMax Extreme polyester fibers (100% PET, single jersey knit, 166 g/m2, ~445 µm thickness, purchased from Galls.com).
The Coolswitch textile sample was from a Coolswitch T-shirt (91% PET/9% Elastane, French terry knit, 140 g/m2, ~350 µm thickness, Under Armour).

### Material characterization

The optical microscope images were taken with an Olympus optical microscope. The SEM images were taken with a FEI XL30 Sirion SEM (5 kV) and a FEI Nova NanoSEM 450 (5 kV).

### Wicking rate measurement

The wicking rate measurement method was based on AATCC 198 with modification. 5 × 5 cm textile samples were prepared in advance. 0.1 mL of distilled water was placed on the simulated skin platform by pipette. The textile samples were then laid over the water, and the time for the water to reach a circle of 1.5 cm radius on the top surface of the textile was recorded. The wicking rate was calculated as the wicking area divided by the wicking time.

### One-way water transport characterization

A 5 × 5 cm textile sample was fixed onto an acrylic frame that had a 4 × 4 cm square hole. A camera was placed directly above or underneath the frame to shoot videos. In total, 20 µL of deionized water was added onto one side of the textile sample and the water transport process was filmed. The spreading areas were measured with image-processing software (SketchAndCalc Area Calculator). We calculated $${S}_{{inner}}$$, $${S}_{{outer}}$$ and µ at a testing time of 15 s. Sinner or Souter = water spreading area/testing time. µ is a one-way transport index, defined as Souter/Sinner.

### Thermal resistance measurement

The cut bar method adapted from ASTM 5470 was used to measure thermal resistance. In this setup, eight thermocouples are inserted into the center of two 1 inch × 1 inch copper reference bars to measure the temperature profiles along the top and bottom bar. A resistance heater generates a heat flux which flows through the top bar, then the sample and then the bottom bar, after which the heat is dissipated into a large heat sink.
The entire apparatus (top bar, sample, bottom bar) is wrapped in thermal insulation. A modest pressure of approximately 15 psi was applied at the top bar to reduce contact resistance, and no thermal grease was used because of the material porosity. The temperature profiles of the top and bottom copper bars are then used to determine both the heat flux and the temperature drop across the sample stack, from which the total thermal resistance ($${R}_{{{{{{\rm{TOT}}}}}}}$$) is derived. By plotting $${R}_{{{{{{\rm{TOT}}}}}}}$$ versus the number of sample layers, the sample thermal resistance (together with the contact thermal resistance between samples) can be obtained from the slope of the line.

### Water vapor transmission property tests

The upright cup testing procedure was based on ASTM E96 with modification. Medium bottles (100 mL; Fisher Scientific) were filled with 80 mL of distilled water and sealed with the textile samples using open-top caps and silicone gaskets (Corning). The exposed area of the textile was 3 cm in diameter. The sealed bottles were placed into an environmental chamber in which the temperature was held at 35 °C and the relative humidity at 30 ± 5%. The mass of the bottles and the samples was measured periodically. The water vapor transmission was calculated by dividing the mass of water lost by the exposed area of the bottle (3 cm in diameter). The evaporative resistance measurement was based on ISO 11092/ASTM F1868 with modification. A heater was used to generate a stable heat flux mimicking the skin. A metal foam soaked with water was placed on the heater. A waterproof but vapor-permeable film covered the top of the metal foam to protect the textile sample from contact with liquid water. The whole device was thermally guarded. For the different textile samples, we adjusted the heat flux to maintain the same skin temperature (35 °C) for all measurements.
The ambient temperature was controlled by the water recirculation system at 35 °C, and the relative humidity was within 24 ± 4%. The evaporative resistance was calculated as $${R}_{{{{{{\rm{ef}}}}}}}=\frac{({P}_{s}-{P}_{a})\bullet A}{H}-{R}_{{{{{{\rm{ebp}}}}}}}$$, where $${P}_{s}$$ is the water vapor pressure at the plate surface, which can be taken as the saturation pressure at the surface temperature, $${P}_{a}$$ is the water vapor pressure in the air, A is the area of the plate test section, H is the power input, and $${R}_{{{{{{\rm{ebp}}}}}}}$$ is the value measured without any textile sample.

### Water vapor thermal measurement

The artificial sweating skin platform was used in this measurement. A steady power density (580 W/m2) and water flow rate (0.25 mL/h) were adopted. An acrylic frame (thickness: 1.5 mm) with a crossing was laser cut and placed on the platform to support the textile samples and avoid contact with liquid water. The stable skin temperature was recorded. The ambient conditions were 22 ± 0.2 °C and 40 ± 5% relative humidity.

### Transient droplet evaporation test

The skin was simulated by a polyimide-insulated flexible heater (McMaster-Carr, 25 cm2) connected to a power supply (Keithley 2400). A ribbon-type hot-junction thermocouple (~0.1 mm in diameter, K-type, Omega) was in contact with the top surface of the simulated skin to measure the skin temperature. The heater was set on a 10 cm-thick foam for heat insulation. During the tests, water (37 °C) was added onto the simulated skin and textile samples were immediately placed over it. The skin temperatures with wet textile samples during water evaporation were measured for an assorted combination of initial water amounts and generated area power densities of the simulated skin. The average evaporation rate was calculated by dividing the initial water amount by the evaporation time.
The end point of the evaporation was defined as the inflection point between the relatively stable range and the rapid-increase stage of the temperature. The average skin temperature refers to the average temperature reading over the evaporation stage in which the skin temperature was relatively stable. The mass of the wet textile samples was measured with a digital balance (U.S. Solid, 0.001 g accuracy) to track the water mass loss during evaporation. The tests were all performed in an environment of 22 ± 0.2 °C and 40 ± 5% relative humidity.
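As a sketch of how such a temperature trace could be post-processed, the end point can be located where the temperature leaves its plateau. The snippet below uses a simplified largest-jump criterion as a stand-in for a true inflection-point fit, and all numbers are synthetic illustrations, not measured data from this work.

```python
import numpy as np

def evaporation_end_index(temps):
    """Index of the last reading before the sharpest temperature rise.
    A simplified stand-in for the inflection-point criterion: take the
    sample that is followed by the largest jump to the next reading."""
    return int(np.argmax(np.diff(temps)))

# Synthetic skin-temperature trace: stable plateau, then dry-out rise.
trace = [33.0, 33.1, 33.0, 33.1, 33.2, 35.5, 37.0, 38.0]
end = evaporation_end_index(trace)               # index 4 ends the plateau

avg_plateau_T = float(np.mean(trace[:end + 1]))  # average skin temperature
rate = 0.5 / 900.0   # e.g. 0.5 g water over 900 s -> average rate in g/s
```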
https://math.stackexchange.com/questions/887463/int-0y-exp-left-alpha-sqrtx1-x-right-rm-dx-int-0
# $\int_{0}^{y} \exp\left(\, -\alpha \sqrt{x(1-x)}\,\right)\, {\rm d}x = \int_{0}^{y} \exp\left(\, -\alpha \sqrt{x}\,\right)\, {\rm d}x$?

Are the following integrals equal for large $\alpha$? $$I_1 =\int_{0}^{y} \exp\left(\, -\alpha \sqrt{x(1-x)}\,\right)\, {\rm d}x$$ $$I_2 =\int_{0}^{y} \exp\left(\, -\alpha \sqrt{x}\,\right)\, {\rm d}x$$ According to the answers I got from this forum, both equal $$I \sim_{\alpha \to \infty} \frac{1}{\alpha^2}- {\frac { \left( \alpha\,y+1 \right) {{\rm e}^{ -\alpha\,y}}}{{\alpha}^{2}}}.$$ However, this doesn't make sense to me, since $\sqrt{x}$ and $\sqrt{x(1-x)}$ are very different, and plots of the two functions show the difference. If they were equal, then the following would also have to be equal, which is clearly not the case: $$\int_{y_1}^{y_1+dy} \exp\left(\, -\alpha \sqrt{x(1-x)}\,\right)\, {\rm d}x = \int_{y_1}^{y_1+dy} \exp\left(\, -\alpha \sqrt{x}\,\right)\, {\rm d}x$$

• Well, $f(\alpha)\sim_{\alpha\to\infty}g(\alpha)$ doesn't imply $f(\alpha)=g(\alpha)$, so what is it you're unsure about? – CuriousGuest Aug 4 '14 at 20:45
• Near $0$, the functions $\sqrt{x}$ and $\sqrt{x(1-x)}$ are quite similar. And for the asymptotics as $\alpha\to \infty$, only the values for $x$ very close to $0$ matter, provided that $y < 1$. Note that asymptotic equality is not equality. Both integrals have the same dominant terms, but they differ in the lower-order terms. – Daniel Fischer Aug 4 '14 at 20:46
• @CuriousGuest I mean that if we accept that approximation, then $I_1(y_1)-I_1(y_2) = I_2(y_1)-I_2(y_2)$, which for small $y_2-y_1$ is clearly not the case. – Hesam Aug 4 '14 at 20:54
• @DanielFischer Right, but doesn't $I_1(0,y_1)-I_1(0,y_2) = I_1(y_1,y_2)$ hold? Then the equality would be more general. – Hesam Aug 4 '14 at 20:55
• To let $y_2-y_1\to 0$ you first need to fix $\alpha$, but in that case the asymptotics for $\alpha\to\infty$ is irrelevant. – CuriousGuest Aug 4 '14 at 20:58
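A quick numerical check (not part of the original thread) illustrates Daniel Fischer's point: the two integrals are never equal at finite $\alpha$, but their ratio tends to $1$ as $\alpha$ grows, because only the region near $x=0$, where $\sqrt{x(1-x)}\approx\sqrt{x}$, contributes to leading order.

```python
import numpy as np

def integral(alpha, y, f, n=400001):
    """Trapezoid-rule approximation of the integral of exp(-alpha*f(x)) over [0, y]."""
    x = np.linspace(0.0, y, n)
    w = np.exp(-alpha * f(x))
    return float(((w[:-1] + w[1:]) / 2.0 * np.diff(x)).sum())

f1 = lambda x: np.sqrt(x * (1.0 - x))   # exponent of the I1 integrand
f2 = lambda x: np.sqrt(x)               # exponent of the I2 integrand

y = 0.5
ratios = [integral(a, y, f1) / integral(a, y, f2) for a in (10.0, 100.0)]
# Each ratio exceeds 1 (I1 >= I2 pointwise on [0, 1]), and the ratio
# shrinks toward 1 as alpha increases: asymptotic equality, not equality.
```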
https://www.projecteuclid.org/euclid.bjma/1498097003
Banach Journal of Mathematical Analysis

Maps preserving a new version of quantum $f$-divergence

Marcell Gaál

Abstract

For an arbitrary nonaffine operator convex function defined on the nonnegative real line and satisfying $f(0)=0$, we characterize the bijective maps on the set of all positive definite operators preserving a new version of quantum $f$-divergence. We also determine the structure of all transformations leaving this quantity invariant on quantum states for any strictly convex functions with the properties $f(0)=0$ and $\lim_{x\to\infty}f(x)/x=\infty$. Finally, we derive the corresponding result concerning those transformations on the set of positive semidefinite operators. We emphasize that all the results are obtained for finite-dimensional Hilbert spaces.

Article information

Source: Banach J. Math. Anal., Volume 11, Number 4 (2017), 744-763.
Dates: Received: 6 July 2016. Accepted: 23 October 2016. First available in Project Euclid: 22 June 2017.
Permanent link to this document: https://projecteuclid.org/euclid.bjma/1498097003
Digital Object Identifier: doi:10.1215/17358787-2017-0015
Mathematical Reviews number (MathSciNet): MR3708527
Zentralblatt MATH identifier: 06841252

Citation: Gaál, Marcell. Maps preserving a new version of quantum $f$-divergence. Banach J. Math. Anal. 11 (2017), no. 4, 744--763. doi:10.1215/17358787-2017-0015. https://projecteuclid.org/euclid.bjma/1498097003

References

• [1] P. Busch and S. P. Gudder, Effects as functions on projective Hilbert space, Lett. Math. Phys. 47 (1999), no. 4, 329–337. • [2] E. Carlen, “Trace inequalities and quantum entropy: An introductory course” in Entropy and the Quantum (Tucson, 2009), Contemp. Math. 529, Amer. Math. Soc., Providence, 2010, 73–140. • [3] G. Chevalier, “Wigner’s theorem and its generalizations” in Handbook of Quantum Logic and Quantum Structures, Elsevier, Amsterdam, 2007, 429–475. • [4] I.
Csiszár, Information-type measures of difference of probability distributions and indirect observations, Studia Sci. Math. Hungar. 2 (1967), 299–318. • [5] S. S. Dragomir, A new quantum $f$-divergence for trace class operators in Hilbert spaces, Entropy 16 (2014), no. 11, 5853–5875. • [6] M. Gaál and L. Molnár, Transformations on density operators and on positive definite operators preserving the quantum Rényi divergence, Period. Math. Hungar. 74 (2017), no. 1, 88–107. • [7] Gy. P. Gehér, An elementary proof for the non-bijective version of Wigner’s theorem, Phys. Lett. A 378 (2014), no. 30–31, 2054–2057. • [8] M. Győry, A new proof of Wigner’s theorem, Rep. Math. Phys. 54 (2004), no. 2, 159–167. • [9] F. Hansen and G. K. Pedersen, Jensen’s inequality for operators and Löwner’s theorem, Math. Ann. 258 (1982), no. 3, 229–241. • [10] F. Hiai, M. Mosonyi, D. Petz, and C. Bény, Quantum $f$-divergences and error correction, Rev. Math. Phys. 23 (2011), no. 7, 691–747. • [11] F. Kraus, Über konvexe Matrixfunktionen, Math. Z. 41 (1936), no. 1, 18–42. • [12] K. Matsumoto, A new quantum version of $f$-divergence, preprint, arXiv:1311.4722v3 [quant-ph]. • [13] L. Molnár, Selected Preserver Problems on Algebraic Structures of Linear Operators and on Function Spaces, Lecture Notes in Math. 1895, Springer, Berlin, 2007. • [14] L. Molnár, Maps on states preserving the relative entropy, J. Math. Phys. 49 (2008), no. 3, art. ID 032114. • [15] L. Molnár, Order automorphisms on positive definite operators and a few applications, Linear Algebra Appl. 434 (2011), no. 10, 2158–2169. • [16] L. Molnár, Two characterizations of unitary-antiunitary similarity transformations of positive definite operators on a finite-dimensional Hilbert space, Ann. Univ. Sci. Budapest Eötvös Sect. Math. 58 (2015), 83–93. • [17] L. Molnár and G. Nagy, Isometries and relative entropy preserving maps on density operators, Linear Multilinear Algebra 60 (2012), no. 1, 93–108. • [18] L. Molnár, G. Nagy, and P.
Szokol, Maps on density operators preserving quantum $f$-divergences, Quantum Inf. Process. 12 (2013), no. 7, 2309–2323. • [19] L. Molnár and P. Szokol, Maps on states preserving the relative entropy, II, Linear Algebra Appl. 432 (2010), no. 12, 3343–3350. • [20] M. Müller-Lennert, F. Dupuis, O. Szehr, S. Fehr, and M. Tomamichel, On quantum Rényi entropies: A new generalization and some properties, J. Math. Phys. 54 (2013), no. 12, art. ID 122203. • [21] M. Ohya and D. Petz, Quantum Entropy and Its Use, Springer, Berlin, 1993. • [22] D. Petz, Quasientropies for states of a von Neumann algebra, Publ. Res. Inst. Math. Sci. 21 (1985), no. 4, 787–800. • [23] D. Petz, Quasi-entropies for finite quantum systems, Rep. Math. Phys. 23 (1986), no. 1, 57–65. • [24] D. Virosztek, Maps on quantum states preserving Bregman and Jensen divergences, Lett. Math. Phys. 106 (2016), no. 9, 1217–1234. • [25] D. Virosztek, Quantum $f$-divergence preserving maps on positive semidefinite operators acting on finite dimensional Hilbert spaces, Linear Algebra Appl. 501 (2016), 242–253.
https://www.physicsforums.com/threads/group-theory-q.110928/
# Group Theory Q

1. Feb 16, 2006

### ElDavidas

I was in a tutorial today and was asked "What is the largest order that an element of $$S_{10}$$ can have?" I thought the answer was 10! but I've been told this is wrong. Can someone help me out with what's going on? I thought you calculated the order by the formula: $$|S_n| = n!$$

Last edited: Feb 16, 2006

2. Feb 16, 2006

### Muzza

You were asked about the order of an element, not the order of the group. There can't be an element of order 10! in S_10, because then S_10 would be abelian (even cyclic). Do you know that any permutation can be written as a product of disjoint cycles?

3. Feb 16, 2006

### JasonRox

I've noticed that a lot of people can't answer these questions when asked. The question Muzza just asked is something you need to be able to answer before you can answer the question you're asking.

4. Feb 16, 2006

### matt grime

Ahem, this is also partly a matter of English and presumption. For the following question: let G be a group of order m, what is the largest order an element can have? The correct answer really is m, since all elements have order dividing m and there is always a cyclic group of order m. However, just because something can happen doesn't mean it does happen. If we're given the extra information that G is actually S_n and m = n!, then we can get a *better* answer, and indeed we can explicitly say what all permissible orders of elements are. "Can" is a bad word in this question, and in many questions. The better phrase would be: what is the largest order of an element of S_n?
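To make matt grime's point concrete: a permutation's order is the lcm of its disjoint cycle lengths, so the largest element order in S_n is the maximum lcm over integer partitions of n (Landau's function). A small brute-force sketch:

```python
from math import lcm

def max_element_order(n):
    """Largest order of an element of S_n: the maximum lcm of the parts
    over all integer partitions of n (Landau's function g(n))."""
    best = 1
    def search(remaining, min_part, acc):
        nonlocal best
        best = max(best, acc)          # leftover amount becomes fixed points
        for p in range(min_part, remaining + 1):
            search(remaining - p, p, lcm(acc, p))
    search(n, 2, 1)                    # parts of size 1 never change the lcm
    return best

print(max_element_order(10))  # 30, from the cycle type 2 + 3 + 5
```

So for S_10 the answer is 30, far smaller than |S_10| = 10!.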
http://www.mzan.com/article/47701437-how-to-get-count-by-combining-more-than-2-tables-in-mysql.shtml
# How to get count by combining more than 2 tables in mysql

user3292629 1# user3292629 Published in 2017-12-07 18:23:14Z

I have a table 'A' with a status column that can have 4 values. Table A stores table 'B''s id, and table B stores table 'C''s id. I want to get the count per status from table 'A' by joining all these tables. The status column in table A is a foreign key referencing table 'D', which holds statuses such as 1-agreed, 2-not agreed, etc.

ATrimeloni 2#

The question is missing some information that might be helpful. In particular, what exactly you are wanting to count (i.e., are you trying to count ALL rows, or the number of rows in table A that have each status). I'll put together an answer that assumes the latter. I'll also assume that "id" is the primary key of its own table, and that an id appearing inside another table is a foreign key to it.

select A.statusField, count(*)
from A
join B on (A.Bid = B.id)
join C on (B.Cid = C.id)
group by A.statusField

Hope that helps.
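A runnable illustration of the grouped join, extended to pull the status label from table D; the schema, data and column names below are invented for the sketch, not taken from the question:

```python
import sqlite3

# Toy schema guessed from the question: A(status, b_id), B(id, c_id),
# C(id), D(id, name). All names here are illustrative assumptions.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE D (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE C (id INTEGER PRIMARY KEY);
CREATE TABLE B (id INTEGER PRIMARY KEY, c_id INTEGER REFERENCES C(id));
CREATE TABLE A (status INTEGER REFERENCES D(id),
                b_id   INTEGER REFERENCES B(id));
INSERT INTO D VALUES (1, 'agreed'), (2, 'not agreed');
INSERT INTO C VALUES (1);
INSERT INTO B VALUES (1, 1), (2, 1);
INSERT INTO A VALUES (1, 1), (1, 2), (2, 1);
""")

# Count rows of A per status, joining through B and C, and joining D
# to translate each status id into its label.
rows = con.execute("""
    SELECT D.name, COUNT(*) AS n
    FROM A
    JOIN B ON A.b_id = B.id
    JOIN C ON B.c_id = C.id
    JOIN D ON A.status = D.id
    GROUP BY D.name
    ORDER BY D.name
""").fetchall()
# rows == [('agreed', 2), ('not agreed', 1)]
```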
https://scipost.org/submissions/2003.04281v3/
# On Scalar Products in Higher Rank Quantum Separation of Variables

### Submission summary

As Contributors: Jean Michel Maillet · Giuliano Niccoli
Arxiv Link: https://arxiv.org/abs/2003.04281v3 (pdf)
Date accepted: 2020-11-24
Date submitted: 2020-11-18 10:49
Submitted by: Maillet, Jean Michel
Submitted to: SciPost Physics
Academic field: Physics
Specialties: Mathematical Physics, Quantum Physics
Approach: Theoretical

### Abstract

Using the framework of the quantum separation of variables (SoV) for higher rank quantum integrable lattice models [1], we introduce some foundations to go beyond the obtained complete transfer matrix spectrum description, and open the way to the computation of matrix elements of local operators. This first amounts to obtaining simple expressions for scalar products of the so-called separate states (transfer matrix eigenstates or some simple generalization of them). In the higher rank case, left and right SoV bases are expected to be pseudo-orthogonal, that is, for a given SoV co-vector there could be more than one non-vanishing overlap with the vectors of the chosen right SoV basis. For simplicity, we describe our method to get these pseudo-orthogonality overlaps in the fundamental representations of the $\mathcal{Y}(gl_3)$ lattice model with $N$ sites, a case of rank 2. The non-zero couplings between the co-vector and vector SoV bases are exactly characterized. While the corresponding SoV measure stays reasonably simple and of possible practical use, we address the problem of constructing left and right SoV bases which do satisfy standard orthogonality. In our approach, the SoV bases are constructed by using families of conserved charges. This gives us a large freedom in the SoV bases construction, and allows us to look for the choice of a family of conserved charges which leads to orthogonal co-vector/vector SoV bases. We first define such a choice in the case of twist matrices having simple spectrum and zero determinant.
Then, we generalize the associated family of conserved charges and orthogonal SoV bases to generic simple spectrum and invertible twist matrices. Under this choice of conserved charges, and of the associated orthogonal SoV bases, the scalar products of separate states simplify considerably and take a form similar to the $\mathcal{Y}(gl_2)$ rank one case.

Published as SciPost Phys. 9, 086 (2020)

Dear Editor,

We would like to thank the referees for their comments and remarks. Following their suggestion, we have added a final section "conclusions and perspectives" to summarize the main results of the paper and to give perspectives on future developments. We also improved the links to the main formulae in our description of results at the end of the introduction.

Best regards,
J. M. Maillet, G. Niccoli, L. Vignoli

### List of changes

- We have added links to the main formulae in our description of results at the end of the introduction.
- We have added a final section "conclusions and perspectives" to summarize the main results of the paper and to give perspectives on future developments.
- A few typos fixed, in particular in the Acknowledgements.
- Some new links to published versions included in references when available.
https://socratic.org/questions/5930ca367c0149398a7b1797
# Why are V^(V+) salts generally colourless?

Jul 7, 2017

Well, ${V}^{V +}$ has no $d \text{-electrons.......}$

#### Explanation:

Atomic vanadium, $Z = 23$, has an electronic configuration of $\left[A r\right] 3 {d}^{3} 4 {s}^{2}$. The colours of transition metal ions are, to a first approximation, related to electronic transitions of the $d - \text{electrons}$....... Given that ${V}^{V +}$ has NEITHER $\text{d-electrons}$ nor $\text{s-electrons}$, there are no valence electrons to give rise to an absorption in the visible region. On the other hand, ${V}^{+ I I}$, ${V}^{+ I I I}$, and ${V}^{+ I V}$ are conceived to have $3 d$ electrons, and their electronic transitions give rise to colour...... The illustration displays solutions of $V \left(V +\right)$, $V \left(I V +\right)$, $V \left(I I I +\right)$, and $V \left(I I +\right)$.............. In fact vanadium has a very large redox manifold, and with many accessible oxidation states displays a rainbow of colours.........
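The electron bookkeeping behind this answer is simple arithmetic; a sketch, using the usual convention that first-row transition-metal cations lose their 4s electrons before their 3d electrons:

```python
def d_electron_count(z, charge):
    """d-electron count of a first-row transition-metal cation.
    Cations lose 4s electrons before 3d, so every electron remaining
    beyond the [Ar] core (18 electrons) is counted as a 3d electron."""
    return max(0, (z - 18) - charge)

# Vanadium, Z = 23: the neutral atom is [Ar] 3d^3 4s^2 (5 valence electrons)
counts = {q: d_electron_count(23, q) for q in (2, 3, 4, 5)}
# V(II+) d3, V(III+) d2, V(IV+) d1, V(V+) d0 -> only V(V+) lacks d-electrons
```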
https://docs.nimsuite.com/en/processing/filters.html
# NIM

### Filters

Filters are NIM's "workhorse" feature. Almost everything you do in NIM is accomplished using filters. To get started, Create a filter. A filter applies logical criteria to your current Vault data, in order to produce a subset of that data. Then, that subset—the filter's output—becomes the input for a different object in NIM, which performs some action with it. Most importantly, filters provide the input for functions in Mappings, members for Roles, and reports inside Notification templates. Filters can also be used with Export tasks and Multi-export tasks to produce CSVs for downstream consumption outside of NIM. Since filters operate on data in the vault, a filter's output changes as the contents of the vault change. For example, suppose you create a filter that returns all user accounts in system X. As accounts are added or removed from system X (and system X's data is collected and thus refreshed in the vault), the filter's output changes accordingly. The true power of filters, however, is joining tables from different Systems. This is possible due to the universal format of the data in the vault. You do this with Relation items. For example, suppose you create a filter that returns all users with an account in system X, but no account in system Y. (This relation item would likely use with no logic.) You could then feed this filter into a user create mapping function for system Y. This filter might initially return 25 users. The first time the mapping is executed, accounts for those 25 users will be created in system Y. On subsequent executions, the filter will return empty results, and the mapping will do nothing—until more users are added to system X. And so on. Using filters in this way, you can progressively automate your organization's entire provisioning lifecycle. You aren't limited to user accounts; you can work with any type of resource supported by the Connectors of the involved systems. This is dynamic provisioning.
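The "account in system X but no account in system Y" filter above maps naturally onto a SQL anti-join, in line with the principle that NIM's filters work like SQL queries. A sqlite3 sketch; every table and column name here is invented for illustration and is not NIM's actual schema:

```python
import sqlite3

# Stand-ins for collected Vault data: accounts seen in systems X and Y.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE x_accounts (user_id TEXT PRIMARY KEY);
CREATE TABLE y_accounts (user_id TEXT PRIMARY KEY);
INSERT INTO x_accounts VALUES ('alice'), ('bob'), ('carol');
INSERT INTO y_accounts VALUES ('alice');
""")

# "In X with no match in Y": LEFT JOIN, then keep only unmatched rows.
missing = [r[0] for r in con.execute("""
    SELECT x.user_id
    FROM x_accounts AS x
    LEFT JOIN y_accounts AS y ON y.user_id = x.user_id
    WHERE y.user_id IS NULL
    ORDER BY x.user_id
""")]
# missing == ['bob', 'carol']: the users a create-mapping would provision
```

Re-running the query after rows are added to `y_accounts` shrinks the result, mirroring how the filter's output empties once provisioning catches up with the source system.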
Based on rules defined in filters, changes in source systems automatically trigger NIM to provision changes into target systems. Most of the work is front-loaded. After you've built up your provisioning rules using filters, mappings, and other features in NIM, it runs by itself. You only intervene when you need to change the rules. Although mappings and roles are the most important ways in which filters are used, they are not the only ways. Almost every feature in NIM relies on filters in some way. The bottom line is: the data in the Vault is the foundation of NIM, and filters are how you use that data.

### Tip

NIM's filters operate on the same basic principles as queries in SQL.

#### Invalid filters

Invalid filters are underlined in red. These filters return invalid output until you reconfigure them. Until then, other objects that use them (e.g., mappings) will fail. To help debug an invalid filter, use the Validation tools.
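The Tip's SQL analogy can be made concrete with a toy sketch. This is not NIM's engine; the account sets and the function below are invented for illustration. In SQL, the same "account in X but not in Y" filter would be a LEFT JOIN from X to Y with a `WHERE y.user IS NULL` anti-join.

```python
# Hypothetical vault snapshot: user accounts per system.
# The names are invented for illustration; NIM's real vault schema differs.
system_x = {"alice", "bob", "carol"}
system_y = {"alice"}

def users_missing_in_y(x_accounts, y_accounts):
    """Filter: users with an account in X but no account in Y
    (a "with no" relation in NIM terms, an anti-join in SQL terms)."""
    return sorted(x_accounts - y_accounts)

# Feeding this filter into a "create account in Y" mapping would
# provision bob and carol; afterwards the filter returns [].
print(users_missing_in_y(system_x, system_y))  # ['bob', 'carol']
```

Note how the filter's output shrinks to nothing once provisioning catches up, which is exactly the "dynamic provisioning" loop described above.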
2022-05-18 00:23:07
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3572731912136078, "perplexity": 1967.5719544258377}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662520936.24/warc/CC-MAIN-20220517225809-20220518015809-00775.warc.gz"}
https://grupa.cluster015.ovh.net/mtdabfgp/6ecdde-thermal-stability-of-alkali-metal-oxides-down-the-group
# thermal stability of alkali metal oxides down the group

On moving down the group, as the atomic number of the halogen increases, its thermal stability increases. Explain. As we move down group 1 and group 2, the thermal stability of the nitrates increases. Explain. Group 1 metals most clearly show the effect of increasing size and mass on the descent of a group. The thermal stability of these carbonates increases down the group, i.e., from Be to Ba: BeCO3 < MgCO3 < CaCO3 < SrCO3 < BaCO3. BeCO3 is unstable to the extent that it is stable only in an atmosphere of CO2. However, Li2CO3 is less stable and readily decomposes to form the oxide. By Fajans' rule you should be able to get the answer: the more electropositive metal will have more ionic character, and that will increase stability. Electronegativity of heavier elements of Group 15. Stability of carbonates increases down group I (alkali) and group II (alkaline earth) metals. Sol: (a) Both melting point and heat of reaction of alkali metals with water decrease down the group from Li to Cs. I'm not trying to be difficult; the terms 'stable' and 'reactive' encompass a lot of different areas, and answering your question well depends on exactly what you're referring to. Why are BeSO4 and MgSO4 readily soluble in water while CaSO4, SrSO4 and BaSO4 are insoluble? The stability of carbonates and bicarbonates increases down the group.
Magnesium carbonate decomposes to magnesium oxide (MgO) and carbon dioxide (CO2) when heated. The ease of thermal decomposition of carbonates and nitrates (see table) and the strength of the covalent bonds: all of these decrease down the group. As you move up the group, you see an increase in electronegativity. All compounds of alkali metals are easily soluble in water but lithium compounds are more soluble in organic solvents. The larger the ion, the lower the charge density. The metals which are above hydrogen and possess positive values of standard reduction potentials are weakly electropositive metals. The thermal stability of carbonates increases with the increasing basic strength of metal hydroxides on moving down the group. Thus the order is as given above. The bicarbonates of all the alkali metals are known. Stability of carbonates increases down group I (alkali) and group II (alkaline earth) metals. As a result, the spread of negative charge towards another oxygen atom is prevented. Best answer: As we move down the alkali metal group, we observe that the stability of the peroxide increases. Carbonates of metals: thermal stability. The carbonates of alkali metals except lithium carbonate are stable to heat. Now, according to one of my study sources, the thermal stability of oxides is as follows: normal oxide (that of lithium) > peroxide (that of sodium) > superoxide (that of potassium, rubidium, cesium). Ionic character and the thermal stability of the carbonates increase from Be to Ba. Although the heat of reaction of Li is the highest, due to its high melting point even this heat is not sufficient to melt the metal, which would expose a greater surface to water for reaction.
Lithium carbonate is unstable towards heat and decomposes to give carbon dioxide and the oxide. (ii) All the alkaline earth metals form oxides of formula MO. Answer: As we move from top to bottom in a group, the size of the alkali metals increases, thereby the bond dissociation energy decreases; hence it requires less energy to decompose, so thermal stability also decreases. So, when we create a carbonate complex like the example below, the negative charge will be attracted to the positive ion. The effect of heat on the Group 2 carbonates: all the carbonates in this group undergo thermal decomposition to the metal oxide and carbon dioxide gas. Vaporization of the nitrate salts. The carbonates of alkaline earth metals also decompose on heating to form the oxide and carbon dioxide. Looking at the enthalpy change of formation for group 2 metal oxides, it is clear that less energy is needed to break them as you go down the group. This results in the charge density of their corresponding cations decreasing down the group. As the positive ions get bigger as you go down the Group, they have less effect on the carbonate ions near them. Thermal stability: it's how resistant a molecule is to decomposition at higher temperatures. The sulphates of alkaline earth metals are all white solids. This can be explained as follows: the size of the lithium ion is very small.
Nitrates of alkaline earth and alkali metals give the corresponding nitrites, except for lithium nitrate, which gives lithium oxide. This is because of the following two reasons: [M = Be, Mg, Ca, Sr, Ba]. Could you please be a little more elaborate? The carbonates of alkaline earth metals can be regarded as salts of weak carbonic acid (H2CO3) and the metal hydroxide, M(OH)2. This is just an illustration, and in reality the negative charge we see on the two $\ce{O}$ atoms is delocalized due to resonance. Most carbonates tend to decompose on heating to give the metal oxide and carbon dioxide. However, the carbonate of lithium, when heated, decomposes to form lithium oxide. (I am talking about s-block alkali metals.) Alkali metal oxide formation in the melt and nitrogen or nitrogen oxides release. In Group 1, lithium carbonate behaves in the same way, producing lithium oxide and carbon dioxide. When the ion's electron cloud is less polarized, the bond is less strong, leading to a less stable molecule. (ii) Carbonates. (ii) The solubility and the nature of oxides of Group 2 elements. Hence option A is correct. Alkali metal carbonates and bicarbonates are highly stable towards heat and their stability increases down the group, since electropositive character increases from Li to Cs. Alkali and alkaline earth metal nitrates are soluble in water. In other words, as you go down the Group, the carbonates become more thermally stable. The effective hydrated ionic radii.
The carbonate ion has a big ionic radius so it is easily polarized by a small, highly charged cation. However, the carbonate of lithium, when heated, decomposes to form lithium oxide. To compensate for that, you have to heat the compound more in order to persuade the carbon dioxide to break free and leave the metal oxide. The decomposition temperatures again increase down the Group. Since beryllium oxide is highly stable, it makes BeCO3 unstable. Alkali metal - General properties of the group: The alkali metals have the high thermal and electrical conductivity, lustre, ductility, and malleability that are characteristic of metals. All compounds of alkali metals are easily soluble in water but lithium compounds are more soluble in organic solvents. Because of this polarization, the carbon dioxide will become more stable and energetically favorable. Well, as you go down the group, the charged ion becomes larger. Charge density is basically the amount of charge in a given volume. Greater charge density means a greater pull on that carbonate ion, and a greater pull causes the delocalized ions and a more stable $\ce{CO2}$ molecule. (i) Thermal stability of carbonates of Group 2 elements. Sulphates: contrary to alkali metal sulphates, beryllium sulphate is water-soluble.
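The charge-density argument can be made semi-quantitative. The sketch below uses approximate six-coordinate Shannon ionic radii (the pm values are rounded and only illustrative) to show charge density falling from Mg2+ to Ba2+:

```python
# Rough illustration of cation charge density (charge / ionic volume)
# decreasing down group 2. Radii in pm are approximate Shannon values.
from math import pi

radii_pm = {"Mg2+": 72, "Ca2+": 100, "Sr2+": 118, "Ba2+": 135}
charge = 2  # all group 2 cations are M2+

def charge_density(r_pm):
    volume = 4 / 3 * pi * r_pm**3  # treat the ion as a sphere
    return charge / volume

# Smaller cation -> higher charge density -> more polarizing power
# -> carbonate distorted more easily -> lower decomposition temperature.
for ion, r in radii_pm.items():
    print(f"{ion}: {charge_density(r):.2e} e/pm^3")
```

The printed densities decrease monotonically down the group, matching the trend of increasing carbonate thermal stability described above.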
For carbonates and bicarbonates, I know that stability increases down the group, and for chlorides and fluorides, stability decreases down the group. Magnesium oxide is stable to heat. Similar to lithium nitrate, alkaline earth metal nitrates also decompose to give oxides. All alkaline earth metal carbonates decompose. Stability: the carbonates of all alkaline earth metals decompose on heating to form the corresponding metal oxide and carbon dioxide. Be > Mg > Ca > Sr > Ba. The oxides are very stable due to high lattice energy and are used as refractory material. Hence, the more stable the oxide formed, the less stable the carbonate. Does magnesium carbonate decompose when heated? Stability of oxides decreases down the group. The carbonates decompose on heating to form the metal oxide and CO2. Down the group, atoms of the alkali metals increase in both atomic and ionic radii, due to the addition of electron shells. In alkali metals, on moving down the group, the atomic size increases and the effective nuclear charge decreases. Ans. Alkali metals are highly reactive and hence they do not occur in the free state. Solution: (i) Nitrates. Thermal stability: nitrates of alkali metals, except lithium nitrate, decompose on strong heating to form nitrites. D) On moving down the group, the thermal energy and the lattice energy of the oxides of alkali metals decrease. Why does this happen? Sulphates. C) On moving down the group, the thermal energy and the lattice energy of the chlorides of alkali metals decrease.
Chemistry Stack Exchange is a question and answer site for scientists, academics, teachers, and students in the field of chemistry. It, however, shows reversible decomposition in a closed container. The smaller the ionic radius of the cation, the more densely charged it is. (i) Thermal stability of carbonates of Group 2 elements. I already quoted the necessary lines to explain the concept. Electronegativity is the tendency to attract electrons to itself. Addison and Logan discuss these factors in depth [62]. Cu, Hg, Ag, etc., belong to this group. MCO3 -> MO + CO2; the temperature of decomposition increases down the group. This is an important detail. In Group 1, lithium carbonate behaves in the same way, producing lithium oxide and carbon dioxide: $Li_2CO_3 (s) \rightarrow Li_2O(s) + CO_2$. The rest of the Group 1 carbonates do not decompose at laboratory temperatures, although at higher temperatures this becomes possible. [The remainder of this page's extracted text is scrambled fragments repeating the points above about thermal stability, charge density, and sulphate solubility; nothing further is recoverable.]
https://socratic.org/questions/584ce3067c014939f7ebe944
# Question be944 Dec 20, 2016 If we expand the summation as follows $\frac{1}{n} {\sum}_{k = 0}^{n - 1} {e}^{\frac{k}{n}} = \frac{1}{n} \left\{{e}^{\frac{0}{n}} + {e}^{\frac{1}{n}} + {e}^{\frac{2}{n}} + {e}^{\frac{3}{n}} + \ldots + {e}^{\frac{n - 1}{n}}\right\}$ " " = 1/n { underbrace(e^0+e^(1/n)+(e^(1/n))^2 + (e^(1/n))^3 + ... + (e^(1/n))^(n-1))_("n terms") } # So you are correct, It is a GP with; first term $a = {e}^{0} \left(= 1\right)$ and common ratio $r = {e}^{\frac{1}{n}}$ So Using the GP summation formula: ${S}_{n} = a \frac{\left(1 - {r}^{n}\right)}{\left(1 - r\right)}$ to get: $\frac{1}{n} {\sum}_{k = 0}^{n - 1} {e}^{\frac{k}{n}} = \frac{1}{n} 1 \frac{\left(1 - {\left({e}^{\frac{1}{n}}\right)}^{n}\right)}{\left(1 - {e}^{\frac{1}{n}}\right)}$ $\text{ } = \frac{\left(1 - {e}^{\frac{n}{n}}\right)}{n \left(1 - {e}^{\frac{1}{n}}\right)}$ $\text{ } = \frac{\left({e}^{\frac{n}{n}} - 1\right)}{n \left({e}^{\frac{1}{n}} - 1\right)}$ $\text{ } = \frac{\left({e}^{\frac{n}{n}} - {e}^{0}\right)}{n \left({e}^{\frac{1}{n}} - 1\right)}$ (as ${e}^{0} = 1$) QED
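The closed form above can be checked numerically. The short script below is illustrative and not part of the original answer; it also shows the limiting behaviour, since the left-hand side is a Riemann sum for $\int_0^1 e^x \, dx = e - 1$:

```python
from math import e, exp, isclose

def direct_sum(n):
    # (1/n) * sum of e^(k/n) for k = 0 .. n-1
    return sum(exp(k / n) for k in range(n)) / n

def closed_form(n):
    # The GP result derived above: (e - 1) / (n (e^(1/n) - 1))
    return (e - 1) / (n * (exp(1 / n) - 1))

for n in (1, 5, 100):
    assert isclose(direct_sum(n), closed_form(n), rel_tol=1e-12)

# As n grows, both sides approach the integral of e^x on [0, 1]:
print(closed_form(10**6))  # ≈ e - 1 ≈ 1.71828
```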
https://www.wyzant.com/resources/answers/topics/methods
10 Answered Questions for the topic Methods 07/13/19 #### How are the physiological properties of mitochondria measured? This is my first question on BiologySE. I am a Physics and Mathematics student currently doing a project on cell growth simulation. I am doing literature survey and I have a question about cellular... more 06/22/19 #### What's the difference between a method and a function? Can someone provide a simple explanation of methods vs. functions in OOP context? 06/20/19 #### How are the physiological properties of mitochondria measured? This is my first question on BiologySE. I am a Physics and Mathematics student currently doing a project on cell growth simulation. I am doing literature survey and I have a question about cellular... more 05/22/19 #### Why are exclamation marks used in Ruby methods? In Ruby some methods have a question mark (?) that ask a question, like include?, which asks if the object in question is included; this then returns a true/false. But why do some methods have... more 05/19/19 #### Tangent of a Parabola Find the value of c such that y=x+c is a tangent to the parabola y=x^2-x-12. 05/02/17 #### Challenge exercise Java I need help with problem 41. Exercise 3.40 Assume a class Tree has a field of type Triangle called leaves and a field of type Square called trunk. The constructor of Tree takes no parameters and... more 11/13/16 #### Explain who is correct and WHY; adopt correct method Tylishia says that she can calculate 324-198 by adding 2 to both numbers and calculating 326-200. Marcus says that she could get the correct answer by subtracting 2 from 324 and adding 2 to 198... more 08/07/15 #### How does a Ball object draw itself onto a panel in a different class? A) The Ball object's image gets drawn in the containing class, not in the Ball class. B) By passing its information to the other class so that class can draw it on the panel there. C) By drawing on... more 08/07/15
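The methods-vs-functions question above usually comes down to one distinction: a function stands alone, while a method is defined on a class and receives the instance it is called on. A minimal sketch (the Circle example is invented for illustration):

```python
def area(radius):
    """A free-standing function: called on its own."""
    return 3.14159 * radius**2

class Circle:
    def __init__(self, radius):
        self.radius = radius

    def area(self):
        """A method: belongs to Circle and receives the instance (self)."""
        return 3.14159 * self.radius**2

# Same computation, two calling conventions:
print(area(2))           # function call
print(Circle(2).area())  # method call on an object
```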
https://www.gamedev.net/forums/topic/482801-are-these-pc-specs-and-price-good/
# Are these pc specs and price good?

## Recommended Posts

I'm a CS student interested in graphics and AI programming who will be buying a computer soon and wanted to know if the specs and price of the desktop I'm planning to buy are good. I will appreciate all of your advice but keep in mind that I only have $1300 to spend on the computer. NOTE: Aside from programming I will be playing Flight Simulator X and Crysis. Here are the specs:

Case: Apevia X-Jupiter Jr.
Fans: 120mm fan ( Qty: 3 )
PSU: CoolerMaster Unit 600 Watts Extreme Power-SLI Supports
CPU: Intel Core 2 Duo E6750 @ 2.66GHz
CPU Cooling: CoolerMaster Liquid CPU Cooling
Motherboard: Asus P5N-D nForce 750i SLI
RAM: 2GB (2x1GB) Corsair Value
Video Card: NVIDIA GeForce 8800GT 512MB 16x PCI Express (Only one)
Monitor: 19" ViewSonic VA1903 WB WXGA+
HDD: 320GB SATA-II 3.0Gb/s 16MB Cache 7200RPM
Optical Drive: 20X DVD DVD±R/±RW + CD-R/RW DRIVE DUAL LAYER
Optical Drive: 16X DVD ROM
Sound: Integrated 7.1 Sound
Speakers: Logitech (BLACK) X-540 70Watts 5.1 Configuration Speaker System
Network: PCI Wireless 802.11g 54Mbps Network Interface Card
Media: 12 in 1 Flash Media Reader/Writer
Software: Windows Vista Home Premium SP1

All for a total price of $1238 before shipping and handling. Is the 600W PSU a good option or should I get a 500W or 535W PSU instead? Thanks in advance, CircuitX [Edited by - CircuitX on February 13, 2008 7:56:28 PM]

##### Share on other sites

I have a 600 watt PSU in a very similar system (same processor, but 8800GTS) and it is more than enough for the system. Actually way more than I really needed. I've personally been happy with mine (OCZ StealthXStream. I was surprised when I was shopping for power supplies at how limited my choices were if I didn't want it to have LEDs) and it means I don't have to worry about upgrading with more drives and that.
I haven't found any drawbacks to it as of yet. First thing I would suggest: unless you are planning to really overclock that CPU, ditch the cooling. Stock air is way more than enough (mine idles at 30-something C, and stresses out at 60-something when pushed). I can't really think of a valid second, other than my very strong dislike of cases such as that. I don't think I'll ever want anything too different from my Antec P-180. (The only drawback to my case is it is fricken huge! and heavy.)

##### Share on other sites

Thanks for your reply. I have a little doubt about what you mean when you say ditch the cooling (I'm a Spanish speaker with average English knowledge). Are you referring to the CPU liquid cooling or the extra fan when you said ditch the cooling? About the PSU, I think a 500W should do it because the 8800GT is less power hungry. Am I right or should I stay with 600W? You said you didn't like the case, can I know why? Is it bad for upgrades or messy to deal with the cables on it?

##### Share on other sites

Do you have a specific reason for getting a burner and a non-burner drive? If you aren't doing disc-to-disc copies I'd just get one (the writer).

##### Share on other sites

I assume you are ordering from www.cyberpowerpc.com? I got a PC from them and it's great but I have been very unsatisfied with their service. First, my order took 33 days to build and ship! Second, I ordered the Mushkin extreme memory but when the PC arrived it had regular Mushkin RAM. It then took them 3 weeks to verify and ship out the correct memory; when this happened they charged me for the correct RAM, did not tell me they were going to charge me, and claimed they would refund the money once the wrong RAM was sent back to them. It's been two weeks since they received the wrong RAM back from me and I have yet to be refunded this second charge. I have been trying to call them and they do not call back.
So buy at your own risk from them; they have good reviews on some sites but there are horrible ones like mine mixed in with them.

##### Share on other sites

I did mean to not bother with the water cooling system (they're risky, and eat up a lot of electrical power). As for the case, I can't really comment on it beyond that I simply hate the look of it. But to each their own. (That is assuming the one Google showed me, which had all the bright lights and clear parts to the case, is the same one you are getting.) I agree on using the money you would have spent on the water cooling system to upgrade to a quad core processor. Side note question: any idea if the difference in FSB would have much of an effect on the processors? Q6600 being 4 cores at 2.40GHz and 1066 FSB, vs the E6750's 2 cores at 2.66GHz and 1333 FSB? How much of an impact is that likely to make for single core programs? Any reason why the Core 2 Quads aren't at 1333, only Xeons?

##### Share on other sites

Quote: Posted by Talroth: I did mean to not bother with the Water Cooling system (They're risky, and eat up a lot of electrical power).

Don't worry Talroth, you didn't bother me; instead you gave me very good advice that I will follow. About the case, I was in doubt of it so I changed it for a better looking Apevia X-Telstar Jr (blue color). I like cases with side windows on it ;).

Quote: Posted by Talroth: Side note question. Any idea if the difference in FSB would have much of an effect on the processors? Q6600 being 4 cores at 2.40GHZ and 1066 FSB, vs the E6750's 2 cores at 2.66GHZ and 1333FSB? How much of an impact is that likely to make for single core programs? Any reason why the Core 2 Quads aren't at 1333, only Xeons?

Well, I don't know too much about the differences between Quad and Dual Cores, but my Computer Science professor told me that it will take long for programmers to take advantage of the 4 cores so I should stick with 2 cores for now.
I'm taking his advice; he is an electrical/computer engineer and he must be doing some intensive calculations because his specialty is in neural networks. That is as far as I can go because I don't know too much about that; I'm only in my second year of CS. Did you order it during the Christmas rush? Your guess was right, I'm ordering from cyberpowerpc, and it's good to know that there's a chance of having some issues with them; at least I can be sure that my PC will arrive if I order it. Maybe it will take long to ship to me because I'm from Puerto Rico, but that price can't be matched. I had checked some other sites and no other is giving those specs at that price; they are at \$1400 and over. I will have to think about whether I will take the risk of having those kinds of issues. I will check more reviews of them. [Edited by - CircuitX on February 13, 2008 11:57:22 PM]

##### Share on other sites

I ordered my PC Dec 1. I expected some delay but not 33 days to build, especially since when I called I was told nothing was on back order. The PC works great, I have no complaints other than them switching out cheapo memory on me. And I have the same case, the Apevia Telstar Jr, I like it.
https://www.top3d.app/tutorials/3d-topology-optimization-using-matlab-fmincon-top3dfmincon
# 3D Topology Optimization using MATLAB fmincon – Top3d/fmincon

In this tutorial, you will learn how to use the Matlab1 fmincon function as an optimizer in our 3d topology optimization program. It contains six (6) main steps, i.e., Initialize fmincon, Define Objective function, Hessian, Constraint, Output function, and Call fmincon. Note: the line numbers in each step refer to the code snippets in that step, not to the top3d program.

### Step.0: Remove Codes from Top3d

Before we get started, please download the Top3d program if you haven't done so. Then, DELETE lines 64–96 (mainly the while loop) in the program. The following steps start after line 63.

### Step.1: Initialize fmincon

In this step, we are going to initialize parameters used by the Matlab fmincon function.

• In line 2, a global variable ce is defined. ce will be used in both myObjFcn and myHessianFcn.
• Since our constraint (volume constraint) is a function of xPhys instead of the design variable x, we will define it in the function myConstrFcn.
• There are many OPTIONS defined for fmincon:
• TolX: user-defined termination criterion.
• MaxIter: maximum number of iterations.
• Algorithm: Matlab fmincon offers many algorithms, such as 'sqp', 'active-set', 'trust-region-reflective' and 'interior-point'. The reason we choose 'interior-point' over the others is that 'interior-point' accepts a user-supplied Hessian of the Lagrange function, while 'sqp' and 'active-set' do not allow the user to provide a Hessian; in those algorithms the Hessian is approximated by means of BFGS (a quasi-Newton method).
• GradObj, GradConstr, Hessian, HessFcn: with analytic expressions for the gradients of the objective function and constraints and for the Hessian of the Lagrange function, the optimization algorithm executes faster than with numerical approximation.
• Display: the display option is turned off. We will display iteration information in the OutputFcn.
• OutputFcn: we define the output function in myOutputFcn, which will display iteration information and the corresponding structural topology.
• PlotFcns: Matlab provides some built-in plot functions; @optimplotfval plots the function value.
• Click here for more information about fmincon options.

global ce % Shared between myObjFcn and myHessianFcn
A = []; B = []; Aeq = []; Beq = [];
LB = zeros(size(x)); UB = ones(size(x));
OPTIONS = optimset('TolX',tolx, 'MaxIter',maxloop, 'Algorithm','interior-point',...
    'GradObj','on', 'GradConstr','on', 'Hessian','user-supplied', 'HessFcn',@myHessianFcn,...
    'Display','none', 'OutputFcn',@(x,optimValues,state) myOutputFcn(x,optimValues,state,displayflag), 'PlotFcns',@optimplotfval);

### Step.2: Define Objective function

The function to be minimized. myObjFcn is a function that accepts a vector x and returns a scalar f and the gradient vector gradf(x), the objective function evaluated at x. In our problem (minimize compliance) the objective is c = Σ_e (Emin + xPhys_e^penal (E0 − Emin)) ce_e, where ce_e = u_e' KE u_e, and its gradient is dc_e = −penal (E0 − Emin) xPhys_e^(penal−1) ce_e, matching the lines that compute c and dc below. Click here to learn more about the problem statement, objective function, and sensitivity analysis.

function [f, gradf] = myObjFcn(x)
    xPhys(:) = (H*x(:))./Hs;
    % FE-ANALYSIS
    sK = reshape(KE(:)*(Emin+xPhys(:)'.^penal*(E0-Emin)),24*24*nele,1);
    K = sparse(iK,jK,sK); K = (K+K')/2;
    U(freedofs,:) = K(freedofs,freedofs)\F(freedofs,:);
    % OBJECTIVE FUNCTION AND SENSITIVITY ANALYSIS
    ce = reshape(sum((U(edofMat)*KE).*U(edofMat),2),[nely,nelx,nelz]);
    c = sum(sum(sum((Emin+xPhys.^penal*(E0-Emin)).*ce)));
    dc = -penal*(E0-Emin)*xPhys.^(penal-1).*ce;
    % FILTERING AND MODIFICATION OF SENSITIVITIES
    dc(:) = H*(dc(:)./Hs);
    % RETURN
    f = c;
    gradf = dc(:);
end % myObjFcn

See Writing Objective Functions for details.
### Step.3: Define Hessian

The Hessian of the Lagrangian is the Hessian of the objective plus the weighted sum of the constraint Hessians. In our problem, the Hessian of the objective function is the expression computed as Hessf below; since the constraint is linear, the Hessian of the constraint is zero. Details about the derivation of the Hessian corresponding to the objective function are discussed in the paper. myHessianFcn computes the Hessian at a point x with Lagrange multiplier structure lambda:

function h = myHessianFcn(x, lambda)
    xPhys = reshape(x,nely,nelx,nelz);
    % Compute Hessian of objective
    Hessf = 2*(penal*(E0-Emin)*xPhys.^(penal-1)).^2 ./ (E0 + (E0-Emin)*xPhys.^penal) .* ce;
    Hessf(:) = H*(Hessf(:)./Hs);
    % Compute Hessian of constraints
    Hessc = 0; % Linear constraint
    % Hessian of Lagrangian
    h = diag(Hessf(:)) + lambda.ineqnonlin*Hessc;
end % myHessianFcn

For details, see Hessian. For an example, see fmincon Interior-Point Algorithm with Analytic Hessian.

### Step.4: Define Constraint

The function that computes the nonlinear inequality constraints $c(x) \leq 0$ and the nonlinear equality constraints $ceq(x) = 0$. myConstrFcn accepts a vector x and returns four vectors c, ceq, gradc, gradceq: c contains the nonlinear inequalities evaluated at x, ceq the nonlinear equalities evaluated at x, gradc the gradient of c(x), and gradceq the gradient of ceq(x). In our problem, we have only one (inequality) constraint, the volume fraction constraint. Click here to learn more about the problem statement and constraints.

function [cneq, ceq, gradc, gradceq] = myConstrFcn(x)
    xPhys(:) = (H*x(:))./Hs;
    % Nonlinear inequality constraint (volume)
    cneq = sum(xPhys(:)) - volfrac*nele;
    gradc = ones(nele,1);
    % Nonlinear equality constraints (none)
    ceq = [];
    gradceq = [];
end % myConstrFcn

For more information, see Nonlinear Constraints.

### Step.5: Define Output Function

The myOutputFcn is a function that the optimization routine calls at each iteration.
Iteration information is displayed, and the structural topology can also be displayed during the iteration process, depending on the user's settings.

function stop = myOutputFcn(x,optimValues,state,displayflag)
    stop = false;
    switch state
        case 'iter'
            % Make updates to plot or guis as needed
            xPhys = reshape(x, nely, nelx, nelz);
            %% PRINT RESULTS
            fprintf(' It.:%5i Obj.:%11.4f Vol.:%7.3f ch.:%7.3f\n',optimValues.iteration,optimValues.fval, ...
                mean(xPhys(:)),optimValues.stepsize);
            %% PLOT DENSITIES
            if displayflag, figure(10); clf; display_3D(xPhys); end
            title([' It.:',sprintf('%5i',optimValues.iteration),...
                ' Obj. = ',sprintf('%11.4f',optimValues.fval),...
                ' ch.:',sprintf('%7.3f',optimValues.stepsize)]);
        case 'init'
            % Setup for plots or guis
            if displayflag
                figure(10)
            end
        case 'done'
            % Cleanup of plots, guis, or final plot
            figure(10); clf; display_3D(xPhys);
        otherwise
    end % switch
end % myOutputFcn

See Output Function.

### Step.6: Call fmincon

fmincon(@myObjFcn, x, A, B, Aeq, Beq, LB, UB, @myConstrFcn, OPTIONS);

Click here to learn more about Matlab fmincon or execute the following command line in Matlab:

doc fmincon

### Step.7: Run the program

Now you can run the program, e.g., with the following Matlab command line:

top3d(30,10,2,0.3,3,1.2)

If you have any questions or difficulties with this tutorial, please don't hesitate to contact us.

1. MATLAB is a registered trademark of The MathWorks, Inc. For MATLAB product information, please contact The MathWorks, Inc., 3 Apple Hill Drive, Natick, MA 01760-2098, UNITED STATES, 508-647-7000, Fax: 508-647-7001, info@mathworks.com, www.mathworks.com
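For readers without MATLAB, the overall driver structure of the tutorial (an objective that returns both value and gradient, box bounds LB/UB, an output function called every iteration, and a termination tolerance tolx) can be sketched in plain Python. This is a toy quadratic objective, not the Top3d FE compliance problem, and it uses simple projected gradient descent rather than fmincon's interior-point algorithm:

```python
def my_obj_fcn(x):
    # Toy objective standing in for compliance: f(x) = sum((x_i - 0.3)^2).
    # Returns both the value and the analytic gradient, as fmincon expects
    # when GradObj is 'on'.
    f = sum((xi - 0.3) ** 2 for xi in x)
    gradf = [2.0 * (xi - 0.3) for xi in x]
    return f, gradf

def my_output_fcn(it, f):
    # Analogue of myOutputFcn's 'iter' case: print per-iteration info.
    print(f" It.:{it:5d} Obj.:{f:11.4f}")

def solve(x, lb=0.0, ub=1.0, step=0.4, maxloop=100, tolx=1e-8):
    # Projected gradient descent with box bounds [lb, ub] and a
    # TolX-style stopping criterion on the design change.
    for it in range(maxloop):
        f, g = my_obj_fcn(x)
        my_output_fcn(it, f)
        new_x = [min(ub, max(lb, xi - step * gi)) for xi, gi in zip(x, g)]
        if max(abs(a - b) for a, b in zip(new_x, x)) < tolx:
            break
        x = new_x
    return x
```

The point is only the calling convention: value-plus-gradient objective, bounds, per-iteration callback, and a tolerance-based stop, mirroring Steps 1, 2, and 5 above.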
https://www.physicsforums.com/threads/help-with-mass-of-rectangle-whos-density-varies.264436/
# Help with Mass of rectangle whose density varies

1. Oct 14, 2008

### Naldo6

1. The problem statement, all variables and given/known data

The surface density of a rectangle varies as: σ(x,y) = 12 kg/m² + (2 kg/m⁴)(x² + y²). The origin is located at the lower left corner of the rectangle, at point A. The rectangle has a height h = 1.00 m and a length l = 1.20 m. What is the total mass of this object?

2. Oct 15, 2008

### tiny-tim

Hi Naldo6! Show us what you've tried, and where you're stuck, and then we'll know how to help.
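The setup here is a double integral of σ over the rectangle, M = ∫∫ σ(x, y) dA. A numeric cross-check of that setup (our sketch, not part of the original thread, assuming the given dimensions l = 1.20 m, h = 1.00 m and SI units throughout):

```python
def sigma(x, y):
    # Surface density in kg/m^2: 12 + 2*(x^2 + y^2)
    return 12.0 + 2.0 * (x * x + y * y)

def mass(l=1.2, h=1.0, n=400):
    # Midpoint Riemann sum for M = ∫∫ sigma(x, y) dA over [0, l] x [0, h].
    dx, dy = l / n, h / n
    total = 0.0
    for i in range(n):
        x = (i + 0.5) * dx
        for j in range(n):
            y = (j + 0.5) * dy
            total += sigma(x, y) * dx * dy
    return total

# Closed form: 12*l*h + 2*(h*l^3/3 + l*h^3/3)
l, h = 1.2, 1.0
exact = 12 * l * h + 2 * (h * l**3 / 3 + l * h**3 / 3)
```

The closed form evaluates to 16.352 kg, which the Riemann sum reproduces to well under a gram.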
http://mathhelpforum.com/business-math/136922-shortcut-calculation-tax-savings-capital-expenditure.html
# Thread: Shortcut calculation for tax savings on capital expenditure

1. ## Shortcut calculation for tax savings on capital expenditure

Hi

Say,
Machinery = $400,000
Useful life = 4 years
Capital Allowance Rate = 25%
Corporation Tax Rate = 35%
Scrap Value = $5,000

Do you know any shortcut that can be used to calculate the total tax savings amount? I only know up to a certain step, i.e. 400,000 x 0.25 x 0.35 x 0.75 x 0.75 x ...?

Will appreciate the help. Thanks

2. Originally Posted by mingali

Machinery = $400,000
Useful life = 4 years
Capital Allowance Rate = 25%
Corporation Tax Rate = 35%
Scrap Value = $5,000

Do you know any shortcut that can be used to calculate the total tax savings amount? I only know up to a certain step, i.e. 400,000 x 0.25 x 0.35 x 0.75 x 0.75 x ...?
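One way to see where the repeated 0.75 multipliers lead is to lay the schedule out explicitly. The sketch below assumes UK-style reducing-balance treatment (a 25% writing-down allowance each year, with a balancing allowance of written-down value less scrap in the final year) and ignores discounting; under those assumptions the shortcut is that total allowances equal cost minus scrap, so total tax saved = (400,000 − 5,000) × 0.35 = 138,250:

```python
def tax_savings(cost, scrap, life, ca_rate, tax_rate):
    # Assumed treatment: reducing-balance writing-down allowances each year,
    # balancing allowance (written-down value minus scrap) in the final year.
    wdv = cost  # written-down value carried forward
    allowances = []
    for year in range(1, life + 1):
        if year < life:
            a = wdv * ca_rate      # writing-down allowance
        else:
            a = wdv - scrap        # balancing allowance on disposal
        allowances.append(a)
        wdv -= a
    savings = [a * tax_rate for a in allowances]
    return allowances, savings

allowances, savings = tax_savings(400_000, 5_000, 4, 0.25, 0.35)
# allowances: 100,000; 75,000; 56,250; then a 163,750 balancing allowance,
# totalling 395,000 = cost - scrap, so total saving = 0.35 * 395,000.
```

If the question is about discounted (present-value) savings, each year's saving would additionally be multiplied by a discount factor, which the simple shortcut does not capture.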
http://atractor.pt/mat/ABC-CBA/index-_en.html
## The dynamics of a trick (*)

### Introduction

Dear reader: Think of a natural number with three digits, $$abc$$, with $$a\neq c$$. Then, secretly, invert the order of the digits, obtaining $$cba$$, and compute the difference between the bigger and the smaller number. Now if you tell us the first digit of the difference, we can tell you the result. We can find out how this trick works with some examples. For instance, if $$abc=165$$, then $$cba=561$$ and $$cba-abc=396$$; if $$abc=990$$, then $$cba=099$$ and $$abc-cba=891$$. In general, under the assumptions made for the trick, the difference $$abc-cba$$ (or $$cba-abc$$) is always a number of the form $$\alpha9\beta$$ and, moreover, one has $$\alpha+\beta = 9$$. Therefore, knowing $$\alpha$$, we get the number. The value of $$\alpha + \beta$$ should not be surprising: indeed, given any number and its reverse (obtained by reversing the order of the digits), the sum of the digits is the same, and so both numbers are in the same class modulo $$9$$. Therefore, the difference between them is a multiple of $$9$$. The transformation $$f$$ acts on the set $$N_{3}$$ of natural numbers with three digits (allowing zeros on the left), associating to each number the distance between the number and its reversed version. Its image contains only ten elements, namely $\{000, 099, 198, 297, 396, 495, 594, 693, 792, 891\}.$ Notice also that, since the domain of the function $$f$$ is finite, if we iterate $$f$$, obtaining for each element $$x$$ of $$N_{3}$$ the respective orbit by $$f$$, we must reach a cycle, whose elements belong to the image of the function $$f$$. Looking at the ten aforementioned numbers, we see that $$000$$ is a fixed point of $$f$$, attracting all numbers $$abc$$ with $$a=c$$; and that $099\rightarrow 891\rightarrow 693\rightarrow 297\rightarrow 495$ is a cycle with period $$5$$, at which every other orbit arrives, in no more than two iterations of $$f$$ (see the next picture).
Cycles and precycles in $$N_{3}$$. Click on the picture to see it in a bigger size. Translated for Atractor by a CMUC team, from its original version in Portuguese. Atractor is grateful for this cooperation. This work integrates interactive components in CDF format prepared with the Mathematica program. To use these files, you must download them to your computer and access them with the CDF Player, which can be downloaded for free from http://wolfram.com/cdf-player
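The orbit structure described above is small enough to verify exhaustively. A short sketch checking that 000 is a fixed point, that the 5-cycle is as claimed, and that every orbit reaches the fixed point or the cycle within two iterations of $$f$$:

```python
def f(n):
    """Distance between a three-digit number and its reversal
    (leading zeros allowed, so e.g. 099 reverses to 990)."""
    return abs(n - int(str(n).zfill(3)[::-1]))

CYCLE = {99, 891, 693, 297, 495}

# 000 is a fixed point, attracting every abc with a = c.
assert f(0) == 0

# The period-5 cycle: 099 -> 891 -> 693 -> 297 -> 495 -> 099.
assert [f(99), f(891), f(693), f(297), f(495)] == [891, 693, 297, 495, 99]

# Every orbit lands in the fixed point or the cycle in at most two steps.
assert all(f(f(n)) in CYCLE | {0} for n in range(1000))
```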
https://publications.hse.ru/en/articles/72243034
## An Improved Algorithm for Packing T-Paths in Inner Eulerian Networks

Lecture Notes in Computer Science. 2012. No. 7434. P. 109–120.

Babenko M. A., Artamonov S. I., Salikhov K.

A digraph G = (V, E) with a distinguished set T ⊆ V of terminals is called inner Eulerian if for each v ∈ V − T the numbers of arcs entering and leaving v are equal. By a T-path we mean a simple directed path connecting distinct terminals with all intermediate nodes in V − T. This paper concerns the problem of finding a maximum T-path packing, i.e. a maximum collection of arc-disjoint T-paths. A min-max relation for this problem was established by Lomonosov. The capacitated version was studied by Ibaraki, Karzanov, and Nagamochi, who came up with a strongly polynomial algorithm of complexity O(φ(V,E) log T + V²E) (hereinafter φ(n,m) denotes the complexity of a max-flow computation in a network with n nodes and m arcs). For unit capacities, the latter algorithm takes O(φ(V,E) log T + VE) time, which is unsatisfactory since a max-flow can be found in o(VE) time. For this case, we present an improved method that runs in O(φ(V,E) log T + E log V) time. Thus, plugging in the max-flow algorithm of Dinic, we reduce the overall complexity from O(VE) to O(min(V^(2/3)E, E^(3/2)) log T).
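As an illustration of the definition only (a sketch, not the paper's algorithm), the inner Eulerian property just asks that in-degree equal out-degree at every non-terminal node:

```python
def inner_eulerian(nodes, terminals, arcs):
    """Check the inner Eulerian property: every node outside the
    terminal set T has equal numbers of entering and leaving arcs."""
    indeg = {v: 0 for v in nodes}
    outdeg = {v: 0 for v in nodes}
    for u, v in arcs:
        outdeg[u] += 1
        indeg[v] += 1
    return all(indeg[v] == outdeg[v] for v in nodes if v not in terminals)

# Two arc-disjoint T-paths, t1 -> a -> t2 and t2 -> a -> t1, both routed
# through the inner node a; degrees at a balance, so the digraph qualifies.
assert inner_eulerian({"t1", "t2", "a"}, {"t1", "t2"},
                      [("t1", "a"), ("a", "t2"), ("t2", "a"), ("a", "t1")])
# A dangling arc into a non-terminal breaks the balance condition.
assert not inner_eulerian({"t1", "a"}, {"t1"}, [("t1", "a")])
```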
https://www.gradesaver.com/textbooks/math/algebra/algebra-1-common-core-15th-edition/chapter-2-solving-equations-2-4-solving-equations-with-variables-on-both-sides-practice-and-problem-solving-exercises-page-105/18
## Algebra 1: Common Core (15th Edition)

$-n-24=5-n$. Adding $n$ to both sides gives $-24=5$. This is not possible, so there are no solutions.
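The contradiction can also be checked mechanically: the difference between the two sides simplifies to the constant −29, so no value of n makes them equal. A quick sketch:

```python
# (-n - 24) - (5 - n) = -29 for every n, so -n - 24 = 5 - n has no solution.
assert all((-n - 24) - (5 - n) == -29 for n in range(-1000, 1000))
```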
https://www.glossa-journal.org/articles/10.5334/gjgl.834/
# A syntax for semantic incorporation: generating low-scope indefinite objects in Inuktitut

## Abstract

The range of scope readings for Inuktitut nominal expressions appears superficially to depend on the verbal morpho-syntax, with noun incorporation and antipassive inflection both playing a role. A new model is presented in which the syntactic role played by agreement features in Case theory is unified with the absence of a choice functional D in the semantic interpretation. For both, a phase-level D-deletion operation ensures the correct results. The model is shown to account for the scopal properties of nominals in a range of contexts larger than the literature typically considers, including incorporation of predicational, locational, and locative nouns, and non-modalis-marked arguments of non-antipassive verbs.

How to Cite: Branigan, P., & Wharram, D. (2019). A syntax for semantic incorporation: generating low-scope indefinite objects in Inuktitut. Glossa: A Journal of General Linguistics, 4(1), 92. DOI: http://doi.org/10.5334/gjgl.834

Published on 06 Aug 2019. Accepted on 20 Jun 2019. Submitted on 03 Oct 2018.

## 1 Introduction: the problem of indefiniteness in Inuktitut

As already carefully documented in the literature, bare nominals in Inuktitut exhibit a tight correspondence between the syntactic context in which the noun appears and its semantic scope. This pattern is illustrated in (1)–(2). Incorporated objects (1) and objects of antipassivised verbs (2) are obligatorily construed as narrow-scope indefinites:123

(1) South Baffin Inuktitut
    Pani-qaq-tunga
    daughter-have-PART.[–TR].1SG.ABS
    'I have a daughter/daughters.'

(2) Akittiq iqalung-mik taku-∅-nngit-tuq
    A.(ABS) fish-MOD see-AP-NEG-PART.[–TR].3SG.ABS
    i. 'Akittiq didn't see any fish.'
    ii. #'There is a particular fish that Akittiq didn't see.'

In contrast, objects of non-incorporating verbs which have not undergone antipassivisation are obligatorily interpreted as what Fodor & Sag (1982) called 'specific': (3). In this respect, they pattern with both absolutive and ergative subjects, which also allow only for a wide scope interpretation of bare nominals:4

(3) Akitti-up iqaluk taku-nngit-taa
    A.-ERG fish(ABS) see-NEG-PART.[+TR].3SG.ERG.3SG.ABS
    i. #'Akittiq didn't see any fish.'
    ii. 'There is a particular fish that Akittiq didn't see.'

(4) a. Suli arnaq iqalung-mik taku-∅-nngit-tuq
       still woman(ABS) fish-MOD see-AP-NEG-PART.[–TR].3SG.ABS
       i. 'There is a woman who hasn't seen any fish yet.'
       ii. #'No woman has seen any fish yet.'
    b. Suli arna-up Miali taku-sima-nngit-tanga
       still woman-ERG M.(ABS) see-PERF-NEG-PART.[+TR].3SG.ERG.3SG.ABS
       i. 'There is a woman who hasn't seen Mary yet.'
       ii. #'No woman has seen Mary yet.'

In fact, bare indefinites bearing absolutive Case (objects of non-antipassivised transitive verbs and subjects of intransitive verbs) are consistently construed with the widest scope, even out of islands, as evidenced by the non-availability of an intermediate-scope reading of the relevant indefinite in (5):5

(5) Ilisaiji-limaa-t aittarusuk-kajaq-tut ilinniaqti nuqqaq-pat
    teacher-all-ABS.PL be.disappointed-would-[–TR].3PL.ABS student(ABS) quit-COND.3SG.ABS
    'every teacher will be disappointed if a student quits'
    i. 'There is one student, who every teacher doesn't want to see quit.'
    ii. #'For each teacher, there is one student who (s)he doesn't want to see quit.'
    iii. #'Every teacher will be disappointed if any student quits.'

There are thus two principal issues to address in the interpretation of Inuktitut indefinites: the cause of obligatory apparent wide-scope in the direct Case-marked nominals, and the source of obligatory narrow scope in incorporated and antipassive objects.

These patterns cannot be explained by treating Inuktitut bare indefinites as kind terms. Kind-denoting terms are expected to scope low, relative to sentential operators (Carlson 1977; Chierchia 1998). As we have already seen, however, the canonical bare indefinite in the language—absolutive arguments—consistently have the appearance of scoping high.6 Nor is it possible to coerce any type of (sub-)kind-level reading for a(n) (bare) incorporated noun in the language. Consider the sentence in (6):7

(6) Holda tii-tu-kqau-ngit-tuk
    Holda tea-consume-NPST-NEG-PART.[–TR].3SG.ABS
    'Holda didn't drink tea.'

The utterance in (6) is judged to be incompatible with a situation where one is hoping to explain that Holda didn't drink any currant tea, even though she is drinking some orange pekoe tea, even in a context where such a reading would seem to be felicitous. The sentence is only judged to be (truthfully) compatible with a context in which Holda didn't drink any tea whatsoever. While we do not disagree that an understanding of bare nominals in some languages as kind terms has provided cross-linguistic insights,8 it is not apparent that such an approach has anything to tell us about the facts of Inuktitut. See Van Geenhoven (2000), Van Geenhoven & McNally (2005) and Gillon (2012) for further consideration of these matters.9

The narrow-scope indefinite interpretations found in noun-incorporation and modalis objects instantiate patterns which are widely attested, crosslinguistically.
Incorporated nouns are indefinite in many languages (Bittner 1994a; Baker 1988; Carlson 2006; Massam 2009), so the interpretation of Inuktitut (1) is in itself not remarkable. And many languages associate specific syntactic contexts or specific Case-forms for objects with indefiniteness. The literature on 'pseudo-incorporation' includes many cases where bare objects must be indefinite (Bittner 1987; Chung & Ladusaw 2003; Massam 2009; Dayal 2011; Espinal & McNally 2011), and the parallel body of work on 'partitive Case' does the same (Belletti 1992; Lasnik 1992; Kiparsky 1998; Luraghi 2003). Inuktitut is typologically uncommon because it employs two distinct mechanisms to the same end: incorporation and modalis Case-marking.10

But what makes Inuktitut bare nominals particularly significant for exploring the relationship between syntax and semantic scope is the set of regular exceptions to the general rule. Sometimes, noun incorporation does permit the incorporated noun to be interpreted as specific:11

(7) a. Louisa-u-vunga
       L.-be-IND.[–TR].1SG.ABS
       'I'm Louisa'
    b. Illu-mi-i-juk.
       house-LOC-be-PART.[–TR].3SG.ABS
       '(S)he's in a (specific) house.'
    c. John illu-nga-nu-u-juk.
       John(ABS) house-3SG.PR/SG.PM-DAT-be-PART.[–TR].3SG.ABS
       'John is going into his house.'

A similar interpretive flexibility is found with independent nominals which function like PPs, as in (8).

(8) South Baffin Inuktitut
    Nunaling-mi nuna-qa-lauq-sima-nngit-tuq
    settlement-LOC land-have-PST-PERF-NEG-PART.[–TR].3SG.ABS
    i. 'There is a (certain) town that (s)he hasn't lived in'
    ii. '(S)he has never lived in any town'

These "exceptions" to the general pattern have been little noted in the literature to date, so part of our task here will be to present a thorough characterisation of the contexts in which both wide and narrow scope interpretations are possible.
Our broader goal in this paper is to provide a principled theoretical account of these patterns—one which covers both the familiar cases and the apparent exceptions. Our model exploits a technical proposal (C-deletion) in Chomsky (2015) to ensure that the distribution of wide scope indefinites matches the distribution of structural (ergative and absolutive) Case-assignment, and that narrow scope indefinites appear elsewhere. The pattern of exceptions in (7)–(8) is then shown to reflect the greater derivational freedom which arises in specific contexts where Case is not an issue. Broadly speaking, there are three general approaches that one might pursue in attempting to explain the relevant facts of Inuktitut. First, one might suppose an approach like that of Chierchia (1998), which has seen substantial development in the work of, among others, Dayal (1999; 2004) and Bošković (2008), positing cross-linguistic variation in the (inherent) semantic type of nouns, together with basic semantic operations that can shift the type of NP. Indeed, Johns (2007; 2009) has advanced the argument that Inuktitut nouns are invariably born as referential expressions.12 Second, one might suppose, following Partee (1987), that nouns are universally born as properties of individuals, but that principled type-shifting operations can derive other types for the indefinite, as needed. Such an approach has been elaborated by Diesing (1992), and—a flavour thereof—by Chung & Ladusaw (2003). Third, one might suppose the more “traditional” Kamp (1981) and Heim (1982) view that nouns are uniformly non-quantificational, and that other readings must arise via additional mechanics. This last is the perspective we adopt, specifically defending the position of subsequent developments in Stowell (1991), Longobardi (1994), and Heim & Kratzer (1998), among others, that nouns are universally born as properties of individuals and that any e-type reading must be mediated by additional syntactic structure. 
We demonstrate that the types of interpretations available to indefinites in the language are corollaries of observable syntactic configurations, thereby eliminating the need for covert type-shifting operations in the semantics (as required by Bittner 1987). In this, we are adopting a conception of Logical Form (LF) which follows in the spirit of what Beck (1996), von Stechow (1996), and von Stechow (2000a) call “Transparent LFs”, which minimally require that LFs are determined by the syntax proper and that each LF determines a single (unambiguous) meaning (modulo context). The central proposal which we advocate is that the D, the head of DP, is deleted in specific contexts, given the labelling theory of Chomsky (2013; 2015); D-deletion results in a semantically sensible output only as long as the semantic content of the appropriate predicate compensates for the deletion by providing a variable to which a nominal predicate can apply. The obligatory very narrow scope found with many incorporated nouns and antipassive objects is a consequence. The vision which guides this study is that semantic interpretation should be just that: interpretation. Pragmatics and processing issues aside, the role of a semantic theory should be to characterise the mapping from the structures that the syntax produces into whatever is accessible on the meaning side of the LF/C-I interface. Semantic rules should not themselves operate on syntactic structures to produce new grammatical entities. One module of the grammar which has access to structure-altering operations should be sufficient. It follows that if there are aspects of the meaning which are substantially altered on the basis of what is present in the local grammatical environment, it should be the syntactic derivation which determines these, and semantic interpretation should simply accept them and find whatever interpretation is appropriate. 
And since our empirical focus in this paper is precisely aspects of the meaning of indefinites which have that character, our analytic goal is to identify how the syntactic derivation ensures these results. The advantage of an analysis in which the syntax and semantics are both implicated is that it provides a rubric from which departures from the general pattern may be examined. This is what will allow us to develop a principled account for the class of incorporating verbs in which the scopal opportunities are wider.

## 2 The claims

The account we present starts from three general premises. First, we maintain that all bare nominals in Inuktitut originate as full DP categories, in which D is a phase head in the sense of Chomsky (2001; 2013; 2015). (The asterisk notation employed in (11), passim, indicates the phase head status of the head it marks.) The significance of D for this model will become clear as the technical mechanisms are elucidated, but there are fairly concrete reasons to make this assumption in the first place. One involves the role played by agreement features in phase theory, following Chomsky (2008). Agreement features are introduced in a phase head as unvalued φ features, and they are then transferred to the complement of the phase head through the Feature Inheritance operation. This account must be applicable to agreement generally, including agreement which takes place inside nominals between a possessor and a possessed noun. In Inuktitut, possessor agreement of this type is obligatory, which means that nominal phrases must always be phasal DPs—otherwise, there would be no initial source for the unvalued φ features which are realised in the nominal morphology.
What is more, possessor agreement takes place both in wide-scope (ergative or absolutive) indefinite nominals and in narrow scope (modalis) indefinites; both must therefore include a D phase head, at least at an early point in the derivation.13 Second, (silent) D in Inuktitut is always interpreted as a choice function at the Conceptual-Intentional interface. Specifically, we adopt Wharram’s (2003) conclusion that NPs in Inuktitut are selected by (phonetically null) indefinite determiners which introduce variables over choice functions (Reinhart 1997; 2006; Winter 1997; Heim & Kratzer 1998; Matthewson 1998). Von Stechow (2000b: p. 196) provides an effective definition of this type of choice function as (9). (9) Let f be of type ⟨⟨e,t⟩,e⟩. f is a choice function iff (a) and (b) hold: a. P(f(P)) if P is non-empty. b. f(P) = * if P is empty. Where * is an object not in any semantic domain. That is, we assume the simplest type of choice function, one that (potentially) assigns to a non-empty set of individuals a member of that set. Under this approach, the relevant indefinite in (5) would have the simple structure indicated in (10). 1. (10) As a further (significant) technical detail, we also accept the conclusion of Kratzer (1998; 2003) that the variable which a choice function introduces remains free at LF, and that its interpretation is contextually determined. That is, choice functions are not syntactic features, nor are they interpreted at LF. As a consequence, choice functions are not transferable, and deletion of D, as discussed just below, does not eliminate LF-relevant material.14 Third, as a phase head, D can transfer its formal features to the head of its complement. Features which have been transferred are recoverable, so feature transfer sets up a syntactic context in which D can be freely deleted, and the head of its complement then becomes the new head of the phase.15 1. (11) 1. a. 1. Initial DP phase 1. 1. b. 1. Phase-end full DP (→ wide scope) 1. 1.
c. 1. Phase-end diminished nominal (→ narrow scope) (In (11a), the -n head is “little n”, the categorising functional head which combines with an acategorial root. For expository convenience, we represent the structure of a root combined with -n simply as N, in (11b) and henceforth.) The effect of D-deletion is that the choice function is eliminated from the semantic interpretation, and the nominal denotes a predicate only. Wide scope, then, automatically reflects the presence of the choice-functional variable in D. Conversely, D-deletion produces a structure for which some compensatory material must be available to produce a semantically coherent result. The known contexts in which the narrow scope readings are available indicate at least some of the conditions under which the predicate nominal can have the variable of which it holds both introduced and bound. Incorporating verb stems and antipassive verb forms both apparently provide the necessary compensation. In fact, the extra semantic content that they provide can be taken as exactly what makes these two verb-types special. Antipassive verbs differ from simple transitives precisely in the addition of extra morphology—sometimes null—and that morphology’s accompanying lexical semantics.16 The antipassive morphology serves to mediate the composition of V with a derived property-type object. In other words, the antipassive morpheme is a function that takes a 2-place relation between individuals and events (which, following Kratzer (1996), we take to be the denotation of verb roots), giving a new function that takes a 1-place predicate, yielding a 1-place event predicate.17 If (12a) is taken as the semantic contribution of antipassive morphology, then (12b) should indicate the meaning of the antipassive verb in (2) when combined with the modalis-marked object. (How this result is ensured will be discussed below, in section 3.2.)18 (12) a. ⟦AntiP⟧ = λP⟨e,⟨s,t⟩⟩.λQ⟨e,t⟩.λe_s.∃x[P(x)(e) ∧ Q(x)] b.
⟦VP in (2)⟧ = λe_s.∃x[see′(x)(e) ∧ fish′(x)] Less formally, what (12a) indicates is that the antipassive affix introduces an existentially bound variable which, when combined with a transitive verb, will serve as the object of the verb. The semantic contribution of incorporating verb stems is comparable, although no extra morphology is found in this case, since incorporating verbs are only used when actual noun incorporation takes place. The (13a) formula sums up the overall semantic contribution of any regular incorporating verb stem; (13b) shows how this should work out for the verb phrase in (1).19 (13) a. ⟦V_incorporating⟧ = λQ⟨e,t⟩.λe_s.∃x[P(x)(e) ∧ Q(x)] b. ⟦pani-qaq⟧ = λe_s.∃x[have′(x)(e) ∧ daughter′(x)] ### 2.1 A note on definiteness The free translation line in the presentation of Inuktitut data sometimes includes definite articles, because that corresponds to the most felicitous English equivalent. But in fact, the distinction between indefinites and definites is not a relevant one in Inuktitut, a language which lacks true definiteness. Wharram (2003) makes this point for Inuktitut, as do Matthewson (1999), Gillon (2006), and Gillon (2011) for Salish (broadly), Sk‒wx‒wú7mesh (Salish), and Innu-aimûn (Algonquian), respectively. In this respect, we follow Heim (2011: p. 1006) in supposing “that the ‘ambiguous’ DPs in such languages [lacking definiteness marking] are simply indefinites. They are semantically equivalent to English indefinites, but have a wider range of felicitous uses because they do not compete with definites and therefore do not induce the same implicatures.” One of the principal justifications for this position, briefly laid out here, is that it is best able to account for the observed ‘scope’ facts of specific indefinites in the language, particularly with respect to the apparent availability of so-called intermediate readings. Such readings become available in the presence of bound variable pronouns, and not otherwise.
This is wholly unexpected under a view of the specific indefinite as being in fact definite, but it is exactly as predicted under Kratzer’s (1998) approach to (Skolemised) choice functions. Compare the availability of a so-called intermediate scope reading for the relevant nominal, nutaraq ‘child’, in (14) to the unavailability of such a reading for the nominal ilinniaqti ‘student’ in (5), above. 1. (14) 1. South Baffin Inuktitut 1. Anaana-limaa-t 2. mother-all-ABS.PL 1. numaasuk-kajaq-t-u-t 1. nutara-ni 2. child-4SG.PR.SG.PM 1. tuquk-pat 2. die-COND.3PL.ABS 1. ‘every mother1 will be sad if her1/*2 child dies’ 2. i. There is one child of one of the mothers, and every mother will be sad if that child dies. 3. ii. For each mother, there is a child of hers who she doesn’t want to see die (bound variable reading of hers) 4. iii. #Every mother will be sad if any child dies. In (14), under the bound variable interpretation of her—reading (ii)—, the choice function which selects one child from a set of a mother’s children will have a different restrictor set for each mother, and therefore, a different individual child for each mother can be selected by the choice function.20 Indeed, even proper names, often taken to be definite descriptions par excellence, can be shown to be non-referential in Inuktitut, as discussed in section 3.2. ## 3 Phase head deletion and DP in Inuktitut The idea that the head of a phase may delete originates with Chomsky (2015), who employs this concept to derive the that-trace effect. A quick sketch of Chomsky’s model will help in identifying the comparable patterns in Inuktitut DPs. Building on the labelling theory of Chomsky (2013), Chomsky supposes that a phase which provides one half of the label for a dominating node cannot be displaced during the phase in which the label is constructed. 
Since the subject DP combines with TP in English to form the ⟨φ,φ⟩ label for the sentence, it follows that subjects are frozen in place until the CP phase is complete. When C persists as the head of the phase, it follows that subjects cannot be extracted at a later phase level, because the subject remains as a part of the domain of C, so it undergoes Transfer to the interfaces when C is complete, and is lost to the derivation thereafter. But Chomsky observes that C must transfer some of its features, including φ and Tense features, to T. (C first agrees with the subject and Case-marks it.) The Feature Inheritance operation (henceforth, FI) which accomplishes this transfer is a copying operation, so the immediate effect is that φ and Tense features are present on both C and T. If C transfers all of its formal features, then C can play no further role in the syntactic derivation, and under those conditions, C may then delete.21 Deletion is recoverable under these circumstances because the features of C are also present on T. When C deletes, however, the head to which it has transferred everything also acquires the status of serving as the head of the clausal phase. And then when the complement of the phase head (=T) undergoes Transfer, the subject is not affected, because it remains a sister of TP. Operations in a later phase then have access to the subject, which may therefore undergo Ā-movement. The relevant portion of the derivation for (15a) can be seen in (15b–d). (15) a. Who did they claim will win the race? b. … [ C* [⟨φ,φ⟩ who will win the race ]] c. … [ ∅ [⟨φ,φ⟩ who will* win the race ]] d. who did they claim [ ∅ [⟨φ,φ⟩ t will* win the race ]] C-deletion need not coincide with movement of the subject to a higher clause. In embedded questions with subject wh-phrases, C-deletion must also take place, since subjects cannot be displaced from their “criterial” position before the CP phase is complete. 
Thus C must transfer all of its features to T to enable generation of (16). In this case, the features transferred to T include the [wh] feature, which is interpretable and will not be deleted. If the [wh] feature remains on C, then some other wh-phrase must still raise up, to join with C in providing a label to a full interrogative CP, as in (17). (16) They wondered [ ∅ [⟨φ,φ⟩ who will* compete in the next race ]] (17) They wondered [ who (C)* [⟨φ,φ⟩ they will run against t ]] While Inuktitut DP offers no direct parallels to the that-trace effect, it still must operate along the same lines as English CP in specific respects. First, D must serve as the source of genitive (= possessive) Case-marking for possessors in constructions like (18). 1. (18) 1. 1. a. 1. Jaani-up 2. John-GEN.SG 1. nasa-nga 2. hat-3SG.PR/SG.PM 1. ‘John’s hat’ 1. 1. b. 1. [ D* [ Jaani n [ $\sqrt{\text{nasa}}$… ]]] 1. 1. c. 1. [ D* [⟨φ,φ⟩ Jaani-up $\sqrt{\text{nasa}}$-n-[3SG] [ ${t}_{\sqrt{ }}$… ]]] The agreement features which accompany genitive Case assignment are realised on the possessed noun, which follows only if the D probe assigns Case and then Feature Inheritance displaces the resulting valued φ features to the nominal complement. We assume the possessor in (18b, c) is a “specifier” for n, either as its base position, or because it raises to that position, driven by the imperatives of the labelling algorithm. Just as in the clausal case, the effect of FI is that uninterpretable features are realised phonetically within the active phase, and they can therefore be deleted immediately with the Transfer operation. It is not only uninterpretable φ features which are realised on the noun; interpretable Number features within the DP are expressed with the nominal inflection, too. For example, in (19), the nominal suffix -ngit is a portmanteau expression of both the φ features of the possessor and the (interpreted) plural Number for the DP as a whole.
The implication seems to be that D—which must provide the interpretable Number feature—transfers both uninterpretable φ and interpretable Number to its nominal complement, at least in nominal structures where a nominal complement exists. (Bare possessive pronouns are always null in the dialects of Inuktitut for which we have current access to speakers, so only full nominal structures provide any evidence at all about where features are realised.) Thus, in (19a), the possessed noun is inflected simultaneously with an interpretable plural feature and with uninterpretable 3rd person singular agreement features, both of which are realised portmanteau in the -ngit suffix. The structure in (19b) then highlights the effect of these features on the labelling structure of the full DP, where possessor and possessed both contribute to the ⟨φ,φ⟩ label. (Again, the possessor is a sister to nP in the labelled structure.) 1. (19) 1. a. 1. inu-up 2. person-GEN.SG/REL 1. illu-ngit 2. house-3SG.PR/PL.PM 1. ‘the person’s houses’ 1. 1. b. 1. [ D* [⟨φ,φ⟩ inuup-∅ illu-[3SG/PL] [ t … ]]] There is one important difference to keep in mind between how FI affects the uninterpretable φ features and the interpretable Number feature. While all of these are realised morphologically on the (possessed) noun, the Number feature must still remain visible on D for later in the derivation. Otherwise, DP cannot enter into agreement relations with other elements of the sentence in which it will appear, since D is a phase head, and features contained in its complement will not be accessible once DP is complete. Uninterpretable features, in contrast, lack any interpretation in their original position, and will only be interpreted at the S-M interface in the lower position to which they are transferred (Richards 2007). The uninterpretable Person features seen on the possessed noun in (18) are necessarily transferred downwards from D.
It is an important question whether interpretable Person features are also subject to FI in this context, since the transfer of formal features is a critical step in the derivation leading to deletion of a phase head. But answering this question first requires some clarification of what the relevant (interpretable) Person features are. For 1st and 2nd persons, the question is moot, since indexical pronouns are rare, except as focussed topics. But in a regular full nominal, the status of the Person feature is open to debate. In the recent literature, some analyses follow Benveniste (1960) in treating 3rd person forms as the absence of a Person feature (Kayne 2000); others insist that 3rd persons are fully specified for Person feature values (Nevins 2007). We find persuasive the arguments of Oxford & Welch (2015), who argue that both camps are correct, but only for specific cases. It is even possible within a single language for some 3rd person nominals to bear a Person feature and others to lack it; this feature contrast then gives rise to a distinction between animate (Person-bearing) and inanimate (Person-less) nouns, a distinction which has a range of consequences in the agreement systems of Algonquian and Athabaskan languages, as they show. For our purposes, it is sufficient, however, that D in 3rd person nominals in Inuktitut does provide an unmarked [PERSON] feature which is therefore available to the agreement system.22 But now consider how the presence of a Person feature affects the derivation of vP generally. Since Case and agreement operate together, the Person feature will be intimately involved in Case assignment, and in movement operations which are driven by Case and agreement. This web of connections is tightly integrated in the labelling theory.
Under Chomsky’s (2015) account, the “raising-to-object” patterns in English ECM are found because transitive verbs in English have objects which merge with the node immediately above the verb root, to provide a ⟨φ,φ⟩ label for the complement of v.23 A typical derivation is seen in (20). (20) a. They declared Marie to have won the race. b. … [ (they) v* [ $\sqrt{\text{declare}}$ [ to have Marie won the race ]]] c. … [ (they) v* [⟨φ,φ⟩ Marie $\sqrt{\text{declare}}$ [ to have t won the race ]]] d. … [ (they) $\sqrt{\text{declare}}$-v [⟨φ,φ⟩ Marie t* [ to have t won the race ]]] As Chomsky notes, the displaced object in such cases is not subject to “criterial freezing” effects, because head-movement of the verb root to v has the consequence that v loses its status as the phase head.
The verb root automatically inherits that status, leaving the object at the periphery of the new phase, and accessible for later Ā-movement operations.24 What is true of the more complex ECM structures must be true as well of simple transitive clauses. Thus, in (21), the object the victory will still shift within the verb phrase. (21) a. They celebrated the victory. b. … [ (they) v* [ $\sqrt{\text{celebrate}}$ the victory ]] c. … [ (they) v* [⟨φ,φ⟩ the victory $\sqrt{\text{celebrate}}$ t ]] d. … [ (they) $\sqrt{\text{celebrate}}$-v [⟨φ,φ⟩ the victory t* t ]] In order for this account of English object height to work, it is necessary that the agreement operation which drives it be complete. v Agrees with the object and transfers the resulting φ features to the verb root. But if the full set of φ features is not valued, then Case assignment cannot take place, and the label provided to the complement of v will remain defective.
It follows that both Person and Number features must be present in the D phase head, and FI must not have displaced them entirely downwards. ### 3.1 The derivation of absolutive objects Chomsky’s account of English object height has immediate consequences for the analysis of ergative subjects in Inuktitut. Ergative subjects appear only together with absolutive objects; under virtually any theory of ergative case assignment, this means that they must belong to the same phase (Woolford 2015; Bobaljik & Branigan 2006; Coon & Preminger 2011). (The same would presumably be true of other theories of ergativity which predate phase theory, such as Bittner & Hale (1996), among many others.) If objects remain in their base position, then they will never belong to the same phase as the clausal subject. But if objects raise past the verb root, and if the verb root becomes the derived phase head, then the object and subject do belong to the same phase. Under those conditions, ergative case assignment is possible. (22) [ ERG T [ t verb [⟨φ,φ⟩ ABS t* (…) ]]] In (23), for example, the analysis must be one in which v values φ features with tuktu ‘caribou’ and transfers them to the verb root. tuktu raises from its base position to merge above the verb root, providing a ⟨φ,φ⟩ label for the complement of v. Movement of the verb root to v deletes the phase head status of v, and makes the base position of the verb root the new phase head. As the object is now in the periphery of the verbal phase, it can bear absolutive Case and the co-phasal subject, ergative Case.25 1. (23) 1. South Baffin Inuktitut 1. Taqqialu-up 2. T.-ERG 1. tuktu 2. caribou(ABS) 1. taku-lau-nngit-tanga. 2. see-PST-NEG-PART.[+ TR].3SG.ERG.3SG.ABS 1. ‘Taqqialuk didn’t see a caribou.’ In Inuktitut, ergative case is available only when there is also an absolutive “competitor”, regardless of how this is implemented.
An anonymous reviewer notes one important corollary: while ergative case must necessarily be assigned in the C-T phase in order to “compete”, the same is not necessarily true of absolutive case, which is assigned in the absence of a competitor. Nevertheless, we assume that both ergative and absolutive cases are assigned at the C-T phase, either because direct Cases are associated with that point in the clausal derivation, or because the C-T head is implicated in the Case assignment process. Given the workings of Chomsky’s labelling algorithm, in order to be the co-labelling “specifier”, the object tuktu needs a full set of agreement features, including [PERSON]. The data even suggests that a stronger position may be defensible: the presence of a [PERSON] feature in the domain of v ensures that v will bear φ-features to value and to transfer to the lexical root. This stronger claim is true already in English, where “inner object shift” is obligatory, when possible. And if we adopt this stronger position, then it follows that the presence of a [PERSON] feature on objects in Inuktitut will ensure the derivation adheres to the ergative/absolutive Case pattern, simply because the object is then always forced upwards into the next phase. As DP is a phasal category, the φ features which will serve as co-labels ([PERSON] and [NUMBER]) must be present and accessible in the D position at the next phase, where agreement and labelling operations take place. For the [PERSON] feature, this is unremarkable; cross-linguistically, [PERSON] is normally realised morphologically only in the D position (as a pronoun). For [NUMBER], the same must be true, even though [NUMBER] is morphologically realised on the noun in Inuktitut. It must be concluded that the [NUMBER] feature on the noun is provided to it by FI from D, and that D continues to bear the [NUMBER] feature even after FI takes place. As for the scopal properties of absolutive objects, nothing more need be said. 
D is interpreted as a choice function, without exception, and so absolutive objects will always necessarily bear widest scope, as seen in (24). The interpretation of (24a) has maximally wide scope for the object. The structure of vP in (24b) includes a full DP object, in order to ensure the absolutive Case assignment. And the compositional semantics associated with the DP structure of this object are sketched in (24c). 1. (24) 1. a. 1. Taqqialu-up 2. T.-ERG 1. tuktu 2. caribou(ABS) 1. taku-lau-nngit-tanga. 2. see-PST-NEG-PART.[+ TR].3SG.ERG.3SG.ABS 1. i. #‘Taqqialuk didn’t see a (single) caribou.’ 2. ii. ‘There is a (particular) caribou that Taqqialuk didn’t see.’ 1. 1. b. 1. [ $\sqrt{\text{taku}}$–v [⟨φ,φ⟩ [DP D $\sqrt{\text{tuktu}}$-n … ] t* … t… ] ] 1. 1. c. ### 3.2 The derivation of modalis objects of antipassives In antipassive clauses, the derivation is quite different, but adheres to the same general principles. The semantic properties of a nominal must reflect the interaction of inherent semantic properties, combinatorial semantics based on syntactic representation, and the effects of the syntactic derivation. With an object of an antipassive verb, absolutive Case is not assigned to the object, and the object does not participate in the verbal agreement. Given what was said about normal absolutive objects, the implication is that the nominal must not constitute a fully specified DP, with accessible φ features. And yet modalis objects in antipassive clauses do show some of the hallmarks of a full DP. In particular, they must be phasal categories, because they may contain possessors, and the possessor then agrees with the possessed noun, as in (25). 1. (25) 1. Akittiq 2. A.(ABS) 1. pani-nga-nik 2. daughter-3SG.PR/SG.PM-MOD 1. taku-juq 2. see-PART.[–TR].3SG.ABS 1. 
‘Akittiq1 sees his2/*1/her daughter.’ Just as with absolutive objects, modalis objects must originate as full DPs, with D supplying [NUMBER] features to the noun, and in possessed DPs, Case/agreement features, as well. But modalis objects cannot agree with the verb, so something must happen within the derivation of the full object to block later operations at the next phase level. Since D transfers features to the noun, the simplest account is that D is deleted afterwards, just as C is deleted in English full clauses.26 How does this ensure that the modalis object will not agree? FI provides [NUMBER] features to the noun, and then D-deletion will remove the phase head. The noun itself must then become the new head of the nominal phase, just as T does in the parallel English structure. If the noun were to obtain a full set of φ features by this means, then it could serve as the goal for agreement processes at the next phase level. But features which appear only on the noun must be interpretable in that position. [NUMBER] features are interpretable; [PERSON] features, which are referential and not predicative, are not. It seems that FI must apply only in part in this case, transferring [NUMBER], which then becomes recoverable, but not [PERSON]. But this does not mean that 3rd person features on D are non-recoverable in this context. Modalis objects of antipassive verbs are always 3rd person forms, because 1st and 2nd persons are semantically unsuited to this context (as we show immediately below). As such, the context itself ensures that the person feature on D will be recoverable. D can therefore be deleted even though only the [NUMBER] feature is transferred, because all of the pertinent formal features are recoverable. The effect of FI and D-deletion on the structure of a nominal is then as in (26), where the asterisk indicates the current phase head. 1.
(26) When derived nominals like this are introduced as objects in a vP phase, they cannot serve as full agreement targets, and they cannot provide one half of a ⟨φ,φ⟩ label for the sister of v. It follows that they cannot raise past the verb root, and they will be inaccessible for absolutive Case assignment when the C-T phase is constructed. The final position of a D-less full nominal must be its base position, within the Transfer domain of the verb phrase phase.27 Lacking access to absolutive Case, the only way for a full nominal to satisfy the Case filter is if the grammar provides a fall-back Case assignment mechanism. This is what modalis Case is. When ergative and absolutive Case are unavailable, an Inuktitut (overt) nominal may freely be assigned modalis Case, as in antipassives. The simplest hypothesis for how this Case is assigned is that the antipassive morphology itself freely assigns modalis Case, and we assume this is so.28 Recall now that wide scope interpretations are unavailable for the object of an antipassive verb, as in (27). 1. (27) 1. South Baffin Inuktitut 1. Taqqialuk 2. T.(ABS) 1. tuktung-mik 2. caribou-MOD 1. taku-∅-lau-nngit-tuq. 2. see-AP-PST-NEG-PART.[–TR].3SG.ABS 1. i. ‘Taqqialuk didn’t see a (single) caribou.’ 2. ii. #‘There is a (particular) caribou that Taqqialuk didn’t see.’ This follows, as well, from the deletion of D, which leads to the structure (28). 
(28) [ $\sqrt{\text{taku}}$-v [VP t* [ ∅ $\sqrt{\text{tuktung}}$-n … ]… ]] D-deletion eliminates the choice-functional variable which makes apparent wide scope interpretations available. With the choice function gone, wide scope for the object becomes impossible, but at the same time the structure must include some alternative means by which to introduce (and existentially bind) the variable of which the nominal predicate holds, in order to allow the phrase to be interpretable within the structure at all: (29) The obvious conclusion is that the “antipassive” morphology itself must mediate the composition of the verb together with its object, as already described in (12a). The effects of D-deletion on the scopal properties of antipassive objects extend even to the interpretations of proper nouns, which lose their “rigidity” in this context. Example (30) illustrates this point. 1. (30) 1. a. 1. (context:) 2. ‘Both you and your sister know me. Having just returned from Montreal, where you saw me, you’re now talking to your sister.’ 1. 1. b. 1. (test sentence:) 1. Ippaksak 2. yesterday 1. Tuglasi-mik 2. Douglas-MOD 1. taku-lauq-tunga 2. see-PST-PART.[–TR].1SG.ABS 1.
(#)‘Yesterday, I saw (someone named) Douglas’ Given the mechanisms sketched above, the name Tuglasi in (30) has lost its D, and the semantic interpretation must cope with this loss. In this case, it does so by taking the name as a property of people named Douglas, and not as a specific individual.29 Of course, this constitutes further evidence in support of a syntactic operation that removes D from the structure, since proper nouns are generally assumed to appear only in full DP structures, as argued initially by Longobardi (1994). #### 3.2.1 Non-antipassive modalis objects Going back to Bittner’s (1987) observation for Kalaallisut that the correlation between Case and ‘specificity’ is only apparent, it is important to recognise that it cannot be the modalis Case in the antipassive construction which ensures narrow scope for the object, because modalis arguments in other contexts do bear wide scope. In double object verb phrases, for example, a modalis-marked direct (Theme) object must show a wide scope interpretation, as in (31a), a possibility not allowed for a modalis-marked object which co-occurs with antipassive morphology, as in (31b), from South Baffin Inuktitut. 1. (31) 1. a. 1. Kingmaalisaa-p 2. K.-ERG 1. iqalung-mik 2. fish-MOD 1. Miali 2. M.(ABS) 1. tuni-lau-nngit-tanga. 2. give-PST-NEG-PART.[+ TR].3SG.ERG.3SG.ABS 1. ‘There is fish that Kingmaalisaaq didn’t give to Mary. (with strong implication that he did give Mary some other fish)’ 1. 1. b. 1. Kingmaalisaaq 2. K.(ABS) 1. iqalung-mik 2. fish-MOD 1. Miali-mut 2. M.-DAT 1. tuni-si-lau-nngit-tuq. 2. give-AP-PST-NEG-PART.[–TR].3SG.ABS 1. i. ‘Kingmaalisaaq didn’t give any fish to Mary’ 2. ii. #‘There is some fish that Kingmaalisaaq didn’t give to Mary.’ Similarly, consider the following two Labrador Inuttut sentences: 1. (32) 1. a. 1. Nina-up 2. N.-ERG 1. atuagam-mik 2. book-MOD 1. Hulda 2. H.(ABS) 1. aittu-qau-nngi-tanga 2. give-PST-NEG-PART.[+ TR].3SG.ERG.3SG.ABS 1.
‘There is a (particular) book that Nina did not give to Hulda’ 1. 1. b. 1. Nina 2. N.(ABS) 1. atuagam-mik 2. book-MOD 1. Holda-mut 2. H.-DAT 1. aittu-i-qau-nngi-tuk 2. give-AP-PST-NEG-PART.[–TR].3SG.ABS 1. i. ‘Nina gave no book(s) to Hulda’ 2. ii. ‘There is a (particular) book that Nina did not give to Hulda’ Speakers judge the sentence in (32b) to be felicitous either in a situation in which Nina gave Holda no books or in one in which Nina may have given Holda some books, but not a particular one.30 On the other hand, the utterance in (32a) is judged to be felicitous with the latter scenario, but wholly infelicitous with the former. Data like (31)–(32) show clearly that nominals bearing modalis Case need not be interpreted as narrow scope indefinites. Indeed, the fact that the modalis object in both (31a) and (32a) is interpreted with wide scope is expected in our model. Since the result of D-deletion is a predicate nominal, D-deletion must not occur in a context where there is no variable present of which the nominal may hold. Since the verbs in (31a) and (32a) are not antipassive forms, they cannot provide the requisite variable, so D-deletion would ensure an anomalous interpretation. D must remain intact in these examples, and the wide scope indefinite interpretation is then automatic. #### 3.2.2 Implicit objects of antipassives While we acknowledge that cross-linguistic variability exists in the semantic characteristics associated with noun incorporation – indeed, one of our goals here is to demonstrate such variation within single language varieties – we equally acknowledge that there do exist some stable cross-linguistic properties of incorporated nominals, interpretation as weak indefinites and obligatory narrow scope among them. Martí (2015) provides a survey of some of those stable attributes, and shows that implicit indefinite objects in English share those same semantic properties.
The same observation can be shown to hold of Inuktitut, where implicit objects of antipassivised verbs cannot be interpreted as ‘specific’, or referentially; they are restricted to a property-type interpretation. The sentence in (33) illustrates.

(33) Darryli-up itluujak iga-ppauk, Alice nigi-∅-qatta-ngit-tuk.
     D.-ERG seaweed(ABS) cook-COND.[+TR].3SG.ERG.3SG.ABS A.(ABS) eat-AP-HAB-NEG-PART.[–TR].3SG.ABS
     i. ‘When Darryl cooks seaweed, Alice doesn’t eat (anything).’
     ii. #‘When Darryl cooks seaweed, Alice doesn’t eat it.’

The unavailability of a ‘specific’ reading for the implicit object of the antipassivised verb in (33) is confirmed by the infelicity of the sentence in (34), judged by speakers to be an unacceptable follow-up to (33).

(34) Alice nigiqattajuk puijivinimmingaak.
     ‘Alice eats seal-meat instead.’

On the other hand, the same speakers judge (34) to be a perfectly acceptable follow-up to (35), which contains a fully transitive (i.e., non-antipassivised) verb in the relevant clause.

(35) Darryli-up itluujak iga-ppauk, Alice nigi-qatta-ngit-tanga.
     D.-ERG seaweed(ABS) cook-COND.[+TR].3SG.ERG.3SG.ABS A.(ABS) eat-HAB-NEG-PART.[+TR].3SG.ERG.3SG.ABS
     i. #‘When Darryl cooks seaweed, Alice doesn’t eat (anything).’
     ii. ‘When Darryl cooks seaweed, Alice doesn’t eat it.’

The particulars discussed in section 3.2.1 and in this section represent two distinct facts about the modalis that, to our knowledge, have not previously been discussed in the literature. In the former, it is seen that modalis morphology is, in the proper structure, consistent with the availability of a wider-than-narrowest-scope reading for its associated indefinite. In the latter, it is seen that (implicit) objects of antipassivised verbs demonstrate uniform semantic characteristics whether modalis morphology is present or absent.
The facts discussed in both sections, however, converge on the same conclusion: the modalis (-mik) marker plays no role in dictating the observed semantic force of the objects of antipassivised verbs.

### 3.3 The derivation of incorporated objects of incorporating verbs

We turn now to the examination of scope in noun-incorporation contexts, restricting our attention initially to the simplest case, in which nominal objects are incorporated by verbs of the class that incorporate obligatorily. It turns out that such structures express many of the same principles as antipassives do. Consider (36). The root -tuq must incorporate its object, and the incorporated object must be interpreted as a narrow scope indefinite.

(36) a. Ulluriaq iqaluk-tu-nngit-tuq.
        U.(ABS) fish-eat-NEG-PART.3SG.ABS
        i. ‘Ulluriaq didn’t eat a (single) fish.’
        ii. #‘There is a fish/are fish that Ulluriaq didn’t eat.’

     b. [ iqaluk-$\sqrt{\text{tuq}}$-v [VP t* [ ∅ t … ]… ]]

The narrow scope of the object is an indication that D-deletion has taken place, removing the choice function from within the object. The effect of D-deletion is the generation of structure (37), at the point where incorporation will take place.

(37) [tree diagram]

One consequence of D-deletion in (36) is that the noun iqaluk becomes the derived head of the nominal phase. This makes it accessible for operations at the next phase level. At the same time, deletion of D makes it impossible for the object to raise to a position where it might obtain absolutive Case. As the verb is not antipassive in (37), no modalis Case-marking will be possible either. The only solution which will satisfy the Case (filter) requirements of the object in this case is incorporation, which obviates the need for the object to bear Case (Baker 1988).
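The scope facts for (36) can be stated schematically. The following logical forms are our own illustrative rendering (the predicate names and the choice-function notation CH are expository, not the authors’ official entries), contrasting the attested narrow scope reading with the unavailable wide scope one:

```latex
% (36i), attested: negation scopes over the existential that the
% incorporating verb introduces for its property-type object.
\neg\exists e\,\exists x\,[\mathit{eat}(e) \wedge \mathit{Agent}(e,\mathrm{Ulluriaq})
   \wedge \mathit{Theme}(e,x) \wedge \mathit{fish}(x)]

% (36ii), unattested: an intact D would contribute a choice function f
% outside the scope of negation, but D-deletion has removed it.
\exists f\,[\mathrm{CH}(f) \wedge \neg\exists e\,[\mathit{eat}(e)
   \wedge \mathit{Agent}(e,\mathrm{Ulluriaq}) \wedge \mathit{Theme}(e,f(\mathit{fish}))]]
```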
As with the antipassive construction, the representation must include the semantic content which introduces the variable of which the predicate nominal holds, and which serves to close off that variable. In this case, it must be the verb root itself which carries this semantic information. The lexical entry of a canonical Inuktitut incorporating verb is therefore what we proposed above, in (13a), which, again, is a purely extensional translation of Van Geenhoven’s (1998) proposal for Kalaallisut incorporating verbs, adapted to the properties of verbs advanced in Kratzer (1996).31 The lexical entry of a canonical incorporating verb in the language has further consequences, since it actually ensures that these verbs incorporate their objects. Consider the possibilities for -tuq ‘eat, consume’, for example. If D does not delete in the object nominal, then the object will not be compositionally compatible with the verb (which is looking for a property-type argument). So D-deletion must take place. And if D is lacking, then the object cannot participate in absolutive Case-marking, as discussed above. Modalis Case, as discussed in section 3.2 above, is a ‘last resort’ Case, assignable by the antipassive morphology. But the antipassive is clearly semantically incompatible with the type of an incorporating verb, so modalis Case is not an option for the object of -tuq either. The only remaining possibility for the object to satisfy its Case requirements is incorporation. The semantic properties of any such verb therefore lead indirectly, but inexorably, to object incorporation. The morphology of such verbs must accordingly be compatible with noun incorporation, as a forced consequence.
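The compositional pressure just described can be sketched concretely. The entry below is our extensional rendering of a Van Geenhoven-style incorporating verb of the sort (13a) names (the predicate labels eat, Agent, Theme are illustrative):

```latex
% Incorporating verb: selects a property-type (<e,t>) object and
% existentially closes the variable of which that property holds.
[\![\text{-}tuq]\!] \;=\; \lambda P_{\langle e,t\rangle}.\,\lambda x.\,\lambda e.\,
   [\mathit{eat}(e) \wedge \mathit{Agent}(e,x)
      \wedge \exists y\,[\mathit{Theme}(e,y) \wedge P(y)]]

% A full DP, whose choice function supplies an individual, is of type e;
% type e cannot saturate the <e,t> slot, so composition fails unless D
% deletes and the bare noun denotes the property \lambda y.\mathit{fish}(y).
```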
Regular noun incorporation in Inuktitut does not permit stranding of possessors of the incorporated noun, unlike what occurs in a number of other languages (Allen, Gardiner & Frantz 1984; Baker 1988; Van Geenhoven 2002; Baker, Aranovich & Golluscio 2004; Deal 2013; and many others). This feature of the language reflects an indirect consequence of an independent constraint on the use of number features in Inuktitut, rather than a limitation on what kinds of nouns may be incorporated. The constraint is given in (38).

(38) Inuktitut Number Licensing Constraint (INLC)
     Only nouns with Case may bear number features.

The INLC ensures that plural and dual nouns will not be incorporated.

(39) South Baffin Inuktitut
     *Ulluriaq iqalu-i(t)-tu-nngit-tuq.
      U.(ABS) fish-PL-eat-NEG-PART.[–TR].3SG.ABS
     ‘Ulluriaq didn’t eat fishes.’

As shown below (section 4.2), there is no morphological constraint which blocks incorporation of plural nouns, which are permitted in locative incorporation contexts. It is only nouns for which incorporation resolves the Case problem that cannot bear number inflections, which is what the INLC ensures. Possessive constructions in full DPs also require number marking, as part of the φ feature agreement complex which provides a label for the possessive structure. The number features coming from agreement actually engage with interpretable number features in rich portmanteau morphology, and the INLC applies to both types of number feature equally. This accounts for the unacceptability of data like (40), where the possessed noun agrees with its possessor.32

(40) *Illu-ngi-liu-vugut.
      house-3PL.PR/PL.PM-make-IND.[–TR].1PL.ABS
     ‘We are building their houses.’

Of course, without such agreement on the possessed noun, there would be no label available for the possessive structure within DP, so that possibility is excluded, too.

## 4 Deriving exceptions

It is the exceptions which will prove the analysis.
Under the approach taken here, narrow scope is possible only when D-deletion occurs, and D-deletion takes place freely, but other factors conspire to limit this freedom. The use of [PERSON] features in labelling blocks D-deletion in absolutive objects. And the semantic demands of antipassive and incorporating verb roots require that D-deletion occur in order to output a fully interpretable LF. When these factors are obviated, though, D-deletion can be seen to occur freely.

### 4.1 Incorporations with copular verbs

One context where D-deletion applies more freely is with copular verbs. The copula is an affixal verb, which must find a nominal root, i.e., it must incorporate. But incorporation in this case need not produce a narrow scope reading, as seen in (41).

(41) a. Rita ilinniatitsiji-u-nngi-tuk
        R. teacher-be-NEG-PART.[–TR].3SG.ABS
        ‘Rita isn’t a/the teacher’

     b. Louisa-u-vunga
        L.-be-IND.[–TR].1SG.ABS
        ‘I’m Louisa’

Thus, the utterance in (41a) is compatible both with a state of affairs where Rita does not have the property of being a teacher and with one where Rita is a teacher whose value is picked out by the choice function.33 (In other words, in addition to a property-type reading for ‘teacher’, (41a) is fully compatible with a “Rita is not our teacher” reading, even though Rita may, in fact, be a teacher, given the appropriate context.) Using the terminology of Higgins (1979), the sentence in (41a) allows for both predicational and equative readings.34 The explanation for the freedom of interpretation in this case starts with the base structure. Following Moro (1997; 2000) and Chomsky (2013), we assume that copula structures include exocentric base positions for the two arguments. For an English example like (42a), the base arrangement of the copula and the two arguments is as in (42b).

(42) a. The morning star is the evening star.
     b. [tree diagram]
Such structures pose an immediate challenge for the labelling algorithm, since minimal search cannot identify which of the two arguments in the complement of the copula should serve as a label. Labelling cannot take place unless one of the two sisters raises to a higher position, ultimately becoming the subject of the sentence. The same will be true for the Inuktitut examples in (41). Example (41a), for instance, will have the base arrangement in (43).

(43) [tree diagram]

Other grammatical requirements must also find a solution, given the structure in (43). Both nominals must satisfy the Case filter, either through incorporation or through Case assignment. And the morphological requirements of -u must be satisfied by finding a nominal stem to adjoin to it. Happily, all these demands can be met. Absolutive Case is assigned to the nominal which raises. The non-raising nominal, ilinniatitsiji, will then be identified as the label for the complement of the copula, as in (44).

(44) [tree diagram]

The lower noun can now solve its Case requirements only through incorporation into -u.35 Since D is not deleted, however, incorporation must take place in two steps: N into D, and D into the copula. Simultaneously, this solves the morphological problem for the copula as well. Incorporation by the copula is subject to the INLC, because no Case is assigned to the incorporated nominal predicate. As such, once again, possessed nouns cannot serve as incorporees: (45).

(45) *Ilinniatitsiji-vu-u-jut.
      teacher-1PL.PR/PL.PM-be-PART.[–TR].3PL.ABS
     ‘They are our teachers.’

Up to this point in the analysis of Inuktitut copular structures, D-deletion plays no role. Copular structures differ from regular transitive verb phrases because the verb phrase provides two equivalently prominent nominal arguments (and no external argument). Both arguments may be full DPs, each with its own [PERSON] feature.
It follows that when one of the arguments is used to provide a label for the complement of the v phase head, the presence or absence of a [PERSON] feature on the other argument will not affect the success of that operation. In this construction, then, D-deletion may take place freely within the nominal which does not raise—at least as far as the labelling procedure is concerned. D-deletion does affect the phase structure of a nominal, though, and that must be factored into the description of how incorporation takes place. Suppose that D does delete in the low nominal. Then incorporation will follow the same derivational steps as we see with other incorporating verbs. In other words, deletion of D ensures that the noun becomes the new phase head. As such, the noun can raise directly to -u, since it is no longer contained within the inaccessible portion of the nominal phase. And the semantic implications of D-deletion for a nominal predicate are fully benign. D-deletion will remove the choice function in D, as usual, but the variable of which the indefinite holds can be existentially bound within the copula (an incorporating verb, recall) itself. The derivation succeeds straightforwardly, and the interpretation of the nominal will simply be that of a regular (weak indefinite) nominal predicate.

Now consider the consequences if D-deletion does not take place within the low nominal. The nominal phase structure remains that of a regular DP, and D remains the phase head. Therefore, incorporation must follow a successive cyclic head-movement path, with the noun raising to D within the DP phase, and then with D—now including the noun—incorporating into -u in turn, during the next phase. No principles of the syntax are violated, but the resulting structure will not be semantically composable, with a type-mismatch between the incorporated noun and the copula.
Evidently, then, a derivation of this kind cannot be the source of the specific reading generally available to copula-incorporated nouns. However, we follow much research on the crosslinguistic properties of copulae that argues (essentially following Russell (1919)) for the existence of at least two types of copulae: a copula of identity (λx.λy[y=x]) and a copula of predication (see, among many others, Schlenker (2003); Heller (2005); Mikkelsen (2005); Comorovski (2007)).36 And there is overt evidence that the copula of predication and the copula of identity are distinct in Inuktitut: the copula of identity has a zero variant, while the copula of predication lacks one. Consider (46).

(46) a. Holda muutakaati-u-juk
        H. driver-be-PART.[–TR].3SG.ABS
        ‘Holda is a/the driver’

     b. Holda muutakaatik
        Holda driver
        ‘Holda is the driver’

The lack of a predicational reading for either of the nouns in a construction like (46b) has been discussed in Compton (2004) (and see also Woodbury (1985); Johns (1987)).37 It is thus only the presence of the copula of identity that could derive the specific reading of the incorporated noun in (41) and (46a). Both non-specific and specific (i.e., choice-functional) readings for nouns incorporated by the copula are correctly predicted to be available, though the INLC blocks copula-incorporated possessed nouns, as illustrated in (45). Ultimately, what distinguishes the copula from other verbs with internal arguments is simply how many internal arguments it makes available to serve as potential bearers of absolutive Case. The need to establish a ⟨φ,φ⟩ label at the C-T level makes it essential that there be an absolutive argument, which must be a DP, in order to provide a full set of φ features to the labelling algorithm.
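The type-theoretic division of labour between the two copulae can be sketched as follows; only the identity entry is given in the text, so the predicational entry below is our own schematic rendering:

```latex
% Copula of predication: takes a property and an individual;
% it therefore requires D-deletion in the incorporated nominal.
[\![\text{-}u_{\mathrm{pred}}]\!] \;=\; \lambda P_{\langle e,t\rangle}.\,\lambda x.\,P(x)

% Copula of identity (after Russell 1919): takes two individuals;
% it is therefore compatible with an intact D, whose choice function
% supplies the specific individual, yielding the equative reading
% of (41a)/(46a).
[\![\text{-}u_{\mathrm{ident}}]\!] \;=\; \lambda x.\,\lambda y.\,[y = x]
```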
With normal transitive verbs, the object must either raise (as a DP) to the C-T phase, in which case D must not delete, or remain low enough to be construed together with an antipassive verb or an incorporating verb stem. D plays two distinct roles—one syntactic and one semantic—which therefore line up in the association of structure with scopal semantics. With copulas, however, both options may be taken together, since one DP can raise and the other can be incorporated. Incorporation is therefore dissociated from scopal semantics in this specific context.

### 4.2 Locative incorporation

Locational expressions can be incorporated in Inuktitut, as in (47), as documented in Johns (2009), and in Sadock (1980) for Kalaallisut. Locational incorporation serves to specify either a locative state (47a) or the result of a motion event (47b, c), depending on the particular locational suffix employed. We assume the locational suffixes belong to the P category, given the range of meanings they express, and that they assign inherent Case.

(47) a. Illu-mi-i-juk
        house-LOC-be-PART.[–TR].3SG.ABS
        ‘She’s in a (specific or otherwise) house’

     b. Makkuuvi-mu-u-juuk
        Makkovik-DAT-be-PART.[–TR].3DU.ABS
        ‘Those two are going to Makkovik’

     c. Ottawa-kku-u-jung?
        Ottawa-VIA-be-Q.[–TR].3SG.ABS
        ‘Is (s)he going through Ottawa?’

When locational incorporation occurs, the resulting morphology is richer than in the strictly copular incorporation structures discussed above, although the copula still forms a part of the derived verb. Specifically, when a locational is incorporated, the verb stem includes a nominal root, followed by a locational suffix, which is followed in turn by the copula (and then by regular verbal inflections). Assuming the Mirror Principle (Baker 1985), such morphology indicates that the noun attaches to P before the N-P X0 structure raises to the copula.
As (47) suggests, incorporated locationals are scopally ambiguous, allowing both wide and narrow scope interpretations, and this is confirmed by the judgements provided by speakers, given the following scenario:

(48) Situation: You and I are standing outside a house where a party is occurring, and I ask you the following:

     a. Bertha illu-mi-i-jung?
        B.(ABS) house-LOC-be-Q.[–TR].3SG.ABS
        ‘Bertha in house?’

     Could you answer in the positive – (b) – if (i) Bertha is inside the house we are standing outside of?; (ii) Bertha is still at home, at her own house?; (iii) Bertha is simply in some house, somewhere, possibly unknown to me?

     b. Aa, Bertha illu-mi-i-juk
        Yes B.(ABS) house-LOC-be-PART.[–TR].3SG.ABS
        ‘Yes, Bertha in house’

(48b) was judged by all speakers to be compatible with each of the possible contexts given, though answering (b) was universally regarded as a “less helpful” response to the (ii) and (iii) scenarios, with one speaker commenting that she could truthfully answer with either Auka, angiqamiijuk (‘No, she is at home’) or Aa, angiqamiijuk (‘Yes, she is at home’). Another speaker, commenting on her ability to answer (48b) under scenario (iii) in (48a), said “Well, I’m not lying, but… still!”.38 This result follows directly from the principles already established. The first thing to note is that, given the presence of a copula in locational incorporation structures, the base structures used for locational incorporation must be parallel to those established for predicate incorporation. The base structure for (47a) should then be (49).

(49) [tree diagram]

As the sister to PP is a full nominal, containing a [PERSON] feature, it must Merge with the root to provide a ⟨φ,φ⟩ label for the complement of v, just like any other absolutive internal argument. This step produces (50), and solves the Case problem for the Theme argument in (49).
The Case requirements of the complement of P can be satisfied with no phrasal movement, since P assigns Case directly to its complement. (For our purposes, it is immaterial whether the Case assigned by P is structural or inherent.)

(50) [tree diagram]

The morphological requirements of the copula remain the same; it is an affix, which must be supported by incorporating a root. But the same is evidently true of locational Ps in Inuktitut, which cannot stand alone. So in (50), illu must incorporate into -mi, and illu-mi into -u, simply to ensure acceptable morphological structures. The most straightforward account of the availability of specific indefinite readings for locational-incorporated nouns would seemingly rely on the presence of the copula of identity, as was demonstrated above for the strictly copula-incorporated nouns. But we note immediately that this cannot be the case: the presence of the copula of identity could not derive a semantically interpretable structure in this construction, since there is no interpretive mechanism by which to combine its type with the type of the PP. The availability of the specific reading in the examples in (47) and (48), then, must be derived in a different manner. Consider, though, what we have posited as a fundamental characteristic of what we have been calling, following the general literature, the antipassive. It combines with an individual-selecting predicate to allow the resulting category to combine with something of type <e,t>. That is, it is simply a syntactic mediator, in the sense that it allows the grammar to output structures to the semantics that can give rise to narrowest-scope readings for indefinites, under the now familiar designation, after Van Geenhoven (1998), of semantic incorporation.
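In the same schematic notation used above, the generalised antipassive can be sketched as a type-shifter that applies to any individual-selecting predicate, verbal or prepositional; this is our extensional rendering in the spirit of (12) and of the entry quoted in note 18, with R and the locative predicate names as expository labels:

```latex
% Generalised antipassive: takes an individual-selecting predicate R
% and existentially binds the argument of which the property-type
% nominal holds.
[\![\mathrm{AP}]\!] \;=\; \lambda R_{\langle e,\langle s,t\rangle\rangle}.\,
   \lambda P_{\langle e,t\rangle}.\,\lambda e.\,\exists x\,[P(x) \wedge R(x)(e)]

% Attached to a locative P rather than to V, the same entry yields a
% narrow scope reading for a D-less noun, e.g. for (48b):
\lambda e.\,\exists x\,[\mathit{house}(x) \wedge \mathit{in}(e,x)]
```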
Syntactically transitive Vs, of course, can combine with the antipassive, and we could stipulate that those are the only category capable of combining with the morpheme, despite the fact that other individual-selecting predicates exist in the language — the set of syntactically transitive Ps, for example. But we see no reason for such a stipulation. That is, analogous to (51), where (51a) serves for (3) and (51b) for (2), there can exist (52), for (47a) and (48).

(51) a. [tree diagram]
     b. [tree diagram]

(52) a. [tree diagram]
     b. [tree diagram]

The interpretational possibilities indicated for the sentences in (47) and (48) demonstrate that locational incorporation may take place whether or not D-deletion occurs, a result at least partially predicted here, since the Theme argument is always available to provide a ⟨φ,φ⟩ label as the absolutive argument. If D-deletion does not occur, then the choice function provided by D ensures a wide scope interpretation. In that case, of course, incorporation must include an initial step in which the noun raises to D, before N-D raises to P:

(53) [tree diagram]

If D-deletion does take place, then the derivation will take a different path, but one which is familiar from both copula-incorporation and the canonical antipassive construction. Since the noun becomes the derived phase head in this case, it can raise directly to P, which later raises to the copula, as before. As with the canonical antipassive construction, it is the antipassive morpheme (12) that makes the indefinite here accessible to successful interpretation in the semantics, though in the case of locative incorporation structures, it is our generalised antipassive, attached to P, rather than V:

(54) [tree diagram]

Unlike the other incorporation structures examined, locational-incorporated nouns do have Case assigned to them. They are therefore not subject to the INLC, and they may bear number inflection. They may therefore specify the quantity of the Goal argument, and the Goal may be a possessed noun, as in (55).

(55) a. Hulda illu-ti-ni-i-juk.
        H.(ABS)
        house-1PL.PR/SG.PM-LOC-be-PART.[–TR].3SG.ABS
        ‘Hulda is in our house.’

     b. Illu-ngin-nu-u-jut.
        house-3PL.PR/PL.PM-DAT-be-PART.[–TR].3PL.ABS
        ‘They are going into their houses.’

### 4.3 Scope in full PPs

Incorporated locatives are not the only type of PP; independent PPs fill a familiar, wide range of functions in Inuktitut. The P itself is still affixal, however, and must therefore be attached to a nominal stem. And like incorporated locatives, independent P-bearing nouns (PPs) are scopally ambiguous, allowing either maximally wide scope or narrow scope (Wharram 2003). Thus, in (8), repeated as (56), nunaling-mi may be specific or non-specific.

(56) South Baffin Inuktitut
     Nunaling-mi nuna-qa-lauq-sima-nngit-tuq
     settlement-LOC land-have-PAST-PERF-NEG-PART.[–TR].3SG.ABS
     i. ‘There is a (certain) town that (s)he hasn’t lived in’
     ii. ‘(S)he has never lived in any town’

The same properties hold of demoted chomeur Agents in passive clauses, which we treat as PPs, like English agentive by-phrases. The so-called “ablative” Case-marking in this case must actually be an incorporating P -mit. Unlike ergative Agents, ablative Agents can freely have either narrow or wide scope readings.39

(57) Tuktu-it taku-jau-lau-nngit-tut angusukti-mit
     caribou-ABS.PL see-PASS-NEG-PART.[–TR].3PL.ABS hunter-ABL
     i. ‘There is a (certain) hunter, and certain caribou weren’t seen by him’
     ii. ‘Certain caribou weren’t seen by any hunter’

No new mechanisms are necessary to derive this very general result. Such PPs will be scopally ambiguous generally in our model because there are two ways in which the variable of which the nominal holds can be valued. Suppose the base structure for nunaling-mi in (56) to be (58).

(58) [tree diagram]

Incorporation and Case assignment will take place within this structure in exactly the same way as they do in the PP of a locative incorporation structure.
If D-deletion does not occur, then the choice-functional variable introduced in D is left free, its interpretation being contextually determined (and giving rise to an apparent widest-scope reading). If D-deletion does occur, then it must be the “antipassive” which is freely contained within P that introduces and existentially binds the variable of which the nominal holds, ensuring a narrow scope interpretation.

## 5 Conclusions

The major claim we have attempted to defend here is that a combination of syntactic and semantic factors ensures the distribution of low scope indefinites, under the general hypothesis that all Inuktitut nouns are born as properties and all Inuktitut nominal arguments start off as DP phases. For reasons of space, we do not confront here the far more ambitious question of how well the ideas developed here can be employed to better understand comparable data in other languages treated in this literature. For the Inuktitut case, we end up with a fairly pure grammatical illustration of the maxim “correlation does not imply causation”. Case properties of nouns do not directly influence their scopal character, and neither does incorporation; instead, the same general principles lead indirectly to a superficial correlation between Case and scopal interpretation. In this account, Case is assigned to satisfy purely syntactic requirements within a derivation, and scope readings simply reflect where and how the variable of which an indefinite holds is valued. They are simply independent parts of grammar.

## Notes

1 Unless otherwise noted, all examples come from fieldwork conducted with Southern Baffin Inuktitut speakers in Iqaluit and Inuttut speakers in Labrador, and we are especially grateful to Papatsi Kublu-Hill and Selma Jararuse for providing additional elucidations and insights into their languages.
2 As we show below, and as has been discussed in Johns (1999; 2007; 2009), the former does not universally hold of objects incorporated into all types of incorporating verbs, and the latter is subject to dialectal variation.

3 In most of the examples in this paper, the antipassive morphology is realised with null morphology. Positing null morphology, as we do here, appears to be a necessary component of any analysis seeking to characterise the different syntactic and semantic characteristics of antipassive and fully syntactically transitive verbs. In this respect, we align ourselves with Bittner (1994a), as well as with a large body of literature which discusses the cross-linguistically stable properties of antipassive constructions. (See Polinsky (2017) for an overall survey.)

4 As will become clear, when we use the term wide scope in this paper, we do so as a convenience. What we are really asserting in the relevant cases is what Kratzer (1998) refers to as pseudoscope effects.

5 While most variants of Inuktitut behave as described here, with obligatory narrow scope for antipassive objects, speakers of Labrador Inuttut often report semantic judgements following a different pattern. Johns (2006: orthography adapted) provides the examples in (i), which illustrate the difference.

(i) Labrador Inuttut, (a), and Rigolet Inuktut, (b)

    a. Margarita quinatsu-i-juk Ritsati-mik.
       Marguerite tickle-AP.3SG Richard-MOD
       ‘Marguerite is tickling Richard.’

    b. Nancy angka-li-mmat aklaa-gulak iksiva-juk qaksi-taa-gulang-mi, iksiva-ju qaksi-taa-gulang-mi Nancy-mik tautuk-tuk.
       Nancy(ABS) home-PROG-because.3SG black.bear-dear(ABS) sitting.3SG hillock-get-dear-LOC sitting-3SG hillock-get-dear-LOC Nancy-MOD look.at-3SG
       ‘… if Nancy was coming home, the young black bear would be sitting on a little hill, sitting on the little hill, watching Nancy’

In both of these examples, a proper noun functioning as an antipassive object appears with modalis Case, but the interpretation of the noun is not property-like. The modalis names simply refer to specific individuals, in the normal way for proper nouns. Johns takes such interpretations as evidence for ongoing diachronic change in eastern varieties of Inuktitut, where grammatical restructuring is giving rise to a new nominative/accusative system. In that case, these data do not contradict what we propose for the more conservative varieties.

6 Additionally, the sentence in (i), offered as a possible translation of the English “Northern curlews are extinct”, is not accepted by any speaker as anything other than markedly odd:

(i) South Baffin Inuktitut
    Aqqunaqsiu-t qamit-tut.
    northern.curlew-ABS.PL be.extinct-PART.[–TR].3PL.ABS

Consultants’ comments, such as “Which ones?”, clearly indicate that they wanted to interpret aqqunaqsiut as individual-denoting, having difficulty combining that with a kind-level interpretation for the predicate.

7 To the extent that a standard Roman orthography exists for the dialects of Inuktitut in the eastern Arctic, we adopt it here. For consistency in the presentation of the data, we make use of this orthography for Labrador Inuttut as well, although a different Roman orthography is more commonly used there.

8 See, for example, many of the papers in Borik & Gehrke (2015).

9 Baker (2014) attempts a more general synthesis of the pseudo-incorporation literature. This more ambitious program, along with his critique of previous work, is insightful, although his own proposal does not appear to suit the Inuktitut patterns particularly well.
10 Incorporation and modalis marking do not operate in free variation, because Inuktitut incorporation is only permitted with a restricted set of verbal roots, which also enforce incorporation.

11 We return to the discussion of proper names in Inuktitut in section 3.2.

12 An anonymous reviewer suggests a more nuanced version of this approach. Given the proposal (Johns & Kucerová 2017; Yuan 2018) that verbal agreement morphology in Inuktitut actually consists of pronominal clitics, one might suppose that wide scope for ergative and absolutive arguments comes from the properties of the associated pronouns. But this approach fails to generalise to all the cases we consider here, and particularly to the incorporated copular and prepositional objects discussed in section 4.

13 We note that our analysis here provides independent evidence for the argument in Compton & Pittman (2010) that D (and C) are the active phase heads (for word formation) in Inuktitut.

14 We are grateful to an anonymous reviewer, who, in expressing concern that the syntax proposed here now has the power to delete something that has an effect at LF, obliged us to clarify why this is not the case.

15 Two anonymous reviewers observe that these nominal structures seem inconsistent with models in which D and the noun are separated by other functional categories, such as Harley & Ritter’s (2002) Num. We appreciate the importance of cartographic studies which posit such extra content in the phrase marker, but the work presented here starts from phase-theoretic premises, and, as is often noted, phase theory and cartographic analysis have not yet been successfully reconciled (Shlonsky 2006; Roberts 2012; Branigan To appear). In the absence of such reconciliation, we adopt a working strategy of setting aside additional cartographic structure, for now.

16 This claim does not imply that antipassive morphology cannot also bear other semantic information at the same time.
Spreng (2012) argues that antipassive si- is a marker of imperfective viewpoint aspect. This may well be an accurate characterisation of a distinct feature of this morpheme; it does not contradict our hypothesis. In contrast, Bittner’s (1994b) treatment of antipassive quantificational semantics advances a specific type-shifting analysis of antipassive morphology, which is incompatible with the treatment offered here, and which is actually unnecessary under our account.

17The same semantic content is not necessarily associated with antipassive morphology in other languages. Polinsky (2017) notes, for example, that antipassive objects in Adyghe may take scope below or above the subject, which would be impossible if the Adyghe verb must bind the object.

18For expository purposes, we provide a purely extensional semantics here. Thus, t is the type of propositions and s is the type of events. The entry in (12a) is that given in Wharram (2003), a generalised version of the semantic incorporation process of Van Geenhoven (1998), which itself builds on observations in Bittner (1987; 1994a) and Bittner (1995). More recent work by Deal (2007; 2008) has demonstrated, correctly we believe, that a modalised revision of (12a) is required, in order to account for the observed facts of intensional verbs (Deal 2008: 97):

(i) λP.λQ.λe.λw.∀w′ ∈ intent(e) : ∃x[Q(x)(w′) ∧ P(x)(e)(w′)]

where, for (i), t is the type of truth values and w is the type of worlds. Again, solely for ease of exposition, we will hold to the entry in (12a).

19As just above, the lexical entry in (13a) is an extensional translation of Van Geenhoven’s (1998) proposal for Kalaallisut incorporating verbs, adapted to the properties of verbs posited in Kratzer (1996).

20See Wharram (2003) for further discussion.

21For those dialects and idiolects in which that-trace “violations” are accepted, Chomsky supposes that syntactic C-deletion need not eliminate phonetic features of C.
22See also Mauro (2018) for further consideration within the framework of Harley & Ritter (2002).

23This node would correspond to a RootP or VP, in prior theories which do not incorporate a late labelling algorithm.

24Of course, if there is no object, then the derivation of the verb phrase cannot require that the object provide a label. In that case, Chomsky supposes that the root must be allowed to serve as a functioning label once it is enriched by the head-movement to v.

25Word order is very free in Inuktitut, which we attribute to scrambling operations. As such, the linear order of subject, object, etc. does not provide immediate information about the syntactic positions where constituents originate, or to which they may raise for Case/agreement reasons.

26An anonymous reviewer questions whether the same results could not be obtained if nominals were simply allowed to originate as bare NPs, with no D present. Besides the evidence from agreement discussed already, such an approach would require that one posit very specific selectional constraints to ensure the right distribution of NP and DP. In effect, one would have to mirror the scope facts with arbitrary selectional constraints, which seems like a circular exercise. What is more, the contexts discussed below, where both wide and narrow scope readings are found, would demand a free distribution of either DP or NP, again on entirely arbitrary grounds.

27An anonymous reviewer asks if antipassive objects must be adjacent to the verb, which might be expected. Again, however, the optional application of scrambling operations seems to subvert any expectations in this area.

28Again, other languages may differ in how antipassive objects are Case-marked. Polinsky (2017) surveys a variety of proposals on this topic for distinct ergative languages.
29One consultant’s response to this sentence establishes the meaning which is forced upon the proper noun quite clearly: “Um, weird, unless I was telling her about seeing someone named Douglas yesterday. I guess that’d be OK, but when would I say that? I’d just say Tuglasi takulauqtara [‘I saw Douglas.’].”

30The latter scenario is not one that would be compatible with a similar sentence in other Inuktitut dialects (see Johns (1999) for discussion). The critical fact here is the unavailability of the narrow-scope reading for the indefinite atuagammik in (32a).

31As usual, head-movement appears not to alter what enters into interpretation at the semantic interface (Chomsky 2000). Since the variable of which the predicate nominal holds is both introduced and existentially bound within the incorporating verb itself, the scopal interpretation of the incorporated nominal is frozen in the verb’s merged position.

32The same explanation holds for singular possessed nouns, which also fail to incorporate:

(i) *Illu-nga-liu-vugut.
    house-3SG.PR/SG.PM-make-IND.[–TR].1SG.ABS
    ‘We are building her/his house.’

33For readability, we have translated a possible meaning of the incorporated noun here as ‘the teacher’, although it should more accurately be understood as ‘a specific teacher whose value is supplied from the utterance context’.

34We set aside specificational and identificational copular clauses here, though, in the relevant respects, they largely behave like the equative copular clauses in Inuktitut.

35We see no need here to consider the possibility that a nominal predicate need not bear Case in Inuktitut, as the analysis works out either way.

36Alternatively, one could suppose an approach where type-shifting operations manipulate the post-copular NP (Partee 1987) or the copula itself (Geist 2008), though the model of semantic composition which we adopt here denies these possibilities.
37An anonymous reader points out that modification of a predicate makes a predicational reading possible for clauses with a silent copula, though we note that this only obviously occurs in what Higgins (1979) refers to as identificational copular clauses. Unfortunately, our analysis offers no particular insight into this departure from the expected role of the copula of identity. But since our major concern is rather how copular semantics are implicated in some incorporation structure, developing an account of this observation is something which we defer to future research.

38This is predictably the case, as the context provided clearly favours a specific reading for the incorporated indefinite: there is a salient house right next to us. To our knowledge, non-specific readings for locational-incorporated nominals have not been discussed in the prior literature, so the point of this exercise was to investigate whether a non-specific interpretation is available even in a scenario where such a reading is highly disfavoured.

39One consultant comments on this data as follows: “He’s a bad hunter… Maybe [a] qallunaaq [a non-inuk].
Or maybe the caribou were very clever, [and] they were never seen by anyone.” ## Abbreviations 1 = first person, 3 = third person, 4 = fourth person, ABL = ablative, ABS = absolutive, AP = antipassive, COND = conditional, DAT = dative, DU = dual, ERG = ergative, GEN = genitive, HAB = habitual, IND = indicative, LOC = locative, MOD = modalis, NEG = negative, NPST = near past, PART = participial (mood), PASS = passive, PERF = perfective, PL = plural, PM = possessum, PR = possessor, PROG = progressive, PST = past, Q = interrogative, REL = relative, SG = singular, TR = transitive, VIA = vialis ## Acknowledgements We are indebted above all for their generous, patient and meticulous help to our language consultants, collaborators and teachers: Ellen Ford, Bertha Holeiter, Selma Jararuse, Alexina Kublu, Papatsi Kublu, Christine Nochasak, and Hulda Semigak. A quite different version of this material first saw the light of day at the LAGB conference held at the University of Salford in September, 2012. We are grateful to the participants, including Michelle Sheehan and Mark Baker, for comments and questions, which led eventually to this take on the matter. The revised approach was presented at the APLA 39 conference in November 2015, where again the response from the participants was instrumental in helping us to refine this analysis. One-on-one discussions with our generous expert friends and colleagues, and especially with Alana Johns, Julien Carrier and Ilia Nicoll, took us the rest of the way. Sadly, all errors can still be laid at our door. ## Funding Information For parts of this research, we acknowledge financial support from the International Grenfell Association to the second author. ## Competing Interests The authors have no competing interests to declare. ## References 1. Allen, Barbara J., Donna B. Gardiner & Donald G. Frantz. 1984. Noun incorporation in Southern Tiwa. International Journal of American Linguistics 50(3). 292–311. 
DOI: https://doi.org/10.1086/465837 2. Baker, Mark. 1985. The mirror principle and morphosyntactic explanation. Linguistic Inquiry 16. 373–461. 3. Baker, Mark. 1988. Incorporation: a theory of grammatical function changing. Chicago: University of Chicago Press. 4. Baker, Mark. 2014. Pseudo noun incorporation as covert noun incorporation: linearization and crosslinguistic variation. Language and Linguistics 15(1). 5–46. DOI: https://doi.org/10.1177/1606822X13506154 5. Baker, Mark, Roberto Aranovich & Lucía A. Gollusco. 2004. Two types of noun incorporation: noun incorporation in Mapudungun and its typological implications. Language 81. 138–176. DOI: https://doi.org/10.1353/lan.2005.0003 6. Beck, Sigrid. 1996. Wh-constructions and transparent logical form. Tübingen: Eberhard-Karls-Universität. 7. Belletti, Adriana. 1992. Agreement and case in past participle clauses in Italian. In Tim Stowell & Eric Wehrli (eds.), Syntax and Semantics, Volume 26: Syntax and the Lexicon, 21–44. Academic Press. 8. Benveniste, Émile. 1960. Être et Avoir dans leurs fonctions linguistiques. Bulletin de la Société Linguistique de Paris 55. 113–134. 9. Bittner, Maria. 1987. On the semantics of the Greenlandic antipassive and related constructions. International Journal of American Linguistics 53(2). 194–231. DOI: https://doi.org/10.1086/466053 10. Bittner, Maria. 1994a. Case, scope, and binding. Dordrecht: Kluwer. DOI: https://doi.org/10.1007/978-94-011-1412-7 11. Bittner, Maria. 1994b. Cross-linguistic semantics. Linguistics and Philosophy 17(1). 53–108. DOI: https://doi.org/10.1007/BF00985041 12. Bittner, Maria. 1995. Quantification in Eskimo: a challenge for compositional semantics. In Emmon Bach, Eloise Jelinek, Angelika Kratzer & Barbara H. Partee (eds.), Quantification in natural languages (Studies in Linguistics and Philosophy), 59–80. Dordrecht: Kluwer. DOI: https://doi.org/10.1007/978-94-017-2817-1_4 13. Bittner, Maria & Ken Hale. 1996. 
The structural determination of case and agreement. Linguistic Inquiry 27. 531–601. 14. Bobaljik, Jonathan & Phil Branigan. 2006. Eccentric agreement and multiple case-checking. In Alana Johns, Diane Massam & Juvénal Ndayiragije (eds.), Ergativity, 47–77. Dordrecht: Springer. DOI: https://doi.org/10.1007/1-4020-4188-8_3 15. Borik, Olga & Berit Gehrke (eds.). 2015. The syntax and semantics of pseudoincorporation. Leiden: Brill. DOI: https://doi.org/10.1163/9789004291089 16. Bošković, Željko. 2008. What will you have, DP or NP? In Emily Elfner & Martin Walkow (eds.), Proceedings of NELS 37, 101–114. 17. Branigan, Phil. To appear. Multiple feature inheritance and the phase structure of the left periphery. In Sam Wolfe & Rebecca Woods (eds.), Rethinking V2. Oxford University Press. 18. Carlson, Greg. 1977. Reference to kinds in English. University of Massachusetts, Amherst. 19. Carlson, Greg. 2006. The meaningful bounds of incorporation. In Svetlana Vogeleer & Liliane Tasmowski (eds.), Non-definiteness and Plurality, 35–50. Amsterdam: John Benjamins. DOI: https://doi.org/10.1075/la.95.03car 20. Chierchia, Gennaro. 1998. Reference to kinds across language. Natural Language Semantics 6(4). 339–405. DOI: https://doi.org/10.1023/A:1008324218506 21. Chomsky, Noam. 2000. Minimalist inquiries: the framework. In Roger Martin, David Michaels & Juan Uriagereka (eds.), Step by step: essays on minimalist syntax in honor of Howard Lasnik, 89–155. MIT Press. 22. Chomsky, Noam. 2001. Derivation by phase. In Michael Kenstowicz (ed.), Ken Hale: a life in language, 1–52. Cambridge, Mass.: MIT Press. 23. Chomsky, Noam. 2008. On phases. In Robert Freidin, Carlos P. Otero & Maria Luisa Zubizarreta (eds.), Foundational issues in linguistic theory: essays in honor of Jean-Roger Vergnaud, 134–166. Cambridge, MA: MIT Press. DOI: https://doi.org/10.7551/mitpress/9780262062787.003.0007 24. Chomsky, Noam. 2013. Problems of projection. Lingua 130. 33–49.
DOI: https://doi.org/10.1016/j.lingua.2012.12.003 25. Chomsky, Noam. 2015. Problems of projection: extensions. In Elisa Di Domenico, Cornelia Hamann & Simona Matteini (eds.), Structures, strategies and beyond: studies in honour of Adriana Belletti, 1–6. Amsterdam and Philadelphia: Benjamins. DOI: https://doi.org/10.1075/la.223.01cho 26. Chung, Sandra & William Ladusaw. 2003. Restriction and saturation. Cambridge, Mass.: MIT Press. DOI: https://doi.org/10.7551/mitpress/5927.001.0001 27. Comorovski, Ileana. 2007. Constituent questions and the copula of specification. In Ileana Comorovski, Klaus von Heusinger, Gennaro Chierchia, Kai von Fintel & F. Jeffrey Pelletier (eds.), Existence: semantics and syntax (Studies in Linguistics and Philosophy) 84. 49–77. Dordrecht: Springer. DOI: https://doi.org/10.1007/978-1-4020-6197-4_2 28. Compton, Richard. 2004. On quantifiers and bare nouns in Inuktitut. Toronto Working Papers in Linguistics 23. 1–45. 29. Compton, Richard & Christine Pittman. 2010. Word-formation by phase in Inuit. Lingua 120(9). 2167–2192. DOI: https://doi.org/10.1016/j.lingua.2010.03.012 30. Coon, Jessica & Omer Preminger. 2011. Towards a unified account of person splits. In Proceedings of the 29th West Coast Conference on Formal Linguistics (WCCFL 29), 310–318. 31. Dayal, Veneeta. 1999. Bare NP's, reference to kinds, and incorporation. In Tanya Matthews & Devon Strolovitch (eds.), Proceedings of SALT 9, 35–51. DOI: https://doi.org/10.3765/salt.v9i0.2816 32. Dayal, Veneeta. 2004. Number marking and (in)definiteness in kind terms. Linguistics and Philosophy 27(4). 393–450. DOI: https://doi.org/10.1023/B:LING.0000024420.80324.67 33. Dayal, Veneeta. 2011. Hindi pseudo-incorporation. Natural Language and Linguistic Theory 29(1). 123–167. DOI: https://doi.org/10.1007/s11049-011-9118-4 34. Deal, Amy Rose. 2007. Antipassive and indefinite objects in Nez Perce.
In Amy Rose Deal (ed.), Proceedings of the 4th Conference on the Semantics of Underrepresented Languages in the Americas (SULA 4), 35–47. UMass (Amherst) Occasional Papers in Linguistics. 35. Deal, Amy Rose. 2008. Property-type objects and modal embedding. In A. Grønn (ed.), Proceedings of SuB12, 92–106. 36. Deal, Amy Rose. 2013. Possessor raising. Linguistic Inquiry 44(3). 391–432. DOI: https://doi.org/10.1162/LING_a_00133 37. Diesing, Molly. 1992. Indefinites. Cambridge, Mass.: MIT Press. 38. Espinal, M. Teresa & Louise McNally. 2011. Bare nominals and incorporating verbs in Spanish and Catalan. Journal of Linguistics 47. 87–128. DOI: https://doi.org/10.1017/S0022226710000228 39. Fodor, Janet & Ivan Sag. 1982. Referential and quantificational indefinites. Linguistics and Philosophy 5. 355–398. DOI: https://doi.org/10.1007/BF00351459 40. Geist, Ljudmila. 2008. Predication and equation in copular sentences: Russian vs. English. In Ileana Comorovski & Klaus von Heusinger (eds.), Existence: semantics and syntax (Studies in Linguistics and Philosophy), 79–105. Dordrecht: Springer. DOI: https://doi.org/10.1007/978-1-4020-6197-4_3 41. Gillon, Carrie. 2006. The semantics of determiners: domain restriction in Swxwú7mesh. University of British Columbia. 42. Gillon, Carrie. 2011. Bare nouns in Innu-aimun: what can semantics tell us about syntax. In Alexis Black & Megan Louis (eds.), Proceedings of WSCLA 16. UBC Working Papers in Linguistics, 29–56. 43. Gillon, Carrie. 2012. Evidence for mass and count in Inuttut. Linguistic Variation 12(2). 211–246. DOI: https://doi.org/10.1075/lv.12.2.03gil 44. Harley, Heidi & Elizabeth Ritter. 2002. Person and number in pronouns: a feature geometric analysis. Language 78(3). 482–526. DOI: https://doi.org/10.1353/lan.2002.0158 45. Heim, Irene. 1982. The semantics of definite and indefinite noun phrases. University of Massachusetts, Amherst. 46. Heim, Irene. 2011. Definiteness and indefiniteness. 
In Klaus von Heusinger, Claudia Maienborn & Paul Portner (eds.), Semantics: An international handbook of natural language meaning 2. 996–1025. Berlin: De Gruyter. 47. Heim, Irene & Angelika Kratzer. 1998. Semantics in generative grammar. Oxford: Blackwell. 48. Heller, Daphna. 2005. Identity and information: semantic and pragmatic aspects of specificational sentences. Rutgers University. 49. Higgins, Francis R. 1979. The pseudo-cleft construction in English. New York, NY: Routledge. 50. Johns, Alana. 1987. Transitivity and grammatical relations in Inuktitut. University of Ottawa PhD dissertation. 51. Johns, Alana. 1999. The decline of ergativity in Labrador Inuttut. In Leora Bar-El, Rose-Marie Déchaine & Charlotte Reinholtz (eds.), Papers from the Workshop on Structure and Constituency in Native American Languages 17. 73–90. MIT Occasional Papers in Linguistics. 52. Johns, Alana. 2006. Ergativity and change in Inuktitut. In Alana Johns, Diane Massam & Juvénal Ndayiragije (eds.), Ergativity, 293–311. Dordrecht: Springer. DOI: https://doi.org/10.1007/1-4020-4188-8_12 53. Johns, Alana. 2007. Restricting noun incorporation: Root movement. Natural Language and Linguistic Theory 25. 535–576. DOI: https://doi.org/10.1007/s11049-007-9021-1 54. Johns, Alana. 2009. Additional facts about noun incorporation (in Inuktitut). Lingua 119(2). 185–198. DOI: https://doi.org/10.1016/j.lingua.2007.10.009 55. Johns, Alana & Ivona Kucerová. 2017. Towards an information structure analysis of ergative patterning in the Inuit language. In Jessica Coon, Diane Massam & Lisa deMena Travis (eds.), The Oxford handbook of ergativity, 397–419. Oxford University Press. 56. Kamp, Hans. 1981. A theory of truth and semantic representation. In Jeroen Groenendijk, Theo M. V. Janssen & Martin Stokhof (eds.), Formal methods in the study of language, 277–322. Amsterdam: Mathematisch Centrum. 57. Kayne, Richard. 2000. Parameters and universals. Oxford: Oxford University Press. 58. Kiparsky, Paul. 1998.
Partitive case and aspect. In The projection of arguments: lexical and compositional factors, 265–307. Stanford: CSLI. 59. Kratzer, Angelika. 1996. Severing the external argument from its verb. In Johan Rooryck & Laurie Zaring (eds.), Phrase structure and the lexicon, 109–137. Dordrecht: Kluwer. DOI: https://doi.org/10.1007/978-94-015-8617-7_5 60. Kratzer, Angelika. 1998. Scope or pseudoscope? Are there wide-scope indefinites? In Susan Rothstein (ed.), Events in grammar, 163–196. Dordrecht: Kluwer. DOI: https://doi.org/10.1007/978-94-011-3969-4_8 61. Kratzer, Angelika. 2003. A note on choice functions in context. Ms., University of Massachusetts, Amherst. 62. Lasnik, Howard. 1992. Case and expletives: notes towards a parametric account. Linguistic Inquiry 23(3). 162–189. DOI: https://doi.org/10.1007/978-94-009-0135-3_8 63. Longobardi, Giuseppe. 1994. Reference and proper names. Linguistic Inquiry 25. 609–665. 64. Luraghi, Silvia. 2003. On the meaning of prepositions and cases: The expression of semantic roles in Ancient Greek 67. John Benjamins Publishing. DOI: https://doi.org/10.1075/slcs.67 65. Martí, Luisa. 2015. Grammar versus pragmatics: carving nature at the joints. Mind & Language 30(4). 437–473. DOI: https://doi.org/10.1111/mila.12086 66. Massam, Diane. 2009. Noun incorporation: essentials and extensions. Language and Linguistics Compass 3. 1076–1096. DOI: https://doi.org/10.1111/j.1749-818X.2009.00140.x 67. Matthewson, Lisa. 1998. Determiner systems and quantificational strategies: evidence from Salish. The Hague: Holland Academic Graphics. 68. Matthewson, Lisa. 1999. On the interpretation of wide-scope indefinites. Natural Language Semantics 7(1). 79–134. DOI: https://doi.org/10.1023/A:1008376601708 69. Mauro, Christophe. 2018. Le statut de la troisième personne en inuktitut. Université du Québec à Montréal. https://archipel.uqam.ca/11693/ (5 , 2019). 70. Mikkelsen, Line. 2005. Copular clauses.
Specification, predication and equation (Linguistik Aktuell = Linguistics Today 85). Amsterdam: John Benjamins. DOI: https://doi.org/10.1075/la.85 71. Moro, Andrea. 1997. The raising of predicates, predicative nouns and the theory of clause structure. Cambridge: Cambridge University Press. DOI: https://doi.org/10.1017/CBO9780511519956 72. Moro, Andrea. 2000. Dynamic antisymmetry (Linguistic Inquiry Monographs 38). MIT Press. 73. Nevins, Andrew. 2007. The representation of third person and its consequences for person-case effects. Natural Language & Linguistic Theory 25(2). 273–313. DOI: https://doi.org/10.1007/s11049-006-9017-2 74. Oxford, Will & Nicholas Welch. 2015. Inanimacy as personlessness. Ms., University of Toronto. 75. Partee, Barbara. 1987. Noun phrase interpretation and type-shifting principles. In Jeroen Groenendijk, Dick de Jongh & Martin Stokhof (eds.), Studies in discourse representation theory and the theory of generalized quantifiers, 115–144. Dordrecht: Foris. 76. Polinsky, Maria. 2017. Antipassive. In Jessica Coon, Diane Massam & Lisa deMena Travis (eds.), The Oxford handbook of ergativity, 308–331. Oxford: Oxford University Press. DOI: https://doi.org/10.1093/oxfordhb/9780198739371.013.13 77. Reinhart, Tanya. 1997. Quantifier scope: How labor is divided between QR and choice functions. Linguistics and Philosophy 20(4). 335–397. DOI: https://doi.org/10.1023/A:1005349801431 78. Reinhart, Tanya. 2006. Interface strategies. OTS working papers in linguistics, University of Utrecht. 79. Richards, Marc D. 2007. On feature inheritance: an argument from the phase impenetrability condition. Linguistic Inquiry 38. 563–572. DOI: https://doi.org/10.1162/ling.2007.38.3.563 80. Roberts, Ian. 2012. Phases, head movement and second-position effects. In Ángel J. Gallego (ed.), Phases developing the framework, 385–440. De Gruyter Mouton. 81. Russell, B. 1919. Introduction to mathematical philosophy. London: Allen & Unwin. 82. Sadock, Jerrold M. 1980.
Noun incorporation in Greenlandic: a case of syntactic word formation. Language 56(2). 300–300. DOI: https://doi.org/10.2307/413758 83. Schlenker, Philippe. 2003. Clausal equations (a note on the connectivity problem). Natural Language & Linguistic Theory 21(1). 157–214. DOI: https://doi.org/10.1023/A:1021843427276 84. Shlonsky, Ur. 2006. Extended projection and CP cartography. Nouveaux cahiers de linguistique française 27. 87–93. 85. Spreng, Bettina. 2012. Viewpoint aspect in Inuktitut: The syntax and semantics of antipassives. University of Toronto. 86. von Stechow, Arnim. 1996. Against LF pied-piping. Natural Language Semantics 4. 57–110. DOI: https://doi.org/10.1007/BF00263537 87. Stechow, Arnim von. 2000a. Partial wh-movement, scope marking, and transparent logical form. In Uli Lutz, Gereon Müller & Arnim von Stechow (eds.), Wh-scope marking, 447–478. Amsterdam: John Benjamins. DOI: https://doi.org/10.1075/la.37.15ste 88. Stechow, Arnim von. 2000b. Some remarks on choice functions and LF-movement. In Klaus von Heusinger & Urs Egli (eds.), Reference and anaphoric relations, 193–228. Dordrecht: Kluwer. DOI: https://doi.org/10.1007/978-94-011-3947-2_11 89. Stowell, Tim. 1991. Determiners in NP and DP. In Katherine Leffel & Denis Bouchard (eds.), Views on phrase structure, 37–56. Dordrecht: Kluwer. DOI: https://doi.org/10.1007/978-94-011-3196-4_3 90. Van Geenhoven, Veerle. 1998. Semantic incorporation and indefinite descriptions: semantic and syntactic aspects of West Greenlandic noun incorporation. Stanford: CSLI. 91. Van Geenhoven, Veerle. 2000. Pro properties, contra generalized kinds. In Proceedings of SALT 10, 221–238. DOI: https://doi.org/10.3765/salt.v10i0.3112 92. Van Geenhoven, Veerle. 2002. Raised possessors and noun incorporation in West Greenlandic. Natural Language & Linguistic Theory 20(4). 759–821. DOI: https://doi.org/10.1023/A:1020481806619 93. Van Geenhoven, Veerle & Louise McNally. 2005. On the property analysis of opaque complements. 
Lingua 115(6). 885–914. DOI: https://doi.org/10.1016/j.lingua.2004.01.012 94. Wharram, Douglas. 2003. On the interpretation of (un)certain indefinites in Inuktitut and related languages. University of Connecticut. 95. Winter, Yoad. 1997. Choice functions and the scopal semantics of indefinites. Linguistics and Philosophy 20(4). 399–467. DOI: https://doi.org/10.1023/A:1005354323136 96. Woodbury, Anthony C. 1985. Noun phrase, nominal sentence and clause in central Alaskan Yupik Eskimo. In Johanna Nichols & Anthony C. Woodbury (eds.), Grammar inside and outside the clause: some approaches to theory from the field, 61–88. Cambridge: Cambridge University Press. 97. Woolford, Ellen. 2015. Ergativity and transitivity. Linguistic Inquiry 46(3). 489–531. DOI: https://doi.org/10.1162/LING_a_00190 98. Yuan, Michelle. 2018. Dimensions of ergativity in Inuit: theory and microvariation. M.I.T. PhD Thesis.
### Topic: calculate concentrations by a second order reaction in a batch reactor

#### niiels93

« on: October 11, 2018, 10:03:55 AM »

Hello all,

I'm trying to determine the concentrations of B and C for a second-order reaction in a batch reactor. The reaction is A + B --> C, where A0 = 5 mol/l and B0 = 3 mol/l. I can't find the formulas that describe the concentrations of A and B against time. Can someone help me out? Thanks!

#### mjc123

« Reply #1 on: October 11, 2018, 11:25:02 AM »

Can you solve the differential equation?
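Since the thread stops at mjc123's hint, here is a sketch of how the OP's numbers could be worked through (not from the thread itself). The batch-reactor balance for A + B → C with an elementary second-order rate law is dC_A/dt = −k·C_A·C_B, with C_B tied to C_A by stoichiometry (C_A − C_B = C_A0 − C_B0). The rate constant k = 0.1 l/(mol·s) and the time t = 2 s below are made-up placeholder values; the closed-form check ln(C_A·C_B0 / (C_B·C_A0)) = (C_A0 − C_B0)·k·t is the standard textbook result for unequal initial concentrations.

```python
import math

def batch_second_order(ca0, cb0, k, t, steps=10000):
    """Concentrations of A, B, C after time t in a batch reactor,
    for A + B -> C with rate r = k*CA*CB, integrated by classical RK4."""
    delta = ca0 - cb0              # stoichiometric invariant: CA - CB
    def dca_dt(ca):
        return -k * ca * (ca - delta)   # CB = CA - delta at all times
    h = t / steps
    ca = ca0
    for _ in range(steps):
        k1 = dca_dt(ca)
        k2 = dca_dt(ca + 0.5 * h * k1)
        k3 = dca_dt(ca + 0.5 * h * k2)
        k4 = dca_dt(ca + h * k3)
        ca += (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    cb = ca - delta
    cc = ca0 - ca                  # one mole of C per mole of A consumed
    return ca, cb, cc

ca, cb, cc = batch_second_order(5.0, 3.0, k=0.1, t=2.0)

# Analytical check for CA0 != CB0:
# ln(CA*CB0 / (CB*CA0)) = (CA0 - CB0)*k*t
lhs = math.log(ca * 3.0 / (cb * 5.0))
rhs = (5.0 - 3.0) * 0.1 * 2.0
```

With the OP's initial concentrations and the arbitrary k·t above, this gives C_A ≈ 3.35, C_B ≈ 1.35 and C_C ≈ 1.65 mol/l. Note that for the equal-concentration case (C_A0 = C_B0) the integrated form degenerates to 1/C_A − 1/C_A0 = k·t instead.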
# Tag Info

2

Type Ia supernovae are not common; they are rare events, happening maybe once per 100 years in a galaxy. Nevertheless they have two properties that make them fantastically useful for distance measurement. They are "standard candles". The physics of the supernova detonation, thought to be when a white dwarf accretes matter and exceeds the Chandrasekhar ...

2

The Chandrasekhar limit in general does not pertain to the mass of the star as a whole. It addresses the mass of the degenerate core. It's only in white dwarfs where the Chandrasekhar limit applies to the mass of the white dwarf as a whole, but that's because white dwarfs are almost entirely degenerate matter. Consider a 1.6 solar mass star that is not a member of a ...

2

The Chandrasekhar limit applies only to white dwarfs. Stars on the main sequence (or even off the main sequence) can easily surpass it, but if a white dwarf's mass is greater than the Chandrasekhar limit ($1.39 M_{\odot}$), it will undergo some sort of collapse. First, though, in response to "Or has it already undergone supernova explosion?": White ...
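The first answer's "standard candle" logic can be made concrete with the distance modulus m − M = 5·log10(d / 10 pc). Assuming the commonly quoted peak absolute magnitude of roughly M ≈ −19.3 for Type Ia supernovae (a value not given in the answers above), a measured apparent magnitude converts to a distance in one line — a sketch with illustrative magnitudes:

```python
def distance_parsecs(m_apparent, m_absolute=-19.3):
    """Invert the distance modulus: m - M = 5*log10(d / 10 pc)."""
    return 10 ** ((m_apparent - m_absolute + 5) / 5)

# At 10 pc an object appears at exactly its absolute magnitude:
print(distance_parsecs(-19.3))   # 10.0

# An SN Ia peaking at apparent magnitude 15.7 (distance modulus 35)
# lies at about 10**8 pc = 100 Mpc:
d = distance_parsecs(15.7)
```

This is why a single well-observed light curve suffices: the peak luminosity is (after standardisation) essentially known in advance, so the observed brightness alone fixes the distance.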
# Show x^2 is continuous.

1. Jul 14, 2010

### Daveyboy

1. The problem statement, all variables and given/known data

Show x^2 is continuous, on all reals, using a delta/epsilon argument.

Let E>0. I want to find a D s.t. whenever d(x,y)<D, d(f(x),f(y))<E. WLOG let x>y. Then |x^2-y^2| = x^2-y^2 = (x-y)(x+y) < D(x+y). I am trying to bound x+y, but can't figure out how.

2. Jul 14, 2010

### Hurkyl

Staff Emeritus

One problem is that you have the epsilon-delta statement wrong. What you wrote is the definition of "uniformly continuous", and squaring is not uniformly continuous. Uniform continuity requires a delta that works for all values of y. Continuity only requires that, for each value of y, there exists a delta. If it helps, you should think of the delta you are choosing as a function of y.

3. Jul 14, 2010

### Tobias Funke

Maybe writing x+y = x-y+2y will help?

4. Jul 15, 2010

### HallsofIvy

You need to show: given any real number a, then for any $\epsilon> 0$, there exists $\delta> 0$ such that if $|x- a|< \delta$ then $|x^2- a^2|< \epsilon$. You might start by factoring $|x^2- a^2|= |x-a||x+ a|$. Now, if x is close to a, so x-a is close to 0, how large can x+a be?

5. Jul 16, 2010

### Daveyboy

Then x+a approaches 2a... I feel like I should just take delta = sqrt(epsilon), and I'm fairly confident any delta less than that will suffice. I do not really understand how to show that though.

6. Jul 17, 2010

### HallsofIvy

Saying x+a "approaches" 2a is not enough. If $|x- a|< \delta$ then $-\delta< x- a< \delta$ so, adding 2a to each part, $2a- \delta< x+ a< 2a+ \delta$. Now you can say $|(x-a)(x+a)|= |x-a||x+a|< (a+\delta)|x- a|$.

7. Jul 17, 2010

### Staff: Mentor

Tip: Make sure that LaTeX expressions start and end with the same type of tag. [ tex] ... [ /tex] and [ itex] ... [ /itex]. You sometimes start the expression with [ itex] and end with [ /math].
I'm not sure that this Web site can render [ math] ... [ /math] expressions, but I am sure that you can't mix them. 8. Jul 17, 2010 ### HallsofIvy Thanks, Mark44. I just need to learn to check my responses before going on! 9. Jul 17, 2010 ### Staff: Mentor That's advice I'm trying to give myself, too. 10. Jul 17, 2010 ### Telemachus How do this demonstrate the continuity? I mean, what you're saying is the same than $$\lim_{x \to c}{f(x)} = f(c)$$, but in delta epsilon notation? I don't get it :P 11. Jul 18, 2010 ### Daveyboy shouldn't we conclude that $|(x-a)(x+a)|= |x-a||x+a|< (2a+\delta)|x- a|$ ? I still want to clearly define delta as a function of x, epsilon, and a (even though a is a constant) I see the trick that was used to bound |x^2-a^2| and that was neat, but now I am confused. If I solved $(2a+\delta)|x- a|<\epsilon$ for delta would that give me a correctly bounded delta? 12. Jul 18, 2010 ### Hurkyl Staff Emeritus I think you have a typo there.... No, you want it as a function of epsilon and a. (Check your quantifiers -- delta is "chosen" before x enters the picture)
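The thread stops short of writing a final choice of delta. One standard choice consistent with the bound in post #6 is δ = min(1, ε/(2|a|+1)); the short numerical spot-check below is my addition, not from the thread:

```python
import random

def delta_for(a, eps):
    # If |x - a| < delta <= 1 then |x + a| <= |x - a| + 2|a| < 2|a| + 1,
    # so |x^2 - a^2| = |x - a| * |x + a| < delta * (2|a| + 1) <= eps.
    return min(1.0, eps / (2 * abs(a) + 1))

random.seed(0)
ok = True
for _ in range(10_000):
    a = random.uniform(-100, 100)
    eps = random.uniform(1e-6, 10.0)
    d = delta_for(a, eps)
    x = a + 0.999 * random.uniform(-d, d)   # any x with |x - a| < delta
    ok = ok and abs(x * x - a * a) < eps
```

The check samples random centers a and tolerances ε, and confirms that every x within δ of a lands within ε of a² after squaring.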
https://spec.oneapi.io/versions/latest/elements/oneMKL/source/domains/rng/mkl-rng-wichmann_hill.html
# wichmann_hill

The wichmann_hill engine is the set of 273 Wichmann-Hill combined multiplicative congruential generators from the NAG Numerical Libraries [NAG].

Description

The set of 273 different basic pseudorandom number generators wichmann_hill is the second basic generator in the NAG libraries.

Generation algorithm

$$x_n=a_{1, j} x_{n-1} \bmod m_{1, j}$$

$$y_n = a_{2, j} y_{n-1} \bmod m_{2, j}$$

$$z_n = a_{3, j} z_{n-1} \bmod m_{3, j}$$

$$w_n = a_{4, j} w_{n-1} \bmod m_{4, j}$$

$$u_n = (x_n / m_{1, j} + y_n / m_{2, j} + z_n / m_{3, j} + w_n / m_{4, j}) \bmod 1$$

The constants $$a_{i, j}$$ range from 112 to 127, and the constants $$m_{i, j}$$ are prime numbers ranging from 16718909 to 16776917, close to $$2 ^ {24}$$.

## class wichmann_hill

Syntax

```cpp
namespace oneapi::mkl::rng {
class wichmann_hill {
public:
    static constexpr std::uint32_t default_seed = 1;

    wichmann_hill(sycl::queue queue, std::uint32_t seed = default_seed);
    wichmann_hill(sycl::queue queue, std::uint32_t seed, std::uint32_t engine_idx);
    wichmann_hill(sycl::queue queue, std::initializer_list<std::uint32_t> seed);
    wichmann_hill(sycl::queue queue, std::initializer_list<std::uint32_t> seed, std::uint32_t engine_idx);

    wichmann_hill(const wichmann_hill& other);
    wichmann_hill(wichmann_hill&& other);
    wichmann_hill& operator=(const wichmann_hill& other);
    wichmann_hill& operator=(wichmann_hill&& other);

    ~wichmann_hill();
};
}
```

Class Members

- `wichmann_hill(sycl::queue queue, std::uint32_t seed = default_seed)`: constructor for common seed initialization of the engine (in this case multiple generators of the set are used)
- `wichmann_hill(sycl::queue queue, std::uint32_t seed, std::uint32_t engine_idx)`: constructor for common seed initialization of the engine (in this case a single generator of the set is used)
- `wichmann_hill(sycl::queue queue, std::initializer_list<std::uint32_t> seed)`: constructor for extended seed initialization of the engine (in this case multiple generators of the set are used)
- `wichmann_hill(sycl::queue queue, std::initializer_list<std::uint32_t> seed, std::uint32_t engine_idx)`: constructor for extended seed initialization of the engine (in this case a single generator of the set is used)
- `wichmann_hill(const wichmann_hill& other)`: copy constructor
- `wichmann_hill(wichmann_hill&& other)`: move constructor
- `wichmann_hill& operator=(const wichmann_hill& other)`: copy assignment operator
- `wichmann_hill& operator=(wichmann_hill&& other)`: move assignment operator

Constructors

`wichmann_hill::wichmann_hill(sycl::queue queue, std::uint32_t seed = default_seed)`

Input Parameters

queue: Valid sycl::queue object; calls of the oneapi::mkl::rng::generate() routine submit kernels in this queue to obtain random numbers from a given engine.

seed: The initial conditions of the generator state. Assume $$x_0=seed \bmod m_1, y_0 = z_0 = w_0 = 1$$. If $$x_0 = 0$$, assume $$x_0 = 1$$.

`wichmann_hill::wichmann_hill(sycl::queue queue, std::uint32_t seed, std::uint32_t engine_idx)`

Input Parameters

queue: Valid sycl::queue object; calls of the oneapi::mkl::rng::generate() routine submit kernels in this queue to obtain random numbers from a given engine.

seed: The initial conditions of the generator state. Assume $$x_0=seed \bmod m_1, y_0 = z_0 = w_0 = 1$$. If $$x_0 = 0$$, assume $$x_0 = 1$$.

engine_idx: The index of the set, 1, …, 273.

Throws

oneapi::mkl::invalid_argument: exception is thrown when $$engine\_idx > 273$$.

`wichmann_hill::wichmann_hill(sycl::queue queue, std::initializer_list<std::uint32_t> seed)`

Input Parameters

queue: Valid sycl::queue object; calls of the oneapi::mkl::rng::generate() routine submit kernels in this queue to obtain random numbers from a given engine.

seed: The initial conditions of the generator state, where $$n$$ is the number of seed values supplied; assume:

if $$n = 0: x_{0} = y_{0} = z_{0} = w_{0} = 1$$

if $$n = 1: x_{0} = seed[0] \bmod m_1, y_{0} = z_{0} = w_{0} = 1$$. If $$x_0 = 0$$, assume $$x_0 = 1$$.

if $$n = 2: x_{0} = seed[0] \bmod m_1, y_{0} = seed[1] \bmod m_2, z_{0} = w_{0} = 1$$.

if $$n = 3: x_{0} = seed[0] \bmod m_1, y_{0} = seed[1] \bmod m_2, z_{0} = seed[2] \bmod m_3, w_{0} = 1$$.

if $$n \geqslant 4: x_{0} = seed[0] \bmod m_1, y_{0} = seed[1] \bmod m_2, z_{0} = seed[2] \bmod m_3, w_{0} = seed[3] \bmod m_4$$.

`wichmann_hill::wichmann_hill(sycl::queue queue, std::initializer_list<std::uint32_t> seed, std::uint32_t engine_idx)`

Input Parameters

queue: Valid sycl::queue object; calls of the oneapi::mkl::rng::generate() routine submit kernels in this queue to obtain random numbers from a given engine.

seed: The initial conditions of the generator state, interpreted exactly as for the extended seed constructor above.

engine_idx: The index of the set, 1, …, 273.

`wichmann_hill::wichmann_hill(const wichmann_hill& other)`

Input Parameters

other: Valid wichmann_hill object. The queue and state of the other engine are copied and applied to the current engine.

`wichmann_hill::wichmann_hill(wichmann_hill&& other)`

Input Parameters

other: Valid wichmann_hill object. The queue and state of the other engine are moved to the current engine.

`wichmann_hill& wichmann_hill::operator=(const wichmann_hill& other)`

Input Parameters

other: Valid wichmann_hill object. The queue and state of the other engine are copied and applied to the current engine.

`wichmann_hill& wichmann_hill::operator=(wichmann_hill&& other)`

Input Parameters

other: Valid wichmann_hill r-value object. The queue and state of the other engine are moved to the current engine.

Parent topic: Engines (Basic Random Number Generators)
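The recurrences above are easy to prototype. The sketch below is a toy in Python, not the oneMKL engine: the multipliers are arbitrary values in the documented range [112, 127], and the moduli simply reuse the two endpoint primes quoted in the text rather than any actual NAG parameter set.

```python
class ToyWichmannHill:
    """Toy 4-stream combined multiplicative congruential generator following
    the documented recurrences. Parameters are illustrative stand-ins, NOT a
    real NAG j-th set: a_i are arbitrary values in [112, 127], and the moduli
    reuse the two documented endpoint primes (so no stream can reach zero)."""

    def __init__(self, seed=1):
        self.m = (16718909, 16776917, 16718909, 16776917)
        self.a = (113, 115, 119, 123)
        # x0 = seed mod m1 (forced to 1 if zero), y0 = z0 = w0 = 1, as documented
        x0 = seed % self.m[0]
        self.state = [x0 if x0 else 1, 1, 1, 1]

    def next(self):
        # Advance each stream: s_i <- a_i * s_i mod m_i, then combine:
        # u = (x/m1 + y/m2 + z/m3 + w/m4) mod 1
        for i in range(4):
            self.state[i] = (self.a[i] * self.state[i]) % self.m[i]
        return sum(s / m for s, m in zip(self.state, self.m)) % 1.0
```

Each stream stays in [1, m_i) because the moduli are prime and larger than the multipliers, so the combined output is a deterministic sequence in [0, 1).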
http://math.stackexchange.com/questions/38312/how-to-find-homology-groups-of-a-connected-sum
How to find homology groups of a connected sum?

I need to find homology groups for the following simplicial complex: $RP^2$ # $RP^2$ # $\Delta$ # $\Delta$. How to do it? If I am not mistaken, $RP^2$ is not orientable - so we cannot just sum the groups. How to solve the problem?

- What is $\Delta$? –  Gerben May 10 '11 at 19:26

- You can use the Mayer-Vietoris sequence. Not sure what you mean by "not orientable --> we cannot sum the groups". Summing the groups generally works only if the intersection has trivial homology. –  Alon Amit May 10 '11 at 19:35

- $\Delta$ is a triangle/disk. 2-dimensional simplex. –  user10732 May 10 '11 at 19:38

Taking the connected sum of a surface and the 2-disk is the same as puncturing the surface. So your surface is the Klein bottle twice punctured. Its fundamental group, by van Kampen, is free on three generators, so its first homology is $\mathbb{Z}^3$. It has non-empty boundary, so its second homology is trivial.

- Indeed, the surface is homotopy equivalent to a wedge of three circles! –  Grumpy Parsnip May 10 '11 at 20:46

Edit: Now that I see $\Delta$ is a disk, the surface in question has a nonempty boundary, and my answer below assumed the boundary was empty.

I'm not sure what $\Delta$ is, but if you can write your surface as an identification space of a polygon, it's easy to construct a cell complex, and therefore a chain complex that will calculate the homology. For example, $RP^2\# RP^2\# T$, where $T$ is a torus, is the quotient of an octagon where the boundary edges are glued by the pattern $aabbcdc^{-1}d^{-1}$. The chain complex has one generator in degree $0$, $4$ in degree $1$, and $1$ in degree $2$. The boundary operator is zero on edges and is equal to $2a+2b$ on the unique generator in degree $2$. So $\partial_2$ is injective, implying $H_2=0$. Also $H_1=\mathbb Z^4/im(\partial_2)$, which you can verify is $\mathbb Z_2\oplus \mathbb Z\oplus \mathbb Z\oplus \mathbb Z$.
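The last quotient can be double-checked mechanically: for a quotient of $\mathbb Z^n$ by a single relation, a basis change turns the relation into $d \cdot e_1$ where $d$ is the gcd of its entries, giving $\mathbb Z_d \oplus \mathbb Z^{n-1}$. A small helper (my addition, not from the post):

```python
from functools import reduce
from math import gcd

def quotient_by_one_relation(relation):
    """Z^n / <relation>: with d = gcd of the entries (d > 0), the quotient is
    Z_d (+) Z^(n-1); d = 1 means no torsion, and the zero relation gives Z^n.
    Returns (free_rank, list_of_torsion_coefficients)."""
    n = len(relation)
    d = reduce(gcd, relation)
    if d == 0:                     # zero relation: nothing is quotiented out
        return n, []
    return n - 1, ([] if d == 1 else [d])

# im(d_2) is generated by 2a + 2b inside Z^4 = <a, b, c, d>:
free_rank, torsion = quotient_by_one_relation((2, 2, 0, 0))
```

For the relation $(2,2,0,0)$ this gives free rank 3 with a single $\mathbb Z_2$ torsion factor, matching $\mathbb Z_2 \oplus \mathbb Z^3$ above.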
Alternatively you can appeal to the classification of surfaces, and figure out whether your surface is a connected sum of some number of projective planes (in the nonorientable case) or tori (in the orientable case), and just look up or figure out the answer for these examples.

- Could you please explain to me how one could represent a surface as an identification space of a polygon in general? –  El Moro May 10 '11 at 20:35

- @El Moro: There's probably a good place on the web to find this. I know that Massey's book on algebraic topology has a nice treatment. Basically, you cut along simple closed curves to get a punctured disk, and then cut along some properly embedded arcs to get a polygon. Gluing the cuts back together is the identification space topology. –  Grumpy Parsnip May 10 '11 at 20:43
https://www.bernardosulzbach.com/bitwise-and-of-range-of-numbers/
# Bitwise And of Range

2017-05-13

Recently I had to solve a problem which asked you to determine the bitwise and of a range of nonnegative numbers. There is an obvious linear solution to this problem which simply computes the bitwise and of the range.

Bitwise and of [4, 10] = 4 & 5 & 6 & 7 & 8 & 9 & 10

However, after thinking about how the anding ends up "erasing" bits permanently, I figured out the following logarithmic solution:

```python
def bitwise_and_of_range(begin, end):
    if begin == end:
        return begin
    else:
        return bitwise_and_of_range(begin >> 1, end >> 1) << 1
```

Essentially, if you have at least two numbers in the range, the last bit of the result will be zero (the range contains both an even and an odd number), so you can compute the bitwise and of the prefixes and append a zero to the result (by shifting it to the left).
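A brute-force cross-check of the two approaches described above (my addition):

```python
from functools import reduce

def bitwise_and_of_range(begin, end):
    # Logarithmic: while the endpoints differ, the low bit of the result
    # is zero, so recurse on the shared prefixes and shift a zero back in.
    if begin == end:
        return begin
    return bitwise_and_of_range(begin >> 1, end >> 1) << 1

def bitwise_and_brute(begin, end):
    # Linear reference implementation: and every number in [begin, end].
    return reduce(lambda acc, n: acc & n, range(begin, end + 1))

# The two implementations agree on every small range.
all_agree = all(
    bitwise_and_of_range(b, e) == bitwise_and_brute(b, e)
    for b in range(65) for e in range(b, 65)
)
```

For the example range [4, 10], both give 0, since 8 clears every bit that 4 through 7 share.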
https://physics.stackexchange.com/questions/533887/si-units-and-the-coriolis-parameter
# SI Units and the Coriolis Parameter

I am trying to solve the following equation numerically $$|u_\text{max}|=\frac{\Delta p}{|f|\rho}\frac{\sqrt{2}}{R}\mathrm e^{-1/2} \tag{1}.$$ Here, $$\Delta p=20\ \mathrm{hPa}$$, $$R=500\ \mathrm{km}$$ and $$\rho=1\ \mathrm{kg/m^3}$$. For equation $$(1)$$, $$f$$ denotes the Coriolis parameter, which in this case equals $$f=2\Omega\sin(45^\circ).$$ Using Wikipedia, the SI units for $$\Omega$$ are in $$\mathrm{rads/s}$$. Does this mean that the SI units for $$f$$ are also in $$\mathrm{rads/s}$$? Being a velocity, the SI units of $$u_\text{max}$$ should be $$\mathrm{m/s}$$. Substituting all values into $$(1)$$, $$|u_\text{max}|=\sqrt{\frac{2}{\mathrm e}}\frac{20\times 10^2 \ \mathrm{Pa}}{\Omega\sin(45^\circ) \ \mathrm{rads/s}\times 1 \ \mathrm{kg/m^3}\times \left(500\times 10^3\right)\ \mathrm{m}}.$$ I'm unsure of how the correct SI units ($$\mathrm{m/s}$$) appear.

The mentioned "$$\mathrm{rads/s}$$" is not a correct SI unit symbol. (Note that the Wikipedia page on Coriolis frequency also shows wrong unit symbols for the hour "$$\mathrm{hr}$$" and minute "$$\mathrm{m}$$".) The correct special symbol for the radian is $$\mathrm{rad}$$. The radian is a special name for an SI derived unit. It can be expressed in SI base units as follows. $$1\ \mathrm{rad}=1\ \frac{\mathrm m}{\mathrm m}=1$$ Therefore, $$1\ \mathrm{rad/s}=1\ \mathrm{s^{-1}}$$ Furthermore, you should know that $$1\ \mathrm{Pa}=1\ \mathrm{kg\ m^{-1}\ s^{-2}}$$; thus $$\frac{1\ \mathrm{Pa}}{1\ \mathrm{s^{-1}}\times1\ \mathrm{kg\ m^{-3}}\times1\ \mathrm m} =\frac{1\ \mathrm{kg\ m^{-1}\ s^{-2}}}{1\ \mathrm{s^{-1}}\times1\ \mathrm{kg\ m^{-3}}\times1\ \mathrm m} =1\ \mathrm{m\ s^{-1}}$$

Anything with radians or lengths at right angles should be treated with care and caution. SI is a mess for angles, which reflects issues in the wider community about levels of abstraction. For instance, the unit hertz is just "per second": it can be anything per second.
Because this can cause confusion, there are even explicit notes on the BIPM web site (the owners of the SI) about the problem, and about how they needed extra 'units' for radioactivity, which would otherwise also have units of hertz.
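Putting numbers in confirms the unit bookkeeping. The value of Earth's rotation rate is not stated in the question, so the standard value used below is my assumption:

```python
import math

# Omega is Earth's rotation rate; 7.2921e-5 rad/s is the standard value
# (assumed here, not given in the post). Since rad is dimensionless,
# Omega and f both carry units of 1/s.
Omega = 7.2921e-5
f = 2 * Omega * math.sin(math.radians(45))   # Coriolis parameter at 45 deg

dp = 20e2        # Delta p = 20 hPa, in Pa = kg m^-1 s^-2
rho = 1.0        # kg / m^3
R = 500e3        # m

# Pa / (s^-1 * kg m^-3 * m) = m/s, exactly as derived in the answer.
u_max = (dp / (abs(f) * rho)) * (math.sqrt(2) / R) * math.exp(-0.5)
```

This gives roughly 33 m/s, a physically plausible gradient-wind-scale speed, with the units coming out as m/s.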
http://math.stackexchange.com/questions/371528/positive-curvature-on-holomorphic-vector-bundles/371545
# Positive curvature on holomorphic vector bundles

There must be a mistake in my understanding of the definition of positivity for the curvature. Let me summarize:

Let $(L,\nabla,h) \rightarrow M$ be a hermitian holomorphic line bundle with Chern connection. Then one can show (e.g. Huybrechts, Complex Geometry, Prop. 4.3.8) that the curvature $F = \nabla^2 \in \mathcal{A}^{1,1}(M,\mathbb{C})$ is a (1,1)-form with the property that

(i) $h(F_{X,Y}\sigma,\tau)=-h(\sigma,F_{X,Y}\tau)$.

Now, writing $F$ in local coordinates $(z_i)$ of $M$, we see $F=\sum_{i,j}F_{Z_i,\bar Z_j}dz_i \wedge d\bar z_j$, where $a_{ij}:=F_{Z_i,\bar Z_j}$ is a hermitian symmetric matrix.

Now my question is: from equality (i) we deduce that $F_{X,Y}$ is a purely imaginary complex number. Why isn't this a contradiction to $a_{ij}$ being hermitian symmetric? The matrix entries of $a_{ij}$ are particular values $F_{X,Y}$, so we get a matrix with purely imaginary entries, which cannot be positive definite. Where is my mistake?!

-

In the expression $F_{X,Y}$, $X$ and $Y$ are vectors living in the real tangent space $TM \subset TM \otimes \mathbb C$. But of course $Z_i$ and $\bar Z_i$ do not lie in the real tangent space. For example, consider the two-form $\omega = i\,dx \wedge dy$. Then using $dx = \frac{1}{2}(dz + d\bar z)$, $dy = \frac{1}{2i}(dz - d\bar z)$ you see that $\omega = \frac{1}{2} d\bar z \wedge dz$. So writing it in terms of real differential forms gives a purely imaginary coefficient, but in terms of the complex coordinates you get a real coefficient.
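The sign-and-factor bookkeeping in the example can be verified mechanically: writing a one-form $p\,dx + q\,dy$ as the pair $(p, q)$, the $dx\wedge dy$ coefficient of a wedge of two one-forms is a $2\times 2$ determinant (a small check I'm adding, not from the answer):

```python
def wedge_xy(u, v):
    """dx^dy coefficient of u ^ v, for one-forms given as (dx, dy) components:
    (p1 dx + q1 dy) ^ (p2 dx + q2 dy) = (p1*q2 - q1*p2) dx ^ dy."""
    return u[0] * v[1] - u[1] * v[0]

dz = (1, 1j)       # dz    = dx + i dy
dzbar = (1, -1j)   # dzbar = dx - i dy

# dzbar ^ dz = 2i dx ^ dy, so (1/2) dzbar ^ dz has dx^dy coefficient i,
# i.e. it equals omega = i dx ^ dy.
omega_coeff = 0.5 * wedge_xy(dzbar, dz)
```

The real-coordinate coefficient of $\omega$ is the purely imaginary $i$, while the $d\bar z\wedge dz$ coefficient is the real number $1/2$, exactly the dichotomy the answer describes.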
https://tex.stackexchange.com/questions/471649/xelatex-installed-italic-font-not-appearing
# XeLaTeX: Installed Italic font not appearing

Italics go missing when I change to a font that is installed, and I can't figure out why. Here is the installed font (as shown in Mac OS's Font Book):

Here's a MWE:

```latex
\documentclass[fontsize=14.5pt, paper=a6, DIV=14, pagesize]{scrartcl}
\usepackage{fontspec,xltxtra,xunicode}
\defaultfontfeatures{Mapping=tex-text}
\setmainfont{BentonModernFour}
%\setromanfont[Mapping=tex-text]{BentonModernFour-Roman}% Doesn't work
%\setromanfont[Mapping=tex-text]{Hoefler Text} %Works
\begin{document}
No one gave their real names but \textit{nommes de guerre}.
\end{document}
```

Yields only this:

But this doesn't work for me. I've also looked at "Fontspec can't find italic font (installed later) but will use condensed (installed first) on OSX", which explains how to update the font cache for LuaLaTeX, but not XeLaTeX. LuaLaTeX yields the desired output, but XeLaTeX does not. How can I fix this? Here is the log file: https://www.dropbox.com/s/kb86n5as1sog613/mwe%2Bitalics%2Bxe.log?dl=0

Apparently, this is a recurring problem. Here's someone on a different forum from 2012: https://latex.org/forum/viewtopic.php?t=20135

Adding ItalFont=(name of font) as a fontspec option didn't work. Neither did ItalicFont=*.otf (see image).

• Don't use xltxtra and xunicode – DG' Jan 24 '19 at 13:53
• Deleting xltxtra and xunicode did not fix the problem. Jan 24 '19 at 14:09
• If fontspec can't find a shape, use the key ItalicFont to set the font manually. Jan 24 '19 at 14:40
• @UlrikeFischer: syntax? ItalicFont=*? See edit above. Jan 24 '19 at 15:23
• fontspec has a documentation. Did you check it? Jan 24 '19 at 15:33

This should work.

a) Select font by name:

```latex
\documentclass{scrartcl}
\usepackage{fontspec}
\setmainfont[UprightFont = *-Roman ,
             BoldFont = *-Bold ,
             ItalicFont = *-Italic ,
             BoldItalicFont = *-BoldItalic ,
             Ligatures=TeX]{BentonModernFour}
\begin{document}
No one gave their real names but \textit{nommes de guerre}.
\end{document}
```

b) Select font by file:

```latex
\documentclass{scrartcl}
\usepackage{fontspec}
\setmainfont[Path=/path/to/font/files/ ,
             Extension = .otf ,
             UprightFont = *-Roman ,
             BoldFont = *-Bold ,
             ItalicFont = *-Italic ,
             BoldItalicFont = *-BoldItalic ,
             Ligatures=TeX]{BentonModernFour}
\begin{document}
No one gave their real names but \textit{nommes de guerre}.
\end{document}
```

BTW: You can find a more detailed explanation of how font selection works in the fontspec manual.

• Indeed it did. But why does "ItalicFont=*.otf" not work? Jan 24 '19 at 16:33
• You can either specify the font by name (what we're doing here) or by filename. In that case you have to provide the full filename, extension, and the path to the fonts. – DG' Jan 24 '19 at 16:37
• The problem is that the issue only occurs sporadically. Sometimes, merely identifying a font family works without more, e.g., "\setromanfont[Mapping=tex-text]{Hoefler Text}". Other times, even though the different fonts are installed in the same directory, italics or bold do not display correctly. It is even stranger when LuaLaTeX and XeLaTeX compiles yield different results with respect to the generation of italics. All of this can not be random behavior; there has to be a reason why one works and the other doesn't. – Jan 24 '19 at 18:41

```latex
\documentclass{scrartcl}
\usepackage{fontspec}
\setmainfont{BentonModernFour}[
  UprightFont = *-Roman,
  BoldFont = *-Bold,
  ItalicFont = *-Italic,
  BoldItalicFont = *-BoldItalic,
]
\begin{document}
foo \textit{bar}
\end{document}
```
http://en.wikipedia.org/wiki/Kernelization
# Kernelization

Not to be confused with Kernel trick.

In computer science, a kernelization is a technique for designing efficient algorithms that achieve their efficiency by a preprocessing stage in which inputs to the algorithm are replaced by a smaller input, called a "kernel". The result of solving the problem on the kernel should either be the same as on the original input, or it should be easy to transform the output on the kernel to the desired output for the original problem. Kernelization is often achieved by applying a set of reduction rules that cut away parts of the instance that are easy to handle. In parameterized complexity theory, it is often possible to prove that a kernel with guaranteed bounds on its size (as a function of some parameter associated to the problem) can be found in polynomial time. When this is possible, it results in a fixed-parameter tractable algorithm whose running time is the sum of the (polynomial time) kernelization step and the (non-polynomial but bounded by the parameter) time to solve the kernel. Indeed, every problem that can be solved by a fixed-parameter tractable algorithm can be solved by a kernelization algorithm of this type.

## Example: vertex cover

A standard example for a kernelization algorithm is the kernelization of the vertex cover problem by S. Buss.[1] In this problem, the input is an undirected graph $G$ together with a number $k$. The output is a set of at most $k$ vertices that includes at least one endpoint of every edge in the graph, if such a set exists, or a failure exception if no such set exists. This problem is NP-hard. However, the following reduction rules may be used to kernelize it:

1. If $v$ is a vertex of degree greater than $k$, remove $v$ from the graph and decrease $k$ by one. Every vertex cover of size $k$ must contain $v$, since otherwise too many of its neighbors would have to be picked to cover the incident edges. Thus, an optimal vertex cover for the original graph may be formed from a cover of the reduced problem by adding $v$ back to the cover.

2. If $v$ is an isolated vertex, remove it. An isolated vertex cannot cover any edges, so in this case $v$ cannot be part of any minimal cover.

3. If more than $k^2$ edges remain in the graph, and neither of the previous two rules can be applied, then the graph cannot contain a vertex cover of size $k$. For, after eliminating all vertices of degree greater than $k$, each remaining vertex can only cover at most $k$ edges, and a set of $k$ vertices could only cover at most $k^2$ edges. In this case, the graph may be replaced by the complete graph $K_{k+1}$, which also has no $k$-vertex cover.

An algorithm that applies these rules repeatedly until no more reductions can be made necessarily terminates with a kernel that has at most $k^2$ edges and (because each edge has at most two endpoints and there are no isolated vertices) at most $2k^2$ vertices. This kernelization may be implemented in linear time. Once the kernel has been constructed, the vertex cover problem may be solved by a brute force search algorithm that tests whether each subset of the kernel is a cover of the kernel. Thus, the vertex cover problem can be solved in time $O(2^{2k^2}+n+m)$ for a graph with $n$ vertices and $m$ edges, allowing it to be solved efficiently when $k$ is small even if $n$ and $m$ are both large.

Although this bound is fixed-parameter tractable, its dependence on the parameter is higher than might be desired. More complex kernelization procedures can improve this bound, by finding smaller kernels, at the expense of greater running time in the kernelization step. In the vertex cover example, kernelization algorithms are known that produce kernels with at most $2k$ vertices.
One algorithm that achieves this improved bound exploits the half-integrality of the linear program relaxation of vertex cover due to Nemhauser and Trotter.[2] Another kernelization algorithm achieving that bound is based on what is known as the crown reduction rule and uses alternating path arguments.[3] The currently best known kernelization algorithm in terms of the number of vertices is due to Lampis (2011) and achieves $2k-c\log k$ vertices for any fixed constant $c$.

It is not possible, in this problem, to find a kernel of size $O(\log k)$, unless P = NP, for such a kernel would lead to a polynomial-time algorithm for the NP-hard vertex cover problem. However, much stronger bounds on the kernel size can be proven in this case: unless coNP $\subseteq$ NP/poly (believed unlikely by complexity theorists), for every $\epsilon>0$ it is impossible in polynomial time to find kernels with $O(k^{2-\epsilon})$ edges.[4] It is unknown for vertex cover whether kernels with $(2-\epsilon)k$ vertices for some $\epsilon>0$ would have any unlikely complexity-theoretic consequences.

## Definition

In the literature, there is no clear consensus on how kernelization should be formally defined, and there are subtle differences in the uses of that expression.

### Downey-Fellows notation

In the notation of Downey & Fellows (1999), a parameterized problem is a subset $L\subseteq\Sigma^*\times\N$ describing a decision problem. A kernelization for a parameterized problem $L$ is an algorithm that takes an instance $(x,k)$ and maps it in time polynomial in $|x|$ and $k$ to an instance $(x',k')$ such that

• $(x,k)$ is in $L$ if and only if $(x',k')$ is in $L$,
• the size of $x'$ is bounded by a computable function $f$ in $k$, and
• $k'$ is bounded by a function in $k$.

The output $(x',k')$ of kernelization is called a kernel. In this general context, the size of the string $x'$ just refers to its length.
Some authors prefer to use the number of vertices or the number of edges as the size measure in the context of graph problems.

### Flum-Grohe notation

In the notation of Flum & Grohe (2006, p. 4), a parameterized problem consists of a decision problem $L\subseteq\Sigma^*$ and a function $\kappa:\Sigma^*\to\N$, the parameterization. The parameter of an instance $x$ is the number $\kappa(x)$. A kernelization for a parameterized problem $L$ is an algorithm that takes an instance $x$ with parameter $k$ and maps it in polynomial time to an instance $y$ such that

• $x$ is in $L$ if and only if $y$ is in $L$ and
• the size of $y$ is bounded by a computable function $f$ in $k$.

Note that in this notation, the bound on the size of $y$ implies that the parameter of $y$ is also bounded by a function in $k$. The function $f$ is often referred to as the size of the kernel. If $f=k^{O(1)}$, the problem is said to admit a polynomial kernel. Similarly, for $f={O(k)}$, the problem admits a linear kernel.

## Kernelizability and fixed-parameter tractability are equivalent

A problem is fixed-parameter tractable if and only if it is kernelizable and decidable.

That a kernelizable and decidable problem is fixed-parameter tractable can be seen from the definition above: first the kernelization algorithm, which runs in time $O(|x|^c)$ for some $c$, is invoked to generate a kernel of size $f(k)$. The kernel is then solved by the algorithm that proves that the problem is decidable. The total running time of this procedure is $g(f(k)) + O(|x|^c)$, where $g(n)$ is the running time for the algorithm used to solve the kernels. Since $g(f(k))$ is computable, e.g. by using the assumption that $f(k)$ is computable and testing all possible inputs of length $f(k)$, this implies that the problem is fixed-parameter tractable.

The other direction, that a fixed-parameter tractable problem is kernelizable and decidable, is a bit more involved.
Assume that the question is non-trivial, meaning that there is at least one instance that is in the language, called $I_{yes}$, and at least one instance that is not in the language, called $I_{no}$; otherwise, replacing any instance by the empty string is a valid kernelization. Assume also that the problem is fixed-parameter tractable, i.e., it has an algorithm that runs in at most $f(k) \cdot |x|^c$ steps on instances $(x, k)$, for some constant $c$ and some function $f(k)$. To kernelize an input, run this algorithm on the given input for at most $|x|^{c+1}$ steps. If it terminates with an answer, use that answer to select either $I_{yes}$ or $I_{no}$ as the kernel. If, instead, it exceeds the $|x|^{c+1}$ bound on the number of steps without terminating, then return $(x,k)$ itself as the kernel. Because $(x,k)$ is only returned as a kernel for inputs with $f(k)\cdot |x|^c > |x|^{c+1}$, it follows that the size of the kernel produced in this way is at most $\max\{|I_{yes}|, |I_{no}|, f(k)\}$. This size bound is computable, by the assumption from fixed-parameter tractability that $f(k)$ is computable. ## More Examples • Vertex Cover: The vertex cover problem has kernels with at most $2k$ vertices and $O(k^2)$ edges.[5] Furthermore, for any $\varepsilon >0$, vertex cover does not have kernels with $O(k^{2-\varepsilon})$ edges unless $\text{coNP}\subseteq\text{NP/poly}$.[4] The vertex cover problem in $d$-uniform hypergraphs has kernels with $O(k^{d})$ edges using the sunflower lemma, and it does not have kernels of size $O(k^{d-\varepsilon})$ unless $\text{coNP}\subseteq\text{NP/poly}$.[4] • Feedback Vertex Set: The feedback vertex set problem has kernels with $4k^2$ vertices and $O(k^2)$ edges.[6] Furthermore, it does not have kernels with $O(k^{2-\varepsilon})$ edges unless $\text{coNP}\subseteq\text{NP/poly}$.[4] • k-Path: The k-path problem is to decide whether a given graph has a path of length at least $k$. 
This problem has kernels of size exponential in $k$, and it does not have kernels of size polynomial in $k$ unless $\text{coNP}\subseteq\text{NP/poly}$.[7] • Bidimensional problems: Many parameterized versions of bidimensional problems have linear kernels on planar graphs, and more generally, on graphs excluding some fixed graph as a minor.[8] ## Notes 1. ^ This unpublished observation is acknowledged in a paper of Buss & Goldsmith (1993) 2. ^ 3. ^ Flum & Grohe (2006) give a kernel based on the crown reduction that has $3k$ vertices. The $2k$ vertex bound is a bit more involved and folklore. 4. ^ a b c d 5. ^ 6. ^ 7. ^ 8. ^
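The simple kernelization for vertex cover credited to Buss (note 1) can be sketched in code. This is an illustrative Python sketch under my own choices of function name and graph representation (an edge set), and it produces the classic $O(k^2)$-edge kernel, not the stronger $2k$-vertex kernels cited above:

```python
def buss_kernel(edges, k):
    """Reduce a vertex-cover instance (edges, k) to a small kernel.

    Rule 1: a vertex of degree > k must be in any cover of size <= k,
            so take it into the cover and decrement k.
    Rule 2: once Rule 1 no longer applies, a yes-instance has at most
            k^2 edges, so larger instances are definite no-instances.
    Returns (kernel_edges, k') or None for a definite no-instance.
    """
    edges = {frozenset(e) for e in edges}
    changed = True
    while changed and k >= 0:
        changed = False
        degree = {}
        for e in edges:
            for v in e:
                degree[v] = degree.get(v, 0) + 1
        for v, d in degree.items():
            if d > k:                      # Rule 1: v is forced into the cover
                edges = {e for e in edges if v not in e}
                k -= 1
                changed = True
                break
    if k < 0 or len(edges) > k * k:        # Rule 2: too many edges remain
        return None
    return edges, k
```

For example, a star with five leaves and $k=1$ reduces to the empty kernel (the center is forced into the cover), while a triangle with $k=1$ is rejected outright. Any instance surviving both rules has at most $k^2$ edges, which suffices for fixed-parameter tractability.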
https://questions.examside.com/past-years/jee/question/in-an-experiment-electrons-are-made-to-pass-through-a-narro-2008-marks-4-nexzcypzn0stxkzj.htm
### JEE Mains Previous Years Questions with Solutions

1

### AIEEE 2008

In an experiment, electrons are made to pass through a narrow slit of width $'d'$ comparable to their de Broglie wavelength. They are detected on a screen at a distance $'D'$ from the slit (see figure). Which of the following graphs can be expected to represent the number of electrons $'N'$ detected as a function of the detector position $'y'\left( {y = 0} \right.$ corresponds to the middle of the slit$\left. \, \right)$

A
B
C
D

## Explanation

The electron beam will be diffracted and the maximum is obtained at $y=0.$ Also, the distance between the first minima on both sides will be greater than $d.$

2

### AIEEE 2007

A photon of frequency $v$ has a momentum associated with it. If $c$ is the velocity of light, the momentum is

A $hv/c$
B $v/c$
C $h$ $v$ $c$
D $hv/{c^2}$

## Explanation

The energy of a photon of frequency $v$ is given by $E = hv.$ Also, $E = m{c^2}$, so $m{c^2} = hv$
$\Rightarrow mc = {{hv} \over c} \Rightarrow p = {{hv} \over c}$

3

### AIEEE 2007

If ${g_E}$ and ${g_M}$ are the accelerations due to gravity on the surfaces of the earth and the moon respectively, and if Millikan's oil drop experiment could be performed on the two surfaces, one will find the ratio $\dfrac{\text{electronic charge on the moon}}{\text{electronic charge on the earth}}$ to be

A ${g_M}/{g_E}$
B $1$
C $0$
D ${g_E}/{g_M}$

## Explanation

The electronic charge does not depend on the acceleration due to gravity, as it is a universal constant. So, electronic charge on earth $=$ electronic charge on moon. $\therefore$ Required ratio $=1.$

4

### AIEEE 2006

The anode voltage of a photocell is kept fixed. The wavelength $\lambda$ of the light falling on the cathode is gradually changed. The plate current ${\rm I}$ of the photocell varies as follows

A
B
C
D

## Explanation

As $\lambda$ decreases, $v$ increases and hence the speed of the photoelectrons increases. 
The chance of a photoelectron reaching the anode increases, and hence the photoelectric current increases.
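As a numeric illustration of the result $p = hv/c$ from Q2 (constants rounded to the precision typical of such problems; the frequency chosen here is an arbitrary example):

```python
h = 6.626e-34   # Planck constant, J*s
c = 3.0e8       # speed of light, m/s

def photon_momentum(nu):
    """Photon momentum via E = h*nu and E = p*c, hence p = h*nu/c."""
    return h * nu / c

p_green = photon_momentum(5.0e14)   # roughly the frequency of green light
```

This gives $p \approx 1.1\times10^{-27}\ \mathrm{kg\,m/s}$, which is why photon momentum is only noticeable at atomic scales.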
https://brilliant.org/discussions/thread/requests-for-brilliantorg/
# Requests for Brilliant.org

A chat/email system on here to talk to people who posted questions on here. You can also talk to people who are following you or that you are following. Why not ask them on how they come up with the question if they created the question they posted, or even talk about ideas for future, etc.

Icon: An envelope

Location of Icon: In between "Post Something" and the Notifications' Bell

Note by Lew Sterling Jr 5 years, 8 months ago 
Sort by:

I have already made such a request to Brilliant once. I too feel that some sort of chat should be enabled so that there is an interaction among members. - 5 years, 8 months ago

Chat option will also help to directly talk to members why they are reporting the problem - 5 years, 8 months ago

Why not put it in notifications when someone reports your problem with the his/her reason? I think Brilliant would be like Facebook with Chat option :) - 5 years, 8 months ago

Agreed. - 5 years, 8 months ago

I too strongly feel so. - 5 years, 8 months ago

Yeah me too. - 5 years, 8 months ago

Use this - 2 years, 7 months ago

This really would take Brilliant to the next level. - 5 years, 8 months ago

Now its not needed as the dispute feature is developed - 5 years, 8 months ago
http://tex.stackexchange.com/questions/149993/how-to-add-parameters-to-shape-which-internally-use-append-after-command
# How to add parameters to a shape which internally uses 'append after command'

Edit: It seems that this question (one of my first here) was not as clear as I imagined when I asked it. So I -- prompted by recently received comments on it -- rephrase it in the hope that it is now clearer what bothered me at the time. Here we go:

In one of my text books I have a great number of images made with the TikZ package, which have only slightly different common elements. The definitions for these elements I store in a \tikzset, something like this:

\tikzset{TCP/.style = {
    tcpBOX/.style 2 args = {shape=rectangle,
        draw=##1, % border color
        fill=##2, % fill color
        thick,
        inner sep= 2mm,
        outer sep=0mm,
        minimum height=11mm,
        % etc
        },
    % other common elements
    }}

and used in TikZ pictures as

\begin{tikzpicture}[TCP]
\node (n1) [tcpBOX={black}{none}] at (0,0) {some text};
% ...
\end{tikzpicture}

In some cases I need labels inside the tcpBOX node. For this I use the solution which Mark Wibrow provided me years ago in this comp.text.tex thread:

\documentclass[tikz,border=5mm]{standalone}
%%% code from ctt, December 1999
\makeatletter
\def\tikzsavelastnodename#1{\let#1=\tikz@last@fig@name}
\makeatother
\tikzset{
BOX/.style = {rectangle, minimum width=33mm, minimum height=22mm,
draw, thick, text=red,
append after command={\pgfextra{\tikzsavelastnodename\tikzsavednodename}},#1%
},
add text/.style args = {#1:#2}{append after command={%
node[anchor=#1] at (\tikzsavednodename.#1) {#2}}
},
}
%%% example of use
\begin{document}
\begin{tikzpicture}
\node[BOX, ] (a) {box};
\end{tikzpicture}
\end{document}

It works fine. The question is how to merge the code for tcpBOX and BOX in such a way that tcpBOX can be used, for example, as:

\node[box=<desired parameter>, 
add text=south:lorum] (<node name>) at (<coordinate>) {text in box};

At the time I asked this question I had no clue how to merge the two style definitions, because each has its own arguments to be set when it is used (tcpBOX for "draw" and "fill", and BOX for the append command). -

I'm not sure I quite understand the question. If you want to change the fill, why don't you just add e.g. fill=blue (or whatever) to the parameters you pass? Why do you want to be able to write e.g. BOX={fill=blue} or something? (If that's the idea - I'm not really sure how you envisage this working.) Your parameters will override the defaults e.g. you can also add text=green to get "box" appearing in green rather than red. Maybe you should post what you tried. That might give a clearer sense of what you are trying to achieve. – cfr Dec 22 '13 at 4:06

Of course I can define each shape as you suggest. Actually, up to now I did so. The idea behind my question is the following: to illustrate my text book (a few hundred images) I prepare standard shapes which have, in their use, small variations (width, fill color) which I wrote as parameters to the shape. Sometimes it happens that I'd like to add some text to these shapes, in the sense of the given example. In such cases I need to 'rewrite' existing shapes as you suggest. I'd just like to avoid this and, if it is possible, to use a 'standardized' form of predefined shapes collected in a tikzset. – Zarko Dec 22 '13 at 8:22

But why do you want to add the parameter to box=<> rather than just adding it? I could understand if you wanted to create a few standard options e.g. BOX, BOF, BOG etc. but I understood that you wanted to use the parameters when calling BOX i.e. for each instance. And I don't get why that would be more convenient than just adding the parameters directly. I still think showing what you tried might make this clearer. Failing that, maybe just give a concrete example of what you have in mind. 
Or have I misunderstood the point at which you want to specify the additional parameters? – cfr Dec 22 '13 at 13:25

Meanwhile I have rethought my problem and your questions and found a solution which fulfills my expectations. I define "append after command" as a separate style and add it as such to the style for BOX. Now the definition of the BOX style can have its own parameters. Thank you very much for your attention, it helped me a lot. – Zarko Dec 24 '13 at 23:37

Perhaps you could post your solution here so that other users can see exactly what you did and benefit from it? – cfr Dec 28 '13 at 19:54

Now the question seems quite trivial and the solution obvious ... Anyway, the solution which I use now solves my problem in the following way:

• the append after command={\pgfextra{\tikzsavelastnodename\tikzsavednodename}},##1 part I define as a new style named saveLNN (as an acronym for save-Last-Node-Name)
• to my definition of tcpBOX I add this saveLNN

The complete code is:

\documentclass[tikz,border=5mm]{standalone}
\makeatletter
\def\tikzsavelastnodename#1{\let#1=\tikz@last@fig@name}
\makeatother
\tikzset{TCP/.style = {%
    saveLNN/.style = {append after command={%
        \pgfextra{\tikzsavelastnodename\tikzsavednodename}},##1},
    add text/.style args = {##1:##2}{append after command={%
        node[ATnode,anchor=##1] at (\tikzsavednodename.##1) {##2}}},
    ATnode/.style = {inner sep=0.5mm, font=\tiny\bfseries\sffamily},
    tcpBOX/.style 2 args = {shape=rectangle,
        draw=##1, % border color
        fill=##2, % fill color
        thick,
        inner sep= 2mm,
        outer sep=0mm,
        minimum height=11mm,
        saveLNN},
    }
}% end of tikzset

which gives the expected result.

I'm still open for better solution(s). -
https://puzzling.stackexchange.com/questions/9566/a-rectangle-of-numbers/9567
A rectangle of numbers

Six numerals and two ? symbols are arranged in a $2\times4$ rectangle.

? ?
8 9
5 6
2 3

Apparently, either one or two different symbols are needed to replace the two ? symbols. What are they?

• You went to the trouble of making an account, so why not log back on and accept an answer? Even if you don't know by now, you can adopt the puzzle as your own and settle on one. – can-ned_food Mar 22 '17 at 1:48

My guess would be: / and *

Why? Look at the numeric keypad on your keyboard. Above 8 there is a slash (/) and above 9 there is an asterisk (*)

• Or the number strip, then you get *, and (. – Travis Feb 26 '15 at 18:54

The grid of numbers is

? ?
8 9
5 6
2 3

Each number is three greater than the number below it and one greater than the number to its left (if there is one). Alternatively, think of each column as an arithmetic progression $2,5,8,?$ and $3,6,9,?$ (or reversed). So, the missing numbers are 11 and 12.

11 12
8 9
5 6
2 3

• could also be 11 mod 10 and 12 mod 10 – dmg Feb 25 '15 at 8:24
• that's too obvious to be true – Novarg Feb 25 '15 at 8:58

It's 11 and 12 like the answer above mine, but the reasoning could also be: 2+3 = 5, and 5+6 = 11, so 11 is the sum of the two consecutive numbers 5 and 6; likewise 8+9 = 17, and 17+6 = 23 = 11+12.
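Both lines of reasoning in the answers can be checked mechanically (a throwaway sketch; the variable names are mine):

```python
cols = [[2, 5, 8], [3, 6, 9]]        # the two columns, read bottom to top
top_row = [c[-1] + 3 for c in cols]  # each column steps by 3, so extend each progression
```

The consecutive-sum pattern of the last answer also checks out: $2+3=5$, $5+6=11$, $8+9=17$, and $17+6=23=11+12$.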
https://stats.stackexchange.com/questions/153412/finding-vectors-with-an-extreme-component
# Finding vectors with an extreme component I'm looking for a function that measures if a vector component dominates all the rest. Let $$\mathbf{v} = [v_1, v_2, \ldots, v_n]$$ and assume that it is L2 normalized, $|\mathbf{v}|_2 = \sqrt{\sum_{i=1}^n v_i^2} = 1$. If it helps, consider $\mathbf{v}$ a normalized unit vector from a matrix diagonalization. The function $\theta$ should be maximized when: $$v_i=1,v_{j \neq i}=0 \rightarrow \theta=1$$ and it should be small (on average) when the components are drawn from a standard normal distribution: $$v_i = \mathcal{N}(0,\sigma^2=1)$$ but should somehow vary in a "reasonable" way in-between. • It seems you are considering the inequality between the components in $v$. You may want to use Gini coefficient (en.wikipedia.org/wiki/Gini_coefficient). (Imagine $v$ as a vector of people's income in a nation.) Another measure could be $D(v)\equiv n\times\{median(v) - mean(v)\}$ when $n>2$? If $v$ was drawn from a symmetric distribution, $D(v)=0$. In your case $v = (0,0,\dots,1)$, $D(v)=1$. – semibruin May 21 '15 at 16:32 • The Gini coefficient is perfect, thanks for introducing it to me! To make it work for my example, I simply used $v_i^2$ and scaled gini coefficient $g \rightarrow 2(g-(1/2))$. This gives 0 for a uniform distribution and one for my test case. Please post this as an answer so I can accept it. – Hooked May 21 '15 at 16:51 For a vector $v$, set $$Z(v) = \frac{\max |v_i|}{\sqrt{\sum_i v_i^2}}$$ Then for vectors of length $n$. $$Z(0, 0, \ldots, 1, \ldots, 0) = 1$$ and $$Z(a, a, \ldots, a) = \frac{1}{\sqrt{n}}$$ These are the maximum and minimum values, because on one hand $$\max |v_i| = \max \sqrt{v_i^2} \leq \sqrt{\sum_i v_i^2}$$ and on the other $$\sqrt{\sum_i v_i^2} \leq \sqrt{\sum_i \max_j{v_j^2}} = \sqrt{n \max_j{v_j^2}} = \sqrt{n} \max{|v_i|}$$ So that's a function with your desired property that maps $R^n$ into the interval $\left[ \frac{1}{\sqrt{n}}, 1 \right]$. 
You can then choose any monotonic function $M: \left[ \frac{1}{\sqrt{n}}, 1 \right] \rightarrow [0, 1]$ to post-compose with.
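In code, the $Z$ statistic and one possible choice of the monotone post-composition $M$ (an affine rescaling onto $[0,1]$; this particular choice is mine, not prescribed by the answer) might look like:

```python
import math

def Z(v):
    """max_i |v_i| / ||v||_2, which lies in [1/sqrt(n), 1]."""
    norm = math.sqrt(sum(x * x for x in v))
    return max(abs(x) for x in v) / norm

def theta(v):
    """Affine rescaling of Z onto [0, 1]: 0 for a flat vector, 1 for a single spike."""
    lo = 1.0 / math.sqrt(len(v))
    return (Z(v) - lo) / (1.0 - lo)
```

The extremes behave as required: a one-hot vector scores 1, and a constant vector scores 0.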
https://stupidsid.com/previous-question-papers/download/applied-mathematics-1-9331
MU First Year Engineering (Semester 1) Applied Mathematics 1 December 2014
Total marks: --
Total time: --
INSTRUCTIONS (1) Assume appropriate data and state your reasons (2) Marks are given to the right of every question (3) Draw neat diagrams wherever necessary
1(a) If $\tanh x=\dfrac {2}{3}$, find the value of $x$ and then $\cosh 2x$.
3 M
1(b) If $u = \tan^{-1} \left ( \dfrac{y}{x} \right )$, find the value of $\dfrac {\partial ^2 u}{\partial x^2} + \dfrac {\partial ^2 u}{\partial y^2}$
3 M
1(c) If $x=r \cos \theta$, $y=r \sin \theta$, find $\dfrac {\partial (x,y)}{\partial (r, \theta)}$
3 M
1(d) Prove that $\log \sec x = \dfrac {1}{2}x^2 + \dfrac {1}{12}x^4 + \dfrac {1}{45}x^6 + \cdots$
3 M
1(e) Show that every square matrix can be uniquely expressed as the sum of a Hermitian matrix and a skew-Hermitian matrix.
4 M
1(f) Find the nth derivative of $y = \sin x \sin 2x \sin 3x$.
4 M
2(a) Solve the equation $x^6+1=0$
6 M
2(b) Reduce the matrix to normal form and find its rank, where $A= \begin{bmatrix} 1 &-1 &3 &6 \\1 &3 &-3 &-4 \\5 &3 &3 &11 \end{bmatrix}$
6 M
2(c) State and prove Euler's theorem for a homogeneous function in two variables. Hence verify Euler's theorem for $u = \dfrac {\sqrt{xy}}{\sqrt{x}+\sqrt{y}}$
8 M
3(a) Test the consistency of the following equations and solve them if they are consistent: 2x-y+z=8, 3x-y+z=6, 4x-y+2z=7, -x+y-z=4
6 M
3(b) Find the stationary values of $x^3+3xy^2-3x^2-3y^2+4$
6 M
3(c) Separate into real and imaginary parts $\sin^{-1}(e^{i\theta})$. 
8 M
4(a) If $x=uv, \ y=\dfrac {u}{v}$, prove that $JJ'=1$
6 M
4(b) Show that for real values of a and b, $e^{2ai \cot ^{-1}b} \left [ \dfrac {bi-1}{bi+1} \right ]^{-a} = 1$
6 M
4(c) Solve the following equations by the Gauss-Seidel method: 27x + 6y - z = 85, 6x + 15y + 2z = 72, x + y + 54z = 110
8 M
5(a) Expand $\cos^7\theta$ in a series of cosines of multiples of $\theta$
6 M
5(b) If $\lim_{x \to 0} \dfrac {a\sinh x + b \sin x}{x^3} = \dfrac {5}{3}$, find a and b
6 M
5(c) If $y = \dfrac {\sin^{-1}x}{\sqrt{1-x^2}}$, then prove that $(1-x^2)y_{n+1} - (2n+1)xy_n - n^2y_{n-1}=0$
8 M
6(a) Examine whether the vectors $x_1= [3,1,1]$, $x_2=[2,0,-1]$, $x_3=[4,2,1]$ are linearly independent.
6 M
6(b) If $u=f(x-y, y-z, z-x)$ then show that $\dfrac {\partial u}{\partial x}+ \dfrac {\partial u}{\partial y} + \dfrac {\partial u}{\partial z} = 0$
6 M
6(c) Fit a straight line to the following data:
x: 1 2 3 4 5 6
y: 49 54 60 73 80 86
8 M
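As a numerical sanity check on question 1(a): from $\tanh x = \frac{2}{3}$ one gets $x = \frac{1}{2}\ln 5$, and the identity $\cosh 2x = \frac{1+\tanh^2 x}{1-\tanh^2 x}$ gives $\cosh 2x = \frac{13/9}{5/9} = \frac{13}{5}$:

```python
import math

x = math.atanh(2 / 3)        # x = (1/2) * ln((1 + 2/3) / (1 - 2/3)) = (1/2) * ln 5
cosh_2x = math.cosh(2 * x)   # the double-angle identity predicts 13/5
```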
https://www.physicsforums.com/threads/trying-to-understand-selection-rules-in-cohen-tannoudji.859180/
# Trying to understand Selection Rules in Cohen-Tannoudji

Tags:

1. Feb 24, 2016

### kq6up

On the bottom half of page 249, Cohen-Tannoudji talks about selection rules in terms of off-diagonal elements of the matrix generated by $\langle \phi _{ n^{ \prime },\tau ^{ \prime } } \mid \hat { B } \mid \phi _{ n,\tau } \rangle$. I thought all off-diagonal matrix elements would be zero due to the $\delta_{n^{\prime},n}$ nature of these state vectors? Is this because it is not just a double sum with a single good quantum number? I am confused. Could you shed some light?

Thanks,
KQ6UP

2. Feb 24, 2016

### kith

I don't have the book handy, but by "$\delta_{n^{\prime},n}$ nature of these state vectors" you probably mean that $\langle \phi _{ n^{ \prime },\tau ^{ \prime } } \mid \phi _{ n,\tau } \rangle \propto \delta_{n^{\prime},n}$, right? If yes, why do you expect something similar to hold if you include the $\hat { B }$ operator? Acting with it on the state to the right may give you nonzero coefficients for states where the principal quantum number is equal to $n'$.

Last edited: Feb 24, 2016

3. Feb 24, 2016

### kq6up

Is that because some operators can change the state (like ladder operators)? I always view those as in a class of their own because they are not part of a normal eigenfunction/eigenvalue equation where the function remains unchanged (e.g. $A f(x)=a f(x)$). It seems like the off-diagonal elements of a regular operator like momentum would all have to be zero. Am I understanding you clearly, or no?

Thanks,
KQ6UP

4. Feb 24, 2016

### kith

Yes, if you apply an operator to a state it will change the state in most cases. Only the eigenstates of the operator remain unchanged. The off-diagonal elements of the matrix representation of an operator are only zero if you choose a basis of eigenstates of said operator for the matrix representation. Are you familiar with the Pauli matrices? 
They represent the spin-1/2 operators $S_x$, $S_y$ and $S_z$ in a basis of eigenstates of $S_z$. As you see, only the $z$-matrix is diagonal, the others are not. 5. Feb 24, 2016 ### kq6up That makes perfect sense thank you. Chris
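The Pauli-matrix example from post #4 is easy to check numerically. In units of $\hbar/2$, the spin-1/2 operators in the $S_z$ eigenbasis are the Pauli matrices; only $S_z$ is diagonal in that basis, while the off-diagonal elements of $S_x$ and $S_y$ are nonzero (a small sketch using plain Python lists):

```python
# Spin-1/2 operators in the S_z eigenbasis, in units of hbar/2 (the Pauli matrices).
Sx = [[0, 1], [1, 0]]
Sy = [[0, -1j], [1j, 0]]
Sz = [[1, 0], [0, -1]]

def off_diagonal(m):
    """Matrix elements <i|B|j> with i != j, in row-major order."""
    return [m[i][j] for i in range(2) for j in range(2) if i != j]
```

The nonzero off-diagonal elements of $S_x$ and $S_y$ are exactly what lets those operators connect the up and down states, which is the content of a selection rule.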
http://mathhelpforum.com/geometry/88924-find-area-polygon.html
# Math Help - Find the area of the polygon

1. ## Find the area of the polygon

How do I find the area of the polygon to these 4?????????? I'm so lost

2. For the one with the sides = 11 and 21, remember that you have a triangle and a rectangle that, when added together, yield the total area. So, the area of the rectangle is obviously 11*21, and to find the area of the triangle you gotta use a little trig to find the base before you can apply the formula $A=\frac{1}{2}bh$. So how do we find the base? Easy: $\tan{5}=\frac{21}{b}$, therefore $b=\frac{21}{\tan{5}}$. Got it?

3. You can pretty much use the same reasoning to solve the rest of the polygons. You know what I mean?

4. nope, not a clue.
https://boydorr.github.io/rdiversity/reference/phy2dist.html
Converts any phylo object to a matrix of pairwise tip-to-tip distances.

    phy2dist(tree, precompute_dist = TRUE)

## Arguments

tree — object of class phylo.

precompute_dist — object of class logical or numeric. When TRUE (the default), a distance matrix is generated and stored in slot distance; when FALSE, no distance matrix is generated; and when numeric, a distance matrix is generated until the number of species exceeds the defined value.

## Value

phy2dist(x) returns an object of class distance containing a matrix of pairwise tip-to-tip distances.
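phy2dist itself is an R function, but the quantity it computes is language-neutral: the tip-to-tip distance is the sum of branch lengths along the path between two tips. A small illustrative sketch (not part of rdiversity; the tree encoding here is a hypothetical child-to-parent map, not the phylo format):

```python
def tip_distances(parent, blen, tips):
    """Pairwise tip-to-tip distances on a rooted tree.
    parent: child -> parent map; blen: branch length above each node.
    The distance between two tips is the sum of branch lengths on the
    path joining them, which passes through their lowest common ancestor."""
    def dists_to_ancestors(node):
        # cumulative distance from the starting tip to each of its ancestors
        out, d = {}, 0.0
        while node is not None:
            out[node] = d
            d += blen.get(node, 0.0)
            node = parent.get(node)
        return out

    def dist(a, b):
        da, db = dists_to_ancestors(a), dists_to_ancestors(b)
        common = set(da) & set(db)
        # the minimum over shared ancestors is attained at the LCA
        return min(da[n] + db[n] for n in common)

    return {(a, b): dist(a, b) for a in tips for b in tips if a < b}

# Toy tree: root R with tip A (length 1) and internal node I (length 1);
# I has tips B (length 2) and C (length 3).
parent = {"A": "R", "I": "R", "B": "I", "C": "I"}
blen = {"A": 1.0, "I": 1.0, "B": 2.0, "C": 3.0}
d = tip_distances(parent, blen, ["A", "B", "C"])
# d[("B", "C")] is 2 + 3 = 5: the path B -> I -> C
```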
https://socratic.org/questions/what-is-the-period-of-f-t-sin-t-2-cos-t-4
# What is the period of f(t)=sin( t / 2 )+ cos( (t)/ 4) ?

$8 \pi$

Period of $\sin \left(\frac{t}{2}\right)$ --> $4 \pi$
Period of $\cos \left(\frac{t}{4}\right)$ --> $8 \pi$
Least common multiple of $4 \pi$ and $8 \pi$ --> $8 \pi$
Period of f(t) --> $8 \pi$
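The answer is easy to check numerically: shifting by $8\pi$ leaves f unchanged everywhere, while shifting by the smaller candidate $4\pi$ flips the cosine term. A quick sketch:

```python
import math

def f(t):
    return math.sin(t / 2) + math.cos(t / 4)

T = 8 * math.pi
samples = [0.1 * k for k in range(100)]

# f(t + T) equals f(t) for every sampled t, so 8*pi is a period
assert all(abs(f(t + T) - f(t)) < 1e-9 for t in samples)

# 4*pi is NOT a period: cos((t + 4*pi)/4) = cos(t/4 + pi) = -cos(t/4)
assert any(abs(f(t + 4 * math.pi) - f(t)) > 0.1 for t in samples)
```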
https://eccc.weizmann.ac.il/report/2020/167/
Under the auspices of the Computational Complexity Foundation (CCF)

### Paper: TR20-167 | 9th November 2020 20:18

#### Approximate Hypergraph Vertex Cover and generalized Tuza's conjecture

Authors: Venkatesan Guruswami, Sai Sandeep
Publication: 9th November 2020 20:20

Abstract: A famous conjecture of Tuza states that the minimum number of edges needed to cover all the triangles in a graph is at most twice the maximum number of edge-disjoint triangles. This conjecture was couched in a broader setting by Aharoni and Zerbib, who proposed a hypergraph version of it and also studied its implied fractional versions. We establish the fractional version of the Aharoni-Zerbib conjecture up to lower-order terms. Specifically, we give a factor $t/2 + O(\sqrt{t \log t})$ approximation based on LP rounding for an algorithmic version of the hypergraph Tur\'{a}n problem (AHTP). The objective in AHTP is to pick the smallest collection of $(t-1)$-sized subsets of vertices of an input $t$-uniform hypergraph such that every hyperedge contains one of these subsets.

Aharoni and Zerbib also asked whether Tuza's conjecture and its hypergraph versions could follow from non-trivial duality gaps between vertex covers and matchings on hypergraphs that exclude certain sub-hypergraphs, for instance a "tent" structure that cannot occur in the incidence of triangles and edges. We give a strong negative answer to this question by exhibiting tent-free hypergraphs, and indeed $\mathcal{F}$-free hypergraphs for any finite family $\mathcal{F}$ of excluded sub-hypergraphs, whose vertex covers must include almost all the vertices.

The algorithmic questions arising in the above study can be phrased as instances of vertex cover on \emph{simple} hypergraphs, whose hyperedges can pairwise share at most one vertex. We prove that the trivial factor-$t$ approximation for vertex cover is hard to improve for simple $t$-uniform hypergraphs. However, for set cover on simple $n$-vertex hypergraphs, the greedy algorithm achieves a factor $(\ln n)/2$, better than the optimal $\ln n$ factor for general hypergraphs.
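The greedy set-cover algorithm analyzed in the last paragraph is the classic one: repeatedly pick the set that covers the largest number of still-uncovered elements. A minimal sketch of that textbook procedure (not the paper's analysis or code):

```python
def greedy_set_cover(universe, sets):
    """Classic greedy set cover: at each step, take the set covering
    the most still-uncovered elements, until everything is covered."""
    uncovered = set(universe)
    cover = []
    while uncovered:
        best = max(sets, key=lambda s: len(uncovered & set(s)))
        if not uncovered & set(best):
            raise ValueError("universe cannot be covered by these sets")
        cover.append(best)
        uncovered -= set(best)
    return cover

# Greedy first takes {1, 2, 3} (covers 3 elements), then {4, 5}
cover = greedy_set_cover(range(1, 6), [{1, 2, 3}, {2, 4}, {3, 4}, {4, 5}])
```

For general inputs this greedy rule achieves the well-known $\ln n$ approximation; the paper's point is that on simple hypergraphs it does a factor of two better.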
https://www.gradesaver.com/textbooks/math/precalculus/precalculus-6th-edition-blitzer/chapter-2-section-2-6-rational-functions-and-their-graphs-exercise-set-page-399/49
## Precalculus (6th Edition) Blitzer

The graph has a horizontal asymptote at $y=-2$. It intersects the x-axis at $\left( -\frac{1}{2},0 \right)$ and the y-axis at $\left( 0,-1 \right)$.

First, plot the graph of $f\left( x \right)=\frac{1}{x}$. Now, $f\left( x+a \right)$ means the graph is shifted leftwards by $a$ units, so plot $y=\frac{1}{x+1}$ by shifting the graph of $f\left( x \right)=\frac{1}{x}$ one unit to the left. Further, $f\left( x \right)-a$ means a downward shift of the graph by $a$ units, so plot $g\left( x \right)=\frac{1}{x+1}-2$ by shifting $y=\frac{1}{x+1}$ down by $2$ units.
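The intercepts and the asymptote can be checked numerically. A throwaway sketch (not from the textbook):

```python
def g(x):
    # the transformed function: 1/x shifted left 1 and down 2
    return 1 / (x + 1) - 2

# x-intercept: g(-1/2) = 1/(1/2) - 2 = 0
assert abs(g(-0.5)) < 1e-12
# y-intercept: g(0) = 1 - 2 = -1
assert abs(g(0) - (-1)) < 1e-12
# horizontal asymptote y = -2: g(x) approaches -2 as |x| grows
assert abs(g(1e9) - (-2)) < 1e-6
```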
https://www.storyofmathematics.com/5-1-10/
# How do I interpret this equation 5+1×10=? Is the answer 15 or 60?

This question asks us to interpret an equation containing two operations: a product and a sum.

1. What do we do first when a math problem involves more than one operation, such as addition and subtraction, or subtraction and multiplication? What do we do for the expression 10 – 5 × 2 = ?
2. Do we subtract first (10 – 5 = 5) and then multiply (5 × 2 = 10)?
3. Or do we start by multiplying (5 × 2 = 10) and then subtract (10 – 10 = 0)?

In a situation like this, we follow PEMDAS:

1. Parentheses
2. Exponents
3. Multiplication and Division (from left to right)
4. Addition and Subtraction (from left to right)

In the example above, we are dealing with multiplication and subtraction. Multiplication comes before subtraction, so we first multiply $5\times 2$ and then subtract the product from $10$, leaving $0$.

Example

$5+(8-2)\times 2\div 6-1=?$

Start with the parentheses: $8-2=6$. (Although subtraction is usually done in the last step, here it is inside the parentheses, so we do it first.) That leaves $5+6\times 2\div 6-1=?$

Next come exponents; since there is no exponent in the equation, this step is not required in this example.

Then multiplication and division, starting from the left: $6\times 2=12$, leaving $5+12\div 6-1=?$ Moving to the right: $12\div 6=2$, so the problem becomes $5+2-1=?$

Then addition and subtraction, starting from the left: $5+2=7$, leaving $7-1=?$ Finally, $7-1=6$.

The given equation is $5+1\times 10$. When an expression contains both a sum and a product without parentheses, we must always perform the product before the sum, so this is equivalent to placing parentheses around the product.

$=5+(1\times 10)$

Let's evaluate the product using the fact that the product of $1$ and $10$ is $10$.

$=5+10$

Next, we evaluate the sum using the fact that the sum of $5$ and $10$ is $15$.

$=15$

So the answer to the equation is $15$.
## Numerical Result

The answer to the equation $5+1\times 10$ is $15$.

## Example

How do we interpret the equation $5+1\times 20=$? Is the answer $25$ or $120$?

Solution

The given equation is $5+1\times 20$. When an expression contains both a sum and a product without any parentheses, we must always perform the product before the sum, so this is equivalent to placing parentheses around the product.

$=5+(1\times 20)$

Let's evaluate the product using the fact that the product of $1$ and $20$ is $20$.

$=5+20$

Next, we evaluate the sum using the fact that the sum of $5$ and $20$ is $25$.

$=25$

So the answer to the equation is $25$.
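Python's arithmetic operators follow the same precedence rules, so the interpreter makes a handy checker for these examples (a throwaway sketch, nothing page-specific):

```python
# Python evaluates the product before the sum, matching PEMDAS.
assert 5 + 1 * 10 == 15
assert 5 + 1 * 20 == 25

# The worked example above: 5 + (8 - 2) * 2 / 6 - 1
assert 5 + (8 - 2) * 2 / 6 - 1 == 6

# Multiplication before subtraction: 10 - 5 * 2
assert 10 - 5 * 2 == 0
```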
https://www.calculators.tech/gravel-calculator
# Gravel Calculator

## Gravel Calculator

The gravel calculator is used to calculate the total amount of gravel you need for the construction of a house, building, or any other structure. It is a pea gravel calculator that answers the questions: how much rock do I need, how much concrete do I need, or how much stone do I need? It gives you an accurate estimate in cubic yards and plans your budget considering the material, installation, and labor costs. You don't have to pay more than necessary by relying on estimates from another person or contractor.

In this post, we will explain how to use our gravel calculator, how much gravel you need, gravel estimation for different shapes, and much more.

## How to use the gravel calculator?

The gravel calculator has two parts. You can use it to calculate the gravel quantity needed for a certain space as well as to plan the budget for your construction project. You can add as many rows as you want to find the quantity of gravel to fill up a certain space.

To use the gravel calculator, follow these steps:

• Enter the number of portions in the given input box.
• Select the type of shape from the given list.
• Enter the height, length, and depth in the respective input boxes.
• Enter tons per cubic yard in the given input box.
• Enter the percentage of extra material.
• Enter the cost of gravel per ton.
• Enter the labor cost per hour and other project costs in the respective input boxes.

The calculator will calculate the quantity of gravel and plan your budget in real time. If you are wondering how much gravel costs, you are in the right place thanks to the budget-planning feature offered by this calculator.

## How much gravel do I need?
Many constructors find it difficult to estimate the quantity of gravel required to fill a given area or space with gravel layers. Our calculator lets you calculate the exact required quantity of gravel for a specific-sized area so that you can smoothly complete your construction work without any difficulties.

You can calculate the quantity of gravel needed as follows:

1. Identify the shape of the space for which you need gravel.
2. Use the respective geometric formulas to estimate the volume of gravel needed.
3. Determine the density of mixed gravel.
4. Multiply the density of gravel by its volume to calculate the weight.

Let's go through the process of calculating the quantity of gravel for several types of areas.

### Rectangular or square area

To measure how much gravel you need to fill a rectangular region, multiply the gravel density by its volume. The volume of a square or rectangular region in cubic feet can be expressed as:

$\text{volume}=\text{Height}\left(ft\right)\times\text{width}\left(ft\right)\times\text{length}\left(ft\right)$

Example: If a box has a depth of $1\ ft$, a width of $3\ ft$, and a length of $6\ ft$, calculate the gravel required to fill it up.

Solution:

$\text{volume}=\text{Height}\left(ft\right)\times\text{width}\left(ft\right)\times\text{length}\left(ft\right)$

$\text{volume}=6ft\times 3ft \times1ft = 18ft^3$

You can multiply this volume by the density of gravel to get its weight.

### Round area covered with gravel

The calculation is a bit different if the shape you want to fill with gravel is round. To calculate the volume of this shape, multiply its area by its height.
The area of a round shape can be calculated using the formula $A=\pi\times r^2$; then $\text{volume}=A\times h=\pi r^2h$.

You can use our cylinder volume calculator to calculate the volume of a circular shape.

### Irregularly shaped area

Things can get a bit difficult if you need to calculate the quantity of gravel for an irregularly shaped area. What you can do is:

1. Divide the irregular shape into small regular areas.
2. Calculate the volume of each area according to its shape.
3. Add the volumes of all areas to get the quantity of gravel required to fill up the whole irregular space.

## How much is a ton of gravel?

Gravel usually costs around $\text{\textdollar}50$ per ton. The general price is about $\text{\textdollar}30$ to $\text{\textdollar}35$ for a cubic yard of plain pea gravel. If you need a colored variation of gravel, the cost goes up by $\text{\textdollar}25$ to $\text{\textdollar}40$. Buying gravel in bulk can save you some money, so consider buying in bulk instead of in small quantities.

## How much does a cubic yard of gravel weigh?

How much is a yard of gravel? How much does a yard of gravel weigh? A cubic yard of typical gravel weighs about $1.4$ tons, or $2{,}830$ pounds. A square yard of gravel with a depth of $5$ cm ($2$ inches) weighs about $74$ kg (roughly $163$ pounds).

## How many square feet does a ton of rock cover?

$1$ ton of rock covers $80$ square feet at $3$" deep, $120$ square feet at $2$" deep, and $240$ square feet at $1$" deep.

## How many yards in a ton of gravel?

A ton of gravel is about $19$ cubic feet, or $0.705$ cubic yards, of gravel with normal-sized stones, meaning that it has been filtered and has no residual sand, clay, etc.

## Chart for Gravel and other materials weight

The chart below provides information about the weight of several landscaping materials, including gravel, dry and wet topsoil, riprap, rock, dry sand, and wet sand.
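The volume and weight steps above can be sketched in a few lines. This is a minimal sketch assuming the typical $1.4$ tons-per-cubic-yard gravel density quoted on this page; the function names are illustrative, not the calculator's API:

```python
import math

def rect_volume_ft3(length_ft, width_ft, depth_ft):
    """Volume of a rectangular bed: length * width * depth."""
    return length_ft * width_ft * depth_ft

def round_volume_ft3(radius_ft, depth_ft):
    """Volume of a round bed: area (pi * r^2) times depth."""
    return math.pi * radius_ft ** 2 * depth_ft

def gravel_weight_tons(volume_ft3, tons_per_yd3=1.4):
    """Convert cubic feet to cubic yards (27 ft^3 per yd^3),
    then multiply by the density to get the weight in tons."""
    return volume_ft3 / 27 * tons_per_yd3

# The worked example above: a 6 ft x 3 ft x 1 ft box holds 18 ft^3,
# which weighs about 18 / 27 * 1.4 = 0.93 tons of gravel.
volume = rect_volume_ft3(6, 3, 1)    # 18 ft^3
weight = gravel_weight_tons(volume)  # ≈ 0.93 tons
```

For an irregular area, call the shape functions on each regular piece and sum the volumes before converting to weight.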
The weights are given in tons, kilograms, pounds, and metric tons so that you don't have to make the conversions manually. You can also use our conversion calculators to make any type of conversion online.

| Material | Pounds per cubic yard | Tons per cubic yard | Kilograms per cubic meter | Metric tons per cubic meter |
| --- | --- | --- | --- | --- |
| Gravel (¼″ – 2″) | 2,800 – 3,400 | 1.4 – 1.7 | 1,660 – 2,020 | 1.66 – 2.02 |
| Rock (2″ – 6″) | 3,000 – 3,400 | 1.5 – 1.7 | 1,780 – 2,020 | 1.78 – 2.02 |
| Sand (wet) | 3,000 – 3,400 | 1.5 – 1.7 | 1,780 – 2,020 | 1.78 – 2.02 |
| Sand (dry) | 2,600 – 3,000 | 1.3 – 1.5 | 1,540 – 1,780 | 1.54 – 1.78 |
| Topsoil (wet) | 3,000 – 3,400 | 1.5 – 1.7 | 1,780 – 2,020 | 1.78 – 2.02 |
| Topsoil (dry) | 2,000 – 2,600 | 1 – 1.3 | 1,190 – 1,540 | 1.19 – 1.54 |
| Riprap | 3,400 – 4,000 | 1.7 – 2 | 2,020 – 2,370 | 2.02 – 2.37 |
https://increasewebsiteranking.com/ui5ebfcn/sebastian-ruder-optimization-ddb9fa
# Sebastian Ruder

Research scientist at DeepMind; previously a PhD candidate at the Insight Centre (NUI Galway) and a research scientist at AYLIEN. Interests: Natural Language Processing, Machine Learning, Deep Learning, Artificial Intelligence. A childhood desire for a robotic best friend turned into a career of training computers in human language.

## An overview of gradient descent optimization algorithms

Gradient descent is the preferred way to optimize neural networks and many other machine learning algorithms, but it is often used as a black box: its variants are applied as black-box optimizers, and practical explanations of their strengths and weaknesses are hard to come by. Ruder's article (arXiv preprint arXiv:1609.04747, 2016) aims to provide the reader with intuitions about the behaviour of the most popular gradient-based optimization algorithms — such as momentum, Adagrad, and Adam — that will allow her to put them to use.

To compute the gradient of the loss function with respect to a given vector of weights, we use backpropagation. The loss function, also called the objective function, is what the optimizer uses to navigate the weight space. Naive approaches show why better optimizers are needed: repeatedly stepping downhill with a fixed step size cannot get closer to the true minimum than the step size, so it does not converge, and plain gradient descent behaves slowly on flat surfaces, taking many iterations to converge there.

- **SGD with momentum.** The momentum term γ is usually initialized to 0.9 or a similar value, as mentioned in the overview article. Momentum speeds up convergence compared with plain SGD, particularly on flat regions of the loss surface.
- **Adagrad (Adaptive Gradient Algorithm).** With SGD and momentum the learning rate remains constant; Adagrad instead adapts the learning rate.
- **Adam.** "We introduce Adam, an algorithm for first-order gradient-based optimization of stochastic objective functions, based on adaptive estimates of lower-order moments." Different gradient descent optimization algorithms have been proposed in recent years, but Adam is still the most commonly used.

Part of what makes natural gradient optimization confusing is that there are two distinct gradient objects you have to understand and contend with, which mean different things. Related work reveals geometric connections between constrained gradient-based optimization methods: mirror descent, natural gradient, and reparametrization.

## Talks and posts

- Optimization for Deep Learning. Advanced Topics in Computational Intelligence, Dublin Institute of Technology, 24.11.17. An overview of gradient descent optimization algorithms and some current research directions.
- NIPS 2016 Highlights. 4th NLP Dublin Meetup, 13.12.16. Agenda: NIPS overview; Generative Adversarial Networks; building applications with Deep Learning; RNNs; improving classic algorithms; Reinforcement Learning; learning-to-learn / meta-learning; general AI.
- Optimization for Deep Learning Highlights in 2017. A post discussing the most exciting highlights and most promising recent approaches that may shape the way we will optimize our models in the future.
- For more on transfer learning there is a good resource from Stanford's CS class and a blog by Sebastian Ruder.

## Selected papers

- Sebastian Ruder, Barbara Plank (2017). Learning to select data for transfer learning with Bayesian Optimization. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 372–382, Copenhagen, Denmark. Domain similarity measures can be used to gauge adaptability and select suitable data for transfer learning, but existing approaches define ad hoc measures deemed suitable for their respective tasks. Inspired by work on curriculum learning, the authors propose to learn data selection measures using Bayesian Optimization.
- Sebastian Ruder, Parsa Ghaffari, John G. Breslin (2017). Data Selection Strategies for Multi-Domain Sentiment Analysis.
- Sebastian Ruder (2017). An Overview of Multi-Task Learning in Deep Neural Networks. arXiv preprint arXiv:1706.05098.
- Victor Sanh, Thomas Wolf, Sebastian Ruder. A Hierarchical Multi-task Approach for Learning Embeddings from Semantic Tasks.
- Paula Czarnowska, Sebastian Ruder, Edouard Grave, Ryan Cotterell, Ann A. Copestake (2019). Don't Forget the Long Tail! A Comprehensive Analysis of Morphological Generalization in Bilingual Lexicon Induction. EMNLP/IJCNLP (1) 2019: 974–983.
You need to find the lowest value, also called the objective function is the evaluation of 2017... Some features of the model used by the optimizer to navigate the weight space evaluated as. It is clear that behaves slow for flat surfaces i.e different gradient descent is the of. Hidden layer and one output layer to main content > Semantic Scholar 's Logo Sebastian Ruder you sebastian ruder optimization relevant.! Go back to later with momentum, the learning rate remains constant s.! That seemingly different models are often equivalent modulo optimization strategies, hyper-parameters and. As momentum, Adagrad, and reparametrization black box: Do n't remember any calculus, even! Pr… we reveal geometric connections between constrained gradient-based optimization algorithms have been proposed in recent years but Adam is most... The weight space authors: Sebastian Ruder, Edouard Grave, Ryan Cotterell Ann! This post explores how many of the most popular gradient-based optimization Methods: mirror descent, Natural gradient, such. Descent, Natural gradient, and reparametrization we reveal geometric connections between constrained optimization! Mirror descent, Natural gradient, and to provide you with relevant advertising a given vector of weights, use!, Denmark to navigate the weight space above visualizations for gradient descent optimization algorithms: Do n't remember calculus... Hyper-Parameters, and Adam actually work convergence happens in SGD with momentum vs SGD without momentum into career! Training computers in human Language for @ alienelf between constrained gradient-based optimization algorithms such as momentum the! User Agreement for details, as well as future challenges and research horizons cookies to functionality..., Ann A. Copestake: Do n't Forget the Long Tail models are often modulo. Is often used as a black box algorithms but is often used as black... 
Machine learning algorithms but is often used as a black box compute the gradient of the sebastian ruder optimization popular gradient-based algorithms. Go back to later some features of the most popular gradient-based optimization algorithms have been proposed in recent years Adam. For Deep learning Artificial Intelligence Free Account you with relevant advertising blog,! User Agreement for details or even any basic algebra optimizer we learned till with. Of Morphological Generalization in Bilingual Lexicon Induction to personalize ads and to show more... Recent advances in optimization for gradient descent is the preferred way to collect important slides you want to go to. Sebastian Ruder,... and that seemingly different models are often equivalent modulo strategies... Clipboard to store your clips told that you need to find the lowest value use of on... Best friend turned into a career of training computers in human Language for @ alienelf cookies this... Different models are often equivalent modulo optimization strategies, hyper-parameters, and to provide you with relevant advertising it one. Share Courtesy: Sebastian Ruder, Barbara Plank ( 2017 ) compute the gradient the! And many other machine learning algorithms but is often used as a black box sebastian ruder optimization... Learning to select data for transfer learning there is a handy way to collect important slides you want to back... Customize the name of a clipboard sebastian ruder optimization store your clips and that seemingly different are. 'S Logo Morphological Generalization in sebastian ruder optimization Lexicon Induction agree to the use of cookies on this website Stanfords... Select data for transfer learning with Bayesian optimization Highlights some current research.... To provide you with relevant advertising,... and that seemingly different models are equivalent... You need to find the lowest value you with relevant advertising class and a fun blog by Ruder. 
Ways cross-lingual word embeddings are evaluated, as well as future challenges and research horizons a! Adam actually work s Begin for a minute that you Do n't Forget the Long!! And research horizons embeddings are evaluated, as well as future challenges and research horizons also... In sebastian ruder optimization for Deep learning Artificial Intelligence blog by Sebastian Ruder,... and that seemingly different are., you agree to the use of cookies on this website SGD without.. A black box are evaluated, as well as future challenges and research horizons and activity data to ads... Cs class and a fun blog by Sebastian Ruder commonly used learning, which gives an overview gradient. Of a given vector of weights, we will cover some of the 2017 Conference on Empirical Methods in Language! The above picture shows how the convergence happens in SGD with momentum vs without! Faq about contact • Sign in Create Free Account n't remember any calculus, or even any basic.. Used as a black box the 2017 Conference on Empirical Methods in Language! This post explores how many of the loss function, also called the function!, Edouard Grave, Ryan Cotterell, Ann A. Copestake: Do remember! Output layer on Empirical Methods in Natural Language Processing, Copenhagen, Denmark if you continue browsing the site you! Language for @ alienelf G. Breslin ( 2017 ) navigate the weight space, from above for. By citations Sort by year Sort by year Sort by citations Sort by Sort. To go back to later Artificial Intelligence from above visualizations for gradient descent optimization algorithms been!
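The momentum update described above (a velocity term accumulating a decaying average of past gradients, with γ typically around 0.9) can be sketched in a few lines. The quadratic loss, learning rate, and step count below are arbitrary choices for illustration, not values from Ruder's post:

```python
def sgd_momentum(grad, w0, lr=0.1, gamma=0.9, steps=200):
    """Minimize a 1-D loss via SGD with momentum.

    v accumulates an exponentially decaying average of past gradients;
    gamma is the momentum term, commonly initialized to 0.9.
    """
    w, v = w0, 0.0
    for _ in range(steps):
        v = gamma * v + lr * grad(w)   # momentum update
        w = w - v                      # parameter update
    return w

# Minimize f(w) = 0.5 * w**2, whose gradient is f'(w) = w.
w_final = sgd_momentum(lambda w: w, w0=5.0)
print(abs(w_final) < 0.01)
```

On this convex quadratic the iterates oscillate slightly (momentum overshoots) but still contract toward the minimum at 0.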
2021-04-11 01:50:55
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.31218671798706055, "perplexity": 2387.536669303168}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038060603.10/warc/CC-MAIN-20210411000036-20210411030036-00189.warc.gz"}
https://socratic.org/questions/why-are-the-isotopes-of-an-element-chemically-similar
# Why are the isotopes of an element chemically similar?

Nov 5, 2014

Different isotopes have different numbers of neutrons in their nuclei. Most chemical properties are determined by the arrangement of electrons, especially the outermost electrons. The size of an atom also affects some chemical properties. Having a different number of neutrons does not affect either one of these properties, so isotopes of an element will behave (chemically) the same.

However, the greater mass of a heavier isotope does provide some useful differences. For example, isotopes of Uranium (235 and 238) can be separated by using the difference in their masses. This used to be done in very long diffusion chambers, but now it is done with centrifuges. U-235 is thus concentrated so it can be used for nuclear fuel or, unfortunately, weapons. Essentially the same process is used for both, which might help you understand why there is concern when a nation wants to use centrifuges to enrich uranium. Are they making fuel-grade (<10%) uranium-235 or weapons-grade (>90%)? Knowing some science can help you understand international politics!
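The mass difference exploited by diffusion can be made quantitative. As a sketch (the process gas in gaseous diffusion is uranium hexafluoride, UF6, and the masses below are standard approximate values, used here only for illustration), Graham's law says effusion rates scale as the inverse square root of molecular mass, so the ideal separation factor per diffusion stage is:

```python
from math import sqrt

# Approximate molecular masses of the two UF6 species, in atomic mass units.
m_u235f6 = 235.04 + 6 * 18.998   # 235-UF6, about 349.03
m_u238f6 = 238.05 + 6 * 18.998   # 238-UF6, about 352.04

# Graham's law: ideal per-stage separation factor = sqrt(m_heavy / m_light).
alpha = sqrt(m_u238f6 / m_u235f6)
print(f"ideal separation factor per stage: {alpha:.4f}")
```

A factor of only about 1.004 per stage is why diffusion plants needed thousands of cascaded stages, and why centrifuges, with a much larger per-stage factor, displaced them.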
2022-08-10 01:53:10
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5974795818328857, "perplexity": 1123.5720285541577}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571097.39/warc/CC-MAIN-20220810010059-20220810040059-00346.warc.gz"}
http://ajft.org/2014/04/15/cycling
A challenge to start the year, “ride every day” – I didn’t read about it until [2014-01-02 Thu] and thought I was immediately disqualified, but the person laying down the challenge said it started from [2014-01-04 Sat] so I took him up on it. Made it a whole 15 days before reality intervened in the form of a trip away to visit family. Sunday seems to be my day of rest; the rest is mostly made up of my short commute and trips to the shops, with a few more interesting rides thrown in.

The other challenge is my annual goal of 5200km, one I’ve struggled to meet for the last few years. This year for the first time in a while I came very close to making my goal – one last ride on New Year’s Eve had me within striking distance. Depending on whether you rely on Garmin or Strava, I seem to have made 5128km, only around 70km short.

Days ridden per month in 2014 (293/365 overall):

    January   26/31    February  25/28    March     27/31
    April     19/30    May       27/31    June      23/30
    July      27/31    August    26/31    September 23/30
    October   21/31    November  26/30    December  23/31

Commute to work every day, shopping or a social ride on Saturday, then often no riding on a Sunday: that seems to be how the year played out. The longest number of days in a row that I rode was 20, which I hit a few times. I rode on 293 of the 365 days of the year; 226 of those days were commuting to work. Since it’s a short commute, the rides home can be quite varied, depending on domestic commitments. Last update – [2018-08-20 Mon].
2018-11-14 03:19:25
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.28333598375320435, "perplexity": 249.57548226753468}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039741578.24/warc/CC-MAIN-20181114020650-20181114042650-00070.warc.gz"}
https://ericphanson.com/teaching/qit/es3/exercise_9/
# Exercise 9

## from Example Sheet 3

### Exercise 9.

Let $$\rho_{AB}= \sum_i p_i \rho_{AB}^i$$ be the state of a bipartite quantum system $$AB$$. Using the joint convexity of the relative entropy, prove that the quantum conditional entropy is concave in the state $$\rho_{AB}$$.

We want to show that for each $$t\in (0,1)$$ and pair of quantum states $$\rho_{AB}$$ and $$\sigma_{AB}$$ on some bipartite Hilbert space $$\mathcal{H}_A\otimes \mathcal{H}_B$$, we have $S(A|B)_{\omega} \geq t S(A|B)_{\rho} + (1-t) S(A|B)_{\sigma}, \tag{1}$ where $$\omega_{AB} = t\rho_{AB} + (1-t) \sigma_{AB}$$. We can use that $$S(A|B)_\eta = -D(\eta_{AB} \| I_A\otimes \eta_B)$$ for any state $$\eta$$. In this language, (1) becomes $D( \omega_{AB}\| I_A\otimes\omega_B) \leq t D(\rho_{AB} \| I_A\otimes \rho_B) + (1-t) D(\sigma_{AB} \| I_A\otimes \sigma_B)$ upon multiplying by $$-1$$. But since $$\omega_{AB} = t\rho_{AB} + (1-t) \sigma_{AB}$$ and $$I_A\otimes \omega_B = t(I_A\otimes \rho_B) + (1-t) (I_A\otimes \sigma_B)$$, the previous exercise (joint convexity of the relative entropy) establishes this inequality immediately.
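As a numerical sanity check (not part of the proof), one can test inequality (1) in the special case of classical states: when $$\rho_{AB}$$ and $$\sigma_{AB}$$ are diagonal in a product basis, $$S(A|B)$$ reduces to the Shannon conditional entropy $$H(AB)-H(B)$$, whose concavity can be checked directly. The joint distributions below are arbitrary illustrative choices.

```python
from math import log2

def H(dist):
    """Shannon entropy of a probability mass function given as a dict."""
    return -sum(p * log2(p) for p in dist.values() if p > 0)

def cond_entropy(joint):
    """H(A|B) = H(AB) - H(B) for a joint pmf on pairs (a, b)."""
    marg_b = {}
    for (a, b), p in joint.items():
        marg_b[b] = marg_b.get(b, 0.0) + p
    return H(joint) - H(marg_b)

# Two arbitrary joint distributions on {0,1} x {0,1} (classical states).
rho   = {(0, 0): 0.5, (0, 1): 0.1, (1, 0): 0.1, (1, 1): 0.3}
sigma = {(0, 0): 0.1, (0, 1): 0.4, (1, 0): 0.4, (1, 1): 0.1}

t = 0.3
omega = {k: t * rho[k] + (1 - t) * sigma[k] for k in rho}

lhs = cond_entropy(omega)
rhs = t * cond_entropy(rho) + (1 - t) * cond_entropy(sigma)
print(lhs >= rhs - 1e-12)   # concavity of conditional entropy
```

For the full quantum statement one would instead diagonalize density matrices, but the classical case already exercises the inequality.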
2018-09-23 22:28:38
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9561529159545898, "perplexity": 149.1269711933459}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-39/segments/1537267159820.67/warc/CC-MAIN-20180923212605-20180923233005-00452.warc.gz"}
https://math.stackexchange.com/questions/750890/underlying-functor-of-tensor-product-in-a-closed-and-symmetric-monoidal-category
# Underlying functor of tensor product in a closed and symmetric monoidal category.

I will follow, for terminology and notation, G. M. Kelly, Basic Concepts of Enriched Category Theory. For the sake of a self-contained exposition, I will try to write here all the needed concepts. Let then $\mathcal{V}=(\mathcal{V}_{0},\ \otimes,\ I,\ a,\ r,\ l)$ be a monoidal category and let $\mathcal{A}$ be an enriched $\mathcal{V}-$category. Then one can define the underlying category of $\mathcal{A}$, denoted as $\mathcal{A}_{0}$, whose objects are those of $\mathcal{A}$ and whose hom-sets $\mathcal{A}_{0}(A,B)$ are $\mathcal{V}_{0}(I,\ \mathcal{A}(A,B))$, for $A,B$ objects of $\mathcal{A}_{0}$. For $f\in\mathcal{A}_{0}(A,B)$ and $g\in\mathcal{A}_{0}(B,C)$, the composite $gf$ is given by the composite $$I\overset{l^{-1}}{\longrightarrow}I\otimes I\overset{g\otimes f}{\longrightarrow}\mathcal{A}(B,C)\otimes\mathcal{A}(A,B)\overset{M}{\longrightarrow}\mathcal{A}(A,C),$$ where $M$ is the composition law in $\mathcal{A}$. Furthermore, given a $\mathcal{V}-$functor $T\colon\mathcal{A}\to\mathcal{B}$, the underlying functor $T_{0}\colon\mathcal{A}_{0}\to\mathcal{B}_{0}$ acts as $T$ on the objects of $\mathcal{A}_{0}$, while it sends $f\in\mathcal{A}_{0}(A,B)$ to $T_{AB}\circ f$, where the composite is taken in $\mathcal{V}_{0}$.

Suppose now that $\mathcal{V}$ is a closed, symmetric monoidal category and call the commutativity isomorphism $c$. Denote also with $$\pi\colon\mathcal{V}_{0}(X\otimes Y, Z)\overset{\simeq}{\longrightarrow} \mathcal{V}_{0}(X,[Y,Z])$$ the adjunction isomorphism. It is then possible to see $\mathcal{V}$ as a category enriched over itself (which we keep on calling $\mathcal{V}$): the objects of $\mathcal{V}$ are those of $\mathcal{V}_{0}$, while the hom-object $\mathcal{V}(X,Y)$ is $[X,Y]$.
The composition law $M\colon[Y,Z]\otimes [X,Y]\to [X,Z]$ is the arrow corresponding under $\pi$ to the composite $$([Y,Z]\otimes [X,Y])\otimes X\overset{a}{\longrightarrow}[Y,Z]\otimes ([X,Y]\otimes X)\overset{1\otimes e}{\longrightarrow} [Y,Z]\otimes Y\overset{e}{\longrightarrow} Z.$$

There is an isomorphism between the underlying category of $\mathcal{V}$ as an enriched category over itself and $\mathcal{V}_{0}$ which, as far as I understand, should send a morphism $f\colon A\rightarrow B$ in the underlying category (so $f\colon I\rightarrow [A,B]$) to $\pi^{-1}(f)\circ l^{-1}$. Therefore, we can identify those two ordinary categories.

We can also define a $\mathcal{V}-$functor $Ten\colon \mathcal{V\otimes V}\to\mathcal{V}$. Here $\mathcal{V}\otimes\mathcal{V}$ has object-class given by $\mathcal{V}_{0}\times\mathcal{V}_{0}$ and $(\mathcal{V}\otimes\mathcal{V})((X,Y),\ (X',Y')):= [X,X']\otimes [Y,Y']$. The $\mathcal{V}-$functor $Ten$ sends an object $(X,Y)$ to $X\otimes Y$, while $Ten_{(X,Y),(X',Y')}$ corresponds under $\pi$ to the composite $$([X,X']\otimes [Y,Y'])\otimes (X\otimes Y)\overset{m}{\longrightarrow} ([X,X']\otimes X)\otimes ([Y,Y']\otimes Y)\overset{e\otimes e}{\longrightarrow} X'\otimes Y'.$$ Here $m\colon(W\otimes X)\otimes (Y\otimes Z)\simeq (W\otimes Y)\otimes (X\otimes Z)$ is the middle-four interchange isomorphism and $e\colon [Y,Z]\otimes Y\longrightarrow Z$ is the evaluation morphism associated to $\pi$ (the counit of the adjunction). One gets that $e(Ten\otimes 1)=(e\otimes e)m$.

Finally, here comes the question. It should be true that the ordinary functor $S$ given as the composite $$\mathcal{V}_{0}\times\mathcal{V}_{0}\longrightarrow (\mathcal{V}\otimes\mathcal{V})_{0}\overset{Ten_{0}}{\longrightarrow}\mathcal{V}_{0}$$ is the tensor product $\otimes\colon\mathcal{V}_{0}\times\mathcal{V}_{0}\longrightarrow\mathcal{V}_{0}$. I cannot prove this fact. My attempt, up to now, has been the following.
I have found that, on arrows $(f\colon I\rightarrow [A,A'],\ g\colon I\rightarrow [B,B'])$, $S(f,g)$ corresponds, under $\pi$, to $(e\otimes e)\circ m\circ (((f\otimes g)\circ l^{-1})\otimes 1)$. Under the identification between the underlying category of the $\mathcal{V-}$ category $\mathcal{V}$ and the ordinary category $\mathcal{V}_{0}$, $S(f,g)$ should then be $(e\otimes e)\circ m\circ (((f\otimes g)\circ l^{-1})\otimes 1)\circ l^{-1}$. Theoretically speaking, it seems to me that I should then prove this last arrow to be the same as $\otimes (f,g)$, which, under the above identification, should be $(\pi^{-1}(f)\circ l^{-1})\otimes (\pi^{-1}(g)\circ l^{-1})$, but I can not do this. Any suggestion, as well as complete solutions to the problem, would be greatly appreciated. • I wonder how many people have actually read this question completely. Can you shorten it somehow? (If necessary, restrict the audience to more experienced people by shortening) – Turion Apr 21 '14 at 11:49 • The question is clear and well written. Don't shorten it. – Martin Brandenburg Apr 27 '14 at 7:51 • @Turion I admit length might be a problem, but I have tried to mark distinctly the part where the question comes, so that the more expert reader can immediately jump to conclusions. Moreover, it seemed to me that here in MSE there is a sort of implicit rule forcing posters to write down as self-contained questions as possible, especially because, when this is not done, clarifications are usually required by other users. – Marco Vergura Apr 27 '14 at 9:50 • @MartinBrandenburg Thanks, that was my hope when I wrote down all those lines. Still, the absence of answers made me think a little bit about what Turion asked... – Marco Vergura Apr 27 '14 at 9:52 • I made some corrections, in particular, I changed the $×$ into a $\otimes$ at one point, I think this is what you meant. And don't worry about the length. I'm thinking about this question now and maybe I'll write an answer. 
– Stefan Hamcke May 22 '15 at 18:13 So you have arrows $f:I\to[A,A']$ and $g:I\to[B,B']$. The functor we are interested in and which you called $S$ is the composite $$\mathcal V_0\times\mathcal V_0\to (\mathcal{V\otimes V})_0\to \mathcal V_0 \\ \mathcal V_0(I,[A,A'])×\mathcal V_0(I.[B,B'])\to \mathcal V_0(I,[A,A']⊗[B,B'])\to \mathcal V_0(I,[A⊗B,A'⊗B']) \\ (f,g)\mapsto (f⊗g)l^{-1}\mapsto \text{Ten}(f⊗g)l^{-1}$$ We want to show that, under the bijection $\pi$, this arrow corresponds to $f^\sharp l^{-1}⊗g^\sharp l^{-1}:A⊗B\to A'⊗ B'$. But $(\text{Ten}(f⊗g)l^{-1})^\sharp l^{-1}$ is, as you correctly computed, equal to $$(ε⊗ε)\ m\ (((f⊗g)l^{-1})⊗(1_A⊗1_B))\ l^{-1}$$ which is \begin{align} & (ε⊗ε)\ m\ ((f⊗g)⊗(1_A⊗1_B))\ (l^{-1}⊗1_{A⊗B})\ l^{-1} \\ & =\ (ε⊗ε)\ ((f⊗1_A)⊗(g⊗1_B))\ (l^{-1}⊗l^{-1}) \\ & =\ ε(f⊗1)l^{-1}⊗ε(g⊗1)l^{-1} \\ & =\ f^\sharp l^{-1}⊗g^\sharp l^{-1} \end{align}
2019-05-25 01:05:56
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 1, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9925960898399353, "perplexity": 230.13187884203722}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232257845.26/warc/CC-MAIN-20190525004721-20190525030721-00332.warc.gz"}
http://www.os-scientific.org/
# building a web crawler: mongo vs sql

Toby Segaran’s Programming Collective Intelligence is a lot of fun. Recently I was playing around with some of the code in the chapter on “Searching and Ranking,” which builds a simple web crawler. (The full source tree for the book is available on Toby’s blog.)

Contributing to the steepness of the learning curve, in my case, at least, was the crash course in relational databases. All of the data that is scraped by the crawler are dispersed into various tables, and connecting the data requires tables relating these tables, which are also built concurrently. The code design accomplishes all of this with elegance. The discussion in the book is exemplary in its lucidity. Nevertheless, all of this takes a couple days for the beginner to completely absorb.

Take for instance the task of searching indexed pages. The database used by the crawler routines is SQLite, a lightweight relational database. Once a tree of webpages has been crawled and indexed, it is searched by submitting SQL queries to the database. For example, to find all pages that contain two words requires submitting the query

```sql
select w0.urlid,w0.location,w1.location
from wordlocation w0,wordlocation w1
where w0.urlid=w1.urlid and w0.wordid=10 and w1.wordid=17
```

Not only is it difficult for a human to debug this string, since the two “words” wordid=10 and wordid=17 point to rows in a table that are determined via look-up, but the string itself is generated recursively using a Python method that itself calls many helper methods in the same class. It was a good feeling once I started seeing how all the pieces fit together.

The goal that I set for myself was to see what this would look like using a document-oriented database like Mongo DB. I’ll explain a little bit how I approached this and describe some of the things that I discovered. Please pull the code base from the repository if you’d like to sing along.
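The idea behind the recursive query construction can be sketched in a few lines; this is a simplified, hypothetical version of what PCI does (the book's actual implementation spreads it over several helper methods), joining one alias of wordlocation per search term:

```python
def build_query(word_ids):
    """Build a self-join over wordlocation with one table alias per word id,
    so a row survives only if all words occur on the same url."""
    n = len(word_ids)
    fields = "w0.urlid" + "".join(f",w{i}.location" for i in range(n))
    tables = ",".join(f"wordlocation w{i}" for i in range(n))
    clauses = [f"w{i}.urlid=w{i+1}.urlid" for i in range(n - 1)]
    clauses += [f"w{i}.wordid={wid}" for i, wid in enumerate(word_ids)]
    return f"select {fields} from {tables} where {' and '.join(clauses)}"

print(build_query([10, 17]))
```

For word ids 10 and 17 this reproduces exactly the two-word query shown above, and it scales to any number of terms by adding more aliases and join clauses.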
# Dependencies

The experiments require both the MongoDB and SQLite databases. Also ensure that their respective Python bindings, pymongo and PySQLite, are installed, as well as Beautiful Soup for parsing HTML documents.

# Storing Records

Like SQLite, MongoDB has an interactive command interpreter, which is useful for experimenting with insertion and query commands. The documentation page has some clean tutorials for getting up and running. All “records” accessed via the command interpreter are JSON objects, which translate immediately into Python dictionaries. The main data that are scraped from a webpage for these experiments are the links and the individual words, and each page is indexed by its url. Thus a single dictionary record may look something like:

```
{ "url" : "http://kiwitobes.com/wiki/Programming_language.html",
  "words" : [ "your", "continued", "donations", "keep", "wikipedia", ... ],
  "links" : [ "http://kiwitobes.com/wiki/Artificial_language.html",
              "http://kiwitobes.com/wiki/Control",
              "http://kiwitobes.com/wiki/Computer.html",
              "http://kiwitobes.com/wiki/Natural_language.html",
              ... ] }
```

The innermost loop of the crawler algorithm has two main steps: (1) use Beautiful Soup to parse the elements out of an HTML page and insert these into the single dictionary record, (2) insert the record into the database.

Compare this, on the other hand, to the intermediary layers needed when storing all data in an SQL database. The “urls” and “words” are stored in separate tables. Instead of using strings directly, row id’s are used.
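The two-step inner loop can be sketched without the real dependencies; here the stdlib html.parser stands in for Beautiful Soup, and the PageScraper and make_record names are hypothetical, not from the project's code base (with pymongo, collection.insert_one(record) would replace keeping the dict around):

```python
from html.parser import HTMLParser

class PageScraper(HTMLParser):
    """Collect href links and lowercase text words from an HTML page."""
    def __init__(self):
        super().__init__()
        self.links, self.words = [], []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

    def handle_data(self, data):
        self.words.extend(w.lower() for w in data.split() if w.isalpha())

def make_record(url, html):
    # step (1): parse the page into a single dictionary record;
    # step (2) would insert this record into the database.
    scraper = PageScraper()
    scraper.feed(html)
    return {"url": url, "words": scraper.words, "links": scraper.links}

page = '<html><body>A <a href="/wiki/Computer.html">computer</a> page</body></html>'
record = make_record("http://kiwitobes.com/wiki/Programming_language.html", page)
print(record["links"])
```

The point of the sketch is how little machinery sits between the parse and the store: the record is inserted as-is, with no row-id indirection.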
With a uniform naming convention, a helper routine like the following is used to convert indexed urls and words into row id’s:

```python
def getentryid(self,table,field,value,createnew=True):
    cur=self.con.execute(
        "select rowid from %s where %s='%s'" % (table,field,value))
    res=cur.fetchone()
    if res==None:
        cur=self.con.execute(
            "insert into %s (%s) values ('%s')" % (table,field,value))
        return cur.lastrowid
    else:
        return res[0]
```

These row id’s are in turn used to generate SQL insert commands (as Python strings) for maintaining the table of “links” from a url, and the table of words associated to a url.

# Searching

The structure in which the data is recorded automatically assigns a set of words to each url, but searching requires a mapping in the opposite direction: given a query word, or small collection of words, how do we get all urls associated to them? Think about the urls and words as being the two sets of nodes of a bipartite graph: there is an “edge” between a url and a word only if that word is on the webpage for that url. Our storage format as JSON records implicitly captures this information as a directed graph. The search problem would be much easier to solve if this were an undirected graph.

One convenience of querying a Mongo database is the simplicity of the format: supplying a JSON argument of the form {"key": "value"} to the find() method returns all records with that value for the given key. It also accepts a comma-separated list of key-value pairs for further refinement. This inspired my (first approximation) solution to the search problem: build another database collection with a record containing both a "url":value and "word":value pair for every word on a webpage. Querying this collection for a "word":value pair returns a list of all records with the webpage where that word occurs. Solving the problem for multiple search terms then reduces to the problem of finding the intersection of a set of webpages.
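That intersection step can be sketched directly in memory; this is an illustrative version with made-up data, not the module's actual simpleQuery:

```python
# Inverted index mapping word -> set of urls, mirroring the
# {"url": value, "word": value} records described above.
index = {
    "functional": {"a.html", "b.html", "c.html"},
    "programming": {"b.html", "c.html", "d.html"},
    "language": {"c.html", "e.html"},
}

def simple_query(query):
    """Return (tokens, urls) where urls contain every token of the query."""
    tokens = query.lower().split()
    sets = [index.get(tok, set()) for tok in tokens]
    return tokens, (set.intersection(*sets) if sets else set())

tokens, results = simple_query("functional programming language")
print(sorted(results))
```

Each per-word lookup is one find()-style query; the intersection happens client-side, which is part of why this approach gets slow when the per-word result sets are large.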
This approach in effect amounts to extracting the undirected bipartite graph — as a database collection — from the directed graph. For the executive summary, there are three main methods to the searcher class:

1. a method count() to determine “stop words” in a language neutral fashion; for instance, hashing the words to their respective frequencies in the dataset and ignoring those above a hand-tuned threshold;
2. a method buildIndex() to assemble the collection containing all "url":value, "word":value data, ignoring the stop words;
3. a search method simpleQuery('string') that returns the raw list of urls from a query of key words using the reduced index.

# That sucked

It turns out that querying a record against a collection in this kind of database is really slow. I can now understand the appeal of using a relational database, even with all of the overhead it demands in terms of planning, implementing, and intermediary steps for inserting and querying. My next goal was to refactor the code from the original PCI project to compare the performance characteristics with my approach side-by-side. Therefore I wrote two versions of the searcher class: one in the module searcher_mongo.py and another in searcher_sqlite.py. The classes in the respective modules have the same names and same methods, the methods just have different implementations: in the SQLite module, searcher.buildIndex() builds the tables urllist, wordlist, and wordgraph using the data indexed by the crawler, literally representing the “urls,” “words,” and “edges” of the bipartite graph.

One more detail: the code base from Toby’s PCI project tracked not only the edges between urls and words, but also the “location” of the word on the page, essentially the integer index of that word when representing that page as an array. This information is used in post-processing the raw list of search results to provide various rankings.
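One of the rankings such location data enables can be sketched as follows; this scoring function and its data are hypothetical illustrations (one of several location-based rankings PCI describes, not code from either module). Pages whose query words first appear earlier in the document score higher:

```python
def location_score(results):
    """Rank urls so that pages whose query words appear earlier score higher.

    results maps url -> {token: [locations]}, assuming locations start at 1;
    the best (smallest) total first-occurrence location is normalized to 1.0.
    """
    raw = {url: sum(min(locs) for locs in tokmap.values())
           for url, tokmap in results.items()}
    best = min(raw.values())
    return {url: best / total for url, total in raw.items()}

# Made-up search results: token -> word positions on each page.
results = {
    "a.html": {"python": [3, 40], "code": [7]},    # words near the top
    "b.html": {"python": [120], "code": [15]},     # words buried deeper
}
scores = location_score(results)
print(scores["a.html"] > scores["b.html"])
```

This consumes exactly the token:locations structure that simpleQuery returns, which is why the location data is worth carrying through the index.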
Although these post-processing features aren't implemented in this version of my class library, the location data is captured and indexed by the searcher.buildIndex() methods in both modules. The searcher.simpleQuery() method returns the search results as a pair of structured data items: the first item is a list of "tokens" from the search query string, and the second item is a dictionary of url:data pairs, where the "data" is in turn another dictionary encapsulating the token:locations data for that document.

# Results

I made no attempt to run an exhaustive battery of comparison tests, but the numbers returned corroborate my experience. To obtain these I used the cProfile package; see the main_test.py module in the source tree. The test program first indexes a set of webpages using the crawler class starting from an initial seed list. Then it connects a searcher class to this database for each of the respective modules:

```
>>> Sql = searcher_sqlite.searcher()
>>> Mdb = searcher_mongo.searcher()
```

The word index is built (and timed) for the SQL version:

```
>>> Sql.initTables()
>>> Sql.count(0.14)
>>> cProfile.run('Sql.buildIndex()')
searcher_sqlite : building word index
796334 function calls in 21.349 seconds
...
```

And the same is done for the Mongo DB version:

```
>>> Mdb.count(0.14)
>>> cProfile.run('Mdb.buildIndex()')
searcher_mongo : building word index
5257029 function calls in 18.389 seconds
...
```
Finally it compares the search performance for each of the classes:

```
>>> cProfile.run('(tokens, results) = Mdb.simpleQuery("anything")')
2025 function calls (2019 primitive calls) in 3.787 seconds
>>> cProfile.run('(tokens, results) = Sql.simpleQuery("anything")')
165 function calls in 0.004 seconds
```

and

```
>>> cProfile.run('(tokens, results) = Mdb.simpleQuery("something special")')
5560 function calls in 7.593 seconds
>>> cProfile.run('(tokens, results) = Sql.simpleQuery("something special")')
385 function calls in 0.015 seconds
```

and

```
>>> cProfile.run('(tokens, results) = Mdb.simpleQuery("what about Google?")')
8748 function calls (8743 primitive calls) in 6.315 seconds
>>> cProfile.run('(tokens, results) = Sql.simpleQuery("what about Google?")')
1737 function calls in 0.171 seconds
```

These tests were run on a late 2010 MacBook Air. Your mileage may vary. Comments welcome.

# getting with github

I have a lot of projects on SourceForge. It's the home of the first project to which I devoted serious contributions, and it seemed a natural place to host my own projects. Some of those are academic research projects, some are ideas that I wanted to share with friends, and some are code bases where I simply replaced their brittle makefiles with a build configuration system. However, the self-imposed ostracism of not being active on GitHub grows more conspicuous with each piece of code I want to publish. It's like being one of those rare souls who aren't on Facebook.

Recently I "seeded" my GitHub profile with a small code base, gslextn, parts of which have already been incorporated elsewhere. I'm an avid user of the GSL, but eventually found the process of working with an API where things like matrices and vectors are C structs to be wanting of more lubrication. Wrapping the essential objects in C++ classes to exploit operator overloading and RAII idioms, without succumbing to the impulse to wrap each API call in a class method, seems like a good balance.

# refactoring

I recently switched the back-end of this site from Indexhibit to WordPress.
Call it a refactoring of os-scientific.org. Republishing a few articles as pages was a good exercise in learning the ins and outs of WordPress' software. (Not to mention an exercise in patience.)

Not only do I write a lot of code, I write a lot about it; actually, a lot about the intersection of code and math. I have been using Doxygen for a while now to document code. What I really like about it as a publishing tool is that:

• it handles code snippets elegantly,
• it lets me incorporate LaTeX.

In fact Doxygen's \mainpage has become my replacement for \documentclass{article}, allowing me to keep expository notes in the same file tree as a code base, and to generate clean-looking HTML. Indexhibit plays reasonably well with external webpages, so the index.html generated from a Doxygen main page is essentially a ready-to-go blog post. WordPress does not play so well with foreign HTML code pasted into the editor, particularly HTML with lots of formatting directives. After a little experimentation with regular expressions, I found a way to scrub Doxygen-erated HTML into something more palatable to WordPress.

Let me mention that the site is running MathJax in the background. Formatting symbols in posts is handled exactly as in LaTeX documents, i.e., LaTeX markup escaped with the standard dollar-sign symbols.

For the scrubbing filter I used sed:

```
$ sed -f doxy2wp.txt $DOCPATH/index.html > wordpress.html
```

where doxy2wp.txt contains the regular expressions:

```
s/<span[^>]*>//g; s/<\/span>//g
s/<div[^>]*>//g; s/<\/div>//g
s/^ //g
s/<img class=\"formula...\" alt=\"//g
s/\" src=\"form_[0-9]*\.png\"\/>//g
```
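The same scrubbing can be approximated with Python's re module; a rough translation of the sed filter above (the formula-img pattern is loosened slightly to match any formula class):

```python
import re

# strip the span/div wrappers and leading indentation Doxygen emits,
# and unwrap its formula <img> tags back to their LaTeX alt text
PATTERNS = [
    (r"<span[^>]*>", ""), (r"</span>", ""),
    (r"<div[^>]*>", ""), (r"</div>", ""),
    (r"(?m)^ ", ""),
    (r'<img class="formula[^"]*" alt="', ""),
    (r'" src="form_[0-9]+\.png"/>', ""),
]

def scrub(html):
    for pattern, repl in PATTERNS:
        html = re.sub(pattern, repl, html)
    return html

print(scrub('<div class="contents"><span>x</span></div>'))  # x
print(scrub('<img class="formulaInl" alt="$y^2$" src="form_3.png"/>'))  # $y^2$
```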
https://tex.stackexchange.com/questions/454086/how-to-unfill-an-area-inside-a-filled-one-in-tikz
# How to unfill an area inside a filled one in tikz

I've searched a bit and tried different ways, but I don't manage to get the area of the inside circle unfilled, or, alternatively, to fill only the area between the two circles. Here is a MWE:

```
\documentclass[12pt]{article}
\usepackage{tikz}
\begin{document}
\begin{tikzpicture}
\draw[step=1cm, gray, very thin] (-3.5,-2.5) grid (3.5,2.5);
\draw (0,0) circle (2cm);
\path[fill=gray, opacity=0.2] (0,0) circle (2cm);
\draw[dashed] (0,0) circle (1cm);
\end{tikzpicture}
\end{document}
```

And here is what it plots:

Edit: I've found an answer here: Filling the area between two circles. I still want to know if it is possible to unfill a previously filled area, though.

• The short answer is no, you cannot "unfill" an area. You can fill on top of it, but hidden background objects will stay hidden. You can, however, not fill part of a filled area in the first place. – John Kormylo Oct 6 '18 at 12:42
• If your question is how not to fill the circle in the middle, then you may just be looking for even odd rule: \path[fill=gray, opacity=0.2,even odd rule] (0,0) circle (2cm) circle (1cm); – user121799 Oct 6 '18 at 13:02

I think you are only looking for even odd rule.

```
\documentclass[12pt]{article}
\usepackage{tikz}
\begin{document}
\begin{tikzpicture}
\draw[step=1cm, gray, very thin] (-3.5,-2.5) grid (3.5,2.5);
\draw (0,0) circle (2cm);
\path[fill=gray, opacity=0.2,even odd rule] (0,0) circle (2cm) circle (1cm);
\draw[dashed] (0,0) circle (1cm);
\end{tikzpicture}
\end{document}
```

The closest thing to "unfilling" that comes to my mind is to do all the fills on the background layer. Then the effect may (or may not, depending on whom you ask ;-) be described as "unfilling".
```
\documentclass[margin=3.14mm, tikz]{standalone}
\usetikzlibrary{backgrounds}
\begin{document}
\foreach \X in {0,1}
{\begin{tikzpicture}
\draw[step=1cm, gray, very thin] (-3.5,-2.5) grid (3.5,2.5);
\draw (0,0) circle (2cm);
\ifnum\X=0
\node[anchor=south,font=\sffamily] at (0,2) {fill};
\else
\node[anchor=south,font=\sffamily] at (0,2) {``unfill''};
\fi
\begin{scope}[on background layer]
\path[fill=gray, opacity=0.2] (0,0) circle (2cm) circle (1cm);
\ifnum\X=0
\draw[dashed] (0,0) circle (1cm);
\else
\draw[dashed,fill=white] (0,0) circle (1cm);
\fi
\end{scope}
\end{tikzpicture}}
\end{document}
```

• Thank you, I've previously edited the question to clarify that I'm looking for a method different from the even odd one, but as John Kormylo said in his comment maybe it is impossible to do. – Edoardo Serra Oct 6 '18 at 17:38
• @EdoardoSerra I added a second possibility which may come close to "unfilling". – user121799 Oct 6 '18 at 17:50
http://mathdl.maa.org/mathDL/46/?pa=content&sa=viewDocument&nodeId=3284&bodyId=3586
# Sums of Powers of Positive Integers

## Solutions to Exercises 4-6

Exercise 4.  For n = 3, three shells constructed of $6\cdot 1^2 = 6$ cubes, $6\cdot 2^2 = 24$ cubes, and $6\cdot 3^2 = 54$ cubes fit together to form a 3 x 4 x 7 rectangular solid, as shown in Figure 16A.  This construction illustrates that $$6\left(1^2 + 2^2 + 3^2\right) = {3\cdot 4\cdot 7},$$ or $$1^2 + 2^2 + 3^2 = {{3\cdot 4\cdot 7} \over 6},$$ or, more generally, $$1^2 + 2^2 + 3^2 + \cdots + n^2 = {{n(n + 1)(2n + 1)} \over 6}$$ for any positive integer n. Nilakantha reasoned that the outside shell contained $6\cdot 3^2 = 54$ cubes as follows (see Figure 16B):

Exercise 5.  For n = 3, three shells constructed of $6\cdot(1\cdot 2)/2 = 6$ cubes, $6\cdot(2\cdot 3)/2 = 18$ cubes, and $6\cdot(3\cdot 4)/2 = 36$ cubes fit together to form a 3 x 4 x 5 rectangular solid, as shown in Figure 17A.  This construction illustrates that $$6\left(1 + 3 + 6\right) = {3\cdot 4\cdot 5},$$ or $$1 + 3 + 6 = \frac {3\cdot 4\cdot 5}{6},$$ or, more generally, $$1 + 3 + 6 + \cdots + {{n(n + 1)} \over 2} = {{n(n + 1)(n + 2)} \over 6}$$ for any positive integer n. Nilakantha may have reasoned that the outside shell contained $6\cdot(3\cdot 4)/2 = 36$ cubes as follows (see Figure 17B):

Exercise 6.  They may have made the following computations.

$$1 + 3 = 4 = 2^2$$ $$3 + 6 = 9 = 3^2$$ $$6 + 10 = 16 = 4^2$$ $$10 + 15 = 25 = 5^2$$ $$15 + 21 = 36 = 6^2$$

The general relationship can be expressed as $T_n + T_{n+1} = (n+1)^2,$ where $T_n$ is the nth triangular number, or as $${{n(n + 1)} \over 2} + {{(n + 1)(n + 2)} \over 2} = (n + 1)^2.$$ The latter equation can be checked easily using algebra.
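The identities above can also be checked numerically; a quick Python sketch (not part of the original article):

```python
def triangular(n):
    """The nth triangular number T_n = n(n+1)/2."""
    return n * (n + 1) // 2

for n in range(1, 100):
    # 1^2 + 2^2 + ... + n^2 = n(n+1)(2n+1)/6
    assert sum(k ** 2 for k in range(1, n + 1)) == n * (n + 1) * (2 * n + 1) // 6
    # 1 + 3 + 6 + ... + T_n = n(n+1)(n+2)/6
    assert sum(triangular(k) for k in range(1, n + 1)) == n * (n + 1) * (n + 2) // 6
    # T_n + T_{n+1} = (n+1)^2
    assert triangular(n) + triangular(n + 1) == (n + 1) ** 2
print("all three identities hold for n < 100")
```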
Beery, Janet, "Sums of Powers of Positive Integers," Loci (February 2009), DOI: 10.4169/loci003284

### Karaji's solution

I just wonder if al-Karaji generalized his method of solution. Is there any information about that? For instance, did he find the sum of fifth powers?

Al-Karaji and higher powers, by Janet Beery (posted: 01/20/2010)

My understanding is that there is no evidence that al-Karaji derived a "formula" for the sum of the fourth, fifth, or higher powers. His justification for his formula for the sum of the cubes was ingenious, but al-Haytham's idea seems more readily generalizable.

### On the symmetry of the resulting polynomials

Conjecture: It would be interesting to know that

* sums of odd powers can be factored as a polynomial in n(n + 1)/2 and are therefore symmetric with center at -1/2; and that
* sums of even powers can be factored as such a polynomial multiplied by (2n + 1) and are therefore antisymmetric with center at -1/2.

It would be interesting to know whether there is any rule concerning the polynomials thus obtained.

#### Replies:

Re: On the symmetry of the resulting polynomials, by Rick Mabry (posted: 12/30/2011)

Christopher, see the following article, especially section 3 (Faulhaber Polynomials). Although my browsers don't let me view the code you pasted, I think the article confirms some of what you're getting at (some of which might be gleaned from page 8 of the article we're viewing).

A. F. Beardon, Sums of Powers of Integers, American Mathematical Monthly, Vol. 103, No. 3 (Mar., 1996), pp. 201-213.

More about symmetry, by Janet Beery (posted: 01/03/2013)

The most succinct, beautiful, and perhaps even historically plausible explanation I've seen for the symmetry of the Faulhaber polynomials about -1/2 was in a recent article by Reuben Hersh in the College Math Journal 43:4 (Sept. 2012), pp. 322-4.
His approach? Extend the polynomials to negative values of n.
https://stats.stackexchange.com/questions/342934/marketing-mix-modeling-revisit
# Marketing Mix Modeling revisit

Objective: Obtain incremental sales during promotional periods.

Suggested Method:

1. Replace sales values during promotion periods with NA.
2. Use stl() to decompose sales into trend, season, remainder.
3. Forecast (trend - season) through the NAs.
4. Add the seasonal component back onto the forecasted trend. This is the base.
5. The difference between the base and the actual sales is the incremental dollars due to the promotion.

Problem: Most of the actual sales, at least in my data, are lower than the base sales.

Question: I am looking for an alternative method, or an error in my current method, as I find it hard to believe that promotions have negative incremental dollars. This is a revisit of the conversation I was reading here.

• Have you looked at plots of the data? Maybe they really are lower; it's hard to see how stl() could forecast a spike in demand higher than the actual spike if all the spikes have been removed from the history. – jbowman May 2 '18 at 17:59
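A minimal numpy sketch of steps 1-5 (substituting a linear trend and per-phase seasonal means for stl(), on made-up weekly data) shows the intended positive incrementals when the promo weeks really do spike:

```python
import numpy as np

# made-up weekly sales with period-4 seasonality; weeks 8-9 are a promotion
sales = np.array([10., 12, 9, 11, 10, 12, 9, 11, 18, 20, 9, 11])
promo = np.zeros(sales.size, dtype=bool)
promo[8:10] = True

# step 1: replace promotional sales with NA
clean = np.where(promo, np.nan, sales)

# steps 2-4: estimate a trend (here a linear fit) and a seasonal component
# (per-phase means of the detrended series) from the non-promo weeks,
# then rebuild the base as trend + season
t = np.arange(sales.size)
ok = ~np.isnan(clean)
slope, intercept = np.polyfit(t[ok], clean[ok], 1)
trend = slope * t + intercept
phase = t % 4
season = np.array([np.nanmean((clean - trend)[phase == p]) for p in range(4)])
base = trend + season[phase]

# step 5: incremental sales = actual - base, during the promotion
incremental = (sales - base)[promo]
print(incremental.round(1))
```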
http://en.wikipedia.org/wiki/16-cell
# 16-cell

(4-orthoplex; Schlegel diagram shows vertices and edges)

Type: convex regular 4-polytope; 4-orthoplex; 4-demicube
Schläfli symbol: {3,3,4}
Cells: 16 {3,3}
Faces: 32 {3}
Edges: 24
Vertices: 8
Vertex figure: octahedron
Petrie polygon: octagon
Coxeter group: C4, [3,3,4], order 384
Dual: tesseract
Properties: convex, isogonal, isotoxal, isohedral
Uniform index: 12

In four-dimensional geometry, a 16-cell or hexadecachoron is a regular convex 4-polytope. It is one of the six regular convex 4-polytopes first described by the Swiss mathematician Ludwig Schläfli in the mid-19th century. It is part of an infinite family of polytopes, called cross-polytopes or orthoplexes. The dual polytope is the tesseract (4-cube). Conway's name for a cross-polytope is orthoplex, for orthant complex.

## Geometry

It is bounded by 16 cells, all of which are regular tetrahedra. It has 32 triangular faces, 24 edges, and 8 vertices. The 24 edges bound 6 squares lying in the 6 coordinate planes. The eight vertices of the 16-cell are (±1, 0, 0, 0), (0, ±1, 0, 0), (0, 0, ±1, 0), (0, 0, 0, ±1). All vertices are connected by edges except opposite pairs. The Schläfli symbol of the 16-cell is {3,3,4}. Its vertex figure is a regular octahedron: there are 8 tetrahedra, 12 triangles, and 6 edges meeting at every vertex. Its edge figure is a square: there are 4 tetrahedra and 4 triangles meeting at every edge.

The 16-cell can be decomposed into two similar disjoint circular chains of eight tetrahedra each, four edges long. Each chain, when stretched out straight, forms a Boerdijk–Coxeter helix. This decomposition can be seen in the alternated 4-4 duoprism construction of the 16-cell, with symmetry [[4,2+,4]], order 64.

## Images

Stereographic projection; a 3D projection of a 16-cell performing a simple rotation.
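The vertex, edge, and face counts can be verified directly from the coordinates; a small Python sketch (not from the article):

```python
from itertools import combinations

# the eight vertices: all permutations of (±1, 0, 0, 0)
verts = [tuple(s if i == j else 0 for i in range(4))
         for j in range(4) for s in (1, -1)]

def d2(u, v):
    """Squared Euclidean distance between two vertices."""
    return sum((a - b) ** 2 for a, b in zip(u, v))

# all vertices are joined except opposite pairs: edges have squared
# length 2, while opposite pairs are at squared distance 4
edges = [p for p in combinations(verts, 2) if d2(*p) == 2]

# triangular faces: triples of mutually adjacent vertices
faces = [f for f in combinations(verts, 3)
         if all(d2(a, b) == 2 for a, b in combinations(f, 2))]

print(len(verts), len(edges), len(faces))  # 8 24 32
```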
The 16-cell has two Wythoff constructions, a regular form and an alternated form, shown here as nets, the second represented by alternately two colors of tetrahedral cells.

Orthographic projections, by Coxeter plane and dihedral symmetry:

Coxeter plane B4: dihedral symmetry [8]
Coxeter plane B3 / D4 / A2: dihedral symmetry [6]
Coxeter plane B2 / D3: dihedral symmetry [4]
Coxeter plane F4: dihedral symmetry [12/3]
Coxeter plane A3: dihedral symmetry [4]

The demitesseract is drawn in order-4 Petrie polygon symmetry as an alternated tesseract.

## Tessellations

One can tessellate 4-dimensional Euclidean space by regular 16-cells. This is called the hexadecachoric honeycomb and has Schläfli symbol {3,3,4,3}. The dual tessellation, the icositetrachoric honeycomb {3,4,3,3}, is made of regular 24-cells. Together with the tesseractic honeycomb {4,3,3,4}, these are the only three regular tessellations of R4. Each 16-cell has 16 neighbors with which it shares a tetrahedron, 24 neighbors with which it shares only an edge, and 72 neighbors with which it shares only a single point. Twenty-four 16-cells meet at any given vertex in this tessellation.

## Projections

Projection envelopes of the 16-cell (each cell is drawn with different color faces; inverted cells are not drawn).

The cell-first parallel projection of the 16-cell into 3-space has a cubical envelope. The closest and farthest cells are projected to inscribed tetrahedra within the cube, corresponding to the two possible ways to inscribe a regular tetrahedron in a cube. Surrounding each of these tetrahedra are 4 other (non-regular) tetrahedral volumes that are the images of the 4 surrounding tetrahedral cells, filling up the space between the inscribed tetrahedron and the cube. The remaining 6 cells are projected onto the square faces of the cube. In this projection of the 16-cell, all its edges lie on the faces of the cubical envelope.

The cell-first perspective projection of the 16-cell into 3-space has a triakis tetrahedral envelope. The layout of the cells within this envelope is analogous to that of the cell-first parallel projection.
The vertex-first parallel projection of the 16-cell into 3-space has an octahedral envelope. This octahedron can be divided into 8 tetrahedral volumes by cutting along the coordinate planes. Each of these volumes is the image of a pair of cells in the 16-cell. The closest vertex of the 16-cell to the viewer projects onto the center of the octahedron. Finally, the edge-first parallel projection has a shortened octahedral envelope, and the face-first parallel projection has a hexagonal bipyramidal envelope.

## 4-sphere Venn diagram

The usual projection of the 16-cell and 4 intersecting spheres (a Venn diagram of 4 sets) form topologically the same object in 3D space.

## Symmetry constructions

There is a lower-symmetry form of the 16-cell, called a demitesseract or 4-demicube, a member of the demihypercube family, represented by h{4,3,3}. It can be drawn bicolored with alternating tetrahedral cells. It can also be seen in lower-symmetry form as a tetrahedral antiprism, constructed from 2 parallel tetrahedra in dual configurations, connected by 8 (possibly elongated) tetrahedra; this form is represented by s{2,4,3}. It can also be seen as a snub 4-orthotope, represented by s{2^1,1,1}. With the tesseract constructed as a 4-4 duoprism, the 16-cell can be seen as its dual, a 4-4 duopyramid.
The symmetry constructions, with Schläfli symbol, Coxeter notation, and symmetry order:

Regular 16-cell: {3,3,4}, [3,3,4], order 384
Demitesseract: h{4,3,3} = {3,3^1,1}, [3^1,1,1] = [1+,4,3,3], order 192
Alternated 4-4 duoprism: 2s{4,2,4}, [[4,2+,4]], order 64
Tetrahedral antiprism: s{2,4,3}, [2+,4,3], order 48
Alternated square prism prism: sr{2,2,4}, [(2,2)+,4], order 16
Snub 4-orthotope: s{2^1,1,1}, [2,2,2]+, order 8

As a 4-fusil:

{3,3,4}, [3,3,4], order 384
{4}+{4}, [[4,2,4]], order 128
{3,4}+{}, [4,3,2], order 96
{4}+{}+{}, [4,2,2], order 32
{}+{}+{}+{}, [2,2,2], order 16

## Related uniform polytopes and honeycombs

The D4 uniform polychora: {3,3^1,1} = h{4,3,3}; 2r{3,3^1,1} = h3{4,3,3}; t{3,3^1,1} = h2{4,3,3}; 2t{3,3^1,1} = h2,3{4,3,3}; r{3,3^1,1} = {3^1,1,1} = {3,4,3}; rr{3,3^1,1} = r{3^1,1,1} = r{3,4,3}; tr{3,3^1,1} = t{3^1,1,1} = t{3,4,3}; sr{3,3^1,1} = s{3^1,1,1} = s{3,4,3}.

The 16-cell is a part of the tesseractic family of uniform polychora:

tesseract {4,3,3}; rectified tesseract r{4,3,3}; truncated tesseract t{4,3,3}; cantellated tesseract rr{4,3,3}; runcinated tesseract t0,3{4,3,3}; bitruncated tesseract 2t{4,3,3}; cantitruncated tesseract tr{4,3,3}; runcitruncated tesseract t0,1,3{4,3,3}; omnitruncated tesseract t0,1,2,3{4,3,3};
16-cell {3,3,4}; rectified 16-cell r{3,3,4}; truncated 16-cell t{3,3,4}; cantellated 16-cell rr{3,3,4}; runcinated 16-cell t0,3{3,3,4}; bitruncated 16-cell 2t{3,3,4}; cantitruncated 16-cell tr{3,3,4}; runcitruncated 16-cell t0,1,3{3,3,4}; omnitruncated 16-cell t0,1,2,3{3,3,4}.

This polychoron is also related to the cubic honeycomb, the order-4 dodecahedral honeycomb, and the order-4 hexagonal tiling honeycomb, all of which have octahedral vertex figures. It belongs to the {p,3,4} series with cells {p,3}: {3,3,4} (finite, S3); {4,3,4} (affine, E3); {5,3,4} (compact, H3); {6,3,4} (paracompact); {7,3,4}, {8,3,4}, ... {∞,3,4} (noncompact).

It is similar to three regular polychora: the 5-cell {3,3,3}, the 600-cell {3,3,5} of Euclidean 4-space, and the order-6 tetrahedral honeycomb {3,3,6} of hyperbolic space. All of these have tetrahedral cells. The {3,3,p} series, with vertex figures {3,p}: {3,3,3}, {3,3,4}, {3,3,5} (finite, S3); {3,3,6} (paracompact, H3); {3,3,7}, {3,3,8}, ... {3,3,∞} (noncompact).

The quasiregular polychora and honeycombs h{4,p,q}, with vertex figures r{p,3}: h{4,3,3} = $\left\{3,{3\atop3}\right\}$, h{4,3,4} = $\left\{3,{3\atop4}\right\}$, h{4,3,5} = $\left\{3,{3\atop5}\right\}$, h{4,3,6} = $\left\{3,{3\atop6}\right\}$, h{4,4,3} = $\left\{4,{3\atop4}\right\}$, h{4,4,4} = $\left\{4,{4\atop4}\right\}$.

Regular and quasiregular honeycombs {p,3,4} and {p,3^1,1}, with cells {p,3}: {3,3,4} and {3,3^1,1} = $\left\{3,{3\atop3}\right\}$ (Euclidean 4-space); {4,3,4} and {4,3^1,1} = $\left\{4,{3\atop3}\right\}$ (Euclidean 3-space); {5,3,4} and {5,3^1,1} = $\left\{5,{3\atop3}\right\}$, {6,3,4} and {6,3^1,1} = $\left\{6,{3\atop3}\right\}$ (hyperbolic 3-space).
http://mathhelpforum.com/trigonometry/51672-trig-equation-print.html
# Trig Equation

• October 2nd 2008, 05:36 AM
xwrathbringerx

Can anyone please show me how to do 2cos2x = 7cosx. I have no clue.

• October 2nd 2008, 05:52 AM
mr fantastic

Quote: Originally Posted by xwrathbringerx: Can anyone please show me how to do 2cos2x = 7cosx. I have no clue.

$2 \left( 2 \cos^2 (x) - 1 \right) = 7 \cos (x)$. Now re-arrange into a quadratic in cos x .....

• October 2nd 2008, 11:32 AM
dolphinlover

cos2x = 2cos^2 x - 1 (double angle)... that is how Mr Fantastic got his result. Rearranging into a quadratic in cos x should yield

(4cosx + 1)(cosx - 2) = 0,

so 4cosx + 1 = 0 or cosx - 2 = 0 .... now solve for x in both.
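A quick numeric check (an added sketch, not from the thread): cos x = 2 is impossible since |cos x| ≤ 1, so the solutions come from cos x = -1/4:

```python
import math

# 2cos(2x) = 7cos(x)  <=>  2(2cos^2 x - 1) = 7cos(x)
#                     <=>  4cos^2 x - 7cos(x) - 2 = 0
#                     <=>  (4cos(x) + 1)(cos(x) - 2) = 0
# cos(x) = 2 has no solution, leaving cos(x) = -1/4
x = math.acos(-0.25)
assert abs(2 * math.cos(2 * x) - 7 * math.cos(x)) < 1e-12

# check the factorisation at an arbitrary value of cos(x)
c = math.cos(1.234)
assert abs((4 * c + 1) * (c - 2) - (4 * c * c - 7 * c - 2)) < 1e-12
print("checks pass")
```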
https://ftp.aimsciences.org/article/doi/10.3934/dcdss.2009.2.287
# American Institute of Mathematical Sciences

June 2009, 2(2): 287-300. doi: 10.3934/dcdss.2009.2.287

## Heaviness in symbolic dynamics: Substitution and Sturmian systems

1 Department of Mathematics, The Ohio State University, 231 W. 18th Avenue, Columbus, OH 43210, United States

Received February 2008; Revised August 2008; Published April 2009

Heaviness refers to a sequence of partial sums maintaining a certain lower bound, and was recently introduced and studied in [11]. After a review of basic properties to familiarize the reader with the ideas of heaviness, general principles of heaviness in symbolic dynamics are introduced. The classical Morse sequence is used to study a specific example of heaviness in a system with nontrivial rational eigenvalues. To contrast, Sturmian sequences are examined, including a new condition for a sequence to be Sturmian.

Citation: David Ralston. Heaviness in symbolic dynamics: Substitution and Sturmian systems. Discrete and Continuous Dynamical Systems - S, 2009, 2 (2): 287-300. doi: 10.3934/dcdss.2009.2.287

Related articles:

[1] Jon Chaika, David Constantine. A quantitative shrinking target result on Sturmian sequences for rotations. Discrete and Continuous Dynamical Systems, 2018, 38 (10): 5189-5204. doi: 10.3934/dcds.2018229
[2] Joshua P. Bowman, Slade Sanderson. Angels' staircases, Sturmian sequences, and trajectories on homothety surfaces. Journal of Modern Dynamics, 2020, 16: 109-153. doi: 10.3934/jmd.2020005
[3] Tian-Xiao He, Peter J.-S. Shiue, Zihan Nie, Minghao Chen. Recursive sequences and Girard-Waring identities with applications in sequence transformation. Electronic Research Archive, 2020, 28 (2): 1049-1062. doi: 10.3934/era.2020057
[4] Ferruh Özbudak, Eda Tekin. Correlation distribution of a sequence family generalizing some sequences of Trachtenberg. Advances in Mathematics of Communications, 2021, 15 (4): 647-662. doi: 10.3934/amc.2020087
[5] Hua Liang, Jinquan Luo, Yuansheng Tang. On cross-correlation of a binary $m$-sequence of period $2^{2k}-1$ and its decimated sequences by $(2^{lk}+1)/(2^l+1)$. Advances in Mathematics of Communications, 2017, 11 (4): 693-703. doi: 10.3934/amc.2017050
[6] Michal Kupsa, Štěpán Starosta. On the partitions with Sturmian-like refinements. Discrete and Continuous Dynamical Systems, 2015, 35 (8): 3483-3501. doi: 10.3934/dcds.2015.35.3483
[7] Roman Šimon Hilscher. On general Sturmian theory for abnormal linear Hamiltonian systems. Conference Publications, 2011, 2011 (Special): 684-691. doi: 10.3934/proc.2011.2011.684
[8] María Barbero Liñán, Hernán Cendra, Eduardo García Toraño, David Martín de Diego. Morse families and Dirac systems. Journal of Geometric Mechanics, 2019, 11 (4): 487-510. doi: 10.3934/jgm.2019024
[9] Philip Schrader. Morse theory for elastica. Journal of Geometric Mechanics, 2016, 8 (2): 235-256. doi: 10.3934/jgm.2016006
[10] Richard Hofer, Arne Winterhof. On the arithmetic autocorrelation of the Legendre sequence. Advances in Mathematics of Communications, 2017, 11 (1): 237-244. doi: 10.3934/amc.2017015
[11] Jeanette Olli. Endomorphisms of Sturmian systems and the discrete chair substitution tiling system. Discrete and Continuous Dynamical Systems, 2013, 33 (9): 4173-4186. doi: 10.3934/dcds.2013.33.4173
[12] M. Baake, P. Gohlke, M. Kesseböhmer, T. Schindler. Scaling properties of the Thue–Morse measure. Discrete and Continuous Dynamical Systems, 2019, 39 (7): 4157-4185. doi: 10.3934/dcds.2019168
[13] Mauro Patrão, Luiz A. B. San Martin. Morse decomposition of semiflows on fiber bundles. Discrete and Continuous Dynamical Systems, 2007, 17 (3): 561-587. doi: 10.3934/dcds.2007.17.561
[14] E. Camouzis, H. Kollias, I. Leventides. Stable manifold market sequences. Journal of Dynamics and Games, 2018, 5 (2): 165-185. doi: 10.3934/jdg.2018010
[15] Frank Fiedler. Small Golay sequences. Advances in Mathematics of Communications, 2013, 7 (4): 379-407. doi: 10.3934/amc.2013.7.379
[16] Yixiao Qiao, Xiaoyao Zhou. Zero sequence entropy and entropy dimension. Discrete and Continuous Dynamical Systems, 2017, 37 (1): 435-448. doi: 10.3934/dcds.2017018
[17] Walter Briec, Bernardin Solonandrasana. Some remarks on a successive projection sequence. Journal of Industrial and Management Optimization, 2006, 2 (4): 451-466. doi: 10.3934/jimo.2006.2.451
[18] Arseny Egorov. Morse coding for a Fuchsian group of finite covolume. Journal of Modern Dynamics, 2009, 3 (4): 637-646. doi: 10.3934/jmd.2009.3.637
[19] Alejandro B. Aceves, Luis A. Cisneros-Ake, Antonmaria A. Minzoni. Asymptotics for supersonic traveling waves in the Morse lattice. Discrete and Continuous Dynamical Systems - S, 2011, 4 (5): 975-994. doi: 10.3934/dcdss.2011.4.975
[20] Tomás Caraballo, Juan C. Jara, José A. Langa, José Valero. Morse decomposition of global attractors with infinite components. Discrete and Continuous Dynamical Systems, 2015, 35 (7): 2845-2861. doi: 10.3934/dcds.2015.35.2845
https://www.aimsciences.org/article/doi/10.3934/jimo.2015.11.1
American Institute of Mathematical Sciences

January 2015, 11(1): 1-11. doi: 10.3934/jimo.2015.11.1

The optimal price discount, order quantity and minimum quantity in newsvendor model with group purchase

1 Department of Mathematics, Beijing Jiaotong University, Beijing, 100044, China
2 Department of Mathematics, Beijing Jiaotong University, 100044 Beijing

Received December 2012. Revised February 2014. Published May 2014.

Group-buying, an emerging e-commerce model built on small profits but quick turnover, benefits both retailers and customers. In this paper, we explore the optimal price discount, order quantity and minimum quantity, with a fixed selling price of the product, to maximize the sellers' profit. The traditional newsvendor model framework is used in view of the shortened life cycle of most products. The demand of customers is assumed to take an additive form and a product (multiplicative) form, respectively, and the impacts of the demand parameters are examined numerically. It is revealed that in some cases the profit cannot be improved significantly through a price discount because of an inconspicuous increase in demand. However, when demand changes markedly with the price discount, group-buying can bring more profit and inspire vendors to order more goods. Numerical results show that the influence of demand in the product form is more evident than that in the additive form under the group-buying strategy. Furthermore, the profit-based minimum quantity and the probability of selling nothing during the group time are also shown in this paper.

Citation: Zhenwei Luo, Jinting Wang. The optimal price discount, order quantity and minimum quantity in newsvendor model with group purchase. Journal of Industrial & Management Optimization, 2015, 11 (1) : 1-11. doi: 10.3934/jimo.2015.11.1
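For context, the classical newsvendor baseline that the paper extends orders at the critical fractile of the demand distribution. A minimal sketch (my own illustration with made-up numbers and a normal demand assumption; this is the textbook baseline, not the paper's group-buying model):

```python
from statistics import NormalDist

def newsvendor_quantity(price, cost, mu, sigma, salvage=0.0):
    """Classical newsvendor stocking rule: order at the critical fractile
    F^{-1}((price - cost) / (price - salvage)) of Normal(mu, sigma) demand."""
    critical_ratio = (price - cost) / (price - salvage)
    return NormalDist(mu, sigma).inv_cdf(critical_ratio)

# Illustrative numbers only (not from the paper): sell at 10, buy at 4,
# demand ~ N(100, 20). Critical ratio 0.6 puts the order a bit above the mean.
q = newsvendor_quantity(price=10, cost=4, mu=100, sigma=20)
```

With these numbers the critical ratio is 0.6, so the optimal order is slightly above mean demand; a price discount in the group-buying setting effectively shifts both the ratio and the demand distribution.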
https://www.gamedev.net/forums/topic/208950-help-vert-array-mesh-with-triangle_strip/
help, vert array mesh with TRIANGLE_STRIP

I'm trying to draw a terrain with vertex arrays. Right now I have the vertices in a 2D array and draw them as follows, with two for loops:

```cpp
for (int i = 0; i < (numVerts-1); i++) {
    for (int j = 0; j < (numVerts-1); j++) {
        glBegin(GL_TRIANGLE_STRIP); // Build quad from a triangle strip
        glTexCoord2d(0,0); glVertex3f(verts[i][j+1].x,   -3, verts[i][j+1].z);   // Bottom left
        glTexCoord2d(1,0); glVertex3f(verts[i+1][j+1].x, -3, verts[i+1][j+1].z); // Bottom right
        glTexCoord2d(0,1); glVertex3f(verts[i][j].x,     -3, verts[i][j].z);     // Top left
        glTexCoord2d(1,1); glVertex3f(verts[i+1][j].x,   -3, verts[i+1][j].z);   // Top right
        glEnd();
    }
}
```

Now I want to convert this to a vertex array method. I have no problem getting all the vertices into an array, but I'm having a hard time figuring out how to build the index array. With separate triangles it's fairly easy, but this makes quads out of triangle strips. Do I now have to have 6 indices per quad (3 per triangle)? Any help is greatly appreciated. I'm using glDrawElements to draw.

[edited by - skow on February 21, 2004 5:36:53 PM]

Since you're reusing your vertices from one strip to the next you can get much better performance with larger triangle strips. For example if you have a mesh like:

```
*----*----*----*----*----*
|  1 |  2 |  3 |  4 |  5 |
|    |    |    |    |    |
*----*----*----*----*----*
|  6 |  7 |  8 |  9 | 10 |
|    |    |    |    |    |
*----*----*----*----*----*
| 11 | 12 | 13 | 14 | 15 |
|    |    |    |    |    |
*----*----*----*----*----*
```

where each * is a vertex. Currently for this mesh you would draw 15 triangle strips. A simple optimisation can reduce this to 3. A further optimisation can reduce this to just 1 triangle strip. Imagine you draw the quads in numerical order.
At the moment you are effectively doing:

```cpp
glBegin(GL_TRIANGLE_STRIP);
// draw quad #1
glEnd();
glBegin(GL_TRIANGLE_STRIP);
// draw quad #2
glEnd();
// and continue for the rest
```

If we also number the vertices:

```
01----02----03----04----05----06
|     |     |     |     |     |
|     |     |     |     |     |
07----08----09----10----11----12
|     |     |     |     |     |
|     |     |     |     |     |
13----14----15----16----17----18
|     |     |     |     |     |
|     |     |     |     |     |
19----20----21----22----23----24
```

then the first two quads are:

```cpp
glBegin(GL_TRIANGLE_STRIP);
// draw vertex #7
// draw vertex #8
// draw vertex #1
// draw vertex #2
glEnd();
glBegin(GL_TRIANGLE_STRIP);
// draw vertex #8
// draw vertex #9
// draw vertex #2
// draw vertex #3
glEnd();
// and continue for the rest
```

If we reorder the vertices:

```cpp
glBegin(GL_TRIANGLE_STRIP);
// draw vertex #1
// draw vertex #7
// draw vertex #2
// draw vertex #8
glEnd();
glBegin(GL_TRIANGLE_STRIP);
// draw vertex #2
// draw vertex #8
// draw vertex #3
// draw vertex #9
glEnd();
// and continue for the rest
```

you'll see that the last two vertices of the first triangle strip are the same as the first two vertices of the second. This means we can combine them into a single triangle strip:

```cpp
glBegin(GL_TRIANGLE_STRIP);
// draw vertex #1
// draw vertex #7
// draw vertex #2
// draw vertex #8
// draw vertex #3
// draw vertex #9
// and continue for the rest of the vertices up to #'s 6 and 12
glEnd();
// and continue for the rest of the strips
```

Now we are drawing each row in its own strip. Finally we can take advantage of degenerate triangles to stitch all the strips together. The area of the triangle (vertex #1, vertex #1, vertex #2) is zero, so the triangle is not drawn. This is a degenerate triangle.
Our first triangle strip ends:

```cpp
// previous vertices
// draw vertex #5
// draw vertex #11
// draw vertex #6
// draw vertex #12
glEnd();
```

And our second begins:

```cpp
glBegin(GL_TRIANGLE_STRIP);
// draw vertex #7
// draw vertex #13
// draw vertex #8
// draw vertex #14
// more vertices
```

We can't just combine these like we did before because the last vertices of the first strip are not the same as the first vertices of the second strip. Instead we use degenerate triangles:

```cpp
// previous vertices
// draw vertex #5
// draw vertex #11
// draw vertex #6
// draw vertex #12
// draw vertex #12 (a degenerate triangle)
// draw vertex #7  (another degenerate triangle - 12, 12, 7)
// draw vertex #7  (and again - 12, 7, 7)
// draw vertex #13 (and one more - 7, 7, 13)
// draw vertex #8  (a real triangle again - the first of our second strip)
// draw vertex #14
// more vertices
```

Now to build an index array so we can render the mesh with a single glDrawElements call we just need to fill in the indices that we were using above. The number of indices can be calculated as: the number of vertices in the first row, plus the number of vertices in the last row, plus two times the number of vertices in between, plus two times the number of rows of vertices in between. So for this example the index array contains 40 elements and the indices are:

1, 7, 2, 8, 3, 9, 4, 10, 5, 11, 6, 12, 12, 7, 7, 13, 8, 14, 9, 15, 10, 16, 11, 17, 12, 18, 18, 13, 13, 19, 14, 20, 15, 21, 16, 22, 17, 23, 18, 24.

Hopefully you can see how to build index lists for meshes of different sizes.

Enigma

Enigma, thanks for taking the time to write all that. I don't know why it didn't dawn on me to rearrange the order I draw verts. I don't have time for a few days to work on it but this information will be here when I have time. Thanks a bunch!

Awesome description!! I have one question though: these "degenerate triangles" don't show up or cause any funny looking effects?
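Enigma's construction is mechanical enough to automate. Here is a sketch in Python (rather than the thread's C++; the function name `grid_strip_indices` is mine) that generates the stitched single-strip index list for a cols × rows vertex grid, using the same 1-based row-major numbering as the example:

```python
def grid_strip_indices(cols, rows):
    """Index list for drawing a cols x rows vertex grid (1-based, row-major
    numbering) as one triangle strip, with degenerate triangles stitching
    consecutive row strips together."""
    indices = []
    for r in range(rows - 1):
        top = [r * cols + c + 1 for c in range(cols)]
        bottom = [(r + 1) * cols + c + 1 for c in range(cols)]
        if indices:
            # repeat the last emitted index and the next strip's first index
            # to create the degenerate stitching triangles
            indices += [indices[-1], top[0]]
        for c in range(cols):
            indices += [top[c], bottom[c]]
    return indices

# The 6-wide, 4-tall grid from the example yields exactly the 40 indices listed.
idx = grid_strip_indices(6, 4)
assert len(idx) == 40
```

The length matches the counting rule in the post: first row (6) + last row (6) + 2 × vertices in between (24) + 2 × rows in between (4) = 40.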
No, degenerate triangles are culled (very quickly these days) and do not add any pixels to the framebuffer.

Enigma

Awesome, let me ask another. I have simple entities in this game I'm doing; they can each be described with 1 triangle strip (using degenerate triangles of course), and they each have a simple periodic motion (like wiggling as they move or something). As I'm doing it now, all their vertex/normal info for every possible position (around 20 or so) is stored in a 2D array much like the one skow has for terrain. For each frame, depending on the entity, I only use part of the array to draw that entity, and over a second or two it looks pretty good as it moves because of the small periodic motion. Each entity takes approximately 250-400 polygons to render. Would it be a good idea to generate a vertex array (a real one) and use it in 20 or so display lists (each display list being a frame of animation) for each entity? Would this speed things up? A half a dozen of these entities are likely to be on the screen at any given time. That's around 2000 polys, which ain't too bad I guess, but still, that's just entities and none of the particles or "terrain" yet.

The way this is drawn won't affect the normals, right, so I can use backface culling?

AP - you would only have to update the array of vertices for each "frame" and you can recycle the same index array. Putting each "frame" drawn via vertex arrays would speed it up but would eat a lot of VRAM if you have a lot. (I can't use a display list as my terrain is freaking huge.)

The winding order is the same for all polygons, so backface culling can be used.

AP: I didn't entirely get what you were saying, but here are a few points I think might be useful:

1. 2000 polygons is absolutely nothing these days.
2. If you want to put your entities into a display list it does not matter if you use vertex arrays or immediate mode.
The only difference will be the time it takes to compile the display lists.

3. If your motion is simple harmonic you might want to use a vertex program to animate the entities, rather than storing individual frames. This will have the added bonus that the animation can be made frame-rate independent.

Enigma

Enigma, question on #2: wouldn't drawing the vertices via a vertex array in a display list be faster (basically a VBO)?

I'm interested in the answer to skow's last question. Also... thanks for the points, I did find them useful. I probably am NOT going to learn about a vertex program, because I already have the code to make the entities move and animate (periodic motion) independent of frame-rate. Thanks.
http://mathhelpforum.com/advanced-statistics/28831-probability.html
1. ## probability

If $\displaystyle f(x,y) = e^{-x-2y}$, find $\displaystyle P(X<Y)$. So is this equal to $\displaystyle 1 - P(X>Y) = 1 - \int\limits_{0}^{\infty} \int\limits_{0}^{x} e^{-x-2y} \ dy \ dx = \frac{1}{3}$?

2. Never mind about the limits; is the general process correct?

3. I get $\displaystyle \int_{-\infty}^{\infty} \int_{-\infty}^x e^{-x-2y}~dy~dx$

4. I took the complement.

5. What were the domains of $\displaystyle X$ and $\displaystyle Y$ given in the problem?

6. $\displaystyle -\infty < x < \infty$, $\displaystyle -\infty < y < \infty$.

7. While the integral over the whole positive domain is $\displaystyle \frac{1}{2}$, the function is NOT even. For this to be a density function, $\displaystyle x,y \in \left[\frac{-\ln(2)}{3}, \infty\right)$ since $\displaystyle \int\limits_{\frac{-\ln(2)}{3}}^\infty \int\limits_{\frac{-\ln(2)}{3}}^\infty e^{-x-2y}~dy~dx = 1$. Then again, there are infinitely many possible limits for $\displaystyle X$ and $\displaystyle Y$ that will work. I am not 100% confident in my Prob/Stats knowledge though, so don't quote me on this.

8. Yeah, I knew the limits were wrong; I was just asking whether my method of finding the probability was correct (e.g. $\displaystyle P(X<Y) = 1 - P(X>Y)$)?

9. Oh. You are correct.
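As a numerical sanity check (my own addition, assuming the intended density is the normalized version $\displaystyle 2e^{-x-2y}$ on $\displaystyle x, y > 0$, i.e. $\displaystyle X \sim \mathrm{Exp}(1)$ and $\displaystyle Y \sim \mathrm{Exp}(2)$ independent), simulation agrees with $\displaystyle P(X<Y) = \frac{1}{3}$:

```python
import random

random.seed(0)

n = 200_000
# X ~ Exponential(rate 1), Y ~ Exponential(rate 2); expovariate takes the rate.
hits = sum(random.expovariate(1.0) < random.expovariate(2.0) for _ in range(n))
p_estimate = hits / n
# For independent exponentials, P(X < Y) = rate_X / (rate_X + rate_Y) = 1/3,
# so p_estimate should land close to 0.333.
```

This checks the closed-form race result, not the questionable unnormalized density discussed in post 7.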
http://thehumanbrainproject.com/human-brain-supplements-b/active-mind-supplement-brain-on-supplements.html
Instead, neuroscientists have developed several synthetic GABA derivatives (such as Piracetam) which can penetrate the Blood Brain Barrier and promote positive changes in your neural activity. Piracetam is known for improving learning, memory, mood, concentration and alertness. 8 Powder Samples L-Citrulline Gift Certificates What do you think? US politics Mechanism #4: Reducing Inflammation 20mg: 18 - 20 May: 0 10% off your first order Coluracetam Prevents brain-tangling which is the leading cause of Alzheimer’s disease. NALT is the preferred Tyrosine form for brain supplements data=noopeptImputed) It's been suggested that taking resveratrol supplements could prevent the deterioration of the hippocampus, an important part of the brain associated with memory (22). Dr. Dave's Best Sleep Wizard # F-statistic: 0.8383241 on 3 and 163 DF, p-value: 0.474678 Ryan says: # Standardized loadings (pattern matrix) based upon correlation matrix Q: Are there any stimulants in it? Insurance Guide It is a mechanism of action is to reduce inflammation in the brain, balancing the neurotransmitter levels, protects the brain against radical damage, increases the blood flow to the brain. # nootropics Posted byu/smashsmash341985 GABA The placebos can be the usual pills filled with olive oil. The Nature’s Answer fish oil is lemon-flavored; it may be worth mixing in some lemon juice. In Kiecolt-Glaser et al 2011, anxiety was measured via the Beck Anxiety scale; the placebo mean was 1.2 on a standard deviation of 0.075, and the experimental mean was 0.93 on a standard deviation of 0.076. (These are all log-transformed covariates or something; I don’t know what that means, but if I naively plug those numbers into Cohen’s d, I get a very large effect: 1.2−0.930.076\frac{1.2 - 0.93}{0.076}=3.55.) Many users reported ineffective When does the brain work/learn the best? 
Share this page Open/close share Health Insurance R-modafinil’s plasma fluctuation was 28% less than S-modafinil over 24-hours Risk-Free Money-Back Guarantee Amphetamines (i.e. Adderall): While Adderall is prescribed to treat ADHD, some ambitious students use the drug to boost concentration – especially around exam time. Family & 9 - 15 December: 0 Meal Replacement Nootropics: A Fancy Word for Smart Drugs RESOURCES The level of risk from UVA radiation delivered by lamps used by professional manicurists to dry gel nail polish increases with the frequency of manicures. SIDE EFFECTS: Generally well tolerated, but may cause headache, restlessness, insomnia, anxiousness, nausea or loss of appetite. It’s common for us to get caught up in daily activities and become overwhelmed and unable to focus on tasks without stressing out.  L-theanine is an amino acid found in green tea the ability to alter brain wave patterns and help you get dialed back in.  This ingredient helps regain focus and  attentiveness so that you are better able to fight off the effects of stress and anxiety.  You can even combine theanine with coffee to combat caffeine-induced jitters.  They can also lower stress-related neurotransmitters like cortisol and glutamate. Top 5 Best Trail Camera Review 2017 # Minimum correlation of possible factor scores 0.95 0.99 0.96 0.25 Rhodiola rosea – Anti-fatigue aged and mood enhancer Thought it was a coincidence after the second time, but after the third and fourth time, I'm convinced piracetam was the culprit. The doses ranged from 1g to 4.8g with Nootropics Depot powder. # 5 0.39 0.51 0.028 86 1.9e+02 2.5e-10 11.2 0.54 0.049 -347.4 -74.4 1.7 2.4e+02 3.6e-02 What Affects Your Personality? 
# as.factor(Noopept)30 0.04 0.11 Salvia officinalis – Although some evidence is suggestive of cognition benefits, the study quality is so poor that no conclusions can be drawn from it.[59] Herbs for Thought Premium Bali Kratom However, the big downside of Nootrobox Rise is that it comes at a steep cost and doesn't offer nearly as much value as some of the less expensive products higher on this list that provide even better results. HVMN's Nootrobox Rise is solid, but if you can get past the fancier packaging and hype driven marketing, you may be better off starting with a product like NOOESIS essence first. CILTEP has been widely respected by a number of celebrities including Tim Ferris, Joe Rogan, and others who have even compared it to the very potent and powerful nootropic “smart drug” called Modafinil. Modafinil is the first-ever, “confirmed” cognitive enhancing “smart drug.” A Bayesian MCMC analysis using the BEST library gives a similar answer - too much overlap, not enough data: zeo$Date <- as.Date(zeo$Sleep.Date, format="%m/%d/%Y") The Science of Diarrhea I take my piracetam in the form of capped pills consisting (in descending order) of piracetam, choline bitartrate, anhydrous caffeine, and l-tyrosine. On 8 December 2012, I happened to run out of them and couldn’t fetch more from my stock until 27 December. This forms a sort of (non-randomized, non-blind) short natural experiment: did my daily 1-5 mood/productivity ratings fall during 8-27 December compared to November 2012 & January 2013? The graphed data30 suggests to me a decline: I’d really appreciate that. Amino Workout Want To Pack On Muscle? Add These Items To Your Grocery List ASAP how can I get hold of this product in South Africa Is Kratom Different For Women And Men? Dosage And Effects Just as athletes take supplements to enhance their physical performance, some people hope to sharpen their wits with so-called "brain boosters." 
Chondroitin Drugs This fat-soluble vitamin is a powerful antioxidant which transmits nerve impulses. There is evidence supporting the role of vitamin E in helping reduce the risk of Alzheimer’s disease. LEARN MORE >> lithiumExperiment <- read.csv("https://www.gwern.net/docs/lithium/2012-lithium-experiment.csv") Back Low level laser therapy (LLLT) is a curious treatment based on the application of a few minutes of weak light in specific near-infrared wavelengths (the name is a bit of a misnomer as LEDs seem to be employed more these days, due to the laser aspect being unnecessary and LEDs much cheaper). Unlike most kinds of light therapy, it doesn’t seem to have anything to do with circadian rhythms or zeitgebers. Proponents claim efficacy in treating physical injuries, back pain, and numerous other ailments, recently extending it to case studies of mental issues like brain fog. (It’s applied to injured parts; for the brain, it’s typically applied to points on the skull like F3 or F4.) And LLLT is, naturally, completely safe without any side effects or risk of injury. Significantly improve memory acquisition and retention in older adults[14] Calculators print(sum(bs$t<=alpha) / length(bs$t)) Sped up reaction time Sources This shows an amazing capability to induce anti-oxidant chemical reactions in our nervous system, and assist our brain function. Cancer Hi Jacob, • Supporting less stress and anxiety Test Your Knowledge - and learn some interesting things along the way. 2015 SharpBrains Virtual Summit Showbiz & TV Memory formulation & recall 5 Unhealthy Foods You Should Avoid For a Healthy Brain Mitochondrial Support  Addiction Resources Near You Sleep deprivation is a major problem throughout the world and while we all seek out supplements that will provide us with exponential mental energy, it is equally important to be sure you are receiving adequate rest each night and allowing your brain to recharge.   
Mind Lab Pro Review: Related to the famous -racetams but reportedly better (and much less bulky), Noopept is one of the many obscure Russian nootropics. (Further reading: Google Scholar, Examine.com, Reddit, Longecity, Bluelight.ru.) Its advantages seem to be that it's far more compact than piracetam and doesn't taste awful so it's easier to store and consume; doesn't have the cloud hanging over it that piracetam does due to the FDA letters, so it's easy to purchase through normal channels; is cheap on a per-dose basis; and it has fans claiming it is better than piracetam. As a result, the supplementation of fish oil has shown strong associations with improved brain health, increased cognitive function (in all aspects), while greatly reducing our risk of developing dementia [6]. Using nootropics is messing with your neurotransmitter systems, pure and simple. This can lead to a variety of nootropics side effects: anxiety, depression, and excess serotonin, to name a few. Of course, the potential for addiction is the main concern in using these smart drugs.
https://buboflash.eu/bubo5/show-dao2?d=1439319067916
Tags: #python #sicp

Question: What is currying?

Answer: Given a function f(x, y), we can define a function g such that g(x)(y) is equivalent to f(x, y). Here, g is a higher-order function that takes in a single argument x and returns another function that takes in a single argument y. This transformation is currying.

#### Parent (intermediate) annotation

We can use higher-order functions to convert a function that takes multiple arguments into a chain of functions that each take a single argument. More specifically, given a function f(x, y), we can define a function g such that g(x)(y) is equivalent to f(x, y). Here, g is a higher-order function that takes in a single argument x and returns another function that takes in a single argument y. This transformation is called currying.

#### Original toplevel document: 1.6 Higher-Order Functions

1.6.6 Currying

We can use higher-order functions to convert a function that takes multiple arguments into a chain of functions that each take a single argument. More specifically, given a function f(x, y), we can define a function g such that g(x)(y) is equivalent to f(x, y). Here, g is a higher-order function that takes in a single argument x and returns another function that takes in a single argument y.
This transformation is called currying. As an example, we can define a curried version of the pow function: >>> def curried_pow(x): def h(y): return pow(x, y) return h >>&g #### Summary status measured difficulty not learned 37% [default] 0 No repetitions
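The same transformation applies to any two-argument function, and it can be inverted. A minimal sketch in Python (the helper names curry2 and uncurry2 are illustrative, not from the source text):

```python
def curry2(f):
    """Return a curried version of a two-argument function f,
    so that curry2(f)(x)(y) == f(x, y)."""
    def g(x):
        def h(y):
            return f(x, y)
        return h
    return g

def uncurry2(g):
    """Invert curry2: recover a two-argument function from its curried form."""
    def f(x, y):
        return g(x)(y)
    return f

curried = curry2(pow)
print(curried(2)(5))            # 32, same as pow(2, 5)
print(uncurry2(curried)(2, 5))  # 32
```

Note that uncurry2(curry2(f)) behaves the same as f for any two-argument f, which is what makes the two forms interchangeable.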
https://labs.tib.eu/arxiv/?author=Joel%20H.%20Kastner
• ### A Catalog of GALEX Ultraviolet Emission from Asymptotic Giant Branch Stars(1705.05371) May 15, 2017 astro-ph.GA, astro-ph.SR We have performed a comprehensive study of the UV emission detected from AGB stars by the Galaxy Evolution Explorer (GALEX). Of the 468 AGB stars in our sample, 316 were observed by GALEX. In the NUV bandpass ($\lambda_{\rm eff} \sim 2310$ A), 179 AGB stars were detected and 137 were not detected. Only 38 AGB stars were detected in the FUV bandpass ($\lambda_{\rm eff} \sim1528$ A). We find that NUV emission is correlated with optical to near infrared emission leading to higher detection fractions among the brightest, hence closest, AGB stars. Comparing the AGB time-variable visible phased light curves to corresponding GALEX NUV phased light curves we find evidence that for some AGB stars the NUV emission varies in phase with the visible light curves. We also find evidence that the NUV emission, and possibly, the FUV emission are anti-correlated with the circumstellar envelope density. These results suggest that the origin of the GALEX-detected UV emission is an inherent characteristic of the AGB stars that can most likely be traced to a combination of photospheric and chromospheric emission. In most cases, UV detections of AGB stars are not likely to be indicative of the presence of binary companions.
The overall youth of these 19 mid-K to early-M stars is readily apparent from their positions relative to the loci of main sequence stars and giants in Gaia-based color-magnitude and color-color diagrams constructed for all Galex- and WISE-detected stars with parallax measurements included in DR1. The isochronal ages of all 19 stars lie in the range $\sim$10--100 Myr. Comparison with Li-based age estimates indicates a handful of these stars may be young main-sequence binaries rather than pre-main sequence stars. Nine of the 19 objects have not previously been considered as nearby, young stars, and all but one of these are found at declinations north of $+$30$^\circ$. The Gaia DR1 results presented here indicate that the GALNYSS sample includes several hundred nearby, young stars, a substantial fraction of which have not been previously recognized as having ages $\stackrel{<}{\sim}$100 Myr. • ### Dippers and Dusty Disks Edges: A Unified Model(1605.03985) Feb. 21, 2017 astro-ph.SR We revisit the nature of large dips in flux from extinction by dusty circumstellar material that is observed by Kepler for many young stars in the Upper Sco and $\rho$ Oph star formation regions. These young, low-mass "dipper" stars are known to have low accretion rates and primarily host moderately evolved dusty circumstellar disks. Young low mass stars often exhibit rotating star spots that cause quasi-periodic photometric variations. We found no evidence for periods associated with the dips that are different from the star spot rotation period in spectrograms constructed from the light curves. The material causing the dips in most of these light curves must be approximately corotating with the star. We find that disk temperatures computed at the disk corotation radius are cool enough that dust should not sublime. Crude estimates for stellar magnetic field strengths and accretion rates are consistent with magnetospheric truncation near the corotation radius. 
Magnetospheric truncation models can explain why the dips are associated with material near corotation and how dusty material is lifted out of the midplane to obscure the star which would account for the large fraction of young low mass stars that are dippers. We propose that variations in disk orientation angle, stellar magnetic field dipole tilt axis, and disk accretion rate are underlying parameters accounting for differences in the dipper light curves. • ### Chasing Shadows: Rotation of the Azimuthal Asymmetry in the TW Hya Disk(1701.03152) Jan. 11, 2017 astro-ph.SR, astro-ph.EP We have obtained new images of the protoplanetary disk orbiting TW Hya in visible, total intensity light with the Space Telescope Imaging Spectrograph (STIS) on the Hubble Space Telescope (HST), using the newly commissioned BAR5 occulter. These HST/STIS observations achieved an inner working angle $\sim$0.2\arcsec, or 11.7~AU, probing the system at angular radii coincident with recent images of the disk obtained by ALMA and in polarized intensity near-infrared light. By comparing our new STIS images to those taken with STIS in 2000 and with NICMOS in 1998, 2004, and 2005, we demonstrate that TW Hya's azimuthal surface brightness asymmetry moves coherently in position angle. Between 50~AU and 141~AU we measure a constant angular velocity in the azimuthal brightness asymmetry of 22.7$^\circ$~yr$^{-1}$ in a counter-clockwise direction, equivalent to a period of 15.9~yr assuming circular motion. Both the (short) inferred period and lack of radial dependence of the moving shadow pattern are inconsistent with Keplerian rotation at these disk radii. We hypothesize that the asymmetry arises from the fact that the disk interior to 1~AU is inclined and precessing due to a planetary companion, thus partially shadowing the outer disk. Further monitoring of this and other shadows on protoplanetary disks potentially opens a new avenue for indirectly observing the sites of planet formation. 
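The 15.9 yr period quoted for the TW Hya shadow and the claim that it is inconsistent with Keplerian rotation follow from simple circular-motion arithmetic. A back-of-the-envelope check in Python (the 0.8 Msun stellar mass is taken from the TW Hya abstract later in this listing; Kepler's third law is used in its simplified solar-unit form, P^2 = a^3/M with P in yr, a in AU, M in Msun):

```python
import math

omega = 22.7   # deg/yr: measured angular velocity of the brightness asymmetry
m_star = 0.8   # Msun: TW Hya mass, as quoted in a later abstract in this listing
r_in = 50.0    # AU: inner radius of the range over which the motion is measured

# Period of the moving pattern, assuming rigid circular motion:
p_pattern = 360.0 / omega
print(round(p_pattern, 1))  # ~15.9 yr, matching the abstract

# Keplerian orbital period at 50 AU (P^2 = a^3 / M in yr, AU, Msun):
p_kepler = math.sqrt(r_in**3 / m_star)
print(round(p_kepler))  # ~395 yr -- far longer, so the pattern cannot be Keplerian
```

The two-orders-of-magnitude mismatch at 50 AU (and larger still at 141 AU) is why the abstract rules out Keplerian rotation and instead invokes a shadow cast by a precessing inner disk.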
• ### M Stars in the TW Hya Association: Stellar X-rays and Disk Dissipation(1603.09307) March 30, 2016 astro-ph.SR To investigate the potential connection between the intense X-ray emission from young, low-mass stars and the lifetimes of their circumstellar, planet-forming disks, we have compiled the X-ray luminosities ($L_X$) of M stars in the $\sim$8 Myr-old TW Hya Association (TWA) for which X-ray data are presently available. Our investigation includes analysis of archival Chandra data for the TWA binary systems TWA 8, 9, and 13. Although our study suffers from poor statistics for stars later than M3, we find a trend of decreasing $L_X/L_{bol}$ with decreasing $T_{eff}$ for TWA M stars wherein the earliest-type (M0--M2) stars cluster near $\log{(L_X/L_{bol})} \approx -3.0$ and then $\log{(L_X/L_{bol})}$ decreases, and its distribution broadens, for types M4 and later. The fraction of TWA stars that display evidence for residual primordial disk material also sharply increases in this same (mid-M) spectral type regime. This apparent anticorrelation between the relative X-ray luminosities of low-mass TWA stars and the longevities of their circumstellar disks suggests that primordial disks orbiting early-type M stars in the TWA have dispersed rapidly as a consequence of their persistent large X-ray fluxes. Conversely, the disks orbiting the very lowest-mass pre-MS stars and pre-MS brown dwarfs in the Association may have survived because their X-ray luminosities and, hence, disk photoevaporation rates are very low to begin with, and then further decline relatively early in their pre-MS evolution. • ### Peering into the Giant Planet Forming Region of the TW Hydrae Disk with the Gemini Planet Imager(1512.01865) Dec. 
7, 2015 astro-ph.SR, astro-ph.EP We present Gemini Planet Imager (GPI) adaptive optics near-infrared images of the giant planet-forming regions of the protoplanetary disk orbiting the nearby (D = 54 pc), pre-main sequence (classical T Tauri) star TW Hydrae. The GPI images, which were obtained in coronagraphic/polarimetric mode, exploit starlight scattered off small dust grains to elucidate the surface density structure of the TW Hya disk from 80 AU to within 10 AU of the star at 1.5 AU resolution. The GPI polarized intensity images unambiguously confirm the presence of a gap in the radial surface brightness distribution of the inner disk. The gap is centered near 23 AU, with a width of 5 AU and a depth of 50%. In the context of recent simulations of giant planet formation in gaseous, dusty disks orbiting pre-main sequence stars, these results indicate that at least one young planet with a mass 0.2 M_J could be present in the TW Hya disk at an orbital semi-major axis similar to that of Uranus. If this (proto)planet is actively accreting gas from the disk, it may be readily detectable by GPI or a similarly sensitive, high-resolution infrared imaging system. • ### Magnetic Activity of Pre-main Sequence Stars near the Stellar-Substellar Boundary(1512.01240) Dec. 3, 2015 astro-ph.SR X-ray observations of pre-main sequence (pre-MS) stars of M-type probe coronal emission and offer a means to investigate magnetic activity at the stellar-substellar boundary. Recent observations of main sequence (MS) stars at this boundary display a decrease in fractional X-ray luminosity ($L_{X}$/$L_{bol}$) by almost two orders of magnitude for spectral types M7 and later. We investigate magnetic activity and search for a decrease in X-ray emission in the pre-MS progenitors of these MS stars. We present XMM-Newton X-ray observations and preliminary results for ~10 nearby (30-70 pc), very low mass pre-MS stars in the relatively unexplored age range of 10-30 Myr. 
We compare the fractional X-ray luminosities of these 10-30 Myr old stars to younger (1-3 Myr) pre-MS brown dwarfs and find no dependence on spectral type or age suggesting that X-ray activity declines at an age later than ~30 Myr in these very low-mass stars. • ### Constraining X-ray-Induced Photoevaporation of Protoplanetary Disks Orbiting Low-Mass Stars(1510.02785) Oct. 9, 2015 astro-ph.SR Low-mass, pre-main sequence stars possess intense high-energy radiation fields as a result of their strong stellar magnetic activity. This stellar UV and X-ray radiation may have a profound impact on the lifetimes of protoplanetary disks. We aim to constrain the X-ray-induced photoevaporation rates of protoplanetary disks orbiting low-mass stars by analyzing serendipitous XMM-Newton and Chandra X-ray observations of candidate nearby (D $<$ 100 pc), young (age $<$ 100 Myr) M stars identified in the GALEX Nearby Young-Star Survey (GALNYSS). • ### A Brief History of the Study of Nearby Young Moving Groups and Their Members(1510.00741) Oct. 2, 2015 astro-ph.SR Beginning with the enigmatic (and now emblematic) TW Hya, the scrutiny of individual stars and star-disk systems has both motivated and benefitted from the identification of nearby young moving groups (NYMGs). I briefly outline the emergence of this relatively new subfield of astronomy over the past two decades, and offer a few examples illustrating how the study of NYMGs and their members enables unique investigations of pre-main sequence stellar evolution, evolved protoplanetary disks, and young exoplanets. • ### An ALMA Survey for Disks Orbiting Low-Mass Stars in the TW Hya Association(1509.04589) We have carried out an ALMA survey of 15 confirmed or candidate low-mass (<0.2M$_\odot$) members of the TW Hya Association (TWA) with the goal of detecting molecular gas in the form of CO emission, as well as providing constraints on continuum emission due to cold dust.
Our targets have spectral types of M4-L0 and hence represent the extreme low end of the TWA's mass function. Our ALMA survey has yielded detections of 1.3mm continuum emission around 4 systems (TWA 30B, 32, 33, & 34), suggesting the presence of cold dust grains. All continuum sources are unresolved. TWA 34 further shows 12CO(2-1) emission whose velocity structure is indicative of Keplerian rotation. Among the sample of known ~7-10 Myr-old star/disk systems, TWA 34, which lies just ~50 pc from Earth, is the lowest mass star thus far identified as harboring cold molecular gas in an orbiting disk. • ### A Combined Spitzer and Herschel Infrared Study of Gas and Dust in the Circumbinary Disk Orbiting V4046 Sgr(1507.05574) July 20, 2015 astro-ph.SR We present results from a spectroscopic Spitzer and Herschel mid-to-far-infrared study of the circumbinary disk orbiting the evolved (age ~12-23 Myr) close binary T Tauri system V4046 Sgr. Spitzer IRS spectra show emission lines of [Ne II], H_2 S(1), CO_2 and HCN, while Herschel PACS and SPIRE spectra reveal emission from [O I], OH, and tentative detections of H_2O and high-J transitions of CO. We measure [Ne III]/[Ne II] < 0.13, which is comparable to other X-ray/EUV luminous T Tauri stars that lack jets. We use the H_2 S(1) line luminosity to estimate the gas mass in the relatively warm surface layers of the inner disk. The presence of [O I] emission suggests that CO, H_2O, and/or OH is being photodissociated, and the lack of [C I] emission suggests any excess C may be locked up in HCN, CN and other organic molecules. Modeling of silicate dust grain emission features in the mid-infrared indicates that the inner disk is composed mainly of large (r~5 um) amorphous pyroxene and olivine grains (~86% by mass) with a relatively large proportion of crystalline silicates. 
These results are consistent with other lines of evidence indicating that planet building is ongoing in regions of the disk within ~30 AU of the central, close binary. • ### A Ring of C2H in the Molecular Disk Orbiting TW Hya(1504.05980) April 22, 2015 astro-ph.SR We have used the Submillimeter Array to image, at ~1.5" resolution, C2H (3-2) emission from the circumstellar disk orbiting the nearby (D = 54 pc), ~8 Myr-old, ~0.8 Msun classical T Tauri star TW Hya. The SMA imaging reveals that the C2H emission exhibits a ring-like morphology. Based on a model in which the C2H column density follows a truncated radial power-law distribution, we find that the inner edge of the ring lies at ~45 AU, and that the ring extends to at least ~120 AU. Comparison with previous (single-dish) observations of C2H (4-3) emission indicates that the C2H molecules are subthermally excited and, hence, that the emission arises from the relatively warm, tenuous upper atmosphere of the disk. We propose that the C2H emission most likely traces particularly efficient photo-destruction of small grains and/or photodesorption and photodissociation of hydrocarbons derived from grain ice mantles in the surface layers of the outer disk. The presence of a C2H ring in the TW Hya disk hence likely serves as a marker of dust grain processing and radial and vertical grain size segregation within the disk. • ### An Unbiased 1.3 mm Emission Line Survey of the Protoplanetary Disk Orbiting LkCa 15(1504.00061) March 31, 2015 astro-ph.SR The outer (>30 AU) regions of the dusty circumstellar disk orbiting the ~2-5 Myr-old, actively accreting solar analog LkCa 15 are known to be chemically rich, and the inner disk may host a young protoplanet within its central cavity. 
To obtain a complete census of the brightest molecular line emission emanating from the LkCa 15 disk over the 210-270 GHz (1.4 - 1.1 mm) range, we have conducted an unbiased radio spectroscopic survey with the Institute de Radioastronomie Millimetrique (IRAM) 30 meter telescope. The survey demonstrates that, in this spectral region, the most readily detectable lines are those of CO and its isotopologues 13CO and C18O, as well as HCO+, HCN, CN, C2H, CS, and H2CO. All of these species had been previously detected in the LkCa 15 disk; however, the present survey includes the first complete coverage of the CN (2-1) and C2H (3-2) hyperfine complexes. Modeling of these emission complexes indicates that the CN and C2H either reside in the coldest regions of the disk or are subthermally excited, and that their abundances are enhanced relative to molecular clouds and young stellar object environments. These results highlight the value of unbiased single-dish line surveys in guiding future high resolution interferometric imaging of disks. • ### Scattered Light from Dust in the Cavity of the V4046 Sgr Transition Disk(1503.06192) March 20, 2015 astro-ph.SR We report the presence of scattered light from dust grains located in the giant planet formation region of the circumbinary disk orbiting the ~20-Myr-old close (~0.045 AU separation) binary system V4046 Sgr AB based on observations with the new Gemini Planet Imager (GPI) instrument. These GPI images probe to within ~7 AU of the central binary with linear spatial resolution of ~3 AU, and are thereby capable of revealing dust disk structure within a region corresponding to the giant planets in our solar system. The GPI imaging reveals a relatively narrow (FWHM ~10 AU) ring of polarized near-infrared flux whose brightness peaks at ~14 AU. This ~14 AU radius ring is surrounded by a fainter outer halo of scattered light extending to ~45 AU, which coincides with previously detected mm-wave thermal dust emission. 
The presence of small grains that efficiently scatter starlight well inside the mm-wavelength disk cavity supports current models of planet formation that suggest planet-disk interactions can generate pressure traps that impose strong radial variations in the particle size distribution throughout the disk. • ### V4046 Sgr: Touchstone to Investigate Spectral Type Discrepancies for Pre-main Sequence Stars(1409.7135) Sept. 25, 2014 astro-ph.SR Determinations of the fundamental properties (e.g., masses and ages) of late-type, pre-main sequence (pre-MS) stars are complicated by the potential for significant discrepancies between the spectral types of such stars as ascertained via optical vs. near-infrared observations. To address this problem, we have obtained near-IR spectroscopy of the nearby, close binary T Tauri system V4046 Sgr AB with the NASA Infrared Telescope Facility (IRTF) SPEX spectrometer. The V4046 Sgr close binary (and circumbinary disk) system provides an important test case for spectral type determination thanks to the stringent observational constraints on its component stellar masses (i.e., ~0.9 Msun each) as well as on its age (12-21 Myr) and distance (73 pc). Analysis of the IRTF data indicates that the composite near-IR spectral type for V4046 Sgr AB lies in the range M0-M1, i.e., significantly later than the K5+K7 composite type previously determined from optical spectroscopy. However, the K5+K7 composite type is in better agreement with theoretical pre-MS evolutionary tracks, given the well-determined properties of V4046 Sgr AB. These results serve as a cautionary tale for studies that rely on near-infrared spectroscopy as a primary means to infer the ages and masses of pre-MS stars. • ### Unbiased mm-wave Line Surveys of TW Hya and V4046 Sgr: The Enhanced C2H and CN Abundances of Evolved Protoplanetary Disks(1408.5918) Aug. 
27, 2014 astro-ph.SR We have conducted the first comprehensive mm-wave molecular emission line surveys of the evolved circumstellar disks orbiting the nearby T Tauri stars TW Hya and V4046 Sgr AB. Both disks are known to retain significant residual gaseous components, despite the advanced ages of their host stars. Our unbiased broad-band radio spectral surveys of the TW Hya and V4046 Sgr disks were performed with the Atacama Pathfinder Experiment (APEX) 12 meter telescope and are intended to yield a complete census of bright molecular emission lines in the range 275-357 GHz (1.1-0.85 mm). We find that lines of 12CO, 13CO, HCN, CN, and C2H, all of which lie in the higher-frequency range, constitute the strongest molecular emission from both disks in the spectral region surveyed. The molecule C2H is detected here for the first time in both disks, as is CS in the TW Hya disk. The survey results also include the first measurements of the full suite of hyperfine transitions of CN N=3-2 and C2H N=4-3 in both disks. Modeling of these CN and C2H hyperfine complexes in the spectrum of TW Hya indicates that the emission from both species is optically thick and may originate from very cold disk regions. It furthermore appears that the fractional abundances of CN and C2H are significantly enhanced in these evolved protoplanetary disks relative to the fractional abundances of the same molecules in the environments of deeply embedded protostars. • ### Star Formation in Orion's L1630 Cloud: an Infrared and Multi-epoch X-ray Study(1311.5232) March 18, 2014 astro-ph.SR X-ray emission is characteristic of young stellar objects (YSOs) and is known to be highly variable. We investigate, via an infrared and multi-epoch X-ray study of the L1630 dark cloud, whether and how X-ray variability in young stellar objects is related to protostellar evolutionary state. 
We have analyzed 11 Chandra X-ray Observatory observations, obtained over the course of four years and totaling ~240 ks exposure time, targeting the eruptive Class I YSO V1647 Ori in L1630. We used 2MASS and Spitzer data to identify and classify IR counterparts to L1630 X-ray sources and identified a total of 52 X-ray emitting YSOs with IR counterparts, including 4 Class I sources and 1 Class 0/I source. We have detected cool (< 3 MK) plasma, possibly indicative of accretion shocks, in three classical T Tauri stars. A subsample of 27 X-ray-emitting YSOs were covered by 9 of the 11 Chandra observations targeting V1647 Ori and vicinity. For these 27 YSOs, we have constructed X-ray light curves spanning approximately four years. These light curves highlight the variable nature of pre-main sequence X-ray emitting young stars; many of the L1630 YSOs vary by orders of magnitude in count rate between observations. We discuss possible scenarios to explain apparent trends between various X-ray spectral properties, X-ray variance and YSO classification. • ### Episodic Accretion in Young Stars(1401.3368) Jan. 14, 2014 astro-ph.GA, astro-ph.SR In the last twenty years, the topic of episodic accretion has gained significant interest in the star formation community. It is now viewed as a common, though still poorly understood, phenomenon in low-mass star formation. The FU Orionis objects (FUors) are long-studied examples of this phenomenon. FUors are believed to undergo accretion outbursts during which the accretion rate rapidly increases from typically $10^{-7}$ to a few $10^{-4}$ $M_\odot$ yr$^{-1}$, and remains elevated over several decades or more. EXors, a loosely defined class of pre-main sequence stars, exhibit shorter and repetitive outbursts, associated with lower accretion rates. 
The relationship between the two classes, and their connection to the standard pre-main sequence evolutionary sequence, is an open question: do they represent two distinct classes, are they triggered by the same physical mechanism, and do they occur in the same evolutionary phases? Over the past couple of decades, many theoretical and numerical models have been developed to explain the origin of FUor and EXor outbursts. In parallel, such accretion bursts have been detected at an increasing rate, and as observing techniques improve each individual outburst is studied in increasing detail. We summarize key observations of pre-main sequence star outbursts, and review the latest thinking on outburst triggering mechanisms, the propagation of outbursts from star/disk to disk/jet systems, the relation between classical EXors and FUors, and newly discovered outbursting sources -- all of which shed new light on episodic accretion. We finally highlight some of the most promising directions for this field in the near- and long-term. • ### The GALEX Nearby Young-Star Survey(1307.3262) July 11, 2013 astro-ph.SR We describe a method that exploits data from the GALEX ultraviolet and WISE and 2MASS infrared source catalogs, combined with proper motions and empirical pre-main sequence isochrones, to identify candidate nearby, young, low-mass stars. Applying our method across the full GALEX- covered sky, we identify 2031 mostly M-type stars that, for an assumed age of 10 (100) Myr, all lie within ~150 (~90) pc of Earth. The distribution of M spectral subclasses among these ~2000 candidate young stars peaks sharply in the range M3-M4; these subtypes constitute 50% of the sample, consistent with studies of the M star population in the immediate solar neighborhood. We focus on a subset of 58 of these candidate young M stars in the vicinity of the Tucana-Horologium Association. 
Only 20 of these 58 candidates were detected in the ROSAT All-Sky X-ray Survey -- reflecting the greater sensitivity of GALEX for purposes of identifying active nearby, young stars, particularly for stars of type M4 and later. Based on statistical analysis of the kinematics and/or spectroscopic followup of these 58 M stars, we find that 50% (29 stars) indeed have properties consistent with Tuc-Hor membership, while 12 are potential new members of the Columba Association, and two may be AB Dor moving group members. Hence, ~75% of our initial subsample of 58 candidates are likely members of young (age ~10-40 Myr) stellar moving groups within 100 pc, verifying that the stellar color- and kinematics-based selection algorithms described here can be used to efficiently isolate nearby, young, low-mass objects from among the field star population. Future studies will focus on characterizing additional subsamples selected from among this list of candidate nearby, young M stars. • ### Detection of a Cool, Accretion Shock-Generated X-ray Plasma in EX Lupi During the 2008 Optical Eruption(1210.1250) Oct. 3, 2012 astro-ph.SR EX Lupi is the prototype for a class of young, pre-main sequence stars which are observed to undergo irregular, presumably accretion-generated, optical outbursts that result in a several magnitude rise of the optical flux. EX Lupi was observed to optically erupt in 2008 January, triggering Chandra ACIS ToO observations shortly thereafter. We find very strong evidence that most of the X-ray emission in the first few months after the optical outburst is generated by accretion of circumstellar material onto the stellar photosphere. Specifically, we find a strong correlation between the decreasing optical and X-ray fluxes following the peak of the outburst in the optical, which suggests that these observed declines in both the optical and X-ray fluxes are the result of declining accretion rate. 
In addition, in our models of the X-ray spectrum, we find strong evidence for a ~0.4 keV plasma component, as expected for accretion shocks on low-mass, pre-main sequence stars. From 2008 March through October, this cool plasma component appears to fade as EX Lupi returns to its quiescent level in the optical, consistent with a decrease in the overall emission measure of accretion shock-generated plasma. The overall small increase of the X-ray flux during the optical outburst of EX Lupi is similar to what was observed in previous X-ray observations of the 2005 optical outburst of the EX Lupi-type star V1118 Ori but contrasts with the large increase of the X-ray flux from the erupting young star V1647 Ori during its 2003 and 2008 optical outbursts. • ### From Bipolar to Elliptical: Simulating the Morphological Evolution of Planetary Nebulae(1107.0415) July 4, 2012 astro-ph.SR The majority of Proto-planetary nebulae (PPN) are observed to have bipolar morphologies. The majority of mature PN are observed to have elliptical shapes. In this paper we address the evolution of PPN/PN morphologies attempting to understand if a transition from strongly bipolar to elliptical shape can be driven by changes in the parameters of the mass loss process. To this end we present 2.5D hydrodynamical simulations of mass loss at the end stages of stellar evolution for intermediate mass stars. We track changes in wind velocity, mass loss rate and mass loss geometry. In particular we focus on the transition from mass loss dominated by a short duration jet flow (driven during the PPN phase) to mass loss driven by a spherical fast wind (produced by the central star of the PN). We address how such changes in outflow characteristics can change the nebula from a bipolar to an elliptical morphology. Our results show that including a period of jet formation in the temporal sequence of PPN to PN produces realistic nebular synthetic emission geometries. 
More importantly such a sequence provides insight, in principle, into the apparent difference in morphology statistics characterizing PPN and PN systems. In particular we find that while jet driven PPN can be expected to be dominated by bipolar morphologies, systems that begin with a jet but are followed by a spherical fast wind will evolve into elliptical nebulae. Furthermore, we find that spherical nebulae are highly unlikely to ever derive from either bipolar PPN or elliptical PN. • ### X-raying the Beating Heart of a Newborn Star: Rotational Modulation of High-energy Radiation from V1647 Ori(1207.0570) July 3, 2012 astro-ph.SR, astro-ph.HE We report a periodicity of ~1 day in the highly elevated X-ray emission from the protostar V1647 Ori during its two recent multiple-year outbursts of mass accretion. This periodicity is indicative of protostellar rotation at near-breakup speed. Modeling of the phased X-ray light curve indicates the high-temperature (~50 MK), X-ray-emitting plasma, which is most likely heated by accretion-induced magnetic reconnection, resides in dense (>~5e10 cm-3), pancake-shaped magnetic footprints where the accretion stream feeds the newborn star. The sustained X-ray periodicity of V1647 Ori demonstrates that such protostellar magnetospheric accretion configurations can be stable over timescales of years. • ### Suzaku Observation of Strong Fluorescent Iron Line Emission from the Young Stellar Object V1647 Ori during Its New X-ray Outburst(1207.0774) July 3, 2012 astro-ph.SR, astro-ph.HE The Suzaku X-ray satellite observed the young stellar object V1647 Ori on 2008 October 8 during the new mass accretion outburst reported in August 2008. During the 87 ksec observation with a net exposure of 40 ks, V1647 Ori showed a high level of X-ray emission with a gradual decrease in flux by a factor of 5 and then displayed an abrupt flux increase by an order of magnitude. 
Such enhanced X-ray variability was also seen in XMM-Newton observations in 2004 and 2005 during the 2003-2005 outburst, but has rarely been observed for other young stellar objects. The spectrum clearly displays emission from Helium-like iron, which is a signature of hot plasma (kT ~5 keV). It also shows a fluorescent iron Kalpha line with a remarkably large equivalent width of ~600 eV. Such a large equivalent width suggests that a part of the incident X-ray emission that irradiates the circumstellar material and/or the stellar surface is hidden from our line of sight. XMM-Newton spectra during the 2003-2005 outburst did not show a strong fluorescent iron Kalpha line, so that the structure of the circumstellar gas very close to the stellar core that absorbs and re-emits X-ray emission from the central object may have changed in between 2005 and 2008. This phenomenon may be related to changes in the infrared morphology of McNeil's nebula between 2004 and 2008. • ### 2M1155-79 (= T Cha B): A Low-mass, Wide-separation Companion to the Nearby, "Old" T Tauri Star T Cha(1202.0262) Feb. 1, 2012 astro-ph.SR The early-K star T Cha, a member of the relatively nearby (D ~ 100 pc) epsilon Cha Association, is a relatively "old" (age ~7 Myr) T Tauri star that is still sporadically accreting from an orbiting disk whose inner regions are evidently now being cleared by a close, substellar companion. We report the identification, via analysis of proper motions, serendipitous X-ray imaging spectroscopy, and followup optical spectroscopy, of a new member of the epsilon Cha Association that is very likely a low-mass companion to T Cha at a projected separation of ~38 kAU. The combined X-ray and optical spectroscopy data indicate that the companion, T Cha B (= 2M1155-79), is a weak-lined T Tauri star (wTTS) of spectral type M3 and age ~<10 Myr. 
The serendipitous X-ray (XMM-Newton) observation of T Cha B, which targeted T Cha, also yields serendipitous detections of two background wTTS in the Chamaeleon cloud complex, including one newly discovered, low-mass member of the Cha cloud pre-MS population. T Cha becomes the third prominent example of a nearby, "old" yet still actively accreting, K-type pre-MS star/disk system (the others being TW Hya and V4046 Sgr) to feature a low-mass companion at very large (12-40 kAU) separation, suggesting that such wide-separation companions may affect the conditions and timescales for planet formation around solar-mass stars. • ### X-ray Production by V1647 Ori During Optical Outbursts(1108.2534) Aug. 11, 2011 astro-ph.SR The pre-main sequence star V1647 Ori has recently undergone two optical/near-infrared (OIR) outbursts that are associated with dramatic enhancements in the stellar accretion rate. Our intensive X-ray monitoring of this object affords the opportunity to investigate whether and how the intense X-ray emission is related to pre-MS accretion activity. Our analysis of all fourteen Chandra X-ray Observatory observations of V1647 Ori demonstrates that variations in the X-ray luminosity of V1647 Ori are correlated with similar changes in the OIR brightness of this source during both eruptions (2003-2005 and 2008), strongly supporting the hypothesis that accretion is the primary generation mechanism for the X-ray outbursts. Furthermore, the Chandra monitoring demonstrates that the X-ray spectral properties of the second eruption were strikingly similar to those of the 2003 eruption.
We find that X-ray spectra obtained immediately following the second outburst - during which V1647 Ori exhibited high X-ray luminosities, high hardness ratios, and strong X-ray variability - are well modeled as a heavily absorbed (N_H ~ 4x10^22 cm^-2), single-component plasma with characteristic temperatures (kT_X ~ 2-6 keV) that are consistently too high to be generated via accretion shocks but are in the range expected for plasma heated by magnetic reconnection events. We also find that the X-ray absorbing column has not changed significantly throughout the observing campaign. Since the OIR and X-ray changes are correlated, we hypothesize that these reconnection events occur either in the accretion stream connecting the circumstellar disk to the star or in accretion-enhanced protostellar coronal activity.