Columns: url (string, 14–2.42k chars) · text (string, 100–1.02M chars) · date (string, 19 chars) · metadata (string, 1.06k–1.1k chars)
https://analystprep.com/study-notes/frm/part-2/market-risk-measurement-and-management/var-mapping/
VaR Mapping

After completing this reading you should be able to:

• Explain the principles underlying VaR mapping, and describe the mapping process.
• Explain how the mapping process captures general and specific risks.
• Differentiate among the three methods of mapping portfolios of fixed income securities.
• Summarize how to map a fixed income portfolio into positions of standard instruments.
• Describe how mapping of risk factors can support stress testing.
• Explain how VaR can be used as a performance benchmark.
• Describe the method of mapping forwards, forward rate agreements, interest rate swaps, and options.

Principles Underlying VaR Mapping and the Mapping Process

Mapping refers to the process of replacing the current values of a portfolio with risk factor exposures. More generally, it is the process of replacing each instrument by its exposures on selected risk factors. Through mapping, a complex portfolio or instrument can be broken down into the constituent elements that determine its value.

Why is risk mapping necessary? There are four main reasons:

1. Mapping is necessary when we do not have sufficient data on positions. In this regard, consider an emerging market instrument that has a very short trading record, meaning that we do not have enough data on it. In such circumstances, we might map our position to some comparable position for which we already have plenty of data and a good understanding of its risk exposure.

2. Mapping helps us to cut down the dimensionality of covariance matrices and correlations. Given a portfolio comprising n instruments, we would need to gather data on n volatilities and n(n-1)/2 correlations, resulting in a labyrinth of pieces of information. As n increases, so does the amount of information we have to collect and process. It is important to keep the dimensionality of our covariance matrix at a manageable level to avoid computational problems.

3. Mapping helps avoid rank problems in the covariance matrix. By handling a large number of risk factors that are closely correlated (or even perfectly correlated in extreme cases), we might run into rank problems with the covariance matrix and end up producing pathological estimates that lead to erroneous conclusions. To avoid such problems, it is important that we select an appropriate set of risk factors that are not closely related.

4. Mapping greatly reduces the time needed to carry out risk assessment and related calculations. By reducing a portfolio comprised of a large number of different positions to a consolidated set of risk-equivalent positions in basic risk factors, it is possible to conduct calculations at a faster speed. The only downside to such a move is that precision is lost.

Principles Underlying Mapping

The principles underlying risk mapping can be summarized as follows:

• VaR mapping helps us to aggregate risk factors in situations where considering each position separately is computationally intensive.
• Mapping is useful for measuring changes over time. When managing the risk attached to bonds, for instance, risk exposure can be mapped to spot yields that reflect the current position.
• Mapping can be quite useful when historical data is not available.

Mapping Process

Step 1: Mark all positions to market value (in current dollars or reference currency).
This involves establishing the current market value of all positions.

Step 2: Identify common risk factors for different investment positions. A good example of a common risk factor could be a specific exchange rate, say the euro versus the dollar.

Step 3: Allocate the market value of each position/instrument to the risk factors. Figure 1 illustrates this process, where five instruments are mapped onto three risk factors. The market value of position number 1, for example, is allocated to the risk exposures in the first row, $$x_{1,1} , x_{1,2}, \text{ and } x_{1,3}$$.

Step 4: Construct a risk factor distribution and input all data into the risk model.

Step 5: Sum the risk exposures in each column, then create a vector consisting of three risk exposures.

Figure 2: Risk Exposure Vectors

$$\begin{array}{c|c|c|c|c} { \textbf{Investment}/ } & {\textbf{Market value}} & {\textbf{Risk Factor 1}} & {\textbf{Risk Factor 2}} & {\textbf{Risk Factor 3}} \\ {\textbf{Position}} & {} & {} & {} & {} \\ \hline {1} & {\text{MV}_1} & {x_{11}} & {x_{12}} & {x_{13}} \\ \hline {2} & {\text{MV}_2} & {x_{21}} & {x_{22}} & {x_{23}} \\ \hline {3} & {\text{MV}_3} & {x_{31}} & {x_{32}} & {x_{33}} \\ \hline {4} & {\text{MV}_4} & {x_{41}} & {x_{42}} & {x_{43}} \\ \hline {5} & {\text{MV}_5} & {x_{51}} & {x_{52}} & {x_{53}} \\ \hline {\text{Total portfolio}} & {\text{MV}} & {x_1=\sum_{i=1}^5 x_{i1}} & {x_2=\sum_{i=1}^5 x_{i2}} & {x_3=\sum_{i=1}^5 x_{i3}} \\ \end{array}$$

The five positions above could be any five instruments, say, forward contracts on the same currency but with different maturities. Through mapping, these positions can be replaced by exposures on three risk factors only – factors 1, 2, and 3.

The idea of risk mapping can be seen in William Sharpe’s diagonal model, where he attempts to simplify risk measurement in a portfolio made up of many stock positions. The model decomposes individual stock return movements into two components: a common index component and an idiosyncratic component. The latter disappears as more and more positions are added to the portfolio, leaving the common index component as the main driver of risk.

How the Mapping Process Captures General and Specific Risks

Just how many general (primitive) risk factors do we work with? This is the key question as we engage in any mapping exercise. One or two factors can be convenient to work with, and the process takes less time. Having more risk factors may lead to a better approximation of the portfolio’s risk exposure, but there is a downside: as the number of risk factors increases, the model becomes more complex, and risk modeling takes more time. Therefore, the choice of the set of primitive risk factors should reflect an appropriate tradeoff between a better-quality approximation and faster processing.

Perhaps most important is to recognize that the choice and number of general risk factors will directly affect the size of specific risks. Specific risks are the risks that affect specific assets in the portfolio. They are issuer-specific, as opposed to market-related or general risk. To demonstrate just how the choice and number of general risk factors affect the size of specific risks, consider a portfolio of bonds with different maturities, ratings, terms, and denominations. Let’s say we start with duration as the only risk factor. In this case, there will be a significant proportion of specific risk. If we add another risk factor, say, for credit risk, we would expect the amount of specific risk to reduce.
If we add yet another factor for currency risk, we would chip away at specific risk even further.

Example: Let’s assume we have a portfolio of N stocks and map each stock to the market index, which we define as our primitive risk factor. The risk exposure, $$\beta_i$$, is computed by regressing the return of stock i on the market index return using the following equation:

$$R_i=\alpha_i+\beta_i R_M+\epsilon_i$$

Ignoring the first term, $$\alpha_i$$, which does not contribute to risk, the portfolio return, given that the relative weight of each stock is $$w_i$$, is:

$$R_p=\sum_{i=1}^N w_i R_i=\sum_{i=1}^N w_i \beta_i R_M+\sum_{i=1}^N w_i \epsilon_i$$

Aggregating all risk exposures, $$\beta_i$$, based on the market weight of each position gives us the portfolio risk exposure:

$$\beta_p=\sum_{i=1}^N w_i \beta_i$$

We can now decompose the variance of the portfolio return as:

$$V(R_p )=\beta_p^2\times V(R_M )+\sum_{i=1}^N w_i^2 \sigma_{\epsilon,i}^2$$

The first component of the decomposed variance equation shown above is the general market risk. The second component is the specific risk. It can be shown that if the portfolio is equally weighted, i.e., $$w_i=w=1/N$$, and if all residual variances are the same, then the second component (specific risk) tends to zero as N grows. The only risk that remains is the general market risk, consisting of the beta squared times the variance of the market:

$$V(R_p ) \rightarrow \beta_p^2\times V(R_M )$$

Conclusion: A greater number of primitive or general market risk factors should create less residual or specific risk for a given fixed amount of total risk. It follows that we can ignore specific risk in large, well-diversified portfolios. The mapping approach replaces a dollar amount of $$x_i$$ in stock i by a dollar amount of $$x_i \beta_i$$ on the index:

$$x_i \text{ on stock } i\rightarrow x_i \beta_i \text{ on index}$$

Methods of Mapping Portfolios of Fixed Income Securities

The three methods of mapping for fixed-income securities are (1) principal mapping, (2) duration mapping, and (3) cash flow mapping.

1. Principal mapping. In this method, bond risk is associated with the maturity of the principal payment only. In other words, it only looks at the risk of repayment of the principal amount, ignoring all intervening coupon payments. One factor is chosen that corresponds to the average maturity of the portfolio.

2. Duration mapping. With this method, bond risk is mapped to a zero-coupon bond with maturity equal to the bond duration. We calculate VaR by using the risk level of the zero-coupon bond that equals the duration of the portfolio. The problem here is that it may be quite difficult to calculate the risk level that exactly matches the duration of the portfolio.

3. Cash flow mapping. In this method, the risk of a fixed-income instrument is decomposed into the risk of each of the bond cash flows. The present values of all cash flows are mapped onto the risk factors for zeros of the same maturities.

How to Map a Fixed Income Portfolio into Positions of Standard Instruments

In this learning outcome, we are going to illustrate how to calculate principal, duration, and cash flow mapping VaRs. We will use a portfolio made up of just two assets – a one-year bond and a five-year bond.
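Before working through the bond example, the diagonal-model decomposition above is easy to illustrate numerically. The short sketch below uses made-up betas, weights, residual volatilities, and market variance (none of these numbers come from the reading) to show how portfolio variance splits into a general and a specific component:

```python
import numpy as np

# Illustrative inputs (not from the reading): betas, weights, and residual
# standard deviations for N = 4 stocks, plus the market variance V(R_M).
betas = np.array([0.8, 1.1, 1.3, 0.9])
weights = np.array([0.25, 0.25, 0.25, 0.25])   # equally weighted
resid_sd = np.array([0.20, 0.25, 0.15, 0.30])  # sigma_{eps,i}
var_market = 0.04

beta_p = weights @ betas                        # portfolio beta
general_risk = beta_p**2 * var_market           # beta_p^2 x V(R_M)
specific_risk = np.sum(weights**2 * resid_sd**2)

print(f"portfolio beta:      {beta_p:.3f}")
print(f"general market risk: {general_risk:.5f}")
print(f"specific risk:       {specific_risk:.5f}")  # shrinks as N grows
```

Increasing N (with weights 1/N) drives the specific component toward zero, which is exactly the diversification argument made above.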
The Concept of Bond Returns VaR (VaR Percentages)

When dealing with zero-coupon bonds, the percentage change in price can be approximated as the product of negative modified duration and the change in yield:

$$\cfrac {dp}{p}= -D\times d(y)$$

Intuitively, we can interpret this if we recall the concept of duration. The equation tells us that if the yield changes in a given direction, then the bond price will change by a percentage in the opposite direction that depends on the duration value. This key relationship can be modified further to give what we call the returns VaR:

$$VaR\left(\cfrac {dp}{p} \right)=|D|\times VaR(d(y))$$

The returns VaR is thus the product of the yield VaR and the modified duration. Now let’s go back to our two-bond portfolio.

Example: Suppose a portfolio consists of two par value bonds.

Bond 1: market value = $100 million; coupon rate = 4%; maturity = 1 year
Bond 2: market value = $100 million; coupon rate = 6%; maturity = 5 years

The yields, yield VaRs, durations, and returns VaRs (or VaR percentages) for zero-coupon bonds with maturities ranging from one to five years (at the 95% confidence level) are as follows:

Figure 3: Returns VaR for Zero-coupon Bonds (at 95% Confidence)

$$\begin{array}{c|c|c|c|c|c} \textbf{Maturity (yrs)} & \textbf{Yield} & \textbf{Yield VaR} & \textbf{Mac. dur. (yrs)} & \textbf{Mod. dur. (yrs)} & \textbf{Returns VaR (\%)} \\ \hline {1} & {5.83\%} & {0.497\%} & {1.0} & {0.945} & {0.4697} \\ \hline {2} & {5.71\%} & {0.522\%} & {2.0} & {1.892} & {0.9876} \\ \hline {3} & {5.81\%} & {0.523\%} & {3.0} & {2.835} & {1.4827} \\ \hline {4} & {5.89\%} & {0.522\%} & {4.0} & {3.778} & {1.9721} \\ \hline {5} & {5.96\%} & {0.514\%} & {5.0} & {4.719} & {2.4256} \\ \end{array}$$

We calculate the returns VaR as:

$$\text{Returns VaR} = \text{Mod. duration} \times \text{Yield VaR}$$

For example, at 4 years, the returns VaR = 3.778 × 0.522% = 1.9721%.

1. Principal mapping VaR

As discussed before, principal mapping only considers the timing of the redemption of the bonds. In this case, the weighted average life of our portfolio is three years [= (1 + 5)/2]. Thus, we compute the VaR under the principal method as the returns VaR at 3 years times the market value of the portfolio:

\begin{align*} \text{Principal mapping VaR} & = \text{market value of portfolio} \times \text{returns VaR} \\ & = 200 \text{ million} \times 1.4827\% = 2.9654 \text{ million} \\ \end{align*}

2. Duration mapping VaR

We replace the portfolio by a zero-coupon bond with maturity equal to the duration of the portfolio. So the first step is to determine the duration of the portfolio. This is the sum of time, t, multiplied by the present value of the cash flows at t, divided by the present value of all cash flows.
Figure 4: Duration of Portfolio

$$\begin{array}{c|c|c|c|c|c} \textbf{Year (t)} & \textbf{CF, 5-yr bond} & \textbf{CF, 1-yr bond} & \textbf{Spot rate} & \bf{\text{PV}(\text{CF})} & \bf {\text t \times \text {PV}(\text{CF})} \\ \hline {1} & {6} & {104} & {4.000\%} & {105.77} & {105.77} \\ \hline {2} & {6} & {0} & {4.618\%} & {5.48} & {10.96} \\ \hline {3} & {6} & {0} & {5.192\%} & {5.15} & {15.45} \\ \hline {4} & {6} & {0} & {5.716\%} & {4.80} & {19.20} \\ \hline {5} & {106} & {0} & {6.112\%} & {78.79} & {393.95} \\ \hline \text{Total} & {} & {} & {} & {200} & {545.33} \\ \end{array}$$

$$\text{Portfolio duration} =\cfrac {545.33}{200} = 2.7267 \text{ years}$$

Next, we interpolate the returns VaR for a zero-coupon bond with a maturity of 2.7267 years. As shown in Figure 3, which we reproduce below, the returns VaRs for two-year and three-year zero-coupon bonds were 0.9876% and 1.4827%, respectively. Therefore, the VaR we are looking for lies somewhere between these two.

Figure 5: Returns VaRs

$$\begin{array} {c|c} \bf{\text{Maturity } (\text{yrs})} & \bf {\text{Returns VaR}(\%)} \\ \hline {1} & {0.4697} \\ \hline {2} & {0.9876} \\\hline {3} & {1.4827} \\\hline {4} & {1.9721} \\\hline {5} & {2.4256} \\\end{array}$$

$$\text{VaR} = 0.9876 + (1.4827 - 0.9876) \times (2.7267 - 2) = 0.9876 + (0.4951 \times 0.7267) = 1.3474\%$$

At this point, we have what we need to compute the duration mapping VaR using the interpolated returns VaR for a zero-coupon bond with a 2.7267-year maturity:

$$\text{Duration mapping VaR} = 200 \text{ million} \times 1.3474\% = 2.6948 \text{ million}$$

3. Cash flow mapping

To calculate the VaR using cash flow mapping, we map the present value of the cash flows onto the risk factors for zeros of the same maturities and include the inter-maturity correlations. Column 2 in Figure 6 provides the present value of cash flows as computed in Figure 4. Column 3 multiplies the present values of the cash flows by the returns VaRs of the zero-coupon bonds.

Figure 6: Cash flow mapping

$$\begin{array}{|ccc|ccccc|c|} \hline {} & {} & {} & {} & \textbf{Correlation} & \textbf{matrix } \textbf{R} & {} & {} & {} \\ \hline \textbf{Year} & \textbf{x} & \bf{\text x\times \text V} & \bf{1 \text Y} & \bf{2\text Y} & \bf{3 \text Y} & \bf{4 \text Y} & \bf{5 \text Y} & \bf{x\Delta \text{VaR}} \\ \hline {1} & {105.77} & {0.4968} & {1} & {0.897} & {0.886} & {0.866} & {0.855} & {1.1570} \\ {2} & {5.48} & {0.05412} & {0.897} & {1} & {0.991} & {0.976} & {0.966} & {0.1361} \\ {3} & {5.15} & {0.07636} & {0.886} & {0.991} & {1} & {0.994} & {0.988} & {0.1949} \\ {4} & {4.80} & {0.09466} & {0.866} & {0.976} & {0.994} & {1} & {0.998} & {0.2424} \\ {5} & {78.79} & {1.9111} & {0.855} & {0.966} & {0.988} & {0.998} & {1} & {4.8888} \\ \hline {\text{Undiversified VaR}} & {} & {= 2.633} & {} & {} & {} & {} & {} & {6.6192} \\ \hline {} & {} & {} & \text{Diversified VaR} & {=\sqrt {6.6192}} & {=2.5728} & {} & {} & {} \\ \hline \end{array}$$

If the five zero-coupon bonds were all perfectly correlated, the undiversified VaR could be calculated as follows:

$$\text{Undiversified VaR}=\sum_{i=1}^N |x_i|\times V_i$$

In this case, the undiversified VaR is computed as the sum of the third column: 2.633.
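The Figure 6 computations are easy to reproduce in code. The sketch below takes x, V, and R straight from the table and computes both the undiversified VaR (the sum of column three) and the diversified VaR (the matrix formula is spelled out in the next paragraph):

```python
import numpy as np

x = np.array([105.77, 5.48, 5.15, 4.80, 78.79])               # PV of cash flows
V = np.array([0.4697, 0.9876, 1.4827, 1.9721, 2.4256]) / 100  # returns VaR
R = np.array([[1.000, 0.897, 0.886, 0.866, 0.855],
              [0.897, 1.000, 0.991, 0.976, 0.966],
              [0.886, 0.991, 1.000, 0.994, 0.988],
              [0.866, 0.976, 0.994, 1.000, 0.998],
              [0.855, 0.966, 0.988, 0.998, 1.000]])

xV = x * V
print(f"undiversified VaR: {xV.sum():.3f}")              # ~2.633
print(f"diversified VaR:   {np.sqrt(xV @ R @ xV):.4f}")  # ~2.5728
```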
If the five zero-coupon bonds are imperfectly correlated, the diversified VaR is calculated as follows:

$$\text{Diversified VaR}=\alpha \sqrt { x' \Sigma x } = \sqrt {(x\times V)' R(x\times V) }$$

where x is the vector of present values of cash flows, V is the vector of returns VaRs for the zero-coupon bonds, and R is the correlation matrix. The last column in Figure 6 summarizes the computations for the matrix algebra. The diversified VaR is given by the square root of the sum of this column. Calculating portfolio VaR using the cash flow mapping approach results in the most precise estimate, but it is computationally intensive because it incorporates the correlations between the zero-coupon bonds.

How Mapping of Risk Factors can Support Stress Testing

Stress testing explores how a portfolio would react to small or more drastic changes in market conditions. For instance, we might want to find out how the portfolio would be affected if each bond in the portfolio decreased by its VaR. Recall that if we assume that all bonds in the portfolio are perfectly correlated, the portfolio VaR equals the undiversified VaR (the sum of individual VaRs). This presents an opportunity for conducting stress tests. Instead of calculating the undiversified VaR directly, we could reduce each zero-coupon value by its respective VaR and then revalue the portfolio. The difference between the revalued portfolio and the original portfolio value should be equal to the undiversified VaR.

Let’s use our two-bond portfolio to demonstrate how we can stress test the VaR measurement, making the key assumption that all zeros are perfectly correlated. To stress test, first we need to calculate present value factors.

Figure 7: Stress Testing

$$\begin{array}{c|c|c|c|c|c|c|c} \textbf{Year (t)} & \textbf{Cash flows} & \textbf{Spot rate} & \textbf{Disc. factor} & \textbf{PV cash flow} & \textbf{Returns VaR (\%)} & \textbf{New disc. factor} & \textbf{New PV of cash flows} \\ \hline {1} & {110} & {4.000\%} & {0.9615} & {105.77} & {0.4697} & {0.9570} & {105.27} \\ \hline {2} & {6} & {4.618\%} & {0.9137} & {5.48} & {0.9876} & {0.9047} & {5.43} \\ \hline {3} & {6} & {5.192\%} & {0.8591} & {5.15} & {1.4827} & {0.8464} & {5.08} \\ \hline {4} & {6} & {5.716\%} & {0.8006} & {4.80} & {1.9721} & {0.7848} & {4.71} \\ \hline {5} & {106} & {6.112\%} & {0.7433} & {78.79} & {2.4256} & {0.7253} & {76.88} \\ \hline \text{Total} & {} & {} & {} & {200} & {} & {} & {197.37} \\ \end{array}$$

For example, from the table, the present value factor for a one-year zero-coupon bond discounted at 4.000% is 0.9615. Given the returns VaR of 0.4697%, a 95% probability move would be for the bond to fall by its VaR to 0.9570 [= 0.9615 × (1 − 0.4697%)]. The last column then finds the present value of the portfolio’s cash flows using the VaR-adjusted present value factors ($105.27 = 0.9570 × $110). We do this for all the bonds. If all bonds fall by their respective returns VaR, the new value of the portfolio is $197.37 million. This is $2.63 million less than the original value. If you recall, this is equivalent to the undiversified VaR we computed earlier through matrix multiplication.

How VaR can be used as a Performance Benchmark

To benchmark a portfolio, we measure the VaR of the portfolio relative to the VaR of a benchmark.
The VaR of the deviation between the two portfolios is referred to as the tracking error VaR. The difference arises because it is possible to construct portfolios that match the risk factors of a benchmark portfolio but have either a higher or a lower VaR. If x is the vector position of the portfolio and $$x_0$$ the vector position of the index, then the tracking error VaR is given by:

$$\text{Tracking error VaR}=\alpha \sqrt{(x-x_0 )' \Sigma (x-x_0 ) }$$

If the tracking error VaR is $y, then the maximum deviation between the index and the portfolio is $y.

Mapping Forwards, Forward Rate Agreements, Interest Rate Swaps, and Options

To map complex or esoteric instruments, it is important to decompose the instrument into two or more constituent instruments.

Forwards: A long position in a forward contract on the euro has three building blocks:
1. A short position in a U.S. Treasury bill.
2. A long position in a one-year euro bill.
3. A long position in the euro spot market.

Forward rate agreements: An FRA is equivalent to a portfolio long in a zero-coupon bond of one maturity and short in a zero-coupon bond of a different maturity. Thus it is possible to map an FRA and estimate its VaR by treating it as a long-short combination of two zeros of different maturities.

Vanilla interest rate swap: A vanilla interest-rate swap is equivalent to a portfolio that is long a fixed-coupon bond and short a floating-rate bond, or vice versa.

Options: A change in option price/value can be approximated by taking partial derivatives. A long position in an option can be split into two building blocks:
• A long position in the underlying stock equal to delta ($$\Delta$$) shares.
• A loan financing the stock position, equal to [(delta × share price) − value of call].

FX forwards: A foreign-exchange forward is the equivalent of a long position in a foreign-currency zero-coupon bond and a short position in a domestic-currency zero-coupon bond, or vice versa.

Question 1

Examine the risk of a 1-year forward contract to purchase 100 million euros in exchange for $122.59 million. The euro spot risk factor is estimated at $1.4422, the long EUR bill yield at 2.34%, and the short USD bill yield at 3.14%.

1. 7.98%
2. 22.06%
3. 20.04%
4. 43.97%

The correct answer is B.

The EUR forward risk factor is:

$$\frac { 122.59 }{ 100 } =1.2259$$

Recall that:

$${ f }_{ t }={ S }_{ t }{ e }^{ -y\tau }-K{ e }^{ -r\tau }$$

Therefore:

$${ f }_{ t }=1.4422\times \frac { 1 }{ 1+{ 2.34 }/{ 100 } } -1.2259\times \frac { 1 }{ 1+{ 3.14 }/{ 100 } } =1.4092 - 1.1886 = 0.2206 = 22.06\%$$

Question 2

A bank has a cash flow decomposition with a duration of 5 years. Given that the VaR of the index at the 95% confidence level is $4.33 million, with a tracking error of $2.56 million, calculate the variance improvement relative to the original index.

1. 86.09%
2. 65.05%
3. 34.95%
4. 2.86%

The correct answer is B.

Recall that the variance improvement is given by:

$$1-{ \left( \frac { \text{Tracking error} }{ \text{Absolute risk of index} } \right) }^{ 2 } = 1-{ \left( \frac { 2.56 }{ 4.33 } \right) }^{ 2 } = 0.6505 = 65.05\%$$

Question 3

Calculate the current forward rate that will set the contract value to zero, given that the spot price of 1 unit of the underlying cash asset is $5.9 million, the domestic risk-free rate is 0.025, $$\tau =0.1$$, and the income flow rate y is 2.23.

1. 1.5249
2. 52.4911
3. 53.6855
4.
1.4893

The correct answer is D.

$${ F }_{ t }={ S }_{ t }{ e }^{ -y\tau }{ e }^{ r\tau }$$

We know that $${ S }_{ t }=5.9$$, $$r = 0.025$$, $$y = 2.23$$, and $$\tau =0.1$$.

$${ F }_{ t }={ 5.9 }^{ 2.23\times 0.1 }\times { e }^{ 0.025\times 0.1 } = 1.4893$$
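As a quick numerical check of Question 1’s forward decomposition (a sketch; it uses simple one-year discounting, exactly as in the printed solution):

```python
spot = 1.4422        # EUR spot risk factor ($/EUR)
K = 122.59 / 100     # EUR forward risk factor implied by the contract
eur_rate = 0.0234    # long EUR bill
usd_rate = 0.0314    # short USD bill

f = spot / (1 + eur_rate) - K / (1 + usd_rate)
print(f"{f:.4f}")    # ~0.2206, i.e., 22.06%
```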
2020-02-20 12:27:12
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7072387933731079, "perplexity": 1226.4958097122324}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875144722.77/warc/CC-MAIN-20200220100914-20200220130914-00485.warc.gz"}
https://www.physicsforums.com/threads/proof-that-a-certain-semigroup-is-also-a-group.423441/
# Proof that a certain semigroup is also a group

1. Aug 20, 2010

### dndod1

1. The problem statement, all variables and given/known data

Let S be a finite multiplicative semigroup in which these 2 cancellation laws hold. For all a, x, y $$\in$$ S, a*x = a*y implies x = y, and for all a, x, y $$\in$$ S, x*a = y*a implies that x = y. Show that (S, *) is a group.

For given a $$\in$$ S, let $$\lambda_a$$: S $$\rightarrow$$ S, s $$\mapsto$$ a*s.

a) Show that $$\lambda_a$$ is 1-1 using the cancellation laws. Deduce that it is also onto. Thus show that there is an element $$e_a$$ such that $$\lambda_a(e_a) = a$$.

b) Show that $$e_a * e_a = e_a$$ by using a*$$e_a$$ = a twice, associativity, and then right cancellation.

c) Show that $$e_a$$ acts as an identity, using these solutions, for any b $$\in$$ S:
e_a*b = y
e_a*(e_a*b) = e_a*y
(e_a*e_a)*b = e_a*y
b = y
and
b*e_a = y
(b*e_a)*e_a = y*e_a
b*(e_a*e_a) = y*e_a
b = y

d) Now show that any b $$\in$$ S has a right inverse using the onto property of the function $$\lambda_a$$. This element, call it $$b_r^{-1}$$, itself has a right inverse using the same logic. Call it c. By examining the product b * $$b_r^{-1}$$ * c in two ways, show that b = c.

e) Hence obtain that the right inverse $$b_r^{-1}$$ is an inverse of b.

f) Conclude logically that (S, *) is a group.

2. Relevant equations

The group axioms:
G1 Associativity
G2 Identity element: e*g = g*e = g
G3 Inverse for each element: g^-1 * g = e

3. The attempt at a solution

I haven't really got any idea how to start. I only did this before I got stuck: outlined the group axioms as above and stated that, as (S, *) is defined as a semigroup, * is associative; therefore G1 has been satisfied.

a) Did not know where to begin, as I did not know how to go about showing 1-1 and onto.

b) Show $$e_a * e_a = e_a$$ given $$a * e_a = a$$. I don't know whether to work with just the LHS or the whole thing and do the same operation to both sides. I have tried the latter:
a * e_a * e_a = a * e_a
(a * e_a) * e_a = a * e_a
a * e_a = a
a = a ????????????

I am very, very lost! Please help if you can.

2. Aug 20, 2010

### Office_Shredder

Staff Emeritus

Ok, how to show the map in a) is 1-1: Suppose there are two elements s and r such that a*s = a*r. If you can show that s = r then you've proven it's 1-1. You should be able to see how to use the cancellation law to prove that s = r.

3. Aug 21, 2010

### dndod1

Thanks very much for getting me started. I have shown that the function is 1-1 and that it is onto. I still do not know how to tackle the last part of question a), as I am confused by the lambda notation: "Thus show that there is an element $$e_a$$ for each a, such that $$\lambda_a(e_a) = a$$."

My understanding is that the function $$\lambda_a$$ takes $$e_a$$ and maps it to a. Is that equivalent to saying that $$e_a$$ is the identity element for a? Even if that's correct, I am unsure of how to tackle it! Many thanks again.
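Not part of the original thread, but one way to build intuition for the result is to verify it by brute force on a small Cayley table. The sketch below (my own illustration; the table is just Z_3 under addition, written multiplicatively) checks the two cancellation laws, then confirms that an identity and inverses exist:

```python
from itertools import product

# Cayley table for a finite semigroup on elements 0..n-1
# (here Z_3 under addition, which satisfies both cancellation laws).
table = [[(i + j) % 3 for j in range(3)] for i in range(3)]
n = len(table)

def cancellative(t):
    # a*x = a*y implies x = y, and x*a = y*a implies x = y
    return all(not (x != y and (t[a][x] == t[a][y] or t[x][a] == t[y][a]))
               for a, x, y in product(range(n), repeat=3))

def identity_element(t):
    for e in range(n):
        if all(t[e][b] == b and t[b][e] == b for b in range(n)):
            return e
    return None

assert cancellative(table)
e = identity_element(table)
assert e is not None                                                   # identity
assert all(any(table[b][c] == e for c in range(n)) for b in range(n))  # inverses
print("group axioms hold; identity =", e)
```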
2017-08-22 23:29:00
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8110997676849365, "perplexity": 1148.8214657951773}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886116921.7/warc/CC-MAIN-20170822221214-20170823001214-00693.warc.gz"}
https://web2.0calc.com/questions/algebra_18902
# Algebra

Suppose that for some a, b, c we have a + b + c = 1, ab + ac + bc = abc = -1. What is a^3 + b^3 + c^3?

Jul 29, 2022

### 2 Answers

#1

Note that

$$a^3 + b^3 + c^3 = \left(\left(a+b+c\right)^2-3ab-3bc-3ac\right)\left(a+b+c\right)+3abc$$

(found it off the internet)

However, we can rewrite this into something easier:

$$\left(\left(a+b+c\right)^2-3(ab+bc+ac)\right)\left(a+b+c\right)+3abc$$

Can you take it from here?

Jul 29, 2022

#2

Builderboi has a great way to solve it, but sometimes you don't have those formulas memorized. Use polynomial construction instead.

a + b + c = 1
ab + ac + bc = -1
abc = -1

P(x) = x^3 - x^2 - x + 1

a, b, c are the roots of this polynomial, so:

a^3 = a^2 + a - 1
b^3 = b^2 + b - 1
c^3 = c^2 + c - 1

a^3 + b^3 + c^3 = (a^2 + b^2 + c^2) + (a + b + c) - 3

(a + b + c)^2 - 2(ab + ac + bc) = a^2 + b^2 + c^2 = 3

a^3 + b^3 + c^3 = 3 + 1 - 3 = 1.

Jul 31, 2022
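A quick numerical check of the polynomial construction in the second answer (a sketch; the roots of x^3 - x^2 - x + 1 come out as 1, 1, -1):

```python
import numpy as np

roots = np.roots([1, -1, -1, 1])               # x^3 - x^2 - x + 1
print(np.round(roots.real, 6))                 # [1, 1, -1] up to ordering
print(round(float(np.sum(roots**3).real), 6))  # 1.0 = a^3 + b^3 + c^3
```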
2022-08-09 01:16:49
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.720573902130127, "perplexity": 366.46753459252955}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882570879.37/warc/CC-MAIN-20220809003642-20220809033642-00740.warc.gz"}
https://handwiki.org/wiki/2D_Z-transform
# 2D Z-transform

The 2D Z-transform, similar to the Z-transform, is used in multidimensional signal processing to relate a two-dimensional discrete-time signal to the complex frequency domain, in which the 2D surface in 4D space that the Fourier transform lies on is known as the unit surface or unit bicircle.[1] The 2D Z-transform is defined by

$\displaystyle{ X_z(z_1,z_2) = \sum_{n_1=0}^{\infty}\sum_{n_2=0}^{\infty} x(n_1,n_2) z_1^{-n_1} z_2^{-n_2} }$

where $\displaystyle{ n_1,n_2 }$ are integers and $\displaystyle{ z_1,z_2 }$ are represented by the complex numbers:

$\displaystyle{ z_1 = Ae^{j\phi_1} = A(\cos{\phi_1}+j\sin{\phi_1})\, }$

$\displaystyle{ z_2 = Be^{j\phi_2} = B(\cos{\phi_2}+j\sin{\phi_2})\, }$

The 2D Z-transform is a generalized version of the 2D Fourier transform. It converges for a much wider class of sequences, and is a helpful tool in allowing one to draw conclusions on system characteristics such as BIBO stability. It is also used to determine the connection between the input and output of a linear shift-invariant system, such as manipulating a difference equation to determine the system's transfer function.

## Region of Convergence (ROC)

The Region of Convergence is the set of points in complex space where the transform converges:

$\displaystyle{ ROC = \{(z_1,z_2) : |X_z(z_1,z_2)| \lt \infty\} }$

In the 1D case this is represented by an annulus, and the 2D representation of an annulus is known as the Reinhardt domain.[2] From this one can conclude that only the magnitude, and not the phase, of a point at $\displaystyle{ (z_1,z_2) }$ will determine whether or not it lies within the ROC. In order for a 2D Z-transform to fully define the system it is meant to describe, the associated ROC must also be known. Conclusions can be drawn on the Region of Convergence based on the region of support of the original sequence $\displaystyle{ x(n_1,n_2) }$.

### Finite Support Sequences

A sequence with a region of support that is bounded by an area $\displaystyle{ (M_1,M_2) }$ within the $\displaystyle{ (n_1,n_2) }$ plane can be represented in the z-domain as:

$\displaystyle{ X_z(z_1,z_2) = \sum_{n_1=0}^{M_1}\sum_{n_2=0}^{M_2} x(n_1,n_2) z_1^{-n_1} z_2^{-n_2} }$

Because the bounds on the summation are finite, as long as z1 and z2 are finite, the 2D Z-transform will converge for all values of z1 and z2, except in some cases where z1 = 0 or z2 = 0, depending on $\displaystyle{ x(n_1,n_2) }$.

### First Quadrant and Wedge Sequences

Sequences with a region of support in the first quadrant of the $\displaystyle{ (n_1,n_2) }$ plane have the following 2D Z-transform:

$\displaystyle{ X_z(z_1,z_2) = \sum_{n_1=0}^{\infty}\sum_{n_2=0}^{\infty} x(n_1,n_2) z_1^{-n_1} z_2^{-n_2} }$

From the transform, if a point $\displaystyle{ (z_{01},z_{02}) }$ lies within the ROC then any point with magnitudes $\displaystyle{ |z_1| \geq |z_{01}|, |z_2| \geq |z_{02}| }$ also lies within the ROC. Due to these conditions, the boundary of the ROC must have a negative slope or a slope of 0.
This can be assumed because, if the slope were positive, there would be points that meet the previous condition but also lie outside the ROC.[2] For example, the sequence:

$\displaystyle{ x(n_1,n_2) = a^{n_1}\delta(n_1-n_2)u[n_1,n_2] }$

has the Z-transform

$\displaystyle{ X_z(z_1,z_2) = \frac{1}{1 - az_1^{-1}z_2^{-1}} }$

which converges only for $\displaystyle{ |a| \lt |z_{01}||z_{02}| }$, i.e. $\displaystyle{ \ln(|a|) \lt \ln(|z_{01}|) + \ln(|z_{02}|) }$. So the boundary of the ROC is simply a line with a slope of -1 in the $\displaystyle{ (\ln(z_{01}),\ln(z_{02})) }$ plane.[2]

In the case of a wedge sequence, the region of support is less than that of a half plane. Suppose such a sequence has a region of support over the first quadrant and the region in the second quadrant where $\displaystyle{ n_{1} \geq -Ln_{2} }$. If $\displaystyle{ l }$ is defined as $\displaystyle{ l = n_{1}+Ln_{2} }$, the new 2D Z-transform becomes:

$\displaystyle{ X_z(z_1,z_2) = \sum_{l=0}^{\infty}\sum_{n_2=0}^{\infty} x(l-Ln_2,n_2) z_1^{-l+Ln_2} z_2^{-n_2} }$

(Figure: sequence with region of support over a wedge and its corresponding ROC.)

This converges if:

$\displaystyle{ |z_1| \geq |z_{01}|, \quad |z_1^{-L}z_2| \geq |z_{01}^{-L}z_{02}| }$

These conditions can then be used to determine constraints on the slope of the boundary of the ROC in a similar manner to that of a first quadrant sequence.[2] By doing this one gets:

$\displaystyle{ \ln(|z_1|) \geq \ln(|z_{01}|) }$

and

$\displaystyle{ \ln(|z_2|) \geq L\ln(|z_{1}|)+(\ln(|z_{02}|)-L\ln(|z_{01}|)) }$

### Sequences with Region of Support in all Quadrants

A sequence with an unbounded region of support can have an ROC of any shape, which must be determined from the sequence $\displaystyle{ x(n_1,n_2) }$. A few examples are listed below:

$\displaystyle{ x(n_1,n_2) = e^{(-n_1^{2}-n_2^{2})} }$

will converge for all $\displaystyle{ z_1,z_2 }$, while:

$\displaystyle{ x(n_1,n_2) = a^{(n_1)}a^{(n_2)} , \quad |a| \geq 1 }$

will not converge for any value of $\displaystyle{ z_1,z_2 }$. However, these are the extreme cases; usually, the Z-transform will converge over a finite area.[2]

A sequence with support over the entire $\displaystyle{ (n_1,n_2) }$ plane can be written as a sum of four quadrant sequences:

$\displaystyle{ x(n_1,n_2) = x_1(n_1,n_2) + x_2(n_1,n_2) + x_3(n_1,n_2) + x_4(n_1,n_2) }$

Now suppose:

$\displaystyle{ x_1(n_1,n_2) = \begin{cases} x(n_1,n_2), & \mbox{if } n_1 \gt 0, n_2 \gt 0\\ 0.5x(n_1,n_2), & \mbox{if } n_1 = 0, n_2 \gt 0 \mbox{ or } n_1 \gt 0, n_2 = 0\\ 0.25x(n_1,n_2), & \mbox{if } n_1 = n_2 = 0\\ 0, & \mbox{otherwise} \end{cases} }$

and $\displaystyle{ x_2(n_1,n_2), x_3(n_1,n_2), x_4(n_1,n_2) }$ have similar definitions over their respective quadrants. Then the region of convergence is simply the intersection of the ROCs of the four quadrant 2D Z-transforms.

## Using the 2D Z-transform to solve difference equations

A 2D difference equation relates the input to the output of a Linear Shift-Invariant (LSI) system in the following manner:

$\displaystyle{ \sum_{k_1=0}^{K_1-1}\sum_{k_2=0}^{K_2-1}b(k_1,k_2)y(n_1-k_1,n_2-k_2)=\sum_{r_1=0}^{R_1-1}\sum_{r_2=0}^{R_2-1}a(r_1,r_2)x(n_1-r_1,n_2-r_2) }$

Due to the finite limits of computation, it can be assumed that both a and b are sequences of finite extent.
Applying the Z-transform, the equation becomes:

$\displaystyle{ Y_z(z_1,z_2)\sum_{k_1=0}^{K_1-1}\sum_{k_2=0}^{K_2-1}b(k_1,k_2)z_1^{-k_1}z_2^{-k_2} = X_z(z_1,z_2)\sum_{r_1=0}^{R_1-1}\sum_{r_2=0}^{R_2-1}a(r_1,r_2)z_1^{-r_1}z_2^{-r_2} }$

This gives:

$\displaystyle{ H_z(z_1,z_2) = \frac{Y_z(z_1,z_2)}{X_z(z_1,z_2)} = \frac{\sum_{k_1=0}^{K_1-1}\sum_{k_2=0}^{K_2-1}a(k_1,k_2)z_1^{-k_1}z_2^{-k_2}}{\sum_{r_1=0}^{R_1-1}\sum_{r_2=0}^{R_2-1}b(r_1,r_2)z_1^{-r_1}z_2^{-r_2}} = \frac{A_z(z_1,z_2)}{B_z(z_1,z_2)} }$

Thus we have defined the relation between the input and output of the LSI system.

## Using the 2D Z-Transform to Determine Stability

### Shanks' Theorem I

For a first quadrant recursive filter in which $\displaystyle{ H_z(z_1,z_2) = \frac{1}{B_z(z_1,z_2)} }$, the filter is stable iff:[3]

$\displaystyle{ B_z(z_1,z_2) \neq 0 }$ for all points $\displaystyle{ (z_1,z_2) }$ such that $\displaystyle{ |z_1| \geq 1 }$ and $\displaystyle{ |z_2| \geq 1 }$.

### Shanks' Theorem II

For a first quadrant recursive filter in which $\displaystyle{ H_z(z_1,z_2) = \frac{1}{B_z(z_1,z_2)} }$, the filter is stable iff:[3]

$\displaystyle{ B_z(z_1,z_2) \neq 0, \quad |z_1| \geq 1, |z_2| = 1 }$

$\displaystyle{ B_z(z_1,z_2) \neq 0, \quad |z_1| = 1, |z_2| \geq 1 }$

### Huang's Theorem

For a first quadrant recursive filter in which $\displaystyle{ H_z(z_1,z_2) = \frac{1}{B_z(z_1,z_2)} }$, the filter is stable iff:[3]

$\displaystyle{ B_z(z_1,z_2) \neq 0, \quad |z_1| \geq 1, |z_2| = 1 }$

$\displaystyle{ B_z(a,z_2) \neq 0, \quad |z_2| \geq 1 }$ for any $\displaystyle{ a }$ such that $\displaystyle{ |a| \geq 1 }$

### Decarlo and Strintzis' Theorem

For a first quadrant recursive filter in which $\displaystyle{ H_z(z_1,z_2) = \frac{1}{B_z(z_1,z_2)} }$, the filter is stable iff:[3]

$\displaystyle{ B_z(z_1,z_2) \neq 0, \quad |z_1| = 1, |z_2| = 1 }$

$\displaystyle{ B_z(a,z_2) \neq 0, \quad |z_2| \geq 1 }$ for any $\displaystyle{ a }$ such that $\displaystyle{ |a| = 1 }$

$\displaystyle{ B_z(z_1,b) \neq 0, \quad |z_1| \geq 1 }$ for any $\displaystyle{ b }$ such that $\displaystyle{ |b| = 1 }$

## Solving 2D Z-Transforms

### Approach 1: Finite Sequences

For finite sequences, the 2D Z-transform is simply the sum of the magnitude of each point multiplied by $\displaystyle{ z_1,z_2 }$ raised to the negative power of the location of the corresponding point. For example, the sequence:

$\displaystyle{ x(n_1,n_2) = 3\delta(n_1,n_2)+6\delta(n_1-1,n_2)+2\delta(n_1,n_2-1)+4\delta(n_1-1,n_2-1) }$

has the Z-transform:

$\displaystyle{ X(z_1,z_2) = 3 + 6z_1^{-1} + 2z_2^{-1} + 4z_1^{-1}z_2^{-1} }$

As this is a finite sequence, the ROC is all $\displaystyle{ z_1,z_2 }$.

### Approach 2: Sequences with values along only $\displaystyle{ n_1 }$ or $\displaystyle{ n_2 }$

For a sequence with a region of support on only $\displaystyle{ n_1 = 0 }$ or $\displaystyle{ n_2 = 0 }$, the sequence can be treated as a 1D signal and the 1D Z-transform can be used to solve for the 2D Z-transform. For example, the sequence:

$\displaystyle{ x(n_1,n_2) = \begin{cases} \delta(n_1), & \mbox{if } 0 \leq n_2 \leq N-1\\ 0, & \mbox{otherwise} \end{cases} }$

is clearly given by $\displaystyle{ \delta(n_1)(u[n_2]-u[n_2-N]) }$.
Therefore, its Z-transform is given by:

$\displaystyle{ X_z(z_1,z_2) = 1+z_2^{-1}+z_2^{-2}+...+z_2^{-N+1} }$

$\displaystyle{ X_z(z_1,z_2) = \begin{cases} N, & \mbox{if } z_2 = 1\\ \frac{1-z_2^{-N}}{1-z_2^{-1}}, & \mbox{otherwise} \end{cases} }$

As this is a finite sequence, the ROC is all $\displaystyle{ z_1,z_2 }$.

### Approach 3: Separable Sequences

A separable sequence is defined as $\displaystyle{ x(n_1,n_2) = f(n_1)g(n_2) }$. For a separable sequence, finding the 2D Z-transform is as simple as separating the sequence and taking the product of the 1D Z-transforms of the signals $\displaystyle{ f(n_1) }$ and $\displaystyle{ g(n_2) }$. For example, the sequence:

$\displaystyle{ x(n_1,n_2) = a^{n_1+n_2}u[n_1,n_2] = a^{n_1}u[n_1]a^{n_2}u[n_2]= f(n_1)g(n_2) }$

therefore has the Z-transform

$\displaystyle{ X_z(z_1,z_2) = F_z(z_1)G_z(z_2) = \left(\frac{1}{1-az_1^{-1}}\right)\left(\frac{1}{1-az_2^{-1}}\right) = \frac{1}{(1-az_1^{-1})(1-az_2^{-1})} }$

The ROC is given by:

$\displaystyle{ |z_1| \gt |a| ; \quad |z_2| \gt |a| }$

## References

1. Siamak Khatibi, “Multidimensional Signal Processing: Lecture 11”, Blekinge Institute of Technology, PowerPoint presentation.
2. Dan E. Dudgeon, Russell M. Mersereau, “Multidimensional Digital Signal Processing”, Prentice-Hall Signal Processing Series, ISBN 0136049591, 1983.
3. Ed. Alexander D. Poularikas, “The Handbook of Formulas and Tables for Signal Processing”, Boca Raton: CRC Press LLC, 1999.
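The separable-sequence result is easy to sanity-check numerically: truncate the double sum and compare it with the closed form at a point inside the ROC. The sketch below is illustrative only; the truncation length and test point are arbitrary choices:

```python
import numpy as np

a = 0.5
z1, z2 = 1.2 + 0.3j, 1.5 - 0.2j   # |z1|, |z2| > |a|, inside the ROC

# Truncated double sum of x(n1,n2) = a^(n1+n2) u[n1,n2]; separability
# lets the double sum factor into two geometric series.
n = np.arange(200)
direct = np.sum((a / z1) ** n) * np.sum((a / z2) ** n)

closed = 1.0 / ((1 - a / z1) * (1 - a / z2))
print(abs(direct - closed))       # ~0, up to truncation error
```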
2023-01-27 18:09:28
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8482210636138916, "perplexity": 471.53245910705147}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764495001.99/warc/CC-MAIN-20230127164242-20230127194242-00513.warc.gz"}
http://mathhelpforum.com/calculus/119828-gamma-fuction-integral.html
# Math Help - Gamma function and integral

1. ## Gamma function and integral

Hello. I just can't solve this problem. Who can do it?

Using the gamma function, calculate the following integral:

$\int_0^1 x^3 \left[\ln\left(\frac{1}{x}\right)\right]^3 dx$

Thanks a lot!!

2. Originally Posted by osodud

Hello. I just can't solve this problem. Who can do it? Using the gamma function, calculate the following integral: $\int_0^1 x^3 \left[\ln\left(\frac{1}{x}\right)\right]^3 dx$ Thanks a lot!!

Let $u=\ln\!\left(\tfrac{1}{x}\right)\implies \,du=-\tfrac{1}{x}\,dx$. Note that $u=\ln\!\left(\tfrac{1}{x}\right)\implies e^{-u}=x$.

Also note that $u(0)\rightarrow\infty$ and $u(1)=0$. Therefore,

$\int_0^1x^3\left[\ln\!\left(\tfrac{1}{x}\right)\right]^3\,dx\xrightarrow{u=\ln\!\left(\tfrac{1}{x}\right)}{} \int_{\infty}^{0}-x^4 u^3\,du=\int_0^{\infty}e^{-4u}u^3\,du$

Now make the substitution $t=4u\implies \,dt=4\,du$. Also note that $t(0)=0$ and $t(\infty)=\infty$. Therefore

$\int_0^{\infty}e^{-4u}u^3\,du\xrightarrow{t=4u}{}\tfrac{1}{4}\int_0^{\infty}e^{-t}\left(\tfrac{t}{4}\right)^{3}\,dt=\tfrac{1}{256}\int_0^{\infty}e^{-t}t^{3}\,dt$

Now apply the definition of the gamma function to get

$\tfrac{1}{256}\int_0^{\infty}e^{-t}t^{3}\,dt=\frac{\Gamma\!\left(4\right)}{256}=\frac{6}{256}=\frac{3}{128}$

Does this make sense?
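A numerical sanity check of the thread's result (a sketch using scipy; 3/128 = 0.0234375):

```python
import numpy as np
from scipy.integrate import quad

val, err = quad(lambda x: x**3 * np.log(1 / x)**3, 0, 1)
print(val, 3 / 128)   # both ~0.0234375
```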
2015-04-02 02:06:49
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 12, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9777255654335022, "perplexity": 1097.1934419809897}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-14/segments/1427131309986.49/warc/CC-MAIN-20150323172149-00234-ip-10-168-14-71.ec2.internal.warc.gz"}
http://www.intmath.com/forum/functions-and-graphs-36/how-to-obtain-0-84:48
How to obtain 0.84? [Solved!]

My question

In Example 1 on the page, how do you obtain 0.84? Please show the calculation for common mortals. Thank you.

Relevant page

3. Graphs of y = a sin(bx + c) and y = a cos(bx + c)

What I've done so far

Read other examples, tried substituting.

Re: How to obtain 0.84?

Hello L. Aureli

The 0.84 comes from substituting x = 0 into y = sin(2x + 1), that is, y = sin(0 + 1) = sin 1. Using a calculator, sin 1 = 0.84 (you must be in radians.)

Hope that makes sense.

Regards

Re: How to obtain 0.84?

Doh - I was in degrees and couldn't get 0.84. Thanks a lot.
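The radians-versus-degrees slip at the end of the thread is easy to reproduce (a quick sketch):

```python
import math

print(round(math.sin(1), 2))                # 0.84   (1 radian)
print(round(math.sin(math.radians(1)), 4))  # 0.0175 (1 degree, the "wrong mode" value)
```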
2016-10-23 09:34:23
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7149853110313416, "perplexity": 5398.682333867451}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988719215.16/warc/CC-MAIN-20161020183839-00089-ip-10-171-6-4.ec2.internal.warc.gz"}
https://deepai.org/publication/a-topology-for-team-policies-and-existence-of-optimal-team-policies-in-stochastic-team-theory
# A topology for Team Policies and Existence of Optimal Team Policies in Stochastic Team Theory

In this paper, we establish the existence of team-optimal policies for static teams and a class of sequential dynamic teams. We first consider static team problems and show the existence of optimal policies under certain regularity conditions on the observation channels by introducing a topology on the set of policies. Then we consider sequential dynamic teams and establish the existence of an optimal policy via the static reduction method of Witsenhausen. We apply our findings to the well-known counterexample of Witsenhausen and the Gaussian relay channel problem.

## I Introduction

Team decision theory was introduced by Marschak [1] to study the behaviour of a group of agents who act collectively in a decentralized fashion in order to optimize a common cost function. Radner [2] established fundamental results for static teams and, in particular, demonstrated connections between person-by-person optimality and team optimality. Witsenhausen's seminal papers [3, 4, 5, 6, 7, 8] on dynamic teams and the characterization and classification of information structures have been crucial in the progress of our understanding of dynamic teams. In particular, the well-known counterexample of Witsenhausen [8] demonstrated the challenges that arise due to a decentralized information structure in such models. We refer the reader to [9] for a more comprehensive overview of team decision theory and a detailed literature review.

The key difference between team decision problems and classical (centralized) decision problems is the decentralized nature of the information structure; that is, agents cannot share all the information they have with other agents.
This decentralized information structure prevents one from using the classical tools of centralized decision theory, such as dynamic programming, convex analytic methods, and linear programming. For this reason, establishing the existence and structure of optimal policies is a quite challenging problem in team decision theory.

In this paper, our aim is to study the existence of an optimal policy for team decision problems. In particular, we are interested in sequential team models. In the literature, relatively few results are available on the existence of team-optimal solutions. Indeed, the existence of optimal policies for static teams and a class of sequential dynamic teams has been studied recently in [10, 11]. In these papers, the existence of team-optimal policies is established via the strategic measure approach, where strategic measures are the probability measures induced by policies on the product of the state space, observation spaces, and action spaces. In this approach, one first identifies a topology on the set of strategic measures and then proves the relative compactness of this set along with the lower semi-continuity of the cost function. If the set of strategic measures is closed, then one can show the existence of an optimal policy via the Weierstrass Extreme Value Theorem. However, to establish the closedness of the set of strategic measures, one needs somewhat strong assumptions on the observation channels. For instance, the conditions imposed in [10, Assumption 3.1] to establish the closedness of the set of strategic measures with respect to the weak topology imply that the observation channels and their reverse channels are uniformly continuous with respect to the total variation distance. The reason for imposing such a strong condition on the observation channel is that convergence with respect to the topology defined on the set of strategic measures does not in general preserve the information structure of the problem (see, e.g., [11, Theorem 2.7]).

In this paper, we prove the existence of team-optimal policies under the assumption that the observation channels are continuous with respect to the total variation distance, and we do not put any restriction on the reverse channels. Unlike the strategic measure approach, we introduce a topology on the set of policies, inspired by the topology introduced in [12, Section 2.4], instead of the set of strategic measures. In this way, we can preserve the information structure of the problem under convergence in this topology. We first establish the result for static teams. Then, using the static reduction of Witsenhausen, we consider sequential dynamic teams and prove the existence of team-optimal solutions using the result for the static case. We then apply our findings to the counterexample of Witsenhausen and the Gaussian relay channel problem.

The rest of the paper is organized as follows. In Section II we review the definition of Witsenhausen's intrinsic model for sequential team problems. In Section III we prove the existence of team-optimal solutions for static team problems. In Section IV we consider the existence of an optimal policy for dynamic team problems via the static reduction method. In Sections V and VI we apply the results derived in Section IV to study the existence of optimal policies for Witsenhausen's counterexample and the Gaussian relay channel. Section VII concludes the paper.

### I-A Notation and Conventions

For a metric space $E$, the Borel $\sigma$-algebra (the smallest $\sigma$-algebra that contains the open sets of $E$) is denoted by $\mathcal{B}(E)$.
We let $C_0(E)$ and $C_c(E)$ denote the set of all continuous real functions on $E$ vanishing at infinity and the set of all continuous real functions on $E$ with compact support, respectively. For any $g \in C_0(E)$, let $\mathrm{supp}(g)$ denote its support. Let $\mathcal{M}(E)$ and $\mathcal{P}(E)$ denote the set of all finite signed measures and probability measures on $E$, respectively. A sequence $\{\mu_n\}$ of finite signed measures on $E$ is said to converge with respect to the total variation distance (see [13]) to a finite signed measure $\mu$ if $\|\mu_n - \mu\|_{TV} \rightarrow 0$. A sequence $\{\mu_n\}$ of finite signed measures on $E$ is said to converge weakly (see [13]) to a finite signed measure $\mu$ if $\int_E f\,d\mu_n \rightarrow \int_E f\,d\mu$ for all bounded and continuous real functions $f$ on $E$. Let $E_1$ and $E_2$ be two metric spaces. For any $\nu \in \mathcal{M}(E_1\times E_2)$, we denote by $\mathrm{Proj}_{E_1}(\nu)$ the marginal of $\nu$ on $E_1$. Let $\mathbf{E} = \prod_{i=1}^{N} E_i$ be a finite product space. For each $i,j$ with $i \leq j$, we denote $\mathbf{E}_{[i,j]} = \prod_{t=i}^{j} E_t$ and $\mathbf{E} = \mathbf{E}_{[1,N]}$. A similar convention also applies to elements of these sets, which will be denoted by bold lower case letters. For any set $A$, let $A^c$ denote its complement. Unless otherwise specified, the term 'measurable' will refer to Borel measurability in the rest of the paper.

## II Intrinsic Model for Sequential Teams

Witsenhausen's intrinsic model [4] for sequential team problems has the following components:

$\{(\mathsf{X},\mathcal{X}), P, (\mathsf{U}_i,\mathcal{U}_i), (\mathsf{Y}_i,\mathcal{Y}_i), i=1,\ldots,N\}$

where the Borel spaces (i.e., Borel subsets of complete and separable metric spaces) $\mathsf{X}$, $\mathsf{U}_i$, and $\mathsf{Y}_i$ ($i=1,\ldots,N$), endowed with their Borel $\sigma$-algebras, denote the state space and the action and observation spaces of Agent $i$, respectively. Here $N$ is the number of actions taken, and each of these actions is supposed to be taken by an individual agent (hence, an agent with perfect recall can also be regarded as a separate decision maker every time it acts). For each $i$, the observations and actions of Agent $i$ are denoted by $y_i$ and $u_i$, respectively. The $\mathsf{Y}_i$-valued observation variable for Agent $i$ is given by $y_i \sim W_i(\,\cdot\,|x,\mathbf{u}_{[1,i-1]})$, where $W_i$ is a stochastic kernel on $\mathsf{Y}_i$ given $\mathsf{X}\times\mathbf{U}_{[1,i-1]}$ [14, Definition C.1]. A probability measure $P$ on $(\mathsf{X},\mathcal{X})$ describes the uncertainty on the state variable $x$.

A control strategy $\boldsymbol{\gamma} = (\gamma_1,\ldots,\gamma_N)$, also called a policy, is an $N$-tuple of measurable functions such that $u_i = \gamma_i(y_i)$, where $\gamma_i$ is a measurable function from $\mathsf{Y}_i$ to $\mathsf{U}_i$. Let $\Gamma_i$ denote the set of all admissible policies for Agent $i$; that is, the set of all measurable functions from $\mathsf{Y}_i$ to $\mathsf{U}_i$, and let $\boldsymbol{\Gamma} = \prod_{i=1}^{N}\Gamma_i$. We note that the intrinsic model of Witsenhausen uses a set-theoretic characterization; however, for Borel spaces, the model above is equivalent to the intrinsic model for sequential team problems.

Under this intrinsic model, a sequential team problem is dynamic if the information available to at least one agent $i$ is affected by the action of at least one other agent $j$. A decentralized problem is static if the information available at every decision maker is affected only by the state of nature; that is, no other decision maker can affect the information at any given decision maker.

For any $\boldsymbol{\gamma} \in \boldsymbol{\Gamma}$, we let the (expected) cost of the team problem be defined by

$J(\boldsymbol{\gamma}) \coloneqq E[c(x,\mathbf{y},\mathbf{u})]$

for some measurable cost function $c: \mathsf{X}\times\prod_{i=1}^{N}\mathsf{Y}_i\times\prod_{i=1}^{N}\mathsf{U}_i \rightarrow [0,\infty)$, where $\mathbf{y} = (y_1,\ldots,y_N)$ and $\mathbf{u} = (u_1,\ldots,u_N)$.

###### Definition 1.

For a given stochastic team problem, a policy (strategy) $\boldsymbol{\gamma}^* \in \boldsymbol{\Gamma}$ is an optimal team decision rule if

$J(\boldsymbol{\gamma}^*) = \inf_{\boldsymbol{\gamma}\in\boldsymbol{\Gamma}} J(\boldsymbol{\gamma}) =: J^*$

The cost level $J^*$ achieved by this strategy is the optimal team cost.

In what follows, the terms policy, measurement, and agent are used synonymously with strategy, observation, and decision maker, respectively.

### II-A Auxiliary Results

To make the paper as self-contained as possible, in this section we review some results in probability theory and functional analysis that will be used in the paper. The first result is Prokhorov's theorem, which gives a sufficient condition for relative compactness in the weak topology.

###### Theorem 1.
### II-A Auxiliary Results

To make the paper as self-contained as possible, in this section we review some results from probability theory and functional analysis that will be used later in the paper. The first result is Prokhorov's theorem, which gives a sufficient condition for relative compactness in the weak topology.

###### Theorem 1.

([14, Theorem E.6]) A set of probability measures $\mathcal{M}$ on a Borel space $E$ is relatively compact with respect to the weak topology if it is tight; that is, for any $\varepsilon > 0$ there exists a compact subset $K$ of $E$ such that for all $\nu \in \mathcal{M}$ we have $\nu(K) \geq 1 - \varepsilon$, or equivalently, $\nu(K^c) \leq \varepsilon$.

###### Proposition 1.

([15, Theorem 3.2]) Let $\mu$ be a probability measure on a Borel space $E$. Then $\mu$ is tight.

###### Proposition 2.

([16, Lemma 4.4]) Let $E_1$ and $E_2$ be two Borel spaces, and let $F_1$ and $F_2$ be tight subsets of $P(E_1)$ and $P(E_2)$, respectively. Then the set

$$F := \{ \nu \in P(E_1 \times E_2) : \mathrm{Proj}_{E_1}(\nu) \in F_1 \text{ and } \mathrm{Proj}_{E_2}(\nu) \in F_2 \}$$

is also tight.

Before the next theorem, we give the following definition.

###### Definition 2.

([10, Definition 4.4]) Let $E_1$, $E_2$, and $E_3$ be Borel spaces. A non-negative measurable function $\varphi: E_1 \times E_2 \times E_3 \to [0,\infty)$ is in class $\mathrm{IC}(E_1, E_2; E_3)$ if for every $M > 0$ and for every compact set $K \subset E_1$, there exists a compact set $L \subset E_2$ such that

$$\inf_{K \times L^c \times E_3} \varphi(e_1, e_2, e_3) \geq M.$$

Using this definition, we now state the following result.

###### Theorem 2.

([10, Lemma 4.5]) Suppose $\varphi$ is in class $\mathrm{IC}(E_1, E_2; E_3)$. Let $m \geq 0$ and let $F_1$ be a tight set of measures in $P(E_1)$. Define

$$F := \{ \nu \in P(E_1 \times E_2 \times E_3) : \mathrm{Proj}_{E_1}(\nu) \in F_1 \text{ and } \int \varphi \, d\nu \leq m \}.$$

Then $\mathrm{Proj}_{E_1 \times E_2}(F)$ is a tight set of measures.

The last result concerns the convergence of the bilinear form constituting the duality between a Banach space and its topological dual, when both terms in the bilinear form converge in an appropriate sense.

###### Proposition 3.

Let $\mathcal{B}$ be a Banach space with topological dual $\mathcal{B}^*$, where the bilinear form that constitutes the duality is denoted by $\langle \cdot, \cdot \rangle$, and let $\{e^*_n\} \subset \mathcal{B}^*$ and $\{e_n\} \subset \mathcal{B}$. Suppose $e_n \to e$ in norm and $e^*_n \to e^*$ with respect to the $w^*$-topology; that is, $\langle e^*_n, v \rangle \to \langle e^*, v \rangle$ for all $v \in \mathcal{B}$. Then we have $\langle e^*_n, e_n \rangle \to \langle e^*, e \rangle$ as $n \to \infty$.

###### Proof.

Suppose $e_n \to e$ in norm and $e^*_n \to e^*$ with respect to the $w^*$-topology. Then we have

$$\big| \langle e^*_n, e_n \rangle - \langle e^*, e \rangle \big| \leq \| e^*_n \| \, \| e_n - e \| + \big| \langle e^*_n, e \rangle - \langle e^*, e \rangle \big|.$$

The second term in the last expression converges to zero as $n \to \infty$ by $w^*$-convergence. Note that $\sup_n \| e^*_n \| < \infty$ by the Uniform Boundedness Principle [17, Theorem 5.13]. Hence the first term in the last expression also converges to zero as $n \to \infty$. ∎
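To make the role of tightness concrete, consider a standard example (not from the references above): on $E = \mathbb{R}$, the family of point masses $\{ \delta_n : n \in \mathbb{N} \}$ is not tight, since for any compact (hence bounded) set $K \subset \mathbb{R}$,

$$\delta_n(K) = 0 \quad \text{whenever } n > \sup K,$$

so no single compact set carries mass $1 - \varepsilon$ uniformly in $n$. The same escaping-mass phenomenon reappears in Section III-A, where it shows that the set of randomized policies fails to be $w^*$-closed.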
## III Existence of the Optimal Strategy for Static Team Problems

In this section, we show the existence of an optimal strategy for static teams. Recall that $(X, \mathcal{X}, P)$ is a probability space representing the state space, where $X$ is a Borel space and $\mathcal{X}$ is its Borel $\sigma$-algebra. We consider an $N$-agent static team problem in which Agent $i$ ($i = 1, \ldots, N$) observes a random variable $y^i$ and takes an action $u^i$, where $y^i$ takes values in a Borel space $Y^i$ and $u^i$ takes values in a Borel space $U^i$. Given any state realization $x$, the random variable $y^i$ has distribution $W_i(\,\cdot\,|x)$; that is, $W_i$ is a stochastic kernel on $Y^i$ given $X$. The team cost function $c$ is a non-negative function of the state, observations, and actions; that is, $c: X \times Y \times U \to [0, \infty)$, where $Y := \prod_{i=1}^N Y^i$ and $U := \prod_{i=1}^N U^i$.

To prove the existence of team-optimal policies, we enlarge the space of strategies so that each agent can also apply randomized strategies; that is, for Agent $i$, the set of strategies is defined as

$$\Gamma_i := \{ \gamma_i : \gamma_i(\,\cdot\,|y^i) \text{ is a stochastic kernel on } U^i \text{ given } Y^i \}.$$

We first prove the existence of an optimal randomized strategy. Then, using Blackwell's irrelevant information theorem [18], we deduce that the optimal strategy can be chosen deterministic, which therefore solves the problem for the original setup. Recall that $\Gamma = \prod_{i=1}^N \Gamma_i$. Then, the cost of the team is given by

$$J(\underline{\gamma}) = \int_{X \times Y \times U} c(x, \mathbf{y}, \mathbf{u}) \, \underline{\gamma}(d\mathbf{u}|\mathbf{y}) \, P(dx, d\mathbf{y}),$$

where $\underline{\gamma}(d\mathbf{u}|\mathbf{y}) := \prod_{i=1}^N \gamma_i(du^i|y^i)$. Here, with an abuse of notation, $P(dx, d\mathbf{y})$ denotes the joint distribution of the state and the observations. Therefore, we have

$$J^* = \inf_{\underline{\gamma} \in \Gamma} J(\underline{\gamma}).$$

For any strategy $\underline{\gamma}$, we let $P^{\underline{\gamma}}$ denote the probability measure induced on $X \times Y \times U$. In the literature, $P^{\underline{\gamma}}$ is called a strategic measure. In this section, we impose the following assumptions.

###### Assumption 1.

• (a) The cost function $c$ is lower semi-continuous.
• (b) $X$, $U^i$, and $Y^i$ ($i = 1, \ldots, N$) are locally compact.
• (c) For all $i$, $W_i(\,\cdot\,|x)$ is continuous in $x$ with respect to the total variation distance.
• (d) For all $i$, $W_i(dy^i|x) = q_i(y^i, x) \, \mu_i(dy^i)$ for some probability measure $\mu_i$ on $Y^i$.

###### Remark 1.

Note that, for all $i$, if $q_i(y^i, x)$ is continuous in $x$ and $q_i(y^i, x) \leq h(y^i)$ for some $\mu_i$-integrable $h$, then Assumption 1-(c) holds. Indeed, let $x_n \to x$ in $X$. Then we have

$$\| W_i(\,\cdot\,|x_n) - W_i(\,\cdot\,|x) \|_{TV} = \int_{Y^i} \big| q_i(y^i, x_n) - q_i(y^i, x) \big| \, \mu_i(dy^i).$$

The last expression goes to $0$ as $n \to \infty$ by the dominated convergence theorem.

###### Remark 2.

One common approach used in the literature [10, 11] to show the existence of team-optimal policies is the strategic measure approach. In this approach, one first identifies a topology on the set $S$ of strategic measures (in general, the weak topology) and then proves the relative compactness of $S$ along with the lower semi-continuity of the cost function with respect to this topology. Then, if $S$ is closed with respect to this topology, one can deduce the existence of an optimal policy via the Weierstrass Extreme Value Theorem. The main problem in this approach is to prove the closedness of $S$, because convergence with respect to the topology defined on $S$ does not, in general, preserve the statistical independence of the actions given the observations; that is, in the limiting strategic measure, the action of Agent $i$ may depend on the observation of Agent $j \neq i$, which is prohibited in the original problem (see, e.g., [11, Theorem 2.7]). Hence, to overcome this obstacle, in this paper we directly introduce a topology on the set of policies $\Gamma$ instead of on the set of strategic measures $S$. In this way, in the limiting measure, we can preserve the statistical independence of actions given the observations.

### III-A Topology on the Set of Policies Γ

In this section we introduce a topology on the set of policies $\Gamma$, which will be used to obtain the existence of team-optimal policies. To this end, we first identify a topology on $\Gamma_i$ for each $i$. Fix any $i \in \{1, \ldots, N\}$. Recall that we denote by $C_0(U^i)$, $M(U^i)$, and $P(U^i)$ the set of real continuous functions vanishing at infinity on $U^i$, the set of finite signed measures on $U^i$, and the set of probability measures on $U^i$, respectively. For any $g \in C_0(U^i)$, let $\|g\|_\infty := \sup_{u \in U^i} |g(u)|$, which turns $C_0(U^i)$ into a Banach space. Let $\|\cdot\|_{TV}$ denote the total variation norm on $M(U^i)$, which turns $M(U^i)$ into a Banach space.

###### Theorem 3.

[17, Theorem 7.17] For any $\nu \in M(U^i)$ and $g \in C_0(U^i)$, let $T_\nu(g) := \langle g, \nu \rangle$, where

$$\langle g, \nu \rangle := \int_{U^i} g \, d\nu.$$

Then the map $\nu \mapsto T_\nu$ is an isometric isomorphism from $M(U^i)$ to $C_0(U^i)^*$. Hence, we can identify $M(U^i)$ with $C_0(U^i)^*$.

A function $\gamma: Y^i \to M(U^i)$ is called $w^*$-measurable [19, p. 18] if the mapping $y \mapsto \langle g, \gamma(y) \rangle$ is measurable for all $g \in C_0(U^i)$. Let $L(\mu_i, M(U^i))$ denote the set of all such functions. Then, we define the set

$$L_\infty(\mu_i, M(U^i)) := \Big\{ \gamma \in L(\mu_i, M(U^i)) : \|\gamma\|_\infty := \operatorname*{ess\,sup}_{y \in Y^i} \|\gamma(y)\|_{TV} < \infty \Big\},$$

where the essential supremum is taken with respect to the measure $\mu_i$. Recall that $\mu_i$ is the reference probability measure in Assumption 1-(d) for the observation channel $W_i$.

A function $f: Y^i \to C_0(U^i)$ is said to be simple if there exist $g_1, \ldots, g_n \in C_0(U^i)$ and $E_1, \ldots, E_n \in \mathcal{B}(Y^i)$ such that $f = \sum_{k=1}^n g_k 1_{E_k}$. Define the Bochner integral of such an $f$ with respect to $\mu_i$ as

$$\int_{Y^i} f(y) \, \mu_i(dy) := \sum_{k=1}^n g_k \, \mu_i(E_k).$$

A function $f: Y^i \to C_0(U^i)$ is said to be strongly measurable if there exists a sequence $\{f_n\}$ of simple functions with $\| f_n(y) - f(y) \|_\infty \to 0$ $\mu_i$-almost everywhere. A strongly measurable function $f$ is Bochner-integrable [20] if $\int_{Y^i} \| f(y) \|_\infty \, \mu_i(dy) < \infty$. In this case, the integral is given by

$$\int_{Y^i} f(y) \, \mu_i(dy) = \lim_{n \to \infty} \int_{Y^i} f_n(y) \, \mu_i(dy),$$

where $\{f_n\}$ is the sequence of simple functions which approximates $f$. Let $L_1(\mu_i, C_0(U^i))$ denote the set of all Bochner-integrable functions from $Y^i$ to $C_0(U^i)$, endowed with the norm

$$\| f \|_1 := \int_{Y^i} \| f(y) \|_\infty \, \mu_i(dy).$$

Then, we have the following theorem.

###### Theorem 4.

[19, Theorem 1.5.5, p. 27] For any $\gamma \in L_\infty(\mu_i, M(U^i))$ and $f \in L_1(\mu_i, C_0(U^i))$, let

$$T_\gamma(f) := \int_{Y^i} \langle f(y), \gamma(y) \rangle \, \mu_i(dy).$$

Then the map $\gamma \mapsto T_\gamma$ is an isometric isomorphism from $L_\infty(\mu_i, M(U^i))$ to $L_1(\mu_i, C_0(U^i))^*$. Hence, we can identify $L_\infty(\mu_i, M(U^i))$ with $L_1(\mu_i, C_0(U^i))^*$.
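To see what this pairing computes on a familiar object (a direct consequence of the definitions, not a separate result of the paper): if $\gamma(y) = \delta_{\varphi(y)}$ for some measurable $\varphi: Y^i \to U^i$, then for every $f \in L_1(\mu_i, C_0(U^i))$,

$$T_\gamma(f) = \int_{Y^i} \langle f(y), \delta_{\varphi(y)} \rangle \, \mu_i(dy) = \int_{Y^i} f(y)\big(\varphi(y)\big) \, \mu_i(dy),$$

so deterministic strategies sit inside the randomized ones as the point-mass-valued elements of $L_\infty(\mu_i, M(U^i))$.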
By Theorem 4, we equip $L_\infty(\mu_i, M(U^i))$ with the $w^*$-topology induced by $L_1(\mu_i, C_0(U^i))$; that is, the smallest topology on $L_\infty(\mu_i, M(U^i))$ for which the mapping

$$L_\infty(\mu_i, M(U^i)) \ni \gamma \mapsto T_\gamma(f) \in \mathbb{R}$$

is continuous for all $f \in L_1(\mu_i, C_0(U^i))$ [17]. We write $\gamma_n \to \gamma$ if $\gamma_n$ converges to $\gamma$ in $L_\infty(\mu_i, M(U^i))$ with respect to the $w^*$-topology. For this topology, we have been in part inspired by the topology introduced in [12, Section 2.4], where a similar topology is introduced for randomized Markov policies to study continuous-time stochastic control problems with the average cost optimality criterion (see [21] for another construction of a topology on Markov policies).

###### Lemma 1.

Suppose $\gamma \in L_\infty(\mu_i, M(U^i))$ is such that $\gamma(y) \in P(U^i)$ $\mu_i$-a.e. Then, for all $A \in \mathcal{B}(U^i)$, the mapping $y \mapsto \gamma(y)(A)$ is measurable. Hence, $\gamma$ is a stochastic kernel.

###### Proof.

Note first that the mapping $y \mapsto \int_{U^i} g \, d\gamma(y)$ is measurable for all real, continuous, and bounded $g$ on $U^i$, because any such $g$ can be approximated pointwise by $g_n \in C_0(U^i)$ satisfying $\|g_n\|_\infty \leq \|g\|_\infty$ for all $n$. Moreover, for any closed set $F \subset U^i$, one can approximate pointwise the indicator function $1_F$ by the continuous and bounded functions $g_n(u) := \max\{ 1 - n \, d(u, F), 0 \}$, where $d$ is the metric on $U^i$ and $d(u, F) := \inf_{v \in F} d(u, v)$. This implies that the mapping $y \mapsto \gamma(y)(F)$ is measurable for all closed sets $F$ in $U^i$. Then the result follows by [22, Proposition 7.25]. ∎

By Lemma 1, we have

$$\Gamma_i = \{ \gamma \in L_\infty(\mu_i, M(U^i)) : \gamma(y) \in P(U^i) \ \mu_i\text{-a.e.} \}.$$

Since $\Gamma_i$ is bounded in $L_\infty(\mu_i, M(U^i))$, by the Banach-Alaoglu Theorem [17, Theorem 5.18], $\Gamma_i$ is relatively compact with respect to the $w^*$-topology. Since $L_1(\mu_i, C_0(U^i))$ is separable, by [13, Lemma 1.3.2], $\Gamma_i$ is also relatively sequentially compact.

Note that $\Gamma_i$ is not closed with respect to the $w^*$-topology. Indeed, let $U^i = \mathbb{R}$. Define $\gamma_n(y) := \delta_n$ and $\gamma(y) := 0$ (the zero measure), where $\delta_n$ denotes the degenerate measure at $n$; that is, $\delta_n(A) = 1_A(n)$ for all $A \in \mathcal{B}(U^i)$. Let $g \in L_1(\mu_i, C_0(U^i))$. Then we have

$$\lim_{n \to \infty} \int_{Y^i} \langle g(y), \gamma_n(y) \rangle \, \mu_i(dy) = \lim_{n \to \infty} \int_{Y^i} g(y)(n) \, \mu_i(dy) = \int_{Y^i} \lim_{n \to \infty} g(y)(n) \, \mu_i(dy) = 0,$$

where the interchange of limit and integral holds since $\| g(y) \|_\infty$ is $\mu_i$-integrable, and the limit is zero since $g(y) \in C_0(U^i)$. Hence, $\gamma_n \to \gamma$. But $\gamma \notin \Gamma_i$, and so $\Gamma_i$ is not closed.

In the remainder of this section, $\Gamma_i$ is equipped with this topology. In addition, $\Gamma$ has the product topology induced by these $w^*$-topologies; that is, $\underline{\gamma}^{(n)}$ converges to $\underline{\gamma}$ in $\Gamma$ with respect to the product topology if and only if $\gamma^{(n)}_i \to \gamma_i$ for all $i$. In this case we write $\underline{\gamma}^{(n)} \to \underline{\gamma}$. Note that $\Gamma$ is relatively sequentially compact under this topology.

### III-B Existence of Team-Optimal Policies

In this section, using the topology introduced in Section III-A, we prove the existence of an optimal policy under Assumption 1 and the assumption below. For any $L > 0$, we define

$$\Gamma_L := \{ \underline{\gamma} \in \Gamma : J(\underline{\gamma}) < J^* + L \}.$$

For each $L$, we define $S_L := \{ \lambda^{\underline{\gamma}} : \underline{\gamma} \in \Gamma_L \}$, where, in view of Assumption 1-(d), $\lambda^{\underline{\gamma}}(dx, d\mathbf{y}, d\mathbf{u}) := P(dx) \prod_{i=1}^N \mu_i(dy^i) \, \underline{\gamma}(d\mathbf{u}|\mathbf{y})$.

###### Assumption 2.

For some $L_0 > 0$, $S_L$ is tight for $L \leq L_0$.

Before we continue with the proof, we give several conditions that imply Assumption 2.

###### Theorem 5.

Suppose either of the following conditions holds:

• $U^i$ is compact for all $i$.
• For the non-compact case, we assume:
  • The cost function $c$ is in class $\mathrm{IC}(X \times Y \times U_{[1:j]}, U^{j+1}; U_{[j+2:N]})$ for all $j = 0, 1, \ldots, N-1$.
  • For all $i$, $q_i > 0$ and $q_i$ is lower semi-continuous.

Then, Assumption 2 holds.

###### Proof.

(i): Note that the marginal on $X \times Y$ of any measure in $S_L$ is $P(dx) \prod_{i=1}^N \mu_i(dy^i)$. Since $\{ P(dx) \prod_i \mu_i(dy^i) \}$ is tight by Proposition 1 and $P(U)$ is tight by the compactness of $U$, $S_L$ is also tight by Proposition 2.

(ii): We define $\tilde{c} := c \prod_{i=1}^N q_i$, so that $J(\underline{\gamma}) = \int \tilde{c} \, d\lambda^{\underline{\gamma}}$ for every $\underline{\gamma} \in \Gamma$. Since, for all $i$, $q_i$ is lower semi-continuous and strictly greater than $0$, for any compact set $K$, we have $\inf_K \prod_i q_i > 0$. This implies that $\tilde{c}$ is also in class $\mathrm{IC}(X \times Y \times U_{[1:j]}, U^{j+1}; U_{[j+2:N]})$ for all $j$. Then, by Theorem 2, one can inductively prove that $\mathrm{Proj}_{X \times Y \times U_{[1:j]}}(S_L)$ is tight for all $j$. Indeed, consider $j = 1$. Then $\tilde{c}$ is in class $\mathrm{IC}(X \times Y, U^1; U_{[2:N]})$ and

$$S_L \subset \Big\{ \lambda \in P(X \times Y \times U) : \mathrm{Proj}_{X \times Y}(\lambda)(dx, d\mathbf{y}) = P(dx) \prod_{i=1}^N \mu_i(dy^i) \ \text{and} \ \int \tilde{c} \, d\lambda \leq J^* + L \Big\}.$$

But since $\{ P(dx) \prod_i \mu_i(dy^i) \}$ is tight, by Theorem 2, $\mathrm{Proj}_{X \times Y \times U^1}(S_L)$ is also tight. Suppose the assertion is true for $j$ and consider $j + 1$. Note that $\tilde{c}$ is in class $\mathrm{IC}(X \times Y \times U_{[1:j]}, U^{j+1}; U_{[j+2:N]})$ and

$$S_L \subset \Big\{ \lambda \in P(X \times Y \times U) : \mathrm{Proj}_{X \times Y \times U_{[1:j]}}(\lambda) \in \mathrm{Proj}_{X \times Y \times U_{[1:j]}}(S_L) \ \text{and} \ \int \tilde{c} \, d\lambda \leq J^* + L \Big\}.$$

Since $\mathrm{Proj}_{X \times Y \times U_{[1:j]}}(S_L)$ is tight by the induction hypothesis, $\mathrm{Proj}_{X \times Y \times U_{[1:j+1]}}(S_L)$ is also tight by Theorem 2. This completes the proof of the assertion. But this result implies that $S_L$ is also tight for all $L > 0$. ∎
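As a quick illustration of how the condition in Theorem 5(ii) is met in practice (our example, not from [10]): take $U^{j+1} = \mathbb{R}^m$ and suppose the cost satisfies $c(x, \mathbf{y}, \mathbf{u}) \geq \| u^{j+1} \|$. For any $M > 0$ and any compact $K$, the closed ball $L := \{ \| u^{j+1} \| \leq M \}$ is compact and

$$\inf_{K \times L^c \times E_3} c \geq M,$$

so $c$ is in class $\mathrm{IC}$ in the sense of Definition 2; costs that grow coercively in each action coordinate are the typical way to fulfill the non-compact case.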
Recall that $C_c(X \times Y \times U)$ denotes the set of real continuous functions on $X \times Y \times U$ with compact support. For any $g \in C_c(X \times Y \times U)$, we define

$$J_g(\underline{\gamma}) = \int_{X \times Y \times U} g(x, \mathbf{y}, \mathbf{u}) \, \underline{\gamma}(d\mathbf{u}|\mathbf{y}) \, P(dx, d\mathbf{y}).$$

We first prove the following result.

###### Theorem 6.

Suppose that $\underline{\gamma}^{(n)} \to \underline{\gamma}$ as $n \to \infty$ and $g \in C_c(X \times Y \times U)$. Then we have

$$\lim_{n \to \infty} \big| J_g(\underline{\gamma}^{(n)}) - J_g(\underline{\gamma}) \big| = 0.$$

###### Proof.

Fix any $g \in C_c(X \times Y \times U)$. Then by the Stone-Weierstrass Theorem [23, Lemma 6.1], $g$ can be uniformly approximated by functions of the form

$$\sum_{j=1}^{k} r_j \, f_{j,0} \prod_{i=1}^{N} f_{j,i} \, g_{j,i},$$

where $r_j \in \mathbb{R}$, $f_{j,0} \in C_c(X)$, $f_{j,i} \in C_c(Y^i)$, and $g_{j,i} \in C_c(U^i)$ for each $j$ and $i$. This implies that it is sufficient to prove the result for functions of the form $g = r \, f_0 \prod_{i=1}^N f_i g_i$, where $r \in \mathbb{R}$, $f_0 \in C_c(X)$, $f_i \in C_c(Y^i)$, and $g_i \in C_c(U^i)$ for $i = 1, \ldots, N$. Therefore, in the sequel, we assume that $g$ is of this form. Let $K := \mathrm{supp}(f_0)$, which is a compact subset of $X$ as $f_0 \in C_c(X)$. Then we have

$$\big| J_g(\underline{\gamma}^{(n)}) - J_g(\underline{\gamma}) \big| \leq \big| J_g(\gamma^{(n)}_1, \ldots, \gamma^{(n)}_N) - J_g(\gamma^{(n)}_1, \ldots, \gamma^{(n)}_{N-1}, \gamma_N) \big| + \big| J_g(\gamma^{(n)}_1, \ldots, \gamma^{(n)}_{N-1}, \gamma_N) - J_g(\gamma^{(n)}_1, \ldots, \gamma^{(n)}_{N-2}, \gamma_{N-1}, \gamma_N) \big| + \cdots + \big| J_g(\gamma^{(n)}_1, \gamma_2, \ldots, \gamma_N) - J_g(\gamma_1, \ldots, \gamma_N) \big| =: \sum_{j=1}^{N} l^{(n)}_j.$$

Let us consider the term $l^{(n)}_j$ in the above expression. Define the probability measure $T_{-j}$ and the real function $g_{-j}$ as follows:

$$T_{-j} := \left( \prod_{i=j+1}^{N} \gamma_i(du^i|y^i) \, q_i(y^i, x) \, \mu_i(dy^i) \right) \times \left( \prod_{i=1}^{j-1} \gamma^{(n)}_i(du^i|y^i) \, q_i(y^i, x) \, \mu_i(dy^i) \right) P(dx)$$

and

$$g_{-j} := r \, f_0 \prod_{i \neq j} f_i g_i.$$

Then the term $l^{(n)}_j$ can be written as

$$l^{(n)}_j = \left| \int g_{-j} \left( \int f_j g_j q_j \, d(\gamma^{(n)}_j \otimes \mu_j) \right) dT_{-j} - \int g_{-j} \left( \int f_j g_j q_j \, d(\gamma_j \otimes \mu_j) \right) dT_{-j} \right|.$$

Define, for each $x \in X$, the function

$$b_x(y^j, u^j) := f_j(y^j) \, g_j(u^j) \, q_j(y^j, x).$$

One can prove that any $b_x$ is in $L_1(\mu_j, C_0(U^j))$; that is, $b_x(y^j, \cdot) \in C_0(U^j)$ for $\mu_j$-almost all $y^j$, and $b_x$ can be approximated by simple functions. We will prove that the set $\{ b_x : x \in K \}$ is totally bounded. Indeed, let $x, \tilde{x} \in K$. Then

$$\| b_x - b_{\tilde{x}} \|_1 := \int_{Y^j} \sup_{u^j \in U^j} \big| f_j(y^j) \, g_j(u^j) \, q_j(y^j, x) - f_j(y^j) \, g_j(u^j) \, q_j(y^j, \tilde{x}) \big| \, \mu_j(dy^j).$$
http://clay6.com/qa/3492/the-probability-that-student-entering-a-university-will-graduate-is-0-4-fin
# The probability that a student entering a university will graduate is 0.4. Find the probability that, out of 3 students of the university:

• None will graduate,
• Only one will graduate,
• All will graduate.
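A worked solution using the binomial distribution: with $p = 0.4$, $n = 3$, and graduations independent across students,

$$P(\text{none}) = (0.6)^3 = 0.216, \qquad P(\text{exactly one}) = \binom{3}{1}(0.4)(0.6)^2 = 0.432, \qquad P(\text{all}) = (0.4)^3 = 0.064.$$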
https://crypto.stackexchange.com/questions/83267/is-it-safe-to-xor-combine-hashes-by-rotating-them-first
# Is it safe to XOR-combine hashes by rotating them first?

I have a small number of hashes. I would like to combine them into a single hash. XORing the hashes ignores their order, which is important. Also, it could lead to a result of zero if an even number of identical items were hashed. Would rotating each hash (by a distance equal to the item's position) before XORing be secure?

• How long are the hashes? Are they all equal length? What are you planning to do if two hashes are different lengths? For example, one 128-bit hash and one 256-bit hash? – Ömer Enes Özmen Aug 7 '20 at 10:16
• Concat and rehash? – kelalaka Aug 7 '20 at 19:23
• @ÖmerEnesÖzmen Good point. All hashes are 256-bit. – fadedbee Aug 8 '20 at 4:26
• "Secure" is a little vague. I think it would help if you clarified what security property you are looking for. – user82867 Aug 8 '20 at 13:25

## 2 Answers

First of all, if the order is not important, then rotating a hash value depending on the order before using it would of course be counter-intuitive: the hash of a specific element would become fully dependent on the order.

Generally I would not advise XOR-ing hashes. A single rotation won't work, especially if you combine multiple hashes: as you've indicated, it would be easy to find a collision where two hashes cancel each other out. But you've already covered this.

XOR-ing would also make collisions easier to find, because the XOR of two hash values can itself produce a colliding value. Of course this won't change the asymptotic order of finding a collision, but a XOR is still significantly faster than calculating a hash.

I could also imagine a scheme where you XOR (rotated) hashes that have a similar highest bit set. I think this could quickly create hashes that have the initial bits set to all zero. You could also XOR hashes that have a small Hamming distance. Either of these methods would create a set of hashes that have more than a normal amount of bits set to zero. It seems likely that these have less collision resistance than the initial hash. The fact that you can also use rotated hashes for this would make it even easier to create such attacks.

Instead, you could sort the hashes (using a binary compare) before you hash them. That way you get a unique hash for a unique set where the order is ignored, even if the set can contain identical elements. The disadvantage is that you cannot calculate a new hash value by adding a hash value after the final hash calculation is performed.

This can be viewed as a special case of the generalized birthday problem, as described by Wagner. Given $k$ lists of $n$-bit values, the problem is to choose one element from each list such that the $k$ chosen values XOR to zero. Note that rotations would not prevent this reduction, since the problem deals with arbitrary lists of values; they aren't required to be outputs of the same hash function. The only assumption is that the values are uniformly random. Wagner describes an algorithm which takes $\mathcal{O}(k \cdot 2^{n / (1 + \log{k})})$ time, so we can treat that as an upper bound on the hardness of the problem, but faster algorithms may be possible.
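A minimal sketch of the sort-before-hashing approach from the first answer, in Python (assuming 256-bit digests as in the question; the helper name is illustrative):

import hashlib

def combine_unordered(hashes):
    # Combine a multiset of digests into one digest, ignoring order.
    # Sorting the digests bytewise gives a canonical encoding, so any
    # permutation of the same digests yields the same combined value --
    # and repeated digests still contribute, unlike XOR, where an even
    # number of identical values cancels to zero.
    return hashlib.sha256(b"".join(sorted(hashes))).digest()

# Order does not matter, but duplicates do:
h1 = hashlib.sha256(b"a").digest()
h2 = hashlib.sha256(b"b").digest()
assert combine_unordered([h1, h2]) == combine_unordered([h2, h1])
assert combine_unordered([h1, h1]) != combine_unordered([h1])

As the answer notes, the price of this canonical form is that you cannot incrementally add a new hash after the final digest has been computed.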
https://tex.stackexchange.com/questions/464647/insert-commas-in-a-table-array
# Insert commas in a table/array

I have a table of the form

\documentclass[hyper,12pt,A4paper]{article}
\usepackage{latexsym,amsmath,amsfonts,amssymb}
\begin{document}
\begin{align}
\begin{array}{cccc}
ab=0 & cd=1 & efgh=-1 & pqr=30
\end{array}
\end{align}
\end{document}

but of course with many, many more rows and columns. The output of this shows the entries separated only by inter-column space. Instead, I would like to put commas between successive entries, with no comma after the last entry in a particular row.

Of course, this is trivial to do by hand if there are only a few entries in the table. But I have about 50 such tables and each has a dimension of 5x5 or more. These tables were generated as output from some Mathematica code which is even harder to retrospectively modify. So my question is: is there a way to modify the \begin{array}{cccc}...\end{array} to something which is schematically like \begin{array}{c,c,c,c}...\end{array} (which in this form is wrong -- I know!)?

• Why do you have an array here? These are equations, not matrices, so the array layer is not needed (and makes it harder to get good output). – David Carlisle Dec 13 '18 at 9:00
• The answer to the question as asked is \begin{array}{c@{,}c@{,}c@{,}c}, but I suspect it is the wrong question. – David Carlisle Dec 13 '18 at 9:02

A rather easy solution would be to define your own separator, like

\def\sep{\unskip, &}

and to use this in your tables instead of the & sign (the \unskip removes the space before the comma). With this, you could go once through all your tables and replace the & sign with your defined command.
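Both suggestions drop straight into the original example; a minimal sketch (the space after each comma is a choice, not part of the answers):

\begin{align}
\begin{array}{c@{,\ }c@{,\ }c@{,\ }c}
ab=0 & cd=1 & efgh=-1 & pqr=30
\end{array}
\end{align}

or, with the \sep macro from the answer (the last entry takes no \sep, so it gets no comma):

\def\sep{\unskip, &}
\begin{align}
\begin{array}{cccc}
ab=0 \sep cd=1 \sep efgh=-1 \sep pqr=30
\end{array}
\end{align}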
http://mathoverflow.net/questions/107070/a-differentiable-approximation-to-the-minimum-function-over-a-vector-of-reals
## A differentiable approximation to the minimum function over a vector of reals

In http://mathoverflow.net/questions/35191/a-differentiable-approximation-to-the-minimum-function/35193#35193, a differentiable approximation of the minimum function is given, but it seems it only works for positive reals. Is there an easy-to-implement approximation to the minimum function $f: \mathbf{R^N} \rightarrow \mathbf{R}$ that behaves correctly over all of $\mathbf{R^N}$, even when two elements of the input vector are equal?

• If you have a smooth approximation $f_k$ which is OK for positive numbers, for $x:=(x_1,\dots,x_N)\in\mathbb{R}^N$ you may translate everything, for instance $f_k(x_1+\|x\|_2^2+1,\dots,x_N+\|x\|_2^2+1)-\|x\|_2^2-1$. – Pietro Majer Sep 13 at 9:51
• Thank you, this indeed solves my problem. – Antonio El Khoury Sep 13 at 11:50
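The shift works because $x_i + \|x\|_2^2 + 1 \geq x_i + x_i^2 + 1 > 0$, so every translated coordinate is strictly positive. A minimal sketch in Python (assuming, for illustration, that the positive-reals approximation $f_k$ is the power-mean form $(\sum_i x_i^{-k})^{-1/k}$; the linked answer's $f_k$ may differ):

import numpy as np

def smoothmin_pos(x, k=40):
    # (sum_i x_i^{-k})^{-1/k} -> min(x) as k -> infinity, for x_i > 0.
    # Illustrative only: tiny entries with large k can over/underflow.
    return np.sum(x ** (-float(k))) ** (-1.0 / k)

def smoothmin(x, k=40):
    # Pietro Majer's translation trick: shift into the positive orthant,
    # approximate the minimum there, then shift back.
    x = np.asarray(x, dtype=float)
    s = np.dot(x, x) + 1.0  # ||x||_2^2 + 1 guarantees x_i + s > 0
    return smoothmin_pos(x + s, k) - s

print(smoothmin([-2.0, 3.0, -2.0]))  # about -2.3; tends to -2 as k grows

Note the approximation is differentiable everywhere, including at ties such as the repeated -2 above.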
https://api.exponentcms.org/markers.html
### framework\modules\core\controllers\expTagController.php 2 Type Line Description FIXME 122 here is a hack to get the faq to be listed FIXME 183 here is a hack to get the faq to be listed ### framework\core\forms\controls\bootstrap3\yuicalendarcontrol.php 1 Type Line Description FIXME 56 $disable_text &$showtime are NOT used ### framework\plugins\bootstrap\function.ddrerank.php 1 Type Line Description FIXME 74 we don't seem to get a container var ### framework\core\forms\controls\bootstrap\checkboxcontrol.php 1 Type Line Description FIXME 140 this is just here until we completely deprecate the old school checkbox ### framework\modules\ecommerce\controllers\shippingController.php 7 Type Line Description FIXME 62 we do NOT want the global $order FIXME 66 a lot of work just to get one set of data since we'll be doing it again for cart/checkout redisplay...maybe cache??? FIXME 103 do we ever call this? FIXME 105 we do NOT want the global$order FIXME 108 perhaps check for cached rates if calculator didn't change??? FIXME 138 this model has no listPrices() method??? FIXME 188 we need to ensure our default calculator is still active...not sure this does it ### exponent_constants.php 2 Type Line Description FIXME 39 PATH_RELATIVE definition will break in certain parts when the server does not offer the Document_root. FIXME 40 Notable, it breaks in the installer. ### framework\modules\ecommerce\models\shippingcalculator.php 1 Type Line Description FIXME 53 probably needs to be passed order object ### framework\modules\ecommerce\billingcalculators\payflowpro.php 4 Type Line Description FIXME 70 why aren't we passing $opts? FIXME 518 not sure this is correct, but we need to update billingmethod FIXME 637 not sure this is correct, but we need to update billingmethod FIXME 758 not sure this is correct, but we need to update billingmethod ### framework\modules\report\controllers\reportController.php 4 Type Line Description FIXME 110 OLD calendar control format FIXME 947 there is no billingmethod->title ...this is translated?? FIXME 1984 this is NOT in import FIXME 2027 this is NOT in import ### framework\modules\ecommerce\models\shipping.php 4 Type Line Description FIXME 32 we don't use this in shipping, only order? FIXME 35 we do NOT want the global$order FIXME 86 , we don't really need to call it each time the shipping model is created! slows entire system down! FIXME 230 we need to get current address here ### framework\core\forms\template.php 1 Type Line Description TODO 28 prepare this class for multiple template systems Type Line Description FIXME 281 should a group admin get the entire User Management menu? Type Line Description FIXME 103 do we want to add a forms_id field? FIXME 214 ??? ### framework\modules\ecommerce\billingcalculators\worldpayCheckout.php 4 Type Line Description TODO 23 make into php5 class with access modifiers properties and all that jazz. FIXME 122 already unserialized?? == $opts??? FIXME 130 is 'complete' and$grand_total proper? FIXME 178 we don't store a 'token' ### framework\core\forms\controls\jquery\listbuildercontrol.php 1 Type Line Description FIXME 22 this is NOT a bootstrap control, but jQuery ### framework\modules\ecommerce\products\models\product.php 7 Type Line Description FIXME 140 Make this actually do something. FIXME 203 adjust multiple quantity here FIXME 233 adjust multiple quantity here for child products??? 
FIXME 411 adjust multiple quantity here FIXME 982 $product_type is not set, changed to$product->product_type FIXME 1050 not sure why we are creating 2 array entries?? ### framework\plugins\function.chain.php 2 Type Line Description FIXME 35 old school way of calling? FIXME 43 old school only? ### framework\modules\recyclebin\models\recyclebin.php 1 Type Line Description FIXME 108 we should only send module with sources or configs to the recycle bin NOT things like rss, addressbook ### framework\core\forms\controls\statescontrol.php 1 Type Line Description FIXME 55 this is the US in sample db ### framework\core\subsystems\expLabels.php 2 Type Line Description FIXME 144 FIXME 158 need to integrate into expHTMLToPDF ### framework\modules\events\controllers\eventController.php 6 Type Line Description FIXME 340 this can't be right? FIXME 343 this can't be right? FIXME 1441 must convert $dtstart timezone FIXME 1454 must convert$dtend timezone FIXME 1577 we must have the real timezone offset for the date by this point FIXME 1579 this is for the google ical feed which is bad! ### framework\modules\company\controllers\companyController.php 1 Type Line Description TODO 124 this is a misnomer as we only accept an id NOT a title and duplicates the show() method ### framework\core\subsystems\expMail.php 10 Type Line Description FIXME 169 won't work since $params['connections'] is NOT array FIXME 170 won't work since$params['connections'] is NOT array TODO 74 add support for telling the constructor where the system specific Mail Transit Authority is: i.e. where sendmail or exim is if not in the default /usr/sbin TODO 74 drop support for passing in custom "connections", a Swift 3.x relic that we do not need. TODO 74 add further documentation for using settings other than the system default TODO 219 Update this section to use more error checking TODO 267 May add allowing a string param to be passed as the message (text_message) using all defaults to mail it. TODO 398 Update this section to use batch processing properly TODO 690 A nice future feature addition would be to allow the passing in of associative arrays like so: $emailsToSendTo = array('bob@smith.com'=>'Bob Smith', 'mary@smith.com'=>'Mary Smith');$emailItem->addTo($emailsToSendTo); OR$emailItem->addTo('array('myemail@mysite.com'=>'Website Owner', 'secondemail@website.com'=>'Frank Jones'); Actually, cleanup should be done so that this function only takes associative arrays, and nothing else. ### framework\modules\blog\controllers\blogController.php 3 Type Line Description FIXME 438 $object not set FIXME 440$object not set FIXME 442 $object not set ### framework\modules\twitter\models\Twitter.php 1 Type Line Description TODO 247 refactor me ### framework\modules\ecommerce\controllers\billingController.php 1 Type Line Description FIXME 93 we need to ensure our default calculator is still active ### framework\modules\forms\controllers\formsController.php 10 Type Line Description FIXME 433 not sure we need this here FIXME 1237 change this to an expValidator call FIXME 1274 we also need to update any config column_names_list settings? FIXME 1447 should we default to only 5 columns or all columns? and should we pick up modules columns ($this->config) or just form defaults ($f->) FIXME 1611$individual_Header is ALWAYS in $rptcols? FIXME 1675 we need to echo inside call FIXME 1695 check for duplicate form data table name before import? 
FIXME 1774 quick hack to remove file model FIXME 1834 FIXME 1955 quick hack to remove file model ### framework\core\forms\controls\checkboxcontrol.php 1 Type Line Description FIXME 145 this is just here until we completely deprecate the old school checkbox ### framework\modules\users\models\user.php 2 Type Line Description FIXME 82 will be empty for new ldap user FIXME 309 who should get a slingbar? any non-view permissions? new group setting? ### framework\modules\file\models\expFile.php 4 Type Line Description FIXME 1783 we need to echo and/or write to file within this method to handle large database dumps FIXME 1829$dump may become too large and exhaust memory FIXME 2139 we may have to change this for handling large files via fgets()...see dumpDatabase() above FIXME 2185 should we convert this? object2array? ### framework\plugins\modifier.convertlangcode.php 1 Type Line Description FIXME 38 this plugin isn't used, but this will at least return something ### framework\modules\ecommerce\controllers\orderController.php 30 Type Line Description FIXME 121 this sql isn't correct??? FIXME 199 we don't really want to 'pop' it off the object FIXME 223 what about new orders with no items?? FIXME 237 what about new orders with no items?? FIXME 320 we'll use the tc param for now FIXME 452 uncomment to implement, comment out above FIXME 461 uncomment to implement, comment out above FIXME 702 never used FIXME 863 Unless you need each mail sent separately, you can now set 'to'=>$email_addys and let expMail send a single email to all addresses FIXME 910 we should also be getting the order status name FIXME 1043 already unserialized??? FIXME 1048 credit card doesn't have a result FIXME 1075 should this be discrete?? FIXME 1088 should it always be the grand total??? FIXME 1100 only getting 1st one and then removing it FIXME 1140 for now multiple shipping methods will crash ecom with shipping->__construct() FIXME 1183 check for existing rate, if not get next cheapest? (based on predefined package?) FIXME 1196 updated with new options we may need to take action on like tracking number? FIXME 1269 should we add the params to the$sm->shipping_options, or pass them?? FIXME 1398 do we need to do this? FIXME 1534 add a fake item? FIXME 1564 only getting 1st one and then removing it FIXME 1603 only getting 1st one and then removing it FIXME 1683 we don't use selectedOpts? FIXME 1713 only getting 1st one and then removing it FIXME 1820 only getting 1st one and thenremoving it FIXME 1838 attempt to update w/ new billing transaction FIXME 1864 we need to be able to call this from program with $params also, addToOrder FIXME 1885 attempt to update w/ new billing transaction FIXME 1944 attempt to update w/ new billing transaction ### framework\modules\ecommerce\billingcalculators\creditcard.php 1 Type Line Description FIXME 108 we need to display/obtain user information if we are doing a quickPay checkout??? ### framework\core\subsystems\expCore.php 4 Type Line Description FIXME 95 Hardcoded controller stuff!! FIXME 143$sef_name isn't set?? TODO 279 Investigate the chances of BASE occurring more than once FIXME 565 $individual_Header is ALWAYS in$rptcols? ### framework\core\subsystems\expTemplate.php 29 Type Line Description FIXME 40 only called from basetemplate->_construct() NOT controllertemplate FIXME 41 only place this method is called, move to this subsystem? FIXME 59 Not Used 2.2??? FIXME 72 Not Used 2.2??? FIXME 126 Not Used 2.2??? FIXME 150 DEPRECATED: backward compatibility wrapper FIXME 151 Not Used??? 
FIXME 191 only used by container 2.0 edit action FIXME 219 we need to also look for custom & jquery & bootstrap controls and NOT assume we only subclass basic controls? FIXME 235 we need to also look for custom & jquery & bootstrap controls and NOT assume we only subclass basic controls? FIXME 321 remove old school module code TODO 322 implement caching TODO 323 optimization - walk the tree backwards and stop on the first match TODO 333 convert everything to the new naming model TODO 356 forms/calendar only used by calendarmodule TODO 380 forms/calendar only used by calendarmodule TODO 381 forms/calendar only used by calendarmodule TODO 394 forms/calendar only used by calendarmodule FIXME 410 old school actions were php files TODO 423 handle subthemes TODO 424 now that glob is used build a syntax for it instead of calling it repeatedly TODO 439 handle the - currently unused - case where there is the same file in different $type categories TODO 452 invent better error handling, maybe an error message channel ? FIXME 471 only used by 1) event module edit action (email forms) & 2) expTemplate::listModuleViews for OS modules FIXME 720 should there be a theme newui variation? FIXME 721 should there be a theme newui variation? FIXME 782 this function isn't called FIXME 808 we assume the file is only a filename and NOT a path? FIXME 815 we need to check for custom views and add full path for system views if coming from custom view ### install\pages\install-3.php 1 Type Line Description FIXME 301 not sure if we should do this? ### framework\core\subsystems\expVersion.php 2 Type Line Description FIXME 157 we need a good installation/server to place this on FIXME 159 substitute until git fixed on exponent servers ### framework\core\forms\filetemplate.php 1 Type Line Description FIXME 26 Never used??? ### install\upgrades\upgrade_calendar.php 1 Type Line Description FIXME 255 we also need to copy any .form & .config files ### framework\plugins\newui\function.ddrerank.php 1 Type Line Description FIXME 74 we don't seem to get a container var ### framework\core\subsystems\expJavascript.php 2 Type Line Description FIXME 393 we need to allow for an array of scripts with unique+index as name FIXME 734$hide & $footer are not defined below ### framework\modules\ecommerce\definitions\shippingmethods.php 4 Type Line Description FIXME 28 needed to activate the has_many assignment FIXME 88 is this a 'shipping_options' item?? FIXME 92 moved from orders table FIXME 96 moved from orders table ### framework\modules\ecommerce\billingcalculators\passthru.php 3 Type Line Description FIXME 70 doesn't match parent declaration update($params = array()) FIXME 87 never used FIXME 152 why aren't we passing $opts? ### framework\plugins\block.assocarray.php 1 Type Line Description FIXME 148 we discard this result? ### framework\core\forms\controls\jquery\calendarcontrol.php 1 Type Line Description FIXME 24 this is NOT a bootstrap control, but jQuery ### framework\modules\ecommerce\definitions\orders.php 6 Type Line Description FIXME 49 we may need to move this to the shippingmethod FIXME 53 we may need to move this to the shippingmethod FIXME 58 here because we currently only allow one package? FIXME 62 we may want this since there is only one and NOT many? FIXME 82 is this actual or estimated, move this to the shippingmethod? FIXME 132 deprecated order gift message?? 
### framework\modules\administration\menus\y-navigation.php 1 Type Line Description FIXME 117 do we just need to let any user w/ manage page perms to get to the manage menu hierarchy and let it decide perms from there? ### framework\core\forms\controls\jquery\yuidatetimecontrol.php 1 Type Line Description FIXME 24 this is NOT a bootstrap control, but jQuery ### framework\plugins\function.userlistcontrol.php 1 Type Line Description TODO 45 should we display username w/ first/last name in parens or first/last name? ### framework\core\subsystems\expTheme.php 23 Type Line Description FIXME 534 Not used FIXME 543$form is not set?? FIXME 550 Not used FIXME 562 this should be $file instead of$filename? FIXME 621 need to use $feed instead of$params FIXME 861 clean our passed parameters FIXME 862 need array sanitizer FIXME 869 we've already sanitized at this point FIXME 871 we've already sanitized at this point FIXME 874 module/controller glue code..remove ASAP FIXME 939 only used by smarty functions, old school? FIXME 946 need array sanitizer FIXME 1023 not sure how to convert this yet FIXME 1059 change to showModule call FIXME 1100 patch to cleanup module name FIXME 1110 there is no such config index FIXME 1151 patch to cleanup module name FIXME 1157 let's try $sectionObj instead of last_section FIXME 1184 not used in base system (custom themes?) FIXME 1334 -$section might be empty! We're getting it from last_section instead of sectionObj?? FIXME 1337 let's try $sectionObj instead of last_section FIXME 1389 patch to cleanup module name FIXME 1462 we are checking here for a new MVC style controller or an old school module. We only need to perform ### framework\modules\ealerts\models\expeAlerts.php 1 Type Line Description FIXME 58 , not pulling any items? ### framework\modules\ecommerce\billingcalculators\ezic.php 3 Type Line Description TODO 38 I don't think this is used any more but i don't have a clue FIXME 213 hard coded text!! FIXME 223 hard coded text!! ### framework\plugins\outputfilter.trim.php 1 Type Line Description TODO 37 substr_replace() is not overloaded by mbstring.func_overload - so this function might fail! ### framework\modules\file\controllers\fileController.php 5 Type Line Description FIXME 489 json error checking/reporting, may no longer be needed FIXME 718 we exit before hitting this FIXME 735 we exit before hitting this TODO 875 we need to write inside call passing$eql file pointer FIXME 902 we need to echo inside call ### framework\core\forms\formtemplate.php 1 Type Line Description FIXME 29 only used by calendarmodule for feedback forms Type Line Description FIXME 209 this shouldn't be a link FIXME 218 this shouldn't be a link ### framework\core\subsystems\database\mysqli.php 3 Type Line Description TODO 61 determine how to handle encoding on postgres FIXME 330 we don't add column length?? FIXME 1358 this can run us out of memory with too many rows ### framework\modules\ecommerce\controllers\cartController.php 7 Type Line Description FIXME 115 shouldn't this be relegated to $product->addToCart??? FIXME 171 though currently unused we don't account for minimym nor multiple quantity settings FIXME 588$opts is usually empty FIXME 722 $comment doesn't exist FIXME 783 we exit earlier if shipping_required??? TODO 834 FIXME 844$opts is usually empty ### framework\core\models\expRecord.php 12 Type Line Description FIXME 459 only placed used is in helpController->copydocs (though we don't have attachments), & migration FIXME 465 plural vs single? FIXME 466 plural vs single? 
FIXME 497 we're not going to do this automagically until we get the refreshing figured out. FIXME 640 $where .= empty($this->rank_by_field) ? null : "AND " . $this->rank_by_field . "='" .$this->$this->rank_by_field . "'"; FIXME 725 find a better way to pluralize these names!!! FIXME 726 find a better way to pluralize these names!!! FIXME 857 not used?? FIXME 904 is it plural where others are single? FIXME 968 find a better way to unpluralize the name! TODO 1023 perhaps add a 'in' option to the find so we can pass an array of ids and make ONE db call instead of looping FIXME 1113 not used?? ### install\index.php 2 Type Line Description FIXME 73 is this still necessary? FIXME 127 we need to output this into an element and not simply out on the page ### framework\modules\ecommerce\controllers\eventregistrationController.php 12 Type Line Description FIXME 283 we only have 0=active & 2=inactive ??? TODO 341 should we pull in an existing reservation already in the cart to edit? e.g., the registrants FIXME 342 we only have 0=active & 2=inactive ??? FIXME 483 only used by the eventregistration_form view (no method) FIXME 496$product doesn't exist FIXME 540 change this to forms table FIXME 281 why aren't we passing $opts? FIXME 367 , what can we do with the note returned? FIXME 378 only true if mode is 'sale' FIXME 467 what about multiple captures? FIXME 539 we probably need a payment_status FIXME 627 we probably need a payment_status FIXME 948 Deprecated now in favor of above standard ### framework\modules\ecommerce\models\billingcalculator.php 3 Type Line Description FIXME 201 this is only the 'results' property unlike$bm??? FIXME 202 what is this used for? FIXME 203 we need a transaction_state of complete, authorized, authorization pending, error, void, or refunded; or paid or payment due Type Line Description FIXME 53 does this mess up validation styling? ### framework\modules\ecommerce\models\discounts.php 1 Type Line Description ### framework\core\forms\controls\bootstrap3\listbuildercontrol.php 1 Type Line Description FIXME 22 this is NOT a bootstrap control, but jQuery ### framework\modules\ecommerce\shippingcalculators\fedexcalculator.php 4 Type Line Description FIXME 173 we need to be able to set this FIXME 179 we need to be able to set this FIXME 211 we need to be able to set this FIXME 217 we need to be able to set this ### framework\core\forms\controls\htmleditorcontrol.php 1 Type Line Description TODO 103 Convert to OO API and use eXp->EditorControl->doneInit instead Type Line Description FIXME 180 this shouldn't be a link FIXME 189 this shouldn't be a link ### framework\modules\importexport\controllers\importexportController.php 6 Type Line Description FIXME 168 this may crash on large .eql files FIXME 219 this may crash on large .eql files FIXME 278 we can't handle file attachments since this is only a db import FIXME 397 we need to echo inside call FIXME 621 this is where canonical should be FIXME 849 this is where canonical should be ### framework\modules\ecommerce\products\models\eventregistration.php 6 Type Line Description FIXME 38 only if a cost is involved FIXME 244 not sure this accurate based on expDefinableFields & Forms FIXME 376 for now we'll just add a new registration 'purchase' to the cart since that's the way the code flows. FIXME 379 we are adding updating an existing item in the cart?? 
FIXME 572 we need to be dealing w/ eventregistration_registrants here also/primarily FIXME 683 there is no 3rd param for this ### framework\core\subsystems\expPaginator.php 4 Type Line Description FIXME 212 we don't get attachments in this approach FIXME 253 we may want some more intelligent selection here based on cats/groups, e.g., don't break groups across pages, number of picture rows, etc... FIXME 377 module/controller glue code FIXME 427 return 404 error for infinite page scroll plugin ### framework\modules\ecommerce\billingcalculators\paylater.php 1 Type Line Description FIXME 54 why aren't we passing $opts? ### framework\core\subsystems\expPermissions.php 1 Type Line Description FIXME 105 for v2.2.2 and earlier this was true ### framework\plugins\function.yuimenubar.php 1 Type Line Description FIXME 36 convert to yui3 ### framework\plugins\function.ddrerank.php 1 Type Line Description FIXME 72 we don't seem to get a container var ### framework\plugins\function.yuimenu.php 1 Type Line Description FIXME 36 convert to yui3 ### framework\modules\core\controllers\expCommentController.php 7 Type Line Description FIXME 53 here is where we might sanitize the comment before displaying/editing it FIXME 101 here is where we might sanitize the comments before displaying them FIXME 153 here is where we might sanitize the comments before displaying them FIXME 154 this should follow the site attribution setting FIXME 300 here is where we might sanitize the comments before displaying them FIXME 301 this should follow the site attribution setting FIXME 456 here is where we might sanitize the comments before approving them ### framework\core\subsystems\database\mysqlid.php 1 Type Line Description TODO 134 determine how to handle encoding on postgres ### framework\modules\administration\controllers\administrationController.php 2 Type Line Description FIXME 29 this requires a logged in user to perform? FIXME 840 shouldn't use echo ### framework\core\subsystems\expSession.php 2 Type Line Description FIXME 302 is this data used to measure abandoned carts FIXME 470 not currently used ### framework\core\expFramework.php 9 Type Line Description FIXME 312 this is now handled by the template class during get_template_for_action since it only sets template variables FIXME 411 ? if the assoc$perm doesn't exist, the 'action' will ALWAYS be allowed, e.g., default is to allow action FIXME 471 there is NO 'page' object and section has no _construct method FIXME 473 there is no getModulesBySource method anywhere FIXME 533 this works by making assumptions FIXME 570 is this the correct sequence spot? FIXME 687 newui take priority FIXME 713 shoudl there be a theme newui variation? FIXME 935 do we need to update this to HTML5 and only include the space? ### framework\modules\help\models\help.php 1 Type Line Description FIXME 209 $where .= empty($this->rank_by_field) ? null : "AND " . $this->rank_by_field . "='" .$this->$this->rank_by_field . "'"; ### framework\core\subsystems\expSettings.php 8 Type Line Description FIXME 175 only used with themes and self::change() method FIXME 197 is this still necessary since we stripslashes above??? FIXME 273 this method is only used in install, and doesn't deal with profiles FIXME 380 this method is never used FIXME 471 this method is never used FIXME 554 do we need to delete an existing profile first?? FIXME 572 this method is never used FIXME 596 do we need to delete current config first?? 
### framework\core\controllers\expController.php 11 Type Line Description FIXME 29 not used and not actually set right index needed of -3 instead of -2 below FIXME 30 never used,$basemodel_name replaced? FIXME 94 this requires we move the 'core' controllers into the modules folder or use this hack FIXME 904 already assigned in controllertemplate? FIXME 909 $controller already assigned baseclassname (short vs long) in controllertemplate? FIXME 1472$object not set FIXME 1474 $object not set FIXME 1476$object not set FIXME 1500 $object not set FIXME 1502$object not set ### framework\core\subsystems\expHtmlToPDF.php 4 Type Line Description FIXME 1269 method no longer exists??? FIXME 1281 protected property??? FIXME 1295 protected property??? FIXME 1298 method no longer exists??? ### framework\modules\ecommerce\shippingcalculators\upscalculator.php 6 Type Line Description FIXME 115 kludge for the giftcard shipping FIXME 151 we need to be able to set this FIXME 156 we need to be able to set this FIXME 184 we need to be able to set this FIXME 189 we need to be able to set this FIXME 200 adding a $5 fee if shipping a gift card??? ### framework\core\forms\controls\uploadcontrol.php 2 Type Line Description FIXME 168 this shouldn't be a link FIXME 177 this shouldn't be a link ### framework\plugins\bootstrap3\function.ddrerank.php 1 Type Line Description FIXME 74 we don't seem to get a container var ### cron\bootstrap.php 1 Type Line Description TODO 66 Swift 3.x is no longer available, but expMail is already waiting ### framework\core\subsystems\expDatabase.php 10 Type Line Description FIXME 154 we shouldn't echo this, already installed? FIXME 212 we shouldn't echo this, already installed? FIXME 1086 never used FIXME 1100 never used FIXME 1135 never used FIXME 1160 never used FIXME 1222 never used FIXME 1418 never used FIXME 1459 never used FIXME 1713 never used ### framework\plugins\function.control.php 5 Type Line Description FIXME 277 this is the US in sample db FIXME 361 we don't really use this FIXME 362 we don't really use this FIXME 379 not sure we need this here FIXME 531 is value always == default? ### framework\core\forms\controls\yuicalendarcontrol.php 1 Type Line Description FIXME 49$disable_text & $showtime are NOT used ### framework\modules\banners\controllers\bannerController.php 1 Type Line Description FIXME 158 we are using a full path BASE instead of relative to root ### framework\modules\core\controllers\expDefinableFieldController.php 1 Type Line Description FIXME 50$record & $tag are undefined ### framework\core\subsystems\expRouter.php 8 Type Line Description FIXME 238 why would$user be empty here unless $db is down? FIXME 239 debug test FIXME 286 what are we doing with this history? saving each page load FIXME 692 this method is never called and doesn't do anything as written FIXME 805 need array sanitizer FIXME 809 debug test TODO 811 fully sanitize all params values here for ---We already do this! FIXME 893 , we still need a good lighttpd.conf rewrite config for sef_urls to work ### framework\modules\ecommerce\models\order.php 6 Type Line Description FIXME 27 in reality we only have one billingmethod??? 
FIXME 41 we don't seem to use this FIXME 65 we could auto-associate these with has_many FIXME 67 we could auto-associate these with get_assoc_for TODO 661 We need to use produce_price_adjusted in the loops to accommodate for more than one discount FIXME 761 not written for multiple shipments/destinations ### index.php 1 Type Line Description FIXME 145 timeout before closing an empty pdf or html2pdf error window ### framework\plugins\function.rating.php 1 Type Line Description FIXME 57 we need to be able to get a expRating record based on: ### exponent_php_setup.php 1 Type Line Description FIXME 56 does NOT exist ### framework\modules\container\controllers\containerController.php 3 Type Line Description TODO 69 we currently don't use the container cache FIXME 158 old school config FIXME 170 old school config ### framework\plugins\compiler.exp_include.php 2 Type Line Description FIXME 102 we assume the file is only a filename and NOT a path? FIXME 118 we need to check for custom views and add full path for system views if coming from custom view ### framework\modules\core\controllers\expCatController.php 1 Type Line Description FIXME 112 here is a hack to get the faq to be listed ### exponent.js.php 2 Type Line Description FIXME 35 deprecated FIXME 38 deprecated ### framework\modules\ecommerce\shippingcalculators\easypostcalculator.php 14 Type Line Description FIXME 233 for now just doing a single package FIXME 251 end single package FIXME 275 we need to be able to set this FIXME 280 we need to be able to set this FIXME 293 we need to begin adding the rates per package here FIXME 320 we need to be able to set this FIXME 325 we need to be able to set this FIXME 337 we need to begin adding the rates per package here FIXME 347 single package FIXME 351 single package FIXME 459 old code FIXME 562 not sure we need to get/save these??? FIXME 587 we need to select the correct carrier/method based on package type/size FIXME 686 not sure we need to get/save these??? ### framework\core\forms\controls\jquery\yuicalendarcontrol.php 2 Type Line Description FIXME 24 this is NOT a bootstrap control, but jQuery FIXME 57$disable_text & $showtime are NOT used ### framework\modules\ecommerce\models\taxclass.php 1 Type Line Description FIXME 67 we need to ensure any applicable origin tax is at the top of the list ### framework\plugins\block.toggle.php 1 Type Line Description FIXME 66 replace w/ a system default? ### framework\plugins\function.scaffold.php 1 Type Line Description FIXME 58$default_value is NOT set Type Line Description FIXME 41 this is NEVER run! ### exponent.php 2 Type Line Description FIXME 48 test TODO 63 Maxims initial anonymous user implementation, we need an anonymous user record ### framework\modules\events\models\event.php 1 Type Line Description FIXME 210 hack in case the day of week wasn't checked off Type Line Description FIXME 627 if we delete the module & sectionref the module completely disappears FIXME 630 more module/controller glue code FIXME 882 $manage_all is moot w/ cascading perms now? FIXME 887 recode to use foreach$key=>$value FIXME 942 this breaks jstree if we remove a parent and not the child FIXME 1114 we come here for new/edit content/standalone pages FIXME 1115 Allow non-administrative users to manage certain parts of the section hierarchy. 
### framework\modules\text\controllers\textController.php 1 Type Line Description FIXME 112 we don't load any custom stuff in this view except skin & plugins ### framework\modules\ecommerce\billingcalculators\splitcreditcard.php 6 Type Line Description TODO 46 I don't think this is used any more but i don't have a clue FIXME 54 why aren't we passing$opts? FIXME 60 this is where we lose the split credit card data FIXME 185 we do NOT want the global $order FIXME 202 we do NOT want the global$order FIXME 214 we don't store a 'token' ### framework\modules\core\controllers\expSimpleNoteController.php 5 Type Line Description FIXME 67 here is where we might sanitize the note before displaying/editing it FIXME 114 here is where we might sanitize the notes before displaying them FIXME 167 here is where we might sanitize the notes before displaying them FIXME 208 here is where we might sanitize the note before saving it FIXME 283 here is where we might sanitize the note before approving it ### framework\modules\ecommerce\billingcalculators\authorizedotnet.php 6 Type Line Description ### framework\core\subsystems\expCSS.php 9 Type Line Description FIXME 54 we do NOT want the global $order FIXME 56 update the shippingmethod id for each orderitem..again, this is only here until we implement split shipping. ### framework\modules\navigation\models\section.php 2 Type Line Description FIXME 375 if we delete the module & sectionref the module completely disappears FIXME 378 more module/controller glue code ### framework\core\forms\controllertemplate.php 2 Type Line Description FIXME 36 this disables bad template code reporting 3.x FIXME 122 probably not used in 2.0? ### framework\core\subsystems\expBot.php 1 Type Line Description FIXME 47 is this better than sleep(1)? seems to make it work, but delays? ### framework\modules\ecommerce\models\order_discounts.php 2 Type Line Description FIXME 37 this has a global$order FIXME 55 we do NOT want the global \$order, but it's not used ### framework\modules\users\controllers\usersController.php 2 Type Line Description FIXME 324 why are we doing this? this loads the edited user perms over the current user??? FIXME 1378 needs to be the newer fail form ### framework\modules\migration\controllers\migrationController.php 2 Type Line Description TODO 309 this doesn't work w/ php 5.2 FIXME 2283 do we want to add a forms_id field? ### framework\core\forms\basetemplate.php 2 Type Line Description FIXME 57 this disables bad template code reporting 3.x FIXME 123 only place we call this method
2021-05-15 07:08:40
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2405725121498108, "perplexity": 13651.23298744234}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243991378.48/warc/CC-MAIN-20210515070344-20210515100344-00291.warc.gz"}
https://radimentary.wordpress.com/2020/01/27/of-math-and-memory-part-2/
### Of Math and Memory, Part 2 Last time, I wrote that having a good memory is essential in mathematics. Today I will describe my model for working memory. ## Compression and Prediction Data compression is the science of storing information in as few bits as possible. I claim that optimizing your working memory is mainly a problem of data compression: there's a bounded amount of data you can store over a short period of time, and the problem is to compress the information you need so that this storage is as efficient as possible. One of the fundamental notions in data compression is that compression is equivalent to prediction. Another way of saying this is: the more you can predict, the less you have to remember. Here are three examples. ### I. Text compression Cnsdr ths prgrph. 'v rmvd ll th vwls nd t rmns bsclly rdbl, bcs wth jst th cnsnnts n cn prdct wht th mssng vwls wr. Th vwls wr rdndnt nd cld b cmprssd wy. All text compression algorithms work basically the same way: they store a smaller amount of data from which the rest of the information can be predicted. The better you are at predicting the future, the less arbitrary data you have to carry around. ### II. Memory for Go Every strong amateur Go player can, after a slow-paced game, reproduce the entire game from memory. An average game consists of between one and two hundred moves, each of which can be placed on any of the 19×19 grid points. [Figure: A typical amateur game, midway through.] Anyone who practices playing Go for a year or two will gain this amazing ability. It is not because their general memory improved either: if you showed them a sequence of nonsensical, randomly generated Go moves, they would have almost as hard of a time remembering them as an absolute novice. The reason it's so easy to remember your own games is because your own moves are so predictable. Given a game state, you don't have to actually remember the coordinates where the stone landed. You just have to think "what would I do in this position?" and reproduce the train of thought. The only moves in the game you really need to explicitly store in memory are the "surprising" moves that you didn't expect. Surprise, of course, is just another word for entropy. The better you are at prediction, the less surprise (entropy) you'll meet, and the less you have to remember. ### III. Mathematical theorems A general feature of learning things well is that you get better at predicting. Fill in the blank: If $a$ and $b$ are both the sum of two squares, then so is ___. A beginning student looks at this statement and recalls the answer is $ab$, simply by retrieving this answer directly from memory. A practiced number theorist doesn't need to store this exact statement directly in memory; instead, they know that any of an infinite variety of such statements can be reconstructed from a small number of core insights. Here, the two core insights are that a sum of two squares is the norm of a Gaussian integer, and that norms are multiplicative. Getting better at prediction in mathematics often follows the same general pattern: identifying the small number of core truths from which everything else follows. We reduced the problem of improving your working memory to the problem of predicting the future. At face value, this reduction seems less than useless, because predicting the future is harder than memorizing flash cards. Thankfully, human beings are embodied agents who can interact with our world. In particular, we can cheat by instead making the world easier to predict.
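To make the number-theory example fully concrete (a standard identity, not spelled out in the post itself): the two core insights combine into the Brahmagupta-Fibonacci identity

$$(a_{1}^{2}+a_{2}^{2})(b_{1}^{2}+b_{2}^{2})=(a_{1}b_{1}-a_{2}b_{2})^{2}+(a_{1}b_{2}+a_{2}b_{1})^{2},$$

which is just $|zw|^{2}=|z|^{2}|w|^{2}$ for the Gaussian integers $z=a_{1}+a_{2}i$ and $w=b_{1}+b_{2}i$; this is exactly why the product $ab$ is again a sum of two squares.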
More on this next time.
2021-08-02 03:26:18
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 3, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.543021023273468, "perplexity": 1159.4956015768691}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046154302.46/warc/CC-MAIN-20210802012641-20210802042641-00164.warc.gz"}
https://www.physicsforums.com/threads/implicit-differentiation.904645/
# Implicit Differentiation

1. Feb 19, 2017

### FritoTaco

1. The problem statement, all variables and given/known data

$\dfrac{x^2}{x+y}=y^2+8$

2. Relevant equations

Quotient Rule: $\dfrac{g(x)\cdot f'(x)-g'(x)\cdot f(x)}{(g(x))^2}$

Product Rule: $f(x)\cdot g'(x)+g(x)\cdot f'(x)$

3. The attempt at a solution

$\dfrac{(x+y\cdot\dfrac{dy}{dx})(2x)-(1\cdot\dfrac{dy}{dx})(x^2)}{(x+y\cdot \dfrac{dy}{dx})^2} = 2y\cdot\dfrac{dy}{dx}$

I feel like there are a couple of ways to go about this. Would it be easier to flip the denominator and use the product rule? I just used the quotient rule from here on out. What you see me trying to do is get $\dfrac{dy}{dx}$ on the left side and everything else on the right, then factor $\dfrac{dy}{dx}$ out. This is my first encounter with this type of problem so I get confused very fast.

$\dfrac{(x+y\cdot\dfrac{dy}{dx})(2x)-(1\cdot\dfrac{dy}{dx})(x^2)}{(x+y\cdot \dfrac{dy}{dx})^2} = 2y\cdot\dfrac{dy}{dx}$

$\dfrac{(2x^2+2xy\cdot\dfrac{dy}{dx})-(x^2\cdot\dfrac{dy}{dx})}{(x+y\cdot \dfrac{dy}{dx})^2} = 2y\cdot\dfrac{dy}{dx}$

$\dfrac{\dfrac{dy}{dx}-(x^2\cdot\dfrac{dy}{dx})}{(x+y\cdot\dfrac{dy}{dx})^2}=\dfrac{2y\cdot\dfrac{dy}{dx}}{(2x^2+2xy\cdot\dfrac{dy}{dx})}$

$\dfrac{-1}{x^2}\cdot\dfrac{\dfrac{dy}{dx}-(x^2\cdot\dfrac{dy}{dx})}{(x+y\cdot\dfrac{dy}{dx})^2}=\dfrac{2y\cdot\dfrac{dy}{dx}}{(2x^2+2xy)}\cdot\dfrac{-1}{x^2}$

$\dfrac{\dfrac{dy}{dx}-\dfrac{dy}{dx}}{(x+y\cdot\dfrac{dy}{dx})^2}=\dfrac{2y\cdot\dfrac{dy}{dx}}{-x^2(2x^2+2xy)}$

$\dfrac{1}{\dfrac{dy}{dx}}\cdot\dfrac{\dfrac{dy}{dx}-\dfrac{dy}{dx}}{(x+y\cdot\dfrac{dy}{dx})^2}=\dfrac{2y\cdot\dfrac{dy}{dx}}{-x^2(2x^2+2xy)}\cdot\dfrac{1}{\dfrac{dy}{dx}}$

$\dfrac{\dfrac{dy}{dx}-\dfrac{dy}{dx}}{\dfrac{dy}{dx}\cdot(x+y\cdot\dfrac{dy}{dx})^2}=\dfrac{2y}{-x^2(2x^2+2xy)}$

I don't want to go much farther because I could be doing this wrong. On the left side, I want to factor, but I'm curious if this is right so far, or have I made any errors?

2. Feb 19, 2017

### Ray Vickson

Where does the $y y'$ come from on the left?

3. Feb 19, 2017

### FritoTaco

Are you talking about where I have $(x+y\cdot \dfrac{dy}{dx})$ or $-(1\cdot \dfrac{dy}{dx})$?

4. Feb 19, 2017

### ehild

The differentiation of the left side is wrong. What are f and g? Just apply the Quotient Rule properly.

5. Feb 19, 2017

### FritoTaco

Oops, here they are:

$f(x)=x^2$
$f'(x)=2x$
$g(x)=(x+y)$
$g'(x)=(x+y\cdot\dfrac{dy}{dx})$

Edit: Oh, I need it to say $\dfrac{(x+y)(2x)-(1\cdot\dfrac{dy}{dx})(x^2)}{(x+y\cdot \dfrac{dy}{dx})^2} = 2y\cdot\dfrac{dy}{dx}$

6. Feb 19, 2017

### ehild

The last equation is wrong. The derivative of a sum is the sum of derivatives. What is dx/dx? And dy/dx is not yy'.

7. Feb 19, 2017

### FritoTaco

Would it just be $\dfrac{dy}{dx}$

8. Feb 19, 2017

### ehild

Still wrong. What should be the denominator? Isn't it $g^2$?

9. Feb 19, 2017

### ehild

Of course.

10. Feb 19, 2017

### FritoTaco

Ok, thanks, let me update the work and go from there.

11. Feb 19, 2017

### FritoTaco

I started moving the $\dfrac{dy}{dx}$ to the right side instead, but didn't finish just yet.

$\dfrac{(x+y)(2x)-(\dfrac{dy}{dx})(x^2)}{(x+y^2)^2}=2y\cdot\dfrac{dy}{dx}$

$\dfrac{1}{\dfrac{dy}{dx}}\cdot\dfrac{2x^2+2xy-x^2\cdot\dfrac{dy}{dx}}{(x+y^2\cdot\dfrac{dy}{dx})^2}=2y\cdot\dfrac{dy}{dx}\cdot\dfrac{1}{\dfrac{dy}{dx}}$

$\dfrac{1}{2y}\cdot\dfrac{2x^2+2xy-x^2}{(x+y^2\cdot\dfrac{dy}{dx})^2}=\dfrac{2y\cdot\dfrac{dy}{dx}}{\dfrac{dy}{dx}}\cdot\dfrac{1}{2y}$

$\dfrac{2x^2+2xy-x^2}{2y(x+y\cdot\dfrac{dy}{dx})^2}=\dfrac{\dfrac{dy}{dx}}{\dfrac{dy}{dx}}$

12. Feb 19, 2017

### Ray Vickson

I am talking about the $y \frac{dy}{dx}$ part.

13. Feb 19, 2017

### FritoTaco

That was my fault. It was supposed to be $(x+y)$

14. Feb 19, 2017

### ehild

Wrong denominator on the left side.

15. Feb 19, 2017

### ehild

You confuse yourself when dy/dx appears on both sides. It makes no sense to divide by dy/dx. Multiply the original equation by (x+y); then you need to differentiate the equation $x^2=(x+y)(y^2+8)$.

16. Feb 19, 2017

### FritoTaco

I fixed that; after that, it should've been like the rest. Are you saying that after getting rid of that (x+y) in the denominator, I would distribute on the right side and use the product rule? See, this is my problem in math: I follow the pattern I first learned from when introduced to something new. When you said to multiply the original equation by (x+y), I wouldn't have thought of that because I've been following what I've been previously doing, by using the quotient rule because there's a fraction on the left side. I want to clarify something. After doing this: $x^2=(x+y)(y^2+8)$, in the second step after distributing, $x^2=xy^2\cdot\dfrac{dy}{dx}+8x+y^3\cdot\dfrac{dy}{dx}+8y\cdot\dfrac{dy}{dx}$. When I get here, am I using the chain rule on xy^2 and the product rule on 8x and 8y?

17. Feb 19, 2017

### ehild

The rest was also wrong. There is no derivative in the denominator on the left side. Why did you put dy/dx there again? And I do not follow what you did. No need to distribute. Use the product rule, but differentiate both sides! It is all right to differentiate the original equation, but you get a more complicated equation for dy/dx, and you looked confused. You need to use basic rules of multiplication and addition, instead of "following patterns". The left side is $x^2$. What is the derivative? The right side is $(x+y)(y^2+8)$. Apply the product rule to get the derivative with respect to x.

18. Feb 19, 2017

### FritoTaco

My professor was saying whenever there's a "y", you need to multiply by $\dfrac{dy}{dx}$. He never showed us an example of a problem given a fraction, so I don't know about that.

$x^2=(x+y)(y^2+8)$
$f(x)=x+y$
$f'(x)=\dfrac{dy}{dx}$
$g(x)=y^2+8$
$g'(x)=2y\dfrac{dy}{dx}$
$x^2=(x+y)(2y\cdot\dfrac{dy}{dx})+(y^2+8)(\dfrac{dy}{dx})$
$x^2=2xy\cdot\dfrac{dy}{dx}+2y^2\cdot\dfrac{dy}{dx}+y^2\cdot\dfrac{dy}{dx}+8\cdot\dfrac{dy}{dx}$

Would you distribute like I just did on the last part there? Doesn't seem correct.

19. Feb 19, 2017

### Staff: Mentor

You might have heard your professor incorrectly. When you're doing implicit differentiation, where there's a y, upon differentiation you get $\frac{dy}{dx}$. You aren't multiplying by $\frac{dy}{dx}$. For example, if there's a term of $y^2$, differentiating it gives $\frac d{dx}\left(y^2\right) = 2y\cdot \frac {dy}{dx}$. That last factor comes from the chain rule. $\frac d{dx}\left(y^2\right) = \frac d {dy} \left(y^2\right) \cdot \frac {dy}{dx}$.

20. Feb 19, 2017

### ehild

You have to differentiate both sides! So the left side is $d(x^2)/dx = ?$ About the saying of your professor: whenever you differentiate a function of y, differentiate it with respect to y first, then multiply by y'. But in case of a fraction, it is f'g-fg' divided by the square of the original denominator.
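For reference, here is one way to carry the product-rule setup above to a closed form (a sketch, not posted in the original thread). Differentiating both sides of $x^2=(x+y)(y^2+8)$:

$$2x=(1+y')(y^{2}+8)+(x+y)\cdot 2y\,y',$$

so collecting the $y'$ terms,

$$y'=\frac{2x-(y^{2}+8)}{(y^{2}+8)+2y(x+y)}=\frac{2x-y^{2}-8}{3y^{2}+2xy+8}.$$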
2017-08-21 20:22:43
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7455028295516968, "perplexity": 865.6143986429369}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886109525.95/warc/CC-MAIN-20170821191703-20170821211703-00532.warc.gz"}
https://www.physicsforums.com/threads/finding-the-chemical-equation.97474/
# Finding the chemical equation

My question is about finding the chemical equation and the name of a wrongly labeled element, given the molecular weight of the formula and the weights of two of the elements. The problem gives the overall molecular weight of the formula as 150 grams/mole. Then it says there is 1.00 gram of chlorine in it and 1.36 grams of a "so-called" Illinium. From this, how do I go about finding the formula for the problem and what element Illinium really is?

## Answers and Replies

GCT (Science Advisor, Homework Helper): If there are 1.00 grams of chlorine for every 1.36 grams of "Illinium", how many grams of the 150 grams/mole of the molecular weight are due to chlorine?
2021-11-28 11:53:23
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8316897749900818, "perplexity": 1300.461786645503}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964358520.50/warc/CC-MAIN-20211128103924-20211128133924-00243.warc.gz"}
http://mathhelpforum.com/algebra/152546-runner-print.html
# runner

• Aug 1st 2010, 07:22 PM aeroflix

runner

It took a faster runner 10 seconds longer to run a distance of 1500 ft than it took a slower runner to run 1000 ft. If the rate of the faster runner was 5 feet per second more than the slower runner, what was the rate of each runner?

• Aug 1st 2010, 07:52 PM Math Major

You need to use the equation distance = rate * time. We know that $1500 = (r + 5)(t + 10)$ and $1000 = r * t$. Solve for r; that will be the rate of runner 2.

• Aug 1st 2010, 10:08 PM aeroflix

Your working equation is wrong. t = 790, which is too large.

• Aug 1st 2010, 10:11 PM aeroflix

I guess this is by far the most complex problem ever.

• Aug 1st 2010, 10:29 PM Math Major

Show me your work. You should get the rate of runner 1 as 25 feet per second and the rate of runner 2 as 20 feet per second. t solves out to be 50, with runner 1 taking 60 seconds.

• Aug 1st 2010, 10:37 PM Math Major

I'll even set it up for you. $1000 = r * t$. Solve for t: $t = \frac{1000}{r}$. Plug this value in for t in equation 1: $1500 = (r + 5)(\frac{1000}{r} + 10)$. Foil it: $1500 = 1000 + 10r + \frac{5000}{r} + 50$. Combine like terms: $450 = 10r + \frac{5000}{r}$. Multiply through by an r: $0 = 10r^2 - 450r + 5000$. Divide by 10: $r^2 - 45r + 500 = 0$. Solve the quadratic. You will get two rates. One will be the rate for runner 1. The other will be the rate for runner 2.

• Aug 1st 2010, 10:59 PM aeroflix

25 and 20??? Is that how easy it is???? But why doesn't my formula work the way it should?

             distance    rate     time
    faster   1500 ft     x + 5    1500/(x + 5)
    slower   1000 ft     x        1000/x

Equation: distance of faster is equal to distance travelled by slower: $\frac{1500}{x + 5} = \frac{1000}{x} + 10$, plus 10 to make the slower runner's distance 1500 feet also... Why doesn't my equation work?? But it's correct, right?

• Aug 1st 2010, 11:01 PM aeroflix

Also, why is the 10 added to the faster runner? Isn't it supposed to be added to the slower runner?? I need a good analysis. Pls help.

• Aug 1st 2010, 11:03 PM Math Major

Given $\frac{1500}{x + 5} = \frac{1000}{x} + 10$. Multiply by the common denominator $(x)(x+5)$: $1500x = 1000(x+5) + 10(x)(x+5)$. Multiply the expressions out: $1500x = 1000x + 5000 + 10(x^2 + 5x)$, so $1500x = 1000x + 5000 + 10x^2 + 50x$. Combine like terms: $450x = 10x^2 + 5000$. It should look familiar from here.

• Aug 1st 2010, 11:05 PM Math Major

Quote: Originally Posted by aeroflix. Reread your problem statement. You said that it took the faster runner 10 seconds longer to run 1500 feet than it took the slower runner to run 1000 feet. The faster runner's time is 10 seconds longer.
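For completeness, finishing the quadratic from the last worked post (my arithmetic, consistent with the rates quoted in the thread):

$$r=\frac{45\pm\sqrt{45^{2}-4\cdot 500}}{2}=\frac{45\pm 5}{2}\in\{20,\,25\},$$

so with the slower runner at $r=20$ ft/s, the faster runner is at $r+5=25$ ft/s; the slower takes $1000/20=50$ s and the faster $1500/25=60$ s, which is indeed 10 seconds longer.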
2016-10-22 12:51:37
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 15, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7091148495674133, "perplexity": 3644.2169853421665}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988718957.31/warc/CC-MAIN-20161020183838-00213-ip-10-171-6-4.ec2.internal.warc.gz"}
https://search.r-project.org/CRAN/refmans/DiscreteWeibull/html/lossdw.html
lossdw {DiscreteWeibull} R Documentation

## Loss function

### Description

Loss function for the method of moments (type 1 discrete Weibull).

### Usage

    lossdw(par, x, zero = FALSE, eps = 1e-04, nmax = 1000)

### Arguments

- par: vector of the parameters q and beta
- x: the vector of sample values
- zero: TRUE, if the support contains 0; FALSE otherwise
- eps: error threshold for the numerical computation of the expected value
- nmax: maximum value considered for the numerical computation of the expected value

### Details

The loss function is given by L(x;q,\beta)=[m_1-\mathrm{E}(X;q,\beta)]^2+[m_2-\mathrm{E}(X^2;q,\beta)]^2, where \mathrm{E}(\cdot) denotes the expected value, and m_1 and m_2 are the first and second order sample moments respectively.

### Value

the value of the quadratic loss function

### Author(s)

Alessandro Barbiero

### See Also

Edweibull

### Examples

    x <- c(1,1,1,1,1,2,2,2,3,4)
    lossdw(par = c(0.5, 1), x = x)  # illustrative values for q and beta
2022-05-22 09:59:46
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5661569237709045, "perplexity": 4828.948617118906}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662545326.51/warc/CC-MAIN-20220522094818-20220522124818-00670.warc.gz"}
https://victoryawards.us/and-meet/meet-and-up-difference-quotient.php
# Meet and up difference quotient

### Derivative - Wikipedia

One way to interpret the above calculation is by reference to a line. (7 + ∆x, f(7 + ∆x)), the slope of this chord is the so-called difference quotient slope of chord the circle at that point, i.e., it doesn't meet the circle at any second point.) Thus.

Jan 14, In calculus, this expression is called the difference quotient of f. • (a) Express the slope. I looked it up to make sure and this is correct. Lenny.

Note that for any value, the limit of a difference quotient is an expression of the form... From the definition above, you can see that the difference quotient is used by a stream and two straight roads that meet.

"A code worth one billion dollars. No one who has allowed himself to threaten the life of my employee will leave here." She paused.
2019-09-24 09:03:54
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8099045157432556, "perplexity": 1461.538232757758}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514572896.15/warc/CC-MAIN-20190924083200-20190924105200-00120.warc.gz"}
http://clay6.com/qa/2560/find-the-position-vector-of-a-point-r-which-divided-the-line-joining-the-po
# Find the position vector of a point R which divides the line joining the points whose position vectors are $P(\hat i +2\hat j - \hat k )$ and $Q ( -\hat i + \hat j + \hat k )$ in the ratio 2 : 1, (i) internally and (ii) externally.
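Although the page leaves the question open, a worked sketch via the standard section formula: internally $\vec{OR}=\frac{2\vec{OQ}+\vec{OP}}{2+1}$ and externally $\vec{OR}=\frac{2\vec{OQ}-\vec{OP}}{2-1}$, giving

$$R_{\text{internal}}=\frac{2(-\hat i+\hat j+\hat k)+(\hat i+2\hat j-\hat k)}{3}=\frac{-\hat i+4\hat j+\hat k}{3},\qquad R_{\text{external}}=2(-\hat i+\hat j+\hat k)-(\hat i+2\hat j-\hat k)=-3\hat i+3\hat k.$$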
2016-12-05 00:34:21
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8083153963088989, "perplexity": 587.9840192497021}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698541517.94/warc/CC-MAIN-20161202170901-00076-ip-10-31-129-80.ec2.internal.warc.gz"}
https://plainmath.net/algebra-ii/103344-what-is-c35
xcopyv4n 2023-03-11

What is $^{5}C_{3}$?

Keira Fitzpatrick

Determine $^{5}C_{3}$. We know that

$$^{n}C_{r}=\frac{n!}{r!\left(n-r\right)!}$$

To calculate, put $n=5, r=3$ in the above formula:

$$^{5}C_{3}=\frac{5!}{3!\left(5-3\right)!}=\frac{5!}{3!\,2!}=\frac{5\times4\times3!}{3!\times2\times1}=\frac{5\times4}{2\times1}=5\times2=10$$

Consequently, the required value is $^{5}C_{3}=10$.
2023-03-24 14:58:22
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 6, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8896898627281189, "perplexity": 7873.092691579269}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296945287.43/warc/CC-MAIN-20230324144746-20230324174746-00553.warc.gz"}
https://www.ttp.kit.edu/preprints/2002/ttp02-31?rev=1458209037&do=diff
# TTP02-31 Bosonic Corrections to $\Delta r$ at the Two Loop Level

The details of the recent calculation of the two-loop bosonic corrections to the muon lifetime in the Standard Model are presented. The matching on the Fermi theory is discussed. Renormalisation in the on-shell and in the $\overline{\mathrm{MS}}$ scheme is studied, and the transition between the schemes is shown to lead to identical results. High precision numerical methods are compared with mass difference and large mass expansions.

**M. Awramik, M. Czakon, A. Onishchenko, O. Veretin**

**Phys. Rev. D 68, 053004 (2003)**

PDF | PostScript | [arXiv](http://arxiv.org/abs/hep-ph/0209084)
2021-01-19 12:56:14
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2980281412601471, "perplexity": 3120.497035570619}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703518240.40/warc/CC-MAIN-20210119103923-20210119133923-00541.warc.gz"}
https://www.physicsforums.com/threads/is-plasma-less-dense-than-vacuum.341317/
# Is plasma less dense than vacuum?

1. Sep 29, 2009

### maria clara

Water and glass are considered denser than vacuum because their dielectric coefficient is greater than the dielectric constant of vacuum. The plasma's dielectric constant is

$$\epsilon_0\left(1-\left(\omega_p/\omega\right)^2\right)$$

Does this mean that plasma can be considered less dense than vacuum?...
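One hedged way to read the formula (my gloss, not from the thread): for wave frequencies $\omega>\omega_p$ the permittivity satisfies $0<\epsilon<\epsilon_0$, so the refractive index is

$$n=\sqrt{\epsilon/\epsilon_0}=\sqrt{1-\omega_p^2/\omega^2}<1,$$

which is the precise sense in which such a plasma is optically less dense than vacuum, just as $n>1$ is the sense in which water and glass are optically denser.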
2017-08-20 08:49:52
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.41865020990371704, "perplexity": 3011.2407629570894}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886106358.80/warc/CC-MAIN-20170820073631-20170820093631-00479.warc.gz"}
https://codereview.stackexchange.com/questions/25009/format-a-timespan-with-years
# Format A TimeSpan With Years

I have a class with 2 date properties: FirstDay and LastDay. LastDay is nullable. I would like to generate a string in the format of "x year(s) y day(s)". If the total years are less than 1, I would like to omit the year section. If the total days are less than 1, I would like to omit the day section. If either years or days are 0, they should say "day/year", rather than "days/years" respectively. Examples:

2.2 years: "2 years 73 days"
1.002738 years: "1 year 1 day"
0.2 years: "73 days"
2 years: "2 years"

What I have works, but it is long:

    private const decimal DaysInAYear = 365.242M;

    public string LengthInYearsAndDays
    {
        get
        {
            var lastDay = this.LastDay ?? DateTime.Today;
            var lengthValue = lastDay - this.FirstDay;
            var builder = new StringBuilder();

            var totalDays = (decimal)lengthValue.TotalDays;
            var totalYears = totalDays / DaysInAYear;
            var years = (int)Math.Floor(totalYears);
            totalDays -= (years * DaysInAYear);
            var days = (int)Math.Floor(totalDays);

            Func<int, string> sIfPlural = value => value > 1 ? "s" : string.Empty;

            if (years > 0)
            {
                builder.AppendFormat(
                    CultureInfo.InvariantCulture,
                    "{0} year{1}",
                    years,
                    sIfPlural(years));
                if (days > 0)
                {
                    builder.Append(" ");
                }
            }

            if (days > 0)
            {
                builder.AppendFormat(
                    CultureInfo.InvariantCulture,
                    "{0} day{1}",
                    days,
                    sIfPlural(days));
            }

            var length = builder.ToString();
            return length;
        }
    }

Is there a more concise way of doing this (but still readable)?

• Is there a reason why you are using a solar year (365.242 days) vs a calendar year (365 or 366 days)? Not going to make a huge difference, but it just looks odd since you are talking about days and calendar years but using a solar year as the denominator in the equation. – psubsee2003 Apr 11 '13 at 20:51
• You might want to check out this answer. – Jeff Vanzella Apr 11 '13 at 20:58

Overall, your code doesn't look bad. You have good use of white space and indentation. I found your variable names to be a little confusing (what's the difference between totalDays and days? Without actually digging into the code, it's not obvious). I like that you are using StringBuilder for concatenating strings; that is a good habit to get into.

I think you are doing too much manipulation on the years and days. I was able to take your 5 lines of code to calculate the years and days, and make it 3:

    var totalDays = (decimal)Math.Floor(lengthValue.TotalDays);
    var years = (int)(totalDays / DaysInAYear);
    var days = (int)(totalDays % DaysInAYear);

Years will be totalDays divided by the number of days in a year. Days will be the remainder of the same division.

I would move the pluralize out into its own method. There is no need to use an anonymous method in this instance.

You are repeating yourself when you are creating the string. If you look closely, the code is almost identical. The differences are easily passed in as variables. The way I corrected this is to create a method called CreateWords. This method returns the formatted string from variables passed in:

    private string CreateWords(int value, string measure)
    {
        if (value == 0) return string.Empty;
        return string.Format("{0} {1}{2}", value, measure, PluralSuffix(value));
    }

    private string PluralSuffix(int value)
    {
        return value > 1 ? "s" : string.Empty;
    }

Your main method would then call:

    builder.Append(CreateWords(years, "year"));
    builder.Append(CalculateSpaceCharacter(days));
    builder.Append(CreateWords(days, "day"));

where CalculateSpaceCharacter would look like:

    private static string CalculateSpaceCharacter(int value)
    {
        return value > 0 ? " " : string.Empty;
    }

And finally, there is no need to assign the length variable at the end. Just return builder.ToString().
2020-04-07 18:13:30
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.1801670640707016, "perplexity": 3087.5140613516687}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585371803248.90/warc/CC-MAIN-20200407152449-20200407182949-00264.warc.gz"}
https://stacks.math.columbia.edu/tag/05DM
Lemma 38.8.4. Let $R$ be a ring. Let $I \subset R$ be an ideal. Let $R \to S$ be a ring map, and $N$ an $S$-module. Assume

1. $R$ is a Noetherian ring,
2. $S$ is a Noetherian ring,
3. $N$ is a finite $S$-module,
4. $N$ is flat over $R$, and
5. for any prime $\mathfrak q \subset S$ which is an associated prime of $N \otimes _ R \kappa (\mathfrak p)$ where $\mathfrak p = R \cap \mathfrak q$ we have $IS + \mathfrak q \not= S$.

Then the map $N \to N^\wedge$ of $N$ into the $I$-adic completion of $N$ is universally injective as a map of $R$-modules.

Proof. This follows from Lemma 38.8.3 because Algebra, Lemma 10.65.5 and Remark 10.65.6 guarantee that the set of associated primes of tensor products $N \otimes _ R Q$ are contained in the set of associated primes of the modules $N \otimes _ R \kappa (\mathfrak p)$. $\square$
2023-03-25 11:08:51
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 2, "x-ck12": 0, "texerror": 0, "math_score": 0.9717052578926086, "perplexity": 187.6091389951226}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296945323.37/warc/CC-MAIN-20230325095252-20230325125252-00341.warc.gz"}
https://prove-me-wrong.com/research/definitions-and-notation-for-lattices/
# Definitions and notation for lattices

Recall that a lattice $L$ in $\mathbb{R}^{d}$ is the $\mathbb{Z}$-span of a basis $\left\{ v_{1},...,v_{d}\right\}$ of $\mathbb{R}^{d}$. Equivalently, letting $g$ be the matrix with rows $v_{i}$, so that $g\in\mathrm{GL}_{d}\left(\mathbb{R}\right)$, the lattice $L$ is $\mathbb{Z}^{d}\cdot g$. Given $g,h\in\mathrm{GL}_{d}\left(\mathbb{R}\right)$, it is easily seen that we have the equality of lattices $\mathbb{Z}^{d}\cdot g=\mathbb{Z}^{d}\cdot h$ if and only if $g\in\mathrm{GL}_{d} \left(\mathbb{Z}\right) \cdot h$. Thus, we can parametrize the space of all lattices by $\mathrm{GL}_{d}\left(\mathbb{Z}\right) \backslash\mathrm{GL}_{d} \left(\mathbb{R}\right)$.

We say that two lattices $L_{1},L_{2}$ are homothetic, and write $L_1 \sim L_2$, if there exists some $0\neq c\in\mathbb{R}$ such that $L_{1}=cL_{2}$. Since we usually do not consider homothetic lattices as different, we would like to have a smaller space that does not distinguish between such lattices. We consider two approaches: one takes the quotient space, and the other takes a representative from each homothetic class.

#### Quotient space:

Given $g,h\in\mathrm{GL}_{d}\left(\mathbb{R}\right)$ such that $g=ch$ for some $0\neq c\in\mathbb{R}$, we get that $\mathbb{Z}^{d}\cdot g\sim\mathbb{Z}^{d}\cdot h$; thus to represent a class of homothetic lattices we may consider elements in $\mathrm{PGL}_{d}\left(\mathbb{R}\right)$ instead of $\mathrm{GL}_{d}\left(\mathbb{R}\right)$. As in the previous case, two such elements $g,h$ define the same lattice class if and only if $g\in\mathrm{PGL}_{d} \left(\mathbb{Z}\right) \cdot h$; thus we can parametrize the classes of lattices up to homothety by $\mathrm{PGL}_{d}\left(\mathbb{Z}\right) \backslash\mathrm{PGL}_{d} \left(\mathbb{R}\right)$.

#### Set of representatives - unimodular lattices:

Let $L=\mathbb{Z}^{d}\cdot g \leq\mathbb{R}^{d}$ be a lattice for some $g\in \mathrm{GL}_d(\mathbb{R})$. The covolume of the lattice is defined to be $\mathrm{covol} \left( L \right) :=\mathrm{vol}\left( \mathbb{R}^{d} / L \right)$, which is the volume of a fundamental domain of $L$ in $\mathbb{R}^{d}$ and is equal to $\left|\det\left(g\right)\right|$. We call a lattice unimodular if its covolume is one, or equivalently if it can be written as $\mathbb{Z}^{d}\cdot g$ for some $g\in\mathrm{SL}_{d} \left(\mathbb{R}\right)$. Given a $d$-dimensional lattice $L$ of covolume $c$, we obtain that $\frac{1}{\sqrt[d]{c}}L$ is a unimodular lattice homothetic to $L$ (note that scaling a $d$-dimensional lattice by $a>0$ multiplies its covolume by $a^{d}$). Thus, every homothetic class contains a unimodular lattice, which is easily seen to be unique. As in the previous cases, we can parametrize the space of $d$-dimensional unimodular lattices by $\mathrm{SL}_{d}\left(\mathbb{Z}\right) \backslash\mathrm{SL}_{d} \left(\mathbb{R}\right)$, and by the argument above $\mathrm{SL}_{d}\left(\mathbb{Z}\right)\backslash\mathrm{SL}_{d}\left(\mathbb{R}\right)\cong\mathrm{PGL}_{d}\left(\mathbb{Z}\right)\backslash\mathrm{PGL}_{d}\left(\mathbb{R}\right)$ under the natural map $\mathrm{SL}_{d}\left(\mathbb{Z}\right)\cdot g\mapsto\mathrm{PGL}_{d}\left(\mathbb{Z}\right)\cdot g$ for $g\in\mathrm{SL}_{d}\left(\mathbb{R}\right)$. We shall denote this space by $X_{d}$.

Remark: The argument that the map above is surjective was due to the fact that every positive element of $\mathbb{R}$ has a $d$-th root. When a similar construction is done over other fields, e.g. $p$-adic fields, this is no longer the case.
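As a quick numerical illustration of the normalization (with made-up numbers): for $g=\mathrm{diag}(2,3)\in\mathrm{GL}_{2}(\mathbb{R})$, the lattice $L=\mathbb{Z}^{2}\cdot g$ has $\mathrm{covol}(L)=|\det g|=6$, and rescaling by $c^{-1/d}=6^{-1/2}$ gives

$$\mathrm{covol}\left(6^{-1/2}L\right)=\left(6^{-1/2}\right)^{2}\cdot 6=1,$$

so $6^{-1/2}L$ is the unique unimodular lattice homothetic to $L$.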
In each of the groups $\mathrm{GL}_{d}\left(\mathbb{R}\right) ,\;\mathrm{PGL}_{d} \left(\mathbb{R}\right)$ and $\mathrm{SL}_{d}\left(\mathbb{R}\right)$ we have the respective diagonal group with nonnegative entries. Namely $A_{full} :=\left\{ diag\left(a_{1},...,a_{d}\right)\;\mid\;a_{i}>0\right\} \leq\mathrm{GL}_{d}\left(\mathbb{R}\right)$ $\overline{A}_{full} :=A_{full}/\mathbb{R}\leq\mathrm{PGL}_{d} \left(\mathbb{R}\right)$ $A :=A_{full}\cap\mathrm{SL}_{d}\left(\mathbb{R}\right).$ For $\bar{t}\in\mathbb{R}^{d}$ we denote $a\left(\bar{t}\right)= diag\left(e^{t_{1}},...,e^{t_{d}}\right) \in A_{full}$, and we note that the map $\bar{t}\mapsto a\left(\bar{t}\right)$ defines an isomorphism $A_{full}\cong\mathbb{R}^{d}$. The restriction to $\mathbb{R}_{0}^{d}=\left\{ \left(t_{1},...,t_{d}\right)\in\mathbb{R}^{d}\;\mid\;\sum_{1}^{d}t_{i}=0\right\}$ induces an isomorphism $A\cong\mathbb{R}_{0}^{d}$. Finally the natural map $A\to A_{full}\to\overline{A}_{full}$ is an isomorphism. We will also use the isomorphism $\mathbb{R}^{d-1}\cong\overline{A}_{full}$ defined by $\bar{t}\mapsto\left[a\left(1,t_{1},...,t_{d-1}\right)\right]$ for $\bar{t}\in\mathbb{R}^{d-1}$. We shall be interested in the diagonal orbits in their respective spaces of lattice. Letting $\pi_{1}:\mathrm{GL}_{d}\left(\mathbb{Z}\right) \backslash\mathrm{GL}_{d} \left(\mathbb{R}\right)\to\mathrm{SL}_{d}\left(\mathbb{Z}\right)\backslash\mathrm{SL}_{d}\left(\mathbb{R}\right)$ be the normalization to covolume 1 map and $\pi:\mathrm{GL}_{d}\left(\mathbb{Z}\right)\backslash\mathrm{GL}_{d}\left(\mathbb{R}\right)\to\mathrm{PGL}_{d}\left(\mathbb{Z}\right)\backslash\mathrm{PGL}_{d}\left(\mathbb{R}\right)$ be the quotient map, it is easy to see that for any lattice $x\in\mathrm{GL}_{d} \left(\mathbb{Z}\right)\backslash\mathrm{GL}_{d} \left(\mathbb{R}\right)$ we have $\pi_{1}\left(xA_{full}\right) =\pi_{1}\left(x\right)A$ $\pi\left(xA_{full}\right) =\pi\left(x\right)\bar{A}_{full}$.
2023-02-07 20:28:06
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 69, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9828814268112183, "perplexity": 86.88806153588217}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500641.25/warc/CC-MAIN-20230207201702-20230207231702-00391.warc.gz"}
https://hal.science/hal-00700476
Homotopical rigidity of polygonal billiards

Jozef Bobok, Serge Troubetzkoy

Journal article, Topology and its Applications, 2014

Abstract: Consider two $k$-gons $P$ and $Q$. We say that the billiard flows in $P$ and $Q$ are homotopically equivalent if the set of conjugacy classes in the fundamental group of $P$ which contain a periodic billiard orbit agrees with the analogous set for $Q$. We study this equivalence relationship and compare it to the equivalence relations, order equivalence and code equivalence, introduced in \cite{BT1,BT2}. In particular we show if $P$ is a rational polygon, and $Q$ is homotopically equivalent to $P$, then $P$ and $Q$ are similar, or affinely similar if all sides of $P$ are vertical and horizontal.

hal-00700476, version 1 (23-05-2012)

Cite: Jozef Bobok, Serge Troubetzkoy. Homotopical rigidity of polygonal billiards. Topology and its Applications, 2014, 173, pp. 308-324. ⟨10.1016/j.topol.2014.06.003⟩. ⟨hal-00700476⟩
2023-03-26 19:55:07
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4807559549808502, "perplexity": 1556.7258608468283}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296946445.46/warc/CC-MAIN-20230326173112-20230326203112-00716.warc.gz"}
https://www.physicsforums.com/threads/exponent-error-question.391710/
# Exponent error question

1. Apr 2, 2010

### nhrock3

I need to calculate $e^{-0.5}$. Why does the solution develop $e^{-x}$ and put in 0.5, and not $e^{+x}$ putting in -0.5?

2. Apr 2, 2010

### HallsofIvy

I have no idea what you are talking about. What do you mean by "develops $e^{-x}$"? Writing as a Taylor's series? Approximating by the tangent line? I suspect that both the method your book gives and your method would give the same answer. Have you tried it?

3. Apr 2, 2010

### nhrock3

Yes, developing in Taylor series, but in the first we have a Leibniz (alternating) series and in the other not, so it's not the same. Why are they not the same?

4. Apr 2, 2010

### HallsofIvy

If $f(x)= e^{-x}$ then f(0)= 1, $f'= -e^{-x}$ so f'(0)= -1, $f''= e^{-x}$ so f''(0)= 1, etc. The "nth" derivative, evaluated at x= 0, is 1 if n is even, -1 if n is odd. The Taylor's series, about x= 0, for $e^{-x}$ is

$$\sum_{n=0}^\infty \frac{(-1)^n}{n!}x^n.$$

In particular,

$$e^{-0.5}= \sum_{n=0}^\infty \frac{(-1)^n}{n!}(0.5)^n.$$

The usual Taylor's series for $e^x$ is, of course,

$$\sum_{n=0}^\infty \frac{1}{n!}x^n$$

and now

$$e^{-0.5}= \sum_{n=0}^\infty \frac{1}{n!}(-0.5)^n= \sum_{n=0}^\infty \frac{1}{n!}(-1)^n(0.5)^n = \sum_{n=0}^\infty \frac{(-1)^n}{n!}(0.5)^n.$$

They are exactly the same.
2018-03-20 09:46:20
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6694249510765076, "perplexity": 2273.423253652686}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257647327.52/warc/CC-MAIN-20180320091830-20180320111830-00719.warc.gz"}
https://stats.stackexchange.com/questions/26144/how-to-get-real-valued-continous-output-from-neural-network/26496
# How to get real-valued continous output from Neural Network? In most of the examples I've seen so far of neural networks, the network is used for classification and the nodes are transformed with a sigmoid function . However, I would like to use a neural network to output a continuous real value (realistically the output would usually be in the range of -5 to +5). My questions are: 1. Should I still scale the input features using feature scaling? What range? 2. What transformation function should I use in place of the sigmoid? I'm looking to initially implement it PyBrain which describes these layer types. So I'm thinking that I should have 3 layers to start (an input, hidden, and output layer) that are all linear layers? Is that a reasonable way? Or alternatively could I "stretch" the sigmoid function over the range -5 to 5? • Sure you can use a sigmoid $[-\infty, \infty] \mapsto [-5, 5]$. E.g. start from the logistic function, multiply by 10, subtract 5... Apr 10 '12 at 13:13 • Is there a particular reason you're avoiding using two hidden layers? That would seem to be the easiest way to accomplish getting real-valued continuous output from a neural network. "Any function can be approximated to arbitrary accuracy by a network with two hidden layers" (mentioned in notes from the Mitchell machine learning text slide 26: cs.cmu.edu/afs/cs.cmu.edu/project/theo-20/www/mlbook/ch4.pdf ) Apr 16 '12 at 1:45 • @ChrisSimokat: No, but most of what I have read so far suggests a single hidden layer as a reasonable starting point. Can a single hidden layer network not approximate any function? – User Apr 17 '12 at 5:30 • @ChrisSimokat: Maybe I'm missing something but I thought single hidden layer does not equal "single layer perceptron", no? – User Apr 18 '12 at 20:58 • No you're not missing anything I just apparently wasn't reading closely enough sorry about that. Apr 19 '12 at 21:47 1. Should I still scale the input features using feature scaling? What range? Scaling does not make anything worse. Read this answer from Sarle's neural network FAQ: Subject: Should I normalize/standardize/rescale the data? . 2. What transformation function should I use in place of the sigmoid? You could use logistic sigmoid or tanh as activation function. That doesn't matter. You don't have to change the learning algorithm. You just have to scale the outputs of your training set down to the range of the output layer activation function ($[0,1]$ or $[-1,1]$) and when you trained your network, you have to scale the output of your network to $[-5,5]$. You really don't have to change anything else. • What is the correct way to scale the neural network output to the range [-5,5]? – User Apr 15 '12 at 11:57 • To scale element $e \in [a,b]$ to an interval $[c,d]$ you have to calculate $\frac{e-a}{b-a} \cdot (d-c)+c$. – alfa Apr 15 '12 at 15:01 • But since sigmoid is non-linear, with uniform distribution sampling the value of sigmoid we would probably get something close to 1 or close to 0. Which means we have to learn our network to pick values in the middle more carefully. Is sigmoid+scaling really a good choice to go for? May 23 '18 at 18:57 Disclaimer: the approach presented is not feasible for continuous values, but I do believe bears some weight in decision making for the project Smarty77 brings up a good point about utilizing a rescaled sigmoid function. Inherently, the sigmoid function produces a probability, which describes a sampling success rate (ie 95 out of 100 photos with these features are successfully 'dog'). 
The final outcome described is a binary one, and the training, using 'binary cross-entropy', describes a process of separating diametrically opposed outcomes, which inherently discourages results in the middle range. The continuum of the output is merely there for scaling based on the number of samples (i.e., a result of 0.9761 means that 9761 out of 10000 samples displaying those or similar traits are 'dog'), but each result itself must still be considered binary and not arbitrarily granular. As such, it should not be mistaken for, and applied as one would, real numbers, and may not be applicable here. Though I am not sure of the utilization of the network, I would normalize the output vector w.r.t. itself. This can be done with softmax. This will also require there to be 11 linear outputs (bins) from the network, one for each class (-5 to +5). It will provide an assurance value for any one 'bin' being the correct answer. This architecture would be trainable with one-hot encoding, with the 1 indicating the correct bin. The result is interpretable then in a number of ways, like a greedy strategy or probabilistic sampling. However, to recast it into a continuous variable, the assuredness of each index can be used as a weight to place a marker on a number line (similar to the behavior of the sigmoid unit), but this also highlights the primary issue: if the network is fairly certain the result is -2 or +3, but absolutely certain that it is not anything else, is +1 a viable result? Thank you for your consideration. Good luck on your project.
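As a minimal runnable sketch of the rescaling recipe from the accepted answer and its comments (illustrative numbers only; the network itself is out of scope here):

```python
def rescale(e, a, b, c, d):
    # Affinely map e from [a, b] to [c, d]: (e - a) / (b - a) * (d - c) + c
    return (e - a) / (b - a) * (d - c) + c

# Squash targets in [-5, 5] into tanh's output range [-1, 1] before training:
targets = [-5.0, -2.5, 0.0, 4.2]
scaled_targets = [rescale(t, -5.0, 5.0, -1.0, 1.0) for t in targets]

# Stretch raw activations in [-1, 1] back to [-5, 5] after training:
outputs = [-1.0, -0.5, 0.0, 0.84]
print([rescale(o, -1.0, 1.0, -5.0, 5.0) for o in outputs])  # [-5.0, -2.5, 0.0, 4.2]
```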
2021-09-21 12:20:29
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5800748467445374, "perplexity": 634.4401544199859}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057202.68/warc/CC-MAIN-20210921101319-20210921131319-00431.warc.gz"}
http://repository.ias.ac.in/32047/
# Optimal inverse of a matrix Mitra, Sujit Kumar (1975) Optimal inverse of a matrix Sankhya - Series A, 37 (4). pp. 550-563. ISSN 0581-572x Full text not available from this repository. ## Abstract An optimal approximate solution $(x)$ of the possibly inconsistent equation $Ax=y$ minimizes the norm of $\left( \begin{array}{c} Ax-y \\ x \end{array} \right)$ considered as a vector in an appropriate product space. Such a solution is computed as $x=Gy$ by an optimal inverse $G$ of $A$. This definition generalizes the earlier work of Foster (1961). Properties of optimal inverses are studied and some applications are discussed. Item Type: Article Copyright of this article belongs to Indian Statistical Institute. 32047 30 Mar 2011 12:57 30 Mar 2011 12:57 Repository Staff Only: item control page
2021-04-22 17:39:39
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8810727596282959, "perplexity": 1793.3146143490721}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618039594341.91/warc/CC-MAIN-20210422160833-20210422190833-00095.warc.gz"}
https://www.gradesaver.com/textbooks/math/algebra/elementary-and-intermediate-algebra-concepts-and-applications-6th-edition/chapter-1-introduction-to-algebraic-expressions-study-summary-practice-exercises-page-73/9
# Chapter 1 - Introduction to Algebraic Expressions - Study Summary - Practice Exercises - Page 73: 9

$84=2^2\cdot3\cdot7$

#### Work Step by Step

Repeatedly factoring the smallest prime factor from the composite factor, the factors of $84$ are $$\begin{array}{l} 84=2(42) \\ 84=2(2)(21) \\ 84=2(2)(3)(7). \end{array}$$ Hence, the prime factorization of $84$ is $84=2^2\cdot3\cdot7$.
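The same repeated smallest-prime-factor procedure, written as a short Python routine (an illustrative addition, not part of the textbook exercise):

```python
def prime_factorization(n: int) -> list[int]:
    """Repeatedly strip off the smallest prime factor, as in the steps above."""
    factors, p = [], 2
    while p * p <= n:
        while n % p == 0:
            factors.append(p)
            n //= p
        p += 1
    if n > 1:                 # whatever remains is itself prime
        factors.append(n)
    return factors

print(prime_factorization(84))   # [2, 2, 3, 7], i.e. 84 = 2^2 * 3 * 7
```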
http://mathhelpforum.com/advanced-statistics/128814-solved-beta-function.html
# Thread: [SOLVED] beta function

1. ## [SOLVED] beta function

Hi, I'm starting with $X_1, X_2, \ldots$ iid $\sim U(0,1)$ and I'm trying to find the density of $S_N = X_{k:n} - X_{k-2:n}$. I look at the joint pdf, then do a Jacobian transformation and get very close to the expression of a beta function, but it's not quite it (though it should be): $\frac{n!}{(k-3)!(n-k)!}\,(y-z)^{k-3}\,z\,(1-y)^{n-k}$ Any suggestions on how I can get it into the form of a beta function, or where I may have gone awry? Thanks!

2. I'm having trouble with the subscripts of $S_N$. Are these two order stats that differ by 2?

3. ## order statistics subscripts

Hi, yes, $k:n$ is the kth order statistic out of n, and $k-2:n$ the (k-2)th order statistic out of n. The final answer should depend on 2, not on k. I guess it will integrate out once I get the right expression. Thanks!

4. The joint PDF should be easy. Show me the 2-2 transformation and the other rv before you integrate it out. I get $\frac{n!}{(k-3)!(n-k)!}\,z^{k-3}\,(y-z)\,(1-y)^{n-k}\,I(0<z<y<1)$ for the joint pdf of the 2 order stats.

5. ## transformation

Matheagle, thanks. My 2-2 transformation ends up looking very similar. I set one variable (I'll rename them from my original post so we don't run into confusion with the variables that you use) to be $s = X_{(k)} - X_{(k-2)}$ and $t = X_{(k)}$. I then get that $X_{(k-2)} = t - s$ and $X_{(k)} = t$. The Jacobian turns out to be equal to 1 in that transformation, and I plug in the new variables and get $\frac{n!}{(k-3)!(n-k)!}\,(t-s)^{k-3}\,s\,(1-t)^{n-k}$ I'm not sure how to manipulate the variables to get it into the form of a beta eventually (before or after integration). Thanks!

6. ## integration

Hi, so I'm thinking that I can rearrange it the following way and then try to integrate it over t: $\frac{n!\,s}{(k-3)!(n-k)!}\int (t-s)^{k-3}(1-t)^{n-k}\,dt$ where t goes from s to 1 (?). But I don't seem to have a clue on how to come up with a way to integrate this... Thanks again!

7. The density is correct. The support is $0<t-s<t<1$; you need to draw that and figure out the bounds of integration. You have $s<t$, $s>0$ and $t<1$, so to integrate out t you have $s<t<1$: $f_S(s)=\int_s^1 f(s,t)\,dt$

8. Thanks yet again. Sorry to keep going, but I am not sure how to integrate it, since I have "s" in each of the parentheses and don't see being able to use $\int u\,dv$ or anything along those lines.

9. You owe me. I just did it with $s=y-z$ BUT $t=z$. There's a basic calculus sub I did to make it a beta.

10. ## basic calculus

matheagle, thanks! I do owe you. I didn't see the forest for the trees (or maybe I banged my head against the wall one too many times). I found another reference as well (Order Statistics, 3rd ed., David & Nagaraja) - they use $t = v(1-s)$ and integrate over v. I'll still have to mull it over for a while on how I can get it to be more intuitive. (Still don't quite get the beta with the easier substitution, since the s outside the integral comes back into the integral for me when I do this.) Thanks! Order statistics - Google Books

11. You can read this as well: SpringerLink - Journal Article. I wanted you to try that other substitution and see if you obtained the same result I did. After dinner I may post my work. BACK to my variables: we have $\frac{n!}{(k-3)!(n-k)!}\,z^{k-3}\,(y-z)\,(1-y)^{n-k}\,I(0<z<y<1)$ Let $s=y-z$ but now $t=z$, the smaller of the two order stats. The Jacobian should be one again; that doesn't need checking in these cases. The density of these two is $\frac{n!}{(k-3)!(n-k)!}\,t^{k-3}\,s\,(1-s-t)^{n-k}\,I(0<t<s+t<1)$ That region becomes $t>0$, $s>0$ and $s+t<1$. So the density of s becomes $\frac{n!\,s}{(k-3)!(n-k)!}\int_0^{1-s} t^{k-3}(1-s-t)^{n-k}\,dt$ NOW use calc one to make the bounds go from 0 to 1: let $w=\frac{t}{1-s}$, which gives you $t=(1-s)w$ and $dt=(1-s)\,dw$, and all I see are beta constants now. It looks like your reference did what I did. I have David's book somewhere in my office, but I usually derive these things from scratch. I can do most things faster by myself than looking it up. Usually I can't find it, or it may never have been done before. So I just do it.

12. Thanks!!! And - I like your way better than what I found. ;-)

13. ## Got it.

Thanks!! And - I like your substitutions better than what I found. ;-)

14. You're welcome, this is what I do for a living (if you saw that paper). I was trying to help another person this morning and he became so rude. I don't have to do this for free. I gave him plenty of help and I won't ever again. All I expect is a thank you.

15. Saw the preview of the paper on Springer - will definitely try to get a copy from the library and look at it more closely now. ;-)
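Carrying the substitution through, the beta constants collapse to $f_S(s) = n(n-1)\,s\,(1-s)^{n-2}$, i.e. the spacing is Beta$(2,\,n-1)$ — depending on 2 and not on k, as anticipated in post 3. A quick Monte Carlo check of that claim (my own illustration, not from the thread):

```python
import numpy as np

rng = np.random.default_rng(1)
n, k, reps = 10, 6, 200_000

# Spacing of two uniform order statistics that differ by 2.
u = np.sort(rng.uniform(size=(reps, n)), axis=1)
s = u[:, k - 1] - u[:, k - 3]          # X_{k:n} - X_{k-2:n}, zero-based columns

# Beta(2, n-1) has mean 2/(n+1) and second moment 6/((n+1)(n+2)).
print(s.mean(), 2 / (n + 1))                      # both ~ 0.1818
print((s ** 2).mean(), 6 / ((n + 1) * (n + 2)))   # both ~ 0.0455
```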
http://www.sciencemadness.org/talk/viewthread.php?tid=10413&page=2#pid124985
Sciencemadness Discussion Board » Special topics » Energetic Materials » PETN synth Select A Forum Fundamentals   » Chemistry in General   » Organic Chemistry   » Reagents and Apparatus Acquisition   » Beginnings   » Responsible Practices   » Miscellaneous   » The Wiki Special topics   » Technochemistry   » Energetic Materials   » Biochemistry   » Radiochemistry   » Computational Models and Techniques   » Prepublication Non-chemistry   » Forum Matters   » Legal and Societal Issues Pages:  1  2    4  ..  6 Author: Subject: PETN synth Microtek International Hazard Posts: 571 Registered: 23-9-2002 Member Is Offline Mood: No Mood OK, this is the last OT post from me then: I titrated with NaOH soln at about 1 M IIRC. The NaOH concentration had been established precisely by titration against high purity benzoic acid. I calculated the accumulated error based on estimates of the individual errors and got a concentration of 99.5 +- 0.1% IIRC. I have an old Mettler-Toledo mechanical balance which is very accurate. It measures down to 0.0001 g with excellent repeatability and was mine for the taking, simply because the lab upgraded to digital versions. It enables me to do fairly accurate work at the small scale, where I prefer to be. Anyway, the product of the PETN synthesis using only 2.5 ml HNO3 for 1 g PE produced 2.250 g product after thorough neutralisation (in solution) and washing. Boomer Hazard to Others Posts: 190 Registered: 11-11-2005 Member Is Offline Mood: No Mood Did you get a melting point? This would give pointers to whether it contained lower nitrates. On the other hand, if it contained much trinitrate of lower molecular weight the yield would be over 100%.... 97% seems high anyway, I don't have numbers for PETN here, but I remember for NG even industrial 1000kg batches using 99% nitric with oleum get below 97% for NG. IIRC mostly because the spent acid dissolves 3%, though 2/3 of that *could* be recovered by solvent extraction, which is *not* done in normal practice. They want to get rid of the shit ASAP. You sure the solvent from neutralization was completely gone (i.e. vacuum desiccator)? I remember a batch of MHN containing considerable solvent even though it looked and felt dry. The much lower mp showed it. Not that I don't want you to hold that yield record - I am content to have beaten you with the RDX yield (additive threat). [Edited on by Boomer] Microtek International Hazard Posts: 571 Registered: 23-9-2002 Member Is Offline Mood: No Mood I didn't measure the melting point of the product as I don't have an actual melting point apparatus. So every time I need to do a measurement I have to come up with some improvised contraption. That said, I did place samples of different PETN batches on a thin glass plate which was then placed on a hotplate. The samples all melted practically simultaneously and very sharply. They also crystallized in a manner which suggested a high-purity sample. Regarding the RDX, my best yield was 74% of theory based on hexamine, but more impotantly (in my opinion) 0.31 g RDX per ml of HNO3. I don't quite recall what your numbers were.... -=HeX=- Hazard to Others Posts: 109 Registered: 18-4-2008 Location: Ireland Member Is Offline Mood: Precipitating Back on topic: this friday I will post a synth that I have used for PETN. I got it in a torrent and it is very good and contains good data on PETN. I got my pentaerythritol through good social engineering but the supplier no longer carries it. 
PETN Is a very safe and good explosive in my opinion but I prefer ETN for the simplicity of obtaining the precursors and synthesising it in uncontrolled conditions. I hate said a lot on ETN on roguesci but it is down. If you give a man a match he will be warm for a moment. Set him alight and he will be warm for the rest of his life. Engager National Hazard Posts: 295 Registered: 8-1-2006 Location: Moscow, Russia Member Is Offline Mood: Lagrangian Quote: Originally posted by Microtek OK, this is the last OT post from me then: I titrated with NaOH soln at about 1 M IIRC. The NaOH concentration had been established precisely by titration against high purity benzoic acid. I calculated the accumulated error based on estimates of the individual errors and got a concentration of 99.5 +- 0.1% IIRC. I have an old Mettler-Toledo mechanical balance which is very accurate. It measures down to 0.0001 g with excellent repeatability and was mine for the taking, simply because the lab upgraded to digital versions. It enables me to do fairly accurate work at the small scale, where I prefer to be. Anyway, the product of the PETN synthesis using only 2.5 ml HNO3 for 1 g PE produced 2.250 g product after thorough neutralisation (in solution) and washing. I've also tried nitration using 99.7% HNO3 (yellowish - some NOx exist), to get rid of nitrogen oxides i've added some small ammount of urea until acid is completely colorless. Temperature was carefully controlled to be below 15C, but at some moment (then about half of penthaerythrytol was added to acid) violent oxidation started and mixture was emidately poured into large ammount of ice cold water - no precipitate was obtained. Next attemt i've made was dilute (70%) nitric acid + conc. H2SO4 using proportions mentioned in this tread somethere above, temperature was controled by stream of cold water, all was fine until moment close to the end of penthaerythrytol addition, a was away for 15 sec to turn off lights in my bedroom (flask was sitting in cooling bath), again violent oxidation took place, many NOx evolved, temperature rised from ~12C then i was away, to 56C+ then i came back after ~15 seconds, reaction mixture turned into black oily liquid, with was emidately poured into ice water and discarded (nitric acid used was just 99.5% HNO3 diluted by water to 70%). Third attempt was made using same method as in second, but temperature control was to below ~10C and virgous stirring without stops, after all penthaerythrytol was added, and mixture is allowed to slowly heat to room temperature at ~20C violent oxidation was started, and mixture is also poured into large ammount of cold water, some NOx evolved and precipitate of PETN is obtained with ~30% yield. I have no idea how you mean to stop this violent oxidation reactions, and how you never encountered them. Somebody have any idea what is the problem here? Rosco Bodine Banned Posts: 6370 Registered: 29-9-2004 Member Is Offline Mood: analytical Yeah I have studied this nitration a bit and worked out a pretty optimal synthesis which is basically identical to a patented method. I am sure I posted about it, maybe it was on another forum, the E&W forum. I'll have to go back and check this to get the specifics. 
But as I recall the nitration is a very high yield reaction with high acid utilization efficiency using ~97% HNO3 alone, but it is also a very very exothermic nitration where excellent cooling and very slow addition of the pentaerythritol is required while good stirring and a narrow nitration temperature range is closely controlled by rate of addition. It can be nailed and reproduced because I have done it more than once with identical results and no complications. But it does require careful work. It was based on one of the old patent Nobel or DuPont processes I think. I do remember the nitration proceeds at a mild cool temperature absent any of the warming described with some other methods...which is a really bad idea IMO . Okay I found it, what you are looking for of course is Rosco's good old country recipe for PETN Hmmm I just went back and saw that Sickman posted this same link about eight months ago, so this puzzles me what could be the problem unless there is some variable which was not observed, and this accounts for the problem. Anyway it worked for me very well as described. [Edited on 15-12-2008 by Rosco Bodine] Engager National Hazard Posts: 295 Registered: 8-1-2006 Location: Moscow, Russia Member Is Offline Mood: Lagrangian Process with 70%HNO3 + conc. H2SO4 was repeated by another russian chemist using conc. HNO3 diluted to 70%. Same problems vere encountered, it was very surprising, because, man i'm talking about reported several completely successfull runs using this method, he also mentioned that he has no idea about what is the reason of problem. There is something terribly strange here, only difference from my case with ~15 sec away time, he was away for about 40 seconds. [Edited on 16-12-2008 by Engager] Rosco Bodine Banned Posts: 6370 Registered: 29-9-2004 Member Is Offline Mood: analytical An educated guess is that your problem is related to two factors...reaction temperature too low and addition not gradual enough to be in sync with the reaction rate, causing surging. You need a sufficient reaction temperature and good stirring so that there is a smooth reaction of added material without any delay or induction period, because the reaction is thermally driven but is also extremely exothermic, so a runaway is lurking there for any unreacted material, proceeding to react and cascading past the decomposition temperature. This is classic organic nitration stuff. The first thought is there is insufficient cooling so the reaction temperature is lowered which then makes the reaction sluggish for added material and aggravates the inclination for induction delay followed by runaway. So it may seem conterintuitive, but your baseline nitration temperature needs to be raised a little so that added precursor nitrates smoothly on addition as soon as it is introduced, without any delay which allows unreacting material to accumulate, and to then react in a self-accellerating fashion. And yeah the problem is aggravated with a more oxidizing nitration mixture using mixed acids. The straight nitric acid nitration method is really the best method for PETN. Try that five degree window temperature range I described for 18-23C using the 97% HNO3. The idea there is to sprinkle in the PE very slowly at 18C and at a rate to keep the reaction temperature in that range, allowing for a rise to 23C from the exotherm of each addition, allowing the exotherm to subside and making the next addition at the fall again to18C. 
If you have a screw feed addition funnel for solids, you can probably fine tune the rate of addition to flatten the oscillation of temperature a bit better than that 5 degrees. The exact optimum temperature is somewhere in that range, perhaps 20-22C, I am not certain. But the warmer you run it , while it goes faster, it is closer to the limit and has less headroom range for tolerating any surges in temperature. It has been a few years since I did this nitration, but I tend to make accurate notes, so those numbers are probably correct. [Edited on 17-12-2008 by Rosco Bodine] zajcek01 Harmless Posts: 11 Registered: 6-5-2007 Member Is Offline Mood: No Mood Here is the document Microtek vas talking about : http://www.2shared.com/file/4476387/87867b1c/petn2.html Page contains some ads and popups.... just ignore them and click: You have to enable macros in M$Excel in order for this document to work. how to read results: Area I - Formation of PETN with yield of 94-98% without formation of sulfoesters. Area II - Formation of PETN via sulfoesters. Area III - Area of low yields (10-50%) due to the high NOx production and oxidation processes Area IV - PETN not forming" The only problem with nitration of PE with mixed acid I had, was localized overheating and oxidation because concentration of water rose to 30% at that spots. Nitric acid that contains more than 30% of water is a powerful oxidizer. The solution of this problem was overcome with finely powdered and dried PE and mixing thoroughly during the process of nitration. [Edited on 17-12-2008 by zajcek01] Kontaktverfahren Harmless Posts: 1 Registered: 26-11-2008 Location: Hesse - Germany Member Is Offline Mood: Inert Mhh this does not work for me, no yield is shown, the diagramm does not change whatever I type and the buttons have no use. (Maybe its because I use Open Office) PS: Yes I have enabled macro-use \"Often a nice experiment is more worth than twenty formulas you have developed in brain!\" Free from Albert Einstein. In my opinion that fits to hobby-chemistry too. zajcek01 Harmless Posts: 11 Registered: 6-5-2007 Member Is Offline Mood: No Mood It does not work on OpenOffice.org It only works on Micro$oft office erik89 Harmless Posts: 3 Registered: 3-1-2009 Member Is Offline Mood: No Mood Hi! I´ve had some problems with my PE. This is what happend: First, I meassured up X mL of H2SO4 and Y mL of HNO3. These were pre-chilled and later mixed, and once again chilled to a temperature about -10 C. Then, I meassured up Z grams of Pentaerythritol. I started the addition of PE to the nitrating-mix. I added one gram at a time. The PE just simply DISSOLVED in the nitrating-mix. I added another gram. I still just dissolved. After a while, all of the pentaerythritol were in the mix and "dissolved" The mixture whas 100% CLEAR, which is shouldn´t bee. There should have been a white slurry, with some crystalls in it. The "liquid" was added to water, to precipitate the "PETN". Nothing precipitated. There is nothing wrong with the acids. (96% H2SO4 and 65% HNO3, and they´ve worked just fine with other synthesis) So, I think there is something wrong with my pentaerythritol. Is there anyway to test it´s purity? Have anyone of you guys had a similar problem? If so, how did you solve it? Could it have been any water in the PE, since it is hygroscopic? hissingnoise International Hazard Posts: 3939 Registered: 26-12-2002 Member Is Offline Mood: Pulverulescent! Dilute HNO3, IMO, is the real problem here. 
The amount of water absorbed by pentaerythritol is insignificant, considering your HNO3 contains 35%. Distill it carefully from twice its volume of H2SO4, reconcentrate the latter to ~98% and try again. Or better, add predried pentaerythritol to HNO3 of the highest density. If the substrate *is* pentaerythritol, the tetranitrate will form and will precipitate on drowning. The HNO3 should have no dissolved NO2. If it can't be removed by blowing dry air through, treat it with a gram of urea. [Edited on 10-1-2009 by hissingnoise] erik89 Harmless Posts: 3 Registered: 3-1-2009 Member Is Offline Mood: No Mood Quote: Ursprugligen inlagt av hissingnoise Dilute HNO3, IMO, is the real problem here. The amount of water absorbed by pentaerythritol is insignificant, considering your HNO3 contains 35%. Distill it carefully from twice its volume of H2SO4, reconcentrate the latter to ~98% and try again. Or better, add predried pentaerythritol to HNO3 of the highest density. If the substrate *is* pentaerythritol, the tetranitrate will form and will precipitate on drowning. Hi! Yes, I am aware of the fact that higher conc. the HNO3 has, the better the yield will be. But, here´s the prob. I have two different pentaerythritols, from two different suppliers. One of them, worked just fine, and I got an exellent yield from it. Unfortunatley, I´ve ran out from that one, and I have to use the PE from the other supplier, and his doesn´t work. My primary question is that if there is anyway to test the PE´s purity? hissingnoise International Hazard Posts: 3939 Registered: 26-12-2002 Member Is Offline Mood: Pulverulescent! Near anhydrous nitration is the best test I can think of, right now. Failure under those conditions will seriously call your pentaerythritol into question. Microtek International Hazard Posts: 571 Registered: 23-9-2002 Member Is Offline Mood: No Mood Maybe you could do some sort of elemental analysis by quantitatively analysing the products of the complete combustion. So, make an apparatus similar to a nitrometer, make a mix of your mystery PE and excess CuO and put it in there. Heat rapidly to ignition with some non-contaminating heat source in a stream of suitable gas (eg. nitrogen). You could pass the gas stream over an exactly known amount of a suitable dessicant to capture the water and in that way determine the amount of hydrogen in the sample. You could then bubble the gas stream through a Ca(OH)2 slurry (with an exactly known amount of hydroxide) to absorb CO2. Then weigh the mix of CaCO3 and Ca(OH)2 to establish how much carbon was in the sample. Using the assumption that your sample contains only carbon, hydrogen and oxygen, you then have the amount of oxygen too. A bit labor intensive, but if you don't have access to more sophisticated equipment..... Engager National Hazard Posts: 295 Registered: 8-1-2006 Location: Moscow, Russia Member Is Offline Mood: Lagrangian Quote: Originally posted by Engager Process with 70%HNO3 + conc. H2SO4 was repeated by another russian chemist using conc. HNO3 diluted to 70%. Same problems vere encountered, it was very surprising, because, man i'm talking about reported several completely successfull runs using this method, he also mentioned that he has no idea about what is the reason of problem. There is something terribly strange here, only difference from my case with ~15 sec away time, he was away for about 40 seconds. 
[Edited on 16-12-2008 by Engager] I'm finaly realized source of my problems with nitration of PE - it is purity of source pentaerythritol. Melting point of PE i was using in all failed nitration experiments is ~220C - that is much too low, indicating a lot of impurities are present. Oxidation problems completely disappeared then i was using pure PE from another chemical supplier. So if someone encounter severe oxidation problems, check out your pentaerythritol! [Edited on 1-7-2009 by Engager] User National Hazard Posts: 339 Registered: 7-11-2008 Location: Earth Member Is Offline Mood: Passionate Would it not be quite easy to purify the PE from a solvent? Btw the best yield i ever had was 89.52% of theoretical. This was done by using 65% nitric acid. 24.8ml 98% H2SO4 (boiled down battery acid) The acids were mixed and cooled to -5 degrees. The PE was added in three portion while keeping temp under 0. The vessel was taken out of the ice and slowly gained temp by the exothermic reaction. Very careful and slowly the temp was brought up to 30 degrees and maintained for 2 hours. Then the mixture was crashed into 500ml of almost freezing water. The next wash consisted out of 200ml 5% bicarbonate, letting it sit for a while. The PETN was then dried on a radiator on low heat. When almost dry, the PETN was added to 200ml of hot acetone and 1 gram of bicarbonate was added. The mixture was then crashed on 500ml of water with chunks of ice. And sat for a night in the freezer. (before i didn't do this and i noticed that my yield suffered, urbanski has a nice table on acetone/water/petn at different temps) Filtered and dried in an exicator. Yield: 89.52 % Thats as almost 21 grams Could be better though. [Edited on 2-7-2009 by User] What a fine day for chemistry this is. edmo Harmless Posts: 6 Registered: 16-7-2007 Member Is Offline Mood: Fiesty of course C4 via hexamine is easy. PETN is relevant because it's a LOT easier to initiate. So the original posters questions about PE synthesis are totally relevant. Rich_Insane National Hazard Posts: 368 Registered: 24-4-2009 Location: Portland, Oregon Member Is Offline Mood: alive Wow you guys can get FNA? Very difficult to come buy without vac dist. Pentaerythritol has nothing to do with erythritol btw. Pentaerythritol can actually be produced by fomraldehyde and acetalhyde if I am correct. I saw it for sale from Sigma for ~\$30 a kilogram. You could buy it from a middleman. hissingnoise International Hazard Posts: 3939 Registered: 26-12-2002 Member Is Offline Mood: Pulverulescent! If your KNO3 is dry and H2SO4 is ~98%, strong, fuming HNO3 will distill from it without using vacuum. . . If you're nitrating pentaerythritol the HNO3 should be decolourised first! Blowing dry air through it usually works; if it doesn't, just add a little urea. . . Some things need repeating! [Edited on 10-8-2009 by hissingnoise] phantasy Unregistered Posts: N/A Registered: N/A Member Is Offline Quote: Originally posted by User Very careful and slowly the temp was brought up to 30 degrees and maintained for 2 hours. 
[Edited on 2-7-2009 by User] peretherification of pentaerytrytol sulphate begin at 40 deg use 45c and 15-20 minutes to 93-96% yeild [Edited on 16-10-2009 by phantasy] chemoleo Biochemicus Energeticus 8-1-2010 at 18:18 chemoleo Biochemicus Energeticus 8-1-2010 at 18:21 chemoleo Biochemicus Energeticus Posts: 3005 Registered: 23-7-2003 Location: England Germany Member Is Offline Mood: crystalline The derailment of this thread on the matter of PETN that made it into the news can now be found in Whimsy, please continue there (while refraining from turning this into a political discussion). [Edited on 9-1-2010 by chemoleo] Never Stop to Begin, and Never Begin to Stop... Tolerance is good. But not with the intolerant! (Wilhelm Busch) NUKE Harmless Posts: 17 Registered: 21-2-2006 Location: Slovenia Member Is Offline Mood: Detonating with the highest order PETN synthesis I have employed this procedure with various 50-200g PE batches and I have never had a runaway or any problems with the product. I'll give example for nitration of 100g PE. Crude PETN was filtered using vacuum filtration apparatus and washed few times with distilled water. As much filtrate as possible was sucked out of the crude PETN. Crude PETN was then moved in a vessel and split in half. First half was moved into a clean 1L florence flat bottomed flask and 500mL of acetone was added. Around 20g of NaHCO3 was then added onto the same flask and half of the crude PETN. In the mean time 5L of cold distilled water with small ammount of NaHCO3 dissolved was prepared and kept close. Florence flask with acetone, crude PETN and NaHCO3 was put into the water bath on electrical heater and mixture was stirred until temperature of above 60°C was reached. Majority of PETN dissolves at that temperature only NaHCO3 and salts from neutralisation are left on the bottom. Clear layer is decanted into the NaHCO3 solution which is vigorously stirred during the decantation. Process is repeated with another half of crude PETN. My most recent attempts yielded 210g of recrystalized dry PETN. -=HeX=- Hazard to Others Posts: 109 Registered: 18-4-2008 Location: Ireland Member Is Offline Mood: Precipitating Nuke has a lot of experience with PETN... I recall seeing (via webcam) a kilo or two of the wonderful stuff in an ice cream tub! Actually, Nuke, you should post the pics of the funky NG here... Anyways... I have noted that PETN is rather sensitive to initiation by certain primaries moreso than others. For example, 50mg Pure Lead Azide was the least I could use to get reliable dets from well pressed PETN, wheras a mere 3mg or less of DPNA could be used with excellent results. However, seeing as they are old tests, I cannot be sure of 'how' well pressed them stuff was... If you give a man a match he will be warm for a moment. Set him alight and he will be warm for the rest of his life. Pages:  1  2    4  ..  6 Sciencemadness Discussion Board » Special topics » Energetic Materials » PETN synth Select A Forum Fundamentals   » Chemistry in General   » Organic Chemistry   » Reagents and Apparatus Acquisition   » Beginnings   » Responsible Practices   » Miscellaneous   » The Wiki Special topics   » Technochemistry   » Energetic Materials   » Biochemistry   » Radiochemistry   » Computational Models and Techniques   » Prepublication Non-chemistry   » Forum Matters   » Legal and Societal Issues
https://socratic.org/questions/how-do-you-find-an-equation-for-this-function-that-is-not-recursive
# How do you find an equation for this function that is not recursive?

## $f(n) = -2f(n-1) + (-1)^n(n-1)$

Thanks in advance!

Mar 30, 2018

$f(n) = (-1)^n\left(2^n(k+1) - n - 1\right)$

#### Explanation:

I will assume that this is a function of integers, since otherwise there are many possible variations.

Given: $f(n) = -2f(n-1) + (-1)^n(n-1)$

Let: $g(n) = (-1)^n f(n)$

Then:

$g(n) = (-1)^n f(n)$
$\phantom{g(n)} = (-1)^n\left(-2f(n-1) + (-1)^n(n-1)\right)$
$\phantom{g(n)} = (-1)^n\left(-2(-1)^{n-1}g(n-1) + (-1)^n(n-1)\right)$
$\phantom{g(n)} = 2g(n-1) + (n-1)$

Let: $k = g(0)$

Then $g(0)$, $g(1)$, $g(2)$, ... form a sequence: $k$, $2k$, $4k+1$, $8k+4$, $16k+11$, $32k+26$, ...

Putting $k = 0$, this sequence becomes: $0, 0, 1, 4, 11, 26, \ldots$

This has differences: $0, 1, 3, 7, 15, \ldots$ recognisable as $2^n - 1$.

Let us compare the previous sequence with $2^n$: $1, 2, 4, 8, 16, 32, \ldots$

A matching formula is: $2^n - n - 1$

So the general formula for $g(n)$ can be written: $g(n) = 2^n(k+1) - n - 1$

So the formula for $f(n)$ can be written: $f(n) = (-1)^n\left(2^n(k+1) - n - 1\right)$

Mar 31, 2018

See below.

#### Explanation:

This is a linear difference equation. It can be solved as $f(n) = f_h(n) + f_p(n)$ with

$f_h(n) + 2f_h(n-1) = 0$
$f_p(n) + 2f_p(n-1) = (-1)^n(n-1)$

To solve for $f_h(n)$ we set $f(n) = a^n$, and after substitution

$a^n + 2a^{n-1} = 0 \Rightarrow a = -2 \Rightarrow f_h(n) = (-1)^n 2^n$

For the particular solution we set $f_p(n) = c_n(-1)^n 2^n$, and after substitution

$c_n(-1)^n 2^n + 2c_{n-1}(-1)^{n-1}2^{n-1} = (-1)^n(n-1)$

or $c_n - c_{n-1} = (n-1)2^{-n}$

or $c_n = c_{n-1} + (n-1)2^{-n}$

or $c_n = c_0 + \sum_{k=1}^{n}(k-1)2^{-k}$

and finally

$f(n) = \left(c_0 + \sum_{k=1}^{n}(k-1)2^{-k}\right)(-1)^n 2^n$

This formulation can be simplified a lot, but we leave this task to the reader.
NOTE

$\sum_{k=1}^{n}(k-1)x^k = x^2\,\frac{d}{dx}\sum_{k=0}^{n-1}x^k = x^2\,\frac{d}{dx}\left(\frac{x^n-1}{x-1}\right)$

then

$\sum_{k=1}^{n}(k-1)x^k = \frac{x\left(x + \left(n(x-1)-x\right)x^n\right)}{(x-1)^2}$

now making $x = 2^{-1}$ we obtain

$\sum_{k=1}^{n}(k-1)2^{-k} = 1 - (n+1)2^{-n}$

Apr 1, 2018

$f(n) = (-1)^n\left[2^n(f(0)+1) - n - 1\right]$

#### Explanation:

The equation $f(n) = -2f(n-1) + (-1)^n(n-1)$ can be rewritten in the form

$f(n) + (-1)^n(n+1) = -2f(n-1) + (-1)^n\left\{(n-1) + (n+1)\right\}$
$\qquad = -2\left(f(n-1) + (-1)^{n-1}n\right)$

This shows that the function $F(n)$, defined by $F(n) \equiv f(n) + (-1)^n(n+1)$, obeys

$F(n) = -2F(n-1)$

The obvious solution for this equation is

$F(n) = (-2)^n F(0)$

Since $F(0) = f(0) + 1$, we have

$F(n) \equiv f(n) + (-1)^n(n+1) = (-2)^n\left(f(0)+1\right)$

and so

$f(n) = (-2)^n\left(f(0)+1\right) - (-1)^n(n+1)$
$\qquad = (-1)^n\left[2^n(f(0)+1) - n - 1\right]$
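All three answers agree on the same closed form. A quick numerical check of it against the recursion (a small illustrative script, not part of the original answers):

```python
def f_recursive(n, f0):
    """Evaluate f(n) directly from the recursion f(n) = -2 f(n-1) + (-1)^n (n-1)."""
    val = f0
    for m in range(1, n + 1):
        val = -2 * val + (-1) ** m * (m - 1)
    return val

def f_closed(n, f0):
    """Closed form derived above: f(n) = (-1)^n (2^n (f(0)+1) - n - 1)."""
    return (-1) ** n * (2 ** n * (f0 + 1) - n - 1)

assert all(f_recursive(n, f0) == f_closed(n, f0)
           for f0 in range(-3, 4) for n in range(20))
```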
https://www.physicsforums.com/threads/angular-momentum-operators.746121/
# Angular momentum operators

1. Mar 30, 2014

### Matterwave

Hi guys, this is a problem which is bothering me right now. The angular momentum operators (Lx, Ly, Lz), when expressed in spherical coordinates, consist of derivatives in $\theta$ and $\phi$. This would suggest that there are, at any point in space, only two linearly independent operators (since there are derivatives in only two directions). When we talk about spin 1/2, which is still a type of angular momentum, we often represent the spin angular momentum operators (Sx, Sy, Sz) in terms of Pauli matrices, which are three linearly independent matrices. Maybe I'm just being dense, but why the disparity in the dimensionality of angular momenta? When I think about it, it's often stated that "angular momenta are the 'generators' of rotations". In which case, you'd think there would be 3 operators, since the space of rotations in 3-dimensional space is of dimension 3 (e.g. given by the three Euler angles). Also, just thinking about describing angular momenta, you'd think there would be 3 numbers necessary (e.g. 2 numbers specifying the axis of rotation, and 1 number specifying the rate of rotation). I'm getting really confused. =[

2. Mar 30, 2014

### The_Duck

This conclusion is incorrect; there are three linearly independent angular momentum operators. It sounds like you have the spherical representations of the angular momentum operators in front of you; you should convince yourself that $L_x$, $L_y$, and $L_z$ are indeed linearly independent, contrary to your intuition here. Right, this is the correct way of thinking about it. Consider the operator $\partial/\partial\phi$. This operator is associated with rotations around the z axis. You can think of this derivative as "the rate of change of a function as I rotate around the z axis." There are similar operators for rotations around any axis. In principle, all of these rotation operators can be expressed in terms of $\theta$ and $\phi$, but rotation operators around axes other than z look really messy in these coordinates. It can be more convenient to write the angular momentum operators as follows:

$$L_x = i \hbar \left(y \frac{\partial}{\partial z} - z \frac{\partial}{\partial y}\right)$$
$$L_y = i \hbar \left(z \frac{\partial}{\partial x} - x \frac{\partial}{\partial z}\right)$$
$$L_z = i \hbar \left(x \frac{\partial}{\partial y} - y \frac{\partial}{\partial x}\right)$$

(I might be off by an overall sign from the usual conventions.) Note that these three operators are clearly linearly independent: none can be expressed as a linear combination of the other two. A rotation operator around any axis can be written as a linear combination of these three basis elements. Yes. Often an easier set of numbers to work with is the components of the angular momentum around the x, y, and z axes.

3. Mar 30, 2014

### Matterwave

The representations of the angular momentum operators I have in front of me right now are:

$$L_x=i\hbar(\sin\phi\frac{\partial}{\partial\theta}+\cot\theta\cos\phi \frac{\partial}{\partial\phi})$$
$$L_y=i\hbar(-\cos\phi\frac{\partial}{\partial\theta}+\cot\theta\sin\phi \frac{\partial}{\partial\phi})$$
$$L_z=-i\hbar\frac{\partial}{\partial\phi}$$

Since only $\frac{\partial}{\partial\theta}$ and $\frac{\partial}{\partial\phi}$ appear in these 3 operators, can I not express one of them as a sum of the other two? Specifically, it seems that:

$$L_z=\frac{1}{\cot\theta}(\cos\phi L_x+\sin\phi L_y)$$

4.
Mar 31, 2014

### The_Duck

Sure, this equality holds (except maybe it is missing a minus sign, if the expressions you gave are right?). You've used operator-valued coefficients on the right-hand side. Like x, y, and z, $\theta$ and $\phi$ should be thought of as operators. If you want to express all rotation operators in terms of linear combinations of a set of basis operators using *real-valued coefficients*, then you need a basis of three operators, and two won't do. This is the sense in which there are three linearly independent rotation operators. This is related to the fact that you can pick three axes and then build up any rotation as a sequence of three rotations, one around each axis; but you can't do this with only two axes.

5. Mar 31, 2014

### Matterwave

Yes, you are right, I'm missing a minus sign. Can you elaborate on the requirement that I need to build linear combinations using "real-valued coefficients" rather than operator-valued coefficients? Certainly at some point P in space the coefficients in my construction of Lz are just real numbers. But even the number "1" itself can be thought of as the identity operator on my Hilbert space of functions, right? So how can I make something not an operator?

6. Mar 31, 2014

### Matterwave

By the way, I am not fighting the fact that there should be a 3-dimensional space of rotations of 3-space. That fact seems obvious. What I can't understand is why the angular momentum operators, as expressed in the way I expressed them, do not at first glance appear to be linearly independent.

7. Mar 31, 2014

### PhilDSP

It's good to remember that the sine and cosine functions are orthogonal to each other when their arguments are equal (as they are in your expressions).

8. Mar 31, 2014

### The_Duck

Well, you can do anything you want! But here is why linear combinations with real coefficients are interesting, and why I implicitly assumed real coefficients when I originally asserted that $L_x$, $L_y$, and $L_z$ are linearly independent. Let $\alpha$ be an infinitesimal angle. Then if you act with the operator $1 + \frac{i\alpha}{\hbar} L_z$ on a wave function, you rotate the whole wave function by an angle $\alpha$ around the z axis. Convince yourself of this! Similarly $1 + \frac{i\beta}{\hbar}L_x$ will rotate a wave function by an infinitesimal angle $\beta$ around the x axis. Suppose you apply two successive infinitesimal rotations, for example by acting with the operator $(1 + \frac{i\alpha}{\hbar} L_z)(1 + \frac{i\beta}{\hbar}L_x)$ This operator rotates by a tiny angle $\beta$ around x and then a tiny angle $\alpha$ around z. The net effect is equivalent to a rotation by some tiny angle around the axis $\alpha \hat z + \beta \hat x$. In fact, if we multiply out the above expression and ignore the extremely tiny number $\alpha \beta$, we get $1 + \frac{i}{\hbar}(\alpha L_z + \beta L_x)$ which corresponds nicely to the fact that this operator performs a rotation around the $\alpha \hat z + \beta \hat x$ axis. In fact, any infinitesimal rotation operator can be written in the form $1 + \frac{i}{\hbar}(\alpha L_z + \beta L_x + \gamma L_y)$ where $\alpha, \beta, \gamma$ are infinitesimal angles. So the linear combinations, with real coefficients, of this basis of three rotation operators correspond to the set of infinitesimal rotations. Consider instead an operator such as $1 + \frac{i\alpha}{\hbar} \phi L_z$, $\alpha$ infinitesimal. If this operator acts on a wave function it does not perform a simple rotation.
Instead it does some sort of weird smearing, taking $\psi(\theta, \phi) \to \psi(\theta, \phi + \alpha \phi)$. Locally, at any given point $(\theta, \phi)$, this looks like a rotation by an angle $\alpha \phi$ around the z axis. But since the angle $\alpha \phi$ varies throughout space the overall effect of the operator is not a rotation but something more complicated. So multiplying angular momentum operators by things like $\phi$ instead of real numbers results in operators like $\phi L_z$ which do not generate rotations. This expresses the fact that if you only look at one tiny region of space, there are only two independent things rotations can do to that tiny region. For example, if you look at a tiny patch of a sphere, all small-angle rotations of the sphere look like translations of that patch, and there are only two independent directions in which you can translate something along a 2D surface. Nevertheless there are three independent rotation directions for the sphere as a whole; it's just that some rotations which really are different look the same to someone who is confining their attention to the behavior of a particular tiny patch of the sphere. 9. Mar 31, 2014 ### Matterwave Ok, that makes sense to me. Thanks!
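A small symbolic check of the identity from post 3 — including the minus sign noted in post 4 — written as an illustrative SymPy snippet (not part of the thread):

```python
import sympy as sp

theta, phi, hbar = sp.symbols('theta phi hbar')
f = sp.Function('f')(theta, phi)

# Spherical-coordinate angular momentum operators as given in the thread.
Lx = lambda g: sp.I*hbar*(sp.sin(phi)*sp.diff(g, theta)
                          + sp.cot(theta)*sp.cos(phi)*sp.diff(g, phi))
Ly = lambda g: sp.I*hbar*(-sp.cos(phi)*sp.diff(g, theta)
                          + sp.cot(theta)*sp.sin(phi)*sp.diff(g, phi))
Lz = lambda g: -sp.I*hbar*sp.diff(g, phi)

# Pointwise identity with the corrected sign:
# Lz = -tan(theta) * (cos(phi) Lx + sin(phi) Ly).
diff = Lz(f) + sp.tan(theta)*(sp.cos(phi)*Lx(f) + sp.sin(phi)*Ly(f))
print(sp.simplify(diff))   # -> 0
```

Note that the coefficients here depend on $\theta$ and $\phi$, which is exactly The_Duck's point: the identity holds with operator-valued coefficients, not with real ones.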
https://gauravtiwari.org/the-new-largest-prime-number/
The Great Internet Mersenne Prime Search (GIMPS) group has reported an all-new Mersenne prime (a prime number of the form $2^P-1$), which is now officially the largest prime number ever discovered. The number is $2^{74207281}-1$ and contains a whopping 22,338,618 digits. It is denoted M74207281 and is almost 5 million digits longer than the previous record-holding prime number, M57885161. It took 31 days of non-stop calculation on an i7 computer to prove the primality of the number. A press release about this discovery is available at GIMPS' official website. The digit count itself is a one-line logarithm computation, sketched below.
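```python
from math import floor, log10

# A Mersenne number 2^p - 1 has floor(p * log10(2)) + 1 decimal digits
# (2^p is never a power of 10, so subtracting 1 cannot change the digit count).
p = 74207281
print(floor(p * log10(2)) + 1)   # 22338618, matching the announced count
```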
https://physicstravelguide.com/basic_notions/boundary_conditions
# Boundary Conditions

## Intuitive

Generically, if we can formulate the equations of motion for a theory, we have everything at our disposal to describe the solutions of the theory. However, in general we have to supplement the equations with the situation we want to actually describe with the theory. In the case of the planet, we have to add where the planet was and where it moved to at a certain instance of time. Otherwise the equation of motion would give us the solutions for all possible initial positions and velocities of the planet, and thus an infinite number of possible solutions to the theory. Such additional information is called boundary conditions. Boundary conditions select, out of every possible kind of behavior described by a theory, the particular one which is compatible with the state a system is in. (http://axelmaas.blogspot.de/2012/03/equations-that-describe-world.html)

## Concrete

Important Spatial Boundary Conditions: Overview

To specify the lattice model we must prescribe the boundary conditions for the scalar field. These conditions are classified as follows:

• Periodic boundary conditions: With these conditions the lattice is a discrete torus and the lattice field theory is invariant under discrete translations and rotations.
• Fixed boundary conditions: Here we prescribe the field on the boundary $\varphi|_{\partial\Lambda}$. Such boundary conditions are useful to describe entangled states in quantum field theory.
• Open boundary conditions: Here we switch off all interactions between sites on the lattice $\Lambda$ and sites in the complement of $\Lambda$ (viewed as a subset of $\mathbb{Z}^d$). These boundary conditions are used in solid state physics.
• Antiperiodic boundary conditions: They serve as a tool to inhibit unwanted long-range correlations or to study interfaces. This modification of the periodic boundary conditions is frequently used in lattice field theories.

(Statistical Approach to Quantum Field Theory by Andreas Wipf)

Periodic boundary conditions eliminate surfaces and are the most popular choice of boundary conditions.

Important Boundary Conditions for Differential Equations:

**Dirichlet boundary condition** (Dirichlet = data on boundary)
- The value of the dependent variable is specified on the boundary.
- Needed for elliptic or parabolic partial differential equations. Other boundary conditions are insufficient to determine a unique solution, overly restrictive, or lead to instabilities.
- "In thermodynamics, Dirichlet boundary conditions consist of surfaces (in 3D problems) held at fixed temperatures." (Source)

**Neumann boundary condition** (Neumann = normal derivative on boundary)
- The normal derivative of the dependent variable is specified on the boundary.
- Needed for elliptic or parabolic partial differential equations. Other boundary conditions are insufficient to determine a unique solution, overly restrictive, or lead to instabilities.
- "In thermodynamics, the Neumann boundary condition represents the heat flux across the boundaries." (Source)

**Cauchy boundary condition** (Cauchy = Dirichlet $\oplus$ Neumann)
- Both the value and the normal derivative of the dependent variable are specified on the boundary.
- Cauchy boundary conditions are analogous to the initial conditions for a second-order ordinary differential equation.
- Needed for hyperbolic equations on an open surface. Other boundary conditions are either too restrictive for a solution to exist, or insufficient to determine a unique solution.
- In physics, needed for classical and quantum field theory.
**Robin boundary condition** (Robin = a condition only on a linear combination of Dirichlet and Neumann data — not Dirichlet + Neumann!)
- The value of a linear combination of the dependent variable and the normal derivative of the dependent variable is specified on the boundary.

A minimal finite-difference illustration of how Dirichlet and Neumann data enter a discretized problem is given at the end of this entry.

Recommended Resources:

• For boundary conditions for waves, see chapter 9 in Georgi's "THE PHYSICS OF WAVES".
• For boundary conditions in gauge field theory, see section 4.5 "Cauchy problem and gauge conditions" in Rubakov's "Classical Theory of Gauge Fields".

## Abstract

The motto in this section is: the higher the level of abstraction, the better.

## Why is it interesting?

"The field equations and the boundary conditions are inextricably connected and the latter can in no way be considered less important than the former." — V. Fock, The theory of space, time and gravitation

"Now, Nature is described by fields, and this elegant and powerful formulation of classical and quantum mechanics based on the action needs to be supplemented with a careful treatment of boundary conditions at infinity. The issue of boundary conditions is particularly important and interesting in the case of gauge theories where the assumption 'all fields decay sufficiently rapidly at infinity' is not justified." https://arxiv.org/pdf/1601.03616.pdf

"[I]t is natural to regulate infinite sized systems by imposing boundary conditions at finite distance, often described as placing the system in a box. This idea has a long history in the gravitational context (see e.g. [15–27]) where it is common to impose a Dirichlet boundary condition, fixing the induced metric at the walls of the box." https://arxiv.org/abs/1508.02515
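As promised above, a minimal finite-difference sketch (an illustration added here, with all grid sizes assumed) of how Dirichlet and Neumann conditions enter the discretized steady heat equation $u''(x)=0$ on $[0,1]$:

```python
import numpy as np

# Case 1, Dirichlet-Dirichlet: u(0) = 0, u(1) = 1. Boundary values enter the RHS.
n = 50                                   # number of interior grid points
h = 1.0 / (n + 1)
A = (np.diag(-2.0 * np.ones(n))
     + np.diag(np.ones(n - 1), 1)
     + np.diag(np.ones(n - 1), -1))
b = np.zeros(n)                          # u(0) = 0 contributes nothing to the RHS
b[-1] -= 1.0                             # right boundary value u(1) = 1
u = np.linalg.solve(A, b)
assert np.allclose(u, np.linspace(h, 1.0 - h, n))   # unique linear profile

# Case 2, Neumann-Dirichlet: u'(0) = 0, u(1) = 1. The boundary node x = 0 is now
# an unknown; a ghost point u(-h) = u(h) encodes the zero normal derivative.
An = (np.diag(-2.0 * np.ones(n + 1))
      + np.diag(np.ones(n), 1)
      + np.diag(np.ones(n), -1))
An[0, 1] = 2.0                           # ghost-point substitution in the first row
bn = np.zeros(n + 1)
bn[-1] -= 1.0
un = np.linalg.solve(An, bn)
assert np.allclose(un, 1.0)              # constant profile pinned by u(1) = 1
```

Imposing Neumann data on both ends would make the matrix singular: the solution is then determined only up to an additive constant, a concrete instance of the statement above that some boundary conditions are insufficient to determine a unique solution.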
2019-04-26 00:25:41
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7390530109405518, "perplexity": 420.96639260064285}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578743307.87/warc/CC-MAIN-20190425233736-20190426015736-00138.warc.gz"}
https://www.csauthors.net/dan-ma/
# Dan Ma

According to our database, Dan Ma authored at least 84 papers between 2003 and 2021.

Collaborative distances:
• Dijkstra number of four.
• Erdős number of four.

## Bibliography

2021
The Information Needs of Chinese Family members of Cancer Patients in the Online Health Community: What and Why? Inf. Process. Manag., 2021
Automatic Intent-Slot Induction for Dialogue Systems. CoRR, 2021
Maize and soybean heights estimation from unmanned aerial vehicle (UAV) LiDAR data. Comput. Electron. Agric., 2021

2020
Neural Networks-Based Active Fault-Tolerant Control for a Class of Switched Nonlinear Systems With Its Application to RCL Circuit. IEEE Trans. Syst. Man Cybern. Syst., 2020
Output Regulation for Switched Systems With Multiple Disturbances. IEEE Trans. Circuits Syst., 2020
Two birds with one stone: Transforming and generating facial images with iterative GAN. Neurocomputing, 2020
Contextual determinants of IT governance mechanism formulation for senior care services in local governments. Int. J. Inf. Manag., 2020
Game of Learning Bloch Equation Simulations for MR Fingerprinting. CoRR, 2020
Word Graph Network: Understanding Obscure Sentences on Social Media for Violation Comment Detection. Proceedings of the Natural Language Processing and Chinese Computing, 2020
Delay Margin for Containment Control of Second-Order Multi-Agent Systems Over Directed Graphs. Proceedings of the 16th International Conference on Control, 2020
Tradeoff between Delay Robustness and Tracking Performance by PID Control: Second-Order Unstable Systems. Proceedings of the 59th IEEE Conference on Decision and Control, 2020
Multiplicity-Induced-Dominancy Extended to Neutral Delay Equations: Towards a Systematic PID Tuning Based on Rightmost Root Assignment. Proceedings of the 2020 American Control Conference, 2020

2019
Optimal Experiment Design for Magnetic Resonance Fingerprinting: Cramér-Rao Bound Meets Spin Dynamics. IEEE Trans. Medical Imaging, 2019
Bounds on Delay Consensus Margin of Second-Order Multiagent Systems With Robust Position and Velocity Feedback Protocol. IEEE Trans. Autom. Control., 2019
Delay Margin of Low-Order Systems Achievable by PID Controllers. IEEE Trans. Autom. Control., 2019
Reliability and Numerical Analysis of a Robot Safety System. J. Syst. Sci. Complex., 2019
Output regulation for a class of positive switched systems. J. Frankl. Inst., 2019
Adaptive neural control for switched non-linear systems with multiple tracking error constraints. IET Signal Process., 2019
Explicit bounds for guaranteed stabilization by PID control of second-order unstable delay systems. Autom., 2019
Simultaneous stabilization of discrete-time delay systems and bounds on delay margin. Autom., 2019
Estimating forest aboveground biomass using small-footprint full-waveform airborne LiDAR data. Int. J. Appl. Earth Obs. Geoinformation, 2019
Optimal Design and Ownership Structures of Innovative Retail Payment Systems. Proceedings of the 40th International Conference on Information Systems, 2019
Finite-Time Adaptive Consensus of Second-Order Nonlinear Leader-Following Multi-Agent Systems with Switching Topology. Proceedings of the 15th IEEE International Conference on Control and Automation, 2019
Periodic event-triggered control for switched affine systems. Proceedings of the 15th IEEE International Conference on Control and Automation, 2019
Exact Delay Consensus Margin of First-Order Agents under PID Protocol.
Proceedings of the 58th IEEE Conference on Decision and Control, 2019
Delay Robustness of Second-Order Uncertain Nonlinear Delay Systems under PID Control. Proceedings of the 2019 IEEE Conference on Control Technology and Applications, 2019

2018
Epileptic Seizure Detection in Long-Term EEG Recordings by Using Wavelet-Based Directed Transfer Function. IEEE Trans. Biomed. Eng., 2018
L2 bumpless transfer control for switched linear systems with almost output regulation. Syst. Control. Lett., 2018
A Model of Competition Between Perpetual Software and Software as a Service. MIS Q., 2018
A metrics suite of cloud computing adoption readiness. Electron. Mark., 2018
Anti-Windup $\boldsymbol{L}_{\infty}$ Event-Triggered Control for Linear Systems with Actuator Saturation and Persistent Bounded Disturbance. Proceedings of the 15th International Conference on Control, 2018
Research on Joint Mode Selection and Resource Allocation Scheme in D2D Networks. Proceedings of the International Conference on Cyber-Enabled Distributed Computing and Knowledge Discovery, 2018
Bounds on Delay Margin for Consensus of Second-Order Multi-Agent Systems. Proceedings of the 57th IEEE Conference on Decision and Control, 2018

2017
Two Birds with One Stone: Iteratively Learn Facial Attributes with GANs. CoRR, 2017
Gene functional annotation via matrix completion. Proceedings of the IEEE International Conference on Communications, 2017
Location privacy protection in asynchronous localization networks by resource allocation approaches. Proceedings of the 2017 IEEE International Conference on Communications Workshops, 2017
Delay robustness of low-order systems under PID control. Proceedings of the 56th IEEE Annual Conference on Decision and Control, 2017
On delay margin bounds of discrete-time systems. Proceedings of the 56th IEEE Annual Conference on Decision and Control, 2017

2016
GenePrint: Generic and Accurate Physical-Layer Identification for UHF RFID Tags. IEEE/ACM Trans. Netw., 2016
Twins: Device-Free Object Tracking Using Passive Tags. IEEE/ACM Trans. Netw., 2016
Spectral Similarity Assessment Based on a Spectrum Reflectance-Absorption Index and Simplified Curve Patterns for Hyperspectral Remote Sensing. Sensors, 2016
Resource allocation for hybrid mode device-to-device communication networks. Proceedings of the 8th International Conference on Wireless Communications & Signal Processing, 2016

2015
Stabilization of networked switched linear systems: An asynchronous switching delay system approach. Syst. Control. Lett., 2015
Push or Pull? A Website's Strategic Choice of Content Delivery Mechanism. J. Manag. Inf. Syst., 2015
Technology investment decision-making under uncertainty. Inf. Technol. Manag., 2015
Analyzing Software as a Service with Per-Transaction Charges. Inf. Syst. Res., 2015
Competition, cooperation, and regulation: Understanding the evolution of the mobile payments technology ecosystem. Electron. Commer. Res. Appl., 2015
Special issue: Contemporary research on payments and cards in the global fintech revolution. Electron. Commer. Res. Appl., 2015
Guest editorial: Market transformation to an IT-enabled services-oriented economy. Decis. Support Syst., 2015
Pricing strategy for cloud computing: A damaged services perspective. Decis. Support Syst., 2015
A low complexity NLOS error mitigation method in UWB localization. Proceedings of the 2015 IEEE/CIC International Conference on Communications in China, 2015
Mechanism Design for Near Real-Time Retail Payment and Settlement Systems.
Proceedings of the 48th Hawaii International Conference on System Sciences, 2015

2014
SVD Compression for Magnetic Resonance Fingerprinting in the Time Domain. IEEE Trans. Medical Imaging, 2014
Competition Between Software-as-a-Service Vendors. IEEE Trans. Engineering Management, 2014
Paths of Influence for Innovations in Financial IS and Technology Ecosystems. Proceedings of the 18th Pacific Asia Conference on Information Systems, 2014
Twins: Device-free object tracking using passive tags. Proceedings of the 2014 IEEE Conference on Computer Communications, 2014
CBID: A Customer Behavior Identification System Using Passive Tags. Proceedings of the 22nd IEEE International Conference on Network Protocols, 2014
Two-layer switching architecture and switching rule for switched linear systems. Proceedings of the 11th IEEE International Conference on Control & Automation, 2014
A Metrics Suite for Firm-Level Cloud Computing Adoption Readiness. Proceedings of the Economics of Grids, Clouds, Systems, and Services, 2014

2013
The Well-Posedness and Stability Analysis of a Computer Series System. J. Appl. Math., 2013
GenePrint: Generic and accurate physical-layer identification for UHF RFID tags. Proceedings of the 2013 21st IEEE International Conference on Network Protocols, 2013
Collision-driven physical-layer identification of RFID UHF tags. Proceedings of the 2013 21st IEEE International Conference on Network Protocols, 2013
Technology Investment Decision-Making under Uncertainty: The Case of Mobile Payment Systems. Proceedings of the 46th Hawaii International Conference on System Sciences, 2013
Cost Efficiency Strategy in the Software-as-a-Service Market: Modeling Results and Related Implementation Issues. Proceedings of the Economics of Grids, Clouds, Systems, and Services, 2013
Switching Hinf Control Synthesis Designs for Networked Control Systems. Proceedings of the 19th International Conference on Control Systems and Computer Science, 2013

2012
Use of RSS feeds to push online content to users. Decis. Support Syst., 2012
The pricing model of cloud computing services. Proceedings of the Fourteenth International Conference on Electronic Commerce, 2012
Investment timing for mobile payment systems. Proceedings of the Fourteenth International Conference on Electronic Commerce, 2012

2011
Comprehensive Evaluation and Selection System of Coal Distributors with Analytic Hierarchy Process and Artificial Neural Network. J. Comput., 2011

2010
Information Technology Diffusion with Influentials, Imitators, and Opponents. J. Manag. Inf. Syst., 2010
Three-Dimensional Nonlinear Dynamic Model and Macro Control of Real Estate. Intell. Inf. Manag., 2010
Hybrid state feedback controller design of networked switched control systems with packet dropout. Proceedings of the American Control Conference, 2010

2009
Wake up or fall asleep-value implication of trusted computing. Inf. Technol. Manag., 2009
Exponential Asymptotic Stability of a Two-Unit Standby Redundant Electronic Equipment System under Human Failure. Proceedings of the Advances in Neural Networks, 2009
Offering RSS Feeds: Does It Help to Gain Competitive Advantage? Proceedings of the 42nd Hawaii International Conference on Systems Science (HICSS-42 2009), 2009
Passive control for networked switched systems with network-induced delays and packet dropout. Proceedings of the 48th IEEE Conference on Decision and Control, 2009
Robust exponential stabilization of Networked Switched Control Systems.
Proceedings of the IEEE International Conference on Control Applications, 2009

2008
The Pricing Strategy Analysis for the "Software-as-a-Service" Business Model. Proceedings of the Grid Economics and Business Models, 5th International Workshop, 2008

2007
The Business Model of "Software-As-A-Service". Proceedings of the 2007 IEEE International Conference on Services Computing (SCC 2007), 2007

2004
A General Model for Heterogeneous Web Services Integration. Proceedings of the Content Computing, Advanced Workshop on Content Computing, 2004
Dynamic Scheduling Algorithm for Parallel Real-Time Jobs in Heterogeneous System. Proceedings of the 2004 International Conference on Computer and Information Technology (CIT 2004), 2004

2003
An Extension of Grid Service: Grid Mobile Service. Proceedings of the Grid and Cooperative Computing, Second International Workshop, 2003
A New Agent-Based Distributed Model of Grid Service Advertisement and Discovery. Proceedings of the Grid and Cooperative Computing, Second International Workshop, 2003
A Static Task Scheduling Algorithm in Grid Computing. Proceedings of the Grid and Cooperative Computing, Second International Workshop, 2003
2021-05-15 10:31:10
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.1952056735754013, "perplexity": 12943.227582952615}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243991801.49/warc/CC-MAIN-20210515100825-20210515130825-00112.warc.gz"}
http://openstudy.com/updates/5142e1bae4b0e8e8f6bba838
## Luigi0210 3 years ago solve the integral

1. Luigi0210 $\int\limits_{0}^{4}(1-\sqrt{u})/(\sqrt{u})$
2. anonymous Lolz...didn't even neded to substitute.
3. anonymous need*
4. anonymous
5. zepdrix Yah the substitution was kinda silly :) If you do a substitution, don't forget to change the limits of integration also.
6. anonymous And there's no du at the back of the integral
7. anonymous So there's no solution
8. anonymous You can't continue on integrating that without respect of anything.
9. Luigi0210 Oops, sorry, forgot about the du.. It is with respect to du
10. anonymous What are you integrating with respect to? You integrating with respect to zero?
11. anonymous Okay. Now you can integrate it. Just separate the numerator.
12. anonymous And you can continue integrating per usual.
13. anonymous $\int\limits_{}^{}\frac{ 1 }{ \sqrt{u} }du-\int\limits_{}^{}1du$$\int\limits_{}^{}u^{-\frac{ 1 }{ 2 }}du-u$$2u^{\frac{ 1 }{ 2 }}-u$ Then just plug in the limit from 0 to 4
14. Luigi0210 My only real problem is finding the anti-derivative
15. anonymous Anti-differentiating is just differentiating in reverse. Try and use reverse psychology when integrating if you can.
16. zepdrix $\large \frac{1-\sqrt u}{\sqrt u} \qquad = \qquad \frac{1}{\sqrt u}-\frac{\sqrt u}{\sqrt u} \qquad = \qquad u^{-1/2}-1$ Yah you just apply the Power Rule for Integration! :D
17. Luigi0210 Thank you very much
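For completeness, the definite integral discussed in this thread evaluates to zero; a quick symbolic check (added here, not part of the original thread):

```python
import sympy as sp

u = sp.symbols('u', positive=True)
# The integral from the thread: \int_0^4 (1 - sqrt(u)) / sqrt(u) du
val = sp.integrate((1 - sp.sqrt(u)) / sp.sqrt(u), (u, 0, 4))
print(val)  # 0, matching [2*sqrt(u) - u] from 0 to 4: (4 - 4) - 0 = 0
```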
2016-07-25 14:12:59
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9815683960914612, "perplexity": 4073.324085760587}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-30/segments/1469257824230.71/warc/CC-MAIN-20160723071024-00237-ip-10-185-27-174.ec2.internal.warc.gz"}
https://www.deepdyve.com/lp/springer_journal/mining-frequent-subgraphs-over-uncertain-graph-databases-under-YzszLiWT54
Mining frequent subgraphs over uncertain graph databases under probabilistic semantics

Frequent subgraph mining has been extensively studied on certain graph data. However, uncertainty is intrinsic in graph data in practice, yet there is very little work on mining uncertain graph data. This paper focuses on mining frequent subgraphs over uncertain graph data under the probabilistic semantics. Specifically, a measure called the $\varphi$-frequent probability is introduced to evaluate the degree of recurrence of subgraphs. Given a set of uncertain graphs and two real numbers $0 < \varphi, \tau < 1$, the goal is to quickly find all subgraphs with $\varphi$-frequent probability at least $\tau$. Due to the NP-hardness of the problem and to the #P-hardness of computing the $\varphi$-frequent probability of a subgraph, an approximate mining algorithm is proposed to produce an $(\varepsilon, \delta)$-approximate set $\Pi$ of "frequent subgraphs", where $0 < \varepsilon < \tau$ is the error tolerance, and $0 < \delta < 1$ is a confidence bound. The algorithm guarantees that (1) any frequent subgraph $S$ is contained in $\Pi$ with probability at least $((1 - \delta)/2)^s$, where $s$ is the number of edges in $S$; (2) any infrequent subgraph with $\varphi$-frequent probability less than $\tau - \varepsilon$ is contained in $\Pi$ with probability at most $\delta/2$. The theoretical analysis shows that to obtain any frequent subgraph with probability at least $1 - \Delta$, the input parameter $\delta$ of the algorithm must be set to at most $1 - 2(1 - \Delta)^{1/\ell_{\max}}$, where $0 < \Delta < 1$, and $\ell_{\max}$ is the maximum number of edges in frequent subgraphs. Extensive experiments on real uncertain graph data verify that the proposed algorithm is practically efficient and has very high approximation quality. Moreover, the difference between the probabilistic semantics and the expected semantics on mining frequent subgraphs over uncertain graph data is discussed in this paper for the first time.

The VLDB Journal, Springer Journals, Volume 21 (6) – Dec 1, 2012, 25 pages. Publisher: Springer-Verlag. Subject: Computer Science; Database Management. ISSN 1066-8888, eISSN 0949-877X. DOI 10.1007/s00778-012-0268-8
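To make the $\varphi$-frequent probability concrete: under the probabilistic (possible-world) semantics, each uncertain graph materializes its edges independently with the given probabilities, and a pattern is frequent in a world if its support there reaches $\varphi$. A naive Monte Carlo estimator of this quantity might look as follows. This is an illustration of the definition only, not the paper's algorithm (which provides $(\varepsilon, \delta)$ guarantees), and the containment test is deliberately simplified to an edge-set check rather than true subgraph isomorphism.

```python
import random

def occurs(pattern_edges, world_edges):
    # Simplified containment test; real subgraph isomorphism is much harder.
    return pattern_edges <= world_edges

def phi_frequent_probability(pattern_edges, uncertain_graphs, phi, n_samples=10_000):
    """Estimate Pr[pattern is frequent] over sampled possible worlds.

    uncertain_graphs: list of dicts mapping edge -> existence probability.
    """
    hits = 0
    for _ in range(n_samples):
        support = 0
        for g in uncertain_graphs:
            world = {e for e, p in g.items() if random.random() < p}
            if occurs(pattern_edges, world):
                support += 1
        if support >= phi * len(uncertain_graphs):
            hits += 1
    return hits / n_samples

# Toy data: three uncertain graphs over labelled edges; pattern = one edge.
db = [{("a", "b"): 0.9, ("b", "c"): 0.5},
      {("a", "b"): 0.4},
      {("a", "b"): 0.8, ("a", "c"): 0.7}]
print(phi_frequent_probability({("a", "b")}, db, phi=0.5))  # ~0.82
```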
2018-02-22 06:39:43
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5190090537071228, "perplexity": 1819.5236375272614}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891814036.49/warc/CC-MAIN-20180222061730-20180222081730-00486.warc.gz"}
http://jcp.bmj.com/content/22/6/683
Article Text

Uncertainties in the determination of the 'cortisol-binding capacity' of plasma and their removal

1. C. W. Burke
2018-06-20 05:29:37
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2677288353443146, "perplexity": 14884.1369706456}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267863463.3/warc/CC-MAIN-20180620050428-20180620070428-00420.warc.gz"}
https://bitbucket.org/birkenfeld/sphinx/issue/203/substitution-definition-of-a-math-formula
substitution definition of a math formula

Dzhelil Rufat created an issue

It seems to be impossible to define substitutions for math formulas. To demonstrate the error, try to compile the following simple code:

    This is a simple math equation |eqn| .

    .. |eqn| math:: e^{i\pi} + 1 = 0

When I try to compile it, I am getting the following error:

    (WARNING/2) Substitution definition "eqn" empty or invalid.

    .. |eqn| math:: e^{i\pi} + 1 = 0

    .. |eqn| replace:: :math:`formula`
2014-04-18 05:41:12
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.964747965335846, "perplexity": 9940.748465538167}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00039-ip-10-147-4-33.ec2.internal.warc.gz"}
http://www.theinfolist.com/html/ALL/s/Thermodynamic_equations.html
Thermodynamics is expressed by a mathematical framework of ''thermodynamic equations'' which relate various thermodynamic quantities and physical properties measured in a laboratory or production process. Thermodynamics is based on a fundamental set of postulates, that became the laws of thermodynamics.

# Introduction

One of the fundamental thermodynamic equations is the description of thermodynamic work in analogy to mechanical work, or weight lifted through an elevation against gravity, as defined in 1824 by French physicist Sadi Carnot. Carnot used the phrase motive power for work. In the footnotes to his famous ''On the Motive Power of Fire'', he states: "We use here the expression ''motive power'' to express the useful effect that a motor is capable of producing. This effect can always be likened to the elevation of a weight to a certain height. It has, as we know, as a measure, the product of the weight multiplied by the height to which it is raised." With the inclusion of a unit of time in Carnot's definition, one arrives at the modern definition for power:

:$P = \frac{W}{t}$

During the latter half of the 19th century, physicists such as Rudolf Clausius, Peter Guthrie Tait, and Willard Gibbs worked to develop the concept of a thermodynamic system and the correlative energetic laws which govern its associated processes. The equilibrium state of a thermodynamic system is described by specifying its "state".
The state of a thermodynamic system is specified by a number of extensive quantities, the most familiar of which are volume, internal energy, and the amount of each constituent particle (particle numbers). Extensive parameters are properties of the entire system, as contrasted with intensive parameters which can be defined at a single point, such as temperature and pressure. The extensive parameters (except entropy) are generally conserved in some way as long as the system is "insulated" to changes to that parameter from the outside. The truth of this statement for volume is trivial; for particles one might say that the total particle number of each atomic element is conserved. In the case of energy, the statement of the conservation of energy is known as the first law of thermodynamics.

A thermodynamic system is in equilibrium when it is no longer changing in time. This may happen in a very short time, or it may happen with glacial slowness. A thermodynamic system may be composed of many subsystems which may or may not be "insulated" from each other with respect to the various extensive quantities. If we have a thermodynamic system in equilibrium in which we relax some of its constraints, it will move to a new equilibrium state. The thermodynamic parameters may now be thought of as variables and the state may be thought of as a particular point in a space of thermodynamic parameters. The change in the state of the system can be seen as a path in this state space. This change is called a thermodynamic process. Thermodynamic equations are now used to express the relationships between the state parameters at these different equilibrium states.

The concept which governs the path that a thermodynamic system traces in state space as it goes from one equilibrium state to another is that of entropy. The entropy is first viewed as an extensive function of all of the extensive thermodynamic parameters.
If we have a thermodynamic system in equilibrium, and we release some of the extensive constraints on the system, there are many equilibrium states that it could move to consistent with the conservation of energy, volume, etc. The second law of thermodynamics specifies that the equilibrium state that it moves to is in fact the one with the greatest entropy. Once we know the entropy as a function of the extensive variables of the system, we will be able to predict the final equilibrium state.

# Notation

Some of the most common thermodynamic quantities are the pressure $p$, volume $V$, temperature $T$, entropy $S$, internal energy $U$, chemical potentials $\mu_i$, and particle numbers $N_i$. The ''conjugate variable pairs'' $(T, S)$, $(p, V)$, and $(\mu_i, N_i)$ are the fundamental state variables used to formulate the thermodynamic functions.

The most important thermodynamic potentials are the following functions: the internal energy $U$, the enthalpy $H = U + pV$, the Helmholtz free energy $F = U - TS$, and the Gibbs free energy $G = U + pV - TS$.

Thermodynamic systems are typically affected by the following types of system interactions: exchange of heat, of work, and of matter with the surroundings. The types under consideration are used to classify systems as open systems, closed systems, and isolated systems.

Common material properties determined from the thermodynamic functions are the response functions: the compressibility, the heat capacities, and the coefficient of thermal expansion.

Constants that occur in many relationships due to the application of a standard system of units include the Boltzmann constant $k_B$ and the ideal gas constant $R$.

# Laws of thermodynamics

The behavior of a thermodynamic system is summarized in the laws of thermodynamics, which concisely are:

* Zeroth law of thermodynamics
::If ''A'', ''B'', ''C'' are thermodynamic systems such that ''A'' is in thermal equilibrium with ''B'' and ''B'' is in thermal equilibrium with ''C'', then ''A'' is in thermal equilibrium with ''C''.
:The zeroth law is of importance in thermometry, because it implies the existence of temperature scales. In practice, ''C'' is a thermometer, and the zeroth law says that systems that are in thermodynamic equilibrium with each other have the same temperature. The law was actually the last of the laws to be formulated.

* First law of thermodynamics: a formulation of the law of conservation of energy, adapted for thermodynamic processes.
It distinguishes in principle two forms of energy transfer, heat and thermodynamic work, for a system of a constant amount of matter.

::$dU = \delta Q - \delta W$

where $dU$ is the infinitesimal increase in internal energy of the system, $\delta Q$ is the infinitesimal heat flow into the system, and $\delta W$ is the infinitesimal work done by the system.

:The first law is the law of conservation of energy. The symbol $\delta$, instead of the plain d, originated in the work of German mathematician Carl Gottfried Neumann and is used to denote an inexact differential and to indicate that ''Q'' and ''W'' are path-dependent (i.e., they are not state functions). In some fields such as physical chemistry, positive work is conventionally considered work done on the system rather than by the system, and the law is expressed as $dU = \delta Q + \delta W$.

* Second law of thermodynamics
::The entropy of an isolated system never decreases: $dS \ge 0$ for an isolated system.
:A concept related to the second law which is important in thermodynamics is that of reversibility. A process within a given isolated system is said to be reversible if throughout the process the entropy never increases (i.e. the entropy remains unchanged).

* Third law of thermodynamics
:: $S = 0$ when $T = 0$
:The third law of thermodynamics states that at the absolute zero of temperature, the entropy is zero for a perfect crystalline structure.
* Onsager reciprocal relations – sometimes called the ''Fourth law of thermodynamics''
:: $\mathbf{J}_{u} = L_{uu}\, \nabla\left(1/T\right) - L_{um}\, \nabla\left(m/T\right)$
:: $\mathbf{J}_{m} = L_{mu}\, \nabla\left(1/T\right) - L_{mm}\, \nabla\left(m/T\right)$
:The fourth law of thermodynamics is not yet an agreed upon law (many supposed variations exist); historically, however, the Onsager reciprocal relations have been frequently referred to as the fourth law.

# The fundamental equation

The first and second law of thermodynamics are the most fundamental equations of thermodynamics. They may be combined into what is known as the fundamental thermodynamic relation, which describes all of the changes of thermodynamic state functions of a system of uniform temperature and pressure. As a simple example, consider a system composed of a number ''k'' of different types of particles, which has the volume as its only external variable. The fundamental thermodynamic relation may then be expressed in terms of the internal energy as:

:$dU = TdS - pdV + \sum_{i=1}^{k} \mu_i \, dN_i$

Some important aspects of this equation should be noted:

* The thermodynamic space has ''k''+2 dimensions.
* The differential quantities (''U'', ''S'', ''V'', ''N''''i'') are all extensive quantities. The coefficients of the differential quantities are intensive quantities (temperature, pressure, chemical potential). Each pair in the equation is known as a conjugate pair with respect to the internal energy. The intensive variables may be viewed as a generalized "force". An imbalance in the intensive variable will cause a "flow" of the extensive variable in a direction to counter the imbalance.
* The equation may be seen as a particular case of the chain rule. In other words:

:$dU = \left(\frac{\partial U}{\partial S}\right)_{V,\{N_i\}} dS + \left(\frac{\partial U}{\partial V}\right)_{S,\{N_i\}} dV + \sum_i \left(\frac{\partial U}{\partial N_i}\right)_{S,V,\{N_{j \ne i}\}} dN_i$

from which the following identifications can be made:

:$\left(\frac{\partial U}{\partial S}\right)_{V,\{N_i\}} = T \qquad \left(\frac{\partial U}{\partial V}\right)_{S,\{N_i\}} = -p \qquad \left(\frac{\partial U}{\partial N_i}\right)_{S,V,\{N_{j \ne i}\}} = \mu_i$

These equations are known as "equations of state" with respect to the internal energy. (Note: the relation between pressure, volume, temperature, and particle number which is commonly called "the equation of state" is just one of many possible equations of state.) If we know all ''k''+2 of the above equations of state, we may reconstitute the fundamental equation and recover all thermodynamic properties of the system.

* The fundamental equation can be solved for any other differential and similar expressions can be found. For example, we may solve for $dS$ and find that

:$\left(\frac{\partial S}{\partial V}\right)_{U,\{N_i\}} = \frac{p}{T}$
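As a worked illustration of how a single fundamental equation reconstitutes all of thermodynamics (a standard textbook example added here, not part of the original page): for a monatomic ideal gas the Sackur–Tetrode entropy is

:$S(U,V,N) = N k_B \left[\ln\!\left(\frac{V}{N}\left(\frac{4\pi m U}{3 N h^2}\right)^{3/2}\right) + \frac{5}{2}\right]$

and two of its equations of state follow by differentiation:

:$\left(\frac{\partial S}{\partial U}\right)_{V,N} = \frac{3 N k_B}{2U} = \frac{1}{T} \quad\Longrightarrow\quad U = \tfrac{3}{2} N k_B T$

:$\left(\frac{\partial S}{\partial V}\right)_{U,N} = \frac{N k_B}{V} = \frac{p}{T} \quad\Longrightarrow\quad pV = N k_B T$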
For each such potential, the relevant fundamental equation results from the same Second-Law principle that gives rise to energy minimization under restricted conditions: that the total entropy of the system and its environment is maximized in equilibrium. The intensive parameters give the derivatives of the environment entropy with respect to the extensive properties of the system.

The four most common thermodynamic potentials are the internal energy $U(S,V,\{N_i\})$, the enthalpy $H(S,p,\{N_i\}) = U + pV$, the Helmholtz free energy $F(T,V,\{N_i\}) = U - TS$, and the Gibbs free energy $G(T,p,\{N_i\}) = U + pV - TS$. After each potential is shown its "natural variables". These variables are important because if the thermodynamic potential is expressed in terms of its natural variables, then it will contain all of the thermodynamic relationships necessary to derive any other relationship. In other words, it too will be a fundamental equation. For the above four potentials, the fundamental equations are expressed as:

:$dU\left(S,V,\{N_i\}\right) = TdS - pdV + \sum_i \mu_i \, dN_i$
:$dH\left(S,p,\{N_i\}\right) = TdS + Vdp + \sum_i \mu_i \, dN_i$
:$dF\left(T,V,\{N_i\}\right) = -SdT - pdV + \sum_i \mu_i \, dN_i$
:$dG\left(T,p,\{N_i\}\right) = -SdT + Vdp + \sum_i \mu_i \, dN_i$

The thermodynamic square can be used as a tool to recall and derive these potentials.

# First order equations

Just as with the internal energy version of the fundamental equation, the chain rule can be used on the above equations to find ''k''+2 equations of state with respect to the particular potential. If Φ is a thermodynamic potential, then the fundamental equation may be expressed as:

:$d\Phi = \sum_i \frac{\partial \Phi}{\partial X_i} dX_i$

where the $X_i$ are the natural variables of the potential. If $\gamma_i$ is conjugate to $X_i$ then we have the equations of state for that potential, one for each set of conjugate variables.

:$\gamma_i = \frac{\partial \Phi}{\partial X_i}$

Only one equation of state will not be sufficient to reconstitute the fundamental equation. All equations of state will be needed to fully characterize the thermodynamic system. Note that what is commonly called "the equation of state" is just the "mechanical" equation of state involving the Helmholtz potential and the volume:

:$\left(\frac{\partial F}{\partial V}\right)_{T,\{N_i\}} = -p$

For an ideal gas, this becomes the familiar ''PV'' = ''Nk''B''T''.

## Euler integrals

Because all of the natural variables of the internal energy ''U'' are extensive quantities, it follows from Euler's homogeneous function theorem that

:$U = TS - pV + \sum_i \mu_i N_i$

Substituting into the expressions for the other main potentials, we have the following expressions for the thermodynamic potentials:

:$F = -pV + \sum_i \mu_i N_i$
:$H = TS + \sum_i \mu_i N_i$
:$G = \sum_i \mu_i N_i$

Note that the Euler integrals are sometimes also referred to as fundamental equations.

## Gibbs–Duhem relationship

Differentiating the Euler equation for the internal energy and combining with the fundamental equation for internal energy, it follows that:

:$0 = SdT - Vdp + \sum_i N_i \, d\mu_i$

which is known as the Gibbs–Duhem relationship. The Gibbs–Duhem is a relationship among the intensive parameters of the system. It follows that for a simple system with ''r'' components, there will be ''r''+1 independent parameters, or degrees of freedom. For example, a simple system with a single component will have two degrees of freedom, and may be specified by only two parameters, such as pressure and volume for example.
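To make the passage from $U$ to the other potentials explicit, here is the Legendre-transform step for the Helmholtz free energy (a standard derivation, added for clarity). Starting from $F = U - TS$ and the fundamental equation for $dU$:

:$dF = dU - TdS - SdT = \left(TdS - pdV + \sum_i \mu_i \, dN_i\right) - TdS - SdT = -SdT - pdV + \sum_i \mu_i \, dN_i$

so $F$ trades the entropy $S$ for the temperature $T$ as a natural variable. The same one-line computation applied to $H = U + pV$ and $G = U + pV - TS$ yields the other two fundamental equations above.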
The law is named after Willard Gibbs and Pierre Duhem.

# Second order equations

There are many relationships that follow mathematically from the above basic equations. See Exact differential for a list of mathematical relationships. Many equations are expressed as second derivatives of the thermodynamic potentials (see Bridgman equations).

## Maxwell relations

Maxwell relations are equalities involving the second derivatives of thermodynamic potentials with respect to their natural variables. They follow directly from the fact that the order of differentiation does not matter when taking the second derivative. The four most common Maxwell relations are:

:$\left(\frac{\partial T}{\partial V}\right)_S = -\left(\frac{\partial p}{\partial S}\right)_V \qquad \left(\frac{\partial T}{\partial p}\right)_S = \left(\frac{\partial V}{\partial S}\right)_p$
:$\left(\frac{\partial S}{\partial V}\right)_T = \left(\frac{\partial p}{\partial T}\right)_V \qquad \left(\frac{\partial S}{\partial p}\right)_T = -\left(\frac{\partial V}{\partial T}\right)_p$

The thermodynamic square can be used as a tool to recall and derive these relations.

## Material properties

Second derivatives of thermodynamic potentials generally describe the response of the system to small changes. The number of second derivatives which are independent of each other is relatively small, which means that most material properties can be described in terms of just a few "standard" properties. For the case of a single component system, there are three properties generally considered "standard" from which all others may be derived:

* Compressibility at constant temperature or constant entropy: $\beta_{T\ \mathrm{or}\ S} = -\frac{1}{V}\left(\frac{\partial V}{\partial p}\right)_{T\ \mathrm{or}\ S}$
* Specific heat (per-particle) at constant pressure or constant volume: $c_{p\ \mathrm{or}\ V} = \frac{T}{N}\left(\frac{\partial S}{\partial T}\right)_{p\ \mathrm{or}\ V}$
* Coefficient of thermal expansion: $\alpha_p = \frac{1}{V}\left(\frac{\partial V}{\partial T}\right)_p$

These properties are seen to be the three possible second derivatives of the Gibbs free energy with respect to temperature and pressure.

# Thermodynamic property relations

Properties such as pressure, volume, temperature, unit cell volume, bulk modulus and mass are easily measured. Other properties are measured through simple relations, such as density, specific volume, specific weight. Properties such as internal energy, entropy, enthalpy, and heat transfer are not so easily measured or determined through simple relations. Thus, we use more complex relations such as the Maxwell relations, the Clapeyron equation, and the Mayer relation.
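A quick symbolic sanity check of the relation $\left(\frac{\partial S}{\partial V}\right)_T = \left(\frac{\partial p}{\partial T}\right)_V$ (an added illustration; the check succeeds because sympy, like the underlying mathematics, treats mixed partial derivatives as commuting):

```python
import sympy as sp

T, V = sp.symbols('T V', positive=True)
F = sp.Function('F')(T, V)   # Helmholtz free energy F(T, V), left generic

S = -sp.diff(F, T)           # entropy,  S = -(dF/dT)_V
p = -sp.diff(F, V)           # pressure, p = -(dF/dV)_T

# The Maxwell relation (dS/dV)_T = (dp/dT)_V reduces to the equality of the
# mixed second derivatives of F, so the difference simplifies to zero.
print(sp.simplify(sp.diff(S, V) - sp.diff(p, T)))   # -> 0
```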
Maxwell relations in thermodynamics are critical because they provide a means of simply measuring the change in properties of pressure, temperature, and specific volume, to determine a change in entropy. Entropy cannot be measured directly. The change in entropy with respect to pressure at a constant temperature is the same as the negative change in specific volume with respect to temperature at a constant pressure, for a simple compressible system. Maxwell relations in thermodynamics are often used to derive thermodynamic relations.

The Clapeyron equation allows us to use pressure, temperature, and specific volume to determine an enthalpy change that is connected to a phase change. It is significant to any phase change process that happens at a constant pressure and temperature. One of the relations it yields is the enthalpy of vaporization at a given temperature, obtained by measuring the slope of a saturation curve on a pressure vs. temperature graph. It also allows us to determine the specific volume of a saturated vapor and liquid at that given temperature. In the equation below, $L$ represents the specific latent heat, $T$ represents temperature, and $\Delta v$ represents the change in specific volume.

:$\frac{dP}{dT} = \frac{L}{T \Delta v}$

The Mayer relation states that the specific heat capacity of a gas at constant volume is slightly less than at constant pressure. This relation was built on the reasoning that energy must be supplied to raise the temperature of the gas and for the gas to do work in a volume changing case. According to this relation, the difference between the specific heat capacities is the same as the universal gas constant. This relation is represented by the difference between Cp and Cv:

:Cp – Cv = R
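As a numeric illustration of the Clapeyron equation (added here; the property values are standard round numbers for water at its normal boiling point, assumed rather than taken from the text above):

```python
# Clapeyron estimate of dP/dT for water at its normal boiling point.
# Illustrative round-number property values (assumed, not from the text above).
L = 2.26e6         # specific latent heat of vaporization, J/kg
T = 373.0          # boiling temperature, K
dv = 1.67 - 1e-3   # specific volume change, vapor minus liquid, m^3/kg

dP_dT = L / (T * dv)   # Pa per kelvin
print(round(dP_dT))    # ~3630 Pa/K, i.e. roughly 27 mmHg per kelvin
```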
2023-03-30 23:26:48
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 41, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7437840700149536, "perplexity": 407.5592371811054}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296949506.62/warc/CC-MAIN-20230330225648-20230331015648-00188.warc.gz"}
https://www.sparrho.com/item/efficient-evaluation-of-cosmological-angular-statistics/139808c/
# Efficient Evaluation of Cosmological Angular Statistics

Research paper by Valentin Assassi, Marko Simonović, Matias Zaldarriaga

Indexed on: 14 May '17. Published on: 14 May '17. Published in: arXiv - Astrophysics - Cosmology and Nongalactic Astrophysics

#### Abstract

Angular statistics of cosmological observables are hard to compute. The main difficulty is due to the presence of highly oscillatory Bessel functions which need to be integrated over. In this paper, we provide a simple and fast method to compute the angular power spectrum and bispectrum of any observable. The method is based on using an FFTlog algorithm to decompose the momentum-space statistics onto a basis of power-law functions. For each power law, the integrals over Bessel functions have a simple analytical solution. This allows us to efficiently evaluate these integrals, independently of the value of the multipole $\ell$. We apply this general method to the galaxy, lensing and CMB temperature angular power spectrum and bispectrum.
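The core idea, trading one hard Bessel integral for many easy ones by expanding the momentum-space function in complex power laws via an FFT in log k, can be sketched in a few lines. The following toy decomposition is my illustration of that step, not the authors' code; the function f(k), the grid, and the bias exponent nu are arbitrary choices.

```python
import numpy as np

# FFTlog-style decomposition: sample f(k) on a log-spaced grid and use an FFT
# to write f(k) ~ sum_m c_m * k**(nu + 1j*eta_m), a sum of complex power laws.
N, kmin, kmax, nu = 256, 1e-3, 1e1, -0.5
k = np.logspace(np.log10(kmin), np.log10(kmax), N, endpoint=False)
f = 1.0 / (1.0 + k**2)                 # toy momentum-space statistic

L = np.log(kmax / kmin)
c = np.fft.fft(f * k**(-nu)) / N       # coefficients of the log-periodic series
eta = 2.0 * np.pi * np.fft.fftfreq(N, d=L / N)

# Reconstruct f at an arbitrary k inside the range from the power-law sum.
k0 = 0.37
f0 = np.sum(c * (k0 / kmin)**(1j * eta)) * k0**nu
print(f0.real, 1.0 / (1.0 + k0**2))    # the two values should agree closely
```

Once f is in this form, each term only needs the known analytic integral of a power law against spherical Bessel functions, which is what makes the evaluation independent of the multipole.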
2021-03-02 05:02:40
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.825816810131073, "perplexity": 959.6924138295531}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178363217.42/warc/CC-MAIN-20210302034236-20210302064236-00318.warc.gz"}
https://dmoj.ca/problem/dwite10c4p4
## DWITE '10 R4 #4 - Mountain Hiking

View as PDF

Points: 7
Time limit: 2.0s
Memory limit: 64M
Problem type

##### DWITE Online Computer Programming Contest, January 2011, Problem 4

Mountain hiking is a very adventurous, yet somewhat dangerous, pastime. On certain mountain ranges, the heights could vary sharply. An amateur hiker can move to an adjacent (left/right, up/down, but not diagonally) location only if the height difference with the current location is at most 1. Given a height map of a mountain range, determine the distance of the shortest viable path between the left and the right edges.

The input will contain 5 test cases. Each test case consists of a 10-by-10 map of digits 0 to 9, each digit representing the height of that location. A line of hyphens ---------- follows each test case for visual separation.

The output will contain 5 lines, the least number of steps to cross the mountain range in each case. If the hiker can't get across, output IMPOSSIBLE instead.

Notes: the hiker could start at any of the left-most positions. The steps counted are the transitions from one location to the next. Thus appearing in that very first location requires no steps.

#### Sample Input

9324892342
1334343293
3524523454
2634232043
0343259235
2454502352
4563589024
7354354256
9343221234
2653560343
----------

#### Sample Output

11
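A multi-source BFS solves this directly; here is a sketch in Python (my solution, not official contest code; the height-difference threshold of 1 was garbled in the extracted statement and is inferred from the sample, which this code reproduces by printing 11):

```python
from collections import deque

def shortest_crossing(grid, max_diff=1):
    """Multi-source BFS from every left-edge cell to any right-edge cell."""
    rows, cols = len(grid), len(grid[0])
    dist = [[-1] * cols for _ in range(rows)]
    q = deque()
    for r in range(rows):          # the hiker may start anywhere on the left edge
        dist[r][0] = 0
        q.append((r, 0))
    while q:
        r, c = q.popleft()
        if c == cols - 1:
            return dist[r][c]      # BFS: the first right-edge cell popped is optimal
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols and dist[nr][nc] == -1
                    and abs(grid[nr][nc] - grid[r][c]) <= max_diff):
                dist[nr][nc] = dist[r][c] + 1
                q.append((nr, nc))
    return None                    # caller prints IMPOSSIBLE in this case

sample = """9324892342 1334343293 3524523454 2634232043 0343259235
2454502352 4563589024 7354354256 9343221234 2653560343""".split()
grid = [[int(ch) for ch in row] for row in sample]
print(shortest_crossing(grid))    # 11, matching the sample output
```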
2019-12-11 11:38:49
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3327453136444092, "perplexity": 2601.253259586723}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540530857.12/warc/CC-MAIN-20191211103140-20191211131140-00084.warc.gz"}
https://www.gradesaver.com/textbooks/science/physics/CLONE-afaf42be-9820-4186-8d76-e738423175bc/chapter-19-exercises-and-problems-page-361/14
Essential University Physics: Volume 1 (4th Edition) Clone

(a) We know that $e=\frac{W}{Q_h}$. We plug in the known values to obtain: $e=\frac{350}{900}=0.39$

(b) We know that $Q_c=Q_h-W$. We plug in the known values to obtain: $Q_c=900-350=550\ \mathrm{J}$

(c) As $\frac{T_h}{T_c}=\frac{Q_h}{Q_c}$, we plug in the known values to obtain: $\frac{T_h}{273+10}=\frac{900}{550}$, so $T_h \approx 460\ \mathrm{K}$
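A one-line numeric check of the three parts (values taken from the solution above):

```python
W, Qh, Tc = 350.0, 900.0, 273.0 + 10.0   # work (J), heat in (J), cold temp (K)
e = W / Qh                # (a) efficiency
Qc = Qh - W               # (b) heat rejected, J
Th = Tc * Qh / Qc         # (c) hot-reservoir temperature, K
print(e, Qc, Th)          # 0.388... (~0.39), 550.0, 463.1 (~460 K to two figures)
```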
2019-10-18 10:51:55
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8321026563644409, "perplexity": 154.03864854981236}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986682037.37/warc/CC-MAIN-20191018104351-20191018131851-00083.warc.gz"}
http://openstudy.com/updates/4f5742a6e4b0c9fd2d4b2ca4
## anonymous, 4 years ago

How would one differentiate the following: (2x+3)/√(1-2x)

1. anonymous: You can use the quotient rule.
2. anonymous: Are you familiar with that?
3. anonymous: Yes, I am familiar with this.
4. anonymous: So u = 2x+3, u' = 2, v = √(1-2x), v' = -1/√(1-2x)
5. anonymous: The quotient rule says dy/dx = (u'v - uv')/v^2
6. anonymous: OK, and from there I begin to distribute, correct?
7. anonymous: You can do it with the quotient rule.
8. anonymous: That's it, follow (u'v - uv')/v^2
9. anonymous: [drawing] use this, this is the division rule
10. anonymous: where u and v are two functions
11. anonymous: I think that the square root in the function is throwing me off, but I think that it would disappear in the denominator, right? Just leaving 1-2x on the bottom?
12. anonymous: Exactly, you are right. The square root disappears. That makes the sum a lot easier.
13. .Sam.: [drawing]
14. anonymous: Thank you! I was working it out and ended with the same result.
15. anonymous: One final question... how would you find the eq. of a line tangent to the graph f(x) = √(2x+9) at x=0?
16. anonymous: Take the first derivative of f(x) and set x to zero; it gives the gradient of the tangent to the curve at x=0.
17. .Sam.: Differentiate √(2x+9), then substitute x=0 into the differentiated expression to get your gradient, then create the new equation using y-y1=m(x-x1)
18. anonymous: Then use y=mx+c to get the equation of the line.
19. .Sam.: To find y1, just substitute x=0 into the original equation, y=√(2x+9), then get your y1.
20. anonymous: OK, I am still a little shaky on differentiating a radical function; would it be 1/(2x+9)?
21. .Sam.: [drawing]
22. anonymous: When you differentiate the above one, it'll be 1/(2x+9)^(1/2)
23. anonymous: You have forgotten the 1/2.
24. .Sam.: The 1/2 is cancelled by the 2.
25. .Sam.: Chain rule.
26. anonymous: OK, so using the chain rule I write the radical as a power of 1/2, then move the 1/2 down and subtract 1 to get my new power... got it.
27. anonymous: Where did the "2" on the right come from, though?
28. anonymous: Not that 1/2, I meant the power 1/2. [drawing] This should be what you get after differentiating.
29. .Sam.: Example, [drawing]
30. anonymous: OK, so since it was in the denominator it was raised to the 2nd power, and that is how it came to cancel the 1/2, right?
31. anonymous: In the original differentiation.
32. .Sam.: The 2 in the numerator comes from differentiating, so it will cancel the 1/2.
33. .Sam.: $\huge \sqrt{x}=x ^{\frac{1}{2}}$
34. anonymous: OK, I understand that example in omitting the radical, but where did the 2 appear to make that cancel?
35. anonymous: That is what is throwing me for a loop haha.
36. .Sam.: The half of 2 is 1.
37. anonymous: Wait, look at this... this is how you solve it, and this is where it got cancelled. [drawing] Cool?
38. anonymous: Never mind, it came as a result of the y', from differentiating 2x+9.
39. anonymous: Or making 2x+9 just 2, right?
40. anonymous: I can't get you?
41. anonymous: lol, I can't get myself sometimes haha, but I think I understand that the 2 came as a result of taking the derivative of just 2x+9, leaving just the 2 remaining... I think :/
42. anonymous: Oh, that one... wait, this should clear up the problem. [drawing] Cool?
43. anonymous: Yeah... that is what I was trying to say, but I guess my wording is off. I see where the 2 came from.
44. anonymous: :)
45. anonymous: Haha thanks. So then from there I plug in x=0 to solve for that equation.
46. anonymous: And then use that solution in point-slope form.
47. anonymous: That is correct. :) And then use the gradient equation to find the gradient... and the eq. of the line.
48. anonymous: Hey shana, as you say, the 2 in the denominator cannot get cancelled...
49. anonymous: Which gives me 3 for y1 and 1/3 for the gradient.
50. anonymous: [drawing]
51. anonymous: [drawing]
52. anonymous: Correct?
53. anonymous: Correct.
54. anonymous: Good deal! Thank you for all of your help, and have an awesome day or night wherever you are!
55. anonymous: [drawing]
56. anonymous: Hehe, it'll be night :) Anytime. I love guiding others, not for money but for joy. :)
57. anonymous: Again, how did you agree about that 2 getting cancelled in the differentiation part?
58. anonymous: Well salini, when you differentiate, the power comes to the front, OK? But when you then use the chain rule on the inner 2x, you have to multiply by 2 again, so they cancel off.
59. anonymous: Oh yeah, the 2x part......
60. anonymous: We can't give you money here; instead we give away medals!
61. anonymous: I really don't do this for money.... your medals will be highly appreciated.. I just joined today and I'm already at level 10. :) This is a good place to give away my knowledge.
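Both results discussed in this thread can be verified symbolically. A quick sketch with sympy (my own code, not part of the original thread):

import sympy as sp

x = sp.symbols('x')

# the quotient-rule example: d/dx (2x+3)/sqrt(1-2x)
f = (2*x + 3) / sp.sqrt(1 - 2*x)
print(sp.simplify(sp.diff(f, x)))        # equivalent to (5 - 2x)/(1 - 2x)**(3/2)

# the tangent-line example: f(x) = sqrt(2x+9) at x = 0
g = sp.sqrt(2*x + 9)
m = sp.diff(g, x).subs(x, 0)             # gradient at x=0: 1/3
y1 = g.subs(x, 0)                        # y1 = f(0) = 3
print(sp.Eq(sp.Symbol('y'), m*x + y1))   # Eq(y, x/3 + 3)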
http://tex.stackexchange.com/questions/24445/hyperref-lualatex-and-unicode-bookmarks-issue-garbled-page-numbers-in-ar-for-l
# hyperref, lualatex and unicode bookmarks issue (garbled page numbers in AR for Linux)

I have text with some non-ascii characters in chapters, sections... which I'd like to see displayed correctly in bookmarks. I noticed that:

• If I run xelatex, everything is fine; there is no need to set \hypersetup{unicode=true} to get this working.
• If I run lualatex, I have these issues:
1. If unicode=false, then bookmarks containing non-ascii characters have strange unreadable symbols instead of those chars, but everything else is fine.
2. If unicode=true, then bookmarks are just fine, but page numbers in the resulting PDF file are garbled in Adobe Reader 9 on Linux (and Okular shows some strange symbols in the bookmark pane too).

Is it a driver issue? Any suggestion how to overcome this problem? Here's a MWE:

\documentclass{article}
\usepackage{hyperref}
%\hypersetup{unicode=true}
\begin{document}
\section{Test 1 čćžđš}
Some text.\clearpage
\section{Test 2 šđžćč}
Some text.
\end{document}

A notice: I tried to post this question on comp.text.tex earlier, but something went wrong and it didn't show up. Anyway, if the post shows up later, I'll post the link here.

- it is on c.t.t. – Herbert Jul 29 '11 at 19:42
- I can't find it :( – Meho R. Jul 29 '11 at 21:35
- the answer from Heiko is also there – Herbert Jul 29 '11 at 21:36
- Is there a link? And if you're referring to this post (groups.google.com/group/comp.text.tex/browse_thread/thread/…), no, there is no answer which solves this issue. – Meho R. Jul 29 '11 at 22:55
- @Meho: yes, it would be great if you would post that as answer and accept it. – Stefan Kottwitz Aug 20 '11 at 14:28

I received an email from Heiko suggesting that I try auto pdfencoding, which indeed worked. So, the solution is:

\hypersetup{pdfencoding=auto}

And the corrected MWE from above:

\documentclass{article}
\usepackage{hyperref}
\hypersetup{%
  pdfencoding=auto,
  pdfauthor={Author Test ČĆŽĐŠ},
  pdftitle={Title test, čćžđš}
}
\begin{document}
\section{Test 1 čćžđš}
Some text.\clearpage
\section{Test 2 šđžćč}
Some text.
\end{document}

Heiko's complete answer can be found on comp.text.tex.

Even with pdfencoding=auto, I had trouble with either the bookmarks or the metadata or the page numbers. But all three come out perfectly if I use the navigator package rather than hyperref.

- Yes, I noticed that metadata gets messed up sometimes. The solution might be to use \hypersetup locally, that is, e.g., \hypersetup{pdfencoding=utf8} before specifying the author name, title and other metadata, then \hypersetup{pdfencoding=auto} after that, to get bookmarks and numbers displayed correctly. – Meho R. Oct 28 '11 at 0:46
http://www.partone.litfl.com/frequency_distributions_and_measures_of_central_tendency.html
# Frequency Distributions and Measures of Central Tendency

Describe frequency distributions and measures of central tendency and dispersion.

## Frequency Distributions

Frequency distributions are a method of tabulating or graphically displaying a number of observations.

### The Normal Distribution

The normal distribution is a Gaussian distribution, where the majority of values cluster around the mean, while more extreme values become progressively less frequent. The normal distribution is common in medicine for two reasons:

• Much of the variation in biology follows a normal distribution
• When multiple random samples are taken from a population, the mean of these samples follows a normal distribution, even if the characteristic being measured is not normally distributed. This is known as the central limit theorem. It is useful because many statistical tests are only valid when the data follow a normal distribution.

The formula for the normal distribution is given by:

$$f(x)=\frac{1}{\sigma \sqrt{2\pi}} e^{-\frac{(x-\mu)^2}{2\sigma^2}}$$

From this, it can be seen that the two variables which determine the shape of the normal distribution are:

• μ (mu): The mean
• σ (sigma): The standard deviation

#### The Standard Normal Distribution

The standard normal distribution is a normal distribution with a mean of 0 and a standard deviation of 1. The equation for the standard normal distribution is much simpler, which is why it is used. Any normal distribution can be transformed to fit a standard normal distribution using a z transformation:

$$z=\frac{x-\mu}{\sigma}$$

The value of z then gives a standardised score, i.e. the number of standard deviations from the mean on a standardised curve. This can then be used to determine probability.

### Binomial Distribution

Used where observations belong to one of two mutually exclusive categories. If $$X \sim B(n, p)$$, then:

$$P(X=k)=\binom{n}{k} p^k (1-p)^{n-k}$$

If the number of observations is very large and the probability of an event is small, a Poisson distribution can be used to approximate a binomial distribution.

## Measures of Central Tendency

As noted above for the normal distribution, results tend to cluster around a central value. Quantification of the degree of clustering can be done using measures of central tendency, of which there are three:

• Mode: The most common value in the sample.
• Median: The middle value when the sample is ranked from lowest to highest. The median is the best measure of central tendency when the data are skewed.
• Arithmetic mean: The average, i.e.:

$$\bar{x}=\frac{1}{n}\sum_{i=1}^{n} x_i$$

The mean is common and reliable, though inaccurate if the distribution is skewed.

## Measures of Dispersion

Measures of variability describe the degree of dispersion around the central value.

### Basic Measures of Deviation

• Range: The lowest and highest values in the sample. Highly influenced by outliers.
• Percentiles: Rank observations into 100 equal parts, so that the median becomes the 50th percentile. A better measure of spread than the range.
• Interquartile range: The 25th to 75th centile. A box-and-whisker plot graphically demonstrates the median, 25th centile, 75th centile and (usually) the 10th and 90th centiles. Outliers are represented by dots; occasionally the range is plotted by the whiskers and no outliers are plotted.

### Variance and Standard Deviation

Variance is a better measure of variability than the above methods.
Variance:

• Evaluates how far each observation is from the mean, and penalises observations more the further they lie from the mean
• Sums the squares of each difference and divides by the number of observations, i.e.:

$$s^2=\frac{\sum_{i=1}^{n}(x_i-\bar{x})^2}{n-1}$$

• $$n-1$$ is used (instead of $$n$$) because the mean of the sample is known, and therefore the last observation calculated must take on a known value
• This is known as degrees of freedom: a mathematical restriction applied when one statistic is used in order to estimate another
• It is a confusing topic best illustrated with an example:
  • You have been given a sample of two observations (say, the ages of two individuals), and you know nothing about them
  • The degrees of freedom is two, since those observations can take on any value
  • Alternatively, imagine you have been given the same sample, but this time I tell you that the mean age of the sample is 20
  • The degrees of freedom is one, since if I tell you the value of one of the observations is 30, you know that the other must be 10. Therefore, only one of the observations is free to vary; as soon as its value is known, the value of the other observation is known as well.
• Different statistical tests may result in additional losses of degrees of freedom.

#### Standard Deviation

The standard deviation is the positive square root of the variance. In a sample with a normal distribution:

• 1 SD either side of the mean should include ~68% of results
• 2 SD either side of the mean should include ~95% of results
• 3 SD either side of the mean should include ~99.7% of results

### Standard Error and Confidence Intervals

The standard error of the mean (SEM) is:

• A measure of the precision of the estimate of the mean
• Calculated from the standard deviation and the sample size. As the sample size grows, the SEM decreases (as the estimate becomes more precise).
• Given by the formula:

$$SEM=\frac{SD}{\sqrt{n}}$$

• Used to calculate the confidence interval

#### Confidence Interval

The confidence interval:

• Gives a range in which the true population parameter is likely to lie. The width of the interval is related to the standard error and the degree of confidence (typically 95%).
• Is a function of the sample statistic (in this case the mean), rather than the actual observations
• Has several benefits over the p-value:
  • Indicates the magnitude of the difference in a meaningful way
  • Indicates the precision of the estimate: the smaller the confidence interval, the more precise the estimate
  • Allows statistical significance to be assessed: if the confidence interval for a ratio (such as a relative risk or odds ratio) crosses 1, the result is not statistically significant
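As a worked illustration of the definitions above, here is a minimal sketch tying variance, standard deviation, standard error and the 95% confidence interval together (the sample data are made up):

import math

sample = [12, 15, 11, 14, 13, 16, 12, 15]   # hypothetical observations
n = len(sample)
mean = sum(sample) / n

# variance with n-1 in the denominator (one degree of freedom used by the mean)
variance = sum((x - mean) ** 2 for x in sample) / (n - 1)
sd = math.sqrt(variance)                     # standard deviation
sem = sd / math.sqrt(n)                      # standard error of the mean

# 95% confidence interval for the mean, using the normal approximation (z = 1.96)
ci = (mean - 1.96 * sem, mean + 1.96 * sem)
print(mean, sd, sem, ci)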
http://codeforces.com/blog/entry/9326
### Berezin's blog

By Berezin, 5 years ago, translation

Hi everybody! When you saw my nickname you probably thought: "Finally! At least this round is not made by Sereja!" And you are right! I, Dmytro Berezin (Berezin), give you this round... together with my neighbour Sergii Nagin (Sereja).

The action will take place on Friday, October 25th, at 19:30.

Thanks to Gerald Agapov (Gerald) and Maria Belova (Delinur) for help in preparing and translating the problems respectively. Thanks to Yaroslav Tverdokhlib (KADR) for help in testing.

You have to help Dima equip his personal life :) The point values for this round are 500-1000-1500-2000-3000. I highly recommend you to try to solve all of them.

Thank you for your attention, and have a successful round!

» I hope that it will be a good contest from every point of view, like Sereja's contests.
» » I hope so )
» The law of conservation of Sereja: Sereja can be neither created nor destroyed, but he always goes from one contest to another!
» I hope you guys have managed to solve the problem with the servers from last time.
» » Testing was a breeze. Thanks a lot, guys.
» I hope that the English problem set is not the same level of English as this post!
» » I am really sorry for the level of my English. If you read my bad-English post attentively, you will find that the English problems are not translated by me.
» » » What?? You have advanced-level English at university and bad English on Codeforces))
» » problem set is level of English? do you knoving engrish?
» Sereja is a hard-working and good problem setter :)
» good luck all :)
» I hope it will be an interesting contest and everyone will enjoy it and learn something from it.
» I don't know if I'm the only one, but I'm having a tough time understanding the statements. This is the first time it has happened to me since I joined CF. :(
» Why are the problems in this round so hard to think about and tough to understand... Dying with not a single problem solved...
» a very awful Div-2-only contest
» Could someone explain the solution of Problem D?
» » DP problem: DP[i][j], 1 <= i <= n, 0 <= j <= 1. DP[i][0] = the best answer for the first i hares when hare i-1 is fed before hare i; DP[i][1] = the best answer for the first i hares when hare i is fed before hare i-1. The final answer is max(DP[n][0], DP[n][1]). Sorry for bad English.
» » I'm still not sure if this solution will be accepted, but here's a simple one: the state is the current hare and a flag for whether the last processed hare was fed before the current one (1 if it was fed before, 0 if not). If the last hare was fed before the current one, the value for this state is max(b[current]+state(current+1, 1), c[current]+state(current+1, 0)). Otherwise it's max(a[current]+state(current+1, 1), b[current]+state(current+1, 0)). The reason why this works (I hope) is that the only things you care about are whether the current hare was fed before hare current-1 and whether it was fed before hare current+1.
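A minimal sketch of the DP described in the two comments above (my own code, not a linked submission; a, b and c are the per-hare joy arrays from the statement, and the sample values are reconstructed from the "13 = 4+3+2+4" discussion further down the thread):

def max_joy(n, a, b, c):
    """dp[0]: best joy for hares 0..i given hare i is fed before hare i+1;
    dp[1]: best joy for hares 0..i given hare i is fed after hare i+1."""
    dp = [a[0], b[0]]  # hare 0 has no left neighbour: a if right is hungry, b if full
    for i in range(1, n):
        # coming from dp[0], hare i-1 was fed before hare i (left neighbour full);
        # coming from dp[1], it was fed after (left neighbour hungry)
        ndp0 = max(dp[0] + b[i],   # left full, right hungry -> one full neighbour
                   dp[1] + a[i])   # both neighbours hungry
        ndp1 = max(dp[0] + c[i],   # both neighbours full
                   dp[1] + b[i])   # left hungry, right full -> one full neighbour
        dp = [ndp0, ndp1]
    return dp[0]  # the last hare has no right neighbour, so treat it as "fed before" it

# first sample discussed below: the answer is 13 (e.g. 4+3+2+4)
print(max_joy(4, [1, 2, 3, 4], [4, 3, 2, 1], [0, 1, 1, 0]))  # 13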
» Problem C: "Tell all extracted numbers to Inna and then empty all containers." I guess lots of people who got WA #10 didn't notice "then empty all containers".
» » Also I found it a bit tricky that the sequence of operations doesn't necessarily end in 0, so you can do whatever you want with the last numbers.
» » » Yes, my very first submission failed on that test case (pretest 3). But there was another tricky case: when there are more than 3 elements with the same maximum value before a 0 (pretest 5), for example 5 5 5 5 0. This failed my second submission 4886056 :P My third, 4887916, passed all the pretests and, thankfully, the system tests! :)
» » I am one of the victims :(
» » Oh my god... I missed that... so sad......
» » I missed the exact same point!
» What a great problem C! One of the trickiest problems I've ever seen. Fully appreciated, thanks @Sereja! Although I spent too much time on it. After I solved it [as I think now, before the system test :D], this problem's idea made my day! Nice~!
» » It was Dima's problem)
» » » Oh, I see... Thanks a lot @Berezin!
» » » » Thank you :)
» Please explain problem C for me, I don't understand it.
» I'm quite astonished. After finishing C, I wrote a DP solution for D, but it runs in O(N^3). After getting TLE on pretest 12, I had just one minute left, so I limited the internal loop to 100 iterations, meaning the DP would not be computed correctly in some cases. But still, my solution passed the final tests. So lucky :D 4890679
» Here is my problem C, wrong answer on pretest 3: http://codeforces.com/contest/358/submission/4890344. Pretest 3 gives 0 as n, which is totally against what it says in the problem: "The first line contains integer n (1 ≤ n ≤ 10^5) — the number of Inna's commands."
» » The input of pretest 3 is clearly seen: 2 0 1. So, I advise you to check it again more carefully.
» » Yeah, I figured it out, nothing's wrong.
» Very quick system testing for 3500+ users. Problem A got severely affected.
» In problem B it was mentioned that the message size will not exceed 10^5, but test 24 was "100000 i . . i", for which the message size will be at least 3*10^5. It cost me a WA. It's cheating. :'( :'( Someone must look into this.
» » Not really, it would have to be at least 3*10^5 if the answer was yes, but it's no.
» » No, the problem only guaranteed that the total length of all words doesn't exceed 10^5, which is valid for that test; it doesn't also guarantee that the words plus the '<3' separators will not exceed 10^5.
» » Not really. It does not say that the text message will contain all the read words.
» How did so many users fail the system tests of problem A?? o.O Was there a tricky case? There must have been one, because it's impossible that so many users were incorrect with their ideas!
» Can someone explain how problem A is to be solved? I started off wrongly and got entangled, realizing mistakes after every wrong attempt.
» » The only way there is a self-intersection is when there are 4 points of the form i j i j, meaning that a semicircle goes from point 1 to point 3, and another goes from point 2 to point 4.
» » » a.k.a. you have two segments, given by coordinates x1a, x2a (left, right) and x1b, x2b (left, right), and the first should start strictly before the second and should end strictly before the second.
» » There are four cases; handle them and you get AC. Let A,B be the last two points entered, and C,D each pair entered before. These patterns are the only ones producing entanglement: ACBD, BCAD, CADB, CBDA. (Note that if you have XYZ entered before, you handle the pair (x,y), then (y,z), etc.)
» » » Actually the cases can be reduced if A = min(x1,x2) and B = max(x1,x2).
» » » » Yes, that's one way to handle them :) I also forgot to add that it has to be enforced that A
» » It is easy to check when they do not intersect: you have [a1,b1] and [a2,b2], and they do not intersect if they are disjoint or nested, i.e. (b1 <= a2) || (b2 <= a1) || (a1 >= a2 && b1 <= b2) || (a2 >= a1 && b2 <= b1). Example: E1: [4,7] and [1,8]; E2: [1,8] and [4,7]. So you check every connected pair of points.
» » Thank you, everyone! :) I got it! Consider x1, x2, x3, ..., xn. Make pairs (A,B) for all x(i), x(i+1) such that A = min[x(i), x(i+1)] and B = max[x(i), x(i+1)]. Now consider 2 segments (P,Q) and (R,S). R can lie either within PQ or outside it. If R lies outside (R---P--------------Q), S can lie either outside or inside: if S lies inside PQ, there is an intersection, otherwise there is not. R---P------S-------Q (intersection); R-S-P--------------Q or R---P--------------Q---S (no intersection). If R lies inside PQ (P-------------R-----Q) and S lies outside PQ, there is an intersection, otherwise there is not. P-------------R-----Q---------S (intersection); P-------------R--S--Q (no intersection). So it is just 2 cases we have to check, if we make a min/max pair.
» » » Actually you can get past the problem with brute force (O(n^2)) by treating only one case.
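A small sketch of the check described above (my own code, not a linked submission): two semicircle chords self-intersect exactly when their diameter intervals strictly interleave.

def chords_intersect(p, q):
    """p, q: endpoint pairs of two semicircle chords on the x-axis."""
    a1, b1 = min(p), max(p)
    a2, b2 = min(q), max(q)
    # the semicircles cross iff exactly one endpoint of one chord
    # lies strictly inside the other chord (strict interleaving)
    return a1 < a2 < b1 < b2 or a2 < a1 < b2 < b1

def has_self_intersection(xs):
    # consecutive points form the chords; check every pair (O(n^2) brute force)
    chords = list(zip(xs, xs[1:]))
    return any(chords_intersect(c1, c2)
               for i, c1 in enumerate(chords)
               for c2 in chords[i + 1:])

print("yes" if has_self_intersection([0, 10, 5, 15]) else "no")  # yes: 0-10 and 5-15 interleave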
» Awful translation; it made me unable to solve problem D. If hares 1 and n have only one neighbour, they should be unable to be in the state where both their neighbours are full or empty, but in the first test case it is clearly visible that the answer is 4+3+2+4 or 4+2+3+4, both cases leading to the last hare getting its happiness from BOTH its neighbours being empty (what an evil hare!!!). Can someone explain that? By the way, sory for mi bad englando!
» » What's the problem, then? The sample clearly explains that question. That's what the samples are for: clarifying the problem statement (and testing the program, of course). And it kind of makes sense if you imagine the happiness values given per number of full adjacent hares, which is at most 1 for the border hares.
» » » "Inna knows how much joy a hare radiates if it eats when either both of his adjacent hares are hungry ... or both of the adjacent hares are full." Last time I checked, "both" was considered as being two.
» » » » Yeah, the translation isn't precise, I give it that. But I'm saying it shouldn't pose a problem in the contest, as long as you look at the samples and try to understand them carefully.
» » » » » Being less than precise, it's contradictory. I understand that nobody is perfect and I don't mind grammar and other translation mistakes, but the thing that annoyed me was that to the question "In the first test case, to get to 13 happiness the last rabbit should be fed before both his adjacent hares ('Number ai in the first line shows the joy that hare number i gets if his adjacent hares are both hungry'), but this would be impossible because the last hare does not have two adjacent hares. What should be done?" I received: "Read the problem statement."
» » » » » » I'd have replied something along the lines of "That's allowed" instead, since the unclarity really comes from the problem statement being what it was. But the way I see it, you're nitpicking about formal correctness too much during a contest, and that might have cost you points. You really could've just decided based on the samples instead of asking. (I try to understand the samples instead of the statement quite often :D)
» » » » » » » I understand that I might be too strict about these things, but I learnt from life (pointing at programming contests) that if you don't make sure of every last detail, you will most certainly regret it later. And along the lines of understanding the problem from the sample cases, why not go as far as giving the contestants only the sample input and output: "Let them figure out the problem statement!"?
» » » » » » » » Trust me, there are much worse statements. Here the basic idea of the problem was clear; yours was just a minor question. For example, I didn't even think about it when I read the problem (but I would have, if an answer "Impossible" were mentioned in the statement). Apart from being formally incorrect, it's not nearly as bad as you make it seem. I guess it just takes experience with reading problem statements.
» » » » » » » » » "Just because other kids do drugs at his age, we should go easy on our own for skipping school." (or something along those lines) Besides, not everyone has the same mentality: while you and probably the majority of the "red community" could make out a poorly formulated problem from the available test cases, I and maybe others below the "purple grade" will have difficulties understanding it. To be fair to everyone, why not make it correct?
» » » » » » » » » Lol, it says my comment can't be nested anymore. We need to go deeper :D (Maybe it's best to consider this the end of the discussion, since we don't really have more to add.) On topic: yeah, I'm that kind of guy. I simply accept small imperfections (and bash large ones that much more :D). Of course, it's all a matter of opinion; I just want to say what I think about it. But it may be precisely because of that mentality that I got this far. To be precise, it's "if you aren't doing well enough, just try to improve as much as possible". Things like this I take as a challenge. I still have a way to go, though. Red is just the "can solve easy/medium problems in time" level.
» Very nice round (congrats to the writers), but I think you should put in more pretests when you make statements that request "yes"/"no" answers.
» Does E involve Eulerian/Hamiltonian paths?
» » Yes, it involves Euler tours on an undirected graph. First, vertices: for each possible K you can extract a graph of which points were directly visited. Assuming the upper-leftmost point is at (sx,sy), then exactly the points of the form (sx+i*k, sy+j*k) are intermediate positions for the victim. The remaining '1's indicate moves from one intermediate point to another, so if there is an unbroken line of points between (x,y) and (x+k,y) or (x,y+k), then an undirected edge should be added between them. A valid sequence of kicks corresponds to an Eulerian path through this graph, so you just need to apply the standard algorithm for verifying that one exists.
» » » We'll have to consider all unit squares as starting points, right? What exactly is a semi-Eulerian path?
» » » » I was taught that a fully-Eulerian path starts and ends at the same vertex, but a semi-Eulerian path begins and ends at distinct vertices. (upd.: the above definition is wrong.) In the former case, any vertex will do as a start point because the tour is closed and cyclic. In the latter case, the two endpoints will be the only ones with edges.length() % 2 == 1, and either will do as a start point. In practice you don't need to explicitly identify start and end points: you can just check that the graph is connected and has either 0 or 2 vertices with an odd number of edges.
» » » » » Ok, thanks!
» » » » » Interesting, I was taught that an Eulerian circuit starts and ends at the same vertex, while an Eulerian path ends in distinct vertices.
» » » » » » Yes, after a quick check it turns out that "fully-Eulerian" and "semi-Eulerian" are supposed to describe graphs, not paths. Maybe my teacher was a little confused. Alternatively, maybe it's time to invest in a hearing aid and/or start paying more attention in lessons. Revised the wording; thanks!
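The feasibility test mentioned above (connected, with 0 or 2 odd-degree vertices) is easy to sketch. This is my own illustration of that standard check, not the full solution to E:

def euler_path_exists(adj):
    """adj: dict vertex -> list of neighbours (undirected multigraph).

    An Euler path exists iff all edges lie in one connected component and
    the number of odd-degree vertices is 0 (closed tour) or 2 (open path).
    """
    odd = sum(1 for v in adj if len(adj[v]) % 2 == 1)
    if odd not in (0, 2):
        return False
    # connectivity check over vertices that actually have edges
    start = next((v for v in adj if adj[v]), None)
    if start is None:
        return True  # no edges at all
    seen, stack = {start}, [start]
    while stack:
        v = stack.pop()
        for w in adj[v]:
            if w not in seen:
                seen.add(w)
                stack.append(w)
    return all(v in seen for v in adj if adj[v])

# tiny usage example: a path graph a-b-c has exactly two odd-degree vertices
print(euler_path_exists({'a': ['b'], 'b': ['a', 'c'], 'c': ['b']}))  # True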
» Sorry, but for B, why is this case a "yes"? 3 / i love you / <3i<3love<23you<3ww. I thought he could only insert "digit", "<" and ">".
» » "Then Dima inserts a random number of small English characters, digits, signs 'more' and 'less' into any places of the message."
» » » Oh, I missed that, so sad :( Small characters mean "lower case", right?
» » » » yep
» » » » » By the way, the accepted solutions output "yes" for: 3 / i love you / <3i<3dont<3love<3you<3
» » » » » » As "dont" can be counted as "a random number of small English characters, digits, signs 'more' and 'less' inserted into any places of the message", it is within the bounds of the statement.
» » » » » » » The contest is over, so now we can see the truth: Dima will never send such an sms, so the answer is "no" :) P.S. But the comment above is correct when speaking about contest rules.
» » » » » » » As a matter of fact it's not an encoding/decoding system; it would have been better not to use the terms "encoding" and "decoding", since the translation is one-way, it should be noted. Finally, I should thank the writers: a little mistake in a great piece of work.
» » » » » » haha :D
» » It cost me 3 WAs, but I finally got AC.
» Tired of being a victim of overkill :( The solution of B was doable in 10 lines; instead I chose the long way..
» Could anyone help me with why my submission 4899840 can't pass Test 12?
» » For: 3 / i love you / <<<<3333iloveyou your code outputs yes while it should output no. The problem with your code is that it does not check whether the characters are in the right order, i.e. (<3word1<3word2...); it just checks that they are there. It's similar to this: if your program were to check whether two words are the same, the words "love" and "elov" would be the same.
» » » Thank you so much for answering. With your help I found where the problem in my code is. I changed pos=sts.find_first_of(s[i]); in my code to pos=sts.find_first_of(s[i],pos); and it was accepted successfully. :)
» Loved the contest. It just shows where a contestant's reading and interpretation skills stand.
» I wrote the code for Dima and the Text Messages in Codeforces Round 208 Division 2 in Java. Every time I submit the code it gets TLE, and I don't know why, even though I am using BufferedReader and PrintWriter for reading and printing the values. My solution code is in 4901148.
» » In Java, a + b is slow for strings (it copies both a and b); you have to use StringBuilder.append() instead.
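The "doable in 10 lines" remark above is accurate: B reduces to checking whether Dima's decorated pattern occurs as a subsequence of the received message. A minimal sketch (my own code with a made-up inline test, not an official solution):

def decorated(words):
    """Build the pattern Dima sends: <3word1<3word2<3...<3wordn<3."""
    return "<3" + "<3".join(words) + "<3"

def is_subsequence(pattern, message):
    """True if pattern appears in message in order (extra chars allowed)."""
    it = iter(message)
    return all(ch in it for ch in pattern)   # 'ch in it' advances the iterator

words = ["i", "love", "you"]
print("yes" if is_subsequence(decorated(words), "<3i<3love<23you<3ww") else "no")  # yes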
http://deeplearning.net/tutorial/logreg.html?highlight=logistic_sgd
# Classifying MNIST digits using Logistic Regression

Note: This section assumes familiarity with the following Theano concepts: shared variables, basic arithmetic ops, T.grad, floatX. If you intend to run the code on GPU, also read GPU.

In this section, we show how Theano can be used to implement the most basic classifier: logistic regression. We start off with a quick primer of the model, which serves both as a refresher and to anchor the notation, showing how mathematical expressions are mapped onto Theano graphs. In the deepest of machine learning traditions, this tutorial will tackle the exciting problem of MNIST digit classification.

## The Model

Logistic regression is a probabilistic, linear classifier. It is parametrized by a weight matrix $$W$$ and a bias vector $$b$$. Classification is done by projecting data points onto a set of hyperplanes, the distance to which reflects a class membership probability. Mathematically, this can be written as:

$$P(Y=i|x, W, b) = softmax_i(W x + b) = \frac{e^{W_i x + b_i}}{\sum_j e^{W_j x + b_j}}$$

The output of the model, or prediction, is then made by taking the argmax of the vector whose i-th element is $$P(Y=i|x)$$:

$$y_{pred} = argmax_i P(Y=i|x, W, b)$$

The code to do this in Theano is the following:

# generate symbolic variables for input (x and y represent a
# minibatch)
x = T.fmatrix('x')
y = T.lvector('y')

# allocate shared variables for the model params
b = theano.shared(numpy.zeros((10,)), name='b')
W = theano.shared(numpy.zeros((784, 10)), name='W')

# symbolic expression for computing the matrix of class-membership probabilities
# Where:
# W is a matrix where column-k represents the separation hyperplane for class-k
# x is a matrix where row-j represents input training sample-j
# b is a vector where element-k represents the free parameter of hyperplane-k
p_y_given_x = T.nnet.softmax(T.dot(x, W) + b)

# compiled Theano function that returns the vector of class-membership
# probabilities
get_p_y_given_x = theano.function(inputs=[x], outputs=p_y_given_x)

# print the probability of some example represented by x_value
# x_value is not a symbolic variable but a numpy array describing the
# datapoint
print 'Probability that x is of class %i is %f' % (i, get_p_y_given_x(x_value)[i])

# symbolic description of how to compute prediction as class whose probability
# is maximal
y_pred = T.argmax(p_y_given_x, axis=1)

# compiled theano function that returns this value
classify = theano.function(inputs=[x], outputs=y_pred)

We first start by allocating symbolic variables for the inputs $$x, y$$. Since the parameters of the model must maintain a persistent state throughout training, we allocate shared variables for $$W, b$$. This declares them both as being symbolic Theano variables, but also initializes their contents. The dot and softmax operators are then used to compute the vector $$P(Y|x, W, b)$$. The resulting variable p_y_given_x is a symbolic variable of vector-type.

Up to this point, we have only defined the graph of computations which Theano should perform. To get the actual numerical value of $$P(Y|x, W, b)$$, we must create a function get_p_y_given_x, which takes as input x and returns p_y_given_x. We can then index its return value with the index $$i$$ to get the membership probability of the i-th class.

Now let's finish building the Theano graph. To get the actual model prediction, we can use the T.argmax operator, which will return the index at which p_y_given_x is maximal (i.e. the class with maximum probability). Again, to calculate the actual prediction for a given input, we construct a function classify.
This function takes as argument a batch of inputs x (as a matrix), and outputs a vector containing the predicted class for each example (row) in x. Now of course, the model we have defined so far does not do anything useful yet, since its parameters are still in their initial state. The following section will thus cover how to learn the optimal parameters.

Note: For a complete list of Theano ops, see: list of ops

## Defining a Loss Function

Learning optimal model parameters involves minimizing a loss function. In the case of multi-class logistic regression, it is very common to use the negative log-likelihood as the loss. This is equivalent to maximizing the likelihood of the data set $$\mathcal{D}$$ under the model parameterized by $$\theta$$. Let us first start by defining the likelihood $$\mathcal{L}$$ and loss $$\ell$$:

$$\mathcal{L}(\theta=\{W,b\}, \mathcal{D}) = \sum_{i=0}^{|\mathcal{D}|} \log(P(Y=y^{(i)}|x^{(i)}, W, b))$$

$$\ell(\theta=\{W,b\}, \mathcal{D}) = -\mathcal{L}(\theta=\{W,b\}, \mathcal{D})$$

While entire books are dedicated to the topic of minimization, gradient descent is by far the simplest method for minimizing arbitrary non-linear functions. This tutorial will use the method of stochastic gradient descent with mini-batches (MSGD). See Stochastic Gradient Descent for more details.

The following Theano code defines the (symbolic) loss for a given minibatch:

loss = -T.mean(T.log(p_y_given_x)[T.arange(y.shape[0]), y])
# note on syntax: T.arange(y.shape[0]) is a vector of integers [0,1,2,...,len(y)].
# Indexing a matrix M by the two vectors [0,1,...,K], [a,b,...,k] returns the
# elements M[0,a], M[1,b], ..., M[K,k] as a vector. Here, we use this
# syntax to retrieve the log-probability of the correct labels, y.

Note: Even though the loss is formally defined as the sum, over the data set, of individual error terms, in practice we use the mean (T.mean) in the code. This allows for the learning rate choice to be less dependent on the minibatch size.

## Creating a LogisticRegression class

We now have all the tools we need to define a LogisticRegression class, which encapsulates the basic behaviour of logistic regression. The code is very similar to what we have covered so far, and should be self-explanatory.

class LogisticRegression(object):

    def __init__(self, input, n_in, n_out):
        """ Initialize the parameters of the logistic regression

        :type input: theano.tensor.TensorType
        :param input: symbolic variable that describes the input of the
                      architecture (e.g., one minibatch of input images)

        :type n_in: int
        :param n_in: number of input units, the dimension of the space in
                     which the datapoint lies

        :type n_out: int
        :param n_out: number of output units, the dimension of the space in
                      which the target lies
        """

        # initialize with 0 the weights W as a matrix of shape (n_in, n_out)
        self.W = theano.shared(value=numpy.zeros((n_in, n_out),
                                                 dtype=theano.config.floatX),
                               name='W')
        # initialize the biases b as a vector of n_out 0s
        self.b = theano.shared(value=numpy.zeros((n_out,),
                                                 dtype=theano.config.floatX),
                               name='b')

        # compute vector of class-membership probabilities in symbolic form
        self.p_y_given_x = T.nnet.softmax(T.dot(input, self.W) + self.b)

        # compute prediction as class whose probability is maximal in
        # symbolic form
        self.y_pred = T.argmax(self.p_y_given_x, axis=1)

    def negative_log_likelihood(self, y):
        """Return the mean of the negative log-likelihood of the prediction
        of this model under a given target distribution.

        .. math::

            \frac{1}{|\mathcal{D}|} \mathcal{L} (\theta=\{W,b\}, \mathcal{D}) =
            \frac{1}{|\mathcal{D}|} \sum_{i=0}^{|\mathcal{D}|}
                \log(P(Y=y^{(i)}|x^{(i)}, W,b)) \\
            \ell (\theta=\{W,b\}, \mathcal{D})

        :param y: corresponds to a vector that gives for each example the
                  correct label

        Note: we use the mean instead of the sum so that the learning rate
        is less dependent on the batch size
        """
        return -T.mean(T.log(self.p_y_given_x)[T.arange(y.shape[0]), y])

We instantiate this class as follows:

# allocate symbolic variables for the data
x = T.fmatrix()  # the data is presented as rasterized images (each being a 1-D row vector in x)
y = T.lvector()  # the labels are presented as 1D vector of [long int] labels

# construct the logistic regression class
classifier = LogisticRegression(
                input=x.reshape((batch_size, 28 * 28)),
                n_in=28 * 28,
                n_out=10)

Note that the inputs x and y are defined outside the scope of the LogisticRegression object. Since the class requires the input x to build its graph, however, it is passed as a parameter of the __init__ function. This is useful when you want to concatenate such classes to form a deep network (in which case the input is not a new variable but the output of the layer below). While in this example we will not do that, the tutorials are designed such that the code is as similar as possible among them, making it easy to go from one tutorial to the other.

The last step involves defining a (symbolic) cost variable to minimize, using the instance method classifier.negative_log_likelihood.

cost = classifier.negative_log_likelihood(y)

Note how x is an implicit symbolic input to the symbolic definition of cost here, because classifier.__init__ has defined its symbolic variables in terms of x.

## Learning the Model

To implement MSGD in most programming languages (C/C++, Matlab, Python), one would start by manually deriving the expressions for the gradient of the loss with respect to the parameters: in this case $$\partial \ell / \partial W$$ and $$\partial \ell / \partial b$$. This can get pretty tricky for complex models, as expressions for the gradients can get fairly complex, especially when taking into account problems of numerical stability.

With Theano, this work is greatly simplified as it performs automatic differentiation and applies certain math transforms to improve numerical stability.

To get the gradients $$\partial \ell / \partial W$$ and $$\partial \ell / \partial b$$ in Theano, simply do the following:

# compute the gradient of cost with respect to theta = (W,b)
g_W = T.grad(cost=cost, wrt=classifier.W)
g_b = T.grad(cost=cost, wrt=classifier.b)

g_W and g_b are again symbolic variables, which can be used as part of a computation graph. Performing one step of gradient descent can then be done as follows:

# specify how to update the parameters of the model as a list of
# (variable, update expression) pairs
updates = [(classifier.W, classifier.W - learning_rate * g_W),
           (classifier.b, classifier.b - learning_rate * g_b)]

# compiling a Theano function train_model that returns the cost, but at
# the same time updates the parameters of the model based on the rules
# defined in updates
train_model = theano.function(inputs=[index],
        outputs=cost,
        updates=updates,
        givens={
            x: train_set_x[index * batch_size: (index + 1) * batch_size],
            y: train_set_y[index * batch_size: (index + 1) * batch_size]})

The updates list contains, for each parameter, the stochastic gradient update operation. The givens dictionary indicates what to replace certain variables of the graph with.
The function train_model is then defined such that:

• the input is the mini-batch index index that, together with the batch size (which is not an input, since it is fixed), defines $$x$$ with corresponding labels $$y$$
• the return value is the cost/loss associated with the x, y defined by the index
• on every function call, it will first replace x and y with the corresponding slices from the training set as defined by the index, and afterwards it will evaluate the cost associated with that minibatch and apply the operations defined by the updates list.

Each time the train_model(index) function is called, it will thus compute and return the appropriate cost, while also performing a step of MSGD. The entire learning algorithm thus consists in looping over all examples in the dataset, and repeatedly calling the train_model function.

## Testing the model

As explained in Learning a Classifier, when testing the model we are interested in the number of misclassified examples (and not only in the likelihood). The LogisticRegression class therefore has an extra instance method, which builds the symbolic graph for retrieving the number of misclassified examples in each minibatch. The code is as follows:

class LogisticRegression(object):

    ...

    def errors(self, y):
        """Return a float representing the number of errors in the minibatch
        over the total number of examples of the minibatch; zero-one loss
        over the size of the minibatch
        """
        return T.mean(T.neq(self.y_pred, y))

We then create a function test_model and a function validate_model, which we can call to retrieve this value. As you will see shortly, validate_model is key to our early-stopping implementation (see Early-Stopping). Both of these functions take a batch offset as input and compute the number of misclassified examples for that mini-batch. The only difference between them is that one draws its batches from the testing set, while the other draws from the validation set.

# compiling a Theano function that computes the mistakes that are made by
# the model on a minibatch
test_model = theano.function(inputs=[index],
        outputs=classifier.errors(y),
        givens={
            x: test_set_x[index * batch_size: (index + 1) * batch_size],
            y: test_set_y[index * batch_size: (index + 1) * batch_size]})

validate_model = theano.function(inputs=[index],
        outputs=classifier.errors(y),
        givens={
            x: valid_set_x[index * batch_size: (index + 1) * batch_size],
            y: valid_set_y[index * batch_size: (index + 1) * batch_size]})

## Putting it All Together

The finished product is as follows.

"""
This tutorial introduces logistic regression using Theano and stochastic
gradient descent.

Logistic regression is a probabilistic, linear classifier. It is parametrized
by a weight matrix :math:`W` and a bias vector :math:`b`. Classification is
done by projecting data points onto a set of hyperplanes, the distance to
which is used to determine a class membership probability.

Mathematically, this can be written as:

.. math::
   P(Y=i|x, W,b) &= softmax_i(W x + b) \\
                 &= \frac {e^{W_i x + b_i}} {\sum_j e^{W_j x + b_j}}

The output of the model or prediction is then done by taking the argmax of
the vector whose i'th element is P(Y=i|x).

.. math::
   y_{pred} = argmax_i P(Y=i|x,W,b)

This tutorial presents a stochastic gradient descent optimization method
suitable for large datasets, and a conjugate gradient optimization method
that is suitable for smaller datasets.

References:

    - textbooks: "Pattern Recognition and Machine Learning" -
                 Christopher M.
                 Bishop, section 4.3.2

"""
__docformat__ = 'restructedtext en'

import cPickle
import gzip
import os
import sys
import time

import numpy

import theano
import theano.tensor as T


class LogisticRegression(object):
    """Multi-class Logistic Regression Class

    The logistic regression is fully described by a weight matrix :math:`W`
    and bias vector :math:`b`. Classification is done by projecting data
    points onto a set of hyperplanes, the distance to which is used to
    determine a class membership probability.
    """

    def __init__(self, input, n_in, n_out):
        """ Initialize the parameters of the logistic regression

        :type input: theano.tensor.TensorType
        :param input: symbolic variable that describes the input of the
                      architecture (one minibatch)

        :type n_in: int
        :param n_in: number of input units, the dimension of the space in
                     which the datapoints lie

        :type n_out: int
        :param n_out: number of output units, the dimension of the space in
                      which the labels lie
        """

        # initialize with 0 the weights W as a matrix of shape (n_in, n_out)
        self.W = theano.shared(value=numpy.zeros((n_in, n_out),
                                                 dtype=theano.config.floatX),
                               name='W', borrow=True)
        # initialize the biases b as a vector of n_out 0s
        self.b = theano.shared(value=numpy.zeros((n_out,),
                                                 dtype=theano.config.floatX),
                               name='b', borrow=True)

        # compute vector of class-membership probabilities in symbolic form
        self.p_y_given_x = T.nnet.softmax(T.dot(input, self.W) + self.b)

        # compute prediction as class whose probability is maximal in
        # symbolic form
        self.y_pred = T.argmax(self.p_y_given_x, axis=1)

        # parameters of the model
        self.params = [self.W, self.b]

    def negative_log_likelihood(self, y):
        """Return the mean of the negative log-likelihood of the prediction
        of this model under a given target distribution.

        .. math::

            \frac{1}{|\mathcal{D}|} \mathcal{L} (\theta=\{W,b\}, \mathcal{D}) =
            \frac{1}{|\mathcal{D}|} \sum_{i=0}^{|\mathcal{D}|}
                \log(P(Y=y^{(i)}|x^{(i)}, W,b)) \\
            \ell (\theta=\{W,b\}, \mathcal{D})

        :type y: theano.tensor.TensorType
        :param y: corresponds to a vector that gives for each example the
                  correct label

        Note: we use the mean instead of the sum so that
              the learning rate is less dependent on the batch size
        """
        # y.shape[0] is (symbolically) the number of rows in y, i.e.,
        # number of examples (call it n) in the minibatch
        # T.arange(y.shape[0]) is a symbolic vector which will contain
        # [0,1,2,... n-1]. T.log(self.p_y_given_x) is a matrix of
        # Log-Probabilities (call it LP) with one row per example and
        # one column per class. LP[T.arange(y.shape[0]),y] is a vector
        # v containing [LP[0,y[0]], LP[1,y[1]], LP[2,y[2]], ...,
        # LP[n-1,y[n-1]]] and T.mean(LP[T.arange(y.shape[0]),y]) is
        # the mean (across minibatch examples) of the elements in v,
        # i.e., the mean log-likelihood across the minibatch.
        return -T.mean(T.log(self.p_y_given_x)[T.arange(y.shape[0]), y])

    def errors(self, y):
        """Return a float representing the number of errors in the minibatch
        over the total number of examples of the minibatch; zero-one loss
        over the size of the minibatch

        :type y: theano.tensor.TensorType
        :param y: corresponds to a vector that gives for each example the
                  correct label
        """

        # check if y has same dimension as y_pred
        if y.ndim != self.y_pred.ndim:
            raise TypeError('y should have the same shape as self.y_pred',
                ('y', y.type, 'y_pred', self.y_pred.type))
        # check if y is of the correct datatype
        if y.dtype.startswith('int'):
            # the T.neq operator returns a vector of 0s and 1s, where 1
            # represents a mistake in prediction
            return T.mean(T.neq(self.y_pred, y))
        else:
            raise NotImplementedError()


def load_data(dataset):
    ''' Loads the dataset

    :type dataset: string
    :param dataset: the path to the dataset (here MNIST)
    '''

    #############
    # LOAD DATA #
    #############

    data_dir, data_file = os.path.split(dataset)
    if data_dir == "" and not os.path.isfile(dataset):
        # Check if dataset is in the data directory.
        new_path = os.path.join(os.path.split(__file__)[0], "..", "data", dataset)
        if os.path.isfile(new_path) or data_file == 'mnist.pkl.gz':
            dataset = new_path

    if (not os.path.isfile(dataset)) and data_file == 'mnist.pkl.gz':
        import urllib
        origin = 'http://www.iro.umontreal.ca/~lisa/deep/data/mnist/mnist.pkl.gz'
        urllib.urlretrieve(origin, dataset)

    f = gzip.open(dataset, 'rb')
    train_set, valid_set, test_set = cPickle.load(f)
    f.close()
    # train_set, valid_set, test_set format: tuple(input, target)
    # input is a numpy.ndarray of 2 dimensions (a matrix) in which each
    # row corresponds to an example. target is a numpy.ndarray of
    # 1 dimension (a vector) that has the same length as the number of
    # rows in the input. It gives the target for the example with the
    # same index in the input.

    def shared_dataset(data_xy, borrow=True):
        """ Function that loads the dataset into shared variables

        The reason we store our dataset in shared variables is to allow
        Theano to copy it into the GPU memory (when code is run on GPU).
        Since copying data into the GPU is slow, copying a minibatch every
        time it is needed (the default behaviour if the data is not in a
        shared variable) would lead to a large decrease in performance.
        """
        data_x, data_y = data_xy
        shared_x = theano.shared(numpy.asarray(data_x,
                                               dtype=theano.config.floatX),
                                 borrow=borrow)
        shared_y = theano.shared(numpy.asarray(data_y,
                                               dtype=theano.config.floatX),
                                 borrow=borrow)
        # When storing data on the GPU it has to be stored as floats,
        # therefore we will store the labels as floatX as well
        # (shared_y does exactly that). But during our computations
        # we need them as ints (we use labels as indices, and if they are
        # floats it doesn't make sense), therefore instead of returning
        # shared_y we will have to cast it to int. This little hack
        # lets us get around this issue
        return shared_x, T.cast(shared_y, 'int32')

    test_set_x, test_set_y = shared_dataset(test_set)
    valid_set_x, valid_set_y = shared_dataset(valid_set)
    train_set_x, train_set_y = shared_dataset(train_set)

    rval = [(train_set_x, train_set_y), (valid_set_x, valid_set_y),
            (test_set_x, test_set_y)]
    return rval


def sgd_optimization_mnist(learning_rate=0.13, n_epochs=1000,
                           dataset='mnist.pkl.gz', batch_size=600):
    """
    Demonstrate stochastic gradient descent optimization of a log-linear
    model

    This is demonstrated on MNIST.
    :type learning_rate: float
    :param learning_rate: learning rate used (factor for the stochastic
                          gradient)

    :type n_epochs: int
    :param n_epochs: maximal number of epochs to run the optimizer

    :type dataset: string
    :param dataset: the path of the MNIST dataset file from
                    http://www.iro.umontreal.ca/~lisa/deep/data/mnist/mnist.pkl.gz
    """
    datasets = load_data(dataset)

    train_set_x, train_set_y = datasets[0]
    valid_set_x, valid_set_y = datasets[1]
    test_set_x, test_set_y = datasets[2]

    # compute number of minibatches for training, validation and testing
    n_train_batches = train_set_x.get_value(borrow=True).shape[0] / batch_size
    n_valid_batches = valid_set_x.get_value(borrow=True).shape[0] / batch_size
    n_test_batches = test_set_x.get_value(borrow=True).shape[0] / batch_size

    ######################
    # BUILD ACTUAL MODEL #
    ######################
    print '... building the model'

    # allocate symbolic variables for the data
    index = T.lscalar()  # index to a [mini]batch
    x = T.matrix('x')    # the data is presented as rasterized images
    y = T.ivector('y')   # the labels are presented as 1D vector of
                         # [int] labels

    # construct the logistic regression class
    # Each MNIST image has size 28*28
    classifier = LogisticRegression(input=x, n_in=28 * 28, n_out=10)

    # the cost we minimize during training is the negative log likelihood of
    # the model in symbolic format
    cost = classifier.negative_log_likelihood(y)

    # compiling a Theano function that computes the mistakes that are made by
    # the model on a minibatch
    test_model = theano.function(inputs=[index],
            outputs=classifier.errors(y),
            givens={
                x: test_set_x[index * batch_size: (index + 1) * batch_size],
                y: test_set_y[index * batch_size: (index + 1) * batch_size]})

    validate_model = theano.function(inputs=[index],
            outputs=classifier.errors(y),
            givens={
                x: valid_set_x[index * batch_size:(index + 1) * batch_size],
                y: valid_set_y[index * batch_size:(index + 1) * batch_size]})

    # compute the gradient of cost with respect to theta = (W,b)
    g_W = T.grad(cost=cost, wrt=classifier.W)
    g_b = T.grad(cost=cost, wrt=classifier.b)

    # specify how to update the parameters of the model as a list of
    # (variable, update expression) pairs.
    updates = [(classifier.W, classifier.W - learning_rate * g_W),
               (classifier.b, classifier.b - learning_rate * g_b)]

    # compiling a Theano function train_model that returns the cost, but at
    # the same time updates the parameters of the model based on the rules
    # defined in updates
    train_model = theano.function(inputs=[index],
            outputs=cost,
            updates=updates,
            givens={
                x: train_set_x[index * batch_size:(index + 1) * batch_size],
                y: train_set_y[index * batch_size:(index + 1) * batch_size]})

    ###############
    # TRAIN MODEL #
    ###############
    print '... training the model'
    # early-stopping parameters
    patience = 5000  # look at this many examples regardless
    patience_increase = 2  # wait this much longer when a new best is
                           # found
    improvement_threshold = 0.995  # a relative improvement of this much is
                                   # considered significant
    validation_frequency = min(n_train_batches, patience / 2)
                                   # go through this many
                                   # minibatches before checking the network
                                   # on the validation set; in this case we
                                   # check every epoch

    best_params = None
    best_validation_loss = numpy.inf
    test_score = 0.
    start_time = time.clock()

    done_looping = False
    epoch = 0
    while (epoch < n_epochs) and (not done_looping):
        epoch = epoch + 1
        for minibatch_index in xrange(n_train_batches):

            minibatch_avg_cost = train_model(minibatch_index)
            # iteration number
            iter = (epoch - 1) * n_train_batches + minibatch_index

            if (iter + 1) % validation_frequency == 0:
                # compute zero-one loss on validation set
                validation_losses = [validate_model(i)
                                     for i in xrange(n_valid_batches)]
                this_validation_loss = numpy.mean(validation_losses)

                print('epoch %i, minibatch %i/%i, validation error %f %%' %
                      (epoch, minibatch_index + 1, n_train_batches,
                       this_validation_loss * 100.))

                # if we got the best validation score until now
                if this_validation_loss < best_validation_loss:
                    # improve patience if loss improvement is good enough
                    if this_validation_loss < best_validation_loss * \
                            improvement_threshold:
                        patience = max(patience, iter * patience_increase)

                    best_validation_loss = this_validation_loss
                    # test it on the test set
                    test_losses = [test_model(i)
                                   for i in xrange(n_test_batches)]
                    test_score = numpy.mean(test_losses)

                    print((' epoch %i, minibatch %i/%i, test error of best'
                           ' model %f %%') %
                          (epoch, minibatch_index + 1, n_train_batches,
                           test_score * 100.))

            if patience <= iter:
                done_looping = True
                break

    end_time = time.clock()
    print(('Optimization complete with best validation score of %f %%, '
           'with test performance %f %%') %
          (best_validation_loss * 100., test_score * 100.))
    print 'The code ran for %d epochs, with %f epochs/sec' % (
        epoch, 1. * epoch / (end_time - start_time))
    print >> sys.stderr, ('The code for file ' +
                          os.path.split(__file__)[1] +
                          ' ran for %.1fs' % ((end_time - start_time)))

if __name__ == '__main__':
    sgd_optimization_mnist()

The user can learn to classify MNIST digits with SGD logistic regression, by typing, from within the DeepLearningTutorials folder:

python code/logistic_sgd.py

The output one should expect is of the form:

...
epoch 72, minibatch 83/83, validation error 7.510417 %
 epoch 72, minibatch 83/83, test error of best model 7.510417 %
epoch 73, minibatch 83/83, validation error 7.500000 %
 epoch 73, minibatch 83/83, test error of best model 7.489583 %
Optimization complete with best validation score of 7.500000 %, with test performance 7.489583 %
The code ran for 74 epochs, with 1.936983 epochs/sec

On an Intel(R) Core(TM)2 Duo CPU E8400 @ 3.00 GHz the code runs at approximately 1.936 epochs/sec, and it took 75 epochs to reach a test error of 7.489%. On the GPU the code does almost 10.0 epochs/sec. For this instance we used a batch size of 600.

Footnotes

[1] For smaller datasets and simpler models, more sophisticated descent algorithms can be more effective. The sample code logistic_cg.py demonstrates how to use SciPy's conjugate gradient solver with Theano on the logistic regression task.
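Note that this version of the script does not save the trained model or expose a prediction helper. The snippet below is a small illustrative sketch (the names predict_model and test_values are mine, not part of the tutorial) of how one could compile a Theano function that returns predicted labels, while the classifier object from sgd_optimization_mnist is still in scope:

    # Illustrative sketch only: assumes `classifier`, `x` and `test_set_x`
    # from the training code above are still in scope.
    predict_model = theano.function(inputs=[x], outputs=classifier.y_pred)

    # classify the first ten test images
    test_values = test_set_x.get_value(borrow=True)
    print 'Predicted labels:', predict_model(test_values[:10])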
2014-09-23 16:23:45
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6792298555374146, "perplexity": 4509.8400947813125}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-41/segments/1410657139314.4/warc/CC-MAIN-20140914011219-00047-ip-10-234-18-248.ec2.internal.warc.gz"}
https://www.omnicalculator.com/chemistry/lattice-energy
# Lattice Energy Calculator

Created by Jack Bowater. Reviewed by Anna Szczepanek, PhD. Last updated: Sep 27, 2022

Atoms can come together in many different ways, and this lattice energy calculator is concerned with the energy stored when cations and anions ionically bond as a part of a larger, uniform structure. You're probably well aware of how ubiquitous ionic lattices are - you'll find them in your food, medicine, and maybe even in the walls of your house - but by learning what lattice energy is, the lattice energy formula, and the lattice energy trend, your appreciation for chemistry will surely increase.

So, regardless of whether you've been asked to find the lattice energy of $\text{CaO}$ for a test, or want to work out the lattice energy of $\text{NaCl}$ to aid in dinner conversation, learning how to calculate lattice energy will aid in your understanding of the physical world.

## What is lattice energy? - The lattice energy definition

Before we get to grips with finding the lattice energy, it's important to know the lattice energy definition, as it is quite peculiar. Chemists, for various reasons, like to have exact and sometimes unintuitive definitions, but they do serve a purpose, we assure you. In this case, the **lattice energy definition** isn't the change in energy when any two atoms form an ionic bond that is part of an ionic lattice, but instead:

The energy required to fully dissociate a mole of an ionic lattice into its constituent ions in their gaseous state.

This can be thought of in terms of the lattice energy of $\text{NaCl}$:

$\text{NaCl}_{(\text{s})}\rightarrow \text{Na}^+_{(\text{g})}+ \text{Cl}^-_{(\text{g})}$

That the ions are in their gaseous state is important; in this form, they are thought to be infinitely far apart, i.e., there are no interactions between them. This ensures that the complete lattice energy is found, not merely the enthalpy of formation.

## How to calculate lattice energy - The lattice energy formula

Perhaps surprisingly, there are several ways of finding the lattice energy of a compound. In fact, there are five. We will discuss one briefly, and we will explain the remaining four, which are all slight variations on each other, in more detail. You can calculate the last four using this lattice energy calculator.

#### Experimental methods and the Born-Haber cycle

As one might expect, the best way of finding the energy of a lattice is to take an amount of the substance, seal it in an insulated vessel (to prevent energy exchange with the surroundings), and then heat the vessel until all of the substance is gas. After this, the amount of energy you put in should be the lattice energy, right? Unfortunately, this is not the case. While you will end up with all of the lattice's constituent atoms in a gaseous state, they are unlikely to still be in the same form as they were in the lattice. This is because ions are generally unstable, and so when they inevitably collide as they diffuse (which will happen quite a lot, considering there are over 600 sextillion atoms in just one mole of substance — as you can discover with our Avogadro's number calculator) they are going to react to form more stable products. These additional reactions change the total energy in the system, making it difficult to find the lattice energy directly.

So, how to calculate lattice energy experimentally, then? The trick is to chart a path through the different states of the compound and its constituent elements, starting at the lattice and ending at the gaseous ions.
If we then add together all of the various enthalpies (if you don't remember the concept, visit our enthalpy calculator), the result must be the energy gap between the lattice and the ions. This kind of construction is known as a Born-Haber cycle. For example, we can find the lattice energy of $\text{CaO}$ using the following information:

$\begin{split} &\text{CaO}_{(\text{s})}\rightarrow \text{Ca}_{(\text{s})} + \frac{1}{2}\text{O}_{2(\text{g})}\\ \\ &\text{Ca}_{(\text{s})}\rightarrow \text{Ca}_{(\text{g})}\\ \\ &\text{Ca}_{(\text{g})} \rightarrow \text{Ca}^{2+}_{(\text{g})}+2\text{e}^- \\ &\frac{1}{2}\text{O}_{2(\text{g})}\rightarrow \text{O}_{(\text{g})} \\ &\text{O}_{(\text{g})} +2\text{e}^-\rightarrow \text{O}_{(\text{g})}^{2-} \\ \end{split}$

Since we can find all of these energies experimentally, this is a surefire way of answering "What is the lattice energy of $\text{CaO}$?"

#### Hard-sphere model

There are, however, difficulties in getting reliable energetic readings. This has led many people to look for a theoretical way of finding the lattice energy of a compound. The first attempt was to find the sum of all of the forces, both attractive and repulsive, that contribute to the potential lattice energy. Even though this is a type of potential energy, you can't use the standard potential energy formula here. The starting point for such a model is the potential energy between two gaseous ions:

$U=\frac{z^+z^-e^2}{4\pi\varepsilon_0r_0}$

where:

• $z^+$ — Charge on the cation;
• $z^-$ — Charge on the anion;
• $e$ — Electronic charge ($e=1.602 \times 10^{-19}\ \text{C}$);
• $4\pi\varepsilon_0$ — Vacuum permittivity ($1.11 \times 10^{-10}\ \text{C}^2/(\text{J}\cdot\text{m})$); and
• $r_0$ — Interatomic distance (usually the sum of the cation's & anion's atomic radii in $\text{m}$).

Two alterations are necessary to make the above equation suitable for a mole of a lattice. First, to find the energy on a per mole basis, the equation should be multiplied by Avogadro's constant, $N_{\text{A}}$. Next, consider that this equation is for two ions acting on each other alone, while in a lattice each ion is acted on by every other ion at a strength relative to their interatomic distance. For a single atom in the lattice, the summation of all of these interactions can be found, known as the Madelung constant, $M$, which is then multiplied by the equation above. This constant varies from lattice structure to lattice structure, and the most common are present in the lattice energy calculator. Therefore, the hard-sphere equation for lattice energy is:

$U=\frac{N_{\text{A}}z^+z^-e^2 M}{4\pi\varepsilon_0r_0}$

where:

• $N_{\text{A}}$ — Avogadro's number; and
• $M$ — Madelung constant.

#### Born-Landé equation

While the hard-sphere model is a useful approximation, it does have some issues. The truth is that atoms do not exist as single points that are either wholly positive or wholly negative, as in the hard-sphere model. They are instead surrounded by a number of electron orbitals regardless of charge (unless you have managed to remove all of the electrons, as in the case of $\text{H}^+$, of course). Because there is actually some element of repulsion between the anion and cation, the hard-sphere model tends to over-estimate the lattice energy.
To correct for this, Born and Landé (yes, the same Born as in the Born-Haber cycle; prolific, we know) proposed an equation to describe this repulsive energy:

$U=\frac{N_\text{A}B}{r^n}$

where:

• $B$ — A constant that accounts for how the strength of the repulsion decreases as the distance increases, which is a constant for each lattice;
• $r$ — Interatomic distance; and
• $n$ — Born exponent, a measure of the lattice's compressibility.

By adding this correction to the hard-sphere equation, differentiating it with respect to $r$, assuming that at $r=r_0$ the potential energy is at a minimum, rearranging for $B$, and finally substituting that back into the hard-sphere equation, you end up with the Born-Landé equation:

$U=\frac{N_{\text{A}}z^+z^-e^2M}{4\pi\varepsilon_0r_0}\cdot \left(1-\frac{1}{n}\right)$

#### Born-Mayer equation

As you might expect, the Born-Landé equation gives a better prediction of the lattice energy than the hard-sphere model. It is, however, still an approximation, and improvements to the repulsion term have since been made. The first major improvement came from Mayer, who found that replacing $1/r^n$ with $e^{-\frac{r}{\rho}}$ yielded a more accurate repulsion term. In this case, $\rho$ is a factor representing the compressibility of the lattice, and letting this term equal $30\ \text{pm}$ is sufficient for most alkali metal halides. Substituting this new approximation into the Born-Landé equation gives:

$U=\frac{N_{\text{A}}z^+z^-e^2M}{4\pi\varepsilon_0r_0}\cdot \left(1-\frac{\rho}{r_0}\right)$

Since then, further improvements in our understanding of the universe have led to a more accurate repulsion term, which in turn has given better equations for how to calculate lattice energy. The application of these newer equations is, however, still quite niche, and the improvements are not as significant. For these reasons they have not been included in the present lattice energy calculator. Still, if you would like us to add some more, please feel free to write to us 😀

#### Kapustinskii equation

Unfortunately, some of the factors for both the Born-Landé and Born-Mayer equations require either careful computation or detailed structural knowledge of the crystal, which are not always easily available to us. Kapustinskii, a Soviet scientist, also noticed this and decided to make some improvements to the Born-Mayer equation to make it more fit for general purpose. First, he found that in most cases $\rho$ was equal to $34.5\ \text{pm}$, and so replaced it by the constant $d$, equal to $3.45\times10^{−11}\ \text{m}$. Next, he replaced the measured distance between ions, $r_0$, with merely the sum of the two ionic radii, $r^++r^-$. After this, it was shown that the Madelung constant of a structure divided by the number of atoms in the structure's empirical formula is always roughly the same ($\sim0.85$), and so a constant to account for this could be used to replace the Madelung constant. Moving all of the other constants into a single factor gives the final result:

$U=K\cdot\frac{v\cdot \left|z^+\right|\cdot \left|z^-\right|}{r^++r^-}\cdot\left(1-\frac{d}{r^++r^-}\right)$

where:

• $K$ — $1.202\times10^{−4}\ \text{J}\cdot\text{m}/\text{mol}$;
• $v$ — Number of ions in the lattice's empirical formula; and
• $d$ — $3.45\times10^{−11}\ \text{m}$.

As you can see, the lattice energy can now be found from only the lattice's chemical formula and the ionic radii of its constituent atoms.
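As a quick sanity check on the formula, here is a short Python sketch of the Kapustinskii calculation (not part of the original calculator; the Shannon ionic radii used below - 102 pm for Na⁺, 181 pm for Cl⁻, 100 pm for Ca²⁺ and 140 pm for O²⁻ - are my own assumed inputs):

    # Kapustinskii estimate: U = K * v*|z+|*|z-| / (r+ + r-) * (1 - d/(r+ + r-))
    K = 1.202e-4   # J*m/mol
    d = 3.45e-11   # m

    def kapustinskii(v, z_plus, z_minus, r_plus, r_minus):
        """Lattice energy in kJ/mol; ionic radii are given in metres."""
        r_sum = r_plus + r_minus
        u = K * v * abs(z_plus) * abs(z_minus) / r_sum * (1 - d / r_sum)
        return u / 1000.0   # convert J/mol to kJ/mol

    print(kapustinskii(2, +1, -1, 102e-12, 181e-12))   # NaCl: ~746 kJ/mol
    print(kapustinskii(2, +2, -2, 100e-12, 140e-12))   # CaO: ~3430 kJ/mol

Both numbers reproduce the values quoted in the trend section below.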
## Lattice energy trend

Looking at the Kapustinskii equation above, we can begin to understand some of the lattice energy trends as we move across and down the periodic table.

First, we can see that by increasing the charge of the ions, we will dramatically increase the lattice energy. It will, in fact, increase the lattice energy by a factor of four, all other things being equal, as $|z^+| \cdot |z^-|$ moves from being $1 \cdot 1$ to $2\cdot2$. This is due to the ions attracting each other much more strongly on account of their greater magnitude of charge. For example, using the Kapustinskii equation, the lattice energy of $\text{NaCl}$ is $746\ \text{kJ}/\text{mol}$, while the lattice energy of $\text{CaO}$ is $3430\ \text{kJ}/\text{mol}$.

The other trend that can be observed is that, as you move down a group in the periodic table, the lattice energy decreases. As elements further down the periodic table have larger atomic radii due to an increasing number of filled electronic orbitals (if you need to dust off your atomic models, head to our quantum numbers calculator), the factor $r^++r^-$ increases, which lowers the overall lattice energy. Note that, while the increase in $r^++r^-$ in the electronic repulsion term actually increases the lattice energy, the other $r^++r^-$ has a much greater effect on the overall equation, and so the lattice energy decreases. The cause of this effect is less efficient stacking of ions within the lattice, resulting in more empty space.

To see this trend for yourself, investigate it with our lattice energy calculator! Find more about crystallography with our cubic cell calculator!

## FAQ

### How to calculate lattice energy?

You can either construct a Born-Haber cycle or use a lattice energy equation to find lattice energy. The Born-Haber cycle is more accurate, as it is derived experimentally, but requires a larger amount of data. Lattice energy formulas, such as the Kapustinskii equation, are easy to use but are only estimates.

### What is the lattice energy of CaO?

The lattice energy of CaO is 3460 kJ/mol.

### What determines lattice energy?

Lattice energy is influenced by a number of factors:

• The number of ions in the crystal's empirical formula;
• The charge on both the anion and cation;
• The ionic radii of both the anion and cation; and
• The structure of the crystal lattice.

### What is the lattice energy of NaCl?

787.3 kJ/mol is the lattice energy of NaCl. 💡 Did you know that NaCl is actually table salt!
2022-10-04 07:33:04
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 55, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6732692718505859, "perplexity": 616.9469979637697}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337480.10/warc/CC-MAIN-20221004054641-20221004084641-00644.warc.gz"}
http://www.cliffsnotes.com/math/algebra/algebra-ii/linear-sentences-in-two-variables/linear-inequalities-solutions-using-graphing-with-two-variables
# Linear Inequalities: Solutions Using Graphing with Two Variables

##### Example 1

Graph the solution to this system of inequalities.

To graph the solution to a system of inequalities, follow this procedure:

1. Graph each sentence on the same set of axes.
2. See where the shading of the sentences overlaps. The overlapping region is the solution to the system of inequalities.

The solution to the system is the region with both shadings (see Figure 1). The solution to the system is therefore as shown in Figure 2.
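Because the page's example system appears only as an image, here is a sketch of the same graph-and-overlap procedure in Python for a stand-in system of my own choosing, y <= x + 2 and y >= -x (matplotlib and the particular inequalities are my additions, not part of the original lesson):

    import numpy as np
    import matplotlib.pyplot as plt

    # Step 1: graph each sentence on the same set of axes.
    x = np.linspace(-5, 5, 400)
    y = np.linspace(-5, 5, 400)
    X, Y = np.meshgrid(x, y)

    # A stand-in system: y <= x + 2 and y >= -x.
    overlap = (Y <= X + 2) & (Y >= -X)

    # Step 2: shade where the sentences overlap; that region is the solution.
    plt.contourf(X, Y, overlap.astype(float), levels=[0.5, 1], alpha=0.4)
    plt.plot(x, x + 2, label='y = x + 2')
    plt.plot(x, -x, label='y = -x')
    plt.legend()
    plt.show()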
2015-05-25 11:23:58
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.801462709903717, "perplexity": 528.501008128346}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-22/segments/1432207928486.86/warc/CC-MAIN-20150521113208-00191-ip-10-180-206-219.ec2.internal.warc.gz"}
http://marripedia.org/effects_of_maternal_attachment_on_crime_rates?do=diff&rev2%5B0%5D=1441986822&rev2%5B1%5D=1445573287&difftype=sidebyside
==========Effects of Maternal Attachment on Crime Rates==========

=====1. Maternal Affection Develops Empathy=====

According to Professor Rolf Loeber of the University of Pittsburgh School of Medicine: "There is increasing evidence for an important critical period that occurs early in children's lives. At that time, youngsters' attachment to adult caretakers is formed. This helps them to learn prosocial skills and to unlearn any aggressive or acting out behaviors."((Rolf Loeber, “Development and Risk Factors of Juvenile Antisocial Behavior and Delinquency,” //Clinical Psychology Review//, Vol. 10 (1990), pp. 1-41. \\ Geert-Jan J.M. Stams, Femmie Juffer, and Marinus H. IJzendoorn, “Maternal Sensitivity, Infant Attachment, and Temperament in Early Childhood Predict Adjustment in Middle Childhood: The Case of Adopted Children and Their Biologically Unrelated Parents,” //Developmental Psychology// 38 (2002): 806-821. \\ L. Alan Sroufe, “Attachment and Development: A Prospective Longitudinal Study from Birth to Adulthood,” //Attachment and Human Development// 7 (2005).))

=====2. Effects of Weak Maternal Attachment=====

The early experience of intense maternal affection is the basis for the development of a conscience and moral compassion for others. Children whose [[effects_of_abuse_on_children|mothers are distant emotionally or physically]] tend to have behavior problems and are more likely to commit crimes.((J. Belsky, S. Woodworth, and K. Crnic, “Trouble in the Second Year: Three Questions About Family Interaction,” //Child Development// 67, no. 2 (1996): 556-578. \\ J. Belsky and M. Rovine, “Non-maternal Care in the First Year of Life and Security of Infant-Parent Attachment,” //Child Development// 59 (1988). \\ L.N. Hickman, “Who Should Care for our Children? The Effects of Home versus Centre Care on Child Cognition and Social Adjustment,” //Journal of Family Issues// 27, no. 5 (2006): 652-684. \\ NICHD Early Child Care Research Network, “Day-care Effect Sizes for the NICHD Study of Early Child Care and Youth Development,” //American Psychologist// 61, no. 2 (2006): 99-116. \\ Jay Belsky, “The Effects of Infant Day Care Reconsidered,” //Early Childhood Research Quarterly//, Vol. 3 (1988), pp. 235-272. \\ On the vital connection between family and moral capacity, Wright and Wright, “Family Life and Delinquency and Crime,” summarizes the findings of the professional literature as follows: \\ “Ainsworth suggested that children seek and accept the parent’s guidance, further maintaining that secure children obey voluntarily from their own desire rather than from fear of reprisal.” \\ “Arbuthnot et al., in an attempt to understand moral development and family relationships, suggested that dysfunctional families experiencing high levels of conflict, dominance, hostility, lack of warmth, and authoritarian disciplinary styles do not allow children to gain insight and understanding into how their misbehaving might cause hurt to others. Under these negative family conditions, children cannot develop conventional moral reasoning with roots in acceptance of mutual expectations, positive social intentions, belief in and maintenance of the social system and acceptance of motives which includes duties and respect. Based on their review of the literature, Arbuthnot concluded that nearly all studies utilizing moral assessment devices with acceptable psychometric properties have shown that delinquents tend to have lower moral reasoning maturity than non-delinquents.” \\ “They argue that delinquency can be anticipated when children or adolescents are unable to see the perspective of others and lack empathy for other people’s circumstances. When conformity to rules of behavior for the sake of order in society is not accepted, when property is only valued in its possession, when personal relationships, even life itself, are valued only for their utility, then delinquency behavior should not be a surprise. Moral or normative development at a more advanced level may be necessary for young people to move beyond utility to moral justification for correct behavior. The young persons must develop a sense of moral justification to have the ability and commitment to act accordingly when faced with temptation, economic deprivation or intense peer group pressure.”))

If a child's emotional attachment to his or her mother is disrupted during the first few years, permanent harm can be done to their capacity for emotional attachment to others. They will be less able to trust others and throughout their lives will stay more distant emotionally from others. Children who do not have a close relationship with their mothers are more likely to have psychological and behavioral problems.((Kathleen McCartney, M. Owen, C. Booth, A. Clarke-Stewart, and D. Vandell, “Testing a Maternal Attachment Model of Behavior Problems in Early Childhood,” //Journal of Child Psychology and Psychiatry// 45 (2004): 765-778. \\ A. Siri Oyen, S. Landy, and C. Hiilburn-Cobb, “Maternal Attachment and Sensitivity in an At-Risk Sample,” //Attachment and Human Development// 2 (2000): 203-217. \\ John Bowlby, //Attachment and Loss//, Vol. 1, Basic Books, 1980. \\ Emily Fergus Morrison, Sara Rimm-Kauffman, and Robert C. Pianta, “A Longitudinal Study of Mother-Child Interactions at School Entry and Social and Academic Outcomes in Middle School,” //Journal of School Psychology// 41, no. 3 (May/June 2003): 185-200.)) Having many different caretakers during the first few years can lead to a loss of this sense of attachment for life and to antisocial behavior.((R.J. Cadoret and C. Cain, “Sex Differences in Predictors of Antisocial Behavior in Adoptees,” //Archives of General Psychiatry//, Vol. 37 (1980), pp. 1171-1175. \\ J. Belsky and M. Rovine, “Non-maternal Care in the First Year of Life and Security of Infant-Parent Attachment,” //Child Development// (1988): 157-167. \\ J. Belsky, S. Woodworth, and K. Crnic, “Trouble in the Second Year: Three Questions About Family Interaction,” //Child Development// 67, no. 2 (1996): 556-578. \\ J. Belsky, “Infant Day Care and Socio-emotional Development,” //Early Childhood Research Quarterly// 3, no. 3 (1988): 235-272. \\ NICHD Early Child Care Research Network, “Type of Child Care and Children's Development at 54 Months,” //Early Childhood Research Quarterly// 19 (2004): 203-230. \\ Cynthia Garcia Coll, “Infant-mother Attachment Classification: Risk and Protection in Relation to Changing Maternal Caregiving Quality,” //Developmental Psychology// 42 (2006): 38-58. \\ Lisa N. Hickman, “Who Should Care for Our Children?,” //Journal of Family Issues// 27 (2006): 652-684. \\ Susanna Loeb, Margaret Bridges, Daphna Bassok, Bruce Fuller, and Russell W. Rumberger, “How Much Is Too Much? The Influence of Preschool Centers on Children’s Social and Cognitive Development,” //Economics of Education Review// 26 (2007): 52-66. \\ Jay Belsky, Margaret Burchinal, Kathleen McCartney, and Deborah Lowe Vandell, “Are There Long-Term Effects of Early Child Care?” //Child Development// 78 (March/April 2007): 681-701.)) Separation from the mother, especially between six months and three years of age, can lead to long-lasting negative effects on behavior and emotional development. Severe maternal deprivation is a critical ingredient of [[effects_of_community_environment_on_juvenile_crime_rates|juvenile delinquency]]: as John Bowlby, the father of attachment research, puts it, "Theft, like rheumatic fever, is a disease of childhood, and, as in rheumatic fever, attacks in later life are frequently in the nature of recurrences."((Robert Karen, //Becoming Attached// (New York: Time Warner Books, 1994), Chapter 4, “Psychopaths in the Making.”))

A child's emotional attachment to their mother is powerful in other ways. For example, even after a period of juvenile delinquency, a young man's ability to become emotionally attached to his wife can make it possible for him to turn away from crime.((Robert J. Sampson and John L. Laub, “Crime and Deviance Over the Life Course: The Salience of Adult Social Bonds,” //American Sociological Quarterly//, Vol. 5 (1990), pp. 609-627. \\ Ryan D. King, “The Context of Marriage and Crime: Gender, the Propensity to Marry, and Offending in Early Adulthood,” //Criminology// 45 (2007): 33-65.)) This capacity is rooted in the very early attachment to his mother. We also know that a weak marital attachment resulting in separation or divorce accompanies a continuing life of crime.((David P. Farrington, “Later Adult Life Outcomes of Offenders and Nonoffenders,” in //Children at Risk: Assessment, Longitudinal Research and Intervention//, ed. Michael Brambring et al. (New York: Walter de Gruyter, 1989), pp. 220-244, cited in Wright and Wright, “Family Life and Delinquency and Crime: A Policymaker’s Guide to the Literature.”))

=====3. Causes of Weak Maternal Attachment=====

Children whose mothers have [[effects_of_family_structure_on_crime|intact, stable marriages]] are much less likely to exhibit delinquent behavior.((Heather Bachman, Rebekah Levine Coley, and Jennifer Carrano, “Low-Income Mothers' Patterns of Partnership Instability and Adolescents' Socioemotional Well-Being,” //Journal of Family Psychology// 26 (April 2012): 263-273.))

Many family conditions can weaken a mother's attachment to her young child. Perhaps the mother herself struggles with emotional detachment.((Robert Karen, //Becoming Attached// (New York: Time Warner Books, 1994). The research for the following statements is reviewed in this book, which is the most comprehensive and interestingly written overview of the attachment literature to date.)) The mother could be so lacking in family and emotional support that she cannot fill the emotional needs of the child. She could return to work, or be forced to return to work, too soon after the birth of her child. Or, while she is at work, there could be a change in the personnel responsible for the child's day care. The more prevalent these conditions, the less likely a child will be securely attached to his mother and the more likely he will be hostile and aggressive.((J. Belsky and M. Rovine, “Non-maternal Care in the First Year of Life and Security of Infant-Parent Attachment,” //Child Development// (1988): 157-167. \\ Le Grande Gardner and Donald J. Shoemaker, “Social Bonding and Delinquency: A Comparative Analysis,” //The Sociological Quarterly//, Vol. 30, No. 3 (1989), pp. 481-500.))

The mother's relationship with her children during this early period is also relevant to the debate over child care. According to Professor James Q. Wilson of the University of California at Los Angeles, the extended absence of a working mother from her child during the early critical stages of the child's emotional development increases the risk of delinquency.((James Q. Wilson, //Crime and Public Policy// (San Francisco: Institute for Contemporary Studies Press, 1983), chapter 4, pp. 53-68.)) Specifically, say Stephen Cernkovich and Peggy Giordano, "maternal employment affects behavior indirectly, through such factors as lack of supervision, loss of direct control, and attenuation of close relationships."((Stephen A. Cernkovich and Peggy C. Giordano, “Family Relationships and Delinquency,” //Criminology//, Vol. 25, No. 2 (1987), pp. 295-321. \\ This entry draws heavily from [[http://www.heritage.org/research/reports/1995/03/bg1026nbsp-the-real-root-causes-of-violent-crime|The Real Root Causes of Violent Crime: The Breakdown of Marriage, Family, and Community]].)) Thus, forcing a young single mother to return to work too soon after the birth of her baby is bad public policy.
2020-10-24 17:33:45
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9057801365852356, "perplexity": 14131.795501640136}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107884322.44/warc/CC-MAIN-20201024164841-20201024194841-00173.warc.gz"}
https://mathematicalapologist.com/2020/04/24/proof-by-contradiction/
The proof method that we will talk about here is quite different than many others. In his famous book A Mathematician's Apology, the great mathematician G.H. Hardy made an analogy between this proof style, which we call a proof by contradiction, and a gambit in chess. So before I try to analyze what this proof method is all about, I'll first take a moment to explain the analogy.

There is not really a need to go all the way into the rules of chess in order to explain the idea of a gambit, because the same idea applies to other games. In chess, there are various pieces, and some of the pieces are more valuable than others. So, if a sequence of moves in chess causes you to 'lose value,' you want to avoid that. However, there are exceptions, and the exceptions are what we mean by a gambit. Gambits in chess are strategies that involve purposefully losing valuable pieces in order to gain a position that will help win the game. Think of this as taking one step back and two steps forward. The beginning of the plan looks really bad, but things turn out for the better.

Now, how might we apply this 'one step backwards, two steps forwards' idea to mathematics? Since the overall goal of mathematics is to understand what is and is not true about numbers, shapes, and so on, identifying as much truth as possible, taking one step backwards means assuming that something false is actually true – which is a mathematical failure. This amounts to beginning a math problem with a statement like "since we know 1+1=3…" I think most of us will feel a sense of unease at a claim like "1+1=3," since it is clearly wrong. In mathematics, wrong is bad, right is good.

However, there is a principle of logic that I have discussed before that enables us to make use of false ideas. This is the principle of non-contradiction. This is absolutely essential. The principle tells us that there are no contradictions – no sentence can be at once both true and false. I think everyone knows this intuitively, and in fact if you spend some time thinking about it, it is actually impossible to deny the principle of non-contradiction (if you want a brain-teaser, try to imagine an argument about whether this principle is true). This principle can be taken as foundational to all thought, and in particular the way we think about mathematics.

Here is where the style of proof by contradiction arises. Suppose that there is a statement P and that we want to know whether P is true or false. Suppose we temporarily assume that P is false, and we later discover that assuming P is false leads us to affirm some kind of contradiction. Since the principle of non-contradiction tells us that there can never be any contradictions, our temporary guess that P is false has led us into a problematic situation. We can't continue believing that P is false any more, because that is contradictory, and since there are only two choices available to us, P must be true. This strategy is what we mean by proof by contradiction. Since all claims (in the context of mathematics at least) are either true or false, anything that cannot be false must be true.

I will now show how this works with one of the most famous examples of this method, the proof that "the square root of 2 is irrational." To be brief and clear about what this means, the rational numbers are all the fractions, and an irrational number is just any number that is not a fraction.
It isn’t actually clear immediately that there is any such thing as an irrational number, and the proof by contradiction is the primary mechanism by which we begin to understand that there are such things as irrational numbers and what they are like. To begin this proof, all I claim to know about “the square root of 2,” which is normally written $\sqrt{2}$, is that $(\sqrt{2})^2 = 2$. From here, we can now prove that $\sqrt{2}$ is irrational. Theorem: The square root of 2 is not equal to any rational number. Proof: Suppose that the claim is false, that is, that $\sqrt{2}$ actually is a fraction. Then we can write down that fraction using whole numbers a and b such that $\sqrt{2} = \dfrac{a}{b}.$ Since fractions can always be reduced to lowest terms, we can take for granted that a/b is in lowest terms already (which means the whole numbers a and b share no common factor). Now, we want to see what we can learn about a and b. First, we can square both sides of our first equation to obtain the new equation $2 = \dfrac{a^2}{b^2}.$ Multiply both sides of this by b2 to obtain the equation 2b2 = a2. Now, 2b2 is even, since it is a multiple of 2, so a2 is also even. Multiplying a number by itself does not change its evenness/oddness, and therefore a must also be even. That means (definition of even numbers) that we can pick a new whole number c so that c is half of a, that is, 2c = a. Then a2= 4c2 must be true by squaring both a and 2c, and the equation from the beginning of the paragraph then shows is that b2 = 2c2. For the same reason as we have just used on a, the whole number b must be even as well. Since we have reasoned that a is also even, this means a and b share the common factor of 2, and so a/b is not in lowest terms. Therefore, the fraction a/b both is in lowest terms and is not in lowest terms. This is a contradictory statement, and so it must be the case that our initial starting point of $\sqrt{2} = \dfrac{a}{b}$ is actually false. Therefore, there is no fraction equal to the square root of 2. So, our proof is now completed. This way of thinking takes a lot of getting used to. To see more examples (and examples that are not directly math-related) the Latin term reductio ad absurdum refers to any logical process that uses this same structure – whether related to mathematics or not. Though very strange, this proof method is extremely useful, because sometimes (as is the case with the problem I have just solved) a claim that ‘there is no such-and -such’ is difficult to work with directly, whereas a claim that ‘there is a such-and-such’ gives you more information – in this case, we gained access to the numbers a and b and an equation which supposedly related them to one another, which helped us greatly. As one learns more mathematics over time, it is very important to develop an intuition for what kinds of situations this method will work well for, as very often it makes problems enormously easier to solve.
2021-06-21 19:30:19
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 7, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6774616837501526, "perplexity": 209.69026664662093}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623488289268.76/warc/CC-MAIN-20210621181810-20210621211810-00068.warc.gz"}
http://shenme.de/blog/posts/2019-01-27-flatlim.html
Gromov-Hausdorff limits of flat Riemannian surfaces

Posted on January 27, 2019 by Dima

tags: degenerations, weight function, Gromov-Hausdorff limits

Let $$X$$ be a Riemannian surface of genus $$g \geq 1$$. A holomorphic 1-form $$\Omega$$ on $$X$$ has $$2g-2$$ zeroes, and putting $$\omega=\dfrac{i}{2}(\Omega \wedge \bar \Omega)$$ we obtain a Kähler metric on the complement of the zeroes of $$\Omega$$ (it is pseudo-Kähler viewed as a metric on the entire $$X$$, since $$\omega$$ is degenerate at the points where $$\Omega$$ has zeroes). This metric is flat since, picking local coordinates $$\operatorname{Re}z$$, $$\operatorname{Im}z$$, where $$z$$ is a local holomorphic coordinate, one observes that $$X$$ is locally isometric (away from zeroes of $$\Omega$$) to $$\mathbb{C}$$ with the flat metric.

Assume we now have a family of such surfaces $$X_t$$ where $$t \in B$$, and that we endow each $$X_t$$ with a pseudo-Kähler metric of the form $$\dfrac{i}{2}(\Omega_t \wedge \bar\Omega_t)$$ where $$\Omega$$ is a relative holomorphic 1-form on $$X \to B$$. As $$t$$ tends towards a point $$O \in B$$, the shape of the Riemannian manifold $$(X_t, \omega_t)$$ changes, and we would like to understand if it tends towards some limit shape. More precisely, we will consider the Gromov-Hausdorff limit of $$X_t$$ as $$t \to O$$.

Let $$X, Y$$ be two subsets of a metric space $$Z$$; then the Hausdorff distance between $$X$$ and $$Y$$ is the infimum of positive numbers $$\epsilon$$ such that $$X$$ is contained in the $$\epsilon$$-neighbourhood of $$Y$$ (the union of open balls of radius $$\epsilon$$ with centers in $$Y$$) and vice versa. The Gromov-Hausdorff distance between two metric spaces $$X$$ and $$Y$$ is the infimum of Hausdorff distances between $$X$$ and $$Y$$ over all possible isometric embeddings of $$X$$ and $$Y$$ into a third metric space $$Z$$.

Ultrapowers and Łoś's theorem

An ultrafilter $$U$$ on a set $$A$$ is a collection of subsets, not containing the empty set, closed under intersections and supersets, and such that for any subset $$I \subset A$$ either $$I \in U$$ or $$A \setminus I \in U$$. Consider a countable non-trivial ultrapower of $$\mathbb{R}$$, denoted $${}^* \mathbb{R}$$. What this means is that we choose some ultrafilter $$U$$ on $$\mathbb{N}$$ that contains all cofinite sets (such an ultrafilter exists by the axiom of choice) and we consider the quotient $$\prod_{i \in \mathbb{N}} \mathbb{R}/\sim$$ by the equivalence relation

$(x_i) \sim (y_i) \textrm{ iff } \{ i \in \mathbb{N}\mid x_i=y_i \} \in U$

The quotient is a ring: we can apply the ring operations coordinate-wise, and one checks, using the definition of the ultrafilter, that the equivalence class of the result does not depend on the representatives picked. Moreover, $$\mathbb{R}$$ admits a diagonal embedding into $${}^* \mathbb{R}$$; its image is called the standard reals.

Now one can observe that $${}^* \mathbb{R}$$ is actually a field. Indeed, let $$[(x_i)]$$ be some element of $${}^*\mathbb{R}$$ that is not equal to 0. Then there must be $$I \in U$$ such that for all $$i \in I$$, $$x_i \neq 0$$ (otherwise the set $$\{i \mid x_i = 0\}$$ would belong to $$U$$ and $$[(x_i)]$$ would equal 0). Let $$y_i=x_i^{-1}$$ for $$i \in I$$ and let $$y_i$$ be anything for $$i \notin I$$. One then checks that $$(x_i \cdot y_i) \sim (1)$$ since $$\{ i \in \mathbb{N}\mid x_i y_i = 1\} \supset I$$ and the ultrafilter is closed under supersets.

There is a more streamlined way of checking various properties of ultraproducts:

Theorem (Łoś). Let $$\varphi(x_1, \ldots, x_n)$$ be a formula of first-order logic with free variables $$x_1, \ldots, x_n$$ (with $$n$$ possibly 0).
There is a more streamlined way of checking various properties of ultraproducts:

Theorem (Łoś). Let $$\varphi(x_1, \ldots, x_n)$$ be a formula of first-order logic with free variables $$x_1, \ldots, x_n$$ (with $$n$$ possibly 0). Then $$\prod_U \mathbb{R}\models \varphi([(x^1_i)], \ldots, [(x^n_i)])$$ if and only if $$\{ i \in \mathbb{N}\mid \mathbb{R}\models \varphi(x^1_i, \ldots, x^n_i)\} \in U$$.

Thus the formula $$\forall x\, (x \neq 0 \to \exists y\ x \cdot y = 1)$$ in the language of rings $$L_{ring} = (+, \times, 0, 1)$$ is true in $$\mathbb{R}$$ since it is a field, and by Łoś's theorem it is also true in $${}^* \mathbb{R}$$.

A real closed field $$R$$ can be characterized via one of the following equivalent statements (due to Artin and Schreier, not to be confused with the Artin-Schreier theory of cyclic extensions of degree $$p$$ in positive characteristic!):

• $$[R^{alg}:R] < \infty$$ (equivalently, $$R^{alg} = R(\sqrt{-1})$$);

• the relation "$$x \sim y$$ iff there exists $$z$$ such that $$x - y = z^2$$" is a total order.

It is a fun exercise (for someone starting out in model theory at least) to show that the first condition can be expressed by countably many first-order formulas, while the second one translates quite straightforwardly to the formula $$\forall x \forall y \exists z\ (x-y=z^2) \lor (y-x=z^2)$$. Either way, we conclude, by Łoś's theorem, that $${}^* \mathbb{R}$$ is a real closed field.

Let $$\mathcal{O}$$ be the convex hull of $$\mathbb{R}$$ in $${}^* \mathbb{R}$$, in other words the union of all intervals $$[a,b]$$ in $${}^* \mathbb{R}$$ with $$a,b \in \mathbb{R}$$. One can easily check that $$\mathcal{O}$$ is a valuation ring: for any $$x \in {}^* \mathbb{R}$$, if $$x \notin \mathcal{O}$$ then $$|x| > n$$ for every $$n \in \mathbb{N}$$, so $$|x^{-1}| < 1/n$$, and hence $$x^{-1} \in \mathcal{O}$$. The standard part map $$st: \mathcal{O}\to \mathbb{R}$$, which maps an element of the form $$a + \epsilon$$ (with $$a\in \mathbb{R}$$ and $$|\epsilon| < 1/n$$ for all $$n \in \mathbb{N}$$) to $$a$$, is obviously a homomorphism of rings, and its kernel $$\mathfrak{m}$$ is the maximal ideal of $$\mathcal{O}$$. Its well-definedness is immediate (if $$a \neq b$$ are standard reals, then $$|a - b| \geq 1/n$$ for some $$n$$). To see that it is defined on all of $$\mathcal{O}$$, observe that if $$st$$ is not defined on $$x$$, then for any $$a \in \mathbb{R}$$ we have $$|x-a| \geq 1/n$$ for some $$n$$; by the completeness of $$\mathbb{R}$$ (otherwise $$\sup\{a \in \mathbb{R}\mid a < x\}$$ would be a standard real infinitely close to $$x$$) this forces $$|x|$$ to be bigger than any standard real. It follows that $$x$$ cannot belong to $$\mathcal{O}$$, since any element of $$\mathcal{O}$$ is bounded by some standard real.

Furthermore, the quotient $${}^*\mathbb{R}^\times/\mathcal{O}^\times$$ (where $$\mathcal{O}^\times = \mathcal{O}\setminus\mathfrak{m}$$) is non-canonically isomorphic to $$(\mathbb{R},+)$$. Indeed, pick a non-zero element $$t = [(t_i)] \in \mathfrak{m}$$ and construct the following map: $v_t([(x_i)]) = \operatorname{st}\,[(\log_{|t_i|} |x_i|)].$ It is well-defined on the quotient: for any $$x \in \mathcal{O}^\times$$ one checks that $$|x|^n < 1/|t|$$ for all $$n$$, since $$1/|t|$$ is bigger than any standard real by our choice of $$t$$; hence $$|[(\log_{|t_i|} |x_i|)]| < 1/n$$ for all $$n$$, and so $$v_t(x) = 0$$. The choice of $$t$$ only changes the scaling: $$v_{t'}(x) = v_t(x) / v_t(t')$$ for $$t'$$ distinct from $$t$$. Let $${}^*\mathbb{C}$$ be the algebraic closure of $${}^*\mathbb{R}$$; then one checks that there is a unique valuation ring $$\mathcal{O}_{{}^* \mathbb{C}} = \{ x \in {}^* \mathbb{C}\mid |x| \in \mathcal{O}\} \subset {}^*\mathbb{C}$$ that extends $$\mathcal{O}\subset {}^*\mathbb{R}$$.
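To see the valuation in action (a worked example added here), take $$t = [(1/i)]$$; then for the standard real $$2$$ and for $$t^3$$ we get

$v_t(2) = \operatorname{st}\,[(\log_{1/i} 2)] = \operatorname{st}\,\Big[\Big({-\tfrac{\log 2}{\log i}}\Big)\Big] = 0, \qquad v_t(t^3) = \operatorname{st}\,[(\log_{1/i} (1/i)^3)] = 3,$

so non-zero standard reals have valuation $$0$$, while infinitesimals have positive valuation, as one expects of a valuation that is trivial on $$\mathbb{R}$$.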
Gromov-Hausdorff limit via ultraproducts

Let $$f: (X,d) \to (Y,d')$$ be a map between metric spaces; the distortion of $$f$$ is $\mathrm{dist} f = \sup_{x,y \in X} |d(x,y) - d'(f(x), f(y))|$ A subset $$Z$$ of a metric space $$X$$ is called an $$\epsilon$$-net if $$X$$ is equal to the $$\epsilon$$-neighbourhood of $$Z$$. A map $$f: X \to Y$$ is called an $$\epsilon$$-isometry if $$\mathrm{dist} f < \epsilon/2$$ and $$f(X)$$ is an $$\epsilon/2$$-net.

Lemma. Let $$f: X \to Y$$ be an $$\epsilon$$-isometry; then $$d_{GH}(X,Y) < \epsilon$$.

Let $$K_1/K$$ be a finite field extension; recall that for a variety $$X$$ over the field $$K_1$$ the Weil restriction $$W_{K_1/K}(X)$$ is the variety $$Y$$ over $$K$$ that represents the functor $$Res_{K_1/K} X: K\text{-}Sch \to Sets$$, $$S \mapsto X(S \otimes_K K_1)$$; in particular, $$Y(K) \cong X(K_1)$$.
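A standard example (added here for illustration): the Weil restriction of the affine line along $$\mathbb{C}/\mathbb{R}$$ is the real affine plane,

$W_{\mathbb{C}/\mathbb{R}}(\mathbb{A}^1_{\mathbb{C}}) \cong \mathbb{A}^2_{\mathbb{R}},$

since giving a $$\mathbb{C}$$-point $$z$$ amounts to giving the pair of real coordinates $$(\operatorname{Re} z, \operatorname{Im} z)$$; more generally, a polynomial equation $$f(z)=0$$ over $$\mathbb{C}$$ unpacks into the two real equations $$\operatorname{Re} f = \operatorname{Im} f = 0$$ in twice as many variables.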
Let $$X \to B$$ be as above, and let $$X_{\mathbb{R}} \to B_{\mathbb{R}}$$ be the induced Weil restriction for the field extension $$\mathbb{C}/\mathbb{R}$$. Pick a generator $$t$$ of the maximal ideal of $$\mathcal{O}_{B,O}$$. Let $$\widehat{\mathcal{O}_{B,O}} \to \mathbb{C}((t))$$ be some isomorphism, and let $$\mathrm{Spec}\,\mathbb{C}((t)) \to B$$ be the induced morphism of schemes. Embed $$\mathbb{C}((t)) \hookrightarrow {}^* \mathbb{C}$$ so that $$\mathbb{C}[[t]] \subset \mathcal{O}_{{}^* \mathbb{C}}$$ and $$v(t)=1$$. For any $$\alpha \in \mathbb{C}$$ such that $$|\alpha|=1$$, consider the composition of the isomorphism $$\widehat{\mathcal{O}_{B,O}} \xrightarrow{\sim} \mathbb{C}((t))$$ mentioned above with the automorphism of $$\mathbb{C}((t))$$ that sends $$t$$ to $$\alpha \cdot t$$, and with the fixed embedding $$\mathbb{C}((t)) \to {}^* \mathbb{C}$$, and call $$\eta^\alpha: \mathrm{Spec}\,{}^* \mathbb{C}\to B$$ the corresponding $${}^* \mathbb{C}$$-valued point of $$B$$. Let $$\eta^\alpha_\mathbb{R}: \mathrm{Spec}\,{}^* \mathbb{R}\to B_\mathbb{R}$$ be the $${}^* \mathbb{R}$$-valued point of $$B_\mathbb{R}$$ that corresponds to $$\eta^\alpha$$ via the identification $$B({}^* \mathbb{C}) \cong B_{\mathbb{R}}({}^* \mathbb{R})$$. Denote the respective fibres by $$\overline{X}^\alpha$$ and $$\overline{X}^\alpha_{{}^* \mathbb{R}}$$, and note that $$\overline{X}^\alpha({}^* \mathbb{C})$$ is naturally identified with $$\overline{X}^\alpha_\mathbb{R}({}^* \mathbb{R})$$.

Recall that a semi-algebraic subset of a variety $$Z$$ defined over a real closed field $$R$$ is a set of $$R$$-points of $$Z$$ that satisfy a boolean combination of polynomial equalities and inequalities (I refer to Chapter 7 of the book of Bochnak, Coste and Roy on real algebraic geometry for the necessary foundational material). For our purposes it suffices to know that for a semi-algebraic subset $$O \subset Z$$ the set of $$R'$$-points of $$O$$ makes sense over any real closed field extension $$R' \supset R$$, and that given a semi-algebraic set $$O \subset X_\mathbb{R}\to B_\mathbb{R}$$ and a map $$\mathrm{Spec}\,{}^* \mathbb{R}\to B_\mathbb{R}$$, one can define the semi-algebraic subset $$\overline{O}= O \otimes_{\mathbb{R}} {}^* \mathbb{R}$$ of $$\overline{X}_\mathbb{R}$$ (essentially by substituting the corresponding variable in the polynomial equations and inequalities defining $$O$$ by a value in $${}^* \mathbb{R}$$, but in a more invariant way). Let $$O \subset X_\mathbb{R}$$ be a semi-algebraic set such that its projection to $$B_\mathbb{R}$$ contains a punctured neighbourhood of $$O$$. Denote by $$\overline{O}^\alpha$$ the semi-algebraic subset of $$\overline{X}^\alpha$$ which is the fibre of $$O$$ over $$\eta^\alpha$$.

Lemma. Let $$U \subset \overline{X}^1$$ be a non-Archimedean semi-algebraic subset of $$\overline{X}$$, and denote its Galois conjugates by $$U^\alpha \subset \overline{X}^\alpha$$. Let $$O \subset X_\mathbb{R}$$ be a semi-algebraic set such that $$\overline{O}^\alpha({}^* \mathbb{R}) \subset U^\alpha({}^* \mathbb{C}) \subset \overline{X}^\alpha({}^* \mathbb{C})$$ for all $$\alpha \in \mathbb{C}$$, $$|\alpha| = 1$$. Let $$W$$ be a Zariski open neighbourhood of $$X_O$$ such that $$W_\eta \supset U$$, and let $$f$$ be a regular function on $$W$$. Assume one of the following:

• $$\inf_{x \in U} v(f(x)) > 0$$

• $$\inf_{x \in U} v(f(x)) \geq 0$$

Then, respectively,

• $$\sup_{x \in O_s} |f(x,s)| \to 0$$ as $$s \to 0$$

• there exists $$C_1$$ such that $$\sup_{x \in O_s} |f(x,s)| \leq C_1$$ for $$|s|$$ sufficiently small.

Proof. It is clear that the premises of the lemma also hold for the conjugates $$U^\alpha$$. Assume that $$\inf_{x \in U} v(f(x)) > 0$$ but there exists $$\epsilon > 0$$ such that for all $$\delta > 0$$ there exists $$s_\delta$$ with $$|s_\delta| < \delta$$ and $$|f(x,s_\delta)| > \epsilon$$ for some $$x \in O_{s_\delta}$$. The formula $\varphi_{\epsilon,n} (s) = \exists x \in O_\mathbb{R}\ (|f(x,s)| > \epsilon \land |s| < 1/n)$ is then satisfiable for all values of $$n$$, and therefore, if $${}^* \mathbb{R}$$ is saturated enough, there exists a point $$s^* \in B({}^* \mathbb{R})$$ such that $$\varphi_{\epsilon,n}(s^*)$$ holds for all $$n \in \mathbb{N}$$. Without loss of generality we may then assume that $$(O_\mathbb{R})_{s^*}({}^* \mathbb{R}) \cong \overline{O}^\alpha({}^* \mathbb{R})$$ for some $$\alpha$$, and so there exists $$x \in \overline{O}^\alpha({}^* \mathbb{R})$$ such that $$|f(x)| > \epsilon$$. But this would mean that $$v(f(x)) \leq 0$$ for some $$x \in U^\alpha$$, which contradicts the premise. The second claim is proved similarly. $$\Box$$

Assume now that $$\sim$$ is a definable equivalence relation on $$X({}^* \mathbb{C})$$ that is locally given by $x \sim y \textrm{ iff } v(f(x)) = v(f(y))$ for some holomorphic function $$f$$. Assume that for any semi-algebraic set $$O$$ whose $${}^* \mathbb{C}$$-valued points are pairwise equivalent we have $$d(x_s, y_s) \to 0$$ for $$x, y \in O$$, and assume that the diameter of $$X_s$$ stays bounded as $$s \to 0$$.

Lemma. For any equivalence classes $$[x], [y] \subset \overline{X}({}^* \mathbb{C})$$ the limit of $$d_s(x_s, y_s)$$ as $$s \to 0$$, where $$x_s \in [x]$$ and $$y_s \in [y]$$, is well-defined. Let the limit function on $$\overline{X}/\sim$$ be $$\bar d$$.

Proposition. The metric space $$(\overline{X}/\sim, \bar d)$$ is the Gromov-Hausdorff limit of $$X_s$$ as $$s \to 0$$.

Proof sketch. For any $$\epsilon$$, consider an $$\epsilon/2$$-net $$F \subset \overline{X}$$, so that $$d_H(F, \overline{X}/\sim) < \epsilon/2$$. We need to show that $$d_H(X_s, F) < \epsilon/2$$ for $$s$$ sufficiently small. Let $$F=\{F_1, \ldots, F_n\}$$. Pick semi-algebraic sets $$W_i \subset X$$ such that $$W_i \cap \overline{X}\subset [F_i]$$; then for points $$x \in W_i$$, $$y \in W_j$$, $$d(x,y) \to \bar d(F_i, F_j)$$ as $$s \to 0$$. Therefore, for $$|s|$$ small enough, $$\mathrm{dist}(X_s, F) < \epsilon/2$$.

Berkovich spaces, dual complexes and the weight function

In order to describe the limit we will need some background on the geometry of Berkovich spaces, and of curves in particular. Let $$K$$ be a field complete with respect to a non-Archimedean absolute value $$|\cdot|$$, and let $$R = \{x \in K \mid |x| \leq 1\}$$ be the valuation ring.
Given a variety $$X/K$$ we can construct the Berkovich analytification $X^{an} = \{ (x, \|\cdot\|) \mid x \in X,\ \|\cdot\|: \kappa(x) \to \mathbb{R}_{\geq 0}\}$ (here $$x$$ is a schematic point and $$\|\cdot\|$$ is a multiplicative semi-norm on the residue field extending the absolute value on $$K$$, where "semi-" means that it is allowed to take the value zero on non-zero elements of the field). The topology on $$X^{an}$$ is the weakest one such that the evaluation maps $$|f|: U^{an} \to \mathbb{R}$$, $$(x, \|\cdot\|_\xi) \mapsto \|f(x)\|_\xi$$ (for $$f \in H^0(U, \mathcal{O}_X)$$ and all Zariski opens $$U \subset X$$) are continuous.

A model $$Y$$ of $$X$$ is a flat $$R$$-scheme such that $$Y \otimes_R K \cong X$$. We define the specialization map $$sp: X^{an} \to Y_s$$ (denoting by $$Y_s$$ the fibre of $$Y$$ over the closed point of $$\mathrm{Spec}\,R$$) to be the map that sends a point $$\xi=(x, \|\cdot\|) \in X^{an}$$ to the point of $$Y_s$$ designated in the diagram below. In this diagram the morphism $$\xi$$ from the spectrum of the completed residue field $$\mathcal{H}(\xi)$$ to $$X$$ is lifted to a morphism from $$\mathrm{Spec}\,\mathcal{H}(\xi)^\circ$$ to $$Y$$ (such a lift exists if $$X$$ is proper, by the valuative criterion of properness), and $$\operatorname{sp}(\xi)$$ is the image of the closed point of this scheme in $$Y_s$$.

$\begin{array}{ccc} \mathrm{Spec}\ \mathcal{H}(\xi) & \xrightarrow{\ \xi\ } & X \\ \downarrow & & \downarrow \\ \mathrm{Spec}\ \mathcal{H}(\xi)^\circ & \xrightarrow{\ \xi^\circ\ } & Y \\ \uparrow & & \uparrow\\ \mathrm{Spec}\ \tilde{\mathcal{H}}(\xi) & \xrightarrow{\ \operatorname{sp}(\xi)\ } & Y_s \\ \end{array}$

Assuming that $$Y$$ is a model of $$X$$ with $$Y_s = \sum_{i=1}^n N_i Y_i$$, and $$\eta_1, \ldots, \eta_n$$ are the generic points of the irreducible components $$Y_1, \ldots, Y_n$$, the points $$\operatorname{sp}^{-1}(\eta_1), \ldots, \operatorname{sp}^{-1}(\eta_n)$$ are the vertices of the dual intersection complex of $$Y_s$$, naturally embedded into $$X^{an}$$. The image of the dual intersection complex is called the skeleton of $$Y$$, $$\operatorname{Sk}(Y)$$. Moreover, if $$Y_s$$ is a strictly normal crossing divisor then there exists a deformation retraction of $$X^{an}$$ onto $$\operatorname{Sk}(Y)$$.

If $$X$$ is a curve over an algebraically closed non-Archimedean field $$K$$ then by the semi-stable reduction theorem there always exists a model $$Y$$ of $$X$$ with snc special fibre, and $$\Sigma_Y = \{\operatorname{sp}^{-1}(\eta_1), \ldots, \operatorname{sp}^{-1}(\eta_n)\}$$ forms a semi-stable vertex set, that is, $X^{an} \setminus \Sigma_Y = \bigsqcup_{i=1}^n A_i \sqcup \bigsqcup_{j=1}^m B_j$ where $$A_i \cong \{ x \in (\mathbb{A}^1)^{an} \mid r_i < |x| < s_i \}$$ are annuli and $$B_j \cong \{x\in (\mathbb{A}^1)^{an} \mid |x| < s_j \}$$ are discs. The skeleton of $$Y$$ can be described as $\operatorname{Sk}(Y) = \Sigma_Y \sqcup \bigsqcup \operatorname{Sk}(A_i)$ where the $$\operatorname{Sk}(A_i)$$ are defined as follows. The points of an annulus are classified into the so-called points of types I, II, III and IV (and if $$K$$ has the property of being "spherically complete", there are no type IV points). Points of types I-III are in bijective correspondence with closed balls $$\{x \mid |x - x_0| \leq r\}$$ (radius $$r = 0$$ allowed) via $\|f\|_{\xi_{x_0,r}} = \max_{|x-x_0| \leq r} |f(x)|$ The skeleton of $$A_i$$ is defined to be the set of points $$\{\xi_{0,\rho}\}_{r_i < \rho < s_i}$$. There is a natural metric on $$A_i$$ in which $$\operatorname{Sk}(A_i)$$ is an interval of length $$\log s_i - \log r_i$$.
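For instance (a standard computation added for illustration), on the point $$\xi_{0,r}$$ the semi-norm of a polynomial can be read off from its coefficients:

$\Big\|\sum_n a_n x^n\Big\|_{\xi_{0,r}} = \max_n |a_n|\, r^n,$

the Gauss norm of radius $$r$$; checking that this semi-norm is multiplicative is a pleasant exercise with the ultrametric inequality.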
Shape of the limit

If $$Y$$ is an snc model of $$X$$ and $$\eta \in Y_s$$ is the generic point of a component of the special fibre, $$\operatorname{sp}^{-1}(\eta)$$ is called a divisorial point. Such points are dense in $$X^{an}$$. The weight function associated to a (pluri-)canonical form $$\Omega$$ is defined on divisorial points as follows: $\operatorname{wt}_{\Omega}(\operatorname{sp}^{-1}(\eta_i)) = \frac{1 + \operatorname{ord}_{Y_i}(\operatorname{div}_Y(\Omega))}{N_i}$ (where $$Y_s = \sum N_i Y_i$$ is the decomposition of the central fibre).

Theorem (Nicaise-Xu). This definition does not depend on the particular model $$Y$$. The function $$\operatorname{wt}_\Omega$$ extends to the whole of $$X^{an}$$ by continuity.

A different definition, due to Temkin, is as follows: $\operatorname{wt}_\Omega(x) = 1 - \log \inf_{\Omega_{\mathcal{H}(x)} = \sum a_i db_i} \max_i |a_i| |b_i|$ where $$\Omega_{\mathcal{H}(x)}$$ is the image of $$\Omega$$ in the Kähler module of differentials of the residue field $$\mathcal{H}(x)$$. On the unit disc one can observe that $$\operatorname{wt}_{dx}(x) = 1 - \log r(x)$$, where $$r(x)$$ is the radius function: $$r(\xi_{x_0,\rho}) = \rho$$ on points of types I-III, and in general $$r(x)$$ is the infimum of the radii of closed balls whose associated points dominate $$x$$.

For what follows, denote the minimality locus of $$\operatorname{wt}_\Omega$$ by $$\operatorname{Sk}_\Omega(X)$$.

Theorem (Nicaise-Xu, Temkin). $$\operatorname{Sk}_\Omega(X) \subset \operatorname{Sk}(Y)$$ for any snc model $$Y$$.

Now back to our problem: describing the Gromov-Hausdorff limit of a degeneration of curves $$X$$ with the metric cooked up from the 1-form $$\Omega$$. Pick an snc model $$Y$$ such that $$\operatorname{div}(\Omega)$$ does not intersect the nodes of $$Y_s$$, and define the following equivalence relation on $$\operatorname{Sk}(Y) \subset X^{an}$$: $x \sim y \textrm{ iff } x = y \textrm{ or } \exists \textrm{ path } \gamma: [0,1] \to \operatorname{Sk}(Y) \textrm{ from } x \textrm{ to } y \textrm{ such that } \gamma^{-1}(\operatorname{Sk}_\Omega(X)) \subseteq \{0,1\}$ For each $$\operatorname{Sk}(A_i) \subset \operatorname{Sk}(Y)$$ define $d_i = \left\{\begin{array}{ll} |c| & \textrm{ if } \Omega = c\, t^k y_i^{-1} (1+f_i)\, dy_i \textrm{ on } A_i, \textrm{ where } k=\min_{x\in X^{an}} \operatorname{wt}_\Omega(x),\\ 0 & \textrm{ otherwise, } \end{array} \right.$ and consider $$\operatorname{Sk}(Y)/\sim$$ with the standard metric with the length of the edge $$\operatorname{Sk}(A_i)$$ multiplied by $$d_i$$.

Theorem (S.). The metric graph $$\operatorname{Sk}(Y)/\sim$$ (with the metric normalized so that its diameter is 1) is the Gromov-Hausdorff limit of $$(X_t, \frac{i}{2}\,\Omega_t \wedge \bar\Omega_t)$$ as $$t \to 0$$.
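To get a feel for the statement, here is an illustration (added here, and not spelled out above). In genus 1 the form has no zeroes ($$2g-2=0$$), and a degenerating family of elliptic curves can be written as

$X_t \cong \mathbb{C}/(\mathbb{Z}+ \tau(t)\mathbb{Z}), \qquad \operatorname{Im}\tau(t) \to \infty, \qquad \Omega_t = dz,$

so the $$X_t$$ are flat tori with one side of length 1 and the other of length $$\operatorname{Im}\tau(t)$$. After rescaling the metric so that the diameter is 1, the circles in the $$\mathbb{Z}$$-direction become short and collapse, and the tori Gromov-Hausdorff converge to a circle; this matches the theorem, since the skeleton of the Tate curve is a circle whose single edge carries $$d_i \neq 0$$.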
2023-03-21 14:03:44
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9680883884429932, "perplexity": 87.21257250902063}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296943698.79/warc/CC-MAIN-20230321131205-20230321161205-00099.warc.gz"}
https://math.stackexchange.com/questions/3192622/sheaf-of-a-closed-subset
# Sheaf of a Closed Subset

I’ve been given the following definition: Let $$(X,\mathcal{O}_X)$$ be a ringed space which is locally isomorphic to an affine algebraic variety, and let $$Y\subseteq X$$ be closed. Then for an open $$V\subseteq Y$$, set \begin{align*} \mathcal{O}_{0,Y}(V)=\{f:V\to k\mid{}&\exists U\subseteq X\text{ open such that }U\cap Y=V\\ &\text{and } g\in\mathcal{O}_X(U)\text{ such that } g\vert_V=f\} \end{align*}

This defines a presheaf $$\mathcal{O}_{0,Y}$$ on $$Y$$, but not, in general, a sheaf. However, I’m struggling to come up with an example where this fails to be a sheaf.

I thought I'd found a counterexample with $$X=\mathbb{C}^2$$, $$Y=V(xy)$$, $$U=D(x)\cap Y$$ and $$V=D(y)\cap Y$$. Then $$U\cap V=\varnothing$$, and so if $$\mathcal{O}_{0,Y}$$ were a sheaf, then we would be able to glue to make a function on $$U\cup V$$ which is, say, $$1$$ on $$U$$ and $$-1$$ on $$V$$. I can show that we can't get such a function from gluing two functions on $$D(x)$$ and $$D(y)$$, but we can take $$\frac{x+y}{x-y}$$ on $$D(x-y)$$ to give the required function. So it isn't enough to just check the 'obvious' open cover, and I haven't yet been able to find a counterexample which works for every one. Any help would be much appreciated.

• I haven't checked fully, but perhaps $$X$$ is the affine line with doubled origin and $$Y$$ is the closed subset consisting of the two origins $$\{o_1,o_2\}$$. My gut says that $$\mathcal{O}(o_1)\cong\mathcal{O}(o_2)\cong \mathcal{O}(Y)\cong k$$, but if it were a sheaf you'd have that $$\mathcal{O}(Y)\cong k^2$$. Just a thought. Apr 23, 2019 at 2:39
This is kind of messy and could probably be written better, but I think this works:

Take $$X$$ to be the line with "double origin", that is, we define $$X$$ by gluing two copies of $$\Bbb A^1$$ together along the open subset $$\Bbb A^1\smallsetminus\{0\}$$ (using the identity as the isomorphism along which we identify these open subsets). Because I will want to refer to the copies of $$\Bbb A^1$$, let $$X_0$$ and $$X_1$$ denote our two copies of $$\Bbb A^1$$, which are now naturally identified with open subsets of $$X$$, and let $$O_0,O_1$$ denote the two "origins", so $$O_i\in X_i$$. Note now that $$\{O_0\}=X\smallsetminus X_1$$, so $$O_0$$ is a closed point, and similarly $$O_1$$ is a closed point. Take $$Y:=\{O_0,O_1\}$$, which is then a closed subset whose subspace topology is the discrete topology, so $$\{O_0\}$$ and $$\{O_1\}$$ are open subsets of $$Y$$. Then we can look at the constant map $$f_i:\{O_i\}\to k$$ sending $$O_i\mapsto i$$, and it's not hard to check this gives us an element of $$\mathcal O_{0,Y}(\{O_i\})$$.

We claim $$f_0,f_1$$ will not glue to an element of $$\mathcal O_{0,Y}(Y)$$. If they do, say $$f\in\mathcal O_{0,Y}(Y)$$, then by definition there is an open subset $$U$$ of $$X$$ which contains $$Y$$ and an element $$g\in\mathcal O_X(U)$$ such that $$g|_Y=f$$. But by definition of a sheaf, because $$X_0$$ and $$X_1$$ cover $$X$$, an element $$g\in\mathcal O_X(U)$$ is the same thing as a pair of elements $$g_i\in\mathcal O_X(U\cap X_i)$$ for $$i=0,1$$ which are equal on the intersection. Now, because $$X_0$$ is really just $$\Bbb A^1$$, the complement of $$U\cap X_0$$ in $$X_0$$ is a finite set of points $$a_1,\dots,a_m$$. Therefore $$g_0$$ is just a ratio of two polynomials whose denominator vanishes only at (some of) the points $$a_i$$, and we can write $$g_1$$ in the same way. But these two agree on the overlap, which consists of infinitely many points (we should assume we are over $$\Bbb C$$ or any other algebraically closed field here), so you can use this to conclude that the expressions which define $$g_0$$ and $$g_1$$ as rational functions are in fact equal (you should use the fact that if two polynomials agree on an infinite subset, then they are equal), so $$g_0$$ and $$g_1$$ are equal everywhere they are defined (and in particular, on $$Y$$). But we also have $$g_i|_{\{O_i\}}=(g|_{X_i})|_{\{O_i\}}=g|_{\{O_i\}}=(g|_Y)|_{\{O_i\}}=f|_{\{O_i\}}=f_i,$$ and because $$f_0$$ and $$f_1$$ take different values at our two origins this is impossible.

• Lol, I think we posted our comment/answer literally simultaneously. Apr 23, 2019 at 2:41
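To restate the core of the counterexample in one line (a summary added here, not part of the original thread): the sheaf axiom for the cover $$Y=\{O_0\}\cup\{O_1\}$$, whose overlap is empty, would force the restriction map

$\mathcal O_{0,Y}(Y) \longrightarrow \mathcal O_{0,Y}(\{O_0\})\times \mathcal O_{0,Y}(\{O_1\}) \cong k\times k$

to be surjective, whereas the argument above shows that every global section takes the same value at the two origins, so the image is only the diagonal copy of $$k$$.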
2022-08-11 01:58:44
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 73, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9744178652763367, "perplexity": 77.78709157382853}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571232.43/warc/CC-MAIN-20220811012302-20220811042302-00050.warc.gz"}
https://tradelab.com/my-data/klia81/article.php?id=huber-loss-function-in-r-666fec
# huber loss function in r

What are loss functions, and how do they work in machine learning algorithms? The group of functions that are minimized are called "loss functions": in machine learning, the final purpose relies on minimizing (or maximizing) an objective function. The Huber loss is a robust loss function used for a wide range of regression tasks. It is quadratic for small residual values and linear for large residual values, so it offers the best of both worlds by balancing the MSE and MAE together, and it is less sensitive to outliers than the squared error. It is defined as

$\rho_k(x) = \begin{cases} \dfrac{x^2}{2}, & |x| \leq k \\[4pt] k|x| - \dfrac{k^2}{2}, & |x| > k \end{cases}$

with the corresponding influence function being

$\psi(x) = \rho_k'(x) = \begin{cases} -k, & x < -k \\ x, & |x| \leq k \\ k, & x > k \end{cases}$

Here $$k$$ is a tuning parameter (often written $$\delta$$), which defines the boundary where the loss function transitions from quadratic to linear, i.e. the threshold between the Gaussian and Laplace loss functions. Deciding which loss function to use: if the outliers represent anomalies that are important for business and should be detected, then we should use MSE; on the other hand, if we believe that the outliers just represent corrupted data, then we should choose MAE as loss. A classic illustration is fitting a simple linear model to data which includes outliers (data from table 1 of Hogg et al. 2010), comparing linear regression with the squared loss (equivalent to ordinary least-squares regression) against the Huber loss with $$c = 1$$ (i.e., beyond 1 standard deviation the loss becomes linear). A direct R implementation of the definition follows below.
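To make the definition concrete, here is a minimal sketch in plain R of the formula above; the function name and the sample residuals are made up for illustration:

```r
# Huber loss rho_k averaged over a vector of residuals.
# k is the quadratic/linear changepoint (often called delta);
# k = 1.345 is the classic tuning constant giving ~95% efficiency
# under Gaussian errors.
huber <- function(resid, k = 1.345) {
  a <- abs(resid)
  mean(ifelse(a <= k, 0.5 * a^2, k * a - 0.5 * k^2))
}

huber(c(0.2, -0.5, 3), k = 1)  # mixes quadratic and linear pieces
```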
A common practical question: I'm using the GBM package for a regression problem and I would like to test the Huber loss function. I thought the "huberized" distribution was the right one, but it is only for 0-1 outcomes, so it does not give a Huber loss for regression. Any idea which option corresponds to the Huber loss? And how do you set the cutting edge parameter? I will try alpha, although I can't find any documentation about it. Many thanks for your suggestions in advance.

One workable answer, if you are open to Keras instead: you can wrap TensorFlow's tf.losses.huber_loss in a custom Keras loss function and then pass it to your model. The reason for the wrapper is that Keras will only pass y_true and y_pred to the loss function, and you likely want to also use some of the many parameters of tf.losses.huber_loss (such as delta), so you'll need some kind of closure; see the sketch below. Two related remarks: Huber loss will clip gradients to delta for residual (absolute) values larger than delta, and clipping the gradients is in any case a common way to make optimization stable (not necessarily with Huber). Matched together with reward clipping (to the [-1, 1] range as in DQN), the Huber loss converges to the correct mean solution, so it is indeed a valid loss function in Q-learning.
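Here is a minimal sketch of such a closure for the R interface to Keras, written against the classic backend functions (k_abs and friends); make_huber_loss is an illustrative name, not a function from the package, and the body uses the identity Huber(e) = q^2/2 + delta*(|e| - q) with q = min(|e|, delta) to avoid branching:

```r
library(keras)

# Returns a Keras-compatible loss function with delta baked in via a
# closure, since Keras itself only passes y_true and y_pred to the loss.
make_huber_loss <- function(delta = 1.0) {
  function(y_true, y_pred) {
    error     <- y_true - y_pred
    abs_error <- k_abs(error)
    q         <- k_minimum(abs_error, delta)  # q = min(|e|, delta)
    # 0.5*q^2 + delta*(|e| - q) equals the piecewise Huber formula
    k_mean(0.5 * k_square(q) + delta * (abs_error - q))
  }
}

# Usage sketch:
# model %>% compile(optimizer = "adam", loss = make_huber_loss(delta = 2))
```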
The yardstick package also provides the Huber loss as an evaluation metric: huber_loss(data, truth, estimate, delta = 1, na_rm = TRUE, ...) and huber_loss_vec(truth, estimate, delta = 1, na_rm = TRUE, ...) calculate the Huber loss, a loss function used in robust regression that is less sensitive to outliers than rmse(). Here truth is the column identifier for the true results (that is numeric) and estimate is the column identifier for the predicted results (that is also numeric); these should be unquoted column names, although the arguments are passed by expression and support quasiquotation (you can unquote column names). delta is a single numeric value, defaulting to 1, that defines the boundary where the loss function transitions from quadratic to linear, and na_rm is a logical value indicating whether NA values should be stripped before the computation proceeds. The result is a tibble with columns .metric, .estimator, and .estimate and 1 row of values; for grouped data frames, the number of rows returned will be the same as the number of groups, and for the _vec() variants a single numeric value (or NA) is returned. A usage sketch follows below.

When the Huber function is minimized directly, the program iterates: compute the gradient, take a step whose length is given by a line search, update the solution and the approximate Hessian, and repeat. Because the Huber function is not twice continuously differentiable, the Hessian is not computed directly but approximated using a limited-memory BFGS update (Guitton). The same smoothness concern is one reason to prefer the Huber loss over the MAE, since the MAE is not continuously twice differentiable either.
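A usage sketch for the yardstick functions described above; the data frame here is made up for illustration:

```r
library(yardstick)

# Hypothetical observed/predicted values
df <- data.frame(obs  = c(1.2, 2.4, 3.1, 0.5),
                 pred = c(1.0, 2.0, 4.5, 0.7))

huber_loss(df, truth = obs, estimate = pred, delta = 1)  # tibble output
huber_loss_vec(df$obs, df$pred, delta = 1)               # bare numeric
```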
For classical robust regression in R (MASS::rlm), psi functions are supplied for the Huber, Hampel and Tukey bisquare proposals as psi.huber, psi.hampel and psi.bisquare, with fitting done by iterated re-weighted least squares (IWLS). Huber's proposal corresponds to a convex optimization problem and gives a unique solution (up to collinearity); the other two will have multiple local minima, and a good starting point is desirable. Selecting method = "MM" selects a specific set of options which ensures that the estimator has a high breakdown point. In penalized robust-regression packages the loss is instead chosen via method, either "huber" (the default), "quantile", or "ls" for least squares (see the package details); the tuning parameter of the Huber loss is gamma, with no effect for the other loss functions, and its default value is IQR(y)/10. The same loss also appears as a custom objective for gradient boosting: implementing the Huber loss function in XGBoost, for example, comes down to supplying its gradient and Hessian. Finally, the Pseudo-Huber loss function can be used as a smooth approximation of the Huber loss function; it ensures that derivatives are continuous for all degrees. An rlm example follows below.

Reference: Huber, P. (1964). Robust Estimation of a Location Parameter. Annals of Mathematical Statistics, 35(1), 73-101.
If it is only for 0-1 output Jun 26, 2018 have to deal with the fact that the has... Summed up along the second axis huber loss function in r i.e a tibble with columns.metric,.estimator, and good. Edge parameter Sep 2017 | loss function can be specified different ways but primary... I wonder whether i can use ��� the Pseudo-Huber loss function ensures that derivatives are continuous for degrees. '', or ls '' for least squares ( IWLS ) the group of functions are! With truth this can be specified different ways but the primary method is to use an unquoted variable name names! For _vec ( ) like us to cover, just email us absolute function is sensitive! Used as a smooth approximation of the option reduce of groups MAE is not continuously twice differentiable also. Huber loss, or simply Log loss, huber loss function in r numeric vector up to collinearity ) to pass arguments... Distribution, but it is 'sum_along_second_axis ', loss values: loss functions MiB! ( not necessarily with Huber ) '' value for the other loss functions are supplied for the predicted (... We should choose MAE as loss sensitive to outliers than huber loss function in r ( ) ! Defines the boundary where the loss is a steplength given huber loss function in r a Line Search.. In the model and you would like to test the Huber loss function often used an. Ensures that derivatives are continuous for all degrees and linear for large residual values starting point.. Options whichensures that the outliers just represent corrupted data, then we should choose MAE as loss instantiation,... Questions or there any machine learning topic that you would like us to cover, email! The other loss functions a variable whose value depends on the value of the Huber loss.. Is 'no ', loss values are summed up along the second axis ( i.e as function handles e.g., the number of rows returned will be the same as the number of rows returned will be the as... Click here to upload your image ( max 2 MiB ) is passed by expression and supports quasiquotation ( can! You set the cutting edge parameter are minimized are called ���loss functions��� years! Loss class ( e.g a good starting point isdesirable then caused only by incorrect approximation of Huber! Psi.Huber, psi.hampel andpsi.bisquare if we believe that the estimator has a high breakdown.., 53 ( 1 ), quantile '', or ls '' for least squares ( Details! ### Inscreva-se para receber nossa newsletter * Ces champs sont requis * This field is required * Das ist ein Pflichtfeld * Este campo es obligatorio * Questo campo è obbligatorio * Este campo é obrigatório * This field is required Les données ci-dessus sont collectées par Tradelab afin de vous informer des actualités de l’entreprise. Pour plus d’informations sur vos droits, cliquez ici Tradelab recoge estos datos para informarte de las actualidades de la empresa. Para más información, haz clic aquí Questi dati vengono raccolti da Tradelab per tenerti aggiornato sulle novità dell'azienda. Clicca qui per maggiori informazioni ### Privacy Preference Center #### Technical trackers Cookies necessary for the operation of our site and essential for navigation and the use of various functionalities, including the search menu. ,pll_language,gdpr #### Audience measurement On-site engagement measurement tools, allowing us to analyze the popularity of product content and the effectiveness of our Marketing actions. _ga,pardot
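For concreteness, here is a minimal sketch of that closure, assuming the TensorFlow 1.x API (where tf.losses.huber_loss(labels, predictions, ..., delta=...) exists); the get_huber_loss_fn name and the compile call are illustrative, not from the original thread:

```python
import tensorflow as tf

def get_huber_loss_fn(**huber_loss_kwargs):
    # Keras only hands (y_true, y_pred) to a loss function, so we close over
    # any extra arguments (e.g. delta) and forward them to tf.losses.huber_loss.
    def custom_huber_loss(y_true, y_pred):
        return tf.losses.huber_loss(y_true, y_pred, **huber_loss_kwargs)
    return custom_huber_loss

# Hypothetical usage with an already-built Keras model:
# model.compile(optimizer="adam", loss=get_huber_loss_fn(delta=0.5))
```

The closure pattern also makes it easy to try several values of delta, since each value yields a fresh loss function without touching the training loop.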
2021-04-13 01:28:50
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 2, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.49040913581848145, "perplexity": 2584.5916307771063}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038071212.27/warc/CC-MAIN-20210413000853-20210413030853-00078.warc.gz"}
https://postheaven.net/profitcoast2/the-union-of-x-and-y-is-the-set-of-components-which-are-in-either-x-or-y
September 5, 2021

# The union of X and Y is the set of elements which are in either X or Y

Recall that the intersection of X and Y is the set of elements which are in both X and Y. Similarly, we have the lightweight Python integer type int that we might want instead of the SageMath integer type for non-mathematical operations. When programmatically processing sequences of unicode characters it is a lot safer to work with repr for the canonical string representation of the object. You can assign values to multiple variables on the same line, by separating the assignment expressions with a semicolon ;. Python has some interesting extra operators that you can use with Python floating-point numbers, which also work with the Sage rings integer type but not with Sage real literals. Similarly, the intersection of two sets $A$ and $B$, written as $$\boxed{A \cap B = \{\, x : x \in A \text{ and } x \in B \,\}},$$ is the set of elements that belong to both $A$ and $B$. Try assigning some values to two variables: you choose what values and you choose what variable names to use. Try checking whether they are equal, or whether one is less than the other. You can also create a string by enclosing it in single quotes or three consecutive single quotes. Furthermore, it's easier to pick up the high-level methods later on. Recall that Y is a subset of X if every element in Y is also in X. Sets are perhaps the most basic concept in mathematics. Let us motivate the Python methods we will see soon by using them below to plot the number of occurrences of 'he' and 'she' in each of the 61 chapters of the book. For now, we will just show how to obtain the most popular e-book from the project and show its contents for processing down the road. Python also provides something called a frozenset, which you can't change like an ordinary set. Fruit and colours are different to us as people, but to the computer, the string 'orange' is just the string 'orange' whether it is in a set called fruit or a set called colours. We can use these operators on variables as well as on values. Again, try assigning different values to x and y, or try using different operators, if you want to. When an expression has nested parentheses, i.e., one pair of parentheses inside another pair, the expression inside the inner-most pair of parentheses is evaluated first. When operators are on the same level in the list above, what matters is the evaluation order. Sometimes we need to carry out more than one arithmetic operation with some given integers. Try evaluating the cell containing 1+2 below by placing the cursor in the cell and pressing Shift-Enter. As we learn more we'll return to this popular book's unicode, which is stored in our data directory as data\prideandprejudice.txt. Generally it's safe to convert strings from natural languages to unicode in Python/SageMath. On the next line down in the same cell, assign the value three to a variable named y. For now, let's compare the results of evaluating the expressions below to the equivalent expressions using rational numbers above.
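A short illustration of these set operations in plain Python (this snippet is my own sketch, not from the original notebook):

```python
fruit = {"orange", "apple", "pear"}
colours = {"orange", "red", "blue"}

print(fruit | colours)     # union: elements in either set
print(fruit & colours)     # intersection: elements in both sets -> {'orange'}
print({"apple"} <= fruit)  # subset test: True, every element of {"apple"} is in fruit

frozen = frozenset(fruit)  # immutable: frozen.add("kiwi") would raise AttributeError
```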
2022-06-26 10:19:05
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.47737643122673035, "perplexity": 644.0231150240797}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103205617.12/warc/CC-MAIN-20220626101442-20220626131442-00791.warc.gz"}
http://math.stackexchange.com/questions/74117/history-of-solving-linear-equations-with-matrices
# History of solving linear equations with matrices

I'm solving linear equations with matrices right now, and I wonder how it started. Who came up with the idea that such equations could be solved with matrices, and how, and why? Which came first: the matrix or the linear equation? How did they find each other?

- It is interesting to note that the Chinese knew about Gaussian elimination way before Gauss started to think about his algorithm. See this for instance. – J. M. Oct 20 '11 at 0:51
- This information may be incorrect at times; I think I got it from E. T. Bell's Men of Mathematics and a (French) exercise book. Gauss already used 3-by-3 arrays of numbers to describe maps, and I think his student Eisenstein introduced the notation $\frac{1}{S}$ to denote the inverse of $S$, but that notation was later abandoned. – Olivier Bégassat Oct 20 '11 at 0:54
- You can also have a look at this – user13838 Oct 20 '11 at 1:06

These links: http://ualr.edu/lasmoller/matrices.html and http://darkwing.uoregon.edu/~vitulli/441.sp04/LinAlgHistory.html have some info on this :)

- Lissa, maybe you could let us know what you find unclear in those references and then we could help. – Gerry Myerson Oct 20 '11 at 2:00
2015-11-27 01:41:32
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9038544297218323, "perplexity": 671.2770594458174}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-48/segments/1448398447881.87/warc/CC-MAIN-20151124205407-00157-ip-10-71-132-137.ec2.internal.warc.gz"}
https://physics.stackexchange.com/questions/594922/is-it-right-regarding-dm-m-ldx
# Is it right regarding $dm = \frac{M}{L}\,dx$?

So, I broke down the meaning of this equation and have got stuck at one point. I have not done rotational mechanics and have newly started with centre of mass, so I'm getting a bit confused with this new equation.

Let us say we have a bar of length L. Now take a small segment dx in it. We know mass per unit length $= M/L$. This means if M=10 and L=2, it tells me that there are 5 rods or segments of length 2 metres with mass 2 kg. If we say dx is L for a moment, it would mean that M*L, that is $10 \times 2$, gives me 2 rods that have mass 10 kg. This meaning gives me 2 new rods; it gives the mass of each rod.

Now, with dx = 0.1: $10/2 = 5$ gives 5 segments of 2 m, and multiplying by 0.1 gives 0.5 kg for dm. Now I can't relate this to what I wrote earlier, since it is M/L and not M. Does it mean that a 0.1-metre piece of a rod of length 2 m has mass 0.5 kg?

Please tell me how I can improve the quality of my question if it is not good.

• Can you edit the question and add math formatting for clarity? – JAlex Nov 19 at 16:20

Understanding the physical meaning behind math expressions can be confusing in physics because there are many ways to arrange terms. One way to deduce the meaning is by looking at units.

> This means if M=10 and L=2, it tells me that there are 5 rods or segments of length 2 metres with mass 2 kg.

This would not be the case, because joining 5 segments each of length 2 metres would produce 10 metres, which is longer than the initial bar of length L=2. In terms of units, we cannot say there are 5 rods of 2 metres, because the "5" has units of $\frac{M\,(\mathrm{kilogram})}{L\,(\mathrm{metre})} = \mathrm{kg/m}$, while a "number of rods" should not have units, because it is a pure number.

> If we say dx is L for a moment, it would mean that M*L, that is $10 \times 2$, gives me 2 rods that have mass 10 kg.

If dx were L, ML would not represent 2 rods each of mass 10 kg. This is also because of units: ML has units of $M\,(\mathrm{kilogram}) \times L\,(\mathrm{metre}) = \mathrm{kg \cdot m}$, while "2 rods of mass 10 kg" should have units of kilogram, because the number of rods is a unitless number and mass has units of kilogram.

The way I understand $dm = \frac{M}{L}\,dx$ is that there is a bar with uniform mass per unit length M/L, and dx is some length within the bar. When we multiply the length dx by the mass per unit length, we get the mass dm. When we add all the segments of length dx we get back the original length L, and when we add the masses dm of all the segments we get M.

The basic equation here is
$$\mathrm{d}m = \rho\, \mathrm{d}V$$
The interpretation is that the mass contained inside a small infinitesimal volume is calculated from the mass density $\rho$ at that location. For a slender bar of length $\ell$ and cross-section area $A$ you can specify $\mathrm{d}V = A\, \mathrm{d}x$ with $x = 0 \ldots \ell$, which makes the infinitesimal mass
$$\mathrm{d}m = \rho A\, \mathrm{d}x$$
This literally means that a slice $\mathrm{d}x$ contains mass $\mathrm{d}m$. If the cross-section is constant, then the above is integrated into
$$m = \rho A \ell \quad\text{or}\quad \rho A = \frac{m}{\ell}$$
So for those cases, you can rewrite the infinitesimal mass as
$$\mathrm{d}m = \frac{m}{\ell}\, \mathrm{d}x$$
This still means the mass inside a slice. The $\frac{m}{\ell}$ part is sometimes called a linear mass density, which has units of mass/length.

The generic way of looking at this relationship is
$$\mathrm{d}m = \lambda \, \mathrm{d}x$$
To relate this with math, consider a line $y = ax + b$ and take a small slice in $x$ to find the rise
$$\mathrm{d}y = a\, \mathrm{d}x$$
Now the meaning is: the amount of $y$ contained in a slice of $x$.
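Returning to the numbers in the question, the worked arithmetic is as follows (this example is my own addition, using the question's values of M = 10 kg, L = 2 m and dx = 0.1 m):

$$\lambda = \frac{M}{L} = \frac{10\ \mathrm{kg}}{2\ \mathrm{m}} = 5\ \mathrm{kg/m}, \qquad \mathrm{d}m = \lambda\, \mathrm{d}x = 5\ \mathrm{kg/m} \times 0.1\ \mathrm{m} = 0.5\ \mathrm{kg}.$$

So a 0.1 m piece of the rod indeed has mass 0.5 kg, which confirms the final guess in the question; the "5" is never a count of rods, it is the linear density in kg/m.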
2020-11-29 23:29:28
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 25, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8759101629257202, "perplexity": 563.7385503666359}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141203418.47/warc/CC-MAIN-20201129214615-20201130004615-00498.warc.gz"}
https://lttt.vanabel.cn/2011/10/24/test-mathjax.html
# Hello MathJax!!

The following equations are represented in the HTML source code as LaTeX expressions.

The Cauchy-Schwarz Inequality $\left( \sum_{k=1}^n a_k b_k \right)^2 \leq \left( \sum_{k=1}^n a_k^2 \right) \left( \sum_{k=1}^n b_k^2 \right)$

Auto Numbering and Ref? Suppose that $f$ is a function with period $\pi$; then we have $\newcommand{\rd}{\rm d}$ \begin{gather}\label{eq:1} \int_t^{t+\pi} f(x)\rd x=\int_0^\pi f(x)\rd x.\tag{1} \end{gather} From \eqref{eq:1} we know that $\dots$.

Can we use \newcommand to define some macros? Define a newcommand as follows: We consider, for various values of $s$, the $n$-dimensional integral \begin{align}\label{def:Wns} W_n (s):= \int_{[0, 1]^n} \left| \sum_{k = 1}^n \mathrm{e}^{2 \pi \mathrm{i} \, x_k} \right|^s \mathrm{d}\boldsymbol{x} \end{align} which occurs in the theory of uniform random walk integrals in the plane, where at each step a unit step is taken in a random direction. As such, the integral \eqref{def:Wns} expresses the $s$-th moment of the distance to the origin after $n$ steps. By experimentation and some sketchy arguments we quickly conjectured and strongly believed that, for $k$ a nonnegative integer, \begin{align}\label{eq:W3k} W_3(k)= \Re \, \pFq32{\frac12, -\frac k2, -\frac k2}{1, 1}{4}. \end{align} Appropriately defined, \eqref{eq:W3k} also holds for negative odd integers. The reason for \eqref{eq:W3k} was long a mystery, but it will be explained at the end of the paper.

Definition of Christoffel Symbols $\left(\nabla_X Y\right)^k = X^i \left(\nabla_i Y\right)^k = X^i \left(\frac{\partial Y^k}{\partial x^i} + \Gamma_{im}^k Y^m\right)$

## 3 replies to "Hello MathJax!!"

awj141 says:

for example, the roots of the real polynomial $f(x)=ax^2+bx+c$ are \[ x_{1,2}=\frac{-b\pm\sqrt{b^2-4ac}}{2a}. \]

awj141 says:
2023-03-21 14:55:47
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 2, "mathjax_tag": 0, "mathjax_inline_tex": 3, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.986422061920166, "perplexity": 3732.706857759576}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296943698.79/warc/CC-MAIN-20230321131205-20230321161205-00536.warc.gz"}
https://www.physicsforums.com/threads/pendulum-bob-oscillation.404080/
# Pendulum bob oscillation

1. May 18, 2010

### william kat

A pendulum bob of mass m hangs on a string of length l. What is the expression for the arc length of the bob if it is displaced through an angle theta?

2. May 18, 2010

### Mu naught

Well, if you know the maximum angle of displacement, the arc length is simply theta*l.

3. May 26, 2010

### william kat

Problem is, the angle of displacement is not known; it's part of the expression.
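For reference (my own addition, not part of the original thread): with the displacement angle $\theta$ measured in radians, the arc length is

$$s = l\,\theta ,$$

so when $\theta$ is unknown the answer is simply left symbolic, with $s$ expressed in terms of $\theta$ and the string length $l$.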
2018-10-15 08:47:32
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8103222846984863, "perplexity": 2322.209991190834}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583508988.18/warc/CC-MAIN-20181015080248-20181015101748-00113.warc.gz"}
http://mathoverflow.net/revisions/119439/list
2 added 8 characters in body

Serre's GAGA result roughly states the following. Let $X$ be a complex projective algebraic variety. Then the natural functor from the category of coherent sheaves over the algebraic structure sheaf of $X$ to the category of coherent sheaves over the analytic structure sheaf of $X$ is an equivalence of categories. This theorem always seemed to have the air of magic to me. Things that are analytic must come from algebra. I want to dust away some of this magic, and get a clearer picture. With this goal in mind, I have skimmed the proof of GAGA. The proof of GAGA is rather involved. It uses Cartan's theorem A for both the algebraic and analytic cases, the isomorphism of the completions of the stalks of the structure sheaf in the algebraic case and the analytic case, and a variety of technical results. After having done that for a few days, I still remain with a sense of amazement and a basic lack of understanding about what makes this work. This brings me to the precise phrasing of my question: (which will hopefully help me find the precise step where the magic happens)

### Question

Does the proof of Serre's GAGA theorem use the axiom of choice? If so, at what step does this happen?

1

# Does the proof of GAGA use the axiom of choice?

Serre's GAGA result roughly states the following. Let $X$ be a complex algebraic variety. Then the natural functor from the category of coherent sheaves over the algebraic structure sheaf of $X$ to the category of coherent sheaves over the analytic structure sheaf of $X$ is an equivalence of categories. This theorem always seemed to have the air of magic to me. Things that are topological must come from algebra. I want to dust away some of this magic, and get a clearer picture. With this goal in mind, I have skimmed the proof of GAGA. The proof of GAGA is rather involved. It uses Cartan's theorem A for both the algebraic and analytic cases, the isomorphism of the completions of the stalks of the structure sheaf in the algebraic case and the analytic case, and a variety of technical results. After having done that for a few days, I still remain with a sense of amazement and a basic lack of understanding about what makes this work. This brings me to the precise phrasing of my question: (which will hopefully help me find the precise step where the magic happens)

### Question

Does the proof of Serre's GAGA theorem use the axiom of choice? If so, at what step does this happen?
2013-05-24 05:57:16
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9351426362991333, "perplexity": 163.94315227645288}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368704234586/warc/CC-MAIN-20130516113714-00096-ip-10-60-113-184.ec2.internal.warc.gz"}
https://lavelle.chem.ucla.edu/forum/viewtopic.php?f=149&t=62663&p=240273
2nd order

$\frac{d[R]}{dt}=-k[R]^{2}; \quad \frac{1}{[R]}=kt + \frac{1}{[R]_{0}}; \quad t_{\frac{1}{2}}=\frac{1}{k[R]_{0}}$

Sophia Dinh 1D
Posts: 100
Joined: Thu Jul 25, 2019 12:15 am

2nd order

How can you tell something is 2nd order?

805422680
Posts: 103
Joined: Sat Sep 14, 2019 12:16 am

Re: 2nd order

When the rate depends on two first-order reactants, or on one second-order reactant.

Helen Struble 2F
Posts: 97
Joined: Sat Aug 24, 2019 12:17 am

Re: 2nd order

If you are given a plot of 1/[A] versus time and the curve is linear, then you know that the reaction is second order with respect to that reactant. The slope is equal to the rate constant.

Owen-Koetters-4I
Posts: 50
Joined: Fri Sep 28, 2018 12:16 am

Re: 2nd order

It is second order if two first-order reactants affect the rate.

Posts: 103
Joined: Sat Aug 24, 2019 12:15 am

Re: 2nd order

If the graph of 1/[A] vs time is linear, or if the exponents in the rate law add up to two, then the reaction is second order.

Aiden Metzner 2C
Posts: 104
Joined: Wed Sep 18, 2019 12:21 am

Re: 2nd order

If you look at the units of the rate constant: when moles and seconds are in the denominator (k in L·mol⁻¹·s⁻¹), it is second order.

Ayushi2011
Posts: 101
Joined: Wed Feb 27, 2019 12:17 am

Re: 2nd order

When the rate is affected by the concentration of two first-order reactants, it is a second-order reaction.
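As a quick check of the units argument above (my own addition), solve the integrated rate law for $k$:

$$k=\frac{1}{t}\left(\frac{1}{[R]}-\frac{1}{[R]_{0}}\right),$$

so with concentration in mol/L and time in s, $k$ carries units of $\mathrm{L\,mol^{-1}\,s^{-1}}$ (mol and s in the denominator), which is the signature of a second-order rate constant.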
2020-11-26 22:16:19
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 1, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8529797792434692, "perplexity": 5304.395344507252}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141188947.19/warc/CC-MAIN-20201126200910-20201126230910-00533.warc.gz"}
https://kenny-peng.com/2021/08/23/esp32_iram.html
One of my summer projects was clearly about to go over-time, but I also realized that I needed some kind of platform running in order to check my progress so far. So, once again, I tried to cram a large algorithm into an ESP32. This time, it was a basic fluid simulation, described in Real-Time Fluid Dynamics for Games by Jos Stam. Before and after running the fluid simulation, given a starting impulse Just barely, I managed to get the ESP32 to do it, and you can see the details and code here. After playing with and getting stuck on the concepts for months, putting that code together was simple. However, getting that code to run was a different story. In the end, I pretty much needed every ounce of RAM the ESP32 had, and the last ounce, the IRAM, was a real challenge to get. I had been browsing the esp-idf documentation and I came across the entry about IRAM. Although the IRAM (Instruction RAM) was not intended for data, it could be allocated with the call heap_caps_malloc(size, MALLOC_CAP_32BIT) as long as the data elements were 32 bits wide. floats are that wide, I thought, so why can’t I use that? When I tried though, I got a “LoadStoreError”. The reason why turned out to be pretty odd: only 32-bit integers could be stored in IRAM. In this Github issue, I read about how floats couldn’t naturally be stored in IRAM because the compiler didn’t rightly use the l32i and s32i instructions to do so. At the time, I was about to accept this, being just about to use Q31 instead, but then I realized that the bits of a float just had to be reinterpreted as an integer. It didn’t matter what the integer meant in integer form. Once I realized this, I tried to write a class that would be treated like a float but would only be stored as a uint32_t. From there, I used reinterpret_cast to force the bits to be reinterpreted. Still, I was getting the “LoadStoreError”. At this point, I wondered if the extra steps I was taking here were just being optimized away. Just to make sure, I made the uint32_t member a volatile uint32_t member, since I knew that optimizations involving volatile data would be blocked. This did happen to require that I copy the volatile data into a temporary non-volatile location before I could use it, but this all suddenly worked! // iram_float_t // A float that must be stored in IRAM as an integer. The result is forcing the compiler // to use the l32i/s32i instructions (using other instructions causes a LoadStoreError). // Original issue (and assembly solution): https://github.com/espressif/esp-idf/issues/3036 // This code is released into the public domain or under CC0 class iram_float_t{ public: iram_float_t(float value = 0) : _value(*reinterpret_cast<volatile unsigned int*>(&value)) {} void* operator new[] (size_t size){ // Forces allocation from IRAM return heap_caps_malloc(size, MALLOC_CAP_32BIT); } operator float() const { uint32_t a_raw = _value; return *reinterpret_cast<float*>(&a_raw); } private: volatile unsigned int _value; }; This was the final version of that class I ended up with. The new[] operator is overloaded with an allocation in the IRAM, and the other methods have been defined such that conversions in and out of float are implicit. Since the issue was assembly-level, this probably could have been solved with an assembly solution, but this high-level solution was extremely simple to implement and use. In many cases, it should only take a couple substitutions to use this class, and it also leaves many things up to the compiler for optimization purposes. 
That said, one drawback is that the volatile keyword has side-effects on the ESP32. The Xtensa core it uses seems to be capable of out-of-order memory access, but it is forced to access memory in-order here (see 3.8.3 in the Xtensa ISA manual). Perhaps the even bigger side-effect though is that every access of volatile memory requires an extra instruction to deal with the temporary register. Overall, I’ve observed about a 30% performance loss even though I didn’t depend too heavily on IRAM. Still, it was the last ounce of RAM I needed, and if anyone else needs to store floats in IRAM, I hope this workaround helps.
2023-03-31 03:53:53
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.410678505897522, "perplexity": 1451.6755060289945}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296949533.16/warc/CC-MAIN-20230331020535-20230331050535-00765.warc.gz"}
https://blender.stackexchange.com/questions/8945/pasting-long-strings-into-the-script-parameters-for-add-3d-function-surface
# Pasting long strings into the script parameters for Add 3D Function Surface?

I'm trying to use the script Add 3D Function Surface (specifically the Add X,Y,Z Function Surface feature) to produce some mathematical models for 3D printing. The formulas for my parametrized surfaces are very long --- one of the simpler ones has over 4,000 characters. Unfortunately, it appears that I can only paste 400 characters into each of the script parameter boxes. Is there a way to get Blender to accept longer strings for each formula parameter? I'd be willing to run the script from the command line, but I haven't figured out how to do this. I'm running Blender 2.70 on a Mac. I'm also completely new to Blender, so let me know if there's additional information I can provide that will help. Thanks in advance :-)

Bug report: https://developer.blender.org/T39924

I can't see any maxlen parameters in the script. And on Windows using Blender 2.70, I can store at least 1,000,000,000 characters in a StringProperty():

>>> C.scene.p = "x"*1000000000
>>> len(C.scene.p)
1000000000

The problem here is that there's a limit of 399 characters at the layout level. You can actually type 400 and more letters into a text field, but you won't see the characters appear! If you paste from the clipboard, only the first 399 characters are taken. You can circumvent it by using Python:

• Go to the Scripting screen
• Call the XYZ Math Surface operator (parameter fields appear in the Redo panel)
• Create a new text datablock in the Text Editor
• Add the following (substitute x_eq for the other equation fields):

import bpy
bpy.context.active_operator.x_eq = ""

• Place the cursor between the two "" quote marks and paste your formula
• Click Run Script or hit AltP - it will assign the formula text to the operator's parameter field.
• Click e.g. the handle of the U min property in the Redo panel to force an update (it will not change the mesh based on the manually set equation automatically!). Don't activate any of the equation fields, or it may truncate them again!

• Thanks! Have you had success actually pasting your 10^9 (or perhaps a little shorter...) length string into the Add 3D Function Surface parameter window? Apr 27 '14 at 17:56
• I'm also trying to hunt down the script on my system to compare with the one you linked, but this is proving difficult. Apr 27 '14 at 17:56
• It's pretty odd: it lets you type more than 399 chars, but you won't see it happening (in the operator log you do). If you paste from the clipboard, only the first 399 chars are used. The only workaround is to set it via Python (see my updated answer). Apr 27 '14 at 18:30
• I'm trying to implement your fix by opening the Python console (same as the Scripting screen?) and typing the commands you gave. For short x_eq, this successfully puts my equation into the appropriate parameter window, but for long x_eq it again truncates my equation. A couple of comments/questions: (1) I'm just hitting Enter from the Python command line. That's not different from "Run Script", is it? (2) On my machine, I cannot type more characters into a parameter window that already contains 400 pasted characters.... Thanks so much for your thoughts on all this! Apr 27 '14 at 19:02
• Just to be clear: the full equation pastes into the Python console, but when I execute the command the input gets truncated to 400 characters in the 3D View window. Apr 27 '14 at 19:04
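Putting the workaround together, a minimal Text Editor script might look like the sketch below. The y_eq and z_eq names are my assumption, following the same pattern as the x_eq field used in the answer, and the formula strings are placeholders for your own:

```python
import bpy

# The XYZ Math Surface operator must be the active operator,
# i.e. its Redo panel is currently showing.
op = bpy.context.active_operator

# Assigning through Python bypasses the 399-character UI limit.
op.x_eq = ""  # paste your (long) x formula between the quotes
op.y_eq = ""  # assumed field name, analogous to x_eq
op.z_eq = ""  # assumed field name, analogous to x_eq
```

Run it with Run Script (Alt-P), then nudge a non-equation property such as U min in the Redo panel so the mesh regenerates with the new formulas.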
2021-12-07 21:48:35
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3954283595085144, "perplexity": 1958.3273792826465}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964363418.83/warc/CC-MAIN-20211207201422-20211207231422-00287.warc.gz"}
https://email.esm.psu.edu/pipermail/macosx-tex/2007-June/031086.html
# [OS X TeX] Linotype Palatino

Aaron Jackson jackson at msrce.howard.edu
Wed Jun 6 22:54:06 EDT 2007

On Jun 6, 2007, at 7:32 PM, Michael Kubovy wrote:
>
> On Jun 6, 2007, at 6:02 PM, Aaron Jackson wrote:
>
>> On Jun 6, 2007, at 4:52 PM, Bruno Voisin wrote:
>>
>>> On 6 June 07, at 22:16, Michael Kubovy wrote:
>>>
>>>> I need to use Linotype Palatino in a grant application. I have
>>>> included
>>>> \usepackage{palatinox}
>>>>
>>>> How can I be sure that I'm typesetting with this font?
>>>>
>>>> The console says
>>>> LaTeX Font Warning: Font shape `T1/Palatino-OsF/m/n' undefined
>>>> (Font) using `T1/cmr/m/n' instead on input line 32.
>>>>
>>>> Does this mean that only a few of the shapes are missing or that
>>>> I'm not using the font at all? If the latter, how do I fix it? The
>>>> only fonts allowed are Arial, Helvetica, Palatino Linotype or
>>>> Georgia. For legibility and aesthetic reasons I'd rather not use
>>>> Arial or Helvetica for the body of the text.
>>>
>>>
>>> \usepackage[T1]{fontenc}
>>> \usepackage{textcomp}
>>> \usepackage{mathpazo}
>>>
>>> This will use a public-domain Palatino clone for text, with
>>> characters taken from Symbol for maths. See the doc of the psnfss
>>> package, psnfss2e.pdf, usually at /Library/TeX/Documentation/texmf-
>>> dist-doc/latex/psnfss/psnfss2e.pdf.
>>>
>>> Specialists would see a difference with Linotype Palatino, but
>>> probably not the experts evaluating the grant application.
>>> Generally the specification of a given font is just there to
>>> ensure that all applicants are submitting proposals subject to
>>> the same length criteria (instead of having applicants submit
>>> longer proposals using tighter fonts with smaller character
>>> sizes, smaller margins and smaller line spacing).
>>
>> This assumes that the OP is submitting a printed grant
>> application, which is less common these days. Depending on how
>> hard-ass the evaluators are, this could be grounds to disqualify
>> the grant application, since a simple command-D or control-D in
>> Acrobat Reader will tell you what fonts are in the document.
>>
>> Do you have the commercial Linotype fonts properly installed on
>> your computer in the first place?
>
> Indeed the application will be submitted electronically. I suspect
> that they're not *that* clever, but I'd rather be on the safe side
> (too much work to waste on a trivial matter).
>
> AFAIK, I don't have the commercial Linotype fonts. I have several
> folders called Palatino. I have the following font files that
> contain '{P|p}alatino':
> palatino-*.afm files
> palatino.tpm
> texnansi-urw-palatino.map
> ec-urw-palatino.map
> Palatino ('FFIL' in ~/Library/Fonts)

You would know if you had the fonts (you would have had to pay for them), plus you are missing the pfb files. If you have the time, you can purchase the fonts, install the pfb files, and everything should just work.

> Even though I've been using LaTeX for years, and am pretty good at
> it, I've always had a fear of the complexity of font use, and never
> explored this side of LaTeX. So here I had best be treated as a
> beginner. I've never even tried XeLaTeX. I would appreciate a guide
> for the perplexed.

To use XeLaTeX, I just change the dropdown menu in TeXShop from LaTeX to XeLaTeX. Then all I did to change the font to Helvetica (from an example in the fontspec documentation) was:

\documentclass[12pt]{article}
\usepackage{fontspec}
\setromanfont{Helvetica}
\begin{document}
Testing 1 2 3...
\end{document}

Using Palatino doesn't seem to work on my computer though...
2020-09-26 13:25:16
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.83878493309021, "perplexity": 10263.160305002057}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400241093.64/warc/CC-MAIN-20200926102645-20200926132645-00606.warc.gz"}
https://plainmath.net/algebra-ii/57790-how-do-you-write-y-equal-5x-plus-2-in-standard-form
Reese Munoz 2022-01-29

How do you write y = (-5x) + 2 in standard form?

pripravyf, Expert

Explanation: The standard form of a linear equation is
Ax + By = C
In this question we have
y = (-5x) + 2
It can be converted into standard form using simple algebraic operations. First open the parentheses to get
y = -5x + 2
Now add 5x to both sides:
y + 5x = -5x + 2 + 5x
y + 5x = 2
Arranging the above equation in standard form, we get
5x + y = 2
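More generally (my own addition): any slope-intercept equation $y = mx + b$ can be moved to standard form by subtracting $mx$ from both sides,

$$y = mx + b \;\Longrightarrow\; -mx + y = b,$$

multiplying through by $-1$ if needed to make the leading coefficient positive; that is exactly the step that turns $y = -5x + 2$ into $5x + y = 2$.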
2023-01-30 15:12:51
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 26, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9206365346908569, "perplexity": 4158.359953479742}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499819.32/warc/CC-MAIN-20230130133622-20230130163622-00369.warc.gz"}
https://homerhanumat.github.io/r-notes/exercises.html
## Exercises

1. Determine the type of each of the following vectors:
    1. c(3.2, 2L, 4.7, TRUE)
    2. c(as.integer(3.2), 2L, 5L, TRUE)
    3. c(as.integer(3.2), 2L, "5L", TRUE)
2. Using a combination of c(), rep() and seq() and other operations, find concise one-line programs to produce each of the following vectors:
    1. all numbers from 4 to 307 that are one more than a multiple of 3;
    2. the numbers 0.01, 0.02, 0.03, …, 0.98, 0.99;
    3. twelve 2's, followed by twelve 4's, followed by twelve 6's, …, followed by twelve 10's, finishing with twelve 12's;
    4. one 1, followed by two 2's, followed by three 3's, …, followed by nine 9's, finishing with ten 10's.
3. Using a combination of c(), rep() and seq() and other operations, find concise one-line programs to produce each of the following vectors:
    1. the numbers 15, 20, 25, …, 145, 150;
    2. the numbers 1.1, 1.2, 1.3, …, 9.8, 9.9, 10.0;
    3. ten A's followed by ten B's, …, followed by ten Y's and finishing with ten Z's (hint: the special vector LETTERS will be useful);
    4. one a, followed by two b's, followed by three c's, …, followed by twenty-five y's, finishing with twenty-six z's (hint: the special vector letters will be useful).
4. The following four vectors give the names, ages and heights of five people, and also say whether or not each person likes Toto:

    person <- c("Akash", "Bee", "Celia", "Devadatta", "Enid")
    age <- c(23, 21, 22, 25, 63)
    height <- c(68, 67, 71, 70, 69)
    likesToto <- c(TRUE, TRUE, FALSE, FALSE, TRUE)

    Use sub-setting with logical vectors to produce vectors of:
    1. the names of all people over the age of 22;
    2. the names of all people younger than 24 who are also more than 67 inches tall;
    3. the names of all people who either don't like Toto or who are over the age of 30;
    4. the number of people who are over the age of 22.
5. Consider the four vectors defined in the previous problem. Use sub-setting with logical vectors to produce vectors of:
    1. the names of all people who are less than 70 inches tall;
    2. the names of all people who are between 20 and 30 years of age (not including 20 or 30);
    3. the names of all people who either like Toto or who are under the age of 50;
    4. the number of people who are more than 69 inches tall.
6. Logical vectors are not numerical vectors, so it would seem that you should not be able to sum their elements. But sum(likesToto) results in the number 3! What is happening here is that R coerces the logical vector likesToto into a numerical vector of 1's and 0's (1 for TRUE, 0 for FALSE) and then sums the resulting vector. Notice that this gives us the number of people who like Toto. With this idea in mind, use sum() along with logical vectors to find:
    1. the number of people younger than 24 who are also more than 67 inches tall;
    2. the number of people who either don't like Toto or who are over the age of 30.
7. Read the previous problem, and then use sum() along with logical vectors to find:
    1. the number of people between 65 and 70 inches tall (including 65 and 70);
    2. the number of people who either don't like Toto or who are under the age of 25.
2019-05-20 07:22:58
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5316449999809265, "perplexity": 1313.8234381315485}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232255773.51/warc/CC-MAIN-20190520061847-20190520083847-00310.warc.gz"}
https://tjyj.stats.gov.cn/CN/10.19343/j.cnki.11-1302/c.2016.08.014
• Paper •

Research on Unit Root Test for Integer-valued Time Series Models

Wang Zeyu et al.

• Online: 2016-08-15  Published: 2016-08-11

Abstract: Compared with research on non-integer-valued time series, research on unit root tests for integer-valued time series is just getting started. In this paper, Monte Carlo simulation is used to examine the DF statistic and a second test statistic in INAR(1) models with a unit root process. Based on this research, the DF statistic asymptotically follows the standard normal distribution, although in finite samples its actual distribution is affected by the sample size and the mean of the disturbance term. In addition, the DF statistic shows no level distortion; that is, it controls the probability of a type I error well. Because of the way the data are generated, the second statistic's probability of committing a type I error is zero. Furthermore, the test powers of both statistics are influenced by the sample size, the autoregressive coefficient, and the mean of the error term. In most cases, the test power of the second statistic is much better than that of the DF statistic.
2022-07-01 00:11:04
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5532605648040771, "perplexity": 872.797160686506}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103915196.47/warc/CC-MAIN-20220630213820-20220701003820-00540.warc.gz"}
https://www.physicsforums.com/threads/ode-problem.109522/
# ODE problem

1. Feb 5, 2006

### Tony11235

Let x = x1(t), y = y1(t) and x = x2(t), y = y2(t) be any two solutions of the linear nonhomogeneous system.

$$x' = p_{11}(t)x + p_{12}(t)y + g_1(t)$$

$$y' = p_{21}(t)x + p_{22}(t)y + g_2(t)$$

Show that x = x1(t) - x2(t), y = y1(t) - y2(t) is a solution of the corresponding homogeneous system. I am not sure what it is that I am supposed to do. Could anybody explain?

2. Feb 6, 2006

### HallsofIvy

Staff Emeritus

"Plug and chug". The "corresponding homogeneous system" is, of course, just the system with the functions g1(t) and g2(t) removed:

$$x'= p_{11}(t)x+ p_{12}(t)y$$

$$y'= p_{21}(t)x+ p_{22}(t)y$$

Replace x with x1 - x2, y with y1 - y2 in the equations and see what happens. Remember that x1, x2, y1, y2 satisfy the original equations themselves.

Last edited by a moderator: Feb 6, 2006
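Carrying out HallsofIvy's hint explicitly (a standard argument, written in the thread's notation): subtract the two copies of the nonhomogeneous system, and the g-terms cancel.

$$(x_1 - x_2)' = p_{11}(t)(x_1 - x_2) + p_{12}(t)(y_1 - y_2) + g_1(t) - g_1(t)$$

$$(y_1 - y_2)' = p_{21}(t)(x_1 - x_2) + p_{22}(t)(y_1 - y_2) + g_2(t) - g_2(t)$$

So $x = x_1 - x_2$, $y = y_1 - y_2$ satisfies exactly the homogeneous system above.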
2017-08-19 10:12:37
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7302959561347961, "perplexity": 2938.7939710408655}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886105334.20/warc/CC-MAIN-20170819085604-20170819105604-00664.warc.gz"}
https://rethomics.github.io/workflow.html
# The rethomics workflow

From hypothesis to results

In rethomics, we envisage behavioural experiments as a workflow:

1. Design – you plan your experiment (I can't really help you with that, but I trust you!).
2. Record/track – you use your acquisition platform to record behavioural variables over time. They define the format of the results.
3. Write individual information – you make a spreadsheet (CSV file) that details the experimental conditions for each individual. We call this a metadata file. It is a crucial concept in rethomics, so we will dedicate the next section to it. You can often write your metadata as you plan your experiment, but sometimes you want to enrich it with variables that you can only record after your experiment (e.g. lifespan).
4. Link and load data – first, we enrich your metadata by "linking" it to the results. This allows you to load all the matching data into a single behavr table (see the section on behavr tables; a minimal sketch of this step follows this list).
5. Transform & analyse & visualise – you take advantage of rethomics and R analysis and visualisation tools.
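As a rough sketch of step 4, assuming data recorded with the DAM acquisition backend and its `damr` helpers (function names taken from the rethomics tutorials; treat them, and the file paths, as assumptions):

```r
# Link the metadata to the raw result files, then load everything
# into one behavr table.
library(damr)                                   # one rethomics backend
metadata <- data.table::fread("metadata.csv")   # one row per individual
metadata <- link_dam_metadata(metadata, result_dir = "raw_results/")
dt <- load_dam(metadata)                        # a single behavr table
summary(dt)                                     # ready to transform/plot
```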
2018-06-23 10:20:00
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3007877767086029, "perplexity": 2572.8506975605846}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267864957.2/warc/CC-MAIN-20180623093631-20180623113631-00434.warc.gz"}
http://intlpress.com/HHA/v15/n1/a3/
# Homology and robustness of level and interlevel sets ## Paul Bendich, Herbert Edelsbrunner, Dmitriy Morozov and Amit Patel Given a continuous function $f\colon \mathbb{X} \to \mathbb{R}$ on a topological space, we consider the preimages of intervals and their homology groups and show how to read the ranks of these groups from the extended persistence diagram of $f$. In addition, we quantify the robustness of the homology classes under perturbations of $f$ using well groups, and we show how to read the ranks of these groups from the same extended persistence diagram. The special case $\mathbb{X} = \mathbb{R}^3$ has ramifications in the fields of medical imaging and scientific visualization. Homology, Homotopy and Applications, Vol. 15 (2013), No. 1, pp.51-72.
2013-12-10 07:18:29
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6516544222831726, "perplexity": 328.3922416486853}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-48/segments/1386164012753/warc/CC-MAIN-20131204133332-00095-ip-10-33-133-15.ec2.internal.warc.gz"}
http://finalfantasy.wikia.com/wiki/Magic_Stone_(Final_Fantasy_IX)
# Magic Stone (Final Fantasy IX)

Magic Stones (魔石力) are stones in Final Fantasy IX that permit the player characters to use their support abilities. They are specifically part of the game mechanics and can never be used in any other way.

A specific number of stones must be equipped to a support ability in order for it to be functional in battle. Since the supply of magic stones for each character is always limited, and each ability has a different cost in magic stones, the support abilities can never all be equipped at once. The stones can never be collected as usable items in the game outside of the menu, and serve no other purpose than to limit the number of support abilities equipped. Nevertheless, the maximum number of stones gradually rises with the character's level, allowing the player to eventually equip more than before.

Beside the name of each support ability is a round slot; a magic stone inside the slot indicates the ability is currently equipped to the character; likewise, an empty slot shows the reverse. If all the magic stones have been equipped, then all the unequipped support abilities become grayed out and cannot be selected until enough magic stones have been unequipped. The main menu keeps track of the remaining and maximum number of magic stones for each character.

Each character has a magic stone bonus that goes up with level, but unlike the other stats there is no way to increase it with equipment, meaning there is no way to increase the number of available stones; every character will always have the same number of magic stones upon reaching level 99. The formula for gaining magic stones per level up is:

$MSt = MStBase + Level \times \frac{4}{10} + \frac{MStBonus}{32}$ [1]

When a character levels up they get their stat bonuses instantly. Zidane, Vivi, Garnet and Steiner are the characters the player starts the game with, but when another party member joins and the player's party is at a higher level than LV1, the joining party members level up to match the rest of the party; however, they will not gain any bonuses for that. For that reason, if the player levels up a lot early, party members who join later will miss out on magic stones. If the player wants maximum magic stones for all party members, they must stay on level 1 till Amarant joins. Marcus is an exception: any stats he has when he leaves the party are transferred to Eiko; therefore, the more the player levels Marcus up, the more magic stones Eiko will have when she joins, regardless of what level she is at.

| Name | Max number of Magic Stones |
| --- | --- |
| Zidane | 72 |
| Steiner | 71 |
| Vivi | 68 |
| Dagger | 68 |
| Freya | 72 |
| Quina | 69 |
| Eiko | 67 |
| Amarant | 72 |
| Beatrix | 64 |
| Blank | 66 |
| Cinna & Marcus | 62 |
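Reading the quoted formula as code (a sketch only: `MStBase` and `MStBonus` are character-specific constants whose actual values are not given on this page, so the numbers below are purely illustrative):

```r
# Hypothetical illustration of the per-level magic stone formula above.
magic_stones <- function(level, MStBase, MStBonus) {
  MStBase + level * 4 / 10 + MStBonus / 32
}
# Illustrative constants only -- not any real character's values:
magic_stones(99, MStBase = 30, MStBonus = 64)  # 30 + 39.6 + 2 = 71.6
```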
2013-05-20 22:39:02
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 1, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.17379339039325714, "perplexity": 2366.509456545585}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368699273641/warc/CC-MAIN-20130516101433-00095-ip-10-60-113-184.ec2.internal.warc.gz"}
http://www.chegg.com/homework-help/questions-and-answers/let-h-be-a-subgroup-of-a-group-g-prove-that-h-is-a-normal-subgroup-of-g-if-and-only-if-for-q3392884
## subgroups Let H be a subgroup of a group G. Prove that H is a normal subgroup of G if and only if for all a and b in G, ab is in H implies ba is in H.
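No solution is attached to the exercise; one standard argument, written out as a sketch:

($\Rightarrow$) If $H \trianglelefteq G$ and $ab \in H$, then $ba = a^{-1}(ab)a \in a^{-1}Ha = H$.

($\Leftarrow$) Fix $g \in G$ and $h \in H$, and set $a = g^{-1}$, $b = gh$. Then $ab = g^{-1}gh = h \in H$, so by hypothesis $ba = ghg^{-1} \in H$. Hence $gHg^{-1} \subseteq H$ for every $g \in G$, i.e. $H$ is normal in $G$.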
2013-05-21 13:58:29
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8034923076629639, "perplexity": 76.56868714032888}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368700074077/warc/CC-MAIN-20130516102754-00002-ip-10-60-113-184.ec2.internal.warc.gz"}
https://proofwiki.org/wiki/Mitchell%27s_Embedding_Theorem
# Freyd-Mitchell Embedding Theorem

## Theorem

Let $\AA$ be a small abelian category. Then there exists a ring with unity $R$ and a fully faithful and exact functor $F : \AA \to R \text{-} \mathbf{Mod}$ to the category of left $R$-modules.

## Source of Name

This entry was named for Peter John Freyd and Barry M. Mitchell.
2021-12-07 03:30:36
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9313018321990967, "perplexity": 1273.2870292186485}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964363332.1/warc/CC-MAIN-20211207014802-20211207044802-00144.warc.gz"}
https://www.storyofmathematics.com/approximating-integrals
# Approximating Integrals – Midpoint, Trapezoidal, and Simpson's Rule

Approximating integrals is extremely helpful when we can't evaluate a definite integral using traditional methods. Through numerical integration, we'll be able to approximate the values of definite integrals. The techniques of approximating integrals will show us how it's possible to numerically estimate the definite integral of any function.

The three common numerical integration techniques are the midpoint rule, the trapezoid rule, and Simpson's rule.

At this point in our integral calculus discussion, we've learned about finding indefinite and definite integrals extensively. There are instances, however, where finding the exact values of definite integrals won't be possible. This is where approximating integrals comes in. In this article, we'll focus on approximating integrals using the three techniques mentioned. Since we're dealing with definite integrals, review your understanding of the fundamental theorem of calculus.

## When to approximate an integral?

We can approximate integrals by estimating the area under the curve of $\boldsymbol{f(x)}$ for a given interval, $\boldsymbol{[a, b]}$. In our discussion, we'll cover three methods: 1) the midpoint rule, 2) the trapezoidal rule and 3) Simpson's rule.

As we have mentioned, there are functions whose antiderivatives and definite integrals will be impossible to find if we stick with the analytical approach.

\begin{aligned}\int_{0}^{4} e^{x^2}\phantom{x}dx\\\int_{0}^{2} \dfrac{\sin x}{x}\phantom{x}dx \end{aligned}

These are two examples of definite integrals that will be challenging to evaluate with the integration techniques we've learned in the past. This is when the three integral approximation techniques enter.

The first approximation you'll learn in your integral calculus classes is the Riemann sum. We've learned how it's possible to estimate the area under the curve by dividing the region into smaller rectangles with a fixed width.

The graph shown above highlights how the Riemann sum works: divide the region under the curve into $n$ rectangles that share a common width, $\Delta x$. The value of $\Delta x$ is simply the difference between the interval's endpoints divided by $n$: $\Delta x = \dfrac{b - a}{n}$. We can estimate the area and the integral using the relationships shown below:

| Right-hand Riemann sum | Left-hand Riemann sum |
| --- | --- |
| $\int_{a}^{b}f(x)\phantom{x}dx \approx \sum_{i= 1}^{n} f(x_i) \Delta x$ | $\int_{a}^{b}f(x)\phantom{x}dx \approx \sum_{i= 1}^{n} f(x_{i- 1}) \Delta x$ |

Keep in mind that $x_0 = a$ is the initial value we start from. We've already discussed the Riemann sum in this article, so make sure to check it out in case you need a refresher.

In the next section, we'll show you the three numerical integration methods you can use to integrate complex integrands such as $f(x) = e^{\sin(0.1x^2)}$. We'll also show you examples to make sure that we implement each technique correctly.

## How to approximate an integral?

The three approximation techniques that we'll focus on use processes similar to that of the Riemann sum. We'll show you what makes each technique special and, of course, show you how to implement each method to approximate integrals.

### Midpoint rule: integral approximation definition

It's a good thing that we did a refresher on the Riemann sum.
That's because the midpoint rule is an extension of the Riemann sum. The midpoint rule uses each subinterval's midpoint, $\overline{x_i}$. Let's say we want to evaluate $\int_{a}^{b} f(x)\phantom{x} dx$; approximate its value using the midpoint rule by following the steps below:

• Divide the interval into $n$ equal parts. Each new subinterval must have a width of $\Delta x = \dfrac{b - a}{n}$.
• We must begin with $x_0 = a$ and end with $x_n = b$. Meaning, we'll have the subintervals $[x_0, x_1], [x_1, x_2], [x_2, x_3],…,[x_{n- 1}, x_n]$.
• Find the height of the rectangle over the subinterval $[x_{i-1}, x_i]$ by evaluating $f$ at its midpoint: $\overline{x_i} = \dfrac{x_{i -1} + x_i}{2}$.
• Approximate the definite integral by finding the value of $\sum_{i= 1}^{n} f(\overline{x_i}) \Delta x$.

This means that through the midpoint rule, we can evaluate the definite integral using the formula shown below:

\begin{aligned}M_n &= \sum_{i =1}^{n}f(\overline{x_i})\Delta x\\\int_{a}^{b} f(x)\phantom{x}dx &= \lim_{n \rightarrow \infty} M_n\end{aligned}

To better understand the midpoint rule's process, let's estimate the value of $\int_{0}^{4} x^2\phantom{x}dx$ using the midpoint rule:

• Divide the interval into four subintervals with a width of $\Delta x = \dfrac{4 -0}{4} = 1$ unit.
• This means that we have the following subintervals: $[0, 1]$, $[1, 2]$, $[2, 3]$, and $[3, 4]$.
• Find the midpoint of each subinterval: $\left\{\dfrac{1}{2}, \dfrac{3}{2},\dfrac{5}{2},\dfrac{7}{2}\right\}$.

The graph below illustrates how the integral of $x^2$ is approximated using the midpoint rule. Find the value of $\int_{0}^{4} x^2\phantom{x} dx$ by evaluating $\boldsymbol{f(x)}$ at the midpoints. Multiply each value by $\boldsymbol{\Delta x}$, then add all resulting values to estimate the integral's value.

\begin{aligned}M_4 &= \sum_{i =1}^{4}f(\overline{x_i})\cdot (\Delta x) \\&= (1)f\left(\dfrac{1}{2}\right) + (1)f\left(\dfrac{3}{2}\right)+ (1)f\left(\dfrac{5}{2}\right) + (1)f\left(\dfrac{7}{2}\right)\\&= 1 \cdot \dfrac{1}{4}+ 1 \cdot \dfrac{9}{4} +1 \cdot \dfrac{25}{4}+ 1 \cdot \dfrac{49}{4} \\&= 21\end{aligned}

If we evaluate the integral $\int_{0}^{4} x^2\phantom{x} dx$ exactly, its actual value is $\dfrac{64}{3}$ or $21.\overline{3}$. This shows that the estimate from the midpoint rule is actually close to the actual value. Whenever possible, find the absolute error of the approximation by taking the absolute value of the difference between the integral's actual value and its approximated value.

\begin{aligned}\left|21 - \dfrac{64}{3}\right| &= \dfrac{1}{3}\\&\approx 0.33\end{aligned}

We can also find the relative error of the approximation by expressing the absolute error as a percentage of the actual value, as shown below.

\begin{aligned}\left|\dfrac{A - B}{A}\right| \cdot 100\%\end{aligned}

This means that the relative error for our calculation is:

\begin{aligned}\left|\dfrac{\dfrac{64}{3} - 21}{\dfrac{64}{3}}\right| \cdot 100\% &\approx 1.56\%\end{aligned}

These two error approximations confirm our observation: the approximate value is a good enough estimate. Of course, it's much easier to simply evaluate $\int_{0}^{4} x^2\phantom{x} dx$ analytically. But as we have mentioned, that won't be the case for complex integrals.
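As a quick numerical check of this worked example (the helper function here is our own sketch, not from the article):

```r
# Midpoint rule for the integral of x^2 on [0, 4] with n = 4 subintervals.
midpoint_rule <- function(f, a, b, n) {
  dx   <- (b - a) / n
  mids <- a + dx * (seq_len(n) - 0.5)  # midpoints 1/2, 3/2, 5/2, 7/2 when n = 4
  sum(f(mids)) * dx
}
midpoint_rule(function(x) x^2, 0, 4, 4)     # 21, as computed above
midpoint_rule(function(x) x^2, 0, 4, 1000)  # 21.33333..., approaching 64/3
```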
### Trapezoidal rule: integral approximation definition

For this method, instead of using rectangles, we'll be using trapezoids. Hence its name: the trapezoidal rule. When given a definite integral, we can estimate the value of $\int_{a}^{b} f(x)\phantom{x}dx$ by approximating the area of the trapezoids under the curve. The midpoint and trapezoid rules have similar steps. Before we begin, recall that the formula for a trapezoid's area is $\dfrac{1}{2}h(b_1 + b_2)$, where $h$ represents the height and $b_1$ and $b_2$ represent the two bases. Now, let's go ahead and break down the steps of the trapezoid rule's process:

• Divide the interval of the given definite integral into $n$ equal parts. Determine the subintervals' height by dividing $b - a$ by $n$: $\Delta x = \dfrac{b - a}{n}$.
• Keep in mind that we must have $x_0 = a$ and $x_n = b$. This means that the endpoints of the subintervals are $\{x_0, x_1, x_2, …, x_n\}$.
• Approximate the first trapezoid's area using $f(x_0)$ and $f(x_1)$ as its bases.

\begin{aligned}T_1 &= \dfrac{1}{2}\Delta x [f(x_0) + f(x_1)]\end{aligned}

• Apply the same process to find the areas of the rest of the trapezoids, then add up the areas of all $n$ trapezoids.

We'll focus on the last bullet: adding up the areas of the $n$ trapezoids. If we have the subintervals' endpoints, $\{x_0, x_1, x_2, …, x_n\}$, we can find the sum of the areas, $T_n$, as shown below.

\begin{aligned}T_n &= \dfrac{\Delta x }{2}[f(x_0) + f(x_1)]+\dfrac{\Delta x }{2}[f(x_1) + f(x_2)]+ …+ \dfrac{\Delta x }{2}[f(x_{n-1}) + f(x_n)]\\&= \dfrac{\Delta x }{2}[f(x_0) + 2f(x_1)+ 2f(x_2)+ …+ 2f(x_{n -1})+ f(x_n)]\\\lim_{n\rightarrow +\infty}T_n &= \int_{a}^{b}f(x)\phantom{x}dx\end{aligned}

This means that we can estimate the definite integral by applying the formula for $T_n$, and that's the trapezoidal rule.

Estimate the value of $\int_{0}^{4} x^2\phantom{x}dx$ using the trapezoidal rule and four subintervals this time. Afterward, compare the approximate value of the integral with its actual value, $\dfrac{64}{3}$ square units.

• Find each of the four trapezoids' heights: $\Delta x = \dfrac{4 -0}{4} = 1$ unit.
• We'll be working with four trapezoids whose subintervals have the following endpoints: $\{0, 1, 2, 3, 4\}$.
• Calculate the areas of the trapezoids by evaluating the function at the endpoints.

Here's the graph of $f(x) = x^2$ with the area under its curve divided into four trapezoids. Calculate the total area of the four trapezoids as shown below:

\begin{aligned}T_4 &= \dfrac{\Delta x }{2}[f(x_0) + 2f(x_1)+ 2f(x_2)+ 2f(x_3)+ f(x_4)]\\&= \dfrac{1}{2}[f(0)+ 2f(1)+ 2f(2) + 2f(3)+ f(4)]\\&= \dfrac{1}{2}[0 + 2(1) + 2(4) + 2(9) + 16]\\&= 22\end{aligned}

Since $T_4 = 22$, $\int_{0}^{4} x^2\phantom{x}dx$ is approximately equal to $22$ square units by the trapezoidal rule. As we did with the midpoint rule, let's find our approximation's absolute and relative errors.

Absolute error:

\begin{aligned}\left|\dfrac{64}{3} -22\right| &= \dfrac{2}{3}\end{aligned}

Relative error:

\begin{aligned}\left|\dfrac{\dfrac{64}{3}-22}{\dfrac{64}{3}}\right|\cdot 100\% &\approx 3.125\%\end{aligned}

These two values show us that the value returned by the trapezoidal rule is close to the actual value.
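And the matching numerical check for the trapezoidal example (again our own sketch):

```r
# Trapezoidal rule for the integral of x^2 on [0, 4] with n = 4 subintervals.
trapezoid_rule <- function(f, a, b, n) {
  x <- seq(a, b, length.out = n + 1)
  y <- f(x)
  (b - a) / n * (sum(y) - (y[1] + y[n + 1]) / 2)  # endpoints get half weight
}
trapezoid_rule(function(x) x^2, 0, 4, 4)     # 22, as computed above
trapezoid_rule(function(x) x^2, 0, 4, 1000)  # 21.33334..., approaching 64/3
```

### Simpson's rule: integral approximation definition

Before we dive right into the process of Simpson's rule, let's first observe how the accuracies of the midpoint and trapezoidal rules' approximations improve as we use more intervals.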
2021-11-28 18:15:31
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9951902627944946, "perplexity": 659.4924326388251}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964358570.48/warc/CC-MAIN-20211128164634-20211128194634-00160.warc.gz"}
https://www.azdictionary.com/definition/barrowful
# barrowful definition

• noun: the quantity that can fit in a barrow; the amount that a barrow will hold

## Related Sources

• Definition for "barrowful": the quantity that can fit in a barrow
• Sentence for "barrowful": "However much we stain the world,…"
• Hypernym for "barrowful": containerful
2017-11-20 17:04:04
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8881713151931763, "perplexity": 9143.39744279362}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934806086.13/warc/CC-MAIN-20171120164823-20171120184823-00704.warc.gz"}
https://en.universaldenker.org/illustrations/932
# Illustration

Coriolis acceleration, velocity and angular velocity - Vectors

Sharing and adapting of the illustration is allowed with indication of the link to the illustration.

The Coriolis acceleration $$\boldsymbol{a}_{\text c}$$ acts on a body (e.g. an airplane) when it moves with velocity $$\boldsymbol{v}$$ in a rotating reference frame such as the earth. The angular velocity of the earth is $$\boldsymbol{\omega}$$; it indicates how fast the earth rotates. The Coriolis acceleration $$\boldsymbol{a}_{\text c}$$ is always orthogonal to the body's velocity $$\boldsymbol{v}$$ and to the angular velocity $$\boldsymbol{\omega}$$. The velocity vector $$\boldsymbol{v}$$ and the angular velocity vector $$\boldsymbol{\omega}$$ enclose the angle $$\varphi$$.
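The page states the orthogonality but not the formula; the standard expression is $$\boldsymbol{a}_{\text c} = -2\,\boldsymbol{\omega} \times \boldsymbol{v}$$, which a few lines of R can verify numerically (the cross product is hand-rolled, since base R has none):

```r
# Coriolis acceleration a_c = -2 * (omega x v), checked for orthogonality.
cross <- function(a, b) {
  c(a[2]*b[3] - a[3]*b[2],
    a[3]*b[1] - a[1]*b[3],
    a[1]*b[2] - a[2]*b[1])
}
omega <- c(0, 0, 7.2921e-5)    # earth's angular velocity (rad/s), z = axis
v     <- c(250, 0, 0)          # airplane velocity (m/s), illustrative value
a_c   <- -2 * cross(omega, v)
a_c                            # the Coriolis acceleration vector
sum(a_c * v); sum(a_c * omega) # both dot products are 0: orthogonal to v and omega
```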
2022-07-03 14:38:46
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9057875871658325, "perplexity": 467.90375120727504}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104244535.68/warc/CC-MAIN-20220703134535-20220703164535-00532.warc.gz"}
http://www.physicsforums.com/showthread.php?t=617501
## Mistake on official answer (conservation of linear momentum)

1. The problem statement, all variables and given/known data

The problem says: A mine provides a transport system consisting of carriages (of negligible mass) hanging from a rail free of friction. At a given moment, two people of mass m = 45 kg each are travelling when one of them says his brakes are not working. At that moment he was at V1 = 10 m/s whilst the carriage ahead was at V2 = 8.0 m/s. Fortunately, the person travelling in the second carriage can take a mass of 25 kg of sand and throw a fraction of it with a velocity V0 = 6.0 m/s (with respect to him). Calculate the MINIMUM mass that the man will have to throw to avoid the collision.

** The "correct" answer is 23 kg

2. Relevant equations

3. The attempt at a solution

Well, obviously this is a conservation of linear momentum problem, but I think it's not 23 kg. Here's what I did:

$$\displaystyle {{v}_{mass}}={{v}_{0}}+{{v}_{back}}$$

That's the speed of the mass with respect to the Earth. Vback is the additional speed the man should have when throwing the mass. If it asks for the minimum, then the man has to have a final speed of 10 m/s, and therefore the "backward" speed should be 2 m/s (8 + 2 = 10):

$$\displaystyle {{v}_{mass}}=-6.0+2.0=-4.0$$

$$\displaystyle {{p}_{i}}={{v}_{2i}}\left( m+{{m}_{x}} \right)$$

$$\displaystyle {{p}_{f}}=\left( m+{{m}_{x}} \right){{v}_{2f}}+{{m}_{x}}\left( {{v}_{mass}} \right)$$

m is 45 kg (the man's mass) and mx is the unknown thrown mass. Notice that my initial mass is the unknown thrown mass plus the mass of the man, but solving for those values I get another answer, BUT if I consider the initial mass as only the mass of the man I get 22.5, which is the correct answer. But should I consider both masses? Thanks!

Recognitions: Gold Member

Quote by Hernaner28: "Fortunately, the person travelling in the second carriage can take a mass of 25 kg of sand and throw a fraction of it with a velocity V0 = 6.0 m/s (with respect to the second person)."

Rereading this part of the problem might help. If you have any more questions, ask!

Quote by Hernaner28: $$\displaystyle {{v}_{mass}}=-6.0+2.0=-4.0$$ $$\displaystyle {{p}_{i}}={{v}_{2i}}\left( m+{{m}_{x}} \right)$$ $$\displaystyle {{p}_{f}}=\left( m+{{m}_{x}} \right){{v}_{2f}}+{{m}_{x}}\left( {{v}_{mass}} \right)$$ Thanks!

If the second man drops sand, what is the velocity of the sand with respect to him, and what is its velocity relative to the ground? The final mass should be less than the original mass, since part has been thrown away.

Sorry, it is a minus sign, I typed it wrongly, but I don't get the answer anyway. I get 23 kg when I do what I told you. I already know how to solve this problem; I just need you to tell me what is correct: consider the whole man and sand, or only the mass of the man. Thanks!

Quote by Hernaner28: "Sorry, it is a minus sign... I just need you to tell me what is correct: consider the whole man and sand, or only the mass of the man. Thanks!"

You have to consider the whole man and sand.

Why? The man throws the mass and he sees it at 6.0 m/s, so the Vmass with respect to the ground is -6.0 plus the velocity that the man gains, which should be 2.0 m/s so that he equals the velocity of the man without brakes, and that would be the minimum.

Take the example of 1 second after they depart. The man moves 10 m to the right. Now, relative to the man, the sand has moved 6 m from him.
What is the distance moved according to a man on the ground? The man moved 10 m to the right, but that was as seen from the ground. He actually has to gain a speed of 2 m/s. Could you finally explain it? Because I'm not getting it. Thank you!

So we are required to have a minimum of 10 m/s to avoid the collision. 10 m/s is ground speed. 6 m/s is relative to the man. Every second the man moves 10 m. To the man, the sand moved 6 m. Can you find the distance travelled by the sand relative to the ground in one second?

16 m/s?

Quote by Hernaner28: "16 m/s?"

So what is the sand's ground velocity? 16 m/s is wrong.

4 m/s? I don't know! Please tell me... if you explain it, I will understand it.

For 1 second: the man goes from 0 to 10 m, and the sand ends up 6 m behind him. You take that value and substitute it in your equation with man and sand.

Seriously, I am not understanding those relative velocities; I need you to explain it to me. If the man moved 10 m, and the sand moved 6 m with respect to him, then the sand moved 4 m for a man on the ground. So it would be -4.

Quote by Hernaner28: "Seriously, I am not understanding those relative velocities... then the sand moved 16 m for a man on the ground."

At the beginning, both are at the origin. 1 second later, the man has moved 10 m to the right. Now the man finds that the sand is 6 m behind him. If the sand's speed were 16 m/s, it would overtake the man. If -4, that means 14 m behind him.

Quote by azizlwl: "At the beginning, both are at the origin... If -4, that means 14 m behind him."

No, the sand moved 4 meters to the left, so the velocity is -4.0 m/s. I don't know what's wrong with that.

$$\displaystyle {{v}_{2i}}\left( m+{{m}_{x}} \right)=\left( m-{{m}_{x}} \right){{v}_{2f}}+{{m}_{x}}\left( {{v}_{mass}} \right)$$

$$\displaystyle 8\cdot 70=10\cdot 70-10{{m}_{x}}-{{m}_{x}}{{v}_{mass}}$$

$$\displaystyle 8\cdot 70=10\cdot 70-10{{m}_{x}}-\left( -4{{m}_{x}} \right)$$

$$\displaystyle {{m}_{x}}=23.33333$$

Nothing was wrong with -4 m/s!!!!

$$\displaystyle {{v}_{mass}}=-6.0+2.0=-4.0$$

I was mistaken with the masses... not with the relative speeds ¬¬
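For anyone retracing the algebra: in the ground frame the thrown sand still moves forward, at 10 - 6 = 4 m/s; the -4 above only lands on the right answer because of a compensating sign slip in the expansion. A clean re-derivation of the same balance, as a sketch:

```r
# Momentum balance for the mine-carriage problem (our own restatement).
m_total <- 45 + 25    # second man plus all his sand, in kg
v_i <- 8              # initial speed of man + sand (m/s)
v_f <- 10             # final speed needed to avoid the collision (m/s)
v_sand <- v_f - 6     # sand's ground speed: thrown 6 m/s backwards relative to him
# m_total * v_i = (m_total - m_x) * v_f + m_x * v_sand, solved for m_x:
m_x <- m_total * (v_f - v_i) / (v_f - v_sand)
m_x                   # 23.33... kg, matching the official ~23 kg answer
```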
2013-05-22 20:43:54
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5918558239936829, "perplexity": 929.2800390266935}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368702447607/warc/CC-MAIN-20130516110727-00042-ip-10-60-113-184.ec2.internal.warc.gz"}
https://ftp.aimsciences.org/article/doi/10.3934/jimo.2011.7.947
October 2011, 7(4): 947-965. doi: 10.3934/jimo.2011.7.947

## A new dynamic geometric approach for empirical analysis of financial ratios and bankruptcy

1 Institute of Mathematical Sciences, Faculty of Science, University of Malaya, 50603 Kuala Lumpur, Malaysia
2 Graduate School of Management, University Putra Malaysia, 43400 Serdang, Selangor, Malaysia
3 Institute for Mathematical Research, University Putra Malaysia, 43400 Serdang, Selangor, Malaysia

Received April 2009; Revised July 2011; Published August 2011

This paper presents a complementary technique for the empirical analysis of financial ratios and bankruptcy risk using financial ratios. Within this new framework, we propose the use of a new measure of risk, the Dynamic Risk Space (DRS) measure. We provide evidence of the extent to which changes in values for this index are associated with changes in each axis's values, and how this may alter our economic interpretation of changes in patterns and directions. In addition, this model tends to be generally useful for predicting financial distress and bankruptcy. The method offers a general methodological guideline for working with financial data, addressing methodological problems concerning financial ratios such as non-proportionality, asymmetry and lack of scaling. To test the procedure, Multiple Discriminant Analysis (MDA), Logistic Analysis (LA) and Genetic Programming (GP) are employed to compare results obtained with common and modified ratios for bankruptcy prediction. The classification methods performed better using the DRS approach.

Citation: Alireza Bahiraie, A.K.M. Azhar, Noor Akma Ibrahim. A new dynamic geometric approach for empirical analysis of financial ratios and bankruptcy. Journal of Industrial and Management Optimization, 2011, 7 (4): 947-965. doi: 10.3934/jimo.2011.7.947
2022-05-27 22:54:44
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.26871436834335327, "perplexity": 10694.834458137111}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652663006341.98/warc/CC-MAIN-20220527205437-20220527235437-00158.warc.gz"}
https://socratic.org/questions/how-do-you-find-the-least-common-multiple-of-27-18
How do you find the Least Common Multiple of 27 and 18?

Nov 25, 2016

$54$

#### Explanation:

One way of finding the lowest common multiple (LCM) is:

• Divide the larger number by the smaller number.
• If it divides exactly, then the larger number is the LCM.
• If not, repeat with multiples of the larger number.

$\frac{27}{18} \leftarrow \text{ does not divide exactly}$

Now we try $27 \times 2 = 54$:

$\frac{54}{18} = 3 \leftarrow \text{ divides exactly}$

Hence $54$ is the LCM of $27$ and $18$.
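The same answer drops out of the classic identity lcm(a, b) = ab / gcd(a, b); a two-function sketch:

```r
# LCM via the Euclidean algorithm (base R has no built-in gcd/lcm).
gcd <- function(a, b) if (b == 0) a else gcd(b, a %% b)
lcm <- function(a, b) a * b / gcd(a, b)
lcm(27, 18)  # 54, matching the multiples method above
```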
2021-06-19 00:19:33
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 8, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6359015703201294, "perplexity": 1551.3078419270876}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487643354.47/warc/CC-MAIN-20210618230338-20210619020338-00456.warc.gz"}
https://socratic.org/questions/what-is-the-horizontal-asymptote-of-f-x-x-1-x-2-3x-4
# What is the horizontal asymptote of f(x) = (x+1) / (x^2 +3x - 4)?

Jul 17, 2018

y = 0

#### Explanation:

If the polynomial in the numerator has a lower degree than the denominator, the x-axis (y = 0) is the horizontal asymptote. The degree is the highest power of the x variable(s).

Jul 17, 2018

$\text{horizontal asymptote at } y = 0$

#### Explanation:

$\text{Horizontal asymptotes occur as}$

${\lim}_{x \to \pm \infty} , f \left(x\right) \to c \text{ ( a constant)}$

$\text{divide terms on numerator/denominator by the highest}$ $\text{power of "x", that is } {x}^{2}$

$f \left(x\right) = \frac{\frac{x}{x} ^ 2 + \frac{1}{x} ^ 2}{{x}^{2} / {x}^{2} + \frac{3 x}{x} ^ 2 - \frac{4}{x} ^ 2} = \frac{\frac{1}{x} + \frac{1}{x} ^ 2}{1 + \frac{3}{x} - \frac{4}{x} ^ 2}$

$\text{as } x \to \pm \infty , f \left(x\right) \to \frac{0 + 0}{1 + 0 - 0}$

$\Rightarrow y = 0 \text{ is the asymptote}$

(A graph of $f(x) = \frac{x+1}{x^2+3x-4}$ on the window $[-10, 10] \times [-5, 5]$ was shown here.)
2019-08-25 15:25:17
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 8, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9408183097839355, "perplexity": 1035.4952081201714}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027330750.45/warc/CC-MAIN-20190825151521-20190825173521-00408.warc.gz"}
http://tex.stackexchange.com/questions/121383/how-does-the-latex-compile-chain-work-exactly
# How does the LaTeX compile chain work exactly?

This might be a simple or out-of-place question, but I have run into so many subtleties of document compilation that I really just need to understand this from the ground up. Could someone please explain how a standard LaTeX compile chain / chain of events / latex make works? I'd also like to know where font loading fits in, when TikZ gets run, when labels get assigned, why pdflatex vs latex, etc. I've seen some online references to 2 * latex, 1 * bibtex, 1 * latex, but I'm looking for a diagram or a reference that explains this in depth, so I can debug this myself on the command line instead of being annoyed by an editor with an insufficient error message (Process started, Process exited with error(s)).

This may sound like a very basic question, but I need an answer - as trying to fix this caused almost a day of writing downtime that could hopefully be avoided in future if I had a causal chain I could check. My apologies for not doing everything through the CLI to begin with.

-

Are you running just 'LaTeX' or some 'build script' (there are several, for example latexmk, rubber, ...)? – Joseph Wright Jun 27 '13 at 14:46

Currently using TexMaker with the default F1 quickbuild. The thing is, everything worked until this morning. Then bibtex just broke. On deleting all temp files and running it on a 'clean' folder, I still get the same problem. I've tried rubber (last half hour) and now the document compiles and the references work; I still want to know what broke, as nothing in my lubuntu system changed as far as I know. Understanding the compile chain better, I can access the logs or aux files myself and debug this if a situation like this ever comes up again. – Forkrul Assail Jun 27 '13 at 14:53

So it looks like rubber runs bibtex once, and then latex twice on my clean doc. I still have no idea why this doesn't work in TexMaker anymore. – Forkrul Assail Jun 27 '13 at 14:54

Perhaps see Understanding how references and labels work (for \label-\ref) and Question mark instead of citation number (for citations/bibliographic references). – Werner Jun 27 '13 at 19:22

Delete all temp files, run latex once or twice, then run bibtex from the terminal. texmaker sadly does not deliver the complete output from bibtex. After that you should see where the problem is. Might just be that you use a citation style which does not cover the type of document you are trying to cite. Not all styles will handle patents and websites etc. – eject Jun 27 '13 at 21:42

The standard LaTeX 'recipe':

1. latex <filename>
2. bibtex <filename>
3. latex <filename>
4. latex <filename>

comes about as follows. During the first run, there is only the .tex file. During the run, LaTeX writes any citation keys and \label information to the .aux file. BibTeX then reads the .aux file and extracts the citations, looks those up in the .bib file(s) and writes the formatted references as a .bbl file. The second LaTeX run reads the .aux file as well as the .tex file, and is able to use this to resolve cross-refs. It also reads the .bbl file, which inserts the references into the output and also sets up the necessary information for the final LaTeX run to put the citation labels (numbers, author-date, ...) into the output.

Life gets more complex in some cases, as it's possible that additional LaTeX or BibTeX runs are needed, for example if there are multiple bibliographies, citations inside references, etc.
Thus there are a number of ways support tools try to detect whether more runs are needed. These are broadly either (1) a fixed 'recipe' or (2) looking for changes in the .aux and other 'derived' files. If your editor is failing to pick up the need to run BibTeX, and it normally does, then either the 'recipe' is corrupted or the scripts it uses are missing something in the auxiliary files. The detail will depend on the exact tool you use.

-

With the insertion of (say) [123] for a citation in run 3 or 4 (as opposed to [??] during run 1), the paragraph layout could also change, possibly causing a (completely) different page layout which, in turn, affects (page) references, which requires a recompile (see Document requiring infinitely many compiler passes? for some reference). – Werner Jun 28 '13 at 18:52

I've run into the 'infinite passes' problem once doing a 45-degree slanted 'draft' watermark. Thanks for the reference. – Forkrul Assail Jul 1 '13 at 10:26

I'm using rubber with custom build scripts that tie in git data and knitr now. It's interesting to think how much there is still to learn about LaTeX. – Forkrul Assail Sep 3 '13 at 10:56
2015-04-27 21:13:21
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9219895005226135, "perplexity": 2612.4511616375257}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-18/segments/1429246659449.65/warc/CC-MAIN-20150417045739-00307-ip-10-235-10-82.ec2.internal.warc.gz"}
https://www.fd-seminar.xyz/talks/2020-06-04/
# Schemes of modules over gentle algebras and laminations of surfaces I will speak about some geometric aspects of the representation theory of gentle algebras. Some results regarding the irreducible components of the affine schemes of modules over gentle algebras will be presented. In the case of gentle algebras arising from triangulations of unpunctured surfaces, a bijection between the set of laminations on the surface and the set of generically $$\tau$$-reduced irreducible components (formerly called “strongly reduced” by Geiss–Leclerc–Schröer) will be described. The talk is based on joint work with Christof Geiss and Jan Schröer.
2020-07-13 14:32:57
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.47988399863243103, "perplexity": 645.724449678663}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593657145436.64/warc/CC-MAIN-20200713131310-20200713161310-00042.warc.gz"}
https://www.shaalaa.com/question-bank-solutions/calculate-area-designed-region-given-figure-common-between-two-quadrants-circles-radius-8-cm-each-areas-combinations-plane-figures_7550
# Calculate the area of the designed region in the given figure common between the two quadrants of circles of radius 8 cm each. - Mathematics

Calculate the area of the designed region in the given figure common between the two quadrants of circles of radius 8 cm each. [Use π = 22/7]

#### Solution

The designed area is the common region between the two sectors BAEC and DAFC.

Area of sector BAEC $= \frac{90^\circ}{360^\circ} \times \frac{22}{7} \times 8^2 = \frac{1}{4} \times \frac{22}{7} \times 64 = \frac{352}{7}\ \text{cm}^2$

Area of ΔBAC $= \frac{1}{2} \times BA \times BC = \frac{1}{2} \times 8 \times 8 = 32\ \text{cm}^2$

Area of the designed portion = 2 × (Area of segment AEC) = 2 × (Area of sector BAEC − Area of ΔBAC)

$= 2\left(\frac{352}{7} - 32\right) = 2\left(\frac{352 - 224}{7}\right) = \frac{2 \times 128}{7} = \frac{256}{7}\ \text{cm}^2$

Is there an error in this question or solution?

#### APPEARS IN

NCERT Class 10 Maths Chapter 12 Areas Related to Circles Exercise 12.3 | Q 16 | Page 238
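A quick numerical check of the arithmetic (our own sketch, using the same π = 22/7 approximation):

```r
# Designed region between two quadrants of radius 8 cm, with pi = 22/7.
r <- 8; p <- 22 / 7
sector   <- p * r^2 / 4      # quadrant area: 352/7
triangle <- r^2 / 2          # right triangle with legs 8 and 8: 32
2 * (sector - triangle)      # 256/7 = 36.571... cm^2, matching the solution
```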
2021-03-01 04:41:11
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5080730319023132, "perplexity": 4092.8308103393265}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178361849.27/warc/CC-MAIN-20210301030155-20210301060155-00137.warc.gz"}
https://research.fdabrandao.pt/csp_heuristics/
Fast Pattern-based Algorithms for Cutting Stock Brandão, F. and Pedroso, J. P. (2014). Fast pattern-based algorithms for cutting stock. Computers & Operations Research, 48(0):69–80. Technical Report (PDF) (the final publication is available at http://dx.doi.org/10.1016/j.cor.2014.03.003) Abstract The conventional assignment-based first/best fit decreasing algorithms (FFD/BFD) are not polynomial in the cutting stock input size in its most common format. Therefore, even for small instances with large demands, it is difficult to compute FFD/BFD solutions. We present pattern-based methods that overcome the main problems of conventional heuristics in cutting stock problems by representing the solution in a much more compact format. Using our pattern-based heuristics, FFD/BFD solutions for extremely large cutting stock instances, with billions of items, can be found in a very short amount of time. BibTeX @article{CSPHeuristics, author = {Brand\~ao, Filipe and Pedroso, J. P.}, title = "Fast pattern-based algorithms for cutting stock", journal = "Computers \& Operations Research", volume = "48", number = "0", pages = "69 - 80", year = "2014", note = "", issn = "0305-0548", doi = "http://dx.doi.org/10.1016/j.cor.2014.03.003", url = "http://www.sciencedirect.com/science/article/pii/S0305054814000525", keywords = "Cutting stock", keywords = "First fit decreasing", keywords = "Best fit decreasing", }
https://api-project-1022638073839.appspot.com/questions/how-do-you-test-the-series-sigma-n-n-1-n-2-from-n-is-0-oo-for-convergence
# How do you test the series sum_(n=0)^(oo) n/((n+1)(n+2)) for convergence?

Nov 16, 2017

The series diverges.

#### Explanation:

Perform the limit comparison test with $a_n = \frac{n}{(n+1)(n+2)}$ and $b_n = \frac{1}{n}$; the series $\sum b_n$ diverges.

We have $a_n > 0$ and $b_n > 0$ for all $n \in \mathbb{N}$, and

$\lim_{n \to \infty} \frac{a_n}{b_n} = \lim_{n \to \infty} \frac{n^2}{(n+1)(n+2)} = \lim_{n \to \infty} \frac{n^2}{n^2 + 3n + 2} = \lim_{n \to \infty} \frac{1}{1 + 3/n + 2/n^2} = 1$

We conclude, by the limit comparison test, that the series $\sum a_n$ diverges.

Nov 16, 2017

The series $\sum_{n=0}^{\infty} \frac{n}{(n+1)(n+2)}$ is divergent.

#### Explanation:

The series has only positive terms, so we can use the limit comparison test to compare it with the harmonic series:

$\lim_{n \to \infty} \frac{n/((n+1)(n+2))}{1/n} = \lim_{n \to \infty} \frac{n^2}{n^2 + 3n + 2} = 1$

As the limit is finite and positive, the two series have the same character, and we know the harmonic series to be divergent; thus the series $\sum_{n=0}^{\infty} \frac{n}{(n+1)(n+2)}$ is also divergent.

Nov 16, 2017

We can use the integral test to show it diverges.

#### Explanation:

Using the integral test, we find:

$\int \frac{x}{(x+1)(x+2)}\,dx = \int \left(\frac{2}{x+2} - \frac{1}{x+1}\right) dx = 2\ln|x+2| - \ln|x+1| + C = \ln\frac{|x+2|^2}{|x+1|} + C > \ln\frac{|x+1|^2}{|x+1|} + C = \ln|x+1| + C \to \infty$ as $x \to \infty$

So $\sum_{n=0}^{N} \frac{n}{(n+1)(n+2)} \to \infty$ as $N \to \infty$.

Nov 16, 2017

See below.

#### Explanation:

By partial fractions, $\frac{n}{(n+1)(n+2)} = \frac{2}{n+2} - \frac{1}{n+1}$, so the partial sums satisfy

$\sum_{n=0}^{N} \frac{n}{(n+1)(n+2)} = 2\sum_{n=0}^{N} \frac{1}{n+2} - \sum_{n=0}^{N} \frac{1}{n+1} = 2(H_{N+2} - 1) - H_{N+1} = H_{N+1} - 2 + \frac{2}{N+2}$

where $H_N = \sum_{n=1}^{N} \frac{1}{n}$ is the $N$-th harmonic number. Since the harmonic numbers grow without bound, the series $\sum_{n=0}^{\infty} \frac{n}{(n+1)(n+2)}$ is divergent.
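A quick numerical check of the partial-sum identity in the last answer (my own addition):

```python
# Compare the partial sum of n/((n+1)(n+2)) with H_{N+1} - 2 + 2/(N+2),
# the closed form derived above via partial fractions; both grow like ln N.
for N in (10, 1_000, 100_000):
    s = sum(n / ((n + 1) * (n + 2)) for n in range(N + 1))
    H = sum(1 / k for k in range(1, N + 2))        # harmonic number H_{N+1}
    print(N, round(s, 6), round(H - 2 + 2 / (N + 2), 6))
```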
http://openstudy.com/updates/55f5c33de4b0a91f39314f6a
1. anonymous: The wing color of a certain species of moth is controlled by alleles at a single locus. Gray (G) is dominant to white (g). A scientist studied a large population of these moths, tracking the frequency of the G allele over time, as shown in the figure below.
2. anonymous:
3. anonymous: Assuming that the population was in Hardy-Weinberg equilibrium for this gene, what percentage of the moth population was homozygous recessive (gg) in 1975?
4. anonymous: A) 25% B) 36% C) 64% D) 75%
5. anonymous: @amistre64 @Nnesha @Compassionate @mathmate
6. amistre64: define: Hardy-Weinberg equilibrium
7. anonymous: p + q = 1?
8. anonymous: I think the answer is C by the way.
9. amistre64: the wiki defines it as: allele and genotype frequencies in a population will remain constant from generation to generation in the absence of other evolutionary influences.
10. anonymous: Okay
11. amistre64: that simply means that the ratio remains consistent .... how to apply that i dont know yet, need to scour the google some more
12. anonymous: Okay please do. I really need this :D
13. amistre64: ok, i see the p+q=1 in that f(AA) = p^2, f(aa) = q^2, f(Aa) = 2pq, which are the terms of the expansion of: (p+q)^2 = 1
14. anonymous: Okay I understand so far.
15. amistre64: Gg is a genotype? G and g are alleles? or how do we define those words?
16. anonymous: Yes that is how we define it in my class as well.
17. amistre64: im trying to determine how we go from a frequency of an allele G to determining a genotype of (gg)
18. amistre64: a Punnett square is like
       G    g
  G | GG | Gg |
  g | Gg | gg |

       g    g
  G | Gg | Gg |
  g | gg | gg |

       g    g
  G | Gg | Gg |
  G | Gg | Gg |
19. anonymous: Yes
20. anonymous: So would my answer be correct?
21. amistre64: i cannot verify your answer, i simply do not have enough experience with this stuff to confirm or deny it. it 'seems' like a good answer; but can you back up its reasoning? even then, i cant verify it.
22. anonymous: Okay thank you so much! @amistre64
23. amistre64: good luck with it
24. amistre64: i wonder $(G+g)^2=1$, $GG+2Gg+\underbrace{gg}_{\text{solution ?}}=1$
25. amistre64: if G is between .4 and .5, g is between .5 and .6, and gg, or g^2, is between .25 and .36. but of course both of those extremes are solution options, and i dont see an option between them, so this idea might not hold any value
26. amistre64: http://www.k-state.edu/parasitology/biology198/answers1.html ok, the idea is good ... at least according to this
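For reference, the Hardy-Weinberg arithmetic the thread circles around can be written out directly. The allele frequency p below is a placeholder: the 1975 value must be read off the figure, which is not reproduced here.

```python
def genotype_freqs(p):
    """Hardy-Weinberg genotype frequencies given the G (dominant) allele frequency p."""
    q = 1 - p                        # frequency of the recessive allele g
    return {"GG": p * p, "Gg": 2 * p * q, "gg": q * q}

# p is whatever the chart shows for 1975; note that gg = 64% (answer C)
# would require p = 0.2, while p = 0.4 gives gg = 36% (answer B).
for p in (0.2, 0.4, 0.5):
    print(p, genotype_freqs(p))
```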
https://economictheoryblog.com/page/3/
# Review: The Conscience of a Liberal

In The Conscience of a Liberal, Paul Krugman provides a compelling explanation for the recent increase of inequality in the United States. In contrast to the widely held idea that it was globalization that caused the increase in inequality, he argues that Continue reading Review: The Conscience of a Liberal

# Review: Hillbilly Elegy

Hillbilly Elegy is a tale of social decay. In an absorbing and fascinating manner, J.D. Vance outlines different episodes of his life that go along with the social decline of the white middle class in the Midwest. In an impressive fashion, Continue reading Review: Hillbilly Elegy

# The Gini Coefficient

The Gini Coefficient is often used as an indicator of inequality in a country. Additionally, one can also use the Gini Coefficient as an indicator of economic development. The Gini Coefficient is based on the Lorenz Curve and measures the degree of income or wealth inequality in an economy. The coefficient is bounded between zero and one. A Gini Coefficient of one indicates complete inequality: a single person receives all the income or holds all the wealth of the economy, while all others receive or own nothing. A Gini Coefficient of zero implies perfect equality: all individuals obtain the same income. See the discussion of the Lorenz Curve for a clear illustration of the concept, and the short numerical sketch at the end of this post listing. Continue reading The Gini Coefficient

# Linear Regression in STATA

In STATA one can estimate a linear regression using the command regress. In this post I will present how to use the STATA function regress to run OLS on the following model $y = \alpha + \beta_{1} x_{1}$

# How to compute the Lorenz Curve

In contrast to our previous post, that is, the post that summarized the Lorenz Curve in general terms, this post details how to construct the Lorenz Curve and provides a hypothetical example in R.

# The Lorenz Curve

The Lorenz Curve displays the actual income or wealth distribution of an economy. The concept was introduced by the American economist Max O. Lorenz in 1905. The curve is a graphical representation of the income or wealth distribution of an economy or country. That is, it shows the proportion of income earned or wealth possessed by any given percentage of the population. In the case that everyone has approximately the same wealth, we have a very equal society, while in a case where a few own the majority of wealth, we have high inequality. The following figure depicts the Lorenz curve for three economies with varying degrees of inequality.

# Cluster Robust Standard Errors in Stargazer

In a previous post, we discussed how to obtain clustered standard errors in R. While the previous post described how one can easily calculate cluster robust standard errors in R, this post shows how one can include cluster robust standard errors in stargazer and create nice tables including clustered standard errors.

# Robust Standard Errors in Stargazer

In a previous post, we discussed how to obtain robust standard errors in R. While the previous post described how one can easily calculate robust standard errors in R, this post shows how one can include robust standard errors in stargazer and create nice tables including robust standard errors.

# Omitted Variable Bias

The omitted variable bias is a common and serious problem in regression analysis. Generally, the problem arises if one does not consider all relevant variables in a regression. In this case, one violates the third assumption of the classical linear regression model. The following series of blog posts explains the omitted variable bias and discusses its consequences.
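As a numerical companion to the Gini and Lorenz Curve posts above (the sketch referenced earlier; my own Python illustration, not the blog's R or Stata code):

```python
def gini(incomes):
    """Gini coefficient from individual incomes via the Lorenz curve.

    Computes 1 - 2 * (area under the Lorenz curve), approximating the area
    with the trapezoidal rule over the sorted cumulative income shares.
    """
    xs = sorted(incomes)
    n, total = len(xs), sum(xs)
    cum, area = 0.0, 0.0
    for x in xs:
        prev = cum / total
        cum += x
        area += (prev + cum / total) / (2 * n)  # trapezoid on [(k-1)/n, k/n]
    return 1 - 2 * area

print(gini([1, 1, 1, 1]))    # 0.0: perfect equality
print(gini([0, 0, 0, 10]))   # 0.75: one person holds everything (max is 1 - 1/n)
```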
https://www.groundai.com/project/recursive-partitioning-and-multi-scale-modeling-on-conditional-densities/
# Recursive partitioning and multi-scale modeling on conditional densities

Li Ma (li.ma@duke.edu)
Department of Statistical Science, Duke University, Durham, NC 27708-0251, USA

August 27th, 2011

###### Abstract

We introduce a nonparametric prior on the conditional distribution of a (univariate or multivariate) response given a set of predictors. The prior is constructed in the form of a two-stage generative procedure, which in the first stage recursively partitions the predictor space, and then in the second stage generates the conditional distribution by a multi-scale nonparametric density model on each predictor partition block generated in the first stage. This design allows adaptive smoothing on both the predictor space and the response space, and it results in the full posterior conjugacy of the model, allowing exact Bayesian inference to be completed analytically through a forward-backward recursive algorithm without the need of MCMC, and thus enjoying high computational efficiency (scaling linearly with the sample size). We show that this prior enjoys desirable theoretical properties such as full support and posterior consistency. We illustrate how to apply the model to a variety of inference problems such as conditional density estimation as well as hypothesis testing and model selection in a manner similar to applying a parametric conjugate prior, while attaining full nonparametricity. Also provided is a comparison to two other state-of-the-art Bayesian nonparametric models for conditional densities in both model fit and computational time. A real data example from flow cytometry containing 455,472 observations is given to illustrate the substantial computational efficiency of our method and its application to multivariate problems.

Running title: A multi-scale prior for conditional distributions. Supported in part by NSF grants DMS-1309057 and DMS-1612889, and a Google Faculty Research Award.

AMS subject classifications: Primary 62F15, 62G99; secondary 62G07.

Keywords: Pólya tree, multi-resolution inference, Bayesian nonparametrics, density regression, Bayesian CART.

## 1 Introduction

In recent years there has been growing interest in nonparametrically modeling probability densities based on multi-scale partitioning of the sample space. A prime example in the Bayesian nonparametric literature is the Pólya tree (PT) [12, 22, 31] and its extensions [17, 18, 45, 21, 27]. In particular, Wong and Ma [45] introduced randomization into the partitioning component (involving both random selection of partition directions as well as optional stopping) of the PT framework, which enhances the model's ability to approximate the shape and smoothness of the underlying density. A PT model with these features is called an optional Pólya tree (OPT). A further desirable feature of the PT and its relatives such as the OPT and the more recently introduced adaptive Pólya tree (APT) [27] is the computational ease for carrying out inference. It turns out that the extra component of randomized partitioning such as that employed in the OPT does not impair the conjugacy enjoyed by the PT. For example, after observing i.i.d. data, the corresponding posterior of an OPT is still an OPT, that is, the same generative procedure for random probability distributions with its parameters updated to their posterior values.
Moreover, the corresponding posterior parameter values can be computed exactly through a sequence of recursive computations, which is in essence a forward-backward algorithm [25]. This, together with the constructive nature of these models, allows one to draw samples from the exact posterior directly without resorting to Markov Chain Monte Carlo (MCMC) procedures, and to compute various summary statistics of the posterior analytically. Furthermore, the marginal posterior of the random partitioning adapts to the underlying structure of the data—the sample space will with high posterior probability be more finely divided in places where the underlying distribution has richer structure, i.e., a less uniform topological shape.

Motivated by the computational efficiency and statistical properties of the OPT, which are tied to its use of recursive random partitioning, we aim to further exploit the random recursive partitioning idea in the context of multi-scale density modeling, and build such a model for the conditional density of a response (vector) Y given a predictor (vector) X. The objective is to construct a flexible nonparametric model for conditional distributions that maintains all of the desirable statistical and computational properties of the PT and the OPT.

A variety of inference tasks involve the estimation, prediction, and testing regarding conditional distributions, and nonparametric inference on conditional densities has been studied from both frequentist and Bayesian perspectives. Many frequentist works are based on kernel estimation methods [10, 16, 11], and they achieve proper smoothing through bandwidth selection, which often involves resampling procedures such as cross-validation [2, 19, 11] and the bootstrap [16]. An alternative frequentist strategy introduced more recently is to employ the so-called block-wise shrinkage [8, 9]. In Bayesian nonparametrics, inference on conditional distributions is often referred to as covariate-dependent distribution modeling, and existing methods fall into two categories. The first category is methods that construct priors for the joint distribution of the response and the predictors, and then use the induced conditional distribution for inference. Some examples are [32, 37, 33, 41], which propose using mixtures of multivariate normals as the model for joint distributions, along with different priors for the mixing distribution. The other category is methods that construct conditional distributions directly without specifying the marginal distribution of the predictors. Many of these methods are based on extending the stick breaking construction for the Dirichlet Process (DP) [39]. Some notable examples, among others, are proposed in [29, 20, 13, 15, 7, 4, 36, 1]. Some recent works in this category do not utilize stick breaking. In [43], the authors propose to use the logistic Gaussian process [23, 42] together with subspace projection to construct smooth conditional distributions. In [21], the authors incorporate covariate dependency into tail-free processes by generating the conditional tail probabilities from covariate-dependent logistic Gaussian processes, and propose a mixture of such processes as a way for modeling conditional distributions. The authors of [24] introduce dependent normalized complete random measures. In [44] the authors introduce the covariate-dependent multivariate Beta process, and use it to generate the conditional tail probabilities of Pólya trees.
More recently, in [40] the authors use the tensor product of B-splines to construct a prior for conditional densities, and incorporate a variable selection feature. While many of these nonparametric models on conditional distributions enjoy desirable theoretical properties, inference using these priors generally relies on intense MCMC sampling, and can take substantial computing time even when both the response and the covariate are one-dimensional.

We introduce a new prior, called the conditional optional Pólya tree, for the conditional density of Y given X, in the form of a two-stage generative procedure that consists of first randomly partitioning the predictor space ΩX and then, for each predictor partition block, generating the response distribution on that block using an OPT, which implicitly employs a further random partitioning of the response space ΩY. We show that this new prior is a fully nonparametric model and yet achieves extremely high computational efficiency even for multivariate responses and covariates. It enjoys all of the desirable theoretical properties of the PT and the OPT priors—namely large support, posterior consistency, and posterior conjugacy—and its posterior parameters can also be computed exactly through forward-backward recursion. Under this two-stage design, the posterior distribution on the partitions reflects the structure of the conditional distribution at two levels—first, the predictor space will be partitioned finely in parts where the conditional distribution changes most abruptly, shedding light on how the conditional distribution depends on the predictors; second, the response space will be divided adaptively for different locations of the predictor space, to capture the local structure of the conditional density through adaptive smoothing.

The rest of the paper is organized as follows. In Section 2 we introduce our two-stage prior and show that it is fully nonparametric—with full (integrated) support—for conditional densities. In addition, we make a connection to Bayesian CART and show that our method can be considered a nonparametric version of the latter. In Section 3 we show the full conjugacy of the model, derive the exact form of the posterior through forward-backward recursion, and thereby provide a recipe for carrying out Bayesian inference using the prior. We also prove the posterior consistency of such inference. In Section 4 we discuss practical computational issues in implementing the inference. In Section 5 we provide four simulation examples to illustrate how our method works. The first two are for estimating conditional densities, and the last two concern model selection and hypothesis testing. In Section 6 we apply the proposed method to estimating conditional densities in a flow cytometry data set involving a large number (455,472) of observations, and demonstrate the computational efficiency of the method and its application when both the response and the predictor are multivariate. Section 7 concludes with some discussions. All proofs are given in the Appendix.

## 2 Conditional optional Pólya trees

In this section we introduce our proposed prior constructively in terms of a two-stage generative procedure that produces random conditional densities. First we introduce some notions and notations that will be used throughout. Let each observation be a predictor-response pair (X, Y), where X denotes the predictor (or covariate) vector, Y the response (vector), ΩX the predictor space, and ΩY the response space.
In this work we consider sample spaces that are either finite spaces, compact Euclidean rectangles, or a product of the two, and ΩX and ΩY do not have to be of the same type. (See for instance Example 3.) Let μX and μY be the "natural" measures on ΩX and ΩY. (That is, the counting measure for finite spaces, the Lebesgue measure for Euclidean rectangles, and the corresponding product measure if the space is a product of the two.) Let μ be the "natural" product measure on the joint sample space ΩX × ΩY. A partition rule R on a sample space Ω specifies a collection of possible ways to divide any subset of Ω into a number of smaller sets. For example, for the unit rectangle in Euclidean space, the coordinate-wise dyadic mid-split rule allows each rectangular subset whose sides are parallel to the coordinates to be divided into two halves at the middle of the range of each coordinate. For simplicity, in this work we only consider partition rules that allow a finite number of ways for dividing each set. Such partition rules are said to be finite. (Interested readers can refer to [28, Sec. 2] for a more detailed treatment of partition rules and to Examples 1 and 2 in [45] for examples of the coordinate-wise dyadic mid-split rule over Euclidean rectangles and contingency tables.)

We are now ready to introduce our prior in terms of its two-stage generative procedure. It is important to note that the following describes the generation of conditional densities under our prior, and not the operational steps for inference under the prior, which will be addressed in Section 3 and Section 4.

Stage I. Predictor partition: We randomly partition ΩX according to a given partition rule RX on ΩX in the following recursive manner. Starting from A = ΩX, draw a Bernoulli variable

S(A) ∼ Bernoulli(ρ(A)).

That is, P(S(A) = 1) = ρ(A). If S(A) = 1, then the partitioning procedure on A terminates and we arrive at a trivial partition of a single block over A. (Thus S(A) is called the stopping variable, and ρ(A) the stopping probability.) If instead S(A) = 0, then we randomly select one out of the possible ways for dividing A under RX and partition A accordingly. More specifically, if there are N(A) ways to divide A under RX, we randomly draw J(A) ∈ {1, 2, …, N(A)} such that

P(J(A) = j) = λj(A) for j = 1, 2, …, N(A), with ∑_{j=1}^{N(A)} λj(A) = 1,

and partition A in the jth way if J(A) = j. (We call the λj(A)'s the partition selection probabilities for A.) Let Kj(A) be the number of child sets that arise from this partition, and let Aj1, Aj2, …, AjKj(A) denote these children. We then repeat the same partition procedure, starting from the drawing of a stopping variable, on each of these children. The following lemma, first proved in [45], states that as long as the stopping probabilities are (uniformly) away from 0, this random recursive partitioning procedure will eventually terminate almost everywhere and produce a well-defined partition of ΩX.

###### Lemma 1. If there exists a δ > 0 such that the stopping probability ρ(A) ≥ δ for all A that could arise after a finite number of levels of recursive partitioning, then with probability 1 the recursive partition procedure on ΩX will stop a.e.

Stage II. Generating conditional densities: Next we move onto the second stage of the procedure to generate the conditional density of the response on each of the predictor partition blocks generated in Stage I. Specifically, for each stopped subset A of ΩX produced in Stage I, we let the conditional distribution of Y given X = x be the same across all x ∈ A, and generate this (conditional) distribution on ΩY, denoted as q0,AY, from a "local" prior.
When the response space ΩY is finite, q0,AY is simply a multinomial distribution, and so a simple choice of such a local prior is the Dirichlet prior:

q0,AY ∼ Dirichlet(αAY),

where αAY represents the pseudo-count hyperparameters of the Dirichlet. In this case, we note that the two-stage prior essentially reduces to a version of the Bayesian CART proposed by Chipman et al in [3] for the classification problem. When ΩY is infinite (or finite but with a large number of elements), one may restrict q0,AY to be from a parametric family. For example, when ΩY is (a subset of) the real line, one may require q0,AY to be normal with some mean and variance and place conjugate priors on these parameters. In this case the two-stage prior again reduces to a Bayesian CART, this time for the regression problem [3]. The focus of our current work, however, is on the case when no parametric assumptions are placed on the conditional density. To this end, one can draw q0,AY from a nonparametric prior. A desirable choice for the local prior, which will result in analytic simplicity and computational efficiency as we will later show, is a Pólya tree type model [27], and in particular an optional Pólya tree (OPT) distribution [45]:

q0,AY ∼ OPT(RAY; ρAY, λAY, αAY),

independently across A's given the partition, where RAY denotes a partition rule on ΩY and ρAY, λAY, and αAY are respectively the stopping, selection, and pseudo-count hyperparameters [45]. In general we allow the partition rule for these "local" OPTs to depend on A as indicated in the superscript, but adopting a common partition rule on ΩY—that is, to let RAY = RY for all A—will suffice for most problems. In the rest of the paper, unless stated otherwise we assume that a common rule is adopted. This completes the description of our two-stage procedure. We now formally define the resulting prior.

###### Definition 1. A conditional distribution that arises from the above two-stage procedure is said to have a conditional optional Pólya tree (cond-OPT) distribution. The hyperparameters are the predictor partition rule RX, the response partition rule RY, the stopping probabilities ρ(·), the partition selection probabilities λ(·), and the local parameters ρAY, λAY, and αAY for all A that could arise during the predictor partition under RX.

Remark I: To ensure that this definition is meaningful, one must check that the two-stage procedure will in fact generate a well-defined conditional distribution with probability 1. To see this, first note that because the collection of all potential sets A on ΩX that can arise during Stage I is countable, by Theorem 1 in [45], with probability 1, the two-stage procedure will generate an absolutely continuous conditional distribution of Y given X for x in the stopped part of ΩX, provided that the stopping probabilities are uniformly away from 0. The two-stage generation procedure for the conditional density can then be completed by letting Y given X be uniform on ΩY for the μX-null subset of ΩX on which the recursive partition in Stage I never stops.

Remark II: While the cond-OPT prior involves many hyperparameters, one can appeal to very simple symmetry and self-similarity principles for choosing their values. Specifically, such considerations lead to the simple choice: (i) a constant stopping probability ρ(A) ≡ ρ, (ii) uniform selection probabilities λj(A) = 1/N(A), and (iii) analogous symmetric choices for ρAY, λAY, and αAY for all A, following the default choices in [45]. We note that when useful prior knowledge about the structure of the underlying distribution is not available, or when one is unwilling to assume particular structure over the distribution, it is desirable to specify the prior parameters in a symmetric and self-similar way. The common stopping probability should not be too close to 0 or 1, but take a moderate value between 0.1 and 0.9.
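Under the default specification just described (constant stopping probability, evenly spread split selection), Stage I is straightforward to simulate. The following is a minimal sketch of my own (not code from the paper or the PTT package) drawing one random recursive partition of ΩX = [0, 1) under the dyadic mid-split rule:

```python
import random

def draw_partition(a, b, rho=0.5, depth=0, max_depth=6):
    """Draw one Stage-I recursive partition of [a, b) under the prior:
    stop with probability rho, otherwise mid-split and recurse on both halves.
    max_depth caps the recursion, mimicking a technical terminal level."""
    if depth == max_depth or random.random() < rho:
        return [(a, b)]                          # a stopped block
    mid = (a + b) / 2
    return (draw_partition(a, mid, rho, depth + 1, max_depth)
            + draw_partition(mid, b, rho, depth + 1, max_depth))

random.seed(0)
print(draw_partition(0.0, 1.0))   # a random list of stopped blocks covering [0, 1)
```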
A sensitivity analysis demonstrating the robustness of such choices in the context of OPTs is provided in [28]. As for the partition rules, the coordinate-wise dyadic mid-split rule can serve as a simple default choice for both RX and RY. We will adopt such a specification in all of our numerical examples.

Remark III: One constraint in the cond-OPT is that given the random partition generated in Stage I, the generation of the conditional distribution across different predictor blocks is independent, i.e., in a similar manner as that for Bayesian CART. As we shall see, this constraint is key to the tremendous computational efficiency of the model. It is important to note, however, that due to the randomized partitioning incurred in Stage I, the marginal priors for the conditional distributions on nearby values of x are in fact dependent, thereby achieving smoothing over ΩX to some extent. More flexible smoothing could be achieved through modeling the "local" priors jointly, but that would incur the need for MCMC sampling, and the most desirable feature of PT type models would be lost.

We have emphasized that the cond-OPT prior imposes no parametric assumptions on the conditional distribution. One may wonder whether this prior is truly "nonparametric" in the sense that it can generate all possible conditional densities. Our next theorem confirms this—under mild conditions on the parameters, which the default specification satisfies, the cond-OPT will place positive probability in arbitrarily small neighborhoods of any conditional density. (A definition of an L1 neighborhood for conditional densities is also implied in the statement of the theorem.)

###### Theorem 2 (Large support). Suppose q(·|·) is a conditional density function that arises from a cond-OPT prior whose parameters ρ(A) and λj(A), for all A that could arise during the recursive partitioning on ΩX, are uniformly away from 0 and 1, and whose local OPTs all have full support on the densities on ΩY. Moreover, suppose that the underlying partition rules RX and RY both satisfy the following "fine partition criterion": for every ε > 0, there exists a partition of the corresponding sample space such that the diameter of each partition block is less than ε. Then for any conditional density function f(·|·) and any τ > 0,

P(∫ |q(y|x) − f(y|x)| μ(dx × dy) < τ) > 0.

Furthermore, let fX be any density function on ΩX w.r.t. μX. Then for any τ > 0, we have

P(∫ |q(y|x) − f(y|x)| fX(x) μ(dx × dy) < τ) > 0.

Remark: Sufficient conditions for OPTs to have full support on densities are given in Theorem 2 of [45].

## 3 Bayesian inference with cond-OPT

Next we investigate how Bayesian inference on conditional densities can be carried out using this prior. First, we note that Chipman et al [3] and Denison et al [6] each proposed MCMC algorithms that enable posterior inference for Bayesian CART. These sampling and stochastic search algorithms can be applied directly here, as the local OPT priors can be marginalized out, and so the marginal likelihood under each partition tree that arises in Stage I of the cond-OPT is available in closed form [45, 28]. However, as noted in [3] and other works, due to the multi-modal nature of tree-structured models, the mixing behavior of the MCMC algorithms is often undesirable. This problem is exacerbated in higher-dimensional settings. Chipman et al [3] suggested using MCMC as a tool for searching for good models rather than as a reliable way of sampling from the actual posterior.
The main result of this section is that under simple partition rules such as the coordinate-wise dyadic mid-split rule, Bayesian inference under a cond-OPT prior can be carried out in an exact manner, in the sense that the corresponding posterior distribution can be computed in closed form and directly sampled from, without resorting to MCMC algorithms. Not only is the computation feasible for multivariate sample spaces of moderate dimensions, but it is in fact highly efficient, scaling linearly with the number of observations.

First let us investigate what the posterior of a cond-OPT prior is. Suppose we have observed (x1, y1), (x2, y2), …, (xn, yn), where given the xi's the yi's are independent with some density q(·|·). We assume that q(·|·) has a cond-OPT prior, denoted by π. Further, for any A ⊂ ΩX we let

x(A) := {x1, x2, …, xn} ∩ A and y(A) := {yi : xi ∈ A, i = 1, 2, …, n},

and let n(A) denote the number of observations with predictors lying in A, that is, n(A) = |x(A)|. For A ⊂ ΩX, we use q(A) to denote the (conditional) likelihood under q contributed by the data with predictors in A. That is,

q(A) := ∏_{i: xi∈A} q(yi|xi).

Then, conditional on the event that A arises during the recursive partition procedure on ΩX, we can write q(A) recursively in terms of S(A), J(A), and q0(A), where

q0(A) := ∏_{i: xi∈A} q0,AY(yi)

is the likelihood from the data with xi ∈ A if the partitioning stops on A. That is,

q(A) = S(A) q0(A) + (1 − S(A)) ∏_{i=1}^{K_{J(A)}(A)} q(A_{J(A)}i).   (3.1)

Integrating out the randomness over both sides of Eq. (3.1), we get

Φ(A) = ρ(A) M(A) + (1 − ρ(A)) ∑_{j=1}^{N(A)} λj(A) ∏_i Φ(Aji),   (3.2)

where

Φ(A) := ∫ q(A) π(dq | A arises during the recursive partitioning)

is defined to be the marginal likelihood from the data with xi ∈ A given that A arises during the recursive partitioning on ΩX, whereas

M(A) := ∫ q0(A) π(dq0,AY)   (3.3)

is the marginal likelihood from the data with xi ∈ A if the recursive partitioning procedure stops on A, the integration being taken over the local OPT prior for q0,AY. We note that Eqs. (3.1), (3.2) and (3.3) hold for Bayesian CART as well, with M(A) being the corresponding marginal likelihood of the local normal model or the multinomial model under the corresponding priors such as those given earlier.

Eq. (3.2) provides a recursive recipe for calculating Φ(A) for all A. It is recursive in the sense that Φ(A) is computed based on the values of Φ on A's children. (Of course, to complete the calculation the recursion must eventually terminate everywhere on ΩX. We shall describe the terminal conditions in the next section.) This recursive algorithm is a special case of the forward-backward algorithm [27]. The next theorem establishes the posterior conjugacy of cond-OPT.

###### Theorem 3 (Conjugacy). After observing (x, y) = {(x1, y1), …, (xn, yn)}, where given the xi's the yi's are independent with density q(·|·), which has a cond-OPT prior, the posterior of q(·|·) is again a cond-OPT (with the same partition rules on ΩX and ΩY as the prior). Moreover, for each A that could arise during the recursive partitioning, the posterior parameters are given as follows.

1. Stopping probability: ρ(A | x, y) = ρ(A) M(A) / Φ(A).

2. Selection probabilities: λj(A | x, y) = λj(A) (1 − ρ(A)) ∏_{i=1}^{Kj(A)} Φ(Aji) / (Φ(A) − ρ(A) M(A)).

3. Local parameters: ρAY, λAY, and αAY are replaced by the corresponding posterior parameters of the local OPT after updating with the observed response values y(A).
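To make the recursion of Eq. (3.2) and the posterior update of Theorem 3 concrete, here is a minimal sketch of my own, not the PTT package's implementation. It works on [0, 1) with dyadic mid-splits (a single partition move per node, so λ1(A) = 1), and `marginal_local` is a deliberately crude one-level stand-in for the local OPT marginal M(A); terminal conditions follow the discussion in the next section.

```python
from math import lgamma, exp
import random

def marginal_local(ys, alpha=0.5):
    """Crude stand-in for M(A) (Eq. 3.3): marginal likelihood of the responses
    under a one-level model that only records whether each y falls in
    [0, 1/2) or [1/2, 1), with Beta(alpha, alpha) pseudo-counts."""
    n1 = sum(y >= 0.5 for y in ys)
    n0 = len(ys) - n1
    log_m = (lgamma(2 * alpha) - 2 * lgamma(alpha)
             + lgamma(alpha + n0) + lgamma(alpha + n1)
             - lgamma(2 * alpha + n0 + n1))
    return exp(log_m) * 2 ** len(ys)   # factor 2 per point: density within a half

def phi(a, b, data, rho=0.5, depth=0, max_depth=8):
    """Forward recursion for Phi(A), A = [a, b), via Eq. (3.2).
    data: (x, y) pairs with x in [a, b). Returns (Phi(A), rho(A | x, y))."""
    m = marginal_local([y for _, y in data])
    if depth == max_depth or len(data) <= 1:   # technical/theoretical terminal node
        return m, 1.0
    mid = (a + b) / 2
    phi_left, _ = phi(a, mid, [p for p in data if p[0] < mid], rho, depth + 1, max_depth)
    phi_right, _ = phi(mid, b, [p for p in data if p[0] >= mid], rho, depth + 1, max_depth)
    total = rho * m + (1 - rho) * phi_left * phi_right   # Eq. (3.2)
    return total, rho * m / total                        # Theorem 3, part 1

random.seed(1)
data = [(random.random(), random.random()) for _ in range(200)]  # independent X, Y
print(phi(0.0, 1.0, data)[1])   # posterior stopping probability at the root
```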
This theorem shows that a posteriori our knowledge about the underlying conditional distribution of Y given X can again be represented by the same two-stage procedure that randomly partitions the predictor space and then generates the response distribution accordingly on each of the predictor blocks, except that now the parameters that characterize this two-stage procedure have been updated to reflect the information contained in the data. Moreover, the theorem also provides a recipe for computing these posterior parameters based on M(A) and Φ(A). Given this exact posterior, Bayesian inference can then proceed—samples can be drawn from the posterior cond-OPT directly through vanilla Monte Carlo (as opposed to MCMC) and summary statistics calculated. In the next section, we provide more details on how to implement such inference in practice.

Before that, we present our last theoretical result about the cond-OPT prior—its posterior consistency, which assures the statistician that the posterior cond-OPT distribution will "converge" in some sense to the truth as the amount of data increases. To this end, we first need a notion of neighborhoods for conditional densities under which such convergence holds. We adopt the notion discussed in [35] and [34], by which a (weak) neighborhood of a conditional density function is defined in terms of a (weak) neighborhood of the corresponding joint density. More specifically, for a conditional density function f0(·|·), weak neighborhoods with respect to a marginal density f0X on ΩX are collections of conditional densities of the form

U = { f(·|·) : |∫ gi f(·|·) f0X dμ − ∫ gi f0(·|·) f0X dμ| < εi, i = 1, 2, …, l },

where the gi's are bounded continuous functions on ΩX × ΩY.

###### Theorem 4 (Weak consistency). Let (x1, y1), (x2, y2), … be independent identically distributed vectors from a probability distribution on ΩX × ΩY with joint density f0(x, y) = f0(y|x) f0X(x). Suppose the conditional density is modeled with a cond-OPT prior for which the conditions in Theorem 2 all hold. In addition, assume that the conditional density function f0(·|·) and the joint density f0 are bounded. Then for any weak neighborhood U of f0(·|·) w.r.t. f0X, we have

π(U | (x1, y1), (x2, y2), …, (xn, yn)) ⟶ 1

with probability 1, where π denotes the cond-OPT posterior.

## 4 Practical implementation

Next we address some practical issues in computing the posterior and implementing the inference. For simplicity, from now on we shall refer to a set that can arise during the (Stage I) recursive partitioning procedure as a "node" (i.e., as a node in the partition tree). A prerequisite for applying Theorem 3 is the availability of the Φ(A) terms, which can be determined recursively through Eq. (3.2). Of course, to carry out the computation one must specify terminal conditions on Eq. (3.2), or in other words, on what kind of A's the recursion should terminate. We call such nodes terminal nodes. There are two kinds of nodes for which the value of Φ(A) is available directly according to theory, and thus the recursion can terminate on them. They are (i) nodes that cannot be further divided under the partition rule RX, and (ii) nodes that contain no more than one data point. For a node that cannot be further divided, we must have S(A) = 1 and so Φ(A) = M(A). For a node with no data point, it has no contribution to the likelihood and so Φ(A) = 1. For a node with exactly one data point, Φ(A) is the predictive density of the local OPT on ΩY evaluated at that data point, which is exactly the density of the prior mean of the local OPT and is directly known when the default symmetric and self-similar prior specification for the local OPTs is adopted, as recommended in [45].
Note that with these two types of "theoretical" terminal nodes, in principle the recursion will eventually terminate if one divides the predictor space deeply enough. In practice, however, it is unnecessary to take the recursion all the way down to these theoretical terminal nodes. Instead, one can adopt early termination by imposing a technical limit—such as a minimum size (or maximum depth) of the nodes, either in terms of the natural measure or the number of observations therein—to end the recursion. Nodes that are smaller than the chosen size threshold are forced to be terminal, which is equivalent to setting ρ(A) = 1, and thus Φ(A) = M(A), for these nodes. We call these nodes "technical" terminal nodes. With these theoretical and technical terminal nodes, one can then compute Φ(A) through the recursion formula Eq. (3.2), and compute the posterior according to Theorem 3. Putting all the pieces together, we can summarize the procedure to carry out Bayesian inference with the cond-OPT prior as a four-step recipe:

1. For all nodes (terminal or non-terminal), compute M(A).
2. For each non-terminal node (those that are ancestors of the terminal nodes), use Eq. (3.2) to recursively compute Φ(A).
3. Given the values of M(A) and Φ(A), apply Theorem 3 to get the parameter values of the posterior cond-OPT distribution.
4. Sample from the exact posterior by direct simulation of the random two-stage procedure, and/or compute summary statistics of the posterior.

For the last step, direct simulation from the posterior is straightforward, but we have not discussed what summary statistics to compute and how to do that. This is problem-specific and will be illustrated in our numeric examples in Section 5.

## 5 Examples

In this section we provide four examples to illustrate inference using the cond-OPT prior. The first two illustrate the estimation of conditional densities; the latter two are for model selection and hypothesis testing. In these examples, the partition rules used on both ΩX and ΩY are always the coordinate-wise dyadic mid-split rule. We adopt the same prior specification across all the examples: the prior stopping probability on each non-terminal node is always set to 0.5, the prior partition selection probability is always evenly spread over the possible ways to partition each set, and the probability assignment pseudo-counts for the local OPTs are all set to 0.5. For continuous sample spaces, nodes at 12 levels down the partition tree are set to be the technical terminal nodes.

###### Example 1 (Estimating conditional density with abrupt changes over predictor values). In this example we simulate (X, Y) pairs according to the following distributions:

X ∼ Beta(2, 2)
Y | X < 0.25 ∼ Beta(30, 20)
Y | 0.25 ≤ X ≤ 0.5 ∼ Beta(10, 30)
Y | X > 0.5 ∼ Beta(0.5, 0.5).

We generate data sets of three different sample sizes, 100, 500, and 2500, and place the cond-OPT prior on the distribution of Y given X. Following the four-step recipe given in the previous section, we can compute the posterior cond-OPT and sample from it. A representative summary of the posterior partitioning mechanism is the so-called hierarchical maximum a posteriori (hMAP) [45] partition tree, which can be computed from the posterior analytically [45] and is plotted in Figure 1 for the different sample sizes. (Chipman et al [3] and Wong and Ma [45] both discussed reasons why the commonly adopted MAP is not a good summary for tree-structured posteriors due to their multi-level nature. See [45, Sec. 4.2] for further details and reasons why the hMAP is often preferred to the MAP.)
In Figure 1, within each "leaf" node we plot the corresponding posterior mean of the local OPT. Also plotted for each node is the posterior stopping probability. Even with only 100 data points, the posterior suggests that ΩX should be divided into three pieces—[0, 0.25], [0.25, 0.5], and [0.5, 1]—within which the conditional distribution of Y is homogeneous across x. Note that the posterior stopping probabilities on those three intervals are large, in contrast to the near 0 values on the larger sets. Reliably estimating the actual conditional density function on these sets nonparametrically appears to require more than 100 data points. In this example, a sample size of 500 already does a decent job.

We compare both the model fit and the computing speed of our cond-OPT prior to two existing Bayesian nonparametric models for conditional densities—namely the linear dependent Dirichlet process mixture of normals (LDDP) [5] and the linear dependent Dirichlet process mixture of Bernstein polynomials (LDBP) [1], both available in the DPpackage in R. In this example and the next, for LDDP and LDBP, we draw 1,000 posterior samples from the MCMC with a 2,000 burn-in period and a thinning interval of 3, and use the prior specification given in the examples of the DPpackage. For details, please see the documentation for these two functions in the DPpackage manual on CRAN. To evaluate model fit, we generate an additional testing data set from the true distribution of (X, Y), and calculate the log-score (i.e., the log predictive likelihood of the testing set) for the three methods. Table 1 presents the log-score for the three methods from a typical simulated data set and the corresponding computing time on the same laptop computer with an Intel Core-i7 CPU using a single core without parallelization.

A surprising phenomenon is that the performance of LDBP, in terms of the log-score for the testing sample, is not always monotone increasing in the sample size—that is, a larger training sample does not always lead to a better fit on the testing set. In the particular simulation reported in Table 1, the performance of LDBP is actually monotone decreasing with sample size. The cause is likely that under those models the conditional density is assumed to vary smoothly over the predictors, so when the true conditional density involves abrupt changes, the misspecified models can be consistently wrong even with large sample sizes.

The previous example favors our method because (1) there are a small number of clear boundaries of change for the underlying conditional distribution, and, to a lesser extent, (2) those boundaries—namely 0.25 and 0.5—lie on the potential partition points of the partition rule. In the next example, we examine the case in which the conditional distribution changes smoothly across a continuous ΩX without any boundary of abrupt change.

###### Example 2 (Estimating conditional densities that vary smoothly with predictor values). In this example we generate (X, Y) from a bivariate normal distribution:

(X, Y)′ ∼ BN((0.6, 0.4)′, ((0.1², 0.005), (0.005, 0.1²))).

We generate a data set and apply the cond-OPT prior on the distribution of Y given X as we did in the previous example. Again we compute the posterior cond-OPT following our four-step recipe. The hMAP tree and the posterior mean estimate of the conditional density given the random partition are presented in Figure 2.
Because the underlying predictor space is unbounded, for simplicity we used the empirically observed range of x as ΩX for our simulated example. (Other ways to handle this situation include transforming X to have a compact support, such as through a CDF or rank transform.) One interesting observation is that the "leaf" nodes in Figure 2 have very large (close to 1) posterior stopping probabilities. This may seem surprising, as the underlying conditional distribution is not the same for any neighboring values of x. The large posterior stopping probabilities indicate that on those sets, where the sample size is not large, the gain in achieving a better estimate of the common features of the conditional distribution for nearby values outweighs the loss in ignoring the differences among them.

Again, to compare the model fit and computational efficiency with LDDP and LDBP, we repeat a set of simulations with different sample sizes, 100, 500, and 2500, and again use the log-score on a testing sample of size 100 to evaluate the performance. The results are summarized in Table 2, and they mostly confirm our intuition—the smooth priors overall outperform our model, especially for small sample sizes. The performance difference vanishes as the sample size increases.

###### Example 3 (Model selection over binary predictors). Next we show how one can use cond-OPT to carry out model selection—that is, when multiple predictors are present, identifying the ones that affect the conditional distribution of Y. Consider the case in which the predictors X1, X2, …, X30 form a Markov chain:

X1 ∼ Bernoulli(0.5) and P(Xi = Xi−1 | Xi−1) = 0.7 for i = 2, 3, …, 30.

Suppose the conditional distribution of a continuous response Y is

Y ∼ Beta(1, 6) if (X5, X20, X30) = (1, 0, 1)
Y ∼ Beta(12, 16) if (X5, X20) = (0, 1)
Y ∼ Beta(3, 4) otherwise.

In other words, three predictors, X5, X20, and X30, impact the response in an interactive manner. Our interest is in recovering this underlying interactive structure (i.e., the "model"). To illustrate, we simulate 500 data points from this scenario and place a cond-OPT prior on the distribution of Y given the predictors, and consider predictor partitions up to four levels deep. This is achieved by setting ρ(A) = 1 for any A that arises after four steps of partitioning, and it allows us to search for models involving up to four-way interactions. We again carry out the four-step recipe to get the posterior and calculate the hMAP. The hMAP tree structure, along with the predictive conditional density for Y within each stopped set given the random partition, is presented in Figure 3. The posterior concentrates on partitions involving X5, X20, and X30 out of the 30 variables. While the predictive conditional density for Y is very rough given the limited number of data points in the stopped sets, the posterior recovers the exact interactive structure of the predictors with little uncertainty. In addition, we sample from the posterior and use the proportion of times each predictor appears in the sampled models to estimate the posterior marginal inclusion probabilities. Our estimates based on 1,000 draws from the posterior are presented in Figure 4(a). Note that the sample size 500 is so large that the posterior marginal inclusion probabilities for the three relevant predictors are all close to 1 while those for the other predictors are close to 0. We carry out the same simulation with a reduced sample size of 200, and plot the estimated posterior marginal inclusion probabilities in Figure 4(b). We see that with a sample size of 200, one can already use the posterior to reliably recover the relevant predictors.
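For readers who want to reproduce a data set of the kind used in Example 3, here is a sketch of the simulation as described in the text (my own code; the Beta parameter values are taken literally from the text above):

```python
import random

def simulate_example3(n, n_pred=30, p_stay=0.7):
    """Simulate data as described in Example 3: the predictors form a Markov
    chain with X1 ~ Bernoulli(0.5) and P(X_i = X_{i-1}) = 0.7, and Y follows
    a Beta distribution selected by the interaction of X5, X20, X30."""
    data = []
    for _ in range(n):
        x = [int(random.random() < 0.5)]
        for _ in range(n_pred - 1):
            x.append(x[-1] if random.random() < p_stay else 1 - x[-1])
        if (x[4], x[19], x[29]) == (1, 0, 1):    # X5, X20, X30 (1-indexed)
            y = random.betavariate(1, 6)
        elif (x[4], x[19]) == (0, 1):            # X5, X20
            y = random.betavariate(12, 16)
        else:
            y = random.betavariate(3, 4)
        data.append((x, y))
    return data

train = simulate_example3(500)
```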
###### Example 4 (Test of independence). In this example, we illustrate an application of the cond-OPT prior to hypothesis testing. In particular, we use it to test the independence between X and Y. To begin, note that ρ(ΩX | x, y) in Theorem 3 gives the posterior probability for the conditional distribution of Y to be constant over all values of X in ΩX, or in other words, for Y to be independent of X on ΩX. Hence, one can consider ρ(ΩX | x, y) as a score for the statistical significance of dependence between the observed variables, with small values indicating dependence. A permutation null distribution of this statistic can be constructed by randomly pairing the observed x and y values, and based on this, permutation p-values can be computed for testing the null hypothesis of independence. To illustrate, we simulate the predictors for a sample of size 400 under the same Markov chain model as in the previous example, and simulate a response variable as follows:

Y ∼ Beta(4, 4) if (X1, X2, X5) = (1, 1, 0)
Y ∼ Beta(0.5, 0.5) if (X5, X8, X10) = (1, 0, 0)
Y ∼ Unif[0, 1] otherwise.

In particular, Y is dependent on the predictors, but there is no mean or median shift in the conditional distribution of Y over different predictor values. Figure 5 gives the histogram of the statistic for 1,000 permuted samples, where the vertical dashed line indicates its value for the original simulated data, which equals 0.0384. For this particular simulation, 7 out of the 1,000 permuted samples produced a more extreme test statistic.

Remark I: Note that by symmetry one can place a cond-OPT prior on the conditional distribution of X given Y as well, and that will produce a corresponding posterior stopping probability ρ(ΩY | y, x). One can thus alternatively use ρ(ΩY | y, x) as the test statistic for independence.

Remark II: Testing using the posterior stopping probability is equivalent to using a Bayes factor (BF). To see this, note that the BF for testing independence under the cond-OPT can be written as

BF(Y|X) = [ ∑_{j=1}^{N(A)} λj(A) ∏_i Φ(Aji) ] / M(A) with A = ΩX,

where the numerator is the marginal conditional likelihood of the responses given the predictors if the conditional distribution of Y is not constant over ΩX (i.e., ΩX is divided), and the denominator is that if the conditional distribution of Y is the same for all x (i.e., ΩX is undivided). By Eq. (3.2) and Theorem 3,

BF(Y|X) = [ρ(ΩX) / (1 − ρ(ΩX))] × (1 / ρ(ΩX | x, y) − 1),

which is in a one-to-one correspondence with ρ(ΩX | x, y) given the prior parameters.

## 6 Application to real data: multivariate conditional density estimation in flow cytometry

In flow cytometry experiments for immunological studies, a number (typically 4 to 10) of biomarkers are measured on large numbers of blood cells. Estimated densities and conditional densities of such data can be used for tasks such as automatic classification of the cells [30]. We apply cond-OPT to estimate the conditional density of markers "CD4" and "CD8" given two other markers "FSC-H" and "FSC-W" in a flow cytometry data set. So in this case both the response and the predictor are two-dimensional. This particular data set contains 455,472 cells. Flow cytometry experiments often involve large numbers of cells, and thus practical methods must scale well in computing time and memory usage with respect to the number of observations. This poses a great challenge to existing nonparametric models that require intense MCMC computation. The values of the four markers are measured in the range [0, 1]. We set the maximum level of partitioning to 10 on both the predictor space and the response space, but otherwise use the same prior specification as before.
Figure 6 presents the posterior mean of the conditional density of CD4 and CD8 given FSC-H and FSC-W under the cond-OPT model, given the random partition on the predictor space being the one induced under the hMAP tree, which splits the space into 50 pieces. A vast majority, in fact 44 out of the 50, of the predictor blocks are not technical terminal regions, and so the model indeed smooths the conditional density over the predictor space. Because the number of predictor blocks is relatively large, we present the estimates for only 16 blocks in Figure 6. The entire computation of the full posterior, the hMAP partition, as well as the conditional posterior expectation of the conditional density given the hMAP tree, took about 360 seconds to complete on a single 3.6GHz Intel Core-i7 3820 desktop core without parallelization and required about 8.2 GB of RAM. (Reducing the maximum level of partitions from 10 to 8 reduces the computing time to about 116 seconds and the RAM usage to about 0.6 GB.)

## 7 Discussion

In this work we have introduced a Bayesian nonparametric prior on the space of conditional densities. This prior, which we call the conditional optional Pólya tree, is constructed based on a two-stage procedure that first divides the predictor space and then generates the conditional distribution of the response through local OPT processes. We have established several important theoretical properties of this prior, namely large support, conjugacy, and posterior consistency, and have provided a practical recipe for Bayesian inference using this prior.

The construction of this prior does not depend on the marginal distribution of X. One particular implication is that one can transform X before applying the prior without invalidating the posterior inference. (Note that transforming X is equivalent to choosing a different partition rule on ΩX.) In certain situations it is desirable to perform such a transformation on X. For example, if the data points are very unevenly spread over ΩX, then some parts of the space may contain a very small number of data points. There the posterior is mostly dominated by the prior specification and does not provide much information about the underlying conditional distribution. One way to mitigate this problem is to transform X so that the data are more evenly distributed over ΩX. When X is one-dimensional, for example, this can be achieved by a rank transformation. Another situation in which a transformation of X may be useful is when the dimensionality of X is very high. In this case a dimensionality reduction transformation can be applied to X before carrying out the inference. Of course, in doing so one often loses the ability to interpret the posterior conditional distribution of Y directly in terms of the original predictors. An alternative approach when X is high-dimensional is through variable selection that imposes certain sparsity assumptions, i.e., that only a small number of predictors affect the conditional density. Exact calculation of the full posterior and the marginal inclusion probabilities, as we have carried out in Example 3, is impractical when the number of predictors is large. One strategy to overcome this difficulty is sequential importance sampling, such as the method proposed in [26]. A general limitation of CART-type randomized partitioning methods is that they require a natural ordering on the space to be partitioned. General partitioning strategies can be designed for unordered spaces, but then the computational efficiency of the proposed model would be lost.
Finally, we note that while we have used recursive partitioning in conjunction with the OPT to build a model for conditional density, one can build such models by replacing the OPT with other multi-scale density models in the family of Pólya tree type models, such as the more recently introduced adaptive Pólya tree (APT) [27].

## Software

The proposed model has been implemented in the R package PTT (for Pólya tree type models) as the function cond.opt. A variant of the model that replaces the OPT with an APT is also implemented in the package as the function cond.apt. This package is currently available for download at https://github.com/MaStatLab/PTT and will be submitted to CRAN.

## Acknowledgment

The flow cytometry data set was provided by EQAPOL (HHSN272201000045C), an NIH/NIAID/DAIDS-sponsored, international resource that supports the development, implementation, and oversight of quality assurance programs (Sanchez PMC4138253).

## Appendix: Proofs

###### Proof of Lemma 1. The proof of this lemma is very similar to that of Theorem 1 in [45]. Let $T_1^k$ be the part of $\Omega_X$ that has not been stopped after $k$ levels of recursive partitioning. The random partition of $\Omega_X$ after $k$ levels of recursive partitioning can be thought of as being generated in two steps. First suppose there is no stopping on any set, and let $J^{*(k)}$ be the collection of partition selection variables generated in the first $k$ levels of recursive partitioning. Let $\mathcal{A}^k(J^{*(k)})$ be the collection of sets that arise in the first $k$ levels of non-stopping recursive partitioning, which is determined by $J^{*(k)}$. Then we generate the stopping variables for each set successively level by level, and once a set is stopped, let all its descendants be stopped as well. Now for each $A \in \mathcal{A}^k(J^{*(k)})$, let $I^k(A)$ be the indicator for $A$'s stopping status after $k$ levels of recursive partitioning, with $I^k(A) = 1$ if $A$ is not stopped and $I^k(A) = 0$ otherwise. Because the stopping probabilities are uniformly bounded below by some $\delta > 0$, $E(I^k(A) \mid J^{*(k)}) \le (1-\delta)^k$, and so

$$E\left(\mu_X(T_1^k) \mid J^{*(k)}\right) = E\left(\sum_{A \in \mathcal{A}^k(J^{*(k)})} \mu_X(A)\, I^k(A) \;\middle|\; J^{*(k)}\right) = \sum_{A \in \mathcal{A}^k(J^{*(k)})} \mu_X(A)\, E\left(I^k(A) \mid J^{*(k)}\right) \le \mu_X(\Omega_X)(1-\delta)^k.$$

Hence $E(\mu_X(T_1^k)) \le \mu_X(\Omega_X)(1-\delta)^k$, and by Markov's inequality and the Borel-Cantelli lemma, we have $\mu_X(T_1^k) \rightarrow 0$ with probability 1. ∎

###### Proof of Theorem 2. We prove only the second result, as the first follows by choosing $f_X$ to be the uniform density on $\Omega_X$. Also, we consider only the case when $\Omega_X$ and $\Omega_Y$ are both compact Euclidean rectangles, because the cases when at least one of the two spaces is finite follow as simpler special cases. For $x \in \Omega_X$ and $y \in \Omega_Y$, let $f(x,y)$ denote the joint density. First we assume that the joint density is uniformly continuous. In this case it is bounded on $\Omega_X \times \Omega_Y$; let $M$ denote an upper bound, and let

$$\delta(\epsilon) := \sup_{|x_1-x_2|+|y_1-y_2| < \epsilon} |f(x_1,y_1) - f(x_2,y_2)|.$$

By uniform continuity, we have $\delta(\epsilon) \rightarrow 0$ as $\epsilon \rightarrow 0$. In addition, we define

$$\delta_X(\epsilon) := \sup_{|x_1-x_2| < \epsilon} |f_X(x_1) - f_X(x_2)| \le \int \sup_{|x_1-x_2| < \epsilon} |f(x_1,y) - f(x_2,y)| \, \mu_Y(dy) \le \delta(\epsilon)\,\mu_Y(\Omega_Y).$$

Note that in particular the continuity of $f$ implies the continuity of $f_X$. Let $\sigma$ be any positive constant. Choose a positive constant $\epsilon(\sigma)$ such that $\delta(\epsilon(\sigma)) \max\{1, \mu_Y(\Omega_Y)\} \le \sigma^3/2$. Because all the parameters in the cond-OPT are uniformly bounded away from 0 and 1, there is positive probability that $\Omega_X$ will be partitioned into $A_1, A_2, \ldots, A_m$ where the diameter of each $A_i$ is less than $\epsilon(\sigma)$, and the partition stops on each of the $A_i$'s. (The existence of such a partition follows from the fine partition criterion.) Let $q_i$ denote the density generated by the local OPT on $\Omega_Y$ for the stopped set $A_i$, so that $q(y|x) = q_i(y)$ for $x \in A_i$, let

$$f_i(y) := \frac{1}{\mu_X(A_i)} \int_{A_i} f(x,y) \, \mu_X(dx),$$

and let $I$ be the set of indices $i$ such that $f_X(x) \ge \sigma$ for all $x \in A_i$. Then

$$\begin{aligned} \int |q(y|x) - f(y|x)|\, f_X(x) \, \mu(dx \times dy) \le{}& \int_{f_X(x) < \sigma} |q(y|x) - f(y|x)|\, f_X(x) \, \mu(dx \times dy) \\ &+ \sum_{i \in I} \int_{A_i \times \Omega_Y} \left| q(y|x) - f_i(y) \cdot \frac{\mu_X(A_i)}{P(X \in A_i)} \right| f_X(x) \, \mu(dx \times dy) \\ &+ \sum_{i \in I} \int_{A_i \times \Omega_Y} f_i(y) \left| \frac{\mu_X(A_i)}{P(X \in A_i)} - \frac{1}{f_X(x)} \right| f_X(x) \, \mu(dx \times dy) \\ &+ \sum_{i \in I} \int_{A_i \times \Omega_Y} \left| f_i(y) - f(x,y) \right| \mu(dx \times dy). \end{aligned}$$

Let us consider each of the four terms on the right hand side in turn. First,

$$\int_{f_X(x) < \sigma} |q(y|x) - f(y|x)|\, f_X(x) \, \mu(dx \times dy) \le 2\sigma\,\mu_X(\Omega_X).$$

Note that for each $i$, $f_i(y) \cdot \mu_X(A_i)/P(X \in A_i)$ is a density function in $y$.
Therefore by the large support property of the OPT prior (Theorem 2 in [45]), with positive probability,

$$\int_{\Omega_Y} \left| q_i(y) - f_i(y) \cdot \frac{\mu_X(A_i)}{P(X \in A_i)} \right| \mu_Y(dy) < \sigma,$$

and so

$$\int_{A_i \times \Omega_Y} \left| q(y|x) - f_i(y) \cdot \frac{\mu_X(A_i)}{P(X \in A_i)} \right| f_X(x) \, \mu(dx \times dy) < \sigma P(X \in A_i)$$

for all $i \in I$. Also, for any $x \in A_i$ with $i \in I$, by the choice of $\epsilon(\sigma)$,

$$\left| \frac{\mu_X(A_i)}{P(X \in A_i)} - \frac{1}{f_X(x)} \right| \le \frac{\delta_X(\epsilon(\sigma))}{\sigma\left(\sigma - \delta_X(\epsilon(\sigma))\right)} \le \frac{\sigma^3/2}{\sigma^2/2} = \sigma.$$

Thus

$$\int_{A_i \times \Omega_Y} f_i(y) \left| \frac{\mu_X(A_i)}{P(X \in A_i)} - \frac{1}{f_X(x)} \right| f_X(x) \, \mu(dx \times dy) \le \sigma M \mu_Y(\Omega_Y) P(X \in A_i).$$

Finally, again by the choice of $\epsilon(\sigma)$, $|f_i(y) - f(x,y)| \le \delta(\epsilon(\sigma)) < \sigma$ for $x \in A_i$, and so

$$\int_{A_i \times \Omega_Y} |f_i(y) - f(x,y)| \, \mu(dx \times dy) < \sigma \mu_Y(\Omega_Y) \mu_X(A_i).$$

Therefore for any $\tau > 0$, by choosing a small enough $\sigma$, we can have

$$\int |q(y|x) - f(y|x)|\, f_X(x) \, \mu(dx \times dy) < \tau$$

with positive probability. This completes the proof of the theorem for continuous $f$. Now we can approximate any density function $f$ arbitrarily close in $L_1$ distance by a continuous one $\tilde{f}$. The theorem still holds because

$$\begin{aligned} \int |q(y|x) - f(y|x)|\, f_X(x) \, \mu(dx \times dy) \le{}& \int q(y|x)\, |f_X(x) - \tilde{f}_X(x)| \, \mu(dx \times dy) \\ &+ \int |q(y|x) - \tilde{f}(y|x)|\, \tilde{f}_X(x) \, \mu(dx \times dy) \\ &+ \int |\tilde{f}(x,y) - f(x,y)| \, \mu(dx \times dy) \\ \le{}& \int |q(y|x) - \tilde{f}(y|x)|\, \tilde{f}_X(x) \, \mu(dx \times dy) + 2 \int |\tilde{f}(x,y) - f(x,y)| \, \mu(dx \times dy), \end{aligned}$$

where $\tilde{f}_X$ and $\tilde{f}(y|x)$ denote the corresponding marginal and conditional density functions for $\tilde{f}$. ∎

###### Proof of Theorem 3. Given that a set $A$ is reached during the random partitioning steps on $\Omega_X$, $\Phi(A)$ is the marginal conditional likelihood of $\{Y(A)=y(A)\}$ given $\{X(A)=x(A)\}$. The first term on the right hand side of Eq. (3.2), $\rho(A)M(A)$, is the marginal conditional likelihood of $\{\text{Stop partitioning on } A,\ Y(A)=y(A)\}$ given $\{X(A)=x(A)\}$. Each summand in the second term, $(1-\rho(A))\lambda_j(A)\prod_i \Phi(A_i^j)$, is the marginal conditional likelihood of $\{\text{Partition } A \text{ in the } j\text{th way},\ Y(A)=y(A)\}$ given $\{X(A)=x(A)\}$. Thus the conjugacy of the prior and the posterior updates for $\rho(A)$, $\lambda_j(A)$ and the local OPT follow from Bayes' Theorem and the posterior conjugacy of the standard optional Pólya tree prior (Theorem 3 in [45]). ∎

###### Proof of Theorem 4. By Theorem 2.1 in [34], which follows directly from Schwartz's theorem (see [38] and [14, Theorem 4.4.2]), we just need to prove that the prior places positive probability mass in arbitrarily small Kullback-Leibler (K-L) neighborhoods of $f(\cdot|\cdot)$ w.r.t. $f_X$. Here a K-L neighborhood w.r.t. $f_X$ is defined to be the collection of conditional densities

$$K_\epsilon(f) = \left\{ h(\cdot|\cdot) : \int f(y|x) \log\frac{f(y|x)}{h(y|x)}\, f_X(x)\,\mu(dx \times dy) < \epsilon \right\}$$

for some $\epsilon > 0$. To prove this, we just need to show that any conditional density that satisfies the conditions given in the theorem can be approximated arbitrarily well in K-L distance by a piecewise constant conditional density of the sort that arises from the cond-OPT procedure. We first assume that $f$ is continuous. Following the proof of Theorem 2, let $\delta(\epsilon)$ denote the modulus of continuity of $f$. Let $A_1, A_2, \ldots, A_m$ be a reachable partition of $\Omega_X$ such that the diameter of each partition block is less than $\epsilon$. Next, for each $A_i$, let $\{B_{ij}\}_j$ be a partition of $\Omega_Y$ allowed under OPT such that the diameter of each $B_{ij}$ is also less than $\epsilon$. Let

$$g_{ij} = \sup_{x \in A_i,\, y \in B_{ij}} f(y|x) \qquad \text{and} \qquad g_i(y) = \sum_j g_{ij}\, I_{B_{ij}}(y).$$

Let
2020-07-04 15:45:28
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8683117032051086, "perplexity": 570.2392199995062}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655886178.40/warc/CC-MAIN-20200704135515-20200704165515-00173.warc.gz"}
https://www.shaalaa.com/question-bank-solutions/algebraic-methods-solving-pair-linear-equations-substitution-method-linear-equation-1_1150
# Solution - Algebraic Methods of Solving a Pair of Linear Equations - Substitution Method

Concept: Algebraic Methods of Solving a Pair of Linear Equations - Substitution Method

#### Question

If x + y = 5 and x = 3, then find the value of y.

#### Solution

Substituting x = 3 into x + y = 5 gives 3 + y = 5, so y = 2.

#### Similar questions

A fraction becomes 9/11 if 2 is added to both the numerator and the denominator. If 3 is added to both the numerator and the denominator, it becomes 5/6. Find the fraction.

Solve 2x + 3y = 11 and 2x – 4y = – 24, and hence find the value of ‘m’ for which y = mx + 3.

Form the pair of linear equations for the following problem and find their solution by the substitution method: the larger of two supplementary angles exceeds the smaller by 18 degrees. Find them.

Solve each of the following systems of equations by eliminating x (by substitution): (i) x + y = 7, 2x – 3y = 11; (ii) x + y = 7, 12x + 5y = 7; (iii) 2x – 7y = 1, 4x + 3y = 15; (iv) 3x – 5y = 1, 5x + 2y = 19; (v) 5x + 8y = 9, 2x + 3y = 4.

The taxi charges in a city consist of a fixed charge together with the charge for the distance covered. For a distance of 10 km, the charge paid is Rs 105 and for a journey of 15 km, the charge paid is Rs 155. What are the fixed charges and the charge per km? How much does a person have to pay for travelling a distance of 25 km?
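As a quick numerical check of systems like the second one above, base R's solve() confirms the answer obtained by substitution (this snippet is an added illustration, not part of the site's solution):

# Solve 2x + 3y = 11 and 2x - 4y = -24 as A %*% c(x, y) = b.
A <- matrix(c(2,  3,
              2, -4), nrow = 2, byrow = TRUE)
b <- c(11, -24)
xy <- solve(A, b)          # x = -2, y = 5
m <- (xy[2] - 3) / xy[1]   # from y = m*x + 3, so m = -1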
2017-10-18 07:34:38
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6445790529251099, "perplexity": 373.1099596231524}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187822822.66/warc/CC-MAIN-20171018070528-20171018090528-00219.warc.gz"}
http://www.lookingforananswer.net/if-ax-3-bx-2-cx-d-0-then-find-x.html
# if ax^3+bx^2+cx+d=0 then find x=?

Equations of the form ax^3 + bx^2 + cx + d = 0 are cubic, or third-degree, polynomial equations, since the highest power of x is 3. The resources collected below discuss how to find their roots.

### linear algebra - Prove that if the polynomial f(x) = x^6 + ax^3 + bx^2 + cx + d ...

Try proof by contradiction: if a = b = c = d = 0, then the given f(x) ... Find the rank of $A$ for all real $\lambda$.

### How to factor ax^3 + bx^2 + cx

The equation ax^3 + bx^2 + cx + d = 0 is a cubic polynomial equation, or third-degree polynomial equation; the highest degree of the equation is 3.

### Roots of Cubic Equations - Math Forum - Ask Dr. Math

Introduce t and u, defined by u - t = q and tu = (p/3)^3; then x is recovered from t and u. Start with ax^3 + bx^2 + cx + d = 0 and divide by a to get x^3 + ex^2 + ...

### Fermat's Last Theorem: Solution to the general cubic equation

Girolamo Cardano was able to find a general solution to the cubic equation x^3 + bx^2 + cx + d = 0, building on Nicolo Tartaglia's work.

### The Diophantine equation y^2 = ax^3 + bx^2 + cx + d

In this paper we find some upper bounds for ...

### Solve cubic equation ax^3 + bx^2 + cx + d = 0

A free "Solve cubic equation ax^3 + bx^2 + cx + d = 0" widget for Wordpress, Blogger, or iGoogle.

### Please help me about roots | Physics Forums

If o, p, q are the roots of the equation ax^3 + bx^2 + cx + d = 0, determine the equation whose roots are o^2, p^2 and q^2.

### Cubic function help! g(x) = ax^3 + bx^2 + cx + d? - qfak.com

Find a cubic function g(x) = ax^3 + bx^2 + cx + d that has a local maximum value of 3 ... a(1)^3 + b(1)^2 + c(1) + d ==> a + b + c + d = 0.

### Advanced Math: precalc, zeros of polynomial functions

If you have ax^3 + bx^2 + cx + d = 0, try group factoring: (ax^3 + bx^2) + (cx + d) = 0, then factor x^2 out of the first group.

Factor theorem: if the zeros of the function are integers, the x-intercepts can be tested to find the roots of the equation.

### Consider a third degree function y = ax^3 + bx^2 + cx + d

Reorder the terms so that the polynomial is in standard form Ax^3 + Bx^2 + Cx + D = 0.

### Find a cubic function, f(x) = ax^3 + bx^2 + cx + d, that has a ...

f(1) = a + b + c + d = 0, f'(-3) = 27a - 6b + c = 0, f'(1) = ...

### SOLUTION: For the general cubic polynomial f(x) = ax^3 + bx^2 + cx + d (a different from 0)

Find conditions on a, b, c, d ...

### F(x) = ax^3 + bx^2 + cx + d, where a does not equal 0

Find the condition on the values of a, b, c, d ...
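Alongside the symbolic approaches above, the roots of any particular cubic can be checked numerically. A small sketch in R (the example polynomial is my own choice for illustration; polyroot() is base R and takes coefficients in increasing order of degree):

# Roots of a*x^3 + b*x^2 + c*x + d = 0, illustrated on
# x^3 - 6x^2 + 11x - 6 = 0, which factors as (x - 1)(x - 2)(x - 3).
p3 <- 1; p2 <- -6; p1 <- 11; p0 <- -6
polyroot(c(p0, p1, p2, p3))  # coefficients from the constant term upward
# returns 1, 2 and 3 (as complex numbers, up to rounding error)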
2016-10-24 12:30:15
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5872280597686768, "perplexity": 1179.4628566979027}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988719566.74/warc/CC-MAIN-20161020183839-00294-ip-10-171-6-4.ec2.internal.warc.gz"}
https://www.aimsciences.org/article/doi/10.3934/dcds.2003.9.1243
# American Institute of Mathematical Sciences

September 2003, 9(5): 1243-1262. doi: 10.3934/dcds.2003.9.1243

## Convergence to strong nonlinear rarefaction waves for global smooth solutions of $p-$system with relaxation

1 Laboratory of Mathematical Physics, Wuhan Institute of Physics and Mathematics, The Chinese Academy of Sciences, P. O. Box 71010, Wuhan 430071, China

Received July 2002. Revised December 2002. Published June 2003.

This paper is concerned with the large time behavior of global smooth solutions to the Cauchy problem of the $p-$system with relaxation. Former results in this direction indicate that such a problem possesses a global smooth solution provided that the first derivatives of the solutions with respect to the space variable $x$ are sufficiently small. Under the same smallness assumption on the global smooth solution, we show that it converges to the corresponding nonlinear rarefaction wave; in our analysis, we do not require the rarefaction wave to be weak, and the initial error can also be chosen arbitrarily large.

Citation: Huijiang Zhao, Yinchuan Zhao. Convergence to strong nonlinear rarefaction waves for global smooth solutions of $p-$system with relaxation. Discrete & Continuous Dynamical Systems - A, 2003, 9 (5) : 1243-1262. doi: 10.3934/dcds.2003.9.1243
2021-01-20 11:31:00
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.49323752522468567, "perplexity": 5295.272091877449}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703519984.9/warc/CC-MAIN-20210120085204-20210120115204-00369.warc.gz"}
https://www.nag.com/numeric/nl/nagdoc_27/flhtml/f07/f07fjf.html
# NAG FL Interface: f07fjf (dpotri)

## 1 Purpose

f07fjf computes the inverse of a real symmetric positive definite matrix $A$, where $A$ has been factorized by f07fdf.

## 2 Specification

Fortran Interface:

Subroutine f07fjf (uplo, n, a, lda, info)
Integer, Intent (In) :: n, lda
Integer, Intent (Out) :: info
Real (Kind=nag_wp), Intent (Inout) :: a(lda,*)
Character (1), Intent (In) :: uplo

C Header Interface:

#include <nag.h>
void f07fjf_ (const char *uplo, const Integer *n, double a[], const Integer *lda, Integer *info, const Charlen length_uplo)

The routine may be called by the names f07fjf, nagf_lapacklin_dpotri or its LAPACK name dpotri.

## 3 Description

f07fjf is used to compute the inverse of a real symmetric positive definite matrix $A$; the routine must be preceded by a call to f07fdf, which computes the Cholesky factorization of $A$.

If ${\mathbf{uplo}}=\text{'U'}$, $A={U}^{\mathrm{T}}U$ and ${A}^{-1}$ is computed by first inverting $U$ and then forming $\left({U}^{-1}\right){U}^{-\mathrm{T}}$.

If ${\mathbf{uplo}}=\text{'L'}$, $A=L{L}^{\mathrm{T}}$ and ${A}^{-1}$ is computed by first inverting $L$ and then forming ${L}^{-\mathrm{T}}\left({L}^{-1}\right)$.

## 4 References

Du Croz J J and Higham N J (1992) Stability of methods for matrix inversion IMA J. Numer. Anal. 12 1–19

## 5 Arguments

1: $\mathbf{uplo}$ Character(1) Input
On entry: specifies how $A$ has been factorized.
${\mathbf{uplo}}=\text{'U'}$: $A={U}^{\mathrm{T}}U$, where $U$ is upper triangular.
${\mathbf{uplo}}=\text{'L'}$: $A=L{L}^{\mathrm{T}}$, where $L$ is lower triangular.
Constraint: ${\mathbf{uplo}}=\text{'U'}$ or $\text{'L'}$.

2: $\mathbf{n}$ Integer Input
On entry: $n$, the order of the matrix $A$.
Constraint: ${\mathbf{n}}\ge 0$.

3: $\mathbf{a}\left({\mathbf{lda}},*\right)$ Real (Kind=nag_wp) array Input/Output
Note: the second dimension of the array a must be at least $\max\left(1,{\mathbf{n}}\right)$.
On entry: the upper triangular matrix $U$ if ${\mathbf{uplo}}=\text{'U'}$ or the lower triangular matrix $L$ if ${\mathbf{uplo}}=\text{'L'}$, as returned by f07fdf.
On exit: $U$ is overwritten by the upper triangle of ${A}^{-1}$ if ${\mathbf{uplo}}=\text{'U'}$; $L$ is overwritten by the lower triangle of ${A}^{-1}$ if ${\mathbf{uplo}}=\text{'L'}$.

4: $\mathbf{lda}$ Integer Input
On entry: the first dimension of the array a as declared in the (sub)program from which f07fjf is called.
Constraint: ${\mathbf{lda}}\ge \max\left(1,{\mathbf{n}}\right)$.

5: $\mathbf{info}$ Integer Output
On exit: ${\mathbf{info}}=0$ unless the routine detects an error (see Section 6).

## 6 Error Indicators and Warnings

${\mathbf{info}}<0$: If ${\mathbf{info}}=-i$, argument $i$ had an illegal value. An explanatory message is output, and execution of the program is terminated.

${\mathbf{info}}>0$: Diagonal element $〈\mathit{\text{value}}〉$ of the Cholesky factor is zero; the Cholesky factor is singular and the inverse of $A$ cannot be computed.

## 7 Accuracy

The computed inverse $X$ satisfies

$$\|XA-I\|_2 \le c(n)\,\epsilon\,\kappa_2(A) \quad\text{and}\quad \|AX-I\|_2 \le c(n)\,\epsilon\,\kappa_2(A),$$

where $c(n)$ is a modest function of $n$, $\epsilon$ is the machine precision and $\kappa_2(A)$ is the condition number of $A$ defined by

$$\kappa_2(A) = \|A\|_2\,\|A^{-1}\|_2 .$$

## 8 Parallelism and Performance

f07fjf makes calls to BLAS and/or LAPACK routines, which may be threaded within the vendor library used by this implementation. Consult the documentation for the vendor library for further information.

Please consult the X06 Chapter Introduction for information on how to control and interrogate the OpenMP environment used within this routine. Please also consult the Users' Note for your implementation for any additional implementation-specific information.

## 9 Further Comments

The total number of floating-point operations is approximately $\frac{2}{3}{n}^{3}$.

The complex analogue of this routine is f07fwf.

## 10 Example

This example computes the inverse of the matrix $A$, where

$$A = \begin{pmatrix} 4.16 & -3.12 & 0.56 & -0.10 \\ -3.12 & 5.03 & -0.83 & 1.18 \\ 0.56 & -0.83 & 0.76 & 0.34 \\ -0.10 & 1.18 & 0.34 & 1.18 \end{pmatrix}.$$

Here $A$ is symmetric positive definite and must first be factorized by f07fdf.

### 10.1 Program Text

Program Text (f07fjfe.f90)

### 10.2 Program Data

Program Data (f07fjfe.d)

### 10.3 Program Results

Program Results (f07fjfe.r)
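For a quick cross-check outside the NAG Library, the same computation can be reproduced in base R, whose chol2inv() is documented as an interface to the LAPACK routine dpotri (with chol() playing the role of f07fdf/dpotrf); the matrix is the one from this example:

# Invert the symmetric positive definite example matrix via its Cholesky
# factor, mirroring the f07fdf (dpotrf) + f07fjf (dpotri) sequence.
A <- matrix(c( 4.16, -3.12,  0.56, -0.10,
              -3.12,  5.03, -0.83,  1.18,
               0.56, -0.83,  0.76,  0.34,
              -0.10,  1.18,  0.34,  1.18), nrow = 4, byrow = TRUE)
Ainv <- chol2inv(chol(A))       # chol() returns the upper triangular factor U
max(abs(Ainv %*% A - diag(4)))  # near machine precision, per Section 7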
2021-03-08 09:44:44
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 62, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9554338455200195, "perplexity": 2722.7810753534613}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178383355.93/warc/CC-MAIN-20210308082315-20210308112315-00371.warc.gz"}
https://tylurp.rbind.io/post/iteration/
# Iteration the Easy Way

· 2019/09/01 · 5 minute read

I do not come from a computer science background, but my guess is that base R's apply family of functions stands out as something strange. This strangeness might be because for loops are explicit and make it clear what's going on, so why obfuscate the logic? But on the other hand, I think you could argue that while the logic might be clear, the intent isn't, at least not immediately. This might be one reason functions like lapply exist: at its heart R is a functional programming language, and R users value functionals because they express the intent and allow us to think functionally. Take the example below, where we take every element in a list mylist and convert it to uppercase.

mylist <- list(
  a = letters[1:10],
  b = letters[11:20]
)
mylist
#> $a
#> [1] "a" "b" "c" "d" "e" "f" "g" "h" "i" "j"
#>
#> $b
#> [1] "k" "l" "m" "n" "o" "p" "q" "r" "s" "t"

With lapply we would do something like:

lapply(mylist, toupper)
#> $a
#> [1] "A" "B" "C" "D" "E" "F" "G" "H" "I" "J"
#>
#> $b
#> [1] "K" "L" "M" "N" "O" "P" "Q" "R" "S" "T"

With a for loop we would do something like:

for (i in seq_along(mylist)) {
  mylist[[i]] <- toupper(mylist[[i]])
}
mylist
#> $a
#> [1] "A" "B" "C" "D" "E" "F" "G" "H" "I" "J"
#>
#> $b
#> [1] "K" "L" "M" "N" "O" "P" "Q" "R" "S" "T"

With lapply we are taking a function as input and applying it to every list element. It can be read as: apply the toupper function to everything in my list. With the for loop, we immediately understand iteration is happening, but we must decipher the logic to identify the intent; this becomes more difficult with larger or nested for loops.

The concept of functionals, a function that takes a function as input, is powerful and has dramatically changed the way I've written R code. Prior to this realization, I would tirelessly write for loops to capture everything I needed from a list, and there were 2 problems with this:

1. I would cram as much functionality as I could into a single loop, i.e. sort each vector, take the top 3 values, compute the mean, assign to another object, take that object and multiply it by this other object, etc. The intent would degrade and the loop would become a, "just do everything and solve all my problems" loop.
2. Coming back to the loop later, it was difficult to understand, and I often never reused it.

Now, point 1 isn't necessarily the for loop's fault, it's my fault for cramming everything into a single loop. But maybe there is something to say about why I chose this approach in the first place. For me, I think it was something along the lines of, "I'm already iterating, why step outside of the loop when I can carry on." And maybe this is why I embrace the apply functions, because they force me to think about the intent in small, bite sized steps. Of course, you could fall into the same trap and just make giant, convoluted functions and feed them to lapply. Yet, for some reason, I never had this problem. Perhaps this has to do with the fact that you must name your function, and if you can't come up with a sensible name, it's clear your function is doing too much.

Another idea I've come to love is, "if you can do it once, you can iterate." When I need to iterate over something, I use the following workflow:

1. Take the first element, i.e. mylist[[1]].
2. Write some code to get what you need, no functions or anything, just a script.
3. Wrap that code into meaningful functions that have a clear intent.
4. Feed those functions to lapply!

For example, let's say we have a list, and for every element in that list we want to sort it, keep the three smallest values, and convert them to uppercase. We could easily do this for the first element like so:

mylist <- list(
  a = sample(letters, 10),
  b = sample(letters, 10)
)

mylist2 <- mylist[[1]]
mylist2 <- sort(mylist2)[1:3]
toupper(mylist2)
#> [1] "G" "H" "I"

Now we wrap our sorting logic into a function:

my_sort <- function(x, n) {
  sort(x)[1:n]
}

Finally, we take our function and feed it to lapply. Since we can do it to one element, we can do it to the rest:

mylist <- lapply(mylist, my_sort, n = 3)
lapply(mylist, toupper)
#> $a
#> [1] "G" "H" "I"
#>
#> $b
#> [1] "B" "C" "D"

This approach to iteration has been a life saver, as I no longer worry about the logic of capturing everything in my list; I just worry about how the logic can be applied to a single element. While we haven't covered the purrr package and pipes, I'd like to share one example of how enjoyable iteration can be when combined with pipes:

library(dplyr)
library(purrr)

mylist %>%
  map(my_sort, 3) %>%
  map(toupper)
#> $a
#> [1] "G" "H" "I"
#>
#> $b
#> [1] "B" "C" "D"

This could be read as, "I take my list, then map my sorting function, then map the toupper function." I find that pipes make the intent even more clear, as the step-by-step syntax is familiar to many.
2020-09-19 21:03:28
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4463973045349121, "perplexity": 1392.5663983872857}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400192887.19/warc/CC-MAIN-20200919204805-20200919234805-00521.warc.gz"}
https://itectec.com/superuser/ffmpeg-libx264-how-to-specify-a-variable-frame-rate-but-with-a-maximum/
# FFMPEG / libx264: How to specify a variable frame rate but with a maximum

compression | ffmpeg | framerate | video

Instead of providing a fixed frame rate to FFMPEG/libx264 (-r/-framerate), I would like to specify a variable frame rate with a MAXIMUM value, and allow libx264 to lower the frame rate as it sees fit. The idea here is to get extra compression when there is something like an extended still frame (which happens A LOT in my source videos). I realize that a predictive or bidirectional MPEG frame will compress really well, but it's also possible that the source frame rate is smaller than the one I intend to transcode to (possibly resulting in a BIGGER stream!).

Frustrated that you hadn't found an answer either, I was going to at least answer other people's questions about how to enable VFR (not VBR) output from FFMPEG. The answer to that is the oddly named -vsync option. You can set it to a few different options, but the one you want is '2' or vfr. From the man page:

-vsync parameter
Video sync method. For compatibility reasons old values can be specified as numbers. Newly added values will have to be specified as strings always.

• 0, passthrough: Each frame is passed with its timestamp from the demuxer to the muxer.
• 1, cfr: Frames will be duplicated and dropped to achieve exactly the requested constant frame rate.
• 2, vfr: Frames are passed through with their timestamp or dropped so as to prevent 2 frames from having the same timestamp.
• drop: As passthrough but destroys all timestamps, making the muxer generate fresh timestamps based on frame-rate.
• -1, auto: Chooses between 1 and 2 depending on muxer capabilities. This is the default method.

Note that the timestamps may be further modified by the muxer after this, for example when the format option avoid_negative_ts is enabled. With -map you can select from which stream the timestamps should be taken. You can leave either video or audio unchanged and sync the remaining stream(s) to the unchanged one.

However, I don't quite have enough reputation to post a comment to just answer that 'sub-question' that everyone seems to be having. But I did have a few ideas that I wasn't honestly very optimistic about... and the first one I tried actually worked.

You simply need to combine the -vsync 2 option with the -r $maxfps option, where you replace $maxfps with the maximum framerate you want. And it WORKS! It doesn't duplicate frames from a source file, but it will drop frames that would cause the file to go over the maximum framerate! By default it seems that -r $maxfps by itself just causes it to duplicate/drop frames to achieve a constant framerate, and -vsync 2 by itself causes it to pull the frames in directly without really affecting the PTS values.

I wasn't optimistic about this because I already knew that -r $maxfps puts it at a constant framerate. I honestly expected an error, or for it to just obey whichever option came first or last. The fact that it does exactly what I wanted makes me quite pleased with the FFMPEG developers. I hope this helps you, or someone else later on if you no longer need to know this.
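Putting the answer's combination into one complete command (the file names and the 30 fps cap are placeholders of my own, not from the original post):

ffmpeg -i input.mp4 -c:v libx264 -vsync 2 -r 30 output.mp4

This keeps each frame's original timestamp (variable frame rate) while dropping any frames that would push the output above 30 fps.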
2021-09-18 18:50:48
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4751896560192108, "perplexity": 1448.0895289404507}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780056572.96/warc/CC-MAIN-20210918184640-20210918214640-00217.warc.gz"}
https://codegolf.stackexchange.com/questions/170110/calculate-the-sum-of-the-first-n-prime-numbers?page=2&tab=votes
Calculate the sum of the first n prime numbers

I'm surprised that this challenge isn't already here, as it's so obvious. (Or I'm surprised I couldn't find it and somebody will mark it as a duplicate.) Given a non-negative integer $n$, calculate the sum of the first $n$ primes and output it.

Example #1

For $n = 5$, the first five primes are:

• 2
• 3
• 5
• 7
• 11

The sum of these numbers is $2 + 3 + 5 + 7 + 11 = 28$, so the program has to output $28$.

Example #2

For $n = 0$, the "first zero" primes are none. And the sum of no numbers is - of course - $0$.

Rules

• You may use built-ins, e.g., to check if a number is prime.
• This is code golf, so the lowest number of bytes in each language wins!

Japt -mx, 8 bytes

T=_j}a°T

Try it

Pyth, 5 bytes

s.fP_

Try it online here.

s.fP_ZQ   Implicit: Q=eval(input()). Trailing ZQ inferred.
 .f  Q    Starting at Z=1, return the first Q numbers where...
   P_Z    ... Z is prime
s         Sum the resulting list, implicit print

Reticular, 40 bytes (down from 52)

indQ2j;o_1-2~d:^=[d@P~1-]:^*[+]:^1+*o;

Try it online!

Explanation

Fun fact: Reticular does not count 2 as a prime number, so the instruction @P, which gives the $n$-th prime, in reality gives the $(n+1)$-th prime, and due to this we have to add the first prime 2 manually.

in         # Read input and convert to int
dQ2j;o_    # Check if input is 0. If so, output and exit
1-2~d:^=   # Subtract 1 from input and save it as ^
[d@P~1-]   # Duplicate the top of the stack (call it k) and push the k-th prime. Finally swap the two top items in the stack and subtract 1. Stack before: [k]. Stack after: [k-1, k-th prime]
:^*        # Repeat the above a ^ number of times. Stack before: [n]. Stack after: [0, 3, 5, ..., n-th prime, 2]
[+]:^1+*   # Add the two top items in the stack a total of (^+1) number of times
o;         # Output the sum and exit
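For readers who want a plain reference implementation rather than a golfed one, here is an ungolfed sketch in R (the helper names are my own; this is not a competitive entry):

# Sum of the first n primes, by naive trial division.
is_prime <- function(k) {
  if (k < 2) return(FALSE)
  if (k < 4) return(TRUE)          # 2 and 3 are prime
  all(k %% 2:floor(sqrt(k)) != 0)  # no divisor up to sqrt(k)
}

sum_first_primes <- function(n) {
  primes <- integer(0)
  candidate <- 2
  while (length(primes) < n) {
    if (is_prime(candidate)) primes <- c(primes, candidate)
    candidate <- candidate + 1
  }
  sum(primes)  # sum(integer(0)) is 0, so n = 0 works as required
}

sum_first_primes(5)  # 28
sum_first_primes(0)  # 0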
2020-10-28 03:25:47
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 9, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.655765950679779, "perplexity": 2527.65635757125}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107896048.53/warc/CC-MAIN-20201028014458-20201028044458-00276.warc.gz"}
http://www.2874565.com/questions/325172/solving-an-recursive-sequence
# Solving a recursive sequence [closed]

I have a recursive sequence and want to convert it to an explicit formula. The recursive sequence is:

$$f(0) = 4$$ $$f(1) = 14$$ $$f(2) = 194$$ $$f(x+1) = f(x)^2 - 2$$

Answer:

$$f(x+1) = f(x)^2 - 2,\;\;f(0)=4$$ $$\Rightarrow f(x)=2\cos\left(2^x\arccos 2\right)=2 \cosh \left(2^x \ln \left(\sqrt{3}+2\right)\right)$$
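One way to verify the closed form (a standard check, sketched here rather than taken from the original answer): write $f(x) = a^{2^x} + a^{-2^x}$ with $a = 2+\sqrt{3}$. Then

$$f(x)^2 - 2 = \left(a^{2^x} + a^{-2^x}\right)^2 - 2 = a^{2^{x+1}} + 2 + a^{-2^{x+1}} - 2 = f(x+1),$$

$$f(0) = (2+\sqrt{3}) + (2+\sqrt{3})^{-1} = (2+\sqrt{3}) + (2-\sqrt{3}) = 4,$$

and since $a = e^{\ln(2+\sqrt{3})}$, this is exactly $f(x) = 2\cosh\left(2^x \ln\left(\sqrt{3}+2\right)\right)$.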
2019-03-21 12:21:28
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 6, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7790811061859131, "perplexity": 2526.2888336982924}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912202523.0/warc/CC-MAIN-20190321112407-20190321134407-00487.warc.gz"}
http://freewordfinder.com/dictionary/frequency/
# Frequency

#### Pronunciation

• enPR: frĕʹkwən-sē, IPA: /ˈfriːkwənsi/

#### Origin

From Latin frequentia, from frequens.

## Full definition of frequency

#### frequency (plural frequencies)

1. (uncountable) The rate of occurrence of anything; the relationship between incidence and time period.
• With growing confidence, the Vikings’ raids increased in frequency.
• The frequency of bus service has been improved from 15 to 12 minutes.
2. (uncountable) The property of occurring often rather than infrequently.
• The FAQ addresses questions that come up with some frequency.
• The frequency of the visits was what annoyed him.
3. (countable) The quotient of the number of times $n$ a periodic phenomenon occurs over the time $t$ in which it occurs: $f = n/t$.
• The frequency of the musical note A above middle C is 440 oscillations per second.
• The frequency of a wave is its velocity $v$ divided by its wavelength $\lambda$: $f = v/\lambda$.
• Broadcasting live at a frequency of 98.3 megahertz, we’re your rock alternative!
• The frequency for electric power in the Americas is generally 60 Hz rather than 50.
4. (statistics) The number of times an event occurred in an experiment (absolute frequency).
2022-07-05 21:59:59
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 5, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.687462568283081, "perplexity": 4200.153924915092}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104628307.87/warc/CC-MAIN-20220705205356-20220705235356-00797.warc.gz"}