## Abstract
A rich literature has documented the negative association between dark skin tone and many dimensions of U.S.-born Americans’ life chances. Despite the importance of both skin tone and immigration in the American experience, few studies have explored the effect of skin tone on immigrant assimilation longitudinally. I analyze data from the New Immigrant Survey (NIS) 2003 to examine how skin tone is associated with occupational achievement at three time points: the last job held abroad, the first job held in the United States, and the current job. Dark-skinned immigrants experience steeper downward mobility at arrival in the United States and slower subsequent upward mobility relative to light-skinned immigrants, net of human and social capital, race/ethnicity, country of origin, visa type, and demographics. These findings shed light on multiple current literatures, including segmented assimilation theory, the multidimensionality of race, and the U.S. racial hierarchy.
## Introduction
Skin tone is at the center of recent scholarship on the process of changing racial hierarchies and racial inequality. The greater influx of immigrants of color, some scholars have predicted, will change the current Black-White racial divide in the future. Some expect the American stratification structure to become more like that of Latin America, where inequality is shaped along color scales rather than racial categories (Bailey et al. 2016; Bonilla-Silva and Dietrich 2009; Telles 2014).1 Others expect that the racial hierarchy in the United States will change from Whites/non-Whites to non-Blacks/Blacks as the majority group embraces lighter-skinned racial groups whose socioeconomic status becomes comparable to Whites, including Asians and light-skinned Latinxs—a process called “whitening” (Gans 2012; Lee and Bean 2007). Another prediction of the future racial hierarchy in the United States is a move “toward the eventual elimination of distinct racial and ethnic groups in favor of a skin-color hierarchy” given that the skin color hierarchy has changed little over time but the meaning of race and ethnicity has changed substantially (Hochschild 2005:81).
The diverse racial and ethnic composition of contemporary immigrants also suggests that skin tone plays a key role for immigrants in navigating their positions in the U.S. racial hierarchy and their relation to native-born Americans at the individual level. When immigrants encounter the United States’ prevailing system of colorism and racism, they experience a different hierarchy of skin tone status than in their country of origin (Foner 2000; Roth 2010; Waters 1999). Skin tone can be expected to influence the immigrant assimilation process and ethnoracial identity formation, even among the second generation, because skin tone is an important phenotype by which native people perceive immigrants’ race (Alba and Nee 1997; Gans 1992; Portes and Rumbaut 2006; Portes and Zhou 1993; Zhou 1997).
Nonetheless, empirical studies examining immigrant skin tone discrimination are scarce, in large part because representative survey data to measure immigrants’ skin tone in addition to race are also rare.2 Longitudinal data that include skin tone are even scarcer.3 Existing empirical, survey-based studies of immigrants find a negative effect of dark skin tone on labor market outcomes (e.g., Frank et al. 2010; Hersch 2008; Mason 2004; Rosenblum et al. 2016). A drawback of these studies is that they examined the effect of skin tone at only one time point, leaving unexamined the question of how skin tone influences the immigration and labor market assimilation process over time. Nonetheless, these existing cross-sectional studies suggest that reception in the United States is deflected downward for darker-skinned immigrants.
On the other hand, studies that examine immigrant assimilation processes over time tend to neglect skin tone effects (e.g., Akresh 2008; Chiswick 1978; Chiswick and Miller 2009). They explain the downward mobility immigrants experience when they cross the border as the result of imperfect skill transferability and show that immigrants later experience upward mobility as they accumulate destination-specific human and social capital, resulting in U-shaped labor market trajectories. Despite their focus on how country of origin, visa type, education level, and social ties influence the depth of the U-shape, none of these studies examined immigrants’ skin tone or race. That country of origin is the main variable of interest in these studies reflects an assumption that immigrants from one country are racially homogeneous and ignores the possibility that skill transferability is itself influenced by racial or skin tone discrimination.
In this study, using the New Immigrant Survey (NIS) 2003 data, I analyze how skin tone influences immigration and assimilation processes in occupational trajectories, including the baseline pre-immigration period. Drawing on a representative survey of immigrants who obtained legal permanent residency (LPR), this study is the first to analyze the mechanisms of dark skin tone penalties for immigrants in the U.S. labor market with a focus on temporal incorporation, controlling for pre-immigration selection across the full range of immigrant origins. It also contributes to the immigrant assimilation literature by focusing on the effect of skin tone and racial group membership on intragenerational mobility for the first generation, a topic that remains underdeveloped in the current literature. In addition, it sheds light on the role of skin tone in the literature on skill transferability by examining the extent to which skin tone influences that process.
## Colorism and Racial Inequality in the United States
A rich literature has documented the negative association between dark skin tone and various social stratification outcomes in the United States, such as occupational status, income, educational attainment, and mate selection (Hamilton et al. 2009; Hughes and Hertel 1990; Hunter 1998; Keith and Herring 1991; Monk 2014). Psychological domains, such as self-esteem, perceptions of attractiveness, racial identity, and Whites’ affective reactions to minorities have been explored as well (Bond and Cash 1992; Hagiwara et al. 2012).
Stratification by skin tone is the consequence of colorism, defined as “the process of discrimination that privileges light-skinned people of color over their dark-skinned counterparts” (Hunter 2007:234). Colorism is conventionally understood as a within-ethnoracial process that operates in relation to racism but is in many ways distinct from it. Hunter (2007:238), for example, theorizes that “racism is a larger, systemic social process and colorism is one manifestation of it,” characterizing the degree of racial discrimination as moderated by skin tone, with lighter-skinned individuals facing less racial discrimination. Reflecting this perspective, most empirical studies on the negative effects of dark skin tone among Americans explore these effects for only one specific racial/ethnic group, often Blacks (Hughes and Hertel 1990; Hunter 1998; Keith and Herring 1991) or Hispanics (Espino and Franz 2002; Mason 2004).
However, research on the multidimensionality of race (Roth 2010; Saperstein 2006) suggests that colorism should also be conceptualized beyond a within-race discrimination process because skin tone, along with other phenotypes, influences the boundaries of ethnoracial group membership itself. Ethnoracial boundaries are not naturally given or fixed but are instead created and changed through constant negotiation among actors with different strategies for defining membership in ethnoracial groups (Wimmer 2008), through interactions across individual (micro), institutional (meso), and cultural (macro) levels (Saperstein et al. 2013). In this process, skin tone plays a key role as a signal of race, particularly when the racial classification process is an interactional accomplishment. Research has noted that along multiple dimensions of ethnoracial classification, perception of race by others is more critical than self-identified race in predicting discrimination (Golash-Boza 2006; Mason 2004; Monk 2015; Roth 2010; Saperstein 2006). A mismatch between ethnoracial self-identity and ethnoracial classification by others based on phenotype is often observed in daily interactions. For example, light-skinned Hispanic high school students are often perceived as White European descendants despite self-identifying as Latinx (Fergus 2009). Because of such a mismatch, Latinxs in particular experience a dark skin tone penalty: immigrants from Latin America tend to identify themselves as White despite having dark skin, on the basis of which potential employers perceive them as Black (Rosenblum et al. 2016). Conversely, the finding by Goldsmith et al. (2006) that earnings of Blacks with light skin tone are statistically indistinguishable from earnings of comparable Whites suggests that light-skinned self-identified Blacks are treated in the same manner as Whites in the labor market.
Taken together, these findings suggest that categorical racial lines are fluid, contingent on skin tone as well as other social contexts, particularly at the phenotypic borders. Reflecting such a discrepancy between self-identified and perceived race, empirical studies show that the combination of skin tone and self-identified race is a better predictor of inequality in the United States and in the majority of Latin American countries than either race or skin tone alone (Bailey et al. 2016). This scholarship thus suggests that skin tone stratification must be understood beyond one ethnoracial group because skin tone and self-identity are two different dimensions of race (Roth 2010).
## Downward Mobility of Immigrants With Dark Skin Tone
### Dark Skin Tone Penalties
Empirical studies of variation in skin tone show a dark skin tone penalty in the U.S. labor market among immigrants as well, net of race and other individual demographics. Hersch (2008), using the NIS data, finds that immigrants with the lightest skin color earn, on average, 17% more than comparable immigrants with the darkest skin tone, net of race. Such a dark skin tone penalty is also found within immigrant groups. Frank et al. (2010), for example, using the same data, find dark skin tone to be associated with wage loss among Latinx immigrants: darker-toned Latinx immigrants earned, on average, $2,500 less per year than their lighter-skinned counterparts.
The existence of dark skin tone penalties indicates that skin tone is a critical factor in immigrants’ assimilation to the U.S. mainstream. Assimilation theory predicts that immigrants will be unilaterally assimilated into the U.S. mainstream, although it may take time (Alba and Nee 1997, 2003). Scholars of assimilation theory argue that dark skin tone may slow the pace of acculturation because of the resulting racial discrimination but that dark skin tone is not an absolute obstacle. Gans (1992) raised the possibility that immigrants with dark skin color or from a low socioeconomic class in the country of origin, in particular, may be trapped at the bottom of stratification in the United States. Because skin tone separates those deemed phenotypically Black from Whites, immigrants with darker skin tone—like those from the Caribbean—will likely have more difficulty assimilating to the United States than immigrants with lighter skin tone (Alba and Nee 1997). However, assimilation theorists further argue that skin tone is not an all-encompassing obstacle given that there are some immigrant groups—such as South Asians—whose skin tone is relatively dark but who have successfully achieved higher socioeconomic status. Instead, in their new assimilation theory, Alba and Nee (2003) contend that the types of capital (i.e., human and cultural capital) immigrants bring with them are stronger predictors of immigrants’ assimilation than are skin tone or race.
In contrast, segmented assimilation scholars suggest that race is a singularly critical determinant of immigrants’ assimilation paths, especially among immigrant children. Segmented assimilation theory posits that immigrants’ assimilation into U.S. society is not a singular path but rather that the context of reception in the United States determines the direction of assimilation (upward, lateral, and downward) (Portes and Rumbaut 2006). Skin tone is one of the factors that set the context of reception because some of the new immigrants experienced culturally different racialization hierarchies in their sending societies. Relative to light-skinned European descendants whose assimilation to the American mainstream was less influenced by phenotypical discrimination, new immigrants and their children often encounter racial barriers to upward mobility (Portes and Zhou 1993; Zhou 1997).
As such, both assimilation theory and segmented assimilation theory contend that the dark skin tone of immigrants is an obstacle to assimilation and upward mobility, largely because a dark skin tone is associated with Black Americans, who are often stuck at the bottom of the racial hierarchy in the United States. Alba and Nee (1997:846), for example, argue that “not dark skin color per se, but the appearance of connection to the African American group raises the most impassable racist barriers in the United States.” In other words, a dark skin tone matters only as long as immigrants’ skin tone is dark enough to be perceived as African American.
However, a dark skin tone penalty is found both within and across ethnoracial groups, beyond whether dark skin tone is categorically connected to an African American phenotype. Even within the African American group, dark-skinned members face stronger discrimination in the labor market than lighter-skinned members (Kreisman and Rangel 2015; Monk 2014), and similar results are found among Hispanics (Frank et al. 2010; Mason 2004). Golash-Boza (2006:35) insists that the extent to which Latinxs “fit the Hispanic somatic norm image” of the Indian/mestizo phenotype—a stereotype widely shared among Americans as being hardworking, undocumented, low wage earners—rather than their association with a Black racial phenotype, determines whether they will face racial discrimination instead of ethnic discrimination. Kreisman and Rangel (2015) found a larger wage gap between light-skinned and dark-skinned African Americans than between Whites and Blacks, suggesting that the dark skin tone penalty results from more complex mechanisms than self-identified membership in the Black racial group alone.
### Downward Mobility of Immigrants With Dark Skin Tone
While both assimilation and segmented assimilation theories focus more on the intergenerational mobility of subsequent generations than on the intragenerational mobility of first-generation immigrants themselves, segmented assimilation studies suggest that downward mobility at arrival is influenced by skin tone as well. The argument that race is one of the main factors determining the context of immigrants’ assimilation rests on the premise that immigrants experienced prejudice and discrimination based on their phenotypes differently in their home countries than they do in the United States. Scholars have argued that immigrants of dark skin tone in particular have to redefine their phenotypic attributes as obstacles to upward mobility in the United States after immigration (Portes and Zhou 1993; Zhou 1997). As a consequence, immigrants with dark skin tone often stress their ethnic identities in order to avoid the subordinate status attached to American Blacks (Bonilla-Silva 1997). For example, Foner (2000:260) noted that “dark-skinned (West) Indian immigrants, whose skin color might put them at risk at being confused with African Americans, emphasize their ethnic identity and distinctive history, customs, and culture as a way to avoid such mistakes.”
Although not limited to immigrants, Kreisman and Rangel’s (2015) finding that the wage gap between light-skinned and dark-skinned Blacks increases over time in the National Longitudinal Survey of Youth 1997 data suggests that similar processes may also apply to immigrants. The authors speculate that the cumulative disadvantage for darker-skinned Blacks results from mismatches and job instability due to labor market discrimination. Furthermore, the discrimination is more likely preference-based discrimination against darker-skinned Blacks than statistical discrimination against the Black racial group as a whole, because the negative effect of dark skin tone on wages is not ameliorated despite the accumulation of experience over their working careers.
Many immigrants are known to find their first job in coethnic niches, but dark-skinned immigrants are likely less able to enter into the better-paying general labor market. Morales (2008), for example, finds that dark-skinned Latinxs are more likely than light-skinned Latinxs to find employment in coethnic niches. Applying queuing theory, Morales (2008) explained that based on employers’ preference, workers are sorted by skin tone: lighter-skinned workers are preferred in the general labor market, leaving dark-skinned immigrants with fewer chances to be hired in the general labor market (regardless of earnings) and resulting in limitations on upward mobility. In a similar way, residential immobility of immigrants of dark skin tone (e.g., South et al. 2005) may create a job mismatch as well by prohibiting them from finding housing close to better jobs.
From the preceding discussion, I hypothesize the following:
Hypothesis 1: Immigrants with darker skin tone will experience steeper downward mobility at arrival to the United States net of race.
Hypothesis 2: Immigrants with darker skin tone will experience less steep upward trajectories post-immigration net of race.
On the other hand, a dark skin tone penalty at arrival in the United States may not emerge if dark-skinned immigrants already experienced similar penalties in their country of origin. Preferential treatment toward people with lighter skin tones is also found in many countries around the world, including some Asian countries (Glen 2009), Mexico (Campos-Vazquez and Medina-Cortina 2019; Villarreal 2010), and Brazil (Telles 1992). Stratification by skin color—pigmentocracy—is prevalent across many Latin American countries (Bailey et al. 2016; Telles 2014). However, it is difficult to test this hypothesis unless the immigrants under study can be compared with the population of their sending countries and with the U.S. population in order to measure the relative penalty of dark-skinned immigrants across their sending countries and the United States. Thus, in this study, comparing the skin tone effects net of race between the pre-immigration baseline period and post-immigration periods will test the level of dark skin penalty among immigrants.
## Analytic Strategy
I analyze data from the New Immigrant Survey (NIS),4 which surveyed immigrants who obtained LPR in 2003. Jasso et al. (2000) stressed that the NIS was designed to overcome three deficiencies in previous immigrant-related surveys: (1) cross-sectionality, with a lack of pertinent information on individual immigrants’ dynamics; (2) small sample sizes, which limited the number of immigrant groups that could be analyzed; and (3) missing data on crucial variables, such as specific visa categories, in earlier surveys. The NIS data include information on pre-immigration history and are designed as panel data. Such longitudinal information enables researchers to study dynamic aspects of immigration.
Importantly, the NIS 2003 data include an unusually precise measure of skin tone, ranging from 0 to 10, with 0 being lightest and 10 darkest. The Massey and Martin skin color scale was printed in the field interviewer manual, and interviewers were asked to measure respondents’ skin tone after the survey regardless of race (respondents could not see the scale) (Massey and Martin 2003). Skin tone is reported for the 4,652 face-to-face survey respondents (of 8,573 respondents total); phone interview respondents are necessarily excluded.5 The skin tone measure in the NIS has been tested for precision and judged to be both valid and reliable regardless of the interviewer’s race or other identities (Hersch 2008: appendix A). Following previous studies using the NIS data, skin tone in this study is treated as an interval variable.
The main research aim of this study is to evaluate the effects of skin color on immigrants’ occupational trajectories over the immigration process. The dependent variable is immigrants’ occupational status and its trajectory over time. The survey asks about respondents’ occupations at three time points: the last job held before immigration (Time 1 (T1)),6 the first job in the United States (Time 2 (T2)), and the current job at the interview date (Time 3 (T3)).7 Occupational status is coded with the International Socio-Economic Index (ISEI) of 2008 (Ganzeboom 2010a). The ISEI is a standardized scale of occupations that represents the “weighted sum of mean education and mean income” of incumbents of each occupation, which maximizes the indirect effect of education on earnings while minimizing its direct effect (Ganzeboom et al. 1992:12). This index, constructed using data from the pooled International Social Survey Programme 2002–2007 waves (200,000 men and women in 42 countries, including the United States) (for more details and a complete list of ISEI scores, see Ganzeboom 2010a, b), is validated by its high correlation with job skills and occupational mean earnings across countries (Le Grand and Tåhlin 2013). The census 2003 occupation codes in the NIS 2003 are recoded into ISEI 2008.
I analyze ISEI instead of wage/earnings for two reasons. First, some of the countries of origin are grouped into several regions for confidentiality purposes in the NIS 2003 data, which makes it impossible to adjust wages from jobs abroad into comparable U.S. wages based on international currency rates. Second, ISEI has strengths over the wage/earnings variable in that ISEI is based not only on earnings for each occupation but also on education level for each occupation so as to capture one’s relatively stable socioeconomic status rather than potentially transitory income status. Hence, ISEI is more stable and comparable across time and space, and thus it is a better measurement for international comparison (Hout and DiPrete 2006; Treiman 1977). For these reasons, ISEI was used in many previous studies to examine pre- and post-immigration mobility (e.g., Akresh 2008).8
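To make the occupational coding step concrete, the sketch below shows one way a crosswalk from occupation labels to ISEI-08 scores could be applied. The occupation labels and the crosswalk itself are hypothetical placeholders; the ISEI values are simply the examples cited elsewhere in this article (e.g., waiter = 28, transport conductor = 40), and the actual recoding uses the census 2003 occupation codes and the full Ganzeboom (2010a, b) conversion tables.

```python
import pandas as pd

# Hypothetical crosswalk from occupation labels to ISEI-08 scores.
# The labels are placeholders; the scores are the examples cited in the text.
isei_crosswalk = {
    "transport_conductor": 40,
    "housekeeping_supervisor": 33,
    "waiter": 28,
    "machinery_mechanic": 39,
}

def occupation_to_isei(occupation: str) -> float:
    """Return the ISEI-08 score for an occupation label, or NaN if unmapped."""
    return isei_crosswalk.get(occupation, float("nan"))

# Example: recode a small set of reported occupations at one time point.
jobs = pd.Series(["waiter", "machinery_mechanic", "unknown_occupation"])
print(jobs.map(occupation_to_isei))
```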
Because not all respondents whose skin tone is reported are employed at all three time points, the analysis is limited to those whose skin tone is reported and who were employed at each period. This selection rule makes the current study comparable with previous research that used the same data set and limited the sample to respondents who were employed (Akresh 2008; Frank et al. 2010; Hersch 2008). Although excluding respondents who were not employed in paid work may be a source of selection bias, I show below that such biases are quite small.
Self-identified race is controlled and is interacted with time in separate models in order to examine the relative role of skin color and race/ethnicity. The NIS asks whether respondents are Hispanic or not regardless of race. A relatively large proportion of respondents self-identify as White (53%); this will be discussed further. Asians make up 26% of the sample, compared with 11% for Blacks and 4% for Native Americans and Pacific Islanders. To avoid bias from missing on the race variable, missing data on race/ethnicity is also controlled for. Hispanics constitute 38% of the sample.
Variables that may influence occupational status and immigrants’ assimilation are additionally controlled: demographics of gender and age, human/social/cultural capital, visa type, country of origin,9 U.S. experiences, and regions of U.S. residence. Definitions and measurements are provided in Table A1 in the online appendix.
Table 1 summarizes the descriptive statistics of the variables. I construct the data as person-time longitudinal data. The total number is 8,159 person-time observations for the sample whose skin tone is reported and who were employed at each time point, excluding those whose age is missing and those not working in the United States.10 The mean ISEI across the three time points is 39.10 (e.g., machinery mechanics and repairers). The mean transition between T1 and T2 is –8.17 (e.g., stonemason), and that between T2 and T3 is 2.48 (e.g., building trade workers). These means clearly show that immigrants experience downward mobility with immigration and then recover their occupational status over time. The average trajectory follows a U-shape, as documented in the literature.
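Because the model treats each respondent's three jobs as repeated observations, the occupational records must be stacked into person-time (long) format. A minimal sketch with pandas follows, assuming hypothetical column names (isei_t1, isei_t2, isei_t3) for the ISEI scores at the three time points; it illustrates the data structure, not the actual NIS processing code.

```python
import pandas as pd

# Hypothetical wide-format extract: one row per respondent, with ISEI at the
# last job abroad (T1), the first U.S. job (T2), and the current job (T3).
wide = pd.DataFrame({
    "person_id": [1, 2],
    "skin_tone": [2, 8],
    "isei_t1": [52, 45],
    "isei_t2": [40, 30],
    "isei_t3": [46, 33],
})

# Stack into person-time format: one row per respondent per time point.
long = wide.melt(
    id_vars=["person_id", "skin_tone"],
    value_vars=["isei_t1", "isei_t2", "isei_t3"],
    var_name="time",
    value_name="isei",
)
long["time"] = long["time"].str.replace("isei_t", "T")  # -> T1, T2, T3
print(long.sort_values(["person_id", "time"]))
```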
## Analysis
Figure 1 describes the mean ISEI scores by skin tone at the three time points and clearly shows that occupational status is stratified by skin tone: immigrants with light skin have higher occupational status than those with medium and dark skin tone during and after immigration, and all groups follow U-shaped trajectories. However, the depth of the U-shape varies by skin tone. Immigrants with dark skin tone experience steeper downward mobility and less steep upward mobility after immigration than immigrants with lighter skin tone.
To examine the net effect of skin tone on occupational status at the three time points, I use a generalized least squares (GLS) random-effects model, which adjusts for correlations among observations and heteroscedasticity.11 Fixed-effects models are often applied to panel data in order to capture the net effects of time-varying variables on outcome variables while controlling for both observed and unobserved heterogeneity across entities. I apply random-effects models here because the main research interest is the effect of skin tone, a time-constant variable, as in Eq. (1):
$y_{it} = \mu_t + \alpha_i + \beta \, \mathit{SkinTone}_i + \gamma \, \mathit{Time2}_{it} + \delta \, \mathit{Time3}_{it} + \eta \, (\mathit{SkinTone}_i \times \mathit{Time2}_{it}) + \theta \, (\mathit{SkinTone}_i \times \mathit{Time3}_{it}) + \lambda X_i + \varepsilon_{it},$  (1)
where $y$ = ISEI score, $i$ = individual, $t$ = time point, $X_i$ = a set of time-constant control variables, $\mu_t$ = an intercept that may differ for each period, and $\varepsilon_{it}$ = the individual- and time-specific error term.12 In random-effects models, $\alpha_i$ is assumed to be a set of random variables that are normally distributed, have constant variance, and are independent of all other independent variables. Whereas $\alpha_i$ is controlled for in fixed-effects models, in Eq. (1) it is not controlled for but instead forms a random intercept in combination with $\mu_t$.13 In addition to the coefficient $\beta$ of the main independent variable (skin tone), time dummy variables for T2 and T3 are estimated to measure the mean of the time-specific effects across individuals relative to T1. Interaction terms of skin tone with T2 and T3 are estimated to capture how the effect of skin tone varies across time.
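To illustrate Eq. (1), the sketch below fits a random-intercept model, a close analogue of the GLS random-effects estimator, with statsmodels on simulated person-time data; the variable names, sample size, and effect sizes are hypothetical and only loosely echo the patterns described in the text. The formula interacts skin tone with the time dummies so that the T2 and T3 effects and their skin tone interactions are estimated relative to T1.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 300  # hypothetical number of respondents

# Simulate person-time data loosely mimicking the U-shaped pattern in the text.
person = pd.DataFrame({
    "person_id": np.arange(n),
    "skin_tone": rng.integers(0, 11, n),  # 0 (lightest) to 10 (darkest)
})
long = person.loc[person.index.repeat(3)].reset_index(drop=True)
long["time"] = ["T1", "T2", "T3"] * n
time_effect = {"T1": 0.0, "T2": -8.0, "T3": -5.5}  # illustrative time effects
long["isei"] = (
    45
    - 0.9 * long["skin_tone"]
    + long["time"].map(time_effect)
    - 0.4 * long["skin_tone"] * (long["time"] != "T1")
    + rng.normal(0, 5, 3 * n)
)

# Random-intercept model: skin tone, time dummies (T1 as reference), and
# their interactions, with a person-level random intercept.
model = smf.mixedlm(
    "isei ~ skin_tone * C(time, Treatment(reference='T1'))",
    data=long,
    groups=long["person_id"],
)
print(model.fit().summary())
```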
Results show that immigrants with dark skin tone are likely to have lower occupational status at all three time points. Table 2 summarizes the coefficients of the GLS random-effects model predicting occupational status. Model 1 shows that skin tone that is one unit darker is associated with a 0.88-point lower occupational status, on average, across the three time points. Immigrants experience steep downward mobility at T2, the first job in the United States: occupational status is 8.77 points lower at T2 than at T1, the last job abroad. Immigrants then catch up in occupational status by 2.43 points at T3.
Time and skin tone interaction terms are added in Model 2, which shows that the dark skin tone penalty is stronger at T2 and T3, after immigration to the United States, than before immigration. Skin tone that is one unit darker additionally decreases occupational status by 0.44 points at T2. That is, on average, immigrants experience downward mobility of 6.99 points at T2 after immigration to the United States, and immigrants with one-unit darker skin tone experience 0.44 points of additional downward mobility. Immigrants with the darkest skin tone experience 0.44 × 10 = 4.4 more points of downward mobility at T2 than those with the lightest skin tone. For example, in service and sales occupations, an immigrant who worked as a transport conductor (ISEI = 40) in the country of origin is likely to have a first job in the United States as a cleaning and housekeeping supervisor in offices/hotels (ISEI = 33) but is likely to have a lower-level occupation, such as waiter (ISEI = 28), if he or she has the darkest skin tone.
The dark skin tone penalty in the United States diminishes slightly but persists at T3. The interaction effect of dark skin tone at T3 is –0.34, which means that immigrants with skin tone that is one scale unit darker have a 0.34-point lower occupational status at T3 in addition to the average downward mobility experienced by immigrants relative to T1. This coefficient is slightly smaller in magnitude than the –0.44 at T2 but still larger than before immigration (at T1) and is statistically significant. Controlling for additional covariates in Model 3 yields a similar dark skin tone penalty at T2 and T3, and the results are robust to controlling for race in Model 4.
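As a quick arithmetic check on this interpretation, the snippet below reproduces the predicted penalties across the 0–10 skin tone scale using only the Model 2 coefficients reported above (a T2 main effect of –6.99 and skin tone interactions of –0.44 at T2 and –0.34 at T3); it is a back-of-the-envelope illustration, not a re-estimation of the model.

```python
# Back-of-the-envelope check using only the Model 2 coefficients reported in the text.
T2_MAIN = -6.99      # average ISEI change at T2 relative to T1
SKIN_X_T2 = -0.44    # additional change per skin tone unit at T2
SKIN_X_T3 = -0.34    # additional change per skin tone unit at T3 (T3 main effect not shown)

for tone in (0, 5, 10):  # lightest, medium, darkest on the 0-10 scale
    change_t2 = T2_MAIN + SKIN_X_T2 * tone   # predicted ISEI change at T2 vs. T1
    extra_t3 = SKIN_X_T3 * tone              # additional skin tone penalty at T3 vs. T1
    print(f"skin tone {tone:2d}: T2 change = {change_t2:6.2f}, extra T3 penalty = {extra_t3:6.2f}")
```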
Employment status is also a critical measure of labor market outcomes because unemployment can be an extreme example of downward mobility. However, the NIS 2003 data contain respondents’ employment status in detail only at T3: employed, unemployed and looking for work, temporarily laid off/on sick or other leave, disabled, retired, or a homemaker.14 At T1 and T2, the survey asked about a respondent’s job only if they ever worked for pay. Thus, by necessity, I exclude nonworking individuals from this analysis. As a sensitivity test for resulting bias, I examine whether dark skin tone is associated with nonworking status, including but not limited to unemployment among respondents who have valid skin color information and were in the labor force. Similar to Monk (2014), I find no evidence for association between dark skin tone and nonworking status at any of the three time points.15 Nor do I find an association of skin tone with unemployment when I limit the analysis to T3, for which detailed nonworking status is specified, and exclude those aged 65 and older, most of whom are likely retired.
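A minimal sketch of this kind of sensitivity check follows, assuming a hypothetical binary indicator nonworking (1 = in the labor force but not working) and simulated data. It fits a simple logit of nonworking status on skin tone, which is one straightforward way to test the association described here, although the paper does not specify the exact estimator it used.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 500  # hypothetical labor force sample at one time point

# Simulated data under the null of no association, as the text reports finding.
df = pd.DataFrame({
    "skin_tone": rng.integers(0, 11, n),
    "nonworking": rng.binomial(1, 0.2, n),  # 1 = in the labor force but not working
})

# Logit of nonworking status on skin tone; a significant positive coefficient
# would indicate that darker skin tone is associated with nonworking status.
logit = smf.logit("nonworking ~ skin_tone", data=df).fit(disp=False)
print(logit.summary())
```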
Next, to further examine the relative roles of skin tone and race, I model race and interact it with time. The results are provided in Models 5–7 in Table 2. Hispanics, on average, have 11.12 points lower occupational status than non-Hispanics across the three time points (Model 5). In addition, compared with Whites, Asians have 3.54 points higher occupational status, whereas Blacks have 4.99 points lower status across all time points. When race is interacted with T2 and T3 in Model 6, Hispanics have 1.23 points and 3.13 points higher occupational status, respectively, relative to non-Hispanic immigrants, but the interaction is statistically significant only at T3. Despite their higher occupational status at T3 than at T1, Hispanics have an occupational status that is 12.5 points lower than non-Hispanics at T1. Similarly, Asians have 4.27 and 3.28 points higher occupational status at T2 and T3 relative to White immigrants. Considering the higher occupational status of Asians relative to White immigrants at T1, Asians continue to maintain higher status. On the other hand, Blacks have an occupational status that is 4.39 and 4.02 points lower at T2 and T3, respectively. Results suggest that Hispanics and Asians experience upward mobility over time at T2 and T3 relative to their reference groups, whereas Blacks continue to remain in lower occupational status over time. A similar pattern is found when additional covariates are controlled for in Model 7.16
Finally, in Model 8 (Table 2), both race and skin tone are interacted with T2 and T3. The interaction effects of race with time are similar to those in Model 7, where skin tone and time interactions are not included. The skin tone and time interaction effects decrease to –0.11 and –1.10 at T2 and T3, respectively, and become statistically nonsignificant in Model 8. However, this does not mean that there is no additional skin tone effect on immigrants’ occupational mobility once race interaction effects are also controlled for. Considering the stronger dark skin tone penalty at T2 and T3 (interaction effects) net of race (not interacted with time) in Model 4, one plausible interpretation is that skin tone is a strong indicator of race and that the inclusion of a race interaction absorbs the variance of occupational status associated with skin tone. A supplementary analysis using ordinary least squares (OLS) regression at each time point separately (not shown here), which shows a dark skin tone penalty at T2 and T3 net of race, also supports this interpretation.
However, the extent to which skin tone serves as a signal for race differs across ethnoracial groups. Table 3, which summarizes coefficients of skin tone and its interaction terms with time for each ethnoracial subsample, shows that within-group dark skin tone penalties appear for Hispanics when covariates are controlled as well as for Whites when covariates are not controlled.17 Because Hispanics can be of any race, the negative coefficients of skin tone in both Models 1 and 2 suggest that skin tone is an indicator of race among Hispanics, although there is no time interaction effect. Interestingly, an even stronger negative effect of darker skin tone appears among Whites in Model 3 (–2.11) than among Hispanics in Model 1 (–0.83), which suggests that the discrepancy between self-identified White race and perceived dark skin tone is larger among Whites than among Hispanics (and Blacks and Asians).
It is worth noting that the interaction effects of skin tone with time remain negative for Blacks and Asians (and for Whites at T2), although they are not statistically significant. This loss of significance results, at least in part, from the reduction in statistical power when the sample is stratified by ethnoracial group. Inclusion of interaction terms between race and skin tone (not shown here) in Models 3 and 4 in Table 2 (pooled sample) does not change the interaction effects of skin tone with T2 and T3, suggesting that the within-race dark skin tone penalty likely exists, although statistical power is reduced in the subsamples. In sum, results show that skin tone not only serves as an indicator of perceived race but also creates inequality within self-identified race.
The dark skin tone penalty in the immigration process discussed so far is summarized in Fig. 2, which shows the means of predicted ISEI for the sample by skin tone and time after each individual is fitted to OLS regressions at each time point with all covariates controlled. Even before immigration, immigrants with darker skin tone are predicted to have lower occupational status. The association, however, is not linear: immigrants with the darkest skin tones had higher occupational status than those with medium skin tone at T1. Indian immigrants and highly selected African immigrants with very dark skin color belong to this group. After immigration, at T2 and at T3, immigrants at all skin tone levels have lower occupational status than in their home country. At these time points, however, the relationship between skin tone and occupational status is linear: immigrants with darker skin tone have lower occupational status than those with lighter skin tone. Thus, immigrants with the darkest skin tone are expected to experience the most downward mobility and to have a slower assimilation process after immigration.18
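The procedure behind a figure of this kind can be sketched as follows: fit an OLS regression separately at each time point, generate fitted ISEI values for each respondent, and average them by skin tone. The data and column names below are simulated placeholders, and the regression includes only skin tone rather than the full covariate set used in the paper.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 300  # hypothetical number of respondents

# Hypothetical person-time data with a U-shaped ISEI trajectory.
long = pd.DataFrame({
    "person_id": np.repeat(np.arange(n), 3),
    "time": ["T1", "T2", "T3"] * n,
    "skin_tone": np.repeat(rng.integers(0, 11, n), 3),
})
long["isei"] = (
    45 - 0.9 * long["skin_tone"]
    + long["time"].map({"T1": 0.0, "T2": -8.0, "T3": -5.5})
    - 0.4 * long["skin_tone"] * (long["time"] != "T1")
    + rng.normal(0, 5, 3 * n)
)

# Fit a separate OLS at each time point, then average fitted ISEI by skin tone.
predicted = []
for t, sub in long.groupby("time"):
    fit = smf.ols("isei ~ skin_tone", data=sub).fit()  # full model would add covariates
    predicted.append(sub.assign(isei_hat=fit.fittedvalues))
means = pd.concat(predicted).groupby(["time", "skin_tone"])["isei_hat"].mean()
print(means.unstack("skin_tone").round(1))
```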
## Discussion and Conclusion
Because of the lack of available data, previous empirical research using large-scale survey data has examined mainly the dark skin tone penalty for immigrants cross-sectionally in the United States, failing to examine the influence of skin tone during the immigration process and the post-immigration assimilation process. In this study, using the NIS 2003 data, which measured both immigrants’ occupational history including pre-immigration jobs and their skin tone, I examine the effects of skin tone and race on immigrants’ occupational trajectories, including the transition from their home country to the U.S. labor market.
Consistent with Hypothesis 1, I find that immigrants whose skin tone is darker are more penalized in the process of migration to the United States, experiencing steeper downward occupational mobility than those whose skin tone is lighter. Although some scholars find a U-shaped pattern in immigrants’ occupational mobility trajectories (Akresh 2008; Chiswick 1978; Chiswick and Miller 2009), they focus mainly on human capital aspects without incorporating discrimination based on phenotypic attributes, such as skin tone. The current study suggests that skin tone and race influence the skill-transferability processes of immigrants. The steeper downward mobility of darker-skinned immigrants may imply that immigrants begin to face discrimination based on their skin tone upon arriving in the United States, whereas they had not experienced skin tone–based discrimination, or had experienced it to a lesser degree, in their home country. Hispanics and Asians are likely to experience upward mobility after immigration, whereas Blacks continue to remain at a lower occupational status than White immigrants. These findings support previous assimilation and segmented assimilation studies suggesting that phenotypic attributes, such as skin tone and race, set the context of reception for immigrants in the United States and thereby compel immigrants to redefine the meaning of their phenotypic attributes in a new cultural stratification system (Alba and Nee 1997; Gans 1992; Portes and Rumbaut 2006; Portes and Zhou 1993; Zhou 1997).
Furthermore, my results are consistent with Hypothesis 2, which predicts that immigrants with darker skin tone will experience less rapid upward trajectories over post-immigration time: the dark skin tone penalty in the U.S. labor market does not diminish over time among immigrants even as they develop skills and accumulate work experience in the United States, resulting in a lopsided U-shaped pattern. This finding challenges assimilation theory’s prediction that phenotypic attributes are not impassable obstacles for immigrants in the long run even if they slow the pace of assimilation (Alba and Nee 1997, 2003). Instead, this finding is consistent with Hersch’s (2011) study (and with segmented assimilation theory), which found that the dark skin tone penalty persists over time among immigrant spouses of the respondents in the NIS 2003 data, whose duration of residence in the United States is more heterogeneous than that of the primary respondents. This conclusion may be premature because of the short duration of observation in the NIS 2003 sample, and a longer period of observation may answer the question more clearly. However, because dark skin penalties extend even to intergenerational mobility (Campos-Vazquez and Medina-Cortina 2019; Chetty et al. 2018), we may expect the initial penalties for dark-skinned immigrants at arrival in the United States to continue for a longer period.
Assimilation theory predicts a declining impact of skin tone in that even immigrants with dark skin tone, such as South Asians, overcome the obstacles they encounter. However, the opposite may also be true because there is no reason to expect employers’ skin tone preference to change with immigrants’ length of time in the United States, especially if such a preference is based on their biases (Kreisman and Rangel 2015). The optimistic prediction of assimilation theory stems from an emphasis on the behavior of immigrants rather than that of employers.
Although immigrants’ cultural or behavioral dimensions are not incorporated in the current study, considering that immigrants are likely to be more able and motivated workers (Chiswick 1978), the dark skin tone penalty observed in this study may be underestimated. Immigrants may try to overcompensate for their minority status but inevitably face some degree of dark skin penalty from employers. If immigrants have levels of human capital and motivation comparable to those of the American population generally, they may have experienced harsher dark skin penalties in the U.S. labor market than observed in this study. Thus, it will be worth examining further how employers’ conscious and unconscious biases toward immigrants’ skin tone operate over the employment period.
The findings from this study will expand the discussion of the role of skin tone in the racial identification process in the future. The dark skin tone penalty findings imply that self-identified race alone may not be a precise proxy for immigrant racial group membership. In an additional analysis for each subsample of ethnoracial groups in Table 3, I find a dark skin tone penalty for Whites and Hispanics but not for Asians, and I even find a positive effect of dark skin tone among Blacks. These results may be due to the possible discrepancy between how immigrants’ race is perceived and categorized in the United States depending on their skin tone and how immigrants identify their own racial category. Frank et al. (2010), using the same data but limiting their sample to Latinxs, found that Latinxs tend to identify themselves as White rather than non-White or “other.” Darity et al. (2005) also pointed out that Latinxs, even those with very dark skin tone, disproportionately prefer to identify themselves as White. As a consequence, although most Latinxs identify as White, dark-skinned Latinx immigrants encounter a wage penalty in the labor market (Frank et al. 2010; Rosenblum et al. 2016).
Thus, using racial self-identity only as a proxy for how others may treat individuals based on their race is problematic in survey data. Using skin tone data (as identified by the interviewer) in conjunction with self-identified race may be a way to better calibrate how racialized outcomes are measured (Bailey et al. 2016). Roth (2010) conceptualizes multiple dimensions of racial identity, emphasizing that how others perceive one’s race—rather than one’s self-perception—is central to discrimination. In this process, the subcategory of skin tone plays a more critical role in understanding and constructing interactions than one’s racial identity given that discrimination varies according to the extent to which “individuals are perceived to fit a particular category” (Monk 2015:406). Similarly, Kreisman and Rangel (2015) suspect that the perceived differences by skin tone in interactions, rather than the categorical classification of race, create the earnings gap among African Americans. This is not limited to immigrants with dark skin tone. Maghbouleh (2017), for example, documents how even groups categorically defined as a White racial group—specifically, Iranian Americans—face discrimination in daily life interactions. On the other hand, observers can “whiten” immigrants’ race relative to their self-identified race (Saperstein 2006). Thus, future studies should examine how dark skin phenotype interacts with other dimensions of race in differing social contexts to create different meanings of race in American society.19
## Acknowledgments
The author thanks Jennifer Lundquist, Donald Tomaskovic-Devey, and David Cort for their support and generous comments on this article. Thanks also to the anonymous reviewers and the editors for thoughtful comments and suggestions.
## Notes
1. On the other hand, some scholars point out that the United States and Latin America are not much different in terms of the construction and understanding of race because complexion, rather than lineage, is central to racial identification processes in both regions (Goldsmith et al. 2006).
2. The National Survey of Black Americans 1979–1980, the 1979 Chicano National Survey, the 1990 Latino National Political Survey, and the National Survey of American Life 2001–2003 are the major national-level surveys that measure respondents’ skin color and race. The Multi-City Study of Urban Inequality 1992 and the Detroit Area Study 1995 data are the commonly used regional studies. Data on skin tone are also collected in health-focused surveys, such as the National Heart, Lung, and Blood Institute’s Coronary Artery Risk Development in Young Adults.
3. The National Longitudinal Survey of Youth 1997 and the Add Health survey are among the few longitudinal data sets that measure respondents’ skin tone. However, they contain limited immigration-related information.
4. The NIS second wave data were publicly released but are not included in this analysis. Because of high attrition rates of the sample in the second wave, inclusion of the second wave data in the analysis reduces the sample size to approximately one-half of the original sample. A thorough analysis of sample selection in the second wave is underway.
5. Because dropping respondents whose skin tone is not reported and those without a job at each time point complicates applying sampling weights, the analyses here are unweighted, following previous research using the same data set (e.g., Frank et al. 2010; Rosenblum et al. 2016).
6. Missing observations for the last job held before immigration were imputed using the first job held before immigration.
7. One limitation of this analytic frame is that time spans between T1, T2, and T3 are inconsistent across the sample. However, controlling for age, U.S. labor market experience, and whether respondents achieved LPR while residing in the United States mitigates this problem.
8. Despite the strength of the ISEI, using an occupational index as a proxy measure of labor market status in the United States may also have limitations: both within-occupation and between-occupation wage inequality constitute a considerable portion of total wage variance (Avent-Holt and Tomaskovic-Devey 2014; Kim and Sakamoto 2008), and within-occupation bias in job sorting can result.
9. Although some may suggest a subgroup analysis by sending regions, such an analysis is beyond the focus of the current study on reception in the destination. Rosenblum et al. (2016) have discussed this issue, although cross-sectionally.
10. For the sample construction, see Table A2, online appendix.
11. As a robustness check of the GLS random-effects model, I also analyze the effects of skin tone at each time point separately using OLS regression and find that it yields results similar to the current analysis.
12. Here I assume that skin tone is time-constant. However, it should also be acknowledged that skin tone may change. For example, construction workers who tend to work outdoors may have darker skin tone than their original tone (Hersch 2008), and some people intentionally bleach their skin (Glen 2009).
13. A trade-off exists between random-effects models and fixed-effects models: random-effects models risk omitted variable bias by assuming that unobserved attributes are independent of observed variables, but they have higher efficiency than fixed-effects models. In addition, fixed effects include only estimates for measures that vary over time, excluding time-invariant cases and variables (Allison 2009). Although the result of the Hausman test indicates that the coefficients in the two models are different at p < .05, results from a fixed-effects specification show patterns that are quite similar to the current random-effects model. The fixed-effects results are available upon request.
14. Treating all nonworking states as unemployment can cause bias. Of the full sample of 8,573 observations (including those without a skin tone measure), 58.3% are employed, and 16.4% are unemployed, constituting 39.3% of total nonworking individuals at T3. The majority (60.7%) of the nonworking sample are retirees, homemakers, disabled, other, on leave, or temporarily laid off.
15. The results from the balanced panel (sample members with a job at all three time points only) are broadly the same as those from the current sample in the magnitude of the coefficients. The only difference is that skin tone × T3 interaction effects become marginally statistically significant at p < .10, which is likely due to the reduced sample size.
16. R² in models that fit race is larger than in models fitting skin tone. R² is .18 in Model 6, where race/ethnicity and its interaction with time are included, compared with .06 in Model 2, where skin tone and its interaction with time are included. This suggests that the explanatory power of race and ethnicity is larger than that of skin tone. However, the difference in R² may be due to interval versus categorical variable differences in explanatory power. R² is not different between Model 3, where skin tone is fitted with additional control for covariates (.43), and Model 7, where race/ethnicity is fitted instead (.43). An additional control for race/ethnicity in Model 4 does not change R² from Model 3, where only skin tone and its time interaction are fitted. These results suggest that self-identified race and skin tone are two dimensions of race, as discussed earlier.
17. Similarly, the model can be stratified by gender given that skin tone influences may differ for men and women. However, gendered immigration assimilation processes are complex because skin tone effects are compounded with other factors, such as visa type (e.g., a spouse-of-U.S.-citizen visa would be granted more often to women), and thus deserve a separate study.
18. Skin tone has a curvilinear effect on occupational mobility: the negative skin tone × Time2 and skin tone × Time3 interaction effects are stronger among darker-skinned immigrants.
19. One limitation of this study is that undocumented immigrants are not included in the analyses. The majority of undocumented immigrants are from Mexico and Latin America, having emerged as a racialized class in the United States (Massey and Pren 2012). I speculate that including them in the analyses would not change the results significantly. They are likely to have held lower occupational status even before immigration because of their relatively low human capital, and the dark skin tone penalty in the United States relative to pre-immigration is less salient. Nevertheless, the skin tone effect for undocumented immigrants’ assimilation process is worth further research.
## References
Akresh, I. R. (2008). Occupational trajectories of legal US immigrants: Downgrading and recovery. Population and Development Review, 34, 434–456.
Alba, R., & Nee, V. (1997). Rethinking assimilation theory for a new era of immigration. International Migration Review, 31, 826–874.
Alba, R., & Nee, V. (2003). Remaking the American mainstream: Assimilation and contemporary immigration. Cambridge, MA: Harvard University Press.
Allison, P. D. (2009). Fixed effects regression models. Los Angeles, CA: Sage Publishing.
Avent-Holt, D., & Tomaskovic-Devey, D. (2014). A relational theory of earnings inequality. American Behavioral Scientist, 58, 379–399.
Bailey, S. R., Fialho, F. M., & Penner, A. M. (2016). Interrogating race: Color, racial categories, and class across the Americas. American Behavioral Scientist, 60, 538–555.
Bond, S., & Cash, T. F. (1992). Black beauty: Skin color and body images among African American college women. Journal of Applied Social Psychology, 22, 874–888.
Bonilla-Silva, E. (1997). Rethinking racism: Toward a structural interpretation. American Sociological Review, 62, 465–480.
Bonilla-Silva, E., & Dietrich, D. R. (2009). The Latin Americanization of U.S. race relations: A new pigmentocracy. In E. N. Glenn (Ed.), Shades of difference: Why skin color matters (pp. 40–60). Stanford, CA: Stanford University Press.
Campos-Vazquez, R. M., & Medina-Cortina, E. M. (2019). Skin color and social mobility: Evidence from Mexico. Demography, 56, 321–343.
Chetty, R., Hendren, N., Jones, M. R., & Porter, S. R. (2018). Race and economic opportunity in the United States: An intergenerational perspective (NBER Working Paper No. 24441). Washington, DC: National Bureau of Economic Research.
Chiswick, B. R. (1978). The effect of Americanization on the earnings of foreign-born men. Journal of Political Economy, 86, 897–922.
Chiswick, B. R., & Miller, P. W. (2009). The international transferability of immigrants’ human capital. Economics of Education Review, 28, 162–169.
Darity, W. A., Jr., Dietrich, J., & Hamilton, D. (2005). Bleach in the rainbow: Latin ethnicity and preference for Whiteness. Transforming Anthropology, 13, 103–109.
Espino, R., & Franz, M. M. (2002). Latino phenotypic discrimination revisited: The impact of skin color on occupational status. Social Science Quarterly, 83, 612–623.
Fergus, E. (2009). Understanding Latino students’ schooling experiences: The relevance of skin color among Mexican and Puerto Rican high school students. Teachers College Record, 111, 339–375.
Foner, N. (2000). Beyond the melting pot three decades later: Recent immigrants and New York’s new ethnic mixture. International Migration Review, 34, 255–262.
Frank, R., Akresh, I. R., & Lu, B. (2010). Latino immigrants and the U.S. racial order: How and where do they fit in? American Sociological Review, 75, 378–401.
Gans, H. G. (1992). Second-generation decline: Scenarios for the economic and ethnic futures of the post-1965 American immigrants. Ethnic and Racial Studies, 15, 173–192.
Gans, H. G. (2012). “Whitening” and the changing American racial hierarchy. Du Bois Review, 9, 267–279.
Ganzeboom, H. B. G. (2010a). International standard classification of occupations.
Ganzeboom, H. B. G. (2010b, May). A new International Socio-Economic Index [ISEI] of occupational status for the International Standard Classification of Occupation 2008 [ISCO-08] constructed with data from the ISSP 2002–2007. Paper presented at the annual conference of the International Social Survey Programme, Lisbon, Portugal.
Ganzeboom, H. B. G., De Graaf, P. M., & Treiman, D. J. (1992). A standard international socio-economic index of occupational status. Social Science Research, 21, 1–56.
Glen, E. N. (2009). Shades of difference: Why skin color matters. Stanford, CA: Stanford University Press.
Golash-Boza, T. (2006). Dropping the hyphen? Becoming Latino(a)-American through racialized assimilation. Social Forces, 85, 27–55.
Goldsmith, A. H., Hamilton, D., & Darity, W., Jr. (2006). Shades of discrimination: Skin-tone and wages. American Economic Review: Papers & Proceedings, 96, 242–245.
Hagiwara, N., Kashy, D. A., & Cesario, J. (2012). The independent effects of skin tone and facial features on Whites’ affective reactions to Blacks. Journal of Experimental Social Psychology, 48, 892–898.
Hamilton, D., Goldsmith, A. H., & Darity, W., Jr. (2009). Shedding “light” on marriage: The influence of skin shade on marriage for Black females. Journal of Economic Behavior & Organization, 72, 30–50.
Hersch, J. (2008). Profiling the new immigrant worker: The effects of skin color and height. Journal of Labor Economics, 26, 345–386.
Hersch, J. (2011). The persistence of skin color discrimination for immigrants. Social Science Research, 40, 1337–1349.
Hochschild, J. L. (2005). Looking ahead: Racial trends in the United States. Daedalus, 134(1), 70–81.
Hout, M., & DiPrete, T. A. (2006). What we have learned: RC 28’s contributions to knowledge about social stratification. Research in Social Stratification and Mobility, 24, 1–20.
Hughes, M., & Hertel, B. R. (1990). The significance of color remains: A study of life chances, mate selection, and ethnic consciousness among Black Americans. Social Forces, 68, 1105–1120.
Hunter, M. (2007). The persistent problem of colorism: Skin tone, status, and inequality. Sociology Compass, 1, 237–254.
Hunter, M. L. (1998). Colorstruck: Skin color stratification in the lives of African American women. Sociological Inquiry, 68, 517–535.
Jasso, G., Massey, D. S., Rosenzweig, M. R., & Smith, J. P. (2000). The New Immigrant Survey Pilot (NIS-P): Overview and new findings about U.S. legal immigrants at admission. Demography, 37, 127–138.
Keith, V. M., & Herring, C. (1991). Skin tone and stratification in the Black community. American Journal of Sociology, 97, 760–778.
Kim, C., & Sakamoto, A. (2008). The rise of intra-occupational wage inequality in the United States, 1983–2002. American Sociological Review, 73, 129–157.
Kreisman, D., & Rangel, M. A. (2015). On the blurring of the color line: Wages and employment for Black males of different skin tones. Review of Economics and Statistics, 97, 1–13.
Lee, J., & Bean, F. D. (2007). Reinventing the color line: Immigration and America’s new racial/ethnic divide. Social Forces, 86, 561–586.
Le Grand, C., & Tåhlin, M. (2013). Class, occupation, wages, and skills: The iron law of labor market inequality. In G. E. Birkelund (Ed.), Class and stratification analysis (Comparative Social Research Vol. 30, pp. 3–46). Bingley, UK: Emerald Group Publishing.
Maghbouleh, N. (2017). The limits of Whiteness: Iranian Americans and the everyday politics of race. Stanford, CA: Stanford University Press.
Mason, P. L. (2004). Annual income, hourly wages, and identity among Mexican-Americans and other Latinos. Industrial Relations, 43, 817–834.
Massey, D. S., & Martin, J. A. (2003). The NIS Skin Color Scale. Princeton, NJ: Office of Population Research, Princeton University.
Massey, D. S., & Pren, K. A. (2012). Origins of the new Latino underclass. Race and Social Problems, 4, 5–17.
Monk, E. P., Jr. (2014). Skin tone stratification among Black Americans, 2001–2003. Social Forces, 92, 1313–1337.
Monk, E. P., Jr. (2015). The cost of color: Skin color, discrimination, and health among African-Americans. American Journal of Sociology, 121, 396–444.
Morales, M. C. (2008). The ethnic niche as an economic pathway for the dark skinned labor market incorporation of Latina/o workers. Hispanic Journal of Behavioral Sciences, 20, 280–298.
Portes, A., & Rumbaut, R. G. (2006). Immigrant America: A portrait (3rd ed.). Berkeley: University of California Press.
Portes, A., & Zhou, M. (1993). The new second generation: Segmented assimilation and its variants. Annals of the American Academy of Political and Social Science, 530, 74–96.
Rosenblum, A., Darity, D., Jr., Harris, A. L., & Hamilton, T. G. (2016). Looking through the shades: The effect of skin color on earnings by region of birth and race for immigrants to the United States. Sociology of Race and Ethnicity, 2, 87–105.
Roth, W. D. (2010). Racial mismatch: The divergence between form and function in data for monitoring racial discrimination of Hispanics. Social Science Quarterly, 91, 1288–1311.
Saperstein, A. (2006). Double-checking the race box: Examining inconsistency between survey measures of observed and self-reported race. Social Forces, 85, 57–74.
Saperstein, A., Penner, A. M., & Light, R. (2013). Racial formation in perspective: Connecting individuals, institutions, and power relations. Annual Review of Sociology, 39, 359–378.
South, S. J., Crowder, K., & Chavez, E. (2005). Migration and spatial assimilation among U.S. Latinos: Classical versus segmented trajectories. Demography, 42, 497–521.
Telles, E. E. (1992). Residential segregation by skin color in Brazil. American Sociological Review, 57, 186–197.
Telles, E. E. (2014). Pigmentocracies: Ethnicity, race, and color in Latin America. Chapel Hill: University of North Carolina Press.
Treiman, D. J. (1977). Occupational prestige in comparative perspective. New York, NY: Academic Press.
Villarreal, A. (2010). Stratification by skin color in contemporary Mexico.
.
American Sociological Review
,
75
,
652
678
.
Waters, M. C. (
1999
).
Black identities: West Indian immigrant dreams and American realities
.
New York, NY
:
Russell Sage Foundation
.
Wimmer, A. (
2008
).
The making and unmaking of ethnic boundaries: A multilevel process theory
.
American Journal of Sociology
,
113
,
970
1022
.
Zhou, M. (
1997
).
Growing up American: The challenge confronting immigrant children and children of immigrants
.
Annual Review of Sociology
,
23
,
63
95
.
## Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. | 2022-07-01 11:27:52 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 1, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2777133285999298, "perplexity": 6714.704755242355}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103940327.51/warc/CC-MAIN-20220701095156-20220701125156-00525.warc.gz"} |
https://www.gamedev.net/forums/topic/390432-java-ide-and-running-applets-outside-browser/ | # Java IDE, and running applets outside browser?
## Recommended Posts
I have a weird problem with applets: I can't seem to be able to run the applet OUTSIDE a browser. When I use the Sun AppletViewer, it just starts up and exits immediately, without warning. This is my (random test) code:
import javax.swing.JApplet;
import java.awt.*;
import java.awt.Graphics;
public class DragonWars extends JApplet
{
public void init()
{
this.setSize(new Dimension(640, 480));
}
public void paint(Graphics g)
{
Color c = Color.magenta;
g.setColor(c);
g.fillRect(0, 0, getSize().width - 1, getSize().height - 1);
}
}
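A minimal way to run the applet above outside the browser is to host it in a JFrame from a main method. A sketch (the launcher class name DragonWarsRunner is invented here, and it assumes the DragonWars class above is on the classpath):
import javax.swing.JApplet;
import javax.swing.JFrame;
import javax.swing.SwingUtilities;
public class DragonWarsRunner
{
    public static void main(String[] args)
    {
        SwingUtilities.invokeLater(new Runnable() {
            public void run()
            {
                // Host the applet inside an ordinary top-level window
                JApplet applet = new DragonWars();
                JFrame frame = new JFrame("DragonWars");
                frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
                frame.getContentPane().add(applet);
                frame.setSize(640, 480);
                // Drive the applet lifecycle by hand, since there is no browser to do it
                applet.init();
                applet.start();
                frame.setVisible(true);
            }
        });
    }
}
With that, compiling both classes and running "java DragonWarsRunner" opens a 640x480 window and paints the magenta rectangle. AppletViewer, for its part, expects to be pointed at an HTML file containing an applet tag rather than at the class itself, which may be why it exits straight away here.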
Also, at the moment I'm using Crimson Editor for editing the files, but I'm missing auto completion à la Visual Studio, and I sure as hell DON'T wanna use Eclipse, since I hate it. What other (lightweight) IDEs are out there that offer auto completion AND can handle a debugger? Toolmaker
##### Share on other sites
"I hate it" is not a good reason - NetBeans and IntelliJ IDEA are the other two top IDEs for Java, however I find them similar to Eclipse (that's why you should have told us WHY you don't like Eclipse).
##### Share on other sites
I dislike the entire behaviour of Eclipse: It's constantly auto-rebuilding, and there is no simple way to recompile it (last time I used it, it was a pain).
Project files are missing, and you have to work through some obscure way to load/unload projects, deleting a file in your projectview actually deletes it on disk, without notice.
I'm used to Visual Studio, and I prefer an IDE that somewhat behaves like that. I don't mind the auto-rebuilding, I just want a nice "BUILD" button where I can see it.
Toolmaker
##### Share on other sites
Quote:
Original post by Toolmaker: I dislike the entire behaviour of Eclipse: It's constantly auto-rebuilding
to disable autobuild: uncheck "Project / Build Automatically"
to build: "Project / Build Project"
##### Share on other sites
I like using BlueJ for my Java stuff. It's free, has an "acceptable" debugger and also has a handy visual display of the classes and their "connections" to each other (i.e. extends, implements, interface, etc.) via boxes and connection lines, but alas there is no auto-complete feature. Still, I think it's worth checking out.
##### Share on other sites
Quote:
Original post by Toolmaker: I dislike the entire behaviour of Eclipse: It's constantly auto-rebuilding,
You can turn this off for an entire workspace in the preference dialog, or on a per project basis in the project properties dialog or the Project menu.
Quote:
there is no simple way to recompile it(Last time I used it, it was a pain).
Quote:
Project files are missing
Quote:
and you have to work through some obscure way to load/unload projects
What's obscure? You can open/close projects in the Workspace via the Project menu, close them by right-clicking project name in the package explorer. You can also create Eclipse projects from existing source files via File->New.
Quote:
deleting a file in your projectview actually deletes it on disk, without notice.
It asks you if you are “sure you want to delete” the file. The behavior is that it always deletes it from disk, but it only does so when you explicitly delete the file. How is that a problem? If you want to use the file outside of Eclipse but also remove it from the project, just File->Save As... first before deleting.
Quote:
I'm used to Visual Studio, and I prefer an IDE to somewhat behaves like that. I don't mind the auto-rebuilding, I jsut want a nice "BUILD" button where I can see it.
I'm not trying to convince you to use Eclipse. I'm just making a point. Most of the full featured Java IDEs like Eclipse, NetBeans, and IntelliJ IDEA are highly configurable and can be customized to your liking. But all of the features and configuration options come at a price - you have to learn how to use them. It took me a while to learn many of the nooks and crannies of Eclipse and there's still stuff I don't know, such as keyboard shortcuts (I never use them).
So whichever IDE you settle on, spend some time to get to know it. Check out all of the menu items. Read the docs. Browse the support forums. Ultimately you'll be glad you did because it will help you to become a more productive programmer.
##### Share on other sites
I like JCreator, it's fast and lightweight.
##### Share on other sites
Quote:
Original post by scgrn: I like JCreator, it's fast and lightweight.
I just acquired the LE edition of JCreator, and I must say, that's exactly what I wanted.
1 minor question: Does it contain a GUI builder? I need to whip up a bunch of GUIs, and having a GUI builder for that is far better than doing it from code.
If not, what other solutions are there out there? Either as plugin or standalone app... (Using AWT)
Toolmaker
[Edited by - Toolmaker on May 1, 2006 5:17:44 AM] | 2018-02-22 01:44:33 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.30107271671295166, "perplexity": 3391.2204566903392}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891813832.23/warc/CC-MAIN-20180222002257-20180222022257-00170.warc.gz"} |
https://en.wiktionary.org/wiki/linear_function | # linear function
## English
### Noun
linear function (plural linear functions)
1. (mathematics) Any function whose graph is a straight line: $f(x)=ax+b$
2. (mathematics) Any function whose value on the sum of two elements is the sum of the values of the function on the two elements and whose value on the product of a scalar times an element is the scalar times the value of the function on the element: $f(ax+by)=af(x)+bf(y)$
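A quick check of how the two senses differ (an added example, not part of the entry): $f(x)=2x+3$ satisfies sense 1, since its graph is a straight line, but not sense 2, because $f(1+1)=7$ while $f(1)+f(1)=10$; by contrast $f(x)=2x$ satisfies both, since $f(ax+by)=2(ax+by)=af(x)+bf(y)$.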
#### Translations
The translations below need to be checked and inserted above into the appropriate translation tables, removing any numbers. Numbers do not necessarily match those in definitions. See instructions at Wiktionary:Entry layout § Translations. | 2022-06-30 20:11:50 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 4, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6487309336662292, "perplexity": 830.9181685157566}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103877410.46/warc/CC-MAIN-20220630183616-20220630213616-00348.warc.gz"} |
http://openstudy.com/updates/51586ab0e4b0507ceba20c38 | 1. shevron Group Title
let $\left( X,d \right)$ be a metric space and $C_{b}\left( X,R \right)$ denote the set of all continuous bounded real-valued functions defined on X, equipped with the uniform metric $d\left( f,g \right)=\sup\left\{ \left| f\left( x \right)-g\left( x \right) \right| : x\in X \right\}$. Show that $C_{b}\left( X,R \right)$ is a complete metric space
2. shevron Group Title
3. timo86m Group Title
sorry idk this one :(
4. shevron Group Title
@timo86m
5. shevron Group Title
6. shevron Group Title
@Chlorophyll
7. shevron Group Title
@charliem07
8. charliem07 Group Title
sorry i dont know
9. shevron Group Title
ok cool
10. shevron Group Title
11. shevron Group Title
12. shevron Group Title
13. UnkleRhaukus Group Title
@JamesJ, @experimentX, @eliassaab, @nbouscal, @beketso
14. shevron Group Title
15. TuringTest Group Title
@KingGeorge
16. TuringTest Group Title
btw for advanced questions you may have better luck here http://math.stackexchange.com/
17. shevron Group Title
k thanx
18. satellite73 Group Title
your job in showing it is "complete" is to show that if $$f_n\to f$$ then $$f\in C_b$$
19. satellite73 Group Title
that is, if a sequence of continuous functions converges to some function using the sup metric, then the limit function is continuous also
20. satellite73 Group Title
this should work because the metric is the supremum over all $$x$$
21. satellite73 Group Title
the general idea is that under the sup metric, the convergence is uniform, and the uniform limit of a sequence of continuous functions is uniform gotta run, but if you google what i wrote i bet you will find a worked out solution
22. shevron Group Title
do i have to let the sequence to be a cauchy sequence first?
23. satellite73 Group Title
actually what i meant is the uniform limit of a sequence of continuous functions is CONTINUOUS
24. satellite73 Group Title
yes
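A sketch of the argument satellite73 is outlining (added here, filling in the standard steps): let $$(f_n)$$ be a Cauchy sequence in $$C_b(X,R)$$ under the uniform metric. For each fixed $$x$$, $$|f_n(x)-f_m(x)|\le d(f_n,f_m)$$, so $$(f_n(x))$$ is Cauchy in $$R$$ and converges; call the limit $$f(x)$$. Given $$\varepsilon>0$$, pick $$N$$ with $$d(f_n,f_m)<\varepsilon$$ for all $$n,m\ge N$$; letting $$m\to\infty$$ gives $$|f_n(x)-f(x)|\le\varepsilon$$ for every $$x$$, so $$f_n\to f$$ uniformly. Uniform convergence keeps $$f$$ bounded ($$\sup|f|\le\sup|f_N|+\varepsilon$$) and continuous (the usual $$\varepsilon/3$$ argument), so $$f\in C_b(X,R)$$ and $$d(f_n,f)\to 0$$. Hence every Cauchy sequence converges in $$C_b(X,R)$$, which is what completeness means.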
25. shevron Group Title
my problem we are given f and g and they are different how am i going to proof them simultaneously
26. shevron Group Title
@Mertsj
27. shevron Group Title
@ash2326
28. shevron Group Title
@walters
29. shevron Group Title
30. phi Group Title
I assume you mean "metric space" ? But I tend more to applied math problems. i.e. not this kind of question.
31. shevron Group Title
ok cool
32. shevron Group Title
can u fynd me someone who can do it
33. shevron Group Title
34. shevron Group Title
35. mathslover Group Title
Sorry, am not good at this topic.
36. shevron Group Title
ok can you search for me where i can find something related to ths?
37. mathslover Group Title
Yes! I am best at that field :)
38. shevron Group Title
39. mathslover Group Title
40. shevron Group Title
eish they blocked youtube here at school
41. mathslover Group Title
42. mathslover Group Title
43. shevron Group Title
ok thanx
44. mathslover Group Title
Have a look at the links and let me know whether they helped or not.
45. shevron Group Title
ok i will
46. shevron Group Title
@mathslover they are not helping
47. mathslover Group Title
48. shevron Group Title | 2014-09-15 04:06:31 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8630610704421997, "perplexity": 14146.615276388766}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-41/segments/1410657104119.19/warc/CC-MAIN-20140914011144-00022-ip-10-196-40-205.us-west-1.compute.internal.warc.gz"} |
http://mathhelpforum.com/geometry/51093-rectangular-deck.html | # Thread: Rectangular Deck
1. ## Rectangular Deck
A homeowner wants to increase the size of a rectangular deck that now measures 15 feet by 20 feet, but building code laws state that a homeowner cannot have a deck larger than 900 square feet. If the length and the width are to be increased by the same amount, find, to the nearest tenth, the maximum number of feet that the length of the deck may be increased in size legally.
My Work:
I let x = different values but the same for the width and length.
If x = 12, then width = 12 + 15 = 27 ft.
If x = 12, then length = 12 + 20 = 32 ft.
The 32 ft times 27 ft = 864 square feet, which is close to 900 square feet but not over 900 square feet.
However, the answer is 12.6 not 12.
How do I get 12.6?
2. Originally Posted by magentarita
A homeowner wants to increase the size of a rectangular deck that now measures 15 feet by 20 feet, but building code laws state that a homeowner cannot have a deck larger than 900 square feet. If the length and the width are to be increased by the same amount, find, to the nearest tenth, the maximum number of feet that the length of the deck may be increased in size legally.
My Work:
I let x = different values but the same for the width and length.
If x = 12, then width = 12 + 15 = 27 ft.
If x = 12, then length = 12 + 20 = 32 ft.
The 32 ft times 27 ft = 864 ft, which is close to 900 square feet but not over 900 square feet.
However, the answer is 12.6 not 12.
How do I get 12.6?
let x be the the number of feet we increase the length and width by, so the new length and width are (20 + x) and (15 + x) respectively
we want (20 + x)(15 + x) = 900
solve for x
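Spelling out the algebra from that equation (an added step, not in the original post): $(20+x)(15+x)=900 \Rightarrow x^2+35x+300=900 \Rightarrow x^2+35x-600=0$, so by the quadratic formula $x=\frac{-35+\sqrt{35^2+4\cdot 600}}{2}=\frac{-35+\sqrt{3625}}{2}\approx 12.6$, which is where the answer 12.6 feet comes from.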
3. I have a doubt here. The question says "by the same amount". This phrase can be interpreted in two different ways:
(1) By same amount : same increment. Example, both increase by 10 ft
(2) By same amount : by same percentage. (Then the answer differs)
If I consider interpretation (2),
Final_length = Initial_Length * %total_percentage
Final_width = Initial_width * %total_percentage
The %total_percentage = $\frac{100 + \text{percentage increase}}{100}$
Let %total_percentage = x
Final_width * Final_length $\leq$ 900
20x * 15x $\leq$ 900
x^2 $\leq$ 3
x $\leq$ $\surd{3}$
x $\leq$ 1.73
Therefore increase in length = (1.73*20 - 20) ft = 14.6 ft, which is also a correct answer based on my reasoning.
You cannot work according to the answer, because in a test, you cannot anticipate the correct answer. You have to work for it.
4. Originally Posted by shailen.sobhee
I have a doubt here. The question says "by the same amount". This phrase can be interpreted in two different ways:
(1) By same amount : same increment. Example, both increase by 10 ft
(2) By same amount : by same percentage. (The the answer differs)
If I consider interpretation (2),
Final_length = Initial_Length * %total_percentage
Final_width = Initial_width * %total_percentage
The %total_percentage = $\frac{100 + percentage_ increase}{100}$
Let %total_percentage = x
Final_width * Final_length $\leq$ 900
20x * 15x $\leq$ 900
x^2 $\leq$ 3
x $\leq$ $\surd{3}$
x $\leq$ 1.73
Therefore increase in length = (1.73*20 -25)ft = 14.6, which is also a correct answer based on my reasoning.
You cannot work according to the answer, because in a test, you cannot anticipate the correct answer. You have to work for it.
i think they would say "percentage" or something like that if that's what they were after. it would be strange to interpret it otherwise really, that's the language they use here
5. ## ok
Originally Posted by Jhevon
let x be the the number of feet we increase the length and width by, so the new length and width are (20 + x) and (15 + x) respectively
we want (20 + x)(15 + x) = 900
solve for x
I thank you.
6. ## ok
Originally Posted by shailen.sobhee
I have a doubt here. The question says "by the same amount". This phrase can be interpreted in two different ways:
(1) By same amount : same increment. Example, both increase by 10 ft
(2) By same amount : by same percentage. (The the answer differs)
If I consider interpretation (2),
Final_length = Initial_Length * %total_percentage
Final_width = Initial_width * %total_percentage
The %total_percentage = $\frac{100 + percentage_ increase}{100}$
Let %total_percentage = x
Final_width * Final_length $\leq$ 900
20x * 15x $\leq$ 900
x^2 $\leq$ 3
x $\leq$ $\surd{3}$
x $\leq$ 1.73
Therefore increase in length = (1.73*20 -25)ft = 14.6, which is also a correct answer based on my reasoning.
You cannot work according to the answer, because in a test, you cannot anticipate the correct answer. You have to work for it.
I thank you for your input.
7. ## ok
Originally Posted by Jhevon
i think they would say "percentage" or something like that if that's what they were after. it would be strange to interpret it otherwise really, that's the language they use here
I see what you mean. | 2016-08-31 04:57:16 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 21, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8069242835044861, "perplexity": 1202.6432106643808}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-36/segments/1471983577646.93/warc/CC-MAIN-20160823201937-00078-ip-10-153-172-175.ec2.internal.warc.gz"} |
https://www.semanticscholar.org/paper/The-binary-perfect-phylogeny-with-persistent-Bonizzoni-Braghin/d701f231eca47083ae134ad370770e597a6bbade | # The binary perfect phylogeny with persistent characters
@article{Bonizzoni2011TheBP,
title={The binary perfect phylogeny with persistent characters},
author={Paola Bonizzoni and Chiara Braghin and Riccardo Dondi and Gabriella Trucco},
journal={Theor. Comput. Sci.},
year={2011},
volume={454},
pages={51-63}
}
• Published 31 October 2011
• Computer Science
• Theor. Comput. Sci.
## Tables from this paper
• Computer Science
ArXiv
• 2014
This paper develops a parameterized algorithm for solving the Persistent Perfect Phylogeny problem where the parameter is the number of characters and provides a polynomial time solution for the CP-PP problem for matrices having an empty conflict-graph.
• Computer Science
• 2021
Insight into the specific structure of the IDPP problem leads to an asymptotically faster algorithm, that runs in optimal $O(nm)$ time, and is successful in giving a much simpler $\tilde{O}( nm)$-time algorithm.
• Biology
BCB
• 2017
An experimental analysis shows that the ILP approach is able to explain data that do not fit the perfect phylogeny assumption, thereby allowing multiple losses and gains of mutations, and a number of subpopulations that is smaller than the number of input mutations.
• Biology
• 2018
The question of how many binary characters together with their persistence status are needed to uniquely determine a phylogenetic tree is considered and an upper bound for the number of characters needed is provided.
An integer programming solution to the Persistent-Phylogeny Problem is developed; empirically explore its efficiency; and the utility of using fast algorithms that recognize galled trees, to recognize persistent phylogeny is explored.
• Biology
bioRxiv
• 2017
This work proposes a new approach that incorporates the possibility of losing a previously acquired mutation, extending the Persistent Phylogeny model, and exploits the model to provide an ILP formulation of the problem of reconstructing trees on mixed populations, where the input data consists of the fraction of cells in a set of samples that have a certain mutation.
• Biology, Computer Science
BCB
• 2020
A distance metric for multi-labeled trees is presented that generalizes the Robinson-Foulds distance for phylogenetic trees, allows for a similarity assessment at much higher resolution, and can be applied to trees and networks with different sets of node labels.
## References
SHOWING 1-10 OF 22 REFERENCES
• Biology
CPM
• 2000
This work provides a near optimal O(nm)-time algorithm for the problem of perfect phylogeny, which arises in classical phylogenetic studies, when some states are missing or undetermined.
• Computer Science
WABI
• 2010
A new general conceptual solution to the multistate Perfect Phylogeny problem is introduced, and conceptual solutions to the MD, CR, MDCR and ID problems for any k significantly improving previous work are introduced.
• Computer Science
IEEE/ACM Transactions on Computational Biology and Bioinformatics
• 2007
This work proves that the BNPP problem is fixed-parameter tractable and provides the first practical phylogenetic tree reconstruction algorithms that find guaranteed optimal solutions while being easily implemented and computationally feasible for data sets of biologically meaningful size and complexity.
This work is concerned here with taxa described by the states they exhibit on a set of characters, and assumes that the taxa descend from a common ancestor where all characters are absent.
• Computer Science
ISMB
• 2006
A near-optimal algorithm is presented for the H1-NPPH problem, which is to determine if a given set of genotypes admit a phylogeny with a single homoplasy event, and the accuracy of this algorithm is comparable to that of the existing methods, while being orders of magnitude faster.
• Computer Science
J. Comput. Biol.
• 2006
The OPPH algorithm is one of the first O(nm) algorithms presented for the PPH problem and the FlexTree (flexible tree) data structure provides a compact representation of all the perfect phylogenies for the given set of genotypes.
The Perfect Phylogeny Haplotype problem is solved and an O(nm)-time algorithm to complete matrices of n rows and m columns to represent PPH solutions is given: it is shown that solving the problem requires recognizing special posets of width 2.
• Mathematics
IEEE/ACM Transactions on Computational Biology and Bioinformatics
• 2011
This work shows how to use chordal graphs and triangulations to solve the character removal problem for an arbitrary number of states, which was previously unsolved, and outlines a preprocessing technique that speeds up the computation of the minimal separators of a graph.
This paper explores the algorithmic implications of the key "no-recombination in long blocks" observation, for the problem of inferring haplotypes in populations, and observes that the no-re Combination assumption is very powerful.
• Biology
Journal of bioinformatics and computational biology
• 2003
A simple and efficient polynomial-time algorithm for inferring haplotypes from the genotypes of a set of individuals assuming a perfect phylogeny is presented and a hardness result for the problem of removing the minimum number of individuals from a population is presented to ensure that the genotype of the remaining individuals are consistent with aperfect phylogeny. | 2023-01-27 23:55:18 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5801147222518921, "perplexity": 1432.0119228116075}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499468.22/warc/CC-MAIN-20230127231443-20230128021443-00668.warc.gz"} |
https://naucnamreza.me/x9d39jf1/mirror-equation-convex-183f49 | A convex mirror is a curved mirror that forms part of a sphere and is designed so that light falling on its shiny surface diverges upon reflection. Hence it is also called a diverging mirror, and it cannot be used to produce real images. The image produced by a convex mirror is a virtual image, created by extending the reflected rays backward: it forms behind the mirror, upright and diminished, growing toward the size of the object as the object approaches the mirror. A concave mirror, by contrast, has a reflecting surface that bulges inward and reflects light to a focal point; its image is real and inverted, except when the object is placed between the pole and the focus.
Sign rules for the convex mirror: the object height h is positive if the object is above the principal axis (upright) and negative if it is below (inverted), and the image height h' follows the same rule; the object distance do is positive if the light beam passes through the object; the image distance di is positive if the light beam passes through the image (a real image) and negative otherwise (a virtual image). The center of curvature of a convex mirror lies behind the surface that reflects the light, where the light does not pass, so the radius of curvature, and with it the focal length f, is negative; for a concave mirror the focus lies in front of the mirror and f is positive.
The mirror equation is 1/f = 1/do + 1/di (also written 1/O + 1/I = 2/R = 1/f); it relates the focal length and radius of curvature of a curved mirror to the object and image positions (here do or u is the object distance, di or v the image distance, h the object height, h' the image height, and F the focal point). It can be derived from similar triangles: the P'AP and Q'AQ triangles give the relationship between the object and image distances and the object and image heights (the magnification), and the BFA and Q'FQ triangles (with AB = the object height h and FA = the focal length f) give (u - f)/f = f/(v - f). Expanding, (u - f)(v - f) = f^2, i.e. uv - uf - vf + f^2 = f^2; dividing both sides by uvf gives 1/f = 1/u + 1/v. If the magnification is greater than 1 the image is larger than the object, and if it equals 1 the image is the same size as the object. Always keep the sign rules above in mind when using this equation for a convex mirror. Example problem: the radius of curvature of a convex mirror used for rear view on a car is 4.00 m; if a bus is 6 meters from this mirror, find the position of the image formed.
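A worked solution of that rear-view-mirror problem, using the sign rules above (added for illustration): $f=-\frac{R}{2}=-2.00\ \text{m}$, so $\frac{1}{d_i}=\frac{1}{f}-\frac{1}{d_o}=-\frac{1}{2}-\frac{1}{6}=-\frac{2}{3}\ \text{m}^{-1}$, giving $d_i=-1.5\ \text{m}$: the image forms 1.5 m behind the mirror, virtual and upright, with magnification $m=-\frac{d_i}{d_o}=0.25$.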
Can be smaller, equal to, and bigger than the object depending on the position of the object, Can be anywhere on principle axis depending on the position of the object, Only real image can be projected on a screen, Side view mirrors in vehicles and as security mirrors in grocery stores and supermarkets, Incident ray – The ray of light that is incident on the surface, Reflected ray – The ray of light that is reflected from the surface, Center of curvature – The center of the sphere from which the convex mirror has been constructed, Radius of curvature – The radius of the sphere from which the convex mirror has been constructed, Pole – The mid-point of the convex mirror, Principal axis – An imaginary line that connects the pole and the center of curvature, Focus – A point on the principal axis where rays of light that are parallel to the axis appear to diverge from, Focal length – The distance between the pole and the focus and is one-half of the radius of curvature, Object distance – The distance between the object and the pole, Image distance – The distance between the image and the pole, As side view mirrors in cars, buses, and trucks because the image formed is upright and small thus giving a wide field of view of the area toward the side of and behind the vehicle, As a security device in supermarkets, grocery stores, and convenient stores since the convex mirror gives a broad view of the area around corners, At corners on a road so that drivers can see the incoming vehicles and avoid a collision, As a safety device in warehouses, where workers can see incoming forklifts and vehicles, As security device in ATM since the user can see the area behind them, As street light reflectors because the reflected light can spread over a large area. | 2021-04-11 19:45:11 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6658675670623779, "perplexity": 326.3780636375598}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038064898.14/warc/CC-MAIN-20210411174053-20210411204053-00315.warc.gz"} |
https://motls.blogspot.com/2010/03/einsteins-birthday-test-eliminates-some.html?m=1 | Sunday, March 14, 2010
Einstein's birthday: a test eliminates some MOND theories
Albert Einstein was born on the Pi Day, March 14th, 1879. His relativity has been known to be right for some time. But we can always ask how accurate are the measurements that show that it's right.
On Thursday, Reinabelle Reyes, a Princeton grad student, and 6 U.S. and Swiss co-authors published a paper in Nature,
Confirmation of general relativity on large scales from weak lensing and galaxy velocities
FoxNews, Nature review, Financial Times, PhysOrg, CBS/Discover, Princeton News, CBC, Physics World, National Geographic
At the distance scale of tens or a hundred of millions of light years, they evaluate a recently proposed quantity called "E_G". It is a function of the gravitational lensing, galaxy clustering, and structure growth rate constructed in such a way that it is independent of the "galaxy bias" (a difference between clustering of visible galaxies and invisible dark matter).
For this quantity, general relativity predicts "E_G = 0.4" or so. Their empirical value is
E_G = 0.39 ± 0.06
Modified Newtonian Dynamics, alternatives to the very assumption that dark matter is responsible for the "unexpected" galactic rotation curves (which often don't add dark matter but try to modify the equations in a brutal way), generally predict substantially different values of this parameter. In particular, a TeVeS (tensor-vector-scalar) model has been ruled out.
The only MOND-like theories that are doing fine after these tests are various "f(R)" theories - that really respect some basic symmetries of GR, in a weaker sense - but even these theories may start to be killed if the uncertainty of the measured value drops by a factor of 5 or more which doesn't seem impossible.
An original GR manuscript that Einstein's wife Elsa donated to the Hebrew University in 1925 is just being shown to the public in Jerusalem.
Happy birthday, Albert.
1 comment:
1. There is no such thing as "dark matter"; it is simply a cover for a miscalculation of the absolute bending of spacetime.
https://hvlopen.brage.unit.no/hvlopen-xmlui/browse?type=journal&value=Physical+Review+Letters | Viser treff 1-11 av 11
• #### Anomalous evolution of the near-side jet peak shape in Pb-Pb collisions at √sNN=2.76 TeV
(Peer reviewed; Journal article, 2017)
• #### Azimuthal Anisotropy of Heavy-Flavor Decay Electrons in p-Pb Collisions at √sNN=5.02 TeV
(Journal article; Peer reviewed, 2019)
Angular correlations between heavy-flavor decay electrons and charged particles at midrapidity (|η| < 0.8) are measured in p-Pb collisions at √sNN = 5.02 TeV. The analysis is carried out for the 0%–20% (high) and 60%–100% ...
• #### D-meson azimuthal anisotropy in midcentral Pb-Pb collisions at √sNN=5.02 TeV
(Peer reviewed; Journal article, 2018)
• #### Evidence of spin-orbital angular momentum interactions in relativistic heavy-ion collisions
(Peer reviewed; Journal article, 2020)
The first evidence of spin alignment of vector mesons (K*0 and ϕ) in heavy-ion collisions at the Large Hadron Collider (LHC) is reported. The spin density matrix element ρ00 is measured at midrapidity (|y| < 0.5) in Pb-Pb ...
• #### First observation of an attractive interaction between a proton and a cascade baryon
(Journal article; Peer reviewed, 2019)
This Letter presents the first experimental observation of the attractive strong interaction between a proton and a multistrange baryon (hyperon) Ξ−. The result is extracted from two-particle correlations of combined p-Ξ− ...
• #### Investigations of anisotropic flow using multiparticle azimuthal correlations in pp, p−Pb, Xe-Xe, and Pb-Pb collisions at the LHC
(Peer reviewed; Journal article, 2019)
Measurements of anisotropic flow coefficients (vn) and their cross-correlations using two- and multiparticle cumulant methods are reported in collisions of pp at √s=13 TeV, p−Pb at a center-of-mass energy per nucleon pair ...
• #### J/ψ elliptic flow in Pb-Pb collisions at √sNN=5.02 TeV
(Peer reviewed; Journal article, 2017)
• #### Measurement of the low-energy antideuteron inelastic cross section
(Peer reviewed; Journal article, 2020)
• #### Measurement of Υ(1S) elliptic flow at forward rapidity in Pb-Pb collisions at √sNN=5.02 TeV
(Peer reviewed; Journal article, 2019)
The first measurement of the ϒ(1S) elliptic flow coefficient (v2) is performed at forward rapidity (2.5 <y< 4) in Pb–Pb collisions at √sNN = 5.02 TeV with the ALICE detector at the LHC. The results are obtained with the ...
• #### Probing the effects of strong electromagnetic fields with charge-dependent directed flow in Pb-Pb collisions at the LHC
(Peer reviewed; Journal article, 2020)
The first measurement at the LHC of charge-dependent directed flow (v1) relative to the spectator plane is presented for Pb-Pb collisions at √sNN = 5.02 TeV. Results are reported for charged hadrons and D0 mesons for the ...
• #### Scattering studies with low-energy kaon-proton femtoscopy in proton-proton collisions at the LHC
(Peer reviewed; Journal article, 2020)
The study of the strength and behavior of the antikaon-nucleon (K̄N) interaction constitutes one of the key focuses of the strangeness sector in low-energy quantum chromodynamics (QCD). In this Letter a unique high-precision ...
http://lists.gnu.org/archive/html/lilypond-devel/2009-01/msg00053.html | lilypond-devel
Re: feature req: volta bar numbering options
From: Anthony W. Youngman Subject: Re: feature req: volta bar numbering options Date: Fri, 2 Jan 2009 20:04:17 +0000 User-agent: Turnpike/6.05-U ()
Anthony,
Responding late, I know, but with about ONE exception,
all the music I see follows lily's current behaviour.
Which scores/publishers have you found that match the
current behavior?
I'm a band musician (brass, concert, big), and play the trombone. Pretty much EVERY part I've ever played just counts bars from the beginning of the piece.
Normal behaviour is, as I say, to ignore the existence of the voltae when counting bars.
Unusual behaviour is to give a bar a "double number", eg the first bar of a 16-bar repeat might be numbered 40/56, but I think that's normally explained by the fact that some parts have voltae and some are written out in full.
I can only remember ONE occasion where there was a volta and the bars of the voltae shared bar numbers. And I can't remember what piece that was.
I recently acquired several notation manuals, and Gardner
Read doesn't mention numbering measures. However, Kurt
Stone (Music Notation in the 20th cent.) has this to say:
There is little agreement about numbering the measures
of first and second endings in repeats. The most
practical (although rather illogical) method is to
ignore the fact that first and second endings are
involved and simply count all measures, regardless of
repeat signs, etc. (p.168)
This is LilyPond's default behavior.
And I'm afraid I agree, with Lily, that practical is best. If, as conductor (which I'm not), I want to refer to a bar, then I want that number to be unique, not duplicated across voltae. And, for practical purposes, what other use do bar numbers have? None, to my mind ...
Cheers,
Wol
-- | 2013-05-20 20:48:51 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8141821622848511, "perplexity": 6653.566788494919}, "config": {"markdown_headings": false, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368699238089/warc/CC-MAIN-20130516101358-00093-ip-10-60-113-184.ec2.internal.warc.gz"} |
http://cogsci.stackexchange.com/questions/508/how-can-i-create-computer-based-psychology-experiments-using-os-x/520 | # How can I create computer based psychology experiments using OS X?
I've used E-prime to create computer based psychology experiments (you know, the kind where you for example show a number of pictures to the participant and record their responses to them, for example keypresses) for Windows. However, I'd now like to create similar experiments in OS X.
If I search Google for this, I find a couple of different hits but it's hard to evaluate the quality of these programs. What's a good program to use for this purpose?
-
Good question! I mostly use E-Prime on PC, so I am not sure, but more and more people seem to be using python and writing code "from scratch" and I've heard good things about PsychoPy – Dan M. Feb 28 '12 at 13:25
To note: PsychoPy doesn't require you to write code from scratch, but its capabilities are greatly expanded by doing so. (Disclosure: I haven't actually used it). – Jeff Feb 28 '12 at 23:58
A professor recently told me that python was "the industry standard" for stimulus presentation. Though I have heard R is better for running advanced statistics. – user1056 Aug 14 '12 at 21:59
You should specify whether you need psychophysics-level precision timing, or whether you just want to present stimuli. If it's the latter, then flash and all sorts of other roll-your-own solutions are fine. If it's the former, then you probably want something like psychopy, where the designers have already done a lot of the testing and the low-level black magic necessary to get precision timing of stimulus display and responses. – octern Nov 13 '12 at 18:46
My research group has gone pure python for coding experiments; we've been burned too many times by glitches and implicit behaviour in boxed experiment-building software to bother trusting it. Moving from a point-and-click experiment design interface to pure code does have a large learning curve, and you want to be careful to model your own code on well validated code from others (esp. for ensuring that you're implementing timing properly, which can be nuanced).
It may be tempting to hire CS students to code your experiments, but the danger there is they don't come to the table with the same experimental design background as you do and we've encountered some implementation errors as a consequence (ex. failing to check for input during dead time between stimulus presentation, etc).
While I recognize and indeed support the push to specialization in cognitive science, I do think that in the same way that we require all researchers to have a bit of background in statistics, we should also require all researchers to have a bit of background in coding, not least because it helps engender a mindset amenable to considering formal models of mind.
-
Here's a zip of one of my moderately well-commented experiments: filedropper.com/castforweb – Mike Lawrence Feb 28 '12 at 17:44
Yet another addendum: the code I linked is very much written in a procedural style rather than an OOP style. This is because I find OOP unnecessary for the simple stuff I tend to do and find it takes a little more effort (both coding and planning) to do fully OOP experiments. Possibly just a personal quirk though. – Mike Lawrence Feb 28 '12 at 18:34
@ArtemKaznatcheev I haven't empirically tested things myself, but I've been told by sources I trust that web-based RT collection isn't reliable. However, it's easy enough to take a python script and create a stand-alone app (or exe) using py2app (or py2exe) that your participants can download. – Mike Lawrence Mar 11 '12 at 18:09
@Mike : Any plans of creating a open-source python project(aimed at replacing e-prime or such)?? I am a python programmer, have some experience with eye-tracking experiments and would be happy to contribute to such an effort. – Software Mechanic Apr 17 '12 at 17:28
@AnandJeyahar You should check out PsychoPy (psychopy.org) and OpenSesame (cogsci.nl/software/opensesame) two free software approaches that are what you are looking for. The only thing missing is a nice implementation of questionnaires and other simple item types. But I have some ideas based on PyQt4 and webkit. Let me know if you are interested in doing something. – Henrik May 17 '12 at 13:25
I would recommend Matlab and the Psychophysics Toolbox. It lets you display all sorts of stimuli in full-screen mode, and it lets you capture key strokes and mouse clicks.
-
Just make sure that you install MATLAB R2010a or earlier. R2010b and later versions of MATLAB are 64-bit-only, and PsychToolbox is 32-bit-only. – Mark L Mar 7 '12 at 6:16
@Solus: which platform? 32-bit Windows is still supported in the latest release. – Dima Mar 7 '12 at 14:57
OS X (the comment was intended for Speldosa, but also for anyone else wanting to use PsychToolbox on OS X. Just something to be aware of; it's not a show-stopper [e.g., you can install multiple instances of Matlab, or just install the latest 64-bit version and use the 32-bit version of Octave to run PsychToolbox scripts]). – Mark L Mar 8 '12 at 5:28
OpenSesame is a recent entry that is cross-platform and seems to promote GUI-based design while allowing customization via Python scripting.
It can be found at their website (link above). A recent article has references and summarizes 16 other tools as well (including some reported in the other stackexchange responses). I found great video tutorials and the interface to be friendly and easy to use.
It does not yet seem to provide built-in support for networked experiments (e.g. for yoked experiments or multi-subject games), but I suspect you could add this with custom Python scripting. For simple stimulus presentation and response tracking, I found it worked great and allowed rapid development. I wrote the experiment on my Linux machine and deployed for subject testing on Windows machines with no problems.
-
Welcome to the site! Is it possible to expand your answer to list the names of the 16 other tools the article summarized? For those that are not behind the pay-wall. – Artem Kaznatcheev Aug 16 '12 at 22:12
I agree, OpenSesame is great! – crash Apr 24 '14 at 8:34
I use Adobe Flash. My colleague Yana Weinstein has written a book on Flash Programming for the Social & Behavioral Sciences that should be out next month. I'm a contributor and helped write some of it! Check it out by clicking here.
-
I am the author of the book on Flash mentioned by Andy DeSoto. I have found Flash to be very straightforward and reliable for online data collection. – user2389 Nov 13 '12 at 5:29
Another option is to program in C/C++ using the Tscope library. If you're not experienced with programming, it's a bit tricky at first, but I'd say it pays off in the end.
Tscope is a C/C++ experiment programming library for cognitive scientists. It is distributed under the Gnu Public License, and is intended to run on Windows 2000 and XP platforms. It provides functions for graphics, sound, timing, randomization and response registration. Restricted Linux and Mac OS X versions are also available.
-
Usually it is preferred to not just link to a resource, but also give some information about it. I added the quote from the main page and a more direct link to the list of features. Thank you for the answer! – Steven Jeuris Mar 25 '12 at 13:44
Great question. There are two software packages that might be interesting to you:
1. I have tried to run EPrime in a virtual machine on my Mac and it was a catastrophe. As I found out it used to work, but some of the later updates made it impossible. In the process of figuring this out, I came across PsyScope X. It is an actively developed open source alternative to EPrime on the Mac and, apparently, even the collected data is somewhat compatible with EPrime. If you are interested in importing PsyScope data into EPrime, see the EPrime FAQ.
2. However, agreeing with Mike, I felt like I needed more flexibility and control for my recent experiment and turned to LiveCode, as it was recommended to me by a neighboring department. It is a high-level programming language similar to VisualBasic, but the language is very English-like and the software suite is quite cheap. What I particularly like about LiveCode is that you can program on your Mac and create executables for Mac, Windows, Linux, and even iOS and Android if necessary. I collected all my data on Windows machines and there were only very minor compatibility issues (such as native fonts etc.). I would recommend LiveCode as the learning curve is not as steep as that of other languages and there is great documentation with (video) tutorials and a responsive community happy to help.
Also, for a further overview of behavioral experiment software refer to the Wikipedia comparison page.
-
From what I know, PsyScope X is not as “actively developed” as you think. – Hisham Sep 21 '12 at 4:28
EPrime would just run on Windows on a Mac. You could have multiple OS sessions running at once, and virtual machines haven't been necessary for that for years. Nevertheless, it's probably good to get away from it because its timing is terrible. – John Sep 21 '12 at 7:49
Thanks for your comments. You're right, John, running Windows on your Mac is a possibility. However, this way you have no access to your Mac applications while developing in Windows. I wanted to avoid the hassle of having to constantly reboot my machine when switching tasks. – crash Sep 21 '12 at 10:46
You should consider SuperLab. It runs on Mac and Windows.
It uses a point-and-click user interface that makes it really easy to set up experiments. Even "programming" contingencies are done via point-and-click.
Disclaimer: I wrote the original version of SuperLab and I work at Cedrus, its developer.
-
As of version 4 Inquisit has Mac support. See this announcement. You can run experiments locally or over the web. It is a commercial product.
To quote the website:
Inquisit is used by behavioral scientists throughout the world for creating and administering numerous cognitive, social, and neuropsychological measures. Now in use in over 1077 research institutions throughout the world.
The Mac support was only introduced in early 2012 so I imagine there will be a polishing process. I've used it many times. I wrote up a few introductory notes about Inquisit.
-
It has been a while since I asked this question, but I've tried out PsychoPy as some people suggested in the comments, and so far I'm really digging it. If you want you can use only the GUI to create your experiment, but if you're doing more advanced stuff you can export the code and start digging around in it.
As a bonus, it's also compatible with all major operating systems, that is: Windows, OS X, and Linux.
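To give a flavour of what the exported code route looks like, here is a minimal Coder-style sketch (my own rough example rather than anything generated by the Builder; the window settings and the 'left'/'right' key names are arbitrary choices):
from psychopy import visual, core, event

win = visual.Window(fullscr=False, color='black', units='pix')
fixation = visual.TextStim(win, text='+', height=40, color='white')

fixation.draw()
win.flip()                      # stimulus appears at this screen refresh
timer = core.Clock()            # start timing right after the flip

# wait for a response and time-stamp it against the clock started above
keys = event.waitKeys(keyList=['left', 'right', 'escape'], timeStamped=timer)
key, rt = keys[0]
print(key, rt)

win.close()
core.quit()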
-
It has been a while since the question was asked, but I am going to give my answer anyway. PsychoPy is really good and easy to use and is what I typically recommend people to use.
However, I recently found the Python library Expyriment and it seems promising. Although you will have to write your own code, there are available methods for creating the window, presenting a fixation cross, and so on. A plus with this library is that you can also code experiments for Android devices (and, of course, Windows, Linux, and OS X).
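As a rough idea of the Expyriment skeleton (sketched from the library's documented pattern; treat the exact class and argument names as assumptions to check against the Expyriment docs):
from expyriment import design, control, stimuli

exp = design.Experiment(name="Simple detection")
control.initialize(exp)            # opens the window, sets up clock and keyboard

fixation = stimuli.FixCross()
target = stimuli.TextLine("X")

control.start()                    # asks for a subject id, shows the ready screen
fixation.present()
exp.clock.wait(1000)               # show the fixation cross for 1000 ms
target.present()
key, rt = exp.keyboard.wait()      # wait for a key press, get the reaction time
exp.data.add([key, rt])

control.end()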
- | 2016-07-24 01:03:07 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 1, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.24663783609867096, "perplexity": 1380.1588950684345}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-30/segments/1469257823805.20/warc/CC-MAIN-20160723071023-00113-ip-10-185-27-174.ec2.internal.warc.gz"} |
https://syten.eu/docs/namespacesyten_a2d3a91f1d146a03547847242165a045d.html | SyTen
◆ OffsetDenseTensor
template<Rank rank, typename Scalar = SDef>
using syten::OffsetDenseTensor = typedef OffsetDenseTensorImpl::OffsetDenseTensor
A specialised offset dense tensor.
This tensor also stores a dense array of scalar coefficients like DenseTensor, but only for a certain part of its linear length.
That is, if tensor elements are identically zero for leading coordinates below or above some value, the offset dense tensor does not store them. As an example, consider the tensor $$T_{a,b,c,d}$$ with all dimensions equal to four. The tensor contains non-zero values only where a is 2 and b is either 1 or 2. As a result, storing the elements of the tensor in a linear array gives us
| a = 0 | a = 1 | a = 2 | a = 3 |
+-----------------------+-----------------------+-----------------------+-----------------------+
| b=0 | b=1 | b=2 | b=3 | b=0 | b=1 | b=2 | b=3 | b=0 | b=1 | b=2 | b=3 | b=0 | b=1 | b=2 | b=3 |
+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+
| 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | c,d | c,d | 0 | 0 | 0 | 0 | 0 |
where in the last line, each cell contains 16 elements but only those marked as c,d are actually non-zero. A standard dense tensor, however, still has to store all 224 zero elements in addition to the 32 non-zero elements, which is rather inefficient.
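As a rough illustration of the saving (a NumPy sketch of the storage idea only, not SyTen's C++ implementation), one can keep just the contiguous non-zero part of the flattened array together with the offset at which it starts:
import numpy as np

dims = (4, 4, 4, 4)
full = np.zeros(dims)
full[2, 1:3, :, :] = np.random.rand(2, 4, 4)   # only a=2, b in {1,2} are non-zero

flat = full.reshape(-1)
nonzero = np.nonzero(flat)[0]
offset, end = nonzero.min(), nonzero.max() + 1

stored = flat[offset:end].copy()               # 32 coefficients instead of 256
print(offset, stored.size)                     # e.g. 144 32

def element(index_tuple):
    # Look up one element through the offset representation.
    lin = np.ravel_multi_index(index_tuple, dims)
    if offset <= lin < end:
        return stored[lin - offset]
    return 0.0

print(element((2, 1, 0, 0)) == full[2, 1, 0, 0])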
This type of tensor is generated during the product of standard dense tensors with rank-3 identity dense tensors, merging two tensor legs into one. | 2021-09-20 05:20:10 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5299405455589294, "perplexity": 987.9830773262672}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057018.8/warc/CC-MAIN-20210920040604-20210920070604-00336.warc.gz"} |
https://www.semanticscholar.org/paper/Families-of-Perfect-Tensors-Geng/f106cdb32ee969c7024ab6ddf998efefaa032d9b | # Families of Perfect Tensors
@article{Geng2022FamiliesOP,
title={Families of Perfect Tensors},
author={Runshi Geng},
journal={ArXiv},
year={2022},
volume={abs/2211.15776}
}
Perfect tensors are the tensors corresponding to the absolutely maximally entangled states, a special type of quantum states of interest in quantum information theory. We establish a method to compute parameterized families of perfect tensors in $(\mathbb{C}^d)^{\otimes 4}$ using exponential maps from Lie theory. With this method, we find explicit examples of non-classical perfect tensors in $(\mathbb{C}^3)^{\otimes 4}$. In particular, we answer an open question posed by Życzkowski et al.
• Conjuncture 13 | 2023-03-22 18:51:13 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6682024598121643, "perplexity": 2929.811424703863}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296944452.74/warc/CC-MAIN-20230322180852-20230322210852-00589.warc.gz"} |
https://math.stackexchange.com/questions/1791602/how-to-fill-in-the-gaps-in-my-proof-to-make-it-more-convincing | # How to fill in the gaps in my proof to make it more convincing?
Let $T$ be a tree with $3$ edges. Let $G$ be a simple graph such that each vertex has degree at least $3$. Show that $G$ has $T$ as a subgraph.
This statement is obvious but I am not sure how to prove it rigorously.
Could anybody please help me check whether my proof is good enough or not, and some advice for improvement if possible. I really think my proof is not good enough, because I am not sure how to fill in the details to make the proof more convincing. Thanks!
Since $T$ is a tree with $3$ edges, each vertex of $T$ has degree at least $1$ and at most $3$. Then we can extend $T$ by adding edges and vertices so that it becomes $G$; this is possible because every vertex of $G$ has degree at least $3$.
EDITED
We can find a vertex in $T$ that has degree less than $3$, and then connect it by an edge to another vertex that has degree less than $3$. But we have to make sure that no cycle is created. We can keep adding edges until all vertices have at least $3$ edges. But $G$ is given, so we have to add all the edges according to $G$.
My other concern (out of curiosity): as the number of edges increases to a general $n$, do we then need to deal with every possible tree with $n$ edges, or is there a better way than handling each possible shape of the tree?
Let's say $T$ has 5 edges; then there are more than two trees that have 5 edges. Does that mean we have to deal with each case and extend the tree from each case? Is there any better way?
• There are only two trees with three edges. Can you show that $G$ must contain one or the other? – Ethan Bolker May 19 '16 at 12:36
• @EthanBolker I can do it by drawing, but not sure how to put into words? It is quite intuitive that $G$ must contain either one of them. – user338393 May 19 '16 at 12:40
• @user338393 Better to start with what you know that what you want. Here you know that $G$ has a vertex with degree at least 3. How does that help? – almagest May 19 '16 at 12:41
• @almagest that means we can add more edges and vertices to $T$ so that all vertices have degree at least 3? – user338393 May 19 '16 at 12:44
• Start from the vertex. Not from $T$. You have a vertex with three others joined to it. So you have dealt with one possible $T$. Now what about the other? – almagest May 19 '16 at 12:45
There are essentially two different trees with $3$ edges. You can have all three edges incident at the same vertex; this is the star graph $S_3$. Or you can have at most $2$ edges incident at any vertex – then the tree is the path graph $P_4$ (why?).
$S_3$ is easy. The claim isn't quite right since it doesn't hold for the empty graph; but if we assume that $G$ is not empty, then it has at least one vertex of degree at least $3$, and any three edges incident at that vertex induce a subgraph isomorphic to $S_3$.
$P_4$ requires a bit more work – I'll leave that to you...
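For the record, one way the $P_4$ case can go (a sketch, not the only route): pick any edge $uv$ of $G$. Since $\deg(u) \ge 3$, $u$ has a neighbour $w \notin \{v\}$; since $\deg(v) \ge 3$, $v$ has at least three neighbours, so it has a neighbour $x \notin \{u, w\}$. Because $G$ is simple, the vertices $w,\,u,\,v,\,x$ are distinct, so they form (in this order) a path with three edges, i.e. a copy of $P_4$ in $G$.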
• Thanks. I am still not quite sure even though you say $S_3$ is easy. The problem is $G$ is given, so we have to add the vertices and edges carefully. Is there another way to show a graph is a subgraph of another graph? I have made an edit to my post, could you please give some advice on the edit and how can I improve it? – user338393 May 19 '16 at 13:01
• @user338393: This idea of adding edges to $T$ to create all of $G$ seems misguided to me. You just have to exhibit the subgraph; that any remaining edges can be added to a subgraph of $G$ to obtain $G$ is trivial. The work in this proof lies in proving that $G$ contains $P_4$ as a subgraph. For $S_3$, there's nothing left to do -- every vertex has $3$ edges, and that vertex, those three edges and their three other endpoints together form a subgraph of $G$ isomorophic to $S_3$. – joriki May 19 '16 at 13:09 | 2019-10-20 18:52:50 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5862463116645813, "perplexity": 86.29372320784887}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986718918.77/warc/CC-MAIN-20191020183709-20191020211209-00444.warc.gz"} |
http://physics.stackexchange.com/questions/56265/how-to-get-the-angle-needed-for-a-projectile-to-pass-through-a-given-point-for-t/56268 | # How to get the angle needed for a projectile to pass through a given point for trajectory plotting [closed]
I am trying to find the angle needed for a projectile to pass-through a given point.
Here is what I do know:
• Starting Point $(x_0,y_0)$
• Velocity
• Pass-through point $(x_1, y_1)$
I also need to incorporate gravity into the equation. Anyone have any ideas? I haven't had much luck so far, so any ideas/suggestions would be great.
-
## closed as off-topic by Emilio Pisanty, tpg2114, Qmechanic♦Nov 6 '13 at 22:17
This question appears to be off-topic. The users who voted to close gave this specific reason:
• "Homework-like questions should ask about a specific physics concept and show some effort to work through the problem. We want our questions to be useful to the broader community, and to future users. See our meta site for more guidance on how to edit your question to make it better" – Emilio Pisanty, tpg2114, Qmechanic
If this question can be reworded to fit the rules in the help center, please edit the question.
## 5 Answers
The answer posted eight hours ago is not correct. The formula given by Akash works for two points on the same level, and the question has the trajectory passing through two arbitrary points.
More importantly, the answer suggests that the right way to do trajectory problems is by knowing the right formulas. Trajectories are fun and you can learn a lot of physics by figuring them out in any number of ways. You don't learn anything if you do them by plugging numbers into formulas.
Having said that, this appears to be one of the more difficult types of trajectory problems. I have three equations in three unknowns: $v_x$, $v_y$, and $T$. Eliminating $v_x$ and $v_y$, I get a quadratic equation in $T^2$. This is correct because there are two solutions to the problem: one where you aim the projectile as close as possible to the target, and the other where you loft it high in the air and hit the target on the way down.
-
Ok, i think i understand what you are saying. Basically i am trying to pass through the point by aiming the projectile as close as possible. I am unsure where to even begin to find this angle. The closest i could find was on this page en.wikipedia.org/wiki/Trajectory_of_a_projectile in the section titled Angle required to hit coordinate, but i can't get that equation to work at all. – NineBlindEyes Mar 8 '13 at 20:42
@NineBlindEyes: Try the equation I give. Unless I've made a mistake, it should give the two possible answers. – Phil H Mar 11 '13 at 13:31
I believe that this should rather be a comment but not an answer. – Peter Kravchuk May 10 '13 at 17:19
Ahh, I've just noticed that it all was in March, not in May) – Peter Kravchuk May 10 '13 at 17:23
Assuming gravity acting downwards, we can separate the horizontal and vertical motion; horizontally, it moves at a constant speed. Vertically, it is accelerating at $-g$.
Since we have the initial speed $v$, we know that for some angle $\theta$ between the $x$-axis and the initial velocity, the horizontal component of that velocity will be $v \cos(\theta)$. So the time of flight will be:
$\tau = (x_1 - x_0) / (v \cos(\theta) )$
We also know that vertically the projectile is accelerated by gravity, giving us a quadratic equation for the motion, so it will pass the elevation both on the way up and the way down (unless it coincides with the peak). This time the component of the velocity is $v \sin(\theta)$, so using $s = s_0 + ut + \frac{1}{2}a t^2$
$y_1 = y_0 + v \sin(\theta) \tau + \frac{1}{2}(-g)\tau^2$
We want $\theta$ in terms of everything else, and $\tau$ is unknown. So we want to substitute the first equation into the second. But let's reduce the number of terms for now by looking at deltas: $\alpha = x_1 - x_0$, $\beta = y_1 - y_0$ (i.e. change origin):
$\tau = \alpha / (v \cos(\theta) )$
$\beta = v \sin(\theta) \tau - \frac{1}{2}g \tau^2$
Substitute for $\tau$:
$\beta = v \sin(\theta) \alpha / (v \cos(\theta) ) - \frac{1}{2}g \alpha^2 / (v^2 \cos^2(\theta) )$
$\beta = \sin(\theta) \alpha / \cos(\theta) - \frac{1}{2}g \alpha^2 / (v^2 \cos^2(\theta) )$
$\beta v^2 \cos^2(\theta) = v^2 \cos(\theta) \sin(\theta) \alpha - \frac{1}{2}g \alpha^2$
Well, that looks fun. Untangling trig is not my preferred way to spend the afternoon. Let's play SOH CAH TOA. Imagine a right-angled triangle with angle $\theta$, base (adjacent) $\alpha$ (to pick a known quantity) and height (opposite) $\gamma$ (our new unknown). Hypotenuse $h$ then satisfies $h^2 = \alpha^2 + \gamma^2$. We get:
$\sin(\theta) = \gamma / h$
$\cos(\theta) = \alpha / h$
$\cos^2(\theta) = \alpha^2 / h^2$
$\cos(\theta) sin(\theta) = \alpha \gamma / h^2$
Substituting:
$\beta v^2 \alpha^2 / h^2 = v^2 \alpha^2 \gamma / h^2 - \frac{1}{2}g \alpha^2$
$\beta v^2 = v^2 \gamma - \frac{1}{2} g h^2 \quad$ (don't worry, the $\alpha$ dependence is still there in $h$)
Substitute for $h^2$ now that it is only there once:
$\beta v^2 = v^2 \gamma - \frac{1}{2} g (\alpha^2 + \gamma^2)$
Get in terms of $\gamma$, as that is our link to $\theta$:
$(\frac{1}{2} g) \gamma^2 - v^2 \gamma + \frac{1}{2} g \alpha^2 + \beta v^2 = 0$
$\gamma^2 - (2 v^2 / g) \gamma + (\alpha^2 + 2 \beta v^2 / g) = 0$
One quadratic equation (eventually). Calling the common factor $f = 2 v^2 / g$, we get:
$\gamma^2 - f \gamma + (\alpha^2 + \beta f) = 0$
In the best tradition:
$\gamma = \frac{1}{2}\left(f \pm \sqrt{f(f - 4\beta) - 4\alpha^2 }\right)$
From $\gamma$, we will want $\theta$, so to avoid square terms go for $\tan$:
$\tan(\theta) = \gamma / \alpha \quad$ (SOH CAH TOA)
$f$ can be calculated separately, so in code I would do this (pseudocode):
g = 9.81; // ish
alpha = x_one - x_zero;
beta = y_one - y_zero;
eff = 2 * v * v / g;
rootterm = eff*(eff - 4*beta) - 4*alpha*alpha;
// test for imaginary roots
if(rootterm < 0) {
... cannot hit target with this velocity ...
} else {
gamma_first = (eff + sqrt(rootterm))/2;
gamma_second = (eff - sqrt(rootterm))/2;
theta_first = arctan(gamma_first / alpha);
theta_second = arctan(gamma_second / alpha);
}
You are then free to choose which solution you prefer. In the case that $f(f - 4\beta) = 4 \alpha^2$, they will be the same value.
I'm sure there's a shorter route to the end there, perhaps by looking at the right-angled triangle to begin with; it represents the trajectory of the projectile without gravity.
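For completeness, here is that pseudocode as a small runnable Python function (my own sketch; the function name and the use of atan2 are my choices, and it uses the corrected discriminant above):
import math

def launch_angles(x0, y0, x1, y1, v, g=9.81):
    alpha = x1 - x0
    beta = y1 - y0
    f = 2.0 * v * v / g
    disc = f * (f - 4.0 * beta) - 4.0 * alpha * alpha
    if disc < 0:
        return None                      # target unreachable at this speed
    root = math.sqrt(disc)
    gammas = ((f + root) / 2.0, (f - root) / 2.0)
    return tuple(math.atan2(gamma, alpha) for gamma in gammas)

# Cross-check against the worked example further down the page
# (500 m across, 20 m up, 100 m/s): the low solution is about 17.15 degrees.
angles = launch_angles(0.0, 0.0, 500.0, 20.0, 100.0)
print([math.degrees(a) for a in angles])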
-
Take the starting point as the origin for vectors. Then the trajectory is given by $$\vec r(t)=\vec{v}_0t+\vec{g}t^2/2$$ Suppose now that your target is at position $\vec{r}_1$, so we have to ensure that there exists a solution for $t$ of $$\vec{r}_1=\vec{v}_0t+\vec{g}t^2/2$$ Let's look at this equation solved for $\vec{v}_0$: $$\vec{v}_0=\vec{r}_1/t-\vec{g}t/2\\ v_0^2=r_1^2/t^2+g^2t^2/4-(\vec{r}_1,\vec{g})$$ We know what the value of $v_0^2$ is, so if the last equation has a solution, then we can just choose the direction given by the equation above. The rhs of the last equation has a minimum wrt $t$ with value (via the AM-GM inequality) $r_1g-(\vec{r}_1,\vec{g})$. So, $$v_0^2\ge r_1g-(\vec{r}_1,\vec{g})$$ is the necessary and sufficient condition for being able to hit the target. Now, let's solve the equation $v_0^2=r_1^2/t^2+g^2t^2/4-(\vec{r}_1,\vec{g})$ (it's biquadratic): $$t^2=\frac{2}{g^2}\left(v_0^2+(\vec{r}_1,\vec{g})\pm\sqrt{(v_0^2+(\vec{r}_1,\vec{g}))^2-g^2r_1^2}\right)$$ These two solutions correspond to two distinct trajectories hitting the target. One will hit the target 'from above' ($+$ sign, greater time of flight), while the other will hit it 'not that much from above' ($-$ sign, shorter time of flight). Now you can pick your favorite sign and use $$\vec{v}_0=\vec{r}_1/t-\vec{g}t/2$$ to find the components of $\vec{v}_0$. For reference: $$\vec{g}=(0,-g),\,\vec{r}_1=(x_1-x_0,y_1-y_0)\\ (\vec{r}_1,\vec{g})=g(y_0-y_1).$$
-
Sometimes I just dont get it why the question gets to the main page after 2 months or so.. – Peter Kravchuk May 10 '13 at 17:24
If the span is defined as $\Delta x = x_1-x_0$ and $\Delta y = y_1-y_0$ then the following equations need to be solved for $t$ and $\theta$
$$\Delta x = v\, t\, \cos\theta$$ $$\Delta y = v\, t\, \sin\theta - \frac{1}{2} g t^2$$
One way to do this is to recognize that $\tan\theta = \frac{ \Delta y + \frac{1}{2} g t^2}{ \Delta x}$ and use it above (since $\cos \left(\tan^{-1}z \right) = \frac{1}{\sqrt{1+z^2}}$)
$$\Delta x = \frac{v\,t\,\Delta x}{\sqrt{\Delta x^2 + \left( \Delta y + \frac{1}{2} g t^2 \right)^2 }}$$
to be solved for $t$ as
$$t = \frac{\sqrt{2v^2-2g\Delta y-2\sqrt{v^4-2g\Delta y\, v^2-g^2 \Delta x^2}}}{g}$$
and then back to $\theta$ as
$$\tan\theta = \frac{v^2}{g \Delta x} - \sqrt{\frac{v^2 (v^2-2 g \Delta y)}{g^2 \Delta x^2}-1}$$
## Example
Shoot something $\Delta x = 500 \rm{m}$ across and $\Delta y = 20 \rm{m}$ up using a $v=100 \rm{m/s}$ projectile.
Gravity is $g=9.81 \rm{m/s^2}$. Plug above to get $t=5.23 \rm{s}$ and $\theta = 17.15 \rm{deg}$.
Here is a track of the projectile:
-
You can use this formula to find the angle needed at which a projectile to be projected to cover a certain distance: $$\sin(2\theta) = \frac{gR}{V^2},$$ where $V$ is the initial velocity by which projectile has been projected $R$ is the distance up to which you want to throw the projectile or in your case difference between two $x$ coordinates i.e. $(X_1 - X_o)$
-
Just tidied your latex. By the way, the proper way to render trig functions is \sin, \cos, etc. If you just type "sin" it gets rendered as the product of three variables $s$, $i$, and $n$. :) – Michael Brown Mar 8 '13 at 11:46
oh thank's this is my first time using this method to write something well thank's for your advice – Dimensionless Mar 8 '13 at 11:48
Awesome! Thanks! I'm trying to work it into the equation now. I have one other question quick though. Now that i have the angle, how do i find a point along the projectile's trajectory given a time? – NineBlindEyes Mar 8 '13 at 18:58
If time is given then you can find the height at which the projectile is and to what distance horizontally which will be your $Y$ and $X$ coordinate's – Dimensionless Mar 8 '13 at 20:56
I don't think this answers the question. The question requires the path to pass through a specific point x1,y1. This equation will only yield an answer for a path to another x-value at the same height, e.g. x0,y0 to x1,y0. – Phil H Mar 11 '13 at 13:30 | 2016-05-29 23:13:26 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7900251150131226, "perplexity": 398.49679854095416}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-22/segments/1464049282275.31/warc/CC-MAIN-20160524002122-00157-ip-10-185-217-139.ec2.internal.warc.gz"} |
http://www.cfd-online.com/W/index.php?title=RANS-based_turbulence_models&diff=9892&oldid=9890 | # RANS-based turbulence models
(Difference between revisions)
Revision as of 17:31, 30 October 2009 (view source)← Older edit Revision as of 22:17, 30 October 2009 (view source)mNewer edit → Line 3: Line 3: [[Introduction to turbulence/Reynolds averaged equations]] [[Introduction to turbulence/Reynolds averaged equations]] - The objective of the turbulence models for the RANS equations is to compute the [[[[Introduction to turbulence/Reynolds averaged equations|Reynolds stresses]], which can be done by three main categories of RANS-based turbulence models: + The objective of the turbulence models for the RANS equations is to compute the [[Introduction to turbulence/Reynolds averaged equations|Reynolds stresses]], which can be done by three main categories of RANS-based turbulence models: # [[Linear eddy viscosity models]] # [[Linear eddy viscosity models]] # [[Nonlinear eddy viscosity models]] # [[Nonlinear eddy viscosity models]] # [[Reynolds stress model (RSM) ]] # [[Reynolds stress model (RSM) ]]
## Revision as of 22:17, 30 October 2009
The objective of the turbulence models for the RANS equations is to compute the Reynolds stresses, which can be done by three main categories of RANS-based turbulence models: | 2016-06-26 01:51:54 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 2, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9415841102600098, "perplexity": 11339.846402928779}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783394414.43/warc/CC-MAIN-20160624154954-00013-ip-10-164-35-72.ec2.internal.warc.gz"} |
https://www.gamedev.net/forums/topic/151836-png-encoding/ | # PNG Encoding
## Recommended Posts
I've had a look at the search page, and didn't find anything useful... I'm trying to write a program to generate PNG images from a bitmap, but I can't get it working. It seems that zlib is compressing my 110 byte bitmap data down to 12 bytes; surely that's not right? My input image looks like this: And here's the code for the compression:
// Setup compression stream //
ZeroMemory(&theStream,sizeof(theStream));
deflateInit(&theStream,9);
theStream.next_in = pbyInput;
theStream.avail_in = (bmpInfo.bmWidth+1)*bmpInfo.bmHeight;
theStream.total_in = 0;
theStream.next_out = pbyData;
theStream.avail_out = dwLen;
theStream.total_out = 0;
// Compress Chunk //
if(deflate(&theStream,Z_FINISH) != Z_STREAM_END) goto lblError;
delete[] pbyInput;
pbyInput = NULL;
// Write Chunk//
// Snipped - just writes theStream.total_out bytes to file //
delete[] pbyData;
pbyData = NULL;
deflateEnd(&theStream);
I get my image data from GetObject() on a DIB section, and I copy it into pbyInput, adding a 0 byte at the start of each scanline (for the filter type), so in total my input data is 110 bytes. Any help would be great. [edited by - Evil Bill on April 19, 2003 10:41:03 AM] [edited by - Evil Bill on April 19, 2003 10:41:30 AM] [edited by - Evil Bill on April 19, 2003 11:45:51 AM]
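For reference, here is a small self-contained sketch (my addition in Python, not the poster's C++ code) that assembles a complete 10x10 greyscale PNG, including the leading 0 filter byte on each scanline (110 bytes of raw data, as above) and the length/type/data/CRC wrapping that the compressed stream has to sit inside:
import struct, zlib

def chunk(ctype, data):
    # A PNG chunk is: 4-byte big-endian length, 4-byte type, data, CRC over type+data.
    return (struct.pack(">I", len(data)) + ctype + data
            + struct.pack(">I", zlib.crc32(ctype + data) & 0xFFFFFFFF))

width, height = 10, 10
rows = bytes(range(width)) * height          # dummy 8-bit greyscale pixels

# One 0 filter byte ("no filter") in front of every scanline.
raw = b"".join(b"\x00" + rows[y * width:(y + 1) * width] for y in range(height))
assert len(raw) == (width + 1) * height      # 110 bytes, as in the post

ihdr = struct.pack(">IIBBBBB", width, height, 8, 0, 0, 0, 0)   # 8-bit greyscale
png = (b"\x89PNG\r\n\x1a\n"
       + chunk(b"IHDR", ihdr)
       + chunk(b"IDAT", zlib.compress(raw, 9))
       + chunk(b"IEND", b""))

with open("tiny.png", "wb") as fh:
    fh.write(png)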
##### Share on other sites
quote:
Original post by Evil Bill
It seems that zlib is compressing my 110 byte bitmap data down to 12 bytes, surely thats not right?
Since your image is very simple, such a result wouldn't surprise me at all. If you really want to make sure it works, why don't you uncompress it and see if you get the original image?
##### Share on other sites
quote:
If you really want to make sure it works, why don't you uncompress it and see if you get the original image?
D'oh! I'll try that now...
##### Share on other sites
Ok, I made a stupid mistake, so it was saving 110 bytes of 0x00's; now it outputs 63 bytes, which is much more realistic.
But it still doesn't work. I tried inflating the data after I deflated it and it comes out the same as it went in, so it's compressing it ok.
I'll rename the topic to something to do with PNGs, I think.
[edited by - Evil Bill on April 19, 2003 11:45:31 AM]
##### Share on other sites
quote:
Original post by Evil Bill
But it still doesn't work. I tried inflating the data after I deflated it and it comes out the same as it went in, so it's compressing it ok.
What doesn''t work if it''s compressing it well ? By the way, if you want to create PNG images, you should perhaps use libPNG, it would be much easier. | 2018-03-17 06:40:50 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.19091345369815826, "perplexity": 3470.3348353571146}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257644701.7/warc/CC-MAIN-20180317055142-20180317075142-00434.warc.gz"} |
https://www.nextgurukul.in/wiki/concept/cbse/class-8/maths/comparing-quantities/sales-tax-and-vat/3957584 | Notes On Sales Tax and VAT - CBSE Class 8 Maths
Ratio: A ratio is an expression that compares quantities relative to each other. When we compare two quantities in relation to each other, such a comparison is mathematically expressed as a ratio.
Percent: Percent means ‘per hundred’ or out of hundred. Percentage is another way of comparing ratios that compares to hundred. A change in a quantity can be positive, which means an increase, or negative, which means a decrease. Such a change can be measured by an increase percent or a decrease percent. Percentage Change (Increase/Decrease) = $\frac{\text{amount of change}}{\text{initial quantity}}$ x 100
Discount: A discount is a price reduction offered on the marked price. Discounts are offered by shopkeepers to attract customers to buy goods and thereby increase sales. Discount = Marked price (MP) – Sale price (SP). A discount is, in fact, a percentage decrease, because the amount of change or discount is compared with the initial price or marked price. Discount percentage = $\frac{\text{discount}}{\text{marked price}}$ x 100
Value Added Tax (VAT): Sales tax is charged by the government on the selling price of an item and is included in the bill amount. Sales tax has been replaced by a new tax called Value Added Tax (VAT). Normally, VAT is included in the price of items like groceries.
Cost Price: The price at which an article is made is called the cost price.
Selling Price: The price at which an article is sold is called the selling price. Profit and loss depend on cost price and selling price. If cost price < selling price, there is a profit: Profit = SP – CP. If cost price > selling price, there is a loss: Loss = CP – SP. You can calculate profit percent or loss percent by using these formulae: Profit% = $\frac{\text{profit}}{\text{CP}}$ x 100, Loss% = $\frac{\text{loss}}{\text{CP}}$ x 100.
Overhead charges: There are certain expenses like transportation, labour charges, repairs, rent, etc. Such additional expenses are called overhead charges. In such cases, the new cost price will be the cost price of the goods plus overhead charges.
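A quick worked example of the formulas above (the numbers here are illustrative, not from the lesson): suppose the marked price of an article is Rs 1200 and it is sold for Rs 960. Then Discount = 1200 – 960 = Rs 240 and the discount percentage = 240/1200 x 100 = 20%. If the shopkeeper's cost price is Rs 800 and overhead charges are Rs 50, the effective cost price is Rs 850, so Profit = 960 – 850 = Rs 110 and Profit% = 110/850 x 100 ≈ 12.94%. If 5% VAT is charged on the sale price, the bill amount is 960 x 1.05 = Rs 1008.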
Previous | 2023-02-08 04:52:51 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 2, "mathjax_tag": 2, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.31795671582221985, "perplexity": 1398.8413395555128}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500671.13/warc/CC-MAIN-20230208024856-20230208054856-00826.warc.gz"} |
https://web2.0calc.com/questions/domain_64 | +0
# Domain
What is the smallest real number in the domain of the function g(x) = sqrt((x - 3)^2 - (x - 18)^2)?
Apr 9, 2021
#1
21/2
Apr 9, 2021
#2
If we want a real number, then $(x-3)^2-(x-18)^2$ has to be greater than or equal to $0$, i.e. $(x-18)^2$ has to be less than or equal to $(x-3)^2$.
The boundary case is $(x-3)^2=(x-18)^2$: that is where $x$ takes its smallest value that still gives a real output. So we have:
$(x-3)^2=(x-18)^2 \implies x^2-6x+9=x^2-36x+324$
$30x=315$
$x=10.5$
so if $x$ is any less than $\boxed{10.5}$, then the expression $\sqrt{(x - 3)^2 - (x - 18)^2}$ is not a real number.
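Another route to the same bound, using the difference of squares: $(x-3)^2-(x-18)^2 = \big[(x-3)-(x-18)\big]\big[(x-3)+(x-18)\big] = 15(2x-21)$, which is nonnegative exactly when $x \ge \tfrac{21}{2} = 10.5$.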
Apr 9, 2021
edited by SparklingWater2 Apr 9, 2021 | 2021-05-17 04:23:06 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.925298273563385, "perplexity": 246.97200784986063}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243991557.62/warc/CC-MAIN-20210517023244-20210517053244-00544.warc.gz"} |
https://www.physicsforums.com/threads/sakurai-chapter-1-problems-23-24.281197/ | # Sakurai, Chapter 1 Problems 23 & 24
1. Dec 23, 2008
### quantumkiko
Problem 23:
If a certain set of orthonormal kets, $$|1> |2> |3>$$, are used as the base kets, the operators A and B are represented by
$$A = \left( \begin{array}{ccc} a & 0 & 0 \\ 0 & -a & 0 \\ 0 & 0 & -a \end{array} \right) B = \left( \begin{array}{ccc} b & 0 & 0 \\ 0 & 0 & -ib \\ 0 & ib & 0 \end{array} \right).$$
A and B commute. Find a new set of orthonormal kets which are simultaneous eigenkets of both A and B. Specify the eigenvalues of A and B for each of the three eigenkets. Does your specification of eigenvalues completely characterize each eigenket?
Problem 24:
Prove that $$(1 / \sqrt{2})(1 + i\sigma_x)$$ acting on a two-component spinor can be regarded as the matrix representation of the rotation operator about the x-axis by angle $$-\pi / 2$$. (The minus sign signifies that the rotation is clockwise.)
2. Dec 23, 2008
### tiny-tim
Hi quantumkiko!
Show us what you've tried, and where you're stuck, and then we'll know how to help.
3. Dec 23, 2008
### quantumkiko
Hi Tim!
In problem 23, I don't know how to represent the simultaneous eigenkets $$|a, b>$$. I just know how to solve the eigenvalues for each operator using the characteristic equation (some are degenerate). I also know that for two commuting observables, their simultaneous eigenkets form a complete set. Therefore, their simultaneous eigenkets are automatically orthogonal. That's all.
For problem 24, I think we have to show that the result of letting the operator $$(1 / \sqrt{2})(1 + i\sigma_x)$$ act on a spinor is equivalent to a rotation operator acting on the same spinor. For a spinor of unit length, I used the matrix representation $$\left( \begin{array}{c} \cos \theta \\ \sin\theta \end{array} \right)$$ (I think this is where I was wrong.) Since the angle of rotation is $$-\pi / 2$$, the rotation matrix will be given by,
$$\left( \begin{array}{cc} cos(-\pi / 2) & sin(-\pi / 2) \\ -sin(-\pi / 2) & cos(-\pi / 2) \end{array} \right) = \left( \begin{array}{cc} 0 & -1 \\ 1 & 0 \end{array} \right)$$.
If I let this operator act on the spinor, the resulting s
Last edited: Dec 23, 2008
4. Dec 23, 2008
### tiny-tim
Hint: in problem 23, just look at the bottom right-hand 2x2 square of A …
it's a multiple of the unit matrix!
so its eigenkets are … ?
5. Dec 23, 2008
### quantumkiko
Its eigenkets are $$\left( \begin{array}{c} 1 \\ 1 \end{array}\right)$$ and $$\left( \begin{array}{c} -1 \\ -1 \end{array}\right)$$?
6. Dec 23, 2008
### tiny-tim
waah!
think … if C is the 2x2 unit matrix,
for what vectors or kets V is CV = V?
7. Dec 23, 2008
### quantumkiko
Oh, for all kets V! So how does that fit into finding the simultaneous eigenstates of A and B?
8. Dec 23, 2008
### tiny-tim
Well, there's one obvious simultaneous eigenstate …
and once you've found the other two eigenstates of B, they're bound to be eigenstates of A also.
(i'm logging out now for a few hours )
9. Dec 23, 2008
### quantumkiko
I got it! The obvious one is $$\left( \begin{array}{ccc} 1 & 0 & 0 \end{array} \right)$$ while the others are $$\left( \begin{array}{ccc} 0 & 1/\sqrt{2} & i/\sqrt{2} \end{array} \right)$$ and $$\left( \begin{array}{ccc} 0 & -1/\sqrt{2} & -i/\sqrt{2} \end{array} \right)$$. Thank you very much!
Now how about Problem # 24?
10. Dec 23, 2008
### tiny-tim
erm … they're the same!!
Le'ssee …
Well … to prove it's a π/2 rotation …
the obvious thing to do is to square it!
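For a quick numerical sanity check of both steps (my own sketch with NumPy/SciPy, not from the thread; it assumes the convention that a rotation by angle $\phi$ about $x$ is $\exp(-i\sigma_x \phi/2)$):
import numpy as np
from scipy.linalg import expm

sigma_x = np.array([[0, 1], [1, 0]], dtype=complex)
identity = np.eye(2, dtype=complex)

candidate = (identity + 1j * sigma_x) / np.sqrt(2)
phi = -np.pi / 2
rotation = expm(-1j * sigma_x * phi / 2)

print(np.allclose(candidate, rotation))                    # True: it is the -pi/2 rotation
print(np.allclose(candidate @ candidate, 1j * sigma_x))    # squaring gives i*sigma_x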
11. Dec 23, 2008
### quantumkiko
Oh yeah, I should really get different eigenkets, not just multiples of one another. So the other two should be $$\left( \begin{array}{ccc} 0 & 1/\sqrt{2} & i/\sqrt{2} \end{array} \right)$$ and $$\left( \begin{array}{ccc} 0 & i/\sqrt{2} & 1/\sqrt{2} \end{array} \right)$$
I was thinking that they won't be orthonormal, but I forgot that one of the $$i$$'s changes sign when doing the inner product.
I got Problem # 24 also. Thank you! | 2017-12-13 08:03:58 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8122138977050781, "perplexity": 899.3269061579093}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948522205.7/warc/CC-MAIN-20171213065419-20171213085419-00201.warc.gz"} |
https://blog.csdn.net/bxg1065283526/article/details/80210359 | # Optimization Methods
Until now, you've always used Gradient Descent to update the parameters and minimize the cost. In this notebook, you will learn more advanced optimization methods that can speed up learning and perhaps even get you to a better final value for the cost function. Having a good optimization algorithm can be the difference between waiting days vs. just a few hours to get a good result.
Gradient descent goes “downhill” on a cost function J. Think of it as trying to do this:
Figure 1 Minimizing the cost is like finding the lowest point in a hilly landscape
At each step of the training, you update your parameters following a certain direction to try to get to the lowest possible point
To get started, run the following code to import the libraries you will need.
import numpy as np
import matplotlib.pyplot as plt
import scipy.io
import math
import sklearn
import sklearn.datasets
from opt_utils import compute_cost, predict, predict_dec, plot_decision_boundary, load_dataset
from testCases import *
plt.rcParams['figure.figsize'] = (7.0, 4.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
## The testCases code is as follows:
# -*- coding: utf-8 -*-
import numpy as np
def update_parameters_with_gd_test_case():
np.random.seed(1)
learning_rate = 0.01
W1 = np.random.randn(2,3)
b1 = np.random.randn(2,1)
W2 = np.random.randn(3,3)
b2 = np.random.randn(3,1)
dW1 = np.random.randn(2,3)
db1 = np.random.randn(2,1)
dW2 = np.random.randn(3,3)
db2 = np.random.randn(3,1)
parameters = {"W1": W1, "b1": b1, "W2": W2, "b2": b2}
grads = {"dW1": dW1, "db1": db1, "dW2": dW2, "db2": db2}
"""
def update_parameters_with_sgd_checker(function, inputs, outputs):
if function(inputs) == outputs:
print("Correct")
else:
print("Incorrect")
"""
def random_mini_batches_test_case():
np.random.seed(1)
mini_batch_size = 64
X = np.random.randn(12288, 148)
Y = np.random.randn(1, 148) < 0.5
return X, Y, mini_batch_size
def initialize_velocity_test_case():
np.random.seed(1)
W1 = np.random.randn(2,3)
b1 = np.random.randn(2,1)
W2 = np.random.randn(3,3)
b2 = np.random.randn(3,1)
parameters = {"W1": W1, "b1": b1, "W2": W2, "b2": b2}
return parameters
def update_parameters_with_momentum_test_case():
np.random.seed(1)
W1 = np.random.randn(2,3)
b1 = np.random.randn(2,1)
W2 = np.random.randn(3,3)
b2 = np.random.randn(3,1)
dW1 = np.random.randn(2,3)
db1 = np.random.randn(2,1)
dW2 = np.random.randn(3,3)
db2 = np.random.randn(3,1)
parameters = {"W1": W1, "b1": b1, "W2": W2, "b2": b2}
grads = {"dW1": dW1, "db1": db1, "dW2": dW2, "db2": db2}
v = {'dW1': np.array([[ 0., 0., 0.],
[ 0., 0., 0.]]), 'dW2': np.array([[ 0., 0., 0.],
[ 0., 0., 0.],
[ 0., 0., 0.]]), 'db1': np.array([[ 0.],
[ 0.]]), 'db2': np.array([[ 0.],
[ 0.],
[ 0.]])}
return parameters, grads, v
def initialize_adam_test_case():
np.random.seed(1)
W1 = np.random.randn(2,3)
b1 = np.random.randn(2,1)
W2 = np.random.randn(3,3)
b2 = np.random.randn(3,1)
parameters = {"W1": W1, "b1": b1, "W2": W2, "b2": b2}
return parameters
def update_parameters_with_adam_test_case():
np.random.seed(1)
v, s = ({'dW1': np.array([[ 0., 0., 0.],
[ 0., 0., 0.]]), 'dW2': np.array([[ 0., 0., 0.],
[ 0., 0., 0.],
[ 0., 0., 0.]]), 'db1': np.array([[ 0.],
[ 0.]]), 'db2': np.array([[ 0.],
[ 0.],
[ 0.]])}, {'dW1': np.array([[ 0., 0., 0.],
[ 0., 0., 0.]]), 'dW2': np.array([[ 0., 0., 0.],
[ 0., 0., 0.],
[ 0., 0., 0.]]), 'db1': np.array([[ 0.],
[ 0.]]), 'db2': np.array([[ 0.],
[ 0.],
[ 0.]])})
W1 = np.random.randn(2,3)
b1 = np.random.randn(2,1)
W2 = np.random.randn(3,3)
b2 = np.random.randn(3,1)
dW1 = np.random.randn(2,3)
db1 = np.random.randn(2,1)
dW2 = np.random.randn(3,3)
db2 = np.random.randn(3,1)
parameters = {"W1": W1, "b1": b1, "W2": W2, "b2": b2}
grads = {"dW1": dW1, "db1": db1, "dW2": dW2, "db2": db2}
The opt_utils code is as follows:
# -*- coding: utf-8 -*-
import numpy as np
import matplotlib.pyplot as plt
import sklearn
import sklearn.datasets
def sigmoid(x):
"""
Compute the sigmoid of x
Arguments:
x -- A scalar or numpy array of any size.
Return:
s -- sigmoid(x)
"""
s = 1/(1+np.exp(-x))
return s
def relu(x):
"""
Compute the relu of x
Arguments:
x -- A scalar or numpy array of any size.
Return:
s -- relu(x)
"""
s = np.maximum(0,x)
return s
def load_params_and_grads(seed=1):
np.random.seed(seed)
W1 = np.random.randn(2,3)
b1 = np.random.randn(2,1)
W2 = np.random.randn(3,3)
b2 = np.random.randn(3,1)
dW1 = np.random.randn(2,3)
db1 = np.random.randn(2,1)
dW2 = np.random.randn(3,3)
db2 = np.random.randn(3,1)
return W1, b1, W2, b2, dW1, db1, dW2, db2
def initialize_parameters(layer_dims):
"""
Arguments:
layer_dims -- python array (list) containing the dimensions of each layer in our network
Returns:
parameters -- python dictionary containing your parameters "W1", "b1", ..., "WL", "bL":
W1 -- weight matrix of shape (layer_dims[l], layer_dims[l-1])
b1 -- bias vector of shape (layer_dims[l], 1)
Wl -- weight matrix of shape (layer_dims[l-1], layer_dims[l])
bl -- bias vector of shape (1, layer_dims[l])
Tips:
- For example: the layer_dims for the "Planar Data classification model" would have been [2,2,1].
This means W1's shape was (2,2), b1 was (1,2), W2 was (2,1) and b2 was (1,1). Now you have to generalize it!
- In the for loop, use parameters['W' + str(l)] to access Wl, where l is the iterative integer.
"""
np.random.seed(3)
parameters = {}
L = len(layer_dims) # number of layers in the network
for l in range(1, L):
parameters['W' + str(l)] = np.random.randn(layer_dims[l], layer_dims[l-1])* np.sqrt(2 / layer_dims[l-1])
parameters['b' + str(l)] = np.zeros((layer_dims[l], 1))
assert(parameters['W' + str(l)].shape == (layer_dims[l], layer_dims[l-1]))
assert(parameters['b' + str(l)].shape == (layer_dims[l], 1))
return parameters
def forward_propagation(X, parameters):
"""
Implements the forward propagation (and computes the loss) presented in Figure 2.
Arguments:
X -- input dataset, of shape (input size, number of examples)
parameters -- python dictionary containing your parameters "W1", "b1", "W2", "b2", "W3", "b3":
W1 -- weight matrix of shape ()
b1 -- bias vector of shape ()
W2 -- weight matrix of shape ()
b2 -- bias vector of shape ()
W3 -- weight matrix of shape ()
b3 -- bias vector of shape ()
Returns:
loss -- the loss function (vanilla logistic loss)
"""
# retrieve parameters
W1 = parameters["W1"]
b1 = parameters["b1"]
W2 = parameters["W2"]
b2 = parameters["b2"]
W3 = parameters["W3"]
b3 = parameters["b3"]
# LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SIGMOID
z1 = np.dot(W1, X) + b1
a1 = relu(z1)
z2 = np.dot(W2, a1) + b2
a2 = relu(z2)
z3 = np.dot(W3, a2) + b3
a3 = sigmoid(z3)
cache = (z1, a1, W1, b1, z2, a2, W2, b2, z3, a3, W3, b3)
return a3, cache
def backward_propagation(X, Y, cache):
"""
Implement the backward propagation presented in figure 2.
Arguments:
X -- input dataset, of shape (input size, number of examples)
Y -- true "label" vector (containing 0 if cat, 1 if non-cat)
cache -- cache output from forward_propagation()
Returns:
gradients -- A dictionary with the gradients with respect to each parameter, activation and pre-activation variables
"""
m = X.shape[1]
(z1, a1, W1, b1, z2, a2, W2, b2, z3, a3, W3, b3) = cache
dz3 = 1./m * (a3 - Y)
dW3 = np.dot(dz3, a2.T)
db3 = np.sum(dz3, axis=1, keepdims = True)
da2 = np.dot(W3.T, dz3)
dz2 = np.multiply(da2, np.int64(a2 > 0))
dW2 = np.dot(dz2, a1.T)
db2 = np.sum(dz2, axis=1, keepdims = True)
da1 = np.dot(W2.T, dz2)
dz1 = np.multiply(da1, np.int64(a1 > 0))
dW1 = np.dot(dz1, X.T)
db1 = np.sum(dz1, axis=1, keepdims = True)
gradients = {"dz3": dz3, "dW3": dW3, "db3": db3,
"da2": da2, "dz2": dz2, "dW2": dW2, "db2": db2,
"da1": da1, "dz1": dz1, "dW1": dW1, "db1": db1}
def compute_cost(a3, Y):
"""
Implement the cost function
Arguments:
a3 -- post-activation, output of forward propagation
Y -- "true" labels vector, same shape as a3
Returns:
cost - value of the cost function
"""
m = Y.shape[1]
logprobs = np.multiply(-np.log(a3),Y) + np.multiply(-np.log(1 - a3), 1 - Y)
cost = 1./m * np.sum(logprobs)
return cost
def predict(X, y, parameters):
"""
This function is used to predict the results of a n-layer neural network.
Arguments:
X -- data set of examples you would like to label
parameters -- parameters of the trained model
Returns:
p -- predictions for the given dataset X
"""
m = X.shape[1]
p = np.zeros((1,m), dtype = int)
# Forward propagation
a3, caches = forward_propagation(X, parameters)
# convert probas to 0/1 predictions
for i in range(0, a3.shape[1]):
if a3[0,i] > 0.5:
p[0,i] = 1
else:
p[0,i] = 0
# print results
#print ("predictions: " + str(p[0,:]))
#print ("true labels: " + str(y[0,:]))
print("Accuracy: " + str(np.mean((p[0,:] == y[0,:]))))
return p
def predict_dec(parameters, X):
"""
Used for plotting decision boundary.
Arguments:
parameters -- python dictionary containing your parameters
X -- input data of size (m, K)
Returns
predictions -- vector of predictions of our model (red: 0 / blue: 1)
"""
# Predict using forward propagation and a classification threshold of 0.5
a3, cache = forward_propagation(X, parameters)
predictions = (a3 > 0.5)
return predictions
def plot_decision_boundary(model, X, y):
# Set min and max values and give it some padding
x_min, x_max = X[0, :].min() - 1, X[0, :].max() + 1
y_min, y_max = X[1, :].min() - 1, X[1, :].max() + 1
h = 0.01
# Generate a grid of points with distance h between them
xx, yy = np.meshgrid(np.arange(x_min, x_max, h), np.arange(y_min, y_max, h))
# Predict the function value for the whole grid
Z = model(np.c_[xx.ravel(), yy.ravel()])
Z = Z.reshape(xx.shape)
# Plot the contour and training examples
plt.contourf(xx, yy, Z, cmap=plt.cm.Spectral)
plt.ylabel('x2')
plt.xlabel('x1')
plt.scatter(X[0, :], X[1, :], c=y, cmap=plt.cm.Spectral)
plt.show()
def load_dataset(is_plot=True): # (reconstructed function header; note the is_plot flag used below)
np.random.seed(3)
train_X, train_Y = sklearn.datasets.make_moons(n_samples=300, noise=.2) #300 #0.2
# Visualize the data
if is_plot:
plt.scatter(train_X[:, 0], train_X[:, 1], c=train_Y, s=40, cmap=plt.cm.Spectral);
train_X = train_X.T
train_Y = train_Y.reshape((1, train_Y.shape[0]))
return train_X, train_Y
## 1 - Gradient Descent
A simple optimization method in machine learning is gradient descent (GD). When you take gradient steps with respect to all m examples on each step, it is also called Batch Gradient Descent.
Warm-up exercise: Implement the gradient descent update rule. The gradient descent rule is, for l = 1, ..., L:
$$W^{[l]} = W^{[l]} - \alpha \, dW^{[l]}$$
$$b^{[l]} = b^{[l]} - \alpha \, db^{[l]}$$
where L is the number of layers and α is the learning rate. All parameters should be stored in the parameters dictionary. Note that the iterator l starts at 0 in the for loop while the first parameters are W[1] and b[1], so you need to shift l to l+1 when coding.
def update_parameters_with_gd(parameters, grads, learning_rate):
"""
Update parameters using one step of gradient descent
Arguments:
parameters -- python dictionary containing your parameters to be updated:
parameters['W' + str(l)] = Wl
parameters['b' + str(l)] = bl
grads -- python dictionary containing your gradients to update each parameter:
grads['dW' + str(l)] = dWl
grads['db' + str(l)] = dbl
learning_rate -- the learning rate, scalar.
Returns:
parameters -- python dictionary containing your updated parameters
"""
L = len(parameters) // 2 # number of layers in the neural networks
# Update rule for each parameter
for l in range(L):
### START CODE HERE ### (approx. 2 lines)
parameters["W" + str(l+1)] = parameters["W" + str(l+1)] - learning_rate*grads["dW" + str(l+1)]
parameters["b" + str(l+1)] = parameters["b" + str(l+1)] -learning_rate*grads["db" + str(l+1)]
### END CODE HERE ###
return parameters
print("W1 = " + str(parameters["W1"]))
print("b1 = " + str(parameters["b1"]))
print("W2 = " + str(parameters["W2"]))
print("b2 = " + str(parameters["b2"]))
W1 = [[ 1.63535156 -0.62320365 -0.53718766]
[-1.07799357 0.85639907 -2.29470142]]
b1 = [[ 1.74604067]
[-0.75184921]]
W2 = [[ 0.32171798 -0.25467393 1.46902454]
[-2.05617317 -0.31554548 -0.3756023 ]
[ 1.1404819 -1.09976462 -0.1612551 ]]
b2 = [[-0.88020257]
[ 0.02561572]
[ 0.57539477]]
A variant of this is Stochastic Gradient Descent (SGD), which is equivalent to mini-batch gradient descent where each mini-batch has just 1 example. The update rule that you have just implemented does not change. What changes is that you would be computing gradients on just one training example at a time, rather than on the whole training set. The code examples below illustrate the difference between stochastic gradient descent and (batch) gradient descent.
X = data_input
Y = labels
parameters = initialize_parameters(layers_dims)
for i in range(0, num_iterations):
# Forward propagation
a, caches = forward_propagation(X, parameters)
# Compute cost.
cost = compute_cost(a, Y)
# Backward propagation.
grads = backward_propagation(X, Y, caches)
# Update parameters.
parameters = update_parameters(parameters, grads)
X = data_input
Y = labels
parameters = initialize_parameters(layers_dims)
for i in range(0, num_iterations):
for j in range(0, m):
# Forward propagation
a, caches = forward_propagation(X[:,j], parameters)
# Compute cost
cost = compute_cost(a, Y[:,j])
# Backward propagation
grads = backward_propagation(X[:,j], Y[:,j], caches)
# Update parameters.
parameters = update_parameters(parameters, grads)
In Stochastic Gradient Descent, you use only 1 training example before updating the gradients. When the training set is large, SGD can be faster. But the parameters will "oscillate" toward the minimum rather than converge smoothly. Here is an illustration of this:
Figure 1 SGD vs GD
“+” denotes a minimum of the cost. SGD leads to many oscillations to reach convergence. But each step is a lot faster to compute for SGD than for GD, as it uses only one training example (vs. the whole batch for GD).
In practice, you'll often get faster results if you use neither the whole training set nor only one training example to perform each update. Mini-batch gradient descent uses an intermediate number of examples for each step. With mini-batch gradient descent, you loop over the mini-batches instead of looping over individual training examples.
Figure 2 SGD vs Mini-Batch GD
“+” denotes a minimum of the cost. Using mini-batches in your optimization algorithm often leads to faster optimization.
What you should remember
- The difference between gradient descent, mini-batch gradient descent and stochastic gradient descent is the number of examples you use to perform one update step.
- You have to tune a learning rate hyperparameter α.
- With a well-tuned mini-batch size, it usually outperforms either gradient descent or stochastic gradient descent (particularly when the training set is large).
## 2 - Mini-Batch Gradient descent
Let’s learn how to build mini-batches from the training set (X, Y).
There are two steps:
• Shuffle: Create a shuffled version of the training set (X, Y) as shown below. Each column of X and Y represents a training example. Note that the random shuffling is done synchronously between X and Y, such that after the shuffling the i-th column of X is the example corresponding to the i-th label in Y. The shuffling step ensures that examples will be split randomly into different mini-batches.
• Partition: Partition the shuffled (X, Y) into mini-batches of size mini_batch_size (here 64). Note that the number of training examples is not always divisible by mini_batch_size. The last mini-batch might be smaller, but you don't need to worry about this. When the final mini-batch is smaller than the full mini_batch_size, it will look like this:
Exercise: Implement random_mini_batches. We coded the shuffling part for you. To help you with the partitioning step, we give you the following code that selects the indexes for the 1st and 2nd mini-batches:
first_mini_batch_X = shuffled_X[:, 0 : mini_batch_size]
second_mini_batch_X = shuffled_X[:, mini_batch_size : 2 * mini_batch_size]
...
def random_mini_batches(X, Y, mini_batch_size = 64, seed = 0):
"""
Creates a list of random minibatches from (X, Y)
Arguments:
X -- input data, of shape (input size, number of examples)
Y -- true "label" vector (1 for blue dot / 0 for red dot), of shape (1, number of examples)
mini_batch_size -- size of the mini-batches, integer
Returns:
mini_batches -- list of synchronous (mini_batch_X, mini_batch_Y)
"""
np.random.seed(seed) # To make your "random" minibatches the same as ours
m = X.shape[1] # number of training examples
mini_batches = []
# Step 1: Shuffle (X, Y)
permutation = list(np.random.permutation(m))
shuffled_X = X[:, permutation]
shuffled_Y = Y[:, permutation].reshape((1,m))
# Step 2: Partition (shuffled_X, shuffled_Y). Minus the end case.
num_complete_minibatches = math.floor(m/mini_batch_size) # number of mini batches of size mini_batch_size in your partitionning
for k in range(0, num_complete_minibatches):
### START CODE HERE ### (approx. 2 lines)
mini_batch_X = shuffled_X[:,k * mini_batch_size : (k+1) * mini_batch_size]
mini_batch_Y = shuffled_Y[:,k * mini_batch_size : (k+1) * mini_batch_size]
### END CODE HERE ###
mini_batch = (mini_batch_X, mini_batch_Y)
mini_batches.append(mini_batch)
# Handling the end case (last mini-batch < mini_batch_size)
if m % mini_batch_size != 0:
### START CODE HERE ### (approx. 2 lines)
mini_batch_X = shuffled_X[:,num_complete_minibatches * mini_batch_size:]
mini_batch_Y = shuffled_Y[:,num_complete_minibatches * mini_batch_size:]
### END CODE HERE ###
mini_batch = (mini_batch_X, mini_batch_Y)
mini_batches.append(mini_batch)
return mini_batches
X_assess, Y_assess, mini_batch_size = random_mini_batches_test_case()
mini_batches = random_mini_batches(X_assess, Y_assess, mini_batch_size)
print ("shape of the 1st mini_batch_X: " + str(mini_batches[0][0].shape))
print ("shape of the 2nd mini_batch_X: " + str(mini_batches[1][0].shape))
print ("shape of the 3rd mini_batch_X: " + str(mini_batches[2][0].shape))
print ("shape of the 1st mini_batch_Y: " + str(mini_batches[0][1].shape))
print ("shape of the 2nd mini_batch_Y: " + str(mini_batches[1][1].shape))
print ("shape of the 3rd mini_batch_Y: " + str(mini_batches[2][1].shape))
print ("mini batch sanity check: " + str(mini_batches[0][0][0][0:3]))
shape of the 1st mini_batch_X: (12288, 64)
shape of the 2nd mini_batch_X: (12288, 64)
shape of the 3rd mini_batch_X: (12288, 64)
shape of the 1st mini_batch_Y: (1, 64)
shape of the 2nd mini_batch_Y: (1, 64)
shape of the 3rd mini_batch_Y: (1, 20)
mini batch sanity check: [ 0.90085595 -0.7612069 0.2344157 ]
What you should remember
- Shuffling and Partitioning are the two steps required to build mini-batches.
- Powers of two are often chosen for the mini-batch size, e.g., 16, 32, 64, 128.
## 3 - Momentum
Because mini-batch gradient descent makes a parameter update after seeing just a subset of examples, the direction of the update has some variance, and so the path taken by mini-batch gradient descent will "oscillate" toward convergence. Using momentum can reduce these oscillations.
Momentum takes into account the past gradients to smooth out the update. We will store the 'direction' of the previous gradients in the variable v. Formally, this will be the exponentially weighted average of the gradient on previous steps. You can also think of v as the "velocity" of a ball rolling downhill, building up speed (and momentum) according to the direction of the gradient/slope of the hill.
Figure 3: The red arrows show the direction taken by one step of mini-batch gradient descent with momentum. The blue points show the direction of the gradient (with respect to the current mini-batch) on each step. Rather than just following the gradient, we let the gradient influence v and then take a step in the direction of v.
v["dW" + str(l+1)] = ... #(numpy array of zeros with the same shape as parameters["W" + str(l+1)])
v["db" + str(l+1)] = ... #(numpy array of zeros with the same shape as parameters["b" + str(l+1)])
Note that the iterator l starts at 0 in the for loop while the first parameters are v["dW1"] and v["db1"] (that's a "one" in the superscript). This is why we shift l to l+1 in the for loop.
def initialize_velocity(parameters):
"""
Initializes the velocity as a python dictionary with:
- keys: "dW1", "db1", ..., "dWL", "dbL"
- values: numpy arrays of zeros of the same shape as the corresponding gradients/parameters.
Arguments:
parameters -- python dictionary containing your parameters.
parameters['W' + str(l)] = Wl
parameters['b' + str(l)] = bl
Returns:
v -- python dictionary containing the current velocity.
v['dW' + str(l)] = velocity of dWl
v['db' + str(l)] = velocity of dbl
"""
L = len(parameters) // 2 # number of layers in the neural networks
v = {}
# Initialize velocity
for l in range(L):
### START CODE HERE ### (approx. 2 lines)
v["dW" + str(l+1)] = np.zeros(parameters["W" + str(l+1)].shape)
v["db" + str(l+1)] = np.zeros(parameters["b" + str(l+1)].shape)
### END CODE HERE ###
return v
parameters = initialize_velocity_test_case()
v = initialize_velocity(parameters)
print("v[\"dW1\"] = " + str(v["dW1"]))
print("v[\"db1\"] = " + str(v["db1"]))
print("v[\"dW2\"] = " + str(v["dW2"]))
print("v[\"db2\"] = " + str(v["db2"]))
v["dW1"] = [[0. 0. 0.]
[0. 0. 0.]]
v["db1"] = [[0.]
[0.]]
v["dW2"] = [[0. 0. 0.]
[0. 0. 0.]
[0. 0. 0.]]
v["db2"] = [[0.]
[0.]
[0.]]
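For reference (reconstructed from the code below, not part of the original text), the update rule that update_parameters_with_momentum implements is, for l = 1, ..., L, with β the momentum hyperparameter and α the learning rate:
$$v_{dW^{[l]}} = \beta\, v_{dW^{[l]}} + (1-\beta)\, dW^{[l]}, \qquad W^{[l]} = W^{[l]} - \alpha\, v_{dW^{[l]}}$$
$$v_{db^{[l]}} = \beta\, v_{db^{[l]}} + (1-\beta)\, db^{[l]}, \qquad b^{[l]} = b^{[l]} - \alpha\, v_{db^{[l]}}$$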
def update_parameters_with_momentum(parameters, grads, v, beta, learning_rate):
"""
Update parameters using Momentum
Arguments:
parameters -- python dictionary containing your parameters:
parameters['W' + str(l)] = Wl
parameters['b' + str(l)] = bl
v -- python dictionary containing the current velocity:
v['dW' + str(l)] = ...
v['db' + str(l)] = ...
beta -- the momentum hyperparameter, scalar
learning_rate -- the learning rate, scalar
Returns:
parameters -- python dictionary containing your updated parameters
v -- python dictionary containing your updated velocities
"""
L = len(parameters) // 2 # number of layers in the neural networks
# Momentum update for each parameter
for l in range(L):
### START CODE HERE ### (approx. 4 lines)
# compute velocities
v["dW" + str(l+1)] = np.dot(beta,v["dW"+str(l+1)]) + np.dot(1-beta,grads["dW"+str(l+1)])
v["db" + str(l+1)] = np.dot(beta,v["db"+str(l+1)]) + np.dot(1-beta,grads["db"+str(l+1)])
# update parameters
parameters["W" + str(l+1)] = parameters["W" + str(l+1)] - learning_rate*v["dW" + str(l+1)]
parameters["b" + str(l+1)] = parameters["b" + str(l+1)] - learning_rate*v["db" + str(l+1)]
### END CODE HERE ###
return parameters, v
parameters, v = update_parameters_with_momentum(parameters, grads, v, beta = 0.9, learning_rate = 0.01)
print("W1 = " + str(parameters["W1"]))
print("b1 = " + str(parameters["b1"]))
print("W2 = " + str(parameters["W2"]))
print("b2 = " + str(parameters["b2"]))
print("v[\"dW1\"] = " + str(v["dW1"]))
print("v[\"db1\"] = " + str(v["db1"]))
print("v[\"dW2\"] = " + str(v["dW2"]))
print("v[\"db2\"] = " + str(v["db2"]))
W1 = [[ 1.62544598 -0.61290114 -0.52907334]
[-1.07347112 0.86450677 -2.30085497]]
b1 = [[ 1.74493465]
[-0.76027113]]
W2 = [[ 0.31930698 -0.24990073 1.4627996 ]
[-2.05974396 -0.32173003 -0.38320915]
[ 1.13444069 -1.0998786 -0.1713109 ]]
b2 = [[-0.87809283]
[ 0.04055394]
[ 0.58207317]]
v["dW1"] = [[-0.11006192 0.11447237 0.09015907]
[ 0.05024943 0.09008559 -0.06837279]]
v["db1"] = [[-0.01228902]
[-0.09357694]]
v["dW2"] = [[-0.02678881 0.05303555 -0.06916608]
[-0.03967535 -0.06871727 -0.08452056]
[-0.06712461 -0.00126646 -0.11173103]]
v["db2"] = [[0.02344157]
[0.16598022]
[0.07420442]]
What you should remember
- Momentum takes past gradients into account to smooth out the steps of gradient descent. It can be applied with batch gradient descent, mini-batch gradient descent or stochastic gradient descent.
- You have to tune a momentum hyperparameter β and a learning rate α.
## 4 - Adam
Adam is one of the most effective optimization algorithms for training neural networks. It combines ideas from RMSProp (described in lecture) and Momentum.
As usual, we will store all parameters in the parameters dictionary
Exercise: Initialize the Adam variables v,s which keep track of the past information.
Instruction: The variables v,s are python dictionaries that need to be initialized with arrays of zeros. Their keys are the same as for grads, that is:
for l=1,...,L:
v["dW" + str(l+1)] = ... #(numpy array of zeros with the same shape as parameters["W" + str(l+1)])
v["db" + str(l+1)] = ... #(numpy array of zeros with the same shape as parameters["b" + str(l+1)])
s["dW" + str(l+1)] = ... #(numpy array of zeros with the same shape as parameters["W" + str(l+1)])
s["db" + str(l+1)] = ... #(numpy array of zeros with the same shape as parameters["b" + str(l+1)])
def initialize_adam(parameters) :
"""
Initializes v and s as two python dictionaries with:
- keys: "dW1", "db1", ..., "dWL", "dbL"
- values: numpy arrays of zeros of the same shape as the corresponding gradients/parameters.
Arguments:
parameters -- python dictionary containing your parameters.
parameters["W" + str(l)] = Wl
parameters["b" + str(l)] = bl
Returns:
v -- python dictionary that will contain the exponentially weighted average of the gradient.
v["dW" + str(l)] = ...
v["db" + str(l)] = ...
s -- python dictionary that will contain the exponentially weighted average of the squared gradient.
s["dW" + str(l)] = ...
s["db" + str(l)] = ...
"""
L = len(parameters) // 2 # number of layers in the neural networks
v = {}
s = {}
# Initialize v, s. Input: "parameters". Outputs: "v, s".
for l in range(L):
### START CODE HERE ### (approx. 4 lines)
v["dW" + str(l+1)] = np.zeros(parameters["W" + str(l+1)].shape)
v["db" + str(l+1)] = np.zeros(parameters["b" + str(l+1)].shape)
s["dW" + str(l+1)] = np.zeros(parameters["W" + str(l+1)].shape)
s["db" + str(l+1)] = np.zeros(parameters["b" + str(l+1)].shape)
### END CODE HERE ###
return v, s
print("v[\"dW1\"] = " + str(v["dW1"]))
print("v[\"db1\"] = " + str(v["db1"]))
print("v[\"dW2\"] = " + str(v["dW2"]))
print("v[\"db2\"] = " + str(v["db2"]))
print("s[\"dW1\"] = " + str(s["dW1"]))
print("s[\"db1\"] = " + str(s["db1"]))
print("s[\"dW2\"] = " + str(s["dW2"]))
print("s[\"db2\"] = " + str(s["db2"]))
v["dW1"] = [[0. 0. 0.]
[0. 0. 0.]]
v["db1"] = [[0.]
[0.]]
v["dW2"] = [[0. 0. 0.]
[0. 0. 0.]
[0. 0. 0.]]
v["db2"] = [[0.]
[0.]
[0.]]
s["dW1"] = [[0. 0. 0.]
[0. 0. 0.]]
s["db1"] = [[0.]
[0.]]
s["dW2"] = [[0. 0. 0.]
[0. 0. 0.]
[0. 0. 0.]]
s["db2"] = [[0.]
[0.]
[0.]]
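For reference (reconstructed from the code below, not part of the original text), the Adam update that update_parameters_with_adam implements is, for l = 1, ..., L, where t counts the Adam steps taken:
$$v_{dW^{[l]}} = \beta_1\, v_{dW^{[l]}} + (1-\beta_1)\, dW^{[l]}, \qquad v^{corrected}_{dW^{[l]}} = \frac{v_{dW^{[l]}}}{1-\beta_1^{t}}$$
$$s_{dW^{[l]}} = \beta_2\, s_{dW^{[l]}} + (1-\beta_2)\, \big(dW^{[l]}\big)^2, \qquad s^{corrected}_{dW^{[l]}} = \frac{s_{dW^{[l]}}}{1-\beta_2^{t}}$$
$$W^{[l]} = W^{[l]} - \alpha\, \frac{v^{corrected}_{dW^{[l]}}}{\sqrt{s^{corrected}_{dW^{[l]}} + \varepsilon}}$$
The same updates are applied to the biases b. Note that this implementation adds ε inside the square root; the usual Adam formulation adds it outside the square root, which makes a negligible difference in practice.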
def update_parameters_with_adam(parameters, grads, v, s, t, learning_rate = 0.01,
beta1 = 0.9, beta2 = 0.999, epsilon = 1e-8):
"""
Arguments:
parameters -- python dictionary containing your parameters:
parameters['W' + str(l)] = Wl
parameters['b' + str(l)] = bl
v -- Adam variable, moving average of the first gradient, python dictionary
s -- Adam variable, moving average of the squared gradient, python dictionary
learning_rate -- the learning rate, scalar.
beta1 -- Exponential decay hyperparameter for the first moment estimates
beta2 -- Exponential decay hyperparameter for the second moment estimates
Returns:
parameters -- python dictionary containing your updated parameters
v -- Adam variable, moving average of the first gradient, python dictionary
s -- Adam variable, moving average of the squared gradient, python dictionary
"""
L = len(parameters) // 2 # number of layers in the neural networks
v_corrected = {} # Initializing first moment estimate, python dictionary
s_corrected = {} # Initializing second moment estimate, python dictionary
# Perform Adam update on all parameters
for l in range(L):
# Moving average of the gradients. Inputs: "v, grads, beta1". Output: "v".
### START CODE HERE ### (approx. 2 lines)
v["dW" + str(l+1)] = beta1 * v["dW" + str(l+1)] + (1 - beta1) * grads["dW" + str(l+1)]
v["db" + str(l+1)] = beta1 * v["db" + str(l+1)] + (1 - beta1) * grads["db" + str(l+1)]
### END CODE HERE ###
# Compute bias-corrected first moment estimate. Inputs: "v, beta1, t". Output: "v_corrected".
### START CODE HERE ### (approx. 2 lines)
v_corrected["dW" + str(l + 1)] = v["dW" + str(l + 1)]/(1-(beta1)**t)
v_corrected["db" + str(l + 1)] = v["db" + str(l + 1)]/(1-(beta1)**t)
### END CODE HERE ###
# Moving average of the squared gradients. Inputs: "s, grads, beta2". Output: "s".
### START CODE HERE ### (approx. 2 lines)
s["dW" + str(l+1)] = beta2 * s["dW" + str(l+1)] + (1 - beta2) * grads["dW" + str(l+1)]**2
s["db" + str(l+1)] = beta2 * s["db" + str(l+1)] + (1 - beta2) * grads["db" + str(l+1)]**2
### END CODE HERE ###
# Compute bias-corrected second raw moment estimate. Inputs: "s, beta2, t". Output: "s_corrected".
### START CODE HERE ### (approx. 2 lines)
s_corrected["dW" + str(l + 1)] =s["dW" + str(l + 1)]/(1-(beta2)**t)
s_corrected["db" + str(l + 1)] = s["db" + str(l + 1)]/(1-(beta2)**t)
### END CODE HERE ###
# Update parameters. Inputs: "parameters, learning_rate, v_corrected, s_corrected, epsilon". Output: "parameters".
### START CODE HERE ### (approx. 2 lines)
parameters["W" + str(l + 1)] = parameters["W" + str(l + 1)]-learning_rate*(v_corrected["dW" + str(l + 1)]/np.sqrt( s_corrected["dW" + str(l + 1)]+epsilon))
parameters["b" + str(l + 1)] = parameters["b" + str(l + 1)]-learning_rate*(v_corrected["db" + str(l + 1)]/np.sqrt( s_corrected["db" + str(l + 1)]+epsilon))
### END CODE HERE ###
return parameters, v, s
print("W1 = " + str(parameters["W1"]))
print("b1 = " + str(parameters["b1"]))
print("W2 = " + str(parameters["W2"]))
print("b2 = " + str(parameters["b2"]))
print("v[\"dW1\"] = " + str(v["dW1"]))
print("v[\"db1\"] = " + str(v["db1"]))
print("v[\"dW2\"] = " + str(v["dW2"]))
print("v[\"db2\"] = " + str(v["db2"]))
print("s[\"dW1\"] = " + str(s["dW1"]))
print("s[\"db1\"] = " + str(s["db1"]))
print("s[\"dW2\"] = " + str(s["dW2"]))
print("s[\"db2\"] = " + str(s["db2"]))
W1 = [[ 1.63178673 -0.61919778 -0.53561312]
[-1.08040999 0.85796626 -2.29409733]]
b1 = [[ 1.75225313]
[-0.75376553]]
W2 = [[ 0.32648046 -0.25681174 1.46954931]
[-2.05269934 -0.31497584 -0.37661299]
[ 1.14121081 -1.09245036 -0.16498684]]
b2 = [[-0.88529978]
[ 0.03477238]
[ 0.57537385]]
v["dW1"] = [[-0.11006192 0.11447237 0.09015907]
[ 0.05024943 0.09008559 -0.06837279]]
v["db1"] = [[-0.01228902]
[-0.09357694]]
v["dW2"] = [[-0.02678881 0.05303555 -0.06916608]
[-0.03967535 -0.06871727 -0.08452056]
[-0.06712461 -0.00126646 -0.11173103]]
v["db2"] = [[0.02344157]
[0.16598022]
[0.07420442]]
s["dW1"] = [[0.00121136 0.00131039 0.00081287]
[0.0002525 0.00081154 0.00046748]]
s["db1"] = [[1.51020075e-05]
[8.75664434e-04]]
s["dW2"] = [[7.17640232e-05 2.81276921e-04 4.78394595e-04]
[1.57413361e-04 4.72206320e-04 7.14372576e-04]
[4.50571368e-04 1.60392066e-07 1.24838242e-03]]
s["db2"] = [[5.49507194e-05]
[2.75494327e-03]
[5.50629536e-04]]
You now have three working optimization algorithms (mini-batch gradient descent, Momentum, Adam). Let’s implement a model with each of these optimizers and observe the difference.
## 5 - Model with different optimization algorithms
Let's use the following "moons" dataset to test the different optimization methods. (The dataset is named "moons" because the data from each of the two classes looks a bit like a crescent-shaped moon.)
We have already implemented a 3-layer neural network. You will train it with:
- Mini-batch Gradient Descent: it will call your function:
update_parameters_with_gd()
- Mini-batch Momentum: it will call your functions:
initialize_velocity() and update_parameters_with_momentum()
- Mini-batch Adam: it will call your functions:
initialize_adam() and update_parameters_with_adam()
You will now run this 3 layer neural network with each of the 3 optimization methods.
### 5.1 - Mini-batch Gradient descent
Run the following code to see how the model does with mini-batch gradient descent.
def model(X, Y, layers_dims, optimizer, learning_rate = 0.0007, mini_batch_size = 64, beta = 0.9,
beta1 = 0.9, beta2 = 0.999, epsilon = 1e-8, num_epochs = 10000, print_cost = True):
"""
3-layer neural network model which can be run in different optimizer modes.
Arguments:
X -- input data, of shape (2, number of examples)
Y -- true "label" vector (1 for blue dot / 0 for red dot), of shape (1, number of examples)
layers_dims -- python list, containing the size of each layer
learning_rate -- the learning rate, scalar.
mini_batch_size -- the size of a mini batch
beta -- Momentum hyperparameter
beta1 -- Exponential decay hyperparameter for the past gradients estimates
beta2 -- Exponential decay hyperparameter for the past squared gradients estimates
num_epochs -- number of epochs
print_cost -- True to print the cost every 1000 epochs
Returns:
parameters -- python dictionary containing your updated parameters
"""
L = len(layers_dims) # number of layers in the neural networks
costs = [] # to keep track of the cost
t = 0 # initializing the counter required for Adam update
seed = 10 # For grading purposes, so that your "random" minibatches are the same as ours
# Initialize parameters
parameters = initialize_parameters(layers_dims)
# Initialize the optimizer
if optimizer == "gd":
pass # no initialization required for gradient descent
elif optimizer == "momentum":
v = initialize_velocity(parameters)
# Optimization loop
for i in range(num_epochs):
# Define the random minibatches. We increment the seed to reshuffle differently the dataset after each epoch
seed = seed + 1
minibatches = random_mini_batches(X, Y, mini_batch_size, seed)
for minibatch in minibatches:
# Select a minibatch
(minibatch_X, minibatch_Y) = minibatch
# Forward propagation
a3, caches = forward_propagation(minibatch_X, parameters)
# Compute cost
cost = compute_cost(a3, minibatch_Y)
# Backward propagation
grads = backward_propagation(minibatch_X, minibatch_Y, caches)
# Update parameters
if optimizer == "gd":
parameters = update_parameters_with_gd(parameters, grads, learning_rate)
elif optimizer == "momentum":
parameters, v = update_parameters_with_momentum(parameters, grads, v, beta, learning_rate)
elif optimizer == "adam":
t = t + 1 # Adam counter
parameters, v, s = update_parameters_with_adam(parameters, grads, v, s, t, learning_rate, beta1, beta2, epsilon)
# Print the cost every 1000 epoch
if print_cost and i % 1000 == 0:
print ("Cost after epoch %i: %f" %(i, cost))
if print_cost and i % 100 == 0:
costs.append(cost)
# plot the cost
plt.plot(costs)
plt.ylabel('cost')
plt.xlabel('epochs (per 100)')
plt.title("Learning rate = " + str(learning_rate))
plt.show()
return parameters
# train 3-layer model
layers_dims = [train_X.shape[0], 5, 2, 1]
parameters = model(train_X, train_Y, layers_dims, optimizer = "gd")
# Predict
predictions = predict(train_X, train_Y, parameters)
# Plot decision boundary
axes = plt.gca()
axes.set_xlim([-1.5,2.5])
axes.set_ylim([-1,1.5])
plot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y)
Cost after epoch 0: 0.690736
Cost after epoch 1000: 0.685273
Cost after epoch 2000: 0.647072
Cost after epoch 3000: 0.619525
Cost after epoch 4000: 0.576584
Cost after epoch 5000: 0.607243
Cost after epoch 6000: 0.529403
Cost after epoch 7000: 0.460768
Cost after epoch 8000: 0.465586
Cost after epoch 9000: 0.464518
Accuracy: 0.7966666666666666
### 5.2 - Mini-batch gradient descent with momentum
Run the following code to see how the model does with momentum. Because this example is relatively simple, the gains from using momentum are small; but for more complex problems you might see bigger gains.
# train 3-layer model
layers_dims = [train_X.shape[0], 5, 2, 1]
parameters = model(train_X, train_Y, layers_dims, beta = 0.9, optimizer = "momentum")
# Predict
predictions = predict(train_X, train_Y, parameters)
# Plot decision boundary
plt.title("Model with Momentum optimization")
axes = plt.gca()
axes.set_xlim([-1.5,2.5])
axes.set_ylim([-1,1.5])
plot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y)
Cost after epoch 0: 0.690741
Cost after epoch 1000: 0.685341
Cost after epoch 2000: 0.647145
Cost after epoch 3000: 0.619594
Cost after epoch 4000: 0.576665
Cost after epoch 5000: 0.607324
Cost after epoch 6000: 0.529476
Cost after epoch 7000: 0.460936
Cost after epoch 8000: 0.465780
Cost after epoch 9000: 0.464740
Accuracy: 0.7966666666666666
### 5.3 - Mini-batch with Adam mode
Run the following code to see how the model does with Adam.
# train 3-layer model
layers_dims = [train_X.shape[0], 5, 2, 1]
parameters = model(train_X, train_Y, layers_dims, optimizer = "adam")
# Predict
predictions = predict(train_X, train_Y, parameters)
# Plot decision boundary
axes = plt.gca()
axes.set_xlim([-1.5,2.5])
axes.set_ylim([-1,1.5])
plot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y)
Cost after epoch 0: 0.690552
Cost after epoch 1000: 0.185501
Cost after epoch 2000: 0.150830
Cost after epoch 3000: 0.074454
Cost after epoch 4000: 0.125959
Cost after epoch 5000: 0.104344
Cost after epoch 6000: 0.100676
Cost after epoch 7000: 0.031652
Cost after epoch 8000: 0.111973
Cost after epoch 9000: 0.197940
Accuracy: 0.94
### 5.4 - Summary
Momentum usually helps, but given the small learning rate and the simplistic dataset, its impact is almost negligible. Also, the huge oscillations you see in the cost come from the fact that some mini-batches are more difficult than others for the optimization algorithm.
Adam, on the other hand, clearly outperforms mini-batch gradient descent and Momentum. If you run the model for more epochs on this simple dataset, all three methods will lead to very good results. However, you've seen that Adam converges a lot faster.
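The three runs above can be summarized as follows (accuracies as printed by predict):
- Mini-batch gradient descent: accuracy ~0.797
- Mini-batch gradient descent with momentum: accuracy ~0.797
- Mini-batch gradient descent with Adam: accuracy 0.94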
Some advantages of Adam include:
- Relatively low memory requirements (though higher than gradient descent and gradient descent with momentum)
- Usually works well even with little tuning of hyperparameters (except α)
2.余额无法直接购买下载,可以购买VIP、C币套餐、付费专栏及课程。 | 2022-05-25 22:59:59 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3929292559623718, "perplexity": 14756.752288315743}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662594414.79/warc/CC-MAIN-20220525213545-20220526003545-00738.warc.gz"} |
https://physics.stackexchange.com/questions/282106/how-to-make-a-green-led-as-visually-bright-as-a-0-magnitude-star | # How to make a green LED as visually bright as a 0 magnitude star?
I'm trying to estimate the distance and power I'd need for a green LED to appear visually roughy as bright as a relatively bright star - say a visual magnitude of zero. Here is what I have so far.
Be warned I am just ballparking it here.
The sun is visual magnitude -27, and five visual astronomical magnitudes are a factor of 100, so a zero magnitude star should appear to be a factor of $100^{-27/5} \approx 1.6 \times10^{-11}$ as bright as the sun.
The FWHM of the sensitivity of human vision is about 100nm and peaks roughly in the green part of the spectrum, however the center changes between about 550nm and 500nm depending on photopic or scotopic conditions.
At sea level, direct sunlight is about $1.3 \ W/m^2/nm$, so for a 100nm wide bandpass that's $130 \ W/m^2$. A zero visual magnitude object should then produce $2.1 \times10^{-9} \ W/m^2$.
If I have a say 555nm green LED with 30% external quantum efficiency, then $0.1 \ A$ of current should produce $0.22 \ W \times 0.3 \approx 0.067 \ W$ of light. If it is roughly uniform over a cone with a half-width of 10°, then the LED produces $0.7 \ W/Sr$, or $0.7/r^2 \ W/m^2$ at a distance of $r$ meters.
That means I would have to move my 100 mA, 30% eQE 555nm LED with a 10° half-angle 18 kilometers away for it to look roughly as bright as a 0 visual magnitude star!
Have I made some fundamental mistake here? Or - barring atmospheric absorption - could I actually see a green LED ~20 km away (or on a balloon 20 km straight up) on a dark night?
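For reference, the back-of-the-envelope numbers above can be reproduced with a short script. All values are the estimates from the question (the 2.2 V forward voltage is implied by the quoted 0.22 W of electrical power), so treat this as a sanity check rather than an authoritative calculation:
import math
# Irradiance from a magnitude-0 star over a ~100 nm visual band
sun_band_irradiance = 130.0                                # W/m^2, direct sunlight over ~100 nm
mag0_irradiance = sun_band_irradiance * 100 ** (-27 / 5)   # ~2.1e-9 W/m^2
# Optical output of the LED
led_power = 0.1 * 2.2 * 0.3                                # 100 mA, ~2.2 V, 30% eQE -> ~0.066 W
half_angle = math.radians(10)                              # 10 degree half-width cone
solid_angle = 2 * math.pi * (1 - math.cos(half_angle))     # ~0.095 sr
radiant_intensity = led_power / solid_angle                # ~0.7 W/sr
# Distance at which the LED irradiance matches the magnitude-0 star
r = math.sqrt(radiant_intensity / mag0_irradiance)
print(f"required distance: {r / 1000:.1f} km")             # ~18 km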
• I really like this question. I did a back of the envelope calculation via different methods and come to the same conclusion. – Myridium Sep 24 '16 at 16:33
• @Myridium That's good to know, thanks for the independent checking! – uhoh Sep 24 '16 at 16:49
This is not a full answer, as I don't know how to compare an LED with a candle, and I am not sure that the "answer" makes sense, even in rough estimate terms. Please just treat it as background and I will then delete it.
Candle Study Abstract
Using CCD observations of a candle flame situated at a distance of 338 m and calibrated with observations of Vega, we show that a candle flame situated at ~2.6 km (1.6 miles) is comparable in brightness to a 6th magnitude star with the spectral energy distribution of Vega. The human eye cannot detect a candle flame at 10 miles or further, as some statements on the web suggest.
Wikipedia says, as you already know:
A magnitude 1 star is exactly a hundred times brighter than a magnitude 6 star, as the difference of five magnitude steps corresponds to $(2.512)^5$ or 100.
But this does not make much sense either, especially when you go up another magnitude scale to mag 0.
Comparing units of light power brings us back to the dark ages (sorry).
One candlepower is the radiating power of a light with the intensity of one candle. This unit is considered obsolete as it was replaced by the candela in 1948, though it is still in common use. 1 candlepower is equal to about 0.981 candela.
candela The standard unit for measuring the intensity of light. The candela is defined to be the luminous intensity of a light source producing single-frequency light at a frequency of 540 terahertz (THz) with a power of 1/683 watt per steradian, or 18.3988 milliwatts over a complete sphere centered at the light source.
• This is helpful! The candle is radiating in a wide range of directions - say $2 \pi \ SR$, while the LED is focused to only $0.1 \ SR$. The candle is almost all heat and IR, the LED is wavelength optimized for maximum sensitivity. So I may in fact not be wrong! The paper is interesting reading as well. I like it when people can still ask, and then answer really basic questions. – uhoh Sep 24 '16 at 15:14
• I remember 100 W bulbs! Actually this is what I was really looking for - a sanity check, or a "could this actually be right" check rather than the math. Thanks for your help! – uhoh Sep 24 '16 at 15:32 | 2021-01-21 05:33:31 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6405795812606812, "perplexity": 786.6908853298402}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703522242.73/warc/CC-MAIN-20210121035242-20210121065242-00001.warc.gz"} |
http://physics.stackexchange.com/tags/research-level/new | # Tag Info
## New answers tagged research-level
0
From a physicists point of view, I would start with the following notes, which are Chapter 9 in John Preskill's Quantum Computing lecture notes: http://www.theory.caltech.edu/~preskill/ph219/topological.pdf, as well as the references within. I would also mention Kitaev's paper https://arxiv.org/abs/cond-mat/0506438 as a specially influential reference. ...
0
I found this paper in Numdam's (a mathematical journal compilation) archive, which encompasses all that you talked about and I found it clear, with some references to Witten as well. This paper goes much further in detail, but I did not read all of it. And this might help if you aren't bothered by learning by forum posts?
1
$$\begin{split} \frac{d(g_b\mu^{\epsilon})}{d\log\mu^2}&=\frac{\mu}{2}\frac{d(g_b\mu^{\epsilon})}{d\mu}\\ &=\frac{\mu}{2}\left[\mu^{\epsilon}\frac{dg_b}{d\mu}+g_b\frac{d\mu^{\epsilon}}{d\mu}\right] \end{split}$$ By definition, the bare coupling does not depend on the renormalization scale $\mu$. Hence \...
0
(Answering rather than commenting for lack of rep). My focus is in programming however perhaps the following would help (From what I can understand due to different terminology): Instead of reducing the components, how about factoring them into higher dimensions, I imagine the irregularities would become more apparent given the supposed symmetrical outcome, ...
0
this question is 2 years old, but I thought it's never too late. I'm not sure about the definite answer, but here are my thoughts. Take the SO(6) algebra viewpoint. The $\mathbf{6}$ is the fundamental (vector) representation, and the $\mathbf{4}$ is the spinor representation. So we are looking for symbols $\Sigma_{AB}^I$ that combine two spinors into a ...
Top 50 recent answers are included | 2016-06-28 06:01:44 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 1, "x-ck12": 0, "texerror": 0, "math_score": 0.6696422696113586, "perplexity": 671.2104566485483}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783396459.32/warc/CC-MAIN-20160624154956-00156-ip-10-164-35-72.ec2.internal.warc.gz"} |
https://byjus.com/question-answer/a-manufacturer-sells-an-article-to-a-wholesaler-at-a-profit-of-18-the-wholesaler/ | Question
# A manufacturer sells an article to a wholesaler at a profit of $$18\%$$. The wholesaler sells it to a retailer at a profit of $$20\%$$. The retailer sells it to a customer at a profit of $$25\%$$. If the customer pays $$Rs.\ 30.09$$ for it, find the cost price for the manufacturer.
Solution
Manufacturer (18%) → Wholesaler (20%) → Retailer (25%) → Customer.
Let the cost price for the manufacturer be $$Rs.\ x$$.
Manufacturer's profit $$= Rs.\ \dfrac{18x}{100}$$, so the manufacturer's selling price $$= Rs.\ \dfrac{118x}{100}$$ = cost price for the wholesaler.
Wholesaler's profit $$= Rs.\ \left(\dfrac{20}{100} \times \dfrac{118x}{100}\right) = Rs.\ \dfrac{236x}{1000}$$, so the wholesaler's selling price $$= Rs.\ \left(\dfrac{118x}{100} + \dfrac{236x}{1000}\right) = Rs.\ \dfrac{1416x}{1000}$$ = cost price for the retailer.
Retailer's profit $$= Rs.\ \left(\dfrac{25}{100} \times \dfrac{1416x}{1000}\right) = Rs.\ \dfrac{354x}{1000}$$, so the retailer's selling price $$= Rs.\ \left(\dfrac{1416x}{1000} + \dfrac{354x}{1000}\right) = Rs.\ \dfrac{1770x}{1000} = Rs.\ \dfrac{177x}{100}$$ = price paid by the customer.
Therefore $$\dfrac{177x}{100} = \dfrac{3009}{100}$$, which gives $$\boxed{x = 17}$$.
Ans: The cost price for the manufacturer is $$Rs.\ 17$$.
Mathematics
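A quick numerical check of the chain of mark-ups (a few hypothetical lines, not part of the original solution):
# customer price = x * 1.18 * 1.20 * 1.25, so solve for x
customer_price = 30.09
x = customer_price / (1.18 * 1.20 * 1.25)
print(round(x, 2))  # 17.0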
View More | 2022-01-17 21:30:57 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9786084294319153, "perplexity": 13980.562642647279}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320300624.10/warc/CC-MAIN-20220117212242-20220118002242-00689.warc.gz"} |
https://mooseframework.inl.gov/old/wiki/CodeStandards/ | We use clang-format (with a customized config file) for code formatting. If you have clang installed, you can run git clang-format [<branch>] to automatically format code changed between your current checked out branch and <branch> (if ommitted, defaults to HEAD). Our continuous integration pre-check will also print out a diff of changes you need to make to pull requests in order to conform with our coding style. Read our blog post for suggestions about automatically checking and formatting code with git and/or your text editor.
General style guidelines include:
• Single spacing around all binary operators
• No spacing around unary operators
• No spacing on the inside of brackets or parenthesis in expressions
• Avoid braces for single statement control statements (i.e for, if, while, etc.)
• C++ constructor spacing is demonstrated in the bottom of the example below
# File Layout
• Header files should have a ".h" extension
• Header files always go either into "include" or a sub-directory of "include"
• C++ source files should have a ".C" extension
• Source files go into "src" or a subdirectory of "src".
# Files
Header and source file names must match the name of the class that the files define. Hence, each set of .h and .C files should contain code for a single class. src/ClassName.C include/ClassName.h
# Naming
• ClassName Class names utilize camel case, note the .h and .C filenames must match the class name.
• methodName() Method and function names utilize camel case with the leading letter lower case.
• _member_variable Member variables begin with underscore and are all lower case and use underscore between words.
• local_variable Local variables are lowercase, begin with a letter, and use underscore between words
# Example Code
Below is a sample that covers many (not all) of our code style conventions.
namespace moose // lower case namespace names
{
// don't add indentation level for namespaces
int // return type should go on separate line
junkFunction()
{
// indent two spaces!
if (name == "moose") // space after the control keyword "if"
{
// Spaces on both sides of '&' and '*'
SomeClass & a_ref;
SomeClass * a_pointer;
}
// Omit curly braces for single statements following an if
// The statement must be on its own line
// Note: DO NOT omit curly braces for multiline blocks underneath an if statement
if (name == "squirrel")
doStuff();
else
doOtherStuff();
// No curly braces for single statement branches and loops
// Note: DO NOT omit curly braces for multiline blocks underneath a for statement
for (unsigned int i = 0; i < some_number; ++i) // space after control keyword "for"
doSomething();
// space around assignment operator
Real foo = 5.0;
switch (stuff) // space after the control keyword "switch"
{
// Indent case statements
case 2:
junk = 4;
break;
case 3:
{ // Only use curly braces if you have a declaration in your case statement
int bar = 9;
junk = bar;
break;
}
default:
junk = 8;
}
while (--foo) // space after the control keyword "while"
std::cout << "Count down " << foo;
}
// (short) function definitions on a single line
SomeClass::SomeFunc() {}
// Constructor initialization lists can all be on the same line.
SomeClass::SomeClass() : member_a(2), member_b(3) { }
// Four-space indent and one item per line for long (i.e. won't fit on one line) initialization list.
SomeOtherClass::SomeOtherClass()
: member_a(2),
member_b(3),
member_c(4),
member_d(5),
member_e(6),
member_f(7),
member_g(8),
member_h(9)
{ // braces on separate lines since func def is already longer than 1 line
}
} // namespace moose
# Using auto
Use auto for most new code unless it complicates readability. Make sure your variables have good names when using auto!
auto dof = elem->dof_number(0, 0, 0);
auto & obj = getSomeObject();
auto elem_it = mesh.active_local_elements_begin();
auto item_pair = map.find(some_item);
// Cannot use reference here
for (auto it = obj.begin(); it != obj.end(); ++it)
doSomething();
// Use reference here
for (auto & obj : container)
doSomething();
Do not use auto in any kind of function or method declaration
# Lambdas
// List captured variables (by value or reference) in the capture list explicitly where possible.
std::for_each(container.begin(), container.end(), [local_var, local_var2](Foo & foo) {
foo.item = local_var;
foo.item2 = local_var2;
});
# Other C++11 Notes
• Use the override keyword on overridden virtual methods
• Use std::make_shared<T>() when allocating new memory for shared pointers
• Use libmesh_make_unique<T>() when allocating new memory for unique pointers
• Make use of std::move() for efficiency when possible
# Variable Initialization
When creating a new variable use these patterns:
unsigned int i = 4; // Built-in types
SomeObject junk(17); // Objects
SomeObject * stuff = new SomeObject(18); // Pointers
# Trailing Whitespace and Tabs
MOOSE currently does not allow any trailing whitespace or tabs in the repository. If you are using our standard Emacs file this shouldn't be a problem. However, if you still end up with trailing whitespace that needs to be removed before a check-in. Try running the following one-liner from the appropriate directory:
find . -name '*.[Chi]' -or -name '*.py' | xargs perl -pli -e 's/\s+$//'
# Includes
Firstly, only include things that are absolutely necessary in header files. Please use forward declarations when you can:
// Forward declarations
class Something;
All non-system includes should use quotes. There is a space between include and the filename.
#include "LocalInclude.h"
#include "libmesh/libmesh_include.h"
#include <system_library>
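As a sketch of the pattern (file and class names here are hypothetical): the header only needs a forward declaration when it stores a reference or pointer, while the source file includes the full definition.
// include/MyHolder.h
#pragma once
// Forward declarations
class ExpensiveToInclude;
class MyHolder
{
public:
  MyHolder(ExpensiveToInclude & thing);
protected:
  ExpensiveToInclude & _thing; // a reference only needs the forward declaration
};
// src/MyHolder.C
#include "MyHolder.h"
#include "ExpensiveToInclude.h" // the full definition is only included where it is needed
MyHolder::MyHolder(ExpensiveToInclude & thing) : _thing(thing) {}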
# Documentation
• Try to document as much as possible.
• We suggest using the Doxymacs plugin to help with documenting.
• Use C-c d f before a function/class to auto generate a comment template
/**
* The Kernel class is responsible for calculating the residuals for various
* physics.
*
*/
class Kernel
{
public:
/**
* This constructor should be used most often. It initializes all internal
* references needed for residual computation.
*
* @param system The system this variable is in
* @param var_name The variable this Kernel is going to compute a residual for.
*/
Kernel(System * system, std::string var_name);
/**
* This function is used to get stuff based on junk.
*
* @param junk The index of the stuff you want to get
* @return The stuff associated with the junk you passed in
*/
int returnStuff(int junk);
protected:
/// This is the stuff this class holds on to.
std::vector<int> stuff;
};
# Python
Where possible, follow the above rules for Python. The only modifications are:
1. Four spaces are used for indenting and
2. Member variables should be named as follows:
class MyClass:
def __init__(self):
self.public_member
self._protected_member
self.__private_member
# Code Commandments
• Use references instead of pointers whenever possible
• i.e., this object lives for a shorter period of time than the object it needs to refer to does
• Methods should return pointers to objects if returned objects are stored as pointers and references if returned objects are stored as references
• When creating a new class:
• Include dependent header files in the *.C file whenever possible (i.e. the header uses only references or pointers in it's various declarations) and use forward declarations in the header file as needed
• One exception is when doing so would require end users to include multiple files to complete definitions of child objects (Errors typically show up as "incomplete class" or something similar during compilation)
• Avoid using a global variable when possible.
• Every destructor must be virtual.
• All function definitions should be in *.C files.
• The only exceptions are for inline functions for speed and templates.
• Thou shall not commit accidental insertion in a std::map by using brackets in a RHS operator unless he/she can prove it can't fail. | 2018-12-13 06:22:05 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3396860361099243, "perplexity": 10446.59121185486}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376824525.29/warc/CC-MAIN-20181213054204-20181213075704-00630.warc.gz"} |
https://www.rocketryforum.com/threads/questions-about-competition-steamers-duration.79447/ | # Questions about competition steamers (duration)
#### astronboy
##### Well-Known Member
After flying commercial kits in local club competition, I have finally built my first 'purpose built' competition model. It is for 1/2A streamer duration. I am all set, but now find that I am toally lacking in experience with streamers.
How big a streamer can I reasonably fit into a BT-5?
Where is a goos source for streamer material in the large sizes i see listed for competition (like 4 x 40")?
Thanks!!
#### illini
##### Well-Known Member
ASP sells 4x40 tracing paper streamers...or just get your own tracing paper from a craft store (if you can find it in sufficient lengths). Be sure to accordion fold about the last 2/3 of the streamer. Be careful with the folds...you don't want the streamer rubbing tight against the inner wall of the tube.
#### powderburner
##### Well-Known Member
Originally posted by astronboy
Where is a goos source for streamer material in the large sizes i see listed for competition (like 4 x 40")?
Try Totally Tubular, I think they sell 4 inch stock.
Otherwise, find a 'space blanket' at your local sporting goods store and cut whatever size you need.
#### Micromeister
##### Micro Craftman/ClusterNut
TRF Supporter
Fred:
It really depends on the material you're working with. 2/3rd accordion-folded, a 4x40" of anything will fit easily in a BT-5; a 6x60" in 1 mil mylar with the same fold pattern will fit nicely. I've seen a 1/2 mil, 1/2" accordion-folded Micra-Film 8" x 80" flown in a BT-5 which thermalled away...many-minute flight. Micra-Film is no longer available, nor is the great 1/2 mil robin's egg blue mylar that made great 4 and 6" streamers. I have a 36" x 50 yd roll of 1 mil material bought long ago back when Ed owned Apogee, that I'm still using, so I have no idea where to get streamer stuff anymore..
#### shockwaveriderz
##### Well-Known Member
astronboy:
You will more than likely want to use either 1/2 mil or 1 mil aluminized mylar....these seem to be the materials everybody agrees on....remember that 1 mil will hold pleats better than 1/2 mil... 1/2 mil obviously will be lighter and more flexible to fit inside a BT-5...
Totally Tubular sells mylar in 1/4,1/2 and 1.5 mil thicknesses
https://www.wooshrocketry.org/misc/tt.htm
Asp Rocketry sells mylar in 1/4 and 3/4 mil thicknesses and also sells the tracing paper streamers
https://www.asp-rocketry.com/recoverydevices.html
Or you can purchase rolls of the tracing paper here:
https://www.misterart.com/store/vie...nfang-No-107-Canary-Sketching-Paper-Rolls.htm
I just recently purchased some rolls of 1/2 and 1 mil from here:
https://www.nielsensenterprises.com/snomo/mylar.htm
They are in the Northwest so it will take a full 7 days via UPS to get east.....
#### astronboy
##### Well-Known Member
Thanks everyone, this is exactly what I was looking for!!!
Fred
#### wyldbill
##### Well-Known Member
FlisKits also sells a 4x40 and 6x60 drafting paper streamer kits. They've got great instructions.
-bill
#### jflis
##### Well-Known Member
FlisKits' STD-440 easily fits in a BT-5 ($2.95 - under Recovery Devices). Includes all the materials you'll need plus folding instructions. You may also want to check out our Pop Lug (PL001) for $2.75.
jim
#### wyldbill
##### Well-Known Member
Originally posted by shockwaveriderz
I just recently purchased some rolls of 1/2 and 1 mil from here:
https://www.nielsensenterprises.com/snomo/mylar.htm
They are in the Northwest so it will take a full 7 days via UPS to get east.....
Did you purchase their "mirror sheeting" or Mylar? They don't have the mylar prices posted and wondered how their "mirror sheeting" compares...
thanks,
-bill
#### powderburner
##### Well-Known Member
After you get your streamer ready, don't forget to do the matching work on the carrier vehicle.
I like to add a ring of reinforcing CA soaked into the front lip of the BT, and a **thin** layer down the insides of the front. After it cures, sand with fine-grit paper until it is smooth. Make sure the front lip of the BT does not have ANY burrs, or peels, or splits, or any other protruding bits facing the insides (you don't want anything in the way of your deployment). Do not use any form of internal tether/shock cord mount that might present even the slightest interference with getting out that streamer. I used to use a paper cup (in addition to ejection wadding) to help the packed streamer slip out of the BT.
Don't forget that the tether/shock cord needs to be attached to the outside of your rocket such that the BT (with dead motor) hangs sideways.
#### shockwaveriderz
##### Well-Known Member
The mirror sheeting and mylar are one and the same, Bill...as far as I can tell....
I have 1/4 mil from ASP and 1.5 mil from TT and this looks and feels and seems to behave the same as the others......
I emailed Neilsen and they told me that all their mirror sheeting is a polyester plastic film and that some of their polyester plastic film is DuPont brand Mylar, which is a polyester plastic film.....
#### UhClem
##### Well-Known Member
I flew a 6"X60" streamer made from Clearphane last year. Good enough for the C division A SD record: 323 seconds.
I get the Clearphane at my local Hobby Lobby in 30 in. by 25 ft. rolls. | 2021-06-12 15:00:03 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.1998014897108078, "perplexity": 7986.16801214518}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487584018.1/warc/CC-MAIN-20210612132637-20210612162637-00046.warc.gz"} |
https://aferro.dynu.net/math/wiener_deconvolution/ | # Motivation
### Usefulness of Wiener deconvolution
Wiener filters are a class of adaptive filters for signal processing. In contrast to fixed filters (like e.g. a differentiator, which acts like a high-pass filter whose response is known a priori), the response of adaptive filters depends on the specific input and assumptions.
Proposed by MIT Professor Norbert Wiener in the 1940s, they can be used in an extremely broad variety of applications, like prediction, denoising and deconvolution. Here we’ll study the Wiener deconvolution, illustrated in the image below:
A distorted image of a Christmas market in Vienna, its Wiener recovery, the undistorted image, and the residual between Wiener and undistorted (source and license for the original image: Wikipedia).
You may think that in the Deep Learning era$^{\text{TM}}$ linear filters from the 40s are heavily outdated and irrelevant. The truth is that a good understanding of Wiener deconvolution is very useful for understanding state-of-the-art preprocessing techniques for some tasks. E.g. features based on Generalized Cross Correlation, like GCC-PHAT and SALSA, are behind the (currently) best-performing deep models for audio source localization (both features share the same signal model and linear properties as the Wiener deconvolution). Understanding these features allows us to understand how neural networks can leverage them, and this can make modeling, training and finetuning substantially easier, if not better.
### Motivation for this post
Wiener filters are optimal in the sense that they are the solution to a minimization problem: they seek to minimize the $\ell_2$ error between original signal and filter outcome. A popular variant of the Wiener deconvolution filter is a relaxation that assumes all noise present to be uncorrelated with the signal to be recovered. Most sources derive the result making use of expected values and complex calculus (see e.g. Wikipedia), as follows:
1. Expand the $\ell_2$ objective into a quadratic form
2. Simplify the expression using statistical assumptions
3. Extract the optimal filter by setting the Wirtinger derivatives of the remaining expression to zero.
Interestingly, the original text by Wiener himself, circulated in 1942 and later published as a book called Extrapolation, Interpolation, and Smoothing of Stationary Time Series (EISSTS), follows a more algebraic approach (See chapter 3, Formulation of the General Filter Problem and onwards):
1. First, derives an optimal and unconstrained expression for the optimization objective, based on an algebraic identity (3.175)
2. Then, characterizes the error of performance of a filter (3.4)
3. Only then explores relaxations and simplifying statistical assumptions like zero-correlation in noisy scenarios (3.52)
Contributions of this post
While inspiring, the EISSTS derivation can be a hard bone to chew. In this post we will revisit it with more compact/modern notation, leading to several advantages and insights:
1. Presenting a much shorter and simpler derivation of the objective using linear algebra only (no statistics or complex calculus involved)
2. Yielding an exact (i.e. zero-error) solution $W_o$ with all terms laid out explicitly in vectorized form
3. Expressing the popular zero-correlation filter $W_\perp$ in terms of $W_o$ plus an error term $\varepsilon_\perp$, which allows us to quantify and characterize the error of $W_\perp$.
We finalize with some experiments to illustrate the discussed points.
# Definitions
Our derivation relies on the following building blocks:
Notation and requirements
Complex arithmetic:
$$\begin{eqnarray*} u = &&a + ib \quad \in \mathbb{C}\\ v = &&c + id \quad \in \mathbb{C}\\ \overline{u} = &&a - ib \\ \mathfrak{Re}(u) = &&a \quad \quad \in \mathbb{R}\\ \mathfrak{Im}(u) = &&b \quad \quad \in \mathbb{R}\\ \vert u \vert^2 = &&u\overline{u} = a^2 + b^2\\ u\overline{v} = &&\overline{v}u = \overline{v\overline{u}} = (ac + bd) + i(bc - ad)\\ \mathfrak{Re}(uv) = &&\mathfrak{Re}(u)\mathfrak{Re}(v) - \mathfrak{Im}(u)\mathfrak{Im}(v)\\ u\overline{v} + \overline{u}v = &&2 \mathfrak{Re}(u\overline{v}) = 2\mathfrak{Re}(v\overline{u})\\ \end{eqnarray*}$$
Discrete Fourier Transform:
We treat discrete signals as vectors, where $t$ represents discrete domain units (e.g. seconds or meters), and $f$ discrete frequency units. The symbol $\odot$ represents element-wise multiplication. $$\begin{eqnarray*} x(t) \in \mathbb{R}^D \quad \iff &&X(f) \in \mathbb{C}_{Hermitian}^D\\ \mathcal{F}\{x(t)\} = &&X(f)\\ \mathcal{F}^{-1}\{X(f)\} = &&x(t)\\ X(f)Y(f) := &&X(f)\odot Y(f)\\ \end{eqnarray*}$$
Notation: $*$ corresponds to convolution, and $\star$ to (cross-)correlation.
$$\begin{eqnarray*} \mathcal{F}\{x(t) * y(t)\} = &&\mathcal{F}\{x(t)\} \odot \mathcal{F}\{y(t)\} = X(f)Y(f)\\[5pt] \mathcal{F}\{x(t) \star y(t)\} = &&\mathcal{F}\{x(t)\} \odot \overline{\mathcal{F}\{y(t)\}} = X(f)\overline{Y(f)} \end{eqnarray*}$$
Energy and Parseval’s Theorem:
Given any vector in $\mathbb{R}^D$ or $\mathbb{C}^D$, the energy operator $E$ returns a non-negative, real-valued scalar. $$\begin{eqnarray*} E[x^2] := &&\langle x, x \rangle = \sum_{k=1}^D x(k)^2\\ E[\vert X \vert^2] := &&\langle X, X \rangle = \sum_{k=1}^D X(k)\overline{X(k)}\\ E[x^2] = && \frac{1}{D} E[\vert X \vert^2]\\ \end{eqnarray*}$$
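As a quick numerical check of the energy convention above, here is a minimal sketch (my own, not from the post) using numpy's unnormalized FFT, which matches the $E[x^2] = \frac{1}{D} E[\vert X \vert^2]$ identity:

```python
import numpy as np

D = 256
x = np.random.randn(D)          # random real-valued signal of length D
X = np.fft.fft(x)               # unnormalized DFT

time_energy = np.sum(x**2)               # E[x^2] = <x, x>
freq_energy = np.sum(np.abs(X)**2) / D   # (1/D) E[|X|^2]

print(np.allclose(time_energy, freq_energy))   # True
```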
# Signal Model, Assumptions and Objective
While simple, the signal model of convolving and adding noise covers a very broad spectrum of scenarios.
Signal model and assumptions
In our signal model, we have a signal $s(t)$ that has been distorted via convolution with a response $r(t)$, and then subject to additive noise $n(t)$ (which can be any form of disturbance). We assume that all signals are real-valued, discrete-time series and that $s, n$ have zero mean. The result is the known observation $o(t)$, as follows:
$$\begin{eqnarray*} o(t) = &&\big[ s(t) * r(t) \big] + n(t), \quad o, s, r, n : && \mathbb{Z} \rightarrow \mathbb{R}\\[5pt] O(f) = &&\big[ S(f)R(f) \big] + N(f)\\ \sum_{i} s(i) = &&\sum_{i} n(i) = 0 \end{eqnarray*}$$
Furthermore, we assume to know the power spectral density (PSD) of $s$ and $n$, i.e. their energy distribution as a function of frequency. Since we have finite samples, this requires us to assume wide-sense stationarity in $s$ and $n$:
$$\begin{eqnarray*} \vert S(f) \vert^2, \vert N(f) \vert^2 \quad \text{are known} \end{eqnarray*}$$
Our last assumption is that we require the correlation between $s$ and $n$ to be provided. As already mentioned, the typical assumption in Wiener deconvolution is that $n$ is some form of stochastic (and ergodic) noise process that is uncorrelated to both $s$ and $s * r$, i.e.: $$\langle s, n \rangle = \langle s*r, n \rangle = 0 \iff \langle S, N \rangle = \langle SR, N \rangle = 0$$
We will analyze this relaxation in detail.
Objective
The idea of Wiener filters is to look for a linear operator $w$ that will optimally approximate the signal $s$ when applied to the observation $o$. In the case of Wiener deconvolution, this operator is a convolution, and the relation is best expressed on the Fourier domain: $$\begin{eqnarray*} \hat{s} = &&w * o \quad \iff \quad \hat{S} = WO\\ \end{eqnarray*}$$
This optimal approximation is expressed as minimizing the $\ell_2$ distance between $s$ and our reconstruction $\hat{s}$: $$\begin{eqnarray*} \underset{w}{\text{argmin}} &&\lVert s - \hat{s} \rVert_2^2 \quad \iff \quad \underset{W}{\text{argmin}} &&E \Big[ \vert S - \hat{S} \vert^2 \Big]\\\ \end{eqnarray*}$$
For this reason this process is often called optimal restoration and $w$ is often called an optimal linear filter. Here we simply note that there are other forms of optimality that this $\ell_2$ objective does not achieve (e.g. $\ell_2$ is known to be generally bad for sparse recovery, and $\ell_1$ can be used instead, leading to a different class of filters).
# Derivation and Discussion
Derivation of Wiener deconvolution. We omit the domain variables, i.e. we use $s, S$ instead of $s(t), S(f)$ for notational simplicity.
As already stated, the spectral version of the objective to be minimized is the following:
$$\begin{eqnarray*} &&E\Big[ \vert S - \hat{S} \vert^2 \Big] = E\Big[ \vert S - WO \vert^2 \Big] \end{eqnarray*}$$
Inspired by EISSTS (3.175), we first observe that all 3 spectra $S, W, O$ have the exact same dimensionality, and $W$ is unconstrained. Therefore an exact recovery is possible when $S = \hat{S}$:
$$\begin{eqnarray*} S = \hat{S} = W_o O \iff &&S\overline{O} = W_o O\overline{O} = W_o \vert O \vert^2\\ \iff &&W_o = \frac{S\overline{O}}{\vert O \vert^2}\\ \end{eqnarray*}$$
This means that convolving with $W_o$ provides a zero-error, exact recovery. Note that, if $\vert O(f) \vert^2 = 0$ for some $f$, then also $S(f)\overline{O}(f) = 0$, so $W_o(f)$ is unconstrained, but still optimal and computable.
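Here is a small numerical sanity check of the exact-recovery claim (a sketch under my own assumptions: random 1-D signals, numpy FFTs for circular convolution, and a tiny constant added to $\vert O \vert^2$ only to avoid dividing by zero at empty bins):

```python
import numpy as np

rng = np.random.default_rng(0)
D = 512
s = rng.standard_normal(D)                                  # signal
r = rng.standard_normal(D) * np.exp(-np.arange(D) / 8.0)    # some decaying response
n = 0.3 * rng.standard_normal(D)                            # additive noise

S, R, N = np.fft.fft(s), np.fft.fft(r), np.fft.fft(n)
O = S * R + N                                               # spectrum of o = (s * r) + n

W_o = S * np.conj(O) / (np.abs(O)**2 + 1e-12)               # W_o = S * conj(O) / |O|^2
s_hat = np.fft.ifft(W_o * O).real

print(np.max(np.abs(s - s_hat)))                            # ~1e-13: exact up to float error
```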
Compared to the long-winded results from the literature, the derivation above is almost upsettingly simple. So is it helpful? We still have two open points:
1. This expression depends on things we don’t claim to know, like $S$.
2. There is no apparent relation with the well-known Wiener result
In the next block we will see that both points can be resolved with further algebraic manipulation:
Characterization of Wiener deconvolution
The first point is resolved with the $O = SR+N$ identity:
$$\begin{eqnarray*} W_o = && \frac{S\overline{O}}{\vert O \vert^2} = \frac{S(\overline{SR} + \overline{N})}{\vert O \vert^2} = \frac{\vert S \vert^2 \overline{R} + S \overline{N}}{\vert O \vert^2}\\ \end{eqnarray*}$$
This last expression no longer relies on unknown quantities, so the optimal filter $W_o$ is computable (remember we assumed knowledge of $\vert S \vert^2$ and the correlation $S \overline{N}$).
In order to address the second point we introduce the following identities:
$$\begin{eqnarray*} Z := &&\frac{S}{\vert S \vert^2}\\ \vert O \vert^2 = && \vert SR \vert^2 + \vert N \vert^2 + SR\overline{N} + \overline{SR}N\\ = && \vert S \vert^2 \Big( \overbrace{\vert R \vert^2 + \frac{\vert N \vert^2}{\vert S \vert^2}}^\Delta + \overbrace{ZR\overline{N} + \overline{ZR}N}^\gamma \Big)\\[5pt] \frac{x}{a+b} = &&\frac{x}{a} - \frac{bx}{a(a+b)} \end{eqnarray*}$$
The Wiener filter typically encountered in the literature is then a relaxation given by assuming uncorrelated noise, i.e. $ZR\overline{N} = \overline{ZR}N = Z \overline{N} = 0$ for all frequencies. Applying this assumption to $W_o$ yields the corresponding expression for $W_\perp$:
$$\begin{eqnarray*} W_o = &&\frac{\vert S \vert^2 \overline{R} + S \overline{N}}{\vert O \vert^2} = \frac{\vert S \vert^2 \overline{R} + S \overline{N}}{\vert S \vert^2 (\Delta + \gamma)} = \frac{\overline{R} + Z \overline{N}}{\Delta + \gamma}\\ \Rightarrow W_\perp = &&\frac{\overline{R} + 0}{\Delta + 0} = \frac{\overline{R}}{\Delta} = \frac{\overline{R}}{\vert R \vert^2 + \frac{\vert N \vert^2}{\vert S \vert^2}} \end{eqnarray*}$$
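This $W_\perp$ is the filter most references call "the" Wiener deconvolution filter. A minimal sketch of how it might be applied to a 1-D observation (my own illustration; the function and parameter names are assumptions, and the power spectra are taken as given):

```python
import numpy as np

def wiener_deconvolve(o, r, S_pow, N_pow):
    """Apply W_perp = conj(R) / (|R|^2 + |N|^2 / |S|^2) to the observation o.

    o     : observed 1-D signal, o = (s * r) + n  (circular convolution)
    r     : known response, zero-padded to len(o) by the FFT call below
    S_pow : assumed power spectrum |S|^2 of the clean signal (length len(o))
    N_pow : assumed power spectrum |N|^2 of the noise (length len(o))
    """
    O = np.fft.fft(o)
    R = np.fft.fft(r, n=len(o))
    W_perp = np.conj(R) / (np.abs(R)**2 + N_pow / np.maximum(S_pow, 1e-12))
    return np.fft.ifft(W_perp * O).real
```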
Crucially, we can rearrange the optimum $W_o$ to express it in terms of the uncorrelated filter $W_\perp$, plus some correction $\varepsilon_\perp$:
$$\begin{eqnarray*} W_o = &&W_\perp + \varepsilon_\perp\\ \Rightarrow \varepsilon_\perp = &&W_o - W_\perp = \frac{\overline{R}}{\Delta + \gamma} - W_\perp + \frac{Z \overline{N}}{\Delta + \gamma}\\ = &&\overbrace{\frac{\overline{R}}{\Delta}}^{W_\perp} - \frac{\overline{R} \gamma }{\Delta ( \Delta + \gamma)} - W_\perp + \frac{S \overline{N}}{\vert O \vert^2}\\[10pt] = &&\frac{S \overline{N}}{\vert O \vert^2} - \frac{\overline{R} \gamma \vert S \vert^2}{\Delta \vert O \vert^2}\\ = &&\frac{1}{\vert O \vert^2} \left( S \overline{N} - W_\perp (SR\overline{N} + \overline{SR}N) \right)\\ = &&\frac{1}{\vert O \vert^2} \left( \overbrace{S \overline{N}}^I - 2\overbrace{\mathfrak{Re}(SR\overline{N}) W_\perp}^{II} \right) \end{eqnarray*}$$
Finally, we can reformulate $\varepsilon_\perp$ as a function of $S\overline{N}$ parametrized by $R$. Let’s start isolating the $S \overline{N}$ component from the $II$ term:
$$\begin{eqnarray*} II = &&2\mathfrak{Re}(SR\overline{N}) W_\perp = \frac{2}{\Delta}\mathfrak{Re}(SR\overline{N}) \overline{R}\\ = &&\frac{2}{\Delta}\left( \mathfrak{Re}(S\overline{N})\mathfrak{Re}(R) - \mathfrak{Im}(S\overline{N})\mathfrak{Im}(R) \right) \left(\mathfrak{Re}(R) - i \mathfrak{Im}(R) \right)\\ = &&\frac{2}{\Delta}\Big[ \mathfrak{Re}(S\overline{N}) \left( \mathfrak{Re}^2(R) - i \mathfrak{Re}(R)\mathfrak{Im}(R) \right)\\ &&\quad- \mathfrak{Im}(S\overline{N}) \left( \mathfrak{Re}(R)\mathfrak{Im}(R) - i \mathfrak{Im}^2(R) \right) \Big]\\ \end{eqnarray*}$$
Now we can replace $II$ in the $\varepsilon_\perp$ definition, yielding:
$$\begin{eqnarray*} \varepsilon_\perp = && \frac{1}{\vert O \vert^2} \Big[ \mathfrak{Re}(S \overline{N}) + i \mathfrak{Im}(S \overline{N})\\ &&\qquad- \mathfrak{Re}(S\overline{N}) \frac{2 \left(\mathfrak{Re}^2(R) - i \mathfrak{Re}(R)\mathfrak{Im}(R) \right)}{\Delta}\\ &&\qquad+ \mathfrak{Im}(S\overline{N}) \frac{2 \left(\mathfrak{Re}(R)\mathfrak{Im}(R) - i \mathfrak{Im}^2(R) \right)}{\Delta} \Big]\\[10pt] = &&\mathfrak{Re}(S\overline{N}) \cdot \frac{1 - \frac{2}{\Delta} \left(\mathfrak{Re}^2(R) - i \mathfrak{Re}(R)\mathfrak{Im}(R) \right)}{\vert O \vert^2}\\ &&+ \mathfrak{Im}(S\overline{N}) \cdot \frac{i + \frac{2}{\Delta} \left(\mathfrak{Re}(R)\mathfrak{Im}(R) - i \mathfrak{Im}^2(R) \right)}{\vert O \vert^2}\\ = &&\mathfrak{Re}(S\overline{N}) \cdot \frac{1 - \frac{2}{\Delta} \left(\mathfrak{Re}^2(R) - i \mathfrak{Re}(R)\mathfrak{Im}(R) \right)}{\vert O \vert^2}\\ &&+ i \mathfrak{Im}(S\overline{N}) \cdot \frac{1 - \frac{2}{\Delta} \left(\mathfrak{Im}^2(R) + i \mathfrak{Re}(R)\mathfrak{Im}(R) \right)}{\vert O \vert^2}\\ = &&f_R(S\overline{N}) \qquad : \mathbb{C} \rightarrow \mathbb{C} \end{eqnarray*}$$
Given the $\varepsilon_\perp$ expression as a function of $I$ and $II$, we can see that $W_\perp$ is indeed optimal if the noise is uncorrelated, because in that case $\varepsilon_\perp = 0$ and $W_\perp = W_o$. Otherwise, the error for $W_\perp$ grows in the direction of $S(f)\overline{N}(f)$ minus the direction of $W_\perp(f)$. This makes sense: if we do have interactions but the filter doesn’t correct them, $\varepsilon_\perp$ grows.
We also see that the error $\varepsilon_\perp$ inherits the stability from the main filter. At a given frequency, the error is inversely proportional to the squared magnitude $\vert O \vert^2$ of the observation. This could indicate instability for low observed energy, but this is canceled by the fact that low observed energy generally means low energy in $S$ and $N$. Similarly, $W_\perp$ tends to 1 when we have low observed energy, so both terms $I$ and $II$ tend to stabilize and introduce a constant error in the magnitude of $\frac{S \overline{N}}{\vert O \vert^2}$.
As for the $f_R(S\overline{N})$ reformulation, it is mainly useful to explicitly encode the relation between our assumptions about $S\overline{N}$ and the resulting $\varepsilon_\perp$ (e.g. if $N$ is white noise, how is $\varepsilon_\perp$ distributed?). Furthermore, it is computationally convenient for two reasons: most computations can be done ahead of time, and it is fully differentiable, so such assumptions could be learned from data instances using gradient descent.
Summary:
Based on the exact recovery observation (inspired by Wiener’s derivation), we achieved a compact, purely algebraic derivation of both the optimal ($W_o$) and standard ($W_\perp$) Wiener filters, without any statistical or complex derivative operators involved. We were also able to characterize $W_\perp$ in terms of $W_o + \varepsilon_\perp$, and reformulate $\varepsilon_\perp$ (which is interpretable and well-behaved) as a function $f_R(S\overline{N})$ with nice computational properties (efficiency and differentiability).
# Experiments
To wrap this up, we will now check some of the facts discussed so far with a little image processing experiment in Python. You can find the full source code in this gist (also locally hosted here), under GPLv3 License.
To make it visual, we will use an image as an example. Given the date and topic, I couldn’t think of anything more appropriate than this image of the beautiful Wiener Christkindlmarkt (note that the calculations and code naturally apply to other modalities, like audio and video):
The Christkindlmarkt in Vienna (Source and license: Wikimedia Commons).
We will simulate a typical distortion scenario, with 3 artifacts:
1. The camera was moving during exposure
2. The air, lens and other channels blur the image in all directions
3. The pixel sensors are affected by noise
Artifacts 1 and 2 can be modeled by convolving with a “trajectory”, and then convolving with a Gaussian blob. In practice, convolution is associative, so we can achieve both at the same time by convolving with a single combined response like the following:
If we then add white noise with a signal-to-noise ratio of 10, the result is our observation:
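For completeness, one way the noise step might be implemented (a sketch of my own; I am reading "signal-to-noise ratio of 10" as a linear power ratio, which the post does not state explicitly):

```python
import numpy as np

def add_white_noise(signal, snr=10.0, seed=0):
    """Return signal plus white Gaussian noise with power = signal power / snr."""
    rng = np.random.default_rng(seed)
    noise_power = np.mean(signal**2) / snr
    return signal + rng.normal(scale=np.sqrt(noise_power), size=signal.shape)
```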
As we can see, the quality has been drastically reduced, but only if we lack knowledge about the distortion: if we apply the optimal $W_o$ to the observation, the reconstruction ends up being identical to the original image (up to numerical error). If our knowledge is more limited, we can assume uncorrelated noise and apply $W_\perp$, resulting in the following:
While some artifacts can still be seen, the recovery is quite decent! The residual energy is brought from over 23000% down to $\sim$11% (percentage w.r.t. undistorted energy, see Python script for details), and qualitatively it can be seen that the image becomes clearer and less noisy. As studied before, the error given by $W_\perp$ is characterized by $\varepsilon_\perp$, and yields the following residual:
Note the interesting fact that, while the input to $f_R(S\overline{N})$ is uniform noise, the output is quite sparse and structured (pretty much all landmarks in the image can be recognized). In this sense, $f_R(S\overline{N})$ acts like a de-hashing function, mapping from random noise to sparse semantics. This result also confirms that the $\ell_2$ objective underlying $W_\perp$ is generally bad at capturing sparse patterns such as contours, and that a sparsity-aware strategy is a natural idea to reduce the impact of $\varepsilon_\perp$.
And with that we finalize this post. Of course, linear filter theory has been around since the 1940s, and numerous improvements left out here have already been long achieved. E.g., we haven’t even tackled how to estimate $\{\vert S \vert^2, \vert R \vert^2, \vert N \vert^2\}$ from the data, we just used the optimal values but we aren’t usually supposed to know all of them. Still, I hope this helps you waltz through the foundations of Wiener deconvolution as much as it helped me! 🎻 🎶 | 2022-06-28 13:05:49 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8483814597129822, "perplexity": 597.7416474431807}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103516990.28/warc/CC-MAIN-20220628111602-20220628141602-00381.warc.gz"} |
http://ruishu.io/2018/05/19/change-of-variables/ | Rui Shu
# 19 May 2018 Change of Variables: A Precursor to Normalizing Flow
Normalizing flow is a cool technique for density estimation that is fun to learn about and (tricky to) wrap your mind around. Those familiar with the term may have come across it within the context of models such as NICE, Real NVP, Normalizing Flow VAE, Inverse Autoregressive Flow, and Masked Autoregressive Flow. At its core, normalizing flow relies on the change of variables technique. As preparation for an up-coming blog on normalizing flow, the goal of this blog post is to provide readers with a gentle introduction to the change of variables concept in the univariate case.
### Inverse Transform Sampling
Consider the following scenario: we have access to a random number generator $U$ that samples uniformly within the interval $(0, 1)$. However, we are interested in sampling from a logistic distribution. What should we do? It turns out that it is possible to construct some function $f$ such that the procedure $u \sim U, \; x = f(u)$
is equivalent to sampling from a logistic distribution. While there are many ways to construct such a function, the inverse transform sampling framework suggests choosing $f$ as the inverse cumulative distribution function (inverse CDF) associated with the logistic distribution. To see why this works, let $X$ denote the logistic distribution with corresponding CDF $F$.
The implicit distribution $f(U)$ has the corresponding CDF $P(f(U) \leq x)$.
We wish to show that $f(U)$ and $X$ share the same CDF. To do so, we note that $F$ maps samples of $X$ to the unit interval $(0, 1)$. By interpreting $F(x)$ as a Bernoulli random variable parameter, we note that, with $f = F^{-1}$, $P(f(U) \leq x) = P(U \leq F(x)) = F(x)$.
Great! Now we just have to figure out what the logistic distribution’s CDF is. After a quick peek on Wikipedia, we find out that the (standard) logistic distribution CDF is simply the sigmoid function $F(x) = \frac{1}{1 + e^{-x}}$.
What a conveniently simple expression![1] By taking its inverse, we find that $F^{-1}(u) = \ln\left(\frac{u}{1 - u}\right)$.
Armed with this knowledge, we can now set $f = F^{-1}$. To sanity check that our analysis is correct, we can take samples from $f(U)$ and approximate the distribution of $f(U)$ with a KDE plot.
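In code, that sanity check might look like the following minimal sketch (my own, using numpy/scipy rather than the author's plotting setup):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
u = rng.uniform(size=100_000)        # samples from U(0, 1)
x = np.log(u / (1.0 - u))            # f = F^{-1}, the logit function

# Two-sample test against direct draws from the standard logistic distribution
direct = stats.logistic.rvs(size=100_000, random_state=0)
print(stats.ks_2samp(x, direct).pvalue)   # large p-value: distributions match
```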
Sure enough, we have successfully constructed a way to sample from the logistic distribution by transforming a uniform distribution! And perhaps more importantly, we now have a sense that this procedure of introducing some function $f$ that transforms an existing simple distribution $U$ is possibly a powerful method for accessing more complicated distributions.
### A Taste of Complexity
By using a complicated transformation function $f$, we should be able to sample from a complicated distribution. To demonstrate this, let’s define $f(x)$ as
Hopefully that looks complicated enough. If you’re wondering how I got this function, let’s just say WolframAlpha did most of the heavy lifting.[2] Assuming that you’re willing to accept that I pulled this function out of thin air, let’s check out what the function looks like, as well as the KDE of $f(U)$.
I took the liberty of plotting out $f(u)$, $F(x)$, as well as the KDE of $f(U)$. There are several things to note. First, $f$ is a strictly monotonic (and thus invertible) function (this will come in handy in the next section). Second, $f(U)$ is a bimodal distribution. Third, one can think of this bimodal distribution as being achieved by stretching and squeezing certain parts of the original uniform distribution. This is most apparent in the first two plots, where the uniformly-spaced lines in the space of $u$ get distorted when we map to the space of $x$. Regions of $u$ that get squeezed together correspond to regions of high density in $x$. Inversely, regions of $u$ that get stretched correspond to regions of low density in $x$. Generally, the intuition seems to be that the density within any region of $x$ is inversely proportional to how much the corresponding region of $u$ gets stretched.
### Computing the Density
Given the aforementioned complicated $f$, sampling $x \sim f(U)$ is pretty straightforward. However, we have yet to discuss how to actually compute the probability density $p_X(x)$ (here, $p_X$ denotes the PDF associated with $f(U)$). Naively, one might suggest simply computing the inverse $u = f^{-1}(x) = F(x)$ and evaluating $p_U(u)$. However, it shouldn’t take long to realize that something is amiss with this strategy: for any $u \in (0, 1)$, the function $p_U(u)$ is a constant that evaluates to $1$! It turns out that this strategy fails to consider the stretching/squeezing of the space as we map between $x$ and $u$. To account for this, the change of variables theorem (under certain assumptions[3]) concludes that we should scale $p_U(u)$ by (the inverse of) how much the neighborhood of $u$ gets stretched when we map to $x$. This is summarized by the following equation: $p_X(x) = p_U(u)\left\vert \frac{\mathrm{d}f(u)}{\mathrm{d}u} \right\vert^{-1}$.
Note that this equation exactly captures our intuition from the previous section that the density of $x$ is inversely proportional to the degree that the neighborhood of $u$ gets stretched by $f$. Furthermore, since $u = f^{-1}(x)$, it follows that $p_X(x) = p_U\left(f^{-1}(x)\right)\left\vert \frac{\mathrm{d}f^{-1}(x)}{\mathrm{d}x} \right\vert$.
There are thus two ways to compute $p_X(x)$, depending on whether we have access to the derivative of $f$ or the derivative of $f^{-1}$. Since we have direct access to $f$, we can leverage the magic of automatic differentiation to evaluate the derivative of $f$ fairly easily. We’ll therefore make use of the former change-of-variables equation and run the following TensorFlow code.
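The code block itself did not survive extraction, so here is a hedged reconstruction of the idea rather than the author's original snippet (the placeholder function `f` below is hypothetical, and TensorFlow 2's `GradientTape` is assumed):

```python
import numpy as np
import tensorflow as tf

def f(u):
    # Hypothetical stand-in for the post's "complicated" strictly monotonic function.
    return tf.math.log(u / (1.0 - u)) + 0.5 * tf.sin(6.0 * u)

u = tf.constant(np.random.default_rng(0).uniform(1e-4, 1.0 - 1e-4, size=10_000))

with tf.GradientTape() as tape:
    tape.watch(u)
    x = f(u)

df_du = tape.gradient(x, u)       # automatic differentiation of f at each sample
p_x = 1.0 / tf.abs(df_du)         # p_X(x) = p_U(u) |f'(u)|^{-1}, with p_U(u) = 1
```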
Looks like the change-of-variables approach works! By sanity-checking the PDF given by the change of variables against the PDF approximated by KDE, we see that the two PDFs are more or less the same.
### Extension to the Multivariate Regime
Hopefully by now, we’re all slightly more comfortable with the use of differentiable, invertible functions to transform simple distributions into more complicated distributions. To end this post, there are two loose ends that are worth tying up. First, let’s see the change-of-variables theorem in its full multivariate glory: $p_X(x) = p_U\left(f^{-1}(x)\right)\left\vert \det \frac{\partial f^{-1}(x)}{\partial x} \right\vert$.
To get an intuition for what the change-of-variables is doing in the multivariate setting, note that the transformation that $f$ applies to a sufficiently small neighborhood of $u$ is just an affine transformation (i.e. we’re taking the first-order approximation of $f$) whose linear component is the matrix multiplication of $u$ by the Jacobian of $f$. Now, we simply need to remember that the change-in-volume induced by a linear transformation via the matrix $T$ is exactly the absolute value of the determinant of $T$. Thus, the determinant of the Jacobian of $f$ describes how much a small parallelotope/hypercube containing $u$ gets stretched by $f$.
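A tiny numerical illustration of that volume interpretation (my own sketch, not from the post): the unit square mapped through a linear map $T$ becomes a parallelogram whose area is $\lvert\det T\rvert$.

```python
import numpy as np

T = np.array([[2.0, 1.0],
              [0.5, 1.5]])

square = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], dtype=float)
mapped = square @ T.T                      # image of the unit square's corners

# Shoelace formula for the area of the mapped parallelogram
x, y = mapped[:, 0], mapped[:, 1]
area = 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))

print(area, abs(np.linalg.det(T)))         # both 2.5
```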
The final thing to note is how on earth do we come up with arbitrary, invertible functions? If the goal is to do density estimation using the change-of-variables method, we will need to somehow come up with an entire family of invertible and differentiable functions and furthermore perform optimization over that family of functions. The construction of invertible function families that are amenable to gradient-based optimization will be the main topic of discussion in the future post on normalizing flows. So stay tuned!
1. If you wondered why the thought experiment didn’t involve a Gaussian distribution, this is why.
2. I computed the CDF of a mixture of two logistic distributions, used WolframAlpha to compute the inverse analytically, and then randomly perturbed the numbers a little so that it becomes some weird bimodal distribution that isn’t quite a mixture of two logistic distributions—just to make things more fun.
3. Continuously differentiable, invertible $f$ with non-zero derivative.
End of post | 2018-10-17 06:35:47 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 10, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9190887212753296, "perplexity": 247.73295153050972}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583510998.39/warc/CC-MAIN-20181017044446-20181017065946-00242.warc.gz"} |
http://crypto.stackexchange.com/questions/12350/combining-two-hashing-functions | Combining two hashing functions
I have been working on a Bloom Filter implementation recently, and after a discussion with a co-worker about how many hashing functions to use, I told him I was limited to using hashing functions that are already implemented and didn't want to risk the loss of distribution if I derive hashing functions from them.
We can assume that the distribution of any hashing function is not 100% perfect. It's very good for many, but not perfect. Is there any existing literature that describes the loss of distribution? I am not using the hashing functions for cryptographic purposes directly, because in a Bloom filter the hash is only used as an index, and the distribution is what matters most for my needs - but I assume the two are related.
-
Are you using cryptographic hash functions? If not, this is off-topic for Cryptography.SE. – D.W. Dec 16 '13 at 1:52
@D.W. Yes I am using cryptographic hashing functions. SHA1, SHA256, and MD5. – Kristopher Ives Dec 16 '13 at 3:47
As far as non-cryptographic uses are concerned, a cryptographic hash is perfect. – CodesInChaos Dec 16 '13 at 9:08
For the purpose of a Bloom filter, as described in Wikipedia, you can do HASH(element||"Function<N>"), with appropriate different N's, as is done in Bitcoin mining, to get virtually different hashing functions. – daniel Dec 16 '13 at 10:42
For the purposes of a Bloom filter you need a number of hash functions. Cryptographic hash functions are designed so that changing a single bit in the input should change many (around 1/2) of the output bits.
So, say you have a good hash function $h$ (e.g., SHA256, though MD5 should work for your purposes too), a good option for you would be to use:
$h_1(m)=h("1" || m)$
$h_2(m)=h("2" || m)$
$\vdots$
$h_n(m)=h("n" || m)$
Where $||$ is concatenation. Then you only need one hash function. This is basically what they are doing in this java implementation.
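A minimal Python sketch of this construction (my own illustration using `hashlib`; the class, helper names and default parameters are not from the answer or the linked Java code):

```python
import hashlib

def derived_hash(i, data, num_bits):
    """i-th derived hash: h_i(m) = SHA-256(str(i) || m), reduced to a bit index."""
    digest = hashlib.sha256(str(i).encode() + data).digest()
    return int.from_bytes(digest, "big") % num_bits

class BloomFilter:
    def __init__(self, num_bits=1 << 20, num_hashes=7):
        self.num_bits, self.num_hashes = num_bits, num_hashes
        self.bits = bytearray(num_bits // 8 + 1)

    def add(self, item: bytes):
        for i in range(self.num_hashes):
            idx = derived_hash(i, item, self.num_bits)
            self.bits[idx // 8] |= 1 << (idx % 8)

    def __contains__(self, item: bytes):
        for i in range(self.num_hashes):
            idx = derived_hash(i, item, self.num_bits)
            if not (self.bits[idx // 8] >> (idx % 8)) & 1:
                return False
        return True
```

Usage: `bf = BloomFilter(); bf.add(b"nacl"); b"nacl" in bf` returns `True`, with the false-positive rate governed by `num_bits` and `num_hashes`.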
Is there any existing literature that describes the loss of distribution?
Not that I am aware of. But think about it this way: if there were a significant deviation from the uniform distribution detectable in a reasonable amount of time, we wouldn't use the hash function for cryptographic purposes.
Now, is there some sort of attack you are worried about?
-
Not a specific attack but I am curious if anyone has done the math. My "gut" tells me that it's a compound of some kind, and we can assume that a good hashing algo might be 0.99 "distributed" but I am concerned with compounding like 0.99 * 0.99 * 0.99 ... – Kristopher Ives Jan 3 '14 at 20:22 | 2016-05-26 08:49:54 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.48274409770965576, "perplexity": 561.7232127535603}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-22/segments/1464049275764.90/warc/CC-MAIN-20160524002115-00217-ip-10-185-217-139.ec2.internal.warc.gz"} |
https://mathoverflow.net/questions/360560/riesz-markov-kakutani-representation-theorem-for-compact-non-hausdorff-spaces | # Riesz–Markov–Kakutani representation theorem for compact non-Hausdorff spaces
Let $$X$$ be a compact Hausdorff topological space, and $$\mathcal C^0 (X) = \{f:X\to\mathbb{R}; \ f \text{ is continuous }\}$$. It is well known that for any bounded linear functional $$\phi: \mathcal C^0(X)\to\mathbb{R},$$ such that $$\phi(f)\geq 0$$ if $$f\geq 0$$ ($$\phi$$ is called a positive linear functional), then there exists a unique regular Borel measure $$\mu$$, such that $$\phi(g) = \int g\ \mathrm d\mu, \ \forall \ g\in \mathcal C^0(X).$$ This result follows from a direct application of Riesz–Markov–Kakutani representation theorem.
If we drop the Hausdorff hypothesis (only assuming $$X$$ to be a compact topological space), then we can lose the uniqueness of the measure that represents the linear functional. A famous example is the compact topological space "$$[0,1]$$ with two origins". In this case the functional $$\phi: \mathcal C^0(X)\to\mathbb{R}$$, $$\phi(f) = f(0)$$ can be written as $$\int f\ \mathrm{d}\delta_0$$ or $$\int f\ \mathrm{d}\delta_{0'}.$$
I would like to know if we still have the existence of a measure that represents the functional. In other words, I would like to know if the following theorem is true
Possible Theorem: Let $$(X,\tau)$$ be a compact non-Hausdorff space, and $$\Lambda : \mathcal C^0(X)\to\mathbb{R}$$ a positive bounded linear functional, then there exists a measure $$\mu: \mathcal B(\tau)\to \mathbb{R}$$ (where $$\mathcal B(\tau)$$ is the smallest $$\sigma$$-algebra such that $$\tau\subset \mathcal B(\tau))$$, such that $$\Lambda(f) = \int f\ \mathrm{d}\mu, \ \forall \ f\in \mathcal C^0(X).$$
Can anyone help me?
I have searched online but I was not able to find a result in the non-Hausdorff case.
First, it follows from the following result and the Riesz–Markov–Kakutani representation theorem that we can always find a suitable Baire measure representing a positive linear functional.
Theorem: Let $$X$$ be any topological space. Then there exists a completely regular Hausdorff space $$Y$$ and a continuous surjection $$\tau:X\to Y$$ such that the function $$g\mapsto g\circ\tau$$ is an isomorphism from $$C_B(Y)$$ onto $$C_B(X)$$.
This is Theorem 3.9 of "Rings of continuous functions" (1960) by Gillman and Jerison.
So the problem reduces to the question whether a Baire measure on a compact topological space can be extended to a Borel measure. We can do this using the following result, which specializes the very abstract Theorem 2.6.1 of "Convex Cones" (1981) by Fuchssteiner and Lusky.
Theorem: Let $$X$$ be a non-empty compact topological space and $$L:\mathcal{C}^0_+(X)\to\mathbb{R}$$ be an additive function on the cone of nonnegative continuous functions on $$X$$ such that $$L(g)\leq\max g$$ for all $$g$$. Then there exists a Borel probability measure $$\nu$$ on $$X$$ such that $$L(g)\leq\int g~\mathrm d\nu$$ for all $$g\in \mathcal{C}^0_+(X)$$.
For nonzero $$\Lambda$$, let $$L=1/\Lambda(1)\cdot \Lambda$$. Then the measure $$\mu=\Lambda(1)\cdot\nu$$ does the trick.
It should be noted that the resulting Borel measure need not be regular. For non-Hausdorff $$X$$, there is no point in going beyond Baire measures.
• Just one question that I am not following (btw, the trick of inducing a Baire-measure using $\tau$ was really clever). Using the second theorem we are able to find a Borel probability measure such that $L(f) \leq \int f \ \mathrm{d}\nu$. Why this solve the problem? Since we are interested in the equality of both terms. – Matheus Manzatto May 17 at 18:09
• @MatheusManzatto I wrote the statement with an inequality, so that it matches up easily with the statement in the book. Here is how one can get equality: Assume without loss of generality that $\Lambda(1)=1$ and $0\leq g\leq 1$ (the general case follows from rescaling). Then $\Lambda(g)\leq\int g~\mathrm d\nu$ and $\Lambda(1-g)\leq \int 1-g~\mathrm d\nu$ together with $1=\Lambda(1)=\Lambda(g)+\Lambda(1-g)\leq \int g~\mathrm d\nu+\int 1-g~\mathrm d\nu=1$ implies that $\Lambda(g)=\int g~\mathrm d\nu$. – Michael Greinecker May 17 at 18:19
• Wow, that was good. Thx very much for your help. – Matheus Manzatto May 17 at 18:23
• Nice. It seems to me from you answer and your comments that the Corollary to Theorem II.2.6.1 from the book of Fuchssteiner and Lusky does the job alone - it's even stated in the book as a Riesz representation theorem for non-Hausdorff compact topological spaces. Is the "Tychonoffication" result from Gilman-Jerison even needed? – Pedro Lauridsen Ribeiro May 18 at 6:09
• @PedroLauridsenRibeiro You are right. The first result is for context. – Michael Greinecker May 18 at 6:34 | 2020-07-02 06:11:19 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 40, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9161970019340515, "perplexity": 109.77935767422952}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655878519.27/warc/CC-MAIN-20200702045758-20200702075758-00568.warc.gz"} |
http://www.askiitians.com/forums/IIT-JEE-Entrance-Exam/37/39392/about-book.htm |
GOOD MORNING
I AM SOLVING NEW PATTERN IIT JEE MATHEMATICS "DR. S K GOYAL"
IS IT GOOD OR NOT?
IF NOT WHICH BOOK SHOULD I PREFER?
TELL ME I AM VERY CONFUSED.
3 years ago
Yes, it's very good and it contains a large number of numerical examples; you can also use R.D. Sharma for more practice.
3 years ago
IT IS A VERY GOOD BOOK FOR MATHEMATICS IF YOU SOLVE IT PROPERLY. ALSO, ARIHANT BOOKS FOR REFERENCE ARE GOOD. BEST OF LUCK.
3 years ago
More Questions On IIT JEE Entrance Exam
Is JEE Advanced 2016 going to be subjective? Please tell me the latest updates regarding this matter. I am a JEE aspirant and I’ve been studying since class 11 (I will enter 12th this year)...
Even I've heard of these rumours, but they were later regarded as media hype. JEE is basically only objective type due to the difficulty of the questions.
Rohit one month ago
RON one month ago
RIGHT NOW I AM IN CLASS 12TH AND I WANT TO KNOW WHETHER I CAN APPEAR FOR THE IIT ENTRANCE EXAM AFTER 2 YEARS OF PASSING MY 12TH CLASS. RIGHT NOW MY AGE IS 16+9. PLEASE SOMEONE REPLY
Hi, You can sit for JEE Advanced only in the year you are giving your 12th board exams, and the year immediately next to it. So you can't appear for the exam after 2 years....
Yash Baheti 5 months ago
What if I don't give the exam immediately after passing 12th, and instead take some coaching for two years after 12th?
nikhil pawar 5 months ago
Three angles of a triangle ABC are in Arithmetic progression and two sides are in the ratio b : c = √3 : √2. Find angle A.
Hi, use the property of AP and then the cosine formula from solution of triangles, $a^2 = b^2 + c^2 - 2bc\cos A$, to get the answer in the required format. Best
Sourabh Singh 5 months ago
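For completeness, here is a short worked version of that hint (my own sketch; the posted answer only gestures at the method, and I use the sine rule instead of the suggested cosine formula because it is shorter here). Angles in AP force $B = 60^\circ$:

```latex
\begin{align*}
2B &= A + C,\quad A + B + C = 180^\circ \;\Rightarrow\; B = 60^\circ,\\
\frac{\sin C}{\sin B} &= \frac{c}{b} = \frac{\sqrt{2}}{\sqrt{3}}
  \;\Rightarrow\; \sin C = \frac{\sqrt{2}}{\sqrt{3}}\cdot\frac{\sqrt{3}}{2} = \frac{\sqrt{2}}{2}
  \;\Rightarrow\; C = 45^\circ,\\
A &= 180^\circ - 60^\circ - 45^\circ = 75^\circ.
\end{align*}
```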
sir my jaypee noida rank is 3000 . is there a chance of getting IT or cse ???? plz guide thnx in advance
Hi, Please see last year's opening and closing ranks for a better idea. For a more precise answer, visit our college and branch predictor. Here is the link :...
Yash Baheti 7 months ago
Hello Sir, my AIR is 2,00,000 and OBC category rank is 69,000. So please tell me can i get Electronics and Communication in any Good College or in NIT..Please Reply..
On the JEE Main site it is shown that the ALL INDIA OVERALL RANK IS CONSIDERED. Even for NIT Nagaland, the spot round closing ALL INDIA OVERALL RANK for an OBC category candidate is 1,08,798, so you will...
Rak 7 months ago
will not be able
Rak 7 months ago
i want to crack AIPMT how to prepare and what type of books are useful for me
Understand the syllabus thoroughly, especially which individual sections are important. Prepare beginning from the Class 11 syllabus before moving on to Class 12. Make sure you are clear on the...
Dinesh 8 months ago
View all Questions » | 2015-03-05 18:44:32 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2642122805118561, "perplexity": 3764.4691198453565}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-11/segments/1424936464809.62/warc/CC-MAIN-20150226074104-00292-ip-10-28-5-156.ec2.internal.warc.gz"} |
https://www.snapxam.com/calculators/exponent-properties-calculator | # Exponent properties Calculator
### Difficult Problems
1. Example: $\lim_{x\to0}\left(\frac{\ln\left(x+1\right)}{x}\right)$
2. As the limit results in indeterminate form, we can apply L'Hôpital's rule: $\lim_{x\to0}\left(\frac{\frac{d}{dx}\left(\ln\left(1+x\right)\right)}{\frac{d}{dx}\left(x\right)}\right)$
3. The derivative of the linear function is equal to $1$: $\lim_{x\to0}\left(\frac{d}{dx}\left(\ln\left(1+x\right)\right)\right)$
4. The derivative of the natural logarithm of a function is equal to the derivative of the function divided by that function. If $f(x)=\ln a$ (where $a$ is a function of $x$), then $\displaystyle f'(x)=\frac{a'}{a}$: $\lim_{x\to0}\left(\frac{1}{1+x}\cdot\frac{d}{dx}\left(1+x\right)\right)$
5. The derivative of a sum of two functions is the sum of the derivatives of each function: $\lim_{x\to0}\left(\frac{1}{1+x}\left(\frac{d}{dx}\left(1\right)+\frac{d}{dx}\left(x\right)\right)\right)$
6. The derivative of the constant function is equal to zero: $\lim_{x\to0}\left(\frac{1}{1+x}\cdot\frac{d}{dx}\left(x\right)\right)$
7. The derivative of the linear function is equal to $1$: $\lim_{x\to0}\left(\frac{1}{1+x}\right)$
8. Evaluating the limit when $x$ tends to $0$: $\frac{1}{1+0}$
9. Simplifying: $1$
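To double-check the worked example programmatically, here is a small sketch of my own using SymPy (not part of the calculator page):

```python
import sympy as sp

x = sp.symbols('x')
print(sp.limit(sp.ln(x + 1) / x, x, 0))   # prints 1, matching the step-by-step result
```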
Access detailed step by step solutions to millions of problems, growing every day! | 2018-11-19 08:02:00 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9357632994651794, "perplexity": 349.272625339192}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039745486.91/warc/CC-MAIN-20181119064334-20181119090334-00149.warc.gz"} |
https://cstheory.stackexchange.com/questions/5812/array-slice-reversing-data-structure | # Array slice reversing data-structure
Given an array of $n$ elements, $A[n]$ consider a data-structure which supports the following operations:
You are allowed a one time $\mathcal{O}(n)$ preprocessing step:
• $\text{Init}(A)$
And the operations
• $\text{Reverse}(i,j)$: Reverse the slice $A[i \dots j]$ i.e. swap $A[i]$ with $A[j]$, $A[i+1]$ with $A[j-1]$ etc.
• $\text{Retrieve}(i)$: Returns the element at position $i$ in the array.
Now I have heard that there is a data-structure which supports both $\text{Reverse}$ and $\text{Retrieve}$ in guaranteed $\mathcal{O}(P(\log n))$ time, where $P(x)$ is a polynomial (Assume $\mathcal{O}(1)$ array accesses).
I am guessing this was published somewhere. Does anyone know any reference?
Apologies if this is actually at the level of undergraduate homework. Please feel free to close/delete it in that case (but please do provide a reference to the appropriate text/website in comments before deleting).
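To pin down the semantics of the two operations before chasing references, here is a naive baseline of my own (deliberately not the $\mathcal{O}(P(\log n))$ structure being asked about; that is usually obtained from a balanced search tree over the sequence with lazy "reverse" flags, e.g. a splay tree or treap):

```python
class SliceReverser:
    """Naive baseline: Reverse(i, j) costs O(j - i), Retrieve(i) costs O(1)."""

    def __init__(self, a):            # Init(A): one-time O(n) copy
        self.a = list(a)

    def reverse(self, i, j):          # Reverse(i, j): reverse A[i..j] in place
        self.a[i:j + 1] = self.a[i:j + 1][::-1]

    def retrieve(self, i):            # Retrieve(i): element at position i
        return self.a[i]
```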
• Now that this question has been answered (O(lg n) worst-case possible), I am wondering if there is a data structure with O(lg n) for one operation and o(lg n) for the other. – jbapple Apr 2 '11 at 2:58
• Sorry, I was talking about a structure with guaranteed $O(P(\log n))$ operations. I will edit the question. +1 though. Thanks. – Aryabhata Apr 1 '11 at 21:59 | 2019-07-22 01:18:37 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3017101585865021, "perplexity": 693.743185986489}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195527458.86/warc/CC-MAIN-20190722010436-20190722032436-00204.warc.gz"} |
https://oalevelsolutions.com/past-papers-solutions/edexcel/as-a-level-mathematics/core-mathematics-1-c1-6663-01/year-2014-january-c1-6663a-01/edexcel_14_jan_6663a_01_c1_q_9/ | # Past Papers’ Solutions | Edexcel | AS & A level | Mathematics | Core Mathematics 1 (C1-6663A/01) | Year 2014 | January | Q#9
Question
A curve with equation y=f(x) passes through the point (3,6). Given that
a. use integration to find f(x). Give your answer as a polynomial in its simplest form.
b. Show that , where p is a positive constant. State the value of p.
c. Sketch the graph of y = f(x), showing the coordinates of any points where the curve touches or crosses the coordinate axes.
Solution
a.
We are given;
We are given coordinates of a point on the curve (3,6).
We are required to find the equation of y in terms of x ie f(x).
We can find equation of the curve from its derivative through integration;
Therefore,
Rule for integration of is:
Rule for integration of is:
Rule for integration of is:
If a point lies on the curve, we can find the value of the constant of integration $c$. We substitute the values of $x$ and $y$ of the point in the equation obtained from integration of the derivative of the curve, i.e. the expression for $y$ found above.
Therefore, substituting the coordinates of point (3,6) in above equation;
Therefore, equation of the curve C is;
b.
We have found in (b) that;
We are given that;
We can now compare the given and the found equations.
This yields that;
Hence, p=3 and we can write the given expression as;
c.
We are required to sketch;
As demonstrated in (b), we can write it as;
Substituting p=3 as found in (b);
It is evident that it is a cubic equation.
We can now sketch the curve as follows.
- Find the sign of the coefficient of $x^3$. This gives the shape of the graph at the extremities.
It is evident that the positive coefficient of $x^3$ will shape the curve at the extremities so that it increases from left to right.
- Find the point where the graph crosses the y-axis by finding the value of $y$ when $x=0$.
We can find the coordinates of y-intercept from the given equation of the curve.
Hence, the curve crosses y-axis at point .
- Find the point(s) where the graph crosses the x-axis by finding the value of $x$ when $y=0$. If there is a repeated root, the graph will touch the x-axis.
We can find the coordinates of x-intercepts from the given equation of the curve.
Now we have two options.
Hence, the curve crosses x-axis at two points and .
- Calculate the value of $y$ for some value(s) of $x$. This is particularly useful in determining the quadrant in which the graph might turn close to the y-axis.
- Complete the sketch of the graph by joining the sections.
ü Sketch should show the main features of the graph and also, where possible, values where the graph intersects coordinate axes. | 2022-06-30 03:46:40 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8715493679046631, "perplexity": 1880.6055921496736}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103661137.41/warc/CC-MAIN-20220630031950-20220630061950-00384.warc.gz"} |
http://nrich.maths.org/433/clue | ### Doodles
Draw a 'doodle' - a closed intersecting curve drawn without taking pencil from paper. What can you prove about the intersections?
### Russian Cubes
I want some cubes painted with three blue faces and three red faces. How many different cubes can be painted like that?
### N000ughty Thoughts
How many noughts are at the end of these giant numbers?
# Euler's Officers
##### Stage: 4 Challenge Level:
See Teddy Town
You can construct orthogonal Latin squares $S^{i,j}$ and $T^{i,j}$ of prime order $m$ where the $S^{i,j} = si + j \pmod m$ and $T^{i,j} = ti + j \pmod m$ and $s$ not equal to $t$. | 2016-10-26 23:15:07 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.540653645992279, "perplexity": 1814.5297522212554}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988721008.78/warc/CC-MAIN-20161020183841-00554-ip-10-171-6-4.ec2.internal.warc.gz"} |
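A small sketch making that construction concrete (my own illustration; the choices $m=5$, $s=1$, $t=2$ are arbitrary distinct nonzero residues):

```python
from itertools import product

m, s, t = 5, 1, 2    # prime order m, distinct nonzero multipliers s != t

S = [[(s * i + j) % m for j in range(m)] for i in range(m)]
T = [[(t * i + j) % m for j in range(m)] for i in range(m)]

# Orthogonality: every ordered pair (S[i][j], T[i][j]) appears exactly once.
pairs = {(S[i][j], T[i][j]) for i, j in product(range(m), repeat=2)}
print(len(pairs) == m * m)   # True
```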
https://www.ias.ac.in/listing/bibliography/pram/Chindhu_S_Warke | • Chindhu S Warke
Articles written in Pramana – Journal of Physics
• Core polarization effects and the random phase approximation solution
Simplified formulae for the effective electromagnetic transition matrix elements and the core polarization contribution to the effective two-nucleon interaction are derived. From these general expressions, the polarization effects in any other physical quantity of interest can easily be written down. It is also proved that the usual RPA eigenvalue problem corresponding to a $2n \times 2n$ matrix $$\left( \begin{matrix} A & B \\ -B & -A \end{matrix} \right)$$ is equivalent to the diagonalization of an $n \times n$ matrix $(A+B)(A-B)$.
• Exact expression for the projected energy
The angle integrated exact expression for the projected energy is derived from two different expansions of the rotation operator. In one, the spin matrix polynomial expansion method is used while in the other the disentangling theorem for angular momentum operator is used.
• # Editorial Note on Continuous Article Publication
Posted on July 25, 2019 | 2022-06-27 02:55:19 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7030807733535767, "perplexity": 1511.0446453139032}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103324665.17/warc/CC-MAIN-20220627012807-20220627042807-00007.warc.gz"} |
https://codeforces.com/problemset/problem/1363/C | C. Game On Leaves
time limit per test
2 seconds
memory limit per test
256 megabytes
input
standard input
output
standard output
Ayush and Ashish play a game on an unrooted tree consisting of $n$ nodes numbered $1$ to $n$. Players make the following move in turns:
• Select any leaf node in the tree and remove it together with any edge which has this node as one of its endpoints. A leaf node is a node with degree less than or equal to $1$.
A tree is a connected undirected graph without cycles.
There is a special node numbered $x$. The player who removes this node wins the game.
Ayush moves first. Determine the winner of the game if each player plays optimally.
Input
The first line of the input contains a single integer $t$ $(1 \leq t \leq 10)$ — the number of testcases. The description of the test cases follows.
The first line of each testcase contains two integers $n$ and $x$ $(1\leq n \leq 1000, 1 \leq x \leq n)$ — the number of nodes in the tree and the special node respectively.
Each of the next $n-1$ lines contain two integers $u$, $v$ $(1 \leq u, v \leq n, \text{ } u \ne v)$, meaning that there is an edge between nodes $u$ and $v$ in the tree.
Output
For every test case, if Ayush wins the game, print "Ayush", otherwise print "Ashish" (without quotes).
Examples
Input
1
3 1
2 1
3 1
Output
Ashish
Input
1
3 2
1 2
1 3
Output
Ayush
Note
For the $1$st test case, Ayush can only remove node $2$ or $3$, after which node $1$ becomes a leaf node and Ashish can remove it in his turn.
For the $2$nd test case, Ayush can remove node $2$ in the first move itself. | 2020-08-07 10:14:43 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.27582553029060364, "perplexity": 717.005310000556}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439737172.50/warc/CC-MAIN-20200807083754-20200807113754-00040.warc.gz"} |
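For what it's worth, the following is a sketch of a common way to reason about this game (not taken from the problem setters, so treat it as an assumption): if the special node is already a leaf, the first player takes it immediately; otherwise both players avoid exposing it for as long as possible and the winner is decided by the parity of $n$. The two samples above agree with this rule.

```python
import sys

def solve():
    data = sys.stdin.read().split()
    pos = 0
    t = int(data[pos]); pos += 1
    for _ in range(t):
        n, x = int(data[pos]), int(data[pos + 1]); pos += 2
        deg = [0] * (n + 1)
        for _ in range(n - 1):
            u, v = int(data[pos]), int(data[pos + 1]); pos += 2
            deg[u] += 1
            deg[v] += 1
        if n == 1 or deg[x] <= 1:
            print("Ayush")   # the special node is already removable on the first move
        else:
            print("Ayush" if n % 2 == 0 else "Ashish")   # otherwise parity of n decides

solve()
```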
https://angularquestions.com/category/karma-runner/ | # Category: karma-runner
## Karma jasmine test angular 5 cannot call Promise
I am new to karma jasmine test and I am struggling with a test that always gives this error when I launch the test: Error: Cannot call Promise.then from within a sync test. I am using angular 5. Here is my test code: fdescribe(‘CommentComponent’, () => { let component: CommentComponent; let fixture: ComponentFixture<CommentComponent>; let commentService: CommentService; const stationId = 900; let station: Station; let comment: Comment; beforeEach(fakeAsync(() => { station = new Station(); station.id = […]
## How to run command-line process in karma
I am writing an integration test (not a unit test) for my AngularJS / CouchDB application using Karma. I realize that in unit tests mocking the database is important. I am deliberately NOT mocking the database since this is an integration test. I explicitly DO want to interact with the database. The following works when I run code in protractor exec(“my command that loads data into the database”). In karma exec() is not available. How […]
## Testing Angular Service with Jasmine: Failed to instantiate module app
I am trying to create a Simple Unit Test using Jasmine for an Angular Service. But when I am running the Test via Karma, I am getting the following error: Failed to instantiate module app And I don’t know why I am getting this error. I Googled and tried to apply the below solutions. But none worked. I tried changing the sequence of files in the karma.conf.js. I tried writing the Unit test in different […]
## Error on setting up karma, jasmine for Angular1.x application with requirejs configuration
Our Angular 1.x application uses requirejs For setting up karma – jasmine we have used below karma conf file: module.exports = function(config) { config.set({ basePath: ”, frameworks: [‘jasmine’], files: [ ‘node_modules/angular/angular.js’, ‘node_modules/angular-mocks/angular-mocks.js’, ‘src/test/**/*.spec.js’, ‘src/**/*.js’ ], exclude: [ ], preprocessors: { ‘src/main/webapp/js/**/*.js’ }, coverageReporter: { type: ‘html’, dir: ‘coverage’ }, plugins: [ ‘karma-jasmine’, ‘karma-chrome-launcher’, ‘karma-coverage’ ] reporters: [‘progress’, ‘coverage’], port: 9876, colors: true, logLevel: config.LOG_INFO, autoWatch: true, browsers: [‘Chrome’], singleRun: false, concurrency: Infinity }) } But […]
Added a timeout to reload the page in the controller after a certain amount of time, however now all the karma tests are failing. Controller code: function mainController($route, $scope, $timeout) { $scope.toggle = false; $scope.value = function() { if ($scope.value) { $scope.toggle = true; } else { $scope.toggle = false; } } $timeout($route.reload, 10000); } Tests: describe('mainController:', function() { describe('test mainController', function () { beforeEach(inject(function ($injector) { this.$scope = $injector.get('$rootScope').$new(); this.$route = $injector.get('$route'); this.$timeout = $injector.get('$timeout'); […]

## What unit testing is best for an application which is running on Angular 1.5 among Karma, Protractor or Jasmine or others?

I have an AngularJS 1.5 application with 5-6 modules with lots relying on directives and stuff. How do I figure out which testing is better for our application, to actually build a test harness over it? I have been through many types, some are runners and e2e etc.; Karma, Protractor, Jasmine are the most familiar ones, but how would we test the application; is it the whole application one at a time like other automation tools? […]

## Karma: Error: [$injector:nomod] Module 'app' is not available
I am new to the Angular world and a beginner at Karma. I am receiving the following error when I try to run the Karma-Jasmine unit test: { "message": "An error was thrown in afterAll\nUncaught Error: [$injector:nomod] Module 'app' is not available! You either misspelled the module name or forgot to load it. If registering a module ensure that you specify the dependencies as the second argument.\nhttp://errors.angularjs.org/1.6.9/$injector/nomod?p0=app", "str": "An error was thrown in afterAll\nUncaught Error: […]
## How to load an AngularJS controller's templateUrl into a jasmine test?
I am new to angular and jasmine, so this is probably a simple one. However, I am having trouble accessing the HTML template from the jasmine test. I create the controller with a beforeEach. beforeEach(angular.mock.module(‘yeomanApp’)); beforeEach(angular.mock.module(‘yeomanApp.services’)); beforeEach(inject(function ($controller,$rootScope, $routeParams, _$httpBackend_, $window, ) { scope =$rootScope.$new(); routeParams =$routeParams; httpBackend = _$httpBackend_; window =$window; SampleCtrl = $controller (‘SampleCtrl’, {$scope: scope, $routeParams: routeParams,$http: httpBackend, \$window: window }); })); However, I can not […] | 2018-05-28 05:03:54 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.19511942565441132, "perplexity": 13055.397628022662}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-22/segments/1526794871918.99/warc/CC-MAIN-20180528044215-20180528064215-00626.warc.gz"} |
http://stats.stackexchange.com/questions/44199/tail-bounds-on-a-function-of-normally-distributed-variables | # Tail bounds on a function of normally distributed variables
I am looking for tail bounds (both at $0$ and at $\infty$) for $$Z:=\exp \left(\frac{\alpha}{4}(X-Y)^2+\frac{\alpha}{2}(X+Y)\right)$$ where $\alpha$ is a positive real and $X,Y$ are i.i.d. normal with mean $0$ and variance $\sigma^2 >> 1$. I would like to control the probability of $Z$ being outside an interval $[a,b]$ in the limit of small $a$ and large $b$.
My first approach was characteristic functions, I managed to compute $$E[\exp(i\omega \log Z)] = \frac{1}{\sqrt{1+i\alpha\sigma^2\omega}}\exp\left(-\frac{1}{4}\alpha^2\sigma^2\omega^2\right)$$ but i could not find an inverse Fourier transform or do something useful with it.
-
(+1) Welcome to the site, Sven. First, you might note that $X-Y$ and $X+Y$ are actually iid $\mathcal N(0,2\sigma^2)$ random variables and $\exp z$ is a monotonic function, so your problem reduces to finding tail bounds on $\beta \sigma^2 Z_1^2 / \sqrt{2} + \beta \sigma Z_2$ where $Z_1$ and $Z_2$ are iid standard normal. (Here $\beta = \alpha / \sqrt{2}$ and $Z_1^2$ is, of course, a $\chi^2$ random variable with one degree of freedom, independent of $Z_2$.) – cardinal Nov 22 '12 at 15:28
I find it interesting that you have $\alpha$ in front of both terms. Is that correct, or should the first term have $\alpha^2$? Also, for tail bounds, one is usually interested in achieving at least some particular rate of convergence. What would be "good enough" for the problem you are considering? – cardinal Nov 22 '12 at 15:29
First of all, thank you for the welcome and your answer. The $\alpha$ (no squares) is correct in both cases. I also head the idea to split it into chi-squared and normal, actually that is how i computed the characteristic function. But unfortunately I couldn't do any more. Regarding your question what would be 'good enough', I will have to think about that, first. I will post something about that later. But in general, the sharper, the better, obviously... – Sven Stodtmann Nov 22 '12 at 15:45
Instead of considering $X-Y$ and $X+Y$ as iid normal random variables, perhaps something like $$P\{Z>z\}=P\{aX^2+bY^2+cXY+dX+eY > \ln z\}$$ where the right side is the probability that the random point $(X,Y)$ lies outside an ellipse might work. A lower bound on the tail probability is thus the probability that $(X,Y)$ is outside the rectangle bounding the ellipse might work. An upper bound would be the probability of being outside an inscribed rectangle. Note that because of the circular symmetry, the ellipse can always be taken as having major and minor axes parallel to the $x$ and $y$ axes. – Dilip Sarwate Nov 22 '12 at 17:09
I like the idea of approaching the problem from geometry. However one problem is, that in the above case, $b^2-4ac = 0$ and we have a parabola instead of an ellipse - bounding by rectangles is probably not an option, maybe by perturbing the problem $b\rightarrow b-\varepsilon$ and then let $\varepsilon\rightarrow 0$, but somehow, I doubt this works. I will still give it a try. To answer cardinal's question what I need it for: I want to sandwich the expression $Z\in [K^{-1},K]$ "in distribution" between two i.i.d. Bernoulli variables. – Sven Stodtmann Nov 23 '12 at 9:18
Using the idea suggested by @cardinal, let $a$ and $b$ denote positive numbers and consider a random variable $Z$ defined as $Z = \exp(aX^2 + bY)$ where $X$ and $Y$ are independent standard normal random variables. Then, for $K > 1$, \begin{align*} P\{Z > K\} &= P\{\exp(aX^2 + bY) > K\}\\ &= P\{aX^2 + bY > \alpha\} & \text{where}~\alpha = \ln K\\ &= \int_{-\infty}^\infty \phi(x)P\{aX^2 + bY > \alpha\mid X = x\}\,\mathrm dx\\ &= \int_{-\infty}^\infty \phi(x)P\left\{Y > \frac{\alpha-ax^2}{b}\right\}\,\mathrm dx\\ &= \int_{-\infty}^\infty \phi(x)Q\left(\frac{\alpha-ax^2}{b}\right)\,\mathrm dx\\ &= E\left[Q\left(\frac{\alpha-aX^2}{b}\right)\right] \end{align*} where $\phi(\cdot)$ is the standard normal density function and $Q(\cdot)$ is the complementary cumulative probability distribution function of a standard normal random variable. I suspect that this integral cannot be computed analytically, but its value might be computable very fast by numerical integration (cf. this answer by whuber for a different problem).
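A minimal numerical check of that last expectation (my own sketch; here $a$ and $b$ simply stand for the rescaled positive constants introduced above, and the example values in the call are made up):

```python
import numpy as np
from scipy import integrate
from scipy.stats import norm

def upper_tail(K, a, b):
    """P{Z > K} = E[ Q((ln K - a X^2)/b) ] for X ~ N(0,1), evaluated by quadrature."""
    alpha = np.log(K)
    integrand = lambda x: norm.pdf(x) * norm.sf((alpha - a * x**2) / b)
    value, _ = integrate.quad(integrand, -np.inf, np.inf)
    return value

# Example call with illustrative constants only:
print(upper_tail(K=10.0, a=2.0, b=1.5))
```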
Now, $Q((\alpha -ax^2)/b)$ is an even function of $x$ with asymptotic value $1$ as $x \to \pm\infty$ and a minimum value of $Q(\alpha/b) <\frac{1}{2}$ at $x=0$. So, we get the obvious lower and upper bounds $$Q(\alpha/b) < P\{Z > K\} < 1.$$ Tighter upper bounds can be obtained by bounding $Q((\alpha - ax^2)/b)$ from above by $1$ for $|x| > \beta$ for some suitable $\beta$; and by straight lines through $(-\beta,1)$ and $(0,Q(\alpha/b)$, and through $(0,Q(\alpha/b)$ and $(\beta, 1)$ for $|x| \leq \beta$. Since $x\phi(x)$ is a perfect integral, the expected value of this upper bound can found as something like $2Q(\beta) + f(\beta)$ where $f(\cdot)$ is an exponential function of $\beta$, and one could even choose the value of $\beta$ to minimize this upper bound.
Alternatively, note that $Q(t) \leq \frac{1}{2}\exp(-t^2/2)$ for $t \geq 0$, and so for $-\sqrt{\alpha/a} \leq x \leq \sqrt{\alpha/a}$, we have $$Q((\alpha - ax^2)/b)\leq \frac{1}{2}\exp(-((\alpha - ax^2)/b)^2/2)$$ which might lead to a better bound since $\phi(x)$ is large only when $|x|$ is small and that is exactly where we have a better upper bound that the straight-line bounds mentioned above. | 2014-08-02 04:29:59 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9881340861320496, "perplexity": 158.5267000192262}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1406510276353.59/warc/CC-MAIN-20140728011756-00298-ip-10-146-231-18.ec2.internal.warc.gz"} |
http://www.physicsimplified.com/2014/06/transformation-of-christoffel-symbol.html | ## Pages
### Transformation of Christoffel Symbol
We have the metric transformations between the two different coordinate systems as;
$$g_{\mu '\nu '}'=\frac{\partial x^{\mu}}{\partial y^{\mu '} }\frac{\partial x^{\nu}}{\partial y^{\nu '} }g_{\mu \nu}$$ We also know that the Christoffel symbol in terms of the metric tensors is as follows $$\Gamma_{\mu \nu}^{\lambda}=\frac{1}{2} g^{\lambda \rho}\left(\partial_{\mu}g_{\nu \rho} +\partial_{\nu}g_{\rho \mu}-\partial_{\rho}g_{\mu \nu}\right)$$ This then implies that the christoffel symbol in the primed coordinate system is then; $$\Gamma_{\mu ' \nu '}^{\lambda '} =\frac{1}{2} g^{\lambda ' \rho '}\left(\partial_{\mu '}g_{\nu ' \rho '} +\partial_{\nu '}g_{\rho ' \mu '}-\partial_{\rho '}g_{\mu ' \nu '}\right)$$ Our aim here, is to find the transformation relation between these christoffel symbols which are in different coordinate system. We first have to find the derivative of the metric tensor in the primed coordinate system. Let us differentiate with respect to $\lambda '$ $$\partial_{\lambda '}g_{\mu ' \nu '}'=\frac{\partial x^{\lambda } }{\partial y^{\lambda '} }\frac{\partial x^{\mu } }{\partial y^{\mu '} }\frac{\partial x^{\nu } }{\partial y^{\nu '} } \partial_\lambda g_{\mu \nu}+g_{\mu \nu}\left(\frac{\partial x^{\nu } }{\partial y^{\nu '} }\frac{\partial^{2} x^{\mu } }{\partial y^{\lambda '}\partial y^{\mu '} }+ \frac{\partial x^{\nu } }{\partial y^{\mu '} }\frac{\partial^{2} x^{\mu } }{\partial y^{\lambda '}\partial y^{\nu '} } \right)$$ We know that the christoeffel symbol in the primed coordinate system has the derivatives of metric of the cycle $\rho ' ,\mu ' ,\nu '$.
Using this expression three times and relabeling the indices, one can write;
$$\partial_{\mu '}g_{\nu ' \rho '} +\partial_{\nu '}g_{\rho ' \mu '}-\partial_{\rho '}g_{\mu ' \nu '}=\frac{\partial x^{\lambda } }{\partial y^{\lambda '}}\frac{\partial x^{\rho } }{\partial y^{\rho '}}\frac{\partial x^{\nu } }{\partial y^{\nu '} }\left(\partial_{\lambda}g_{\rho \nu}+\partial_{\nu}g_{\rho \lambda}-\partial_{\rho}g_{\nu \lambda}\right)+$$ $$g_{\rho \nu}\left(\frac{\partial x^{\nu } }{\partial y^{\nu '} }\frac{\partial^{2} x^{\rho } }{\partial y^{\lambda '}\partial y^{\rho '} }+\frac{\partial x^{\nu } }{\partial y^{\lambda '} }\frac{\partial^{2} x^{\rho } }{\partial y^{\nu '}\partial y^{\rho '} }+ 2\frac{\partial x^{\nu } }{\partial y^{\rho '} }\frac{\partial^{2} x^{\rho } }{\partial y^{\lambda '}\partial y^{\nu '} }-\frac{\partial x^{\rho } }{\partial y^{\lambda '} }\frac{\partial^{2} x^{\nu } }{\partial y^{\rho '}\partial y^{\nu '} }-\frac{\partial x^{\rho } }{\partial y^{\nu '} }\frac{\partial^{2} x^{\nu } }{\partial y^{\rho '}\partial y^{\lambda '}}\right)$$ Because, the metric is symmetric in $\rho$ and $\nu$ we are just left with:
$$\partial_{\mu '}g_{\nu ' \rho '} +\partial_{\nu '}g_{\rho ' \mu '}-\partial_{\rho '}g_{\mu ' \nu '}=\frac{\partial x^{\lambda } }{\partial y^{\lambda '}}\frac{\partial x^{\rho } }{\partial y^{\rho '}}\frac{\partial x^{\nu } }{\partial y^{\nu '} }\left(\partial_{\lambda}g_{\rho \nu}+\partial_{\nu}g_{\rho \lambda}-\partial_{\rho}g_{\nu \lambda}\right)+$$ $$2g_{\rho \nu}\frac{\partial x^{\nu } }{\partial y^{\rho '} }\frac{\partial^{2} x^{\rho } }{\partial y^{\lambda '}\partial y^{\nu '} }$$ Now substituting the result in the primed christoffel symbol we have the following:
$$\Gamma_{\mu ' \nu '}^{\lambda '}=\frac {1}{2}\frac{\partial y^{\mu '}}{\partial x^{\mu} }\frac{\partial y^{\nu '}}{\partial y^{\nu} }g^{\mu \rho}\left(\frac{\partial x^{\lambda } }{\partial y^{\lambda '}}\frac{\partial x^{\rho } }{\partial y^{\rho '}}\frac{\partial x^{\nu } }{\partial y^{\nu '} }\left(\partial_{\lambda}g_{\rho \nu}+\partial_{\nu}g_{\rho \lambda}-\partial_{\rho}g_{\nu \lambda}\right)+2g_{\rho \nu}\frac{\partial x^{\nu } }{\partial y^{\rho '} }\frac{\partial^{2} x^{\rho } }{\partial y^{\lambda '}\partial y^{\nu '} }\right)$$
$$=\frac{\partial y^{\mu '}}{\partial x^{\mu} }\frac{\partial x^{\lambda}}{\partial y^{\lambda '} }\frac{\partial x^{\nu }}{\partial y^{\nu '} }\frac {1}{2}g^{\mu \rho}\left(\partial_{\lambda}g_{\rho \nu}+\partial_{\nu}g_{\rho \lambda}-\partial_{\rho}g_{\nu \lambda}\right)+\frac{\partial y^{\mu '} }{\partial x^{\mu} }\delta_{\rho}^{\nu}\delta_{\nu}^{\mu}\frac{\partial^{2} x^{\rho } }{\partial y^{\lambda '}\partial y^{\nu '} }$$ Thus, we have the transformation relation between the christoffel symbol as follows;
$$\Gamma_{\mu ' \nu '}^{\lambda '}=\frac{\partial y^{\mu '}}{\partial x^{\mu} }\frac{\partial x^{\lambda}}{\partial y^{\lambda '} }\frac{\partial x^{\nu }}{\partial y^{\nu '} }\Gamma_{ \nu \lambda}^{\mu}+\frac{\partial y^{\mu '} }{\partial x^{\mu} }\frac{\partial^{2} x^{\mu} }{\partial y^{\lambda '}\partial y^{\nu '} }$$
Unknown said...
While relabeling tags you provide the negative term with a common index with the contravariant metric which wasn't the case originally, ie you change an index to another currently in use but different from the one you changed. Why can you do this?
George said...
I agree with Unknown who commented December 2018. One thing is that the indices μ,ν in the denominator of the last term of the fourth equation are the wrong way round. Another is that the indices on the unprimed and primed Christoffel symbols in the last equation have moved around in a very odd way. It is not right!
I have written out the correct derivation here. (https://www.general-relativity.net/2019/03/transformation-of-christoffel-symbol.html)
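For reference, the transformation law as usually quoted in standard texts, transcribed here into this post's notation (unprimed coordinates $x$, primed coordinates $y$), is
$$\Gamma_{\mu ' \nu '}^{\lambda '}=\frac{\partial y^{\lambda '}}{\partial x^{\lambda}}\,\frac{\partial x^{\mu}}{\partial y^{\mu '}}\,\frac{\partial x^{\nu}}{\partial y^{\nu '}}\,\Gamma_{\mu\nu}^{\lambda}+\frac{\partial y^{\lambda '}}{\partial x^{\lambda}}\,\frac{\partial^{2} x^{\lambda}}{\partial y^{\mu '}\,\partial y^{\nu '}}$$
so the inhomogeneous term carries the same free indices $\mu'$, $\nu'$, $\lambda'$ as the left-hand side.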
George said...
In addition this proof is for a torsion-free metric-compatible connection. It would be better to have a proof for any type of connection. I learnt this in Spacetime and Geometry: An Introduction to General Relativity by Sean M Carroll. He also had an error in the transformation! The correct proof and transformation are also now on www.general-relativity.net HERE. | 2020-01-26 05:54:39 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8174915313720703, "perplexity": 282.2672838480211}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579251687725.76/warc/CC-MAIN-20200126043644-20200126073644-00240.warc.gz"} |
https://www.thomas-gruber.at/?p=376 |
Over the last few months it has been both a side project at work and part of my Master Thesis to write an integration of the Ranorex API into the Robot Test Automation Framework. It all started with an idea, and the first steps were to make a proof of concept, then to find out which parts of Ranorex can be transferred to Robot, and how this all can work from a technical perspective.
Fast forward to the results (as I think that everything else is not really interesting). This integration now makes it possible to write keyword-driven tests in Robot utilizing the full power of Ranorex. This means that you can finally write GUI tests using a keyword-driven approach. Yay. (At least if you own Ranorex licenses, of course.) The project itself can be found on GitHub.
So, what does this mean in practice? What is the purpose, and which problem does this integration solve?
Well, you can now do something like this (in Robot syntax):
*** Test Cases ***
Calculate 1 Plus 5
    Run Application    calc.exe
    Press Button 1
    Press Button +
    Press Button 5
    Press Button =
    Validate Result    6
    Close Application    /winapp[@packagename='Microsoft.WindowsCalculator']
So, we just write a test using simple keywords. However, if you try to run this test in Robot, it will fail, because none of these keywords are known to Robot. Therefore, you have to import the RanorexLibrary into your Robot test. If you have placed the path to the RanorexLibrary files in your system path, then this is as simple as this:
*** Settings ***
Library    RanorexLibrary    path\\to\\Ranorex\\Bin
Now, Robot would know the keywords „Run Application“ and „Close Application“ and the test would be able to correctly start and close the Windows calculator. The other keywords aren’t defined anywhere, yet. But that’s no problem, since we can just create keywords from other keywords:
*** Keywords ***
Press Button 1
    Click    /winapp[@packagename='Microsoft.WindowsCalculator']//button[@automationid='num1Button']
Press Button +
    Click    /winapp[@packagename='Microsoft.WindowsCalculator']//button[@automationid='plusButton']
Press Button 5
    Click    /winapp[@packagename='Microsoft.WindowsCalculator']//button[@automationid='num5Button']
Press Button =
    Click    /winapp[@packagename='Microsoft.WindowsCalculator']//button[@automationid='equalButton']
Validate Result
    [Arguments]    ${result}
    Validate Attribute Equal    /winapp[@packagename='Microsoft.WindowsCalculator']//text[@automationid='normalOutput']    Text    ${result}
Now, we have also defined the other missing keywords, using other keywords that themselves are defined in the RanorexLibrary, here the keywords „Click“ and „Validate Attribute Equal“. The „Click“ is passed on to the Ranorex .dll files and executed by the Ranorex core, and after starting the Robot test, Ranorex will take over the mouse cursor, move it to the element that is specified by its RanoreXPath, and click it.
The RanoreXPath is still the way how you address UI elements, same as it would be in plain Ranorex. If used correctly, a tester would apply the Page Object Pattern to abstract the RanoreXPath away from the actual test case, and the test case itself would be a very simple, easy-to-understand and easy-to-write collection of keywords. The beauty of keyword-driven testing shows here: It is flexible, powerful, but also simple and user-friendly.
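To make the plumbing concrete: a Robot Framework keyword library is, at its core, just a Python class whose public methods become keywords. The sketch below is not the actual RanorexLibrary, only an illustration of that mechanism; the `_ranorex_click` helper is a hypothetical placeholder standing in for whatever binding into the Ranorex .NET API the real library uses.

```python
class MinimalGuiLibrary:
    """Illustrative Robot Framework library: each public method becomes a keyword."""

    ROBOT_LIBRARY_SCOPE = "GLOBAL"

    def click(self, ranorexpath):
        """Click the UI element identified by the given RanoreXPath."""
        self._ranorex_click(ranorexpath)   # placeholder for the real Ranorex call

    def _ranorex_click(self, ranorexpath):
        # In the real integration this would delegate to the Ranorex core,
        # which moves the mouse to the element and clicks it.
        print(f"Clicking element at {ranorexpath}")
```

In a test, `Click    /winapp[...]` would then resolve to the `click` method above, with the RanoreXPath passed in as its argument.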
If you are interested in more information, you can find my whole thesis describing the integration here.
Published inComputer Science | 2021-02-26 18:39:25 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2509595453739166, "perplexity": 2368.1708094105197}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178357935.29/warc/CC-MAIN-20210226175238-20210226205238-00155.warc.gz"} |
https://brilliant.org/practice/geometry-warmup-angles-and-lines/?subtopic=geometry-2&chapter=angles-and-lines |
Basic Mathematics
Geometry Warmup - Angles and Lines
Line $$p$$ intersects line $$q.$$ What is the value of $$x?$$
$$x$$ is how many degrees larger than $$z?$$
The two vertical lines are perfectly parallel. What is the sum of the three blue angles?
Given that all three of the horizontal lines are parallel, what is the measurement of the red angle in degrees?
Note: The diagram is not drawn to scale.
What is the value of $$a + b + c + d ?$$
× | 2018-01-24 07:46:08 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.46646732091903687, "perplexity": 471.6685879310013}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084893530.89/warc/CC-MAIN-20180124070239-20180124090239-00077.warc.gz"} |
http://www.beautifulwork.org/difference-engine | # difference engine.
Difference Engine
Weierstrass:
A machine to compute mathematical tables
– Any continuous function can be approximated by a polynomial
– Any polynomial can be computed from difference tables
$$\operatorname{int}(n) = n^{2}+n+41, \qquad d_1(n) = \operatorname{int}(n) - \operatorname{int}(n-1) = 2n, \qquad d_2(n) = d_1(n) - d_1(n-1) = 2$$
Make a table with these equations.
Using only addition, you can find the next value of the function for a new "n".
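A small sketch of the idea (Python; the function and its differences are exactly the ones tabulated above, and only additions are used inside the loop):

```python
def difference_engine(n_terms):
    """Tabulate int(n) = n^2 + n + 41 the way a difference engine would: by repeated addition."""
    value, d1, d2 = 41, 0, 2      # int(0) = 41, d1(0) = 0, second difference is constant
    table = []
    for n in range(n_terms):
        table.append((n, value))
        d1 += d2                  # d1(n+1) = d1(n) + 2
        value += d1               # int(n+1) = int(n) + d1(n+1)
    return table

print(difference_engine(5))       # [(0, 41), (1, 43), (2, 47), (3, 53), (4, 61)]
```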
source : http://ocw.mit.edu/courses/electrical-engineering-and-computer-science/6-823-computer-system-architecture-fall-2005/lecture-notes/ | 2019-05-22 03:37:16 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 1, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6270110607147217, "perplexity": 1569.76850396372}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232256724.28/warc/CC-MAIN-20190522022933-20190522044933-00099.warc.gz"} |
https://www.physicsforums.com/threads/legit-maths-in-movies.28833/ | # Legit maths in movies?
1. Jun 2, 2004
### dcl
Hey, I was watching Good Will Hunting today to motivate me for my upcoming exams. In movies like these there are usually some scenes of chalkboards filled with math or what have you, and I've always wondered if they made sense or if the maths was legit.
Check out this screen cap:
Is this question(s) legit, does it make sense?
Also, anyone know of any other movies in similar vein to Good Will Hunting? I absolutely love it.
2. Jun 2, 2004
### matt grime
The mathematics in Good Will is reasonable, that is, it looks proper. I can't say I read it too closely. However its portrayal of mathematics is terrible, in particular the conceit that an alleged Fields Medallist is unable to recall a 1 page proof of a problem he is very familiar with cannot be allowed to stand.
In A Beautiful Mind they employed some mathematicians to make sure what appears on screen looks correct. That didn't stop them misspelling Nobel as Noble at one point though.
3. Jun 2, 2004
### Njorl
I've noticed "The Far Side" usually has mathematical gobbledygook, but once in a while, he slips in a real equation.
Njorl
4. Jun 2, 2004
### killerinstinct
There are many movies where legit math is associated:
Apollo 13, IQ, It's My Turn, Lambada, Rain Man, Moebius, etc.
Check these out
5. Jun 2, 2004
### Gokul43201
Staff Emeritus
Sometimes, they go too far, trying to make the math be "correct".
For instance, in The Beautiful Mind, Nash is doodling a derivation of something on his window pane. In each step (on 3 or 4 consecutive lines) there appears an infinite sum with "n=0 to infinity" appearing below and above the summation symbol, sigma. I can't imagine anyone being so conscientious about filling in the extents of summation, line after line after line, for the same sum, especially when he is on the verge of a major breakthrough.
I think they got the math, but not the mathematician.
6. Jun 2, 2004
### fourier jr
I don't know a lot of combinatorics, but an undergrad could probably solve that problem easily. It may make sense & be real math, but it's not cutting-edge, research-level math like they make it out to be in the film.
I've also noticed math in the Simpsons that makes sense.
7. Jun 2, 2004
### Muzza
Yeah, "e^(i pi) + 1 = 0" in the episode where Homer gets sucked in to the real world. There's also that episode where Homer finds a pair of glasses in the toilet, puts them on, and makes some Pythagroean theorem-esque statement (it wasn't correct though).
(You think I've watched too many Simpsons episodes?)
8. Jun 2, 2004
### quartodeciman
Someone once told me about a cheapo scifi movie from the 1960s. A horrible mad scientist was causing all kinds of destruction until near the end, when he got himself caught by his own evildoing. It was then revealed that he had gained his irresistible destructive power by figuring out how to integrate all the way from minus infinity up to plus infinity.
9. Jun 2, 2004
### Zurtex
I believe Homer says something along the lines of:
"The sum of the square of two sides of an isosceles triangle is equal to the square of the other side"
And then someone shouts out correcting him. Oh also the equation given in the 3D universe is $e^{i\pi}=-1$
"The Saint" was on in the background the other day and it was this scientist explaining their method of how to make cold fusion work, I'm no physicist but I am fairly sure they had got it all wrong as the scientist started off by talking about "positively charged neutrons"
10. Jun 2, 2004
### Njorl
Now that is a mathematical impossibility!
Njorl
11. Jun 3, 2004
### Icarus
If I only had a brain
To go way back, there is always the scene from The Wizard of Oz wherein the Scarecrow gets his Diploma and demonstrates his newfound intelligence by quoting "the square root of the hypotenuse is equal to the sum of the square roots of the other two sides". I've always wondered if that was intentional or the result of Hollywood mathematics.
12. Jun 4, 2004
### FlatlineLemon
i use a A Beautiful Mind while prepping for exams, works like a charm. The Core has some interesting physics and mathematical theories in the film. but they seem kinda out there, and only take up 10 minutes of the movie.
13. Jun 5, 2004
### dcl
Yeh, I've seen the core.. Wasn't bad.. Just way too far 'out there'
14. Jun 5, 2004
### Janitor
That reminds me...
I know someone who insists that the momentum transferred to the shooter of the gun is way less than the momentum transferred to the person struck by the bullet. I try to tell him that he has seen too many Hollywood movies, where the shooter casually hoses 50-caliber machine gun ammo at the bad guys without bracing himself and without breaking a sweat. Those hit by the ammo, of course, are knocked on their keesters in a spray of blood, to good dramatic effect.
15. Jun 5, 2004
### fourier jr
no you do too much math. normal people wouldn't care what's on the board. :rofl:
16. Jun 5, 2004 | 2016-10-24 20:13:19 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3641568124294281, "perplexity": 2905.346770290162}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988719754.86/warc/CC-MAIN-20161020183839-00379-ip-10-171-6-4.ec2.internal.warc.gz"} |
https://math.stackexchange.com/questions/1190295/definition-of-riemann-integral | # Definition of Riemann integral
I am trying to prove that we cannot use the definition of the Riemann integral as
$$\int_{a}^{b} f(x) dx = \lim_{n \rightarrow \infty} S_n$$ using the Dirichlet function. I don't know if my reasoning makes sense when I say we cannot use it as the definition because the values are not unique:
By the definition of the Riemann integral, we know that the Riemann integral $A = \int_{a}^{b} f(x) dx$ is unique for all functions $f$ and for all intervals $[a,b]$ s.t. $a,b, \in \mathbb{R}$. If we are to assume $\lim_{n \rightarrow \infty} S_n = \int_{a}^{b} f(x) dx$, $\lim_{n \rightarrow \infty} S_n$ must also be unique for all functions $f$ and for all intervals $[a,b]$ s.t. $a,b \in \mathbb{R}$.
Consider the function $f:[0,1] \rightarrow \mathbb{R}$, $$f(x) = \begin{cases} 1, & \text{if $x$ is rational} \\ 0, & \text{if $x$ is irrational} \end{cases}$$ defined on the interval $[0,1]$. We know that for any partition $0 = x_0<x_1<x_2<x_3<\dots<x_N = 1$ of $[0,1]$, we can choose the $x'_i$s to be either all rational or all irrational, in which case the Riemann sums are respectively $1$ or $0$. As this holds for any partition, we get two values for $\lim_{n \rightarrow \infty } S_n$ if we let the number of divisions $N \rightarrow \infty$ and the width $d \rightarrow 0$. Therefore we see that although the Riemann integral $A$ has to be unique, the limit of the Riemann sum $\lim_{n \rightarrow \infty} S_n$ can have two values, depending on the choice of the points $x'_i$. Therefore the equality $\int_{a}^{b} f(x)\, dx = \lim_{n \rightarrow \infty} S_n$ does not hold for all functions and intervals.
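To make the step explicit for the uniform partition $x_k = k/N$ (this is just the question's argument written out):
$$\sum_{k=1}^{N} f(x'_k)\,\frac{1}{N} = \begin{cases} \sum_{k=1}^{N} \frac{1}{N} = 1, & \text{all tags } x'_k \text{ rational},\\[4pt] \sum_{k=1}^{N} 0 = 0, & \text{all tags } x'_k \text{ irrational},\end{cases}$$
so the tagged sums have no common limit as $N \to \infty$.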
You should know that f is said to be integrable (in the Riemann sense) if the limit of all the possible $S_{n}$ are exist and are the same value I, in this case we can state two things: the function f is integrable in the Riemann sense AND the value of its integral is that common limit I. So if there are at least two possible $S_{n}$ such that their limits exist and are not the same so this means that the function is not integrable. (Your reasoning would be true, if it is true that "any function is integrable and its integral it a unique value" but it is not the case!) There any many other kinds of integration (beside that of Riemann) one reason of their existence is that not all functions are integrable with respect to only one kind of integration. | 2019-06-24 15:15:01 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.977213978767395, "perplexity": 83.0061058768772}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627999615.68/warc/CC-MAIN-20190624150939-20190624172939-00100.warc.gz"} |
https://math.stackexchange.com/questions/1431876/kolmogorovs-probability-axioms | # Kolmogorov's probability axioms
Why are Kolmogorov's axioms considered such a breakthrough in probability theory? They are just 3 simple statements everyone can agree with.
When creating a system of axioms like this, it's necessary that the list of axioms is complete. Suppose we forget about Kolmogorov's 3rd axiom. Then we would have 2 axioms everyone could agree with when thinking about probability. Does it mean the 2 axioms are enough to claim this is a good axiomatic system of probability? We know it's not, because the 3rd axiom is left out. But maybe these 3 axioms are insufficient as well, in a similar manner.
Look at Euclid's fifth axiom (parallel postulate). If we omit the fifth postulate, we get hyperbolic geometry, which is certainly not what we wanted to have. A similar question arises here - are those axioms sufficient? Are we sure we won't get any unintended results just following these 3 axioms?
Or maybe the statement that a given set of axioms agrees with our intuition of, let's say, probability must itself be treated as an axiom. We cannot prove it. Kolmogorov's axioms have survived so many years with no major complaints, so they are believed to match our intuition of what probability is accurately. But there are areas where they don't work (like quantum mechanics, which is well known for being weird and counter-intuitive). So why do those axioms apparently work in our 'common' and 'everyday' probability problems? Maybe we just haven't discovered a case where they fail?
Quoting The Logico-Algebraic Approach to Quantum Mechanics Volume I: Historical Evolution, C.A. Hooker Editor, page 172:
It is obvious that since the Kolmogorov axioms are rooted in empirical experience, any change in the theory, if by such change one wants to extend its applications to the physical world, should spring directly from some phenomenological considerations. Anticipating our discussions in the subsequent sections one might say that the point of departure for the contemplated change in the model can be traced to the remarkable discovery that the physical systems arising in quantum physics are of such nature that one is no longer entitled to make the assumption that the associated experimental proposition constitute a Boolean sigma-algebra. As a consequence, the conventional i.e. the Kolmogorov formalism of probability theory is inadequate for a precise description of these systems. As a spectacular instance of such failure we may mention the facts that the notion of disjoint events is at a somewhat deeper level and that the identity $P(A+B)=P(A)+P(B)-P(AB)$ is not always true (the examples of Feynman are concerned with this failure among other things).
• Does someone understand what the phrase the notion of disjoint events is at a somewhat deeper level means? Spectacularly or not... – Did Sep 12 '15 at 8:35
• @Did I am not sure, but a possible interpretation could be: if $A$ is the $x$-coordinate and $B$ the $y$-coordinate of an electron in two dimensions, then we cannot compute $AB$ since this measurement is not feasible by the uncertainty principle - so actually the notion of $AB$ has no meaning in this context – user190080 Sep 14 '15 at 12:13
• The $x$- and $y$-coordinates of an electron commute and can be measured simultaneously. To make the prev comment physically correct one need for example the coordinate $x$ and the momentum $p_x$. – kludg Oct 16 '17 at 13:55
You seem to be tackling several issues at once. First though, some inaccuracies. You write "when creating a system of axioms like these..." I'm not sure what 'these' refers to. Then you say "it's necessary the list of axioms is complete." Do you mean by 'complete' that there is only one model of the axioms (up to isomorphism)? if so, why is that necessary for modelling probability events? You comparison with the axioms of geometry is unclear as well. If you omit the fifth, you do not automatically get hyperbolic geometry, you can also get projective geometry. To claim that any of those is not what we wanted to have is peculiar, particularly from a modern perspective. Geometry encompasses much more than just Euclidean geometry. And again, even with the fifth there is not just one (up to isomorphism) Euclidean geometry, but infinitely many (of various dimensions).
Now I will try to address the question of what is so great about Kolmogorov's axiomatisation. The mathematics of probability is fraught with difficulties, both conceptual and technical. There are endless examples of seemingly simple questions that turn out to be very complicated or have severely counter intuitive answers (The Monty Hall paradox for instance). Problems that appear identical may turn out to be significantly different just because of changes in the protocol. In short, it's not easy.
Having said that, the probability theory of finite probability spaces is quite simple, at least in the sense that it is clear how to model finite probability spaces: Given a finite set of events, the probability of a subset of events is the ratio of that subset to the entire set. Sweet. From it flows quite a lot, but only when the total set of events is finite.
Often, the set of events is infinite. For instance, modelling throwing a dart at a dartboard is often done by imagining the dart board as a disk in $\mathbb R^2$, and then a throw of a dart corresponds to a choice of a point in the disk. Of course the disk has infinitely many points. What is the probability that the dart hits a given point, say the centre of the disk? Well, assuming the dart lands randomly at a uniform distribution over all points, the only possible answer is $0$. A point is just too small. This is already counter intuitive enough and raises the question of how to model all of this. Well, this is all related to the notion of how big a set is. An innocent question with a highly complicated answer. It's not simple at all to develop the theory that answers this question - measure theory. Issues related to the axiom of choice quickly creep up. A famous theorem of Vitali shows that it is impossible (assuming the axiom of choice) to meaningfully assign a measure to each and every subset of $\mathbb R$.
Now, measure theory was not developed to provide some foundations of probability theory. Instead it arose from questions of integrability. Kolmogorov's wonderful insight was that he realised the same formalism can be used to turn the intuition of what probability theory should be (as you say, pretty obvious axioms) into actual axioms. Before measure theory and Kolmogorov's seminal contribution nobody knew how to meaningfully and accurately work with infinite probability spaces. Thanks to Kolmogorov a formalism was born. Now that is truly wonderful.
Lastly, the paragraph you quote is talking about something all together different. Quantum mechanical considerations defy many conceptually obvious properties. Among them Kolmogorov's axiomatisation of probability. In the world of quantum mechanics even probability behaves differently than what we are used to. Such is life.
• I've edited my question, explaining what I mean saying 'complete' in this context.. – user4205580 Sep 12 '15 at 9:47
• I guess the fact that a given set of axioms agrees with our intuition of, let's say, probability must itself be an axiom. We cannot prove it. Kolmogorov axioms survived so many years with no major complaints, then they are believed to match our intuition here. Would you agree with that? – user4205580 Sep 12 '15 at 10:25
• You. I thought it's obvious, sorry. It's a comment to your answer anyway. – user4205580 Sep 12 '15 at 10:43
• @user4205580 I don't quite agree that it's an axiom. Just like in Physics, where a theory has predictive power and the predictions can be checked against reality, so is it with a mathematical theory. The chosen axioms have a predictive power (the resulting theorems) and those predictions can be checked against what we expect or what we like the theory to achieve. That last part, of checking the actual theorems against what we'd like the theory to achieve is outside of mathematics. – Ittay Weiss Sep 12 '15 at 19:45
• @IttayWeiss, what do you mean by "Problems that appear identical may turn out to be significantly different just because of changes in the protocol."? – Conrado Costa Sep 15 '15 at 15:40
Kolmogorov was both interested in axioms and how probability realizes in systems. For the latter, see this paper.
Probability is notoriously difficult to correctly axiomatize. Kolmogorov's probability was a revolution in that it laid the foundations for a theory that is not only rigorous, but very applicable. The only similar "easy" example I can think of is the notion of compact sets for proving stuff in real analysis.
Kolmogorov's axioms by themselves are nothing new. However, it was Kolmogorov's reinterpretation of probability through measure theory that was truly revolutionary. This allowed for a much broader and more rigorous foundation for probability theory. Everything from Kolmogorov's 0-1 Law, to interpreting $P(A|B)$ when $P(B)=0$, becomes natural and useful in this measure theoretic approach. A further example is Brownian motion, whose rigorous foundations are solely rooted in measure theory.
Whether or not Kolmogorov's theory works in quantum mechanics is a completely separate issue. Quantum probability is a generalization, and you can find ways of connecting it in Kolmogorov's theory here.
Isn't the reason for their success precisely the fact that the Kolmogorov axioms are
• small in number
• simple staements
• everyone can agree with?
(I repeat here the points of your statement, but doesn't your quote contradict the last of these points?)
It gets a bit problematic when we talk about completeness in this context: The intent of Euclid's axioms was to describe a single abstract object, "the" geometry of "the" plane (or "the" 3D space). We might also ask: Are the three group axioms (associativity, neutral, inverse) complete? In a sense they are not, for neither the statement $\forall x,y\colon xy=yx$ nor its negation can be proved from them. But that is because these axioms are there to describe many objects (i.e., models of the axiom system). And on the other end of the spectrum there are structures that fail to be groups (such as $\mathbb N$) and therefore do not suggests themselves to be treated with group theory methods.
Kolmogorov's axioms fall more into the second category: they are applicable to many different situations. And if $P(A\lor B)=P(A)+P(B)-P(A\land B)$ does not hold in real life, then this cannot be modelled as probability, just as $\Bbb N$ is not a group.
• It may sound a silly question but why different mathematical topics have different axioms but they use each other? For example when we add probabilities we add numbers but numbers or addition and their axioms are not postulated in probability theory. – user599310 Jul 30 '20 at 16:35
Concerning the Quantum Mechanical Probability, maybe this Wikipedia reference is a nice read:
Especially the sections The laws of calculating probabilities of events and In the context of the double-slit experiment are relevant. In the latter section we find the formula for addition of the complex probability amplitudes $\psi$ of two independent events, say $\psi_1$ and $\psi_2$, not resulting in the "common" probability $P$: $$P \ne \left| \psi_1 \right|^2 + \left| \psi_2 \right|^2$$ But in the following: $$P = \left| \psi_1 + \psi_2 \right|^2 = \left| \psi_1 \right|^2 + \left| \psi_2 \right|^2 + 2 \left| \psi_1 \right| \left| \psi_2 \right| \cos(\theta_1-\theta_2)$$ Here $\,\theta_{1,2}$ are the (complex) arguments of $\,\psi_{1,2}$. The last term is crucial for describing quantum mechanical behavior. This is the "deeper level" as mentioned in the question. In fact, as Richard Feynman says: "it contains the only mystery" (The Feynman Lectures on Physics III, section 1-1).
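The cross term comes directly from expanding the squared modulus; writing $\psi_k = |\psi_k|\, e^{i\theta_k}$,
$$\left|\psi_1+\psi_2\right|^2 = (\psi_1+\psi_2)\,\overline{(\psi_1+\psi_2)} = |\psi_1|^2 + |\psi_2|^2 + \psi_1\bar{\psi}_2 + \bar{\psi}_1\psi_2 = |\psi_1|^2 + |\psi_2|^2 + 2\,|\psi_1|\,|\psi_2|\cos(\theta_1-\theta_2).$$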
But Quantum Mechanics is not the only context where "probability" is different from Kolmogorov probability. There are two other issues related to this that I find bothering:
• Are you saying one of Kolmogorov's axioms are violated? From the Wikipedia page you site, the third axiom on the sum of probabilities of mutually exclusive events is stated as applying to quantum probability amplitudes (see section on laws of calculating probability amplitudes). There is no rule in Kolmogorov's axioms requiring that probabilities of independent events add. – cantorhead Aug 14 '19 at 18:58
Note that, as stated, the 'rule' mentioned above has multiple interpretations: $$(A)\quad P(A\vee B)=P(A)+P(B)-P(AB)$$ is not, as stated, an axiom of Kolmogorov's system,
unless you interpret $P(AB)$ as the set-theoretic intersection $P(A\cap B)$, which is how it is, or was, traditionally done.
Here $A\cap B$ is an event $E$, $E \in F$, and $F$ is the Boolean algebra of events.

An event $E$ is denoted by a set of atomic events $\Omega_{i}\in \Omega$, which are generally, if not always, mutually exclusive; $\Omega$ is the sample space.

In other words, an event is a union of singleton sets $\{\Omega_{i}\} \in F$ denoting said atomic events; these singletons are often measurable in finite cases, and thus lie in $F$, the 'Boolean or sigma algebra of measurable events'. In particular, $A\cap B$ is the union of the singleton sets (translate: the disjunction of mutually exclusive atomic events) that are in the common intersection of the sets $E_{A}$, $E_{B}$ denoting the events $A$ and $B$ respectively.

So, at the end of the day, one never really considers composite events in Kolmogorov's basic calculus that are not mutually exclusive, by which I mean that they are always reducible to disjunctions of mutually exclusive events. Whilst some events in $F$ may not be mutually exclusive of each other, the elements of the sets denoting each event individually are generally, if not always, mutually exclusive, so there are no compositions (intersections or unions) of events built from non-mutually-exclusive atomic events, or other partitions.

Unless you count $\emptyset$.

Otherwise, unions of singleton sets, or atomic events, or countable partitions (unions) of them in the infinite case, can always be analysed set-theoretically via disjoint unions of mutually exclusive events and set complementation, and so can $A\cap B$.
Although the probability calculus was, in a sense, extended later by both Kolmogorov and others, as it stands the three axioms of probability $(1)$, $(2)$ and $(3)$ do not contain $(A)$, at least not explicitly.

Everything can be done by summing up only mutually exclusive events; $(A)$ is just a quicker theoretical tool.
Perhaps the real difference is not so much at the level of the axioms, or the nature of the probability space, as in the underlying model theory: logic, measure theory and sets on the one hand, versus functions, vector spaces and inner products on the other, and in what is meant by disjoint, complementary, and closure under unions of certain forms.

See Chapter 5 of 'Foundations of Measurement', Volume 1, by Suppes, Krantz, Luce et al. for more on the distinction in quantum mechanics,

and for the view that quantum probability may be considered non-commutative, and may see a distinction in the logic between the events (1) and (2): $(1)\ P([A\cup B ]\vee [B\cup C])$ and $(2)\ P(A\vee B \vee C)$.
Where $A\cap B$ and $B \cup C=\emptyset$.
But, in the conventional system (such as kolmogorov), $P([A \vee B ]\land [B\vee C])$, is interpreted, set-theoretically as:$P([A\cup B ]\cap [B\cup C])=P({B})$ for example.
Where $A , B C$ are the atomic, mutually exclusive and exhaustive events.
Thus the probability calculus really only contains $(2)$, interpreted in terms of set-theoretic intersection, complementation and unions, although intersection is generally not required given specification of the unit.
$A\cap B$ will generally be an event in the algebra of events $\mathbb{F}\subseteq 2^{\Omega}$: a set of mutually exclusive elementary outcomes $\{D,C,\ldots\}$, or a singleton $\{D\}\in F$ with $D\in \Omega$ an atomic event, or a union of such mutually exclusive events, as is every other event in the algebra; it can always be expressed using disjoint unions and set complementation.
Moreover, $P(AB)$ is not interpreted as some kind of product or multiplicative event: when one uses set-theoretic intersection in a single probability space, $AB$ is always either a union of atomic events, the empty event, or the unit event.
There are no distinct product events in the algebra over and above these.
$$(1)\quad P(A \vee B)=P(A)+P(B)-P(A\land B)$$ $$(2)\quad P(A\cup B)=P(A)+P(B)-P(A\cap B)$$
It has been debated whether Kolmogorov's original axiom set is incompatible with quantum mechanics.
Maybe its official extensions (to measure theory, joint algebras, product sets, definite laws of large numbers, independence and so forth) are, but the original axioms say nothing about these.
A lot of the counter-examples I have seen in other articles have nothing to do with Kolmogorov's official formulation, which says nothing about products, independence, Bayes' rule or the multiplicative/product 'axioms'.
These are put forward as mere definitions, and by definition this means that:
$P(A|B)$ is merely notation that denotes $\frac{P(AB)}{P(B)}$, so writing $P(A|B)=\frac{P(AB)}{P(B)}$ says only that the ratio of two probabilities equals itself, i.e. nothing substantive. Conditional probabilities are not explicitly part of the axiomatic structure derived from the measure-theoretic interpretation of probability (if you can call it that), quite unlike the additivity of probabilities over disjoint unions, which was 'derived' from the additivity of the underlying interpretation (measure). Nor is $P(A|B)$ the probability of a conditional event, at least in the canonical calculus of probability. Nor are independence and Bayes' rule axioms: independence, Bayes' rule, conditional probabilities and so on are officially not axioms of Kolmogorov's system; they are definitions.
There may have been reasons why Kolmogorov did not officially publish his results earlier; they were published at about the same time as the quantum probability-logic of von Neumann and Birkhoff, so the two works have the same official vintage. Kolmogorov may have been aware of some of the issues, which is perhaps part of the reason for his being tentative about formalizing the product definition, Bayes' rule and independence as axioms of probability. Whilst Kolmogorov never did officially include these as axioms, only as definitions, the probability calculus was extended later, by both himself and others, to accommodate Bayes' rule and put it on a more formal, axiom-like standing. Nonetheless, as it stands, the three axioms of probability still do not contain Bayes' rule or independence, and thus neither the 'product rule' nor the rule of 'total probability' below, as axioms:
$$(1)\quad P(A \vee B)=P(A)+P(B)-P(A\land B)$$ $$(2)\quad P(A\cup B)=P(A)+P(B)-P(A\cap B)$$
The only official use is of $(2)$, as an axiom, where roughly $P(A\cup B)= P(A\setminus B)+P(B)$; but this makes no use of the notions or definitions of conditional probability, multiplicativity of probabilities as in the product rule, or independence.
Axioms: a probability space is a triple $$\langle \Omega,\mathbb{F}, P\rangle$$ where $\Omega$ is the sample space, the set of mutually exclusive and exhaustive atomic events (often called singleton events) of the algebra $\mathbb{F}$, and $\mathbb{F} \subseteq \mathcal{P}(\Omega)=2^{\Omega}$ is a Boolean algebra of events, a set of measurable subsets of $\Omega$, closed under the unit event (the certain event $\Omega$), complementation, and countable union.
$$(1)\quad P(E)\geq 0\quad \forall E\in \mathbb{F}$$ $$(2)\quad P(\Omega)=1$$ $$(3)\quad P\Big(\bigcup^{\infty}_{i=1}E_{i}\Big) = \lim_{n \to \infty} \sum^{n}_{i=1}P(E_{i})$$
where the $E_{i}$ are mutually exclusive, that is, pairwise disjoint. Countable additivity over a countable sequence of pairwise disjoint sets requires an axiom of continuity, and is sometimes formalized as a fourth axiom distinct from finite additivity.
If you read Kolmogorov's main work, he states many tentative axioms, such as the frequency principle (axiom 4/5), which he decided to reject in the end. Whilst Kolmogorov is often considered to be a frequentist in certain circles, in the end he rejected the formal frequentism of von Mises and Reichenbach, as did von Neumann in quantum mechanics; Kolmogorov may still have been a frequentist at heart.
He said explicitly that his model did not have a great deal to say, or was neutral, about the world, and that frequency data is how you obtain information; but formally speaking he rejected frequentism in the end. Any countable sequence of pairwise disjoint (mutually exclusive) events $E_{1}, E_{2},\ldots \in \mathbb{F}$ satisfies:
$$P\Big(\bigcup_{i=1}^{\infty}E_{i}\Big)= \sum_{i=1}^{\infty }P(E_{i})$$
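As a concrete illustration of the point above that $P(A|B)$ is merely notation for a ratio (a small finite example chosen only for illustration): take a fair die, $\Omega=\{1,\ldots,6\}$ with $P(\{\omega\})=1/6$ for each atomic event, $A$ the even outcomes and $B=\{1,2,3\}$. Then $$P(A|B):=\frac{P(A\cap B)}{P(B)}=\frac{P(\{2\})}{P(\{1,2,3\})}=\frac{1/6}{1/2}=\frac{1}{3},$$ a quantity computed entirely from the unconditional measure; nothing beyond the three axioms and the definition is used.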
• I thought you agreed this kind of logorrhea had nothing to do on the site? – Did Jul 27 '17 at 12:52 | 2021-04-16 02:38:54 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8392803072929382, "perplexity": 521.2885814721411}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038088471.40/warc/CC-MAIN-20210416012946-20210416042946-00446.warc.gz"} |
https://tex.stackexchange.com/questions/541378/how-to-evenly-space-images-to-reduce-white-space | # How to evenly space images to reduce white space?
I just started learning LaTeX yesterday and am getting used to all of the syntax and jargon. I apologize if this question has been asked many times before, but perhaps I don't know the right keywords.
I am wondering how to evenly space images to reduce whitespace (see image below).
Many posts are referring to eliminating whitespace above and below images after insertion or talking about floating environments. Is it that I am not using the correct environment for inserting .png images?
Here is what I have written:
\documentclass[a4paper,12pt]{article}
\usepackage{graphicx}
\graphicspath{ {"C:\Users\me\Desktop\Classwork\Spring 2020\CHE304\Online Lab 3\Non-Inverting Input"} }
\usepackage{microtype}
\usepackage{blindtext}
\usepackage{wrapfig}
\usepackage{amsmath}
\usepackage[english]{babel}
\usepackage{fancyhdr}
\usepackage[a4paper, inner=1.7cm, outer=2.7cm, top=2cm, bottom=2cm, bindingoffset=1.2cm]{geometry}
\usepackage[labelfont=bf]{caption}
\usepackage{float}
\begin{document}
\title{\Large{\textbf{Module VI: Operational Amplifiers}}}
\author{me}
\date{April 28, 2020}
\maketitle
\newpage
\section{Non-Inverting Operational Amplifier}
\begin{figure}[htb]
\centering
\caption{A non-inverting operational amplifier with a gain of 2}
\includegraphics[width=12cm]{./Non-InvertingInput/non-inverting-input.png}
\label{circuit}
\end{figure}
\begin{figure}[htb]
\centering
\caption{A non-inverting operational amplifier with a gain of 2}
\includegraphics[width=12cm]{./Non-InvertingInput/gain2x.png}
\label{chart1}
\end{figure}
\newpage
Here is the output:
• This line will have given errors \graphicspath{ {"C:\Users\me\Desktop\Classwork\Spring 2020\CHE304\Online Lab 3\Non-Inverting Input"} } you need to use / not \ even on windows, but better to simply delete it, it is not needed if the images are in the folder with the document. – David Carlisle Apr 29 at 15:22
• Hahaha thank you! I suppose it didn't throw any errors because I have been manually designating the paths below. – BactrianFan Apr 29 at 15:25
• There doesn't seem anything wrong with the figure spacing, you could reduce the lengths \floatsep and \intextsep but do you really want them closer together? best to ignore the spacing until you have written more text as it will adjust anyway depending on the text added. in a document with just two figures and no text latex does not have many options for positioning the figures well. – David Carlisle Apr 29 at 15:25
• It is the space at the bottom that I am considering. Such that the blank space before the new page is not as prominent. The next page has a figure that will not accommodate all three appropriately. – BactrianFan Apr 29 at 15:39
• sure but if your whole document just said hello world there would also be a space at the bottom. If you write some words they will (or may) fill that space. With the document as posted what could latex fill the space with? – David Carlisle Apr 29 at 15:41
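Following up on the comments above and the answer below, a minimal sketch of the two adjustments being discussed: shrinking the float separation lengths (\floatsep and \intextsep) in the preamble, and forcing the second figure to the bottom of the page with [!b]. The specific length values are illustrative, not recommendations from the answerers.
\setlength{\floatsep}{6pt plus 2pt minus 2pt}   % vertical space between two floats on the same page
\setlength{\intextsep}{6pt plus 2pt minus 2pt}  % space above and below a float placed in the text
\begin{figure}[!b]  % the ! relaxes LaTeX's usual placement limits; b = bottom of the page
\centering
\includegraphics[width=12cm]{./Non-InvertingInput/gain2x.png}
\caption{A non-inverting operational amplifier with a gain of 2}
\label{chart1}
\end{figure}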
You can (most likely) force the second float to the bottom of the page by using [!b] but only do this after all editing done as float positioning depends on the surrounding text not just the figures themselves. | 2020-10-21 22:35:23 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.747738242149353, "perplexity": 2035.4581132233154}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107878633.8/warc/CC-MAIN-20201021205955-20201021235955-00179.warc.gz"} |
https://puzzling.stackexchange.com/tags/history/new | # Tag Info
Top 50 recent answers are included | 2020-04-03 11:16:12 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9941855669021606, "perplexity": 3111.6194404693106}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370510846.12/warc/CC-MAIN-20200403092656-20200403122656-00259.warc.gz"} |
https://gmatclub.com/forum/what-is-the-area-of-triangular-region-abc-above-143500.html |
# What is the area of triangular region ABC above?
Manager
Joined: 02 Dec 2012
Posts: 177
What is the area of triangular region ABC above? [#permalink]
03 Dec 2012, 03:54
Attachment: Area.png (figure of triangle ABC with BD drawn perpendicular to base AC at D and angle x at vertex A; image not included)
What is the area of triangular region ABC above?
(1) The product of BD and AC is 20.
(2) x = 45
Math Expert
Joined: 02 Sep 2009
Posts: 50610
Re: What is the area of triangular region ABC above? [#permalink]
03 Dec 2012, 03:56
What is the area of triangular region ABC above?
The area of triangle ABC = $$\frac{BD*AC}{2}$$
(1) The product of BD and AC is 20. The area of triangle ABC = $$\frac{BD*AC}{2}=\frac{20}{2}=10$$. Sufficient.
(2) x = 45. No lengths of any line segments are known. Not sufficient.
Manager
Joined: 31 May 2012
Posts: 119
Re: What is the area of triangular region ABC above? [#permalink]
03 Dec 2012, 07:19
What is the area of triangular region ABC above?
(1) The product of BD and AC is 20.
(2) x = 45
Case A:
BD is perpendicular (right-angled) to AC, as given in the diagram, so BD is the height and AC is the base of the triangle. The area can be calculated as 1/2 × base × height. So, sufficient.
Options C and E are eliminated.
Case B:
x = 45. The angle alone is not sufficient to find the area. This case is insufficient.
B is eliminated.
Intern
Joined: 07 May 2011
Posts: 31
Re: What is the area of triangular region ABC above? [#permalink]
04 Dec 2012, 19:39
1) The area of the triangle is half of (base times altitude), which is 10. Sufficient.
2) We are only given info about the left portion cut off by the altitude. The other portion's area could equal the left portion, or could be really elongated or small. Can't tell, so not sufficient.
What is the area of triangular region ABC above?
(1) The product of BD and AC is 20.
(2) x = 45
Intern
Joined: 28 Jun 2014
Posts: 6
Location: United States
GMAT Date: 12-15-2014
WE: Other (Transportation)
OG 13 Q73 Data Sufficiency [#permalink]
23 Mar 2015, 23:34
How can we know for sure that AD = DC or that AB = AC? This information is required for (1) to be sufficient and A to be the correct answer.
If we look at the diagram they look equal, but there is no proof in the information...
Do we assume on the GMAT that if the lines look equal they are equal? I always thought this was a 'no no'.
Intern
Joined: 24 Mar 2013
Posts: 3
OG 13 Q73 Data Sufficiency [#permalink]
24 Mar 2015, 00:21
mpcostello wrote:
How can we know for sure that AD = DC or that AB = AC? This information is required for (1) to be sufficient and A to be the correct answer.
If we look at the diagram they look equal, but there is no proof in the information...
Do we assume on the GMAT that if the lines look equal they are equal? I always thought this was a 'no no'.
Area of a triangle = 1/2 * base * height, and here the base is AC, while the height is BD.
Statement 1 tells us the product of the base and height, i.e. BD*AC = 20.
Since the question is asking about the area, we know that the area is 1/2 the product of BD and AC.
With this statement, we do NOT know for sure that AD = DC / AB = AC / the lines are equal... frankly we don't need to know for this statement to be sufficient. We have the product of BD and AC, and given that the area is half of the product, we have enough information to answer the question.
Statement 2 (that angle BAC, x, is 45 degrees) implies that triangle ABD is an isosceles right triangle (AD = BD), but this is NOT enough to give us any idea (on its own) about AC or BD. Hence this statement is insufficient and A is the answer.
Hope this helps.
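To make the insufficiency of statement 2 concrete, here is a quick numerical check (the specific lengths are chosen only for illustration). With x = 45 degrees, right triangle ABD forces BD = AD, but DC is still free: if AD = BD = 1 and DC = 1, the area is $$\frac{1}{2}\cdot BD\cdot AC=\frac{1}{2}(1)(2)=1$$; if AD = BD = 1 and DC = 3, the area is $$\frac{1}{2}(1)(4)=2$$. Two different areas are consistent with statement 2, so it cannot determine the area.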
Manager
Status: MBA Mentor
Joined: 28 Oct 2014
Posts: 67
Location: Korea, Republic of
Concentration: Strategy, Other
Schools: SDA Bocconi - Class of 2011
GMAT 1: 700 Q50 V34
GPA: 3.21
WE: Analyst (Telecommunications)
Re: What is the area of triangular region ABC above? [#permalink]
24 Mar 2015, 17:46
Area of a triangle = 1/2 * Base * Height (the product of BD and AC is known, so the area is 10).
x = 45 doesn't help us find out anything about the area.
Cheers !
Dhiraj
What is the area of triangular region ABC above?
(1) The product of BD and AC is 20.
(2) x = 45
Intern
Joined: 28 Jul 2015
Posts: 1
Re: What is the area of triangular region ABC above? [#permalink]
14 Oct 2015, 01:34
Bunuel, I understand that statement 2 is insufficient. However, if a value were given for side AB, would statement 2 be sufficient to answer the question? Thanks in advance.
Math Expert
Joined: 02 Sep 2009
Posts: 50610
Re: What is the area of triangular region ABC above? [#permalink]
18 Oct 2015, 12:45
belley wrote:
Bunuel, I understand that statement 2 is insufficient. However, if a value were given for side AB, would statement 2 be sufficient to answer the question? Thanks in advance.
No. Even in this case we wouldn't know much about triangle BDC. Notice that we don't know whether AB = BC.
EMPOWERgmat Instructor
Status: GMAT Assassin/Co-Founder
Affiliations: EMPOWERgmat
Joined: 19 Dec 2014
Posts: 12857
Location: United States (CA)
GMAT 1: 800 Q51 V49
GRE 1: Q170 V170
Re: What is the area of triangular region ABC above? [#permalink]
11 Dec 2017, 14:10
Hi All,
We're asked to figure out the area of triangular region ABC above. For that, we'll need the Area formula:
Area = (1/2)(Base)(Height)
So to answer this question, we either need the base and height of triangle ABC or we need the areas of the two smaller triangles.
(1) The product of BD and AC is 20.
Fact 1 tells us that the product of the height (BD) and the base (AC) = 20, so all we have to do is multiply that by 1/2 to get the area.... (1/2)(20) = 10
Fact 1 is SUFFICIENT
2) X = 45
This Fact tells us NOTHING about any of the side lengths, so there's no way to determine the area.
Fact 2 is INSUFFICIENT
GMAT assassins aren't born, they're made,
Rich
Display posts from previous: Sort by | 2018-11-15 16:54:53 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7903532385826111, "perplexity": 3354.598308515093}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039742793.19/warc/CC-MAIN-20181115161834-20181115183834-00207.warc.gz"} |
https://www.blaumut.com/ovjww4/795238-how-many-diagonals-in-triangle | To find MZ, you must remember that the diagonals of a parallelogram bisect each other (a rectangle is a type of parallelogram, so rectangles get all of the parallelogram properties). If MO = 26 and the diagonals bisect each other, then MZ = ½(26) = 13. A diagonal of a rectangle is a straight line that connects one corner to the opposite corner; it divides the rectangle into two right triangles, BCD and DAB, with the diagonal as hypotenuse, and the two diagonals of a rectangle are congruent. If the sides have lengths l and w, the Pythagorean theorem gives d² = l² + w², so d = √(l² + w²). A rhombus is a type of parallelogram, and what distinguishes its shape is that all four of its sides are congruent; its diagonals bisect the vertex angles.
A diagonal of a polygon is a line segment joining any two non-consecutive vertices. A triangle has only adjacent vertices: no vertex is more than one step away from another, so a triangle has no diagonals. Diagonals must be created across vertices in a polygon, but the vertices must not be adjacent to one another.
The number of diagonals of an n-sided polygon is n(n − 3)/2. This is immediate to understand: from any vertex you can draw diagonals to every other vertex except three (the vertex itself and the one immediately before and after it), and dividing by 2 avoids counting each diagonal twice. So a triangle has 3(3 − 3)/2 = 0 diagonals, a quadrilateral has 4(4 − 3)/2 = 2 (it is possible to draw two diagonals, each joining a pair of non-adjacent vertices across the center of the quadrilateral), a pentagon has 5(5 − 3)/2 = 5 diagonals on the inside of the shape, a hexagon has 9, an octagon has 8(8 − 3)/2 = 20, a decagon has 10(10 − 3)/2 = 35, and a 27-gon has 27(27 − 3)/2 = 324.
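A compact way to write this down, together with the worked solve for the 90-diagonal example that appears further down the page: $$D(n)=\frac{n(n-3)}{2},\qquad D(3)=0,\ D(4)=2,\ D(5)=5,\ D(6)=9,\ D(8)=20,\ D(10)=35,$$ and $$\frac{n(n-3)}{2}=90\ \Rightarrow\ n^{2}-3n-180=0\ \Rightarrow\ (n-15)(n+12)=0\ \Rightarrow\ n=15\ (\text{rejecting } n=-12).$$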
Note: in general, when you have an n-sided convex polygon, you can draw n − 3 diagonals from one vertex, and they divide the polygon into n − 2 triangles; the triangles are created by drawing the diagonals from one vertex to all the others (triangle = 1 triangle, quadrilateral = 2 triangles, pentagon = 3 triangles, hexagon = 4 triangles, etc.). In a 42-gon, the number of diagonals you can draw from any one vertex is therefore 42 − 3 = 39.
Working backwards from the diagonal count: if you know a polygon has 90 diagonals, plug 90 into the formula and solve for n; n equals 15 or −12, and because a polygon can't have a negative number of sides, n must be 15, so you have a 15-sided polygon (a pentadecagon, in case you're curious). Likewise, a polygon with 44 diagonals satisfies n(n − 3)/2 = 44, giving n = 11.
Diagonal intersections: to see how many diagonal intersection points exist inside a regular heptagon, note that each intersection needs 2 diagonals and hence 4 vertices, so in total there are $$\binom{7}{4}=35$$ diagonal intersections. On that count there would be $$7\cdot35\cdot34$$ triangles sharing one vertex with the heptagon and having the other two vertices on diagonal intersections.
Practice problems: 1) Regular pentagon P has all five diagonals drawn. What is the angle between two of these diagonals where they meet at a vertex of the pentagon? (A) 12° (B) 36° (C) 54° (D) 60° (E) 72°. 2) How many diagonals does a regular 20-sided polygon have? (A) 60 (B) 120 (C) 170 (D) 240 (E) 400. Explanations to these practice problems will appear at the end of this blog article.
A separate triangle-counting example from the same page: with 5 vertical parts and 3 horizontal parts in the embedded-triangle figure, the number of triangles is 5 × 3 × 6 / 2 = 45 (Figures 12 and 13, which showed these counts, are not included). | 2022-07-05 03:31:19 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5403479933738708, "perplexity": 830.1093110626966}, "config": {"markdown_headings": false, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104512702.80/warc/CC-MAIN-20220705022909-20220705052909-00688.warc.gz"} |
https://iwaponline.com/view-large/1035432 | In order to assist in the economic evaluation of the performance of the present work, mechanical power consumed in rotating discs was measured experimentally under different conditions by means of a watt meter. Table 2 shows the following results:
1. power consumption increases with increasing disc rotational speed;
2. power consumption increases with degree of roughness.
Table 2. Mechanical power consumption at different degrees of roughness
| rpm | 1 mm | 2 mm | 3 mm | 4 mm | Flat disc |
| --- | --- | --- | --- | --- | --- |
| 100 | 279 | 287.8 | 295 | 302 | 153 |
| 200 | 290 | 304 | 312.6 | 322.4 | 164 |
| 300 | 306 | 319 | 328.4 | 334.1 | 172 |
| 400 | 323 | 334 | 340.9 | 350.5 | 180 |
| 500 | 345 | 350 | 357 | 362.1 | 189 |
Power consumption is given in W; the 1 mm to 4 mm columns give the degree of roughness.
Close Modal | 2020-11-24 21:13:52 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9364269971847534, "perplexity": 126.1478586532189}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141177566.10/warc/CC-MAIN-20201124195123-20201124225123-00386.warc.gz"} |
https://www.aimsciences.org/article/doi/10.3934/cpaa.2005.4.267 | # American Institute of Mathematical Sciences
June 2005, 4(2): 267-281. doi: 10.3934/cpaa.2005.4.267
## Approximations of degree zero in the Poisson problem
1 Dipartimento di Georisorse e Territorio, University of Udine, 33100 Udine, Italy 2 LMGC, Université de Montpellier II, Montpellier, France
Received April 2004 Revised November 2004 Published March 2005
We discuss a technique for the approximation of the Poisson problem under mixed boundary conditions in spaces of piece-wise constant functions. The method adopts ideas from the theory of $\Gamma$-convergence as a guideline. Some applications are considered and numerical evaluation of the convergence rate is discussed.
Citation: C. Davini, F. Jourdan. Approximations of degree zero in the Poisson problem. Communications on Pure & Applied Analysis, 2005, 4 (2) : 267-281. doi: 10.3934/cpaa.2005.4.267
2020 Impact Factor: 1.916 | 2021-09-25 23:32:58 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.49535199999809265, "perplexity": 9852.287674921725}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057787.63/warc/CC-MAIN-20210925232725-20210926022725-00464.warc.gz"} |
https://biometris.github.io/statgenGxE/index.html | statgenGxE is an R package providing functions for Genotype by Environment (GxE) analysis for data of plant breeding experiments.
The following types of analysis can be done using statgenGxE:
• Mixed model analysis of GxE table of means
• Finlay-Wilkinson Analysis
• AMMI Analysis
• GGE Analysis
• Identifying mega environments
• Stability measures
• Modeling of heterogeneity of genetic variances and correlations
statgenGxE has extensive options for summarizing and visualizing the results of the analyses. For a full overview of all options it is best to read the package vignette.
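A minimal usage sketch follows; the function names gxeFw and gxeAmmi and the example data set TDMaize follow the package's naming pattern, but the trait name "yld" is an assumption, so check the vignette for the exact interface.
library(statgenGxE)
## Example TD (trials data) object shipped with the package; trait name assumed to be "yld".
data(TDMaize)
## Finlay-Wilkinson analysis across environments.
fw <- gxeFw(TD = TDMaize, trait = "yld")
summary(fw)
## AMMI analysis of the same genotype-by-environment table of means.
ammi <- gxeAmmi(TD = TDMaize, trait = "yld")
summary(ammi)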
## Installation
• Install from CRAN:
install.packages("statgenGxE")
• Install latest development version from GitHub (requires remotes package):
remotes::install_github("Biometris/statgenGxE", ref = "develop", dependencies = TRUE) | 2021-11-28 18:08:37 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.27931448817253113, "perplexity": 14820.641576570131}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964358570.48/warc/CC-MAIN-20211128164634-20211128194634-00106.warc.gz"} |
https://zenodo.org/record/5336902/export/dcite4 | Report Open Access
# D-1.4. A report of the assessment of risk factors for AMR and AMU.Assessment of ecological and management factors associated with AMR and Antimicrobial usage
Mesa-Varona, O; Tenhagen, B-A
### DataCite XML Export
<?xml version='1.0' encoding='utf-8'?>
<resource>
<identifier identifierType="DOI">10.5281/zenodo.5336902</identifier>
<creators>
<creator>
<creatorName>Mesa-Varona, O</creatorName>
<givenName>O</givenName>
<familyName>Mesa-Varona</familyName>
<affiliation>BfR</affiliation>
</creator>
<creator>
<creatorName>Tenhagen, B-A</creatorName>
<givenName>B-A</givenName>
<familyName>Tenhagen</familyName>
<affiliation>BfR</affiliation>
</creator>
</creators>
<titles>
<title>D-1.4. A report of the assessment of risk factors for AMR and AMU.Assessment of ecological and management factors associated with AMR and Antimicrobial usage</title>
</titles>
<publisher>Zenodo</publisher>
<publicationYear>2021</publicationYear>
<dates>
<date dateType="Issued">2021-08-30</date>
</dates>
<resourceType resourceTypeGeneral="Report"/>
<alternateIdentifiers>
<alternateIdentifier alternateIdentifierType="url">https://zenodo.org/record/5336902</alternateIdentifier>
</alternateIdentifiers>
<relatedIdentifiers>
<relatedIdentifier relatedIdentifierType="DOI" relationType="IsVersionOf">10.5281/zenodo.5336901</relatedIdentifier>
<relatedIdentifier relatedIdentifierType="URL" relationType="IsPartOf">https://zenodo.org/communities/ohejp</relatedIdentifier>
</relatedIdentifiers>
<rightsList>
<rights rightsURI="info:eu-repo/semantics/openAccess">Open Access</rights>
</rightsList>
<descriptions>
<description descriptionType="Abstract"><p>In previous deliverables (D 1.2. and D 1.3) resistance data of E. coli and antimicrobial use data collected from the human and the livestock sector in the ARDIG WP1 between 2014 and 2017 were described. Antimicrobial resistance data from livestock on non-clinical isolates are harmonized in Europe by the Decision 2013/652/EU and by the new Decision 2020/1729/EU, which replaces it from 1 January 2021. On the other hand, data on clinical isolates are not. Livestock data on AMR in clinical and non-clinical isolates, provided by the United Kingdom, Norway, France and Germany in the WP1 of ARDIG, are based on different laboratory methodologies and different evaluation criteria (i.e. epidemiological vs. clinical), use different antimicrobial susceptibility testing (AST) methods (e.g. disc diffusion or broth microdilution), and cover different antimicrobials and animal types. A first approach, performed in previous deliverables, was to transform quantitative resistance data from different laboratory methods and methodologies into qualitative data using specific standards (e.g. the European Committee for Antimicrobial Susceptibility Testing (EUCAST) and the French Society of Microbiology (CASFM)) and evaluation criteria (epidemiological or clinical). However, there will still be an issue with comparability of quantitative data from different methodologies applying this method. A further approach addressed in this deliverable is to overcome the lack of AMR harmonization on laboratory methods and methodologies using statistical methods on the quantitative data. This would allow comparing AMR data between and within countries.</p></description>
</descriptions>
</resource>
views | 2022-09-27 16:45:12 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2412469983100891, "perplexity": 13998.052231810176}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335054.79/warc/CC-MAIN-20220927162620-20220927192620-00403.warc.gz"} |
https://byjus.com/question-answer/show-that-left-begin-matrix-1-1-1-a-b-c-a-2-b-2/ | Question
# Show that $$\left| \begin{matrix} 1 & 1 & 1 \\ a & b & c \\ { a }^{ 2 } & b^{ 2 } & c^{ 2 } \end{matrix} \right| =(a-b)(b-c)(c-a)$$.
Solution
$$\begin{vmatrix} 1 & 1 & 1 \\ a & b & c \\ {a}^{2} & {b}^{2} & {c}^{2} \end{vmatrix} = \left( a - b \right) \left( b - c \right) \left( c - a \right)$$
Solving the determinant, we have $$\begin{vmatrix} 1 & 1 & 1 \\ a & b & c \\ {a}^{2} & {b}^{2} & {c}^{2} \end{vmatrix}$$
Applying $${C}_{2} \rightarrow {C}_{2} - {C}_{1}$$ and $${C}_{3} \rightarrow {C}_{3} - {C}_{1}$$:
$$= \begin{vmatrix} 1 & 0 & 0 \\ a & b - a & c - a \\ {a}^{2} & {b}^{2} - {a}^{2} & {c}^{2} - {a}^{2} \end{vmatrix}$$
Taking $$\left( c - a \right)$$ and $$\left( b - a \right)$$ as common factors from $${C}_{3}$$ and $${C}_{2}$$ respectively:
$$= \left( b - a \right) \left( c - a \right) \begin{vmatrix} 1 & 0 & 0 \\ a & 1 & 1 \\ {a}^{2} & b + a & c + a \end{vmatrix}$$
Expanding the determinant along $${R}_{1}$$, we have
$$= \left( b - a \right) \left( c - a \right) \left( 1 \left( c + a - b - a \right) - 0 + 0 \right) = \left( b - a \right) \left( c - a \right) \left( c - b \right) = \left( a - b \right) \left( b - c \right) \left( c - a \right) =$$ R.H.S.
Hence proved.
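As a quick sanity check (the values are chosen arbitrarily), take $$a = 1$$, $$b = 2$$, $$c = 3$$: the right-hand side is $$(1-2)(2-3)(3-1)=(-1)(-1)(2)=2$$, and direct expansion of $$\begin{vmatrix} 1 & 1 & 1 \\ 1 & 2 & 3 \\ 1 & 4 & 9 \end{vmatrix} = 1(18 - 12) - 1(9 - 3) + 1(4 - 2) = 6 - 6 + 2 = 2$$ agrees.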
View More | 2022-01-23 22:36:46 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5867602229118347, "perplexity": 10371.429583056673}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320304309.59/warc/CC-MAIN-20220123202547-20220123232547-00017.warc.gz"} |
https://activemq.apache.org/components/artemis/documentation/1.0.0/last-value-queues.html | # Last-Value Queues
Last-Value queues are special queues which discard any messages when a newer message with the same value for a well-defined Last-Value property is put in the queue. In other words, a Last-Value queue only retains the last value.
A typical example for Last-Value queue is for stock prices, where you are only interested by the latest value for a particular stock.
## Configuring Last-Value Queues
Last-value queues are defined in the address-setting configuration:
<address-setting match="jms.queue.lastValueQueue">
<last-value-queue>true</last-value-queue>
</address-setting>
By default, last-value-queue is false. Address wildcards can be used to configure Last-Value queues for a set of addresses (see here).
## Using Last-Value Property
The property name used to identify the last value is "_AMQ_LVQ_NAME" (or the constant Message.HDR_LAST_VALUE_NAME from the Core API).
For example, if two messages with the same value for the Last-Value property are sent to a Last-Value queue, only the latest message will be kept in the queue:
// send 1st message with Last-Value property set to STOCK_NAME
TextMessage message = session.createTextMessage("1st message with Last-Value property set");
message.setStringProperty("_AMQ_LVQ_NAME", "STOCK_NAME");
producer.send(message);
// send 2nd message with Last-Value property set to STOCK_NAME
message = session.createTextMessage("2nd message with Last-Value property set");
message.setStringProperty("_AMQ_LVQ_NAME", "STOCK_NAME");
producer.send(message);
...
// only the 2nd message will be received: it is the latest with
// the Last-Value property set | 2023-02-09 10:08:01 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5367889404296875, "perplexity": 6490.885205892382}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764501555.34/warc/CC-MAIN-20230209081052-20230209111052-00591.warc.gz"} |
https://encyclopediaofmath.org/wiki/Volterra_kernel | # Volterra kernel
A (matrix) function $K(s,t)$ of two real variables $s,t$ such that either $K(s,t)\equiv0$ if $a\leq s<t\leq b$ or $K(s,t)\equiv0$ if $a\leq t<s\leq b$. If such a function is the kernel of a linear integral operator, acting on the space $L_2(a,b)$, and is itself square-integrable in the triangle in which it is non-zero, the operator generated by it is known as a Volterra integral operator (cf. Volterra operator). | 2023-03-23 05:47:06 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9634097814559937, "perplexity": 157.57471318974507}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296944996.49/warc/CC-MAIN-20230323034459-20230323064459-00682.warc.gz"} |
https://socratic.org/questions/how-many-apples-did-he-have-when-he-began-his-deliveries-1 | # How many apples did he have when he began his deliveries?
## A farmer has to make 8 stops in delivering apples. He begins with exactly the number of apples he needs for these 8 deliveries. At the first stop, he delivers half of the apples he has plus 1/2 of an apple. At each of the next 7 stops, he delivers half of the remaining apples plus 1/2 of an apple. When he is finished he has no apples left, and none have been lost/damaged when making the deliveries.
Mar 11, 2017
$\text{255 apples}$
#### Explanation:
The trick here is actually the last delivery that the farmer makes.
You know that at each delivery, the farmer delivers half of the number of apples that he had after the previous delivery and $\textcolor{b l u e}{\frac{1}{2}}$ of an apple.
This means that he must end up with $\textcolor{red}{1}$ whole apple before his ${8}^{\text{th}}$ delivery, since
$\frac{\textcolor{red}{1}}{2} - \textcolor{b l u e}{\frac{1}{2}} = 0$
Half of the whole apple leaves him with $\frac{1}{2}$ of an apple, which he then delivers as the $\frac{1}{2}$ of an apple
Moreover, you can say that he was left with $\textcolor{red}{3}$ whole apples before his ${7}^{\text{th}}$ delivery, since
$\frac{\textcolor{red}{3}}{2} - \textcolor{b l u e}{\frac{1}{2}} = 1$
Half of the $3$ whole apples leaves him with $1$ whole apple and $\frac{1}{2}$ of an apple, which he then delivers as the $\frac{1}{2}$ of an apple
How about before his ${6}^{\text{th}}$ delivery?
Following the same pattern, you can say that he was left with $\textcolor{red}{7}$ whole apples before his sixth delivery, since
$\frac{\textcolor{red}{7}}{2} - \textcolor{b l u e}{\frac{1}{2}} = 3$
Half of the $7$ whole apples leaves him with $3$ whole apples and $\frac{1}{2}$ of an apple, which he then delivers as the $\frac{1}{2}$ of an apple
Can you see the pattern?
You get the number of apples he had before his previous delivery by doubling what he has now and adding $1$.
You can thus say that he has
$7 \times 2 + 1 = \text{15 apples } \to$ before his ${5}^{\text{th}}$ delivery
$15 \times 2 + 1 = \text{31 apples } \to$ before his ${4}^{\text{th}}$ delivery
$31 \times 2 + 1 = \text{63 apples } \to$ before his ${3}^{\text{rd}}$ delivery
$63 \times 2 + 1 = \text{127 apples } \to$ before his ${2}^{\text{nd}}$ delivery
$127 \times 2 + 1 = \text{255 apples } \to$ before his ${1}^{\text{st}}$ delivery
Therefore, you can say that the farmer started with $255$ apples.
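This answer can be sanity-checked with a short simulation (a Python sketch that simply replays the delivery rule described above):

def simulate(start, stops=8):
    # at each stop he delivers half of what he has plus 1/2 of an apple
    apples = start
    for _ in range(stops):
        delivered = apples / 2 + 0.5
        apples -= delivered
    return apples

print(simulate(255))   # 0.0 -- all apples delivered after exactly 8 stops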
ALTERNATIVE APPROACH
Let's assume that the farmer did not deliver $\frac{1}{2}$ of an apple at every stop. In this case, he would simply deliver half of the number of apples he has left at every stop.
In this case, the number of apples he has left would be halved with every stop. Let's say he starts with $x$ apples. He would have
• $x \cdot \frac{1}{2} = \frac{x}{2} \to$ after the ${1}^{\text{st}}$ delivery
• $\frac{x}{2} \cdot \frac{1}{2} = \frac{x}{4} \to$ after the ${2}^{\text{nd}}$ delivery
• $\frac{x}{4} \cdot \frac{1}{2} = \frac{x}{8} \to$ after the ${3}^{\text{rd}}$ delivery
• $\frac{x}{8} \cdot \frac{1}{2} = \frac{x}{16} \to$ after the ${4}^{\text{th}}$ delivery
• $\vdots$
and so on. After his ${8}^{\text{th}}$ delivery, he would be left with
$\frac{x}{{2}^{8}} = \frac{x}{256}$
apples. However, this number cannot be equal to $0$ because that would imply that he started with $0$ apples, which is not the case here.
We know that he scheduled the number of deliveries to ensure that he delivers half of what he had at every delivery, so the maximum number of apples that he can start with is $256$, since
$\frac{256}{{2}^{8}} = \frac{256}{256} = 1$
But since he must be left with $0$ apples after his ${8}^{\text{th}}$ delivery, it follows that he must have started with $1$ less apple than the maximum number of apples, and so
$256 - 1 = \text{255 apples}$
Thefore, you can say that if he starts with $255$ apples and adjusts his deliveries from just half of what he has to half of what he has and $\frac{1}{2}$ of an apple, he will manage to deliver all the apples in $8$ deliveries. | 2021-10-16 05:21:48 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 58, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5936226844787598, "perplexity": 900.1796712140504}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323583423.96/warc/CC-MAIN-20211016043926-20211016073926-00099.warc.gz"} |
https://mathcracker.com/system-equations-matrix-form-calculator | # System of Equations to Matrix form Calculator
Instructions: Use this calculator to find the matrix representation of a given system of equations that you provide. Please specify a system of linear equation, by first adjusting the dimension, if needed.
Then, fill out the coefficients associated to all the variables and the right hand size, for each of the equations. If a variable is not present in one specific equation, type "0" or leave it empty.
x + y + z =
x + y + z =
x + y + z =
One crucial ability when solving systems of linear equations is to be able to pass from the traditional format of linear systems to matrices.
One you have the matrix representation of a linear system, then you can either apply Cramer's Rule or you can solve the system by first finding the inverse of the corresponding matrix of coefficients.
Or, with the matrix representation you can build the augmented matrix and conduct Gauss pivoting method, whichever suits you best.
### First: How do you write a system of equations in matrix form?
Step 1: Identify each of the equations in the system. Each equation will correspond to a row in the matrix representation.
Step 2: Go working on each equation. For each of them, identify the left hand side and right hand side of the equation.
Step 3: What is on the left hand side will be part of the matrix A, and what is on the right hand side will be part of the vector b
Step 4: The coefficients on the left need to be identified separately in terms of which coefficient multiplies each variable.
Step 5: Each equation represents a row, and each variable represents a column of the matrix A.
## How do you use a matrix to solve a system of equations?
Once you have a system in matrix form, there is variety of ways you can proceed to solve the system. Usually, you start first with computing the determinant of the matrix, as an initial criterion to know about the solutions of the system.
When $$\det A \ne 0$$, then we know the system has a unique solution. Now, when $$\det A = 0$$, it does not mean you don't have solutions, it only means that if there are solutions, it is not unique.
Indeed, when $$\det A = 0$$, you cannot use Cramer's Method or the inverse method to solve the system of equations. In that case, you are better off using Gauss pivoting method.
## How to solve matrix equations
Often times, you are given a system of equations directly in matrix format. If that is the case, and the number of equations is the same as the number of variables, you can try to use the inverse method or Cramer's Rule. Otherwise, you can use Gauss method.
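To illustrate these ideas outside the calculator, here is a minimal NumPy sketch (the system and its coefficients are made up purely for illustration): it builds A and b, checks the determinant, and solves.

import numpy as np

# x + 2y +  z = 4
# 2x -  y + 3z = 5
# 3x +  y - 2z = 1
A = np.array([[1.0, 2.0, 1.0],
              [2.0, -1.0, 3.0],
              [3.0, 1.0, -2.0]])
b = np.array([4.0, 5.0, 1.0])

if abs(np.linalg.det(A)) > 1e-12:        # det(A) != 0 -> unique solution
    x = np.linalg.solve(A, b)            # same result as A^{-1} b, but more stable
    print(x)
else:
    print("det(A) = 0: fall back to Gaussian elimination to check for no/infinite solutions")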
Now, you can use this calculator to express a system in a traditional form when given a matrix form. | 2022-09-30 08:52:04 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.817966639995575, "perplexity": 268.6928318638036}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335448.34/warc/CC-MAIN-20220930082656-20220930112656-00246.warc.gz"} |
https://physics.stackexchange.com/questions/469645/does-horizontal-velocity-remain-constant-when-it-hits-a-surface-in-a-parabolic-m/469648 | Does Horizontal Velocity remain constant when it hits a surface in a Parabolic Motion?
It is told in one of the answers that the horizontal velocity stays at 4.0 m/s after the ball bounces from the plate.
Why is it still at 4.0m/s, isn't there friction or is it neglected for the sake of simplicity?
The assumption is that there is no friction. In that case the horizontal component of velocity would remain the same. In the case of friction, there would be an impulse due to friction acting on the ball ($$=\mu J$$, where J is the normal impulse) in the left direction, tangential to the surface, which would reduce the horizontal velocity of the ball.
http://makie.juliaplots.org/dev/basic-tutorials.html | # Tutorial
Below is a quick tutorial to help get you started. Note that we assume you have Julia installed and configured already.
## Getting Makie
Enter the package manager by typing ] into the REPL. You should see pkg>.
add Makie
Run the following commands in the package manager:
add Makie#master AbstractPlotting#master GLMakie#master
test Makie
The first use of Makie might take a little bit of time, due to precompilation.
## Set the Scene
The Scene object holds everything in a plot, and you can initialize it like so:
scene = Scene()
Note that before you put anything in the scene, it will be blank!
## Getting help
The user-facing functions of Makie are pretty well documented, so you can usually use the help mode in the REPL, or your editor of choice. If you continue to have issues, see Getting Help.
## Basic plotting
Below are some examples of basic plots to help you get oriented.
You can put your mouse in the plot window and scroll to zoom. Right click and drag lets you pan around the scene, and left click and drag lets you do selection zoom (in 2D plots), or orbit around the scene (in 3D plots).
Many of these examples also work in 3D.
It is worth noting initially that if you run a Makie.jl example and nothing shows up, you likely need to do display(scene) to render the example on screen.
### Scatter plot
using Makie
x = rand(10)
y = rand(10)
colors = rand(10)
scene = scatter(x, y, color = colors)
using Makie
x = 1:10
y = 1:10
sizevec = [s for s = 1:length(x)] ./ 10
scene = scatter(x, y, markersize = sizevec)
### Line plot
using Makie
x = range(0, stop = 2pi, length = 40)
f(x) = sin.(x)
y = f(x)
scene = lines(x, y, color = :blue)
using Makie
scene = lines(rand(10))
sc_t = title(scene, "Random lines")
sc_t
using Makie
x = range(0, stop = 2pi, length = 80)
f1(x) = sin.(x)
f2(x) = exp.(-x) .* cos.(2pi*x)
y1 = f1(x)
y2 = f2(x)
scene = lines(x, y1, color = :blue)
scatter!(scene, x, y1, color = :red, markersize = 0.1)
lines!(scene, x, y2, color = :black)
scatter!(scene, x, y2, color = :green, marker = :utriangle, markersize = 0.1)
### Removing from a scene
using Makie
x = range(0, stop = 2pi, length = 80)
f1(x) = sin.(x)
f2(x) = exp.(-x) .* cos.(2pi*x)
y1 = f1(x)
y2 = f2(x)
scene = lines(x, y1, color = :blue)
scatter!(scene, x, y1, color = :red, markersize = 0.1)
lines!(scene, x, y2, color = :black)
scatter!(scene, x, y2, color = :green, marker = :utriangle, markersize = 0.1)
# initialize the stepper and give it an output destination
st = Stepper(scene, "tutorial_removing_from_a_scene")
step!(st)
pop!(scene.plots)
step!(st)
pop!(scene.plots)
step!(st)
using Makie
x = range(0, stop = 10, length = 40)
y = x
#= specify the scene limits, note that the arguments for FRect are
x_min, y_min, x_dist, y_dist,
therefore, the maximum x and y limits are then x_min + x_dist and y_min + y_dist
=#
limits = FRect(-5, -10, 20, 30)
scene = lines(x, y, color = :blue, limits = limits)
You can also use the convenience functions xlims!, ylims! and zlims!.
### Basic theming
using Makie
x = range(0, stop = 2pi, length = 40)
f(x) = cos.(x)
y = f(x)
scene = lines(x, y, color = :blue)
axis = scene[Axis] # get the axis object from the scene
axis.grid.linecolor = ((:red, 0.5), (:blue, 0.5))
axis.names.textcolor = ((:red, 1.0), (:blue, 1.0))
axis.names.axisnames = ("x", "y = cos(x)")
scene
### Statistical plotting
Makie has a lot of support for statistical plots through StatsMakie.jl. See the StatsMakie Tutorial section for more information on this.
## Controlling display programmatically
Scenes will only display by default in global scope. To make a Scene display when it's defined in a local scope, like a function or a module, you can call display(scene), which will automatically display it in the best available display. You can force display to the backend's preferred window by calling display(AbstractPlotting.PlotDisplay(), scene).
## Saving plots
See the Output section.
## Animations
See the Animation section, as well as the Interaction section.
## More examples
See the Example Gallery. | 2019-12-14 15:23:42 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.20792266726493835, "perplexity": 14053.930285187647}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575541281438.51/warc/CC-MAIN-20191214150439-20191214174439-00305.warc.gz"} |
http://mathoverflow.net/revisions/85012/list | 3 added 323 characters in body
Given a line function $y = ax + b$, it is easy to calculate the sum-of-squares distance between the line and a window of samples $(1, y_1), (2, y_2), ..., (n, y_n)$ (where $y_1$ is the oldest sample and $y_n$ is the newest):
$\sum_{x=1}^{n}(y_x - (ax + b))^2$
I need a fast algorithm for calculating this value for a rolling window (of length n) - I cannot rescan all the samples in the window every time a new sample arrives.
Obviously, some state should be saved and updated for every new sample that enters the window and every old sample leaves the window.
Notice that when a sample leaves the window, the indices of the rest of the samples change as well - every $y_x$ becomes $y_{x-1}$. Therefore when a sample leaves the window, every other sample in the window contributes a different value to the new sum: $(y_x - (a(x-1) + b))^2$ instead of $(y_x - (ax + b))^2$.
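For reference, the direct rescan-the-window computation that the question wants to avoid can be written as follows (a Python sketch; names are purely illustrative):

def window_sse(a, b, window):
    # window[0] is treated as x = 1 (oldest sample), window[-1] as x = n (newest)
    return sum((y - (a * x + b)) ** 2 for x, y in enumerate(window, start=1))

print(window_sse(0.5, 1.0, [1.2, 1.9, 2.4, 3.1]))   # ~0.12

# Recomputing this from scratch for every new sample costs O(n) per update,
# which is exactly what the question wants to avoid.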
Is there a known algorithm for calculating this? If not, can you think of one? (It is ok to have some mistakes due first-order linear approximations).
Thanks
2 edited tags
1
# Algorithm for calculating the sum-of-squares distance of a rolling window from a given line function
Given a line function $y = ax + b$, it is easy to calculate the sum-of-squares distance between the line and a window of samples $(1, y_1), (2, y_2), ..., (n, y_n)$ (where $y_1$ is the oldest sample and $y_n$ is the newest):
$\sum_{x=1}^{n}(y_x - (ax + b))^2$
I need a fast algorithm for calculating this value for a rolling window (of length n) - I cannot rescan all the samples in the window every time a new sample arrives.
Obviously, some state should be saved and updated for every new sample that enters the window and every old sample leaves the window.
Is there a known algorithm for calculating this? If not, can you think of one? (It is ok to have some mistakes due first-order linear approximations).
Thanks | 2013-05-22 23:48:22 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5734100937843323, "perplexity": 188.8604780951749}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368702525329/warc/CC-MAIN-20130516110845-00013-ip-10-60-113-184.ec2.internal.warc.gz"} |
https://bitbucket.org/rmcantin/bayesopt/diff/doxygen/reference.dox?diff2=c20fdd2c37df&at=default | Diff from to
# doxygen/reference.dox
not included by default in the linker, Python or Matlab paths by
default. This is specially critical when building shared libraries
(mandatory for Python usage). The script \em exportlocalpaths.sh makes
-sure that the folder is included in all the necessary paths.
+sure that the folder with the libraries is included in all the
+necessary paths.
After that, there are 3 steps that should be follow:
\li Define the function to optimize.
\section params Understanding the parameters
BayesOpt relies on a complex and highly configurable mathematical
-model. Also, the key to nonlinear optimization is to include as much
-knowledge as possible about the target function or about the
-problem. Or, if the knowledge is not available, keep the model as
-general as possible (to avoid bias).
+model. In theory, it should work reasonably well for many problems in
+its default configuration. However, Bayesian optimization shines when
+we can include as much knowledge as possible about the target function
+or about the problem. Or, if the knowledge is not available, keep the
+model as general as possible (to avoid bias). In this part, knowledge
+about Gaussian process or nonparametric models in general might be
+useful.
+
+For example, with the parameters we can select the kind of kernel,
+mean or surrogate model that we want to use. With the kernel we can
+play with the smoothness of the function and it's derivatives. The
+mean function can be use to model the overall trend (is it flat?
+linear?). If we know the overall signal variance we better use a
+Gaussian process, if we don't, we should use a Student's t process
+instead.
For that reason, the parameters are bundled in a structure or
dictionary, depending on the API that we use. This is a brief
Tip: Filter by directory path e.g. /media app.js to search for public/media/app.js.
Tip: Use camelCasing e.g. ProjME to search for ProjectModifiedEvent.java.
Tip: Filter by extension type e.g. /repo .js to search for all .js files in the /repo directory.
Tip: Separate your search with spaces e.g. /ssh pom.xml to search for src/ssh/pom.xml.
Tip: Use ↑ and ↓ arrow keys to navigate and return to view the file.
Tip: You can also navigate files with Ctrl+j (next) and Ctrl+k (previous) and view the file with Ctrl+o.
Tip: You can also navigate files with Alt+j (next) and Alt+k (previous) and view the file with Alt+o. | 2014-04-16 23:57:21 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6668586134910583, "perplexity": 3369.3745410202923}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00642-ip-10-147-4-33.ec2.internal.warc.gz"} |
https://stacks.math.columbia.edu/tag/0AAG | Lemma 10.103.11. Suppose $R$ is a Noetherian local ring. Let $M$ be a Cohen-Macaulay module over $R$. For any prime $\mathfrak p \subset R$ the module $M_{\mathfrak p}$ is Cohen-Macaulay over $R_\mathfrak p$.
Proof. We may and do assume $\mathfrak p \not= \mathfrak m$ and $M$ not zero. Choose a maximal chain of primes $\mathfrak p = \mathfrak p_ c \subset \mathfrak p_{c - 1} \subset \ldots \subset \mathfrak p_1 \subset \mathfrak m$. If we prove the result for $M_{\mathfrak p_1}$ over $R_{\mathfrak p_1}$, then the lemma will follow by induction on $c$. Thus we may assume that there is no prime strictly between $\mathfrak p$ and $\mathfrak m$. Note that $\dim (\text{Supp}(M_\mathfrak p)) \leq \dim (\text{Supp}(M)) - 1$ because any chain of primes in the support of $M_\mathfrak p$ can be extended by one more prime (namely $\mathfrak m$) in the support of $M$. On the other hand, we have $\text{depth}(M_\mathfrak p) \geq \text{depth}(M) - \dim (R/\mathfrak p) = \text{depth}(M) - 1$ by Lemma 10.72.10 and our choice of $\mathfrak p$. Thus $\text{depth}(M_\mathfrak p) \geq \dim (\text{Supp}(M_\mathfrak p))$ as desired (the other inequality is Lemma 10.72.3). $\square$
Comment #6601 by WhatJiaranEatsTonight on
The last inequality should be $\text{depth}(M_{\mathfrak p})\geq \dim (\text{Supp}(M_{\mathfrak p}))$. The right bracket is omitted.
In your comment you can use Markdown and LaTeX style mathematics (enclose it like $\pi$). A preview option is available if you wish to see how it works out (just click on the eye in the toolbar). | 2022-12-05 05:03:37 | {"extraction_info": {"found_math": true, "script_math_tex": 1, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 2, "x-ck12": 0, "texerror": 0, "math_score": 0.9819748997688293, "perplexity": 152.2706559844362}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711003.56/warc/CC-MAIN-20221205032447-20221205062447-00334.warc.gz"} |
https://www.homeworklib.com/answers/1900130/7-the-intensity-of-a-transition-i-from-the | # 7) The Intensity of a transition, I, from the initial state with wavefunction w.) to a...
The intensity of a transition, I, from the initial state with wavefunction $$\psi_{a}(\vec{r})$$ to a final state, $$\psi_{b}(\vec{r})$$, is given by:
$$I=\int \psi_{b}^{*}(\vec{r}) \, \hat{O}(\vec{r}) \, \psi_{a}(\vec{r}) \, d\vec{r}$$
where $$\hat{O}(\vec{r})$$ is the transition operator (e.g. the dipole moment operator, $$\hat{\mu}(\vec{r})$$, for an infrared absorption transition and the polarisability operator, $$\hat{\alpha}(\vec{r})$$, for a Raman transition). For the integral (and thus the intensity) to be non-zero, explain what is required in terms of symmetry.
Initial and final state wavefunctions should have opposite parity, i.e. opposite symmetry. If one is odd, the other must be even to make any transition possible between these two states. The parity of a wavefunction is determined using its angular momentum quantum number 'L'. If 'L' is odd (1, 3, 5, ...), then the wavefunction has odd parity, and if 'L' is even (0, 2, 4, ...), then the wavefunction has even parity.
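To make the parity argument concrete, here is a small numerical illustration (a Python sketch with made-up even and odd test wavefunctions and a dipole-type operator x; it is not part of the original answer):

import numpy as np

x = np.linspace(-10.0, 10.0, 2001)
dx = x[1] - x[0]

psi_even = np.exp(-x**2)        # even test wavefunction
psi_odd = x * np.exp(-x**2)     # odd test wavefunction

def transition_integral(psi_b, psi_a):
    # dipole-type operator: multiplication by x (odd under x -> -x);
    # crude Riemann-sum approximation of the integral
    return np.sum(psi_b * x * psi_a) * dx

print(transition_integral(psi_even, psi_even))  # ~0: same parity, integrand is odd
print(transition_integral(psi_odd, psi_even))   # nonzero: opposite parity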
#### Earn Coin
Coins can be redeemed for fabulous gifts.
Similar Homework Help Questions | 2021-12-05 23:46:13 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9238200187683105, "perplexity": 1275.471712214591}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964363226.68/warc/CC-MAIN-20211205221915-20211206011915-00517.warc.gz"} |
https://open.kattis.com/contests/axb2pq/problems/statistics | FUCT_SU20_SE1502_T04
# Problem D: Statistics
Research often involves dealing with large quantities of data, and those data are often too massive to examine manually. Statistical descriptions of data can help humans understand their basic properties. Consider a sample of $n$ numbers $X=(x_1,x_2,\ldots ,x_ n)$. Of many statistics that can be computed on $X$, some of the most important are the following:
• $\min (X)$: the smallest value in $X$
• $\max (X)$: the largest value in $X$
• $\mbox{range}(X)$: $\max (X) - \min (X)$
Write a program that will analyze samples of data and report these values for each sample.
## Input
The input contains between $1$ and $10$ test cases. Each test case is described by one line of input, which begins with an integer $1 \leq n \leq 30$ and is followed by $n$ integers which make up the sample to be analyzed. Each value in the sample will be in the range $-1\, 000\, 000$ to $1\, 000\, 000$. Input ends at the end of file.
## Output
For each case, display Case X:, where X is the case number, followed by the min, max, and range of the sample (in that order). Follow the format of the sample output.
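One straightforward way to implement this (a short Python sketch reading input until end of file; an illustrative solution, not an official one):

import sys

case = 1
for line in sys.stdin:
    tokens = line.split()
    if not tokens:
        continue
    n = int(tokens[0])
    sample = list(map(int, tokens[1:1 + n]))
    lo, hi = min(sample), max(sample)
    print(f"Case {case}: {lo} {hi} {hi - lo}")
    case += 1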
Sample Input 1
2 4 10
9 2 5 6 4 5 9 2 1 4
7 6 10 1 2 5 10 9
1 9
Sample Output 1
Case 1: 4 10 6
Case 2: 1 9 8
Case 3: 1 10 9
Case 4: 9 9 0 | 2020-09-19 19:08:34 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4008263051509857, "perplexity": 608.255529630488}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400192783.34/warc/CC-MAIN-20200919173334-20200919203334-00127.warc.gz"} |
http://nxhomeworkllxk.paycheckadvance.us/different-types-of-fact-tables.html | # Different types of fact tables
This highlights the types of dimensions present in a dimension table; different fact tables can share the same dimension table. A fact table typically has two types of columns. Fact tables contain the content of the data warehouse and store different types of measures, such as additive measures.
Describes the different types of facts and fact tables commonly seen in a data warehouse or data mart. Dimension (data warehouse) types: with a conformed dimension, queries can drill into different process fact tables separately for each individual fact table.
Types of facts in a data warehouse: a fact table is the one which consists of the measurements; the different types of facts are explained in detail below.
## Different types of fact tables
In addition to the primary fact table types, there is a good demonstration of how different types of fact tables can have a drastic effect on performance.
Types of facts in a data warehouse: take three types of facts to obtain different measures. Fact tables are the foundation of the data warehouse; accumulating snapshot fact tables generally are much smaller than the other two types because of this overwriting. There are three fundamental types of fact tables in the data warehouse presentation area: transaction fact tables, periodic snapshot fact tables, and accumulating snapshot fact tables. This blog highlights the definition of a fact table and its types.
Free essay: dimension - a dimension table typically has two types of columns, primary keys to fact tables and textual/descriptive data; fact - a fact table typically holds the measurements. When reading about star schema design i have seen that many people use various names for different types of dimension tables; please list the names and a small description.
Rated 3/5 based on 20 review | 2018-06-20 17:01:40 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8238823413848877, "perplexity": 2131.612924714688}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267863830.1/warc/CC-MAIN-20180620163310-20180620183310-00236.warc.gz"} |
http://clay6.com/qa/19744/when-slightly-different-weights-are-placed-on-the-two-pane-of-a-beam-balanc | # When slightly different weights are placed on the two pane of a beam balance, the beam comes to rest at an angle with the horizontal. The beam is supported at a single point P by a pivot.
a) the net torque about P due to the two weight is non-zero in the equilibrium.
b) the whole system does not continue to rotate .
c) the centre of mass of the system lies below P.
d) the centre of mass of system lies above P. | 2017-09-20 07:20:32 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.37795931100845337, "perplexity": 275.823971687772}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818686705.10/warc/CC-MAIN-20170920071017-20170920091017-00687.warc.gz"} |
https://fizzbuzzer.com/pots-of-gold/ | ## fizzbuzzer.com
### Looking for good programming challenges?
Use the search below to find our solutions for selected questions!
# Pots of gold
Sharing is caring!
Problem statement
Pots of gold game: Two players $A$ and $B$. There are pots of gold arranged in a line, each containing some gold coins (the players can see how many coins there are in each gold pot – perfect information). They get alternating turns in which the player can pick a pot from one of the ends of the line. The winner is the player who has a higher number of coins at the end. The objective is to “maximize” the number of coins collected by $A$, assuming $B$ also plays optimally. A starts the game.
The idea is to find an optimal strategy that makes $A$ win knowing that $B$ is playing optimally as well. How would you do that?
Solution
Because $A$ plays optimally, the problem of maximizing the coins collected by $A$ becomes equal to minimizing the coins collected by $B$. Similarly because $B$ also plays optimally, the problem of maximizing the coins collected by $B$ becomes equal to minimizing the coins collected by $A$.
Without loss of generality, assume $A$ plays first. He will have two possible choices:
1. He can choose the pot at the $start$ of the line. Then $B$ chooses next. $B$ can choose either $start+1$ or $end$.
1. If $B$ chooses $start+1$, $A$ is left with two options $start+2$ and $end$.
2. If $B$ chooses $end$, $A$ is left with two options $start+1$ and $end-1$.
$\rightarrow$ $A's$ outcome will be $pots[start] + minimum(choose(start + 2, end), choose(start + 1, end - 1))$.
2. He can choose the pot at the $end$ of the line. Then $B$ chooses next. $B$ can choose either $start$ or $end-1$.
1. If $B$ chooses $start$, $A$ is left with two options $start+1$ and $end-1$.
2. If $B$ chooses $end-1$, $A$ is left with two options $start$ and $end-2$.
$\rightarrow$ $A's$ outcome will be $pots[end] + minimum(choose(start + 1, end-1), choose(start, end - 2))$.
The reason we take the $minimum$ is because $B$ also plays optimally. That is, $B$ chooses the pots that minimize the output for $A$. So when $A$ chooses $pots[start]$ or $pots[end]$, on $A's$ next turn what’s left will be the minimum of what $A$ could get.
Full code | 2019-04-26 02:26:31 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 58, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9698323011398315, "perplexity": 355.8729013886301}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578747424.89/warc/CC-MAIN-20190426013652-20190426035652-00175.warc.gz"} |
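A memoized Python sketch of the recurrence described above (an illustrative implementation with hypothetical names, not necessarily the site's original code):

from functools import lru_cache

def max_coins_for_A(pots):
    # Maximum coins A can collect when both players pick optimally from either end.

    @lru_cache(maxsize=None)
    def choose(start, end):
        if start > end:
            return 0
        # A takes pots[start]; B then removes whichever end is worst for A.
        take_start = pots[start] + min(choose(start + 2, end),
                                       choose(start + 1, end - 1))
        # A takes pots[end]; again B responds by minimising A's total.
        take_end = pots[end] + min(choose(start + 1, end - 1),
                                   choose(start, end - 2))
        return max(take_start, take_end)

    return choose(0, len(pots) - 1)

print(max_coins_for_A((8, 15, 3, 7)))   # 22: A takes 7 first, then 15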
http://www.researchgate.net/publication/1767268_Time-optimal_synthesis_of_unitary_transformations_in_coupled_fast_and_slow_qubit_system | Article
# Time-optimal synthesis of unitary transformations in coupled fast and slow qubit system
• ##### Robert Zeier
Physical Review A (Impact Factor: 3.04). 09/2007; DOI: 10.1103/PhysRevA.77.032332
Source: arXiv
ABSTRACT In this paper, we study time-optimal control problems related to system of two coupled qubits where the time scales involved in performing unitary transformations on each qubit are significantly different. In particular, we address the case where unitary transformations produced by evolutions of the coupling take much longer time as compared to the time required to produce unitary transformations on the first qubit but much shorter time as compared to the time to produce unitary transformations on the second qubit. We present a canonical decomposition of SU(4) in terms of the subgroup SU(2)xSU(2)xU(1), which is natural in understanding the time-optimal control problem of such a coupled qubit system with significantly different time scales. A typical setting involves dynamics of a coupled electron-nuclear spin system in pulsed electron paramagnetic resonance experiments at high fields. Using the proposed canonical decomposition, we give time-optimal control algorithms to synthesize various unitary transformations of interest in coherent spectroscopy and quantum information processing. Comment: 8 pages, 3 figures
##### Article: Time-optimal CNOT between indirectly coupled qubits in a linear Ising chain
ABSTRACT: We give analytical solutions for the time-optimal synthesis of entangling gates between indirectly coupled qubits 1 and 3 in a linear spin chain of three qubits subject to an Ising Hamiltonian interaction with equal coupling $J$ plus a local magnetic field acting on the intermediate qubit 2. The energy available is fixed, but we relax the standard assumption of instantaneous unitary operations acting on single qubits. The time required for performing an entangling gate which is equivalent, modulo local unitary operations, to the $\mathrm{CNOT}(1, 3)$ between the indirectly coupled qubits 1 and 3 is $T=\sqrt{3/2} J^{-1}$, i.e. faster than a previous estimate based on a similar Hamiltonian and the assumption of local unitaries with zero time cost. Furthermore, performing a simple Walsh-Hadamard rotation in the Hilbert space of qubit 3 shows that the time-optimal synthesis of the $\mathrm{CNOT}^{\pm}(1, 3)$ (which acts as the identity when the control qubit 1 is in the state $\ket{0}$, while if the control qubit is in the state $\ket{1}$ the target qubit 3 is flipped as $\ket{\pm}\rightarrow \ket{\mp}$) also requires the same time $T$. Comment: 9 pages
Journal of Physics A Mathematical and Theoretical 09/2010; · 1.77 Impact Factor
• ##### Article: Efficient synthesis of quantum gates on a three-spin system with triangle topology
ABSTRACT: Experiments in coherent nuclear and electron magnetic resonance and optical spectroscopy correspond to control of quantum-mechanical ensembles, guiding them from initial states to target states by unitary transformations. The control inputs (pulse sequences) that accomplish these unitary transformations should take as little time as possible so as to minimize the effects of relaxation and decoherence, and to optimize the sensitivity of the experiments. Here, we give an efficient synthesis of a class of unitary transformations on a three coupled spin-1/2 system with equal Ising coupling strengths. We show a significant time saving compared with conventional methods.
Physical Review A 12/2011; 84(6). · 3.04 Impact Factor
##### Article: Quantum brachistochrone problem for two spins-1/2 with anisotropic Heisenberg interaction
ABSTRACT: We study the quantum brachistochrone evolution for a system of two spins-1/2 described by an anisotropic Heisenberg Hamiltonian without $zx$, $zy$ interacting couplings in a magnetic field directed along the z-axis. This Hamiltonian realizes quantum evolution in two subspaces spanned by $|\uparrow\uparrow>$, $|\downarrow\downarrow>$ and $|\uparrow\downarrow>$, $|\downarrow\uparrow>$ separately and allows us to consider the brachistochrone problem on each subspace separately. Using the evolution operator for this Hamiltonian we generate quantum gates, namely an entangler gate, the $SWAP$ gate, and the $iSWAP$ gate. We also show that the time required for the generation of an entangler gate and the $iSWAP$ gate is minimal among all possible.
Journal of Physics A Mathematical and Theoretical 11/2012; 46(15). · 1.77 Impact Factor | 2014-09-18 04:09:29 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8781912922859192, "perplexity": 1017.5311010643588}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-41/segments/1410657125488.38/warc/CC-MAIN-20140914011205-00183-ip-10-196-40-205.us-west-1.compute.internal.warc.gz"} |
http://mathoverflow.net/questions/39078/matrix-multiplication | # Matrix multiplication
Let I(n) and U(n) be the number of steps needed to invert an nxn matrix and nxn upper triangle matrix respectively. Can we prove I(n)<=cU(n), where c is some constant?
-
The answer is probably No, if you wish a constant independent of $n$. On the one hand, the naive method gives the optimal result $U(n)=n^2$. On the other hand, it is known that the complexity of inversion and that of matrix multiplication are the same (see for instance the second edition, to appear soon, of my book Matrices;Theory and Applications, GTM 216 Springer-Verlag, 2010). If the answer to your question is positive, this implies therefore that matrix multiplication can be done in $O(n^2)$ operations. This is highly unlikely. The state of the art tells us that it can be done in $O(n^{2.376})$ operations. Optimists believe that it could be done in $O(n^{2+\epsilon})$ for every $\epsilon>0$, but not in $O(n^2)$.
@Matt. Conversely, inversion of $3n\times 3n$ matrices can be used to multiply $n\times n$ matrices with the same complexity, up to a universal constant. If $A$ and $B$ are given, just invert the block triangular matrix whose diagonal is ($I_n$ $I_n$ $I_n$) and is boardered by the diagonal ($A$ $B$); the other blocks are $0_n$'s. – Denis Serre Sep 17 '10 at 13:37
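The construction in the previous comment can be checked numerically; in the following NumPy sketch (sizes and random matrices chosen only for illustration), the top-right block of the inverse comes out as the product AB:

import numpy as np

n = 4
rng = np.random.default_rng(0)
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, n))
I, Z = np.eye(n), np.zeros((n, n))

# Block upper-triangular matrix with identity diagonal blocks, bordered by A and B
M = np.block([[I, A, Z],
              [Z, I, B],
              [Z, Z, I]])

Minv = np.linalg.inv(M)
top_right = Minv[:n, 2 * n:]          # this block equals A @ B
print(np.allclose(top_right, A @ B))  # True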
Denis, I'm confused. To me, the naive method of inverting an upper triangular matrix has complexity $n^3$. Moreover, your answer seem to show that the answer is YES: You have just shown that $U(3n)$ bounds $n \times n$ matrix multiplication. – David Speyer Jul 11 '12 at 14:14
@David. You're right. I confused between solving $Ux=b$ and inverting $U$. – Denis Serre Jul 11 '12 at 14:40 | 2014-09-19 11:58:34 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9245892763137817, "perplexity": 215.61355946870535}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-41/segments/1410657131304.74/warc/CC-MAIN-20140914011211-00163-ip-10-196-40-205.us-west-1.compute.internal.warc.gz"} |
https://www.physicsforums.com/threads/order-of-element.439183/ | # Order of element
## Homework Statement
Let gcd(h,k)=1, o(a)=h, o(b)=k, show that o(ab)=hk
## Homework Equations
o(a)= order of a modulo n
o(a)=k iff k is the smallest positive integer such that $a^k=1 mod n$
## The Attempt at a Solution
to prove o(ab) l hk, no problem, just need to show $(ab)^{hk}=1 mod n$
and to prove hk l o(ab), use division algorithm
let, hk=p*o(ab)+q , $0\leq q<p$
it can be shown that $(ab)^q=1 mod n$, this will imply q=0, so o(ab) l hk
and i really think i didn't make any mistake, but i didn't use that gcd(h,k)=1.
can help me anywhere i wrong? or is this gcd(h,k)=1 is unnecessary?
Last edited:
jbunniii
Homework Helper
Gold Member
it can be shown that $(ab)^q=1 mod n$, this will imply q=0, so o(ab) l hk
I don't think this is correct. If it can be shown, then please show it.
It's simple to show that $o(ab) | hk$. It suffices to show that
$$(ab)^{hk} = 1$$
But
$$(ab)^{hk} = a^{hk} b^{hk} = (a^h)^k (b^k)^h = (1)(1) = 1$$
However, this isn't enough to imply that $o(ab) = hk$. For that, you will need to use the fact that $gcd(h,k) = 1$.
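A quick numerical illustration of this point (a Python sketch, not part of the thread; it checks one example in the integers modulo 13):

def order(x, n):
    # multiplicative order of x modulo n (assumes gcd(x, n) == 1)
    k, y = 1, x % n
    while y != 1:
        y = (y * x) % n
        k += 1
    return k

n = 13
a, b = 3, 5
h, k = order(a, n), order(b, n)
print(h, k, order(a * b, n))   # 3 4 12 -> o(ab) = hk because gcd(h, k) = 1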
jbunniii
Homework Helper
Gold Member
P.S. You didn't specify what group or ring or field you are working in. I infer that $a$ and $b$ are elements of $\mathbb{Z}/(n)$ (integers modulo n). However, you did not mention whether $n$ is prime. If not, then be aware that not every element necessarily has an order. For example, in $\mathbb{Z}/(4)$, consider the element $a = 2$. There is no $k$ for which $2^k = 1 (\mod 4)$, so $a$ doesn't have an order in $\mathbb{Z}/(4)$.
So if $n$ is not necessarily prime, you need to do a bit more work to justify whether $o(ab)$ is even defined.
ahh, sorry i was blatantly showing o(ab) l hk twice, that was soo stupid, i have to show hk l o(ab)
i think i get it already but something not sure, so let o(ab)=r
and i know $o(a^k)=h$ and $o(b^h)=k$ since (h,k)=1
this one im not sure, r l hk then let e be such that re=hk
so then $(a^k)^r=(a^k)^{hk/e}=1^{k/e}=1$mod n, so is it ok? because im not sure power of a fraction is really define???
if it is ok then, h l r, similar argument to get k l r, so hk l r, since (h,k)=1
and btw, hmm this book earlier said that it always assumed that (a,n)=1 and (b,n)=1 without stating it. then (ab,n)=1 so (ab)x=1 mod n must have solution, sorry, i not state this earlier.
Last edited:
jbunniii
Homework Helper
Gold Member
Do you know any group theory? Let $A = \langle a\rangle$ and $B = \langle b\rangle$ be the cyclic groups generated by $a$ and $b$, respectively.
Then $|A| = o(a) = h$ and $|B| = o(b) = k$ are relatively prime. This implies a key fact: $A \cap B = 1$, the trivial group.
Now consider $(ab)^r$. For what values of $r$ can $(ab)^r = 1$? This is the same as $a^r b^r = 1$, or equivalently $a^r = b^{-r}$. So we have a common element, $x = a^r = b^{-r}$, which must be in both $A$ and $B$, right? What does this imply?
Last edited:
yea, but i just going through cyclic group very fast once since i'll be learning that next semester.
but that means $a^r = b^{-r}=1$right? and easily can show hk l r, problem solved, thanks jbunnii ^^ | 2021-06-15 10:11:08 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9545174241065979, "perplexity": 753.3581442307714}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487620971.25/warc/CC-MAIN-20210615084235-20210615114235-00548.warc.gz"} |
https://www.physicsforums.com/threads/subgroups-of-alternating-group.300551/ | # Subgroups of Alternating Group
1. Mar 17, 2009
Does A5 (the alternating group of degree 5) contain a subgroup of order m for each factor m of 60?
My intuition says yes, but I can't seem to find a way to prove this, short of writing out example subgroups for each factor, which is really tedious. Although if I have subgroups of orders 2, 3, 4, and 5 would it be enough to look at products of these?
2. Mar 17, 2009
### Daettil
Perhaps you should try to construct a subgroup of order 30. Such a subgroup is normal so you can construct one by taking the union of conjugacy classes...
3. Mar 17, 2009
How do you know that a subgroup of order 30 must be normal?
I know that A5 is simple so cannot contain a normal subgroup except for {e} and {A5}.
4. Mar 18, 2009
### sutupidmath
There is a theorem that says: Let H be a subgroup of order t in a group G of order 2t. Then: H is normal in G, and moreover G/H={H,K}, where K consists of the t elements of G not in H.
Proof: Let g be an element of G such that g is in H. Then gH (the left coset) consists of exactly t elements as well. Moreover, since H is a subgroup=>gH=H=Hg.
Now, let r be any other element such that r is not in H. Then rH and H are going to be disjoint cosets(by another proposition: Two cosets are either identical or disjoint). But, again, rH must have exactly t elements, but in this case these t elements are the ones not contained in H, but rather in K. Now, the union of such cosets should give us the group G itself.
G = {gH, rH} = {Hg, Hr} => rH = Hr, and together with gH = Hg for g in H this gives H<G (H is normal in G)
Edit: Another part of this theorem, which would probably help you prove what you want to prove, is that for every element g of G, g^2 is in H.
What I would probably try to do is first determine how many 3-cycle permutations, i.e. (abc), we have in A_5, then how many 5-cycles, how many of the form (ab)(cd), etc. This would give you an idea: say, if there were a normal subgroup H of order 6, then all the 3-cycles would have to be in H. Now, if the number of 3-cycles is greater than 6, this would tell you that there is no such normal subgroup of order 6. So, try to work something along these lines.
Edit2: As a matter of fact, off the top of my head I know that A5 does not have any normal subgroups of index 2, 3 or 5. This is because (I don't know whether you have been introduced to this yet) the group A_5 is not solvable.
In this case, since if a subgroup of ord 30 exists, then it must be normal, then if it is normal its index in A5 is 2, but since there is no normal subgroup of index 2 in A5, we conclude that there is no subgroup of ord 30 in A5.
Do you follow? (You still need to fill in the "why's", though...) :yuck:
Last edited: Mar 18, 2009
5. Mar 18, 2009
### matt grime
This is an odd statement. A_5 does not have a non-trivial normal subgroup of *any* index, since it is simple (not just not solvable - A_5 x C_2 is not solvable but has 2 normal subgroups, one of which has index 2). Why single out 2,3, and 5?
Again, this is odd. There are no normal subgroups - the central two steps of logic in this deduction are redundant, i.e. you can stop at
"In this case, since if a subgroup of ord 30 exists, then it must be normal <snip>. We conclude that there is no subgroup of ord 30 in A5"
6. Mar 18, 2009
### sutupidmath
Yes, you are right, of course!
When I pointed out normal subgroups of index 2, 3, 5, what I really had in mind was the solvability issue of A5 (which, by the way, is not that relevant here). In other words, if A_5 were solvable, then any normal subgroup of prime index in A5 would have index 2, 3 or 5.
And, yes, my reasoning should have ended earlier, as you pointed out.
7. Mar 18, 2009
Taking a step back, how do I prove that A5 is simple? My approach would be to determine the conjugacy classes of A5 and their orders and use these to show that there can be no other subgroups besides {e} and A5 itself. But I am not sure how to actually find the conjugacy classes. Or do you know of any other proof methods?
8. Mar 19, 2009
### matt grime
You omitted the word normal in your second sentence - you want to show that it has no subgroup of order 30, index 2, which is necessarily normal.
A_5 is in S_5. You know the conjugacy classes of elements in S_5 explicitly. Start from there: if two elements in A_5 are conjugate in S_5 when are they also conjugate in A_5? It's just a matter of calculations here.
So, do you know the conjugacy classes of elements in S_5?
9. Mar 19, 2009
Yes, yes, I did forget the word "normal."
So yes, I do have the conjugacy classes of S5 explicitly. I guess the key I'm missing is when two elements conjugate in S5 are also conjugate in A5. Is there a theorem about this?
10. Mar 19, 2009
### matt grime
Yes. But you can just work it out by doing it. Try S_3 and S_4 first. Or you can just look it up, of course. I can't decide if I condone that action in this case.
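(Editorial aside, for checking the computation suggested above: among the even permutations of S_5 there are 1 identity, 15 elements of type (ab)(cd), 20 three-cycles and 24 five-cycles. Inside A_5 the first three remain single conjugacy classes, while the 24 five-cycles split into two A_5-classes of size 12 each; a class splits exactly when no odd permutation commutes with its elements. So the A_5 class sizes are 1 + 15 + 20 + 12 + 12 = 60. A normal subgroup must be a union of whole classes that includes the identity and whose total size divides 60, and no such union has size a proper divisor of 60 other than 1; that is one standard route to the simplicity of A_5.)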
11. Mar 19, 2009 | 2017-08-22 07:47:04 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8068908452987671, "perplexity": 476.1537217369902}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886110485.9/warc/CC-MAIN-20170822065702-20170822085702-00367.warc.gz"} |
https://www.trustudies.com/ncert-solutions/class-10/maths/applications-of-trigonometric-ratios/ | # NCERT solution for class 10 maths applications of trigonometric ratios ( Chapter 9)
#### Solution for Exercise -9.1
Q.1 A circus artist is climbing a 20 m long rope, which is tightly stretched and tied from the top of a vertical pole to the ground. Find the height of the pole, if the angle made by the rope with the ground level is 30º (see figure).
###### Answer :
In $$∆ \ ABC$$ , $$\frac{AB}{AC} \ = \ sin30º$$
=> $$\frac{AB}{20} \ = \ \frac{1}{2}$$ [ ∵ $$sin30º \ = \ \frac{1}{2}$$ ]
=> $$AB \ = \ \frac{1}{2} \ × \ 20 \ = \ 10$$
∴ Height of the pole is 10 m.
Q2. A tree breaks due to storm and the broken part bends so that the top of the tree touches the ground making an angle 30º with it. The distance between the foot of the tree to the point where the top touches the ground is 8m. Find the height of the tree.
###### Answer :
In $$∆ \ ABC$$
$$\frac{AB}{BC} \ = \ tan30º \ => \ \frac{AB}{8} \ = \ \frac{1}{ \sqrt{3}}$$
=> $$AB \ = \ \frac{8}{ \sqrt{3}}$$ (1)
And, $$\frac{AC}{BC} \ = \ sec30º \ => \ \frac{AC}{8} \ = \ \frac{2}{ \sqrt{3}}$$
=> $$AC \ = \ \frac{16}{ \sqrt{3}}$$ (2)
Now, Height of the tree = AB + AC
$$=\ \frac{8}{ \sqrt{3}} \ + \ \frac{16}{ \sqrt{3}} \ = \ \frac{24}{ \sqrt{3}}$$
$$= \ \frac{24}{ \sqrt{3}} \ × \ \frac{ \sqrt{3}}{ \sqrt{3}} \ = \ 8\sqrt{3}$$ m
Q3. A contractor plans to install two slides for the children to play in a park. For the children below the age of 5 years, she prefers to have a slide whose top is at a height of 1.5 m, and is inclined at an angle of 30º to the ground, whereas for elder children, she wants to have a steep slide at a height of 3m, and inclined at an angle of 60º to the ground. What should be the length of the slide in each case?
###### Answer :
In $$∆ \ BDE$$
$$\frac{DE}{BD} \ = \ cosec 30º \ => \ \frac{DE}{1.5} \ = \ 2$$
=> $$DE \ = \ 2 × 1.5 \ = \ 3$$
And in $$∆ \ ABC$$
$$\frac{AC}{AB} \ = \ cosec 60º \ => \ \frac{AC}{3} \ = \ \frac{2}{ \sqrt{3}}$$
$$AC \ = \ \frac{2}{ \sqrt{3}} × 3 \ = \ 2\sqrt{3}$$
∴ Length of slides are $$3$$ m and $$2\sqrt{3}$$ m
Q4. The angle of elevation of the top of a tower from a point on the ground, which is 30 m away from the foot of the tower, is 30º. Find the height of the tower.
###### Answer :
Let AB be the tower of height h metres and let C be a point at a distance of 30 m from the foot of the tower. The angle of elevation of the top of the tower from point C is given as 30º.
In $$∆ \ CAB$$, we have
$$\frac{AB}{CA} \ = \ tan30º$$
$$\frac{h}{30} \ = \ \frac{1}{ \sqrt{3}} \ => \ h \ = \ \frac{30}{ \sqrt{3}} \ = \ 10 \sqrt{3}$$
Hence, the height of the tower is $$10 \sqrt{3}$$ metres.
Q5. A kite is flying at a height of 60 m above the ground. The string attached to the kite is temporarily tied to a point on the ground. The inclination of the string with the ground is 60º. Find the length of the string, assuming that there is no slack in the string.
###### Answer :
Let OA be the horizontal ground, and let K be the position of the kite at height 60 m above the ground. Let the length of the string OK be x metres. It is given $$∠ \ KOA \ = \ 60º$$
In $$∆ \ AOK$$, we have
$$\frac{AK}{OK} \ = \ sin60º \ => \ \frac{60}{x} \ = \ \frac{ \sqrt{3}}{2}$$
$$x \ = \ \frac{120}{ \sqrt{3}} \ = \ 40 \sqrt{3}$$
∴ the length of the string is $$40 \sqrt{3}$$ m
Q6. A 1.5 m tall boy is standing at some distance from a 30 m tall building. The angle of elevation from his eyes to the top of the building increases from 30º to 60º as he walks towards the building. Find the distance he walked towards the building.
###### Answer :
Let OA be the building and PL be the initial position of the man such that $$∠ \ APR=30º$$ and AO = 30 m. Let MQ be the position of the man at a distance PQ. Here $$∠ \ AQR \ = \ 60º$$.
Now from $$∆ \ ARQ$$ and $$∆ \ ARP$$ , we have
$$\frac{QR}{AR} \ = \ cot60º \ => \ \frac{QR}{AR} \ = \ \frac{1}{ \sqrt{3}} \ => \ QR \ = \ \frac{AR}{ \sqrt{3}}$$ (1)
and, $$\frac{PR}{AR} \ = \ cot30º \ => \ \frac{PR}{AR} \ = \ \sqrt{3} \ => \ PR \ = \ \sqrt{3}AR$$ (2)
From (1) and (2), we get
$$PQ \ = \ PR \ - \ QR \ = \ \sqrt{3}AR \ - \ \frac{AR}{ \sqrt{3}}$$ $$= \ \frac{(3-1)AR}{ \sqrt{3}} \ = \ \frac{2 \sqrt{3}}{3} AR$$
$$= \ \frac{2 \sqrt{3}}{3} \ × \ 28.5 \ = \ 19 \sqrt{3}$$ [∵ $$AR \ = \ 30 \ - \ 1.5 \ = \ 28.5$$m ]
∴ the distance walked by the man towards the building is $$19 \sqrt{3}$$ metres.
Q7. From a point on the ground the angles of elevation of the bottom and top of a tower fixed at the top of a 20 m high building are 45º and 60º respectively. Find the height of the tower.
###### Answer :
Let BC be the building of height 20 m and CD be the tower of height x metres. Let A be a point on the ground at a distance of y metres away from the foot B of the building.
In $$∆ \ ABC$$, we have
$$\frac{BC}{AB} \ = \ tan45º \ => \ \frac{20}{y} \ = \ 1$$ $$=> \ y \ = \ 20$$
In $$∆ \ ABD$$, we have
$$\frac{BD}{AB} \ = \ tan60º \ => \ \frac{20+x}{20} \ = \ \sqrt{3}$$
=> $$20+x \ = \ 20 \sqrt{3} \ => \ x \ = \ 20( \sqrt{3} -1)$$ $$x \ = \ 20(1.732 - 1) \ => \ x \ = \ 14.64$$
∴ the height of the tower is 14.64 metres.
Q8. A statue 1.6 m tall stands on the top a pedestal. From a point on the ground the angle of elevation of the top of the statue is 60º and from the same point the angle of elevation of the top of the pedestal is 45º. Find the height of the pedestal.
###### Answer :
Let BC be the pedestal of height h metres and CD be the statue of height 1.6 m. Let A be a point on the ground such that $$∠ \ CAB \ = \ 45º$$ and$$∠ \ DAB \ = \ 60º$$
In $$∆ \ ABC$$ and $$∆ \ ABD$$, we have
$$\frac{AB}{BC} \ = \ cot45º \ => \ \frac{AB}{h} \ = \ 1$$ $$=> \ AB \ = \ h$$ (1)
and, $$\frac{BD}{AB} \ = \ tan60º \ => \ \frac{BC + CD}{AB} \ = \ \sqrt{3}$$ $$=> \ \frac{h+1.6}{h} \ = \ \sqrt{3} \ => \ h+1.6 \ = \ h \sqrt{3}$$
=> $$h( \sqrt{3} \ - \ 1) \ = \ 1.6 \ => \ h \ = \ \frac{1.6}{ \sqrt{3} \ - \ 1} \ = \ \frac{1.6}{ \sqrt{3} \ - \ 1} × \frac{ \sqrt{3} \ + \ 1}{ \sqrt{3} \ + \ 1}$$
$$= \ \frac{1.6( \sqrt{3} \ + \ 1)}{2} \ = \ 0.8( \sqrt{3}+1)$$ $$= \ 2.1856$$
∴ the height of the pedestal is $$2.1856$$ m
Q9. The angle of elevation of the top of the building from the foot of the tower is 30º and the angle of elevation of the top of the tower from the foot of the building is 60º. If the tower is 50 m high, find the height of the building.
###### Answer :
Let AB be the building of height h, standing on the ground at A, and let CD be the tower of height 50 m, standing on the ground at C, so that AC is the horizontal distance between them. The angle of elevation of the top of the building from the foot of the tower is 30º, so $$∠ \ BCA \ = \ 30º$$, and the angle of elevation of the top of the tower from the foot of the building is 60º, so $$∠ \ DAC \ = \ 60º$$.
In $$∆ \ BAC$$ and $$∆ \ DCA$$, we have
$$\frac{AC}{AB} \ = \ cot30º \ => \ \frac{AC}{h} \ = \ \sqrt{3}$$ $$=> \ AC \ = \ \sqrt{3}h$$ (1)
and $$\frac{DC}{AC} \ = \ tan60º \ => \ \frac{50}{AC} \ = \ \sqrt{3}$$ $$=> \ AC \ = \ \frac{50}{ \sqrt{3}}$$ (2)
Equating the values of AC from (1) and (2), we get
$$\sqrt{3}h \ = \ \frac{50}{ \sqrt{3}} \ => \ h \ = \ \frac{50}{ \sqrt{3}} × \frac{1}{ \sqrt{3}}$$
$$= \ \frac{50}{3} \ = \ 16.66$$
∴ the height of the building is 16.66 m.
Q10. Two poles of equal heights are standing opposite each other on either side of the road, which is 80 m wide. From a point between them on the road, the angle of elevation of the top of the poles are 60º and 30º respectively. Find the height of the poles and the distances of the point from the poles
###### Answer :
Let AB and CD be two poles each of height h metres. Let P be a point on the road such that AP = x metres. Then CP = (80 – x) metres. Its is given that $$∠ \ APB \ = \ 60º$$ and $$∠ \ CPD \ = \ 30º$$.
In $$∆ \ APB$$, we have
$$\frac{AB}{AP} \ = \ tan60º \ => \ \frac{h}{x} \ = \ \sqrt{3}$$ $$=> \ h \ = \ \sqrt{3}x$$ (1)
In $$∆ \ CPD$$, we have
$$\frac{CD}{CP} \ = \ tan30º \ => \ \frac{h}{80-x} \ = \ \frac{1}{ \sqrt{3}}$$ $$=> \ h \ = \ \frac{80-x}{ \sqrt{3}}$$ (2)
Equating the values of h from (1) and (2), we get
$$\sqrt{3}x \ = \ \frac{80-x}{ \sqrt{3}} \ =>\ 3x \ = \ 80-x$$ $$=> \ x \ = \ 20$$
Putting x = 20 in (1), we get
$$h \ = \ \sqrt{3} × 20 \ = \ (1.732) × 20 \ = \ 34.64$$
∴ The point is at a distance of 20 metres from the first pole and 60 metres from the second pole. And the height of the pole is 34.64 metres.
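If you want to sanity-check an answer like this numerically, a few lines of code are enough. The snippet below is not part of the NCERT text (class and variable names are arbitrary); it just recomputes the Q10 figures:

using System;

class PoleCheck
{
    static void Main()
    {
        double x = 20.0;                            // distance from the first pole (m)
        double h = Math.Sqrt(3) * x;                // from tan 60º = h / x
        double hOther = (80.0 - x) / Math.Sqrt(3);  // from tan 30º = h / (80 - x)
        Console.WriteLine(h);                       // ≈ 34.64
        Console.WriteLine(hOther);                  // ≈ 34.64, the same height, so x = 20 is consistent
    }
}

Both expressions give the same height only for x = 20, which is why the simultaneous equations in the worked solution pin down the point's position.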
Q11. A T.V. tower stands vertically on a bank of a river. From a point on the other bank directly opposite the tower, the angle of elevation of the top of the tower is 60º. From a point 20 m away from this point on the same bank, the angle of elevation of the top of the tower is 30º (see figure). Find the height of the tower and the width of the river.
###### Answer :
Let AB be the T.V. tower of height h standing on the bank of a river. Let C be the point on the opposite bank of the river such that BC = x metres. Let D be another point 20 m away from C on the same bank, so that CD = 20, and let the angles of elevation of the top of the T.V. tower at C and D be 60º and 30º respectively, i.e., $$∠ \ ACB \ = \ 60º$$ and $$∠ \ ADB \ = \ 30º$$
In $$∆ \ ABC$$ , we have
$$\frac{AB}{BC} \ = \ tan60º$$
$$=> \ \frac{h}{x} \ = \ \sqrt{3} \ => \ h \ = \ \sqrt{3}x$$ (1)
In $$∆ \ ABD$$, we have
$$\frac{AB}{BD} \ = \ tan30º \ => \ \frac{h}{x+20} \ = \ \frac{1}{ \sqrt{3}}$$ $$=> \ h \ = \ \frac{x+20}{ \sqrt{3}}$$ (2)
Equating values of h from (1) and (2), we get
$$\sqrt{3}x \ = \ \frac{x+20}{ \sqrt{3}} \ => \ 3x \ = \ x + 20$$ $$=> \ x = 10$$
Putting x = 10 in (1), we get
$$h \ = \ \sqrt{3} × 10 \ = \ 17.32$$
∴ the height of the T.V. tower is 17.32 metres and the widith of the river is 10 metres.
Q.12 From the top of a 7 m high building, the angle of elevation of the top of a cable tower is 60º and the angle of depression of its foot is 45º. Determine the height of the tower.
###### Answer :
Let AB be the building of height 7 metres and let CD be the cable tower. It is given that the angle of elevation of the top D of the tower observed from A is 60º and the angle of depression of the base C of the tower observed from A is 45º. Then $$∠ \ EAD \ = \ 60º$$ and $$∠ \ BCA \ = \ 45º$$ . Also AB = 7 m
In $$∆ \ EAD$$, we have
$$\frac{DE}{EA} \ = \ tan60º \ => \ \frac{h}{x} \ = \ \sqrt{3}$$ $$=> \ h \ = \ \sqrt{3} x$$ (1)
In $$∆ \ ABC$$, we have
$$\frac{AB}{BC} \ = \ tan45º \ => \ \frac{7}{x} \ = \ 1$$ $$=> \ x \ = \ 7$$
(2)
Putting x = 7 in (1), we get
$$h \ = \ 7 \sqrt{3} \ => \ DE \ = \ 7 \sqrt{3}$$m
∴ $$CD \ = \ CE \ + \ ED \ = \ 7 \ + \ 7\sqrt{3} \ = \ 19.124$$ m
∴ The height of the cable tower is 19.124 m.
Q13. As observed from the top of a 75 m tall lighthouse, the angles of depression of two ships are 30º and 45º. If one ship is exactly behind the other on the same side of the lighthouse, find the distance between the two ships.
###### Answer :
Let AB be the lighthouse of height 75 m and let two ships be at C and D such that the angles of depression from B are 45º and 30º respectively.
Let AC = x and CD = y.
In $$∆ \ ABC$$, we have
$$\frac{AB}{AC} \ = \ tan45º \ => \ \frac{75}{x} \ = \ 1$$ $$=> \ x \ = \ 75$$ (1)
In $$∆ \ ABD$$, we have
$$\frac{AB}{AD} \ = \ tan30º \ => \ \frac{75}{x+y} \ = \ \frac{1}{ \sqrt{3}}$$ $$=> \ x+y \ = \ 75 \sqrt{3}$$ (2)
From (1) and (2), we have
$$75+y \ = \ 75 \sqrt{3} \ => \ y \ = \ 75( \sqrt{3} - 1) \ => \ y \ = \ 75(1.732 - 1) \ = \ 54.9$$
∴ the distance between the two ships is 54.9 metres.
Q14. A 1.2 m tall girl spots a balloon moving with the wind in a horizontal line at a height of 88.2 m from the ground. The angle of elevation of the balloon from the eyes of the girl at any instant is 60º. After some time, the angle of elevation reduces to 30º. Find the distance travelled by the balloon during the interval.
###### Answer :
Let P and Q be the two positions of the balloon and let A be the point of observation. Let ABC be the horizontal line through A. It is given that the angles of elevation of the balloon in its two positions P and Q are $$∠ \ PAB \ = \ 60º$$ and $$∠ \ QAB \ = \ 30º$$. It is also given that MQ = 88.2 m, where M is the point on the ground directly below Q.
$$=> \ CQ \ = \ MQ \ - \ MC \ = \ 88.2 \ - \ 1.2 \ = \ 87$$m .
In $$∆ \ ABP$$, we have
$$\frac{BP}{AB} \ = \ tan60º \ => \ \frac{87}{AB} \ = \ \sqrt{3}$$ $$=> \ AB \ = \ \frac{87}{ \sqrt{3}} \ = \ \frac{87 \sqrt{3}}{3} \ = \ 29 \sqrt{3}$$ (1)
In $$∆ \ ACQ$$, we have
$$\frac{CQ}{AC} \ = \ tan30º \ => \ \frac{87}{AC} \ = \ \frac{1}{ \sqrt{3}}$$ $$=> \ AC \ = \ 87 \sqrt{3}$$ (2)
Now, $$PQ \ = \ BC \ = \ AC \ - \ AB \ = \ 87 \sqrt{3} \ - \ 29 \sqrt{3} \ = \ 58 \sqrt{3}$$ $$= \ 100.456$$m
∴ the balloon travels $$100.456$$m
Q15. A straight highway leads to the foot of a tower. A man standing at the top of the tower observes a car at an angle of depression of 30º, which is approaching the foot of the tower with a uniform speed. Six minutes later, the angle of depression of the car is found to be 60º. Find the time taken by the car to reach the foot of the tower.
###### Answer :
Let AB be the tower of height h. Let C be the initial position of the car and let after 6 minutes the car be at D. It is given that the angles of depression at C and D are 30º and 60º respectively. Let the speed of the car be v metre per minute. Then ,
CD = Distance travelled by the car in 6 minutes
=> $$CD \ = \ 6v$$ metres [∵ Distance = Speed × Time]
Let the car takes t minutes to reach the tower AB from D. Then,
$$DA \ = \ vt$$ metres
In $$∆ \ ABD$$, we have
$$\frac{AB}{AD} \ = \ tan60º \ => \ \frac{h}{vt} \ = \ \sqrt{3}$$ $$=> \ h \ = \ \sqrt{3}vt$$
(1)
In $$∆ \ ABC$$ , we have
$$\frac{AB}{AC} \ = \ tan30º \ => \ \frac{h}{vt+6v} \ = \ \frac{1}{ \sqrt{3}}$$ $$=> \ \sqrt{3}h \ = \ vt \ + \ 6v$$ (2)
Substituting the value of h from (1) in (2), we get
$$\sqrt{3} \ × \ \sqrt{3}vt \ = \ vt \ + \ 6v \ => \ 3vt \ = \ vt \ + \ 6v$$ $$=> \ 3vt \ - \ vt \ = \ 6 \ => \ t \ = \ 3$$
∴ the car will reach the tower from D in 3 minutes.
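The Q15 result can be checked the same way for an arbitrary tower height (again a throwaway check, not part of the NCERT text; the names are arbitrary):

using System;

class CarCheck
{
    static void Main()
    {
        double h = 123.4;                             // any tower height works
        double dFar  = h / Math.Tan(Math.PI / 6.0);   // horizontal distance when the depression is 30º
        double dNear = h / Math.Tan(Math.PI / 3.0);   // horizontal distance when the depression is 60º
        double speed = (dFar - dNear) / 6.0;          // metres per minute covered in the 6-minute interval
        Console.WriteLine(dNear / speed);             // prints 3 (up to floating-point rounding)
    }
}

Because both distances scale linearly with h, the height cancels and the remaining travel time is always 3 minutes, exactly as the algebra shows.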
Q16. The angle of elevation of the top of a tower from two points at a distance of 4 m and 9m from the base of the tower and in the same straight line with it are complementary. Prove that the height of the tower is 6 m.
###### Answer :
Let AB be the tower. Let C and D be the two points at distances 9 m and 4 m respectively from the base of the tower. Then AC = 9 m, AD = 4m.
Let $$∠ \ ACB \ = \ \theta$$ and $$∠ \ ADB \ = \ 90º \ - \ \theta$$
Let h be the height of the tower AB.
In $$∆ \ ACB$$, we have
$$\frac{AB}{AC} \ = \ tan \theta \ => \ \frac{h}{9} \ = \ tan \theta$$ (1)
In $$∆ \ ADB$$, we have
$$\frac{AB}{AD} \ = \ tan(90º \ - \ \theta) \ => \ \frac{h}{4} \ = \ cot \theta$$ (2)
From (1) and (2), we have
$$\frac{h}{9} \ × \ \frac{h}{4} \ = \ tan \theta \ × \ cot \theta \ => \ \frac{h^2}{36} \ = \ 1$$ $$=> \ h^2 \ = \ 36 \ => \ h \ = \ 6$$
∴ the height of the tower is 6 metres. | 2020-07-09 19:44:59 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7760286927223206, "perplexity": 439.50812529636545}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655901509.58/warc/CC-MAIN-20200709193741-20200709223741-00488.warc.gz"} |
https://www.studyxapp.com/homework-help/the-convex-h-u-l-l-of-a-set-of-vectors-boldsymbolxi-i1-ldots-n-is-the-set-of-al-q1565793962219384833 | # Question (Solved, 1 Answer)
9) The convex hull of a set of vectors $$\boldsymbol{x}_{i}, i=1, \ldots, n$$ is the set of all vectors of the form $$\boldsymbol{x}=\sum_{i=1}^{n} \alpha_{i} \boldsymbol{x}_{i}$$, where $$\alpha_{i} \geq 0$$ and $$\sum_{i} \alpha_{i}=1$$. Given two sets of vectors, show that either they are linearly separable or their convex hulls intersect. (To answer this, suppose that both statements are true, and consider the classification of a point in the intersection of the convex hulls.)
Transcribed Image Text: The convex $$h u l l$$ of a set of vectors $$\boldsymbol{x}_{i}, i=1, \ldots, n$$ is the set of all vectors of the form $\boldsymbol{x}=\sum_{i=1}^{n} \alpha_{i} \boldsymbol{x}_{i}$ where $$\alpha_{i} \geq 0$$ and $$\sum_{i} \alpha_{i}=1$$. Given two sets of vectors, show that either they are linearly separable or their convex hulls intersect. (To answer this, suppose that both statements are true, and consider the classification of a point in the intersection of the convex hulls.)
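For orientation, here is a sketch of the argument the hint points toward (an editorial outline only, not the site's full answer). Suppose both statements held: the sets are separated by some $$\hat{\boldsymbol{w}}, w_{0}$$, and their convex hulls share a point $$\boldsymbol{y}$$. Write $$\boldsymbol{y}=\sum_{n} \alpha_{n} \boldsymbol{x}_{n}=\sum_{m} \beta_{m} \boldsymbol{z}_{m}$$ with $$\alpha_{n}, \beta_{m} \geq 0$$ and $$\sum_{n} \alpha_{n}=\sum_{m} \beta_{m}=1$$. Then $\hat{\boldsymbol{w}}^{T} \boldsymbol{y}+w_{0}=\sum_{n} \alpha_{n}\left(\hat{\boldsymbol{w}}^{T} \boldsymbol{x}_{n}+w_{0}\right)>0 \quad \text{and} \quad \hat{\boldsymbol{w}}^{T} \boldsymbol{y}+w_{0}=\sum_{m} \beta_{m}\left(\hat{\boldsymbol{w}}^{T} \boldsymbol{z}_{m}+w_{0}\right)<0,$ which is a contradiction; so the two sets cannot be both linearly separable and have intersecting convex hulls.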
Consider two sets of points $$\{\boldsymbol{x}^{(n)}\}$$ and $$\{\boldsymbol{z}^{(m)}\}$$ and their corresponding convex hulls. The two sets of points are linearly separable if there exists a vector $$\hat{\boldsymbol{w}}$$ and a scalar $$w_{0}$$ such that $$\hat{\boldsymbol{w}}^{T}\boldsymbol{x}^{(n)}+w_{0}>0$$ for all $$\boldsymbol{x}^{(n)}$$ and $$\hat{\boldsymbol{w}}^{T}\boldsymbol{z}^{(m)}+w_{0}<0$$ for all $$\boldsymbol{z}^{(m)}$$. Now we show that if their convex hulls intersect, the two sets of points cannot be linearly separable, and conversely that if they are linearly separable, their convex hulls do not intersect. Solution: First let us calculate the linear discriminant for the points belonging to the two convex hulls. For the convex hull of $$\{\boldsymbol{x}^{(n)}\}$$ the linear discriminant is $$y(\boldsymbol{x})=\hat{\boldsymbol{w}}^{T}\boldsymbol{x}+w_{0}$$ (1). Again we know $$\boldsymbol{x}=\sum_{n}\alpha_{n}\boldsymbol{x}^{(n)}$$ where $$\alpha_{n}\geq 0$$ ... See the full answer | 2023-01-28 04:35:48 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8956717848777771, "perplexity": 441.8391603796893}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499470.19/warc/CC-MAIN-20230128023233-20230128053233-00056.warc.gz"}
https://academia.stackexchange.com/questions/126403/how-to-deal-with-a-cynical-class | # How to deal with a cynical class?
I am dealing with this class that somehow became very cynical. We have constant complaints about the pace of the class, people asking literally the same questions over and over, blatant plagiarism, students pretty much insulting the lecturer and staff or laughing (sometimes at others asking questions) and being noisy during lectures etc.
This is especially weird since this is a professional education class. We are talking about people in their 30s on average paying $10k at a top 10 university and just for this one course. So it is not like this is some filler core requirement people are forced to attend against their will. These people were interested in this class before start and something messed up the morale. We had the same staff --instructor and TAs-- for other cohorts and never had such issues. I am thinking it has to do with the composition of this cohort - one or two know-it-all types had a negative attitude from the beginning and some others somewhat struggling a bit started taking on their attitude. Before we realized this all snowballed into the cohort turning into a classroom of 30+ yr old high schoolers. We have another 3 months to go and it is getting very exhausting for the staff and for the students still trying. People blatantly slowing down the class is not helping either. The instructor has been trying to talk to the whole class as well as the individual troublemakers along the lines of maintaining professionalism but that is simply having no effect. So the question is, how do we address this to salvage as much as we can and make it to the end of the class without it devolving further? • Are these people paying themselves or were they sent by a company who is paying? – Roland Mar 13 at 14:57 • What is your own position here? Student, TA, ... What? – Buffy Mar 13 at 17:54 • @Nat That's basically the case. No one academically that advanced afaik but we have bigwig corporate managers that can barely use computers alongside software engineers wanting to learn about data science and everything in between – Victor S Mar 13 at 19:33 • Gotcha - your situation makes a ton of sense, then. You might want to edit that info into the question, as it should help provide important context. – Nat Mar 13 at 19:34 • This is especially weird since this is a professional education class. — Sadly, no, it’s not weird at all. The fact that this is a professional education class makes this behavior much more likely, in my experience. – JeffE Mar 13 at 19:50 ## 6 Answers If they want to act as high schoolers, you'll need to treat them like high schoolers. I've taught in high school for a little time and while your semester already started, it is not too late to get a handle back on your class. The first few minutes after the class start is the most important moment to make it clear messing around will not be tolerated. I would try at least once, at the beginning of a class, to say something like this : You guys are being hard to manage and I'm sure you're aware of it. You are being disrespectful towards me, the rest of the teaching staff and other students who are trying to work. I'd like to know what is wrong, is there anything you think might make your experience better? This way, you make it clear you want to work with them to find a solution and that it is bothering many people. Give them a chance to work it out with you before going the "full authority" route. You might give some troublemakers to turn into "positive leaders". Even if this works, you need to start making it clear who has authority in your class. When you start your class, state your rules and don't move away from them. You'll probably need to set an example a couple of times before your students understand. You need to systematically apply the rules you gave your students or else they'll start exploiting you. 
Regarding what you said in your post : People asking literally the same questions over and over Offer the student to come see you after class if they asked a question that was already asked, this way you do not slow down the pace of your class and if it's an attempt at "trolling" the student simply won't come to see you. Blatant plagiarism Your school clearly has a policy against plagiarism, start using it without exception. These students lost their right to second chances. If you catch plagiarism, report it to the correct instance at your school and let the comity in place decide. Students pretty much insulting the lecturer and staff or laughing (sometimes at others asking questions) Respect is the most important thing in a class. If a kid in high school (where, at this age, education is a right not a service) can get kicked out of a class for being not being respectful, so can an adult. If I paid 10k$ for a class, you can be damn sure I'd stop this after being kicked out once. If you want to make sure you won't have repercussions, go see the head of your department to make sure they have your back if something happens.
Being noisy during lectures
To be fair, this can happen. If this is repeated, refer to the point above.
You said in a comment "That is a straightforward solution but the lead instructor in this case does not want to take authoritarian measures. So that's another challenge here". This is sad but University teachers aren't often good at managing this kind of class (because they don't have much experience with them). You should try talking to him/her about it, stating that it really messes up the flow of the class, the confidence of all the teaching staff and the overall credibility of the class.
As a last resort, when your class gets out of control, stop talking. More often than not, the students will deal with themselves and silence should come back to the class. If it doesn't well it is their loss.
To make this work, the best case scenario would be for the whole teaching staff to be on the same line and to make sure your boss has your back, but University students are customers of a service under certain conditions. If some students mess up with your class, they interfere with the students who are respecting the rules and this is not acceptable.
Edit
I've seen an OP's comment stating :
"That's basically the case. No one academically that advanced afaik but we have bigwig corporate managers that can barely use computers alongside software engineers wanting to learn about data science and everything in between"
This is a very complex problem to address, because it is hard to pace your class for the large diversity of backgrounds, which means either some students won't understand or some will find the class too slow and since you're alone, it's hard to find a middle ground. What I might propose is to :
• Either match students of different backgrounds together so that they can help each other (by whispering, of course) regarding their different expertise.
• Show examples that can "talk to" your different kind of students. You have, maybe among others, managers and software engineering students, so try to give specific situations/examples where each of these students can bring their expertise. It is likely, I think, that your problematic students are the kind of people that like to hear themselves talk. By getting them to participate, I'm pretty sure you'll be able to have a better handle on your class.
In the case of a professional education class, I think it's important to let the students talk/participate in class. These are not people who are used to sitting hours in a class to listen to someone talk like University students are used to. By giving more opportunities to participate, you'll probably have a better experience.
• And do refer to their behavior as 'high school' explicitly. You should not worry about shaming them into behaving. – user104070 Mar 13 at 20:12
• This situation seems strange to me. If we did that to any of the professors I have had, things would get difficult real fast. The laboratory courses would become 14 hour marathons, the assignments would become super hard, there would be pop quizzes and I would presume the exams would be excruciating. The grades would plummet, and the number of people having to attend the "summer exams" would increase... I would be very careful about offending someone that holds in their hands the keys to my future. – Stian Yttervik Mar 15 at 13:10
• When dealing with unruly students, it is important, though, to keep the composure. I was in a class where two students were being noisy. The professor yelled to keep silence, once and no more. On the other hand, one of my high school lectures lost it once, exploded screaming a bunch of colourful language, and the general reaction was more to contain laughter than to feel you should shut up. – Davidmh Mar 15 at 15:23
This happens to almost everyone who teaches in academia. We all eventually get "that class". Let me lay out some options for you:
Look for experts on your campus outside the department
First of all, if you have a college of education at your school, head over there and find a professor who was a former K-12 teacher. Further, if your school has a teacher prep program, you are in even better luck. There will likely be a few teaching veterans who have plenty of experience getting unruly classes (kids to adults) back under control. Some of those education professors who were former teachers have classroom management down to an art form. It can be really interesting to watch how effortlessly they do it as well. I recommend reaching out to the department head there and asking if there is a professor (or even graduate student) who might be able to give you some advice.
Ask others in your department if they have had a similar situation. Reach out to the department head. First of all, it might be best to fill your department head in on whats going on anyway. I have not known too many department heads who like surprises. Especially surprise calls from administration asking what they know about students complaining being kicked out of a class they paid $10,000 for. Self Fix it Grab a book on classroom management and see what you can do on your own. Its tough getting a class back in line mid semester, but there is plenty of help advice in books and from blogs. The usual remedy is to implement increased structure in the class. Get the class into doing routines. Remember, kicking out a student from class is not what you want to do. Banning a student from class is the nuclear option. Once you go down that route, things get out of your control. You might end up with an administrative problem. This gets even worse if you end up kicking the wrong person out. How are you going to even determine who is the ring leader in the first place? • Classroom management is an excellent skill to have - I'd like to learn it myself. I have been very impressed by some of the teacher's at my sons' elementary school. You can tell the ones who have had formal training versus the ones who are just up there trying to teach. It's night and day. – CramerTV Mar 14 at 1:01 • This doesn't really answer the question. "Ask someone else" and "read a book" are non-answers you can say about literally any question. The whole point of stackexchange is to "ask someone else," but there's no answers right here. – Xen2050 Mar 14 at 2:07 • @DanielR.Collins: Such incoherence is typical of deconstructionism. If you haven't heard of it, don't waste your time and energy looking it up. – user21820 Mar 14 at 3:01 • @ManuelRodriguez You seem to be looking at that a bit backwards, at least in regards to your "role playing game" point. It doesn't seem at all to me like this answer interpreting the problem as a RPG. The issue seems to be that you're interpreting the act of doing actions to achieve a goal as if it is some property specific to RPGs. I believe it is quite the opposite; that RPGs are based on real life cause and effect, where certain actions absolutely do modify the "plot". In this case the plot is what happens in real life. – JMac Mar 14 at 11:23 • @user21820: I have a philosophy degree, studied deconstructionism, and never seen anything as incoherent as the (now deleted) prior comment. – Daniel R. Collins Mar 15 at 4:22 I'm afraid that, as a TA there is probably very little you can do. The instructor might be able to do something and there are some suggestions in the other answers here that can help, but you need a certain amount of recognized authority to make it work. I have had to deal (once) with an equally difficult, though different situation. I was able to handle it with a "shock therapy" trick, but only because I was a senior professor at the institution. A junior faculty member probably wouldn't be able to make this work. In my case, the students weren't disruptive, just disengaged. They didn't take notes, didn't seem to study, didn't ask questions. Completely passive. I asked one student before class why he didn't take notes and he just pointed to his head as if he learned everything immediately without effort. Of course it doesn't work that way. My solution was to announce at the beginning of a class that I was willing to just fail everyone in the class and we could all stop pretending. 
I would stop pretending to teach, they would stop pretending to learn, and we wouldn't even have to waste the time coming to class. Shock and dismay. The real problem is that most of them had had an easy time up to then with their education and no one had challenged them very deeply. It wasn't that they were lazy but just that they didn't really know how to learn. So I spent a couple of classroom hours teaching them how to learn. This was in a second year Computer Science course, by the way. The problem the OP states is different, but, with sufficient authority, recognized by the university, if not by the particular students, a shock therapy might work. Walk out of the classroom at the first sign of disrespect. Ask a disruptive student to immediately carry a note to the department head and wait for a reply. The note would mention the disrespect. Announce a snap quiz. Make it hard. Very risky. In my case, the story went around the department and added to my mythical powers. The students improved. The time and effort wasn't wasted. But, if you try this, you'd better be certain that you will be allowed to follow through and that the department will back you. A junior member of the faculty would be advised to try it only with permission of the head and, in the current situation, concurrence of the staff of the course (all TAs). • A fun story, but not applicable to the OP's situation, sadly – user104070 Mar 13 at 20:14 • @GeorgeM, I suggested a variation. Shock therapy is the lesson, not the specific details. – Buffy Mar 13 at 20:20 Identify the troublemakers and eject them from class as soon as they step out of line. As you say, this kind of situation often snowballs out of a few bad elements. These students are not there for entertainment and this isn't a high school. Someone who doesn't want to learn isn't worth wasting any time. Removing the bad elements might help restoring a professional environment, keeping only people who are actually intent on learning the content of the class. Usually these students end up getting the hint. If they don't, ban them outright from the class after a few times. The other students deserve a quality course from you and the other teachers, and these troublemakers are preventing it. It is highly unlikely that the whole class is really "cynical", and you will be left with actual students, not people who want to pass the time. In the rare event that literally all of the registered people don't care about the class, congratulate yourself on getting paid time off and watch movies or read a book during scheduled class time. Now, since you used a dollar sign, wrote "top 10" and mentioned students "paying$10k", I'm pretty confident that you are in the US, so take my advice with a grain of salt. I am not from the US and it is my understanding that the motto "customer is king" permeates even non-mercantile aspects of society such as higher education. If these troublemakers try to get a refund which in turn provokes the administration into pushing back on your decision to eject/ban students, try to make the argument that movie theaters are well within their rights to kick out noisy spectators and not refund them anything. But in the end, it's up to you to decide whether the fight is worth it and whether you have the political clout to pull this kind of stunt.
• This is what I'd do too. I daresay that the instructors waited even too much before expelling the troublemakers. – Massimo Ortolano Mar 13 at 15:47
• Perhaps the instructor could give out candies when students listen for more than 5 minutes straight. "I do not want to take authoritarian measures" is dogma, dogma should be challenged. – Kurotakest Mar 13 at 16:00
• @Kurotakest I'd find it very patronizing to be rewarded for mere basic decency. – David Richerby Mar 13 at 16:29
• @VictorS : "...does not want to take authoritarian measures" -- then the lead instructor is either an idiot or a mamby-pamby coward. It is the instructor's responsibility -- duty, even -- to ensure the quality of the class for all of the other students as well, and he is sacrificing this rather than asserting his authority to do this. – MPW Mar 13 at 18:57
• @opa not a public good, but a service to a customer — Yes, and by interrupting the instructor and plagiarizing, you are making the service you’ve paid for (practice and feedback) impossible to deliver, not just to you but to everyone in the class. That’s like going to a restaurant, talking over the waiter when they try to take your order, and stealing the breadsticks from the next table. – JeffE Mar 13 at 19:56
If it were 100% of the class acting this way, I'd say you have some time to experiment and find the best way to fix things. But, if there has been a single student that has acted properly and done their due diligence throughout the semester, I think that makes a big difference.
If I were a student in this class who was not participating in counterproductive behavior, I would rely on the instructors acting quickly to ensure I was still getting the quality of education I deserved. I would (personally) not want to experience a few months of professors experimenting with different disciplinary methods. If the situation above is accurate, this would be the point at which I'd already be reaching out to administration for a refund or class reassignment and I'd specifically say "the professors can't control the class" even though the class itself is intentionally out of control.
If 80% of the class can't conduct themselves in class properly, I would expect them to be asked to leave. If you can construe this as an absence, then they have a finite number of removals before they automatically fail. If it's for disrespecting staff or other students, I can't imagine there being much pushback if you can get the person who was disrespected to sign a paper saying it happened.
Instituting class conduct rules, with definite consequences for breaking them, is also a quantifiable way to show your expectations and their lack of respect for them. The bottom line is that if you have to take disciplinary action at this level of academia, prepare for it to be challenged, so back it up with quantifiable evidence (not anecdotal).
Also, given the course's topic, as a student I would respect the logic of saying "Some of you can't figure out how to conduct yourselves professionally amongst your peers, which is also the topic of this course. Classroom performance counts for 51% of your grade." I feel like that's a no-brainer; like you said, it's not just some core requirement they'll have to repeat, they are purchasing their own F a la carte. Remind them of that.
Not wanting to use authoritarian methods is admirable, but the longer you wait the more legitimate complaints you may get from the few students who want to be there and are making the proper effort. I think the burden is on you to show (starting very soon) you have enacted policies to combat, if not solve, the problem. If not, you (or the profs) could be held responsible for legitimate students asking for a refund, which is worse than disciplinary cases asking for one.
• Also, just to mention it, apologies for having to deal with this group of people at all. They are acting shamefully and wasting a lot of money/time. It is regrettable that the burden of fixing the situation rests on the people who aren't at fault. – Dpeif Mar 13 at 17:59
Well, I think the crucial information here is in the comments to the post, which reveal that the root cause of the problem is a big difference in skill levels between the attendees; the lower-skill attendees happen to be managers, who are accustomed to being outspoken and steer the conversation out of its professional realm.
In this case, the reasonable solution will be dividing the classroom and providing for each skill group appropriate information, so that for all groups the pace will be manageable.
It will likely require time investment in creating two parallel curriculums. Also assignment levels may vary according to the skill set.
The managers are also likely interested in different aspects of the data science from the programmer guys, so it's reasonable if they will show their discontent about spending time on the material which has less value for them.
Additionally, an approach used with quick learners at school may work, if the managers' egos permit it: make time for workshops, and when working on assignments, have the people who pick up the material faster help those who struggle. If done well, it may prove a valuable experience for all the participants.
• Dividing the classroom isn't realistic in a classroom setting. While you provide information for one group, you'll lose the other one and soon enough you'll completely lose both. – IEatBagels Mar 15 at 19:07
• Sure, it will involve making two separate classrooms. – alex440 Mar 15 at 20:23
• Well I don't see how that can be an option in a University setting. – IEatBagels Mar 18 at 23:31
• It depends on the OP's situation. I just brought the option to the table, maybe it will lead the OP to some idea that is suitable to their situation. – alex440 Mar 19 at 5:33 | 2019-10-15 23:46:52 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.28042954206466675, "perplexity": 1200.9969260505513}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986660829.5/warc/CC-MAIN-20191015231925-20191016015425-00210.warc.gz"} |
https://math.stackexchange.com/questions/2844995/trouble-understanding-the-proof-and-significance-of-a-probability-measure-being | # Trouble understanding the proof and significance of a probability measure being a continuous set function..
From Probability and Random Processes (3rd ed, Grimmett and Stirzaker)
Lemma
Let $A_1,A_2,\ldots$ be an increasing sequence of events so that $A_1 \subseteq A_2 \subseteq A_3 \subseteq \ldots$, and write $A$ for their limit: $$A=\bigcup_{i=1}^{\infty}A_i=\lim_{i\rightarrow\infty}A_i\hspace{0.1em}.$$ Then $\mathbb{P}(A)=\lim_{i\rightarrow\infty} \mathbb{P}(A_i)$.
Proof
$A=A_1 \cup (A_2 \setminus A_1) \cup (A_3 \setminus A_2) \cup \ldots$ is the union of a disjoint family of events. Thus, \begin{align} \mathbb{P}(A)&=\mathbb{P}(A_1)+ \sum_{i=1}^{\infty}\mathbb{P}(A_{i+1}\setminus A_i) \tag1 \\ &= \mathbb{P}(A_1)+\lim_{n\rightarrow\infty}\sum_{i=1}^{n-1}[\mathbb{P}(A_{i+1})-\mathbb{P}(A_i)] \tag2\\ &= \lim_{n\rightarrow\infty}\mathbb{P}(A_n) \tag3 \end{align} $$\tag*{\blacksquare}$$
I understand how writing $A$ as a disjoint union of events gives $(1)$, and how $\mathbb{P}(A_{i+1}\setminus A_i)=\mathbb{P}(A_{i+1})-\mathbb{P}(A_i)$ follows from the properties of a probability space, but I don't understand why we take $n-1$ as the upper limit of summation in $(2)$.
I do see that the partial sum in $(2)$ evaluates to $\sum_{i=1}^{n-1}[\mathbb{P}(A_{i+1})-\mathbb{P}(A_i)]=\mathbb{P}(A_n)-\mathbb{P}(A_1)$, then $(3)$ follows, so it seems like we had to have $n-1$ as the upper limit but I don't quite get what justified us in selecting it.
If I had selected $n$ as the upper limit and ended up with $\lim_{n\rightarrow\infty}\mathbb{P}(A_{n+1})$, would this have been incorrect?
I don't understand what this lemma is saying, what its significance is, or what it means for a set function to be continuous. It seems to me like we are just defining a notation, which I don't think is correct.
You are correct: the authors take the sum to $n-1$ in order to obtain $P(A_n)$. If they selected $n$ they would get $\lim_{n\to\infty}P(A_{n+1})$, but $\lim_{n\to\infty}P(A_{n+1})=\lim_{n\to\infty}P(A_{n})$, so that choice would not have been incorrect, just less direct.
As for the significance: this "continuity from below" lets you compute the probability of a limit event from the probabilities of simpler approximating events. For example, for any random variable $X$, $P(\{X>0\})=P\big(\bigcup_{n=1}^\infty \big\{X>\frac{1}{n}\big\}\big)=\lim_{n\to\infty}P\big(\big\{X>\frac{1}{n}\big\}\big)$
as $\big\{X>\frac{1}{n}\big\}$ is an increasing sequence of events. | 2019-09-21 13:07:01 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 1, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9987590312957764, "perplexity": 164.3107666137509}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514574501.78/warc/CC-MAIN-20190921125334-20190921151334-00509.warc.gz"} |
http://www.mathnet.ru/php/archive.phtml?wshow=paper&jrnid=tvp&paperid=127&option_lang=eng |
Teor. Veroyatnost. i Primenen., 2005, Volume 50, Issue 4, Pages 733–753 (Mi tvp127)
On the CLT for means under the rotation action. I
M. Weber
Institut de Recherche Mathématique Avancée, Université de Strasbourg
Abstract: We propose a method allowing us to build, for various typical means generated by the action of any given irrational rotation of the circle, examples of $L^2$ functions satisfying the central limit theorem (CLT). We consider, for instance, nonlinear means, and means along the sequence of squares. In the latter case, the circle method of Hardy and Littlewood is used. We also give an example of continuous Gaussian random Fourier series with sample paths satisfying both the CLT and the almost sure CLT.
Keywords: central limit theorem, almost sure central limit theorem, irrational rotations, nonlinear averages, square averages, weighted averages, Gaussian randomization, random Fourier series, circle method.
DOI: https://doi.org/10.4213/tvp127
Full text: PDF file (1553 kB)
English version:
Theory of Probability and its Applications, 2006, 50:4, 631–649
Revised: 29.03.2005
Citation: M. Weber, “On the CLT for means under the rotation action. I”, Teor. Veroyatnost. i Primenen., 50:4 (2005), 733–753; Theory Probab. Appl., 50:4 (2006), 631–649
Citation in format AMSBIB
\Bibitem{Web05} \by M.~Weber \paper On the CLT for means under the rotation action.~I \jour Teor. Veroyatnost. i Primenen. \yr 2005 \vol 50 \issue 4 \pages 733--753 \mathnet{http://mi.mathnet.ru/tvp127} \crossref{https://doi.org/10.4213/tvp127} \mathscinet{http://www.ams.org/mathscinet-getitem?mr=2331985} \zmath{https://zbmath.org/?q=an:1110.60018} \elib{http://elibrary.ru/item.asp?id=9157510} \transl \jour Theory Probab. Appl. \yr 2006 \vol 50 \issue 4 \pages 631--649 \crossref{https://doi.org/10.1137/S0040585X97982013} \isi{http://gateway.isiknowledge.com/gateway/Gateway.cgi?GWVersion=2&SrcApp=PARTNER_APP&SrcAuth=LinksAMR&DestLinkType=FullRecord&DestApp=ALL_WOS&KeyUT=000243284300005} | 2019-11-21 05:22:50 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5114040970802307, "perplexity": 11028.276470119796}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496670731.88/warc/CC-MAIN-20191121050543-20191121074543-00010.warc.gz"} |
https://gamedev.stackexchange.com/questions/138376/how-can-i-boost-cache-performance-when-storing-objects-in-a-scene-with-managed-l | # How can I boost cache performance when storing objects in a scene with managed languages?
So, for reasons that I won't go into (has to do with my team more so than a good objective reason, unfortunately), I'm building a soft game engine in C# on top of SharpDX. C++ wasn't an option. I can't store objects contiguously in C# when it comes to reference types like classes, so I'm thinking about taking a data-oriented-design approach to storing objects in my scene. Thus, my scene looks a bit like this right now (all of the fields are structs because I can store them by value type, and thus contiguously):
public delegate void UpdateMethod (ref ModelData md, ref Transform t, ...);
class Scene
{
ModelData[] models;
Transform[] transforms;
//...some other fields here
UpdateMethod[] updateMethods; //delegate array
void UpdateAll()
{
for (int i = 0; i < NUM_GAME_OBJECTS; i++)
{
if (updateMethods[i] != null)
{
updateMethods[i](ref models[i], ref transforms[i], ...);
}
}
}
}
The idea being that I can now store the objects contiguously as opposed to just an array of contiguous references to the data if I made each component a class.
Is this even a viable technique, or am I abusing the DOD paradigm? I'm storing all of my objects like this, not just my dynamic ones. Perhaps I could split it up into two sets of arrays for dynamic and non-dynamic, so I'm not tempting branch-predictor slow-downs on the if statement for UpdateAll()?
Will an approach like this create more issues than it attempts to solve? I will note that, while updating occurs here, I outsource the actual component functionality to other systems. For example, an input subsystem determines which keys are pressed, and an object can check for them in its own update method (the delegate); the update doesn't do everything, just the behavior specific to that bundle of game object data.
This seems analogous to operating over columns in a row-major language, in that I'm killing the performance gain of data locality; am I correct in concluding it would be beneficial to wrap all of these fields into some GameObjectData wrapper struct, and just store those in an array?
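Concretely, the wrapper I have in mind would look roughly like the sketch below (illustration only: SceneAoS is a placeholder name, and the delegate is cut down to the two parameters shown, with ModelData and Transform being the same component types as above):

struct GameObjectData
{
    public ModelData Model;
    public Transform Transform;
    // ...any other per-object fields would live here too
}

class SceneAoS // placeholder name, just for illustration
{
    // Two-parameter variant of the delegate, to keep the sketch self-contained.
    public delegate void UpdateMethod(ref ModelData md, ref Transform t);

    GameObjectData[] objects;     // one contiguous block of value types
    UpdateMethod[] updateMethods;

    void UpdateAll()
    {
        for (int i = 0; i < objects.Length; i++)
        {
            if (updateMethods[i] != null)
            {
                // Each object's fields now sit next to each other in memory
                // instead of being spread across several parallel arrays.
                updateMethods[i](ref objects[i].Model, ref objects[i].Transform);
            }
        }
    }
}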
• You can generate arrays or collections of reference types that are contiguous in memory, although the constraints required to do so are often tight enough to render it a negligible win or a net loss. – Josh Mar 8 '17 at 18:34
This is a viable technique in general, but as with any decision it comes with downsides. Most of them are related to ease of use: consider that your example updateMethods call is (probably) wrong, since you're copying models[i] and so on. Updating the instance of ModelData you get inside updateMethods won't change the one in models[]. You'll end up passing around a lot of ref parameters, or passing arrays-and-indexes to refer to individual objects.
It looks to me like you are worrying too much about the theoretical performance implications instead of the practical ones. That is, engage your profiler and determine where and when (or even if) you need to resort to this kind of approach.
It's very possible to make a game that performs well in C# without resorting to making everything a struct in a giant contiguous array, so long as you also understand the implications of reference types on the garbage collector and the behavior of the garbage collector. Understanding how-and-when to avoid the garbage collector is just as important as understanding how-and-when it's a good optimization to stuff a bunch of structs into a contiguous array (or understanding how-and-when to avoid, or not, allocation of memory in C++).
Also:
Secondly, this seems a lot like operating over columns...
That's true. Since your apparent "update method" takes one instance from each array, you don't actually gain a ton from the contiguous nature of each array, since you're reading from many very distant locations every iteration of the loop.
You don't necessarily need to wrap up everything in one big structure and store an array of those, though; you need to consider which data will benefit the most from being contiguously iterated over and pack that together. It's impractical to make everything perfectly cache coherent in any complex project, so it's a matter of picking and choosing when you get the most bang for your buck and dealing with other cases differently. This is another place where a profiler can help you.
• Thanks for the heads up, I had indeed forgotten to add the ref keyword. I've gone ahead and amended the text. – Scorch Mar 8 '17 at 19:55
Ordering the data is going to depend on existing data access patterns for your application. Profile first, and see what areas of execution are taking the longest as a result of cache misses.
Once you've done that, interleaved data (array of structs) will almost certainly be better than splitting into massive arrays of a single element type. How you interleave the data is going to depend on your profiling research, and experimentation.
It is possible that in some cases you may want repeat data across different interleaved arrays. For example, while AI processing may fit transform, physics, inventory and character stats data into a cache line, the rendering pipeline may want transform, lighting state, and colour information. That means you may be better off having the transform interleaved into two different arrays, each element of which contains the appropriate goodies as just mentioned. Otherwise memory access is scattered, and cache performance suffers.
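A rough sketch of that layout (the component types below, other than Transform, are invented placeholders purely to illustrate the grouping):

// Placeholder component types standing in for whatever the engine really uses.
struct PhysicsState { }
struct CharacterStats { }
struct LightingState { }
struct Tint { }

// Hot data for the AI pass, packed together per entity.
struct AiData
{
    public Transform Transform;    // copy #1 of the transform
    public PhysicsState Physics;
    public CharacterStats Stats;
}

// Hot data for the render pass, packed together per entity.
struct RenderData
{
    public Transform Transform;    // copy #2, kept in sync with AiData
    public LightingState Lighting;
    public Tint Colour;
}

class SceneBySystem // placeholder name, just for illustration
{
    AiData[] aiData;              // the AI system walks this array contiguously
    RenderData[] renderData;      // the renderer walks this one contiguously

    // After the AI/physics pass, copy each updated Transform from
    // aiData[i] into renderData[i] so the two views stay consistent.
}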
As for where your logic sits... it's mostly an illusion. I've worked in pure C for long enough now to see just how irrelevant it is. It is just an organisational tool. You can write a single monolithic class to perform all logic, you can split into subsystems, you can have entities with methods or leave that method logic to a super-controller... what matters is only that you get your flow of control right. I tend to favour an approach of "the next controller up, creates, controls and destroys that which sits below it." | 2019-09-20 09:38:50 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2938406765460968, "perplexity": 1354.8473224776978}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514573988.33/warc/CC-MAIN-20190920092800-20190920114800-00134.warc.gz"} |
https://itprospt.com/num/13970087/suppose-that-a-person-takes-a-10-question-true-or-false | 5
# Suppose that a person takes a 10 question true or false test and guesses the answers. Find the probability of getting at most 4 answers correct...
## Question
Suppose that a person takes a 10 question true or false test and guesses the answers. Find the probability of getting at most 4 answers correct.
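One way to work this out, assuming each of the 10 guesses is an independent 50/50 guess: the number of correct answers $X$ is binomial with $n=10$ and $p=\frac12$, so
$$P(X\le 4)=\sum_{k=0}^{4}\binom{10}{k}\Big(\frac12\Big)^{10}=\frac{1+10+45+120+210}{1024}=\frac{386}{1024}\approx 0.377.$$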
a Gixtit TICOI} Aesicie Queetion tph which b dnw7 without dgr> cronil i rlkyl planar For &h lulkarilE erpli detenint wtletileT ittliLl _ AIdinit r dru; #ith to ale cuning- Miat AEC Er... | 2022-05-26 02:46:37 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7586814165115356, "perplexity": 5975.349157562466}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662595559.80/warc/CC-MAIN-20220526004200-20220526034200-00110.warc.gz"} |
https://www.vedantu.com/question-answer/mr-kamnath-purchased-2-towels-for-rs-90-each-3-class-8-maths-cbse-5ef9c74b822afa5a12e52f46 | QUESTION
# Mr. Kamnath purchased 2 towels for Rs. 90 each, 3 shirts for Rs. 220 each and 4 trousers for Rs. 290 each from Swarajya Khadi Bhandar, Wardha. The Bhandar gave 30% rebate. Find the amount Mr. Kamnath paid.
Hint: To solve the question, we have to calculate the total cost of 2 towels, 3 shirts and 4 trousers, using the given cost of each piece. We have to analyse that the given information states Mr. Kamnath didn’t pay the total cost since Swarajya Khadi Bhandar gave 30% rebate to him. Thus, only the remaining 70% was paid by Mr. Kamnath.
We know that the total cost of n items purchased for Rs. x each $=n\times x=nx$ rupees.
The total cost of 2 towels purchased for Rs. 90 each $=2\times 90=180$ rupees.
The total cost of 3 shirts purchased for Rs. 220 each $=3\times 220=660$ rupees.
The total cost of 4 trousers purchased for Rs. 290 each $=4\times 290=1160$ rupees.
The total cost of commodities Mr. Kamnath purchased from Swarajya Khadi Bhandar, Wardha = Sum of cost of 2 towels, 3 shirts and 4 trousers
= 180 + 660 + 1160
= 840 + 1160
= Rs. 2000
The given percentage of rebate the Swarajya Khadi Bhandar gave to Mr. Kamnath = 30%
This implies that the percentage of total cost paid by Mr. Kamnath = 100% - 30% = 70%
Thus, the amount Mr. Kamnath paid = 70% of Rs. 2000
\begin{align} & =\dfrac{70}{100}\times 2000 \\ & =70\times 20 \\ & =1400 \\ \end{align}
$\therefore$ The amount Mr. Kamnath paid is equal to Rs. 1400.
Note: The possibility of mistake can be, not calculating the total cost of 2 towels, 3 shirts and 4 trousers, since only the cost of each piece is given. The other possibility can be, not analysing the given information that Mr. Kamnath didn’t pay the total cost since Swarajya Khadi Bhandar gave 30% rebate to him. Thus, only the remaining amount was paid by Mr. Kamnath. | 2020-07-16 02:03:16 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 1, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.99092036485672, "perplexity": 4702.501699328947}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593657176116.96/warc/CC-MAIN-20200715230447-20200716020447-00303.warc.gz"} |
https://clay6.com/qa/32674/which-of-the-following-statements-states-the-terms-activity-and-selectivity?show=32676 | (a) ability of catalyst to increase the chemical reaction ; ability of catalyst to decrease the reaction . (b) ability of catalyst to decrease the reaction ; ability of catalyst to direct the reaction to give particular products. (c) ability of catalyst to decrease the reaction ; ability of catalyst to reverse the reaction to give particular products . (d) ability of catalyst to increase the chemical reaction ; ability of catalyst to direct the direction the reaction to give particular products | 2020-09-20 14:47:00 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3935009241104126, "perplexity": 2236.187141970141}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400198213.25/warc/CC-MAIN-20200920125718-20200920155718-00485.warc.gz"} |
https://plainmath.net/80024/why-isn-t-arctan-x | # Why isn't arctan ⁡<!-- --> θ<!-- θ --> = arcsin
Why isn't $\mathrm{arctan}\theta =\frac{\mathrm{arcsin}\theta }{\mathrm{arccos}\theta }$?
drumette824ed
For one thing, the principal value of arctan is from 0 to π on Monday, Wednesday, and Friday, and from $-\pi /2$ to $\pi /2$ on Tuesday, Thursday, and Saturday.
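A quick numerical sanity check: at $\theta = \frac12$, $\frac{\mathrm{arcsin}(1/2)}{\mathrm{arccos}(1/2)} = \frac{\pi/6}{\pi/3} = \frac12$, while $\mathrm{arctan}(1/2) \approx 0.4636$, so the two expressions already disagree at a single point; they are simply different functions.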
However $\frac{\mathrm{arcsin}x}{\mathrm{arccos}x}$ is unbounded as $x\to 1$, so this cannot be a value of arctan. | 2022-12-01 00:47:58 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 30, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8116401433944702, "perplexity": 1833.001243704184}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710777.20/warc/CC-MAIN-20221130225142-20221201015142-00212.warc.gz"}
https://physics.stackexchange.com/questions/515059/einsteins-quantum-elevator-in-a-vacuum | # Einstein's quantum elevator in a vacuum
If Einstein was in an elevator in free-fall in a vacuum and in his upturned hand was a miniature elevator containing a miniature Einstein who had a miniature elevator in his hand and so on in ever decreasing sizes until the last elevator is on the scale of the Planck length would all these objects theoretically fall at the same rate?
• "the last elevator is on the scale of the Planck length": We have little idea how anything behaves between about $10^{-18}$ meters and $10^{-35}$ meters. That is a lot of orders of magnitude. – G. Smith Nov 20 at 19:15
• Right. So the concept of free-fall becomes irrelevant at a certain point – Wookie Nov 21 at 11:59
• Who carries the measuring tapes, and how do they communicate their results? Some of the concepts lose their meanings in these 'independent' system arrangements... – Philip Oakley Nov 26 at 10:37
• @PhilipOakley - are you pointing out that it would be hard to realize the gravitational force on the objects separately as they would influence each other? – Wookie Nov 26 at 14:11
• @Wookie Actually it was about the lack of reference points in, and between, each of the elevators. So there is no 'clock' nor 'ruler' that can be trusted. "Free-fall" implicitly defines that you are holding the ruler, and the only watch is the pendulum clock on the wrist that holds the ruler. Plus some vain assumptions about the lift being 'glass'. Oh, and that mini-elevator being held, isn't actually elevating/lifting/lowering anything. so if we remove all the elevators and local attraction then by definition.. It becomes Zeno's Einsteins flying in formation. – Philip Oakley Nov 26 at 15:46
Mass of an object is an intrinsic property of the object itself, independent of the gravitational field.
Weight of the object depends on how it is moving and it has direction. Objects in freefall are weightless.
The answer to your question is yes, all the objects would be in freefall and feel weightless independent of their sizes.
What is the difference between weight and mass?
In a roughly uniform gravitational field, in the absence of any other forces, gravitation acts on each part of the body roughly equally, which results in the sensation of weightlessness, a condition that also occurs when the gravitational field is weak (such as when far away from any source of gravity). The experimental observation that all objects in free fall accelerate at the same rate, as noted by Galileo and then embodied in Newton's theory as the equality of gravitational and inertial masses, and later confirmed to high accuracy by modern forms of the Eötvös experiment, is the basis of the equivalence principle, from which basis Einstein's theory of general relativity initially took off.
https://en.wikipedia.org/wiki/Free_fall
• You are answering a different question than was asked. The question is "fall at the same rate" not "be in freefall" and/or "feel weightless." So, your statement and analysis is correct, but doesn't answer the question. – Jeff Learman Nov 26 at 17:48
• @Wookie why the deselect? – Árpád Szendrei Dec 7 at 16:23
• @ÁrpádSzendrei - ah so sorry, my nephew must have got at the computer. I will undo. Your answer was perfectly useful – Wookie Dec 8 at 16:45
The answer is no, these objects would not "fall at the same rate," because all the Einsteins' masses would attract each other. All Einsteins would be accelerating toward their barycenter, and thus would fall at different rates with respect to any reference point, such as the ground they're falling towards or the barycenter of all the Einsteins.
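As a rough back-of-the-envelope sketch (treating two of the Einsteins as Newtonian point masses $m_1$ and $m_2$ a distance $d$ apart, and taking the external field as uniform): the external field drops out of their relative motion, leaving $\ddot{d} = -\frac{G(m_1+m_2)}{d^2}$, so their separation shrinks while they fall and their trajectories relative to the ground are not exactly identical.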
• This seems like a detail according to the original question, but it is absolutely true! – DarioP Nov 27 at 9:35
• Increase scale to make the greater Einstein be as great as our sun and you can see it with more strength. – kokbira Nov 28 at 15:38
• @Jeff Learman - would there be a minimum speed of falling that would allow them to escape each others gravitation? – Wookie Dec 2 at 12:48
• @Wookie - Yes, if they had something to jump from. The escape velocity for each of them would depend on the total mass and his distance from the barycenter. But if they're all just falling from an initial static setup, they would get closer together as they fell toward the ground, regardless of how fast they all fell towards the ground. – Jeff Learman Dec 3 at 13:28
Not at all. The Einstein elevator makes the acceleration zero, not the velocity. Fortunately, the Einstein elevator leads us to a uniform acceleration, so we don't need to deal with position and speed uncertainties.
We only have plane waves moving down at several speeds. Of course, the elevators might not remain in Albert's hand for very long: the smaller ones will be the first to change their relative position, due to the well-defined z position. Nobody said the velocity was the same for all of them at any time.
• In my opinion this point is excellent – Wookie Dec 8 at 17:29
First, we must define the "elevator". If it is a plain elevator, then all the Einsteins have the same velocity at t=0. If the elevator comes with a shaft, then velocities may vary between any two adjacent Einsteins.
• Yes, that is true if mechanics were operating – Wookie Dec 8 at 17:26 | 2019-12-14 17:49:28 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7135175466537476, "perplexity": 846.7212068528075}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575541288287.53/warc/CC-MAIN-20191214174719-20191214202719-00015.warc.gz"} |
https://math.stackexchange.com/questions/4220034/prove-that-function-is-in-l1 | # Prove that function is in $L^1$
Let $$E \subset \mathbb{R}$$ be a measurable subset. Assume that $$\int_{E} |x|^{1/4} |f(x)|^2 dx < \infty$$ and $$\int_{E} x^4 |f(x)|^3 dx < \infty$$, then I want to prove $$f \in L^1 (E)$$.
Morally the first inequality says that $$f(x)$$ behaves nicely at $$0$$ and the second inequality says that $$f(x)$$ behaves nicely at $$\pm \infty$$ but I'm struggling to prove the statement rigorously. A possible idea is to use Holder's inequality: we know that $$f(x) x^{1/8} \in L^2 (E)$$ and $$f(x) x^{4/3} \in L^3 (E)$$ and probably it can be used somehow but I don't know how. Anyways, any ideas are greatly appreciated!
• For integrability far from zero, it seems useful to write $f(x) = f(x) x^r x^{-r}$ and then use Hölder inequality to compare with something finite. It is possible this problem expects something more advanced, like some kind of $L^p$ interpolation result. I'm not sure. Aug 8 '21 at 22:39
1. On $$\{x\in E: |x|\le 1\}$$, write $$|f(x)| = \left(|f(x)| \cdot|x|^{1/8}\right)\cdot |x|^{-1/8}$$ and apply the Cauchy-Schwarz inequality.
2. On $$\{x\in E: |x|>1\}$$, write $$|f(x)| = \left(|f(x)| \cdot |x|^{4/3}\right)\cdot |x|^{-4/3}$$ and apply Hölder with the conjugate exponents $$p=3$$, $$q=3/2$$.
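Spelling the two estimates out (with Lebesgue measure on $E$):
$$\int_{E\cap\{|x|\le 1\}} |f|\,dx \;\le\; \Big(\int_E |x|^{1/4}|f|^2\,dx\Big)^{1/2}\Big(\int_{-1}^{1} |x|^{-1/4}\,dx\Big)^{1/2} < \infty, \qquad \int_{-1}^{1}|x|^{-1/4}\,dx = \tfrac{8}{3},$$
and
$$\int_{E\cap\{|x|> 1\}} |f|\,dx \;\le\; \Big(\int_E x^{4}|f|^3\,dx\Big)^{1/3}\Big(\int_{|x|>1} |x|^{-2}\,dx\Big)^{2/3} < \infty, \qquad \int_{|x|>1}|x|^{-2}\,dx = 2,$$
using $(|x|^{-4/3})^{3/2} = |x|^{-2}$. Adding the two pieces gives $f\in L^1(E)$.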
• If you are working with the Lebesgue measure (which I think is implicit), then you can calculate the integral of $x^{-1/8}$ over $[0,1]$ by using the Riemann integral. Aug 8 '21 at 22:52
• Yes, that's right but it may happen that this set contains a neighbourhood of infinity. Nevertheless, how to see that $x^{-1/8}$ is integrable on this set? – iou Aug 8 '21 at 23:02
• @iou $\int_{[0,1]}\frac{1}{x^{1/8}}\,dx=\frac{x^{7/8}}{7/8}\bigg|_{x=0}^{x=1}$ which is certainly finite Aug 9 '21 at 0:19 | 2022-01-17 04:19:24 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 16, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9391224980354309, "perplexity": 175.1708441296939}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320300289.37/warc/CC-MAIN-20220117031001-20220117061001-00301.warc.gz"}
https://forum.arduino.cc/t/error-when-trying-to-compile-on-eee-pc/273101 | # Error when trying to compile on Eee PC
Hi guys,
As it states in the title, basically having that problem and to elaborate: I'm trying to compile an example sketch that normally works perfectly on standard PCs/laptops, but on my Eee PC (Model: Asus x101ch) it gives me an error… please help me out, here is the error I get:
Compilation Error ← Click Me
the sketch is called PS3BT it comes with the USBHostShield library. I also correctly placed the USBHost library folder.
If I don't end up getting to the bottom of this and having it working I may have to sell my Eee PC as it's no use to me if I can't program, so any help at all would be really appreciated
Looks like you have two different USBHostShield2 libraries installed in two different places:
C:\Program Files\Arduino\libraries\USBHostShield2\ (right library, wrong place)
C:\Users\Asus\Documents\Arduino\libraries\USBHostShield2maste\ (wrong library, right place)
Wow, you're right but I had installed the lib manually so I deleted all them libs and then imported the libraries instead and now it works! :D thanks for your help, much appreciated | 2021-10-23 07:24:58 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8499430418014526, "perplexity": 2867.540306676141}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585653.49/warc/CC-MAIN-20211023064718-20211023094718-00455.warc.gz"} |
http://angg.twu.net/LATEX/2008filterp-abs.tex.html | Warning: this is an htmlized version! The original is across this link, and the conversion rules are here.
% (find-angg "LATEX/2008filterp-abs.tex")
% (defun c () (interactive) (find-zsh "cd ~/LATEX/ && latex 2008filterp-abs.tex"))
% (defun c () (interactive) (find-zsh "cd ~/LATEX/ && pdflatex 2008filterp-abs.tex"))
% (eev "cd ~/LATEX/ && Scp 2008filterp-abs.{dvi,pdf} edrx@angg.twu.net:slow_html/LATEX/")
% (find-dvipage "~/LATEX/2008filterp-abs.dvi")
% (find-pspage "~/LATEX/2008filterp-abs.pdf")
% (find-zsh0 "cd ~/LATEX/ && dvips -D 300 -o 2008filterp-abs.ps 2008filterp-abs.dvi")
% (find-pspage "~/LATEX/2008filterp-abs.ps")
% (ee-cp "~/LATEX/2008filterp-abs.pdf" (ee-twupfile "LATEX/2008filterp-abs.pdf") 'over)
% (ee-cp "~/LATEX/2008filterp-abs.pdf" (ee-twusfile "LATEX/2008filterp-abs.pdf") 'over)
\documentclass{book}
\usepackage{amsfonts}
\begin{document}
Natural Infinitesimals in Filter-Powers
Eduardo Ochs
{\tt http://angg.twu.net/}
\bigskip
\def\I{{\mathbb{I}}}
\def\N{{\mathbb{N}}}
\def\R{{\mathbb{R}}}
\def\F{{\mathcal{F}}}
\def\U{{\mathcal{U}}}
\def\Ro{{\mathcal{R}_0}}
\def\calN{{\mathcal{N}}}
\def\Set{{\mathbf{Set}}}
\def\w{{\omega}}
\def\o{{\mathbf{o}}}
\def\SetN{{\Set^\N}}
\def\SetI{{\Set^\I}}
\def\SetNN{{\Set^\N/\calN}}
\def\SetIF{{\Set^\I/\F}}
\def\SetIU{{\Set^\I/\U}}
\def\SetRRo{{\Set^\R/\Ro}}
Start from the standard universe, $\Set$, and construct the
``universe of sequences'', $\SetN$, and then the ``semi-standard
universe'', $\SetNN$, in which the quotient by the filter of
cofinite sets of naturals, ``$/\calN$'', identifies sequences which
differ only on finite sets of indices. Now generalize this a bit: a
{\sl filter-power}, $\SetIF$, is a universe of $\I$-indexed
sequences modulo a quotient that identifies sequences when they
coincide on sets of indices that are ``$\F$-big''.
If we substitute the filter $\F$ above by a (non-principal)
ultrafilter $\U$ we get a ``non-standard universe'' (or: an
``ultrapower''), $\SetIU$, whose logic is very close to the one of
$\Set$ --- it has exactly two truth-values --- but in a $\SetIU$ we
have infinitesimals (the equivalence classes of $\I$-sequences tending
to 0), and we can use the ``transfer theorems'' of Non-Standard
Analysis to move truths back and forth between $\Set$ and $\SetIU$.
Non-principal ultrafilters cannot be constructed explicitly, and to
show that they exist we need the boolean prime ideal theorem, that is
slightly weaker than the axiom of choice; this makes the
infinitesimals of NSA quite hard to understand intuitively. On the
other hand, the infinitesimals in a semi-standard universe like
$\Set^\N/\calN$ or $\SetRRo$, where $\Ro$ is the filter of
neighborhoods of $0 \in \R$, are very simple to describe --- but the
logic of a filter-power has more than two truth values.
We will show how ``strictly calculational'' proofs in NSA involving
infinitesimals can be lifted through the quotient $\SetIF \to \SetIU$;
and then, by choosing the right $\I$ and $\F$, and by using the
``natural infinitesimals'' --- that are identity maps in disguise,
modulo $\F$ --- we get a straightforward translation of these strictly
calculational proofs with infinitesimals into standard proofs in terms
of limits and continuity.
One ``archetypical example'' will be discussed in detail: $\forall \w \sim \infty \; \exists! \o \sim 0 \; (1+\frac{1}{\w})^\w = e + \o$,
where $\w$ is an infinitely big natural number. The presentation
should be accessible to people with basic knowledge of Calculus,
Analysis, and Topology.
\end{document} | 2018-08-18 00:32:54 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9195048213005066, "perplexity": 5011.5660346477325}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-34/segments/1534221213247.0/warc/CC-MAIN-20180818001437-20180818021437-00660.warc.gz"} |
https://pos.sissa.it/350/212/ | Volume 350 - 7th Annual Conference on Large Hadron Collider Physics (LHCP2019) - Parallel QCD
Measurements of jet fragmentation and jet substructure with ALICE
M. Fasel* on behalf of the ALICE collaboration
Full text: pdf
Pre-published on: September 09, 2019
Published on: December 04, 2019
Abstract
We discuss the latest results from jet fragmentation
and jet substructure measurements performed with the ALICE experiment in
proton-proton and heavy-ion collisions in a wide range of jet transverse
momentum. The jet production cross sections and cross section
ratios for different jet resolution parameters will be shown in
a wide range of $p_{\textrm{T}}$. Results will be compared to
next-to-leading order pQCD calculations.
DOI: https://doi.org/10.22323/1.350.0212
How to cite
Metadata are provided both in "article" format (very similar to INSPIRE) as this helps creating very compact bibliographies which can be beneficial to authors and readers, and in "proceeding" format which is more detailed and complete.
Open Access
Copyright owned by the author(s) under the term of the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License. | 2022-05-26 10:57:35 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.22553060948848724, "perplexity": 5885.44745837465}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662604794.68/warc/CC-MAIN-20220526100301-20220526130301-00436.warc.gz"} |
http://mathhelpforum.com/algebra/37532-solving-equation.html | # Math Help - solving equation
1. ## solving equation
Hi i have some problem solving this equation, I'm not sure its even possible but if it is it would give the answer to my task. so here it is:
$590^2+y^2=(1132+x)^2$
A fast answer here would be very much appreciated since I'm in a bit of a hurry
Greetings from Rickard Liljeros
OBs edit the "^2" in the end should be outside the ")"
2. You can solve for x in terms of y or you can solve for y in terms of x, but you cannot come up with a unique value for x and y.
3. Originally Posted by liljeros
Hi i have some problem solving this equation, I'm not sure its even possible but if it is it would give the answer to my task. so here it is:
$590^2+y^2=(1132+x)^2$
A fast answer here would be very much appreciated since I'm in a bit of a hurry
Greetings from Rickard Liljeros
OBs edit the "^2" in the end should be outside the ")"
$y = \pm 3456, x = 2374, -4638$
4. Originally Posted by Isomorphism
$y = \pm 3456, x = 2374, -4638$
How can we know we're looking for integer solutions ?
5. Originally Posted by masters
You can solve for x in terms of y or you can solve for y in terms of x, but you cannot come up with a unique value for x and y.
Thank you! Well then I must have done wrong when trying to solve my task. Maybe you could help me with how to think?
It is about a skiing slope and i get some numbers but not all, The task is to get the length of one part of the slope, will make a picture that looks like the one in the paper:
I should get X.
m= meters (length)
6. Originally Posted by liljeros
Thank you! Well then I must have done wrong when trying to solve my task. Maybe you could help me with how to think?
It is about a skiing slope and i get some numbers but not all, The task is to get the length of one part of the slope, will make a picture that looks like the one in the paper:
I should get X.
m= meters (length)
Well, you can use Thalès theorem :
AC/AB=AH/AM
Where H is 1132 from A and M is (1132+x) from A (in the slope).
You'll get a quadratic equation including x
$335:1132$ as $\;590:(1132 + x)$
so
$\frac{335}{1132} = \frac{590}{1132 + x}$. Now solve for $x$.
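Working the proportion through: $1132 + x = \frac{590 \cdot 1132}{335} = \frac{667880}{335} \approx 1993.7$, so $x \approx 861.7$ meters, which matches the value quoted later in the thread.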
8. Originally Posted by Moo
Well, you can use Thalès theorem :
AC/AB=AH/AM
Where H is 1132 from A and M is (1132+x) from A (in the slope).
You'll get a quadratic equation including x
Thanks that was what I thought as the only way also, though the answer then is 861,67 and the correct answer should be 860... Very close but they don't say anything about going for the closest tenth (sorry for bad math english im from sweden).... I guess that the only way is doing that though to get the correct answer...
Thanks for the help there!
9. Originally Posted by liljeros
Thanks that was what I tought as the only way also, though the answer then is 861,67 and the correct answer should be 860... Very close but they don't say anything about going for the closest tenth (sorry for bad math english im from sweden).... I guess that the only way is doing that though to get the correct answer...
Thanks for the help there!
Yes, I find it too... | 2014-12-18 09:13:35 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 8, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7821140885353088, "perplexity": 454.0079190213512}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-52/segments/1418802765698.11/warc/CC-MAIN-20141217075245-00083-ip-10-231-17-201.ec2.internal.warc.gz"} |
https://tex.stackexchange.com/questions/361668/koma-script-fontsize-with-newtxmath-microtype-custom-protrusion-settings | # KOMA-Script ‘fontsize’ with ‘newtxmath’: ‘microtype’ custom protrusion settings not working
I want to use scrbook with fontsize=12pt. To get Linux Libertine with matching math, I use the newtxmath package. I wanted to set up microtype margin protrusion for footnote markers and it didn't take effect. After a few hours I developed this MWE to identify the problematic parts:
\documentclass[%
fontsize=12pt % critical line 1
]{scrbook}
\usepackage[utf8]{inputenc}
\usepackage[showframe=true]{geometry}
\usepackage[T1]{fontenc}
\usepackage{libertine}
\usepackage{newtxmath} % critical line 2
\usepackage{microtype}
\SetProtrusion{
encoding={*},
family={LinuxLibertineT-TLF},
series={*},
size={5,6,7,8}
}{1={ ,1000}}
\begin{document}
\noindent-A% Testing hyphen protrusion
\hfill
e$\mathrm{e}$% does text match with math?
\footnote{blah}% footnote marker protrusion?
\end{document}
The right margin area depending on the two critical lines:
I’d like to have the 12 pt fontsize, matching math font and footnote marker protrusion together. The -A in the document shows that regular protrusion works in all cases. Are my microtype protrusion settings even fully valid? I’m never sure on them and I’m more or less guessing there.
• newtxmath changes the math sizes. \sf@size (used by the superscript) has now size 8.8 and not 8. So you need to add this value to the size declaration: size={5,6,7,8,8.8} . – Ulrike Fischer Apr 1 '17 at 21:14
• @Ulrike Fischer: Thanks, it worked. You can write this as an answer if you want. – lblb Apr 1 '17 at 21:18
newtxmath changes the math sizes. \sf@size (used by the superscript) has now size 8.8 and not 8. So you need to add this value to the size declaration:
size={5,6,7,8,8.8}
or use a size range:
size={5-9}
(the last number has to be greater than 8.8 because microtype won't include the upper limit in the range itself)
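Applied to the MWE in the question, the protrusion settings would then read something like this (untested sketch; the only change from the original code is the size list):

\SetProtrusion{
encoding={*},
family={LinuxLibertineT-TLF},
series={*},
size={5-9} % now also covers the 8.8pt \sf@size that newtxmath sets up
}{1={ ,1000}}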
• Do you know why size={*} doesn't work? Is * just not effective for size? – cfr Apr 1 '17 at 21:39
• * is imho only \normalsize, use size={-} for all sizes. – Ulrike Fischer Apr 1 '17 at 21:47
• Oh. Thanks. That seems strange, but Microtype often seems strange to me. ;) – cfr Apr 1 '17 at 23:38
• @cfr * always stands for "default", not for "any" (the choice of the asterisk has been somewhat unfortunate, I know...) – Robert Apr 2 '17 at 16:41 | 2020-02-22 05:25:32 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8082688450813293, "perplexity": 5238.838157255732}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875145648.56/warc/CC-MAIN-20200222023815-20200222053815-00215.warc.gz"} |
https://academic.bancey.com/category/reviews/ | # Category: <span>Reviews</span>
## Science via email
One thing scientists and engineers have to do daily is discuss collaborative work via email exchanges. This often includes the need to share and discuss mathematical equations and to represent variables with subscripts and superscripts or special characters; something that is tricky when you are emailing in plain text.
Of course it is possible to work around this problem! Email was invented by scientists, and for decades they have been communicating in this manner, using various conventions to convey the correct information using plaintext. However, if you are a Gmail user there is a nice extension that will make your equations look proper good.
## Tex for Gmail
TeX for Gmail is a Chrome browser extension that checks a Gmail email that you are writing for LaTeX markup and converts the markup to a visually prettier equation, using one of two modes. In Simple Math mode, subscripts and superscripts are correctly formatted but the current font is maintained and text remains editable. In Rich Math mode, the equation is rendered into TeX and replaced by an embedded image. The email recipient doesn’t need the extension installed on their browser in order to read your nice equations!
### Example
Original markup:
$E = mc^{2}$
Simple Math mode:
E = mc2
Rich Math mode:
$E = mc^{2}$ (rendered as an embedded image)
## Issues
One problem; once the extension has converted my markup to formatted text, I cannot get the markup back. So editing a small mistake usually means re-doing all the curly brackets and other stuff that a TeX equation requires. The only workaround seems to be to stay vigilant and use Undo (Ctrl-z), but this doesn’t work when you notice a mistake in an equation that you wrote a while ago. One improvement could be the option to restore any equation to the original markup.
## Conclusions
Overall, a great little tool to improve the clarity of science and maths communications over email. With a few small improvements it could be even better but it is already very usable.
### HP Spectre x360 keyboard turning off; problem solved!
I recently bought a new notebook; the HP Spectre x360, a 13” convertible ultra-slim notebook PC. It is a really nice piece of hardware, chiselled from solid aluminium with great battery life, decent performance and a pretty usable keyboard.
When the laptop is folded back it automatically converts to tablet mode, wherein the keyboard is automatically turned off so that the keys are not accidentally pressed while using the touchscreen interface.
But since I bought it I have found that, on booting up the device in laptop mode, with it sitting open on my desk, the keyboard is mistakenly turned off. No amount of opening and closing the screen will trick the keyboard into coming on, and it is necessary to log in using the touchscreen interface and on-screen keyboard. After lots of opening and closing it seemed that eventually the keyboard would come back on but there seemed to be no logic to the problem at all.
This post on the official HP forum describes a similar problem. Unfortunately the HP rep simply assumes a hardware fault in the hinges and advises the owner to send it back for repair. However, once my keyboard is on, the connection stays on even as I adjust the screen angle, so it seems a hinge connection failure is not the issue here.
Finally, after much confusion, I found the answer: When the laptop is rotated sideways it attempts to switch to tablet mode, even when I have not adjusted the screen angle. The sensing it done by an orientation sensor in the base of the laptop, not in the hinges! I have found that, to turn the keyboard back on and to go to laptop mode I can tilt the whole laptop towards me slightly, and voila! So far it has worked every time. I wish that there had been some way for me to find this answer out earlier!
# Introduction
Last July (nearly a year ago) I bought a larger format eReader that is manufactured by a Chinese company named Onyx under the branding “Onyx Boox”. The model, the M96, was an upgrade to their previous one, the M92, and I had been waiting patiently for a while, desperate to have a larger eReader for textbooks.
The screen is 9.7 inches diagonally. This doesn’t sound much of an increase on a 6 inch screen, but consider that this gives 2.56 times the surface area (see comparison below).
That’s pretty good, but what I really wanted was a 13.1 inch eReader – full A4 size – but the only ones available were, well, not really available or prohibitively expensive. So I bit the bullet and went with the M96 as a next best option.
A year on, and I am mostly happy with the device. It has been really useful for reading academic journal articles and textbooks, so I now find myself less frequently printing out documents that I want to read. I was even able to set up BitTorrent Sync to automatically synchronize a directory on multiple computers so that I could wirelessly transfer documents over with ease. Great!
BitTorrent sync in the Play store on an Onyx Boox M96 eReader.
Here is a quick overview of my experiences with the M96. In it I highlight my criticisms and wishes for a future device.
# Hardware
## Build quality
The M96 feels well built and has done ever since I got it. The official case protects it well (apart from the power button – see below). I am a careful owner and have not dropped it, but it is often in my backpack being carried around and seems to have held up well.
## Battery life
Like most eReaders, the battery life on the M96 is pretty long. But the juice quickly runs out when the WiFi is turned on. A few times I have been caught out before making a long journey. Luckily, it is possible to toggle the WiFi on and off with one touch of the stylus, using the WiFi status icon. Cool! Only problem is that the power button is unprotected by the case and the device often turns itself on in my bag. If the WiFi is on too, then, say bye-bye to your battery!
## Old screen tech
A major disappointment with the device is the outdated screen technology. At a time (2014) when we were being teased with the eInk Mobius tech, and the newer Kindles and Kobos have nice high-resolution eInk screens, the M96 uses the 9.7 inch eInk Pearl display; the exact same screen component that was used in the Kindle DX four long years earlier. The main disadvantages over the newer eInk screens are apparent when reading PDF and DJVU documents. The resolution is not enough to read some smaller text like figure captions or make out smaller features in the figures themselves.
The other significant implication of using Pearl is the weight; Pearl has a glass substrate for the active eInk layer while Mobius has a plastic substrate. Considering its size, the M96 is quite hefty to hold, especially one-handed.
I am sure Onyx have a good reason for sticking with Pearl; cost, availability. I am not sure what the agreement between Sony and eInk is regarding Mobius, considering Sony invested money in its development. But I hope the next generation devices will use it (or whatever comes afterwards).
## SD card issues
Perhaps this is a problem that I could fix if I had the time, but I never got the SD card working. The idea is that the SD card gives extra storage capacity for more books, but on boot up I always had a problem with scanning of the SD card hanging forever. I gave up on the SD card a long ago since eBooks are mostly small anyway and the internal storage is sufficient.
# Operating system
The OS is Android, which makes the device feel a bit too much like a phone or tablet for my liking, but the upside is that you can install Android apps, such as BTSync, that give the device added functionality. You can also choose alternative reading apps, use Google Calendar, Gmail etc. The Kindle Android app from Amazon also works, although I don’t use it. The main problem for apps is that pretty much none of them are designed for eInk. Fancy animations and other advanced graphic features don’t work well with eInk, where the screen refresh rate and greyscale often cause problems. Many apps use colour coding, for example, which is less recognizable on a greyscale display.
I bought my M96 from a European supplier so I have the “Booxtor edition” with a slightly customized OS. Booxtor has been providing updates now and then, which have fixed most of the major issues that were at first apparent (I nearly sent my M96 back in the beginning). I am not sure of the exact relationship between this and the original OS from Onyx.
Generally, reading on the M96 is good. The extra screen real estate really makes a difference when compared to my old 3rd Gen Kindle Keyboard. I still believe that it would be even better to have a larger yet screen, allowing textbooks to be shown at their intended size. A newer screen tech would improve contrast and resolution, making reading more enjoyable.
## Navigating books
Navigating through books is not so bad when you are reading through linearly, but when you are reading a text book it is often necessary to jump around, for example to the index or the problem answers section at the back. This is one of the major weaknesses of digital reading devices over printed books. We need new interface designs to overcome this, and faster processors would probably help. However, the M96 does OK; one can use the stylus to control a slider that can choose any page number, it is just a little tricky to select precise page numbers in this way and you still need the full page to load fully before you know that you have landed on the correct one.
## Zooming
Zooming into text on the screen is somewhere that is always improved by having a touch interface. With the M96, touch is only possible through the stylus, which can be a little annoying, but at least the stylus allows some accuracy. Due to the slow speed of eInk this can be a slow process but it is much better than with, say, the old Kindle Keyboard.
# Annotation
The stylus input of the M96 makes annotation relatively easy. Unfortunately it always requires navigating through a series of menus to do anything. It would be great if the stylus had a button assigned to always scribble, and another to erase. Perhaps another to zoom to selection? Well, I am not a seasoned Wacom user so I don't know how many buttons would be practical on any stylus. Update: the M96 is now being shipped with a newer, bigger stylus that has an "erase" button.
The main problem with annotation is that for many books where the text is small it is hard to write small enough. I end up zooming in first to the target annotation area. This is slooooooooow, because the stylus function has to be reset between each step, and this means going through menus on screen. In a typical scenario I counted a minimum of 8 stylus “prods” before I was actually annotating. And that was with default notation settings. It shouldn’t be this hard. It should be like using a real book!
# Conclusion
The Onyx Boox M96 is helped greatly by the fact that no (viable) better alternative exists. The screen is still not big enough for some books, it is heavy and the display is low resolution. The user experience for someone who is reading and annotating at the same time is still far from the ease of using a real paper book. However, it is a great tool to have and makes reading from a digital library possible.
# What’s coming next?
The future of large-format eReaders depends more than anything on the manufacturers’ perceptions of public demand. Most people are served well by the 6 inch Kindle and other similarly sized devices; I myself find the Kindle perfect for reading novels and other books that are pure text and do not have fixed pagesetting, and that are intended to be read in a linear fashion, cover to cover.
The possibility for making 13.1 inch eReaders has existed for some time. eInk developed the Mobius technology with Sony over a year ago and their “ePaper” device has been on limited sale in Japan and the USA for a while. The problem is the price. It also only reads PDF files, with no possibility for EPUB, MOBI, DJVU etc.
Pocketbook are marketing a 13.3 inch eReader to certain industry sectors, but it is not viable for or targeted at general consumers. Netronix recently announced they will produce one too, and it looks very similar to Sony's, only without the silly limitations. Onyx stated last year that they, too, were planning a 13.1 inch eReader but it seems to have been put on the backburner while they try to finish other devices first.
So things are happening, but slowly. Fingers crossed we will see some progress before the end of this year! | 2021-02-28 22:33:09 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 1, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2957976758480072, "perplexity": 1747.4698410066724}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178361776.13/warc/CC-MAIN-20210228205741-20210228235741-00392.warc.gz"} |
http://math.stackexchange.com/questions/124644/dihedral-group-of-degree-4-and-sylow-3-subgroups | # Dihedral group of degree 4 and sylow 3-subgroups.
My professor has on our intranet uploaded some of his handwritten notes. I am a little worried about one of his statements and I'm suspecting that it contains a mistake.
"The dihedral group of degree 4 contains 4 different sylow 3-subgroups"
How is that possible?
The dihedral group of degree 4 ($D_{4}$) has order 8, and therefore every subgroup of $D_{4}$ needs to have an order that divides 8 (according to Lagrange), but a Sylow 3-subgroup has order $3^{n}$ for some $n \geq 1$, which makes the above statement false?
You are correct. There is a mistake/typo somewhere in the notes. $D_4$ (the dihedral group of order $8$ = symmetries of a square) has only 1 Sylow subgroup, namely the group itself. $D_4$ is its own Sylow 2-subgroup. :) – Bill Cook Mar 26 '12 at 13:06
It could be the dihedral group of order $2^n3$ for any $n\geq 2$, as in each case $4\equiv 1\text{ mod }3$ and $4\mid 2^n$. So...he could be talking about $D_{6}$...but these groups only have one copy of $C_3$...($(a^ib)^3=a^ibb^{-1}a^ia^ib=a^{3i}b\neq 1$ so our only choice for an element of order $3$ is $a^{\pm i}$, $i=2^{n-1}$). – user1729 Mar 26 '12 at 13:26
@bemyguest: For alternating group of degree four the statement is true, however.. maybe that's what he meant. – spin Mar 26 '12 at 14:59
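(An explicit check of that last comment, added for reference: $A_4$ has order $12 = 2^2 \cdot 3$, so its Sylow $3$-subgroups have order $3$, and they are exactly the four subgroups generated by the $3$-cycles, $$\langle(1\,2\,3)\rangle,\ \langle(1\,2\,4)\rangle,\ \langle(1\,3\,4)\rangle,\ \langle(2\,3\,4)\rangle.$$ This is consistent with Sylow's theorem, since their number must divide $4$ and be congruent to $1 \bmod 3$.)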
Consider the dihedral group $D_{2n}$, where $3$ divides $n$. Now the group has a normal cyclic subgroup $H$ of order $n$ by definition. The subgroups of a cyclic normal subgroup are also normal, and thus the $3$-Sylow contained in $H$ is normal. But Sylow subgroups are conjugate so there can be only one $3$-Sylow. In particular there cannot be four. We can also generalize this a bit: any $p$-Sylow with $p \neq 2$ is normal in a dihedral group.
Thus the statement should talk about $2$-Sylows. But we cannot have four $2$-Sylows, because $4 \not\equiv 1 \mod 2$. The dihedral group of degree $4$ also contains only one Sylow subgroup, which is the group itself.
My guess is that he meant the alternating group of degree four, which does have four $3$-Sylow subgroups. | 2014-08-01 02:36:56 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7485056519508362, "perplexity": 219.44488791343932}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1406510273874.36/warc/CC-MAIN-20140728011753-00433-ip-10-146-231-18.ec2.internal.warc.gz"} |
http://libros.duhnnae.com/2017/jun3/149676324565-Quasi-Optimal-Leader-Election-Algorithms-in-Radio-Networks-with-Loglogarithmic-Awake-Time-Slots-Christian-Lavault-Jean-Francois-Marckert-Vlady-Ra.php | # Quasi-Optimal Leader Election Algorithms in Radio Networks with Loglogarithmic Awake Time Slots
Quasi-Optimal Leader Election Algorithms in Radio Networks with Loglogarithmic Awake Time Slots - Download this document in PDF. PDF documentation available to download for free. Also available to read online.
Download for free or read online in PDF format the book: Quasi-Optimal Leader Election Algorithms in Radio Networks with Loglogarithmic Awake Time Slots
A radio network (RN) is a distributed system consisting of $n$ radio stations. We design and analyze two distributed leader election protocols in RN where the number $n$ of radio stations is unknown. The first algorithm runs under the assumption of *limited collision detection*, while the second assumes that *no collision detection* is available. By limited collision detection, we mean that if exactly one station sends (broadcasts) a message, then all stations (including the transmitter) that are listening at this moment receive the sent message. By contrast, the second, no-collision-detection algorithm assumes that a station cannot simultaneously send and listen to signals. Moreover, both protocols allow the stations to stay asleep as long as possible, thus minimizing their awake time slots (such algorithms are called *energy-efficient*). Both randomized protocols in RN are shown to elect a leader in $O(\log{n})$ expected time, with no station being awake for more than $O(\log{\log{n}})$ time slots. Therefore, a new class of efficient algorithms is set up that matches the $\Omega(\log{n})$ time lower bound established by Kushilevitz and Mansour.
Autor: Christian Lavault; Jean-François Marckert; Vlady Ravelomanana
Fuente: https://archive.org/ | 2018-03-23 17:13:44 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7186073064804077, "perplexity": 4838.590462028402}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257648404.94/warc/CC-MAIN-20180323161421-20180323181421-00612.warc.gz"} |
https://vittorioromeo.info/index/blog/cppnow2017_tripreport_pt1.html | ## c++now 2017 trip report - part 1/2
27 may 2017
I'm back home in London after C++Now 2017. Besides experimenting on personal projects and playing Hollow Knight, I think that putting together the notes I've scribbled down during the conference into a coherent article is a good use of my time. I hope you'll find some of my thoughts interesting!
### background
In case you've never heard about it before, C++Now is a gathering of C++ experts in a beautiful (and expensive!) location in Aspen, CO. In contrast to other "more mainstream" conferences like CppCon and Meeting C++, most of the content is intended for advanced and expert code wizards.
One thing I loved about this year is the theme of the keynotes: other languages. The three talks were about Rust, Haskell and D. I find it very bold to have presentations on different languages at a C++ conference, especially when they're keynotes! This shows a level of open-mindedness, courage, and desire to make C++ and its users richer by taking inspiration from others - I feel glad to be part of this community.
I also spoke at the conference this year, with a slightly improved and lengthened version of the talk I gave at ACCU 2017: Implementing variant visitation using lambdas. The video is not yet available, but you can watch the ACCU version here on YouTube.
By the way, the social aspect of this conference is purely amazing: I felt like part of a family reunion in a breathtaking venue. I have similar feelings when participating in other C++ conferences, but there's something about C++Now that's truly unique.
Enough rambling - let's get to my thoughts on my favorite sessions.
### tuesday, may 16
• Rust: Hack Without Fear! (by Niko Matsakis)
After Jon Kalb's warm welcome to the attendees and after the yearly Library in a Week planning presentation by Jeff Garland, the crowd of C++ experts was introduced to Rust by Niko Matsakis.
The keynote primarily focused on the safety of Rust, which is one of the dramatic differences from other system programming languages. Through the use of well-presented real-world examples Niko explained:
• Memory safety without garbage collection: Rust achieves this by having a compile-time strong ownership model. Things like dangling references are impossible in Rust because the compiler is able to track lifetimes and ownership.
• Concurrency without data races: the claim is that it is impossible to write code with a data race in Rust. This is done by preventing contemporaneous aliasing and mutability at compile-time.
• Abstraction without overhead: one of the similarities between C++ and Rust is their love for "zero-cost abstractions": Rust provides trait-based generics which are very powerful and do not introduce any unwanted run-time overhead. They kind of resemble Haskell type classes or C++0x concepts.
I had played around with Rust before the keynote and was aware of all the features Niko mentioned - hopefully it's needless to say that I think they're extremely useful and a step forward towards safer and saner system programming.
The keynote however didn't cover many of Rust's other interesting features such as macros or its great ecosystem. One of the things I've been thoroughly impressed by is cargo and its ease of use. I can literally clone any Rust project from GitHub and run it by typing
cargo run
in the terminal. All dependencies will be automagically downloaded and installed. cargo has been my most pleasant package management experience so far!
While I understand that it is impossible to cover all the nice features of Rust in an hour and a half, I was disappointed by the fact that Niko didn't mention any of the current issues with the language and potential reasons that might drive away C++ developers.
I feel like there are many missing features that make generic programming much easier:
• Overloading doesn't exist in Rust, but it is possible to emulate it through the use of traits. This often results in a lot of boilerplate just to produce a function that behaves slightly differently for a small subset of types but whose semantics are excellently captured by a single name.
• There are no variadic templates and no type-level integers. This is a huge huge step backwards from what C++ offers, but thankfully it's being worked on as you read this blog post. The lack of these features drives me crazy because I really enjoy generic programming and metaprogramming and I feel like I'm being limited by the language here. Look at Rust's std::tuple implementation: it's copy-paste madness! This also means that implementing something like a small_vector<N> with heap-allocation fallback is currently impossible without macros or code generation.
• There is no automatic return type deduction. I'm not sure, but I feel like this is probably by-design or unfeasible due to the complexity and power of Rust's type system... I however must admit that I've been spoiled by C++'s auto and decltype(auto) return type deduction and I find them really convenient when the return type is obvious from the function name and the surrounding context or when it depends on the input arguments. Thankfully something called impl Trait is available in unstable versions of Rust, which greatly improves the generic programming experience.
fn something<T>(x: T) -> impl Iterator {
// ...
}
The code snippet above defines a function called something that takes a generic T argument and returns anything that is an Iterator. impl Trait is close enough to automatic return type deduction for me and I look forward to seeing it in stable Rust soon.
• The ownership/borrowing system is not perfect. There are some tasks that should be straightforward, but that are impossible without unsafe blocks. An example is swapping two array elements - the compiler prevents you from doing this because the array is being mutably aliased twice, even though the elements are completely separate!
This code...
fn main() {
    let mut xs: [i32; 5] = [1, 2, 3, 4, 5];
    std::mem::swap(&mut xs[0], &mut xs[1]);
}
...will produce the following error:
error[E0499]: cannot borrow xs[..] as mutable more than once at a time
--> <source>:3:37
|
3 | std::mem::swap(&mut xs[0], &mut xs[1]);
| ----- ^^^^^- first borrow ends here
| | |
| | second mutable borrow occurs here
| first mutable borrow occurs here
(By the way, Rust's error messages are awesome!)
Overall I liked this keynote, but I felt like it could have gone more in depth as the audience was full of C++ experts.
• Rethinking strings (by Mark Zeren)
This was a nice and interesting report on how the presenter revamped the use of strings at his company, VMWare. The key takeaways for me were the following:
• std::string_view is awesome when ownership is not required and should be liberally used.
• The general consensus in the room for compile-time string constant definition was constexpr std::string_view{"..."}, as I once suggested on /r/cpp. This is great as it doesn't have any overhead compared to const char* but provides a way better interface that interoperates well with std::string.
• std::string is way too general. We would benefit in terms of readability, semantics and performance from multiple string types: unique_string, shared_string, fixed_string<N>, small_string<N>. I really like this type-based approach as it gives developers flexibility and API users clarity.
• Having "builders" for common operations such as cat(a, b, c, ...) and fmt(str, a, b, c, ...) is a good idea as they can nowadays be implemented very efficiently thanks to variadic templates and constexpr. I think that it would be great to have new standard formatting and concatenation facilities that compute as much as possible during compilation.
The talk ended with Mark showing us some code from his toy rethinking-strings library, which can be found here on GitHub. I strongly recommend checking it out as there are many interesting ideas in there.
• Expression Templates Everywhere with C++14 and Yap (by Zach Laine)
Zach presented the usage and implementation of Yap, a C++14 library proposed to Boost which aims to cover the same problem space as Boost.Proto in a nicer and more efficient way thanks to features introduced in the latest standards.
In short, it's an "expression template generator" that allows developers to easily create rich and powerful expression template trees. Zach used Boost.Hana to implement Yap and presented many interesting ideas and challenges encountered during his work.
This was overall a solid and interesting presentation and I recommend checking out the recording when its available online if you're interested in expression templates.
I'm honestly curious whether or not it would be feasible to use Yap as a building block for range-v3...
• constexpr ALL the things! (by Ben Deane and Jason Turner)
This talk blew my mind! The goal was to implement fully-constexpr JSON literals:
auto some_json_object = R"({
    "hi": 1234,
    "bye": [0, 1, 2]
})"_json;
Ben and Jason presented the very clever design and implementation of a fully-constexpr JSON parser step-by-step. Starting from a constexpr string, they implemented a vector, a map, a mutable string, and a fully-constexpr parser framework!
By using constexpr "parser combinators" they managed to achieve the initial goal and optimize compilation times enough to make it actually useable.
A must watch.
• Social Event: Picnic
At the end of the day we had a nice picnic with friends and family. Burgers and chicken were cooked on the BBQ, and we also had ice cream!
Social events like this are great examples of why C++Now really feels like a family gathering: it is amazing to share good food and drinks in a beautiful environment while talking about hardcore C++ metaprogramming.
### wednesday, may 17
• Haskell taketh away: limiting side effects for parallel programming (by Ryan Newton)
The Haskell keynote woke us up after the previous night spent at the bar. Ryan defined Haskell as a "research project that escaped the lab" and explained how the language makes it possible to define convenient and safe parallel programming abstractions.
The main idea is that Haskell is able to limit a function's side effects through the type system - C++ is unfortunately unable to do this. By defining multiple monads that limit operations it is possible to define safer abstractions over shared memory and parallel programming.
I found the ideas and work presented during the keynote really impressive, but I honestly would have liked to see more content/techniques that could be applied in a useful way in C++ development. Regardless, I recommend watching the recording if you're interested in functional programming, Haskell, or parallel programming.
• A vision for C++20, and std2 (part 1 of 3) (by Alisdair Meredith)
While I was only able to attend a single session of the 3-part presentation/workshop hybrid by Alisdair, I still found it really valuable and engaging. The experience I had was a prime example of something that makes C++Now really great: audience interaction.
Pretty much everybody in the audience is an expert during the conference, and presenters welcome live discussion and critique during the talks. Alisdair covered many of the upcoming major features (e.g. modules, coroutines, concepts, ...), mentioning their benefits and current potential issues. The audience<->presenter and audience<->audience debates that kept going on during the talk excellently pointed out details/advantages/drawbacks in the upcoming features that people might not think about - hopefully the recording will do justice to this session.
One particular instance of disagreement that I care about is normal form in the Concepts TS. In the code snippet below
template <typename T>
concept bool SomeConcept = /*...*/;
void foo(SomeConcept a, SomeConcept b);
the arguments a and b of the function must have the same type. I find this extremely counterintuitive and detrimental to the widespread adoption of normal form. I want to naturally read the above definition as: "foo takes an argument a that satisfies SomeConcept and an argument b that satisfies SomeConcept". Forcing the types to be equal feels weird to me and can already be done by using more verbose syntax.
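For reference, the more verbose spelling I have in mind (a sketch in TS-era syntax, not a quote from the talk) constrains each parameter independently:

template <SomeConcept T, SomeConcept U>
void foo(T a, U b); // a and b may now have different types, each satisfying SomeConcept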
Alisdair proposed a disambiguation syntax which I can get behind:
void foo(SomeConcept.0 a, SomeConcept.1 b);
While it would be useful to tersely force some arguments' types to be the same, I still strongly believe that the default should be "any type that satisfies the concept". I unfortunately don't have any formal argument for this, but it bothers me (and many others) immensely and I think there should be more discussion about this before the TS makes it into the Working Draft.
• The Mathematical Underpinnings of Promises in C++ (by David Sankel)
This very interactive and enjoyable talk tried to answer the following question: "what is the mathematical essence of a promise?"
David introduced the audience to the concepts of "denotational semantics" and "operational semantics", which are formal ways of reasoning about the "meaning" of programming language semantics by the use of logical mathematical expressions. It was really interesting to see how mathematical notation can be used to express the "meaning" of promises and to reason about it.
The presenter also made sure to make the audience understand how important it is to "discover" structures such as functors, applicatives, and monads during the design of an abstraction - doing that guarantees that the abstraction is powerful and flexible enough for many use cases.
The general idea is that developers should first reason about the mathematical essence of an abstraction, implement it, and "rinse&repeat" until it is refined and powerful enough. I believe that this approach is very valuable, and David proved it in his second, more practical talk: "Promises in C++: The Universal Glue for Asynchronous Programs". The second presentation covered his implementation of promises which looked solid and useful, making a case for the "mathematical essence" approach described above.
I highly recommend watching the recording of David's first presentation.
• Postmodern Immutable Data Structures (by Juan Pedro Bolivar Puente)
This is another presentation that I found mindblowing. Juan presented the concepts that power immer, a "C++ library implementing modern and efficient immutable data structures", and demoed ewig, a text editor implemented with immer, built around an immutable data model.
He first began praising value semantics for being easy to reason about and multithreading-friendly. Unfortunately, a program that mainly deals with values is unfeasible as it requires copying data everywhere, which becomes a big problem when the amount of data increases.
The solution to the "copy problem" is "immutable persistent data structures".
• They are immutable because they're always const (i.e. adding an element to an immutable data structure produces a copy of the existing structure with the added element).
• They are persistent because they preserve their history. This leads to "structural sharing":
• No copies of the real data are required, as the history is always available and can be shared between instances. The history itself is compact thanks to the sharing.
• Comparisons become extremely fast as it is enough to perform cheap pointer comparisons for instances of structures that are known to share their internal representation.
A very basic immutable data structure is the list... which is very bad in practice due to cache-unfriendliness and lack of random access.
Juan said that what we really want is some sort of "immutable std::vector", which has all the benefits of persistent immutable data structures but the cache-friendliness of a vector. The idea is creating some sort of tree structure where the leaves are chunks of contiguous data: this is the "radix balanced tree" by P. Bagwell and T. Rompf.
I found the concept really interesting and Juan's implementation very scary!
After explaining variations of the aforementioned radix tree data structures and "transient data structures" he showed a live demo of ewig where he loaded a 1GB text file containing all the content of the Esperanto version of Wikipedia, selected all the text and copy-pasted it multiple times in the middle of the editor... without any kind of slowdown. I found that really impressive and realized that immutable persistent data structures are something I should learn more about and put in my multi-paradigm C++ developer toolbox.
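Going by immer's documented interface (a small usage sketch I put together afterwards, so treat the exact calls as my assumption rather than something shown in the talk), the value semantics look roughly like this:

#include <immer/vector.hpp>
#include <cassert>

int main()
{
    const immer::vector<int> v0;
    const auto v1 = v0.push_back(1); // push_back returns a *new* vector
    const auto v2 = v1.push_back(2); // v0 and v1 are left untouched

    assert(v0.size() == 0);
    assert(v1.size() == 1);
    assert(v2[1] == 2);              // the three values share structure internally
}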
A must watch.
• Type Based Template Metaprogramming is Not Dead (by Odin Holmes)
Now that Boost.Hana is here, "traditional" type-based metaprogramming is dead... right? Odin's presentation proves that's not really true if high performance is a requirement and heterogeneous computations are unneeded.
He's one of the authors of Kvasir::mpl, the fastest metaprogramming library on metaben.ch!
Kvasir is blazing fast because it follows the "Rule of Chiel", created by another author of the library: Chiel Douwes. Chiel heavily benchmarked various operations on multiple compilers on machines with custom barebones kernels to figure out which metaprogramming techniques are the least and most expensive for compilers. In short:
• Type aliases are very fast.
• Type instantiations are kind of slow.
• Template functions are very slow.
• SFINAE is extremely slow.
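A tiny illustration of the gap between the first two bullets (my own example, not Odin's code): the alias template below does the same job as the "traditional" metafunction but never instantiates a class template.

#include <type_traits>

// "Traditional" metafunction: every distinct use instantiates a class template.
template <typename T>
struct add_pointer_slow { using type = T*; };

template <typename T>
using add_pointer_slow_t = typename add_pointer_slow<T>::type; // still instantiates add_pointer_slow<T>

// Pure alias "metafunction": just substitution, no class template instantiation.
template <typename T>
using add_pointer_fast = T*;

static_assert(std::is_same_v<add_pointer_slow_t<int>, add_pointer_fast<int>>);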
This hierarchy of "metaprogramming technique performance" makes it obvious that instantiations should be avoided as much as possible in performance critical compile-time projects... such as type-based metaprogramming libraries. Odin showed how many existing ideas such as metafunction compositions can actually be implemented with minimal instantiations - I honestly found the ideas and the results very impressive but wished he carefully explained the techniques and code snippets more slowly and step-by-step, as I found them quite unfamililar.
This is a talk I will definitely watch again when it becomes available on YouTube as I really want to understand the amazing ideas behind Kvasir and its extremely impressive performance.
• Lightning Talks (organized and moderated by Michael Caisse)
I loved this year's lightning talks so much! There were so many different and varied talks which were extremely funny or valuable (or both!) despite their short duration. Many thanks to Michael Caisse for moderating these sessions and making them possible.
I also won "third best lightning talk" with my short presentation: "You must type it three times". I showed the audience that building a constexpr/noexcept/SFINAE-friendly library higher-order function is extremely painful even in C++17, as it is often required to manually repeat the body of the function three times to achieve the aforementioned benefits. E.g.
template <typename F, typename... Ts>
constexpr auto log_and_call(F&& f, Ts&&... xs)
    // 1st repetition: for the noexcept specification
    noexcept(noexcept(
        std::forward<F>(f)(std::forward<Ts>(xs)...)
    ))
    // 2nd repetition: for SFINAE-friendly return type deduction
    -> decltype(
        std::forward<F>(f)(std::forward<Ts>(xs)...)
    )
{
    log << "calling f\n";
    // 3rd repetition: the actual call
    return std::forward<F>(f)(std::forward<Ts>(xs)...);
}
I encourage you to look at the slides if you're interested in the issue, and to share your ideas for possible solutions.
### is this the end?
I didn't originally plan to split this in two parts but I found that understanding my notes, recalling interesting moments during the sessions, and packaging everything into a coherent report takes a lot of time! Hopefully you found the first part of this trip report interesting - I hope to have the second (and final) part up on my blog as soon as possible.
Part 2 is now available here! | 2019-03-19 16:42:45 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.25861603021621704, "perplexity": 2572.401088927731}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912202003.56/warc/CC-MAIN-20190319163636-20190319185636-00553.warc.gz"} |
https://learnzillion.com/lesson_plans/36033-lesson-8-the-nth-term | Lesson plan
# Lesson 8: The nth Term
teaches Alabama State Standards 8a-27.
teaches Alabama State Standards 8a-24.a.
teaches Alabama State Standards 8a-24.
teaches Arizona State Standards A2.F-BF.A.2
teaches Arizona State Standards A1.F-LE.A.2
teaches Common Core State Standards MP5 http://corestandards.org/Math/Practice/MP5
teaches Common Core State Standards HSF-BF.A.2 http://corestandards.org/Math/Content/HSF/BF/A/2
teaches Common Core State Standards HSF-LE.A.2 http://corestandards.org/Math/Content/HSF/LE/A/2
teaches Common Core State Standards MP8 http://corestandards.org/Math/Practice/MP8
teaches Common Core State Standards MP6 http://corestandards.org/Math/Practice/MP6
teaches Colorado State Standards HS.F-LE.A.2.
teaches Colorado State Standards HS.F-BF.A.2.
teaches Georgia State Standards MGSE9-12.F.LE.2.
teaches Georgia State Standards MGSE9-12.F.BF.2.
teaches Kansas State Standards F.LQE.2.
teaches Minnesota State Standards 9.2.2.5.
teaches Minnesota State Standards 9.2.2.4.
teaches Minnesota State Standards 8.2.2.5.
teaches Minnesota State Standards 8.2.2.4.
teaches Minnesota State Standards 8.2.1.5.
teaches Minnesota State Standards 8.2.1.4.
teaches Ohio State Standards F.LE.2.
teaches Ohio State Standards F.BF.2.
teaches Ohio State Standards F.IF.9.
teaches Pennsylvania State Standards CC.2.2.HS.C.5.
teaches Pennsylvania State Standards CC.2.2.HS.C.3.
The goal of this lesson is for students to understand that how an equation is written to represent a function depends on how the domain of a function is identified. With sequences, it is common to start at either $$f(1)$$ or $$f(0)$$. So far in this unit, the first term has been typically cited as $$f(1)$$. The exception has been when $$n=1$$ is confusing given the context, which is the case when the number of pieces of paper depends on the number of cuts. This lesson gives students a chance to study the effect this choice has when writing an equation to define a sequence and is also meant to help students review how to write equations of linear and exponential functions by using a table to express regularity in repeated reasoning (MP8). In the following lessons, students will write equations for these types of functions in various contexts.
Prior to this lesson students focused on defining sequences recursively using function notation. In this lesson, students will study equations representing functions that are known as explicit or closed-form definitions. A closed-form definition is one where the value of the term is determined from just the term number. This type of equation is one students are familiar with from their earlier work with linear and exponential equations.
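As a quick illustration (an added example, not taken from the IM materials): the geometric sequence $$3, 6, 12, 24, \ldots$$ is defined by $$f(n) = 3 \cdot 2^{n-1}$$ when the first term is $$f(1)$$, but by $$f(n) = 3 \cdot 2^{n}$$ when the first term is $$f(0)$$; the two equations look different yet represent the same sequence, which is exactly the kind of justification students are asked to produce in this lesson.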
A focus of this lesson is using precise language to explain patterns and understand how a function can be represented by two different equations (MP6).
Technology isn't required for this lesson, but there are opportunities for students to choose to use appropriate technology to solve problems. We recommend making technology available.
Lesson overview
• 8.1 Warm-up: Which One Doesn’t Belong: Repeated Operations (5 minutes)
• 8.2 Activity: More Paper Slicing (15 minutes)
• 8.3 Activity: A Sierpinski Triangle (15 minutes)
• Includes "Are you Ready for More?" extension problem
• Lesson Synthesis
• 8.4 Cool-down: Different Types of Equations (5 minutes)
Learning goals:
• Interpret an equation for the $$n^{\text{th}}$$ term of a sequence.
• Justify (orally and in writing) why different equations can represent the same sequence.
Learning goals (student facing):
• Let’s see how to find terms of sequences directly.
Learning targets (student facing):
• I can explain why different equations can represent the same sequence.
Required materials:
• Copies of blackline master
• Scissors
Required preparation:
• Use of the blackline master is optional. If using, prepare 1 pair of scissors for every 2 students.
Standards:
• This lesson builds on the standard: CCSS.HSF-LE.A.1, MS.F-LE.1, MO.A1.LQE.A.1a, MO.A1.LQE.A.1b
• This lesson builds towards the standards: CCSS.HSF-BF.A.2, MS.F-BF.2, MO.A1.LQE.B.4, CCSS.HSF-LE.A.2, MS.F-LE.2, MO.A1.LQE.A.3
IM Algebra 1, Geometry, Algebra 2 is copyright 2019 Illustrative Mathematics and licensed under the Creative Commons Attribution 4.0 International License (CC BY 4.0).
The Illustrative Mathematics name and logo are not subject to the Creative Commons license and may not be used without the prior and express written consent of Illustrative Mathematics. | 2021-10-23 07:39:45 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3625841438770294, "perplexity": 3739.2926558215304}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585653.49/warc/CC-MAIN-20211023064718-20211023094718-00616.warc.gz"} |
https://gmatclub.com/forum/if-x-and-y-are-positive-integers-and-r-is-the-remainder-when-7-4x-277678.html | GMAT Question of the Day - Daily to your Mailbox; hard ones only
It is currently 11 Dec 2018, 03:32
### GMAT Club Daily Prep
#### Thank you for using the timer - this advanced tool can estimate your performance and suggest more practice questions. We have subscribed you to Daily Prep Questions via email.
Customized
for You
we will pick new questions that match your level based on your Timer History
Track
every week, we’ll send you an estimated GMAT score based on your performance
Practice
Pays
we will pick new questions that match your level based on your Timer History
## Events & Promotions
###### Events & Promotions in December
PrevNext
SuMoTuWeThFrSa
2526272829301
2345678
9101112131415
16171819202122
23242526272829
303112345
Open Detailed Calendar
• ### Free GMAT Prep Hour
December 11, 2018
December 11, 2018
09:00 PM EST
10:00 PM EST
Strategies and techniques for approaching featured GMAT topics. December 11 at 9 PM EST.
• ### The winning strategy for 700+ on the GMAT
December 13, 2018
December 13, 2018
08:00 AM PST
09:00 AM PST
What people who reach the high 700's do differently? We're going to share insights, tips and strategies from data we collected on over 50,000 students who used examPAL.
# If x and y are positive integers and r is the remainder when (7^(4x+3)
Math Expert
Joined: 02 Sep 2009
Posts: 51098
If x and y are positive integers and r is the remainder when (7^(4x+3) [#permalink]
01 Oct 2018, 03:47
If x and y are positive integers and r is the remainder when $$(7^{4x+3} + y)$$ is divided by 10, what is the value of r ?
(1) x = 10
(2) y = 2
CEO
Status: GMATINSIGHT Tutor
Joined: 08 Jul 2010
Posts: 2710
Location: India
GMAT: INSIGHT
WE: Education (Education)
Re: If x and y are positive integers and r is the remainder when (7^(4x+3) [#permalink]
01 Oct 2018, 03:53
Bunuel wrote:
If x and y are positive integers and r is the remainder when $$(7^{4x+3} + y)$$ is divided by 10, what is the value of r ?
(1) x = 10
(2) y = 2
Question: What is the remainder when $$(7^{4x+3} + y)$$ is divided by 10
CONCEPT: When a number is divided by 10 then the remainder will always be the unit digit of the number e.g. 37 divided by 10 leaves remainder 7 and 125 divided by 10 leaves remainder 5
i.e. we need to calculate the unit digit of $$(7^{4x+3} + y)$$
but the units digit of $$(7^{4x+3})$$ is always the same as the units digit of $$7^3$$, because the cyclicity of the units digit of 7 is 4, i.e. the units digits of powers of 7 repeat after every 4 powers (and $$4x+3$$ always leaves remainder 3 when divided by 4)
Hence we only need to know the units digit of y to answer the question
Statement 1: x = 10
NOT SUFFICIENT
Statement 2: y = 2
SUFFICIENT
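To spell out the final arithmetic (added for completeness, not part of the original post): the units digit of $$7^{4x+3}$$ equals the units digit of $$7^3 = 343$$, which is 3, for every positive integer x. With y = 2 the units digit of $$7^{4x+3} + y$$ is 3 + 2 = 5, so r = 5. Statement (2) alone is therefore sufficient while statement (1) alone is not, i.e. the answer is B.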
Display posts from previous: Sort by | 2018-12-11 11:32:07 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.36185112595558167, "perplexity": 2601.4029233428237}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376823618.14/warc/CC-MAIN-20181211104429-20181211125929-00191.warc.gz"} |
https://www.ques10.com/p/39487/derive-the-expression-for-rankines-active-earth-pr/ | 1
1.3kviews
Derive the expression for Rankine's Active earth pressure for cohesive back fill.
Consider a retaining wall of height $\mathrm{H}$ with a smooth vertical back, retaining a cohesive backfill. The relationship between the major principal stress $\sigma_{1}$ and minor principal stress $\sigma_{3}$ at failure (Plastic equilibrium) can be expressed in the form $$\sigma_{1}=\sigma_{3}\left(\frac{1+\sin \phi}{1-\sin \phi}\right)+2 c \sqrt{\frac{1+\sin \phi}{1-\sin \phi}}$$ Active State In active state, lateral stress $\sigma_{h}$ reduces to its minimum value i.e., $p_{a}$ while the vertical stress $\sigma_{v}$ remains unchanged. Since, $$\sigma_{v}\gt\sigma_{h}$$ Hence, $$\begin{array}{l} \sigma_{1}=\sigma_{v} \\ \sigma_{3}=\sigma_{h}=p_{a} \end{array}$$ Substituting the values of $\sigma_{1}$ and $\sigma_{3}$ in above eq., we get $\sigma_{v}=\sigma_{h}\left(\frac{1+\sin \phi}{1-\sin \phi}\right)+2 c \sqrt{\frac{1+\sin \phi}{1-\sin \phi}}$ $\sigma_{v}=p_{a}\left(\frac{1+\sin \phi}{1-\sin \phi}\right)+2 c \sqrt{\frac{1+\sin \phi}{1-\sin \phi}}$ $p_{a}=\left(\frac{1-\sin \phi}{1+\sin \phi}\right) \sigma_{v}-2 c \sqrt{\frac{1-\sin \phi}{1+\sin \phi}}$ $p_{a}=k_{a} \sigma_{v}-2 c \sqrt{k_{a}} \quad\left[\because k_{a}=\frac{1-\sin \phi}{1+\sin \phi}\right]$ | 2023-01-28 07:55:49 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8308559656143188, "perplexity": 1108.8466251234402}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499524.28/warc/CC-MAIN-20230128054815-20230128084815-00037.warc.gz"} |
http://physics.stackexchange.com/tags/statics/hot | # Tag Info
47
This is a statics problem. Assume the cable is static, perfectly straight and horizontal. Pick any point on the cable and the sum of the forces on that point must equal zero. There is a force, due to gravity, "downward". So, there must be an equal, opposing force "upward". This upward force must come from the tension in the cable. But, if the cable is ...
16
To make it fall you need a torque. This torque is provided by the weight force acting on the center of mass of the object and by the offset between the center of mass and the edge of the object. Imagine your domino standing upright then tilt it. You are moving the center of mass. When the center of mass (blue) is on the right of the edge (red) then you have ...
11
Imagine a heavy cord raised off the ground between two blocks. Rather than consider all of the mass pieces of the rope, and the forces on them, we can simplify the problem a little bit by considering a slightly different one. The cord can be represented by a heavy ball (in the middle of the cord) connected by two massless strings to the blocks. From ...
11
Newton's first law of motion for a point particle states that a particle at rest will stay at rest and a particle in motion will stay in motion unless acted on by an unbalanced force. In other words, if the net force on the particle is zero, then the velocity of the particle will stay constant. Newton's first law of motion for a system of particles states ...
7
Yes, it's called the normal force. It comes from the rigidity of the stuff separating the object from the center of gravitational attraction, i.e. the rigidity of the rocks, dirt, floor, table, etc. If you'd like, you could think of this stuff as behaving like a spring with a huge spring constant. Any first-year physics textbook will cover this; there's a ...
6
This should be possible to solve in the same way we do the ordinary catenary problem, by variational calculus. Suppose the angular separation between the endpoints is $\Delta$, where we could define $\Delta = \frac{D}{R}$ if I understand the problem correctly. Let the shape of the rope be given by a function $r(\phi)$, and write the potential energy of the ...
6
Kind of. The negative sign indicates the direction of the force exerted by the spring on the mass. If you pull the mass to the right, the force from the spring is to the left. Since they go opposite directions, there is a minus sign. The problem states an external force exerted on the mass displaces it, presumably to a new equilibrium. The spring ...
6
As you have noticed yourself, your system is simply underdetermined. In order to find a unique solution you need to add some extra constraints in addition to Newton's equations. Imagine a table with more than four legs: the more legs you add, the more unknown forces you have. But the number of equations does not change. If we instead remove a leg we find a ...
5
Yes, it's possible. A static setup like this will work as long as any small motion of the parts would increase the potential energy. In this case, it looks like there is only one possible motion - rotation of the entire ruler-hanger-hammer piece about the axis where the ruler touches the table. If the ruler were to rotate down a little bit, the entire ...
5
A push up is a form of lever. The athlete must exert roughly half her body weight (under some assumptions I'll clarify at the end of the post.) We can solve this problem using the principle of virtual work. Assume the athlete raises her body through a small angle $\textrm{d}\theta$. Then her center of mass rises by $l \cos\theta \ \textrm{d}\theta$, with ...
5
It is hard to guess without seeing Gorillapod in use, but my guess would be the following: Center of mass could be understood as an average position of the mass of the object. In order for an object to be in stable equilibrium, its center of mass must be vertically above the area, which is enclosed by contact points of tripod's legs with the ground. If ...
5
Some engineering texts use "moment" and "couple" to talk about forces that tend to rotate an assembly (what physicist mean when they say "torque", but the engineers sometimes have a slightly different meaning for that word). A roughly translation guide is... A "couple" is a pair of opposite forces whose points of action are not co-linear. A couple is ...
5
Since this is a homework-type problem, here are some Hints for the force The electrostatic force $d\vec F$ on a small segment $dl$ of the rod given the field $\vec E$ of the other rod is $$d\vec F = \lambda\, dl \,\vec E$$ Determine the field of one rod, and use the above expression to integrate the force it exerts on the other rod. This is a 2D ...
5
The simple answer is that you can't fully solve this problem--because as you note it is under-constrained--under the assumptions that are made when you first start doing statics (that objects are completely rigid). The introduction of finite strains brings in additional relationships.
5
These are some of the Newtonian couples. The weight pulls down on the rope, and the rope pulls up on the weight(tension). The rope pulls down on the pulley(tension), and the pulley pulls up on the rope. The pulley pulls right on the rope , and the rope pulls left on the pulley(tension). the rope pulls right on the frame (tension), and the frame pulls left on ...
4
It's a 20-lbs staff. 20-lbs net force are required to hold it up, regardless of its orientation. If you just apply this force to one end of the staff, though, there would be a net torque. Instead, you need to use your hands to apply two forces to the staff. One force, exerted by the near hand on the very end of the staff, should be down. The other, ...
4
As joshphysics' answer showed, the force would indeed be infinite in the case of uniform lineic distributions, but the torque does not need to be. Using the same conventions joshphysics did, let's compute the elementary torque $d\vec{T}$ experienced by a piece $dr_2$ of rod 2 at position $\vec{r_2}$ from the joining point and integrate it over rod 2. It can ...
4
This looks like a simple linear blending problem. It is two-dimensional, but each dimension can be considered independently. The more to the right the weight is, the larger the fraction of it carried by F2 and F3. Basically, the fraction of the weight carried by F2 and F3 is X/W. Put more mathematically: (F2 + F3) / (F1 + F2 + F3 + F4) = X ...
3
The book is correct - to how many significant figures are you given the data? I would probably use the middle of the supports (ie 16m) but that doesn't matter for working out the vertical forces. This question also has nothing to do with momentum, unless there is a part 2 where you work out the sideways force when the truck moves.
3
I know this is an old question, but for the benefit of people visiting here wondering what the answer was, here it goes: A droplet can stay at rest on an inclined plate because of small heterogeneities on the surface. This can either be a small roughness (of the order of nano/micrometers) or `dirty' spots where the surface chemistry is locally different. ...
3
$\newcommand{\t}[1]{\frac{T_#1}{\sin\theta_#1}}$ The correct way is with vectors. The shortcut way is to use Lami's theorem. Basically, if the tension in each string is $T_1$, and the angle opposite each string is $\theta_1$, then $$\t 1=\t 2=\t 3$$ Referring to this diagram, $\frac{A}{\sin \alpha}=\frac{B}{\sin \beta}=\frac{C}{\sin \gamma}$, where ...
3
Let $y(x)$ be the curve describing the shape of the cable. Let $T(x)$ be the tension in the cable. Consider a small segment extending from $x$ to $x+dx$. The horizontal component of the tension at the two ends of this segment must cancel out, so $T_x$ must be constant. If $\theta$ is the angle the cable makes with the vertical, so that $y'=\tan\theta$, ...
3
Start with differential form of Poisson's ratio: $$\frac{\text{d} x}{x}=- \nu \frac{\text{d} l}{l}$$ $$\int_{x_0}^{x_0+\Delta x} \frac{\text{d} x}{x}=- \nu \int_{l_0}^{l_0+\Delta l} \frac{\text{d} l}{l}$$ $$\ln \frac{x_0+\Delta x}{x_0}=- \nu \ln \frac{l_0+\Delta l}{l_0}$$ $$1 + \frac{\Delta x}{x_0}=\left(1+\dfrac{\Delta l}{l_0}\right)^{-\nu}$$
3
This is blatantly a homework question, so we're only allowed to discuss methods, and not give you the answer. With any problem like this the very first thing to do is draw a diagram. From the limited information in your question I think the situation looks like this: You know the force $F$ that you're using to lift the end of the plate, and you want to ...
3
Continuing from Pygmalion's answer, a graphical explanation can be as below (Warning! - Representation may be a bit weird and out of proportion) Initially without the camera this is the case After camera with long lenses is placed the COM (centre of mass) shifts upwards and outwards as below This might be the "Centre of Mass" issue you are talking ...
3
Suppose you have a balance that looks like this: where $m_1$ and $m_2$ are the weights you're comparing and $M$ is some large weight fixed to the balance. For simplicity let's say $m_1$ is zero, so on one end of your scales you have some weight $m_2$ and there is nothing else on the other end. You are quite correct that with simple lever scales the lever ...
3
It's pretty simple: the forces at the anchor points would be infinite because of the 90° angle ;-) An example: Imagine two pillars with the same height. If you attach a rope on both of them and try to tighten it, you will slowly increase the pulling force at the top of the pillars while increasing the angle between rope and pillar. To fully straighten the ...
3
The pieces that support the most weight have higher friction and are more difficult to remove. The easier it is to remove a piece the less important it is structurally. Each block needs to support the weight of all the blocks above it, and it has to have at least 3 contact points spread apart like a three legged chair. With two contact points it will create ...
3
The motion will depend on the forces that the torque arises from in the first place. Newton's second law must apply to the system's centre of mass and is of course reckoned with the nett force. So if this nett force is nought, the centre of mass either cannot shift or moves uniformly (if so we shall assume the sphere's centre of mass is stationary ...
3
If a body moves only because of the influence of a torque, then it will rotate about the center of gravity. There is no location for torques, only directions. If you take the equations of motion as seen here (http://physics.stackexchange.com/a/80449/392) you will see that the location of the torque does not enter into the equations. Only the location of ...
Only top voted, non community-wiki answers of a minimum length are eligible | 2013-12-20 05:57:40 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.686325192451477, "perplexity": 264.43306645702387}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-48/segments/1387345769117/warc/CC-MAIN-20131218054929-00006-ip-10-33-133-15.ec2.internal.warc.gz"} |
https://stats.stackexchange.com/questions/321247/is-value-of-elbo-a-scalar-or-a-distribution | Is value of ELBO a scalar or a distribution?
I'm trying to get my head around variational inference, but I'm confused with the definition of ELBO, specifically an expectation over joint distribution. Here I use Variational Inference: A Review for Statisticians as a reference, although other sources use similar notation.
In Eq. 13 authors introduce ELBO as:
$$ELBO(q) = \mathbb{E}[\log p(z, x)] - \mathbb{E}[\log q(z)]$$
where expectations are taken w.r.t. $q(z)$.
If my math doesn't lie to me, the second term is expanded into:
$$\mathbb{E}[\log q(z)] = \mathbb{E_{q(z)}}[\log q(z)] = \int q(z) \cdot \log q(z) \cdot dz$$
and is integrated into a single number. It looks good to me and works well as an optimization objective. However, the first term is:
$$\mathbb{E}[\log p(z, x)] = \mathbb{E_{q(z)}}[\log p(z, x)] = \int q(z) \cdot \log p(z,x) \cdot dz$$
I.e. it contains joint distribution $\log p(z, x)$. Even if we integrate out $z$, we still have a random variable $x$ with multiple possible values. So as I understand it, the whole term $\mathbb{E}[\log p(z, x)]$ is a probability density itself.
If both statements above are correct, then the definition of the ELBO looks like:
$$ELBO(q) = (\text{some density over } x) - (\text{a scalar})$$
I believe this means subtracting a constant from each value of the density in the first term. It also plays well with Eq. 14 in the paper, which gives the definition of the evidence lower bound itself.
However, if the value of the ELBO is a distribution, what does it even mean to optimize it?
To summarize my questions:
• is the value of the ELBO a single number or a distribution?
• if a number, how can it be a lower bound of $\log p(x)$?
• if a distribution, how do we optimize it?
• are there any other mistakes in my understanding?
• You can think of the ELBO as a function from the family of distributions you're optimizing over to $\mathbb{R}$. The reason it outputs a scalar is that you're assuming you're working with some fixed $x$, so that $\log(p(x))$ is a constant and $p(z, x)$ is only a function of $z$. Jan 2, 2018 at 22:28
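To spell out the standard identity behind the comment above (added here for context; it is not part of the original thread): for the observed, fixed dataset $x$,
$$\log p(x) \;=\; \underbrace{\mathbb{E}_{q(z)}[\log p(z, x)] - \mathbb{E}_{q(z)}[\log q(z)]}_{ELBO(q)} \;+\; KL\!\left(q(z)\,\|\,p(z \mid x)\right) \;\ge\; ELBO(q),$$
since the KL divergence is non-negative. With $x$ fixed, every quantity in this identity is a single real number, which is also why the ELBO can serve as a lower bound on $\log p(x)$.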
• Working with a fixed $x$ is unusual for me, but it actually makes sense. I need to re-read the paper with this idea in mind, but meanwhile: in the derivations, do we assume that this fixed $x$ is a single observation or the whole dataset that we should somehow summarize? In other words, is $p(z \mid x)$, for example, the probability of $z$ given some concrete sample of $x$? Jan 3, 2018 at 7:12
• It's fixed in the sense that, when you're actually implementing variational inference (or any other method of approximate inference) for some model, you're working with an actual data set. This data set is fixed, and when you calculate $p(x)$ for this data set, it will spit out some scalar. $p(z|x)$ is indeed the posterior probability of $z$, the unknowns, given your data $x$. How much have you studied Bayesian statistics before? It seems as though you might have a misunderstanding about the underlying framework that variational inference is used in (approximate Bayesian inference). Jan 3, 2018 at 19:46
• Indeed, I'm a software engineer with experience in (non-Bayesian) machine learning and some serious gaps in my understanding of Bayesian statistics, so suggestions for a modern, well-structured introduction to the topic are also welcome :) Anyway, thanks for an excellent explanation! Jan 3, 2018 at 22:39
• People recommend Statistical Rethinking by McElreath as a good first-pass introduction for building intuition. Books at a slightly higher level are Doing Bayesian Data Analysis by Kruschke and A First Course in Bayesian Statistical Methods by Hoff. The graduate-level reference is Bayesian Data Analysis by Gelman et al., which is the only one that covers variational inference, but all of them should give you enough background to read through the JASA review you're going through. Jan 3, 2018 at 23:32
1 Answer
Technically the ELBO would be a functional, a function that takes a function as an argument. However, in practice most problems assume some class of distributions (e.g. Gaussian, Gamma, etc.), which eliminates the functional aspect of the problem; you then optimize within this class of distributions, making it a single-variable or multivariate optimization problem, depending on how many parameters are in the family.
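As an illustration of this point (not part of the original answer), here is a minimal Monte Carlo sketch in Python; the toy model — a Gaussian likelihood with known unit variance, a standard normal prior on the mean $z$, and a Gaussian variational family $q(z) = N(\mu, \sigma^2)$ — is an assumption chosen purely for the example:

```python
import numpy as np

rng = np.random.default_rng(0)

# Fixed, observed dataset x (toy data for illustration).
x = rng.normal(loc=2.0, scale=1.0, size=50)

# Toy model: x_i ~ N(z, 1) with prior z ~ N(0, 1).
def log_joint(z, x):
    log_prior = -0.5 * z**2 - 0.5 * np.log(2 * np.pi)
    log_lik = np.sum(-0.5 * (x[None, :] - z[:, None])**2
                     - 0.5 * np.log(2 * np.pi), axis=1)
    return log_prior + log_lik

# Variational family: q(z) = N(mu, sigma^2).
def elbo(mu, sigma, x, n_samples=5000):
    z = rng.normal(mu, sigma, size=n_samples)                 # z ~ q(z)
    log_q = (-0.5 * ((z - mu) / sigma)**2
             - np.log(sigma) - 0.5 * np.log(2 * np.pi))
    return np.mean(log_joint(z, x) - log_q)                   # a single float

print(elbo(0.0, 1.0, x))   # one real number
print(elbo(2.0, 0.2, x))   # a different (larger) real number
```

Each call returns a single float for the fixed dataset `x`; optimizing the ELBO means searching over the variational parameters $(\mu, \sigma)$, not over values of $x$.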
• Ah, I should have written "value of ELBO" instead of just "ELBO", i.e. what is the codomain of the ELBO. I fixed it in the text. But from what you say about functionals, I understand the ELBO's value is in $\mathbb{R}$, i.e. $\mathbb{E}[\log p(z, x)]$ is a number. Is that correct? If so, why do we ignore that $x$ is a random variable itself? Jan 3, 2018 at 7:05
• You don't really ignore that $x$ is a random variable; $x$ is the observed data. In most VB applications you want to estimate the parameters of a distribution using observed data. The $z$ is unobserved data and is therefore integrated out (the expectation is taken over $z$). What your notation obscures is that these distributions have parameter(s) that you want to estimate. Jan 3, 2018 at 14:19
• @LucasRoberts $x$ is observed, but usually it's assumed that $x\sim P_\theta(x)$, some sampling distribution. Suppose we want to incorporate this randomness of $x$, then the original idea (of ELBO as a distribution) sounds interesting Mar 15, 2021 at 18:51 | 2022-09-27 15:21:03 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.772793710231781, "perplexity": 397.3075137759917}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335034.61/warc/CC-MAIN-20220927131111-20220927161111-00301.warc.gz"} |
http://mathhelpforum.com/calculus/49355-integration-u-sub-trig.html | # Thread: Integration with u-sub and trig
1. ## Integration with u-sub and trig
$\int \frac{x^4}{\sqrt{x^{10}-2}}dx$
Let: $u=x^5, \quad du=5x^4\,dx$
$=\tfrac{1}{5}\int\frac{1}{\sqrt{u^2-(\sqrt{2})^2}}du$
Let: $u=\sqrt{2}\sec\theta, \quad du=\sqrt{2}\sec\theta\tan\theta\,d\theta$
$=\tfrac{1}{5}\int\frac{\sqrt{2}\sec\theta\tan\theta}{\sqrt{2}\tan\theta}d\theta$
$=\tfrac{1}{5}\int\sec\theta\, d\theta$
$=\tfrac{1}{5}\ln{\left|\sec\theta+\tan\theta\right|}+C$
$=\tfrac{1}{5}\ln{\left|\tfrac{u}{\sqrt{2}}+\tfrac{\sqrt{u^2-2}}{\sqrt{2}}\right|}+C$
$=\tfrac{1}{5}\ln{\left|\tfrac{x^5}{\sqrt{2}}+\tfrac{\sqrt{x^{10}-2}}{\sqrt{2}}\right|}+C$
The book claims that the answer is:
$=\tfrac{1}{5}\ln{\left|x^5+\sqrt{x^{10}-2}\right|}+C$
I could see where the discrepancy could come from, but I don't see the error in my work... what did I do wrong?
2. Hello !
Originally Posted by symstar
It is correct
$\tfrac{x^5}{\sqrt{2}}+\tfrac{\sqrt{x^{10}-2}}{\sqrt{2}}=\tfrac{1}{\sqrt{2}} \left(x^5+\sqrt{x^{10}-2}\right)$
Thus $\tfrac{1}{5}\ln{\left|\tfrac{x^5}{\sqrt{2}}+\tfrac{\sqrt{x^{10}-2}}{\sqrt{2}}\right|}+C=\tfrac 15 \ln\left|\tfrac{1}{\sqrt{2}} \left(x^5+\sqrt{x^{10}-2}\right)\right|+C$
Use the rule $\ln(ab)=\ln(a)+\ln(b)$ :
$=\tfrac 15 \left(-\ln(\sqrt{2})+\ln\left|x^5+\sqrt{x^{10}-2}\right|\right)+C$
$=\tfrac 15 \ln\left|x^5+\sqrt{x^{10}-2}\right|\underbrace{-\tfrac{\ln(\sqrt{2})}{5}+C}_{\text{this is a constant}}$
$=\tfrac 15 \ln\left|x^5+\sqrt{x^{10}-2}\right|+C'$
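As a quick numerical sanity check (not part of the original thread), SymPy can confirm that the two antiderivatives have identical derivatives and differ only by the constant $-\tfrac{\ln(\sqrt{2})}{5}$:

```python
import sympy as sp

x = sp.symbols('x', positive=True)

mine = sp.Rational(1, 5) * sp.log(x**5 / sp.sqrt(2) + sp.sqrt(x**10 - 2) / sp.sqrt(2))
book = sp.Rational(1, 5) * sp.log(x**5 + sp.sqrt(x**10 - 2))

# Both differentiate back to the integrand x^4 / sqrt(x^10 - 2), so the
# difference of their derivatives should simplify to zero.
print(sp.simplify(sp.diff(mine, x) - sp.diff(book, x)))

# Their difference is the constant -ln(sqrt(2))/5, independent of x.
print((mine - book).subs(x, 2).evalf(), (-sp.log(sp.sqrt(2)) / 5).evalf())
```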
3. Ah, I see. Thanks! | 2017-01-24 22:01:07 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 26, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9674572348594666, "perplexity": 1631.468118270673}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560285289.45/warc/CC-MAIN-20170116095125-00386-ip-10-171-10-70.ec2.internal.warc.gz"} |