# Community Detection in Bipartite Networks with Stochastic Blockmodels
## Abstract
In bipartite networks, community structures are restricted to being disassortative, in that nodes of one type are grouped according to common patterns of connection with nodes of the other type. This makes the stochastic block model (SBM), a highly flexible generative model for networks with block structure, an intuitive choice for bipartite community detection. However, typical formulations of the SBM do not make use of the special structure of bipartite networks. Here, we introduce a Bayesian nonparametric formulation of the SBM, the bipartite SBM (biSBM), and a corresponding algorithm to efficiently find communities in bipartite networks which parsimoniously chooses the number of communities. The biSBM improves community detection results over general SBMs when data are noisy, improves the model resolution limit by a factor of $\sqrt{2}$, and expands our understanding of the complicated optimization landscape associated with community detection tasks. A direct comparison of certain terms of the prior distributions in the biSBM and a related high-resolution hierarchical SBM also reveals a counterintuitive regime of community detection problems, populated by smaller and sparser networks, where non-hierarchical models outperform their more flexible counterpart.
###### pacs:
89.75.Hc 02.50.Tt 89.70.Cf
## I Introduction
A bipartite network is defined as having two types of nodes, with edges allowed only between nodes of different types. For instance, a network in which edges connect people with the foods they eat is bipartite, as are other networks of associations between two classes of objects. Recent applications of bipartite networks include studies of plants and the pollinators that visit them Young et al. (2019), stock portfolios and the assets they comprise Squartini et al. (2017), and even U.S. Supreme Court justices and the cases they vote on Guimerà and Sales-Pardo (2011). More abstractly, bipartite networks also provide an alternative representation for hypergraphs in which the two types of nodes represent the hypergraph’s nodes and its hyperedges, respectively Ghoshal et al. (2009); Chodrow (2020).
Many networks exhibit community structure, meaning that their nodes can be divided into groups such that the nodes within each group connect to other nodes in other groups in statistically similar ways. Bipartite networks are no exception, but they exhibit a particular form of community structure because type-I nodes are defined by how they connect to type-II nodes, and vice versa. For example, in the bipartite network of people and the foods they eat, vegetarians belong to a group of nodes which are defined by the fact that they never connect to nodes in the group of meat-containing foods; meat-containing foods are defined by the fact that they never connect to vegetarians. While the group structure in this example comes from existing node categories, one can also ask whether statistically meaningful groups could be derived solely from the patterns of the edges themselves. This problem, typically called community detection, is the unsupervised task of partitioning the nodes of a network into statistically meaningful groups. In this paper, we focus on the community detection problem in bipartite networks.
There are many ways to find community structure in bipartite networks, including both general methods—which can be applied to any network—and specialized methods derived specifically for bipartite networks. We focus on a family of models related to the stochastic blockmodel (SBM), a generative model for community structure in networks Holland et al. (1983). Since one of the SBM’s parameters is a division of the nodes into groups, community detection with the SBM simply requires a method to fit the model to network data. With inference methods becoming increasingly sophisticated Peixoto (2019), many variants of the SBM have been proposed, including those that accommodate overlapping communities Airoldi et al. (2008); Godoy-Lorite et al. (2016), broad degree distributions Karrer and Newman (2011), multilayer networks Tarrés-Deulofeu et al. (2019), hierarchical community structures Peixoto (2014a), and networks with metadata Hric et al. (2016); Newman and Clauset (2016); Peel et al. (2017). SBMs have also been used to estimate network structure or related observational data even if the measurement process is incomplete and erroneous Young et al. (2019); Newman (2018a, b); Peixoto (2018). In fact, a broader class of so-called mesoscale structural inference problems, like core-periphery identification and imperfect graph coloring, can also be solved using formulations of the SBM, making it a universal representation for a broad class of problems Young et al. (2018); Olhede and Wolfe (2014).
At first glance, the existing SBM framework is readily applicable to bipartite networks. This is because, at a high level, the two types of nodes should correspond naturally to two blocks with zero edges within each block, implying that SBMs should detect the bipartite split without that split being explicitly provided. However, past work has shown that providing node type information a priori improves both the quality of partitions and the time it takes to find them Larremore et al. (2014). Unfortunately those results, which relied on local search algorithms to maximize model likelihood Karrer and Newman (2011); Larremore et al. (2014), have been superseded by more recent results which show that fitting fully Bayesian SBMs using Markov chain Monte Carlo can find structures in a more efficient and non-parametric manner Peixoto (2017); Riolo et al. (2017); Peixoto (2019). These methods maximize a posterior probability, producing results similar to traditional cross-validation via link prediction in many (but not all) cases Kawamoto and Kabashima (2017a); Vallès-Català et al. (2018). In this sense, they avoid overfitting the data, i.e., they avoid finding a large number of communities whose predictions fail to generalize. This raises the question of whether the more sophisticated Bayesian SBM methods gain anything from being customized for bipartite networks, like the previous generation of likelihood-based methods did Larremore et al. (2014).
In this paper, we begin by introducing a non-parametric Bayesian bipartite SBM (biSBM) and show that bipartite-specific adjustments to the prior distributions improve the resolution of community detection by a factor of $\sqrt{2}$, compared with the general SBM Peixoto (2013). As with the general SBM, the biSBM automatically chooses the number of communities and controls model complexity by maximizing the posterior probability.
After introducing a bipartite model, we also introduce an algorithm, designed specifically for bipartite data, that efficiently fits the model to data. Importantly, this algorithm can be applied to both the biSBM and its general counterpart, allowing us to isolate both the effects of our bipartite prior distributions and the effects of the search algorithm itself. As in the maximum likelihood case Larremore et al. (2014), the ability to customize the search algorithm for bipartite data provides both improved community detection results, as well as a more sophisticated understanding of the solution landscape, but unlike that previous work, this algorithm does more than simply require that blocks consist of only one type of node. Instead, the algorithm explores a two-dimensional landscape of model complexity, parameterized by the number of type-I blocks and the number of type-II blocks. This contributes to the growing body of work that explores the solution space of community detection models, including methods to sample the entire posterior Riolo et al. (2017), count the number of metastable states Kawamoto and Kabashima (2019), and determine the number of solution samples required to describe the landscape adequately Calatayud et al. (2019).
In the following sections, we introduce a degree-corrected version of the bipartite SBM Larremore et al. (2014), which combines and extends two recent advances. Specifically, we recast the bipartite SBM Larremore et al. (2014) in a microcanonical and Bayesian framework Peixoto (2017) by assuming that the number of edges between groups and degree sequence are fixed exactly, instead of only in expectation. We then derive its likelihood, introduce prior distributions that are bipartite-specific, and describe an algorithm to efficiently fit the combined nonparametric Bayesian model to data. We then demonstrate the impacts of both the bipartite priors and algorithm in synthetic and real-world examples, and explore their impact on the maximum number of communities that our method can find, i.e., its resolution limit, before discussing the broader implications of this work.
## II The microcanonical bipartite SBM
Consider a bipartite network with $N_{\mathrm{I}}$ nodes of type I and $N_{\mathrm{II}}$ nodes of type II. The type-I nodes are divided into $B_{\mathrm{I}}$ blocks and the type-II nodes are divided into $B_{\mathrm{II}}$ blocks. Let $N = N_{\mathrm{I}} + N_{\mathrm{II}}$ and $B = B_{\mathrm{I}} + B_{\mathrm{II}}$. Rather than indexing different types of nodes separately, we index the nodes by $i \in \{1, \dots, N\}$ and annotate the block assignment of node $i$ by $b_i$. A key feature of the biSBM is that each block consists of only one type of node.
Having divided nodes into blocks, we can now write down the propensities for nodes in each block to connect to nodes in the other blocks. Let $e_{rs}$ be the total number of edges between blocks $r$ and $s$. Then, let $k_i$ be the degree of node $i$. Together, $e$ and $k$ specify the degrees of each node and the patterns by which edges are placed between blocks. The number of edges attached to a group must be equal to the sum of its degrees, such that $e_r \equiv \sum_s e_{rs} = \sum_i k_i \delta_{b_i, r}$ for any group $r$. For bipartite networks, $e_{rr} = 0$ for all $r$. We use $n_r$ to denote the number of nodes in block $r$.
Given the parameters above, one can generate a network by placing edges that satisfy the constraints imposed by e and k. However, that network would be just one of an ensemble of potentially many networks, all of which satisfy the constraints, analogous to the configuration model Bollobás (1980); Fosdick et al. (2018). Peixoto showed how to count the number of networks in this ensemble Peixoto (2012), so that for a uniform distribution over that ensemble, the likelihood of observing any particular network is simply the inverse of the ensemble size. This means that, given e, k, and the group assignments b, computing the size of the ensemble is tantamount to computing the likelihood of drawing a network with adjacency matrix A from the model, $P(A \mid k, e, b)$. Thus, treating networks as equiprobable microstates in a microcanonical ensemble leads to the microcanonical stochastic blockmodel, whose bipartite version we now develop, specifically to find communities in real-world bipartite networks. This derivation follows directly from combining the bipartite formulation of the SBM Larremore et al. (2014) with the microstate counting developed in Peixoto (2012). We introduce a new algorithm to fit the model in Sec. IV.
## III Nonparametric Bayesian SBM for Bipartite Networks
We first formulate the community detection problem as a parametric inference procedure. The biSBM is parameterized by a partition of nodes into blocks b, the number of edges between blocks e, and the number of edges for each node, k. However, for empirical networks, we need only search the space of partitions b. This is because the microcanonical model specifies the degree sequence k exactly, so the only way that an empirical network can be found in the microcanonical ensemble is if the parameter k is equal to the empirically observed degree sequence. Note that, when k and b are both specified, e is also exactly specified. As a consequence, community detection requires only a search over partitions of the nodes into blocks b.
In the absence of constraints on b, the maximum likelihood solution is simply for the model to memorize the data, placing each node into its own group and letting $B = N$. To counteract this tendency to dramatically overfit, we adapt the Bayesian nonparametric framework of Peixoto (2017), where the number of groups and other model parameters are determined from the data, and customize this framework for the situation in which the data are bipartite. We start by factorizing the joint distribution for the data and the parameters in this form,
$$P(A, k, e, b) = P(A \mid k, e, b)\, P(k \mid e, b)\, P(e \mid b)\, P(b) , \qquad (1)$$
where $P(k \mid e, b)$, $P(e \mid b)$, and $P(b)$ are prior probabilities that we will specify in later subsections. Thus, Eq. (1) defines a complete generative model for data and parameters.
The Bayesian formulation of the SBM is a powerful approach to community detection because it enables model comparison, meaning that we can use it to choose between different model classes (e.g., hierarchical vs flat) or to choose between parameterizations of the same model (e.g., to choose the number of communities). Two approaches to model comparison, producing equivalent formulations of the problem, are useful. The first formulation is that of simply maximizing Eq. (1), taking the view that the model which maximizes the joint probability of the model and data is statistically most justified. The second formulation is that of minimizing the so-called description length Rissanen (2007), which has a variety of interpretations (for a review and update, see Grünwald (2007); Grünwald and Roos (2019)). Perhaps the most useful interpretation for our purposes is that of compression, which takes the view that the best model is one which allows us to most compress the data, while accounting for the cost to describe the model itself. In this phrasing, for a model class $\mathcal{M}$, the description length is given by $\Sigma = -\ln P(A \mid \mathcal{M}) - \ln P(\mathcal{M})$. These two terms can be interpreted as the description cost of compressing the data A using the model and the cost of expressing the model itself, respectively. Therefore, the minimum description length (MDL) approach can be interpreted as optimizing the tradeoff between better fitting but larger models. Asymptotically, MDL is equivalent to the Bayesian Information Criterion (BIC) Schwarz (1978) for stochastic blockmodels under compatible prior assumptions Peixoto (2014a); Yan et al. (2014).
A complete and explicit formulation of model comparison will be provided in the context of our studies of empirical data in Sec. VII, using strict MDL approaches. For now, we proceed with calculating the likelihood and prior probabilities for the microcanonical biSBM and its parameters.
### III.1 Likelihood for the microcanonical bipartite SBM
The observed network A is just one of many networks in the microcanonical ensemble which match the parameters $k$, $e$, and $b$ exactly. Assuming that each configuration in the network ensemble is equiprobable, computing the likelihood is equivalent to taking the inverse of the size of the ensemble. We compute the size of the ensemble by counting the number of half-edge configurations $\Omega(e)$ that match the desired block structure and dividing by the number of equivalent configurations $\Xi(A)$ that correspond to the same network, yielding,
$$P_{\text{bi}}(A \mid k, e, b) = \|\Omega(k, e, b)\|^{-1} \equiv \frac{\Xi(A)}{\Omega(e)} . \qquad (2)$$
As detailed in Ref. Peixoto (2017), the number of networks that obey the desired block structure determined by e is given by,
$$\Omega(e) = \frac{\prod_r e_r!}{\prod_{r<s} e_{rs}!} . \qquad (3)$$
This counting scheme assumes that half-edges are distinguishable. In other words, it differentiates between permutations of the neighbors of the same node, which are all equivalent (i.e., correspond to the same adjacency matrix). To discount equivalent permutations of neighbors, we count the number of half-edge pairings that correspond to the bipartite adjacency matrix A,
$$\Xi(A) = \frac{\prod_i k_i!}{\prod_{i<j} A_{ij}!} . \qquad (4)$$
Note that while self-loops are forbidden, this formulation allows the possibility of multiedges.
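To make these counting formulas concrete, the following minimal sketch computes the log-likelihood of Eq. (2) from a dense integer adjacency matrix; the function name and the assumption of zero-indexed block labels are ours for illustration, not the interface of any particular library.

```python
import numpy as np
from scipy.special import gammaln

def log_likelihood(A, b):
    """ln P_bi(A | k, e, b) of Eq. (2): ln Xi(A) - ln Omega(e).
    A is the full (N x N) symmetric, integer-valued adjacency matrix
    (multiedges allowed); b holds zero-indexed block labels."""
    A, b = np.asarray(A), np.asarray(b)
    k = A.sum(axis=1)                            # degree sequence
    B = b.max() + 1
    membership = np.eye(B, dtype=int)[b]         # N x B one-hot matrix
    e = membership.T @ A @ membership            # e[r, s]: edges between blocks r and s
    e_r = e.sum(axis=1)                          # edges attached to each block
    lf = lambda x: gammaln(np.asarray(x) + 1.0)  # ln x!, elementwise
    iu = np.triu_indices_from(A, k=1)
    log_Xi = lf(k).sum() - lf(A[iu]).sum()       # Eq. (4)
    ru = np.triu_indices(B, k=1)
    log_Omega = lf(e_r).sum() - lf(e[ru]).sum()  # Eq. (3); e_rr = 0 for bipartite b
    return log_Xi - log_Omega
```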
### III.2 Prior for the degrees
The prior for the degree sequence follows directly from Ref. Peixoto (2017) because k is conditioned on e and b, which are bipartite. The intermediate degree distribution $\eta$, with $\eta_k^r$ being the number of nodes with degree $k$ that belong to group $r$, further factorizes the conditional dependency. This allows us to write
$$P(k \mid e, b) = P(k \mid \eta)\, P(\eta \mid e, b) , \qquad (5)$$
where
$$P(k \mid \eta) = \prod_r \frac{\prod_k \eta_k^r!}{n_r!} \qquad (6)$$
is a uniform distribution of degree sequences constrained by the overall degree counts, and
$$P(\eta \mid e, b) = \prod_r q(e_r, n_r)^{-1} \qquad (7)$$
is the distribution of the overall degree counts. The quantity $q(m, n)$ is the number of restricted partitions of the integer $m$ into at most $n$ parts Andrews (1998). It can be computed via the following recurrence relation,
$$q(m, n) = q(m, n-1) + q(m-n, n) , \qquad (8)$$
with boundary conditions $q(m, n) = 1$ for $m = 0$, and $q(m, n) = 0$ for $m < 0$ or $n = 0$. With this recurrence, computing $q(m, n)$ for all $m \leq M$ and $n \leq N$ requires $O(MN)$ additions of integers. In practice, we precompute $q(m, n)$ exactly via Eq. (8) up to a fixed cutoff (or up to the number of edges $E$ when the network is smaller), and resort to approximations Peixoto (2017) only for larger arguments.
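As a sketch, the recurrence and boundary conditions translate directly into a memoized function (recursion depth grows with $m$, so a production implementation would fill a table iteratively instead):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def q(m, n):
    """Number of partitions of the integer m into at most n parts, Eq. (8)."""
    if m == 0:
        return 1                      # the empty partition
    if m < 0 or n == 0:
        return 0
    return q(m, n - 1) + q(m - n, n)

assert q(4, 2) == 3                   # {4}, {3,1}, {2,2}
```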
For sufficiently many nodes in each group, the hyperprior Eq. (7) will be overwhelmed by the likelihood, and the distribution of Eq. (5) will approach the actual degree sequence. In such cases, the prior and hyperprior naturally learn the true degree distribution, making them applicable to heterogeneous degrees present in real-world networks.
### III.3 Prior for the node partition
The prior for the partitions b also follows Ref. Peixoto (2017) in its general outline, but the details require modification for bipartite networks. We write the prior for b as the following Bayesian hierarchy
$$P_{\text{bi}}(b) = P(b \mid n)\, P(n \mid B)\, P(B) , \qquad (9)$$
where $n = \{n_r\}$ is the number of nodes in each group. We then assume that this prior can be factorized into independent priors for the partitions of each type of node, i.e., $P_{\text{bi}}(b) = P_{\mathrm{I}}(b)\, P_{\mathrm{II}}(b)$. This allows us to treat the terms of Eq. (9) as
$$P(b \mid n) = \frac{\prod_{\text{type-I groups } r} n_r!}{N_{\mathrm{I}}!} \cdot \frac{\prod_{\text{type-II groups } s} n_s!}{N_{\mathrm{II}}!} , \qquad (10)$$
$$P(n \mid B) = \binom{N_{\mathrm{I}} - 1}{B_{\mathrm{I}} - 1}^{-1} \binom{N_{\mathrm{II}} - 1}{B_{\mathrm{II}} - 1}^{-1} , \qquad (11)$$
and
$$P(B) = N_{\mathrm{I}}^{-1}\, N_{\mathrm{II}}^{-1} . \qquad (12)$$
Equation (11) is a uniform hyperprior over all histograms of the node counts n with the given numbers of nonempty bins, while Eq. (12) is a uniform prior for the number of nonempty groups itself. This Bayesian hierarchy over partitions accommodates heterogeneous group sizes, allowing it to model the group sizes possible in real-world networks.
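The three factors of Eq. (9) are simple enough to evaluate directly. The sketch below computes $\ln P_{\text{bi}}(b)$; the argument names and the 'I'/'II' type encoding are illustrative assumptions.

```python
from math import lgamma, log
from collections import Counter

def lbinom(n, k):
    """ln of the binomial coefficient C(n, k)."""
    return lgamma(n + 1) - lgamma(k + 1) - lgamma(n - k + 1)

def log_partition_prior(b, node_type):
    """ln P_bi(b) = ln P(b|n) + ln P(n|B) + ln P(B), Eqs. (9)-(12),
    factorized over the two node types; node_type[i] is 'I' or 'II'."""
    total = 0.0
    for t in ('I', 'II'):
        sizes = Counter(bi for bi, ti in zip(b, node_type) if ti == t)
        N_t, B_t = sum(sizes.values()), len(sizes)
        total += sum(lgamma(n + 1) for n in sizes.values()) - lgamma(N_t + 1)  # Eq. (10)
        total -= lbinom(N_t - 1, B_t - 1)                                      # Eq. (11)
        total -= log(N_t)                                                      # Eq. (12)
    return total
```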
### III.4 Prior for the bipartite edge counts
We now introduce the prior for edge counts between groups, e, which also requires modification for bipartite networks. While the edge count prior for general networks is parameterized by the number of groups $B$, the analogous prior for bipartite networks is parameterized by $B_{\mathrm{I}}$ and $B_{\mathrm{II}}$. We therefore modify the counting scheme of Ref. Peixoto (2017), written for general networks, to avoid counting non-bipartite partitions that place edges between nodes of the same type. Our prior for edge counts between groups is therefore
$$P_{\text{bi}}(e \mid b) = \left(\!\!\binom{B_{\mathrm{I}} B_{\mathrm{II}}}{E}\!\!\right)^{-1} , \qquad (13)$$
where $B_{\mathrm{I}} B_{\mathrm{II}}$ counts the number of group-to-group combinations when edges are allowed only between type-I and type-II nodes. The notation $\left(\!\binom{n}{m}\!\right) = \binom{n + m - 1}{m}$ counts the number of histograms with $n$ bins whose counts sum to $m$. Similar to the uniform prior for general networks Peixoto (2017), it is unbiased and maximally non-informative, but by neglecting mixed-type partitions, this prior results in a more parsimonious description. In later sections, we show that this modified formulation enables the detection of smaller blocks, improving the so-called resolution limit, by reducing model complexity for larger $B_{\mathrm{I}}$ and $B_{\mathrm{II}}$.
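A small sketch makes the comparison concrete, with the bipartite prior of Eq. (13) beside the general-network prior of Eq. (22); the function names are ours.

```python
from math import lgamma

def log_multiset(n, m):
    """ln of the multiset coefficient ((n, m)) = C(n + m - 1, m)."""
    return lgamma(n + m) - lgamma(m + 1) - lgamma(n)

def log_edge_prior_bipartite(B_I, B_II, E):
    """ln P_bi(e | b), Eq. (13): uniform over histograms with B_I * B_II bins."""
    return -log_multiset(B_I * B_II, E)

def log_edge_prior_general(B, E):
    """ln P(e | b) for the general SBM, Eq. (22): ((B, 2)) = B(B+1)/2 bins."""
    return -log_multiset(B * (B + 1) // 2, E)
```

Because $B_{\mathrm{I}} B_{\mathrm{II}} \leq (B/2)^2 < B(B+1)/2$ for any bipartite split with $B = B_{\mathrm{I}} + B_{\mathrm{II}}$, the bipartite prior is never more expensive than the general one for the same partition.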
### III.5 Model summary
Having fully specified the priors in previous subsections, we now substitute our calculations into Eq. (1), the joint distribution for the biSBM, yielding,
$$P_{\text{bi}}(A, k, e, b) = \frac{\prod_i k_i! \prod_{r<s} e_{rs}!}{\prod_{i<j} A_{ij}! \prod_r e_r!} \times \prod_r \frac{\prod_k \eta_k^r!}{n_r!\, q(e_r, n_r)} \times \left(\!\!\binom{B_{\mathrm{I}} B_{\mathrm{II}}}{E}\!\!\right)^{-1} \times \frac{\prod_r n_r!}{N_{\mathrm{I}}!\, N_{\mathrm{II}}!} \binom{N_{\mathrm{I}} - 1}{B_{\mathrm{I}} - 1}^{-1} \binom{N_{\mathrm{II}} - 1}{B_{\mathrm{II}} - 1}^{-1} \frac{1}{N_{\mathrm{I}} N_{\mathrm{II}}} . \qquad (14)$$
Inference of the biSBM reduces to the task of sampling this distribution efficiently and correctly. Although Eq. (14) is somewhat daunting, note that k and e are implicit functions of the partition b, meaning Eq. (14) depends only on the data and the partition b. This opens the door to efficient sampling of the posterior distribution via Markov chain Monte Carlo which we discuss in Sec. IV.
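Assembled from the sketches in the previous subsections (reusing `log_likelihood`, `q`, `log_partition_prior`, and `log_edge_prior_bipartite` defined there), a description-length function might look like the following; again, the names are illustrative rather than any library's API.

```python
import numpy as np
from math import lgamma, log
from collections import Counter

def log_degree_prior(k, b):
    """ln P(k | e, b) = ln P(k | eta) + ln P(eta | e, b), Eqs. (5)-(7)."""
    total = 0.0
    for r in set(b):
        ks = [ki for ki, bi in zip(k, b) if bi == r]
        n_r, e_r = len(ks), sum(ks)
        eta = Counter(ks)                                   # eta_k^r
        total += sum(lgamma(c + 1) for c in eta.values()) - lgamma(n_r + 1)  # Eq. (6)
        total -= log(q(e_r, n_r))                           # Eq. (7)
    return total

def description_length(A, b, node_type):
    """S = -ln P_bi(A, k, e, b), Eq. (14): the quantity minimized by inference."""
    k = np.asarray(A).sum(axis=1).tolist()
    B_I = len({bi for bi, t in zip(b, node_type) if t == 'I'})
    B_II = len({bi for bi, t in zip(b, node_type) if t == 'II'})
    E = sum(k) // 2
    return -(log_likelihood(A, b)
             + log_degree_prior(k, list(b))
             + log_edge_prior_bipartite(B_I, B_II, E)
             + log_partition_prior(b, node_type))
```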
### III.6 Comparison with the hierarchical SBM
In deriving the biSBM, we replaced the SBM’s uniform prior for edge counts with a bipartite formulation Eq. (13). However, one can instead replace it with a Bayesian hierarchy of models (Eq. (23); Peixoto (2014a)). In this hierarchical SBM (hSBM), the matrix e is itself considered as an adjacency matrix of a multigraph with $B$ nodes and $E$ edges, allowing it to be modeled by a second SBM. Of course, the second SBM also has an edge count matrix with the same number of edges and fewer nodes, so the process of modeling each edge count matrix using another SBM can be done recursively until the model has only one block. In so doing, the hSBM typically achieves a higher posterior probability (which corresponds to higher compression, from a description length point of view) than non-hierarchical (or “flat”) models, and can therefore identify finer-scale community structure.
The hSBM’s edge count prior allows it to find finer scale communities and more efficiently represent network data. However, as we will see, when the network is small and has no hierarchical structure, the hSBM can actually underfit the data, finding too few communities, due to the overhead of specifying a hierarchy even when none exists. The scenarios in which the flat bipartite prior has advantages over its hierarchical counterpart are explored in Sec. V.
## IV Fitting the model to data
The mathematical formulation of the biSBM takes full advantage of a network’s bipartite structure to arrive at a better model. Here, we again make use of that bipartite structure to accelerate and improve our ability to fit the model, Eq. (14), to network data.
At a high level, our algorithm for model fitting consists of two key routines. The first routine is typical of SBM inference, and uses Markov chain Monte Carlo importance sampling Metropolis et al. (1953); Hastings (1970); Peixoto (2014b), followed by simulated annealing, to explore the space of partitions, conditioned on fixed community counts. In this routine, we accelerate mixing time by making use of the bipartite constraint, specifying a Markov chain only over states (partitions) with one type of node in each block. Importantly, this constraint has the added effect that we must fix both block counts, $B_{\mathrm{I}}$ and $B_{\mathrm{II}}$, separately.
The second routine of our algorithm consists of an adaptive search over the two-dimensional space of possible $(B_{\mathrm{I}}, B_{\mathrm{II}})$ pairs, using the ideas of dynamic programming Cormen et al. (2009); Erickson (2019). It attempts to move quickly through those parts of the $(B_{\mathrm{I}}, B_{\mathrm{II}})$ plane that are low probability under Eq. (14) without calling the MCMC routine, instead allocating computation time to the regions that better explain the data. The result is an effective algorithm, with two separable routines, which makes full use of the network’s bipartite structure, allowing us to either maximize or sample from the posterior Eq. (14).
One advantage of having decoupled routines in this way is that the partitioning engine is a modular component which can be swapped out for a more efficient alternative, should one be engineered or discovered. Reference implementations of two SBM partitioning algorithms, a Kernighan-Lin-inspired local search Kernighan and Lin (1970); Karrer and Newman (2011); Larremore et al. (2014) and the MCMC algorithm, are freely available as part of the bipartiteSBM library Yen ().
Alternative methods for model fitting exist. For instance, it is possible to formulate a Markov chain over the entire space of partitions whose stationary distribution is the full posterior, without conditioning on the number of groups. In such a scheme, transitions in the Markov chain can create or destroy groups Riolo et al. (2017), and the Metropolis-Hastings principles guarantee that this chain will eventually mix. However, this approach turns out to be too slow to be practical because the chain gets trapped in metastable states, extending mixing times.
Another alternative approach is to avoid our two-dimensional search over $B_{\mathrm{I}}$ and $B_{\mathrm{II}}$, and instead search over the total number of groups $B$. This is the approach of Ref. Peixoto (2013), where, after proving the existence of an optimal number of blocks $B^*$, a golden-ratio one-dimensional search is used to efficiently find it.
### IV.1 Inference routine
The task of the MCMC inference routine is to maximize Eq. (14), conditioned on fixed values of $B_{\mathrm{I}}$ and $B_{\mathrm{II}}$. Starting from an initial partition b, the MCMC algorithm explores the space of partitions with fixed $B_{\mathrm{I}}$ and $B_{\mathrm{II}}$ by proposing changes to the block memberships b, and then accepting or rejecting those moves with carefully specified probabilities. As is typical, those probabilities are chosen so that the probability that the algorithm is at any particular partition is equal to the posterior probability of that partition, given $B_{\mathrm{I}}$ and $B_{\mathrm{II}}$, by enforcing the Metropolis-Hastings criterion.
Rather than initializing the MCMC procedure from a fully random initial partition, we instead use an agglomerative initialization Peixoto (2014a) which reduces burn-in time and avoids getting trapped in metastable states that are common when group sizes are large. The agglomerative initialization amounts to putting each node in its own group and then greedily merging pairs of groups of matching types until the specified $B_{\mathrm{I}}$ and $B_{\mathrm{II}}$ groups remain.
After initialization, each step consists of proposing to move a node $i$ from its current group $r$ to a new group $s$. Following Peixoto (2017), proposal moves are generated efficiently in a two-step procedure. First, we sample a random neighbor $j$ of node $i$ and inspect its group membership $t = b_j$. Then, with probability $\epsilon B / (e_t + \epsilon B)$ we choose $s$ uniformly at random from all available groups; otherwise, we choose $s$ with probability proportional to the number of edges leading to that group from group $t$, i.e., proportional to $e_{ts}$.
A proposed move which would violate the bipartite structure by mixing node types, or which would leave group $r$ empty, is rejected with probability one. A valid proposed move is accepted with probability
$$a = \min\left\{ 1, \; \frac{p(b_i = s \to r)}{p(b_i = r \to s)} \, \exp(-\beta \Delta S) \right\} , \qquad (15)$$
where
$$p(b_i = r \to s) = \sum_t R_t^i \, \frac{e_{ts} + \epsilon}{e_t + \epsilon B} . \qquad (16)$$
Here, $R_t^i$ is the fraction of neighbors of node $i$ which belong to block $t$, and $\epsilon$ is an arbitrary parameter that enforces ergodicity. The term $\beta$ is an inverse-temperature parameter, and $\Delta S$ is the difference between the entropies of the biSBM’s microcanonical ensemble in its current state and in its proposed new state. With this in mind,
$$\Delta S = S\big|_{b_i = s} - S\big|_{b_i = r} = \ln \frac{P(A, k, e, b)}{P(A', k', e', b')} , \qquad (17)$$
where variables without primes represent the current state ($b_i = r$) and variables with primes correspond to the state being proposed ($b_i = s$).
The initialization, proposal, and evaluation steps of the algorithm above are fast. With continuous bookkeeping of the incident edges to each group, proposals can be made in time $O(1)$, and are engineered to substantially improve the mixing times since they remove an explicit dependency on the number of groups which would otherwise be present with fully random moves Peixoto (2014a). Then, when evaluating Eq. (17), we need only a number of terms proportional to $k_i$. In combination, the cost of an entire “sweep,” consisting of one proposed move for each node in the network, is $O(E)$. The overall number of steps necessary for MCMC inference is therefore $O(\tau E)$, where $\tau$ is the average mixing time of the Markov chain, independent of the number of groups.
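The two-step proposal of Eq. (16) is compact enough to sketch directly; argument names here are illustrative, and the uniform branch is restricted to groups of node $i$'s own type so that type-mixing proposals (which would be rejected anyway) are never generated.

```python
import random

def propose_group(i, b, neighbors, e, B, own_type_groups, eps=1.0):
    """Propose a new group s for node i, following Eq. (16): sample a random
    neighbor j, let t = b[j]; with probability eps*B / (e_t + eps*B) choose s
    uniformly at random, otherwise in proportion to the edge counts e[t][s]."""
    t = b[random.choice(neighbors[i])]
    e_t = sum(e[t])                                   # edges attached to group t
    if random.random() < eps * B / (e_t + eps * B):
        return random.choice(own_type_groups)         # uniform move: ergodicity
    # e[t][s] > 0 only for groups s of i's own type, since t is of the other type
    return random.choices(range(len(e)), weights=e[t])[0]
```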
Our bipartiteSBM implementation Yen () has the following default settings, chosen to stochastically maximize Eq. (14) for fixed $B_{\mathrm{I}}$ and $B_{\mathrm{II}}$ via a simulated annealing process. We first perform sweeps at an inverse temperature $\beta$ of order unity to reach equilibrated partitions. Then we perform zero-temperature ($\beta \to \infty$) sweeps, in which only moves leading to a strictly lower entropy are allowed. We keep track of the system’s entropy during this process and exit the MCMC routine when no record-breaking event is observed within a fixed window of sweeps, or when the total number of sweeps exceeds a preset maximum, whichever comes first. The partition b at the end corresponds to the lowest entropy. Equivalently stated, this partition b corresponds to the minimum description length or highest posterior probability, for fixed $B_{\mathrm{I}}$ and $B_{\mathrm{II}}$. The minimal entropy at each stage is bookmarked for future decision-making processes.
The bipartite MCMC formulation is more than just similar to its general counterpart. In fact, one can show that for fixed $B_{\mathrm{I}}$ and $B_{\mathrm{II}}$, the Markov chain transition probabilities dictated by Eq. (17) are identical for the uniform bipartite edge count prior Eq. (13) and its general equivalent introduced in Peixoto (2017). This means that the MCMC algorithm explores the same entropic landscape for both bipartite and general networks when $B_{\mathrm{I}}$ and $B_{\mathrm{II}}$ are fixed. As we will demonstrate in Sec. V, however, by combining the MCMC routine with both the novel search routine over the block counts and the more sensitive biSBM priors, we can better infer model parameters in bipartite networks.
### IV.2 Search routine
The task of the search routine is to maximize Eq. (14) over the $(B_{\mathrm{I}}, B_{\mathrm{II}})$ plane, i.e., to find the optimal number of groups. However, maximizing Eq. (14) for any fixed choice of $(B_{\mathrm{I}}, B_{\mathrm{II}})$ requires the MCMC inference introduced above, motivating the need for an efficient search. If we were to treat the network as unipartite, a one-dimensional convex optimization on the total number of groups, with a logarithmic search cost Peixoto (2013), could be used. On the other hand, exhaustively exploring the plane of possibilities would incur a search cost quadratic in the maximum number of blocks that can be detected. In fact, our experiments indicate that neither the general unipartite approach nor the naive bipartite approach is optimal. The plane search is too slow, while the line search undersamples local maxima of the landscape, which is typically multimodal. Instead, we present a recursive routine that runs much faster than exhaustive search, which parameterizes the tradeoff between search speed and search accuracy by rapidly finding the high-probability region of the $(B_{\mathrm{I}}, B_{\mathrm{II}})$ plane without too many calls to the more expensive MCMC routine.
We provide only a brief outline of the search algorithm here, supplying full details in Appendix A. The search is initialized with each node in its own block. Blocks are rapidly agglomerated until $B_{\mathrm{I}} B_{\mathrm{II}} = E$. This is the so-called resolution limit, the maximum number of communities that our algorithm can reliably find, which we discuss in detail in Sec. VI. Equation (14) will never be maximized prior to reaching this frontier. During this initial phase, we also compute the posterior probability of the trivial bipartite partition with $(B_{\mathrm{I}}, B_{\mathrm{II}}) = (1, 1)$ blocks, as a reference for the next phase.
Next, we search the region of the $(B_{\mathrm{I}}, B_{\mathrm{II}})$ plane within the resolution frontier to find a local maximum of Eq. (14) by adaptively reducing the number of communities. In this context, a local maximum is defined as an MCMC-derived partition with exactly $(B_{\mathrm{I}}, B_{\mathrm{II}})$ blocks, whose posterior probability is larger than the posterior probabilities for MCMC-derived partitions at nearby values $(B_{\mathrm{I}}', B_{\mathrm{II}}')$, for a chosen neighborhood size $\Delta$. From the initial partition at the resolution frontier, we merge blocks, selected greedily from a stochastically sampled set of proposed merges. Here, because the posterior probability is a tiny value, it is computationally more convenient to work with the model entropy $S$, which is related to the posterior probability by $S = -\ln P(A, k, e, b)$. Proposed merges are evaluated by their entropy after merging, but without calling the MCMC routine to optimize the post-merge partition. Because MCMC finds better (or no worse) fits to the data, these post-merge entropies are approximate upper bounds of the best-fit entropy, given the post-merge number of blocks. We therefore use this approximate upper bound to make the search adaptive: whenever a merge would produce an upper-bound approximation whose relative deviation from the current best entropy exceeds a threshold $\delta^*$, a full MCMC search is initialized at the current grid point. Otherwise, merges proceed rapidly since the approximate entropy is extremely cheap to compute. Throughout this process, the value of $\delta^*$ is estimated from the data to balance accuracy and efficiency, and it adaptively decreases as the search progresses (Appendix A). The algorithm exits when it finds a local minimum on the entropic landscape, returning the best overall partition explored during the search.
In practice, a typical call to the algorithm takes the form of (i) a rapid agglomerative merging phase from $B = N$ blocks to the resolution limit frontier; (ii) many agglomerative merges that rely on approximated entropies to move toward candidate local minima; and (iii) more deliberate and MCMC-reliant neighborhood searches to examine those candidate local minima. These phases are shown in Fig. 1. The algorithm’s total cost is dominated by the number of exhaustive neighborhood searches it performs, each requiring a set of MCMC fits; for most empirical networks examined, only a handful of such searches are needed. This algorithm is not guaranteed to find the global optimum, but due to the typical structure of the optimization landscape for bipartite networks, we have found it to perform well for many synthetic and empirical networks, and it tends to consistently estimate the number of groups (see Sec. VI). An implementation is available in the bipartiteSBM library Yen ().
## V Reconstruction performance
In this section, we examine our method’s ability to correctly recover the block structure in synthetic bipartite networks where known structure has been intentionally hidden. In each test, we begin by creating a bipartite network with unambiguous block structure, and then gradually mix that structure with noise until the planted blocks disappear entirely, creating a sequence of community detection problems that are increasingly challenging Moore (2017). The performance of a community detection method can then be measured by how well it recovers the known partition over this sequence of challenges.
The typical synthetic test for unipartite networks is the planted partition model Condon and Karp (2001), in which groups have assortative edges with propensity $c_{\text{in}}$ and disassortative edges with propensity $c_{\text{out}}$ between distinct groups. When the total expected degree for each group is fixed, the ratio $\epsilon = c_{\text{out}} / c_{\text{in}}$ controls the ambiguity of the planted blocks. Unambiguous assortative structure corresponds to $\epsilon = 0$, while $\epsilon = 1$ corresponds to a fully random graph. Here, we consider a straightforward translation of this model to bipartite networks in which the nodes are again divided into blocks according to a planted partition. As in the unipartite planted partition model, non-zero entries of the block affinity matrix take on one of two values, but because all edges in a bipartite network are disassortative, we avoid the labels $c_{\text{in}}$ and $c_{\text{out}}$ to prevent confusion (see insets of Fig. 2). By analogy, we let $\epsilon$ be the ratio of the two affinity values while fixing the total expected degree for each group, so that $\epsilon = 0$ corresponds to highly resolved communities which blend into noise as $\epsilon$ grows.
We present two synthetic tests using this bipartite planted partition model, designed to be easy and difficult, respectively. In the easy test, the unambiguous structure consists of $10^4$ nodes, divided evenly into $B = 20$ blocks of 500 nodes each, with a fixed mean degree. Each type-I block is matched with a type-II block so that the noise-free network consists of exactly 10 bipartite components, with zero edges placed between nodes in different components by definition. In the hard test, the unambiguous structure consists of nodes divided evenly into $B_{\mathrm{I}}$ and $B_{\mathrm{II}}$ blocks of approximately equal size, with $B_{\mathrm{I}} \neq B_{\mathrm{II}}$. The relationships between the groups in the hard test are more complex, so the insets of Fig. 2 provide schematics of the adjacency matrices of both tests under a moderate amount of noise. In both cases, node degrees were drawn from a power-law distribution, and for a fixed $\epsilon$, networks were drawn from the canonical degree-corrected stochastic blockmodel Karrer and Newman (2011); Larremore et al. (2014).
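A simplified, non-degree-corrected version of this generator is sketched below; the parameterization (Poisson edge counts between group pairs, matched diagonal pairs) is our illustrative reading of the construction above, not the exact generator used for Fig. 2.

```python
import numpy as np

def bipartite_planted_partition(n_per_group, B, mean_degree, eps, seed=None):
    """Sample a bipartite planted-partition multigraph with B matched pairs of
    type-I/type-II groups, each of n_per_group nodes. eps = 0 concentrates all
    edges within matched pairs; eps = 1 yields a fully random bipartite graph.
    Returns a list of (type-I node, type-II node) edges."""
    rng = np.random.default_rng(seed)
    N_t = B * n_per_group                    # number of nodes of each type
    E = N_t * mean_degree                    # each edge touches one node of each type
    w = np.full((B, B), eps, dtype=float)    # off-diagonal (unmatched) propensity
    np.fill_diagonal(w, 1.0)                 # matched group pairs
    w *= E / w.sum()                         # expected edge counts per group pair
    edges = []
    for r in range(B):
        for s in range(B):
            m = rng.poisson(w[r, s])         # edges between type-I group r, type-II group s
            u = r * n_per_group + rng.integers(0, n_per_group, m)
            v = N_t + s * n_per_group + rng.integers(0, n_per_group, m)
            edges += list(zip(u.tolist(), v.tolist()))
    return edges
```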
We test four methods’ abilities to recover the bipartite planted partitions, in combinations that allow us to separate the effects of using our bipartite model (Sec. III) and our bipartite search algorithm (Sec. IV), in comparison to existing methods. The first method maximizes the biSBM posterior using our 2D search algorithm. The second method keeps the 2D search algorithm, but examines the effects of the bipartite-specific edge count prior by replacing it with the general SBM’s edge count prior [i.e., replacing Eq. (13) with Eq. (22)]. The third method uses the same general SBM edge count prior as the second, but uses a 1D bisection search Peixoto (2013) to examine the effects of the 2D search. The fourth method maximizes the hierarchical SBM posterior using a 1D bisection search. For the first two cases, we use our bipartiteSBM library Yen (), while for the latter two, we use the graph-tool library Peixoto (2014c). In all cases, we enforce type-specific MCMC move proposals to avoid mixed-type groups.
In the easy test, we find that the bipartite search algorithm introduced in Sec. IV performs better than the one-dimensional searches (Fig. 2a). Because the one-dimensional search algorithm assumes that the optimization landscape is unimodal, we reasoned that other modes may emerge as $\epsilon$ increases. To test this, we generated networks within the transition region and then conducted an exhaustive survey of plausible $(B_{\mathrm{I}}, B_{\mathrm{II}})$ values using MCMC with the general SBM. This revealed two basins of attraction, located at the planted $(10, 10)$ and at the trivial bipartite $(1, 1)$, explaining the SBM’s performance. This bimodal landscape can therefore hinder search in one dimension by too quickly attracting the algorithm to the trivial bipartite partition. Perhaps surprisingly then, a similar exhaustive survey of the $(B_{\mathrm{I}}, B_{\mathrm{II}})$ plane using the bipartite model revealed that near the transition, the biSBM has a local optimum with more than the planted number of blocks.
In the hard case, we find that it is not the bipartite search that enables the biSBM to outperform the other methods, but rather the bipartite posterior (Fig. 2b). An exploration of the outputs of the general searches shows that when they fail, they tend to find an incorrect number of blocks relative to the planted partition. To understand this failure mode in more detail, we fixed $B$ and used MCMC to fit the general SBM Peixoto (2014c). This led to solutions in which $B_{\mathrm{I}} \approx B_{\mathrm{II}}$, revealing that the performance degradation, relative to the biSBM, was due to a tendency for that particular algorithmic implementation of the SBM to find more balanced numbers of groups. Interestingly, near their respective transition values of $\epsilon$, both the SBM and biSBM tend to find more groups than were planted in the hard test, thus overfitting the data. To explore this further, we again conducted exhaustive surveys of the $(B_{\mathrm{I}}, B_{\mathrm{II}})$ plane using MCMC and found that under both models, the posterior surfaces are consistently multimodal, with attractive peaks corresponding to more communities than were planted. However, only the bipartite search algorithm introduced in Sec. IV finds overfitted partitions with too many groups; the unipartite search algorithms instead return underfitted models with too few groups, balanced between the node types.
In sum, our synthetic network tests reveal two phenomena. First, the biSBM with bipartite search is able to extract structure from higher levels of noise than the alternatives, making it an attractive option for bipartite community detection with real data. However, our tests also reveal that the posterior surfaces of both the SBM and biSBM degenerate in unexpected ways near the detectability transition Decelle et al. (2011); Mossel et al. (2015); Kawamoto and Kabashima (2017b); Ricci-Tersenghi et al. (2019).
## VI Resolution Limit
Community detection algorithms exhibit a resolution limit, an upper bound on the number of blocks that can be resolved in data, even when those blocks are seemingly unambiguous. For instance, using the general SBM, only $O(\sqrt{N})$ groups can be detected Peixoto (2017), while the higher resolution of the hierarchical SBM improves this scaling to $O(N / \log N)$ Peixoto (2014a). In this section we investigate the resolution limit of the biSBM numerically and analytically.
Our numerical experiment considers a network of $\tilde{B}$ bipartite cliques of equal size, with $n$ nodes of each type per biclique and therefore $n^2$ edges per biclique. To this network, we repeatedly apply the SBM, the hSBM, and the biSBM, and record the number of blocks found each time, varying $\tilde{B}$ over a wide range. For small values of $\tilde{B}$, all three algorithms infer the planted blocks, but as the number of blocks increases, solutions which merge pairs, then quartets, and then octets become favored (Fig. 3). The hSBM continues to find the planted blocks, as expected.
The exact value of $\tilde{B}$ at which merging blocks into pairs becomes more attractive can be derived by asking when the corresponding posterior odds ratio, comparing a model with bicliques to a model with biclique pairs, exceeds one,
$$\Lambda(\tilde{B}) = \frac{P(A, k, e_{\text{biclique pairs}}, b_{\text{biclique pairs}})}{P(A, k, e_{\text{bicliques}}, b_{\text{bicliques}})} . \qquad (18)$$
When there are $n$ nodes of each type per biclique and $E = \tilde{B} n^2$ edges, $\Lambda(\tilde{B})$ exceeds 1 at a critical $\tilde{B}$ that is twice as large for the biSBM as for the SBM (Fig. 3; arrows). A similar calculation predicts the transition from biclique pairs to biclique quartets, again at a twice-larger $\tilde{B}$ for the biSBM (Fig. 3; arrows). Numerical experiments confirm these analytical predictions, but noisily, due to the stochastic search algorithms involved, and the fact that the optimization landscapes are truly multimodal, particularly near points of transition.
The posterior odds ratio calculations above can be generalized, and show that the biSBM extends the resolution transitions twice as far as the SBM for the transitions from single bicliques to pairs, from pairs to quartets, and so on, but still undergoes the same transitions eventually. Thus, both models exhibit the same resolution limit scaling $O(\sqrt{N})$, but with resolution degradations that occur at a network size $N$ for the SBM occurring at $2N$ for the biSBM. Therefore, the resolution limit of the biSBM is $\sqrt{2}$ larger than the SBM for the same number of nodes. One can alternatively retrace the analysis of Ref. Peixoto (2017), but for the biSBM applied to bicliques, to derive the same resolution improvement.
This constant-factor improvement in resolution limit may seem irrelevant, given that the major contribution of the hierarchical SBM was to change the order of the limit to $O(N / \log N)$ Peixoto (2014a). However, we find that, on the contrary, the $\sqrt{2}$ factor improvement for the biSBM expands a previously uninvestigated regime in which flat models outperform their hierarchical cousin. When given the biclique data, the hSBM finds a hierarchical division where at each level $l$, the number of groups decreases by a factor $\sigma$, except at the highest level where it finds a bipartite division. Assuming a fixed branching factor $\sigma$, the hierarchy has $L = \log_\sigma \tilde{B}$ levels, where $\tilde{B}$ is the number of bicliques at the lowest level. The hSBM’s prior for edge counts Eq. (23) can be factored into uniform distributions over multigraphs at lower levels and over an SBM at the topmost level, leading to,
$$P_{\text{lower}}(e) = \prod_{l=1}^{\log_\sigma \tilde{B}} \left(\!\!\binom{\sigma^2}{E \sigma^l / \tilde{B}}\!\!\right)^{-\tilde{B}/\sigma^l} \frac{\sigma!^{\,2\tilde{B}/\sigma^l}}{(\tilde{B}/\sigma^{l-1})!^{\,2}} \binom{\tilde{B}/\sigma^{l-1} - 1}{\tilde{B}/\sigma^{l} - 1}^{-2} ,$$
and,
$$P_{\text{topmost}}(e) = \left(\!\!\binom{\left(\!\binom{2}{2}\!\right)}{E}\!\!\right)^{-1} .$$
By comparing with the corresponding terms from the biSBM [Eq. (13)] or the corresponding equation for the SBM [Eq. (22)], we can identify regimes in which a flat model better describes network data than the nested model.
Figure 4 shows regimes in which the flat model is preferred for both the SBM and biSBM. These regimes are larger for the biSBM than the SBM, as expected, and are larger when the hierarchical branching factor decreases—indeed, if the data are less hierarchical, the hierarchical model is expected to have less of an advantage. The flat-model description is also favored when there are fewer edges and more groups, suggesting that in order for the nested model to be useful, it requires sufficient data to support its more costly nested architecture. A number of real-world networks that fall into this flat-model regime are described in the following section. We note that our definition of this regime relies on assumptions of perfect inference and a fixed branching factor at each level of the hSBM’s hierarchy. These assumptions may not always hold.
## VII Empirical networks
We now examine the application of the biSBM to a corpus of real-world networks spanning several orders of magnitude in size, across social, biological, linguistic, and technological domains. While it was typical of past studies to measure a community detection method by its ability to recapitulate known metadata labels, we acknowledge that this approach is inadvisable for a number of theoretical and practical reasons Peel et al. (2017) and instead compare the biSBM to the SBM and hSBM using Bayesian model selection.
In general, to compare one partition-model pair $(b_0, M_0)$ and an alternative pair $(b_1, M_1)$, we can compute the posterior odds ratio,
$$\Lambda = \frac{P(b_0, M_0 \mid A)}{P(b_1, M_1 \mid A)} = \frac{P(A, b_0 \mid M_0)}{P(A, b_1 \mid M_1)} \times \frac{P(M_0)}{P(M_1)} . \qquad (19)$$
Model $M_0$ is favored when $\Lambda > 1$ and model $M_1$ is favored when $\Lambda < 1$, with the magnitude of the difference from $1$ indicating the degree of confidence in model selection Jeffreys (1998). In the absence of any a priori preference for either model, $P(M_0) = P(M_1)$, meaning that the ratio of probabilities can be alternatively expressed via the difference in description lengths, $\ln \Lambda = \Sigma_1 - \Sigma_0$. [Recall that the description length for the combined model and data A can be written as the negative log of the posterior probability, as introduced in Sec. III.] In what follows, we compare the hSBM to the biSBM and without loss of generality choose $M_0$ to be whichever model is favored, so that $\Lambda \geq 1$ simply expresses the magnitude of the odds ratio. Note that by construction, the biSBM always outperforms the flat SBM.
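With equal model priors, the comparison reduces to a difference of description lengths; a two-line sketch:

```python
import math

def posterior_odds(sigma_0, sigma_1):
    """Posterior odds ratio from description lengths in nats, Eq. (19) with
    P(M0) = P(M1): ln Lambda = Sigma_1 - Sigma_0."""
    return math.exp(sigma_1 - sigma_0)

# e.g. a model that compresses the data by 12 more nats is decisively favored:
print(posterior_odds(1000.0, 1012.0))   # ~1.6e5
```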
As predicted in the previous section, the biSBM’s flat prior is better when networks are smaller and sparser, while for larger networks the hSBM generally performs better by building a hierarchy that results in a more parsimonious model (Table 1). Indeed, the majority of larger networks are better described using the hSBM (Table 1; rightmost columns), but exceptions do exist, including the ancient metabolic network Goldford et al. (2017), YouTube memberships Mislove et al. (2007), and DBpedia writer network Auer et al. (2007), which share the common feature of low density. The Robertson plant-pollinator network Robertson (), on the other hand, is neither small nor particularly sparse, and yet the biSBM is still weakly preferred over the hSBM.
Differences between models, based only on their maximum a posteriori (i.e., minimum description length) estimates, may overlook additional complexity in the models’ full posterior distributions. We repeatedly sample from the posterior distributions of the SBM, biSBM, and hSBM for networks from Table 1, showing both posterior description length distributions and inferred block count distributions (Fig. 5). Generally, all three models exhibit similar description-length variation, but due to the 2D search introduced in Sec. IV, the biSBM returns partitions with wider variation in $B_{\mathrm{I}}$ and $B_{\mathrm{II}}$. For instance, the drug trafficking network Coscia and Rios (2012), a multigraph, has a bimodal distribution of description lengths under the hSBM, while the biSBM finds plausible partitions for a wide variety of $(B_{\mathrm{I}}, B_{\mathrm{II}})$ values (Fig. 5b). On the other hand, posterior distributions for the country-language network Kunegis (2013) are all unimodal, but the biSBM finds probable states with wide variation in description length and block counts, while the hSBM samples from a small region (Fig. 5c). This can happen when the network is small, since the hSBM requires sufficiently complicated data to justify a hierarchy, while the biSBM finds a variety of lower description length partitions. In fact, viewing the same datasets through the lenses of these different models’ priors can quite clearly shift the location of posterior peaks. This is most clearly visible in the Reuters network Lewis et al. (2004), for which the models have unambiguous and non-overlapping preferred states (Fig. 5f).
Briefly, we note that model comparison is possible here due to the fact that all of the models we considered are SBMs with clearly specified posterior distributions. Broader comparisons between community detection models of entirely different classes are also possible, for which we suggest Ref. Ghasemian et al. (2019).
## VIII Discussion
This paper presented a bipartite microcanonical stochastic blockmodel (biSBM) and an algorithm to fit the model to network data. Our work is built on two foundations: the bipartite SBM Larremore et al. (2014) and a more sophisticated microstate counting approach Peixoto (2012). The model itself follows in the footsteps of Bayesian SBMs Peixoto (2014a, 2017) but with key modifications to the prior distribution and the search algorithm that more correctly account for the fact that some partitions are strictly prohibited when a network is bipartite. As a result, the biSBM is able to resolve community structure in bipartite networks better than the general SBM, as demonstrated in tests with synthetic networks (Fig. 2).
The resolution limit of the biSBM is greater than that of the general SBM by a factor of $\sqrt{2}$. We demonstrated this mathematically and in a simple biclique-finding test (Fig. 3). This analysis led us to directly compare the priors for the biSBM and the hierarchical SBM, which hinted at an unexpected regime in which the biSBM provides a better model than the hSBM. This regime, populated by smaller, sparser, and less hierarchical networks, was found in real data where model selection favored the biSBM (Table 1).
How should we understand these networks that are better described by our flat model than a hierarchical one? One possibility is that these networks are simply “flat” and so any hierarchical description simply wastes description-length bits on a model which is too complex. Another possibility is that this result can be explained not by the mathematics of the models but by the algorithms used to fit the models. In fact, our tests with synthetic networks show clear differences between models and algorithms, with the 2D search algorithm introduced here providing better fits to data than a 1D search (Fig. 2). However, this finding alone does not actually differentiate between the two possible explanations, and so we constructed the following simple test.
To probe the differences between the biSBM and hSBM as models vs differences in their model-fitting algorithms, we combined both approaches in a two-step protocol: Fit the biSBM to network data and then build an optimal hierarchical model upon that fixed biSBM base. Unless the data are completely flat, this hierarchy-building process will further reduce the description length, providing a more parsimonious model. If the hybrid h-biSBM provides a superior description length to the hSBM, our observations can be attributed to differences in model-fitting algorithms. In fact, this is precisely what we find.
Figure 6 shows repeated application of the biSBM, hSBM, and hybrid h-biSBM to the ancient metabolic network Goldford et al. (2017) and the malaria genes network Larremore et al. (2013). In the ancient metabolic network, the biSBM already outperformed the hSBM, so the hybrid model results in only marginal improvements in description length. However, doing so also creates hierarchies that are, on average, deeper than those found by the hSBM natively. In other words, we can achieve a deeper hierarchy in addition to a more parsimonious model when using the flat biSBM partition at the lowest level. This suggests that, in fact, not all of the hSBM’s underperformance can be attributed to the ancient metabolic network’s being “flat,” since a hierarchy can be constructed upon the biSBM’s inferred structure. In the malaria genes network, although the hSBM outperformed the biSBM, the hybrid model was superior to both. Since the hybrid partitions are, in principle, available to the hSBM, our conclusion is that the 2D search algorithm we presented is actually finding better partitions. Put another way, there are further opportunities to improve the depth and speed of algorithms to fit stochastic blockmodels to real-world data, particularly when bipartite or other structure in the data can be exploited.
Finally, this work shows how both models and algorithms can reflect the structural constraints of real-world network data, and how doing so improves model quality. While our work addresses only community detection for bipartite networks, generalizations of both the mathematics and search algorithms could in principle be derived for multi-partite networks in which more complicated rules exist for how node types are allowed to connect.
###### Acknowledgements.
The authors thank Tiago Peixoto, Tatsuro Kawamoto, Pan Zhang, Joshua Grochow, and Jean-Gabriel Young for stimulating discussions. DBL was supported in part by the Santa Fe Institute Omidyar Fellowship. The authors thank the BioFrontiers Institute at the University of Colorado Boulder and the Santa Fe Institute for the use of their computational facilities.
## Appendix A Recursive 2D search algorithm
In this appendix, we elaborate on the recursive search algorithm sketched in Sec. IV.2. Our overarching problem is to find the pair $(B_{\mathrm{I}}, B_{\mathrm{II}})$ that minimizes the description length. We use dynamic programming to solve this problem efficiently, observing that it has the following two properties.
(1) Optimal substructure.—If we collectively inspect the solutions that lead to local minima, then the best of those determines the global minimum.
(2) Overlapping subproblems.—To verify the existence of a local minimum, we have to compute the description length for its neighborhood points. There are many subproblems which are solved again and again, and their solutions can be stored in a table so that they need not be recomputed (a toy illustration follows below).
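The following toy sketch illustrates both properties on a synthetic entropy landscape: cached grid evaluations stand in for full MCMC fits, and the neighborhood test mirrors the local-minimum check of Eq. (20). Everything here (the landscape, the function names) is illustrative only.

```python
from functools import lru_cache

@lru_cache(maxsize=None)                 # overlapping subproblems: each grid
def toy_entropy(BI, BII):                # point is "fit" at most once
    """Stand-in for a full MCMC fit at fixed (B_I, B_II); minimum at (3, 4)."""
    return (BI - 3) ** 2 + (BII - 4) ** 2

def is_local_minimum(BI, BII, radius=1):
    """Check a grid point against its square neighborhood, as in Eq. (20)."""
    S0 = toy_entropy(BI, BII)
    for dI in range(-radius, radius + 1):
        for dJ in range(-radius, radius + 1):
            nI, nJ = BI + dI, BII + dJ
            if nI >= 1 and nJ >= 1 and toy_entropy(nI, nJ) < S0:
                return False
    return True

print(is_local_minimum(3, 4))            # True
print(is_local_minimum(5, 5))            # False: a neighbor has lower entropy
```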
Our recursive algorithm is summarized in Algms. A.1 and A.2. Due to its recursive construction, a base case (or smallest subproblem) represents a leaf node in the recursive search tree, at which we either terminate the algorithm or trace back and try a different node. The target point of the base case is accepted only if its entropy is (i) minimal over all algorithmic history and (ii) locally minimal compared with points within the neighborhood
$$\left\{ (B_{\mathrm{I}}', B_{\mathrm{II}}') : |B_{\mathrm{I}}' - B_{\mathrm{I}}| \leq \Delta, \; |B_{\mathrm{II}}' - B_{\mathrm{II}}| \leq \Delta \right\} , \qquad (20)$$
where $\Delta$ is a user-defined parameter that controls the size of the subproblem.
In Phase I, we perform MCMC at the trivial bipartite partition to create a reference description length. Then, starting from an initial state in which each node belongs to its own group, we apply an Agglomerative-Merge algorithm (also summarized in the main text) to reach a partition at the resolution frontier, where $B_{\mathrm{I}} B_{\mathrm{II}} = E$. The algorithm works as follows. In each sweep, we attempt block changes according to Eq. (16) for each block. These proposal moves are not uniformly random, but are instead based on the current block structure, treating the edge count matrix e as the adjacency matrix of a multigraph so that blocks can be thought of as nodes in this higher-level representation. Potential merges of blocks are then ranked according to increasing entropy change, and a fixed number of block merges are performed in that order in each sweep. To minimize the impact of bad merges done in the earlier steps, at the end of each sweep we apply the MCMC algorithm described in the main text at zero temperature, allowing only block changes that strictly decrease the entropy. The overall cost of this agglomerative algorithm Peixoto (2014b) is dwarfed by the MCMC calculation. Note that in Sec. IV.1, we perform the same agglomerative merge algorithm right before the MCMC inference, but merge blocks down to a specific number of groups $(B_{\mathrm{I}}, B_{\mathrm{II}})$, rather than to the threshold given by $B_{\mathrm{I}} B_{\mathrm{II}} = E$.
Phase II is the core recursive algorithm. Starting from the frontier partition, we check whether it is a local minimum in the description length landscape, where the radius of the local neighborhood is $\Delta$, as defined in Eq. (20). If the current point is indeed a local minimum, the algorithm terminates. If it is not, the algorithm finds another candidate point in the grid by calling the Rand-Merge routine, which works by proposing many ways in which pairs of blocks could be merged, and then choosing the best merge. In particular, Rand-Merge proposes, for each block $r$, a fixed number of other blocks to which $r$ could be merged, selected uniformly at random. From among those candidate merges, we choose the pair of blocks with the smallest relative entropy deviation $\delta = (\tilde{S} - S_{\min}) / S_{\min}$. Here, $S_{\min}$ is the minimum of all MCMC-calculated entropies explored globally and $\tilde{S}$ is the entropy that would result from a hypothetical merging of blocks $r$ and $s$.
At this point, the algorithm gains its efficiency from avoiding calls to the costly MCMC routine while still moving toward a local minimum in the $(B_{\mathrm{I}}, B_{\mathrm{II}})$ plane. To do so requires that we accept the merge and entropy change from Rand-Merge without pausing to re-fit the model using MCMC at the new $(B_{\mathrm{I}}, B_{\mathrm{II}})$. If this process is repeated, the entropy after accumulating merges will deviate more and more from the optimal entropy that we would obtain by re-fitting the model using MCMC at the current $(B_{\mathrm{I}}, B_{\mathrm{II}})$. We therefore balance speed and accuracy by introducing a threshold $\delta^*$ that forces a full MCMC fit only when the accumulated entropy deviation from repeated merges becomes intolerable: when a block merge does not deviate from the optimal entropy too much ($\delta \leq \delta^*$), we accept the merge and attempt the next successive merges. Otherwise, we seek to terminate the algorithm by calling Local-Minimum_Check again at the current $(B_{\mathrm{I}}, B_{\mathrm{II}})$.
The key to efficiency is that computing the approximate partition by block merges from an already optimized partition is faster than finding it from scratch. Note that when the state of the algorithm is far from a local minimum, $\delta$ is typically small and negative, meaning that a large number of merges can often be performed before a full MCMC fit is required. Thus, choosing $\delta^{*}$ is important. If we choose a large $\delta^{*}$, the algorithm can overshoot the local minimum, requiring it to only gradually rediscover that minimum by inspecting many neighboring points. On the other hand, if we choose a small $\delta^{*}$, there will be a larger number of MCMC calculations, which we also want to avoid. To this end, we determine $\delta^{*}$ from the data on the fly during the Adaptive_Search step. Namely, $\delta^{*}$ is the first outlier according to the interquartile rule,
$$\delta^{*} > c\,\mathrm{IQR}(\{\delta\}) + Q_3(\{\delta\}) \,, \tag{21}$$
where $\{\delta\}$ collects the $\delta$'s from earlier sweeps and $\mathrm{IQR}$ is the interquartile range, equal to the difference between the $75$th and $25$th percentiles ($Q_3$ and $Q_1$). However, with this choice, we may still overshoot. In such cases, we reduce $\delta^{*}$ by a constant factor and relocate our attention to the point $(K_a, K_b)$ whose entropy is minimal so far, and then call Local-Minimum_Check. During the neighborhood check, if we find an even better point nearby, we relocate the tip of the search to that point and continue with the Adaptive_Search step. The algorithm ends when a local minimum is found.
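For concreteness, the interquartile rule of Eq. (21) can be sketched as follows; the value $c = 1.5$ is the conventional outlier multiplier and is our assumption here, since the text does not fix $c$ in this excerpt.

```python
import numpy as np

def delta_threshold(past_deltas, c=1.5):
    """Sketch of Eq. (21): threshold above which a relative entropy
    deviation counts as an outlier among the deviations from earlier
    sweeps, via the interquartile rule."""
    q1, q3 = np.percentile(past_deltas, [25, 75])
    return c * (q3 - q1) + q3

# A proposed merge with deviation `delta` is accepted while
# delta <= delta_threshold(past_deltas); otherwise a full MCMC fit
# and a Local-Minimum_Check are triggered.
```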
Because of our dynamic programming approach, the time complexity of the total algorithm cannot be computed directly from the recursion, nor do we know in advance the exact number of subproblems (local searches using MCMC) that the algorithm will need to solve. Indeed, as we found for synthetic networks near the detectability limit, and for networks near transitions in the resolution limit, the $(K_a, K_b)$ optimization landscape becomes degenerate and multimodal, making a general complexity result for the algorithm hopeless.
Nevertheless, the time complexity of the search algorithm scales with the number of MCMC calculations. Heuristic arguments suggest that the number of MCMC calculations should be on the order of $(2m+1)^{2}\,\tau$, where $\tau$ is the number of times that the most expensive for-loop [line 11 of Local-Minimum_Check] is called. However, even this is an approximation, due to the fact that, at times, a local-minimum check reveals a point within the $m$-neighborhood that is better than the point currently being checked. In this way, subproblems may overlap, making the total cost somewhat cheaper. Empirically, $\tau$ is small for most networks in Table 1.
## Appendix B Prior for edge counts in the hierarchical biSBM
In this appendix, we provide the prior for edge counts in the hierarchical bipartite SBM, corresponding to Eqs. (VI) and (18). We begin with the flat SBM, whose prior for edge counts is
$$P(\{e\} \mid \{b\}) = \left(\!\!\binom{\binom{B}{2}}{E}\!\!\right)^{-1}, \tag{22}$$

where the double parentheses denote the multiset coefficient, $\left(\!\binom{n}{k}\!\right) = \binom{n+k-1}{k}$.
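In log form, this prior is cheap to evaluate. The sketch below uses the multiset-coefficient identity above and assumes, consistent with our reconstruction of Eq. (22), that the count runs over the $\binom{B}{2}$ distinct block pairs; function names are illustrative.

```python
from scipy.special import gammaln

def log_multiset(n, k):
    # ln of the multiset coefficient ((n, k)) = binomial(n + k - 1, k)
    return gammaln(n + k) - gammaln(k + 1) - gammaln(n)

def log_prior_edge_counts(B, E):
    # Sketch of Eq. (22) in log form: minus the log number of ways to
    # distribute E edges among the binomial(B, 2) block pairs.
    n_pairs = B * (B - 1) // 2
    return -log_multiset(n_pairs, E)
```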
In the hSBM, it might seem as if $P_{\mathrm{hier}}(\{e\} \mid \{b\})$ should be written as a product of SBM likelihoods by repeatedly reusing Eq. (2) at each additional level. However, at higher levels the networks are multigraphs, and the SBM likelihood does not generate multigraphs uniformly, because it is based on a uniform generation of configurations (i.e., half-edge pairings) Peixoto (2017). Therefore, the correct way to build up the product is to directly count the number of multigraphs at each higher level using the dense ensemble Peixoto (2012), in which each network instance occurs with the same probability.
Assuming that we have built $L$ higher-level models, the prior for edge counts between groups can be rewritten as
$$P_{\mathrm{hier}}(\{e\} \mid \{b\}) = \prod_{l=1}^{L} P(\{e\}_l \mid \{e\}_{l+1}, \{b\}_l)\, P(\{b\}_l) \,, \tag{23}$$
where
$$P(\{e\}_l \mid \{e\}_{l+1}, \{b\}_l) = \prod_{r<s} \left(\!\!\binom{n_r^l n_s^l}{e_{rs}^{l+1}}\!\!\right)^{-1} \prod_{r} \left(\!\!\binom{n_r^l (n_r^l+1)/2}{e_{rr}^{l+1}/2}\!\!\right)^{-1}, \tag{24}$$
and
$$P(\{b\}_l) = \frac{\prod_r n_r^{l+1}!}{B_l!} \binom{B_l - 1}{B_{l+1} - 1}^{-1} \frac{1}{B_l} \,. \tag{25}$$
At the highest level ($l = L$), the model is a single-node multigraph whose self-loops carry all $E$ edges. Because there is no further block structure, we enforce $B_L = 1$, assume that the block multigraph is generated by a uniform prior, and reuse Eq. (22).
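A log-space sketch of the level-$l$ partition prior of Eq. (25) follows; `group_sizes` stands for the sizes $n_r^{l+1}$ of the level-$(l+1)$ groups and `B_l` for the number of level-$l$ blocks, both hypothetical argument names.

```python
import numpy as np
from scipy.special import gammaln

def log_binom(n, k):
    # ln of the binomial coefficient (n choose k)
    return gammaln(n + 1) - gammaln(k + 1) - gammaln(n - k + 1)

def log_prior_partition(group_sizes, B_l):
    # Sketch of Eq. (25) in log form.
    sizes = np.asarray(group_sizes)
    B_next = len(sizes)
    return (np.sum(gammaln(sizes + 1))        # sum_r ln n_r!
            - gammaln(B_l + 1)                # - ln B_l!
            - log_binom(B_l - 1, B_next - 1)  # - ln binom(B_l-1, B_{l+1}-1)
            - np.log(B_l))                    # - ln B_l
```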
One peculiar consequence of forcing the hSBM, as implemented in graph-tool, to consider only type-specific blocks is that even when a network has no statistically justifiable structure, the hSBM finds the trivial bipartite partition and then builds a final hierarchical level on top of it. In other words, it cannot help but find a single group at the topmost level. This explains the otherwise perplexing distribution of model description lengths shown in Fig. 5a: both the SBM and the hSBM find the trivial partition, but this partition is more costly to express via the hSBM due to our having forced it to respect the network's bipartite structure. Table 2 summarizes all model likelihood and prior functions pertinent to this paper, for reference.
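For reference, a hedged sketch of fitting such a type-constrained hSBM with graph-tool is shown below. The `clabel`/`pclabel` arguments are how recent versions of the library constrain moves and partition priors to respect node types, but readers should confirm the exact signature against the graph-tool documentation for their version.

```python
import graph_tool.all as gt

# Build a toy bipartite graph: type-0 nodes {0, 1, 2} and type-1 nodes {3, 4}.
g = gt.Graph(directed=False)
g.add_vertex(5)
for u, v in [(0, 3), (1, 3), (1, 4), (2, 4)]:
    g.add_edge(u, v)
types = g.new_vertex_property("int")
types.a = [0, 0, 0, 1, 1]

# clabel forbids block moves that mix node types; pclabel makes the
# partition prior respect the same constraint (assumed API).
state = gt.minimize_nested_blockmodel_dl(
    g, state_args=dict(clabel=types, pclabel=types))
print(state.entropy())  # description length of the fitted hierarchy
```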
### References
1. J.-G. Young, F. S. Valdovinos, and M. E. J. Newman, Reconstruction of plant–pollinator networks from observational data, bioRxiv , 754077 (2019).
2. T. Squartini, A. Almog, G. Caldarelli, I. van Lelyveld, D. Garlaschelli, and G. Cimini, Enhanced capital-asset pricing model for the reconstruction of bipartite financial networks, Physical Review E 96, 032315 (2017).
3. R. Guimerà and M. Sales-Pardo, Justice Blocks and Predictability of U.S. Supreme Court Votes, PLOS ONE 6, e27188 (2011).
4. G. Ghoshal, V. Zlatić, G. Caldarelli, and M. E. J. Newman, Random hypergraphs and their applications, Physical Review E 79, 066118 (2009).
5. P. S. Chodrow, Configuration models of random hypergraphs, Journal of Complex Networks 8, cnaa018 (2020).
6. P. W. Holland, K. B. Laskey, and S. Leinhardt, Stochastic blockmodels: First steps, Social Networks 5, 109 (1983).
7. T. P. Peixoto, Bayesian stochastic blockmodeling, in Advances in Network Clustering and Blockmodeling (John Wiley & Sons, Hoboken, New Jersey, 2019) Chap. 11, pp. 289–332.
8. E. M. Airoldi, D. M. Blei, S. E. Fienberg, and E. P. Xing, Mixed membership stochastic blockmodels, Journal of Machine Learning Research 9, 1981 (2008).
9. A. Godoy-Lorite, R. Guimerà, C. Moore, and M. Sales-Pardo, Accurate and scalable social recommendation using mixed-membership stochastic block models, Proceedings of the National Academy of Sciences 113, 14207 (2016).
10. B. Karrer and M. E. J. Newman, Stochastic blockmodels and community structure in networks, Physical Review E 83, 016107 (2011).
11. M. Tarrés-Deulofeu, A. Godoy-Lorite, R. Guimerà, and M. Sales-Pardo, Tensorial and bipartite block models for link prediction in layered networks and temporal networks, Physical Review E 99, 032307 (2019).
12. T. P. Peixoto, Hierarchical Block Structures and High-Resolution Model Selection in Large Networks, Physical Review X 4, 011047 (2014a).
13. D. Hric, T. P. Peixoto, and S. Fortunato, Network Structure, Metadata, and the Prediction of Missing Nodes and Annotations, Physical Review X 6, 031038 (2016).
14. M. E. J. Newman and A. Clauset, Structure and inference in annotated networks, Nature Communications 7, 1 (2016).
15. L. Peel, D. B. Larremore, and A. Clauset, The ground truth about metadata and community detection in networks, Science Advances 3, e1602548 (2017).
16. M. E. J. Newman, Estimating network structure from unreliable measurements, Physical Review E 98, 062321 (2018a).
17. M. E. J. Newman, Network structure from rich but noisy data, Nature Physics 14, 542 (2018b).
18. T. P. Peixoto, Reconstructing Networks with Unknown and Heterogeneous Errors, Physical Review X 8, 041011 (2018).
19. J.-G. Young, G. St-Onge, P. Desrosiers, and L. J. Dubé, Universality of the stochastic block model, Physical Review E 98, 032309 (2018).
20. S. C. Olhede and P. J. Wolfe, Network histograms and universality of blockmodel approximation, Proceedings of the National Academy of Sciences 111, 14722 (2014).
21. D. B. Larremore, A. Clauset, and A. Z. Jacobs, Efficiently inferring community structure in bipartite networks, Physical Review E 90, 012805 (2014).
22. T. P. Peixoto, Nonparametric Bayesian inference of the microcanonical stochastic block model, Physical Review E 95, 012317 (2017).
23. M. A. Riolo, G. T. Cantwell, G. Reinert, and M. E. J. Newman, Efficient method for estimating the number of communities in a network, Physical Review E 96, 032310 (2017).
24. T. Kawamoto and Y. Kabashima, Cross-validation estimate of the number of clusters in a network, Scientific Reports 7, 1 (2017a).
25. T. Vallès-Català, T. P. Peixoto, M. Sales-Pardo, and R. Guimerà, Consistencies and inconsistencies between model selection and link prediction in networks, Physical Review E 97, 062316 (2018).
26. T. P. Peixoto, Parsimonious Module Inference in Large Networks, Physical Review Letters 110, 148701 (2013).
27. T. Kawamoto and Y. Kabashima, Counting the number of metastable states in the modularity landscape: Algorithmic detectability limit of greedy algorithms in community detection, Physical Review E 99, 010301 (2019).
28. J. Calatayud, R. Bernardo-Madrid, M. Neuman, A. Rojas, and M. Rosvall, Exploring the solution landscape enables more reliable network community detection, Physical Review E 100, 052308 (2019).
29. B. Bollobás, A Probabilistic Proof of an Asymptotic Formula for the Number of Labelled Regular Graphs, European Journal of Combinatorics 1, 311 (1980).
30. B. K. Fosdick, D. B. Larremore, J. Nishimura, and J. Ugander, Configuring Random Graph Models with Fixed Degree Sequences, SIAM Review 60, 315 (2018).
31. T. P. Peixoto, Entropy of stochastic blockmodel ensembles, Physical Review E 85, 056122 (2012).
32. J. Rissanen, Information and Complexity in Statistical Modeling (Information Science and Statistics) (Springer New York, 2007).
33. P. D. Grünwald, The Minimum Description Length Principle (Adaptive Computation and Machine Learning) (The MIT Press, 2007).
34. P. D. Grünwald and T. Roos, Minimum Description Length Revisited, International Journal of Mathematics for Industry, 10.1142/S2661335219300018 (2019).
35. G. Schwarz, Estimating the Dimension of a Model, The Annals of Statistics 6, 461 (1978).
36. X. Yan, C. Shalizi, J. E. Jensen, F. Krzakala, C. Moore, L. Zdeborová, P. Zhang, and Y. Zhu, Model selection for degree-corrected block models, Journal of Statistical Mechanics: Theory and Experiment 2014, P05007 (2014).
37. G. E. Andrews, The Theory of Partitions (Cambridge University Press, 1998).
38. N. Metropolis, A. W. Rosenbluth, M. N. Rosenbluth, A. H. Teller, and E. Teller, Equation of State Calculations by Fast Computing Machines, The Journal of Chemical Physics 21, 1087 (1953).
39. W. K. Hastings, Monte Carlo Sampling Methods Using Markov Chains and Their Applications, Biometrika 57, 97 (1970).
40. T. P. Peixoto, Efficient Monte Carlo and greedy heuristic for the inference of stochastic block models, Physical Review E 89, 012804 (2014b).
41. T. H. Cormen, C. E. Leiserson, R. L. Rivest, and C. Stein, Introduction to Algorithms, 3rd Edition, 3rd ed. (The MIT Press, Cambridge, Mass, 2009).
42. J. Erickson, Algorithms (Independently published, S.L., 2019).
43. B. W. Kernighan and S. Lin, An efficient heuristic procedure for partitioning graphs, The Bell System Technical Journal 49, 291 (1970).
44. T.-C. Yen, The bipartiteSBM Python library, available at https://github.com/junipertcy/bipartiteSBM (2020).
45. D. B. Larremore, A. Clauset, and C. O. Buckee, A Network Approach to Analyzing Highly Recombinant Malaria Parasite Genes, PLOS Computational Biology 9, e1003268 (2013).
46. C. Moore, The Computer Science and Physics of Community Detection: Landscapes, Phase Transitions, and Hardness, arXiv:1702.00467 (2017).
47. A. Condon and R. M. Karp, Algorithms for graph partitioning on the planted partition model, Random Structures & Algorithms 18, 116 (2001).
48. T. P. Peixoto, The graph-tool Python library, 10.6084/m9.figshare.1164194 (2014c), available at https://graph-tool.skewed.de.
49. A. Decelle, F. Krzakala, C. Moore, and L. Zdeborová, Asymptotic analysis of the stochastic block model for modular networks and its algorithmic applications, Physical Review E 84, 066106 (2011).
50. E. Mossel, J. Neeman, and A. Sly, Reconstruction and estimation in the planted partition model, Probability Theory and Related Fields 162, 431 (2015).
51. T. Kawamoto and Y. Kabashima, Detectability thresholds of general modular graphs, Physical Review E 95, 012304 (2017b).
52. F. Ricci-Tersenghi, G. Semerjian, and L. Zdeborová, Typology of phase transitions in Bayesian inference problems, Physical Review E 99, 042109 (2019).
53. A. Clauset, E. Tucker, and M. Sainz, The Colorado Index of Complex Networks, available at https://icon.colorado.edu/ (2016).
54. L. W. Jones, A. Davis, B. B. Gardner, and M. R. Gardner, Deep South: A Social Anthropological Study of Caste and Class, Southern Economic Journal 9, 159 (1942).
55. A. Joern, Feeding patterns in grasshoppers (Orthoptera: Acrididae): Factors influencing diet specialization, Oecologia 38, 325 (1979).
56. A.-M. Niekamp, L. A. G. Mercken, C. J. P. A. Hoebe, and N. H. T. M. Dukers-Muijrers, A sexual affiliation network of swingers, heterosexuals practicing risk behaviours that potentiate the spread of sexually transmitted infections: A two-mode approach, Social Networks Special Issue on Advances in Two-Mode Social Networks, 35, 223 (2013).
57. C. K. McMullen, Flower-visiting insects of the Galápagos Islands, Pan-Pacific entomologist (USA) 69, 95 (1993).
58. T. di Milano, Ordinanza di applicazione di misura coercitiva con mandato di cattura-art. (Operazione Infinito), Ufficio del giudice per le indagini preliminari (2011).
59. L. M. Gerdes, K. Ringler, and B. Autin, Assessing the Abu Sayyaf Group’s Strategic and Learning Capacities, Studies in Conflict & Terrorism 37, 267 (2014).
60. O. Rozenblatt-Rosen, R. C. Deo, M. Padi, G. Adelmant, M. A. Calderwood, T. Rolland, M. Grace, A. Dricot, M. Askenazi, M. Tavares, S. J. Pevzner, F. Abderazzaq, D. Byrdsong, A.-R. Carvunis, A. A. Chen, J. Cheng, M. Correll, M. Duarte, C. Fan, M. C. Feltkamp, S. B. Ficarro, R. Franchi, B. K. Garg, N. Gulbahce, T. Hao, A. M. Holthaus, R. James, A. Korkhin, L. Litovchick, J. C. Mar, T. R. Pak, S. Rabello, R. Rubio, Y. Shen, S. Singh, J. M. Spangle, M. Tasan, S. Wanamaker, J. T. Webber, J. Roecklein-Canfield, E. Johannsen, A.-L. Barabási, R. Beroukhim, E. Kieff, M. E. Cusick, D. E. Hill, K. Münger, J. A. Marto, J. Quackenbush, F. P. Roth, J. A. DeCaprio, and M. Vidal, Interpreting cancer genomes using systematic host network perturbations by tumour virus proteins, Nature 487, 491 (2012).
61. F. E. Clements and F. L. Long, Experimental Pollination: An Outline of the Ecology of Flowers and Insects, 336 (Carnegie Institution of Washington, 1923).
62. A. C. Murphy, S. F. Muldoon, D. Baker, A. Lastowka, B. Bennett, M. Yang, and D. S. Bassett, Structure, function, and control of the human musculoskeletal network, PLOS Biology 16, e2002811 (2018).
63. M. Coscia and V. Rios, Knowing Where and How Criminal Organizations Operate Using Web Content, in Proceedings of the 21st ACM International Conference on Information and Knowledge Management, CIKM ’12 (ACM, Maui, Hawaii, USA, 2012) pp. 1412–1421.
64. J. Kunegis, KONECT: The Koblenz Network Collection, in Proceedings of the 22Nd International Conference on World Wide Web, WWW ’13 Companion (ACM, Rio de Janeiro, Brazil, 2013) pp. 1343–1350.
65. J. C. Nacher and J.-M. Schwartz, Modularity in Protein Complex and Drug Interactions Reveals New Polypharmacological Properties, PLOS ONE 7, e30028 (2012).
66. C. Robertson, Flowers and Insects; Lists of Visitors of Four Hundred and Fifty-Three Flowers (Science Press, Lancaster, PA, USA, 1929).
67. K.-I. Goh, M. E. Cusick, D. Valle, B. Childs, M. Vidal, and A.-L. Barabási, The human disease network, Proceedings of the National Academy of Sciences 104, 8685 (2007).
68. Y.-Y. Ahn, S. E. Ahnert, J. P. Bagrow, and A.-L. Barabási, Flavor network and the principles of food pairing, Scientific Reports 1, 1 (2011).
69. M. Gerlach, T. P. Peixoto, and E. G. Altmann, A network approach to topic models, Science Advances 4, eaaq1360 (2018).
70. D. Yang, D. Zhang, Z. Yu, and Z. Yu, Fine-grained Preference-aware Location Search Leveraging Crowdsourced Digital Footprints from LBSNs, in Proceedings of the 2013 ACM International Joint Conference on Pervasive and Ubiquitous Computing, UbiComp ’13 (ACM, Zurich, Switzerland, 2013) pp. 479–488.
71. J. E. Goldford, H. Hartman, T. F. Smith, and D. Segrè, Remnants of an Ancient Metabolism without Phosphate, Cell 168, 1126 (2017).
72. R. Alberich, J. Miro-Julia, and F. Rossello, Marvel Universe looks almost like a real social network, arXiv:cond-mat/0202174 (2002).
73. D. D. Lewis, Y. Yang, T. G. Rose, and F. Li, RCV1: A New Benchmark Collection for Text Categorization Research, Journal of Machine Learning Research 5, 361 (2004).
74. A. Mislove, M. Marcon, K. P. Gummadi, P. Druschel, and B. Bhattacharjee, Measurement and Analysis of Online Social Networks, in Proceedings of the 7th ACM SIGCOMM Conference on Internet Measurement, IMC ’07 (ACM, San Diego, California, USA, 2007) pp. 29–42.
75. S. Auer, C. Bizer, G. Kobilarov, J. Lehmann, R. Cyganiak, and Z. Ives, DBpedia: A Nucleus for a Web of Open Data, in The Semantic Web, Lecture Notes in Computer Science, Vol. 4825, edited by K. Aberer, K.-S. Choi, N. Noy, D. Allemang, K.-I. Lee, L. Nixon, J. Golbeck, P. Mika, D. Maynard, R. Mizoguchi, G. Schreiber, and P. Cudré-Mauroux (Springer, Berlin, Heidelberg, 2007) pp. 722–735.
76. S. H. Jeffreys, The Theory of Probability (Oxford University Press, New York, 1998).
77. A. Ghasemian, H. Hosseinmardi, and A. Clauset, Evaluating overfit and underfit in models of network community structure, IEEE Transactions on Knowledge and Data Engineering 32, 1722 (2019).
Simply the cosine similarity is the normalised dot product between two vectors is – vectors ‘ ’! On Wikipedia, in a dataset are treated as a vector on our.... ) # manually compute cosine similarity between the two vectors is – you to with... Same without reshaping the dataset avoid division by zero a dataset are treated as a.. An angle is a measure of distance between two non-zero vectors of an inner product.. Common users or items, similarity will be 0 ( and not )... Code, notes, and you want to de-duplicate these ‘ θ ’ by – code. Join databases using the names as identifier extended memory ; it contains snippets... And all entries in the corpus normalised dot product between two vectors ‘ x ’ and ‘ y ’ using. ||Y||, the less the similarity between 2 different Bug reports be 0 ( and not -1 ) Netflix IMDB... A measure of distance between two vectors is measured in ‘ θ ’ use! Of distance between two vectors – ‘ x ’ and ‘ y ’, using cosine is. Where to apply Feature Scaling Completions and cloudless cosine similarity large datasets python this case, helps you describe the orientation two. Use case is possible when dealing with large datasets for both data manipulation and building ML models only! Allows you to work with large datasets for both data manipulation and building ML with... And keep track of their size, 1, 4 ] ) b =.... Your code editor, featuring Line-of-Code Completions and cloudless processing and the angles between each pair in these because! For these algorithms, another use case is possible when dealing with large datasets for both manipulation. Do the same without reshaping the dataset norm ( b ) / ( norm ( ).: Consider an example to find the cosine of an inner product space [,! The smaller decimals: this site uses Akismet to reduce spam – Small value avoid! For the popularity of cosine similarity is computed popularity of cosine similarity is that is! Another way you can do the same without reshaping the dataset for sparse vectors configuration ) (,. Value to avoid division by zero the normalised dot product between two vectors ‘ ’. – Small value to avoid division by zero please use ide.geeksforgeeks.org, generate and... Vectors – ‘ x ’ and ‘ y ’ vectors overlap, thus the less the similarity between two is... The orientation of two points to reduce spam case is possible when dealing cosine similarity large datasets python datasets... Vectors of an inner product space determining, how similar the data objects a... Common users or items, similarity will be 0 ( and not )., optional ) – Dimension where cosine similarity is computed smaller decimals Konstantin Kefaloukos, known... ) b = np y / ||x|| * ||y||, the less the value of,... Users or items, similarity measure configuration ) increases from 0 to 180 similarity. A vector code changes 3 ] ) # manually compute cosine similarity of this angle vectors and angles... Dot product between two documents measure the similarity between two vectors is measured in ‘ ’. Is the cosine of this angle often use cosine similarity if there are no common users or,... ||Y||, the less the value of θ, thus proving they are.... In the corpus the same without reshaping the dataset helps you describe the orientation of two points words. About cosine similarity is the output which shows that Bug # 1055525 are more similar the! The data object, in this case, helps you describe the orientation two! Please use ide.geeksforgeeks.org, generate link and share the link cosine similarity large datasets python figure 1 three. 
This is a measure of similarity between two vectors is measured in ‘ θ ’ can compute the or! 1 to -1 as the angle between two non-zero vectors of an angle is a measure of distance two... Collaborative filtering and cosine similarity is a measure of similarity between the Query and entries... Less the similarity between 2 different Bug reports the pairs – ‘ x ’ and ‘ y ’ vectors,. Thus proving they are similar a sum of 0 ’ s eps ( float, optional ) Small! Or datasets and keep track of their size is the normalised dot product between two vectors is in! Of their size account of group of words s and 1 ’ s understand how to Choose the Database! Similarity measure configuration ) the ‘ x ’ and ‘ y ’ vectors are dissimilar and cloudless processing numberator. Which shows that Bug # 1055525 are more similar than the rest of the reasons the... The Right Database for your Application the less the value of θ, the. 0°, the less the value of θ, thus proving they are similar vectors is – with dimensions features... Bit on the smaller decimals – Small value to avoid division by zero code that! Work with large datasets: compute the cosine-similarity between the two vectors np.dot ( a ) * (! Occurs when you want to merge or join databases using the names identifier... The greater the value of cos θ, the less the similarity between vectors... Editor, featuring Line-of-Code Completions and cloudless processing, 3 ] ) # compute... As Skipperkongen and share the link here, generate link and share the link here similarity measure refers distance... Merge or join databases using the names as identifier a Movie Recommendation System Netflix! Two vectors is – the rest of the angle increases from 0 to 180 very efficient evaluate! Values might differ a slight bit on the smaller decimals ’, using cosine similarity the data are! Figure 1 shows three 3-dimensional vectors and the angles between each pair ‘ x ’ and ‘ ’! Or join databases using the names as identifier of words the Right for... Link and share the link here ] ) # manually compute cosine similarity the... Of cosine similarity dot = np just a sum of 0 ’ s and 1 ’ s a. Will be 0 ( and not -1 ) text analysis, translation, and you want to de-duplicate these common... Occurs when you want, read more about cosine similarity between two documents code editor, featuring Line-of-Code Completions cloudless. And dot products on Wikipedia way you can do the same without reshaping the.! The dataset want, read more about cosine similarity between two vectors x. Vectors of an angle is a trigonometric function that, in this case, helps you describe the orientation two. There is another way you can do the same without reshaping the.! Np.Dot ( a, b ) ) analysis object, in this case, helps you describe orientation... A Movie Recommendation System based Netflix and IMDB dataset using collaborative filtering cosine.
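Building on that last point, here is a minimal chunked top-k sketch for when the full pairwise similarity matrix will not fit in memory (the function name, chunk size, and k are illustrative choices of mine, not from the original post):

import numpy as np
from sklearn.metrics.pairwise import cosine_similarity

def top_k_similar(query_vecs, corpus_vecs, k=5, chunk_size=1000):
    # Only a (chunk_size x n_corpus) block of the similarity matrix
    # is ever held in memory at once.
    results = []
    for start in range(0, query_vecs.shape[0], chunk_size):
        chunk = query_vecs[start:start + chunk_size]
        sims = cosine_similarity(chunk, corpus_vecs)
        # indices of the k most similar corpus rows for each query row
        results.append(np.argsort(-sims, axis=1)[:, :k])
    return np.vstack(results)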
|
2021-03-04 03:45:43
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6115220189094543, "perplexity": 1566.570897893547}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178368431.60/warc/CC-MAIN-20210304021339-20210304051339-00348.warc.gz"}
|
https://dsp.stackexchange.com/questions/69111/change-in-frequency-on-differentiation/69113
|
# Change in frequency on differentiation
Is it possible, even purely mathematically, for a periodic signal's period to change after differentiation?
• Including points of discontinuity or excluding them? – hops Jul 15 '20 at 6:03
• general mathematical case possible – user215805 Jul 15 '20 at 6:17
The point about discontinuity was meant to hint at the following example. Consider the (unbounded) signal that is defined by $$x(t) = \lceil{t}\rceil.$$
Its derivative is $$x'(t) = \sum_{k=-\infty}^{\infty} \delta(t-k).$$
This is periodic with period $$T=1$$ whereas $$x(t)$$ is aperiodic. Admittedly, this example is similarly impractical for most real-world applications in the same vein as the example given by V.V.T.
• Except that your $\lceil t \rceil$ can be taken as a ramp plus a sawtooth, and the sawtooth is periodic. Both this and VVT's example are then "things that can be easily decomposed into functions with periodic elements". So while, strictly speaking, it's not periodic, it's got a strong periodic character to it. – TimWescott Jul 15 '20 at 19:30
• It is true (and interesting) that this can be decomposed into those signals and one of them is periodic with the correct period, but that doesn't change the fact that the signal $x(t)$ is not periodic (and doesn't even have a well-defined Fourier transform). So, I think this example and the example proposed by V.V.T. still meet the question criteria. I tried to offer the disclaimer that in practice unbounded signals like this aren't that interesting since they don't occur in practical systems. – hops Jul 15 '20 at 20:35
• But then the more general answer is to take any periodic signal and add a ramp. Then poof! you have an aperiodic signal that differentiates into a periodic one. – TimWescott Jul 16 '20 at 0:21
• I see your point. – hops Jul 16 '20 at 0:31
• @TimWescott, hops, That's VVT's observation. An interesting fact, but the OP question starts "Is there any possible periodic signal ..." so technically, this is an aside, but I am not recommending the OP change this accepted answer. – Cedron Dawg Jul 16 '20 at 10:51
No, in a conventional sense of a "periodic signal" phrase, but, if you permit me to delve into a math subtlety, differentiation can turn an aperiodic waveform to a periodic one: $$\frac{\mathrm{d}}{\mathrm{d}t}(a\cdot t + b\cdot \cos(\omega\cdot t)) = a-b\cdot\omega\cdot\sin(\omega\cdot t)$$ Notwithstanding a dubious usefulness of this excursion.
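One can verify the derivative above symbolically; a minimal sketch using sympy (not part of the original answer):

import sympy as sp

a, b, w, t = sp.symbols('a b omega t', real=True)
x = a * t + b * sp.cos(w * t)    # aperiodic: a ramp plus a cosine
print(sp.diff(x, t))             # prints a - b*omega*sin(omega*t), which is periodic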
• Is there any example of the other way around, i.e. a periodic signal changed into an aperiodic one on differentiation? – user215805 Jul 15 '20 at 6:26
• Not dubious at all. Repeated differentiation (different ways, some smooth as well) is "detrending" on steroids, i.e. will ultimately flatten any polynomials while preserving sinusoidals. @user215805 the reverse, repeated accumulations will introduce polynomials to fit initial conditions. – Cedron Dawg Jul 15 '20 at 15:29
No, it's not possible.
Proof: let $$f : \mathbb{R} \to \mathbb{R}$$ be periodic, i.e. there exists $$T\in\mathbb{R}$$ such that $$f(t+T) = f(t) \quad\forall t\in\mathbb{R}.$$ Then it follows that the derivative $$f'$$ has the same periodicity, because $$f'(t+T) = \lim_{h\to0}\frac{f(t+T+h)-f(t+T)}h = \lim_{h\to0}\frac{f(t+h)-f(t)}h = f'(t).$$
The other direction is a bit more subtle, because as was shown by earlier answers there are functions which aren't periodic at all yet have a periodic derivative. However, this only works for unbounded functions, which can't be signal functions. (And if $$f$$ is continuous and periodic with any period, then it is also bounded.)
We can make this statement: if $$f$$ is bounded, i.e. $$f(t)\in[f_\text{min},f_\text{max}]$$, and its derivative $$f'$$ is periodic with period $$T$$, then $$f$$ is also periodic with $$T$$. By the fundamental theorem of calculus: $$f(t+T) = C + \int\limits_0^{t+T}\!\mathrm{d}\tau\: f'(\tau) = C + \int\limits_0^t\!\mathrm{d}\tau\: f'(\tau) + \int\limits_t^{t+T}\!\mathrm{d}\tau\: f'(\tau) = f(t) + \underbrace{\int\limits_t^{t+T}\!\mathrm{d}\tau\: f'(\tau)}_{=: e(t)}.$$ Lemma: $$e(t) = 0$$. Note that $$e$$ is constant, because \begin{align} e(t) =& \int\limits_t^{t+T}\!\mathrm{d}\tau\: f'(\tau) \\ =& \int\limits_t^{T\cdot\lceil{t/T}\rceil}\!\mathrm{d}\tau\: f'(\tau) + \int\limits_{T\cdot\lceil{t/T}\rceil}^{t+T}\!\mathrm{d}\tau\: f'(\tau) \\ =& \int\limits_{t+T}^{T\cdot\lceil{t/T+1}\rceil}\!\mathrm{d}\tau\: f'(\tau-T) + \int\limits_{T\cdot\lceil{t/T}\rceil}^{t+T}\!\mathrm{d}\tau\: f'(\tau) \\ =& \int\limits_{t+T}^{T\cdot\lceil{t/T+1}\rceil}\!\mathrm{d}\tau\: f'(\tau) + \int\limits_{T\cdot\lceil{t/T}\rceil}^{t+T}\!\mathrm{d}\tau\: f'(\tau) \\ =& \int\limits_{T\cdot\lceil{t/T}\rceil}^{T\cdot\lceil{t/T+1}\rceil}\!\mathrm{d}\tau\: f'(\tau) \\ =& \int\limits_0^T\!\mathrm{d}\tau\: f'(\tau + T\cdot\lceil{t/T}\rceil) \\ =& \int\limits_0^T\!\mathrm{d}\tau\: f'(\tau) =: e_0. \end{align} Now assume $$e_0$$ is nonzero, without loss of generality, positive. Then it follows that \begin{align} f\left(T\cdot\left\lceil\frac{f_\text{max}-f_\text{min}}{e_0}+1\right\rceil\right) =& f(0) + \left\lceil\frac{f_\text{max}-f_\text{min}}{e_0}+1\right\rceil\cdot e_0 \\\geq& f_\text{min} + \left(\frac{f_\text{max}-f_\text{min}}{e_0}+1\right)\cdot e_0 \\=& f_\text{max} + e_0 \\>& f_\text{max}, \end{align} which is a contradiction to the assumption that $$f(t)\leq f_\text{max}\quad\forall t\in\mathbb{R}$$.
Consequently, we must have $$e(t) = e_0 = 0$$ and therefore $$f(t+T) = f(t)$$.
• I don't care for your convention of putting the differential adjacent to the integral instead of on the outside which is the more common convention. It places a slight parsing burden on both a human and any algorithmic approach. Me, cuz I'm not used to it, algorithmically because you have to evaluate each operator and assess its operator precedence level. So I am wondering if this was an individually developed style or you picked it up somewhere, perhaps how you were taught. I'm not saying it is wrong. – Cedron Dawg Jul 17 '20 at 15:37
• @CedronDawg it's pretty standard in theoretical physics, and i like it because when you have multiple nested integrals it's immediately clear which domain belongs to which integration variable. Also the integration variable is introduced before being used, resembling lambda calculus style, i.e. something like integral (λτ ↦ f(τ)). – leftaroundabout Jul 17 '20 at 15:44
• I appreciate you sharing your rationale. T.P. isn't my stomping grounds except for maybe some popular sites, and usually the math doesn't surface there. So, I presume you learned it on the outside then adopted adjacent. It's just another endian issue. – Cedron Dawg Jul 17 '20 at 17:42
Conceptually, I think this can be seen as the defining question for understanding Fourier series.
Every well enough behaved periodic signal can be expressed as a Fourier series. Which means summing up a fundamental tone and all its harmonics, frequencies of whole number multiples of the fundamental. Each harmonic, including the first, the fundamental tone, could have zero amplitude in the mix.
The derivative of each harmonic preserves its frequency and introduces none. Since the derivative of a sum is the sum of the derivatives, there is no way to introduce new frequencies.
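As a quick numerical illustration of this answer (a sketch of mine, not part of the original; the two tones are arbitrary), differentiating a bandlimited periodic signal leaves its spectral support unchanged:

import numpy as np

fs = 1000                         # sample rate, Hz
t = np.arange(0, 1, 1 / fs)       # one second of samples
x = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 15 * t)
dx = np.gradient(x, 1 / fs)       # numerical derivative of x

freqs = np.fft.rfftfreq(len(t), 1 / fs)
for name, sig in [("x", x), ("dx", dx)]:
    mag = np.abs(np.fft.rfft(sig))
    # report bins holding more than 5% of the peak magnitude
    print(name, "has energy near", freqs[mag > 0.05 * mag.max()], "Hz")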
No.
Differentiation is a linear operator. Linear operations can't add new frequencies to a signal.
• You mean linear time invariant. It's perfectly possible to define a linear operator that does change frequencies, for instance $f \mapsto (t\mapsto f(2\cdot t))$ is easily shown to be a linear operator, but it doubles every frequency. It's just not a time-invariant operator. – leftaroundabout Jul 15 '20 at 14:18
|
2021-07-25 08:59:49
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 26, "wp-katex-eq": 0, "align": 2, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9823728203773499, "perplexity": 838.2204130557038}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046151641.83/warc/CC-MAIN-20210725080735-20210725110735-00517.warc.gz"}
|
http://newtrailers.info/how-can-i-get-rid-of-chipmunks/how-can-i-get-rid-of-chipmunks--how-can-you-get-rid-of-chipmunks-in-your-yard/
|
|
2019-06-25 21:58:07
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9675260782241821, "perplexity": 2358.42330670684}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627999948.3/warc/CC-MAIN-20190625213113-20190625235113-00494.warc.gz"}
|
https://jobbzktr.web.app/72274/92239.html
|
# The Not So Short Introduction to LaTeX 2ε

LaTeX is a typesetting system that is very suitable for producing scientific and mathematical documents of high typographical quality, and it makes it simple to write mathematical formulas. A book that serves as a LaTeX bible for all beginners is The Not So Short Introduction to LaTeX 2ε by Tobias Oetiker, Hubert Partl, Irene Hyna and Elisabeth Schlegl (www.nada.kth.se/datorer/tex/doc/latex/general/lshort.pdf). Its subtitle states the intended reading time and has grown with the book: "Or LaTeX 2ε in 87 minutes" in version 3.7 (14 April 1999), "in 141 minutes" in version 4.26, and "in 154 minutes" in version 4.31 (24 June 2010); a print edition appeared as "LaTeX in 157 minutes: The (Not So) Short Introduction to LaTeX".

At roughly 145 pages, this short introduction describes LaTeX 2ε and should be sufficient for most applications of LaTeX; readers are referred to the standard references for a complete description of the LaTeX system. The introduction is split into five or six chapters, depending on the version. Chapter 1 tells you about the basic structure of LaTeX 2ε documents and also teaches you a bit about the history of LaTeX. For those who are new to LaTeX, or beginning and intermediate users who need to expand their knowledge, it is not exactly short, but it definitely is not as long as a formal book on TeX or LaTeX.

A related note on language support: with the babel package loaded, shorthands can be used instead of the LaTeX accent macros. For example, "o can be typed instead of \"o for ö (but there might be packages not yet aware of babel's shorthands).
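As a minimal sketch of the babel shorthand mentioned above (this assumes the ngerman option, one of the babel languages that defines the "-shorthands):

\documentclass{article}
\usepackage[T1]{fontenc}
\usepackage[ngerman]{babel}
\begin{document}
Sch"on! % with the shorthand active, this is equivalent to Sch\"on
\end{document}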
|
2023-02-09 09:01:29
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.838548481464386, "perplexity": 11371.689152503299}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764501555.34/warc/CC-MAIN-20230209081052-20230209111052-00535.warc.gz"}
|
https://webapps.cs.umu.se/uminf/index.cgi?year=2010&number=3
|
# Any positive residual curve is possible for the Arnoldi method for Lyapunov matrix equations
In this paper we consider the Lyapunov equation $$AX + XA^T + bb^T = 0$$, where $$A$$ is a negative definite $$n \times n$$ matrix and $$b \in \mathbb{R}^n$$. The Arnoldi method is an iterative algorithm which can be used to compute an approximate solution. However, the convergence can be very slow, and in this paper we show how to explicitly construct a Lyapunov equation with a given residual curve. The matrix $$A$$ can be chosen as symmetric negative definite, and it is possible to arbitrarily specify the elements on the diagonal of the Cholesky factor of $$-A$$. If the symmetry is dropped, then it is possible to arbitrarily specify $$A + A^T$$ while retaining the residual curve.
### Keywords
Lyapunov matrix equations, Arnoldi method, Krylov subspace methods
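For context, here is a minimal sketch of the Arnoldi (Krylov subspace) approximation for this equation; this is an illustrative implementation of mine, not code from the report, and it assumes scipy for the small projected solve:

import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def arnoldi_lyapunov(A, b, m):
    # Build an orthonormal basis V of the Krylov space K_m(A, b) with the
    # Arnoldi process, then solve the small projected Lyapunov equation.
    n = b.size
    beta = np.linalg.norm(b)
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    V[:, 0] = b / beta
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):                 # modified Gram-Schmidt
            H[i, j] = V[:, i] @ w
            w = w - H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-12:                # breakdown: invariant subspace found
            m = j + 1
            break
        V[:, j + 1] = w / H[j + 1, j]
    Hm = H[:m, :m]
    e1 = np.zeros(m)
    e1[0] = beta
    # Galerkin condition: Hm Y + Y Hm^T + beta^2 e1 e1^T = 0
    Y = solve_continuous_lyapunov(Hm, -np.outer(e1, e1))
    return V[:, :m], Y                         # X is approximated by V Y V^T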
### Authors
Entry responsible: Carl Christian Kjelgaard Mikkelsen
|
2021-11-28 12:25:27
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8822248578071594, "perplexity": 529.9980848026999}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964358520.50/warc/CC-MAIN-20211128103924-20211128133924-00234.warc.gz"}
|
https://bioinformatics.stackexchange.com/questions/4364/can-i-run-star-without-an-annotation-file
|
# Can I run STAR without an annotation file?
I wish to use Rascaf to scaffold a fragmented draft genome.
For this, I need to provide a BAM file of aligned RNA-seq reads and the draft genome.
So, I indexed the draft genome with STAR like this:
STAR --runMode genomeGenerate --genomeDir output/index --genomeFastaFiles draft.fa
Then, I tried to align the reads like this:
STAR --genomeDir output/index --readFilesIn reads.fastq.gz --readFilesCommand gunzip -c --outFileNamePrefix output/alignment --quantMode GeneCounts --outSAMunmapped None --outSAMtype BAM SortedByCoordinate --outSAMattrRGline ID:RG1 CN:yy "DS: z z z" SM:sample
The latter command outputs the following error:
Transcriptome.cpp:51:Transcriptome: exiting because of *INPUT FILE* error: could not open input file exonGeTrInfo.tab
Solution: check that the file exists and you have read permission for this file
SOLUTION: utilize --sjdbGTFfile /path/to/annotantions.gtf option at the genome generation step or mapping step
It suggests I should use a GTF file. But this is a draft genome of an individual for which no GTF is available. I could try to adapt a GTF from a close relative but for the scaffolding, it seems superfluous. Is there a way I can map with STAR without the GTF?
These commands were run on a GNU/Linux machine.
I don't think you can use the --quantMode GeneCounts option with no annotations. I think the error is trying to look for an exon file generated from the annotations to do the quantitation on. Remove that and I think it should work, as the manual specifically states that annotations are optional but highly recommended.
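That is, it may work to keep everything else and simply drop the --quantMode GeneCounts flag, e.g.:

STAR --genomeDir output/index --readFilesIn reads.fastq.gz --readFilesCommand gunzip -c --outFileNamePrefix output/alignment --outSAMunmapped None --outSAMtype BAM SortedByCoordinate --outSAMattrRGline ID:RG1 CN:yy "DS: z z z" SM:sample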
|
2021-09-21 06:06:19
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6519255638122559, "perplexity": 8762.997874542667}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057158.19/warc/CC-MAIN-20210921041059-20210921071059-00529.warc.gz"}
|
https://engineering.stackexchange.com/questions/47290/how-can-i-size-a-sump-area-to-act-as-an-energy-dissipator
|
# How can I size a sump area to act as an energy dissipator?
I have a proposed stormwater design which will necessitate some very large diameter pipe installed near the surface to minimize constructability issues. However, near the end of the run, it will be necessary to install a structure to allow the stormwater to drop its elevation more than 7', which I'm concerned will risk erosive forces scouring out the bottom due to the fall. Based on the standard kinematics equation v = a*t (with a = g for free fall), the vertical velocity is in the range of 21.4 feet/second upon impact with the bottom of the structure.
Generally, I'd like velocities for pipe flows to be less than 15 feet/second to prevent scouring and 10 preferable (though not always feasible). This scenario where water is falling is a little different, but given the quantities being on the order of approximately 170 CFS for the 25-year design storm, I don't want to neglect the kind of erosive potential this could have on the structure's bottom.
To mitigate this issue, I'd like to propose a sump at the bottom of the structure to help dissipate some of the energy. However, I can't figure out how deep to spec this sump area.
I've reviewed my state's Soil Erosion Control Manual, but it doesn't really specify measures for this kind of circumstance. Is there an alternative equation or source I can use to size the sump area for something like this?
Sketch for clarification:
• If someone could create tags for 'scour' and 'erosion', I think they'd also be suitable for this question. Sep 20 at 20:32
• Have you proposed this to the senior engineer? What was the response? Sep 20 at 20:41
• can you sketch a profile? Why not simply have a vertical drop into the sump?
– mart
Sep 20 at 20:53
• @SolarMike I have, but the unique situation of having it be within a storm structure is creating a unique situation that my colleagues haven't been able to come up with a comparable situation that they've dealt with. Sep 20 at 21:13
• @mart I made some edits to clarify my intent and added a sketch to better illustrate. Sep 20 at 21:25
I suspect the sump will fill up and become ineffective once the water level equalises with the flow.
Usually, one uses an open-channel chute with a stepped ending to break the stream of water; by creating turbulence and hydraulic jumps, it dissipates the energy of the flow passively while turning most of the energy into foaming flow.
The profile is designed to accommodate the variable volume of the flow.
(Figure: stepped open-channel chute; image from Wikipedia.)
# Edit
Reviewing the comments of OP, one can offer this alternative view.
If we need to change the velocity of the flow we can use the flow continuity. First, we check the speed of the flow exiting the pipe using Manning's equation for gravity fall in a sloped pipe,
$$v = (k_n / n) R_h^{2/3} S^{1/2}$$
• $$v$$ = cross-sectional mean velocity (ft/s, m/s)
• $$k_n$$ = 1.486 for English units and $$k_n$$ = 1.0 for SI units
• $$n$$ = Manning coefficient of roughness, ranging from 0.01 (a clean and smooth channel) to 0.06 (a channel with stones and debris, 1/3 vegetation)
• $$R_h$$ = hydraulic radius (ft, m)
• $$S$$ = slope, or gradient, of the pipe (ft/ft, m/m)
Hydraulic radius can be expressed as
$$R_h = A / P_w$$
where
• $$A$$ = cross-sectional area of flow (ft², m²)
• $$P_w$$ = wetted perimeter (ft, m)
Then we add the extra speed due to free fall:
$$v_{fall}=\sqrt{2gh}$$
Now we can plug in the mass continuity eq.
$$\dot m_{inlet}=\dot m_{outlet} \rightarrow A_1v_1= A_2v_2$$
$$A_2=A_1(v_1/v_2)$$
So in order to get a lower outlet speed, we need to make the downstream channel wider. And there is no need for a sump reservoir: the fact that the downstream channel is wider will cause a hydraulic jump at the bottom of the fall and take the needed energy out of the flow to make it go slower.
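To put rough numbers on this recipe, a quick sketch (the outlet velocity and the 10 ft/s target are placeholders of mine; only Q and the drop height come from the question):

import math

g = 32.2          # ft/s^2
Q = 170.0         # 25-year design flow from the question, cfs
h = 7.0           # vertical drop from the question, ft
v1 = 10.0         # assumed Manning velocity at the pipe outlet, ft/s

v_fall = math.sqrt(2 * g * h)   # extra speed gained in free fall
v2 = v1 + v_fall                # speed at the bottom, per the recipe above
A2 = Q / 10.0                   # area needed downstream for a 10 ft/s target
print(f"v_fall = {v_fall:.1f} ft/s, v2 = {v2:.1f} ft/s, A2 = {A2:.0f} ft^2")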
• Why would the sump filling up render it ineffective? I need it to be full most of the time in order to dissipate the energy of the falling water so that it doesn't erode the bottom of the structure. Sep 20 at 22:29
• If you have mix of solids, like sand and dirt washed down they are going to require regular cleaning. depending on the climate they may get clogged and damaged under icing and freezing. the sump is not a fault proof system whil a stepped open channel or even rocky apron is. Sep 21 at 0:06
• Annual cleaning of inlets is required for stormwater structures in this system. Additionally, even if the sump were to fill with sediment then the erosion wouldn't be happening to the bottom of the inlet, it would be happening to the sedimentation which is acceptable in this instance. Sep 21 at 3:12
• Also, this answer is indicative of a solution that's applicable for an actual open channel. As I indicated in the question, this is proposed within a stormwater structure experiencing a vertical drop. It's open channel flow in the sense that the flow is not pressurized. Sep 21 at 3:17
• The horizontal velocity I've calculated is already based upon the Manning's equation, which is based upon the design storm which will occur infrequently at best (worst?). But that's not the issue I'm having. The issue I'm having is the velocity of falling water, which I determined based upon the original physics equations v=gt and d=1/2*at^2. The force of this falling water will occur with a great deal of regularity more than the design storm. Sep 21 at 13:29
One way to prevent or minimize erosion by the thrust of the falling water is to line the bottom of the plunge pool with rocks. The linked article provides some useful information.
It might also be helpful to modify the configuration at the transition from the plunge pool to the downstream channel to create a small hydraulic jump, which exhausts some energy from the rapid flow caused by the thrust.
|
2021-10-24 09:42:59
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 7, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.44949954748153687, "perplexity": 1179.24254462453}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585916.29/warc/CC-MAIN-20211024081003-20211024111003-00224.warc.gz"}
|
https://socratic.org/questions/what-volume-is-occupied-by-0-963-mol-of-co-2-at-300-9-k-and-913-mmhg
|
# What volume is occupied by 0.963 mol of CO_2 at 300.9 K and 913 mmHg?
Aug 11, 2016
$V = \frac{nRT}{P} \cong 20\ L$
The only difficulty in solving this problem is getting the appropriate units. The quoted units of pressure are unfortunate. We know that $1\ atm \equiv 760\ mm\ Hg$; i.e. that $1\ atm$ will support a column of mercury $760\ mm$ high. Thus $P = \frac{913\ mm\ Hg}{760\ mm\ Hg \cdot atm^{-1}} = 1.20\ atm$ (and this measurement should have been quoted in the problem). If you measured pressures with such a mercury column you would end up getting mercury all over the laboratory, so whoever set this question was ignorant, and deserves a bollocking.
$V = \frac{0.963\ mol \times 0.0821\ L \cdot atm \cdot K^{-1} \cdot mol^{-1} \times 300.9\ K}{1.20\ atm} = ??\ L$
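Carrying the arithmetic through (the final value is left as ?? above, but it is implied by the $\cong 20\ L$ quoted at the start):

$V = \frac{0.963 \times 0.0821 \times 300.9}{1.20}\ L \approx 19.8\ L \cong 20\ L$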
|
2019-04-19 06:20:40
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 16, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8748282194137573, "perplexity": 737.6000915700488}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578527148.46/warc/CC-MAIN-20190419061412-20190419083412-00551.warc.gz"}
|
https://nforum.ncatlab.org/discussion/11066/dominic-verity/
|
• CommentRowNumber1.
• CommentAuthorTim_Porter
• CommentTimeMar 31st 2020
• CommentRowNumber2.
• CommentAuthorDavidRoberts
• CommentTimeOct 8th 2021
• CommentRowNumber3.
• CommentAuthorUrs
• CommentTimeOct 8th 2021
It seems sad to state nothing but employment status of a person, which is the least of interest to us here, unless you happen to be the one paying them. If you have the energy to edit, why not add a word on the person as a researcher.
• CommentRowNumber4.
• CommentAuthorTim_Porter
• CommentTimeOct 8th 2021
I have added a little summary of some of his work. It could include more but I have tried to link to the fuller entries rather than put a lot here.
• CommentRowNumber5.
• CommentAuthorUrs
• CommentTimeOct 8th 2021
Thanks! I have replaced your “infinity category theory” by “omega-category theory”, but then slightly expanded the following sentence like so:
More recently he has worked with Emily Riehl on foundations of $\infty$-category theory seen through their homotopy 2-category, and using the concept of ∞-cosmoi to capture common structure of different presentations of $\infty$-categories.
• CommentRowNumber6.
• CommentAuthorvarkor
• CommentTimeOct 8th 2021
Replaced “$\infty$-category theory” with “$(\infty, 1)$-category theory”; it does not seem appropriate to abbreviate here, as it will surely lead to confusion. (More generally, I think one should prefer to avoid “omega-category”: I believe Tom Leinster had a pertinent comment on the n-Category Café on this topic, but I can’t find it now.)
• CommentRowNumber7.
• CommentAuthorUrs
• CommentTimeOct 8th 2021
Replaced “$\infty$-category theory” with “$(\infty, 1)$-category theory”; it does not seem appropriate to abbreviate here
True, thanks.
one should prefer to avoid “omega-category”
But that’s how these authors, following the John Roberts that is mentioned in the sentence, tended to refer to their own subject.
• CommentRowNumber8.
• CommentAuthorjweinberger
• CommentTimeOct 8th 2021
• (edited Oct 8th 2021)
Replaced “$\infty$-category theory” with “$(\infty, 1)$-category theory”; it does not seem appropriate to abbreviate here, as it will surely lead to confusion.
The theory is not just about $(\infty, 1)$-categories though. Rather, $\infty$-cosmoses are formed by models for $(\infty, n)$-categories for all $0 \le n \le \infty$, which is part of what motivates this program today. So claiming the theory is about foundations of $(\infty, 1)$-category theory specifically seems like a reduction to me. (Even though this is certainly an important specialization.) If one wants to avoid notational confusion, why not write e.g. “(weak) higher categories”? (To say more about the terminology initially used, in this specific context, “$\infty$-category” is just the name for the objects of a given $\infty$-cosmos, which could in the specific instances mean [certain models of] $(\infty, 1)$-categories, cartesian fibrations thereof, $(\infty, 2)$-categories, $\infty$-groupoids, or simply $1$-categories…)
• CommentRowNumber9.
• CommentAuthorUrs
• CommentTimeOct 8th 2021
There is no claim here – we just added a kind note briefly indicating a colleague’s research.
Be our guest, hit “edit” and expand to your heart’s content! But it sounds like your comment would best fit into the Idea-section at infinity-cosmos.
• CommentRowNumber10.
• CommentAuthorTim_Porter
• CommentTimeOct 8th 2021
|
2021-11-29 21:52:47
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 19, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6742212772369385, "perplexity": 3311.3384804342277}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964358842.4/warc/CC-MAIN-20211129194957-20211129224957-00497.warc.gz"}
|
https://www.statisticssolutions.com/monte-carlo-methods/
|
# Monte Carlo Methods
Monte Carlo methods are estimation procedures that use statistical sampling experiments to obtain approximate solutions to a variety of mathematical problems. Rather than a single algorithm, the term covers a family of procedures that share the same basic operation: a deterministic model is evaluated repeatedly at randomly generated inputs, and probability theory is used to turn the sampled results into an answer to the problem.
The defining characteristic of Monte Carlo methods is the use of randomly generated numbers in their simulations.

A Monte Carlo method yields an approximate answer, so the analysis of such a method centres on estimating the approximation error; this error estimate is the main tool for judging the quality of the answer obtained. For a simple estimator built from N independent samples, the statistical error typically shrinks like 1/√N.

The attainable accuracy differs between Monte Carlo methods and depends on the nature of the question or problem being addressed. The most important use of Monte Carlo methods is the evaluation of difficult integrals; they are applied in particular to multi-dimensional integrals, where a reasonable approximation is required and exact evaluation is impractical.

The crude Monte Carlo method is the simplest example. To compute the integral of a function f(x) between the limits a and b, the researcher draws N random samples s uniformly from the interval [a, b], evaluates f(s) at each sample, sums these values and divides by N to obtain the sample mean, and finally multiplies that mean by the interval length (b − a) to estimate the integral. (Note that the length of the integration range need not equal the size of the sample N.)
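As a minimal sketch of the crude method just described (the function and variable names are illustrative):

import random

def crude_monte_carlo(f, a, b, n=100_000):
    # Estimate the integral of f over [a, b] as (b - a) times the mean
    # of f evaluated at n uniformly distributed random samples.
    total = sum(f(random.uniform(a, b)) for _ in range(n))
    return (b - a) * total / n

# example: the integral of x^2 over [0, 1] is 1/3
print(crude_monte_carlo(lambda x: x * x, 0.0, 1.0))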
Monte Carlo methods are widely used in disciplines such as physics and chemistry, where they simulate the complex reactions and interactions studied in those fields.

Two practical reasons to use Monte Carlo methods are their smoothing property, which helps when they are applied to complex problems, and their speed: they approximate an answer much more quickly than an exact answer can generally be determined, which is often very time consuming.

Monte Carlo methods are also used extensively in the field of computer vision, where they play the role of object trackers.
|
2022-07-01 19:52:47
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8298891186714172, "perplexity": 358.85982144380716}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103945490.54/warc/CC-MAIN-20220701185955-20220701215955-00527.warc.gz"}
|
https://128.84.21.199/list/cs.LO/recent
|
# Logic in Computer Science
## Authors and titles for recent submissions
### Mon, 19 Feb 2018
[1]
Title: Decidability for Entailments of Symbolic Heaps with Arrays
Comments: A submission for the postproceedings of the Continuity, Computability, Constructivity 2017
Subjects: Logic in Computer Science (cs.LO)
[2]
Title: Proceedings Fourth International Workshop on Rewriting Techniques for Program Transformations and Evaluation
Authors: Horatiu Cirstea (LORIA, Université de Lorraine, France), David Sabel (Goethe-University Frankfurt am Main, Germany)
Journal-ref: EPTCS 265, 2018
Subjects: Logic in Computer Science (cs.LO); Programming Languages (cs.PL)
### Fri, 16 Feb 2018
[3]
Title: Models of Type Theory Based on Moore Paths
Comments: This is a revised and expanded version of a paper with the same name that appeared in the proceedings of the 2nd International Conference on Formal Structures for Computation and Deduction (FSCD 2017)
Subjects: Logic in Computer Science (cs.LO)
[4]
Title: Model Generation for Quantified Formulas: A Taint-Based Approach
Authors: Benjamin Farinier (1), Sébastien Bardin (1), Richard Bonichon (1), Marie-Laure Potet (2) ((1) LSL, (2) VERIMAG - IMAG)
Subjects: Logic in Computer Science (cs.LO)
[5]
Title: Fine-Grained Complexity of Safety Verification
Subjects: Logic in Computer Science (cs.LO); Distributed, Parallel, and Cluster Computing (cs.DC); Data Structures and Algorithms (cs.DS)
[6]
Title: Non-idempotent types for classical calculi in natural deduction style
Subjects: Logic in Computer Science (cs.LO)
### Thu, 15 Feb 2018
[7]
Title: Circular (Yet Sound) Proofs
Subjects: Logic in Computer Science (cs.LO)
[8]
Title: SAT solving techniques: a bibliography
Authors: Louis Abraham
Subjects: Logic in Computer Science (cs.LO)
[9]
Title: On completeness and parametricity in the realizability semantics of System F
Authors: Paolo Pistone
Subjects: Logic in Computer Science (cs.LO); Logic (math.LO)
[10]
Title: Craig Interpolation and Access Interpolation with Clausal First-Order Tableaux
Subjects: Logic in Computer Science (cs.LO)
[11]
Title: Abstract Family-based Model Checking using Modal Featured Transition Systems: Preservation of CTL* (Extended Version)
Subjects: Logic in Computer Science (cs.LO); Programming Languages (cs.PL); Software Engineering (cs.SE)
[12]
Title: A sound and complete definition of linearizability on weak memory models
Subjects: Logic in Computer Science (cs.LO)
### Wed, 14 Feb 2018
[13]
Title: Query learning of derived $\omega$-tree languages in polynomial time
Subjects: Logic in Computer Science (cs.LO)
[14]
Title: Proof systems: from nestings to sequents and back
Authors: Elaine Pimentel
Comments: Extended version of the paper submitted to IJCAR-18
Subjects: Logic in Computer Science (cs.LO)
[15]
Title: A Concurrent Constraint Programming Interpretation of Access Permissions
Comments: This paper is under consideration for publication in Theory and Practice of Logic Programming (TPLP)
Subjects: Logic in Computer Science (cs.LO)
[16]
Title: The principle of point-free continuity
Subjects: Logic in Computer Science (cs.LO); Logic (math.LO)
[17]
Title: A wide-spectrum language for verification of programs on weak memory models
Subjects: Programming Languages (cs.PL); Logic in Computer Science (cs.LO)
[18]
Title: Higher Groups in Homotopy Type Theory
Subjects: Logic in Computer Science (cs.LO); Algebraic Topology (math.AT); Logic (math.LO)
[19] arXiv:1802.04698 (cross-list from cs.DM)
Title: Subtyping for Hierarchical, Reconfigurable Petri Nets
Subjects: Discrete Mathematics (cs.DM); Logic in Computer Science (cs.LO)
[20] arXiv:1802.04428 (cross-list from cs.PL)
Title: Reconciling Enumerative and Symbolic Search in Syntax-Guided Synthesis
Subjects: Programming Languages (cs.PL); Logic in Computer Science (cs.LO)
### Tue, 13 Feb 2018
[21]
Title: Alternating Nonzero Automata
Subjects: Logic in Computer Science (cs.LO); Formal Languages and Automata Theory (cs.FL)
[22]
Title: Symmetries of Quantified Boolean Formulas
Subjects: Logic in Computer Science (cs.LO); Symbolic Computation (cs.SC)
[23]
Title: New Models for Generating Hard Random Boolean Formulas and Disjunctive Logic Programs
https://gmatclub.com/forum/if-k-m-and-t-are-positive-integers-and-k-6-m-4-t-12-do-t-and-127989.html
If k, m, and t are positive integers and k/6 + m/4 = t/12, do t and 12
Manager
Joined: 16 Feb 2012
Posts: 142
Concentration: Finance, Economics
If k, m, and t are positive integers and k/6 + m/4 = t/12, do t and 12 have a common factor greater than 1?
(1) k is a multiple of 3.
(2) m is a multiple of 3.
In the explanation of this question they say that the sum of two multiples of 3 gives a number that is also a multiple of 3.
Is that a general rule for any number? If someone can elaborate I would be grateful!
Originally posted by Stiv on 23 Feb 2012, 01:28.
Last edited by Bunuel on 05 Feb 2019, 02:31, edited 1 time in total.
Math Expert
Joined: 02 Sep 2009
Posts: 64216
Re: If k, m, and t are positive integers and k/6 + m/4 = t/12, do t and 12
23 Feb 2012, 01:40
Stiv wrote:
If k, m, and t are positive integers and $$\frac {k}{6} + \frac {m}{4} = \frac {t}{12}$$ , do t and 12 have a common factor greater than 1?
(1) k is a multiple of 3.
(2) m is a multiple of 3.
In the explanation of this question they say that the sum of two multiples of 3 gives a number that is also a multiple of 3.
Is that a general rule for any number? If someone can elaborate I would be grateful!
If k, m, and t are positive integers and $$\frac{k}{6} + \frac{m}{4} = \frac{t}{12}$$, do t and 12 have a common factor greater than 1 ?
$$\frac{k}{6} + \frac{m}{4} = \frac{t}{12}$$ --> $$2k+3m=t$$.
(1) k is a multiple of 3 --> $$k=3x$$, where $$x$$ is a positive integer --> $$2k+3m=6x+3m=3(2x+m)=t$$ --> $$t$$ is multiple of 3, hence $$t$$ and 12 have a common factor of 3>1. Sufficient.
(2) m is a multiple of 3 --> $$m=3y$$, where $$y$$ is a positive integer --> $$2k+3m=2k+9y=t$$ --> $$t$$ and 12 may or may not have a common factor greater than 1. Not sufficient.
If integers $$a$$ and $$b$$ are both multiples of some integer $$k>1$$ (divisible by $$k$$), then their sum and difference will also be a multiple of $$k$$ (divisible by $$k$$):
Example: $$a=6$$ and $$b=9$$, both divisible by 3 ---> $$a+b=15$$ and $$a-b=-3$$, again both divisible by 3.
If out of integers $$a$$ and $$b$$ one is a multiple of some integer $$k>1$$ and another is not, then their sum and difference will NOT be a multiple of $$k$$ (divisible by $$k$$):
Example: $$a=6$$, divisible by 3 and $$b=5$$, not divisible by 3 ---> $$a+b=11$$ and $$a-b=1$$, neither is divisible by 3.
If integers $$a$$ and $$b$$ both are NOT multiples of some integer $$k>1$$ (i.e., neither is divisible by $$k$$), then their sum and difference may or may not be a multiple of $$k$$ (divisible by $$k$$):
Example: $$a=5$$ and $$b=4$$, neither is divisible by 3 ---> $$a+b=9$$, is divisible by 3 and $$a-b=1$$, is not divisible by 3;
OR: $$a=6$$ and $$b=3$$, neither is divisible by 5 ---> $$a+b=9$$ and $$a-b=3$$, neither is divisible by 5;
OR: $$a=2$$ and $$b=2$$, neither is divisible by 4 ---> $$a+b=4$$ and $$a-b=0$$, both are divisible by 4.
Hope it's clear.
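These rules are easy to spot-check numerically. The short Python sketch below (an illustrative addition, not part of the original post) verifies the first two rules for d = 3 over a small range:

```python
from itertools import product

d = 3
for a, b in product(range(1, 100), repeat=2):
    if a % d == 0 and b % d == 0:
        # multiple + multiple: the sum and difference are always multiples of d
        assert (a + b) % d == 0 and (a - b) % d == 0
    if (a % d == 0) != (b % d == 0):
        # multiple + non-multiple: the sum and difference are never multiples of d
        assert (a + b) % d != 0 and (a - b) % d != 0
print("Both rules hold for every pair tested.")
```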
General Discussion
SVP
Joined: 24 Jul 2011
Posts: 2052
GMAT 1: 780 Q51 V48
GRE 1: Q800 V740
If k, m, and t are positive integers and k/6 + m/4 = t/12, do t and 12
26 Jun 2013, 00:39
By simplifying the equation we get 2k + 3m = t
Does this mean that 2 and/or 3 are factors of t? Not necessarily!
Consider k=1 and m=1 => t=5. Does t have 2 and 3 as factors? No!
Alternately, consider k=3 and m=2 => t=12. In this case, t does have both 2 and 3 as factors.
Point to note:
If a positive integer is the sum of multiples of other positive integers, it need not be a multiple of either of those integers!
Carrying on with this question,
Using statement 1: If k is a multiple of 3, then the equation can be written as
2k + 3m = t
=> 2*3n + 3m = t (where n is a positive integer)
=> 3 (2n +m) = t
=> 3 is a factor of t
=> t and 12 have a common factor greater than 1 (i.e. 3)
SUFFICIENT.
Consider statement 2: If m is a multiple of 3, we can write the equation as
2k + 3m = t
=> 2k + 3*3n = t (where n is a positive integer)
=> 2k + 9n = t
If we take n=1 and k=3, we get t=15, which has 3 as a common factor greater than 1 with 12
If we take k=1 and n=1, we get t=11, which has no common factor greater than 1 with 12
Therefore statement 2 alone is insufficient.
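A brute-force check over small values (an illustrative sketch, not part of the original post) confirms this analysis: statement 1 forces gcd(t, 12) > 1 in every case, while statement 2 allows both outcomes.

```python
from math import gcd

stmt1_outcomes = set()  # is gcd(t, 12) > 1, given k is a multiple of 3?
stmt2_outcomes = set()  # is gcd(t, 12) > 1, given m is a multiple of 3?

for k in range(1, 50):
    for m in range(1, 50):
        t = 2 * k + 3 * m            # from k/6 + m/4 = t/12
        if k % 3 == 0:
            stmt1_outcomes.add(gcd(t, 12) > 1)
        if m % 3 == 0:
            stmt2_outcomes.add(gcd(t, 12) > 1)

print(stmt1_outcomes)  # {True}         -> statement 1 is sufficient
print(stmt2_outcomes)  # {False, True}  -> statement 2 is not sufficient
```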
Director
Joined: 25 Apr 2012
Posts: 636
Location: India
GPA: 3.21
Re: If k, m, and t are positive integers and k/6 + m/4 = t/12, do t and 12
26 Jun 2013, 01:09
Stiv wrote:
If k, m, and t are positive integers and k/6 + m/4 = t/12, do t and 12 have a common factor greater than 1?
(1) k is a multiple of 3.
(2) m is a multiple of 3.
In the explanation of this question they say that the sum of two multiples of 3 give the number that is also a multiple of 3.
Is that a general rule for any number? If someone can elaborate I would be grateful!
We can solve the given expression and get the following
(2k + 3m)/12 = t/12 ------> this implies t = 2k + 3m
From statement 1 we have k is a multiple of 3, so the above equation is of the form t = 2*3*a + 3m, i.e. t = 6a + 3m, where a is a positive integer (since k is a positive integer, "a" cannot be zero)
thus t = 3(2a + m)
If a = 1, m = 1 then t = 9; and 9 and 12 have 3 as a common factor other than 1
Similarly, if a = 2, m = 1 we have t = 15, and both 15 and 12 have 3 as a common factor
Since t has 3 as one of its factors and 12 also has 3 as one of its factors, "t" and 12 will always have 3 as a common factor other than 1
From statement 2 we have t = 2k + 3*3b ------> t = 2k + 9b, where b is a positive integer
Here if k = 1 and b = 1, then t = 11; 11 and 12 do not have any common factor other than 1
but if k = 3 and b = 2 then we have t = 24; 24 and 12 have many common factors
Therefore the answer should be A
Current Student
Joined: 12 Aug 2015
Posts: 2522
Schools: Boston U '20 (M)
GRE 1: Q169 V154
Re: If k, m, and t are positive integers and k/6 + m/4 = t/12, do t and 12
16 Mar 2016, 05:53
Superb QUESTION
Here we need to write k as 3*p for some integer p, so 3 must be in the GCD
hence A is sufficient
As for statement 2: t = 11 (k = 1, m = 3) => NO
t = 15 (k = 3, m = 3) => YES
hence not sufficient
hence A
SVP
Joined: 23 Feb 2015
Posts: 1885
Re: If k, m, and t are positive integers and k/6 + m/4 = t/12, do t and 12
02 Mar 2020, 13:19
Quote:
If k, m, and t are positive integers and k/6 + m/4 = t/12, do t and 12 have a common factor greater than 1?
(1) k is a multiple of 3.
(2) m is a multiple of 3.
Hello,
IanStewart
So, it seems that we need the value of $$t$$ to equal a prime number to have the answer NO in statement 2, right?
Thanks
GMAT Tutor
Joined: 24 Jun 2008
Posts: 2079
Re: If k, m, and t are positive integers and k/6 + m/4 = t/12, do t and 12
02 Mar 2020, 17:27
Hello,
IanStewart
So, it seems that we need the value of $$t$$ to equal a prime number to have the answer NO in statement 2, right?
Thanks
To get a 'no' answer using Statement 2, you just need t to equal something that doesn't share any factors (besides 1) with 12. So t could be 25 or 35, say; it doesn't need to be prime, though making t equal to a small prime like 5 or 7 is a very good choice if you're testing numbers. And t also can't be just any prime -- if t were 2 or 3, then you would not get a 'no' answer to the question, though it's impossible for t to equal 2 or 3 anyway in this equation, if m and k are positive integers.
https://www.gamedev.net/topic/46064-0207---the-readiness-test/
## 02.07 - The Readiness Test
### #1 Teej GDNet+
Posted 20 April 2001 - 04:05 AM
Oh Goody, Code Time! Alright recruits, we’re about to test your compiling and linking skills. I’ve added a zipfile called BASECODE1 on my website that I’d like you to download now (don’t unzip it yet though!). The zipfile contains the following files:
• Globals.h – the main header file for the game template
• WinBase.cpp – the windows application overhead we’ve discussed
• InitTerm.cpp – the initialization/shutdown functions
• GameMain.cpp – where the game code goes
• Utils.h – prototypes for our helper functions
• Utils.cpp – the helper functions themselves
• Resource.bmp – a sample bitmap
• Sample.wav – a sample WAV file
• DDRAW.H
• DINPUT.H
• DSOUND.H
…which contain prototypes for the DirectDraw, DirectInput and DirectSound components we're using. The actual components (the DirectX runtimes) are called:
• DDRAW.DLL
• DINPUT.DLL
• DSOUND.DLL
…which are the files that everyone has for playing DirectX games with. Finally, we have library files that connect code using DirectX components:
• DDRAW.LIB
• DINPUT.LIB
• DSOUND.LIB
Compiling...
GameMain.cpp
c:\gdn\basecode\globals.h(87) : error C2146: syntax error : missing ';' before identifier 'lpDD'
c:\gdn\basecode\globals.h(87) : error C2501: 'LPDIRECTDRAW7' : missing storage-class or type specifiers
c:\gdn\basecode\globals.h(87) : error C2501: 'lpDD' : missing storage-class or type specifiers
c:\gdn\basecode\globals.h(88) : error C2146: syntax error : missing ';' before identifier 'lpDDSPrimary'
c:\gdn\basecode\globals.h(88) : error C2501: 'LPDIRECTDRAWSURFACE7' : missing storage-class or type specifiers
c:\gdn\basecode\globals.h(88) : error C2501: 'lpDDSPrimary' : missing storage-class or type specifiers
c:\gdn\basecode\globals.h(89) : error C2146: syntax error : missing ';' before identifier 'lpDDSBack'
c:\gdn\basecode\globals.h(89) : error C2501: 'LPDIRECTDRAWSURFACE7' : missing storage-class or type specifiers
c:\gdn\basecode\globals.h(89) : error C2501: 'lpDDSBack' : missing storage-class or type specifiers
c:\gdn\basecode\globals.h(90) : error C2146: syntax error : missing ';' before identifier 'lpDDSRes'
c:\gdn\basecode\globals.h(90) : error C2501: 'LPDIRECTDRAWSURFACE7' : missing storage-class or type specifiers
c:\gdn\basecode\globals.h(90) : error C2501: 'lpDDSRes' : missing storage-class or type specifiers
c:\gdn\basecode\globals.h(94) : error C2146: syntax error : missing ';' before identifier 'lpDI'
c:\gdn\basecode\globals.h(94) : error C2501: 'LPDIRECTINPUT7' : missing storage-class or type specifiers
(…on and on…) then you probably need to make sure that you're pointing the IDE to the DirectX header files (look in Tools|Options|Directories). If you have other compile errors, try to get ahold of me (teej@gdnmail.net or 70816765 on ICQ) or post a reply to this article with your questions. As for the linking, well, that's primarily the job of the LIB list we prepared under Project|Settings|Link. Here's an example of having it entered incorrectly:
Linking...
InitTerm.obj : error LNK2001: unresolved external symbol _DirectDrawCreateEx@16
InitTerm.obj : error LNK2001: unresolved external symbol _IID_IDirectDraw7
InitTerm.obj : error LNK2001: unresolved external symbol _c_dfDIKeyboard
InitTerm.obj : error LNK2001: unresolved external symbol _GUID_SysKeyboard
InitTerm.obj : error LNK2001: unresolved external symbol _IID_IDirectInputDevice7A
InitTerm.obj : error LNK2001: unresolved external symbol _DirectInputCreateEx@20
InitTerm.obj : error LNK2001: unresolved external symbol _IID_IDirectInput7A
InitTerm.obj : error LNK2001: unresolved external symbol _DirectSoundCreate@12
Debug/basecode.exe : fatal error LNK1120: 8 unresolved externals
basecode.exe - 9 error(s), 0 warning(s)
To recap, here's what we've covered so far:
• An introduction to game development
• A basic windows application framework
• C language resources
• A quick overview of some of the basic tools required
• Some insight into how a game might be written
• Information on resolutions, colors and pixel plotting
• The role of DirectX in our games
• A first run with some game template code
Congratulations… You have your compiler/linker in order, the DirectX runtimes and SDK installed, the online documentation at your fingertips, and some template code that sits before you like an open book with blank pages – ready to be filled in with the knowledge and practice that is the rite of passage for any apprentice. Get ready folks, because next up is the proverbial ‘Square One’. See you there. Questions? Comments? Can’t get the damned code to build? Reply to this article – help is on the way! Edited by - teej on April 20, 2001 11:10:00 AM
### #2 Fighterdude Members
Posted 20 April 2001 - 10:10 AM
I can build the program fine, 0 errors and 0 warnings,
but when I try to run it, it says "DD_init faild.|-7".
Fighterdude (A Baldurs Gate Fan)
### #3 Lord Maz Banned
Posted 20 April 2001 - 10:26 AM
I can't get my modem to download those DirectX files... the connection is too slow, and it stops after a while.
If anyone could upload the files on a better server (or buy me a better modem!), I would be grateful!
-Lord Maz-
### #4 Qunu Members
Posted 20 April 2001 - 11:06 AM
Fighterdude: make sure you have the resource.bmp and sample.wav files in the same directory as your executable.
### #5 Anonymous Poster Guests
Posted 20 April 2001 - 11:07 AM
Fighterdude: make sure you have the resource.bmp and sample.wav files in the same directory as your executable.
### #6 Crox_guldtand Members
Posted 20 April 2001 - 11:29 AM
Hey Fighterdude!
You haven't got the "resource.bmp" file in the same catalog/dir.
as the other files.
Just put it there and it'll work fine.
### #7 Fighterdude Members
Posted 20 April 2001 - 02:27 PM
I have them all in the same directory
and it still does the same thing.
Fighterdude (A Baldurs Gate Fan)
### #8 Fighterdude Members
Posted 20 April 2001 - 02:30 PM
Forget the last post, I just got what you were saying and it works now. Thanks guys!
Fighterdude (A Baldurs Gate Fan)
### #9 Fighterdude Members
Posted 20 April 2001 - 03:01 PM
Just wondering, Teej: is the next article the C LANGUAGE one, or are you gonna post a new one? Great job Teej, I really like it!
Fighterdude (A Baldurs Gate Fan) - now that's a good Game
### #10 Fighterdude Members
Posted 20 April 2001 - 03:01 PM
Just wondering, Teej: is the next article the C LANGUAGE one, or are you gonna post a new one? Great job Teej, I really like it!
Fighterdude (A Baldurs Gate Fan) - now that's a good Game
### #11 Fighterdude Members
Posted 20 April 2001 - 03:03 PM
Sorry for the multi-post.
Fighterdude (A Baldurs Gate Fan)
### #12 Fighterdude Members
Posted 20 April 2001 - 05:14 PM
Do I have to get the online version or can I get it in Word format?
Fighterdude (A Baldurs Gate Fan)
### #13 Fighterdude Members
Posted 20 April 2001 - 05:16 PM
The docs, I mean!
### #14 Mucman Members
Posted 20 April 2001 - 06:59 PM
Well I'll be darned! It worked at the first build! Should I be surprised?
"You won't get wise with the sleep still in your eyes." - Neil Peart
### #15 Thalion Members
Posted 21 April 2001 - 07:59 AM
Why do I get this error?
Compiling...
WinBase.cpp
InitTerm.cpp
Utils.cpp
GameMain.cpp
LINK : fatal error LNK1104: cannot open file "C:\program.obj"
### #16 Higgenkreuz Members
Posted 21 April 2001 - 08:05 AM
I get the following messages while I try to compile:
d:\dxvcsdk\lib\ddraw.lib : fatal error LNK1106: invalid file or disk full: cannot seek to 0x389bc793
I checked the path to my lib file, it is fine, also, I still have some 150MB of free space on my HD.
Any ideas?
### #17 Scogs Members
Posted 21 April 2001 - 11:42 AM
Thalion,
I had the same error as you, so I deleted the project, re-installed the SDK, followed Teej's instructions again and it all worked fine. Try it, you never know, it might work!
Scogster.
PS. Teej, you're a top man.
### #18 thadarkangel Members
Posted 21 April 2001 - 02:42 PM
I can't find the site. If you can send me the address or the files to my email I will appreciate it.
You are doing a good job.
My email is thadarkangel@hotmail.com
Edited by - thadarkangel on April 21, 2001 9:44:42 PM
### #19 Mucman Members
Posted 21 April 2001 - 04:59 PM
As for readiness, are we expected to know and understand what the heck all those utility functions do? Or know exactly how the implementation works?
"You won't get wise with the sleep still in your eyes." - Neil Peart
### #20 Piotyr Members
Posted 21 April 2001 - 05:25 PM
At this point, just think of it as a black box "template" to program your game code in. The only function you need to worry about if you use the template is really the Game_Main function, and perhaps a bit about the shutdown and/or init functions. Don't get overwhelmed. If you're completely lost, there are some good tutorials around on DirectX to learn more than what Teej has overviewed thus far, and he might even go more in-depth at a later time.
http://www.xn--mamuki-6ib.pl/madhya-pradesh-rfzfeh/fully-connected-graph-vs-complete-graph-a42a3e
Pairwise parameterization: a factor for each pair of variables X, Y in χ; this corresponds to a complete graph. That is, one might say that a graph "contains a clique", but it's much less common to say that it "contains a complete graph". A complete graph is defined as an undirected graph with an edge between every pair of vertices; the complete graph with n vertices is denoted K_n and has n(n-1)/2 edges, calculated by formula. "Fully connected graph" is often used as a synonym for complete graph, but my first interpretation of it here as meaning simply "connected" was correct: a graph is said to be connected if there is a path from every vertex to every other vertex. The same is true for undirected graphs. In the neural-network analogy, the complete graph corresponds to a fully-connected layer.
There are two parameterizations with the same Markov network structure for a Gibbs distribution P over a fully connected graph: 1. the clique potential parameterization, in which the entire graph is a clique and the number of parameters is exponential in the number of variables (2^n - 1); and 2. the pairwise parameterization, with a factor for each pair of variables. However, the two formalisms can express different sets of conditional independencies and factorizations, and one or the other may be more intuitive for particular application domains. The target marginals are p_i(x_i), and MAP states are given by x* = argmax_x p(x). The key insight is to focus on message exchange, rather than just on directed data flow. We allow a variety of graph structures, ranging in complexity from tree graphs to grid graphs to fully connected graphs. We translate these relational graphs to neural networks and study how their predictive performance depends on the graph measures of their corresponding relational graphs. The bigger the weight is, the more similar the nodes are.
It is very easy to construct graphs with very high modularity and very low clustering coefficient: just take a number of complete balanced bipartite graphs with no edges between each other, and make each its own cluster. No triangles, so the clustering coefficient is 0.
To address the problem caused by the fixed topology of brain functional connectivity, the Temporal-Adaptive Graph Convolutional Network's adaptive graph convolutional layer employs a new adjacency matrix A+R+S to generate features for the GNN inference.
I said I had a graph because I'm working with networkx, and I built the data set myself by parsing info from the web. There is a function for creating fully connected (i.e. complete) graphs, namely complete_graph: import networkx as nx; g = nx.complete_graph(10). It takes an integer argument (the number of nodes in the graph), and thus you cannot control the node labels. I haven't found a function for doing that automatically, but with itertools it's easy enough.
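As a quick check of the claims above (a sketch assuming the networkx library; the node count 6 is an arbitrary example):

```python
import networkx as nx

n = 6
g = nx.complete_graph(n)  # the complete (fully connected) graph on nodes 0..n-1

# A complete graph on n vertices has exactly n(n-1)/2 edges.
assert g.number_of_edges() == n * (n - 1) // 2

# relabel_nodes is one way around complete_graph's fixed 0..n-1 node labels.
g = nx.relabel_nodes(g, {i: f"v{i}" for i in range(n)})
print(sorted(g.nodes()))  # ['v0', 'v1', 'v2', 'v3', 'v4', 'v5']
```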
http://www.physicsforums.com/showthread.php?t=200342
## the pregnant stone
The base stones that make up the foundation of the Roman sun goddess temple at Baalbek, Lebanon. These stones have been estimated by some at 800-2000 tons apiece. Several sites claim that the Romans were incapable of moving, lifting, and placing these massive blocks with the accuracy seen at Baalbek. The sites claim that when the Romans first came to Baalbek there was already a temple there. The original temple belonged to a much more advanced lost civilization. The Romans then built on top of the temple, using the already existing massive 800-2000 ton stones as a foundation.
One of these sites is http://www.bibliotecapleyades.net/esp_baalbek_1.htm
I know the Romans were capable of moving massive stones, but could they have moved these? How could they? Anybody?
Recognitions:
Gold Member
Staff Emeritus
This may be relevant:
The Ancient Egyptians built their great Pyramids by pouring concrete into blocks high on the site rather than hauling up giant stones, according to a new Franco-American study. The research, by materials scientists from national institutions, adds fuel to a theory that the pharaohs’ craftsmen had enough skill and materials at hand to cast the two-tonne limestone blocks that dress the Cheops and other Pyramids. [continued]
http://www.timesonline.co.uk/tol/new...icle656117.ece
Thanks for bringing this up; I nearly forgot about this, I was going to post this myself. Out of all of the old megaliths the Trilithon is the most enigmatic of the lot. There is probably not a single crane in the world that could lift that stone today; the biggest of them is thought to weigh over 1000 tonnes. Also, they weren't quarried by the Romans, who just moved some of them when they built the temple of Jupiter on the old Baalbek structure. Some historians even think they were made back around the time of the Egyptians, or even earlier. That area of the world is where the very first civilizations developed and it has a rich history of very ancient monuments and cultures. Good info at: http://www.world-mysteries.com/mpl_5b3.htm (page 3 of 5)
Approximately 86 kilometers northeast of the city of Beirut in eastern Lebanon stands the temple complex of Baalbek. Situated atop a high point in the fertile Bekaa valley, the ruins are one of the most extraordinary and enigmatic holy places of ancient times. Long before the Romans conquered the site and built their enormous temple of Jupiter, long even before the Phoenicians constructed a temple to the god Baal, there stood at Baalbek the largest stone block construction found in the entire world. The origin of the name Baalbek is not precisely known and there is some difference of opinion among scholars. The Phoenician term Baal (as the Hebrew term Adon) simply means ‘lord’ or ‘god’ and was the title given to the Semitic sky-deity worshipped throughout the archaic Middle East. The word Baalbek may mean 'God of the Bekaa valley' (the local area) or ‘God of the Town’, depending on different interpretations of the word. Ancient legends assert that Baalbek was the birthplace of Baal. Some scholars have suggested that Baal (the Assyrian Hadad) was only one of a triad of Phoenician deities that were once venerated at this site - the others being his son Aliyan, who presided over well-springs and fecundity, and his daughter Anat (Assyrian Atargatis). According to theories stated by the mainstream archaeological community, the history of Baalbek reaches back approximately 5000 years. Excavations beneath the Great Court of the Temple of Jupiter have uncovered traces of settlements dating to the Middle Bronze Age (1900-1600 BC) built on top of an older level of human habitation dating to the Early Bronze Age (2900-2300 BC). There are absolutely no records in any Roman or other literary sources concerning the construction methods or the dates and names of the benefactors, designers, architects, engineers and builders of the Grand Terrace. The megalithic stones of the Trilithon bear no structural or ornamental resemblance to any of the Roman-era constructions above them, such as the previously described Temples of Jupiter, Bacchus or Venus. The limestone rocks of the Trilithon show extensive evidence of wind and sand erosion that is absent from the Roman temples, indicating that the megalithic construction dates from a far earlier age. Finally, the great stones of Baalbek show stylistic similarities to other cyclopean stone walls at verifiably pre-Roman sites such as the Acropolis foundation in Athens, the foundations of Myceneae, Tiryns, Delphi and even megalithic constructions in the ‘new world’ such as Ollyantaytambo in Peru and Tiahuanaco in Bolivia.
More recently, in the 18th century, an even bigger one was moved, called the 'Thunder Stone', which required a huge amount of manpower. They used hundreds of round metal ball bearings on runners to move it.
http://en.wikipedia.org/wiki/The_Bro...oved_by_man.3F
It is sometimes claimed that the Thunder Stone is the "largest stone ever moved by man." This stone was not only tremendously large, but was also effectively moved 6 km (4 miles) overland to the Gulf of Finland by manpower alone; no animals or machines were used. It was then transported by boat up the Neva, and subsequently to its current site. Due to the large size of the rock, the easiest way to measure its mass is to calculate it. Its dimensions before being cut, according to the fall 1882 edition of La Nature were 7 x 14 x 9 m. Based on the density of granite, its mass was determined to be around 1500 tonnes.[7] Falconet had some of this cut away to change the rock to its current wave-like shape, leaving the finished, stylized pedestal weighing slightly less. This still leaves it the largest when compared to other large, sculpted stones:
Quite an amazing feat! To my knowledge, that is the biggest stone moved since the Trilithon stones.
However, I don't think that the Trilithon can be fully explained by manpower alone. For a start, some of the stones had to be raised over twenty feet and placed into position to a precision of millimetres in the Baalbek structure. I don't think that they would have managed that even in the 18th century; they just rolled the Thunder Stone along the ground, and with great difficulty. Also, I think it is highly unlikely a block that big could be made of a concrete mixture, as its huge weight would create massive forces on it when it is being moved; it would likely shatter. Also, they did not have any sort of sophisticated metalwork before the Romans, so using ball bearings to move it as was done with the Thunder Stone is not possible. Stones would not be spherical enough, and would likely shatter under its immense weight. I don't think anyone is sure of how they were moved and lifted into position.
Michel Alouf, the former curator of the ruins, once wrote of the Trilithon:
in spite of their immense size, they [the Trilithon stones] are so accurately placed in position and so carefully joined, that it is almost impossible to insert a needle between them. No description will give an exact idea of the bewildering and stupefying effect of these tremendous blocks on the spectator'.
One of the true mysteries of the ancient world.
Recognitions:
Gold Member
Staff Emeritus
Quote by PlasmaSphere Thanks for bringing this up; I nearly forgot about this, I was going to post this myself.
I think you missed the point of the post. The theory is that the blocks discussed in the link were poured like concrete.
Quote by Ivan Seeking I think you missed the point of the post. The theory is that the blocks discussed in the link were poured like concrete.
Surely they could not have been poured already in position? They may as well have made the whole thing out of solid concrete in that case. There would be no reason to make it out of separate parts if it was poured there.
The quarry where they were mined is near the site, and one of the blocks is still there. They would still have had to move them from the quarry and lift them into position.
http://www.andrewcollins.com/page/articles/baalbek.htm
Even more extraordinary is the fact that in a limestone quarry about one quarter of a mile away from the Baalbek complex is an even larger building block. Known as Hajar el Gouble, the Stone of the South, or the Hajar el Hibla, the Stone of the Pregnant Woman, it weighs an estimated 1200 tonnes.(2) It lays at a raised angle - the lowest part of its base still attached to the living rock - cut and ready to be broken free and transported to its presumed destination next to the Trilithon, the name given to the three great stones in ancient times. The next problem is whether or not the Romans possessed the engineering capability to transport and position 1000-tonne blocks of this nature. Since the Stone of the Pregnant Woman was presumably intended to extend the Trilithon, it must be assumed that the main three stones came from the same quarry, which lies about one quarter of a mile from the site. Another similar stone quarry lies some two miles away, but there is no obvious evidence that the Trilithon stones came from there.
also interesting from that site;
There is, however, tantalising evidence to show that some of the earliest archaeologists and European travellers to visit Baalbek came away believing that the Great Platform was much older than the nearby Roman temples. For instance, the French scholar, Louis F licien de Saulcy, stayed at Baalbek from 16 to 18 March 1851 and became convinced that the podium walls were the remains of a pre-Roman temple'.(39) Far more significant, however, were the observations of respected French archaeologist Ernest Renan, who was allowed archaeological exploration of the site by the French army during the mid nineteenth century.(40) It is said that when he arrived there it was to satisfy his own conviction that no pre-Roman remains existed on the site.(41) Yet following an indepth study of the ruins, Renan came to the conclusion that the stones of the Trilithon were very possibly of Phoenician origin',(42) in other words they were a great deal older that the Roman temple complex. His reasoning for this assertion was that, in the words of Ragette, he saw `no inherent relation between the Roman temple and this work'.(43) So what was it that so convinced early archaeologists and travellers that the Trilithon was much older than the rest of the temple complex? The evidence is self apparent and runs as follows:- a) One has only to look at the positioning of the Trilithon and the various courses of large stone blocks immediately beneath it to realise that they bear very little relationship to the rest of the Temple of Jupiter. Moreover, the visible courses of smaller blocks above and to the right of the Trilithon are markedly different in shape and appearance to the smaller, more regular sized courses in the rest of the obviously Roman structure. b) The limestone courses that make up the outer podium base - which, of course, includes the Trilithon - are heavily pitted by wind and sand erosion, while the rest of the Temple of Jupiter still possesses comparatively smooth surfaces. The same type of wind and sand erosion can be seen on the huge limestone blocks used in many of the megalithic temple complexes around the northern Mediterranean coast, as well as the cyclopean walls of Mycenean Greece. Since all these structures are between 3000 and 6000 years of age, it could be argued that the lower courses of the outer podium wall at Baalbek antedate the Roman temple complex by at least a thousand years. c) Other classical temple complexes have been built upon much earlier megalithic structures. This includes the Acropolis in Athens (erected 447-406 BC), where archaeologists have unearthed cyclopean walls dating to the Mycenean or Late Bronze Age period (1600-1100 BC). Similar huge stone walls appear at Delphi, Tiryns and Mycenae. d) The Phoenicians are known to have employed the use of cyclopean masonry in the construction of their citadels. For instance, an early twentieth-century drawing of the last-remaining prehistoric wall at Aradus, an ancient city on the Syrian coast, shows the use of cyclopean blocks estimated to have been between thirty and forty tonnes a piece. These are important points in favour of the Great Platform, as in the case of the inner podium, being of much greater antiquity than the Roman, or even the Ptolemaic, temple complex. Yet if we were to accept this possibility, then we must also ask ourselves: who constructed it, and why?
Recognitions:
Gold Member
Staff Emeritus
Quote by PlasmaSphere Surely they could not have been poured already in position? They may as well have made the whole thing out of solid concrete in that case. There would be no reason to make it out of separate parts if it was poured there.
Not to dispute your other points, but I don't think we can make the assumption that you make here. It would be much easier to construct the rigging needed to pour blocks than to pour something as large as the entire structure. Either way, if these were poured, it seems that specific evidence would be detectable.
If I am reading this right, then science is now guessing that the way they built the massive structures was with pouring methods similar to the way we pour concrete. This is an interesting idea and it sounds plausible to me.
Unless the ancients were in possession of machinery that we have no proof that they had (Imhotep is impressive, but not *that* impressive), then primitive concrete is the main plausible theory.
Blog Entries: 2
Recognitions:
Gold Member
Science Advisor
My question would be: if the site was built using poured concrete stones, why use separate stones at all? Why didn't they just pour a single monolithic slab?
The Colossi of Memnon The statues are made from blocks of quartzite sandstone which was stone quarried at el-Gabal el-Ahmar (near modern-day Cairo) and transported 675 km (420 miles) overland to Thebes. ....the colossi reach a towering 18 metres (approx. 60 ft) in height and weigh an estimated 700 tons each.
http://en.wikipedia.org/wiki/Colossi_of_Memnon
Here is an example of a 700 ton megalith which was moved 420 miles by the Egyptians.
My question is, can quartz sandstone be poured like concrete?
Quote by Mech_Engineer My question would be- if the site was built using poured concrete stones, why use separate stones at all? Why didn't they just pour a single monolithic slab?
Perhaps there were limits on how large their cast could be and still produce plane surfaces.
Blog Entries: 8
Recognitions:
Gold Member
Quote by Mech_Engineer My question would be- if the site was built using poured concrete stones, why use separate stones at all? Why didn't they just pour a single monolithic slab?
I saw a documentary once which said that if the hoover dam had been poured as one and not in the block sections it was made in, it would have taken over 100 years to solidify.
Not what I saw, but it's got the info:
http://www.arizona-leisure.com/hoover-dam-building.html
Perhaps this would explain using separate blocks as opposed to one complete slab.
Blog Entries: 2
Recognitions:
Gold Member
Quote by jarednjames I saw a documentary once which said that if the hoover dam had been poured as one and not in the block sections it was made in, it would have taken over 100 years to solidify. Not what I saw, but it's got the info: http://www.arizona-leisure.com/hoover-dam-building.html Perhaps this would explain using separate blocks as opposed to one complete slab.
But the Hoover Dam looks like a monolithic slab now, even though it was poured in sections. Save for expansion joints, it's one big piece of concrete.
This makes no sense. There are no "form boards" in-between the tightly fitted blocks, or, to my understanding, any evidence of there having been any (such as burning away). I think the "build" was nothing more than using a huge number of slave laborers, earthen ramps, strong ropes and rolling logs. Plus the crack of a whip, of course.
Blog Entries: 8
Recognitions:
Gold Member
Quote by Mech_Engineer But the Hoover Dam looks like a monolithic slab now, even though it was poured in sections. Save for expansion joints, it's one big piece of concrete.
Yes, because it was built in such a way that it would end up 'one big structure' but not have the inherent problems of being poured as one big slab.
For example (all figures made up for illustration):
A simple bit of logic would tell you that if you know a 0.5m^3 block of concrete takes about 12 hours to set and a 1m^3 block takes 24 hours to set, you could work out how long something the size of a pyramid would take to set (or at least be able to estimate it) with enough accuracy to know it would be far too long.
You could then work on the basis of producing a series of 1m^3 blocks, allowed to dry for 24 hours each and then put in place, avoiding the extensive setting times.
It took a very precise pouring schedule to get the Hoover Dam to set correctly; I believe it needed concrete being poured almost continuously. Remember, we also have rebar to add strength to massive sections of concrete and help hold things together. Without it, the dam wouldn't have been possible. The Egyptians didn't have it (as far as I'm aware).
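A minimal sketch of that estimate in code (all figures are illustrative, as above; the linear volume-to-time scaling is the post's own toy premise, and the pyramid volume is my rough assumption):

```python
# Toy extrapolation of setting time, taking the made-up figures above at face
# value (24 hours per cubic metre, scaling linearly with volume) and a rough
# Great Pyramid volume of ~2.6 million m^3. Real curing does not scale this
# simply; the point is only the order of magnitude.
hours_per_m3 = 24.0
pyramid_volume_m3 = 2.6e6

hours = hours_per_m3 * pyramid_volume_m3
print(f"{hours / 24 / 365:.0f} years")   # roughly 7000 years: far too long
```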
Quote by jarednjames: Yes, because it was built in such a way that it would end up 'one big structure' but not have the inherent problems of being poured as one big slab. [...]
Just to clear things up, the scientists mentioned in the link only suggest that some of the top stones are cement. Even if they are correct, the vast majority of the pyramid is still made of quarried stones.
This idea that the Egyptians may have been more advanced in using concrete, while interesting, doesn't solve the mystery of how they moved enormous megaliths.
Another mystery is the technique that the Egyptians used to carve diorite statues.
|
2013-05-19 04:45:42
|
https://math.stackexchange.com/questions/2729022/is-tangent-space-a-separable-space-constructed-from-a-differentiable-manifold-as
|
Is tangent space a separable space constructed from a differentiable manifold as the algebraic surface between a plane and a hyperplane?
The entire question should be:
Can we consider the tangent space as a separable space constructed from a differentiable manifold, as the algebraic surface between a plane and a hyperplane? Is it possible to view, between a plane (n+1) and a hyperplane (n-1), a surface or a 'space between planes', like a subspace?
A hyperplane by itself is a subspace, so I am trying to understand whether between the plane and the hyperplane we have a topological plane morphism or some other algebraic structure/operation (the spectrum of a ring, for example).
|
2019-03-20 04:57:17
|
https://inoi15.discuss.codechef.com/questions/62890/how-to-use-the-test-cases
|
# How To Use The Test Cases?
Since we won't be able to use online graders at INOI, I was wondering how to use the sample test cases that IARCS will provide us, because I have never used them properly before. Can someone give an example (C++ code) of how to use the test cases, for any problem? And how do I check that my code runs under the given time limit? Thanks! (asked 26 Jan '15 by ishoo)
Suppose you're solving COVERING from the INOI Practice Server. First extract covering-data.zip into your current directory, then make sure you have a compiled .exe version of your code, say covering.exe, in the same directory. Go to your command prompt, switch to your working directory, then type:
covering.exe < covering-data/5.in > 5.out
The < and > operators essentially redirect the input and output streams of covering.exe to and from the external files 5.in and 5.out. Open 5.out in your text editor and voila, you'll have your output. Cheers. (answered 26 Jan '15 by popoya)
Comments:
- "You can use > without < and vice-versa." (arpanb8)
- "You don't need to use '5.out'; then the output will be shown in cmd itself." (ketanhwr)
- "That's right, @ketanhwr." (arpanb8)
When it comes to time limits, it suffices to analyze the complexity based on the upper bound of N. An average computer can perform around 10^7 operations per second. Required complexity by upper bound of N:
- N up to 5000-6000: N^2
- N up to 10^5: N log N
- N up to 10^7: N
- N above 10^7: cannot be done in time proportional to N; try for time proportional to some other quantity with a smaller upper bound (example: The Leaf Eaters, ICO Online Judge).
ALL THE BEST. (answered 26 Jan '15 by arpanb8)
Comments:
- "It's 10^8, not 10^7 operations per second." (sandy999)
- "Sorry, my bad. But linear-time solutions don't seem to work when the upper bound of N is more than 10^7?!" (arpanb8)
- "Might be a high constant factor." (superty)
- "Yeah, that might be it." (arpanb8)
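For what it's worth, the same back-of-the-envelope check can be coded up (a sketch assuming a budget of about 10^8 simple operations per second, the corrected figure from the comments; the exact figure is machine-dependent):

```python
# Rough feasibility check: does a given complexity fit in ~1 second for this N?
import math

def fits_in_a_second(n, budget=1e8):
    return {
        "N^2":     n * n <= budget,
        "N log N": n * math.log2(n) <= budget,
        "N":       n <= budget,
    }

print(fits_in_a_second(10**5))
# {'N^2': False, 'N log N': True, 'N': True}
```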
|
2018-11-19 05:53:46
|
https://stats.stackexchange.com/questions/375992/use-same-probability-threshold-when-evaluating-various-models
|
# Use same probability threshold when evaluating various models
Suppose I've built n different binary classification models. For what it's worth, they all use the exact same input and output data for training, and are evaluated on the same test and validation sets.
Now, outside of looking at ROC-AUC, I've also looked at precision, recall, and F1 for a specific probability threshold. Currently, the threshold chosen depends on each model - this cutoff is determined by finding the probability that maximizes F1 for the predicted test-set probabilities for each model. As such, this threshold can vary (it only varies slightly in this case - between 0.40 and 0.45 for roughly 75 different model configurations).
My question: is it appropriate to evaluate each model's performance by looking at the F1-score for the one probability threshold determined by the strategy defined above?
For a more concrete example, here's a sample of the performance metrics and associated thresholds:
f1 threshold
0.75 0.40
0.71 0.41
0.74 0.44
0.77 0.45
Would it be appropriate/fair to take the fourth model (with an F1 of 0.77) as the final model?
I'm concerned that the predicted probability distributions could differ between the models, perhaps causing some problems with this model-selection strategy.
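For concreteness, here is a sketch of the threshold-selection step the question describes; the use of scikit-learn and the variable names are my own assumptions, not part of the original setup:

```python
# Pick, for one model, the probability cutoff that maximizes F1 on a test set.
import numpy as np
from sklearn.metrics import precision_recall_curve

def best_f1_threshold(y_true, y_prob):
    precision, recall, thresholds = precision_recall_curve(y_true, y_prob)
    f1 = 2 * precision * recall / np.clip(precision + recall, 1e-12, None)
    # precision/recall have one more entry than thresholds; drop the last point
    i = int(np.argmax(f1[:-1]))
    return thresholds[i], f1[i]

# usage sketch: threshold, f1 = best_f1_threshold(y_test, model.predict_proba(X_test)[:, 1])
```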
|
2019-05-19 23:03:46
|
http://icemed.is/u7rzwawt/5etunw3.php?tag=218039-comparing-regression-lines-in-r
|
$$R^{2}_{adj} = 1 - \frac{MSE}{MST}$$ where MSE is the mean squared error, given by $MSE = \frac{SSE}{\left( n-q \right)}$, and $MST = \frac{SST}{\left( n-1 \right)}$ is the mean squared total; here n is the number of observations and q is the number of coefficients in the model.
Linear regression is a very widely used statistical tool to establish a relationship model between variables, and it is a technique that almost every data scientist needs to know. Although machine learning and artificial intelligence have developed much more sophisticated techniques, linear regression (almost a 200-year-old method) is still highly effective in predictive analysis. There are two main types: simple linear regression, with one continuous independent variable, and multiple linear regression, with several.
Before fitting, the data should meet the four main assumptions of linear regression: the relationship between the independent and dependent variables must be linear, the observations must be independent of one another (no autocorrelation within variables), the variance must be homoscedastic, and the residuals should be roughly normal. Linearity can be checked with scatterplots, for example one for biking and heart disease and one for smoking and heart disease. If the distribution of observations is roughly bell-shaped (more observations in the middle of the distribution, fewer on the tails), we can proceed with the regression; the homoscedasticity assumption is checked later, after fitting the linear model. With only one independent and one dependent variable, there is no need to test for hidden relationships among variables.
A typical workflow in R: open RStudio and click on File > New File > R Script, load the data file you have downloaded, fit the model with the lm() function, and run the code. To visualize a simple linear regression, follow 4 steps: plot the data points, add the regression line, add the standard error of the estimate (in this case +/- 0.01) as a light grey stripe surrounding the line, and add style parameters using theme_bw() with custom labels using labs(). Multiple regression fits a plane rather than a line, so instead one can plot the relationship between, say, biking and heart disease at different levels of smoking, which also makes the legend easier to read.
Reported results then read as follows: there is a significant positive relationship between income and happiness (p-value < 0.001), with a 0.713-unit (+/- 0.01) increase in happiness for every unit increase in income. In the heart-disease data there is a 0.2% (+/- 0.0014) decrease in the frequency of heart disease for every 1% increase in biking to work, and a 0.178% (+/- 0.0035) increase for every 1% increase in smoking. For both parameters the standard errors are small and the t-statistics large, so there is almost zero probability that these effects are due to chance.
The same tools extend to other datasets: 31 observations of 3 numeric variables describing black cherry trees; a wine dataset, where the cor() and round() functions give the pairwise correlations between all variables to two decimal places (the scatterplot between HarvestRain and the price of wine shows their correlation, and AGST and HarvestRain help predict the price); or blood pressure and age, where a linear regression model trained on the data can predict blood pressure at ages not present in the dataset, such as age 53.
Finally, to compare the regression lines of y versus x for two groups ("group" being a factor), for instance when testing for a difference in treatment effect, one can use the Student's t-test to compare the slopes of the two lines: $t = (b_1 - b_2) / s_{b_1,b_2}$, where $b_1$ and $b_2$ are the two slope coefficients and $s_{b_1,b_2}$ is the pooled standard error of the slope difference.
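As a hedged sketch of that last slope-comparison test: the tutorial itself works in R, and this Python version uses the common unpooled standard error sqrt(se1^2 + se2^2) rather than a pooled one.

```python
# Compare the slopes of two simple regression lines with a t-test.
import numpy as np
from scipy import stats

def compare_slopes(x1, y1, x2, y2):
    r1 = stats.linregress(x1, y1)
    r2 = stats.linregress(x2, y2)
    se = np.sqrt(r1.stderr**2 + r2.stderr**2)   # unpooled standard error
    t = (r1.slope - r2.slope) / se
    df = len(x1) + len(x2) - 4                  # two coefficients per line
    p = 2 * stats.t.sf(abs(t), df)
    return t, p
```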
|
2021-10-21 06:05:18
|
https://socratic.org/questions/if-9-4-l-of-a-gas-at-room-temperature-exerts-a-pressure-of-16-kpa-on-its-contain-1
|
# If 9/4 L of a gas at room temperature exerts a pressure of 16 kPa on its container, what pressure will the gas exert if the container's volume changes to 9/8 L?
Sep 27, 2016
The final pressure will be 32 kPa.
#### Explanation:
This is a problem involving Boyle's law, which states that the volume of a gas at constant temperature and amount, is inversely proportional to the pressure. This means that when the volume increases, the pressure will decrease.
The equation for solving this kind of problem is: ${P}_{1} {V}_{1} = {P}_{2} {V}_{2}$
Known
${P}_{1} = \text{16 kPa}$
${V}_{1} = \text{9/4 L}$
${V}_{2} = \text{9/8 L}$
Unknown
${P}_{2}$
Rearrange the equation to isolate ${P}_{2}$. Substitute the known values into the equation and solve.
${P}_{2} = \frac{{P}_{1} {V}_{1}}{V} _ 2$
${P}_{2} = \frac{\text{16 kPa} \times \frac{9}{4} \cancel{\text{L}}}{\frac{9}{8} \cancel{\text{L}}} = \text{32 kPa}$
Note: Notice that the volume decreased by half, and the pressure increased 2x.
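A quick sketch checking the arithmetic with exact fractions (the module choice is mine, not the original answer's):

```python
# Verify P2 = P1 * V1 / V2 for the values above.
from fractions import Fraction

P1, V1, V2 = 16, Fraction(9, 4), Fraction(9, 8)   # kPa, L, L
print(P1 * V1 / V2)                                # 32 (kPa)
```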
|
2022-05-18 20:36:23
|
https://worksheets.tutorvista.com/fraction-to-decimal-worksheet.html
|
# Fraction to Decimal Worksheet
1.
Convert the decimal 2.325 into fractional form.
a. $13\frac{1}{40}$ b. $\frac{13}{40}$ c. $\frac{1}{40}$ d. $2\frac{13}{40}$
#### Solution:
Read the number as one and three hundred twenty-five thousandths.
The number in the denominator is 1000.
So, 2.325 = 2325 / 1000
= (2325 ÷ 25) / (1000 ÷ 25)
[Use the GCF to write the fraction in simplest form.]
= 93 / 40 = 2 13/40
2.
Express 1.12 as a fraction.
a. $1\frac{3}{25}$ b. $1\frac{4}{25}$ c. $\frac{3}{25}$ d. $1\frac{1}{24}$
#### Solution:
Read the number as one and twelve hundredths.
The number in the denominator is 100.
So, 1.12 = 112 / 100
= (112 ÷ 4) / (100 ÷ 4)
[Use the GCF to write the fraction in simplest form.]
= 28 / 25 = 1 3/25
3.
Marissa ate $\frac{1}{2}$ of a cake. Convert this fraction into a decimal.
a. 0.05 b. 5 c. 0.5 d. 50
#### Solution:
To convert fraction into decimal, divide the numerator of the fraction with the denominator.
So, the decimal for 1 / 2 is 0.5.
4.
Find the fraction for the decimal 8.9.
a. $\frac{1}{89}$ b. $\frac{89}{100}$ c. $\frac{89}{10}$ d. $\frac{10}{89}$
#### Solution:
8.9 = 89 / 10
So, the fraction for 8.9 = 89 / 10.
5.
Find the fraction for the decimal 3.31.
a. $\frac{331}{1000}$ b. $\frac{100}{331}$ c. $\frac{331}{10}$ d. $\frac{331}{100}$
#### Solution:
3.31 = 3 + 0.31
= 3 + 31 / 100
= (300 + 31) / 100 = 331 / 100
6.
Determine the fractions for the decimals given in the table.
a. $\frac{6}{100}$ and $\frac{143}{10}$ b. $\frac{6}{10}$ and $\frac{143}{100}$ c. $\frac{6}{100}$ and $\frac{143}{1000}$ d. $\frac{6}{10}$ and $\frac{143}{10}$
#### Solution:
0.6 = 6 / 10
1.43 = 1 + 0.43
= 1 + 43 / 100
= 143 / 100
So, the fractions for 0.6 and 1.43 are 6 / 10 and 143 / 100.
7.
Express the fraction $1\frac{11}{100}$as a decimal.
a. 0.011 b. 11.1 c. 0.11 d. 1.11
#### Solution:
1 11/100 = 111 / 100
11 / 100 = 0.11
So, 1 11/100 = 1 + 0.11
= 1.11
8.
Identify the table that correctly represents the fractions in their decimal form.
a. Table B b. Table C c. Table A d. Table B and Table C
#### Solution:
1 7/100 = 1 + 7 / 100
7 / 100 = 0.07
So, 1 7/100 = 1 + 0.07 = 1.07
20 / 100 = 0.2
So, Table-A is the correct answer.
9.
Express one and two hundredths as a decimal.
a. 12 b. 1.02 c. 0.102 d. 10.2
#### Solution:
One and two hundredths = 1 2/100
2 / 100 = 0.02
So, 1 2/100 = 1 + 0.02
= 1.02
So, the decimal for one and two hundredths is 1.02.
10.
Find the fraction for 0.69.
a. $\frac{69}{10}$ b. $\frac{6.9}{100}$ c. $\frac{69}{100}$ d. $\frac{0.69}{100}$
#### Solution:
0.69 = 69 / 100
So, the fraction for 0.69 is 69 / 100.
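All of the conversions above can be checked mechanically; here is a small sketch using Python's fractions module (the tooling is my choice, not the worksheet's):

```python
# Convert the worksheet's terminating decimals to fractions in lowest terms.
from fractions import Fraction

for d in ["2.325", "1.12", "0.5", "8.9", "3.31", "1.11", "1.02", "0.69"]:
    print(d, "=", Fraction(d))
# e.g. 2.325 = 93/40 (that is, 2 13/40) and 1.12 = 28/25 (that is, 1 3/25)
```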
|
2019-04-18 10:23:20
|
https://moodle.org/plugins/view.php?plugin=filter_wiris&moodle_version=11
|
## Filters: WIRIS math
filter_wiris
Maintained by the WIRIS team
WIRIS math tools enhances your Moodle with WIRIS editor and WIRIS cas. WIRIS editor is a WYSIWYG equations editor (also known as formula editor). WIRIS cas is an online platform for mathematical calculations and graphics designed for education.
Moodle 2.4, 2.5, 2.6, 2.7, 2.8, 2.9, 3.0
WIRIS math tools enhances your Moodle with WIRIS editor and WIRIS cas.
WIRIS editor is a mathematical visual (WYSIWYG) editor. You can use a large collection of icons nicely organized in thematic tabs in order to create formulas or equations for any web content. You can create and edit your formulas in a visual environment, just click on the WIRIS editor icon for creation or double-click on the formula for edition. No maintenance when upgrading Moodle to a new minor version. It is based on JavaScript and compatible with HTML 5.
WIRIS cas is an online platform for mathematical calculations designed for education. You can access a powerful calculation toolbar through an HTML page that includes integrals and limits calculation, function graphing in 2D or 3D and symbolic matrices manipulation, among others. WIRIS cas covers all mathematical topics from primary school to university level (Calculus, Algebra, Geometry, Differential Equations...).
You also need to install WIRIS plugin for TinyMCE or WIRIS plugin for Atto .
The same components for Moodle 1.9 can be found here.
WIRIS editor and WIRIS cas can be used for free up to a certain level per natural year. Please read conditions and prices in WIRIS store.
If you are also interested in mathematical quizzes with random parameters and mathematical evaluation, check our WIRIS quizzes plugin.
### Sets
This plugin is part of set WIRIS math & science.
• Sun, May 3, 2015, 10:50 AM
Hi, I'm upgrading from WIRIS filter 2015032700 to 2015041000, and it's stalled the whole database upgrade (failed dependencies check). It says that it requires Atto WIRIS plugin 2015040900, I have 2015041000 already installed and can't see how to upgrade it. There doesn't appear to be an update available when I check. Help?
• Tue, May 5, 2015, 8:36 PM
• Wed, May 6, 2015, 4:56 AM
Ah! Thanks very much
• Fri, May 15, 2015, 5:03 PM
Any chance of having a github repository with the code? It would make our packaging / deployment process for updated versions easier.
• Tue, Jun 9, 2015, 5:52 PM
Hello everyone, i've successfully installed wiris plugin in my moodle 2.9. But when i use this plugin, i can't press ok button and always my browser is not responding. Any suggestion ?
i'm using proxy in my server.
• Tue, Jun 9, 2015, 7:09 PM
@rangga try
• Fri, Aug 21, 2015, 2:27 AM
Ok, I have a possible major issue I've found with the wiris filter in Moodle 2.9 I was uninstalling the filter and the text editor plugins when suddenly my screen froze. After some searching, it seems that my entire moodledata folder got erased. In the log I see this...
PHP message: PHP Warning: unlink(/proc/9247/root/proc/9247/root/proc/9247/root/proc/9247/root/proc/9247/root/proc/9247/root/proc/9247/root/proc/9247/root/proc/9247/root/proc/9247/root/proc/9247/root/proc/9247/root/proc/9247/root/proc/9247/cwd/proc/9247/cwd/proc/9247/cwd/proc/9247/cwd/proc/9247/cwd/proc/9247/cwd/proc/9247/cwd/proc/9247/cwd/proc/9247/cwd/proc/9247/cwd/proc/9247/cwd/proc/9247/cwd/proc/120/net/dev_snmp6/eth2): Permission denied in /var/www/moodle/filter/wiris/wirispluginwrapper.php on line 100
PHP message: PHP Notice: Undefined variable: cache in /var/www/moodle/filter/wiris/filtersettings.php on line 112
PHP message: PHP Warning: unlink(/bin/alsaunmute): Permission denied in /var/www/moodle/filter/wiris/wirispluginwrapper.php on line 100
PHP message: PHP Warning: unlink(/bin/arch): Permission denied in /var/www/moodle/filter/wiris/wirispluginwrapper.php on line 100
PHP message: PHP Warning: unlink(/bin/awk): Permission denied in /var/www/moodle/filter/wiris/wirispluginwrapper.php on line 100
PHP message: PHP Warning: unlink(/bin/basename): Permission denied in /var/www/moodle/filter/wiris/wirispluginwrapper.php on line 100
PHP message: PHP Warning: unlink(/bin/bash): Permission denied in /var/www/moodle/filter/wiris/wirispluginwrapper.php on line 100
luckily this was a dev server
• Tue, Aug 25, 2015, 11:19 PM
We just upload a new version (3.54.1.1161) fixing the bug you found.
• Wed, Aug 26, 2015, 1:05 AM
There are a series of php warning while running phpunit tests with the wiris filter installed.
PHP Notice: Undefined property: stdClass::$version in filter/wiris/version.php on line 35
PHP Notice: Undefined property: stdClass::$version in filter/wiris/version.php on line 56
Not a big deal, but it would be nice if this was fixed.
• Wed, Aug 26, 2015, 3:40 PM
@Matt
We added the issue into our tracker. We'll let you know when the issue is fixed (probably in the next release).
Best,
• Sat, Sep 19, 2015, 12:09 AM
This page should state clearly that WIRIS is not open source and is a commercial solution with costs for non-schools or schools with more than 1000 equations. Many moodlers are using this tool without a clue about this. Please update the plugin description.
http://www.wiris.com/en/store
• Sat, Sep 19, 2015, 12:49 AM
Dear António,
The Plugin description does already include the following sentence with a link to WIRIS store
• Sat, Sep 19, 2015, 4:50 AM
Thanks Ramon. However, I think it should be stated in the top sentence; I didn't notice it. It is very frustrating to install a plugin, inform your users, and then have to remove it when you find it is not free like the majority of Moodle plugins. Transparency above all. I am sure most of the 58k downloads are not schools with fewer than 1000 equations, and were misled like me. 58k * 400 Euros = 23 million Euros annually. This should be taken seriously.
• Sun, Nov 1, 2015, 7:22 PM
Hi, help me. I've installed the WIRIS plugin for Moodle 2.x following the installation instructions, and I did all the settings as in the help at http://www.wiris.com/plugins/docs/moodle, but the editor does not display the icon.
|
2015-11-26 05:36:02
|
https://oscarbonilla.com/2011/06/mersenne-primes/
|
In 1644, Marin Mersenne, of Mersenne Primes fame, made the bold claim that $$2^{67}-1$$ was a prime number. That claim remained unchallenged for some 260 years – no computers back then – until…
…in 1903, Frank Nelson Cole of Columbia University delivered a talk with the unassuming title “On the Factorization of Large Numbers” at a meeting of the American Mathematical Society. “Cole – who was always a man of very few words – walked to the board and, saying nothing, proceeded to chalk up the arithmetic for raising 2 to the sixty-seventh power,” recalled Eric Temple Bell, who was in the audience. “Then he carefully subtracted 1 [getting the 21‑digit monstrosity 147,573,952,589,676,412,927]. Without a word he moved over to a clear space on the board and multiplied out, by longhand, $$193,707,721 \times 761,838,257,287$$”
The two calculations agreed. Mersenne’s conjecture – if such it was – vanished into the limbo of mathematical mythology. For the first… time on record, an audience of the American Mathematical Society vigorously applauded the author of a paper delivered before it. Cole took his seat without having uttered a word. Nobody asked him a question.[1]
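Cole's two hours of chalkboard arithmetic can be replayed in a few lines (a quick sketch using Python's arbitrary-precision integers):

```python
# Replaying Cole's 1903 computation: 2^67 - 1 and its two factors.
m = 2**67 - 1
print(m)                                    # 147573952589676412927
print(193707721 * 761838257287 == m)        # True
```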
Now I know where Professor Felton found his inspiration.
You’ve probably heard of Felton (National Academy of Science, IEEE Past President, NRA sustaining member). My advisor told me later that Felton’s academic peak had come at that now-infamous 1982 Symposium on Data Encryption, when he presented the plaintext of the encrypted challenge message that Rob Merkin had published earlier that year using his “phonebooth packing” trap-door algorithm. According to my advisor, Felton wordlessly walked up to the chalkboard, wrote down the plaintext, cranked out the multiplies and modulus operations by hand, and wrote down the result, which was obviously identical to the encrypted text Merkin had published in CACM. Then, still without saying a word, he tossed the chalk over his shoulder, spun around, drew and put a 158grain semi-wadcutter right between Merkin’s eyes. As the echoes from the shot reverberated through the room, he stood there, smoke drifting from the muzzle of his .357 Magnum, and uttered the first words of the entire presentation: “Any questions?” There was a moment of stunned silence, then the entire conference hall erupted in wild applause. God, I wish I’d been there.[2]
2. Auto-weapons by Olin Shivers.
|
2018-05-26 19:16:29
|
https://socratic.org/questions/5a6ad94eb72cff535eea6ac5#541498
|
# Question a6ac5
Jan 26, 2018
$- 101376$
#### Explanation:
Looking for the coefficient of ${a}^{5} {b}^{7}$ in the expansion of ${\left(a - 2 b\right)}^{12}$.
The binomial theorem says ${\left(x + y\right)}^{n} = {\sum}_{k = 0}^{n} \left(\begin{matrix}n \\ k\end{matrix}\right) {x}^{n - k} \cdot {y}^{k}$
So ${\left(a - 2 b\right)}^{12} = {\sum}_{k = 0}^{12} \left(\begin{matrix}12 \\ k\end{matrix}\right) {a}^{12 - k} \cdot {\left(- 2 b\right)}^{k}$.
For the term we're seeking, we need the term when $k = 7$:
$\left(\begin{matrix}12 \\ 7\end{matrix}\right) {a}^{12 - 7} \cdot {\left(- 2 b\right)}^{7}$
$= 792 {a}^{5} {\left(- 2\right)}^{7} {b}^{7}$
$= - 792 \cdot 128 {a}^{5} {b}^{7}$
$= - 101376 {a}^{5} {b}^{7}$
so the coefficient is $- 101376$.
Note:
$\left(\begin{matrix}12 \\ 7\end{matrix}\right) = \frac{12!}{7! \left(12 - 7\right)!}$
$= \frac{\left(12\right) \left(11\right) \left(10\right) \left(9\right) \left(8\right)}{\left(5\right) \left(4\right) \left(3\right) \left(2\right) \left(1\right)}$
$= \left(11\right) \left(9\right) \left(8\right) = 792$
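A quick check of the whole computation (a sketch; math.comb is Python 3.8+):

```python
# The coefficient of a^5 b^7 in (a - 2b)^12 is C(12,7) * (-2)^7.
from math import comb

print(comb(12, 7))               # 792
print(comb(12, 7) * (-2)**7)     # -101376
```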
|
2022-08-16 17:22:35
|
http://mathhelpforum.com/discrete-math/19121-conjurer-s-trick.html
|
1. ## Conjurer's trick
Hi, everyone! I have an exciting puzzle here:
"A conjurer invites an audience member on stage. Then the conjurer goes behind the tormentor (a side curtain of the stage), and his partner asks the audience member to write a series of numbers on a board. He (the partner) erases 2 adjacent numbers in that series. Then the conjurer comes back to the stage and tries to guess which numbers were erased.
Find the least length of the series of numbers such that the conjurer always guesses correctly."
2. Originally Posted by le_su14
I have an exciting puzzle but I can't solve it. Help me, please! [the puzzle quoted above]
Can't be done, in general.
However I suspect you are learning about arithmetic and geometric series, which have the following formulas (respectively)
$a_n = a_0 + nd$
$a_n = a_0r^n$
Each of these equations has two unknowns in it ( $a_0, d$ and $a_0,r$ respectively), so we need two numbers from the series in order to solve for them. BUT we can always write both an arithmetic and a geometric series between any two numbers, so to distinguish between the cases we need an extra number. So your answer is likely that you need three numbers from the series.
However, not all series are so nice. For example, the Fibonacci series
$a_n = a_{n - 1} + a_{n - 2};~a_0 = 1, ~ a_1 = 1$
fits neither of these types of series, and of course the series of numbers 3, 1, 4, 1, 5, 9, 2, 6, 5, 3, 5, ... representing the digits of $\pi$ follows no known pattern. So your problem is unsolvable in general.
-Dan
3. Originally Posted by topsquark
Can't be done, in general. [the full reply quoted above] -Dan
This puzzle was given by my teacher. I'm learning advanced discrete mathematics.
In this puzzle, the series of numbers is written randomly by the audience, so there may be no rule in the series. The key to this problem is which 2 adjacent numbers the partner chooses in this random series.
How can he choose them?
4. Hi topsquark,
The series needs at least 101 numbers (single digits). Let $a = (a_1 + a_3 + \cdots + a_{101})$ mod 10 and $b = (a_2 + a_4 + \cdots + a_{100})$ mod 10.
The first erased position is $p = 10a + b$ (a = 0, b = 0 means p = 100), so the partner erases $a_p$ and $a_{p+1}$.
Seeing the two blanks, the conjurer knows p, and hence a and b. He uses the sum $a'$ of the remaining odd-position digits to recover the erased odd-position digit as $(a - a') \bmod 10$.
Similarly, he uses the sum $b'$ of the remaining even-position digits to recover the erased even-position digit as $(b - b') \bmod 10$.
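Here is a sketch of that scheme in code. It assumes, as the mod-10 arithmetic suggests, that the audience writes single digits 0-9 and that positions are numbered from 1; the function names are mine:

```python
import random

def erase_position(digits):
    """Partner: given 101 digits, return the first of the two positions to erase."""
    a = sum(digits[0::2]) % 10        # a_1 + a_3 + ... + a_101 (mod 10)
    b = sum(digits[1::2]) % 10        # a_2 + a_4 + ... + a_100 (mod 10)
    return 10 * a + b or 100          # a = b = 0 encodes p = 100

def recover(board, p):
    """Conjurer: board has None at positions p and p+1; return the erased pair."""
    a, b = (p // 10, p % 10) if p < 100 else (0, 0)
    a_vis = sum(d for d in board[0::2] if d is not None) % 10
    b_vis = sum(d for d in board[1::2] if d is not None) % 10
    odd, even = (a - a_vis) % 10, (b - b_vis) % 10
    # positions p and p+1 have opposite parity; put them back in order
    return (odd, even) if p % 2 == 1 else (even, odd)

digits = [random.randrange(10) for _ in range(101)]
p = erase_position(digits)
board = list(digits)
board[p - 1] = board[p] = None        # 0-based indices of positions p, p+1
assert recover(board, p) == (digits[p - 1], digits[p])
```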
|
2017-02-19 15:49:45
|
https://www.nature.com/articles/434557a?error=cookies_not_supported&code=5104b643-b07f-475e-a1e5-060715c9358c
|
Drug safety special
Why do current safety systems sometimes fail to notice that approved drugs are causing serious adverse effects? And can surveillance methods be improved? Simon Frantz investigates.
It's 3 a.m. in a typical US emergency room, and doctors are bustling around a patient in his mid-sixties, one of several admitted that night with severe chest pain. He's anxious, sweating and struggling to breathe — classic symptoms of a heart attack.
By the following evening, he has stabilized, thanks to the prompt administration of streptokinase, which dissolved the clot that was starving his heart muscle of oxygen. It's time to examine factors such as obesity, smoking and blood cholesterol that might have contributed to the attack, and advise the patient about lifestyle changes that will help minimize the chances of a recurrence.
An estimated one million Americans are admitted to hospital under similar circumstances each year; about a third of them die. This is an all-too-common emergency, with a host of everyday causes.
No one thinks to consider whether the patient's attack might have been triggered by the pills, prescribed to ease the pain of arthritis, sitting on his bedside table back at home.
At least that was the case before 30 September 2004, when the pharmaceutical giant Merck withdrew its painkiller Vioxx from the market after it was linked to an increased risk of heart problems. The news was met with confusion, anxiety and outrage. How could the Food and Drug Administration (FDA), which is supposed to protect the US population from unsafe medicines, and which approved Vioxx for sale in May 1999, not have noticed the dangers?
Hidden dangers
Critics argue that the FDA needs to re-examine its priorities in light of the Vioxx scare (see page 554). But whatever the outcome of that debate, the drug's withdrawal has revealed how adverse effects such as heart attacks, which already occur commonly in the general population, can slip under the radar of current drug-safety surveillance. And in the wake of the Vioxx controversy, experts are examining ways in which methods of detecting such effects could be improved.
The problem with Vioxx was never going to emerge in the phase III clinical trials that are used to judge whether a drug should be put on the market. The heart attacks occurred in less than 2% of patients who had taken the drug for many months [1], whereas phase III trials are typically carried out on a few thousand patients over a timescale of several weeks. Only when hundreds of thousands of patients had taken Vioxx for an extended period would any problems become apparent. Such events can only feasibly be investigated after a drug is in general use.
Current post-marketing surveillance systems can easily detect adverse events that are unexpected and rare. For instance, the multiple sclerosis treatment Tysabri was withdrawn from the US market in February, about four months after it was given expedited approval, when two people contracted a rare brain disorder called progressive multifocal leucoencephalopathy, or PML. “PML is a very rare event, so this was like a red flag waving,” says Steven Galson, acting director of the FDA's Center for Drug Evaluation and Research.
Invisible enemy
The difficulty with common conditions such as heart attacks is that cases that might be triggered by prescription drugs readily get lost in the noise. That's especially true for patients with ailments such as arthritis, who tend to be older, and therefore more susceptible to heart disease. Conventional post-marketing surveillance schemes, which mainly rely on doctors informing drug companies of adverse events associated with their products, and companies in turn informing regulators, are almost useless in this regard.
“Spontaneous reporting systems cannot distinguish whether adverse events such as heart attacks are the result of a drug or whether it would happen in that community anyway,” says Alasdair Breckenridge, who chairs the UK Medicines and Healthcare products Regulatory Agency.
Bitter pill: health problems linked to Vioxx exposed the frailties of post-market drug safety surveillance. Credit: M. DERR/AP
In such cases, the burden falls on researchers in the field of pharmacoepidemiology, who study clinical databases containing millions of patient records, documenting information on medicines prescribed, hospitalizations and deaths. Taking their cues from potential problems highlighted in clinical trials, case reports or lab experiments, these researchers use statistical tools to study large patient populations over extended periods.
“We look through the databases to find an exposed group and a non-exposed group that are quite similar except that one group gets Vioxx, for example, and one group doesn't,” explains Susan Jick, an epidemiologist at the Boston University School of Public Health, and co-director of the Boston Collaborative Drug Surveillance Program, based in Lexington, Massachusetts. “Or you do a case-control study where you look at similar people who have had, say, a heart attack, and those who haven't, and you see whether they have taken Vioxx or not.”
This sounds simple. But in practice, it is a painstaking and labour-intensive process. The problem is particularly acute in cases in which it is difficult to determine whether an adverse event is a consequence of the underlying disease, or the medication — for example, when studying patients with heart attacks who are taking drugs to control their blood pressure. But even in less ambiguous situations, it's tough to sort patients into groups that can fairly be compared to one another.
Class divides
Wayne Ray's team demonstrated that long-term use of Vioxx increases the risk of a heart attack. Credit: D. JOHNSON/VANDERBILT UNIV.
In the case of Vioxx, which belongs to a class of drugs called COX-2 inhibitors, concerns about an elevated risk of cardiovascular disease were first raised in 2001 by a reanalysis of a study comparing patients on drugs from this class with those on an older painkiller called naproxen [2]. At the time, Merck suggested this was because of the cardioprotective effect of naproxen, rather than adverse effects of Vioxx.
Researchers led by epidemiologist Wayne Ray of the Vanderbilt University School of Medicine in Nashville suspected otherwise. Their initial study, which examined patients on naproxen and other drugs of the same class, known as non-steroidal anti-inflammatory agents, found no evidence of a cardioprotective effect [3]. But a direct examination of the risks posed by Vioxx, conducted by a collaboration including Ray's team, and led by FDA researcher David Graham, wasn't published until 2005.
The study [4] suggested that up to 140,000 excess cases of serious coronary heart disease — resulting, Graham says, in at least 26,000 deaths — could have occurred in the United States while Vioxx was on the market. But this required painstaking effort to decide which patients to include in the comparisons, examining such variables as existing cardiovascular disease and arthritis. The project drew on the skills of a cardiologist, a rheumatologist and a statistician. “You have to have all these experts who can bring their knowledge to bear,” says Ray. “It might take a few months to do the computer runs, but it takes many, many months of careful thought going into it beforehand.”
Such difficulties are compounded by a dearth of support for pharmacoepidemiological studies. “The frustration is that we have the technology, but there is very little funding for carrying out these studies,” says Brian Strom, an epidemiologist at the University of Pennsylvania School of Medicine in Philadelphia.
US government agencies don't provide significant funding for such studies. The FDA's Office of Drug Safety, which is responsible for monitoring and assessing the safety of existing drugs, has extremely limited funding for epidemiological studies of its own. A small programme at the Agency for Healthcare Research and Quality allocates just $5 million for drug-safety surveillance studies — a fraction of the roughly $450 million it costs on average to do the clinical trials on a single drug before it can be brought to market.
Aside from funding for more studies of drug safety, the main need is to develop clinical databases that pharmacoepidemiologists can readily query to test their hunches about adverse drug events. These need to incorporate full records of patients' medical history, including any hospitalizations and the medicines they were prescribed.
The largest such database is the UK General Practice Research Database, which contains over 2.5 million patient records. It is a product of Britain's nationalized healthcare system, in which family doctors are patients' first point of contact. This database has proved successful, for instance, in confirming fears that antidepressants might trigger suicidal behaviour in some patients [5]. But ideally researchers want to extend studies to the United States, where drugs are typically approved earlier, and are taken up more quickly by larger numbers of patients.
Separate sources
Here, the problem is that such data are collated by individual Health Maintenance Organizations (HMOs), such as the California-based Kaiser Permanente, whose database was used for Graham's Vioxx study. Integrating these databases, and ensuring that the data they contain have common definitions and formats, would create a powerful resource for observational studies — but it is a major challenge. “Integration is probably the biggest stumbling block,” says Donald Berry, who chairs the Department of Biostatistics and Applied Mathematics at the M. D. Anderson Cancer Center in Houston, Texas.
Nevertheless, some attempts at database integration are now getting off the ground. Eric Larson, director of the Center for Health Studies in Seattle, part of the non-profit Group Health Cooperative, is developing the Coordinated Clinical Studies Network. Involving 13 HMOs around the United States, the aim is to create an integrated data ‘warehouse’ containing clinical information on up to 18 million people, initially focusing on cardiovascular disease. The three-year project began in November 2004 and has been funded to the tune of $2.7 million by the National Institutes of Health.
Even with the funding hurdle cleared, other problems might hamper this ambitious initiative. The biggest difficulty is the lack of will to carry out this project, says Larson. For people working in private sector HMOs, devoting up to three years to setting up a network, rather than actually using it, is a daunting prospect. “This is a labour of public service,” says Larson.
## References
1. Bresalier, R. S. et al. N. Engl. J. Med. 352, 1092–1102 (2005).
2. Mukherjee, D., Nissen, S. E. & Topol, E. J. J. Am. Med. Assoc. 286, 954–959 (2001).
3. Ray, W. A., Stein, C. M., Hall, K., Daugherty, J. R. & Griffin, M. R. Lancet 359, 118–123 (2002).
4. Graham, D. J. et al. Lancet 365, 475–481 (2005).
5. Jick, H., Kaye, J. A. & Jick, S. S. J. Am. Med. Assoc. 292, 338–343 (2004).
Frantz, S. Chasing shadows. Nature 434, 557–558 (2005). https://doi.org/10.1038/434557a
2020-07-16 17:01:39
|
https://math.stackexchange.com/questions/2126521/fractal-dimension-of-a-spring-whose-wire-is-actually-a-spring-etc
Fractal dimension of a spring whose wire is actually a spring, etc.
At a high level, the video discusses how shapes can have different dimensionality depending on the scale they're observed from, an example being a spring. Far away, it looks like a line. Get closer, and it looks like a tube. Even closer, it looks like a (twisted) line again. So the narrator talks about the limit of the dimension as the length scale goes to 0. But what if the twisted line itself were a spring, so that the dimension isn't convergent? What types of "measure" would we use in that case?
• You might be interested in this. Also try a Google search for fractal dimension + DNA. Feb 2 '17 at 23:04
That is a very nice video - thanks for sharing!
A key stretch of the video to address your question is from about 17 minutes in until about the 18 1/2 minute mark. At the beginning of that stretch, he lists four specific definitions of fractal dimension, including:
• Box-counting dimension
• Hausdorff dimension
• Packing dimension
The box-counting dimension is the simplest of these and is defined by $$\text{dim}(E) = \lim_{\varepsilon\to0^{+}}\frac{\log(N_{\varepsilon}(E))}{\log(1/\varepsilon)},$$ where $N_{\varepsilon}(E)$ is the number of $\varepsilon$ mesh squares that intersect the set $E$.
Around the middle of that time stretch the video makes a compelling case that it doesn't really make sense to take the limit as $\varepsilon\to0$ but, in the words of the video, we should "look at a sufficiently wide range of scales, from very zoomed out up to very zoomed in, and compute the dimension at each one. In this applied perspective, a shape is considered to be fractal only when the measured dimension stays approximately constant across multiple scales." What this indicates is that fractals should be sets that display some degree of regularity and that, from that perspective, your set that oscillates back and forth from one dimensional to two dimensional should maybe not be considered as a fractal at all.
Having said that, I think it's pretty clear that the narrator's perspective is that of a programmer interested in applied mathematics, as opposed to that of a pure mathematician. To be clear, I think that is a valid perspective and that his narrative is spot on - from that perspective. As a pure mathematician, though, it is certainly feasible to compute a specific fractal dimension of a set like yours. They might not all agree, though. In the case of box-counting dimension, the limit might not exist, but the $\limsup$ and $\liminf$ certainly will. Thus, you can obtain an upper and a lower box-counting dimension.
For concreteness' sake, let's say that a set is regular if its upper box-counting dimension is equal to its lower box-counting dimension. From my perspective (that of a pure mathematician) there's nothing wrong with an "irregular" set; it just might not enjoy some of the properties that regular sets do. For example, the dimension of the Cartesian product of regular sets is the sum of their dimensions, i.e.: $$\dim(A\times B) = \dim(A)+\dim(B).$$ This is not necessarily true for irregular sets.
But "irregular" sets still live squarely in this domain of analysis. An example of a set that is irregular in this sense is explored in the question Minkowski Dimension of Special Cantor Set. In that question, there is a set $C'$ defined by $$C' = \left\{\sum_{i=1}^{\infty} a_i4^{-i} : a_i \in \{0,3\} \, \, \text{if} \, \, (2k)! \leq i \leq (2k+1)! \,\, \text{and arbitrary otherwise} \right\}.$$ As it turns out, the upper box-counting dimension of $C'$ is $1$ while the lower box-counting dimension is $1/2$. The reason is that the set is specifically constructed so over scales in the interval $$\left[\frac{1}{4^{(2k)!}}, \frac{1}{4^{(2(k-1))!}}\right]$$ it looks to have one dimension but over scales in the interval $$\left[\frac{1}{4^{(2k+1)!}}, \frac{1}{4^{(2k)!}}\right]$$ it looks to have a different dimension.
• If fractals are sets that display regularity in some sense wrt dimension, could you point me to the sets which don't and their field of study? Feb 3 '17 at 16:59
• @allidoiswin I don't quite share the viewpoint that sets must be regular to lie within the realm of fractal geometry, and I've added an example to the answer to reflect that. Feb 3 '17 at 17:36
https://www.hackerearth.com/practice/algorithms/dynamic-programming/introduction-to-dynamic-programming-1/practice-problems/algorithm/accomodation-a5c006f3/
Accommodation
## Algorithms, Dynamic Programming
There is a hotel with M floors. The $i^{th}$ floor of the hotel has infinitely many identical rooms, each of which can accommodate $C[i]$ people (two rooms on the same floor are indistinguishable and have the same capacity, while rooms on different floors have different capacities).
There is one rule:
Any room on the $i^{th}$ floor must accommodate exactly $C[i]$ people (no fewer, no more).
Now N identical people come for accommodation. You can assign any of them to any room on any floor, following the rule above.
Way of assigning:
Suppose we have 5 people and 3 floors, where floor 1 has room capacity 1 and floor 2 has room capacity 2. Then:
(1,2,2) is a way of assigning people. This means we assign one person out of those 5 people to any room of floor 1. The remaining 4 people are assigned to two rooms of floor 2, each room accommodating 2 people.
We will consider (1,2,2), (2,1,2), (2,2,1) as the same ways as we can't differentiate between them.
You have to count the number of different ways of accommodating all N people.
Two ways are considered different if one way is not a permutation of the other.
Input Format:
First line consists of two integers M and N, denoting number of floors and number of people respectively.
Second line consists of M space separated integers denoting capacity of floors. $i^{th}$ integer denotes capacity of $i^{th}$ floor.
Output Format:
Print the number of different ways of accommodating people.
Since the number of ways can be large, print the answer modulo $10^9+7$.
Input Constraints:
$1 \le N * M \le 10^6$
$1 \le C[i] \le 10^6$
All $C[i]$ are different.
SAMPLE INPUT
3 5
1 2 3
SAMPLE OUTPUT
5
Explanation
We can assign as follows:
(1,1,1,1,1) : assign each of the 5 people to rooms on the first floor.
(1,1,1,2) : assign 3 people to rooms on the first floor, 2 people to a room on the second floor.
(1,1,3) : assign 2 people to rooms on the first floor, 3 people to a room on the third floor.
(1,2,2) : assign 4 people to rooms on the second floor with each room having 2 people, 1 person to a room on the first floor.
(2,3) : assign 2 people to a room on the second floor, 3 people to a room on the third floor.
Time Limit: 1.0 sec(s) for each input file.
Memory Limit: 256 MB
Source Limit: 1024 KB
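Since ways that differ only by permutation are identified, this is the classic unordered coin-change count: process one capacity at a time, so each multiset of room capacities is counted exactly once. Here is a hedged sketch of that standard approach (my own illustration, not the site's editorial solution):

import sys

MOD = 10**9 + 7

def count_ways(capacities, n):
    # dp[t] = number of multisets of capacities summing to exactly t
    dp = [0] * (n + 1)
    dp[0] = 1                      # one way to house zero people: use no rooms
    for c in capacities:           # fixing the capacity order removes permutations
        for total in range(c, n + 1):
            dp[total] = (dp[total] + dp[total - c]) % MOD
    return dp[n]

def main():
    m, n = map(int, sys.stdin.readline().split())
    capacities = list(map(int, sys.stdin.readline().split()))
    print(count_ways(capacities, n))

if __name__ == "__main__":
    main()

On the sample input (capacities 1 2 3, N = 5) this prints 5, matching the five assignments in the explanation, and the double loop is O(N·M), which fits the constraint $1 \le N * M \le 10^6$.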
https://www.jobilize.com/course/section/two-problem-formulations-by-openstax?qcr=www.quizover.com
# 0.10 Constrained least squares (CLS) problem
## Two problem formulations
As mentioned in [link] , one can address problem [link] in two ways depending on how one views the role of the transition band in a CLS problem. The original problem posed by Adams in [link] can be written as follows,
$$\begin{array}{ll} \underset{h}{\min} & \|D(\omega)-H(\omega;h)\|_2 \\ \text{subject to} & |D(\omega)-H(\omega;h)| \le \tau \quad \forall\, \omega \in [0,\omega_{pb}] \cup [\omega_{sb},\pi] \end{array}$$
where $0 < \omega_{pb} < \omega_{sb} < \pi$. From a traditional standpoint this formulation feels familiar. It assigns fixed frequencies to the transition band edges as a number of filter design techniques do. As it turns out, however, one might not want to do this in CLS design.
An alternate formulation to [link] could implicitly introduce a transition frequency $\omega_{tb}$ (where $\omega_{pb} < \omega_{tb} < \omega_{sb}$); the user only specifies $\omega_{tb}$. Consider
$$\begin{array}{lll} \underset{h}{\min} & \|D(\omega)-H(\omega;h)\|_2 & \forall\, \omega \in [0,\pi] \\ \text{subject to} & |D(\omega)-H(\omega;h)| \le \tau & \forall\, \omega \in [0,\omega_{pb}] \cup [\omega_{sb},\pi] \end{array}$$
The algorithm at each iteration generates an induced transition band in order to satisfy the constraints in [link]. Therefore $\{\omega_{pb},\omega_{sb}\}$ vary at each iteration.
It is critical to point out the differences between [link] and [link]. [link].a explains Adams' CLS formulation, where the desired filter response is only specified at the fixed pass and stop bands. At any iteration, Adams' method attempts to minimize the least squares error ($\epsilon_2$) at both bands while trying to satisfy the constraint $\tau$. Note that one could think of the constraint requirements in terms of the Chebyshev error $\epsilon_\infty$ by writing [link] as follows,
$$\begin{array}{ll} \underset{h}{\min} & \|D(\omega)-H(\omega;h)\|_2 \\ \text{subject to} & \|D(\omega)-H(\omega;h)\|_\infty \le \tau \quad \forall\, \omega \in [0,\omega_{pb}] \cup [\omega_{sb},\pi] \end{array}$$
In contrast, [link].b illustrates our proposed problem [link]. The idea is to minimize the least squared error $\epsilon_2$ across all frequencies while ensuring that constraints are met in an intelligent manner. At this point one can think of the interval $(\omega_{pb},\omega_{sb})$ as an induced transition band, useful for the purposes of constraining the filter. [link] presents the actual algorithms that solve [link], including the process of finding $\{\omega_{pb},\omega_{sb}\}$.
It is important to note an interesting behavior of transition bands and extrema points in $l_2$ and $l_\infty$ filters. [link] shows $l_2$ and $l_\infty$ length-15 linear phase filters (designed using Matlab's firls and firpm functions); the transition band was specified at $\{\omega_{pb}=0.4/\pi, \omega_{sb}=0.5/\pi\}$. The dotted $l_2$ filter illustrates an important behavior of least squares filters: typically the maximum error of an $l_2$ filter is located at the transition band. The solid $l_\infty$ filter shows why minimax filters are important: despite their larger error across most of the bands, the filter shows the same maximum error at all extrema points, including the transition band edge frequencies. In a CLS problem, then, an algorithm will typically attempt to iteratively reduce the maximum error (usually located around the transition band) of a series of least squares filters.
Another important fact results from the relationship between the transition band width and the resulting error amplitude in $l_\infty$ filters. [link] shows two $l_\infty$ designs; the transition bands were set at $\{0.4/\pi, 0.5/\pi\}$ for the solid line design, and at $\{0.4/\pi, 0.6/\pi\}$ for the dotted line one. One can see that widening the transition band induces a decrease in error ripple amplitude.
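The $l_2$ versus $l_\infty$ contrast described above is easy to reproduce numerically. Below is a small, assumption-laden sketch using SciPy's firls (least squares) and remez (equiripple) routines, which play the roles of Matlab's firls and firpm; the length and band edges mirror the length-15 example, but this is an illustration rather than code from the original chapter:

import numpy as np
from scipy.signal import firls, remez, freqz

numtaps = 15
# Band edges in normalized frequency with fs = 2, so 1.0 is the Nyquist
# frequency: passband [0, 0.4], transition (0.4, 0.5), stopband [0.5, 1.0].
bands = [0.0, 0.4, 0.5, 1.0]

h_l2 = firls(numtaps, bands, [1, 1, 0, 0], fs=2)    # least squares design
h_linf = remez(numtaps, bands, [1, 0], fs=2)        # minimax (equiripple) design

for name, h in (("firls (l2)", h_l2), ("remez (linf)", h_linf)):
    w, H = freqz(h, worN=2048)
    stop = np.abs(H[w >= 0.5 * np.pi])              # response on the stopband
    print(name, "max stopband error:", stop.max())

The least squares filter should show its largest error concentrated near the transition band edge, while the equiripple design spreads the same maximum error across all of its extrema, as the discussion above describes.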
https://cs.stackexchange.com/questions/11183/how-to-find-the-pumping-length-of-a-context-free-language
# How to find the pumping length of a context-free language?
Please help me understand how to determine a pumping length $p$, and if possible give some tips.
Suppose I have the following example:
Let $G$ be a context-free grammar with set of variables $\{S,A,B,C\}$, set of terminals $\{0,1\}$, start variable $S$, and rules
$S \to ABA \mid SS$
$A \to S0 \mid 1C1$
$B \to S1 \mid 0$
$C \to 0$
Now given the above, how do I find the pumping length $p$?
Please explain how you actually got it from the grammar.
• Are you trying to prove the language isn't regular, or that it isn't context free? The pumping lemmas for either case are not suitable for proving that a language is regular or context free, just that they *aren't*. – Luke Mathieson Apr 10 '13 at 4:01
• @LukeMathieson Proving that it is NOT context-free, and I need help to determine the pumping length K – Gaak Apr 10 '13 at 4:13
• if you just need to prove $L$ is not CFL, you usually don't need to explicitly know the pumping length $p$, but just assume it exists. To find $p$ is a different question (and quite interesting one!). As a short and imprecise comment I can say that if you have a grammar $G$ then $p \le |R|$ where $R$ is the set of production rules of that CFG. – Ran G. Apr 10 '13 at 5:44
• @Luke I think he's trying to answer an assignment question which is precisely this - what is the pumping length $p$ corresponding to the grammar. That's how I interpret the sentence "please explain how you actually got it from the grammar". – Yuval Filmus Apr 10 '13 at 6:25
• Look at the proof of the Pumping Lemma, the answer is there. Note that "the" Pumping length is not unique; clearly, for any Pumping lengths $p$ all $n \geq p$ are also Pumping lengths. Therefore, $2^{100}$ is probably a correct answer, if uninstructive. – Raphael Apr 10 '13 at 9:00
As Raphael says in the comments, any sufficiently large number is suitable as a pumping length, however (also as he points out), the proof of the pumping lemma gives an upper bound for the minimum pumping length.
Similar to the pumping lemma for regular languages, we want to take a length that is sufficiently large that we can guarantee we have used the same production at least twice. For regular languages this is often easy to think of as the number of states in a minimal DFA for that language.
For context free languages expressed with a grammar, we can take a similar approach and ask "how many symbols can be in the string before I know that I must have used some production twice?". To figure this out you may want to consider how many symbols any production could insert into the string, and what the parse tree for a sufficiently long string may look like (you can consider each level of the tree to be one production). If you do this correctly you can get a minimal pumping length for that grammar (not exactly a precise concept), then taking the minimum of these minimal lengths over all grammars for the language will give you the minimum for the language.
Further hint:
If there's a rule with $m$ symbols on the right hand side, and the parse tree is $d$ levels deep, what's the maximum number of symbols in the string?
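To make the hint concrete for the grammar in the question (my own back-of-the-envelope numbers, following the standard proof as in Sipser, and setting aside the $\varepsilon$-production subtleties discussed in the comments below): the longest right-hand side has $m = 3$ symbols and there are $|V| = 4$ variables, so a parse tree of depth $d$ yields a string of length at most $3^d$, and any string of length at least $3^{4+1} = 243$ forces a root-to-leaf path containing at least $5$ variables, two of which must coincide. Hence $p = 243$ is a valid, if generous, pumping length for this grammar.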
• I think we have to restrict ourselves w.l.o.g. to chain- and $\varepsilon$-free CFGs, as does the proof (?). – Raphael Apr 12 '13 at 13:20
• @Raphael not the ones I have here (Sipser and Kinber & Smith), you only need that a long enough terminal string guarantees that the parse tree has a minimum depth. – Luke Mathieson Apr 13 '13 at 1:46
• I think cycles generating $\varepsilon$ have to be avoided, since those can not be pumped to generate longer words. If I remember correctly, the proofs I have seen use Chomsky normal form which does not have that problem. – Raphael Apr 13 '13 at 13:03
• Ah, the proofs "here" implicitly avoid that by starting with a lemma that any string of length at least $p=w^{d}$ has a parse tree of depth at least $d$ where $w$ is the maximum number of symbols on right of a production - so any productions that lead to $\varepsilon$ don't add to the length $p$, effectively restricting our interest to paths that actually produce a non-$\varepsilon$ character. – Luke Mathieson Apr 14 '13 at 2:11
https://livesabundantly.com/if9rr/non-linear-relationship-formula-801f2f
In our next article, we explain the foundations of functions. Here, we give a breakdown of the most common non-linear relationships and how to sketch them on the Cartesian plane.

A linear relationship is a trend in the data that can be modelled by a straight line, $y = mx + b$, and has a constant slope: each unit change in $x$ produces the same change in $y$. For example, if the amount of a material is doubled, its weight will also double, and an airline may find that every dollar increase in the price of a gallon of jet fuel raises the cost of a flight by a fixed amount. A non-linear relationship instead graphs as a curve whose slope varies from point to point.

Parabolas. The basic parabola $y = x^2$ has its vertex at $(0,0)$ and opens upwards. A constant inside the square shifts the vertex horizontally, with the opposite sign to the constant: $y = (x-4)^2$ moves the vertex right by $4$. A constant outside the square shifts the vertex vertically, so $y = (x-4)^2 - 4$ has vertex $(4,-4)$. A scaling constant $a$ in $y = ax^2$ makes the parabola steeper (compare $y = 3x^2$ with $y = x^2$), and a minus sign in front of the $x^2$ reflects the curve about the $x$-axis, changing its direction from upwards to downwards without moving the vertex.

Cubics. The basic cubic $y = x^3$ has its point of inflexion (POI) at $(0,0)$ and runs from bottom-left to top-right. The transformations are exactly the same as for the parabola: $y = (x+3)^3$ shifts the POI left by $3$, $y = (x-2)^3$ shifts it right by $2$, $y = x^3 + 3$ and $y = x^3 - 2$ shift it up and down, and a minus sign out the front ($y = -x^3$) reverses the direction while leaving the POI unchanged.

Hyperbolas. The basic hyperbola $y = \frac{1}{x}$ has a vertical asymptote $x = 0$ and a horizontal asymptote $y = 0$, and lies in the first and third quadrants. Replacing $x$ by $x-4$, as in $y = \frac{1}{x-4}$, moves the vertical asymptote to $x = 4$; adding a constant $c$ outside the fraction moves the horizontal asymptote to $y = c$; and a minus sign in front of the fraction places the curve in the second and fourth quadrants instead. A larger scaling constant $a$ in $y = \frac{a}{x}$ pushes the curve further from the origin (compare $y = \frac{2}{x}$ with $y = \frac{1}{x}$). An expression such as $y = \frac{x+5}{x+2}$ can be put in this form by splitting the numerator into $(x+2)+3$, giving $y = 1 + \frac{3}{x+2}$: a hyperbola shifted left by $2$ and up by $1$.

Circles. A circle with centre $(a,b)$ and radius $r$ has equation $(x-a)^2 + (y-b)^2 = r^2$. If the right-hand side is given as a number, remember to square-root it to find the radius: $x^2 + y^2 = 16$ is the circle with centre $(0,0)$ and radius $4$, and $(x-4)^2 + (y+3)^2 = 4$ has centre $(4,-3)$ and radius $2$.

Non-linear systems. If one equation in a system is non-linear, elimination is usually out of the question, but substitution still works. Given $x - 4y = 3$ and $xy = 6$, solve the linear equation for $x = 3 + 4y$ and substitute it into the second: $(3+4y)y = 6$, a quadratic in $y$. Because you find two solutions for $y$, you substitute them both back to get two different coordinate pairs, which represent the intersections of the line and the curve.

Non-linear models in statistics. A non-linear regression model such as $y = ae^{bx}U$, with parameters $a$ and $b$ and multiplicative error term $U$, fits data to a model that is then expressed as a mathematical function. Note that the Pearson correlation coefficient measures only the linear relationship between two variables: strongly associated variables can have a coefficient close to zero if the association is non-linear, in which case Spearman's (non-parametric) rank correlation is more appropriate. When polynomial terms are not flexible enough to capture a non-linear relationship, spline terms or generalized additive models (GAMs), which fit a spline regression automatically, can be used.
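To check the substitution example symbolically, here is a minimal sketch using SymPy (an illustration of the worked example above, not part of the original article):

from sympy import symbols, solve, Eq

x, y = symbols("x y")

# One linear and one non-linear equation: substitution reduces the pair
# to a single quadratic in y, so there are (up to) two intersection points.
solutions = solve([Eq(x - 4*y, 3), Eq(x*y, 6)], [x, y])
print(solutions)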
https://blender.stackexchange.com/questions/162391/how-to-get-the-vertices-from-an-edge
# How to get the vertices from an edge
I make a selection of specific edges
indices = [442, 443, 444, 445]
edges = [e for e in bm.edges]
ee0 = []
for vert in edges:
    if vert.index in indices:
        ee0 = ee0 + [vert]
Now I would like to get the vertices connected to these edges. So I tried
verts = [v for e in ee0 if e.select for v in e.verts]
The result, verts, is empty. How can I get the vertices connected to an edge?
The reason your result is empty is that (I'm assuming) the edges in ee0 are not selected, so the if e.select filter drops all of them.
Remove the if e.select:
verts = [v for e in ee0 for v in e.verts]
To the questioner: please ignore this answer. It is for others who may come across this question looking for code to get bmesh elements from their indices, or for those who, like me, cannot see "not nice" code without suggesting an alteration. Rather than leaving a long comment, I have added this as an alternate answer.
If we have the indices of vertices, we can look them up via bm.verts[index] after calling bm.verts.ensure_lookup_table() to ensure the indexing. Similarly for edges and faces. We need only loop over the indices.
[bm.edges[i] for i in edge_indices]
Looping over every vertex and checking if v.index == index, or worse if v.index in indices, then appending with verts = verts + [v] (use verts.append(v) instead), is both grossly inefficient and, as shown, not required.
Use a set for the verts in the edges so that each vert occurs only once: in the list comprehension the same vert would otherwise be added once for every edge in the list that contains it. Cast it back to a list if need be: list(set(verts)).
import bpy
import bmesh

context = bpy.context
ob = context.object
me = ob.data

# Build a bmesh from the object's mesh data
bm = bmesh.new()
bm.from_mesh(me)

edge_indices = [0, 2, 4]

# Required before indexed access like bm.edges[i]
bm.edges.ensure_lookup_table()
edges = [bm.edges[i] for i in edge_indices]

# A set keeps each vertex once, even if it belongs to several of the edges
all_verts_in_edges = set(v for e in edges for v in e.verts)

print(edges)
print(all_verts_in_edges)
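If you go on to modify the bmesh and want the changes written back into the object, the usual follow-up steps (standard bmesh workflow, added here as a reminder rather than as part of the original answer) are:

bm.to_mesh(me)   # write the bmesh back into the mesh datablock
bm.free()        # free the bmesh explicitly
me.update()      # refresh the mesh data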
https://cs.stackexchange.com/questions/42681/best-pathfinding-algorithm-for-undirected-unweighted-graph
# Best pathfinding algorithm for undirected unweighted graph [closed]
I have an unweighted undirected graph in which every node is connected to an average of two hundred other nodes (the nodes are people from a social network). What will be the fastest algorithm to find the shortest path from one node to another?
• What have you tried? What research have you done? What algorithms have you considered? Have you tried benchmarking any of them? We expect you to do research on your own before asking, and to show us in the question what you've tried. See cs.stackexchange.com/help/how-to-ask. The obvious answer is BFS; so what research have you done into that method? – D.W. May 18 '15 at 4:27
• Maybe these slides are helpful. – Juho May 18 '15 at 6:28
• @Juho thanks, those slides are really helpful. – Viacheslav Kravchenko May 18 '15 at 13:55
• @D.W. I think this question is too easy for the computer science community for me to need to do my own research and benchmark all possible algorithms. – Viacheslav Kravchenko May 18 '15 at 14:00
• You're saying this question should be so easy for the CS.SE answerers that you don't think you should have to do any research on your own before asking? That's not how it works. – D.W. May 18 '15 at 15:30
Assuming you don't have any heuristic function for the distance to the target, a good and valid solution is bi-directional BFS:
Algorithm idea: do a BFS search simultaneously from the source and the target: [BFS until depth 1 in both, until depth 2 in both, ....].
The algorithm will end when you find a vertex v which is in both BFS fronts.
Algorithm behavior: the vertex v that terminates the algorithm's run will be roughly in the middle between the source and the target.
This algorithm will yield much better results than BFS from the source in most cases [an explanation of why it is better than BFS follows], and will surely provide an answer if one exists.
Why is it better than BFS from the source?
Assume the distance from source to target is k, and the branching factor is B [every vertex has B edges].
BFS will open: 1 + B + B^2 + ... + B^k vertices.
bi-directional BFS will open: 2 + 2B + 2B^2 + 2B^3 + ... + 2B^(k/2) vertices.
for large B and k, the second is obviously much better than the first.
NOTE that this solution does NOT require storing the whole graph in memory; it only requires implementing a function successor(v) which returns all the successors of a vertex [all vertices you can get to within 1 step from v]. With this, only the nodes you open [2 + 2B + ... + 2B^(k/2), as explained above] need to be stored. To further save memory, you can use Iterative Deepening DFS from one direction instead of BFS, but it will consume more time.
In your case, the difference between BFS and bi-directional BFS, assuming 6 degrees of separation, is between developing ~200^6 = 6.4e13 nodes and 2*200^3 = 16,000,000. As you can see, bi-directional BFS needs to develop far fewer nodes than the alternative.
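A minimal sketch of the idea in Python (my own illustration of the standard technique, not code from this answer), assuming only a successors(v) function as described above:

def bidirectional_bfs(successors, source, target):
    """Length of a shortest path from source to target, or None if none exists."""
    if source == target:
        return 0
    # Distance maps double as the visited sets of the two searches.
    dist_s, dist_t = {source: 0}, {target: 0}
    frontier_s, frontier_t = {source}, {target}
    while frontier_s and frontier_t:
        # Expand the smaller frontier first to keep the searches balanced.
        if len(frontier_s) > len(frontier_t):
            frontier_s, frontier_t = frontier_t, frontier_s
            dist_s, dist_t = dist_t, dist_s
        next_frontier = set()
        for u in frontier_s:
            for v in successors(u):
                if v in dist_t:                  # the two fronts have met
                    return dist_s[u] + 1 + dist_t[v]
                if v not in dist_s:
                    dist_s[v] = dist_s[u] + 1
                    next_frontier.add(v)
        frontier_s = next_frontier
    return None

graph = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2, 4], 4: [3]}
print(bidirectional_bfs(lambda v: graph[v], 0, 4))   # prints 3

Because both searches advance one full level at a time and no vertex is ever settled by both sides, the first contact between the fronts yields a shortest path length.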
Disclaimer: Most of this answer (except the last part) is copied from my answer on StackOverflow.com, as suggested in meta (and effectively applied by a mod in security.SE)
• Thanks for the deep explanation. I was thinking about bi-directional BFS but was wondering if there are some better solutions. But it seems like BFS is the best solution in my case. – Viacheslav Kravchenko May 18 '15 at 14:03
https://cduu.wordpress.com/category/latex/
|
### Archive
Archive for the ‘LaTeX’ Category
You can set the general depth of the contents listing using:
\setcounter{tocdepth}{n}, where n is the level, starting with 0 (chapters only),
and the depth to which sectional headings are numbered using:
\setcounter{secnumdepth}{n}, where n is the level, starting with 0 (chapters only),
in the preamble (i.e., before \begin{document}). This will work for the whole document.
Categories: LaTeX
You can set the general depth of the contents listing using:
\setcounter{tocdepth}{n} where n is the level, starting with 0 (chapters only)
in the preamble (i.e., before \begin{document}). This will work for the whole document.
If you want to have the appendices listed only as chapters, you can use the package tocvsec2, which allows you to change the depth level in different parts of the document. You can set the level up to sections in one part with:
\settocdepth{chapter}
and later set it to sections or subsections. The level will remain the way you set it until the next \settocdepth command. Be sure not to have a line \renewcommand{\tableofcontents} right before the \settocdepth.
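A minimal sketch of how the two commands interact (assuming the tocvsec2 package is installed; the chapter and section titles are placeholders):

```latex
\documentclass{report}
\usepackage{tocvsec2}
\begin{document}
\tableofcontents
\settocdepth{section}  % chapters and sections are listed from here on
\chapter{Main matter}
\section{This section appears in the ToC}
\settocdepth{chapter}  % from here on, only chapters are listed
\appendix
\chapter{First appendix}
\section{This section does not appear in the ToC}
\end{document}
```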
Categories: LaTeX
## List with small line spacing
\usepackage{paralist}
\begin{compactitem}
\item {this}
\item {that}
\end{compactitem}
Categories: LaTeX
## JabRef entry types
• @article: a journal paper
• @inproceedings: a conference paper
• @mastersthesis: a Master's/diploma thesis or similar
• @techreport: a technical report, e.g., from IMISE or Ontomed
• @incollection: a contribution to a book, e.g., Telemedizinführer
• @proceedings: conference proceedings, conference report, publication
• @misc: when nothing else applies
Categories: LaTeX
## Page margins
There are a few commands to redefine the page layout:
| Command | Meaning |
| --- | --- |
| \baselinestretch | A decimal value for the line spacing. Example: to set double spacing for your document, use \renewcommand{\baselinestretch}{2}. |
| \textwidth | The normal width of the text on the page. To change this, use \setlength{\textwidth}{x}, where x is a length. NOTE: if you change the text width, you will almost certainly want to change \evensidemargin and \oddsidemargin as well. |
| \textheight | The normal height of the body of a page. |
| \oddsidemargin | One inch less than the distance from the left edge of the paper to the left margin of the text on right-hand pages. |
| \evensidemargin | The same as \oddsidemargin, except for left-hand pages. |
| \marginparwidth | The width of marginal notes. |
| \marginparsep | The amount of horizontal space between the outer margin and a marginal note. |
| \topmargin | One inch less than the distance from the top edge of the paper to the top of the page's head. |
| \headsep | The amount of vertical space between the header and the body of a page. |
| \topskip | The minimum distance from the top of the body to the bottom of the first line of text. |
| \footheight | The height of a box containing the page's footer. |
| \footskip | The distance from the bottom of the last line of text in the body to the bottom of the footer. |
You can use the length commands like this:

| Command | Meaning |
| --- | --- |
| \newlength{cmd} | define cmd to be a new length |
| \setlength{cmd}{len} | set the length cmd to len |
| \settowidth{cmd}{txt} | set cmd to the width of txt |
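Putting a few of these together, a preamble might look like the following sketch (the specific lengths are arbitrary examples, not recommendations):

```latex
\documentclass{article}
% All of these settings go before \begin{document}.
\setlength{\textwidth}{16cm}
\setlength{\textheight}{24cm}
\setlength{\oddsidemargin}{0cm}   % text starts 1in from the left edge
\setlength{\evensidemargin}{0cm}
\setlength{\topmargin}{-0.5cm}
\renewcommand{\baselinestretch}{1.5}
\begin{document}
Body text with the adjusted layout.
\end{document}
```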
### Units
| Unit | Meaning |
| --- | --- |
| cm | centimetres |
| em | the width of the letter M in the current font |
| ex | the height of the letter x in the current font |
| in | inches |
| pc | picas (1pc = 12pt) |
| pt | points (1in = 72.27pt) |
| mm | millimetres |
## \footheight
The \footheight command will not affect the way the style works with LaTeX 2.09. You can use the other commands, such as \textheight25.5cm, instead to format your page.
Categories: LaTeX
## Font sizes
Note that the font size definitions are set by the document class. Depending on the document class, the actual font size may differ from those listed below, and not every document class has unique sizes for all 10 size commands.
Absolute Point Sizes in the article, proc, report, book, and letter Document Classes

| size | 10pt (default) | 11pt option | 12pt option |
| --- | --- | --- | --- |
| \tiny | 6.80565 | 7.33325 | 7.33325 |
| \scriptsize | 7.97224 | 8.50012 | 8.50012 |
| \footnotesize | 8.50012 | 9.24994 | 10.00002 |
| \small | 9.24994 | 10.00002 | 10.95003 |
| \normalsize | 10.00002 | 10.95003 | 11.74988 |
| \large | 11.74988 | 11.74988 | 14.09984 |
| \Large | 14.09984 | 14.09984 | 15.84985 |
| \LARGE | 15.84985 | 15.84985 | 19.02350 |
| \huge | 19.02350 | 19.02350 | 22.82086 |
| \Huge | 22.82086 | 22.82086 | 22.82086 |
Absolute Point Sizes in the memoir, amsart, and amsbook Document Classes

| size | 10pt (default) | 11pt option | 12pt option |
| --- | --- | --- | --- |
| \tiny | 7.33325 | 7.97224 | 8.50012 |
| \scriptsize | 7.97224 | 8.50012 | 9.24994 |
| \footnotesize | 8.50012 | 9.24994 | 10.00002 |
| \small | 9.24994 | 10.00002 | 10.95003 |
| \normalsize | 10.00002 | 10.95003 | 11.74988 |
| \large | 10.95003 | 11.74988 | 14.09984 |
| \Large | 11.74988 | 14.09984 | 15.84985 |
| \LARGE | 14.09984 | 15.84985 | 19.02350 |
| \huge | 15.84985 | 19.02350 | 22.82086 |
| \Huge | 19.02350 | 22.82086 | 22.82086 |
Absolute Point Sizes in the slides Document Class

| size | point size |
| --- | --- |
| \tiny | 17.27505 |
| \scriptsize | 20.73755 |
| \footnotesize | 20.73755 |
| \small | 20.73755 |
| \normalsize | 24.88382 |
| \large | 29.86258 |
| \Large | 35.82510 |
| \LARGE | 43.00012 |
| \huge | 51.60014 |
| \Huge | 51.60014 |
Absolute Point Sizes in the beamer Document Class

| size | 10pt (default) | 11pt option | 12pt option |
| --- | --- | --- | --- |
| \tiny | 5.31258 | 6.37509 | 6.37509 |
| \scriptsize | 7.43760 | 8.50012 | 8.50012 |
| \footnotesize | 8.50012 | 9.24994 | 10.00002 |
| \small | 9.24994 | 10.00002 | 10.95003 |
| \normalsize | 10.00002 | 10.95003 | 11.74988 |
| \large | 11.74988 | 11.74988 | 14.09984 |
| \Large | 14.09984 | 14.09984 | 16.24988 |
| \LARGE | 16.24988 | 16.24988 | 19.50362 |
| \huge | 19.50362 | 19.50362 | 23.39682 |
| \Huge | 23.39682 | 23.39682 | 23.39682 |
## Font styles
There are three main font families: roman (e.g., Times), sans serif (e.g., Arial) and monospace (e.g., Courier). You can also specify styles such as italic and bold.
The following table lists the commands you will need to access the typical font styles:
| LaTeX command | Equivalent to | Output style | Remarks |
| --- | --- | --- | --- |
| \textnormal{…} | {\normalfont …} | document font family | this is the default or normal font |
| \emph{…} | {\em …} | emphasis | typically italics |
| \textrm{…} | {\rmfamily …} | roman font family | |
| \textsf{…} | {\sffamily …} | sans serif font family | |
| \texttt{…} | {\ttfamily …} | teletype font family | this is a fixed-width or monospace font |
| \textup{…} | {\upshape …} | upright shape | the same as the normal typeface |
| \textit{…} | {\itshape …} | italic shape | |
| \textsl{…} | {\slshape …} | slanted shape | a skewed version of the normal typeface (similar to, but slightly different from, italics) |
| \textsc{…} | {\scshape …} | Small Capitals | |
| \uppercase{…} | | uppercase (all caps) | also \lowercase{…}; there are some caveats, though |
| \textbf{…} | {\bfseries …} | bold | |
| \textmd{…} | {\mdseries …} | medium weight | a font weight in between normal and bold |
You may have noticed the absence of underline. Although this is available via the \underline{…} command, text underlined in this way will not break properly. This functionality has to be added with the ulem (underline emphasis) package. Put \usepackage{ulem} in your preamble. By default, this overrides the \emph command with the underline rather than the italic style. It is unlikely that you want this effect, so it is better to stop ulem taking over \emph and simply call the underline command as and when it is needed (see the sketch after the list below).
• To restore the usual em formatting, add \normalem straight after the document environment begins. Alternatively, use \usepackage[normalem]{ulem}.
• To underline, use \uline{…}.
• To add a wavy underline, use \uwave{…}.
• And for a strike-out \sout{…}.
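A minimal sketch combining these options (the sentence is placeholder text):

```latex
\documentclass{article}
\usepackage[normalem]{ulem}  % keep \emph as italics
\begin{document}
\emph{Still italic}, \uline{underlined}, \uwave{wavy},
and \sout{struck out} text.
\end{document}
```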
Categories: LaTeX
## BibTeX and bibliography styles
In your LaTeX file, these two commands insert the reference/bibliography section in your publication:
\bibliography{xxx}
\bibliographystyle{yyy}
The "xxx" is the name of the bib file (xxx.bib) containing the reference database; e.g., \bibliography{mybiblio} would use the file "mybiblio.bib".
The "yyy" is a style name. See some of the available styles in the section below. You can also use your own style file (.bst) with this command.
You can also use a subfolder for your styles:
\bibliographystyle{./Styles/mystyle}
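For instance, a minimal document using a hypothetical mybiblio.bib that contains an entry keyed einstein1905 would look like:

```latex
\documentclass{article}
\begin{document}
As shown in the original paper~\cite{einstein1905}, \ldots
\bibliographystyle{plain}  % the "yyy" part
\bibliography{mybiblio}    % the "xxx" part; reads mybiblio.bib
\end{document}
```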
## Bibliography styles
Here you can find some bibliography styles.
Here you can find some bibliography styles for German texts.
The chicago style is one of my favorite styles.
The PDF file bibstyles.pdf illustrates how these bibliographic styles render citations and reference entries:
1: ieeetr 2: unsrt 3: IEEE 4: ama 5: cj 6: nar 7: nature 8: phjcp 9: is-unsrt 10: plain 11: abbrv 12: acm 13: siam 14: jbact 15: amsplain 16: finplain 17: IEEEannot 18: is-abbrv 19: is-plain 20: annotation 21: plainyr 22: decsci 23: jtbnew 24: neuron 25: cell 26: jas99 27: abbrvnat 28: ametsoc 29: apalike 30: jqt1999 31: plainnat 32: jtb 33: humanbio 34: these 35: chicagoa 36: development 37: unsrtnat 38: amsalpha 39: alpha 40: annotate 41: is-alpha 42: wmaainf 43: alphanum 44: apasoft
If you want to edit a .bst file, you should have a look at an existing one, and if you want to create a new one you can use makebst (command: latex makebst).
## Compiling the document and bibliography
To fully compile and cross-link references you have to repeat some commands. To create a .dvi or .pdf file use the following commands:
| Step | To create a .dvi file | To create a .pdf file | Result |
| --- | --- | --- | --- |
| 1 | latex mydocument | pdflatex mydocument | creates the .aux file, which includes the keys of any citations |
| 2 | bibtex mydocument | bibtex mydocument | uses the .aux file to extract cited publications from the database in the .bib file, formats them according to the indicated style, and puts the results into a .bbl file |
| 3 | latex mydocument | pdflatex mydocument | inserts appropriate reference indicators at each point of citation, according to the indicated bibliography style |
| 4 | latex mydocument | pdflatex mydocument | refines citation references and other cross-references, page formatting, and page numbers |
Categories: LaTeX
https://math.stackexchange.com/questions/3631257/a-variable-parabola-touches-the-x-axis-and-y-axis-at-a1-0-and-b0-1
# A variable parabola touches the $x$-axis and $y$-axis at $A(1,0)$ and $B(0,1)$. Find the locus of its focus.
A variable parabola touches the $$x$$-axis and $$y$$-axis at $$A(1,0)$$ and $$B(0,1)$$ on the co-ordinate plane respectively. Now, we are required to find the locus of the focus of this variable parabola.
The process to arrive at this locus is a standard one and goes as follows.
Starting with facts (observations),
1. The parabola has $$x$$- and $$y$$-axes as its tangents, and it lies in the first quadrant
2. We know that these tangents intersect orthogonally and hence the intersection point lies on its directrix.
3. Since the directrix passes through the origin let its equation be $$y=mx$$.
4. Now $$A(1,0)$$ and $$B(0,1)$$ lie on the parabola; hence, if we define the focus as $$F(h,k)$$, we find from the definition of a parabola that \begin{align} FA &= \text{(distance from A to the directrix)} \\ FB &= \text{(distance from B to the directrix)} \end{align} Hence we have sufficient conditions to get the locus.
Writing, $$(FA)^2 = (h-1)^2 + (k-0)^2 = \frac{|(0)-m(1)|^2}{1+m^2}$$
$$(FB)^2 = (h-0)^2 + (k-1)^2 = \frac{|(1)-m(0)|^2}{1+m^2}$$
Adding both and simplifying we get the locus of $$F(h,k)$$ as,
$$x^2 + y^2 - x - y + 0.5 = 0$$
This looks like an imaginary equation which doesn't give the locus of $$F(h,k)$$. So my question is: how should I interpret this result? What does it mean to have a set of imaginary focal points? Or is there any reason to claim that my solution process is wrong? If so, what is the correct way to obtain the locus of $$F(h,k)$$?
Contrary to what you wrote, $$x=y=\frac12$$ satisfies your last equation: the circle isn’t imaginary but consists of a single point. This is as it should be: two points and the tangents at those points uniquely determine a parabola.
• @MukunthA.G The equation is that of a circle. If you substitute the coordinates of its center into the left-hand side, the resulting value is the negative of the square of its radius. (Try this with the generic circle equation for yourself!) When I did that, I got $0$, so the equation actually represents a single point—the center of the degenerate circle. – amd Apr 18 '20 at 8:44
• @MukunthA.G Complete the square. $$(x^2-x+1/4) + (y^2-y+1/4) = (x-1/2)^2 + (y-1/2)^2 = 0.$$ Therefore, $x = y = 1/2$ is a solution. – heropup Apr 18 '20 at 9:13
• @heropup Right. After completing the squares of $x^2+y^2-2hx-2ky+f=0$ you have $(x-h)^2+(y-k)^2+f'=0$, so the left-over constant term $f'$ is clearly equal to the value of the original expression at $x=h$, $y=k$. – amd Apr 18 '20 at 20:10
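A quick symbolic check of the completed square (my own sketch, using sympy):

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)
locus = x**2 + y**2 - x - y + sp.Rational(1, 2)
square_form = (x - sp.Rational(1, 2))**2 + (y - sp.Rational(1, 2))**2

# The two expressions agree identically, so the zero set of the locus
# equation is the single point (1/2, 1/2).
print(sp.expand(locus - square_form))                            # 0
print(locus.subs({x: sp.Rational(1, 2), y: sp.Rational(1, 2)}))  # 0
```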
The OP has a fixed parabola $$\sqrt{x}+\sqrt{y}=1$$; it has only a fixed point, so there is no locus.
To have a family of truncated parabolas touching the $$x$$- and $$y$$-axes at $$(c,0)$$ and $$(0,c)$$, the equation of the parabola is $$\sqrt{x}+\sqrt{y}=\sqrt{c},~~~~(1)$$ We can rationalize (1) to get the full parabola as: $$\left(\frac{x-y}{\sqrt{2}}\right)^2=c\sqrt{2}\frac{(x+y+c/2)}{\sqrt{2}} \implies L_1^2=4AL_2,~~ L_1~\text{perpendicular to}~ L_2$$ The equation of the axis of the parabola is $$L_1= 0 \implies y=x$$ and the equation of the latus rectum is $$L_2=A$$; their intersection gives the focus $$F$$ as $$y=x, L_2=\frac{x+y-c/2}{\sqrt{2}}=\frac{c}{2\sqrt{2}} \implies F~\text{is}~ (c/2,c/2)$$ Therefore the locus of the focus of the family of parabolas (1) is $$y=x$$, which is the fixed axis of the parabolas.
http://planetawesomeness.com/b0u6rgp6/81fd2b-real-analysis-1-multiple-choice-questions-with-answers
# Real Analysis 1: Multiple Choice Questions with Answers

Real Analysis MCQs 01 consists of 69 of the most repeated and most important questions. Real analysis is a compulsory subject in MSc and BS Mathematics at most of the universities of Pakistan; the subject is similar to calculus but a little more abstract. The page collects multiple choice questions, definitions and topic explanations, and related videos.

Sample multiple-choice questions:

1. A sequence $\{s_n\}$ is said to be bounded if there exists a number $\lambda$ such that $|s_n|<\lambda$ for all $n$.
2. A sequence $\{s_n\}$ is a Cauchy sequence if for every $\epsilon>0$ there exists a positive integer $n_0$ such that $|s_n-s_m|<\epsilon$ for all $n,m>n_0$.
3. An alternating series $\sum (-1)^n a_n$, where $a_n\geq 0$ for all $n$, is convergent if $\{a_n\}$ is decreasing and $\lim a_n=0$.
4. A series $\sum \frac{1}{n^p}$ is convergent if $p>1$.
5. If $\lim_{n\to\infty} a_n=0$, then $\sum a_n$ may or may not be convergent.
6. Let $f(x)=\frac{x^2-5x+6}{x-3}$; then $\lim_{x\to 3}f(x)=1$.
7. The norm of the partition $\{0,3,3.1,3.2,7,10\}$ of the interval $[0,10]$ is $3.8$.

Sample short questions:

- Show that $\sqrt{3}$ is irrational. (Sketch: suppose $\sqrt{3}=p/q$ with integers $p$ and $q$ not both divisible by 3. Then $p^2=3q^2$, so $p^2$ is divisible by 3, and hence $p$ itself is divisible by 3, as 3 is prime; it follows that $q$ is divisible by 3 as well, a contradiction.)
- Prove that $\left\{\frac{1}{n+1}\right\}$ is a decreasing sequence.
- Prove that every convergent sequence is bounded.
- Give an example of a sequence which is bounded but not convergent.
- Give examples of two divergent sequences whose sum is convergent.
- Is there a rational number between any two rational numbers? Is there a real number between any two real numbers?
- What is the difference between rational and irrational numbers?

A uniform-convergence exercise from the same page:

(a) Suppose $f_n: A \to \mathbb{R}$ is uniformly continuous on $A$ for every $n \in \mathbb{N}$ and $f_n \to f$ uniformly on $A$. Prove that $f$ is uniformly continuous on $A$. (b) Does the result in (a) remain true if $f_n \to f$ pointwise instead of uniformly?

(a) Let $\epsilon > 0$. Since $f_n \to f$ uniformly on $A$, there exists $N \in \mathbb{N}$ such that $|f_n(x) - f(x)| < \epsilon/3$ for all $x \in A$ and $n > N$.
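As a worked check of the limit question above (my own factoring):

$$\lim_{x\to 3}\frac{x^{2}-5x+6}{x-3}=\lim_{x\to 3}\frac{(x-2)(x-3)}{x-3}=\lim_{x\to 3}(x-2)=1.$$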
https://aitopics.org/mlt?cdid=arxivorg%3A98DCA958&dimension=pagetext
### Gated Orthogonal Recurrent Units: On Learning to Forget
We present a novel recurrent neural network (RNN) based model that combines the remembering ability of unitary RNNs with the ability of gated RNNs to effectively forget redundant/irrelevant information in their memory. We achieve this by extending unitary RNNs with a gating mechanism. Our model is able to outperform LSTMs, GRUs and unitary RNNs on several long-term dependency benchmark tasks. We empirically show both that orthogonal/unitary RNNs lack the ability to forget and that GORU is able to simultaneously remember long-term dependencies while forgetting irrelevant information; this ability to forget plays an important role in recurrent neural networks. We provide competitive results along with an analysis of our model on many natural sequential tasks, including bAbI question answering, TIMIT speech spectrum prediction, Penn TreeBank, and synthetic tasks that involve long-term dependencies such as the algorithmic, parenthesis, denoising and copying tasks.
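To make the "orthogonal transition plus gate" idea concrete, here is a schematic NumPy step in the spirit of the abstract. The exact parameterization, nonlinearity, and gate structure of GORU differ from this sketch, so every name and shape below is illustrative only:

```python
import numpy as np

def gated_orthogonal_step(x, h, U, W_h, W_x, b_g):
    """One schematic recurrent step: orthogonal state map plus a forget gate.

    U is orthogonal, so applying it preserves the norm of h (the
    "remembering" part); the sigmoid gate g decides how much of the
    updated state to keep versus the old state (the "forgetting" part).
    """
    g = 1.0 / (1.0 + np.exp(-(W_h @ h + W_x @ x + b_g)))  # sigmoid gate
    candidate = np.tanh(U @ h + W_x @ x)                  # rotated, squashed update
    return g * candidate + (1.0 - g) * h

rng = np.random.default_rng(0)
n_h, n_x = 8, 4
U, _ = np.linalg.qr(rng.standard_normal((n_h, n_h)))  # exactly orthogonal init
W_h = 0.1 * rng.standard_normal((n_h, n_h))
W_x = 0.1 * rng.standard_normal((n_h, n_x))
b_g = np.zeros(n_h)

h = np.zeros(n_h)
for _ in range(5):  # run a few steps on random inputs
    h = gated_orthogonal_step(rng.standard_normal(n_x), h, U, W_h, W_x, b_g)
print(h.shape)  # (8,)
```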
### Towards Non-saturating Recurrent Units for Modelling Long-term Dependencies
Modelling long-term dependencies is a challenge for recurrent neural networks. This is primarily due to the fact that gradients vanish during training, as the sequence length increases. Gradients can be attenuated by transition operators and are attenuated or dropped by activation functions. Canonical architectures like LSTM alleviate this issue by skipping information through a memory mechanism. We propose a new recurrent architecture (Non-saturating Recurrent Unit; NRU) that relies on a memory mechanism but forgoes both saturating activation functions and saturating gates, in order to further alleviate vanishing gradients. In a series of synthetic and real world tasks, we demonstrate that the proposed model is the only model that performs among the top 2 models across all tasks with and without long-term dependencies, when compared against a range of other architectures.
### Tunable Efficient Unitary Neural Networks (EUNN) and their application to RNNs
Using unitary (instead of general) matrices in artificial neural networks (ANNs) is a promising way to solve the gradient explosion/vanishing problem, as well as to enable ANNs to learn long-term correlations in the data. This approach appears particularly promising for Recurrent Neural Networks (RNNs). In this work, we present a new architecture for implementing an Efficient Unitary Neural Network (EUNNs); its main advantages can be summarized as follows. Firstly, the representation capacity of the unitary space in an EUNN is fully tunable, ranging from a subspace of SU(N) to the entire unitary space. Secondly, the computational complexity for training an EUNN is merely $\mathcal{O}(1)$ per parameter. Finally, we test the performance of EUNNs on the standard copying task, the pixel-permuted MNIST digit recognition benchmark as well as the Speech Prediction Test (TIMIT). We find that our architecture significantly outperforms both other state-of-the-art unitary RNNs and the LSTM architecture, in terms of the final performance and/or the wall-clock training speed. EUNNs are thus promising alternatives to RNNs and LSTMs for a wide variety of applications.
### Fast-Slow Recurrent Neural Networks
Processing sequential data of variable length is a major challenge in a wide range of applications, such as speech recognition, language modeling, generative image modeling and machine translation. Here, we address this challenge by proposing a novel recurrent neural network (RNN) architecture, the Fast-Slow RNN (FS-RNN). The FS-RNN incorporates the strengths of both multiscale RNNs and deep transition RNNs as it processes sequential data on different timescales and learns complex transition functions from one time step to the next. We evaluate the FS-RNN on two character based language modeling data sets, Penn Treebank and Hutter Prize Wikipedia, where we improve state of the art results to 1.19 and 1.25 bits-per-character (BPC), respectively. In addition, an ensemble of two FS-RNNs achieves 1.20 BPC on Hutter Prize Wikipedia outperforming the best known compression algorithm with respect to the BPC measure. We also present an empirical investigation of the learning and network dynamics of the FS-RNN, which explains the improved performance compared to other RNN architectures. Our approach is general as any kind of RNN cell is a possible building block for the FS-RNN architecture, and thus can be flexibly applied to different tasks.
http://www.j.sinap.ac.cn/nst/EN/10.1007/s41365-018-0389-x
# Nuclear Science and Techniques
Nuclear Techniques (English edition) ISSN 1001-8042 CN 31-1559/TL 2019 Impact factor 1.556
Nuclear Science and Techniques ›› 2018, Vol. 29 ›› Issue (4): 59
• NUCLEAR ELECTRONICS AND INSTRUMENTATION •
### Readout electronics of a prototype time-of-flight ion composition analyzer for space plasma
Xing Fan 1,2 • Xian-Peng Zhang 3 • Geng Tian 3 • Chao-Wen Yang 1,2
1 Department of Nuclear Engineering and Technology, College of Physics Science and Technology, Sichuan University, Chengdu 610064, China
2 Key Laboratory of Radiation Physics and Technology, Ministry of Education, Sichuan University, Chengdu 610064, China
3 Northwest Institute of Nuclear Technology, Xi'an 710024, China
• Contact: Chao-Wen Yang E-mail:ycw@scu.edu.cn
• Supported by:
This work was supported by the National Natural Science Foundation of China (Nos. 11205108, 11475121, and 11575145) and the Excellent Youth Fund of Sichuan University (No. 2016SCU04A13).
Xing Fan, Xian-Peng Zhang, Geng Tian, Chao-Wen Yang . Readout electronics of a prototype time-of-flight ion composition analyzer for space plasma.Nuclear Science and Techniques, 2018, 29(4): 59
Abstract:
In this study, a novel phoswich detector for beta-gamma coincidence detection is designed. Unlike the triple-crystal phoswich detector designed by researchers at the University of Missouri, Columbia, this phoswich detector is of the semi-well type, so it has a higher detection efficiency. The detector consists of BC-400 and NaI:Tl with decay time constants of 2.4 and 230 ns, respectively. The BC-400 scintillator detects beta particles, and the NaI:Tl cell is used for gamma detection. Geant4 simulations of this phoswich detector find that a 2-mm-thick BC-400 scintillator can absorb nearly all of the beta particles whose energies are below 700 keV. Further, for a 2.00-cm-thick NaI:Tl crystal, the gamma source peak efficiency for photons ranges from a maximum of nearly 90% at 30 keV to 10% at 1 MeV. The self-absorption effect is also discussed in this paper in order to determine the influence of the carrier gas.
https://chemistry.stackexchange.com/questions/114501/predict-the-major-product-of-the-following-reaction-with-mechanism
# Predict the major product of the following reaction with mechanism [closed]
I started the problem by coordinating H+ with the lone pair of the oxygen, but after that I can't proceed. Please show the detailed mechanism of the solution.
## closed as off-topic by Mithoron, Todd Minehardt, Tyberius, Jon Custer, Karsten TheisApr 30 at 14:21
• Are you sure the carbonyl will be protonated when there's an imine group around? – William R. Ebenezer Apr 28 at 17:47
• I think answer is (C), which is impossible to achieve under given conditions. – Mathew Mahindaratne Apr 28 at 20:37
There are two equally important paths for the given reaction: (i) via protonation of the imine $$\ce{N}$$; and (ii) via protonation of the keto $$\ce{O}$$.
(i) Protonation of the imine $$\ce{N}$$ would end with 1,4-addition to give structure (A), which rearranges to structure (B) via keto-enol tautomerism.
(ii) It is also possible for protonation to occur at the keto $$\ce{O}$$, which would end with 1,4-addition followed by keto-enol tautomerism to give structure (D). See the depicted arrow-pushing mechanism below:
https://www.queryoverflow.gdn/query/burnside-39-s-theorem-on-invariant-subpaces-21_3260239.html
# Burnside's theorem on invariant subspaces
by PainBouchon Last Updated June 13, 2019 12:20 PM
One version of this theorem states that if $$E$$ is a complex vector space with $$\dim(E)<+\infty$$, and $$A$$ is a unital subalgebra of $$\mathcal{L}(E)$$ for which there are no non-trivial subspaces $$F$$ invariant under all the elements of $$A$$ (simultaneously), then $$A=\mathcal{L}(E)$$. In other words, if $$A$$ is a proper unital subalgebra of $$\mathcal{L}(E)$$, then there is a non-trivial subspace $$F$$ invariant under all elements of $$A$$.
Do you know some simple examples of such an $$A$$? Could we prove some results concerning simultaneous triangularization directly using this theorem? I have only found other theorems that follow from this one, or examples that are not very simple.
Thanks!
Two examples.
i) Two randomly chosen complex $$n\times n$$ matrices $$A,B$$ span the full matrix algebra with probability $$1$$. cf. my post in
Probability that two random matrices span the full matrix algebra
Note that it's not obvious...
ii) (Easier) Let $$A,B$$ be two $$2\times 2$$ matrices s.t. $$e^Ae^B=e^{A+B}\not= e^Be^A$$. Show that $$A,B$$ are simultaneously triangularizable.
loup blanc
June 13, 2019 12:01 PM
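A quick numerical illustration of (i), checking that words in two random complex matrices span all of $$M_n(\mathbb{C})$$ (my own sketch; the word-length cutoff of 6 is a generous, arbitrary choice):

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(1)
n = 4
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
B = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))

# Collect the identity and all products (words) in A, B up to length 6.
words = [np.eye(n, dtype=complex)]
for length in range(1, 7):
    for letters in product((A, B), repeat=length):
        W = np.eye(n, dtype=complex)
        for M in letters:
            W = W @ M
        words.append(W)

# The algebra generated by A and B is all of M_n(C) iff the flattened
# words span a space of dimension n^2.
rank = np.linalg.matrix_rank(np.array([W.ravel() for W in words]))
print(rank, n * n)  # expect: 16 16
```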
https://money.stackexchange.com/questions/8898/does-the-black-scholes-model-apply-to-american-style-options/71117
Does the Black-Scholes Model apply to American Style options?
After reading the Wikipedia article on the Black-Scholes model, it looks to me like it only applies to European options based on this quote:
The Black–Scholes model (pronounced /ˌblæk ˈʃoʊlz/) is a mathematical model of a financial market containing certain derivative investment instruments. From the model, one can deduce the Black–Scholes formula, which gives the price of European-style options.
and
American options and options on stocks paying a known cash dividend (in the short term, more realistic than a proportional dividend) are more difficult to value, and a choice of solution techniques is available (for example lattices and grids).
Is this correct? If so, is there a similar model for American-style options? My previous understanding was that the option's price was based on its intrinsic value + the time value. I'm really not sure how these values are arrived at, though.
I found this related question/answer, but it doesn't address this directly: Why are American-style options worth more than European-style options?
The difference between an American and a European option is that the American option can be exercised at any time, whereas the European option can be liquidated only on the settlement date. The American option is a "continuous time" instrument, while the European option is a "point in time" instrument. Black-Scholes applies to the latter, European, option. Under "certain" (but by no means all) circumstances, the two are close enough to be regarded as substitutes.
One of their disciples, Robert Merton, "tweaked" it to describe American options. There are debates about this, and other tweaks, years later.
Black-Scholes is "close enough" for American options since there aren't usually reasons to exercise early, so the ability to do so doesn't matter. Which is good since it's tough to model mathematically, I've read.
Early exercise would usually be caused by a weird mispricing for some technical / market-action reason where the theoretical option valuations are messed up. If you sell a call that's far in the money and don't get any time value (after the spread), for example, you probably sold the call to an arbitrageur who's just going to exercise it. But unusual stuff like this doesn't change the big picture much.
• Nice use of the word arbitrageur! I hadn't seen that word before; I had to go look that one up. Jun 10 '11 at 17:33
• -1 "tough to model mathematically"?!? Sorry cannot understand that at all. You can easily model lattices and recursion eqs with spreadsheet after calcuating the rates and then last valuations applying the max(C-K, 0), where C is the forecoming exercising value and K is the purchase value for short option (inversersely for long option) and then just backward recursion with \frac{1}{1+r}(qC_{u} + (1-u)C_{d}) where C_{u} is the last upper value and C_{d} is the last down value and the q is the arbitrage-free rate (assuming non-arbitrage situation). Discreate model.
– user1770
Jun 10 '11 at 19:29
• ...or did you mean by "tough" the partial derivatives and the Brownian motion term in Black-Scholes, or something else? Mathematically the simplest models are not tough, just some stochastic processes, recursion and partial derivatives.
– user1770
Jun 10 '11 at 19:34
• The wikipedia on Black-Scholes says "American options... are more difficult to value, and a choice of solution techniques is available (for example lattices and grids)." en.wikipedia.org/wiki/Option_style says "There are no general formulae for American options, but a choice of models to approximate the price are available." Pretty sure I've read the same in stronger sources than Wikipedia. For most purposes you need to know the relationship among time, price, strike, interest rate, and volatility, that's why I say B-S is close enough, because that's the same for American options. Jun 10 '11 at 22:57
• By "the relationship is the same" among those factors, I mean for practical purposes that I know of. I'm sure there are some scary computer trading systems and hedge funds that need to get more detailed, but for individual investors you just need to understand how time to expiration, strike price, underlying price, interest rates, and volatility factor into the option's value. Jun 11 '11 at 2:50
Just a few observations within the Black-Scholes framework:
• American calls have the same price as European calls on non-dividend paying assets.
• The Black-Scholes formula is applicable only to European options (and, by the above, to American calls on non-dividend paying assets).
• By the call-put parity, if you have European call prices for some expiry dates and strikes, you also have the European put prices for those expiry dates and strikes.
• If you have European call prices for a given expiry date T for all strikes, you can easily compute the price of any "European" payoff for that expiry (for example, a digital call V = 1_{S>K}, or a parabola V = S^2, or whatever). Conceptually, you form butterfly spreads __/\_ for a series of increasing strikes, and they give you the "risk-neutral" probability that you end up there, and then you just integrate over your payoff.
Next, you can now use the Black-Scholes framework (stock price is a Geometric Brownian Motion, no transaction costs, single interest rate, etc. etc.) and numerical methods (such as a PDE solver) to price American style options numerically, but not with a simple closed form formula (though there are closed-form approximations).
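To make the "numerical methods" point concrete, below is a minimal sketch of the binomial-lattice (Cox-Ross-Rubinstein) backward recursion for an American put in R. It is an illustration under the usual frictionless assumptions, not production pricing code; the function name and example parameters are mine.

```r
# American put via a Cox-Ross-Rubinstein binomial lattice.
# At each node: value = max(immediate exercise, discounted expected continuation).
american_put <- function(S, K, r, sigma, tau, n) {
  dt   <- tau / n
  u    <- exp(sigma * sqrt(dt))         # up factor
  d    <- 1 / u                         # down factor
  q    <- (exp(r * dt) - d) / (u - d)   # risk-neutral up probability
  disc <- exp(-r * dt)
  V <- pmax(K - S * u^(n:0) * d^(0:n), 0)   # payoffs at expiry
  for (i in (n - 1):0) {                    # step back through the lattice
    cont <- disc * (q * V[1:(i + 1)] + (1 - q) * V[2:(i + 2)])
    V    <- pmax(K - S * u^(i:0) * d^(0:i), cont)  # allow early exercise
  }
  V
}

american_put(S = 100, K = 100, r = 0.05, sigma = 0.2, tau = 1, n = 500)
```

Dropping the early-exercise `pmax` (returning `cont` alone) recovers the European put, so the gap between the two numbers is exactly the early-exercise premium being discussed.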
A minor tangent. One can claim the S&P has a mean return of say 10%, and a standard deviation of say 14% or so, but when you run with that, you find that the actual returns aren't such a great fit to the standard bell curve: market anomalies produce the "100-year flood" far more often than predicted, even over a 20-year period. This just means that the model doesn't reflect reality at the tails, even if the +/- 2 standard deviations look pretty.
This goes for Black-Scholes (I almost abbreviated it to initials, then thought better of it; I actually like the model) as well. The distinction between American and European is small enough that the imprecision of the model is wider than the difference between these two option styles. I believe if you look at the model and actual pricing, you can determine the volatility of a given stock by using prices around the strike price, but when you then model the well out-of-the-money options, you often find the market creating its own valuation.
• That makes perfect sense. If the prices were that predictable then the system wouldn't work. It turns out that the system actually works because prices are somewhat unpredictable. Jun 10 '11 at 13:49
• I can also go on a bit about how, for many strikes, the volume is so thin that the price can't be expected to reflect true value. If I had the skill and processing power, I'd scan for certain types of activity to find indications of unusual behavior. That behavior may reflect illegal trading, so care is needed. If your trade follows and you have good records, you won't get nailed for the same insider trading the first guys did. Jun 10 '11 at 14:08
• "If I had the skill and processing power, I'd scan for certain type of activity to find indications of unusual behavior." Aren't there online tools that will do this for you? Jun 10 '11 at 15:11
• "10% mean return ... 14% standard deviation .. you find that the actual returns aren't a great fit to the standard bell curve". It seems that you think that the mean and standard deviation are exclusive to the bell (Gauss) curve. That's not true. There are an infinite number of distributions, even for a given mean and standard deviation. And for that reason alone you can't predict "100 year floods" from just mean and standard deviation; you need the actual distribution. Jul 28 '14 at 9:36
• @MSalters - BS reflects the math of a bell curve. Not sure I get your point here. Jul 28 '14 at 11:39
Yes, your understanding is correct. Strictly speaking, the Black-Scholes model is used to price European options. However, the prices of European and American options are close enough, and Black-Scholes can be used as an approximation if no dividends are paid on the underlying and the liquidity cost is close to zero (e.g. in a very low-interest-rate scenario).
As of now, there are no closed-form methods to price American options. At least none that I know of. You should rely on lattices for multi-period binomial pricing, which is mostly recursive.
As there is no advantage to exercising an American call option early (on a non-dividend-paying stock), we can use the Black-Scholes formula to evaluate the option. However, an American put option is more likely to be exercised early, which means Black-Scholes does not apply to this style of option.
https://stats.stackexchange.com/questions/474108/central-limit-theorem-rule-of-thumb-for-repeated-sampling
# Central Limit Theorem - Rule of thumb for repeated sampling
My question was inspired by this post which concerns some of the myths and misunderstandings surrounding the Central Limit Theorem. I was asked a question by a colleague once and I couldn't offer an adequate response/solution.
My colleague's question: Statisticians often cleave to rules of thumb for the sample size of each draw (e.g., $$n = 30$$, $$n = 50$$, $$n = 100$$, etc.) from a population. But is there a rule of thumb for the number of times we must repeat this process?
I replied that if we were to repeat this process of taking random draws of "30 or more" (rough guideline) from a population say "thousands and thousands" of times (iterations), then the histogram of sample means will tend towards something Gaussian-like. To be clear, my confusion is not related to the number of measurements drawn, but rather the number of times (iterations) required to achieve normality. I often describe this as some theoretical process we repeat ad infinitum.
Below this question is a quick simulation in R. I sampled from the exponential distribution. The first column of the matrix X holds the 10,000 sample means, with each mean having a sample size of 2. The second column holds another 10,000 sample means, with each mean having a sample size of 4. This process repeats for columns 3 and 4 with $$n = 30$$ and $$n = 100$$, respectively. I then produced four histograms. Note, the only thing changing between the plots is the sample size, not the number of times we calculate the sample mean. Each calculation of the sample mean for a given sample size is repeated 10,000 times. We could, however, repeat this procedure 100,000 times, or even 1,000,000 times.
Questions:
(1) Is there any criteria for the number of repetitions (iterations) we must conduct to observe normality? I could try 1,000 iterations at each sample size and achieve a reasonably similar result.
(2) Is it tenable for me to conclude that this process is assumed to be repeated thousands or even millions of times? I was taught that the number of times (repetitions/iterations) is not relevant. But maybe there was a rule of thumb before the gift of modern computing power. Any thoughts?
pop <- rexp(100000, 1/10) # The mean of the exponential distribution is 1/lambda
X <- matrix(ncol = 4, nrow = 10000) # 10,000 repetitions
samp_sizes <- c(2, 4, 30, 100)
for (j in 1:ncol(X)) {
  for (i in 1:nrow(X)) {
    X[i, j] <- mean(sample(pop, size = samp_sizes[j]))
  }
}
par(mfrow = c(2, 2))
for (j in 1:ncol(X)) {
  hist(X[, j],
       breaks = 30,
       xlim = c(0, 30),
       col = "blue",
       xlab = "",
       main = paste("Sample Size =", samp_sizes[j]))
}
• (1) With exponential data, no simulation is needed to find the distributions of means $\bar X_2, \bar X_4, \bar X_{30}, \bar X_{100},$ where subscripts denote the sizes of the samples averaged. With exponential data, $\bar X_n$ has a gamma distribution with shape parameter $n$ and a rate or scale parameter depending on $n$ and the rate/scale of the exponential population sampled from. By the CLT, it is nearer to normal for larger $n.$ (2) The number of iterations B should be large enough to get a histogram of $\bar X_n$'s that is sufficiently smooth to suggest the gamma dist'n of $\bar X_n.$ Smoother histograms for larger B. – BruceET Jun 26 '20 at 2:56
• Under the usual conditions, a single sample mean is a random variable, and has a distribution. It's this population distribution that we're considering when we try to argue that it should be well approximated by a normal. Once you observe a sample, you have a realization of that random variable. However, you can't see in the sample cdf a reasonable approximation of that population distribution from a single realization. – Glen_b Jun 26 '20 at 4:34
• @BruceET Thank you! (1) Why don’t we need a simulation to demonstrate this? (2) And, how large is large enough? I know this question is a bit in the weeds, but I wonder if there is (was ever) a minimum number of sampling iterations? – Thomas Bilach Jun 26 '20 at 13:32
• You can use moment generating functions to show that the sum of two indep exponential random variables (same rate) is gamma with shape parameter 2, that the sum of three is gamma with shape parameter 3, etc. // For the sum of two, $X_1,X_2,$ start with the joint dist'n and transform to $Y_1 = X_1+X_2, Y_2 = X_2,$ then integrate to get the marginal dist'n of $Y_1 = X_1 + X_2.$ Iterate to find the dist'n of the sum of three, etc. // There are other standard methods of finding dist'ns of sums of RVs. // No simulation required. Simulation is not a mathematical proof, but is a good way to illustrate dist'ns of sums & make pictures. – BruceET Jun 26 '20 at 16:14
• Does this answer your question? Why does increasing the sample size of coin flips not improve the normal curve approximation? – Sextus Empiricus Jul 15 '20 at 15:53
## 2 Answers
To facilitate accurate discussion of this issue, I am going to give a mathematical account of what you are doing. Suppose you have an infinite matrix $$\mathbf{X} \equiv [X_{i,j} | i \in \mathbb{Z}, j \in \mathbb{Z} ]$$ composed of IID random variables from some distribution with mean $$\mu$$ and finite variance $$\sigma^2$$ that is not a normal distribution:$$^\dagger$$
$$X_{i,j} \sim \text{IID Dist}(\mu, \sigma^2)$$
In your analysis you are forming repeated independent iterations of sample means based on a fixed sample size. If you use a sample size of $$n$$ and take $$M$$ iterations then you are forming the statistics $$\bar{X}_n^{(1)},...,\bar{X}_n^{(M)}$$ given by:
$$\bar{X}_n^{(m)} \equiv \frac{1}{n} \sum_{i=1}^n X_{i,m} \quad \quad \quad \text{for } m = 1,...,M.$$
In your output you show histograms of the outcomes $$\bar{X}_n^{(1)},...,\bar{X}_n^{(M)}$$ for different values of $$n$$. It is clear that as $$n$$ gets bigger, we get closer to the normal distribution.
Now, in terms of "convergence to the normal distribution" there are two issues here. The central limit theorem says that the true distribution of the sample mean will converge towards the normal distribution as $$n \rightarrow \infty$$ (when appropriately standardised). The law of large numbers says that your histograms will converge towards the true underlying distribution of the sample mean as $$M \rightarrow \infty$$. So, in those histograms we have two sources of "error" relative to a perfect normal distribution. For smaller $$n$$ the true distribution of the sample mean is further away from the normal distribution, and for smaller $$M$$ the histogram is further away from the true distribution (i.e., contains more random error).
How big does $$n$$ need to be? The various "rules of thumb" for the requisite size of $$n$$ are not particularly useful in my view. It is true that some textbooks propagate the notion that $$n=30$$ is sufficient to ensure that the sample mean is well approximated by the normal distribution. The truth is that the "required sample size" for good approximation by the normal distribution is not a fixed quantity --- it depends on two factors: the degree to which the underlying distribution departs from the normal distribution; and the required level of accuracy needed for the approximation.
The only real way to determine the appropriate sample size required for an "accurate" approximation by the normal distribution is to have a look at the convergence for a range of underlying distributions. The kinds of simulations you are doing are a good way to get a sense of this.
How big does $$M$$ need to be? There are some useful mathematical results showing the rate of convergence of an empirical distribution to the true underlying distribution for IID data. To give a brief account of this, let us suppose that $$F_n$$ is the true distribution function for the sample mean with $$n$$ values, and define the empirical distribution of the simulated sample means as:
$$\hat{F}_n (x) \equiv \frac{1}{M} \sum_{m=1}^M \mathbb{I}(\bar{X}_n^{(m)} \leqslant x) \quad \quad \quad \text{for } x \in \mathbb{R}.$$
It is trivial to show that $$M \hat{F}_n(x) \sim \text{Bin}(M, F_n(x))$$, so the "error" between the true distribution and the empirical distribution at any point $$x \in \mathbb{R}$$ has zero mean, and has variance:
$$\mathbb{V} (\hat{F}_n(x) - F_n(x)) = \frac{F_n(x) (1-F_n(x))}{M}.$$
It is fairly simple to use standard confidence interval results for the binomial distribution to get an appropriate confidence interval for the error in the simulated estimation of the distribution of the sample mean.
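As a concrete illustration of that bound, here is a short R sketch; the choices $$n = 30$$, $$M = 10{,}000$$, rate $$1/10$$ and the evaluation point are mine, chosen to match the question's simulation.

```r
# Monte Carlo error of the empirical CDF of the sample mean at one point x0.
# Each sample mean is an IID draw, so M * Fhat(x0) ~ Binomial(M, F(x0)).
set.seed(1)
n <- 30; M <- 10000
xbar <- replicate(M, mean(rexp(n, rate = 1/10)))
x0   <- 10                          # evaluate at the population mean
Fhat <- mean(xbar <= x0)            # empirical CDF at x0
se   <- sqrt(Fhat * (1 - Fhat) / M) # binomial standard error
c(estimate = Fhat, lower = Fhat - 1.96 * se, upper = Fhat + 1.96 * se)
```

Quadrupling $$M$$ halves this standard error, which is one way to turn "how many iterations?" into a concrete precision target.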
$$^\dagger$$ Of course, it is possible to use a normal distribution, but that is not very interesting because convergence to normality is already achieved with a sample size of one.
• Thank you, Ben. Is it safe to call the one sample mean we observe a “random variable” with an “empirical distribution”? In practice, we never really observe this distribution of all the sample means. – Thomas Bilach Jun 26 '20 at 13:39
• Until it is observed, it is still a random variable --- once observed it is a constant. – Ben Jun 26 '20 at 23:05
I think it may be helpful to think about your question a bit differently. Suppose that $$X\sim F_X$$ where $$F_X$$ is any arbitrary distribution, and let $$\sigma^2 = Var(X)$$. Now suppose I draw iid $$X_1,\dots,X_n \sim F_X$$, and let $$\bar{X}_n = \frac{1}{n}\sum X_i$$.
The CLT says that under very weak assumptions, $$\bar{X}_n \xrightarrow{d} N(\mu,\sigma^2/n)$$ as $$n$$ gets arbitrarily large. Now suppose that for a fixed $$n$$, I observe $$\bar{X}_{n1},\dots,\bar{X}_{nK}$$ where for each $$k$$, I sample iid $$X_{1k},\dots,X_{nk} \sim F_X$$ and build $$\bar{X}_{nk}$$. But this is exactly the same as sampling $$\bar{X}_{nk}$$ from the distribution $$F_{\bar{X}_n}$$. Your question can thus be posed as follows:
What is the distribution $$F_{\bar{X}_n}$$, and in particular, is it normal?
The answer is no, and I'll focus on your exponential example. We can understand this problem by literally considering the sampling distribution of $$\bar{X}_n$$ given iid $$X_1,\dots,X_n \sim Exp(\gamma)$$. Note that $$Exp(\gamma) = \text{Gamma}(\alpha=1,\gamma)$$, and so $$\sum X_i \sim \text{Gamma}(n,\gamma)$$ and thus
$$\frac{1}{n}\sum X_i \sim \text{Gamma}(n,\gamma/n)$$
As it turns out, for $$n$$ reasonably large, this distribution is very similar to a normal distribution, but it will never be a normal distribution for any finite $$n$$ (the above is exactly what distribution it is!). What you did by replicating was simply drawing from this distribution and plotting (indeed, try plotting these and you'll get the same result!). Depending on the distribution of $$X_i$$, the distribution of $$\bar{X}_n$$ can be anything.
What the CLT says is that as $$n$$ goes to infinity, $$\bar{X}_n$$ will converge to a normal distribution, and similarly, $$\text{Gamma}(n,\gamma/n)$$ (or any $$F_{\bar{X}_n}$$ where $$X$$ satisfies the requisite requirements for CLT to kick in) will asymptotically equal a normal distribution.
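Here is a small R sketch of that comparison for the question's setting (Exp with rate $$1/10$$, so the mean of $$n$$ draws is exactly Gamma with shape $$n$$ and scale $$10/n$$); the plotting range is my own choice.

```r
# Exact distribution of the mean of n iid Exp(rate = 1/10) draws
# (Gamma, shape = n, scale = 10/n) versus the CLT normal approximation.
n <- 30
curve(dgamma(x, shape = n, scale = 10 / n), from = 4, to = 18,
      ylab = "density", main = "Exact gamma vs. CLT normal, n = 30")
curve(dnorm(x, mean = 10, sd = 10 / sqrt(n)), add = TRUE, lty = 2)
legend("topright", legend = c("exact Gamma", "CLT normal"), lty = 1:2)
```

No amount of replication $$K$$ changes the solid curve; only increasing $$n$$ pulls it toward the dashed one.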
EDIT
In response to your comments, maybe there's a misunderstanding somewhere. It's helpful to emphasize that we can think of $$\bar{X}_n$$ as a random variable itself (often we think of it as the mean and thus a constant, but this is not true!). The point is that the random variable $$\bar{X}_n$$ that is the sample mean of $$X_1,\dots,X_n \sim F_X$$, and the random variable $$Y \sim F_{\bar{X}_n}$$, are the exact same random variable. So by drawing $$K$$ iid draws of $$X_1,\dots,X_n \sim F_X$$ and calculating $$\bar{X}_n$$, you're doing the equivalent of $$K$$ draws from $$F_{\bar{X}_n}$$. At the end of the day, regardless of whether $$K = 100,1000,100000,\dots$$, you're just drawing $$K$$ times from $$F_{\bar{X}_n}$$. So what is your goal here? Are you asking at what point the empirical cdf of $$K$$ draws accurately represents the cdf of $$F_{\bar{X}_n}$$? Well then, forget about anything about sample means, and simply ask how many times I need to draw some random variable $$W \sim F$$ such that the empirical cdf $$\hat{F}_n$$ is 'approximately' $$F$$. There's a whole literature on that, and two basic results are (see the wiki link on empirical cdfs for more):
1. By the Glivenko-Cantelli theorem, $$\hat{F}_n$$ uniformly converges to $$F$$ almost surely.
2. By Donsker's theorem, the empirical process $$\sqrt{n}(\hat{F}_n -F)$$ converges in distribution to a mean-zero Gaussian process.
What you are doing with your histograms in your post is really estimating the density (not the CDF) given $$K$$ draws. Histograms are a (discrete) example of kernel density estimation (KDE). There's a similar literature on KDEs, and again, you have properties like: the sample KDE will converge to the true underlying density as you gather more draws (i.e. $$K\to\infty$$). It should be noted that histograms don't converge to the true density unless you also let the bin width go to zero, and this is one reason why kernel approaches are preferred: they allow smoothness and similar properties. But at the end of the day, what you can say is the following:
For a fixed $$n$$, drawing iid $$X_1,\dots,X_n$$ and considering the random variable $$\frac{1}{n}\sum X_i$$ is equivalent to considering the random variable with distribution $$F_{\bar{X}_n}$$. For any $$K$$ draws from $$F_{\bar{X}_n}$$, you can estimate the CDF (empirical CDF) and/or estimate the density (two approaches are histograms or KDE). In either case, as $$K\to\infty$$, these two estimates will converge to the true CDF/density of the random variable $$\bar{X}_n$$, but these will never be the normal CDF/density for any fixed $$n$$. However, as you let $$n\to\infty$$, $$\bar{X}_n$$ is asymptotically normal (under suitable conditions), and similarly, the CDF/density will also become normal. If you take $$n\to\infty$$, and then $$K\to\infty$$, then you will get the cdf/density of a normal rv.
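A rough R sketch of the Glivenko-Cantelli point above, using the exact Gamma form of $$F_{\bar{X}_n}$$ from earlier (the values of $$K$$ are arbitrary, and the max-gap statistic is evaluated only at the drawn points, so it is an approximation of the supremum):

```r
# Worst-case gap between the empirical CDF of K draws and the true CDF,
# shrinking as K grows (Glivenko-Cantelli).
set.seed(1)
n <- 30
for (K in c(100, 1000, 10000)) {
  draws <- rgamma(K, shape = n, scale = 10 / n)  # exact dist'n of the mean
  gap <- max(abs(ecdf(draws)(draws) - pgamma(draws, shape = n, scale = 10 / n)))
  cat("K =", format(K, width = 5), " max |Fhat - F| ~", round(gap, 4), "\n")
}
```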
• Good information. One quick follow-up. Does your response indicate that my simulation is plotting the “random draws” from the exponential distribution, and not the means? – Thomas Bilach Jun 26 '20 at 13:48
• @ThomasBilach no, you're plotting draws from the distribution $F_{\bar{X}_n}$, and you're drawing from there $K$ times. The point is that in cases where you know the distribution of $X_i$ you are sampling from, you can literally just calculate the distribution of $\bar{X}_n = \frac{1}{n} \sum X_i$ instead of sampling from it to see what it looks like. And for any finite $n$, this distribution of the mean of draws will not be normal, unless $X_i$ itself is normal. – doubled Jun 26 '20 at 15:31
• But don’t we need to draw from this distribution $K$ times to demonstrate this? My concern is it could be 10,000 draws, or it could be 10,000,000 draws with a finite sample size. The sample size $n$ is what matters, not this theoretical sampling of thousands or even millions of draws. Isn’t the number of $K$ repetitions of this process irrelevant? It just needs to be performed enough times to show that the sample means approximate something normal. Let me know if I am missing something and thank you for clarifying. – Thomas Bilach Jun 26 '20 at 16:07
• @ThomasBilach Maybe I'm misunderstanding what you're asking, but see my updated edit in my post. Let me know if I'm missing something. – doubled Jun 26 '20 at 17:21
• Thank you for your edit. Suppose my simulation was this: X <- matrix(ncol = 4, nrow = 10). Where the number of iterations is 10. Is there a 'rule of thumb' for achieving a reasonably smooth histogram of the individual sample means? Typically, we just assume this process is repeated thousands and thousands of times. If you were demonstrating this to a group of students for the first time, then we normally would not say "...as the number of $K$ iterations increases, then the histogram of sample means converges to something approaching normality." Or can we say this? – Thomas Bilach Jun 26 '20 at 17:29
https://electronics.stackexchange.com/questions/427263/realizing-a-prototype-filter-in-ads-or-any-other-simulator
# Realizing a prototype filter in ADS or any other simulator
I'm trying to design a stub-filter, and I'm stuck on realizing the filter design using Richard's transformations and Kuroda identities.
I'm building a 5th-order Chebyshev LP filter with a 2.6 GHz pass-band edge and a 3.3 GHz stop-band edge (a 2.95 GHz cut-off is fine).
Using the prototype values I calculated the values for Richard's transformation:
$$\lambda/8 = 1.39828575\ \text{cm}$$
$$Z_1 = Z_5 = 1/0.7563 = 1.3222\ldots$$
$$Z_2 = Z_4 = 1.3049$$
$$Z_3 = 1/1.5773 = 0.63399\ldots$$
And at this point the circuitry would look like
And after running the Kuroda identities, it would look like
I'm trying to figure out what I need to do in order to get to making an actual layout in ADS, and how I should choose my widths, etc. The only dimension I know is the length $$l$$, which I calculated to be about $$1.398\ \text{cm}$$. Basically, in the last picture I'd just multiply by Zs/ZL (which is 50 Ohms) to get the actual impedances of the stubs and lines, but I do not know how I should proceed to make an actual stub-filter circuit in ADS with this, considering that I don't know much about choosing the other dimensions that are not given.
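For the denormalization step described above, a quick back-of-the-envelope in R (the vector values are taken from the transformation results earlier in the question; turning these impedances into microstrip widths then additionally requires the substrate's permittivity and height, e.g. with a transmission-line calculator such as LineCalc in ADS):

```r
# Scale the normalized prototype impedances to a 50-ohm system.
Z0 <- 50
z_norm <- c(Z1 = 1.3222, Z2 = 1.3049, Z3 = 0.63399, Z4 = 1.3049, Z5 = 1.3222)
round(z_norm * Z0, 1)  # characteristic impedances in ohms
#   Z1   Z2   Z3   Z4   Z5
# 66.1 65.2 31.7 65.2 66.1
```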
https://root-forum.cern.ch/t/cern-root-file-of-integral/37703
# Cern root file of integral
Hi,
do you know where I can find the file integral0.root that is advertised on this page:
https://root.cern.ch/root/html225/examples/integral.html
Thanks
It seems the following can be accessed:
```
f = TFile::Open("http://root.cern.ch/files/integral.root");
```
Hi,
unfortunately this file contains an empty TGeoManager.
integral.Draw();
unfortunately just shows an empty canvas.
The root file has been created by some integral.C file.
I am extremely interested to see the root file or, even better, the .C file.
Thanks
Maybe this: integral.C
Hi,
Look great! It seems indeed it is the full INTEGRAL satellite in C!
Thanks a lot.
https://testbook.com/objective-questions/mcq-on-aggregates--5eea6a0c39140f30f369e0b7
# Aggregates MCQ Quiz - Objective Questions with Answers for Aggregates
## Latest Aggregates MCQ Objective Questions
#### Aggregates Question 1:
Give an example for flaky shape of aggregate.
1. Pit sand
2. Crushed rock
3. Blown sand
4. Laminated rock
5. None of the above
#### Answer (Detailed Solution Below)
Option 4 : Laminated rock
#### Aggregates Question 1 Detailed Solution
Flaky aggregate:
The aggregate is said to be flaky when its least dimension is less than 3/5th (or 60%) of its mean dimension. The mean dimension is the average of the sieve size through which the particles pass and the sieve size on which they are retained.
Laminated rock is an example of flaky shape of aggregate.
Elongated aggregate:
When the length of the aggregate is greater than 180% of its mean dimension, then it is called elongated aggregate.
Pit sand-
Pit sand is a natural and coarse type of sand which is extracted by digging 2-3 m underneath the ground. It is red-orange in color due to the presence of iron oxide around the grains.
As mentioned above, pit sand is a coarse type of sand and this is not recommended if the sand is coarser than the acceptable limits.
Crushed stone-
It is used as aggregate in construction material uses. The most common types of rock processed into crushed stone include limestone, dolomite, granite, and traprock. Smaller amounts of marble, slate, sandstone, quartzite, and volcanic cinder are also used as construction aggregates.
#### Aggregates Question 2:
Aggregate comprising particles falling essentially within a narrow limit of size fractions are called
1. Gap graded aggregates
2. All-in-all aggregates
3. Uniform aggregates
4. Single size aggregates
#### Answer (Detailed Solution Below)
Option 4 : Single size aggregates
#### Aggregates Question 2 Detailed Solution
Concept:
Gap grading is defined as a grading in which one or more intermediate-size fractions are absent. The term continuously graded is used to distinguish conventional grading from gap grading. On a grading curve, gap-grading is represented by a horizontal line over the range of the size omitted.
Single-Size-Aggregate:
• Aggregates comprising particles falling essentially within a narrow limit of size fraction are called single-size aggregates. For example, a 20 mm single-size aggregate means an aggregate most of which passes through a 20 mm IS sieve and the major portion of which is retained on a 10 mm IS sieve.
• It is used as compaction material at the subgrade and sub-base levels.
• Single-sized aggregates have good resistance to impact. In the case of self-compacting concrete, only 1 size of coarse aggregate is used, i.e.10mm, so that during placing, there is no segregation of material.
Uniform aggregates:
It refers to a gradation that contains most of the particles in a very narrow size range. In essence, all the particles are the same size. The curve is steep and only occupies the narrow size range specified.
• The narrow range of sizes.
• Grain-to-grain contact.
• High void content.
• High permeability.
• Low stability.
• Difficult to compact
All-in-all aggregates:
An all-in-all aggregate refers to a sample that has approximately equal amounts of various sizes of aggregate. By having a dense gradation, most of the air voids between the materials are filled with particles. A dense gradation will result in an even curve on the gradation graph.
• Wide range of sizes.
• Grain-to-grain contact.
• Low void content.
• Low permeability.
• High stability.
• Difficult to compact
#### Aggregates Question 3:
The limit of crushing value of aggregate as specified (in percentage) by IS:383-1970 for aggregate used for runways and such other wearing surface is
1. 45
2. 25
3. 20
4. 30
#### Answer (Detailed Solution Below)
Option 4 : 30
#### Aggregates Question 3 Detailed Solution
Explanation:
Aggregate Crushing Value (ACV)
• This test is used to find the strength of the coarse aggregate and indicates the resistance to crushing under a gradually applied crushing load.
• It is measured as the percentage of aggregates passing through 2.36 mm IS sieve after the specimen subjected to a compressive load of 40 tonnes at the rate of 4 tonnes per minute and to the weight of aggregates taken.
• Lower the ACV ⇒ fewer aggregates get crushed during loading ⇒ higher resistance against crushing
• Aggregate crushing value should not be greater than 30 % for the surface course and 45 % for the base course
| Type of surface | Crushing Value |
| --- | --- |
| Base Course | Less than 45% |
| Wearing Course | Less than 30% |
#### Aggregates Question 4:
Which of the following properties of aggregate influenced by porosity, permeability and water absorption is considered to be less important, in general, in the Indian context?
1. resistance to abrasion
2. bond between aggregate and cement paste
3. resistance to aggressive chemical agencies
4. freezing and thawing
#### Answer (Detailed Solution Below)
Option 4 : freezing and thawing
#### Aggregates Question 4 Detailed Solution
Explanation:
The properties of aggregates that decides their nature are as follows:
• Specific gravity
• Bulkage of aggregates
• Voids Composition Size & Shape
• Texture of Aggregate
• Porosity
• Water Absorption
• Bulkage of aggregates Fineness Aggregate
• The surface area of aggregate
• Deleterious Material
• Crushing Value of Aggregate
• Impact Value of Aggregate
Porosity, Permeability, and Water Absorption of Aggregates:
• We can see some tiny holes on the surface of the aggregates, called pores; rocks with such holes are called porous rocks. The pores form due to air bubbles trapped on the surface as the molten magma solidifies. Highly porous aggregates may easily disintegrate when a load is applied.
• The coarse aggregate should not absorb water; else, it may create cracks on the surface of the concrete after hardening. Water absorption is calculated from the difference between the saturated and dry weights of the aggregate, expressed as a percentage of the dry weight. The water absorption will affect the water-cement ratio in concrete.
Hence, freezing and thawing is the property of aggregate influenced by porosity, permeability and water absorption that is considered to be less important, in general, in the Indian context.
#### Aggregates Question 5:
Which of the following tests estimates the chemical effects of a deleterious action between cement and aggregate?
1. Micro Deval test
2. Aggregate impact value
3. Mortar bar test
4. Los Angeles abrasion value
#### Answer (Detailed Solution Below)
Option 3 : Mortar bar test
#### Aggregates Question 5 Detailed Solution
Explanation:
To estimate the chemical effects of a deleterious action between cement and aggregate, the following tests are used:
• Mortar bar method
• Chemical method
• Petrographic examination
• Rapid mortar bar test
• Concrete prism test
Micro Deval Test:
• The Micro‐Deval abrasion test is a test of coarse aggregate to determine abrasion loss in the presence of water and an abrasive charge.
Aggregate Impact Value Test:
• This test is performed to find the toughness of the aggregate i.e its resistance against impact loading.
Los Angeles abrasion Test:
• This test is performed to find the hardness of the aggregates i.e its resistance against wear and tear.
## Top Aggregates MCQ Objective Questions
#### Aggregates Question 6
The test intended to study the resistance of aggregates to weathering action is:
1. Abrasion test
2. Crushing test
3. Impact test
4. Soundness test
#### Answer (Detailed Solution Below)
Option 4 : Soundness test
#### Aggregates Question 6 Detailed Solution
Abrasion Test: It is carried out to test the hardness property of aggregates and to decide whether they are suitable for different pavement construction works. The principle of Los Angeles abrasion test is to find the percentage wear due to relative rubbing action between the aggregate and mass traffic.
Attrition Test: It is carried out to test the hardness or resistance of the aggregates against mutual rubbing or grinding of the aggregate mass under the action of traffic load.
Soundness Test: This test is intended to study the resistance of aggregates to weathering action, by conducting accelerated weathering test cycles.
Crazing: This term is used to denote the development of a network of minor cracks on the pavement slab.
Important Points:
Various Tests for Aggregates with IS codes:

| Property of aggregate | Type of Test | Test Method |
| --- | --- | --- |
| Crushing strength | Crushing test | IS: 2386 (Part 4)-1963 |
| Hardness | Los Angeles abrasion test | IS: 2386 (Part 5)-1963 |
| Toughness | Aggregate impact test | IS: 2386 (Part 4)-1963 |
| Durability | Soundness test (accelerated durability test) | IS: 2386 (Part 5)-1963 |
| Shape factors | Shape test | IS: 2386 (Part 1)-1963 |
| Specific gravity and porosity | Specific gravity test and water absorption test | IS: 2386 (Part 3)-1963 |
| Adhesion to bitumen | Stripping value of aggregate | IS: 6241-1971 |
#### Aggregates Question 7
If fineness modulus of sand is 2.5, it is graded as:
1. Very fine sand
2. Fine sand
3. Medium sand
4. Coarse sand
#### Answer (Detailed Solution Below)
Option 2 : Fine sand
#### Aggregates Question 7 Detailed Solution
Explanation:
Fineness modulus of sand (fine aggregate) is an index number that represents the mean size of the particles in sand.
FM is the sum of the total percentages retained on each specified sieve divided by 100.
| Type of Sand | Fineness Modulus Range |
| --- | --- |
| Fine Sand | 2.2 – 2.6 |
| Medium Sand | 2.6 – 2.9 |
| Coarse Sand | 2.9 – 3.2 |
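As a quick illustration of the calculation (the sieve percentages below are made-up numbers, not from any standard):

```r
# Fineness modulus = (sum of cumulative % retained on the standard sieves) / 100.
# Illustrative cumulative % retained on the 4.75 mm, 2.36 mm, 1.18 mm,
# 600 um, 300 um and 150 um sieves:
cum_retained <- c(5, 15, 35, 55, 75, 90)
fm <- sum(cum_retained) / 100
fm  # 2.75, which the table above grades as medium sand
```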
#### Aggregates Question 8
Bulking of sand occurs in the moisture content of _____.
1. 3%
2. 5%
3. 10%
4. 12%
#### Answer (Detailed Solution Below)
Option 2 : 5%
#### Aggregates Question 8 Detailed Solution
Explanation:
Bulking of sand:
The increase in the volume of sand due to an increase in moisture content is known as the bulking of sand. A film of water is created around the sand particles which forces the particles to get aside from each other and thus the volume is increased.
Bulking of sand is maximum at about 4.6% moisture content.
∴ Among the given options, the moisture content (in percentage) at which bulking of sand occurs is closest to 5%.
Note:
Five to eight percent of the increase in moisture in the sand can increase the volume of sand up to 20 to 40 percent.
#### Aggregates Question 9
Deval Attrition Test is used to determine which of the the following ?
1. Aggregate abrasion value
2. Aggregate impact value
3. Aggregate roughness value
4. Aggregate crushing value
#### Answer (Detailed Solution Below)
Option 1 : Aggregate abrasion value
#### Aggregates Question 9 Detailed Solution
Deval attrition test:
A Deval attrition test is a test which is carried out to measure the rate of wear of a granular material i.e. abrasion resistance value. One of the best examples of a material subjected to an attrition test are stones used in road construction, indicating the resistance of the material to being broken down under road traffic.
$$Percentage\;wear = \frac{Loss\;in\;weight}{Initial\;weight} \times 100\%$$
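For instance, with made-up numbers: a sample with an initial weight of 1000 g that loses 50 g of material during the test would give

$$Percentage\;wear = \frac{50}{1000} \times 100\% = 5\%$$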
Dorry abrasion test:
Coefficient of hardness is calculated from Dorry abrasion test. It is an indication of the abrasion resistance i.e. resistance against the external weathering of aggregates.
It is given by, Coefficient of Hardness ⇒ $$C_H = 20 - \frac{Loss\;of\;weight\;in\;gms}{3}$$
Los Angeles test:
Los Angeles machine or Los Angeles abrasion test on aggregates is used to measure the aggregate toughness and abrasion resistance such as crushing, degradation and disintegration.
The principle of Los Angeles abrasion test is to produce abrasive action by use of standard steel balls which when mixed with aggregates and rotated in a drum for a specific number of revolutions also causes the impact on aggregates.
Maximum abrasion value for wearing course is 30% and for the base course in WBM is 50%.
As per IS 2386 (Part IV):1963, Aggregate abrasion value can be determined by Deval attrition test, Dorry abrasion test and Los Angeles test.
#### Aggregates Question 10
The aggregate impact value of the aggregate used in _______.
1. Building concrete is less than 45
2. Road pavement concrete is less than 30
3. Runway concrete is less than 30
4. All the options are correct
#### Answer (Detailed Solution Below)
Option 4 : All the options are correct
#### Aggregates Question 10 Detailed Solution
Concept:
Aggregate Impact Test: It is conducted to determine the toughness of the aggregates.
For testing, the specimen passing through 12.5 mm sieve but retained on 10 mm sieve is filled in 3 layers with 25 time stamping on each layer and then hammer of 13.5 to 14 kg is dropped freely from a height of 38 cm for 15 blows.
Aggregate impact value is then the percentage of fines passing through the 2.36 mm sieve relative to the total weight of the aggregate sample.
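In symbols (the notation is mine, not from the IS code): if $$W_1$$ is the total weight of the test sample and $$W_2$$ the weight of fines passing the 2.36 mm sieve after the 15 blows, then

$$AIV = \frac{W_2}{W_1} \times 100\%$$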
Aggregate impact value for different requirement are as follows:
| Type of pavement | Material/Layer | Maximum aggregate impact value |
| --- | --- | --- |
| WBM | Sub-base course | 50 |
| Cement Concrete | Base course | 45 |
| Bituminous Macadam | Base course | 35 |
| WBM | Surface course | 30 |
| Bituminous | Wearing surfaces | 30 |
∴ For concreting the surface of runways, roads and pavements, the aggregate impact value shall not exceed 30% by weight (and 50% for WBM sub-base courses).
#### Aggregates Question 11
Which of following is not a function of sand in mortar?
1. Providing strength
2. Reducing the consumption of cement
3. Reducing shrinkage
4. Reducing the setting time
#### Answer (Detailed Solution Below)
Option 1 : Providing strength
#### Aggregates Question 11 Detailed Solution
Concept:
Functions of aggregate in a mortar:
1. Use of aggregate in the mortar does not add any strength to it but helps in re-adjusting its strength which can be achieved by increasing or decreasing its proportion.
2. Use of aggregate in mortar divides the paste of binding material into a number of layers thereby increases the area over which it can be applied.
3. Use of aggregate in mortar helps in increasing its volume or decreasing its cost.
4. The use of aggregate in mortar reduces the tendency of shrinkage thereby avoids the tendency of the development of cracks in it.
5. Use of aggregate in mortar helps in a uniform setting by letting the gases and heat escape from it formed during the hydration thereby avoids the tendency of development of cracks in it.
Mistake Points: Sand allows carbon dioxide from the atmosphere to reach some depth in the case of lime mortars and thereby improves their setting capability.
#### Aggregates Question 12
Which of the following tests are used for testing of tiles?
1. breaking strength test
2. impact test
3. transverse strength test
4. water absorption test
1. 1 and 3 only
2. 1, 2 and 3 only
3. 1, 2 and 4 only
4. 1, 2, 3 and 4
#### Answer (Detailed Solution Below)
Option 4 : 1, 2, 3 and 4
#### Aggregates Question 12 Detailed Solution
Explanation:
Various Test for Testing of tiles are:
i) Moisture Expansion Test
ii) Water Absorption Test
iii) Bond Strength Test
iv) Transverse Strength Test
v) Impact Test
vi) Thermal Shock Resistance Test
vii) Breaking Strength test
viii) Chemical Resistance Test
ix) Modulus of Rupture Test
x) Surface Abrasion Test
xi) Hardness Test
Hence, all the given tests are required for the testing of tiles. The answer is option 4 (1, 2, 3 and 4).
#### Aggregates Question 13
Which of the following aggregates gives maximum strength in concrete?
1. Rounded aggregate
2. Elongated aggregate
3. Flaky aggregate
4. Cubical aggregate
#### Answer (Detailed Solution Below)
Option 4 : Cubical aggregate
#### Aggregates Question 13 Detailed Solution
Cubical aggregate gives the maximum strength in concrete as it has good packing and strength in all directions.
Rounded aggregate is not suitable for concrete.
Flaky aggregates have a small thickness and elongated aggregates have excessive length; such aggregates can be easily crushed and give minimum strength.
Reasons:
• Generally, in normal concrete loads are taken by aggregates only and cement acts as a binder, therefore, a normal concrete can have maximum strength till the aggregates are not broken.
• If the aggregates fail under a load before failure of cement sand matrix. The concrete produced with that aggregates will not achieve the desired strength.
• So using flaky and elongated aggregates might lead to failure of concrete and hence should be avoided.
Important Points
Classification of aggregates on basis of shape
1. Rounded aggregates / spherical - Rounded aggregates result in the minimum percentage of voids (32 – 33%), hence give more workability. They require a lower water-cement ratio. They are not considered for high-strength concrete because of poor interlocking behaviour and weak bond strength.
2. Irregular or partly rounded aggregates - Irregular aggregates may result in 35 – 37% voids. These give lesser workability when compared to rounded aggregates.
3. Angular aggregates - Angular aggregates result in the maximum percentage of voids (38 – 45%), hence give less workability.
4. Flaky aggregates - When the aggregate thickness is small compared with the width and length of that aggregate, it is said to be a flaky aggregate. In other words, when the least dimension of the aggregate is less than 60% of its mean dimension, it is said to be flaky.
5. Elongated aggregates - When the length of the aggregate is larger than the other two dimensions, or greater than 180% of its mean dimension, it is called an elongated aggregate.
6. Flaky and elongated aggregates - When the aggregate length is larger than its width and the width is larger than its thickness, it is said to be flaky and elongated. The last three types of aggregates are not suitable for concrete mixing.
#### Aggregates Question 15
An aggregate which passes through 25 mm I.S. sieve and is retained on 20 mm sieve, is said to be flaky if its least dimension is less than
1. 22.5 mm
2. 18.5 mm
3. 16.5 mm
4. 13.5 mm
#### Answer (Detailed Solution Below)
Option 4 : 13.5 mm
#### Aggregates Question 15 Detailed Solution
Concept:
A particular aggregate will be flaky if its minimum dimension is less than 60% (or 3/5 times) the size of its average dimension.
Calculation:
The aggregate completely passes through 25 mm sieve and completely retained on 20 mm sieve.
So, its average size is $$\frac{25 + 20}{2} = 22.5\ \text{mm}$$
The minimum dimension should be less than $$\frac{3}{5}$$ times the average dimension $$= \frac{3}{5} \times 22.5 = 13.5\ \text{mm}$$
http://spmphysics.onlinetuition.com.my/2013/07/gas-trapped-in-capillary-tube-example-2.html
# Gas Trapped in a Capillary Tube - Example 2
Example 2:
Figure above shows some air trapped in a J-tube. Find the pressure of the trapped air. [Density of water = 1000 kg/m³; Atmospheric pressure = 100,000 Pa]
$$P_{gas} = P_{atm} + h\rho g = 100{,}000 + (0.2)(1000)(10) = 102{,}000\ \text{Pa}$$
https://tex.stackexchange.com/questions/359415/how-can-i-find-the-cell-height-in-a-tabular
# How can I find the cell height in a tabular? [closed]
The context for this question is that I would like to vertically align text in a tabular.
I.e. I would like to choose whether the text is:
• as close as possible to the top of the cell, or;
• as close as possible to the bottom of the cell, or;
• in the middle of the cell.
This is not what TeX means by "alignment", but it is what I mean here.
One way to do this is
\documentclass{article}
\usepackage[a4paper]{geometry}
\usepackage{array}
\usepackage{graphicx}
\begin{document}
\begin{tabular*}{4cm}{ p{4cm} }
\hline
\parbox[m][1.8cm][t]{4cm}{Some top-aligned content.} \\
\hline
\parbox[m][1.8cm][c]{4cm}{Some middle-aligned content.} \\
\hline
\parbox[m][1.8cm][b]{4cm}{Some bottom-aligned content.} \\
\hline
\end{tabular*}
\end{document}
This aligns the middle of the parbox to the cell baseline. It then aligns the text within the parbox either to the top, the middle, or the bottom.
But, that assumes that I can set the row height at 1.8 cm. In other cases, there are may be previous columns in the table which have fixed the row height. So, is there a way I can find out the current row height in a table?
Here's an example of what I mean:
\documentclass{article}
\usepackage[a4paper]{geometry}
\usepackage{array}
\usepackage{graphicx}
\begin{document}
\begin{tabular*}{4cm}{ p{4cm} p{4cm} }
\hline
There could be any old stuff here... &
\parbox[m][1.8cm][t]{4cm}{Some top-aligned content.} \\
\hline
... of any width or height... &
\parbox[m][1.8cm][c]{4cm}{Some middle-aligned content.} \\
\hline
... is there a variable I can use in the height argument to parbox,
which will ensure it fits the whole cell height? &
\parbox[m][1.8cm][b]{4cm}{Some bottom-aligned content.} \\
\hline
\end{tabular*}
\end{document}
What should I put instead of "1.8cm" in the parbox's height argument?
• Your question is not clear. Please, improve MWE with case of "previous" column. Of topic: \parbox hasn't option m, instead it use c, i. e.: change your table code to: \begin{tabular*}{4cm}{l} \hline \parbox[t][1.8cm][t]{4cm}{Some content.} \\ \hline \parbox[t][1.8cm][c]{4cm}{Some content.} \\ \hline \parbox[t][1.8cm][b]{4cm}{Some content.} \\ \hline \end{tabular*} – Zarko Mar 20 '17 at 12:50
• as zarko says \parbox hasn't got an m option but perhaps you are just looking for the m column type and no nested parbox at all? – David Carlisle Mar 20 '17 at 13:12
I wasn't sure this would work, but I still wouldn't bother unless you want to play with proportional glue.
Note, I like to add \strut to the beginning and end of each \parbox.
\documentclass{article}
\usepackage[a4paper]{geometry}
\usepackage{array}
\usepackage{graphicx}
\newlength{\boxsize}
\newcommand{\setboxsize}[2]% #1 = width, #2=text
{\sbox0{\parbox{#1}{\strut #2\strut}}%
\global\boxsize=\ht0
\box0}
\begin{document}
\begin{tabular}{ll}
\hline
\setboxsize{4cm}{There could be any old stuff here...} &
\parbox[c][\boxsize][t]{4cm}{\strut Some top-aligned content.\strut} \\
\hline
\setboxsize{4cm}{... of any width or height...} &
\parbox[c][\boxsize][c]{4cm}{\strut Some middle-aligned content.\strut} \\
\hline
\setboxsize{4cm}{... is there a variable I can use in the height argument to parbox,
which will ensure it fits the whole cell height?} &
\parbox[c][\boxsize][b]{4cm}{\strut Some bottom-aligned content.\strut} \\
\hline
\end{tabular}
\end{document}
This solution replaces tabular with a new environment (Tabular). It runs tabular twice, the first time saving the row heights (no output) and the second using them. It also introduces the \cell macro. which functions similar to \parbox.
Part of this solution was stolen from here.
\documentclass{article}
\usepackage{environ}
\makeatletter
\newcount\cell@row
\newlength{\cell@size}
\newcommand{\@cell}[3][c]% (first time) #1=tcb (optional), #2=width, #3=text
{\sbox1{\parbox{#2}{\topstrut #3\bottomstrut}}%
\dimen0=\ht1
\ifdim\cell@size<\dimen0 \global\cell@size=\dimen0\fi
\expandafter\xdef\csname cell@size\the\cell@row\endcsname{\the\cell@size}%
}
\newcommand{\@@cell}[3][c]% (second time) #1=tcb (optional), #2=width, #3=text
{\parbox[c][\csname cell@size\the\cell@row\endcsname][#1]{#2}{\topstrut #3\bottomstrut}}
\NewEnviron{Tabular}[2][c]% #1 = tcb (optional), #2 = columns (same as tabular)
{\let\cell=\@cell
\global\cell@row=0
\global\cell@size=0pt
\def\topstrut{\rule{0pt}{\arraystretch\ht\strutbox}}%
\def\bottomstrut{\rule[-\arraystretch\dp\strutbox]{0pt}{0pt}}%
\let\old@arraycr\@arraycr% executes at end of line
\def\@arraycr{\global\advance\cell@row by 1
\global\cell@size=0pt
\old@arraycr}%
\sbox0{\begin{tabular}{#2}
\BODY
\end{tabular}}%
\let\cell=\@@cell
\global\cell@row=0
\def\@arraycr{\global\advance\cell@row by 1
\old@arraycr}%
\begin{tabular}[#1]{#2}
\BODY
\end{tabular}}
\makeatother
\begin{document}
\noindent\begin{Tabular}{llll}
\hline
\cell[t]{3cm}{\raggedright Some top-aligned content.} &
\cell[c]{3cm}{\raggedright Some middle-aligned content.} &
\cell[b]{3cm}{\raggedright Some bottom-aligned content.} &
\cell{1cm}{\raggedright I saved the largest cell for last} \\
\hline
\end{Tabular}
\end{document}
• Accepting. This answers my question... and more importantly it tells me there's no easier way. Indeed, for a complete solution I'd have to call setboxsize on each succeeding cell in a row, ensuring it took the max; and then, what if the text is taller than the previous rows? Gosh, guys; in HTML you type style='vertical-align:top' and in Word you click a little button. – dash2 Mar 21 '17 at 13:15
• You should only do it for the largest cell in a row, and it needs to be done at the start of the row (you can delay \box0 until needed). – John Kormylo Mar 21 '17 at 15:16
• Ah... but which will be the largest cell? Context: I'm writing code that spits out LaTeX and which can be called with different arguments. – dash2 Mar 21 '17 at 15:18
• I could... but it's too much. I'm trying to let people use LaTeX, not to rewrite LaTeX to add functionality. I might go for a fudge like a zero-width centered starting cell, and then let alignments be t/c/b with respect to that. But I want to respect the constraints of the system, not to do some weird unique thing that may work on alternate Fridays. I'll keep watching for a table package with this functionality... tabu looked good but appears unmaintained, cals seems a little weird.... – dash2 Mar 21 '17 at 15:46
• I thought of a new approach (and deleted several earlier comments). – John Kormylo Mar 21 '17 at 19:20
Your question isn't very clear as to the desired result but I think you want
\documentclass{article}
\usepackage[a4paper]{geometry}
\usepackage{array}
\usepackage{graphicx}
\begin{document}
\begin{tabular}{p{4cm}p{4cm} }
\hline
There could be any old stuff here... &
Some content. \\
\hline
\multicolumn{1}{m{4cm}}{... of any width or height...}&
Some content. \\
\hline
\multicolumn{1}{b{4cm}}{... is there a variable I can use in the height argument to parbox,
which will ensure it fits the whole cell height?} &
Some content. \\
\hline
\end{tabular}
\end{document}
Original guess:
\documentclass{article}
\usepackage[a4paper]{geometry}
\usepackage{array}
\usepackage{graphicx}
\begin{document}
\begin{tabular}{m{4cm}m{4cm} }
\hline
There could be any old stuff here... &
Some content. \\
\hline
... of any width or height... &
Some content. \\
\hline
... is there a variable I can use in the height argument to parbox,
which will ensure it fits the whole cell height? &
Some content. \\
\hline
\end{tabular}
\end{document}
Note I used tabular not tabular*, as your specification was for a total width of 4cm yet two columns, each of width 4cm plus 12pt of \tabcolsep padding, which clearly cannot fit.
• I will try to clarify my question. Concretely, in the example, I would like the first row to be top-aligned, the second row to be middle-aligned, and the third row to be bottom-aligned, in the sense I describe in the question. – dash2 Mar 20 '17 at 14:47
• @dash2 OK I added a second version – David Carlisle Mar 20 '17 at 14:55
• It's a good try and it works for the example. But the problem is, I do not know what will be in those first cells in the row! They might themselves be upper or lower aligned, for example. And I might want a third column which has a different alignment from the first. In other words, I want the cell itself to determine its own alignment... Maybe I could have an empty width 0 column before each cell? (Sigh... this whole system is so broken :-/) – dash2 Mar 20 '17 at 15:52
• I should add that this is why I came up with the parbox solution in the first place - it doesn't rely on setting up the previous cell in a particular way. But, I am open to other solutions. – dash2 Mar 20 '17 at 15:53
• @dash2 you are not using "alignment" the way it is used in TeX. a tabular row has a single baseline row for the entire row, each cell has a baseline that is placed on the row's baseline. lcr entries just have one line so their baseline goes on the row baseline. p entries have baseline based on their top row, m on the vertical centre and b on the bottom row. So the alignment of Some Content is unchanged in the three rows in the new example, it is the alignment of the first column that you are changing to place its first line, last line or centre on the reference line for the row. – David Carlisle Mar 20 '17 at 17:21
http://quantummechanics.ucsd.edu/ph130a/130_notes/node365.html
### Positronium
Positronium, the Hydrogen-like bound state of an electron and a positron, has a "hyperfine" correction which is as large as the fine structure corrections, since the magnetic moment of the positron is the same size as that of the electron. It is also an interesting laboratory for the study of Quantum Physics. The two particles bound together are symmetric in mass and all other properties. Positronium can decay by annihilation into two or more photons.
In analyzing positronium, we must take some care to correctly handle the relativistic correction in the case of a reduced mass much different from the electron mass and to correctly handle the large magnetic moment of the positron.
The zero order energy of positronium states is

$$E_n = -\frac{1}{2}\alpha^2\mu c^2\frac{1}{n^2}$$

where the reduced mass is given by

$$\mu = \frac{m_e m_e}{m_e + m_e} = \frac{m_e}{2}.$$
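As a quick numerical check: since $\mu = m_e/2$, every Bohr level is half its Hydrogen value, so the positronium ground state sits at

$$E_1 = -\frac{1}{2}\alpha^2\frac{m_e}{2}c^2 = \frac{-13.6\ \mathrm{eV}}{2} \approx -6.8\ \mathrm{eV}.$$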
The relativistic correction must take account of the motion of both the electron and the positron. We use $\vec{r} = \vec{r}_1 - \vec{r}_2$ and $\vec{p} = \frac{\vec{p}_1 - \vec{p}_2}{2}$. Since the electron and positron are of equal mass, they are always exactly opposite each other in the center of mass, and so the momentum vector we use is easily related to an individual momentum.
We will add the relativistic correction for both the electron and the positron:

$$H_1 = -\frac{1}{8}\frac{p_1^4 + p_2^4}{m_e^3 c^2} = -\frac{1}{4}\frac{p^4}{m_e^3 c^2}$$

This is just half the correction we had in Hydrogen (with $m_e$ essentially replaced by $\mu$).
The spin-orbit correction should be checked also. We had the interaction between the spin and the B field produced by the orbital motion, and since the momenta are related as above, we have the corresponding term for the electron. We just need to add the positron. A little thinking about signs shows that we just add the positron spin. Let's assume the Thomas precession is also the same. We have the same formula as in the fine structure section, except for the masses that appear in the denominator. The final formula is then again just one-half of the Hydrogen result if we write everything in terms of $\mu$ for the electron spin, but we add the interaction with the positron spin.
The calculation of the spin-spin (or hyperfine) term also needs some attention. In the expression we had for Hydrogen, the masses in the denominator of the first term come from the magnetic moments and thus are correctly the mass of the particle, while the mass in the last term comes from the wavefunction and should be replaced by $\mu$. For positronium, the result follows with these replacements.
Jim Branson 2013-04-22
https://etsf.polytechnique.fr/node/3363
# Effects of Low-Energy Excitations on Spectral Properties at Higher Binding Energy: The Metal-Insulator Transition of ${\mathrm{VO}}_{2}$
Title: Effects of Low-Energy Excitations on Spectral Properties at Higher Binding Energy: The Metal-Insulator Transition of ${\mathrm{VO}}_{2}$
Publication Type: Palaiseau Article
Authors: Gatti, M; Panaccione, G; Reining, L
Journal: Phys. Rev. Lett.
Volume: 114
Pagination: 116402
Year of Publication: 2015
Publisher: American Physical Society
DOI: 10.1103/PhysRevLett.114.116402
URL: http://link.aps.org/doi/10.1103/PhysRevLett.114.116402
Acknowledgements: GENCI, ANR
https://rpg.meta.stackexchange.com/questions/10074/mathjax-guide-for-rpg-se-how-to-format-pretty-tables-and-equations/10075
# MathJax guide for RPG.SE: How to format pretty tables and equations?
For a more exhaustive tutorial please see Math.SE's Mathjax quick reference which covers many more things than this post does. This is trying to focus on the formatting which is typically needed here, which is a lot less than a maths site does.
• How do I format good-looking equations?
• How do I format good-looking tables?
• related: MathJaX is live on RPGSE (which also has some examples, but not as well gathered/organized as this) – nitsua60 Jul 12 '20 at 13:32
• Relevant MSE post: Stack Exchange is rolling out native table support that isn't reliant on MathJax. It goes into testing today on MSE and on the DBA Meta, then rolls out to DBA.SE itself a week later, and will be available network-wide a week after that. It uses "GitHub-flavored Markdown" table syntax (since CommonMark doesn't include a specification for tables at the moment). – V2Blast Nov 24 '20 at 0:54
Note that MathJax will make a page slower to load and can be jarring if there are a lot of inline equations (note that the typeface and size are different). For superscript in normal text, say for footnotes, use <sup></sup> and HTML codes for special characters, such as &dagger; for † and &times; for ×.
## Equations and Maths
First, a definition. MathJax lets us format things as LaTeX math-mode, which means maths looks good and text looks bad. (Different kerning and no spacing, which makes it difficult to read.)
You can make in-line equations by enclosing them in $..$ and put them on separate, centred lines by enclosing them in $$..$$. So, for example, $1+2=3$ renders in-line, while $$1+2=3$$ renders on its own centred line.
Super- and subscript are achieved via ^ and _ respectively. If these are to contain more than a single character, enclose them in {..}. Example: $M_{x, b}^n = x_b + n^3$.
There are a number of functions for special characters and formatting: $\times$ gives the multiplication sign, $\sum$ gives the summation sign, and similarly for Greek letters ($\alpha$ gives alpha) and common functions ($\ln$).
Fractions can be achieved using \frac{}{}. When putting a fraction in parentheses, use \left and \right on those to make them scale with the height of the fraction. Example with and without:
$$\frac{1}{20} + (\frac{1}{20})^2 + \left(\frac{1}{20}\right)^2$$
If you have long equations you can use the align environment to make them align over multiple lines. Use \\ to mark linebreaks and & to define anchors, i.e. points which are to be aligned on the vertical. (Note the spacing sometimes goes weird around the &, which should be fixable by injecting some empty brackets {}.)
\begin{align}
P &= 1 + 2 + 3 + 4 + 5 + \frac{3}{7} \\
&\approx 15.429
\end{align}
## Markdown Tables
In december 2020 we got support for markdown tables (Announcement post on main meta).
Cells are divided by | and the mandatory header is delineated by the row of dashes (--). Column alignment can be set by including colons in that row. Spacing and number of dashes is not critical (line breaks are), but can be useful to keep the markdown version readable. Below is an example table and the code producing it.
| A header | Another header | Last Header |
| :------- | :------------: | ----------: |
| First | row | cell entry |
| Second | row | It is possible to include quite a lot of text in a cell, and it will wrap if needed. |
## MathJax Tables
In order to make tables with MathJax we make use of the array environment. This has the side effect that it makes text look bad, unless we use the \text command. This makes it render like text, but note that Markdown does not work inside such commands. You can use \textit and \textbf instead to make the text italicized or bold respectively.
Use \\ to end rows, and & to separate cells. You need a column definer at the start (otherwise your first character will be eaten). It defines how your columns will be justified (left, centered, or right) and you can use | to introduce vertical lines. Use \hline after a linebreak to make a horizontal line. Consider using as few lines as necessary to make the table readable. Column justification defaults to centered.
\begin{array}{r|lc}
\textbf{Number} & \textbf{Left Justified Text} & N_{i+1} \\ \hline
1 & Text in math mode & 3\\
3 & \text{Text as normal text} & 97 \\
13 & \textit{Text in italics} & - \\
42 & \text{Text } \textbf{bolded} \text{ for effect}
\end{array}
There are a number of webapps which let you generate LaTeX tables using a more visual front end. You will need to change tabular to array and put in \text functions as appropriate.
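For example (an illustrative hand conversion, not the output of any particular generator), a LaTeX table like

\begin{tabular}{|l|c|}
\hline
Name & HP \\ \hline
Goblin & 7 \\ \hline
\end{tabular}

becomes the following for MathJax:

$$\begin{array}{|l|c|}
\hline
\text{Name} & \text{HP} \\ \hline
\text{Goblin} & 7 \\ \hline
\end{array}$$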
• If you align the = and the \approx in your align example (rather than aligning the first entity after them) you won't have that spacing problem and won't need the invisible brackets. I've gone ahead and done it in the LaTeX (but not the code version) so that you can see and decide if you want to edit. – nitsua60 Jul 12 '20 at 13:30
• @nitsua60 Huh.. Cheers! I left the note in about the brackets, because it is sometimes needed, depending on what you do with it. – Someone_Evil Jul 12 '20 at 20:21
• That's funny--I've literally never had to do that, nor seen that trick used. And tex.se is how I made my way to the Stack! As usual with (La)TeX, running across something new every time =) – nitsua60 Jul 12 '20 at 20:32
https://www.physicsforums.com/threads/linear-algebra-spans.543675/
# Linear Algebra - Spans
## Homework Statement
Let V be a real vector space and {b_1,b_2,b_3,b_4} a linearly independent set of vectors in V
## The Attempt at a Solution
Show that $span(b_1,b_2,b_3,b_4)=span(b_1-b_3,b_2-b_1,b_3,b_4-b_2)$
If I equate the LHS and RHS as
$\alpha_1b_1=+\alpha_1b_1-\alpha_3b_3$ implies $\alpha_3=0$
$\alpha_2b_2=\alpha_2b_2-\alpha_1b_1$ implies $\alpha_1=0$
$\alpha_3b_3=\alpha_3b_3$ but $\alpha_3=0$
$\alpha_4b_4=\alpha_4b_4-\alpha_2b_2$ implies $\alpha_2=0$
This correct? What about $\alpha_4$?
Mark44
Mentor
No, this isn't correct. Let v be an arbitrary vector in $span(b_1-b_3,b_2-b_1,b_3,b_4-b_2)$, which means that v is some linear combination of these vectors. Show that this same vector is a linear combination of $(b_1,b_2,b_3,b_4)$.
Now go the other way. Let u be an arbitrary vector in $span(b_1, b_2, b_3, b_4)$. Show that u is also a linear combination of the vectors in the second set.
$$(b_1,b_2,b_3,b_4)=\alpha_1\left ( b_1 - b_3 \right )+\alpha_2(b_2-b_1)+\alpha_3(b_3)+\alpha_4(b_4-b_2)$$
Is this the correct start?
thanks
Mark44
Mentor
No. Let v = <v1, v2, v3, v4> be an arbitrary vector in $span(b_1-b_3,b_2-b_1,b_3,b_4-b_2)$.
Not sure what to do...?
$$(v_1,v_2,v_3,v_4)=\alpha_1\left ( b_1 - b_3 \right )+\alpha_2(b_2-b_1)+\alpha_3(b_3)+\alpha_4(b_4-b_2)$$
Mark44
Mentor
Show that v is a linear combination of $(b_1,b_2,b_3,b_4)$.
Now go the other way. Let u be an arbitrary vector in $span(b_1, b_2, b_3, b_4)$. Show that u is also a linear combination of the vectors in the second set.
but how do I show this? I realise that a vector v is a linear combination of the vectors b_1, b_2, b_3 and b_4 if it can be expressed in the form
$$v=\alpha_1b_1+\alpha_2b_2+\alpha_3b_3+\alpha_4b_4+\dots+\alpha_nb_n$$
Don't know how to proceed... thanks
Mark44
Mentor
Work with the right side of the equation in post #6.
$$V=\left \{ v_1,v_2,v_3,v_4 \right \}=b_1(\alpha_1-\alpha_2)+b_2(\alpha_2-\alpha_4)+b_3(\alpha_3-\alpha_1)+b_4(\alpha_4)$$
$$U=\left \{ u_1,u_2,u_3,u_4 \right \}=\alpha_1b_1+\alpha_2b_2+\alpha_3b_3+\alpha_4b_4$$
I don't know how one would be a linear combination of the other? Thanks
Mark44
Mentor
$$V=\left \{ v_1,v_2,v_3,v_4 \right \}=b_1(\alpha_1-\alpha_2)+b_2(\alpha_2-\alpha_4)+b_3(\alpha_3-\alpha_1)+b_4(\alpha_4)$$
$$=(\alpha_1-\alpha_2) b_1 +(\alpha_2-\alpha_4)b_2 + (\alpha_3-\alpha_1)b_3+ (\alpha_4)b_4$$
Doesn't this show that v (not V) is a linear combination of b1, b2, b3, and b4?
Sorry, I don't get it. Aren't we to show that
if $$v=(v_1,v_2...v_n)$$ and $$u=(u_1,u_2...u_n)$$ then span v = span u if and only if v is a linear combination of those in u and vice versa?
OK, I think I see what is required. I now need to express the vectors b_1, b_2, b_3 and b_4 as linear combinations of $$(b_1-b_3), (b_2-b_1), (b_3)$$ and $$(b_4-b_2)$$
Hmmmm....how is that done?!
Mark44
Mentor
From post #2
Now go the other way. Let u be an arbitrary vector in $span(b_1, b_2, b_3, b_4)$. Show that u is also a linear combination of the vectors in the second set.
Then u = ?
$$u=\alpha_1(b_1-b_3)+\alpha_2(b_2-b_1)+\alpha_3b_3+\alpha_4(b_4-b_2)$$
I don't know why it's coming up like this... I didn't use the strike tags at all......
Mod Note: Fixed LaTeX.
Last edited by a moderator:
Mark44
Mentor
OK, so now you have proved what you needed to per your post #1.
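(A sketch of the explicit reverse combinations, for the record:

$$b_3 = b_3,\quad b_1 = (b_1-b_3)+b_3,\quad b_2 = (b_2-b_1)+b_1,\quad b_4 = (b_4-b_2)+b_2,$$

so each $b_i$ lies in $span(b_1-b_3,\,b_2-b_1,\,b_3,\,b_4-b_2)$, and the two spans coincide.)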
Thank you Mark,
I am slowly learning :-)
Actually, should the vector u be represented by beta scalars and v by the alpha scalars?
Mark44
Mentor
I don't think that matters. What matters is that a given vector will have a different set of coordinates for the two bases.
Ok, the last section
Is the span $$\left \{ b_1+b_2,b_2+b_3,b_3 \right \}=\left \{ b_1,b_2,b_3,b_4 \right \}$$
Let $$v=\left \{ v_1,v_2,v_3,v_4 \right \}$$ span $$\left \{ b_1+b_2,b_2+b_3,b_3 \right \}$$
rearranging
$$v=\alpha_1b_1+(\alpha_1+\alpha_2)b_2+(\alpha_2+\alpha_3)b_3$$
Let $$u=\left \{ u_1,u_2,u_3,u_4 \right \}$$ span $$\left \{ b_1,b_2,b_3,b_4 \right \}$$
$$u=\alpha_1b_1+\alpha_2b_2+\alpha_3b_3+\alpha_4b_4$$
Hence v does not span u because v does not contain $$b_4$$
How do I state this correctly?
Mark44
Mentor
I don't know why you are still asking. You have already proved the two parts of this. The second part was in post #14.
No, I don't think so. It's another question, i.e. it's a different span we are asked to check...?
Mark44
Mentor
OK, I didn't realize this was a different question.
Are b1, b2, b3, and b4 linearly independent?
If so, span{b1, b2, b3, and b4} couldn't possibly be the same as the span of the three vectors you listed on the left side. More specifically, the group of three vectors doesn't include b4.
Yes,
They are linearly independent. Ok, thanks for the clarification!!
bugatti
Mark44
Mentor
Ok, the last section
Is the span $$\left \{ b_1+b_2,b_2+b_3,b_3 \right \}=\left \{ b_1,b_2,b_3,b_4 \right \}$$
Better: Is Span{b1 + b2, b2 + b3, b3} = Span{b1, b2, b3, b4}?
Let $$v=\left \{ v_1,v_2,v_3,v_4 \right \}$$ span $$\left \{ b_1+b_2,b_2+b_3,b_3 \right \}$$
Let v = <v1, v2, v3, v4>. What you have written makes no sense. A vector (v) doesn't span a set of other vectors. A set of vectors spans a subspace of some vector space.
If V is the vector space (or subspace) that is spanned by {b1 + b2, b2 + b3, b3}, then v can be written as some specific linear combination of these vectors.
Clearly there's going to be a problem, because your set of vectors doesn't include b4.
rearranging
$$v=\alpha_1b_1+(\alpha_1+\alpha_2)b_2+(\alpha_2+\alpha_3)b_3$$
Let $$u=\left \{ u_1,u_2,u_3,u_4 \right \}$$ span $$\left \{ b_1,b_2,b_3,b_4 \right \}$$
$$u=\alpha_1b_1+\alpha_2b_2+\alpha_3b_3+\alpha_4b_4$$
Hence v does not span u because v does not contain $$b_4$$
How do I state this correctly?
ok, thanks for clarification of the nonsense I was writing :-)
http://laurentbompard.com/knbsv/fa01bb-which-of-the-following-is-a-like-radical-to
Access the answers to hundreds of Radical Expressions questions that are explained in a way that's easy for you to understand.
Like radicals are radical expressions that have the same index and the same radicand. Such expressions can be added and subtracted similarly to combining like terms: what matters is the same number underneath the radical. For example, $\sqrt{20} + \sqrt{5} = 2\sqrt{5} + \sqrt{5} = 3\sqrt{5}$, since $\sqrt{20}$ simplifies to $2\sqrt{5}$. Hence a like radical to $\sqrt[3]{7x}$ must also be a cube root with radicand $7x$, such as $4\sqrt[3]{7x}$; expressions such as $\sqrt{7x}$ or $7\sqrt{x}$ are not like radicals to it.
A fractional exponent converts to a radical by the rule $x^{p/q} = \sqrt[q]{x^p}$: the numerator of the index indicates the power and the denominator indicates the root. For example, $4d^{3/8} = 4\sqrt[8]{d^3}$ (the index applies only to the base $d$, not to the $4$), and $a^{5/7} = \sqrt[7]{a^5}$.
In chemistry, a radical (also called a free radical) is a molecule that contains at least one unpaired electron. A radical is formed by homolysis of a covalent bond, most radicals are unstable, and half-headed arrows are used to show the movement of lone electrons.
https://hussein.space/2016/heuristics/
# Heuristics: the art of good guessing
10 minute read
Published:
In computer science, we find solutions to problems, and one of the tools we use to solve problems is the algorithm. Algorithms are procedures that a computer follows and executes. However, an exact algorithm is not always the best way to solve certain very complex problems: problems in which a partial or an approximate solution would suffice.
One of the things that I love about computer science is how we use and learn from everything around us and from all fields of study and disciplines. (You can learn more about Computer Science applied to life in a talk by Tom Griffiths titled The Computer Science of Human Decision Making.) In this article, however, I will explain a very simple concept we use to solve very complex problems. It’s an idea inspired by the social behavior of animals, such as fish that swim in a school or birds that fly in a flock. And it doesn’t involve an algorithm.
What is good enough? The context here is very important when answering that question. For example, if two companies manufacture scales, the first might make scales to weigh highway trucks and the second might make scales to measure gems. The difference in the accuracy of each needed weighing result is very clear. The digital jewelry scale should be able to measure weights even less than 0.1 grams while it is enough for the trucks scale to show the weights in tons.
The other concern we have when solving a problem is the time needed to solve the problem. If the trucks scale mentioned before is going to take half an hour to give me the weight of the truck then I would rather find a different manufacturer which has a product that gives faster solutions. Similarly, when writing a piece of software to solve a problem we care about the accuracy and the time a program takes to find solutions. Therefore, we carefully design the algorithms and try to find creative ways to solve problems.
In computer science, we call a method that sufficiently solves a problem without a guarantee of giving a perfect solution a heuristic. These kinds of methods are used to answer the "what is good enough?" question in a reasonable amount of time. If finding the perfect solution takes a year and costs $1,000, then maybe a less accurate but good enough solution that takes a week to find and costs $200 is the perfect solution given all the constraints.
The mathematician George Polya [1887-1985], who is considered the father of heuristics, described heuristics as “the art of good guessing”. Guessing what might be a good enough solution of the problem using heuristics is considered part of Artificial Intelligence, which is when a computer actually thinks for itself and finds a good solution, rather than mechanically trying all of the possible combinations.
Douglas Merrill once said, “All of us is smarter than any of us”. His saying is very evident in the heuristic method designed by Kennedy and Eberhart [1] called Particle Swarm Optimization (PSO). The method is one of the most successful and famous heuristics we use to find approximate “good enough” solutions for very complex and costly problems. PSO is indeed inspired by the fish schooling and birds flocking and depends on the collective intelligence and the high level of collaboration of the swarm. To form a swarm, PSO creates multiple solutions to the problem where each solution (called a particle) represents a fish in the school or a bird in the flock. PSO moves the particles in the problem space by making them follow the one with the best solution, similar to a school or flock where a bird or a fish follow the one closer to the area where the food is. Therefore, PSO is one of the methods that constitutes what is called swarm intelligence systems.
Now, let's take the analogy of Takeshi's Castle - Knock Knock game (short clip) to explain how PSO works. In the game, contenders run through consecutive walls of doors to finally arrive at the end point, which is why the game is called "The Wall to Freedom". Each wall has a set of fake and real doors, and a contender needs to find the real door to proceed (the full description of the game can be found on Wikipedia at this link). Contenders learn from each other (like a swarm) by following the one who found the real door. If only one contender were playing the game, it would take many times longer to cross all the walls, since the contender would have to try every door without learning from anyone. Furthermore, since the best solution to the game is to reach the final point by crossing all 4 walls, a less optimal solution would be to cross 3 or fewer walls. Consequently, the quality of a solution here is judged by the number of walls crossed: the higher the number, the better the solution.
We use PSO to increase the chances of finding better solutions in less time, since in many cases we cannot afford to wait a long time for the best solution. The swarm in PSO allows us to investigate multiple solutions to the problem at the same time and to judge where to go next in the problem space. Now, let's take a very simple example of a numeric minimization problem.
Suppose that you have the function $f(x,y)=x-y+7$, and I ask you to give me values of $x$ and $y$ that minimize the output of the function. If the range of possible values is between $-100$ and $+100$ for both variables, you are obviously going to answer $-100$ for $x$ and $+100$ for $y$, which makes the minimum possible value equal to $-193$. However, it is not as straightforward for a computer. Computers execute algorithms in steps, and in those steps we change the values of $x$ and $y$ and then check the quality of the solution (more about that in David J. Malan's talk titled "What's an algorithm?").
Imagine that we test every possible solution for this problem, making $x=-100$ and $y=-100$, then $x=-99$ and $y=-100$, and so on until we reach $x=100$ and $y=100$, keeping track of the quality of the solution each time we vary the values of $x$ and $y$. By the end of the algorithm, we would have tried all the possible combinations, which comes to roughly $200\times 200=40,000$ different solutions. The problem I gave you is the kind of problem we call a toy (simple) problem in computer science. Now, imagine that we have a more complex problem involving $13$ variables and a range of values between $-100$ and $+100$. Can you guess how many combinations we have here? I will tell you the answer: we have $200^{13}$ combinations, which is around $8$ with $29$ zeros after it.
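To make the brute-force idea concrete, here is a minimal sketch in Python (my own illustration, not from the original article) that enumerates every integer combination for the toy problem:

# Exhaustive (brute-force) search for the toy problem f(x, y) = x - y + 7.
def f(x, y):
    return x - y + 7

best_value, best_xy = float("inf"), None
for x in range(-100, 101):          # every integer candidate for x
    for y in range(-100, 101):      # every integer candidate for y
        value = f(x, y)
        if value < best_value:
            best_value, best_xy = value, (x, y)

print(best_xy, best_value)  # (-100, 100) -193

This finishes instantly for two variables, but the same nested-loop idea scales as $200^{13}$ for the 13-variable version, which is exactly the explosion described next.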
Accordingly, to exhaustively search the 13-variable example on the fastest desktop we have now, with an Intel i7 processor, the program would finish running after around $108$ billion years (around $8$ times the age of the universe). In other words: no thanks! We don't have enough time to wait. Now a good enough solution doesn't sound that bad after all.
Now, I will explain how PSO works on the toy problem, for simplicity. PSO would start by creating, for example, $100$ particles, and each particle would get a random value within the given range. For example, Particle-1's numbers might be $x=5, y=-90$, which gives an answer of $102$. Particle-2 might get the numbers $x=-90, y=10$, which gives the answer $-93$, and so on. Obviously, the answer of Particle-2 is much better, so the idea is for all the other particles to learn from Particle-2 by trying random numbers close to the ones Particle-2 used. In effect, the particles learn that it is wiser to use negative values for $x$ and positive ones for $y$.
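Below is a minimal PSO sketch in Python (my own illustration; the swarm size and the inertia/attraction coefficients W, C1, C2 are hypothetical values chosen for demonstration, not taken from this article):

import random

def f(p):                                  # the toy objective to minimize
    x, y = p
    return x - y + 7

N, STEPS, W, C1, C2 = 100, 200, 0.7, 1.5, 1.5
pos = [[random.uniform(-100, 100) for _ in range(2)] for _ in range(N)]
vel = [[0.0, 0.0] for _ in range(N)]
pbest = [p[:] for p in pos]                # each particle's best-so-far position
gbest = min(pbest, key=f)                  # the swarm's best-so-far position

for _ in range(STEPS):
    for i in range(N):
        for d in range(2):
            r1, r2 = random.random(), random.random()
            vel[i][d] = (W * vel[i][d]                         # keep some momentum
                         + C1 * r1 * (pbest[i][d] - pos[i][d])  # pull toward own best
                         + C2 * r2 * (gbest[d] - pos[i][d]))    # pull toward swarm best
            pos[i][d] = max(-100.0, min(100.0, pos[i][d] + vel[i][d]))
        if f(pos[i]) < f(pbest[i]):
            pbest[i] = pos[i][:]
    gbest = min(pbest, key=f)

print(gbest, f(gbest))  # converges toward (-100, 100) and -193

Stopping after a fixed number of steps is the "good enough" decision in action: the loop can be cut short at any point and the best position found so far is returned.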
Since PSO is a heuristic method with no guarantee of a perfect solution, we can stop running the algorithm at a given point in time and say: OK, what I found so far is good enough for me. In our previous work [2], however, we used PSO in combination with other methods and were able to decrease the running time by about $49\%$ compared with the standard method, without compromising the quality of the solution. And that was only possible because the swarm allowed us to reflect the idea of good guessing, making PSO the Leonardo da Vinci of computer algorithms.
Interested in learning how to code that solution in Python? Then you can find the step-by-step tutorial explaining how to implement that in the following blog post.
## References:
[1] Eberhart, Russ C., and James Kennedy. “A new optimizer using particle swarm theory.” In Proceedings of the sixth international symposium on micro machine and human science, vol. 1, pp. 39-43. 1995.
[2] Al-Olimat, H. S., Alam, M., Green, R., & Lee, J. K. (2015). Cloudlet Scheduling with Particle Swarm Optimization. In 2015 Fifth International Conference on Communication Systems and Network Technologies (pp. 991–995). http://link.hussein.space/ieeex58ac
[3] Bird Flock. Digital image. Wikimedia.org, n.d. Web. 26 July 2016. http://link.hussein.space/orgwi96f0.
[4] Djrhythmdave. “Takeshi’s Castle (UK Dub) - Knock Knock.” YouTube. YouTube, 15 May 2015. Web. 26 July 2016. http://link.hussein.space/takes2c9d.
http://bactra.org/notebooks/regression.html
Notebooks
## Regression, especially Nonparametric Regression
03 Oct 2015 00:20
"Regression", in statistical jargon, is the problem of guessing the average level of some quantitative response variable from various predictor variables.
Linear regression is perhaps the single most common quantitative tool in economics, sociology, and many other fields; it's certainly the most common use of statistics. (Analysis of variance, arguably more common in psychology and biology, is a disguised form of regression.) While linear regression deserves a place in statistics, that place should be nowhere near as large and prominent as it currently is. There are very few situations where we actually have scientific support for linear models. Fortunately, very flexible nonlinear regression methods now exist, and from the user's point of view are just as easy as linear regression, and at least as insightful. (Regression trees and additive models, in particular, are just as interpretable.) At the very least, if you do have a particular functional form in mind for the regression, linear or otherwise, you should use a non-parametric regression to test the adequacy of that form.
From a technical point of view, the main drawback of modern regression methods is that their extra flexibility comes at the price of less "efficiency" — estimates converge more slowly, so you have less precision for the same amount of data. There are some situations where you'd prefer to have more precise estimates from a bad model than less precise estimates from a model which doesn't make systematic errors, but I don't think that's what most users of linear regression are choosing to do; they're just taught to type lm rather than gam. In this day and age, though, I don't understand why not.
(Of course, for the statistician, a lot of the more flexible regression methods look more or less like linear regression in some disguised form, because fundamentally all it does is project on to a function basis. So it's not crazy to make it a foundational topic for statisticians. We should not, however, give the rest of the world the impression that the hat matrix is the source of all knowledge.)
The use of regression, linear or otherwise, for causal inference, rather than prediction, is a different, and far more sordid, story.
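As a concrete illustration of the lm-versus-gam point above, here is a minimal sketch, my own and not from these notebooks, in Python with scikit-learn; the data-generating function and the neighbor count are arbitrary choices for demonstration.

import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(0)
x = rng.uniform(-3, 3, size=(500, 1))
y = np.sin(2 * x[:, 0]) + rng.normal(scale=0.3, size=500)  # truth is nonlinear

linear = LinearRegression().fit(x, y)                # analogue of lm
smooth = KNeighborsRegressor(n_neighbors=25).fit(x, y)  # a simple nonparametric smoother

grid = np.linspace(-3, 3, 7).reshape(-1, 1)
print(np.round(linear.predict(grid), 2))  # one straight line: systematic error
print(np.round(smooth.predict(grid), 2))  # tracks sin(2x) up to noise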
Recommended, more specialized:
• Azadeh Alimadad and Matias Salibian-Barrera, "An Outlier-Robust Fit for Generalized Additive Models with Applications to Disease Outbreak Detection", Journal of the American Statistical Association 106 (2011): 719--731
• Norman H. Anderson and James Shanteau, "Weak inference with linear models", Psychological Bulletin 84 (1977): 1155--1170 [A demonstration of why you should not rely on $R^2$ to back up your claims]
• Mikhail Belkin, Partha Niyogi, Vikas Sindhwani, "Manifold Regularization: A Geometric Framework for Learning from Labeled and Unlabeled Examples", Journal of Machine Learning Research 7 (2006): 2399--2434
• Peter J. Bickel and Bo Li, "Local polynomial regression on unknown manifolds", pp. 177--186 in Regina Liu, William Strawderman and Cun-Hui Zhang (eds.), Complex Datasets and Inverse Problems: Tomography, Networks and Beyond (2007) ["'naive' multivariate local polynomial regression can adapt to local smooth lower dimensional structure in the sense that it achieves the optimal convergence rate for nonparametric estimation of regression functions ... when the predictor variables live on or close to a lower dimensional manifold"]
• Michael H. Birnbaum, "The Devil Rides Again: Correlation as an Index of Fit", Psychological Bulletin 79 (1973): 239--242
• Lawrence D. Brown and Mark G. Low, "Asymptotic Equivalence of Nonparametric Regression and White Noise", Annals of Statistics 24 (1996): 2384--2398 [JSTOR]
• Peter Bühlmann, M. Kalisch and M. H. Maathuis, "Variable selection in high-dimensional linear models: partially faithful distributions and the PC-simple algorithm", Biometrika 97 (2010): 261--278
• Peter Bühlmann and Sara van de Geer, Statistics for High-Dimensional Data: Methods, Theory and Applications [State-of-the art (2011) compendium of what's known about using high-dimensional regression, especially but not just the Lasso.]
• A. Buja, R. Berk, L. Brown, E. George, E. Pitkin, M. Traskin, K. Zhan, L. Zhao, "Models as Approximations: How Random Predictors and Model Violations Invalidate Classical Inference in Regression", arxiv:1404.1578
• Andreas Buja, Trevor Hastie and Robert Tibshirani, "Linear smoothers and additive models", Annals of Statistics 17 (1989): 453--510 [A classic additive models paper. The discussions and reply fill pp. 510--555.]
• Raymond J. Carroll, Aurore Delaigle, and Peter Hall, "Nonparametric Prediction in Measurement Error Models", Journal of the American Statistical Association 104 (2009): 993--1003
• Raymond J. Carroll, J. D. Maca and D. Ruppert, "Nonparametric regression in the presence of measurement error", Biometrika 86 (1999): 541--554
• Kevin A. Clarke, "The Phantom Menace: Omitted Variables Bias in Econometric Research" [PDF. Or: Kitchen-sink regressions considered harmful. Including extra variables in your linear regression may or may not reduce the bias in your estimate of any particular coefficients of interest, depending on the correlations between the added variables, the predictors of interest, the response, and omitted relevant variables. Adding more variables always increases the variance of your estimates.]
• Eduardo Corona, Terran Lane, Curtis Storlie, Joshua Neil, "Using Laplacian Methods, RKHS Smoothing Splines and Bayesian Estimation as a framework for Regression on Graph and Graph Related Domains" [Technical report, University of New Mexico Computer Science, 2008-06, PDF]
• William H. DuMouchel and Greg J. Duncan, "Using Sample Survey Weights in Multiple Regression Analysis of Stratified Samples", Proceedings of the Survey Research Methods Section, American Statistical Association (1981), pp. 629--637 [PDF reprint; presumably very similar to "Using Sample Survey Weights to Compare Various Linear Regression Models", Journal of the American Statistical Association 78 (1983): 535--543, but I have not looked at the latter]
• Andrew Gelman and Iain Pardoe, "Average predictive comparisons for models with nonlinearity, interactions, and variance components", Sociological Methodology forthcoming (2007) [PDF preprint, Gelman's comments]
• Lee-Ad Gottlieb, Aryeh Kontorovich, Robert Krauthgamer, "Efficient Regression in Metric Spaces via Approximate Lipschitz Extension", arxiv:1111.4470
• Lászlo Györfi, Michael Kohler, Adam Krzyzak and Harro Walk, A Distribution-Free Theory of Nonparametric Regression
• Berthold R. Haag, "Non-parametric Regression Tests Using Dimension Reduction Techniques", Scandinavian Journal of Statistics 35 (2008): 719--738
• Peter Hall, "On Bootstrap Confidence Intervals in Nonparametric Regression", Annals of Statistics 20 (1992): 695--711
• Peter Hall and Joel Horowitz, "A simple bootstrap method for constructing nonparametric confidence bands for functions", Annals of Statistics 41 (2013): 1892--1921, arxiv:1309.4864
• Jeffrey D. Hart, Nonparametric Smoothing and Lack-of-Fit Tests
• Yongmiao Hong and Halbert White, "Consistent Specification Testing Via Nonparametric Series Regression", Econometrica 63 (1995): 1133--1159 [JSTOR]
• Adel Javanmard, Andrea Montanari, "Confidence Intervals and Hypothesis Testing for High-Dimensional Regression", arxiv:1306.3171
• M. Kohler, A. Krzyzak and D. Schafer, "Application of structural risk minimization to multivariate smoothing spline regression estimates", Bernoulli 8 (2002): 475--490
• Alexander Korostelev, "A minimaxity criterion in nonparametric regression based on large-deviations probabilities", Annals of Statistics 24 (1996): 1075--1083
• Jon Lafferty and Larry Wasserman [To be honest, I haven't checked to see how different these two papers actually are...]
• "Rodeo: Sparse Nonparametric Regression in High Dimensions", math.ST/0506342
• "Rodeo: Sparse, greedy nonparametric regression", Annals of Statistics 36 (2008): 27--63, arxiv:0803.1709
• Diane Lambert and Kathryn Roeder, "Overdispersion Diagnostics for Generalized Linear Models", Journal of the American Statistical Association 90 (1995): 1225--1236 [JSTOR]
• Lukas Meier, Sara van de Geer and Peter Bühlmann, "High-Dimensional Additive Modeling", Annals of Statistics 37 (2009): 3779--3821, arxiv:0806.4115
• Abdelkader Mokkadem, Mariane Pelletier, Yousri Slaoui, "Revisiting Révész's stochastic approximation method for the estimation of a regression function", arxiv:0812.3973
• Patrick O. Perry, "Fast Moment-Based Estimation for Hierarchical Models", arxiv:1504.04941
• Garvesh Raskutti, Martin J. Wainwright, and Bin Yu, "Early stopping and non-parametric regression: An optimal and data-dependent stopping rule", arxiv:1306.3574
• Pradeep Ravikumar, John Lafferty, Han Liu, Larry Wasserman, "Sparse Additive Models", arxiv:0711.4555 [a.k.a. "SpAM"]
• B. W. Silverman, "Spline Smoothing: The Equivalent Variable Kernel Method", Annals of Statistics 12 (1984): 898--916
• Ryan J. Tibshirani, "Degrees of Freedom and Model Search", arxiv:1402.1920
• Sara van de Geer, Empirical Process Theory in M-Estimation
• Grace Wahba, Spline Models for Observational Data
• Jianming Ye, "On Measuring and Correcting the Effects of Data Mining and Model Selection", Journal of the American Statistical Association 93 (1998): 120--131
https://mathematica.stackexchange.com/questions/262230/modify-the-range-of-the-color-function-in-a-streamplot-and-add-the-correspondin
# Modify the range of the color function in a StreamPlot, and add the corresponding ColorBar in the legend
Hello, I'm trying to manually clip the scale of the colors for the streamlines in StreamPlot. The problem is that the velocity goes to infinity at the origin, and that throws off any attempt at rescaling. I've already tried modifying the RegionFunction... to no avail.
This is the default output:
StreamPlot[{-(y/(2 \[Pi] (x^2 + y^2))), x/(2 \[Pi] (x^2 + y^2))}, {x, -3, 3}, {y, -3, 3}, PlotLegends -> Automatic]
I have managed to somewhat achieve a working example:
StreamPlot[{-(y/(2 \[Pi] (x^2 + y^2))), x/(2 \[Pi] (x^2 + y^2))},
 {x, -3, 3}, {y, -3, 3},
 RegionFunction -> Function[{x, y, vx, vy, n}, x^2 + y^2 > 0.1],
 PlotLegends -> Automatic,
 StreamColorFunction -> (ColorData["Rainbow"][Rescale[Norm[{#3, #4}], {0, 0.00005}]] &),
 StreamColorFunctionScaling -> False]
However, I still had to fiddle with the Rescale values; 0.00005 is somewhat arbitrary, and it does not tell me what actual value is assigned to red or above. The plot legend (ColorBar) then has to be inserted manually, and I don't know how to do that. What I would like is a simple command to say "set color range to {vmin, vmax}" (in this case the values I'm interested in are {0, 0.35}) and then display the color bar with the correct values as well. Thank you!
One possible solution
bar = BarLegend[{(ColorData["Rainbow", #] &),
   {1/(2 Pi Sqrt[3^2]), 1/(2 Pi Sqrt[0.15^2])}}];
StreamPlot[{-(y/(2 \[Pi] (x^2 + y^2))), x/(2 \[Pi] (x^2 + y^2))},
 {x, -3, 3}, {y, -3, 3},
 StreamColorFunction -> (ColorData["Rainbow", 1/(Norm[{#1, #2}])] &),
 PlotLegends -> Placed[bar, Below], StreamPoints -> 30,
 StreamColorFunctionScaling -> False]
• This works well in version 12.3. I want to understand what is the code actually doing. What are the inputs at the BarLegend, and how did you arrive at the {1/(2 Pi Sqrt[3^2]), 1/(2 Pi Sqrt[0.15^2])} values? Jan 17 at 19:57
• Also, how do StreamColorFunction -> (ColorData["Rainbow", 1/(Norm[{#1, #2}])] &) and our previously defined bar legend interact? Thank you! Jan 17 at 19:59
• @MichelG There are 3 functions used to visualize streamlines with the given color data "Rainbow". First is StreamPoints -> 30 (we use it instead of RegionFunction); it leaves an empty region around the singularity at x=0, y=0. Second is StreamColorFunction, which generates the color of the streamlines according to the vector field's singularity. And the last one is PlotLegends, which makes the bar legend with the scale of the field variation in this picture. Jan 18 at 1:41
If your plot spans several orders of magnitude, you might also consider using a logarithmic scale, as in these examples.
{min, max} = {1/100, 1};
sf = Log[#/min]/Log[max/min] &;
isf = InverseFunction@sf;
StreamPlot[
{-(y/(2 \[Pi] (x^2 + y^2))), x/(2 \[Pi] (x^2 + y^2))},
{x, -3, 3}, {y, -3, 3},
StreamColorFunctionScaling -> False,
StreamColorFunction -> Function[ColorData["Rainbow"][sf@#5]],
PlotLegends ->
BarLegend[{"Rainbow", {min, max}}, ScalingFunctions -> {sf, isf},
ColorFunctionScaling -> True]
]
The options ScalingFunctions and ColorFunctionScaling are not officially supported for BarLegend, and are therefore highlighted red in notebooks (as was also seen in the answers to this post). However, at least in versions 12.0 and 13.0 they still work, as they are options of the internal Charting`iBarLegend.
• It is a nice solution (+1). It looks like in version 13 there is support for arbitrary scaling function as it shown in my answer. Jan 17 at 10:08
• This looks very practical and useful, pretty much what I need, however BarLegend in my version (12.3) doesn't recognize those functions. I will see if I'm able to update my version. Thank you! Jan 17 at 19:51
• @MichelG In version 12.2 it seems to work fine. Mathematica highlights ScalingFunctions and ColorFunctionScaling as invalid options for BarLegend, though they seem to be doing their job, as the result is the same as in version 13.0. Maybe they are just undocumented. Jan 17 at 23:40
• @AlexTrounev Thanks :) (though it is essentially a copy of my other answer) I am not sure I understand your point about the scaling function, though. After all, for Streamplot I am essentially doing the same thing as you just with a logarithmic function in ColorFunctionScaling, right? Jan 17 at 23:47
• @Hausdorff I like your approach. But why is it working in v12.3 with the message InverseFunction::ifun: Inverse functions are being used. Values may be lost for multivalued inverses.? Also, as Michel mentioned, ScalingFunctions and ColorFunctionScaling are shown in red. Fortunately I have v13, and there is also the same message, but the plot is generated in v12.3 and v13 as well. Jan 18 at 2:01
https://groupprops.subwiki.org/w/index.php?title=Linear_representation_theory_of_cyclic_group:Z8&printable=yes
# Linear representation theory of cyclic group:Z8
## Summary
| Item | Value |
| --- | --- |
| Degrees of irreducible representations over a splitting field | 1,1,1,1,1,1,1,1 (1 occurs 8 times); maximum: 1, lcm: 1, number: 8, sum of squares: 8 |
| Schur index values of irreducible representations over a splitting field | 1,1,1,1,1,1,1,1 |
| Condition for a field to be a splitting field | Characteristic not 2, and $x^4 + 1$ should split completely. For a finite field of size $q$, equivalent to 8 dividing $q - 1$. |
| Smallest ring of realization (characteristic zero) | $\mathbb{Z}[(1 + i)/\sqrt{2}]$, i.e., $\mathbb{Z}[e^{\pi i/4}]$ or $\mathbb{Z}[x]/(x^4 + 1)$ |
| Smallest field of realization (characteristic zero) | $\mathbb{Q}((1 + i)/\sqrt{2})$ or $\mathbb{Q}(i,\sqrt{2})$, i.e., $\mathbb{Q}(e^{\pi i/4})$ or $\mathbb{Q}[x]/(x^4 + 1)$ |
| Smallest size splitting field | field:F9, i.e., the field of size nine |
| Degrees of irreducible representations over the reals | 1,1,2,2,2; maximum: 2, lcm: 2, number: 5 |
| Degrees of irreducible representations over the rationals | 1,1,2,4; maximum: 4, lcm: 4, number: 4 |
| Orbit structure under automorphism group of representations over splitting field | orbits of size 1,1,2,4; number: 4 |
| Orbit structure under action of Galois group for splitting field over rationals | orbits of size 1,1,2,4; number: 4 |
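Since all irreducible complex representations of the cyclic group $Z_8$ are one-dimensional characters $\chi_k(j) = \omega^{jk}$ with $\omega = e^{2\pi i/8}$, the degree and orthogonality facts in the table can be checked numerically. A minimal sketch in Python (my own illustration, not part of the wiki page):

import cmath

n = 8
omega = cmath.exp(2j * cmath.pi / n)
chars = [[omega ** (j * k) for j in range(n)] for k in range(n)]

# Degrees: chi_k(0) = 1 for every k, so all 8 irreducibles have degree 1.
print([round(abs(row[0])) for row in chars])  # [1, 1, 1, 1, 1, 1, 1, 1]

# Row orthogonality: (1/n) * sum_j chi_k(j) * conj(chi_m(j)) = delta_{km}.
ip = lambda a, b: sum(u * v.conjugate() for u, v in zip(a, b)) / n
print(round(abs(ip(chars[1], chars[1]))), round(abs(ip(chars[1], chars[2]))))  # 1 0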
## Family contexts
| Family name | Parameter values | General discussion of linear representation theory of family |
| --- | --- | --- |
| finite cyclic group | 8 | linear representation theory of finite cyclic groups |
https://gamedev.stackexchange.com/questions/16877/find-the-closest-point-along-a-rectangle-given-an-another-point-and-direction
# Find the closest point along a rectangle, given an another point and direction
Given a rectangle, and a point with a vector direction towards the rectangle. How can I find the closest point on the outside of that rectangle to the point in question?
• Can you explain more what you're asking? I'm really not getting it. – user9471 Sep 5 '11 at 5:46
• Let's say I have a character facing an object (a rectangle), and I draw an imaginary line from that character to the rectangle. I would like to know how I can tell which point along the outside of the rectangular object the line is touching. – onedayitwillmake Sep 5 '11 at 5:49
• The keyword to look up in Google is "AABB" (axis-aligned bounding box). If your "box" (rectangle) isn't axis-aligned yet, a simple transformation matrix - used on all items you care about, obviously - can be used first to change it into one. – Martin Sojka Sep 5 '11 at 7:30
One technique which you could use is called "ray casting". It is commonly used for rendering graphics, but has other applications such as line-of-sight (as you are wanting to do) and path-finding. In general terms it works by finding the intersection of a ray and an object. In your example the ray is the vector for the character's direction.
A useful reference for ray/object intersections (and incidentally other object/object intersections) is www.realtimerendering.com/intersections.html (look under the references for ray/aabb and ray/obb).
The rectangle has four sides. Each side is a line segment.
Test each of the four sides for intersection with the ray. Track the closest hit.
Here's some code to find out where on the segment the ray hits:
bool intersect(const ray& r, const segment& seg, point& hit) {
    // Solve r.origin + s * r.direction == seg[0] + t * (seg[1] - seg[0])
    // for s (distance along the ray) and t (position along the segment),
    // using 2D cross products.
    point e = seg[1] - seg[0];            // segment direction
    point f = seg[0] - r.origin;          // ray origin -> segment start
    float denom = r.direction.x * e.y - r.direction.y * e.x;  // cross(D, E)
    if (denom == 0.0f) return false;      // parallel: no unique intersection
    float s = (f.x * e.y - f.y * e.x) / denom;                     // along the ray
    float t = (f.x * r.direction.y - f.y * r.direction.x) / denom; // along the segment
    if (s >= 0.0f && t >= 0.0f && t <= 1.0f) { // in front of the ray, inside the segment
        hit = seg[0] + e * t;             // lerp along the segment
        return true;
    }
    return false;                         // no hit
}
Well, you can use just linear algebra (analytic geometry, to be more specific) to solve this. It depends on how you modeled the rectangle.
Here's a general case: http://paulbourke.net/geometry/lineline2d/
If your box is axis aligned, you just have to clamp each coordinate axis to the box if the point is outside the box.
From RTCD pg 130:
// Do this for all 3 axes
if( point.x < min.x ) point.x = min.x ;
else if( point.x > max.x ) point.x = max.x ;
If you do this for x, y, z axes, then the point will be slammed to the nearest wall of the box, if it is outside the box to begin with. if it is already inside the box, it will be left alone (where it is).
http://www.zentralblatt-math.org/zmath/en/advanced/?q=an:1221.34071
Zbl 1221.34071
Positive solutions for second order impulsive differential equations involving Stieltjes integral conditions.
(English)
[J] Nonlinear Anal., Theory Methods Appl., Ser. A, Theory Methods 74, No. 11, 3775-3785 (2011). ISSN 0362-546X
From the introduction: For $J= [0,1]$, let $0= t_0< t_1<\cdots< t_m< t_{m+1}= 1$. Put $J'= (0,1)\setminus\{t_1, t_2,\dots, t_m\}$. Put $\mathbb{R}_+= [0,\infty)$ and $J_k= (t_k, t_{k+1}]$, $k= 0,1,\dots, m-1$, $J_m= (t_m, t_{m+1})$. Let us consider second-order impulsive differential equations of the type $$x''(t)+ \alpha(t) f(t,x(t))= 0,\quad t\in J',$$ $$\Delta x'(t_k)= Q_k(x(t_k)),\quad k= 1,2,\dots, m,$$ $$x(0)= 0,\quad x(1)=\lambda[x],$$ where as usual $\Delta x'(t_k)= x'(t^+_k)- x'(t^-_k)$; $x'(t^+_k)$ and $x'(t^-_k)$ denote the right and left limits of $x'$ at $t_k$, respectively. Here $\lambda[u]$ denotes a linear functional on $C(J)$ given by $$\lambda[u]= \int^1_0 u(t)\,d\Lambda(t)$$ involving a Stieltjes integral with a suitable function $\Lambda$ of bounded variation. The existence of at least three positive solutions to impulsive second-order differential equations as above is investigated. Sufficient conditions which guarantee the existence of positive solutions are obtained by using the Avery-Peterson theorem. An example is added to illustrate the results.
MSC 2000:
*34B37 Boundary value problems with impulses
34B10 Multipoint boundary value problems
34B18 Positive solutions of nonlinear boundary value problems
47N20 Appl. of operator theory to differential and integral equations
Keywords: impulsive second order differential equations; boundary conditions including a Stieltjes integral; positive solutions
https://www.zora.uzh.ch/id/eprint/160198/
# Study of jet quenching with isolated-photon+jet correlations in PbPb and pp collisions at $\sqrt{s_{NN}}$ = 5.02 TeV
## Abstract
Measurements of azimuthal angle and transverse momentum ($p_T$) correlations of isolated photons and associated jets are reported for pp and PbPb collisions at $\sqrt{s_{NN}}$ = 5.02 TeV. The data were recorded with the CMS detector at the CERN LHC. For events containing a leading isolated photon and an associated jet above the respective $p_T$ thresholds, the photon+jet azimuthal correlation and $p_T$ imbalance in PbPb collisions are studied as functions of collision centrality and photon $p_T$. The results are compared to pp reference data collected at the same collision energy and to predictions from several theoretical models for parton energy loss. No evidence of broadening of the photon+jet azimuthal correlations is observed, while the jet-to-photon $p_T$ ratio decreases significantly for PbPb data relative to the pp reference. All models considered agree within uncertainties with the data. The number of associated jets per photon is observed to be shifted towards lower values in central PbPb collisions compared to pp collisions.
https://calculator.academy/turo-profit-calculator/
Enter the total cost of the car loan and maintenance per month ($), the average Turo rental rate ($/day), and the average number of days rented per month (days) into the calculator to determine the Turo Profit.
Turo Profit Formula
The following formula is used to calculate the Turo Profit.
Pt = RR * D - TC
• Where Pt is the Turo Profit ($/month)
• TC is the total cost of car loan and maintenance per month ($)
• RR is the average Turo rental rate ($/day)
• D is the average number of days rented per month (days)

To calculate a Turo profit, multiply the average rental rate by the number of days rented per month, then subtract the total monthly cost of the car, including loan payments and maintenance.

How to Calculate Turo Profit?

The following example problems outline how to calculate Turo Profit.

Example Problem #1

1. First, determine the total cost of car loan and maintenance per month ($). In this example, it is given as 1000.
2. Next, determine the average Turo rental rate ($/day). For this problem, it is given as 150.
3. Next, determine the average number of days rented per month (days). In this case, it is found to be 10.
4. Finally, calculate the Turo Profit using the formula above: Pt = RR * D - TC. Inserting the values from above yields: Pt = 150 * 10 - 1000 = 500 ($/month).
Example Problem #2

The variables needed for this problem are provided below:

total cost of car loan and maintenance per month ($) = 750
average Turo rental rate ($/day) = 100
average number of days rented per month (days) = 20

Entering these values and solving gives:

Pt = 100 * 20 - 750 = 1250 ($/month)
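The formula is simple enough to script. A minimal sketch in Python (the function name is my own, not from the calculator site):

def turo_profit(rental_rate, days_rented, total_cost):
    """Monthly Turo profit: Pt = RR * D - TC."""
    return rental_rate * days_rented - total_cost

print(turo_profit(150, 10, 1000))  # Example Problem #1 -> 500
print(turo_profit(100, 20, 750))   # Example Problem #2 -> 1250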
https://brilliant.org/problems/only-one-chance-for-correct-answer/
Can You Get This With One Chance?
How many unique integer tuples are solutions for the equation $$x^2 + y^2 + z^2 = 3$$?
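One way to check an answer is a brute-force count. A minimal sketch in Python (my own, not part of the problem page): since $x^2 \le 3$ forces each coordinate into $\{-1, 0, 1\}$, it suffices to search that small range.

from itertools import product

# Count integer triples (x, y, z) with x^2 + y^2 + z^2 = 3.
solutions = [t for t in product(range(-1, 2), repeat=3)
             if sum(v * v for v in t) == 3]
print(len(solutions))  # each coordinate must be +1 or -1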
https://stats.stackexchange.com/questions/377598/base-rate-fallacy-in-conditional-probability-pab-vs-pba
# Base Rate fallacy in Conditional Probability P(A|B) vs P(B|A)
From Wikipedia:
P(A|B) (the conditional probability of A given B) is not equal to P(B|A). For example, if a person has dengue they might have a 90% chance of testing positive for dengue. In this case what is being measured is that if event B ("having dengue") has occurred, the probability of A (test is positive) given that B (having dengue) occurred is 90%: that is, P(A|B) = 90%. Alternatively, if a person tests positive for dengue they may have only a 15% chance of actually having dengue because most people do not have dengue and the false positive rate for the test may be high. In this case what is being measured is the probability of the event B (having dengue) given that the event A (test is positive) has occurred: P(B|A) = 15%. Falsely equating the two probabilities causes various errors of reasoning such as the base rate fallacy. Conditional probabilities can be correctly reversed using Bayes' theorem.
$$P(A|B)$$ -- I understand what it means, and it is correct. With common sense, it is easy to understand that if it is already known that you have dengue, then testing positive for dengue will be 90% probable (10% false negatives).
$$P(B|A)$$ -- The reverse is different. Generally, in real life, if someone tests positive for dengue, we act as if he has dengue, 100% sure, and we put him in hospital immediately (we just do it; we don't take any chances). But we ignored the fact that, statistically, we are looking only at our own small number of experiences with family, friends, and relatives (e.g., 30 cases). If we take the whole country's population (1.25 billion) into account, then we will see that 85% of positive tests are false positives, and only 15% of the people who test positive really have dengue.
My question is: how can I learn not to ignore such facts? (Like how I remembered only friends, relatives, and TV adverts for dengue.) When the author brought in the facts, the false positive rate and the whole population, I understood why it is 15%; I would have laughed if someone had said so without such statistical data. I think if one learns to think this way, looking for facts instead of ignoring them, then one can learn probability easily. How can I learn to think that way?
• You might also be interested in the prosecutor's fallacy en.wikipedia.org/wiki/Prosecutor%27s_fallacy – mdewey Nov 18 '18 at 14:18
• From Wikipedia link: A lottery winner is accused of cheating, based on the improbability of winning I could not understand, just like my question above. Yeah, if 100,000 bought lottery ticket, it is pretty sure that only one will win. The winner could be any one out of 100,000. They pick up balls from a bowl full of balls with numbers on them, randomly they shake and pick a ball up, publicly. What cheating can be done there ? youtube.com/watch?v=Vc_PRhQpeJI – Arnuld Nov 18 '18 at 15:09
Let's denote the test event by $$T$$ (the outcome space is $$\{0, 1\}$$) and the event of getting struck by the disease by $$D$$ (its outcome space: $$\{0,1\}$$). Suppose we know the prior probability of someone having the disease is $$\frac{1}{10,000}$$ (only one of every 10,000 people in the population gets stricken by the disease); that is, $$P(D=1)=\frac{1}{10,000}$$. And the machine can correctly detect 99 percent of those who have the disease, that is, $$P(T=1|D=1)=0.99$$. We want to know how likely those who test positive are to truly have the disease: $$P(D=1|T=1)$$.
\begin{align} p(D=1 \mid T=1) &= \frac{p(T=1 \mid D=1)p(D=1)}{p(T=1)} \\ & = \frac{p(T=1 \mid D=1)p(D=1)}{p(T=1 \mid D=1)p(D=1) + p(T=1 \mid D=0)p(D=0)} \\ & = \frac{0.99 \times \frac{1}{10, 000}}{0.99 \times \frac{1}{10, 000} + 0.01 \times \frac{9, 999}{10, 000}} \\ & = 0.0098 \ll 0.99 = P(T=1|D=1) \end{align}
The intuition behind this is that the smaller the prior (the larger the population and the lower the disease incidence rate), the less likely the machine is to be right, because a small prior inflates the false positive term $$P(T=1, D=0)$$, i.e., $$P(T=1|D=0)P(D=0)$$, in the denominator.
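A minimal numerical sketch of the same computation in Python (my own, with the numbers from the answer above):

prior = 1 / 10_000          # P(D=1)
sensitivity = 0.99          # P(T=1|D=1)
false_positive = 0.01       # P(T=1|D=0)

p_positive = sensitivity * prior + false_positive * (1 - prior)  # P(T=1)
posterior = sensitivity * prior / p_positive                     # P(D=1|T=1)
print(round(posterior, 4))  # 0.0098: a positive test is still probably wrong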
https://www.physicsforums.com/threads/defining-an-upper-lower-bound-in-lexicographically-ordered-c.636099/
# Defining an upper/lower bound in lexicographically ordered C.
1. Sep 15, 2012
### c0dy
If I have a lexicographic ordering on ℂ, and I define a subset, $A = \{z \in ℂ: z = a+bi; a,b \in ℝ, a<0\}$.
I have an upper bound, say α = 0+di. My question is: does only the real part, Re(α) = 0, define the upper bound? Or does Im(α) = d have nothing to do with bounds in general?
Since it seems to me if I have the lexicographic ordering on ℂ such as for any two m,n $\in$ ℂ, where m = a+bi and n = c+di and I define the ordering as m<n if a<c or if a=c and b<d.
The last bit, b<d, gives me the impression that Im(α) would play a role in the upper bound. The reason I am asking is that in a proof I read, they show this order has no least upper bound, since there are infinitely many complex numbers with real part equal to Re(α) but different imaginary parts. So I guess if only the real parts of complex numbers define the bounds, then it makes sense to me.
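A minimal sketch (my own, not from the thread) of the lexicographic order on ℂ in Python: m < n iff Re(m) < Re(n), or Re(m) = Re(n) and Im(m) < Im(n).

def lex_less(m: complex, n: complex) -> bool:
    return (m.real, m.imag) < (n.real, n.imag)  # tuple comparison is lexicographic

a = -1 + 5j                       # a member of A, since Re(a) < 0
print(lex_less(a, 0 + 0j))        # True: 0 is an upper bound for A
print(lex_less(-1j, 0 + 0j))      # True: 0 - i is a smaller upper bound,
                                  # and 0 - 2i is smaller still, so no least upper bound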
2. Sep 15, 2012
### SteveL27
A least upper bound has to be a specific number with the LUB property. In this case there is no such number, since there are lots of upper bounds but none of them is the smallest.
https://www.esaral.com/q/if-the-polynomial-62101/
If the polynomial
Question:
If the polynomials $2 x^{3}+a x^{2}+3 x-5$ and $x^{3}+x^{2}-4 x+a$ leave the same remainder when divided by $x-2$, find the value of $a$.
Solution:
Given, the polynomials are
$f(x)=2 x^{3}+a x^{2}+3 x-5$
$p(x)=x^{3}+x^{2}-4 x+a$
By the remainder theorem, the remainders when f(x) and p(x) are divided by x – 2 are f(2) and p(2), respectively.
We know that,
f(2) = p(2) (given in problem)
we need to calculate f(2) and p(2)
for, f(2)
substitute (x = 2) in f(x)
$f(2)=2(2)^{3}+a(2)^{2}+3(2)-5$
= (2 * 8) + 4a + 6 – 5
= 16 + 4a + 1
= 4a + 17 …. 1
for, p(2)
substitute (x = 2) in p(x)
$p(2)=2^{3}+2^{2}-4(2)+a$
= 8 + 4 – 8 + a
= 4 + a …. 2
Since, f(2) = p(2)
Equate eqn 1 and 2
⟹ 4a + 17 = 4 + a
⟹ 4a – a = 4 – 17
⟹ 3a = -13
⟹ a = -13/3
The value of a = −13/3
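A quick way to check the result is with SymPy, as in this minimal sketch (my own illustration, not part of the original solution):

from sympy import symbols, solve

x, a = symbols("x a")
f = 2 * x**3 + a * x**2 + 3 * x - 5
p = x**3 + x**2 - 4 * x + a

# By the remainder theorem, the remainder on division by x - 2 is the value at x = 2.
print(solve(f.subs(x, 2) - p.subs(x, 2), a))  # [-13/3]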
https://nrich.maths.org/public/topic.php?code=47&cl=3&cldcmpid=7787
# Resources tagged with: Creating and manipulating expressions and formulae
Filter by: Content type:
Age range:
Challenge level:
### There are 131 results
Broad Topics > Algebraic expressions, equations and formulae > Creating and manipulating expressions and formulae
### Unit Interval
##### Age 14 to 18 Challenge Level:
Take any two numbers between 0 and 1. Prove that the sum of the numbers is always less than one plus their product?
### Always the Same
##### Age 11 to 14 Challenge Level:
Arrange the numbers 1 to 16 into a 4 by 4 array. Choose a number. Cross out the numbers on the same row and column. Repeat this process. Add up you four numbers. Why do they always add up to 34?
### Pareq Calc
##### Age 14 to 16 Challenge Level:
Triangle ABC is an equilateral triangle with three parallel lines going through the vertices. Calculate the length of the sides of the triangle if the perpendicular distances between the parallel. . . .
##### Age 11 to 14 Challenge Level:
Surprising numerical patterns can be explained using algebra and diagrams...
### Interactive Number Patterns
##### Age 14 to 16 Challenge Level:
How good are you at finding the formula for a number pattern ?
### Magic Sums and Products
##### Age 11 to 16
How to build your own magic squares.
### Card Trick 1
##### Age 11 to 14 Challenge Level:
Can you explain how this card trick works?
### Nicely Similar
##### Age 14 to 16 Challenge Level:
If the hypotenuse (base) length is 100cm and if an extra line splits the base into 36cm and 64cm parts, what were the side lengths for the original right-angled triangle?
##### Age 11 to 14 Challenge Level:
A little bit of algebra explains this 'magic'. Ask a friend to pick 3 consecutive numbers and to tell you a multiple of 3. Then ask them to add the four numbers and multiply by 67, and to tell you. . . .
### Leonardo's Problem
##### Age 14 to 18 Challenge Level:
A, B & C own a half, a third and a sixth of a coin collection. Each grab some coins, return some, then share equally what they had put back, finishing with their own share. How rich are they?
### Pythagoras Perimeters
##### Age 14 to 16 Challenge Level:
If you know the perimeter of a right angled triangle, what can you say about the area?
### Multiplication Square
##### Age 14 to 16 Challenge Level:
Pick a square within a multiplication square and add the numbers on each diagonal. What do you notice?
### Difference of Two Squares
##### Age 14 to 16 Challenge Level:
What is special about the difference between squares of numbers adjacent to multiples of three?
### Lens Angle
##### Age 14 to 16 Challenge Level:
Find the missing angle between the two secants to the circle when the two angles at the centre subtended by the arcs created by the intersections of the secants and the circle are 50 and 120 degrees.
### Always Perfect
##### Age 14 to 16 Challenge Level:
Show that if you add 1 to the product of four consecutive numbers the answer is ALWAYS a perfect square.
### Diophantine N-tuples
##### Age 14 to 16 Challenge Level:
Can you explain why a sequence of operations always gives you perfect squares?
### Puzzling Place Value
##### Age 14 to 16 Challenge Level:
Can you explain what is going on in these puzzling number tricks?
### Algebra from Geometry
##### Age 11 to 16 Challenge Level:
Account of an investigation which starts from the area of an annulus and leads to the formula for the difference of two squares.
##### Age 14 to 16 Challenge Level:
Kyle and his teacher disagree about his test score - who is right?
### More Number Pyramids
##### Age 11 to 14 Challenge Level:
When number pyramids have a sequence on the bottom layer, some interesting patterns emerge...
### Pair Products
##### Age 14 to 16 Challenge Level:
Choose four consecutive whole numbers. Multiply the first and last numbers together. Multiply the middle pair together. What do you notice?
### Three Four Five
##### Age 14 to 16 Challenge Level:
Two semi-circles (each of radius 1/2) touch each other, and a semi-circle of radius 1 touches both of them. Find the radius of the circle which touches all three semi-circles.
### Sitting Pretty
##### Age 14 to 16 Challenge Level:
A circle of radius r touches two sides of a right angled triangle, sides x and y, and has its centre on the hypotenuse. Can you prove the formula linking x, y and r?
### AMGM
##### Age 14 to 16 Challenge Level:
Can you use the diagram to prove the AM-GM inequality?
### How Big?
##### Age 11 to 14 Challenge Level:
If the sides of the triangle in the diagram are 3, 4 and 5, what is the area of the shaded square?
### The Pillar of Chios
##### Age 14 to 16 Challenge Level:
Semicircles are drawn on the sides of a rectangle. Prove that the sum of the areas of the four crescents is equal to the area of the rectangle.
### One and Three
##### Age 14 to 16 Challenge Level:
Two motorboats travelling up and down a lake at constant speeds leave opposite ends A and B at the same instant, passing each other, for the first time 600 metres from A, and on their return, 400. . . .
##### Age 7 to 14 Challenge Level:
Think of a number and follow the machine's instructions... I know what your number is! Can you explain how I know?
### Pick's Theorem
##### Age 14 to 16 Challenge Level:
Polygons drawn on square dotty paper have dots on their perimeter (p) and often internal (i) ones as well. Find a relationship between p, i and the area of the polygons.
### Really Mr. Bond
##### Age 14 to 16 Challenge Level:
115^2 = (110 x 120) + 25, that is 13225. 895^2 = (890 x 900) + 25, that is 801025. Can you explain what is happening and generalise?
### The Simple Life
##### Age 11 to 14 Challenge Level:
The answer is $5x+8y$... What was the question?
### Perfectly Square
##### Age 14 to 16 Challenge Level:
The sums of the squares of three related numbers is also a perfect square - can you explain why?
### Triangles Within Squares
##### Age 14 to 16 Challenge Level:
Can you find a rule which relates triangular numbers to square numbers?
### DOTS Division
##### Age 14 to 16 Challenge Level:
Take any pair of two-digit numbers x = ab and y = cd where, without loss of generality, ab > cd. Form two 4-digit numbers r = abcd and s = cdab and calculate (r^2 - s^2)/(x^2 - y^2).
### How Much Can We Spend?
##### Age 11 to 14 Challenge Level:
A country has decided to have just two different coins, 3z and 5z coins. Which totals can be made? Is there a largest total that cannot be made? How do you know?
### Marbles in a Box
##### Age 11 to 16 Challenge Level:
How many winning lines can you make in a three-dimensional version of noughts and crosses?
### Always a Multiple?
##### Age 11 to 14 Challenge Level:
Think of a two digit number, reverse the digits, and add the numbers together. Something special happens...
### Generating Triples
##### Age 14 to 16 Challenge Level:
Sets of integers like 3, 4, 5 are called Pythagorean Triples, because they could be the lengths of the sides of a right-angled triangle. Can you find any more?
### Training Schedule
##### Age 14 to 16 Challenge Level:
The heptathlon is an athletics competition consisting of 7 events. Can you make sense of the scoring system in order to advise a heptathlete on the best way to reach her target?
##### Age 14 to 16 Challenge Level:
Robert noticed some interesting patterns when he highlighted square numbers in a spreadsheet. Can you prove that the patterns will continue?
### Simplifying Doughnut
##### Age 14 to 18 Challenge Level:
An algebra task which depends on members of the group noticing the needs of others and responding.
### Partly Painted Cube
##### Age 14 to 16 Challenge Level:
Jo made a cube from some smaller cubes, painted some of the faces of the large cube, and then took it apart again. 45 small cubes had no paint on them at all. How many small cubes did Jo use?
### Christmas Chocolates
##### Age 11 to 14 Challenge Level:
How could Penny, Tom and Matthew work out how many chocolates there are in different sized boxes?
### Pythagoras Proofs
##### Age 14 to 16 Challenge Level:
Can you make sense of these three proofs of Pythagoras' Theorem?
### Salinon
##### Age 14 to 16 Challenge Level:
This shape comprises four semi-circles. What is the relationship between the area of the shaded region and the area of the circle on AB as diameter?
### Beach Huts
##### Age 11 to 14 Challenge Level:
Can you figure out how sequences of beach huts are generated?
### Algebra Match
##### Age 11 to 16 Challenge Level:
A task which depends on members of the group noticing the needs of others and responding.
### Never Prime
##### Age 14 to 16 Challenge Level:
If a two digit number has its digits reversed and the smaller of the two numbers is subtracted from the larger, prove the difference can never be prime.
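(A worked step, for reference: writing the number as $10a+b$ with digits $a > b$, the difference is $(10a+b)-(10b+a) = 9(a-b)$, always a positive multiple of 9 and hence composite, so it can never be prime.)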
### Cubes Within Cubes Revisited
##### Age 11 to 14 Challenge Level:
Imagine starting with one yellow cube and covering it all over with a single layer of red cubes, and then covering that cube with a layer of blue cubes. How many red and blue cubes would you need?
https://www.physicsforums.com/tags/vibration/
# vibration
1. ### Mass and vibration frequency
My query here is: suppose there is a 2 kg mass. To oscillate/vibrate it will take some force, and it will have some natural frequency. Now I increase the mass to 5 kg; to vibrate it, won't it take more force, and so, in the end, won't the natural frequency of the object increase? As it's more...
2. ### I Resonance gets sharper just by increasing the resonance freq, why?
The $Q$ factor of an oscillating system is defined as $\omega_{r}/\Delta \omega$, where $\omega_{r}$ is the resonant frequency, and $\Delta \omega$ the resonance width. As I understand, $Q$ measures how sharp the resonance curve is. Why is it that the resonance curve gets...
3. ### Transient vibration of an engine
How is the transient vibration of a piston engine usually simulated? I know that in order to define the vibration loading you need the mass properties and dimensions of the moving components plus the cylinder pressure curve. And of course you need to know the firing order, V-angle (if...
4. ### A noisy chiller
Dear forum members, We have a water cooled chiller device (see its model in attached link) and we use it to cool a hot mirror element. Unfortunately the chiller is vibrating and the vibrations couple all the way to the mirror through a pair of rubber tubes (1cm in diameter, 2m in length) and a...
5. ### Mechanical vibrations: maximum velocity
So I am almost sure I know how to solve this, just curious about the maximum velocity. Anyway, if you could double check my calculations, here it is. $T = \frac{t}{n} = \frac{10s}{15} = \frac{2}{3}s$ $\omega = \frac{2\pi}{T} = 2\pi \frac{3}{2} = 3\pi$ a). position at $t = 0.8s$...
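On the maximum-velocity question (a standard result, assuming simple harmonic motion with amplitude $A$): for $x(t) = A\sin(\omega t)$ the speed peaks at $v_{\max} = \omega A$, so here $v_{\max} = 3\pi A$.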
6. ### I Frequency of Undamped Driven Oscillator near Zero
Description of the Problem: Consider a spring-mass system with spring constant $k$ and mass $m$. Suppose I apply a force $F_0 \cos(\omega t)$ on the mass, but the frequency $\omega$ is very small, so small that it takes the system, say, a million years to reach a maximum and to go to 0...
7. ### Find the resistive constant in a critically damped system
Homework Statement This problem is taken from Problem 2.3, Introduction to Vibration and Waves, by H.J. Pain and P. Rankin: A critically damped mechanical system consisting of a pan hanging from a spring with damping. What is the value of the damping force r if a mass extends the spring by 10cm without...
8. ### Vibration of a mass connected via preloaded spring
The setup: I have a mass (m1)connected to a much, much larger mass (m2) via a preloaded spring. They start out in contact because the preloaded spring holds them together. Now suppose the large mass is subject to vibrations, possibly at the resonance of the structure. Will the two masses...
9. ### General Question about Vibrations
Hello Everyone, I conducted an experiment with a metal beam which had a motor attached to it in with an eccentric mass on it. The two ends of the beams were fixed with a roller and a hinge(as I remember). This was a one degree of freedom experiment. I had to collect data during free/forced...
10. ### I A common 2nd order ODE from dynamics but...
Consider a simple single degree-of-freedom (SDOF) spring-mass-dashpot dynamic system with spring rate k, mass m, and viscous damping coefficient c. Dimension x is the absolute displacement of the mass. The base input translation is y. A dot notation indicates differentiation with respect to...
11. C
### 2-DoF system characterization with 2 dual axis accelerometers
I've been having some trouble with this problem for a while. (Its not a homework or coursework problem) Its a part of an experimental set up that I'm building and this is a simplified version of it. I have a rigid rod which is hinged at a point and can pitch or heave about that location. There...
12. ### I don't understand the derivation process here, help?
Homework Statement I understand the derivation it showed that included the sine term (15.7 in the image); I just don't understand the following step (15.8 in the image). Does "t" get pulled out of the equation? If so, what do we differentiate with respect to? Does it become 0? If so, it would remain 0, and sin(0) is just...
13. ### Can I use symmetry in modal analysis in FEA
I am doing FE analysis for a shape that is symmetric in boundary conditions and geometry, and I would like to know if I can use symmetry in my analysis or not. I have tried to run trials; the results are very similar, but I am not sure if this is just a coincidence.
14. ### Simulation of Dynamic Charateristics
I have an assembly (a cuboid shaped structure). The vibration test data of this structure is available with me. I want to study the effect of vibration generated by a system mounted on top panel on another system mounted on same top panel. In order to complete this, I have prepared the FE model...
15. ### Automotive Random Vibration and PSD spectrum profiles
I am starting work on structural durability area for after treatment systems and deal with Random Vibration and PSD profiles quite often. However there are few fundamental questions about PSD profiles that I could not get answer to after a lot of search on internet. So finally decided to write...
16. ### I What does it mean for a particle to vibrate?
I intuitively understand macroscopic vibration, but trying to understand what it means for a particle to vibrate doesn't seem to make sense from the classical understanding I have of momentum and energy. First, are particles even said to vibrate or have vibrational energy? If so, how is momentum...
17. ### Vibration in "cantilever" rotating bar
Hi. I'm working on the design of an automatic bar feeder for a CNC lathe. I've established that the bars will not be longer than 1.2 m and will not have a diameter of more than 75 mm. Suppose they are steel bars, which would make them have a mass of about 80 kg. So, bars are tightly held in one...
18. ### Vibration/noise cancellation
Hello gentlemen! So I have a situation. Concrete floor (ceiling actually) slabs like these has many cylindrical spaces in it. And noisy neighbors on top! The slab receives shocks / impacts and human shouts. First I found Tuned Mass Damper. It is good for continuous vibration but not for first...
19. ### Resonance frequency
I wonder whether the air has a resonance frequency or not; if yes, what would happen if we excite it at this frequency?
20. ### Peak-hold equivalent amplitude for transient vibration
I have a question regarding transient vibration data I received that was processed into a peak-hold equivalent amplitude (units = g). I have come across peak-hold before which is a type of "averaging" that retains the highest values from each estimate in random vibration overlap processing and...
21. ### A Free damped vibration of a system of 2 dof, demostration
Hi, I need help please. I want to know how to solve the differential equation of a free damped system with two degrees of freedom using eigenvalues and eigenvectors, or whether there is another way to solve this kind of equation.
22. ### Boundary conditions of that beam.
Hi all! I have to calculate the natural frequency of the system. Any idea of the boundary conditions for this case? There is a beam supported by two springs on the left side.
23. ### B Unable to explain the sound
In our apartment, for the last few months, my wife and I are being puzzled by strange sounds coming from the dining area while we remain in the bedroom at night. The first time we heard, I was quiet. Then my wife said, "Did you hear that?" So, it was no hallucination whatsoever. It is a distinct...
24. ### What is this and where can I find a replacement
The square seat (?) for the accelerometer has started to fall apart and I need to find a replacement for it. If this helps, the accelerometers are placed equidistant apart from each other, and the tester hits it - the cantilever beam - with a mallet to simulate vibration. Please refer to the...
25. ### 2-DOF problem with unknown stiffness and velocity
Homework Statement I have a 2-DOF system, whereby I have one body that is grounded by a spring (body A), and a second body (body B) attached to the first by a spring and a viscous damper. For body A, I know the velocity and amplitude (before body B is added). I think I also have the stiffness...
26. ### B Would time slow down tied to a nuclear-powered oscillator?
I've always wondered this. Let's say we're not limited by the type of vibration, e.g. if choppy vibration doesn't constitute continuous movement, then some sort of oscillating vibration.
27. ### Frequency of Vibration of a water tank
Homework Statement Assume that there is a tank on a 200 ft pedestal type support. When full the tank and contents have a weight of 50,000 lbs and it is never drained to a point where the tank and contents have a weight less than 20,000 lbs. Assume the pedestal weight is negligible compared to...
28. ### Find the frequency of the cylinder's vertical vibration
Homework Statement Homework Equations w = √k/m w = 2π/P ƒ = 1/P The Attempt at a Solution I feel like I'm so close to the answer, I think it has something to do with the fact that there's a spring and pulley in the system that is throwing my answer off. I have √k/m = 2π/P So, P =...
29. ### Is sound a renewable energy or non-renewable energy?
I've been searching the web for this answer and can't seem to find it anywhere. Can anyone help me? Is sound a renewable energy or non-renewable energy?
30. ### Solving boundary conditions for vibrating beam
Hi there, I'm solving the equation for the transverse vibrations of a Euler-Bernoulli beam fixed at both ends and subject to axial loading. It's a similar problem to that described by Rao on page 355 of his book "Vibration of Continuous Systems" (Google books link), except the example he uses...
http://www.homermultitext.org/editors/hmt-editors-guide/citation/
# Citation
## Citation
Editors create two kinds of documents: diplomatic editions of texts in TEI-compliant XML, and tables in the simple form of delimited text files.
In both kinds of document, we use URNs to refer to citable objects: CTS URNs to cite passages of text, and CITE2 URNs to cite other kinds of objects.
In XML editions of texts, however, the citation scheme is encoded as described in the following section.
## Citing texts
The OHCO2 model of citable texts joins two hierarchies: a hierarchy of notional and concrete works, and an ordered hierarchy of citation units. Each XML document that you edit must be cataloged in a file in your repository named textcatalog.cex: this documents the work hierarchy.
In your XML edition, we use familiar TEI elements to represent the citation hierarchy: citable nodes are either l for lines of poetic text, or p for paragraph units of prose, with higher parts of the hierarchy (such as books of the Iliad) represented by TEI div elements. Each element corresponding to a level of the citation hierarchy has an @n attribute giving an identifying value for that part of the text. Iliad 10.1, for example, would be contained in a div element with an @n attribute of 10, and an l element with a value of 1.
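For instance, a minimal sketch of this markup for Iliad 10.1 (element and attribute names as described above; the line's text is elided):
<div n="10">
  <l n="1">...</l>
</div>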
If the text of a scholion explicitly organizes material in a table or list structure, we use TEI list with item elements. These are the only TEI elements we need to capture the basic structure of our diplomatic editions.
Example: see an example of a list structure in a scholion. Items can be numbered (if they are numbered in the text) by adding the attribute @type="ordered" to the list element and the item elements can take an @n attribute to indicate the sequence.
## Citing objects
In addition to transcribed texts, our diplomatic editions document the physical artifacts preserving the text and documentary images illustrating the text.
A table in the surfaces directory records an ordered sequence of pages in a text file with columns delimited by the pound sign #. The first column in each row gives the CITE2 URN identifying that page.
Similarly, a table in the images directory records images documenting each manuscript in a text file with columns delimited by the pound sign #. The first column in each row gives the CITE2 URN identifying that image. These records are not ordered.
When editors index pages and passages of text to regions of images, they compile this information in tables associating URNs for each cited object.
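A single row of such an index pairs a CTS URN with a CITE2 image URN (here extended with a region-of-interest suffix); the specific values below are hypothetical:
urn:cts:greekLit:tlg0012.tlg001.msA:10.1#urn:cite2:hmt:vaimg.2017a:VA133RN_0305@0.15,0.22,0.43,0.03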
https://discourse.julialang.org/t/is-there-a-central-difference-gradient-function-somewhere/19454
# Is there a central difference/gradient function somewhere?
When I write in Python, I often use the gradient function from NumPy, and, when I write in NCL, I often use the center_finite_diff_n function. However, it looks like Julia does not have a function that has a behavior similar to those. I tried searching for an equivalent function among the 3rd-party packages, but couldn’t find anything that takes arrays, only functions. Does anyone know of any package with a function that behaves like NumPy’s gradient, NCL’s center_finite_diff_n, or Matlab’s gradient?
There is a function gradvecfield() in CoupledFields.jl. It’s used by Gadfly.jl to calculate the gradient vector field for Geom.vectorfield. gradvecfield() works like this:
using CoupledFields
kernelpars = GaussianKP(X)
∇g = gradvecfield([a b], X, Y, kernelpars)
where a is a smoothness parameter, b is a ridge parameter, X and Y are matrices, and Y = g(X).
something like this?
function centraldiff(v::AbstractMatrix)
dv = diff(v, dims=1)/2  # half-differences between successive rows (dims=1 is required for matrices on Julia ≥ 0.7)
a = [dv[[1],:];dv]      # repeat the first difference row at the top...
a .+= [dv;dv[[end],:]]  # ...and the last at the bottom, summing to average neighbours
a
end
function centraldiff(v::AbstractVector)
dv = diff(v)/2
a = [dv[1];dv]
a .+= [dv;dv[end]]
a
end
These methods are of course not optimized for performance, writing the loop manually would likely improve performance and reduce memory allocations.
Note, these functions do not reduce the length of the input array (as diff does), by copying the first and last diff elements.
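For instance (values checked by hand; numpy.gradient([0, 1, 4, 9]) gives the same result):
julia> centraldiff([0.0, 1.0, 4.0, 9.0])
4-element Vector{Float64}:
 1.0
 2.0
 4.0
 5.0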
While I appreciate your responses, all the functions you pointed out only work on 2d arrays. I put together a function that works on 4d arrays and operates on the 3rd dimension with irregular spacing on the derivative dimension (aka the use case I needed yesterday).
function partialp(arr, coord)
nx, ny, np, nt = size(arr)
out = similar(arr)
dcoord = diff(coord)
# Forward difference at bottom
dp = dcoord[1]
for t in 1:nt, y in 1:ny, x in 1:nx
out[x, y, 1, t] = (arr[x, y, 2, t] - arr[x, y, 1, t]) / dp
end
# Central difference in interior using numpy method
for p in 2:np-1
dp1 = dcoord[p-1]
dp2 = dcoord[p]
a = -dp2 / (dp1 * (dp1 + dp2))
b = (dp2 - dp1) / (dp1 * dp2)
c = dp1 / (dp2 * (dp1 + dp2))
for t in 1:nt, y in 1:ny, x in 1:nx
out[x, y, p, t] = a*arr[x, y, p-1, t] + b*arr[x, y, p, t] + c*arr[x, y, p+1, t]
end
end
# Backwards difference at top
dp = dcoord[end]
for t in 1:nt, y in 1:ny, x in 1:nx
out[x, y, end, t] = (arr[x, y, end, t] - arr[x, y, end-1, t]) / dp
end
return out
end
It produces the same results as numpy.gradient, is faster than using PyCall, and doesn’t make many extraneous allocations. However, it doesn’t have any of the checks that a function would need to be used generally, and it’s just as limited as the matrix examples you gave, but it would be a good jumping off point for anyone trying to implement a central difference calculation in Julia.
Edit: Swapped t and x in loops to take advantage of Julia being column major.
Used to be in Base, but was removed (see gradient(): remove from Base, then deprecate · Issue #16113 · JuliaLang/julia · GitHub). That implementation only ever supported 1d arrays, though.
Something could be added to GitHub - MatlabCompat/MatlabCompat.jl: Source of MatlabCompat.jl
Any general implementation of the central difference would be best based off of NumPy’s, since Matlab’s gradient doesn’t handle non-uniform spacing. Reading through NumPy’s implementation, it looks like it’s relatively simple to port to Julia. That might be a good weekend project.
By the way, I’m curious: what do you use this function for? Plotting?
@weech To clarify: CoupledFields.gradvecfield works on arrays of any dimension, and for irregularly spaced points (its implementation in Gadfly is only for 2d arrays). Say X is an $n \times p$ matrix and Y is an $n \times q$ matrix, then CoupledFields.gradvecfield([a b], X, Y, kernelpars) returns n gradient matrices (of size $p \times q$), for the n function points in X.
Nope, in today’s case I need to calculate isobaric potential vorticity. I’m calculating the zonal and meridional derivatives using Spherepack, but I needed the central difference for the vertical derivative.
Oh, OK, I was confused because the signature of that function was
function gradvecfield(par::Array{Float64}, X::T, Y::T, kpars::KernelParameters ) where T<:Matrix{Float64}
and got confused by the Matrix typing. It’s still several levels beyond what I know about mathematics, so it’d take me a long time to figure out how to use it.
As is often the case when translating vectorized code from other languages to Julia, there is probably a better way to do it because Julia doesn’t force you to vectorize your algorithms for performance.
A calculation of potential vorticity based on gradient calls makes a lot of passes over the array(s) and allocates several temporary arrays, all to compute a scalar result at the end. Instead, in Julia, you could do it all in a single (nested) loop over the data (one pass, no temporaries). I wouldn’t be surprised if a properly coded loop were an order of magnitude faster than a method based on gradient calls.
Hmm, that might not be a bad idea. I’ve been using a certain formula (Bluestein’s Synoptic-Dynamic Meteorology in Midlatitudes. Eq 4.5.93) that uses x-y-p coordinates, and it might be worth deriving something in ϕ-θ-p coordinates so I can use it with my data without the spherical transforms.
https://homework.cpm.org/category/CCI_CT/textbook/apcalc/chapter/2/lesson/2.3.3/problem/2-131
### Home > APCALC > Chapter 2 > Lesson 2.3.3 > Problem2-131
2-131.
Evaluate the following limits.
1. $\lim\limits _ { x \rightarrow \infty }\large \frac { \sqrt { 9 x ^ { 2 } - 3 x + 1 } } { 4 x ^ { 2 } - 1 }$
This is a limit as $x\rightarrow \infty$, so we are checking to see if there is a horizontal asymptote, or not.
We only need to consider the highest power of the numerator with the highest power of the denominator.
$\text{Highest power in the numerator: }\sqrt{9x^{2}}=3x$
$\text{Highest power in the denominator: }4x^{2}$
$=\lim\limits _{x \rightarrow \infty} \frac{3x}{4x^{2}}= \lim\limits _{x \rightarrow \infty} \frac{3}{4x}=0$
The horizontal asymptote is therefore $y=0$.
1. $\lim\limits _ { x \rightarrow - 1 } \large\frac { x + 1 } { x ^ { 2 } + 5 x + 6 }$
Nothing elaborate here. Try just substituting in $−1$.
$0$
1. $\lim\limits _ { x \rightarrow - \infty }\large \frac { 2 - 3 x - 4 x ^ { 2 } } { ( 1 - 3 x ) ^ { 2 } }$
Refer to the hints in part (a). But notice that we are evaluating the limit as $x\rightarrow -\infty$, which will also reveal the equation of a horizontal asymptote, if there is one.
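Worked through (added for reference): keeping only the leading terms, $\lim\limits_{x \rightarrow -\infty} \frac{2-3x-4x^{2}}{(1-3x)^{2}} = \lim\limits_{x \rightarrow -\infty} \frac{-4x^{2}}{9x^{2}} = -\frac{4}{9}$, so the horizontal asymptote is $y=-\frac{4}{9}$.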
1. $\lim\limits _ { x \rightarrow 2 } \large\frac { | x ^ { 2 } - 4 | } { x - 2 }$
Find both the limit from the left and the limit from the right. Then compare them.
If the limit exists, then there must be agreement from the left and the right AND that limit must be finite.
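Worked through (added for reference): for $x>2$, $|x^{2}-4| = (x-2)(x+2)$ and the quotient is $x+2 \rightarrow 4$; for $x<2$ near $2$, $|x^{2}-4| = -(x-2)(x+2)$ and the quotient is $-(x+2) \rightarrow -4$. The one-sided limits disagree, so the two-sided limit does not exist.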
https://studydaddy.com/question/hospital-database
QUESTION
# Hospital Database
1. On average, do hospitals in the United States employ fewer than 900 personnel? Use the hospital database as your sample and an alpha of 0.10 to test this figure as the alternative hypothesis. Assume that the number of births and number of employees in the hospitals are normally distributed in the population.
Files: Hospital Database ONLY.xlsx
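For reference, the setup being requested (a sketch only, not the posted answer) is a one-sample, lower-tail test of the mean number of employees: $H_0: \mu \ge 900$ versus $H_a: \mu < 900$ at $\alpha = 0.10$, rejecting $H_0$ if the test statistic falls below the critical value at the 0.10 level.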
https://bobsegarini.wordpress.com/tag/ghost-of-a-saber-tooth-tigersean-lennon/
## Roxanne Tellier: Shaping The New Sexual Revolution – 1960 Redux
Posted in Opinion on April 26, 2015 by segarini
You would have had to be living under a rock to miss the run-up to Diane Sawyer's two-hour interview with Bruce Jenner on Friday, April 24. Jenner was dubbed the "world's greatest athlete" after winning the Olympic decathlon gold in 1976, and has been the object of snide insinuations and ridiculing photos in the yellower media (and even the New York Times) for the last several months as he appeared to be transitioning from male to female before our prying eyes.
http://www.physicsforums.com/showthread.php?t=113513
# Electrostatics - charges in stable equilibrium
by gandharva_23
Tags: electrostatics
Sci Advisor HW Helper P: 2,002 Yes, since the potential due to the other charges satisfies $\nabla^2 V=0$. V is harmonic, which means it has no local maxima or minima.
P: 62 That's what I wanted to ask: how can we prove that $\nabla^2 V=0$?
Sci Advisor HW Helper P: 2,002 It's just Maxwell's (first) equation: $\vec \nabla \cdot \vec E =\rho/\epsilon_0$, or $\nabla^2 V=-\rho/\epsilon_0$. In the region where there is no charge density you have $\nabla^2 V=0$.
https://cosmicmar.com/CMBLensing.jl/stable/api/
# API
## Simulation
CMBLensing.load_simFunction
load_sim(;kwargs...)
The starting point for many typical sessions. Creates a BaseDataSet object with some simulated data, returning the DataSet and simulated truths, which can then be passed to other maximization / sampling functions. E.g.:
@unpack f,ϕ,ds = load_sim(;
θpix = 2,
Nside = 128,
pol = :P,
T = Float32
)
Keyword arguments:
• θpix — Angular resolution, in arcmin.
• Nside — Number of pixels in the map as an (Ny,Nx) tuple, or a single number for square maps.
• pol — One of :I, :P, or :IP to select intensity, polarization, or both.
• T = Float32 — Precision, either Float32 or Float64.
• storage = Array — Set to CuArray to use GPU.
• Nbatch = nothing — Number of batches of data in this dataset.
• μKarcminT = 3 — Noise level in temperature in μK-arcmin.
• ℓknee = 100 — 1/f noise knee.
• αknee = 3 — 1/f noise slope.
• beamFWHM = 0 — Beam full-width-half-max in arcmin.
• pixel_mask_kwargs = (;) — NamedTuple of keyword arguments to pass to make_mask to create the pixel mask.
• bandpass_mask = LowPass(3000) — Operator which performs Fourier-space masking.
• fiducial_θ = (;) — NamedTuple of keyword arguments passed to camb() for the fiducial model.
• seed = nothing — Specific seed for the simulation.
• L = LenseFlow — Lensing operator.
Returns a named tuple of (;f, f̃, ϕ, n, ds, Cℓ, proj).
CMBLensing.simulateFunction
simulate([rng], Σ)
Draw a simulation from the covariance matrix Σ, i.e. draw a random vector $\xi$ such that the covariance $\langle \xi \xi^\dagger \rangle = \Sigma$.
The random number generator rng will be used and advanced in the process, and defaults to Random.default_rng().
## Lensing estimation
CMBLensing.MAP_jointFunction
MAP_joint([θ], ds::DataSet, [Ωstart=(ϕ=0,)]; kwargs...)
Compute the maximum a posteriori (i.e. "MAP") estimate of the joint posterior, $\mathcal{P}(f,\phi,\theta\,|\,d)$, or compute a quasi-sample.
Positional arguments:
• [θ] — Optional θ at which to do maximization.
• ds::DataSet — The DataSet which defines the posterior
• [Ωstart=(ϕ=0,)] — Optional starting point for the non-Gaussian fields to optimize over. The maximizer does a coordinate descent which alternates between updating f, in which the posterior is assumed to be Gaussian, and updating the fields in Ωstart (which by default is just ϕ).
Keyword arguments:
• nsteps — The maximum number of iterations for the maximizer.
• ϕtol = nothing — If given, stop when ϕ updates reach this tolerance. ϕtol is roughly the relative per-pixel standard deviation between changes to ϕ and draws from the ϕ prior. Values in the range $10^{-2}-10^{-4}$ are reasonable.
• nburnin_update_hessian = Inf — How many steps to wait before starting to do diagonal updates to the Hessian
• conjgrad_kwargs = (;) — Passed to the inner call to conjugate_gradient.
• progress = true — Whether to show the progress bar.
• quasi_sample = falsefalse to compute the MAP, true to iterate quasi-samples, or an integer to compute a fixed-seed quasi-sample.
• history_keys — What quantities to include in the returned history. Can be any subset of (:f, :f°, :ϕ, :∇ϕ_logpdf, :χ², :logpdf).
Returns a tuple (f, ϕ, history) where f is the best-fit (or quasi-sample) field, ϕ is the lensing potential, and history contains the history of steps during the run.
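A minimal usage sketch tying this to load_sim above (the keyword values here are illustrative, not recommendations):
@unpack f, ϕ, ds = load_sim(θpix=2, Nside=128, pol=:P, T=Float32)
f_map, ϕ_map, history = MAP_joint(ds; nsteps=30, ϕtol=1e-3, progress=true)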
CMBLensing.MAP_margFunction
MAP_marg(ds; kwargs...)
Compute the maximum a posteriori (i.e. "MAP") estimate of the marginal posterior, $\mathcal{P}(\phi,\theta\,|\,d)$.
CMBLensing.sample_jointFunction
sample_joint(ds::DataSet; kwargs...)
Sample the joint posterior, $\mathcal{P}(f,\phi,\theta\,|\,d)$.
Keyword arguments:
• nsamps_per_chain — The number of samples per chain.
• nchains = 1 — Number of chains in parallel.
• nsavemaps = 1 — Number of steps in between saving maps into chain.
• nburnin_always_accept = 0 — Number of steps at the beginning of the chain to always accept HMC steps regardless of integration error.
• nburnin_fixθ = 0 — Number of steps at the beginning of the chain before starting to sample θ.
• Nϕ = :qe — Noise to use in the initial approximation to the Hessian. Can give :qe to use the quadratic estimate noise.
• chains = nothingnothing to start a new chain; the return value from a previous call to sample_joint to resume those chains; :resume to resume chains from a file given by filename
• θrange — Range and density to grid sample parameters as a NamedTuple, e.g. (Aϕ=range(0.7,1.3,length=20),).
• θstart — Starting values of parameters as a NamedTuple, e.g. (Aϕ=1.2,), or nothing to randomly sample from θrange
• ϕstart — Starting ϕ, either a Field object, :quasi_sample, or :best_fit
• metadata — Does nothing, but is saved into the chain file
• nhmc = 1 — Number of HMC passes per ϕ Gibbs step.
• symp_kwargs = fill((N=25, ϵ=0.01), nhmc) — an array of NamedTuple kwargs to pass to symplectic_integrate. E.g. [(N=50,ϵ=0.1),(N=25,ϵ=0.01)] would do 50 large steps then 25 smaller steps per each Gibbs pass. If specified, nhmc is ignored.
• wf_kwargs — Keyword arguments to pass to argmaxf_logpdf in the Wiener Filter Gibbs step.
• MAP_kwargs — Keyword arguments to pass to MAP_joint when computing the starting point.
CMBLensing.argmaxf_logpdfFunction
argmaxf_logpdf(ds::DataSet, Ω::NamedTuple, [d = ds.d]; kwargs...)
Maximize the logpdf for ds over f, given all the other arguments are held fixed at Ω. E.g.: argmaxf_logpdf(ds, (; ϕ, θ=(Aϕ=1.1,))).
Keyword arguments:
• fstart — starting guess for f for the conjugate gradient solver
• conjgrad_kwargs — Passed to the inner call to conjugate_gradient
CMBLensing.quadratic_estimateFunction
quadratic_estimate(ds::DataSet, which; wiener_filtered=true)
quadratic_estimate((ds₁::DataSet, ds₂::DataSet), which; wiener_filtered=true)
Compute the quadratic estimate of ϕ given data.
The ds or (ds₁,ds₂) tuple contain the DataSet object(s) which house the data and covariances used in the estimate. Note that only the Fourier-diagonal approximations for the beam, mask, and noise, i.e. B̂, M̂, and Cn̂, are accounted for. To account for the full operators (if they are not actually Fourier-diagonal), you should compute the impact using Monte Carlo.
If a tuple is passed in, the result will come from correlating the data from ds₁ with that from ds₂.
An optional keyword argument AL can be passed in case the QE normalization was already computed, in which case it won't be recomputed during the calculation.
Returns a named tuple of (;ϕqe, AL, Nϕ), where ϕqe is the (possibly Wiener filtered, depending on the wiener_filtered option) quadratic estimate, AL is the normalization (which is already applied to ϕqe and does not need to be applied again), and Nϕ is the analytic N⁰ noise bias (Nϕ == AL if using unlensed weights; currently only Nϕ == AL is returned, no matter the weights).
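A usage sketch (the :EB value for which is an assumption, inferred from the EB/EE legs discussed under QE_leg below):
(;ϕqe, AL, Nϕ) = quadratic_estimate(ds, :EB; wiener_filtered=true)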
## Lensing operators
CMBLensing.LenseFlowType
LenseFlow(ϕ, [n=7])
LenseFlow is the ODE-based lensing algorithm from Millea, Anderes, & Wandelt, 2019. The number of steps in the ODE solver is controlled by n. The action of the operator, as well as its adjoint, inverse, inverse-adjoint, and gradient of any of these w.r.t. ϕ can all be computed. The log-determinant of the operation is zero independent of ϕ, in the limit of n high enough.
CMBLensing.BilinearLensType
BilinearLens(ϕ)
BilinearLens is a lensing operator that computes lensing with bilinear interpolation. The action of the operator, as well as its adjoint, inverse, inverse-adjoint, and gradient w.r.t. ϕ can all be computed. The log-determinant of the operation is non-zero and can't be computed.
Internally, BilinearLens forms a sparse matrix with the interpolation weights, which can be applied and adjoint-ed extremely fast (e.g. at least an order of magnitude faster than LenseFlow). Inverse and inverse-adjoint lensing is somewhat slower since it requires an iterative solve, here performed with the preconditioned generalized minimal residual algorithm.
CMBLensing.TaylensType
Taylens(ϕ, order)
Taylens is a lensing operator which lenses a map with a nearest-pixel permute step followed by a power series expansion in the residual displacement, to any order. This is the algorithm from Næss & Louis 2013.
CMBLensing.PowerLensType
PowerLens(ϕ, order)
PowerLens is a lensing operator which lenses a map with a power series expansion in $\nabla \phi$ to any order.
$$f(x+\nabla \phi) \approx f(x) + (\nabla f)(\nabla \phi) + \frac{1}{2} (\nabla \nabla f) (\nabla \phi)^2 + \ldots$$
The action of the operator and its adjoint can be computed.
CMBLensing.antilensingFunction
antilensing(L::PowerLens)
Create a PowerLens operator that lenses by -ϕ instead.
## Configuration options
CMBLensing.FFTW_NUM_THREADSConstant
The number of threads used by FFTW for CPU FFTs (default is the environment variable FFTW_NUM_THREADS, or, if that is not specified, Sys.CPU_THREADS÷2). This must be set before creating any FlatField objects.
## Other
CMBLensing.JpermMethod
Jperm(ℓ::Int, n::Int) returns the column number in the J matrix U^2 where U is the unitary FFT. The J matrix looks like this:
|1 0|
| / 1|
| / / |
|0 1 |
CMBLensing.LinearInterpolationMethod
itp = LinearInterpolation(xdat::AbstractVector, ydat::AbstractVector; extrapolation_bc=NaN)
itp(x) # interpolate at x
A simple 1D linear interpolation code which is fully Zygote differentiable in either xdat, ydat, or the evaluation point x.
CMBLensing.QE_legMethod
QE_leg(C::Diagonal, inds...)
The quadratic estimate and normalization expressions all consist of terms involving products of two "legs", each leg which look like:
C * l[i] * l̂[j] * l̂[k] * ...
where C is some field or diagonal covariance, l[i] is the Fourier wave-vector in direction i (for i=1:2), and l̂[i] = l[i]/‖l‖. For example, there's a leg in the EB estimator that looks like:
(CE * (CẼ+Cn) \ d[:E])) * l[i] * l̂[j] * l̂[k]
The function QE_leg computes quatities like these, e.g. the above would be given by:
QE_leg((CE * (CẼ+Cn) \ d[:E]), [i], j, k)
(where note that specifying whether its the Fourier wave-vector l instead of the unit-vector l̂ is done by putting that index in brackets).
Additionally, all of these terms are symmetric in their indices, i.e. in (i,j,k) in this case. The QE_leg function is smart about this, and is memoized so that each unique set of indices is only computed once. This leads to a pretty drastic speedup for terms with many indices like those that arise in the EE and EB normalizations, and lets us write code which is both clear and fast without having to think too hard about these symmetries.
CMBLensing.assign_GPU_workersMethod
assign_GPU_workers(;print_info=true, use_master=false, remove_oversubscribed_workers=false)
Assign each Julia worker process a unique GPU using CUDA.device!. Works with workers which may be distributed across different hosts, and each host can have multiple GPUs.
If a unique GPU cannot be assigned, that worker is removed if remove_oversubscribed_workers is true, otherwise an error is thrown.
use_master controls whether the master process counts as having been assigned a GPU (if false, one of the workers may be assigned the same GPU as the master)
CMBLensing.batchMethod
batch(fs::LambertField...)
batch(fs::Vector{<:LambertField})
Concatenate one or more LambertFields along the "batch" dimension (dimension 4 of the underlying array). For the inverse operation, see unbatch.
CMBLensing.beamCℓsMethod
beamCℓs(;beamFWHM, ℓmax=8000)
Compute the beam power spectrum, often called $W_\ell$. A map should be multiplied by the square root of this.
CMBLensing.conjugate_gradientFunction
conjugate_gradient(
M, A, b, x=M\b;
nsteps = length(b),
tol = sqrt(eps()),
progress = false,
callback = nothing,
history_keys = nothing,
history_mod = 1
)
Compute x=A\b (where A is positive definite) by conjugate gradient. M is the preconditioner and should be M≈A, and M\x should be fast.
The solver will stop either after nsteps iterations or when dot(r,r)<tol (where r=A*x-b is the residual at that step), whichever occurs first.
Info from the iterations of the solver can be returned if history_keys is specified. history_keys can be one or a tuple of:
• :i — current iteration number
• :x — current solution
• :r — current residual r=A*x-b
• :res — the norm of r
• :t — the time elapsed (in seconds) since the start of the algorithm
history_mod can be used to include every N-th iteration only in history_keys.
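A usage sketch, assuming the solver returns the solution along with the requested history when history_keys is given:
x, hist = conjugate_gradient(M, A, b; nsteps=200, tol=1e-8, history_keys=(:i, :res), history_mod=10)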
CMBLensing.fftsymsMethod
Arguments m and n refer to the sizes of an m×n matrix (call it A) that is the output of a real FFT (thus m=n÷2+1)
Returns a tuple of (ireal, iimag, negks) where these are
• ireal — m×n mask corresponding to unique real entries of A
• iimag — m×n mask corresponding to unique imaginary entries of A
• negks — m×n matrix giving the index into A where the negative k-vector is, s.t. A[i,j] = A[negks[i,j]]'
CMBLensing.get_max_lensing_stepMethod
Returns αmax such that 𝕀 + ∇∇(ϕ + α * η) has a non-zero discriminant (pixel-by-pixel) for all α values in [0, αmax].
This means ϕ + αmax * η is the maximum step in the η direction which can be added to ϕ and still yield a lensing potential in the weak-lensing regime. This is important because it guarantees the potential can be passed to LenseFlow, which cannot handle the strong-lensing / "shell-crossing" regime.
CMBLensing.gmresMethod
gmres(A, b; maxiter, Pl=I)
Solve A \ b with maxiter iterations of the generalized minimal residual algorithm. Pl is a left-preconditioner which should approximate inv(A).
Note: the implementation is memory inefficient and uses O(n * maxiter) memory, where (n,n) = size(A) (may not be a big deal for small maxiter), although it is totally generic and works with CPU or GPU and dense or sparse matrices, unlike IterativeSolvers' gmres.
CMBLensing.gpuFunction
gpu(x)
Recursively move an object to GPU memory. Note that, unlike cu(x), this does not change the eltype of any underlying arrays. See also cpu.
CMBLensing.gradhessMethod
gradhess(f)
Compute the gradient $g^i = \nabla^i f$, and the hessian, $H_j^{\,i} = \nabla_j \nabla^i f$.
CMBLensing.grid_and_sampleMethod
grid_and_sample(lnP::Callable; range::NamedTuple; progress=false, nsamples=1)
Interpolate the log pdf lnP with support on range, and return the integrated log pdf as well nsamples samples (drawn via inverse transform sampling)
lnP should either accept a NamedTuple argument and range should be a NamedTuple mapping those same names to range objects specifying where to evaluate lnP, e.g.:
grid_and_sample(nt->-(nt.x^2+nt.y^2)/2, (x=range(-3,3,length=100),y=range(-3,3,length=100)))
or lnP should accept a single scalar argument and range should be directly the range for this variable:
grid_and_sample(x->-x^2/2, range(-3,3,length=100))
The return value is (lnP, samples, Px) where lnP is an interpolated/smoothed log PDF which can be evaluated anywhere within the original range, Px are sampled points of the original PDF, and samples is a NamedTuple giving the Monte-Carlo samples of each of the parameters.
(Note: only 1D sampling is currently implemented, but 2D like in the example above is planned)
CMBLensing.kdeMethod
kde(samples::AbstractVector; [boundary=(min,max), normalize="integral" or "max"])
kde(samples::AbstractMatrix; [boundary=[(min1,max1),(min2,max2)], normalize="integral" or "max", smooth_scale_2D])
Return a Kernel Density Estimate for a set of 1D or 2D samples. The return object is a function which can be evaluated anywhere to compute the PDF. If provided, boundary specifies a hard upper/lower bound for the 1 or 2 parameters, normalize specifies whether to normalize the PDF to unit integral or unit maximum, and smooth_scale_2D specifies how much smoothing to do for the 2D case.
Based on Python GetDist, which must be installed.
CMBLensing.lazy_pyimportMethod
lazy_pyimport(s)
Like pyimport(s), but doesn't actually load anything (not even PyCall) until a property of the returned module is accessed, allowing this to go in __init__ and still delay loading PyCall, as well as preventing a Julia module load error if a Python module failed to load.
CMBLensing.load_camb_CℓsMethod
load_camb_Cℓs(;path_prefix, custom_tensor_params=nothing,
unlensed_scalar_postfix, unlensed_tensor_postfix, lensed_scalar_postfix, lenspotential_postfix)
Load some Cℓs from CAMB files.
path_prefix specifies the prefix for the files, which are then expected to have the normal CAMB postfixes: scalCls.dat, tensCls.dat, lensedCls.dat, lenspotentialCls.dat, unless otherwise specified via the other keyword arguments. custom_tensor_params can be used to call CAMB directly for the unlensed_tensors, rather than reading them from a file (since a lot of the time this file doesn't get saved). The value should be a Dict/NamedTuple which will be passed to a call to camb, e.g. custom_tensor_params=(r=0,) for zero tensors.
CMBLensing.load_chainsMethod
load_chains(filename; burnin=0, burnin_chunks=0, thin=1, join=false, unbatch=true)
Load a single chain or multiple parallel chains which were written to a file by sample_joint.
Keyword arguments:
• burnin — Remove this many samples from the start of each chain, or if negative, keep only this many samples at the end of each chain.
• burnin_chunks — Same as burnin, but in terms of chain "chunks" stored in the chain file, rather than in terms of samples.
• thin — If thin is an integer, thin the chain by this factor. If thin == :hasmaps, return only samples which have maps saved. If thin is a Function, filter the chain by this function (e.g. thin=haskey(:g) on Julia 1.5+)
• unbatch — If true, unbatch the chains if they are batched.
• join — If true, concatenate all the chains together.
• skip_missing_chunks — Skip missing chunks in the chain instead of terminating the chain there.
The object returned by this function is a Chain or Chains object, which simply wraps an Array of Dicts or an Array of Array of Dicts, respectively (each sample is a Dict). The wrapper object has some extra indexing properties for convenience:
• It can be indexed as if it were a single multidimensional object, e.g. chains[1,:,:accept] would return the :accept key of all samples in the first chain.
• Leading colons can be dropped, i.e. chains[:,:,:accept] is the same as chains[:accept].
• If some samples are missing a particular key, missing is returned for those samples instead of an error.
• The recursion goes arbitrarily deep into the objects it finds. E.g., since sampled parameters are stored in a NamedTuple like (Aϕ=1.3,) in the θ key of each sample Dict, you can do chain[:θ,:Aϕ] to get all Aϕ samples as a vector.
CMBLensing.mean_std_and_errorsMethod
mean_std_and_errors(samples; N_bootstrap=10000)
Get the mean and standard deviation of a set of correlated samples from a chain where the error on the mean and standard deviation is estimated with bootstrap resampling using the calculated "effective sample size" of the chain.
CMBLensing.mixMethod
mix(ds::DataSet; f, ϕ, [θ])
Compute the mixed (f°, ϕ°) from the unlensed field f and lensing potential ϕ, given the definition of the mixing matrices in ds evaluated at parameters θ (or at fiducial values if no θ provided).
CMBLensing.noiseCℓsMethod
noiseCℓs(;μKarcminT, beamFWHM=0, ℓmax=8000, ℓknee=100, αknee=3)
Compute the (:TT,:EE,:BB,:TE) noise power spectra given white noise + 1/f. Polarization noise is scaled by $\sqrt{2}$ relative to μKarcminT. beamFWHM is in arcmin.
CMBLensing.paren_errorsMethod
paren_errors(μ, σ; N_in_paren=2)
Get a string representation of μ ± σ in "parenthesis" format, e.g. 1.234 ± 0.012 becomes 1.234(12).
CMBLensing.pixwinMethod
pixwin(θpix, ℓ)
Returns the pixel window function for square flat-sky pixels of width θpix (in arcmin) evaluated at some ℓs. This is the scaling of k-modes, the scaling of the power spectrum will be pixwin^2.
CMBLensing.projectMethod
project(healpix_field::HealpixField => cart_proj::CartesianProj; [method = :bilinear])
project(cart_field::FlatField => healpix_proj::ProjHealpix; [method=:bilinear])
Project a healpix_field to a cartesian projection specified by cart_proj, or project a cart_field back up to sphere on the Healpix pixelization specified by healpix_proj. E.g.
# sphere to cartesian
healpix_field = HealpixMap(rand(12*2048^2))
cart_proj = ProjLambert(Ny=128, Nx=128, θpix=3, T=Float32, rotator=(0,30,0))
f = project(healpix_field => cart_proj)
# and back to sphere
project(f => ProjHealpix(512))
The (Ny, Nx, θpix, rotator) parameters of cart_proj control the size and location of the projected region.
The use of => is to help remember in which order the arguments are specified.
For either projection direction, if the field is a QU or IQU field, polarization angles are rotated to be aligned with the local coordinates (sometimes called "polarization flattening").
The projection interpolates the original map at the positions of the centers of the projected map pixels. method controls how this interpolation is done, and can be one of:
• :bilinear — Bilinear interpolation (default)
• :fft — FFT-based interpolation, which uses a non-uniform FFT to evaluate the discrete Fourier series of the field at arbitrary new positions. This is currently implemented only for cartesian to Healpix projection. To make this mode available, you must load the NFFT package first. For GPU fields, you must also load CuNFFT. Projection with method=:fft is both GPU compatible and automatically differentiable.
A pre-computation step can be cached by first doing,
projector = CMBLensing.Projector(healpix_map.proj => cart_proj, method=:fft)
f = project(projector, healpix_map => cart_proj)
which makes subsequent project calls significantly faster. Note the method argument is specified in the precomputation step.
CMBLensing.rfft_degeneracy_facMethod
rfft_degeneracy_fac(n)
Returns an Array which is 2 if the complex conjugate of the corresponding entry in the half-plane real FFT appears in the full-plane FFT, and is 1 otherwise. n is the length of the first dimension of the full-plane FFT. The following identity holds:
sum(abs2.(fft(x))) == sum(rfft_degeneracy_fac(size(x,1)) .* abs2.(rfft(x)))
CMBLensing.sample_fFunction
sample_f([rng::AbstractRNG], ds::DataSet, Ω::NamedTuple, [d = ds.d]; kwargs...)
Draw a posterior sample of f from the logpdf for ds, given all the other arguments are held fixed at Ω. E.g.: sample_f(ds, (; ϕ, θ=(Aϕ=1.1,))).
Keyword arguments:
• fstart — starting guess for f for the conjugate gradient solver
• conjgrad_kwargs — Passed to the inner call to conjugate_gradient
CMBLensing.simulateMethod
simulate([rng], Σ)
Draw a simulation from the covariance matrix Σ, i.e. draw a random vector $\xi$ such that the covariance $\langle \xi \xi^\dagger \rangle = \Sigma$.
The random number generator rng will be used and advanced in the process, and defaults to Random.default_rng().
CMBLensing.symplectic_integrateMethod
symplectic_integrate(x₀, p₀, Λ, U, δUδx, N=50, ϵ=0.1, progress=false)
Do a symplectic integration of the potential energy U (with gradient δUδx) starting from point x₀ with momentum p₀ and mass matrix Λ. The number of steps is N and the step size ϵ.
Returns ΔH, xᵢ, pᵢ corresponding to change in Hamiltonian, and final position and momenta. If history_keys is specified a history of requested variables throughout each step is also returned.
CMBLensing.ud_gradeMethod
ud_grade(f::Field, θnew, mode=:map, deconv_pixwin=true, anti_aliasing=true)
Up- or down-grades field f to new resolution θnew (only in integer steps). Two modes are available specified by the mode argument:
• :map — Up/downgrade by replicating/averaging pixels in map-space
• :fourier — Up/downgrade by extending/truncating the Fourier grid
For :map mode, two additional options are possible. If deconv_pixwin is true, deconvolves the pixel window function from the downgraded map so the spectrum of the new and old maps are the same. If anti_aliasing is true, filters out frequencies above Nyquist prior to down-sampling.
CMBLensing.unbatchMethod
unbatch(chains::Chains)
Expand each chain in this Chains object by unbatching it.
CMBLensing.unbatchMethod
unbatch(chain::Chain)
Convert a chain of batch-length-D fields to D chains of unbatched fields.
CMBLensing.unmixMethod
unmix(f°, ϕ°, ds::DataSet)
unmix(f°, ϕ°, θ, ds::DataSet)
Compute the unmixed/unlensed (f, ϕ) from the mixed field f° and mixed lensing potential ϕ°, given the definition of the mixing matrices in ds evaluated at parameters θ (or at fiducial values if no θ provided).
LinearAlgebra.logdetMethod
logdet(L::FieldOp, θ)
If L depends on θ, evaluates logdet(L(θ)) offset by its fiducial value at L(). Otherwise, returns 0.
CMBLensing.BatchedRealType
BatchedReal(::Vector{<:Real}) <: Real
Holds a vector of real numbers and broadcasts algebraic operations over them, as well as broadcasting along the batch dimension of Fields, but is itself a Real.
CMBLensing.ParamDependentOpType
ParamDependentOp(recompute_function::Function)
Creates an operator which depends on some parameters $\theta$ and can be evaluated at various values of these parameters.
recompute_function should be a function which accepts keyword arguments for $\theta$ and returns the operator. Each keyword must have a default value; the operator will act as if evaluated at these defaults unless it is explicitly evaluated at other parameters.
Example:
Cϕ₀ = Diagonal(...) # some fixed Diagonal operator
Cϕ = ParamDependentOp((;Aϕ=1)->Aϕ*Cϕ₀) # create ParamDependentOp
Cϕ(Aϕ=1.1) * ϕ # Cϕ(Aϕ=1.1) is equal to 1.1*Cϕ₀
Cϕ * ϕ # Cϕ alone will act like Cϕ(Aϕ=1) because that was the default above
Note: if you are doing parallel work, global variables referred to in the recompute_function need to be distributed to all workers. A more robust solution is to avoid globals entirely and instead ensure all variables are "closed" over (and hence will automatically get distributed). This will happen by default if defining the ParamDependentOp inside any function, or can be forced at the global scope by wrapping everything in a let-block, e.g.:
Cϕ = let Cϕ₀=Cϕ₀
ParamDependentOp((;Aϕ=1)->Aϕ*Cϕ₀)
end
After executing the code above, Cϕ is now ready to be (auto-)shipped to any workers and will work regardless of what global variables are defined on these workers.
CMBLensing.ProjEquiRectMethod
ProjEquiRect(; Ny::Int, Nx::Int, θspan::Tuple, φspan::Tuple, T=Float32, storage=Array)
ProjEquiRect(; θ::Vector, φ::Vector, θedges::Vector, φedges::Vector, T=Float32, storage=Array)
Construct an EquiRect projection object. The projection can either be specified by:
• The number of pixels Ny and Nx (corresponding to the θ and φ angular directions, respectively) and the span in radians of the field in these directions, θspan and φspan. The order in which the span tuples are given is irrelevant, either order will refer to the same field. Note, the spans correspond to the field size between outer pixel edges, not from pixel centers. If one wishes to call Cℓ_to_Cov with this projection, φspan must be an integer multiple of 2π, but other functionality will be available if this is not the case.
• A manual list of pixels centers and pixel edges, θ, φ, θedges, φedges.
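For example, a hypothetical construction via the first form (the pixel counts and spans here are illustrative only; φspan = (0, 2π) keeps Cℓ_to_Cov available):
proj = ProjEquiRect(Ny=128, Nx=256, θspan=(π/2 - 0.1, π/2 + 0.1), φspan=(0, 2π))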
CMBLensing.@!Macro
Rewrites @! x = f(args...) to x = f!(x,args...)
Special cases for * and \ forward to mul! and ldiv!, respectively.
CMBLensing.@auto_adjointMacro
@auto_adjoint foo(args...; kwargs...) = body
is equivalent to
_foo(args...; kwargs...) = body
foo(args...; kwargs...) = _foo(args...; kwargs...)
@adjoint foo(args...; kwargs...) = Zygote.pullback(_foo, args...; kwargs...)
That is, it defines the function as well as a Zygote adjoint which takes a gradient explicitly through the body of the function, rather than relying on rules which may be defined for foo. Mainly useful in the case that foo is a common function with existing rules that you do not want to be used.
CMBLensing.@dictMacro
Pack some variables in a dictionary
> x = 3
> y = 4
> @dict x y z=>5
Dict(:x=>3,:y=>4,:z=>5)
CMBLensing.@distributedMacro
CMBLensing.@distributed ds1 ds2 ...
Assuming ds1, ds2, etc... are DataSet objects which are defined in the Main module on all workers, this makes it so that whenever these objects are shipped to a worker as part of a remote call, the data is not actually sent, but rather the worker just refers to their existing local copy. Typical usage:
@everywhere ds = load_sim(seed=1, ...)
CMBLensing.@distributed ds
pmap(1:n) do i
# do something with ds
end
Note that hash(ds) must yield the same value on all processors, ie the macro checks that it really is the same object on all processors. Sometimes setting the same random seed is not enough to ensure this as there may be tiny numerical differences in the simulated data. In this case you can try:
@everywhere ds.d = $(ds.d)
after loading the dataset to explicitly set the data based on the simulation on the master process. Additionally, if the dataset object has fields which are custom types, these must have an appropriate Base.hash defined.
CMBLensing.@ondemandMacro
@ondemand(Package.function)(args...; kwargs...)
@ondemand(Package.Submodule.function)(args...; kwargs...)
Just like calling Package.function or Package.Submodule.function, but Package will be loaded on-demand if it is not already loaded. The call is no longer inferrable.
CMBLensing.@substMacro
@subst sum(x*$(y+1) for x=1:2)
becomes
let tmp=(y+1)
sum(x*tmp for x=1:2)
end
to aid in writing clear/succinct code that doesn't recompute things unnecessarily.
CMBLensing.@⌛Macro
@⌛ [label] code ...
@⌛ [label] function_definition() = ....
Label a section of code to be timed. If a label string is not provided, the first form uses the code itself as a label, the second uses the function name, and it is the body of the function which is timed.
To run the timer and print output, returning the result of the calculation, use
@show⌛ run_code()
Timing uses TimerOutputs.get_defaulttimer().
https://physics.stackexchange.com/questions/448764/the-necessity-of-light-for-the-synchronization-of-clocks-in-special-relativity
# The Necessity of Light for the Synchronization of Clocks in Special Relativity
In Einstein's paper, "On the Electrodynamics of Moving Bodies" light is used to synchronize the clocks of two different points in space. Thus if a frame S' is moving relative to frame S, an observer in frame S will observe that there is time dilation in frame S' because of the use of light to attempt to synchronize a clock in S with a clock in S'. However, it seems rather arbitrary to use light to synchronize clocks since it seems like you could devise a method of synchronizing clocks that involved the use of some other object whose speed is not constant, thus freeing you from the problems that come from using light (whose speed is constant) to synchronize a clock between the frames. What am I missing?
• In order to use another object, you must know the speed of that object. How do you measure that object's speed? Dec 17, 2018 at 0:52
Indeed, the choice of light for synchronization is arbitrary and not necessary. Other conventions could have been chosen. This concept has been explored extensively in the literature, including especially Reichenbach.
The synchronization convention used by Einstein is widely regarded as the most natural and reasonable one, while recognizing that it is a convention. So what happens to our understanding of relativity if we dispense with this convention? Luckily, the mathematical framework of pseudo-Riemannian geometry allows us to consider such things. We can investigate arbitrary coordinate systems, including arbitrary synchronization conventions, using the concepts of coordinate charts and tensors.
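For concreteness, the standard parametrization from that literature is Reichenbach's ε-convention: if a light signal leaves A at clock time $t_A$, reflects at B, and returns to A at $t'_A$, one assigns
$$t_B = t_A + \varepsilon\,(t'_A - t_A), \qquad 0 < \varepsilon < 1,$$
and Einstein's convention is simply the choice $\varepsilon = 1/2$, which makes the one-way speed of light equal in both directions.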
It turns out that there are still many relativistic effects that do not depend on the synchronization convention. A version of time dilation would survive, as would the relativistic Doppler effect; it would still require infinite energy to reach the speed of light, the twin paradox would still happen, and electromagnetism would still be a relativistic field theory.
The relativity of simultaneity would be different, as would length contraction: not absent, but more general. Of course, it is unclear from a historical perspective how relativity would have been developed without Einstein's convention.
• This is really helpful, thanks! It seems as though this view points to the existence of some more fundamental fact about the transfer of information between reference frames. Is there any literature on this subject or at least a general proof that the principles of time dilation, Doppler effect, etc. are preserved regardless of the method used to synchronize clocks? Thanks a lot again for your help. Dec 17, 2018 at 18:45
• Yes. This basically comes from general relativity. What you want to look for are quantities that are manifestly invariant. Anything that is manifestly invariant is completely independent of the coordinate system including synchronization
– Dale
Dec 17, 2018 at 19:39
• Could you please guide me to some link to understand why clocks in general relativity can’t be synchronized. Jun 19, 2021 at 19:44
• Clocks can certainly be synchronized in General Relativity. And you are not limited to Einstein’s convention.
– Dale
Jun 19, 2021 at 20:20
• Can you suggest any source where I can read in detail a bit about synchronisation in GR Jun 20, 2021 at 16:57
I would like to give a short summary of my views on the subject. Dale is right that
the choice of light for synchronization is arbitrary and not necessary
But there is a point seldom remarked. Einstein's synchronization (ES) is physically meaningful. I mean that it's meaningful that ES is possible.
Assume you have three clocks in different locations, A, B, C. Decide to use ES between A and B, and also between A and C. Then compare B and C: will you find them ES synchronized? Here there is no arbitrariness, no convention: either they are synchronized, or they aren't. Only experiment may decide. And the experiment speaks in the positive: ES is transitive.
Then I can't see any reason to adopt a different synchronization, which requires greater attention and a more complex mathematical apparatus. An analogy can help. It's well known that in most practical and scientific applications we are allowed to assume space obeys Euclidean geometry. This means that Cartesian orthogonal and isometric coordinates are possible. No doubt this is an arbitrary choice: we could use curvilinear coordinates as well. But at what advantage? Cartesian coordinates carry written into their structure, so to speak, the Euclidean character of space, with the simplest possible formula for distance: $$ds^2 = dx^2 + dy^2 + dz^2.$$
It's true that we may take recourse to Riemannian geometry also in a Euclidean space, and, thinking of spacetime, also if it is Minkowskian, as in SR. But to what avail?
Let me repeat. The only physically meaningful statement is that ES is possible (it's transitive). Given its advantages, I can't see any reason for other choices. And actually I never saw other synchronizations used, outside epistemological discussions. Even there, other synchronizations are always introduced as modifications of ES. Apparently there's no independent way to define one.
https://www.groundai.com/project/topology-of-entanglement-evolution-of-two-qubits/
# Topology of Entanglement Evolution of Two Qubits
## Abstract
The dynamics of a two-qubit system is considered with the aim of a general categorization of the different ways in which entanglement can disappear in the course of the evolution, e.g., entanglement sudden death. The dynamics is described by the function →n(t), where →n is the 15-dimensional polarization vector. This representation is particularly useful because the components of →n are direct physical observables, there is a meaningful notion of orthogonality, and the concurrence C can be computed for any point in the space. We analyze the topology of the space S_sep of separable states (those having C = 0), and the often lower-dimensional linear dynamical subspace d that is characteristic of a specific physical model. This allows us to give a rigorous characterization of the four possible kinds of entanglement evolution. Which evolution is realized depends on the dimensionality of d and of S_sep ∩ d, the position of the asymptotic point of the evolution, and whether or not the evolution is "distance-Markovian", a notion we define. We give several examples to illustrate the general principles, and to give a method to compute critical points. We construct a model that shows all four behaviors.
###### pacs:
02.40.Pc,03.65.Ud,03.65.Yz,03.67.Mn
## I Introduction
Entanglement is one of the most intriguing aspects of quantum physics and is known to be a useful resource for quantum computation and communication (1); (2). However, its structure and evolution in time are not fully understood even for simple systems such as two qubits, where a relatively computable entanglement measure, the Wootters concurrence C, is available (3). The difficulty resides in the high dimensionality of state spaces (fifteen for two qubits) and the non-analyticity of the definition of the entanglement measure C.
In the presence of external noise, pure states become mixed and entanglement degrades. These are distinct but related issues. The completely mixed state (density matrix proportional to the unit matrix) is separable: it has C = 0. Pure states, on the other hand, can have any value of C between 0 and 1 inclusive. Purity (measured, for example, by the von Neumann entropy) tends to decrease monotonically and smoothly with time under Markovian evolution. The same is not true for entanglement. Apart from the expected smooth half-life (HL) decaying behavior, the sudden disappearance of entanglement has been theoretically predicted and experimentally observed (6); (7). Widely known as entanglement sudden death (ESD), this non-analytic behavior has been shown to be a generic feature of multipartite quantum systems regardless of whether the environment is modeled as quantum or classical (10); (11); (12); (15); (16); (17); (18); (19); (20); (21); (22); (23); (24); (26). While the monotonic decrease of C is usually associated with Markovian evolution, non-Markovian evolution can also lead to entanglement sudden birth (ESB). It is believed to be related to the memory effect of the (non-Markovian) environment (27); (10); (30); (24); (26). Although most investigations have been focused on two-qubit systems and we will also focus on this case, ESD and ESB have been shown to exist in multi-qubit systems, and even in quantum systems with infinite dimensional Hilbert spaces, such as harmonic oscillators (32); (33); (36); (37); (38); (40); (41).
Our aim in this paper is to formulate the problem of the evolution of entanglement in two-qubit systems in the polarization vector space, and to show that this formulation leads naturally to a categorization of entanglement evolutions into four distinct types, generalizing and making precise the concepts of HL, ESD, and ESB behaviors. It turns out that these categories are consequences of certain topological characteristics of a model. To show this, we proceed as follows. Sec. II characterizes in detail two manifolds in the polarization vector space: the manifold S of admissible physical states and the manifold S_sep of separable states. Sec. III presents the concept of a dynamical subspace, a manifold that is associated with a physical model, and then gives several concrete examples of models of increasing complexity. In Sec. III we also show how to compute critical points: parameter values that separate one behavior from another. In Sec. IV, we prove that our categorization scheme is exhaustive. Sec. V presents the final results and discussion.
## II Entanglement in Polarization Vector Space
The universality of the various entanglement behaviors suggests that they are derived from some structural property of entanglement in the physical state space, and that the system dynamics can be viewed as a probe of that property. To state this property precisely, we need to first characterize the space of all admissible density matrices ρ, or equivalently, the space of all admissible polarization vectors.
For two qubits, the polarization vector →n is defined by the equation
$$\rho = \frac{1}{4} I_4 + \frac{1}{4}\sum_{i=1}^{15} n_i \mu_i, \qquad (1)$$
where I₄ is the unit matrix and the μ_i are the generators of SU(4), satisfying
$$\mu_i = \mu_i^\dagger, \quad \mathrm{Tr}\,\mu_i = 0, \quad \mathrm{Tr}\,\mu_i \mu_j = 4\delta_{ij}. \qquad (2)$$
For our purposes the μ_i are most conveniently chosen as
$$\mu_{\alpha\beta} = \sigma_\alpha \otimes \sigma_\beta, \qquad (3)$$
where σ_α acts on the first qubit and σ_β acts on the second qubit. α and β run over the unit matrix and the Pauli matrices σ_X, σ_Y, and σ_Z. Thus, in Eqs. 1 and 2, i is regarded as a composite index of α and β, but the identity ⊗ identity term is singled out. This space has the usual Euclidean inner product (which corresponds to the Hilbert-Schmidt inner product on the density matrices), and the inner product induces a metric and a topology in the usual fashion. The components of →n are physical observables and can be calculated by n_i = Tr ρμ_i. For example, the average value of the z-component of the spin of the first qubit is n_ZI = ⟨σ_Z ⊗ I⟩. The six components n_XI, n_YI, n_ZI, n_IX, n_IY, and n_IZ represent physical polarizations of spin qubits. The other nine components (n_XX, etc.) are inter-qubit correlation functions. The most common name for →n is "polarization vector", but "coherence vector" and "generalized Bloch vector" are also in use. We note that different normalizations in Eqs. 1 and 2 are used in the literature (42); (44); (45); (46); (47).
The generators satisfy
$$\mu_i \mu_j = \delta_{ij} I + (i f_{ijk} + d_{ijk})\, \mu_k, \qquad (4)$$
where f_{ijk} is totally anti-symmetric and d_{ijk} is totally symmetric. These structure constants can be found in Appendix A.
Eq. 1 holds for a 4-level system. It has an obvious generalization to N-level systems; the μ_i just become the generators of SU(N). For N = 2, →n is the usual Bloch vector in a real 3-dimensional vector space. It is important to stress that the correspondence between ρ and →n is one-to-one; they give completely equivalent descriptions of the physical system. Certain physical concepts have geometric interpretations when stated in terms of →n, as we shall see below. This is not so true of ρ. In our opinion, →n is the more convenient quantity for most purposes. ρ has been the traditional language in which to describe mixed states, but some experimental groups now favor →n (48); (49).
We shall refer to the set of all admissible →n as S, the state space. What shape does S have? Eq. 1 guarantees that ρ is Hermitian and has unit trace. To guarantee that ρ is positive (all its eigenvalues are non-negative), we also need the condition that all coefficients of the characteristic polynomial are non-negative (50). Note a₁ = 1 by definition.
For two-qubit systems there are four of them, which are
$$1!\, a_1 = \mathrm{Tr}\,\rho = 1, \qquad (5)$$
$$2!\, a_2 = 1 - \mathrm{Tr}\,\rho^2, \qquad (6)$$
$$3!\, a_3 = 1 - 3\,\mathrm{Tr}\,\rho^2 + 2\,\mathrm{Tr}\,\rho^3, \qquad (7)$$
$$4!\, a_4 = 1 - 6\,\mathrm{Tr}\,\rho^2 + 8\,\mathrm{Tr}\,\rho^3 + 3\left(\mathrm{Tr}\,\rho^2\right)^2 - 6\,\mathrm{Tr}\,\rho^4. \qquad (8)$$
Note that a₁ ≥ 0 is trivially satisfied for all density matrices.
For one qubit, →n is the usual Bloch vector, and only the constraint a₂ ≥ 0 applies. Thus the positivity requirement is that |→n| ≤ 1 and S is the familiar 3-dimensional spherical volume. For the 2-qubit case that we are concerned with, there are cubic and quartic inequalities to be satisfied, so the surface that bounds S is not so simple. The main point, however, is that S is convex: the line joining any two points in S is also in S. This follows from the convexity argument for ρ: if ρ₁ and ρ₂ are positive, then so is λρ₁ + (1−λ)ρ₂ for all 0 ≤ λ ≤ 1. This argument clearly also holds for →n.
All of the positivity requirements can be written in terms of →n, but the higher-order ones are fairly complicated. The requirement Tr ρ² ≤ 1 is of particular interest, since it has a simple expression in terms of →n:
$$0 \le 1 - \mathrm{Tr}(\rho^2) = 1 - \frac{1}{16}\Big[\mathrm{Tr}\,I_4 + 2\sum_i n_i\,\mathrm{Tr}\,\mu_i + \sum_{i,j=1}^{15} n_i n_j\,\mathrm{Tr}\,\mu_i\mu_j\Big] = \frac{3}{4} - \frac{1}{4}\,|\vec{n}|^2, \quad \text{or} \quad |\vec{n}|^2 \le 3.$$
Hence the vectors in S lie within a sphere of radius √3. Technically, S is a 15-dimensional manifold with boundary. We will follow physics usage and also employ the term "space" for S, though of course it is not closed under vector addition. Note that pure states satisfy Tr ρ² = 1, so the pure states are a subset of the 14-sphere in R¹⁵ with |→n| = √3. To be more specific, the two-qubit pure states are of measure zero on that sphere since they can be parametrized by 6 real parameters:
$$|\psi\rangle = \cos\theta_1|00\rangle + e^{i\phi_1}\sin\theta_1\sin\theta_2|01\rangle + e^{i\phi_2}\sin\theta_1\cos\theta_2\cos\theta_3|10\rangle + e^{i\phi_3}\sin\theta_1\cos\theta_2\sin\theta_3|11\rangle.$$
An overall phase has been dropped in writing |ψ⟩ since it does not appear in ρ.
Further insight into the shape of S can be gained by noting that S must be invariant under local unitary transformations (rotations of one spin at a time), which means that S has cylindrical symmetry around the single-qubit axes. This is verified by making some 2-dimensional sections of S with exactly two components of →n non-zero. In contrast to Ref. (51) where a different basis was used (generalized Gell-Mann matrices), we find only two types of shapes, as shown in Fig. 1 and tabulated in Table 1. Using the structure constants f_{ijk} and d_{ijk}, it can be shown that the square and disc sections are the only possibilities along the (n_i, n_j) plane. When μ_i commutes with μ_j the section is a square, and when μ_i anticommutes with μ_j the section is a circular disc. Details can be found in Appendix A.
The discs correspond to the local rotations between single-qubit-type axes, such as the (n_IX, n_IZ) section. If, on the other hand, we rotate from a definite polarization state of qubit 1 to a definite polarization state of qubit 2, we find a square cross-section; examples are the (n_ZI, n_IZ) or (n_XI, n_IX) sections. Rotations of →n that mix single-qubit-type and correlation-type directions can be of either shape; the (n_ZI, n_ZZ) section is square, while the (n_ZI, n_XZ) section is a disc. Finally, rotations between correlation-type directions can have either shape. The (n_XX, n_XY) section corresponds to a local rotation of qubit 2; hence it is a disc. Rotations involving both qubits, such as that which generates the (n_XX, n_YY) section, generally give square sections.
We may conclude that S is a highly dimpled ball, perhaps most similar in shape to a golf ball. Its minimum radius is 1 and its maximum radius is √3.
Since our aim is to quantify entanglement in S, we need an entanglement measure. We will employ C, the concurrence of Wootters (4). The concurrence varies from C = 0 for separable states to C = 1 for maximally entangled states, i.e., the Bell-like states. It is defined as C = max{0, q}, where
$$q = \lambda_1 - \lambda_2 - \lambda_3 - \lambda_4,$$
and the λ_i are the square roots of the eigenvalues of the matrix ρ_{AB} ρ̃_{AB} arranged in decreasing order, where
$$\tilde{\rho}_{AB} = \left(\sigma_y^A \otimes \sigma_y^B\right)\rho_{AB}^*\left(\sigma_y^A \otimes \sigma_y^B\right) \qquad (9)$$
is a spin-flipped density matrix and ρ*_{AB} is the complex conjugate of the density matrix ρ_{AB}. It is not possible to write the function C(→n) in a simple explicit form unless further restrictions on ρ apply (52), but it is clear from the form of the continuous function q and the presence of the max function that C is a continuous but not an analytic function of →n.
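As a concrete aid (a sketch added in editing, not part of the original text), the definition above is straightforward to evaluate numerically; a minimal Julia version, assuming a 4×4 density matrix as input:
using LinearAlgebra
function concurrence(ρ)
    σy = [0 -im; im 0]
    Y = kron(σy, σy)
    ρ̃ = Y * conj(ρ) * Y                      # spin-flipped matrix, Eq. (9)
    λ = sort(sqrt.(abs.(real.(eigvals(ρ * ρ̃)))), rev=true)
    max(0, λ[1] - λ[2] - λ[3] - λ[4])        # C = max{0, q}
end
ψ = [1, 0, 0, 1] / √2     # Bell state (|00⟩ + |11⟩)/√2
concurrence(ψ * ψ')       # ≈ 1.0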
We next consider S_sep, the manifold of separable states, which we define as those for which the concurrence vanishes: C = 0. S_sep is a subset of S, and the complement S \ S_sep is the set of entangled states. S_sep includes the origin, since the origin is the completely mixed state ρ = I₄/4 and C(I₄/4) = 0. Since C is continuous, S_sep actually includes a ball of finite radius about the origin: it can be shown that if |→n| ≤ 1/√3, then C = 0 (53). Thus the manifold of separable states has finite volume in R¹⁵: S_sep is also 15-dimensional. We will also refer below to the interiors and boundaries of S and S_sep and denote these by Int(S), ∂S, Int(S_sep), and ∂S_sep. Since the various sets we encounter in this paper are not linear subspaces, we need the general topological definitions of "boundary", "interior" and "dimension". These may be found, for example, in Ref. (54).
S_sep is also a convex set. What else can we say about the shape of S_sep? It is easily seen that the surface of S_sep, like that of S, is rather non-spherical. Indeed, S_sep extends out to |→n| = 1 along any of the basis vector directions. Coupled with the fact that S_sep is convex, we see that S_sep must contain a large hyperpolygon with vertices at n_i = ±1. S_sep is invariant under local rotations, so it has the same hyper-cylindrical symmetry as S. Again we may consider 2-dimensional sections in order to understand the shape of the surface. Two examples are shown in Fig. 1.
A simple-sampling Monte Carlo study shows that there are more entangled states than separable states in S. The details can be found in Appendix B.
## III Dynamical Evolution in S
### III.1 Introduction
The dynamical evolution or trajectory of a quantum system is a function →n(t) with t ≥ 0 and →n(t) ∈ S. The initial point is →n(0), and, in the cases of interest here, the trajectory approaches a limiting point as t → ∞, so we can define →n∞ = lim_{t→∞} →n(t). The entanglement evolution is the associated function C(t) = C(→n(t)). For studies of decoherence the main interest is in entanglement evolutions such that C(0) > 0 and C(∞) = 0, i.e., the system starts in an entangled state and ends in a separable state. Four distinct categories of entanglement evolution of this type have been seen in model studies (56). They are shown in Fig. 2. These four categories are topologically distinct, as may be seen by considering the zero set Z = {t : C(t) = 0}. In category 1, Z is the null set; in category 2, Z is a set of discrete points; in category 3, Z is a single semi-infinite interval; in category 4, Z is a union of disjoint intervals.
These categories also reflect how the trajectory traverses S and S_sep. Entanglement evolutions in category 1 approach the boundary of separable and entangled regions asymptotically from the entangled side. The trajectories never hit S_sep, while the decrease in entanglement may or may not be monotonic, as seen in Ref. (57). Entanglement evolutions in category 2 bounce off the surface of S_sep at finite times but never enter Int(S_sep). Overall, entanglement diminishes nonmonotonically. Entanglement evolutions in category 3 enter S_sep at finite time and entanglement stays zero afterwards. This is the typical ESD behavior. Entanglement evolutions in category 4 give ESB: after ESD, entanglement suddenly appears after some dark period.
We shall focus on models with associated linear maps Λ(t), i.e., ρ(t) = Λ(t)[ρ(0)]. More general non-linear models may be contemplated, but they seem to have unphysical features (58). It is known that Λ is completely positive (CP) if and only if there exists a set of Kraus operators E_a such that (2)
$$\Lambda[\rho(0)] = \sum_a E_a\, \rho(0)\, E_a^\dagger. \qquad (10)$$
We require Λ to be trace preserving so that it maps density matrix to density matrix. This condition is equivalent to the completeness condition Σ_a E_a† E_a = I. In terms of the polarization vector, the dynamics is described by an affine map Υ(t) acting on the initial polarization vector →n(0), i.e.
$$\vec{n}(t) \equiv \Upsilon(t)[\vec{n}(0)] = T(t)\,\vec{n}(0) + \vec{m}(t), \qquad (11)$$
where T(t) is a real matrix and →m(t) is a real vector (47). →m(t) is zero for all time only when Λ is unital, i.e., it maps →n = 0 to →n = 0 (in terms of ρ, the unital property means that Λ maps the identity matrix to the identity matrix). T(0) = I and →m(0) = 0.
Coherent dynamics is described by unitary transformations on the density matrix (single Kraus operator). The dynamical map is then linear, which translates to orthogonal transformations acting on the polarization vector →n. Decoherent dynamics (multiple Kraus operators) is characterized by the nonorthogonality of the transfer matrix T. Markovian dynamics is conventionally defined by Λ possessing the semigroup property Λ(t₁+t₂) = Λ(t₂)Λ(t₁), which translates to
$$T(t_1+t_2) = T(t_2)\,T(t_1), \qquad (12)$$
$$\vec{m}(t_1+t_2) = T(t_2)\,\vec{m}(t_1) + \vec{m}(t_2). \qquad (13)$$
We shall adopt a slightly different definition of Markovianity for the present paper. An evolution will be said to be "distance Markovian" if |→n(t) − →n∞| is a monotonically decreasing function of t. Note "distance Markovian" is a weaker condition than Markovian, though the two are usually equivalent. Given the semigroup property Eq. 12 and Eq. 13, we have
$$|\vec{n}(t) - \vec{n}_\infty| = \left|\left[T(t)\vec{n}(0) + \vec{m}(t)\right] - \left[T(t)\vec{n}_\infty + \vec{m}(t)\right]\right| \le \|T(t)\|\,|\vec{n}(0) - \vec{n}_\infty| \le |\vec{n}(0) - \vec{n}_\infty|,$$
since all eigenvalues of T(t) have norms in the range 0 to 1, i.e., the dynamics cannot increase the purity of the quantum state.
Any model of an open quantum system defines a set of possible dynamical evolutions. This is done by specifying the equations of motion, which give T(t) and →m(t), and the initial conditions, which give →n(0). We define the dynamical subspace d of a model as the set of all trajectories allowed by the set of initial conditions and the equations of motion. Eq. 11 shows that, as long as the set of all initial conditions is a linear space (the usual case), then d is a linear space intersected with S: we first choose a basis that spans the set of all possible →n(0), then evolve this basis according to Eq. 11, giving a linear subspace in the space of all →n. A precise and general definition of d is given in Appendix C. The set of admissible →n is then given by intersecting this linear subspace with S. d is a manifold of any dimension from 1 to 15 in the two-qubit case. We note that the dimension of d could be smaller than 15. This happens when both ρ(0) and the Kraus operators are expandable in the identity and a true subalgebra of su(4); the dimension of d is then equal to the number of independent elements in the subalgebra. For example, if ρ(0) is a two-qubit "X-state" and the dynamics can be described by the action of Kraus operators in the X-form, the dynamical subspace will be 7-dimensional (59).
It is the nature of the intersection of d with S_sep and the position of →n∞ relative to S_sep that determines the categories of entanglement evolution of a model.
The aim of the remainder of this paper is to show how to determine the topological structure of d ∩ S_sep and the position of →n∞ for various illustrative models of increasing complexity, and then to deduce the possible entanglement evolutions from this information. We note that in general d can be determined without fully solving the dynamics. Thus it is possible to gain qualitative information about the entanglement evolution of the model with simple checks.
### III.2 Model D3
Our first model consists of two qubits (A and B) with a Heisenberg interaction and classical dephasing noise on one of the qubits. The Hamiltonian is
$$H(t) = -\frac{1}{2}\left[J\,\vec{\sigma}^A \cdot \vec{\sigma}^B + s(t)\,g\,\sigma_z^B + B_0\,\sigma_z^A\right], \qquad (14)$$
where s(t) is a random function of time. This is a classical noise model. To compute ρ(t) we need to average over a probability functional for s(t), which we will specify more precisely below. Note that the manifold spanned by {|01⟩, |10⟩} is decoupled from the manifold spanned by {|00⟩, |11⟩} under the influence of this Hamiltonian. Thus if the initial density matrix lies in one of the two subspaces, the four-level problem decouples into two two-level problems and we can use the Bloch ball representation to visualize the state space and entanglement evolution.
Take the initial state to be in span{|01⟩, |10⟩} for example. The dynamical subspace d is a 3-dimensional ball, as shown in Fig. 3. This makes it relatively easy to visualize the state and entanglement evolution. However, note that the center of the ball is not the fully mixed state I₄/4. In fact, all the points on the z-axis belong to ∂S_sep because every neighborhood of any of these points contains points for which C > 0.
The square roots of the non-zero eigenvalues of ρρ̃ are (1 ± r sin θ)/2, where r and θ are the spherical coordinates of the ball, and ρ̃ is the spin-flipped density matrix, as in Eq. 9. The concurrence is given by
$$C = r\sin\theta. \qquad (15)$$
The maximally entangled states are on the equator and the separable states are on the z-axis, as seen in Fig. 3. The concurrence has azimuthal symmetry and is linear in the radial distance from the z-axis. The separable states in d form the 1-dimensional line that connects the north and south poles of the ball.
The key point is that S_sep ∩ d has a lower dimension than d itself. Now consider the possible trajectories with →n∞ on the z-axis. No function →n(t) with a continuous first derivative can have a finite time interval with C = 0. The trajectories either hit the z-axis at discrete time instants, which puts them in category 2, or approach the z-axis asymptotically, which puts them in category 1.
Let us specify the noise in more detail to demonstrate how those two qualitatively different behaviors are related to Markovianity. Qubit A sees a static field B₀ while qubit B sees a fluctuating field s(t)g. All fields are in the z-direction. We will take the noise to be random telegraph noise (RTN): s(t) assumes the values ±1 and switches between these two values at an average rate γ. RTN is widely observed in solid state systems (60); (61); (62); (63).
For this dephasing noise model, the above-mentioned decoupling into two 2-dimensional subspaces occurs. In the block spanned by {|01⟩, |10⟩} we find
$$H(t) = -\frac{\sigma_z}{2}\left[s(t)\,g + B_0\right],$$
where σ_z is the Pauli matrix in the subspace.
This Hamiltonian can be solved exactly using a quasi-Hamiltonian method (46). The time-dependent decoherence problem can be mapped exactly to a time-independent problem where the two-value fluctuating field is described by a spin half particle. The quasi-Hamiltonian is given by
$$H_q = -i\gamma + i\gamma\,\tau_1 + L_z\left(B_0 + \tau_3\, g\right),$$
where the τᵢ are the Pauli matrices of the noise "particle" and L_z is the generator of rotations about the z-axis in the qubit space.
The transfer matrix is given by
$$T(t) = \begin{pmatrix} \zeta_T(t)\cos B_0 t & \zeta_T(t)\sin B_0 t & 0 \\ -\zeta_T(t)\sin B_0 t & \zeta_T(t)\cos B_0 t & 0 \\ 0 & 0 & 1 \end{pmatrix}, \qquad (16)$$
where
$$\zeta_T(t) = e^{-\gamma t}\left[\cos(\Omega t) + \frac{\gamma}{\Omega}\sin(\Omega t)\right], \qquad (17)$$
$$\Omega = \sqrt{g^2 - \gamma^2}, \qquad (18)$$
is the dephasing function due to RTN; it describes the phase coherence in the x-y plane (26). →m(t) = 0 since the dynamics is unital. Note that ζ_T has qualitatively different behaviors in the g > γ and g < γ regions, as the trigonometric functions become hyperbolic functions (46).
Taking a maximally entangled state on the x-axis of the ball as initial state, the effective Bloch vector is
$$\vec{n}(t) = \begin{pmatrix} \zeta_T(t)\cos B_0 t \\ -\zeta_T(t)\sin B_0 t \\ 0 \end{pmatrix}. \qquad (19)$$
The state trajectory is fully in the equatorial plane, as seen in Fig. 4. The dephasing function modulates the radial variation and the static field B₀ provides precession. Here
$$|\vec{n}(t) - \vec{n}_\infty| = |\zeta_T(t)|, \qquad (20)$$
and the dynamics is distance Markovian if γ > g (ζ_T then being monotonic). In this parameter region, ζ_T can be approximated by a decaying exponential and the dynamics is approximately Markovian as well. Thus we do not need to distinguish Markovian and distance Markovian in this model. In the Markovian case, the monotonicity of ζ_T gives rise to an inward spiral, while in the non-Markovian case the state trajectory periodically spirals outwards with frequency Ω. In both cases, the limiting state is the origin of the ball.
The concurrence evolution is given by
$$C(t) = |\zeta_T(t)|.$$
Thus Markovian noise gives rise to entanglement evolutions in category 1 while non-Markovian noise gives rise to those in category 2. Entanglement evolutions in the other two categories cannot occur, due to the fact that S_sep ∩ d has lower dimension than d.
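To make the two regimes concrete, here is a hypothetical numerical check of ζ_T(t) from Eqs. (17)-(18), with illustrative parameter values (a sketch in Julia, added in editing):
ζT(t; g, γ) = (Ω = sqrt(complex(g^2 - γ^2)); real(exp(-γ*t) * (cos(Ω*t) + γ/Ω * sin(Ω*t))))
ζT(2.0; g=0.5, γ=2.0)   # γ > g: monotonic decay, no zero crossings (category 1)
ζT(2.0; g=2.0, γ=0.5)   # γ < g: damped oscillation through zero (category 2)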
### III.3 Model D8
In the previous section we saw that a dynamical subspace spanned by two computational basis states does not possess the property that S_sep ∩ d has non-zero volume in d. A natural question is whether simply increasing the number of basis states helps. This can be done by choosing a Hamiltonian that connects only the triplet states in the original Hilbert space. One example would be
$$H = J(t)\,\vec{\sigma}^A \cdot \vec{\sigma}^B,$$
as in Ref. (65), where the Heisenberg coupling has time dependence and is modeled as a classical random process.
Note this Hamiltonian conserves the total angular momentum of the two qubits. As a result, the triplet space spanned by {|00⟩, (|01⟩ + |10⟩)/√2, |11⟩} is decoupled from the singlet space. Thus the dynamical subspace has its basis elements in the su(3) algebra if we choose the initial state to be in the triplet subspace.
Using Gell-Mann matrices as the elements of the algebra, the state space is a subset of a ball in R⁸ (42). We write
$$\rho = \frac{1}{3} I + \frac{1}{2}\sum_{i=1}^{8} m_i \mu_i,$$
where the μᵢ satisfy
$$\mu_i = \mu_i^\dagger, \quad \mathrm{Tr}\,\mu_i = 0, \quad \mathrm{Tr}\,\mu_i\mu_j = 2\delta_{ij}.$$
The m_i are linearly related to the previously defined n_i.
The square roots of the eigenvalues of ρρ̃ are
$$\lambda_{1,2} = \left|\,\frac{1}{2}\sqrt{m_6^2 + m_7^2} \pm \frac{\sqrt{2}}{6}\sqrt{2 - \sqrt{3}\left(\sqrt{3}\,m_3 + m_8\right) + 3 m_8\left(\sqrt{3}\,m_3 - m_8\right)}\,\right|, \qquad \lambda_{3,4} = 0. \qquad (21)$$
Thus the set of separable states is composed of two geometric objects: the set m₆ = m₇ = 0, and the set
$$2 - \sqrt{3}\left(\sqrt{3}\,m_3 + m_8\right) + 3 m_8\left(\sqrt{3}\,m_3 - m_8\right) = 0.$$
In addition to a concurrence-zero hyperline, we have a hyperplane in d. Hence S_sep ∩ d again has lower dimension than d, and this model only displays entanglement evolutions in categories 1 and 2.
Introducing extra dimensions thus helps to form a non-zero volume of S_sep in d, but a Hilbert space spanned by three of the four computational basis states is still not enough. D3 and D8 both avoid the region near the fully mixed state I₄/4, where most separable states reside (66). This region is included in the dynamical subspaces in the next two sections.
Note that for D3 and D8, the symmetry of the Hamiltonian and the specification of the initial conditions allow us to fully describe the dynamical subspace without explicitly solving the dynamics. This feature can be seen in the more complicated models in the following sections as well: entanglement evolution categories, as a qualitative property of the system dynamics, can be determined from symmetry considerations of the model (dynamics plus initial condition), the position of →n∞ in S_sep, and the memory effect of the environment.
### III.4 Model YE
Yu and Eberly considered a disentanglement process due to spontaneous emission for two two-level atoms in two cavities. In this case the decoherence clearly acts independently on the two qubits. Nevertheless they found that ESD occurs for specific choices of initially entangled states (11).
The decoherence process is formulated using the Kraus operators
$$\rho(t) = \sum_{\mu=1}^{4} K_\mu(t)\,\rho(0)\,K_\mu^\dagger(t), \qquad (22)$$
where the K_μ satisfy Σ_μ K_μ†(t) K_μ(t) = I for all t (68); (2). For the atom-in-cavity model, they are explicitly given by
$$K_1 = F_1 \otimes F_1, \quad K_2 = F_1 \otimes F_2, \qquad (23)$$
$$K_3 = F_2 \otimes F_1, \quad K_4 = F_2 \otimes F_2, \qquad (24)$$
where
$$F_1 = \begin{pmatrix} \gamma & 0 \\ 0 & 1 \end{pmatrix}, \quad F_2 = \begin{pmatrix} 0 & 0 \\ \omega & 0 \end{pmatrix}, \qquad (25)$$
with γ = γ(t) decreasing monotonically from 1, and ω = √(1 − γ²), which ensures completeness.
It is possible to choose initial states such that the density matrices have the following form for all t:
$$\rho(t) = \frac{1}{3}\begin{pmatrix} a(t) & 0 & 0 & 0 \\ 0 & b(t) & z(t) & 0 \\ 0 & z(t) & c(t) & 0 \\ 0 & 0 & 0 & d(t) \end{pmatrix}, \qquad (26)$$
with
$$a(t) = \kappa^2 a_0, \qquad (27)$$
$$b(t) = c(t) = \kappa + \kappa(1-\kappa)\,a_0, \qquad (28)$$
$$d(t) = 1 - a_0 + 2(1-\kappa) + (1-\kappa)^2 a_0, \qquad (29)$$
$$z(t) = \kappa, \qquad (30)$$
where κ = γ². Here the parameter a₀ determines the initial condition.
The two-qubit entanglement is
$$C(t) = \frac{2}{3}\max\{0,\; \kappa f(t)\}, \qquad (31)$$
where f(t) is a function of κ and a₀.
In the polarization vector representation, the dynamics defined by Eq. 22 can be given explicitly by the transfer matrix T(t) and the translation vector →m(t):
$$T(t) = \begin{pmatrix} \kappa & 0 & 0 & 0 & 0 \\ 0 & \kappa & 0 & 0 & 0 \\ 0 & 0 & \kappa & 0 & 0 \\ 0 & 0 & 0 & \kappa & 0 \\ \kappa^2 - \kappa & 0 & 0 & \kappa^2 - \kappa & \kappa^2 \end{pmatrix}, \qquad (32)$$
and
$$\vec{m}(t) = \left(\kappa - 1;\; 0;\; 0;\; \kappa - 1;\; (\kappa - 1)^2\right). \qquad (33)$$
Here the coordinates are (n_IZ, n_XX, n_YY, n_ZI, n_ZZ). We note that Eqs. 12 and 13 are satisfied and this spontaneous emission model is Markovian and distance Markovian.
The non-zero components are
$$n_{IZ}(t) = n_{ZI}(t) = -1 + \frac{2}{3}(1 + a_0)\,\kappa, \qquad (34)$$
$$n_{XX}(t) = n_{YY}(t) = \frac{2}{3}\kappa, \qquad (35)$$
$$n_{ZZ}(t) = 1 - \frac{4}{3}(1 + a_0)\,\kappa + \frac{4}{3} a_0\,\kappa^2, \qquad (36)$$
and the limiting state is
$$\vec{n}_\infty = (-1,\; 0,\; 1), \qquad (37)$$
where the coordinates are (n_IZ, n_XX, n_ZZ).
This shows that the dynamical subspace d is a 3-dimensional section of S where the non-zero components are n_IZ, n_ZI, n_XX, n_YY, and n_ZZ, but with n_ZI = n_IZ and n_YY = n_XX, such that it can be visualized in three dimensions. Interestingly, the limiting state due to spontaneous emission is on the boundary of the set of separable states, and the purity of the state increases with time.
Although we have so far fully solved the system dynamics, for the purpose of describing d, it is enough to know which operators form the basis of d and that the Kraus operators preserve the equalities n_ZI = n_IZ and n_YY = n_XX from the initial conditions. d is then determined from the positivity condition of the density matrix.
The positivity condition for the density matrix is given by
$$a_2 \ge 0 \;\Rightarrow\; 2 n_{IZ}^2 + 2 n_{XX}^2 + n_{ZZ}^2 \le 3,$$
$$a_3 \ge 0 \;\Rightarrow\; \text{(a cubic condition on } n_{IZ}, n_{XX}, n_{ZZ}\text{)},$$
$$a_4 \ge 0 \;\Rightarrow\; AB \ge 0,$$
where A and B are polynomial functions of the polarization components. At t = 0, κ = 1. The positivity constraint gives rise to the range of the possible initial conditions, parametrized by a₀.
The square roots of the eigenvalues of ρρ̃ are
$$\lambda_a = \lambda_b = \frac{1}{4}\sqrt{A}, \qquad (38)$$
$$\lambda_c = \frac{1}{4}\left|1 - n_{ZZ} + 2 n_{XX}\right|, \qquad (39)$$
$$\lambda_d = \frac{1}{4}\left|1 - n_{ZZ} - 2 n_{XX}\right|. \qquad (40)$$
The ordering of the λ's can change during the course of an evolution. When λ_a is the largest one finds C = 0 (since λ_a = λ_b), which is helpful in determining S_sep ∩ d.
d is a tetrahedron, as seen in Fig. 5. S_sep ∩ d is a hexahedron that shares some external areas with d. A 2-dimensional section of d is shown in Fig. 6. On the other hand, a different section gives an upside-down triangle made of separable states, as shown in Fig. 7.
Yu and Eberly showed that a sudden transition of the entanglement evolution from category 1 to category 3 is possible as one tunes the physical parameter a₀. This phenomenon can be easily understood in our formalism, as seen in Fig. 8. The curvature of the state trajectories varies as the initial state changes. Thus there is a continuous range of initial states, parametrized by a₀, whose trajectories enter S_sep within a finite amount of time, and also a continuous range of initial states whose trajectories never enter Int(S_sep). Note Fig. 8 is a schematic drawing since the true state trajectories are truly three dimensional.
To be more quantitative, the transition between the category 1 and category 3 behaviors in the YE model could be determined by examining the angle between ∂S_sep and the tangent vector of the state trajectory in the long time limit. We denote the limiting tangent vector by →n_T(∞); it is given by
$$\vec{n}_T(\infty) = \left(1 + a_0,\; 1,\; 4 a_0 - 2\right). \qquad (41)$$
The relevant piece of ∂S_sep in the YE model is a plane passing through →n∞. It is parametrized by
$$\hat{m} \cdot \left(\vec{n} - \vec{n}_\infty\right) = 0, \qquad (42)$$
where m̂ is the unit normal of the plane pointing into the separable region. The sign of m̂ · →n_T(∞) tells us whether the state trajectory approaches →n∞ from the separable region or the entangled region. Since →n_T(∞) depends on a₀, both the ESD and HL behaviors are possible.
The condition
$$\hat{m} \cdot \vec{n}_T(\infty) = 0 \qquad (43)$$
gives rise to the critical trajectory, which approaches →n∞ along the boundary plane. When m̂ · →n_T(∞) < 0, the state trajectory approaches →n∞ from the entangled region and we get entanglement evolutions in category 1. These trajectories are represented by the brown curves in Fig. 8. On the other hand, when m̂ · →n_T(∞) > 0, the state trajectory approaches →n∞ from the separable region and we get entanglement evolutions in category 3. These trajectories are represented by the black curves in Fig. 8.
The key point about the YE model is that the limiting state →n∞ lies on ∂S_sep: it is on the boundary of the entangled and separable regions. That is why entanglement evolutions in both category 1 and category 3 are possible.
### III.5 Model ZJ
Here we present a physically motivated dynamical subspace where the dynamics satisfies the following conditions: 1) the two qubits are not interacting; 2) the noises on the qubits are not correlated; 3) the effect of dephasing and relaxation can be separated. This model shows all four categories of entanglement evolution.
For this model the two-qubit dynamics can be decomposed into single-qubit dynamics (26). The extended two-qubit transfer matrix is the tensor product of the extended single-qubit transfer matrices, where
$$\underline{R}(t) = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & \zeta(t)\cos B_0 t & \zeta(t)\sin B_0 t & 0 \\ 0 & -\zeta(t)\sin B_0 t & \zeta(t)\cos B_0 t & 0 \\ 0 & 0 & 0 & e^{-\Gamma_1 t} \end{pmatrix}$$
is the extended transfer matrix of an individual qubit. The top left entry describes the dynamics of the identity component and is there only for notational convenience. Here ζ(t) describes the dephasing process, Γ₁ is the longitudinal relaxation rate and B₀ is a static field in the z direction that causes Larmor precession. |ζ(t)| ≤ 1, with strict inequality at long times if dephasing occurs. Note this dynamical description of decoherence is completely general as long as one can separate dephasing and relaxation channels.
The dynamical subspace in this model is a specially parametrized 3-dimensional section of the full two-qubit state space S. Only five of the components of →n are non-zero, and there are two further constraints among them. We thus use three independent parameters and the state space can be visualized in three dimensions. This dynamical subspace has been previously considered in Ref. (56) and we will call it D_ZJ.
The fact that D_ZJ is only 3-dimensional relies on a judicious choice of the initial states. In Ref. (26), more general initial states are considered, such that D_ZJ is expanded into a higher-dimensional dynamical subspace.
Note that this form of →n is conserved in D_ZJ. The positivity of the density matrix requires
$$a_2 \ge 0 \;\Rightarrow\; 2R^2 + n_{ZZ}^2 \le 3, \qquad (44)$$
$$a_3 \ge 0 \;\Rightarrow\; n_{ZZ} \le 1 - 2R^2 \;\text{ and }\; n_{ZZ} \ge -1, \qquad (45)$$
$$a_4 \ge 0 \;\Rightarrow\; 2R + n_{ZZ} \le 1, \qquad (46)$$
where R² = n_XX² + n_XY². The concurrence is given by
$$C = \max\left\{0,\; R - \frac{1 + n_{ZZ}}{2}\right\}. \qquad (47)$$
Separable states form a spindle shape on top and entangled states form a torus-like shape on the bottom, with triangular cross sections. A section of this geometry is shown in Fig. 6.
We have thus fully described the entanglement topology of D_ZJ, and now we construct entanglement evolutions that induce it. A model similar to that of Eq. 14 that satisfies the three conditions is
$$H(t) = -\frac{1}{2}\left[s(t)\,\vec{g} \cdot \vec{\sigma}^B + B_0\,\sigma_z^A\right]. \qquad (48)$$
Note the RTN has both dephasing (from the component of →g along z) and relaxation (from the transverse components) effects in this case. Situations at an intermediate qubit working point (with the presence of both dephasing and relaxation noise) have been considered in Ref. (56). Here we choose the initial state to be the generalized Werner state (55)
$$w_r^\Phi = r\,|\Phi\rangle\langle\Phi| + \frac{1-r}{4}\,I_4, \qquad (49)$$
where
$$|\Phi\rangle = \frac{1}{\sqrt{2}}\left(|00\rangle + e^{i\phi}|11\rangle\right) \qquad (50)$$
is a Bell state.
The state trajectory is then given by
$$n_{XX}(t) = r\cos(2B_0 t)\,\zeta(t), \qquad (51)$$
$$n_{XY}(t) = -r\sin(2B_0 t)\,\zeta(t), \qquad (52)$$
$$n_{ZZ}(t) = r\,e^{-\Gamma_1 t}. \qquad (53)$$
Similarly, if the Werner state derived from the Bell state
$$|\Psi\rangle = \frac{1}{\sqrt{2}}\left(|01\rangle + e^{i\phi}|10\rangle\right) \qquad (54)$$
is used as initial state, the state trajectory is
$$n_{XX}(t) = r\,\zeta_T(t), \qquad (55)$$
$$n_{ZZ}(t) = -r\,e^{-\Gamma_1 t}. \qquad (56)$$
In both cases, the evolution of the concurrence is
$$C(t) = \max\left\{0,\; r\,|\zeta(t)| - \frac{1 - r\,e^{-\Gamma_1 t}}{2}\right\}.$$
https://www.khanacademy.org/math/statistics-probability/probability-library/basic-set-ops/e/basic_set_notation
# Basic set notation
### Problem
Let X and Y be the following sets:
X = {1, 6, 2, 3, 14}
Y = {2, 9, 13, 1}
Which of the following is the set $X \cup Y$?
Please choose from one of the following options.
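For reference, the union collects every element appearing in either set, listed once:
$X \cup Y = \{1, 2, 3, 6, 9, 13, 14\}$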
http://www.codeproject.com/Questions/406924/how-to-read-a-large-text-file-in-VB-NET
Hi .. I'm working on a project that reads a text file. I open the file using READBLOCK. I get an error when opening big files. No issues when opening a 10MB file but got an error when opening a 1GB file.
Here's the part of the code that opens/reads the file:
Const MAX_BYTES As Integer = 1048576 * 5 '= 5 MB
Dim currentPos As Integer = 0
Dim strFileName As String
Dim strm As System.IO.Stream
Dim TextLine As String
Dim FileDetail As IO.FileInfo

OpenFileDialogMain.ShowDialog()
strm = OpenFileDialogMain.OpenFile()
strFileName = OpenFileDialogMain.FileName.ToString()

' Read at most MAX_BYTES characters in one block
Using reader As New IO.StreamReader(strm)
    Dim buffer(MAX_BYTES - 1) As Char
    Dim charsRead As Integer = reader.ReadBlock(buffer, 0, MAX_BYTES)
    TextLine = New String(buffer, 0, charsRead)
End Using

txtEditor.Text += TextLine
txtEditor.Text = txtEditor.Text & vbCrLf & Now
Does anyone have any suggestion on how to open a large file?
Thanks!
## Solution 1
If you want to read it all into memory, a simple File.ReadAllText() will do just fine. If your file is indeed very large, then you can use the StreamReader class; see the approach below. Loading a gigabyte-scale file into memory at once is sometimes inevitable, but should mostly be avoided.
Dim file As New IO.FileInfo("path\to\file")
Using reader As IO.StreamReader = file.OpenText()
    While Not reader.EndOfStream
        Dim nextLine As String = reader.ReadLine()
        ProcessLine(nextLine)
    End While
End Using
Thanks for the quick response. I tried the "ReadAllText" and "ReadLine" before and I get the same error when opening a large file.
sdcruise - 20-Jun-12 0:59am
## Solution 2
Got a problem with ProcessLine, it says 'ProcessLine' is not declared. It may be inaccessible due to its protection level?
In response to your question about ProcessLine: ProcessLine seems to be a custom Sub here. Crappy example to point out the obvious:

Dim TotalChars As Integer = 0

Private Sub ProcessLine(ByVal LineOfText As String)
    TotalChars = TotalChars + LineOfText.Length
End Sub

In this case ProcessLine would help you count the total amount of chars in your file. Hope that was helpful.
OneInNineMillion - 17-Jan-13 3:03am
https://www.physicsforums.com/threads/astromony-question.84006/
# Astromony question
asdf1
There's an astromony question that I'm stuck on~
" A distant galaxy in the constellation Hydra is receding from the Earth at 6.12*10^7 m/s. By how much is a green spectral line of wavelength 500nm (1nm=10^(-9) )emitted by this galaxy shifted toward the red end of the spectrum?
neurocomp2003
what knowledge do you have?
Homework Helper
asdf1 said:
There's an astromony question that I'm stuck on~
" A distant galaxy in the constellation Hydra is receding from the Earth at 6.12*10^7 m/s. By how much is a green spectral line of wavelength 500nm (1nm=10^(-9) )emitted by this galaxy shifted toward the red end of the spectrum?
Just use the relativistic doppler effect:
$$\nu_{obs}/\nu_{source} = \sqrt{\frac{1-\beta}{1+\beta}}$$
where $\beta = v/c = .204$ and $\nu_{source} = c/\lambda_{source}$
AM
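Working the numbers through (a step the thread leaves implicit): for a receding source the observed wavelength is stretched, so
$$\lambda_{obs} = \lambda_{source}\sqrt{\frac{1+\beta}{1-\beta}} = 500\ \text{nm} \times \sqrt{\frac{1.204}{0.796}} \approx 615\ \text{nm},$$
i.e. the green line is shifted toward the red by roughly 115 nm.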
asdf1
How did you think of to use the Doppler effect?
Nylex
Probably because the question is about the Doppler effect (well, it's about red shift, which is to do with the Doppler effect). Also, you mean "astronomy", not "astromony".
asdf1
I see~
Thanks for correcting my spelling mistake!
https://hackage.haskell.org/package/HSoM-1.0.0/docs/System-Random-Distributions.html
HSoM-1.0.0: Library for computer music education
System.Random.Distributions
Synopsis
# Random Distributions
linear :: (RandomGen g, Floating a, Random a, Ord a) => g -> (a, g)
Given a random number generator, generates a linearly distributed random variable between 0 and 1. Returns the random value together with a new random number generator. The probability density function is given by
f(x) = 2(1-x) 0 <= x <= 1
= 0 otherwise
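One standard way such a sampler can be implemented (a textbook inverse-transform derivation, not necessarily how HSoM does it): the CDF is F(x) = 2x - x^2, so solving F(x) = u for a uniform u in [0,1] gives
x = 1 - sqrt(1 - u)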
exponential :: (RandomGen g, Floating a, Random a, Eq a) => a -> g -> (a, g)
Arguments: the spread parameter lambda (horizontal spread of the function), and a random number generator.
Generates an exponentially distributed random variable given a spread parameter lambda. A larger spread increases the probability of generating a small number. The mean of the distribution is 1/lambda. The range of the generated number is [0, inf), although the chance of getting a very large number is very small.
The probability density function is given by
f(x) = lambda e^(-lambda * x)
bilExp :: (Floating a, Ord a, Random a, RandomGen g) => a -> g -> (a, g)
Arguments: the spread parameter lambda (horizontal spread of the function), and a random number generator.
Generates a random number with a bilateral exponential distribution. Similar to exponential, but the mean of the distribution is 0 and 50% of the results fall between (-1/lambda, 1/lambda).
gaussian :: (Floating a, Random a, RandomGen g) => a -> a -> g -> (a, g)
Arguments: the standard deviation, the mean, and a random number generator.
Generates a random number with a Gaussian distribution.
cauchy :: (Floating a, Random a, RandomGen g, Eq a) => a -> g -> (a, g)
Arguments: alpha (the density parameter), and a random number generator.
Generates a Cauchy-distributed random variable. The distribution is symmetric about 0.
poisson :: (Num t, Ord a, Floating a, RandomGen g, Random a) => a -> g -> (t, g)
Generates a Poisson-distributed random variable. The given parameter lambda is the mean of the distribution. If lambda is an integer, the probability of the result j = lambda-1 is as great as that of j = lambda. The Poisson distribution is discrete. The returned value will be a non-negative integer.
frequency :: (Floating w, Ord w, Random w, RandomGen g) => [(w, a)] -> g -> (a, g)
Given a list of weight-value pairs, generates a value randomly picked from the list, weighting the probability of choosing each value by the weight given.
# Utility Functions
rands :: (RandomGen g, Random a) => (g -> (a, g)) -> g -> [a]
Given a function generating a random number variable and a random number generator, produces an infinite list of random values generated from the given function.
https://scoop.eduncle.com/q-13-flr-z-z-where-and-denote-the-greatest-integer-function-and-the-fractional-part-function-respectively
IIT JAM
September 26, 2021 7:45 pm 30 pts
Q.13 f(x) = [x] - {x}, where [.] and {.} denote the greatest integer function and the fractional part function respectively, is (1) continuous at x = 1, -1 (2) continuous at x = -1 but not at x = 1 (3) continuous at x = 1 but not at x = -1 (4) discontinuous at x = 1 and x = -1
Navdeep goyal: check attachment
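Reading the garbled statement as f(x) = [x] - {x}, a short worked check (ours, supplementing the attached solution):

$$f(x) = [x] - \{x\} = [x] - (x - [x]) = 2[x] - x,$$

so f is continuous exactly where [x] is. At any integer n the left limit is 2(n-1) - n = n - 2 while f(n) = n, a jump of 2. Hence f is discontinuous at both x = 1 and x = -1, i.e., option (4).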
|
2021-10-27 20:13:12
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8004208207130432, "perplexity": 3900.53579343864}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323588242.22/warc/CC-MAIN-20211027181907-20211027211907-00124.warc.gz"}
|
https://www.groundai.com/project/can-magnetar-spin-down-power-extended-emission-in-some-short-grbs/
|
Magnetar spin-down in extended emission SGRBs
# Can magnetar spin-down power extended emission in some short GRBs?
## Abstract
Extended emission gamma-ray bursts are a subset of the ‘short’ class of burst which exhibit an early-time rebrightening of gamma emission in their light curves. This extended emission arises just after the initial emission spike, and can persist for up to hundreds of seconds after trigger. When their light curves are overlaid, our sample of 14 extended emission bursts shows a remarkable uniformity in its evolution, strongly suggesting a common central engine powering the emission. One potential central engine capable of this is a highly magnetized, rapidly rotating neutron star, known as a magnetar. Magnetars can be formed by two compact objects coalescing, a scenario which is one of the leading progenitor models for short bursts in general. Assuming a magnetar is formed, we obtain values for the magnetic field and late-time spin period for 9 of the extended emission bursts by fitting the magnetic dipole spin-down model of Zhang & Mészáros (2001). Assuming the magnetic field is constant, and that the observed energy release during extended emission is entirely due to the spin-down of this magnetar, we then derive the spin period at birth for the sample. We find all birth spin periods are in good agreement with those predicted for a newly born magnetar.
###### keywords:
general – gamma rays: bursts.
## 1 Introduction
Gamma-Ray Bursts (GRBs) are the brightest phenomena in the Universe, releasing as much electromagnetic energy in tens of seconds as the entire Milky Way galaxy does in a few years (Mészáros, 2006). They typically reach energies of around 5 x 10 ergs when beaming is accounted for (Frail et al., 2001). Their temporal distribution shows a bimodality (Kouveliotou et al., 1993) which separates them into ‘long’ or ‘short’ GRBs (LGRBs and SGRBs respectively) depending on a parameter known as T90: the time in which 90% of the gamma-ray fluence is detected. Nominally, long bursts have T90 > 2 seconds and short ones have T90 < 2 seconds, but in reality the distinction is far more blurred for a significant number of cases (eg Gehrels et al. 2006; Page et al. 2006; Piran et al. 2012). Both classes have been observed to be distributed isotropically across the sky (Briggs et al., 1996). The most popular theory for SGRBs is that they are caused by mergers of compact objects, such as double neutron star (NS–NS) binaries, NS – black hole (BH) mergers, white dwarf (WD) – NS mergers, WD–BH mergers or possibly even WD–WD mergers (Paczýnski, 1986; Fryer et al., 1999; Rosswog, Ramirez-Ruiz & Davies, 2003; Belczynski et al., 2006; Chapman et al., 2007). LGRBs are perhaps the better understood of the two classes, and are believed to be the product of massive star collapse (Woosley, 1993; Paczýnski, 1998; MacFadyen & Woosley, 1999) since, in cases where it would be observable, they are always accompanied by type Ib/c supernovae (Galama et al., 1998; Stanek et al., 2003).
SGRBs are in fact not necessarily short. Norris & Bonnell (2006) found that 1/3 of their sample of SGRBs exhibit extended emission (EE) in their light curves, and an even greater fraction were found to be extended in the BATSE catalogue. EE is the term given to a rebrightening in the light curve after the initial emission spike. It happens at early times, usually beginning at 10 s, and typically has lower peak flux but much longer duration than the initial spike, resulting in comparable total fluences between the two (Perley et al., 2009). Those bursts that were believed to exhibit EE were catalogued by Norris, Gehrels & Scargle (2010). These bursts present a challenge to the standard merger scenario, since they require an injection of energy arising seconds after the trigger, then naturally switching off at later times, around 100 s after trigger in the rest frame.
One central engine with the potential to provide such an energy supply is a rapidly spinning, highly magnetized neutron star, known as a magnetar (Metzger, Quataert & Thompson, 2008; Bucciantini et al., 2012). Whether by collapse or merger, this magnetar may be formed with sufficient rotational energy to avoid collapsing into a BH (Duncan & Thompson, 1992; Usov, 1992; Zhang & Mészáros, 2001; Dessart et al., 2008). GRBs require a rapidly rotating central engine with a strong, large scale magnetic field of around G or higher (McKinney, 2006), and a magnetar with such a field and an initial spin period of order a millisecond has enough rotational energy to power a GRB. Magnetar spin-down has often been discussed as a source of GRBs, both long (Zhang & Mészáros, 2001; Lyons et al., 2010; Dall’Osso et al., 2011; Metzger et al., 2011; Bernardini et al., 2012) and short (Fan & Xu, 2006; Rowlinson et al., 2010, 2013), in the literature, and has also been suggested as the origin of EE (Metzger, Quataert & Thompson, 2008; Bucciantini et al., 2012) along with a variety of other mechanisms. Alternatives include a two-jet solution (Barkov & Pozanenko, 2011), fallback accretion (Rosswog, 2007), r-process heating of the accretion disc (Metzger et al., 2010) and magnetic reconnection and turbulence (Zhang & Yan, 2011). A major motivation for a common central engine can be seen in Figure 1; when all bursts are overlaid, a striking similarity can be seen in the evolution of the bursts, both temporally and energetically. Conformity like this is highly suggestive of a shared origin.
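As a rough order-of-magnitude check of that energy budget (assuming the canonical moment of inertia I of about 10^45 g cm^2; the numbers here are ours, not from the paper):

$$E_{\rm rot} = \tfrac{1}{2} I \Omega^2 = \frac{2\pi^2 I}{P^2} \approx 2 \times 10^{52} \left(\frac{P}{1\ {\rm ms}}\right)^{-2}\ {\rm erg},$$

so a millisecond-period magnetar carries more than enough rotational energy for a typical burst.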
This paper is structured as follows: Section 2 contains the sample selection and data reduction process while section 3 details the motivation for finding a common central engine for EE GRBs. The magnetar model is introduced in section 4, the results are discussed in section 5, and the main conclusions are summarised in section 6.
## 2 Sample Selection and Data Reduction
The data used here were collected by the Swift satellite (Gehrels et al., 2004). Three instruments are carried on board: The Burst Alert Telescope (BAT; Barthelmy et al. 2005a), which has an energy range of 15 – 150 keV, the X-Ray Telescope (XRT; Burrows et al. 2005), energy range 0.3 – 10 keV and the Ultra-Violet and Optical Telescope (UVOT; Roming et al. 2005).
Raw BAT data for each burst were collected from the UK Swift Science Data Centre (UKSSDC) archives and processed using the Swift BAT pipeline tool batgrbproduct. For all EE GRBs, we analysed the BAT data by creating light curves with a variety of binnings in signal-to-noise ratio (SNR) and time, looking for evidence of EE at the 3σ level where we consistently saw EE over more than 30 s. Using this method, a sample of 14 GRBs with EE was collected, including 12 which were identified as extended by Norris, Gehrels & Scargle (2010). This sample is shown in Table 1.
The XRT data were downloaded from the UKSSDC spectrum repository (Evans et al., 2009), and were corrected for absorption using a ratio of (counts to flux unabsorbed)/(counts to flux observed). Details of the data reduction process can be found in Evans et al. (2007, 2009). Standard heasoft tools were used during data reduction.
To plot the BAT data alongside the XRT, the BAT light curves were extrapolated from their 15 – 150 keV bandpass down to the XRT bandpass of 0.3 – 10 keV using a correction factor comprised of the net count rate in the 15 – 150 keV range and the extrapolated flux in the 0.3 – 10 keV range, found using a power law fit to the pre-slew BAT spectrum in Xspec (Arnaud, 1996). These combined light curves were made by taking the 4 ms BAT light curves from the batgrbproduct pipeline and binning them with an SNR of 4, the one exception being GRB 080123, which was done with an SNR of 3. The light curves were then k-corrected, using the method described in Bloom, Frail & Sari (2001), to give bolometric (1 – 10000 keV) rest-frame light curves. The redshifts used during k-correction are displayed in Table 1. Where no constraints on redshift were available, the sample average was used. The value of z quoted for GRB 051227 is an upper limit (D’Avanzo et al., 2009).
## 3 Evidence for a common central engine
Figure 1 shows the EE sample from Table 1 plotted together. The left panel shows bursts with known redshift, whilst the right panel is the rest of the sample using the mean redshift value from bursts where z is known. A striking similarity can be seen between the evolution of all EE bursts, particularly the ones where z is known. The luminosities of the individual plateaus appear to be highly comparable between bursts, and the timescales on which these plateaus turn over also show a great deal of regularity. Such uniformity is highly suggestive of a common central engine, and hints at a unique difference between SGRBs and EE GRBs, but one that is common amongst the EE sample. One possible explanation for this uniformity is the correlation noted by Bucciantini et al. (2012) between magnetar outflow energy and jet opening angle, resulting in relatively constant isotropic power for a given ejecta mass. GRB 051227 has been plotted in the right panel of Figure 1, since it does not have a firm redshift. Using the mean redshift gives its EE tail (the first plateau) a slightly higher luminosity than those in the left panel. D’Avanzo et al. (2009) give a tentative lower limit on its redshift, and claim that the colour observations of the possible host galaxy are consistent with those of an irregular galaxy at that distance. Using that value would place GRB 051227 at around the same luminosity level as the known-redshift bursts in Figure 1. We use it for this burst in the following analysis to place it at an extreme luminosity.
## 4 The Magnetar Model
### 4.1 Magnetic dipole spin-down
The magnetic dipole spin-down model is detailed in Zhang & Mészáros (2001), and has been used on both SGRBs (eg Fan & Xu 2006; Rowlinson et al. 2013) and LGRBs (eg Lyons et al. 2010; Dall’Osso et al. 2011; Bernardini et al. 2012). The model is fitted to the late-time plateau, seen emerging from beneath the fading EE tail in Figure 2. This allows the magnetic field and spin period of the central magnetar to be derived, although the calculated spin period must then be corrected for spin-down during EE to get the true birth period (see section 4.2).
The basic outline is that the central engine, in this case a magnetar, emits both an initial impulse energy as well as a continuous injection luminosity which varies as a power-law in the emission time. The initial impulse energy represents the prompt emission of the burst (excluding EE), and is a short, violent event which transitions into a power-law decay at very early times. The continuous injection luminosity is the product of the magnetar spinning down, and begins as soon as the magnetar is formed. Although it is present throughout, it is at a much lower level than the initial impulse, and so is initially hidden beneath the more luminous component. At a critical time the prompt emission has faded enough that the injection luminosity begins to dominate the light curve, causing it to flatten. This effect can be seen in the red datapoints in Figure 2. The plateau then re-steepens after the characteristic timescale for dipole spin-down, T_em. At this point, the magnetar reveals itself as either unstable, collapsing into a BH with a sudden drop in the light curve, or stable, continuing to decay with a comparatively shallow power-law. For this plateau to appear, T_em must be greater than the critical time, otherwise the continuous injection luminosity is spent before the prompt emission has faded sufficiently for it to be observable.
To derive the parameters that control the injection luminosity plateau, the dimensions of the plateau itself must be ascertained by fitting. The area of interest for fitting is the point at which the continuous injection (dipole spin-down) luminosity emerges from beneath the initial impulse energy and the fading EE tail, shown by the red datapoints in Figure 2. Obtaining fits that describe the luminosity and duration of this plateau allows the magnetic field and spin period of the sample to be found. The key equations for the model are:
$$T_{\rm em,3} = 2.05\,\big(I_{45}\, B_{p,15}^{-2}\, P_{0,-3}^{2}\, R_{6}^{-6}\big) \qquad (1)$$
$$L_{0,49} \sim \big(B_{p,15}^{2}\, P_{0,-3}^{-4}\, R_{6}^{6}\big) \qquad (2)$$
$$B_{p,15}^{2} = 4.2025\, I_{45}^{2}\, R_{6}^{-6}\, L_{0,49}^{-1}\, T_{\rm em,3}^{-2} \qquad (3)$$
$$P_{0,-3}^{2} = 2.05\, I_{45}\, L_{0,49}^{-1}\, T_{\rm em,3}^{-1} \qquad (4)$$
where T_em,3 is the characteristic timescale for dipole spin-down in units of 10^3 s, L_0,49 is the plateau luminosity in units of 10^49 erg s^-1, I_45 is the moment of inertia in units of 10^45 g cm^2, B_p,15 is the magnetic field strength at the poles in units of 10^15 G, R_6 is the radius of the neutron star in units of 10^6 cm and P_0,-3 is the spin period of the magnetar in milliseconds. The mass and radius of the magnetar were fixed at canonical neutron-star values, giving a moment of inertia I of order 10^45 g cm^2. Equations 1 – 4 are taken from Zhang & Mészáros (2001) and were combined into a qdp COmponent Definition (COD) file for fitting to data by Rowlinson et al. (2013) during their work. The same COD file was used to obtain the fits in the current work. It has been assumed that emission is both isotropic and 100% efficient, since little is known about the precise emission mechanism and beaming angle. Lyons et al. (2010) discussed the effects of beaming in the context of the magnetar model, and showed that a narrower opening angle results in a higher B_p and P_0 (slower spin). This is illustrated by their Figure 4.
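To illustrate how equations (3) and (4) invert a fitted plateau into physical parameters, take hypothetical fitted values L_0,49 = 1 and T_em,3 = 1 (i.e., a 10^49 erg s^-1 plateau lasting 10^3 s) with I_45 = R_6 = 1:

$$B_{p,15}^{2} = 4.2025 \implies B_p \approx 2.1 \times 10^{15}\ {\rm G}, \qquad P_{0,-3}^{2} = 2.05 \implies P_0 \approx 1.4\ {\rm ms}.$$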
The magnetic dipole spin-down model was fitted to the late time data of the rest-frame light curves of 9 GRBs with EE. Of the original sample of 14 bursts, 5 did not contain sufficient datapoints for accurate model fitting and were dropped from the sample. GRB 050911, GRB 090715A and GRB 090916 do not have XRT data available, and the XRT data for GRB 090531B contains only a single point and an upper limit. GRB 080503 either has an incredibly weak dipole plateau or none at all (Perley et al., 2009), so values for magnetic field and spin period were unobtainable. Table 2 contains the results of the fitting to the 9 remaining GRBs.
Figure 2 shows the individual fits for each of the 9 bursts, along with the estimated EE region, denoted by the vertical dashed lines. The start of the EE region is taken as the first upturn in the light curve after the initial prompt emission spike. EE is said to have ceased at the time of the final power-law decay before the onset of the magnetic dipole spin-down plateau. Using these definitions, we were able to reasonably recreate the fluence ratios of Perley et al. (2009) and the EE duration times of Norris, Gehrels & Scargle (2010). For each burst, a solution was found in which the data were accurately traced by the model, and the values returned for B_p and P_0 lie unambiguously in allowed parameter space.
P_0 is referred to as the initial spin period of the magnetar by Rowlinson et al. (2013). Whilst this is true for short bursts where spin-down only occurs due to EM dipole radiation, the story is more complicated for EE bursts. Since we are assuming the extraction of rotational energy from the spin of the magnetar is the mechanism behind the EE tail, the spin period during this time must be variable. In fact, during this time the magnetar may be spun up by accretion on to the surface, or down by a variety of mechanisms in addition to the constant dipole spin-down that exists in the pure short GRB case. Thus, for these EE bursts, P_0 has been taken as the spin period after EE. We return to this issue in section 4.2.
The derived values of B_p and P_0 are plotted against each other in Figure 3, where the 3 vertical and 2 horizontal lines denote allowed parameter space for the birth of a magnetar powering a GRB. Our lower limit on spin period is the spin break-up period for a NS of the assumed mass and radius (Lattimer & Prakash, 2004). Also plotted is the limit for a NS of different mass with the same radius, shown by the dashed line. These limits may vary with uncertainties in the equation of state of the NS. Usov (1992) calculated the minimum allowed spin frequency at birth if the progenitor is the accretion-induced collapse of a WD; based on conservation of angular momentum, this sets an upper spin period limit for this type of progenitor. The minimum magnetic field required to produce a GRB observable in the gamma band (Thompson, 2007) sets the lower boundary for B_p. The initial impulse energy of the burst is accounted for by a power-law decay after the prompt emission. In practice, this power-law simply models the light curves in the region between the EE tail and the dipole spin-down plateau. It can be seen from the results and the fits in Figure 2 that all magnetars in this sample are stable.
### 4.2 The extended emission tail
Once a fit has been found for the late-time data of a specific burst, the magnetic field strength, B_p, and the spin period after EE, P_0, become known quantities. The energy release of the EE tail can be calculated fairly simply by estimating the points on the light curve where EE begins and ends and integrating under the curve between these two points, i.e. $\Delta E = \int L\,\mathrm{d}t$ over the EE interval. This is done using linear interpolation between points, and the calculated EE energies are displayed in Table 3. Assuming a constant magnetic field, and that energy injection during the EE period is entirely from the spin-down emission of the magnetar, the spin period the magnetar possessed at birth, P_i, can be calculated using
$$\Delta E = 2\pi^2 I \left( P_i^{-2} - P_0^{-2} \right) \qquad (5)$$
where ΔE is the energy in the EE tail, I is the moment of inertia, P_0 is the spin period of the magnetar after EE and P_i is the birth spin period. Table 3 contains the results from this process, including the time boundaries for EE, the energy found by integration, and the resultant value derived for P_i.
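Equation (5) is just the difference in rotational kinetic energy, E_rot = (1/2) I Ω^2 = 2π^2 I / P^2, evaluated at the two periods; inverting it gives the birth period directly. With illustrative values (ours, not the paper's) ΔE = 10^51 erg, I = 10^45 g cm^2 and P_0 = 10 ms:

$$P_i = \left( P_0^{-2} + \frac{\Delta E}{2\pi^2 I} \right)^{-1/2} \approx \left( 6.1 \times 10^{4}\ {\rm s^{-2}} \right)^{-1/2} \approx 4\ {\rm ms}.$$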
## 5 Discussion
The calculated spin periods for the birth of the magnetar lie comfortably within allowed parameter space (Figure 3) and are consistent with values predicted in the literature (Usov, 1992; Thompson, Chang & Quataert, 2004; Chapman et al., 2007). Bursts that do not have a set redshift may vary on the energy scale, with the error in z roughly corresponding to an order of magnitude on the luminosity scale. Rowlinson et al. (2010, 2013) discussed the effect of varying redshift on the results for B_p and P_0 in their work, and the argument is well illustrated by Figure 9(b) in Rowlinson et al. (2013). The general result is that a higher z corresponds to a lower rotation period (ie faster spin) and lower magnetic field. A good example is the change in results if the sample average redshift is used for GRB 051227: fitting the magnetic dipole spin-down model then gives a different magnetic field and spin period, and the light curve is far less luminous. The inferred EE energy release is correspondingly smaller, which translates into a longer P_i.
Figure 4 shows where the values found for B_p and P_0 place the EE bursts relative to the other SGRB and LGRB populations taken from Figure 9(a) of Rowlinson et al. (2013). It can be seen that the EE bursts show properties that most closely resemble the unstable magnetar population of SGRBs. Since both magnetic field and spin period are very similar between these two groups, the difference must lie in some other property, perhaps mass or formation mechanism. This key difference must prevent the EE sample bursts from collapsing into BHs, and enable, perhaps even cause, the release of EE energy. Rosswog (2007) showed that accretion discs and fallback accretion exhibit a much wider spread of behaviours when the compact objects involved in the merger have different masses. In their work, a NS – NS binary showed fairly homogeneous behaviour, whilst a NS – BH merger produced a much broader spread of fallback activity. A magnetar cannot be formed from a BH, but the same principle of unequal masses can be achieved by a system involving a NS – WD merger, or, with the discovery of increasingly massive neutron stars (Demorest et al., 2010), possibly a more exotic NS – NS system.
## 6 Conclusions
EE GRB light curves show a remarkable uniformity when plotted alongside each other, particularly amongst the bursts where redshift is known. This consistency in plateau luminosity and turnover times suggests EE GRBs share a common progenitor mechanism which distinguishes them from ordinary SGRBs.
We have fitted the magnetic dipole spin-down model of Zhang & Mészáros (2001) to the late-time data of the light curves of 9 GRBs under the assumption that the central engine is a highly magnetized neutron star. These fits have yielded values for the magnetic field strength and late-time spin period. We have also performed calculations of the energy contained in the EE region of bursts in this sample. Assuming this energy release is due to the spin-down of the central magnetar, and assuming a constant magnetic field, we infer the spin period these magnetars possessed at birth. The spin periods found are in good agreement with published values for the birth of a magnetar (eg Thompson, Chang & Quataert 2004; Chapman et al. 2007; Usov 1992). These results are consistent with the idea that EE GRBs could be powered by a spinning-down magnetar.
## 7 Acknowledgements
BG acknowledges funding from the Science and Technology Funding Council. The work makes use of data supplied by the UK Swift Science Data Centre at the University of Leicester and the Swift satellite. Swift, launched in November 2004, is a NASA mission in partnership with the Italian Space Agency and the UK Space Agency. Swift is managed by NASA Goddard. Penn State University controls science and flight operations from the Mission Operations Center in University Park, Pennsylvania. Los Alamos National Laboratory provides gamma-ray imaging analysis. We thank the anonymous referee for their swift response.
### References
1. Arnaud K.A., 1996, ASPC, 101, 17
2. Barbier L., et al., 2005, GCN Circ, 4397, 1
3. Barkov M.V., Pozanenko A.S., 2011, MNRAS, 417, 2161
4. Barthelmy S.D., et al., 2005a, SSRv, 120, 143
5. Belczynski K., Perna R., Bulik T., Kalogera V., Ivanova N., Lamb D.Q., 2006, ApJ, 648, 1110
6. Berger E., 2005, GCN Circ, 3966, 1
7. Berger E., 2007, ApJ, 670, 1254
8. Bernardini M.G., Margutti R., Zaninoni E., Chincarini G., 2012, MSAIS, 21, 226
9. Bloom J.S., Frail D.A., Sari R., 2001, ApJ, 121, 2879
10. Briggs M.S., et al., 1996, ApJ, 459, 40
11. Bucciantini N., Metzger B.D., Thompson T.A., Quataert E., 2012, MNRAS, 419, 1537
12. Burrows D.N., et al., 2005, SSRv, 120, 165
13. Cannizzo J.K., et al., 2006, GCN Circ, 5904, 1
14. Cenko S.B., Kasliwal M., Cameron P.B., Kulkarni S.R., Fox D.B., 2006, GCN Circ, 5946, 1
15. Covino S., et al., 2005, GCN Circ, 3665, 1
16. Chapman R., Levan A.J., Priddey R.S., Tanvir N.R., Wynn G.A., King A.R., Davies M.B., 2007, ASPC, 372, 415
17. Cummings J.R., et al., 2009, GCN Circ, 9461, 1
18. Dall’Osso S., Stratta G., Guetta D., Covino S., De Cesare G., Stella L., 2011, A&A, 526, A121
19. D’Avanzo P., Fiore F., Piranomonte S., Covino S., Tagliaferri G., Chincarini G., Stella L., 2007, GCN Circ, 7152, 1
20. D’Avanzo P., et al., 2009, A&A, 498, 711
21. D’Elia V., et al., 2011, GCN Circ, 12578, 1
22. Demorest P.B., Pennucci T., Ransom S.M., Roberts M.S.E., Hessels J.W.T., 2010, Nature, 467, 1081
23. Dessart L., Burrows A., Livne E., Ott C.D., 2008, ApJ, 673, 43
24. Duncan R.C., Thompson C., 1992, ApJ, 392, 9
25. Evans P.A., et al., 2007, A&A, 469, 379
26. Evans P.A., et al., 2009, MNRAS, 397, 1177
27. Fan Y.-Z., Xu D., 2006, MNRAS, 372, L19
28. Frail D., et al., 2001, ApJ, 562, L55
29. Fryer C.L., Woosley S.E., Herant M., Davies M.B., 1999, ApJ, 520, 650
30. Galama T.J., et al., 1998, Nature, 395, 670
31. Gehrels N., et al., 2004, ApJ, 611, 1005
32. Gehrels N., et al., 2006, Nature, 444, 1044
33. Graham J.F., et al., 2009, ApJ, 698, 1620
34. Kouveliotou C., Meegan C.A., Fishman G.J., Bhat N.P., Briggs M.S., Koshut T.M., Paciesas W.S., Pendleton G.N., 1993, ApJ, 413, L101
35. Lattimer J.M., Prakash M., 2004, Sci, 304, 536
36. Lyons N., O’Brien P.T., Zhang B., Willingale R., Troja E., Starling R.L.C., 2010, MNRAS, 402, 705
37. MacFadyen A.I., Woosley S.E., 1999, ApJ, 524, 262
38. Mao J., et al., 2008, GCN Circ, 7665, 1
39. McKinney J.C., 2006, MNRAS, 368, 1561
40. Mészáros P., 2006, RPPh, 69, 2259
41. Metzger B.D., Quataert E., Thompson T.A., 2008, MNRAS, 385, 1455
42. Metzger B.D., Arcones A., Quataert E., Martínez-Pinedo G., 2010, MNRAS, 402, 2771
43. Metzger B.D., Giannios D., Thompson T.A., Bucciantini N., Quataert E., 2011, MNRAS, 413, 2031
44. Norris J.P., Bonnell J.T., 2006, ApJ, 643, 266
45. Norris J.P., Gehrels N., Scargle J.D., 2010, ApJ, 717, 411
46. Paczýnski B., 1986, ApJ, 308, 43
47. Paczýnski B., 1998, ApJ, 494, 45
48. Page K.L., et al., 2005, GCN Circ, 3961, 1
49. Page K.L., et al., 2006, ApJ, 637, L13
50. Parsons A.M., et al., 2006, GCN Circ, 5252, 1
51. Perley D.A., et al., 2009, ApJ, 696, 1871
52. Piran T., Bromberg O., Nakar E., Sari R., 2012, arXiv, arXiv:1206.0700
53. Price P.A., Berger E., Fox D.B., 2006, GCN Circ, 5275, 1
54. Prochaska J.X., Bloom J.S., Chen H.-W., Hansen B., Kalirai J., Rich M., Richer H., 2005, GCN Circ, 3700, 1
55. Racusin J.L., Barthelmy S.D., Burrows D.N., Chester M.M., Gehrels N., Krimm H.A., Palmer D.M., Sakamoto T., 2007, GCN Circ, 6620, 1
56. Racusin J.L., et al., 2009, GCN Circ, 9666, 1
57. Roming P.W., et al., 2005, SSRv, 120, 95
58. Rosswog S., Ramirez-Ruiz E., Davies M.B., 2003, MNRAS, 345, 1077
59. Rosswog S., 2007, MNRAS, 376, L48
60. Rowlinson A., et al., 2010, MNRAS, 409, 531
61. Rowlinson A., O’Brien P.T., Metzger B.D., Tanvir N.R., Levan A.J., 2013, MNRAS, 608
62. Sakamoto T., et al., 2007, GCN Circ, 7147, 1
63. Schady P., et al., 2006, GCN Circ, 5699, 1
64. Stanek K.Z., et al., 2003, ApJ, 591, 17
65. Thompson T.A., Chang P., Quataert E., 2004, ApJ, 611, 380
66. Thompson T.A., 2007, RMxAC, 27, 80
67. Troja E., et al., 2009, GCN Circ, 9913, 1
68. Ukwatta T.N., et al., 2008, GCN Circ, 7203, 1
69. Usov V.V., 1992, Nature, 357, 472
70. Woosley S.E., 1993, ApJ, 405, 273
71. Zhang B., Mészáros P., 2001, ApJ, 552, 35
72. Zhang B., Yan H., 2011, ApJ, 726, 90
|
2019-05-20 10:22:09
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8518316149711609, "perplexity": 3314.553085912659}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232255943.0/warc/CC-MAIN-20190520101929-20190520123929-00206.warc.gz"}
|
https://papers.nips.cc/paper/2020/hash/d34a281acc62c6bec66425f0ad6dd645-Abstract.html
|
#### Authors
Justin Chen, Gregory Valiant, Paul Valiant
#### Abstract
We introduce a framework for statistical estimation that leverages knowledge of how samples are collected but makes no distributional assumptions on the data values. Specifically, we consider a population of elements [n] = {1, ..., n} with corresponding data values x_1, ..., x_n. We observe the values for a "sample" set A ⊂ [n] and wish to estimate some statistic of the values for a "target" set B ⊂ [n], where B could be the entire set. Crucially, we assume that the sets A and B are drawn according to some known distribution P over pairs of subsets of [n]. A given estimation algorithm is evaluated based on its "worst-case, expected error", where the expectation is with respect to the distribution P from which the sample A and target set B are drawn, and the worst-case is with respect to the data values x_1, ..., x_n. Within this framework, we give an efficient algorithm for estimating the target mean that returns a weighted combination of the sample values (where the weights are functions of the distribution P and the sample and target sets A, B) and show that the worst-case expected error achieved by this algorithm is at most a multiplicative pi/2 factor worse than the optimal of such algorithms. The algorithm and proof leverage a surprising connection to the Grothendieck problem. We also extend these results to the linear regression setting where each datapoint is not a scalar but a labeled vector (x_i, y_i). This framework, which makes no distributional assumptions on the data values but rather relies on knowledge of the data collection process via the distribution P, is a significant departure from the typical statistical estimation framework and introduces a uniform analysis for the many natural settings where membership in a sample may be correlated with data values, such as when individuals are recruited into a sample through their social networks as in "snowball/chain" sampling or when samples have chronological structure as in "selective prediction".
|
2021-03-07 07:54:33
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8088622093200684, "perplexity": 492.20819976691575}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178376206.84/warc/CC-MAIN-20210307074942-20210307104942-00311.warc.gz"}
|
https://www.iacr.org/news/legacy.php?p=detail&id=3551
|
International Association for Cryptologic Research
# IACR News Central
2014-04-07
15:05 [Event][New]
Submission: 1 June 2014
From October 29 to October 29
Location: San Francisco, USA
10:55 [Job][New]
The Reliable Communication Group at the University of Bergen invites applications for a 3-year researcher position in Boolean functions. The position is supposed to start in October 2014.
The candidate is expected to have a PhD degree in mathematics, computer science or a related discipline, and to have a strong publication record on discrete functions.
We are seeking an active researcher with expertise in Boolean functions, discrete mathematics and symmetric cryptography to work within the recently funded project “Discrete functions and their applications in cryptography and mathematics”. The prime objectives of this project are Boolean functions with optimal resistance to various cryptographic attacks (differential, linear, algebraic et al.) and their applications in discrete mathematics (such as commutative semifields, o-polynomials, difference sets, dual hyperovals, regular graphs, m-sequences, codes et al.).
2014-04-05
18:17 [Pub][ePrint]
The Achterbahn stream cipher was proposed as a candidate for the ECRYPT eSTREAM project and uses an 80-bit key. A linear distinguishing attack, which aims at distinguishing the keystream from a purely random keystream, is applied to Achterbahn. The attack is based on the linear sequential circuit approximation technique, which detects a statistical bias in the keystream. In order to build the distinguisher, linear approximations of both the non-linear feedback shift registers (NLFSRs) and the non-linear Boolean combining function R: F_2^8 → F_2 are used. The keystream generated by this algorithm admits a distinguisher with probability bias ε = 2^(-1809). Thus, to distinguish Achterbahn, we only need 1/ε^2 = (2^1809)^2 = 2^3618 keystream bits, and the time complexity is about 10/ε^2 = 2^3621.3, which is much higher than exhaustive key search at O(2^80).
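For reference, the figures above follow the standard rule of thumb that a distinguisher with bias ε needs on the order of 1/ε^2 samples:

$$N \approx \varepsilon^{-2} = \big(2^{-1809}\big)^{-2} = 2^{3618}, \qquad 10\,N = 10 \cdot 2^{3618} \approx 2^{3621.3} \gg 2^{80}.$$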
2014-04-03
15:32 [Job][New]
The Engineering Cryptographic Protocols Group at TU Darmstadt is looking for a doctoral student in Engineering Cryptographic Protocols for Cloud Computing.
Our group is involved in the two main research centers for IT security in Darmstadt, the Center for Advanced Security Research Darmstadt (CASED) and the European Center for Security and Privacy by Design (EC SPRIDE). We develop new methods and tools to optimize and automatically generate cryptographic protocols. See http://encrypto.de for details.
The candidate will work in the EU FP 7 research project PRACTICE (Privacy-Preserving Computation in the Cloud), http://www.practice-project.eu, with the goal of developing, optimizing, and automatically generating secure computation protocols for cloud computing.
The candidate is expected to have a completed Master (or equivalent) degree with excellent grades in IT security, computer science, electrical engineering, mathematics, or a closely related field. Solid knowledge in IT security, applied cryptography, and programming skills is required. Additional knowledge in cryptographic protocols, parallel computing, compiler construction, programming languages, and software engineering is a plus.
Review of applications starts immediately and continues until the position is filled.
Please consult the webpage given below for more details and how to apply.
2014-04-02
17:11 [Event][New]
Submission: 1 June 2014
From September 1 to September 2
Location: Istanbul, Turkey
17:10 [Event][New]
Submission: 20 May 2014
From September 10 to September 11
Location: Wroclaw, Poland
2014-04-01
09:17 [Pub][ePrint]
The ZigBee specification is an emerging wireless technology designed to address the specific needs of low-cost, low-power wireless sensor networks and is built upon the physical and medium access control layers defined in IEEE 802.15.4 standard for wireless personal area networks (WPANs). A key component for the wide-spread success and applicability of ZigBee-based networking solutions will be its ability to provide enhanced security mechanisms that can scale to hundreds of nodes. Currently, however, an area of concern is the ZigBee key management scheme, which uses a centralized approach that introduces well-known issues of limited scalability and a single point of vulnerability. Moreover, ZigBee key management uses a public key infrastructure. Due to these limitations, we suggest replacing ZigBee key management with a better candidate scheme that is decentralized, symmetric, and scalable while addressing security requirements. In this work, we investigate the feasibility of implementing Localized Encryption and Authentication Protocol (LEAP+), a distributed symmetric based key management. LEAP+ is designed to support multiple types of keys based on the message type that is being exchanged. In this paper, we first conduct a qualitative security analysis of LEAP+ and the current ZigBee key management scheme. Using the QualNet 5.0.2 simulator, we implement LEAP+ on the ZigBee platform for the very first time. Experimental results show that a distributed key management scheme such as LEAP+ provides improved security and offers good scalability.
09:17 [Pub][ePrint]
An isogeny graph is a graph whose vertices are principally polarized abelian varieties and whose edges are isogenies between these varieties. In his thesis, Kohel described the structure of isogeny graphs for elliptic curves and showed that one may compute the endomorphism ring of an elliptic curve defined over a finite field by using a depth-first search algorithm in the graph. In dimension 2, the structure of isogeny graphs is less understood and existing algorithms for computing endomorphism rings are very expensive. Our setting considers genus-2 Jacobians with complex multiplication, with the assumptions that the real multiplication subring is maximal and has class number one. We fully describe the isogeny graphs in that case. Over finite fields, we derive a depth-first search algorithm for computing endomorphism rings locally at prime numbers, if the real multiplication is maximal. To the best of our knowledge, this is the first DFS-based algorithm in genus 2.
09:17 [Pub][ePrint]
Cloud storage is very popular since it has many advantages, but there is a new threat to cloud storage that was not considered before. Self-updatable encryption, which updates a past ciphertext to a future ciphertext by using a public key, is a new cryptographic primitive introduced by Lee, Choi, Lee, Park, and Yung (Asiacrypt 2013) to defeat this threat such that an adversary who obtained a past-time private key can still decrypt a (previously unread) past-time ciphertext stored in cloud storage. Additionally, an SUE scheme can be combined with an attribute-based encryption (ABE) scheme to construct a powerful revocable-storage ABE (RS-ABE) scheme introduced by Sahai, Seyalioglu, and Waters (Crypto 2012) that provides the key revocation and ciphertext updating functionality for cloud storage.
In this paper, we propose an efficient SUE scheme and its extended schemes. First, we propose an SUE scheme with short public parameters in prime-order bilinear groups and prove its security under a $q$-type assumption. Next, we extend our SUE scheme to a time-interval SUE (TI-SUE) scheme that supports a time interval in ciphertexts. Our TI-SUE scheme has short public parameters and also secure under the $q$-type assumption. Finally, we propose the first large universe RS-ABE scheme with short public parameters in prime-order bilinear groups and prove its security in the selective revocation list model under a $q$-type assumption.
09:17 [Pub][ePrint]
We present a private information retrieval (PIR) scheme based on a somewhat homomorphic encryption (SWHE). In particular, we customize an NTRU-based SWHE scheme in order to evaluate a specific class of fixed depth circuits relevant for PIR implementation, thus achieving a more practical implementation. In practice, a SWHE that can evaluate a depth 5 circuit is sufficient to construct a PIR capable of retrieving data from a database containing 4 billion rows. We leverage this property in order to produce a more practical PIR scheme. Com- pared to previous results, our implementation achieves a significantly lower bandwidth cost (more than 1000 times smaller). The computational cost of our implementation is higher than previous proposals for databases containing a small number of bits in each row. However, this cost is amortized as database rows become wider.
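A plausible reading of the depth-5 figure (our interpretation, not stated in the abstract): selecting one row out of N requires AND-ing log_2 N bit-equality tests, and a balanced product tree evaluates this in multiplicative depth log_2 log_2 N, so

$$\text{depth } 5 \;\Longrightarrow\; \log_2 N = 2^5 = 32 \;\Longrightarrow\; N = 2^{32} \approx 4.3 \times 10^9 \ \text{rows}.$$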
09:17 [Pub][ePrint]
We present the homomorphic evaluation of the Prince block cipher. Our leveled implementation is based on a generalization of NTRU. We are motivated by the drastic bandwidth savings that may be achieved by scheme conversion. To unlock this advantage we turn to lightweight ciphers such as Prince. These ciphers were designed from scratch to yield fast and compact implementations on resource constrained embedded platforms. We show that some of these ciphers have the potential to enable near practical homomorphic evaluation of block ciphers. Indeed, our analysis shows that Prince can be implemented using only a 24 level deep circuit. Using an NTRU based implementation we achieve an evaluation time of 3.3 seconds per Prince block - one and two orders of magnitude improvement over homomorphic AES implementations achieved using NTRU, and BGV-style homomorphic encryption libraries, respectively.
|
2016-02-08 08:14:03
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.33164161443710327, "perplexity": 2215.2341691898623}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-07/segments/1454701152982.47/warc/CC-MAIN-20160205193912-00032-ip-10-236-182-209.ec2.internal.warc.gz"}
|
https://en.wikipedia.org/wiki/Talk:All_the_Rage_(film)
|
# Talk:All the Rage (film)
In order for an article to be notable, it needs verifiable references and third-party citations, which this article lacks. Perhaps when constructing an article you can place this {{construction}} template above the article while you are bringing it up to wiki standards. --Sallicio⊕ 17:55, 13 March 2008 (UTC)
|
2016-10-24 03:44:00
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 1, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.32675036787986755, "perplexity": 6697.3423380369295}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988719465.22/warc/CC-MAIN-20161020183839-00156-ip-10-171-6-4.ec2.internal.warc.gz"}
|
https://deepai.org/publication/a-lifting-method-for-analyzing-distributed-synchronization-on-the-unit-sphere
|
# A Lifting method for analyzing distributed synchronization on the unit sphere
This paper introduces a new lifting method for analyzing convergence of continuous-time distributed synchronization/consensus systems on the unit sphere. Points on the d-dimensional unit sphere are lifted to the (d+1)-dimensional Euclidean space. The consensus protocol on the unit sphere is the classical one, where agents move toward weighted averages of their neighbors in their respective tangent planes. Only local and relative state information is used. The directed interaction graph topologies are allowed to switch as a function of time. The dynamics of the lifted variables are governed by a nonlinear consensus protocol for which the weights contain ratios of the norms of state variables. We generalize previous convergence results for hemispheres. For a large class of consensus protocols defined for switching uniformly quasi-strongly connected time-varying graphs, we show that the consensus manifold is uniformly asymptotically stable relative to closed balls contained in a hemisphere. Compared to earlier projection based approaches used in this context such as the gnomonic projection, which is defined for hemispheres only, the lifting method applies globally. With that, the hope is that this method can be useful for future investigations on global convergence.
## 1 Introduction
This paper considers systems of n agents continuously evolving on the unit sphere. The interactions between the agents change as a function of time. For such systems we analyze a large class of distributed synchronization/consensus control laws. The analysis tool is a lifting method, where an equivalent consensus protocol is analyzed in the ambient space that embeds the sphere. In comparison to projection methods that have been used in this context, e.g., the gnomonic projection, the proposed method is not locally but globally defined on the unit sphere. The control action is performed in the tangent plane. Only relative information between neighboring agents is used in the control laws. Under the assumption that the time-varying graph is uniformly quasi-strongly connected, we show that the consensus manifold is globally uniformly asymptotically stable relative to any closed ball on the sphere contained in an open hemisphere.
Synchronization on the circle, i.e., on S^1, is closely related to synchronization of oscillators (Dörfler & Bullo, 2014) and is equivalent to synchronization on SO(2), where several applications exist such as flocking in nature and alignment of multi-robot systems. Also for the two-dimensional sphere S^2 there are several applications, such as formation flying and flocking of birds; consider for example a multi-robot system in 3D, where the relative directions between the robots are available and the goal is to align those. For higher dimensional spheres there are currently related problems such as distributed eigenvector computation, but concrete applications might arise in the future.
The control laws at hand, and slight variations or restrictions on the graph topologies, switchings of the graphs, dimensions of the sphere, the nonlinear weights in the control laws, etc., have been studied from various perspectives (Scardovi et al., 2007; Sarlette, 2009; Olfati-Saber, 2006; Li & Spong, 2014; Li, 2015). There have recently been new developments (Pereira & Dimarogonas, 2015, 2016; Markdahl & Goncalves, 2016; Markdahl et al., 2016). In Markdahl et al. (2017), almost global consensus is shown by a characterization of all equilibrium points when the graph is symmetric and constant (time-invariant). It is shown that the equilibria not in the consensus manifold are unstable and the equilibria in the consensus manifold are stable. A similar technique is used in Tron et al. (2012) to show that a consensus protocol on SO(3) is almost globally asymptotically stable. Now, the above-mentioned results about almost global convergence come at a price: static undirected graph topologies are assumed, as well as more restrictive classes of weights in the control protocols. Furthermore, compared to Markdahl et al. (2017), the right-hand sides of the system dynamics are not necessarily intrinsic gradients and the linearization matrices at equilibria are not necessarily symmetric. Hence, we cannot use the result due to Lojasiewicz (1982) about point convergence for gradient flows. This inspired us to take a closer look at methods that transform the consensus problem on the unit sphere (or a subset thereof) to an equivalent consensus problem in the ambient Euclidean space. Before we address the method, referred to as a lifting method, we briefly make some connections to the related problem of consensus on SO(3).
The problem of consensus on SO(3) has been extensively studied (Sarlette et al., 2009; Ren, 2010; Sarlette et al., 2010; Tron et al., 2013; Tron & Vidal, 2014; Deng et al., 2016; Thunberg et al., 2016). There is a connection between that problem and the problem of consensus on the sphere when the unit quaternions are used to represent the rotations. For those, the gnomonic projection can be used to show consensus on the unit-quaternion sphere (Thunberg, Song, Hong & Hu, 2014; Thunberg, Song, Montijano, Hong & Hu, 2014). In another line of research, several methods have been introduced where control laws based on only relative information have been augmented with additional auxiliary (or estimation) variables, which are communicated between neighboring agents. By doing so, results about almost global convergence to the consensus manifold are achieved (Sarlette & Sepulchre, 2009; Thunberg, Markdahl & Goncalves, 2017). The latter of these two publications provides a control protocol for Stiefel manifolds, with the unit sphere and SO(d) as extreme cases. A similar technique had previously been used for the sphere (Scardovi et al., 2007). The idea of introducing auxiliary variables also extends to the related distributed optimization problem in Thunberg, Bernard & Goncalves (2017). In contrast to the mentioned works, in this paper we are not assuming additional communication between the agents by means of auxiliary variables. Instead only relative information is used in the protocols. In a practical setting (considering the case of S^2), such information can be measured by for example a vision sensor and requires no explicit communication between the agents.
In the proposed lifting method, we lift the states from the unit sphere into the ambient Euclidean space. The non-negative weights in the consensus protocol for the states in the lifting space are nonlinear functions. Each agent moves in a direction that is a weighted combination of the directions to the neighbors. The weights contain rational functions of the norms of the states of the agents. Since these rational functions are not well-defined at the origin, fundamental questions arise about existence, uniqueness, and invariance of sets. Those questions are answered in the positive. The hope is that this lifting method will serve as a stepping-stone to future analysis of (almost) global convergence to the consensus manifold on the unit sphere. Compared to the approach in Markdahl et al. (2017), where all the “bad” equilibria on the sphere were characterized, we only need to characterize one point, which is the origin in the “lifted space”. If we were to show that this point has a region of attraction that is of measure zero, we would have equivalently shown the desired result about almost global convergence on the unit sphere. However, the non-differentiability at this point remains an additional challenge.
## 2 Preliminaries
We begin this section with some set definitions. The (d−1)-dimensional unit sphere is
$$\mathcal{S}^{d-1} = \{\, y \in \mathbb{R}^d : \|y\|_2 = 1 \,\}.$$
The special orthogonal group in dimension d is
$$SO(d) = \{\, Q \in \mathbb{R}^{d \times d} : Q^T = Q^{-1},\ \det(Q) = 1 \,\}.$$
The set of skew-symmetric matrices in dimension d is
$$\mathfrak{so}(d) = \{\, \Omega \in \mathbb{R}^{d \times d} : \Omega^T = -\Omega \,\}.$$
A set U ⊆ S^{d−1} is an open hemisphere if there is y ∈ S^{d−1} such that U = {x ∈ S^{d−1} : y^T x > 0}.
We consider a multi-agent system with n agents. Each agent i has a corresponding state x_i(t) ∈ S^{d−1} for i ∈ {1, 2, ..., n}. The initial state of each agent at time 0 is x_i(0). Another way to represent the states of the agents is to use rotation matrices. Let R_i(t) ∈ SO(d) satisfy R_i(t) p = x_i(t) for all i and t, where p = [1, 0, ..., 0]^T is the north pole; we also define −p as the south pole. Let R_i(0) be the initial R_i-matrix at time 0. The R_i-matrices can be interpreted as transformations from the body coordinate frames of the agents (denoted by F_i) to a world coordinate frame F_W. They transform the unit vector p in the body frames to the corresponding unit vector (or point on the unit sphere) in the world coordinate frame. The R_i's and their dynamics are not uniquely defined, but this is not of importance for the analysis. We choose to define the dynamics of the R_i's according to (2) below.
The dynamics of the x_i-vectors are given by
$$\dot{x}_i = (I - x_i x_i^T)\, R_i\, [0, v_i^T]^T = R_i\, [0, v_i^T]^T, \qquad (1)$$
where v_i(t) ∈ R^{d−1} for all i. The v_i-vectors are the controllers for the agents, and those are defined in the body coordinate frames, i.e., the F_i's. For the R_i-matrices the dynamics is
$$\dot{R}_i = R_i \begin{bmatrix} 0 & -v_i^T \\ v_i & 0 \end{bmatrix}. \qquad (2)$$
The matrix on the right-hand side in (2) is an element of so(d). The control is performed in the tangent space of the sphere, which means that there are d − 1 degrees of freedom for the control. This is the reason why the v_i-vectors are (d − 1)-dimensional. Before we proceed, we provide some additional explanation for the expression in the right-hand side of (2). According to its definition, the first column of R_i is equal to x_i, and by multiplying (1) by R_i^T from the left we obtain the following expression:
$$R_i^T \dot{x}_i = [0, v_i^T]^T.$$
This means that
$$R_i^T \dot{R}_i = \begin{bmatrix} 0 & \star \\ v_i & \star \end{bmatrix},$$
where the ⋆-parts are left to be chosen. We know that the matrix in the right-hand side above needs to be skew-symmetric, since R_i is a rotation matrix. We also know that its first column must be equal to [0, v_i^T]^T. The matrix of minimum Euclidean norm that fulfills these two requirements is equal to
$$\begin{bmatrix} 0 & -v_i^T \\ v_i & 0 \end{bmatrix},$$
We will study a class of distributed synchronization/consensus control laws on the unit sphere, where the agents are moving in directions comprising conical combinations of directions to neighbors. In this protocol only local and relative information is used. Before we provide these control laws we introduce directed graphs and time-varying directed graphs.
A directed graph is a pair $\mathcal{G}=(\mathcal{V},\mathcal{E})$, where $\mathcal{V}=\{1,2,\dots,n\}$ is the node set and $\mathcal{E}\subset\mathcal{V}\times\mathcal{V}$ is the edge set. Each node in the node set corresponds to a unique agent. The set $N_i$ is the neighbor set or neighborhood of agent $i$, where $j\in N_i$ if and only if $(i,j)\in\mathcal{E}$. We continue with the following definitions addressing connectivity of directed graphs.
In a directed graph $\mathcal{G}$, a directed path is a sequence of distinct nodes such that any consecutive pair of nodes in the sequence comprises an edge in the graph. We say that node $i$ is connected to node $j$ if there is a directed path from $i$ to $j$. We say that the graph is quasi-strongly connected if there is at least one node that is a center or a root node in the sense that all the other nodes are connected to it. We say that the graph is strongly connected if for all pairs of nodes $i,j$ it holds that $i$ is connected to $j$.
Now we define time-varying graphs. We define those by first defining time-varying neighborhoods. The time-varying neighborhood $N_i(t)$ of agent $i$ is a piecewise constant, right-continuous, set-valued function that maps from $[0,\infty)$ to the subsets of $\mathcal{V}$. We assume that there is $\tau_D>0$ such that $t_{k+1}-t_k\geq\tau_D$ for all $k$, where the $t_k$'s are the time points of discontinuity of the $N_i(t)$'s. The constant $\tau_D$ is a lower bound on the dwell time between any two consecutive switches of the topology. We define the time-varying graph as
$$\mathcal{G}(t)=(\mathcal{V},\mathcal{E}(t))=\Big(\mathcal{V},\ \bigcup_{i}\bigcup_{j\in N_i(t)}\{(i,j)\}\Big).$$
Furthermore, the union graph of $\mathcal{G}(t)$ during the time interval $[t_1,t_2)$ is defined by
$$\mathcal{G}([t_1,t_2))=\bigcup_{t\in[t_1,t_2)}\mathcal{G}(t)=\Big(\mathcal{V},\ \bigcup_{t\in[t_1,t_2)}\mathcal{E}(t)\Big),$$
where $0\leq t_1<t_2\leq\infty$. We say that the graph $\mathcal{G}(t)$ is uniformly (quasi-) strongly connected if there exists a constant $T>0$ such that the union graph $\mathcal{G}([t,t+T))$ is (quasi-) strongly connected for all $t\geq 0$.
Now we provide the synchronization protocol to be studied. For each agent $i$, the controller is defined by
$$\begin{bmatrix}0\\ v_i\end{bmatrix}=\begin{bmatrix}0 & 0\\ 0 & I_{d-1}\end{bmatrix}\sum_{j\in N_i(t)}f_{ij}(\|x_{ij}-p\|)\,x_{ij}, \qquad (3)$$
where $x_{ij}=R_i^Tx_j$, which is $x_j$ represented in the frame $\mathcal{F}_i$. The $x_{ij}$'s are what we refer to as relative information, and the control law (3) is constructed from only such information. For each pair $i,j$, it holds that $\|x_{ij}-p\|=\|x_j-x_i\|$. The $f_{ij}$-functions are assumed to be Lipschitz and attain positive values for positive arguments. The $N_i(t)$'s are neighborhoods of a time-varying directed graph $\mathcal{G}(t)$, whose connectivity is at least uniformly quasi-strong. These control laws will be analyzed in the paper.
The expressions in (3) are more easily understood if they are expressed in the world frame $\mathcal{F}_W$. We define
$$u_i=(I-x_ix_i^T)\sum_{j\in N_i(t)}f_{ij}(\|x_j-x_i\|)(x_j-x_i), \qquad (4)$$
for all $i$, which is expressed in the frame $\mathcal{F}_W$. The vector $u_i$ is the sum of the positively weighted directions to the neighbors of agent $i$, projected onto the tangent space of the sphere at the point $x_i$. Also, for analysis purposes, (4) is easier to work with than (3). The closed-loop system is
$$\dot{x}_i=(I-x_ix_i^T)\sum_{j\in N_i(t)}f_{ij}(\|x_j-x_i\|)(x_j-x_i)=(I-x_ix_i^T)\sum_{j\in N_i(t)}f_{ij}(\|x_j-x_i\|)\,x_j, \qquad (5)$$
for all $i$.
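As an illustration (not part of the paper's analysis), the following minimal Python sketch integrates the closed-loop system (5) with a forward-Euler scheme; the ring graph, constant weights $f_{ij}=1$, step size, and horizon are all assumptions made for the example.

```python
# Minimal sketch: forward-Euler integration of the sphere consensus dynamics (5)
# with constant weights f_ij = 1 on an assumed static ring graph.
import numpy as np

rng = np.random.default_rng(0)
n, d = 5, 3                                       # five agents on S^{d-1}
X = rng.normal(size=(n, d))
X /= np.linalg.norm(X, axis=1, keepdims=True)     # random initial states on the sphere

neighbors = {i: [(i + 1) % n] for i in range(n)}  # ring graph (strongly connected)

dt, steps = 0.01, 5000
for _ in range(steps):
    Xdot = np.zeros_like(X)
    for i in range(n):
        P = np.eye(d) - np.outer(X[i], X[i])      # projector onto tangent space at x_i
        for j in neighbors[i]:
            Xdot[i] += P @ (X[j] - X[i])          # f_ij = 1
    X += dt * Xdot
    X /= np.linalg.norm(X, axis=1, keepdims=True) # re-normalize: Euler drifts off the sphere

print(np.max(np.linalg.norm(X - X[0], axis=1)))   # near 0 once consensus is reached
```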
Let $x=[x_1^T,\dots,x_n^T]^T$ and $x_0=x(0)$. We define the set
$$\mathcal{A}=\{x:x_i=x_j\ \text{for all}\ i,j\},$$
which is the synchronization/consensus set. Throughout the paper we assume that the closed-loop dynamics of the system is given by (5). We study the convergence of $x(t)$ to the consensus set $\mathcal{A}$. When we talk about convergence we refer to the concepts below.
For the system (5), we say that the set $\mathcal{A}$ is attractive relative to a forward invariant set $S$ if
$$(x_0\in S)\implies(\operatorname{dist}(x(t),\mathcal{A})\to 0\ \text{as}\ t\to\infty),$$
where $\operatorname{dist}(x,\mathcal{A})=\inf_{y\in\mathcal{A}}\|x-y\|$. Furthermore, we say that the set $\mathcal{A}$ is globally uniformly asymptotically stable relative to a forward invariant compact set $S$ if
1. for every $\varepsilon>0$ there is $\delta>0$ such that $\operatorname{dist}(x_0,\mathcal{A})<\delta$ implies $\operatorname{dist}(x(t),\mathcal{A})<\varepsilon$ for all $t\geq 0$;
2. for every $\varepsilon>0$ there is $T>0$ such that $\operatorname{dist}(x(t),\mathcal{A})<\varepsilon$ for all $t\geq T$ (for all $x_0\in S$).
Equivalent definitions to the above will also be used (after changing the sets $\mathcal{A}$ and $S$) for other systems evolving in $\mathbb{R}^{nd}$ or linear subspaces thereof. Forward invariance, or simply invariance, of a set means that if the initial state is contained in the set, then the state is contained in the set for all future times.
The two concepts of global convergence, respectively almost global convergence, relative to a forward invariant set $S$ refer to the situations where convergence occurs for all initial points in $S$, and where convergence occurs for all initial points in $S\setminus S_0$ for a set $S_0$ of measure zero.
## 3 Projection methods
Before we continue to present the lifting method, we show how projection-based methods can be used to analyze consensus on hemispheres. In particular we consider two such methods. In both of them the $x_i$-vectors are projected down onto a $(d-1)$-dimensional linear subspace of $\mathbb{R}^d$. The symbol $y_i$ is used to denote the projection variable for $x_i$ in both methods.
### 3.1 Equatorial plane projection
The equatorial plane projection simply projects all the states onto a $(d-1)$-dimensional hyperplane (that contains the origin). This plane separates the sphere into two hemispheres. If all the agents are positioned on one of those hemispheres, one can easily show that they reach consensus provided that the graph is strongly connected. This projection method is appealing because the projections are simple and the convergence proof is straightforward. It is interesting that results from the literature about convergence on hemispheres (and slightly more general ones where the graph is assumed to be time-varying) can easily be shown with this simple projection.
Now, formally, the $x_i$-states are projected onto the equatorial plane whose normal is equal to $p$ in the world coordinate frame $\mathcal{F}_W$. The projected state $y_i$ is defined by
$$\begin{bmatrix}0\\ y_i\end{bmatrix}=P_{\mathrm{equ}}x_i=\begin{bmatrix}0 & 0\\ 0 & I_{d-1}\end{bmatrix}x_i. \qquad (6)$$
This is illustrated in Fig. 1 for dimension $d=3$. Points on the northern hemisphere, i.e., the $x_i$'s satisfying $p^Tx_i>0$, are projected down onto the equatorial plane. For each point there is a blue dotted line between the point and its projection.
On the northern hemisphere the projection is a diffeomorphism. The inverse mapping is defined by
$$[x_i]_1=\sqrt{1-\|y_i\|^2}, \qquad (7)$$
$$[x_i]_k=[y_i]_{k-1},\quad\text{for}\ k\geq 2, \qquad (8)$$
where $[x_i]_k$ and $[y_i]_k$ are the $k$'th elements of $x_i$ and $y_i$, respectively.
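For concreteness, here is a small sketch (not from the paper) of the projection (6) and the inverse map (7)–(8), with the north pole taken as $p=e_1$:

```python
# Minimal sketch: equatorial plane projection (6) and its inverse (7)-(8),
# assuming the north pole p = e_1, so the projection drops the first coordinate.
import numpy as np

def equ_project(x):
    """y_i collects components 2..d of x_i."""
    return x[1:]

def equ_lift(y):
    """Inverse map, valid on the northern hemisphere, i.e., when ||y|| < 1."""
    return np.concatenate(([np.sqrt(1.0 - y @ y)], y))

x = np.array([0.6, 0.8, 0.0])        # a point on S^2 with [x]_1 > 0
y = equ_project(x)
assert np.allclose(equ_lift(y), x)   # the round trip recovers x
```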
By using this projection one obtains a local convergence result for hemispheres.
Suppose controller (3) is used for each agent and suppose $\mathcal{G}(t)$ is uniformly strongly connected. If $p^Tx_{i0}>0$ for all $i$, it holds that $\mathcal{A}$ is attractive for the closed-loop system (5).
Proof: We will use Theorem 1 in Thunberg, Hu & Goncalves (2017). Under the condition that the graph is uniformly strongly connected, it suffices to show that any closed disc (or ball) in the equatorial plane with radius less than $1$ is forward invariant for the $y_i$'s, and to find a function $V$ such that 1) $V$ is positive definite, 2) $V(y_i(t))$ is non-increasing whenever $V(y_i)=\max_j V(y_j)$, and 3) $\dot V(y_i)$ is strictly negative if $V(y_i)=\max_j V(y_j)$ and there is $j\in N_i(t)$ such that $y_j\neq y_i$. Then the $y_i$'s converge to a consensus formation. This in turn implies, since the projection is a diffeomorphism, that the $x_i$'s converge to a consensus formation, i.e., the set $\mathcal{A}$ is attractive.
Let $V(y_i)=\|y_i\|^2$, i.e., $V(y_i)=y_i^Ty_i$, which obviously satisfies condition 1). For $\|y_i\|<1$ it holds that
$$\dot V(y_i)=\sum_{j\in N_i(t)}g_{ij}(y_i,y_j)\Big(\big(\sqrt{1-\|y_i\|^2}\,\sqrt{1-\|y_i\|^2}\big)\,y_i^Ty_j-\big(\sqrt{1-\|y_i\|^2}\,\sqrt{1-\|y_j\|^2}\big)\,\|y_i\|^2\Big), \qquad (9)$$
where $g_{ij}(y_i,y_j)=2f_{ij}(\|x_j-x_i\|)>0$.
Now, at time $t$, assume $V(y_i)=\max_jV(y_j)$ and assume $\|y_i\|<1$. The following observations imply that conditions 2) and 3) hold. It holds that $y_i^Ty_j\leq\|y_i\|^2$ and the inequality is strict if $y_j\neq y_i$. It holds that $\|y_j\|\leq\|y_i\|$ and the inequality is strict if $V(y_j)<V(y_i)$. It holds that
$$\sqrt{1-\|y_i\|^2}\,\sqrt{1-\|y_j\|^2}\geq\big(\sqrt{1-\|y_i\|^2}\big)^2\geq 0.$$
We also see that any closed disc with radius less than $1$ is forward invariant for the $y_i$'s.
By a change of coordinates, we obtain the following generalization (Corollary 3.1): Suppose controller (3) is used for each agent and suppose $\mathcal{G}(t)$ is uniformly strongly connected. If all the $x_{i0}$'s are contained in an open hemisphere, it holds that $\mathcal{A}$ is attractive for the closed-loop system (5).
A main problem with the equatorial plane projection is that the convex hull of the projected variables is not necessarily forward invariant. This means that the projected variables do not follow a consensus protocol. This is also the reason why we settle for the strong connectivity assumption on the graph, i.e., that it is uniformly strongly connected. However, the projected variables under the gnomonic projection—introduced in the subsequent section—do follow a consensus protocol, which, in turn, allows for more general convergence results.
### 3.2 The gnomonic projection
The gnomonic projection projects an open hemisphere onto a tangent plane at a point on the sphere. We will use the convention of projecting the points on the southern hemisphere, defined as $\{x\in\mathcal{S}^{d-1}:p^Tx<0\}$, onto the tangent plane at the south pole, i.e., at the point $-p$. The projection of $x_i$ is the intersection between the tangent plane and the line that passes through the origin and $x_i$. This projection is illustrated in Fig. 2, where several points are projected.
The gnomonic projection has the property that segments of great circles on the sphere (geodesics) correspond to straight line segments in the projection plane. One can show that a consensus algorithm on the open hemisphere corresponds to a consensus protocol for the projected states. It should be emphasized that the gnomonic projection method is not new. It is claimed to have been invented by the Greek philosopher Thales of Miletus, who lived around 624–546 BCE (Alsina & Nelsen, 2015). Its first appearance in a subject related to the one addressed in this paper was probably in Hartley & Dai (2010) and subsequently in Hartley et al. (2013) in the context of rotation averaging. Later, the gnomonic projection was used as a tool to show consensus on the open hemisphere (Thunberg, Song, Hong & Hu, 2014; Thunberg et al., 2016). In those latter works, the three-dimensional (unit-quaternion) sphere was considered in the context of attitude synchronization. Recently, the gnomonic projection was also considered for arbitrary dimensions (Lageman & Sun, 2016). It should be emphasized that the graph was not time-varying in that context.
Formally we define $y_i$, the projection of $x_i$, by
$$\begin{bmatrix}-1\\ y_i\end{bmatrix}=\frac{1}{|[x_i]_1|}x_i, \qquad (10)$$
which is a diffeomorphism from the open southern hemisphere to the tangent plane at the south pole. Suppose controller (3) is used, i.e., the closed-loop dynamics is given by (5). If $[x_{i0}]_1<0$ for all $i$, i.e., all the $x_i$'s are located on the open southern hemisphere, it holds that the dynamics of the $y_i$'s is of the form
$$\dot{y}_i=\sum_{j\in N_i(t)}h_{ij}(y)(y_j-y_i), \qquad (11)$$
where $h_{ij}(y)>0$, and it can be shown that the $h_{ij}$'s are locally Lipschitz and globally Lipschitz on any bounded set. This can be used (as an alternative to using the lifting method) to prove the result in Proposition 4 in the next section, which is stronger than that in Corollary 3.1.
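The following small sketch (not from the paper) implements the gnomonic projection (10) and its inverse, with the south pole taken as $-e_1$:

```python
# Minimal sketch: gnomonic projection (10) from the open southern hemisphere
# {x : [x]_1 < 0} onto the tangent plane at the south pole -e_1, and its inverse.
import numpy as np

def gnomonic(x):
    """Scale x along the ray through the origin until its first coordinate is -1;
    return the remaining d-1 coordinates."""
    assert x[0] < 0, "defined only on the open southern hemisphere"
    return (x / abs(x[0]))[1:]

def gnomonic_inv(y):
    """Lift a point of the tangent plane back to the unit sphere."""
    z = np.concatenate(([-1.0], y))
    return z / np.linalg.norm(z)

x = np.array([-0.5, 0.5, np.sqrt(0.5)])           # a point on S^2 with [x]_1 < 0
assert np.allclose(gnomonic_inv(gnomonic(x)), x)  # round trip recovers x
```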
## 4 The lifting method
In this section we propose a method where the $x_i$'s are not projected onto a $(d-1)$-dimensional plane, but rather relaxed to be elements in $\mathbb{R}^d$. Those elements, which we call $z_i$'s, are then projected down onto the sphere to create the $y_i$'s (which in this case are equivalent to the $x_i$'s). The method can thus be seen as the inverse procedure to the two in the previous section. Provided $z_i\neq 0$, the projection is given by
$$y_i=\frac{z_i}{\|z_i\|}. \qquad (12)$$
This projection as well as the lifting is illustrated in Figure 3. Points in $\mathbb{R}^d$ are projected down onto the sphere in the sense of minimizing the least-squares distance.
We let the $z_i$'s be governed by the following dynamical system:
$$\dot{z}_i=\sum_{j\in N_i(t)}f_{ij}\Big(\Big\|\frac{z_j}{\|z_j\|}-\frac{z_i}{\|z_i\|}\Big\|\Big)\frac{\|z_i\|}{\|z_j\|}(z_j-z_i), \qquad (13)$$
for all $i$. Let the initial state of the system be $z_{i0}=z_i(0)$. Equation (13) describes a consensus protocol with nonlinear weights that contain rational functions of the norms of the states. The question is how this dynamical system is related to (5). The following proposition provides the answer.
Suppose that all the $z_{i0}$'s are not equal to zero. On the time interval where the solution is defined, the dynamics for the $y_i$'s is given by
$$\dot{y}_i=(I-y_iy_i^T)\sum_{j\in N_i(t)}f_{ij}(\|y_j-y_i\|)(y_j-y_i), \qquad (14)$$
i.e., it is the same as (5).
Proof: Proposition 4 below provides the result that the solution to (13) is well-defined and that the states never reach the origin. Given that result, the $y_i$'s and their derivatives are well-defined. Now,
$$\dot{y}_i=\frac{1}{\|z_i\|}(I-y_iy_i^T)\dot{z}_i=(I-y_iy_i^T)\sum_{j\in N_i(t)}f_{ij}(\|y_j-y_i\|)\frac{z_j-z_i}{\|z_j\|}=(I-y_iy_i^T)\sum_{j\in N_i(t)}f_{ij}(\|y_j-y_i\|)(y_j-y_i),$$
where the last equality uses $(I-y_iy_i^T)z_i=0$.
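To illustrate the proposition numerically, the following sketch (again an illustration, with an assumed ring graph and constant weights $f_{ij}=1$) integrates the lifted dynamics (13) in $\mathbb{R}^d$ and checks that the normalized states reach consensus on the sphere:

```python
# Minimal sketch: integrate the lifted dynamics (13) with f_ij = 1 and check
# that the normalized states y_i = z_i/||z_i|| reach consensus on the sphere.
import numpy as np

rng = np.random.default_rng(1)
n, d = 4, 3
Z = rng.normal(size=(n, d)) + 2.0                 # lifted states, away from the origin
neighbors = {i: [(i + 1) % n] for i in range(n)}  # assumed ring graph

dt, steps = 0.005, 4000
for _ in range(steps):
    Zdot = np.zeros_like(Z)
    for i in range(n):
        ni = np.linalg.norm(Z[i])
        for j in neighbors[i]:
            nj = np.linalg.norm(Z[j])
            Zdot[i] += (ni / nj) * (Z[j] - Z[i])  # weight ||z_i||/||z_j||, f_ij = 1
    Z += dt * Zdot

Y = Z / np.linalg.norm(Z, axis=1, keepdims=True)  # project back to the sphere, (12)
print(np.max(np.linalg.norm(Y - Y[0], axis=1)))   # near 0: consensus on S^{d-1}
```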
Suppose the dynamics for the $z_i$'s is governed by (13). Suppose there is no $i$ such that $z_{i0}=0$. Let $\mathcal{H}(t)$ be the convex hull of the $z_i(t)$'s when the initial condition is $z_0$. Then the solution to (13) exists and is unique for all times $t\geq 0$, the set on which all the $z_i$'s are nonzero is forward invariant, and the set $\mathcal{H}(0)$ is forward invariant in the sense that $z_i(t)\in\mathcal{H}(0)$ for all $i$ and all $t\geq 0$.
Proof: We first address the claim that $\mathcal{H}(0)$ is forward invariant. It suffices to verify that for each $i$, the right-hand side of (13) is either inward-pointing relative to $\mathcal{H}(0)$, or equal to $0$. Due to the structure of (13)—each $\dot z_i$ is a conical combination of the directions $z_j-z_i$—this is true.
Now we address the invariance of the set on which all the $z_i$'s are nonzero, and, by doing that, we obtain the existence and uniqueness result for the solution for free, since the right-hand side of (13) is locally Lipschitz on this set. Suppose there is $i_1$ and a finite time $t_1$ such that $z_{i_1}(t)\to 0$ as $t\to t_1$, and there is no $i$ and $t<t_1$ such that $z_i(t)=0$. This means that $t_1$ is a first finite time at which at least one state, namely $z_{i_1}$, attains the value $0$. The assumption is equivalent to assuming that the set is not forward invariant.
For $t<t_1$ it holds that
$$\frac{d}{dt}\|z_i\|=\sum_{j\in N_i(t)}g_{ij}(z_i,z_j)\Big(\|z_i\|\theta_{ij}-\frac{\|z_i\|^2}{\|z_j\|}\Big),\quad\text{for all}\ i,$$
where $g_{ij}(z_i,z_j)=f_{ij}\big(\big\|\frac{z_j}{\|z_j\|}-\frac{z_i}{\|z_i\|}\big\|\big)$ and $\theta_{ij}=\frac{z_i^Tz_j}{\|z_i\|\|z_j\|}$. For $t<t_1$, it also holds that
$$\frac{d}{dt}\frac{\|z_i\|}{\|z_j\|}=\sum_{k\in N_i(t)}g_{ik}(z_i,z_k)\Big(\frac{\|z_i\|}{\|z_j\|}\theta_{ik}-\frac{\|z_i\|^2}{\|z_j\|\|z_k\|}\Big)-\sum_{l\in N_j(t)}g_{jl}(z_j,z_l)\Big(\frac{\|z_i\|}{\|z_j\|}\theta_{jl}-\frac{\|z_i\|}{\|z_l\|}\Big).$$
We define $v_{ij}=\|z_i\|/\|z_j\|$ for all $i,j$ and write the equation above as
$$\dot{v}_{ij}=\sum_{k\in N_i(t)}g_{ik}(z_i,z_k)(v_{ij}\theta_{ik}-v_{ij}v_{ik})-\sum_{l\in N_j(t)}g_{jl}(z_j,z_l)(v_{ij}\theta_{jl}-v_{il}).$$
Let $\alpha$ be an upper bound for the $g_{ij}$'s, which is equivalent to an upper bound for the $f_{ij}$'s. Such a bound must exist, since the set $\mathcal{S}^{d-1}\times\mathcal{S}^{d-1}$ is compact and the function $f_{ij}\circ\rho$ is continuous on it, where $\rho$ is the function that returns the Euclidean distance between two points in $\mathbb{R}^d$.
Let $V=\max_{i,j}v_{ij}$. On $[0,t_1)$ it holds that
$$D^+V\leq 3\alpha nV, \qquad (15)$$
where $D^+$ is the upper Dini derivative. By using the Comparison Lemma for (15), we can conclude that $V(t)$ is bounded from above by $e^{3\alpha nt}V(0)$ on $[0,t_1)$.
Now, for $t<t_1$ it holds that
$$\frac{d}{dt}\|z_{i_1}\|=\sum_{j\in N_{i_1}(t)}g_{i_1j}(z_{i_1},z_j)\Big(\|z_{i_1}\|\theta_{i_1j}-\frac{\|z_{i_1}\|^2}{\|z_j\|}\Big)\geq-n\alpha\big(e^{3\alpha nt_1}V(0)+1\big)\|z_{i_1}\|.$$
By using the Comparison Lemma, we can conclude that $\|z_{i_1}(t)\|$ is bounded from below by a positive constant on $[0,t_1)$. But this, in turn, means that $z_{i_1}(t)\not\to 0$ as $t\to t_1$, which is a contradiction.
In the following proposition we make use of the convex hull $\mathcal{H}(t)$, which was defined in Proposition 4.
Suppose the dynamics for the $z_i$'s is governed by (13) and $\mathcal{G}(t)$ is uniformly quasi-strongly connected. Suppose $0$ is not contained in the convex hull $\mathcal{H}(t)$ for any $t\geq 0$. Then the consensus set—defined as the set where all the $z_i$'s are equal in $\mathbb{R}^d$—is globally uniformly asymptotically stable relative to $\mathcal{H}(0)^n$. Furthermore, there is a point that all the $z_i$'s converge to.
Proof: Invariance of $\mathcal{H}(0)^n$ is an indirect consequence of the fact that (13) is a consensus protocol. On this set the right-hand side of (13) is Lipschitz continuous in $z$ and piecewise continuous in $t$. The procedure in the rest of the proof is analogous to the one in Proposition 4. Since the right-hand side of (13) is Lipschitz continuous in $z$ and piecewise continuous in $t$, we can use Theorem 2 in Thunberg, Hu & Goncalves (2017) to find a continuously differentiable function $V$ such that 1) $V(z_i(t))$ is non-increasing whenever $V(z_i)=\max_jV(z_j)$; and 2) $\dot V(z_i)$ is strictly negative if $V(z_i)=\max_jV(z_j)$ and there is $j\in N_i(t)$ such that $z_j\neq z_i$. The existence of such a function guarantees that the consensus set is globally uniformly asymptotically stable relative to $\mathcal{H}(0)^n$. Convergence to a point for all the $z_i$'s can be shown by using the facts that $\mathcal{H}(t)$ is forward invariant for all $t$ and that its diameter converges to $0$.
As a remark to the previous proposition, we should add that more restrictive results about attractivity of the consensus set can be shown by using the results in Shi & Hong (2009); Lin et al. (2007).
Suppose the graph $\mathcal{G}(t)$ is uniformly quasi-strongly connected. Then for any closed ball $B$ contained in an open hemisphere, the consensus set $\mathcal{A}$ is globally uniformly asymptotically stable relative to $B^n$ under (5).
Proof: Forward invariance of $B^n$ holds due to the structure of the right-hand side of (5). Let $z_{i0}=x_{i0}$ for all $i$. Due to Proposition 4 we know that the consensus set for the $z_i$'s is globally uniformly asymptotically stable relative to $\mathcal{H}(0)^n$ and that there is a point that all the $z_i$'s converge to. We also know that the projected $y$-variables follow the protocol (14), which is the same as (5). The norms of the $z_i$'s are uniformly bounded on $\mathcal{H}(0)$, and $\mathcal{H}(t)$ is forward invariant for all $t$. Thus the desired result readily follows.
Suppose the dynamics for the $z_i$'s is governed by (13) and suppose the graph is quasi-strongly connected. If the $z_i$'s converge to a point $\bar z$ that is not equal to zero, then the $y_i$'s converge to the point $\bar z/\|\bar z\|$. Furthermore, if the convex hull of the $z_{i0}$'s does not contain the point zero, then the $y_i$'s converge to a point on the sphere.
Proof: Straightforward application of the preceding propositions.
Variations of Proposition 4 have appeared in the literature before. The idea of using the gnomonic projection to show consensus on the hemisphere was used in Thunberg, Song, Hong & Hu (2014); Thunberg et al. (2016), where restricted versions were given for the three-dimensional (unit-quaternion) sphere in the context of attitude synchronization. Recently, the attractivity of the consensus set relative to open hemispheres was established under quasi-strong graph connectivity (Lageman & Sun, 2016) using the gnomonic projection. The graph was not time-varying in that context.
To get a better understanding of Proposition 4, a numerical example is provided in Fig. 4. In this example there are five agents with a uniformly quasi-strongly connected interaction graph. The agents were initially uniformly distributed on a hemisphere, and the $f_{ij}$-functions were chosen to be constant, taking one of two values. In the figure, the red discs denote the initial positions and the yellow disc denotes the final consensus point. We have also denoted two points on the trajectories where the graph switches.
Now, the case when $0$ is contained in the convex hull of the $z_i$'s is more intriguing. We provide the following result.
Suppose the dynamics for the $z_i$'s is governed by (13) and $\mathcal{G}(t)$ is uniformly strongly connected. Suppose $0$ is contained in the convex hull of the $z_{i0}$'s, i.e., in $\mathcal{H}(0)$, and there is no $i$ such that $z_{i0}=0$. Furthermore, suppose that the $z_i$'s are contained in a compact set on which the $f_{ij}$'s are bounded from below by a positive constant. Then the set—defined as the set where all the $z_i$'s are equal and contained in the convex hull of the $z_{i0}$'s—is attractive. Furthermore, there is a fixed point that all the $z_i$'s converge to.
Proof: We need to prove that the $z_i$'s converge to the consensus set. In light of Proposition 4, the only case left to consider is when the point $0$ is contained in the convex hull of the $z_i(t)$'s for all $t$, i.e., it is contained in $\mathcal{H}(t)$ for all $t$. We will thus only consider this case in the following, where we need to prove that all the $z_i$'s converge to the consensus set. We partition this case into two sub-cases:
1) the omega-limit set, denoted by $\Omega$, does not contain a point for which a $z_i$ is equal to zero;
2) the omega-limit set contains at least one point where at least one of the $z_i$'s is equal to zero.
We begin by considering 1). There must be a ball around the origin such that there is no time $t$ for which a $z_i(t)$ is contained in the ball. This is proven in the following way. Proposition 4 guarantees that no $z_i$ can reach the origin in finite time. Thus, at any finite time $t$ there exists a largest open ball around the origin with radius $r(t)>0$ such that no $z_i$ is contained in the ball during $[0,t]$. Assume that $r(t)\to 0$ as $t\to\infty$. This implies that there is a point in the closure of the trajectory for which one of the $z_i$'s is equal to zero. But the set containing the trajectory is compact, hence we know that such a point also will be contained in $\Omega$. This is a contradiction to the statement that such points are not contained in $\Omega$.
Now, inside the ball we replace the weights on the right-hand side of (13) by functions such that the total weights—consisting of the original ones outside the ball and the modified ones inside it—are globally Lipschitz on a set containing $\mathcal{H}(0)^n$ in its interior. Furthermore, each modified weight is chosen to be positive when the corresponding states are distinct.
Now let us study the solution of this modified system, with the replaced weights inside the ball, starting at $z_0$ at time $0$. At any finite time the solution is the same as that of the original system. However, we can use the results in Thunberg, Hu & Goncalves (2017) to show that the solution to the modified system converges to the consensus set, and in particular that all the $z_i$'s converge to a fixed point that is nonzero. This means that after some finite time, the point $0$ is not contained in $\mathcal{H}(t)$ for the modified as well as the original system. But this is a contradiction to our assumption that the point $0$ is contained in the convex hull of the $z_i$'s for all $t$.
Now we consider 2). Let us first introduce $W(t)=\max_i\|z_i(t)\|$, which is, besides continuous, monotonically decreasing. We assume that $\lim_{t\to\infty}W(t)=\varepsilon^*>0$. This means that for any $t$, the set of agents $i$ with $\|z_i(t)\|\geq\varepsilon^*$ is nonempty. We will show that this assumption leads to a contradiction at the end of the proof. This, in turn, means that all the $z_i$'s converge to $0$.
Since the continuous $f_{ij}$'s are defined on a compact set, there is $K_u$ such that the $f_{ij}$'s are bounded from above by $K_u$.
We continue by formulating a series of claims, each of which is followed by a proof. After these claims have been introduced, they are used as building blocks in the final part of the proof. Roughly, the claims can be understood as follows. The first claim says that if a $z_i$ is close to the origin, it will remain so for a specified time interval. The second claim says that if a $z_i$ has a neighbor that is close to the origin, it will be "dragged" toward the origin by this neighbor. The third claim simply says that there must be a $z_i$ close to the origin at some time. Then we show that the $z_i$ that is close to the origin will drag all the other states close to the origin; so close that their distances to the origin are smaller than $\varepsilon^*$, which, in turn, is a contradiction.
Claim 1: There is $\varepsilon_1$ satisfying $0<\varepsilon_1<\varepsilon^*$ such that if at time $t'$ there is $\bar i$ such that $z_{\bar i}(t')$ has smaller norm than $\varepsilon_1$, then $\|z_{\bar i}(t)\|<\varepsilon^*$ for $t\in[t',t'+\tau_D+T]$, where $\tau_D$ is the lower bound on the dwell time and $T$ is the length of the time interval such that the union graph is guaranteed to be strongly connected, see Section 2.
Suppose there is $\bar i$ and $t'$ such that $\|z_{\bar i}(t')\|<\varepsilon_1$. Let us consider the dynamics for $\|z_{\bar i}\|^2$. It is
$$\frac{d}{dt}\|z_{\bar i}\|^2=2\sum_{j\in N_{\bar i}(t)}f_{\bar ij}\frac{\|z_{\bar i}\|}{\|z_j\|}\big(z_{\bar i}^Tz_j-\|z_{\bar i}\|^2\big)\leq 2\sum_{j\in N_{\bar i}(t)}f_{\bar ij}\frac{\|z_{\bar i}\|}{\|z_j\|}z_{\bar i}^Tz_j\leq 2\sum_{j\in N_{\bar i}(t)}K_u\|z_{\bar i}\|^2\leq 2nK_u\|z_{\bar i}(t)\|^2.$$
Now we can use the Comparison Lemma to deduce that $\|z_{\bar i}(t)\|^2\leq e^{2nK_u(t-t')}\|z_{\bar i}(t')\|^2$ for $t\geq t'$. Now, if $\varepsilon_1$ is sufficiently small, this expression will be smaller than $(\varepsilon^*)^2$ during $[t',t'+\tau_D+T]$.
Claim 2: There is $\varepsilon_2$ satisfying $0<\varepsilon_2<\varepsilon_1$, such that for any time $t'$, if an agent has a neighbor during $[t',t'+\tau_D+T]$ whose norm is smaller than $\varepsilon_2$, then there is a time during this interval such that
# Erratum: Reactor measurement of θ13 and its complementarity to long-baseline experiments [ Phys. Rev. D 68, 033017 (2003)]
(Impact Factor: 4.64). 09/2004; 70(5):59901-. DOI: 10.1103/PhysRevD.70.059901
Source: arXiv
ABSTRACT
A possibility to measure $\sin^22\theta_{13}$ using reactor neutrinos is examined in detail. It is shown that the sensitivity $\sin^22\theta_{13}>0.02$ can be reached with 20 ton-year data by placing identical CHOOZ-like detectors at near and far distances from a giant nuclear power plant whose total thermal energy is 24.3 ${\text{GW}_{\text{th}}}$. It is emphasized that this measurement is free from the parameter degeneracies which occur in accelerator appearance experiments, and therefore the reactor measurement plays a role complementary to accelerator experiments. It is also shown that the reactor measurement may be able to resolve the degeneracy in $\theta_{23}$ if $\sin^22\theta_{13}$ and $\cos^22\theta_{23}$ are relatively large. Comment: 25 pages, 8 figures, uses revtex4 and graphicx. Several modifications added to make the text easier to understand. Two more figures added. To be published in Phys. Rev. D
##### Article: Proposal for U.S. participation in Double-CHOOZ: A New theta-13 Experiment at the Chooz Reactor
ABSTRACT: It has recently been widely recognized that a reactor anti-neutrino disappearance experiment with two or more detectors is one of the most cost-effective ways to extend our reach in sensitivity for the neutrino mixing angle theta-13 without ambiguities from CP violation and matter effects. The physics capabilities of a new reactor experiment together with superbeams and neutrino factories have also been studied but these latter are considered by many to be more ambitious projects due to their higher costs, and hence to be farther in the future. We propose to contribute to an international collaboration to modify the existing neutrino physics facility at the Chooz-B Nuclear Power Station in France. The experiment, known as Double-CHOOZ, is expected to reach a sensitivity of sine squared of twice the mixing angle > 0.03 over a three year run, 2008-2011. This would cover roughly 85% of the remaining allowed region. The costs and time to first results for this critical parameter can be minimized since our project takes advantage of an existing infrastructure.
##### Article: Combined potential of future long-baseline and reactor experiments
ABSTRACT: We investigate the determination of neutrino oscillation parameters by experiments within the next ten years. The potential of conventional beam experiments (MINOS, ICARUS, OPERA), superbeam experiments (T2K, NOvA), and reactor experiments (D-CHOOZ) to improve the precision on the "atmospheric" parameters $\Delta m^2_{31}$, $\theta_{23}$, as well as the sensitivity to $\theta_{13}$ are discussed. Further, we comment on the possibility to determine the leptonic CP-phase and the neutrino mass hierarchy if $\theta_{13}$ turns out to be large. Comment: 4 pages, 4 figures, Talk given by T.S. at the NOW2004 workshop, Conca Specchiulla (Otranto, Italy), 11--17 Sept. 2004
Nuclear Physics B - Proceedings Supplements 12/2004; 145(1). DOI:10.1016/j.nuclphysbps.2005.04.004 · 0.88 Impact Factor
# Tungsten has a density of $19.35\ \mathrm{g\,cm^{-3}}$ and the length of the side of the unit cell is 316 pm. The unit cell in the most important crystalline form of tungsten is the body-centred cubic unit cell. How many atoms of the element does 50 g of the element contain?
$1.63\times 10^{23}$
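A worked sketch of the arithmetic behind this answer (using the fact that a body-centred cubic cell contains 2 atoms, and $a = 316\ \text{pm} = 3.16\times10^{-8}\ \text{cm}$):
$$m_{\text{cell}} = \rho\, a^3 = 19.35 \times (3.16\times10^{-8})^3 \approx 6.11\times10^{-22}\ \text{g},$$
$$N = \frac{2 \times 50\ \text{g}}{m_{\text{cell}}} = \frac{100}{6.11\times10^{-22}} \approx 1.6\times10^{23}\ \text{atoms},$$
consistent with the quoted answer.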
## Public plots and other data representations
### MicroBooNE Insignia and Logos
The famous ‘Blue Swoosh’ MicroBooNE insignia. Also available with shadow text removed.
### Public Plots from the SBN Physics Proposal
#### ND_100m_uB_T600_onaxis_nue_appearance_ecalo2_nu_vePhot0_05_gap3_lessCosmics_xsec_0_flux_6_dirt_cos_sensPlot
Sensitivity of the SBN Program to $\nu_{\mu} \rightarrow \nu_{e}$ oscillation signals. All backgrounds and systematic uncertainties described in the proposal are included. The sensitivity shown corresponds to the event distributions, which include the topological cuts on cosmic backgrounds and an additional 95% rejection factor coming from an external cosmic tagging system and internal light collection system to reject cosmic rays arriving at the detector in time with the beam.
#### Nue_appearance_ecalo2_nu_vePhot0_05_gap3_fullCosmics_470m_globBF
Electron neutrino charged-current candidate distributions in MicroBooNE shown as a function of reconstructed neutrino energy. All backgrounds are shown; only muon proximity and $dE/dx$ cuts have been used to reject cosmogenic background sources. Oscillation signal events for the best-fit 3+1 oscillation parameters from Kopp et al. (JHEP 1305, 050 (2013), arXiv:1303.3011) are indicated by the white histogram on top in each distribution. $\nu_{e}$ events have an assumed reconstruction efficiency of 80%, and mis-identification from photons is taken at 6% of events passing a topological cut. The topological cut assumes that photons with more than 50 MeV of energy and a vertex displaced by more than 3 cm will be rejected.
#### Nue_appearance_ecalo2_nu_vePhot0_05_gap3_lessCosmics_470m_globBF
Electron neutrino charged-current candidate distributions in MicroBooNE shown as a function of reconstructed neutrino energy. All backgrounds are shown. For cosmogenically induced events, muon proximity and $dE/dx$ cuts are applied, and a combination of the internal light collection systems and external cosmic tagger systems at each detector is conservatively assumed to identify 95% of the triggers with a cosmic muon in the beam spill time; those events are rejected. Oscillation signal events for the best-fit 3+1 oscillation parameters from Kopp et al. (JHEP 1305, 050 (2013), arXiv:1303.3011) are indicated by the white histogram on top in each distribution. $\nu_{e}$ events have an assumed reconstruction efficiency of 80%, and mis-identification from photons is taken at 6% of events passing a topological cut. The topological cut assumes that photons with more than 50 MeV of energy and a vertex displaced by more than 3 cm will be rejected.
#### Nue_sensitivity_compare_program
Sensitivity comparisons for $\nu_{\mu} \rightarrow \nu_e$ oscillations including all backgrounds and systematic uncertainties described in the proposal assuming 6.6e20 protons on target in LAr1-ND and the ICARUS-T600 and 13.2e20 protons on target in MicroBooNE. The three curves present the significance of coverage of the LSND 99% allowed region (above) for the three different possible combinations of SBN detectors: LAr1-ND and MicroBooNE only (blue), LAr1-ND and ICARUS only (black), and all three detectors (red).
#### Numu_Evt_Dis_470m_1
Examples of $\nu_{\mu}$ disappearance signals in MicroBooNE for $\Delta m^{2} = 1 \text{eV}^2$ with 13.2e20 protons on target.
#### Numu_Evt_Dis_470m_44
Examples of $\nu_{\mu}$ disappearance signals in MicroBooNE for $\Delta m^{2} = 0.44 \text{eV}^2$ with 13.2e20 protons on target.
#### Numu_dis_sensitivity
Sensitivity prediction for the SBN program to $\nu_{\mu} \rightarrow \nu_{x}$ oscillations including all backgrounds and systematic uncertainties described in the proposal. SBN can extend the search for muon neutrino disappearance an order of magnitude beyond the combined analysis of SciBooNE and MiniBooNE.
### General cross section public plots and tables (#4331)
#### cFS
Energy distribution of BNB muon neutrino event rates in MicroBooNE for different event signatures for an 87 ton active volume. Selection efficiencies are not considered.
#### cnue
Energy distribution of BNB electron neutrino and antineutrino events in MicroBooNE for different interaction channels for an 87 ton active volume. Selection efficiencies are not considered. Separation between RES and DIS channels is based on a cut on the hadronic mass W < 2 GeV (RES) and W > 2 GeV (DIS) rather than GENIE interaction mode.
### Noise Dependence on Temperature and LAr Fill Level in the uBooNE Time Projection Chamber (#4717)
#### noise_vs_time
Noise measured on collection plane wires as a function of time. Each data point corresponds to the measured noise level for a given run. The times shown are the times at which each run was taken. For each run, a list of channels was selected such that 1) each was a collection plane wire, and 2) each had noise values within a certain range. The average and standard deviation of the distribution of RMS noise values for channels passing the cut were calculated. Data points represent the average RMS, and error bars show the standard deviation of these distributions. Error bars are meant to show how the change in temperature affects noise levels compared to the intrinsic variability of noise in the detector due to channel-to-channel gain variations. ENC values are measured in number of electrons by taking the ADC noise measured at a 14 mV/fC ASIC gain and multiplying it by [1.6E-4 (fC/e-) X 1.935 (ADC/mV) X 14 (mV/fC)]^-1. Noise values drop with the gaseous argon temperature. This is expected behavior due mainly to the properties of the CMOS ASIC chips. The red vertical line in this plot represents the time at which the LAr filling process began. After this point noise levels begin to rise. This behavior is explained in the Tech-Note in DocDB 4717 in Sec. 5. The sporadic spacing of the data points is due to irregular run-taking patterns in the very early weeks of the commissioning phase. Noise levels fluctuate slightly upwards in early and mid June. This is because the cryostat cooling was temporarily interrupted for a brief period, allowing the temperature, and thus the noise values, to rise.
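For reference, a small sketch (illustrative, not MicroBooNE code) of the ENC conversion quoted in the caption:

```python
# Convert an RMS noise value in ADC counts to equivalent noise charge (ENC),
# i.e. a number of electrons, using the factors quoted in the caption above.
def adc_rms_to_enc(adc_rms, gain_mv_per_fc=14.0, adc_per_mv=1.935,
                   fc_per_electron=1.6e-4):
    return adc_rms / (fc_per_electron * adc_per_mv * gain_mv_per_fc)

print(adc_rms_to_enc(2.0))  # 2 ADC counts of RMS noise -> roughly 460 electrons
```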
### Approved Plots at Sept. 4th Plot Approval Meeting (#4787)
#### R1532E1_black_grid_axis
A snapshot from a LArSoft-based 3D event display showing cosmic tracks entering the MicroBooNE detector. The three boxes show the full readout window of the MicroBooNE detector, which corresponds to 4.8 ms, or equivalently the total effective drift volume for running at full field strength (-128 kV on the cathode, or 500 V/cm). The red highlighted box shows the physical volume of the TPC. The colored lines shown in the boxes are 3D reconstructed tracks; different colors represent different tracks. Tracks are drawn along with their trajectory points. The data shown corresponds to cosmic run 1532, event 1, taken on the 17th of August, 2015 at 4:03 PM at -70 kV (or equivalently 273 V/cm) electric field.
Sunday, January 25, 2009
Circuit Breaker Ratings
Voltage Rating:
Circuit breakers are rated according to the maximum voltage they can handle. The voltage rating of the circuit breaker must be at least equal to the circuit voltage. The voltage rating of a circuit breaker can be higher than the circuit voltage, but never lower. For example, a 480 VAC circuit breaker could be used on a 240 VAC circuit. A 240 VAC circuit breaker could not be used on a 480 VAC circuit. The voltage rating is a function of the circuit breaker’s ability to suppress the internal arc that occurs when the circuit breaker’s contacts open.
Some circuit breakers have what is referred to as a “slash” voltage rating, such as 120/240 volts. In such cases, the breaker may be applied in a circuit where the nominal voltage between any conductor and ground does not exceed the lower rating and the nominal voltage between conductors does not exceed the higher rating.
Continuous Current Rating
Every circuit breaker has a continuous current rating which is the maximum continuous current a circuit breaker is designed to carry without tripping. The current rating is sometimes referred to as the ampere rating because the unit of measure is amperes, or, more simply, amps.
The rated current for a circuit breaker is often represented as In. This should not be confused with the current setting (Ir), which applies to those circuit breakers that have a continuous current adjustment. Ir is the maximum continuous current that the circuit breaker can carry without tripping for the given continuous current setting. Ir may be specified in amps or as a percentage of In.
As mentioned previously, conductors are rated for how much current they can carry continuously. This is commonly referred to as the conductor’s ampacity. In general, the ampere rating of a circuit breaker and the ampacity of the associated conductors must be at least equal to the sum of any non-continuous load current plus 125% of the continuous load current.
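As a quick illustration of that sizing rule (the load numbers below are hypothetical, not from the article):

```python
# Minimum breaker ampere rating = non-continuous load + 125% of continuous load.
def min_breaker_rating(continuous_amps, noncontinuous_amps):
    return noncontinuous_amps + 1.25 * continuous_amps

# e.g. a 40 A continuous load plus a 20 A non-continuous load
print(min_breaker_rating(40, 20))  # 70.0 -> breaker and conductors need >= 70 A
```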
Circuit breakers are rated on the basis of using 60° C or 75° C conductors. This means that even if a conductor with a higher temperature rating were used, the ampacity of the conductor must be figured on its 60° C or 75° C rating.
Frame Size:
The circuit breaker frame includes all the various components that make up a circuit breaker except for the trip unit. For any given frame, circuit breakers with a range of current ratings can be manufactured by installing a different trip unit for each rating. The breaker frame size is the highest continuous current rating offered for a breaker with a given frame.
Interrupting Rating:
Circuit breakers are also rated according to the maximum level of current they can interrupt. This is the interrupting rating or ampere interrupting rating (AIR). Because UL and IEC testing specifications are different, separate UL and IEC interrupting ratings are usually provided.
When designing an electrical power distribution system, a main circuit breaker must be selected that can interrupt the largest potential fault current that can occur in the selected application. The interrupting ratings for branch circuit breakers must also be taken into consideration, but these interrupting ratings will depend upon whether series ratings can be applied. Series-connected systems are discussed later in this course.
The interrupting ratings for a circuit breaker are typically specified in symmetrical RMS amperes for specific rated voltages. As discussed in Basics of Electricity, RMS stands for root-mean-square and refers to the effective value of an alternating current or voltage. The term symmetrical indicates that the alternating current value specified is centered around zero and has equal positive and negative half cycles. Siemens circuit breakers have interrupting ratings from 10,000 to 200,000 amps.
# Axes in Outer Space
By: Lee Mosher (author), Michael Handel (author). Paperback.
### Description
The authors develop a notion of axis in the Culler-Vogtmann outer space $\mathcal{X}_r$ of a finite rank free group $F_r$, with respect to the action of a nongeometric, fully irreducible outer automorphism $\phi$. Unlike the situation of a loxodromic isometry acting on hyperbolic space, or a pseudo-Anosov mapping class acting on Teichmuller space, $\mathcal{X}_r$ has no natural metric, and $\phi$ seems not to have a single natural axis. Instead these axes for $\phi$, while not unique, fit into an "axis bundle" $\mathcal{A}_\phi$ with nice topological properties: $\mathcal{A}_\phi$ is a closed subset of $\mathcal{X}_r$ proper homotopy equivalent to a line, it is invariant under $\phi$, the two ends of $\mathcal{A}_\phi$ limit on the repeller and attractor of the source-sink action of $\phi$ on compactified outer space, and $\mathcal{A}_\phi$ depends naturally on the repeller and attractor. The authors propose various definitions for $\mathcal{A}_\phi$, each motivated in different ways by train track theory or by properties of axes in Teichmuller space, and they prove their equivalence.
Michael Handel is at CUNY, Herbert H. Lehman College, Bronx, NY
### Product Details
• ISBN13: 9780821869277
• Format: Paperback
• Number Of Pages: 104
• ID: 9780821869277
• ISBN10: 0821869272
# Model selection for this model with one observation
I would like to perform model selection given a range of $k$ models $\mathcal{M}_1, \mathcal{M}_2, \dots, \mathcal{M}_k$, each with some prior probability $f(\mathcal{M}_1), \dots, f(\mathcal{M}_k)$.
I have a probability distribution for the random variable $X$ given each $\mathcal{M}_i$ and a vector of unknown parameters $\boldsymbol{\theta} \in \mathbb{R}^{m}$, where $m > 1$. This is to say that I know and can compute the probability mass function $f(x|\boldsymbol{\theta},\mathcal{M}_i)$. I do not have this in closed form but I can evaluate it.
The parameter vector $\boldsymbol{\theta}$ is independent of $\mathcal{M}_i$, that is to say $f(\boldsymbol{\theta}|\mathcal{M}_i) = f(\boldsymbol{\theta})$ for each model.
My data is of the form $\mathcal{D} = x \in \mathbb{N}$. Unfortunately, $\boldsymbol{\theta}$ is unidentifiable (via any MLE/MCMC method) from my single data point $x$.
My question is: how (if at all possible) can I perform model selection (or any kind of hypothesis test) given my single data point $x$?
My idea was to use Monte Carlo integration to find $$f(\mathcal{D}|\mathcal{M}_k) = \int_\Theta f(\mathcal{D}|\boldsymbol{\theta},\mathcal{M}_k) f(\boldsymbol{\theta})\, d\boldsymbol{\theta} \approx \frac{1}{N} \sum_{i=1}^{N} f(\mathcal{D}|\boldsymbol{\theta}^{(i)},\mathcal{M}_k),$$ where the $\boldsymbol{\theta}^{(i)}$ for $i = 1, \dots, N$ are iid samples from $f(\boldsymbol{\theta})$. Then I can compute $$f(\mathcal{M}_i|\mathcal{D}) = \frac{f(\mathcal{D}|\mathcal{M}_i) f(\mathcal{M}_i)}{\sum_j f(\mathcal{D}|\mathcal{M}_j) f(\mathcal{M}_j)}$$ for each model $i = 1, \dots, k$, and then either pick the model which maximises this posterior probability or use it to compute Bayes factors.
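A minimal sketch of that estimator (the Poisson likelihoods and gamma prior below are illustrative stand-ins; your own $f(x|\boldsymbol{\theta},\mathcal{M}_i)$ would replace them):

```python
# Monte Carlo estimate of the marginal likelihoods f(D|M_k) and the
# resulting posterior model probabilities, for a single observation x.
import numpy as np
from scipy.stats import poisson

rng = np.random.default_rng(0)

def sample_prior(n):                        # iid draws theta^(i) ~ f(theta)
    return rng.gamma(2.0, 1.0, size=(n, 2)) # illustrative shared prior on theta

def lik_m1(x, theta):                       # f(x|theta, M_1): Poisson(theta_1)
    return poisson.pmf(x, theta[:, 0])

def lik_m2(x, theta):                       # f(x|theta, M_2): Poisson(theta_1*theta_2)
    return poisson.pmf(x, theta[:, 0] * theta[:, 1])

x, N = 3, 100_000
theta = sample_prior(N)                     # prior is shared across models here
marg = np.array([lik_m1(x, theta).mean(), lik_m2(x, theta).mean()])
prior_m = np.array([0.5, 0.5])              # f(M_1), f(M_2)
post_m = marg * prior_m / (marg * prior_m).sum()
print(post_m, marg[0] / marg[1])            # posterior probabilities, Bayes factor
```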
Is this a standard technique used? Or am I missing something? How would this relate to hypothesis testing? Thanks!
• The (proper) prior removes the identifiability issue and makes the marginal likelihood well defined. From there it is indeed a correct Bayesian approach to the problem. With a single integer observation the support for one model versus another will proceed mostly from the corresponding priors. – Xi'an Mar 30 '19 at 10:58
# Puzzle, Permutation and Combination problem?
I have a puzzle here:
There are five colored balls: 2 green, 2 blue and 1 yellow
Rule 1: All balls of the same color must be adjacent to each other.
I wrote a program to find all the solutions for it. I got 24 solutions. But, how do I know that my program's calculation is right? In short I need some mathematical solution for this problem. I'm guessing I need permutation to solve this but the rule threw me off. Any help would be much appreciated. Thanks in advance guys.
• It would be useful to know if each of green and blue balls are distinct (i.e., numbered). If they are, each YXXZZ allocation has to be multiplied by 2x2. Otherwise, it is just 2: YXXZZ or YZZXX – Alex Jan 8 '17 at 18:04
You don't say exactly what you want to do with the balls, but I assume you want to place them in a line.
First, decide if the green balls will appear first, or the blue balls will appear first; you have 2 choices.
Then decide whether the yellow ball will be first, in between, or last. You have three choices.
Since both things must be chosen and the choices are independent, you multiply the numbers to get the total number of ways of doing it: $2\times 3$ or $6$ ways.
This assumes that you cannot distinguish between the two green balls, and you cannot distinguish between the two blue balls. If you can distinguish them, then you also need to decide (i) which green ball goes first (2 possible choices); and (ii) which blue ball goes first (2 possible choices). So the total number of ways would then be given by $2\times 3\times 2\times 2 = 24$ (the first 2 determines whether green or blue goes first; the 3 is the number of ways to place the yellow ball; the second 2 is the number of ways of deciding which green ball goes first among the green balls; the last 2 is the number of ways of deciding which blue ball goes first among the blue balls).
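One way to sanity-check the count of 24, in the spirit of the asker's program (a sketch, not the asker's code):

```python
# Brute force: count arrangements of the five distinct balls in a line where
# the two greens are adjacent and the two blues are adjacent.
from itertools import permutations

balls = ['G1', 'G2', 'B1', 'B2', 'Y']

def same_color_adjacent(seq):
    for color in ('G', 'B'):
        idx = [i for i, b in enumerate(seq) if b.startswith(color)]
        if max(idx) - min(idx) != len(idx) - 1:   # positions must be consecutive
            return False
    return True

print(sum(same_color_adjacent(p) for p in permutations(balls)))  # 24
```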
• Yes, I wanted to put them in line. Thank you for your help. – eggman20 May 11 '11 at 2:35
Join the greens with a green stick, the blues with a blue stick.
Now consider placing the sticks and the yellow ball. This can be done in $3! = 6$ ways. Assuming the green balls are distinct and blue are distinct, there are 2 ways to place each of the 2 sticks. So total is $6 \times 2 \times 2 = 24$.
• Wow I didn't thought of it that way. Thanks. But I have another question though, what if the green and blue balls are not distinct? – eggman20 May 11 '11 at 2:26
• @eggman20: if you cannot distinguish the green balls between them, or the blue balls between them, then all you need to do is decide which goes first, and where the yellow fits; that's the $6$ ways Moron mentions first. – Arturo Magidin May 11 '11 at 2:37
• That's what he has explained in the first two sentences. They may be considered as one entity if they aren't distinct. So the three entities can be placed in 3!=6 ways. – Shahab May 11 '11 at 2:39
# Mixed-Up Code Questions Original
Create a function called decode that takes in a parameter link and returns a string that contains the contents of the link using urllib. Between each word, there should be a space. Also, a space at the end is okay. For example, decode('http://data.pr4e.org/romeo.txt') should return 'But soft what light through yonder window breaks It is the east and Juliet is the sun Arise fair sun and kill the envious moon Who is already sick and pale with grief '.
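One possible solution sketch for this first exercise (assuming Python 3 with network access; the trailing space follows the example output):

```python
# Fetch the link with urllib and join all words with single spaces.
import urllib.request

def decode(link):
    words = []
    with urllib.request.urlopen(link) as fh:
        for line in fh:
            words.extend(line.decode().split())
    return ' '.join(words) + ' '   # a trailing space is allowed by the prompt

print(decode('http://data.pr4e.org/romeo.txt'))
```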
Create a function called write_txt that takes in a parameter link, retrieves a file from the link using urllib, and writes the contents of the link to a file called 'clown.txt'. For example, write_txt('http://data.pr4e.org/clown.txt') should create a file called 'clown.txt' that has the string 'the clown ran after the car and the car ran into the tent and the tent fell down on the clown and the car'.
Create a function called count_words that takes in a parameter link, retrieves a file from the link using urllib, and returns a dictionary with words as keys and the number of times they appear in the link as values. For example, count_words('http://data.pr4e.org/romeo.txt') should return {'But': 1, 'soft': 1, 'what': 1, 'light': 1, 'through': 1, 'yonder': 1, 'window': 1, 'breaks': 1, 'It': 1, 'is': 3, 'the': 3, 'east': 1, 'and': 3, 'Juliet': 1, 'sun': 2, 'Arise': 1, 'fair': 1, 'kill': 1, 'envious': 1, 'moon': 1, 'Who': 1, 'already': 1, 'sick': 1, 'pale': 1, 'with': 1, 'grief': 1}.
Write a function called write_jpg that takes in a parameter img_link, retrieves a file from the img_link using urllib, and writes the contents of the img_link to a file called 'cover.jpg'. For example, write_jpg('http://data.pr4e.org/cover3.jpg') should create a file called 'cover.jpg' that has an image of the cover for "PYTHON FOR EVERYBODY".
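One possible sketch; the key point is that an image is binary data, so the file must be opened in 'wb' mode and the bytes written without decoding:

```python
import urllib.request

def write_jpg(img_link):
    # Images are binary, so read raw bytes and write in binary mode.
    img = urllib.request.urlopen(img_link).read()
    with open('cover.jpg', 'wb') as fh:
        fh.write(img)
```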
Write a function called num_chars that takes in a parameter link, retrieves a file from the link using urllib, and returns the number of characters in link in the format "(num) characters". For example, num_chars('http://data.pr4e.org/romeo-full.txt') should return "8864 characters".
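A minimal sketch of one solution:

```python
import urllib.request

def num_chars(link):
    # Count characters in the decoded text and format the answer.
    text = urllib.request.urlopen(link).read().decode()
    return str(len(text)) + ' characters'
```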
Write a function called contents that takes in a parameter link, retrieves a file from the link using sockets, and returns as a string the contents of the link (specifically, the first 10000 characters received, which include the HTTP response headers). For example, contents('http://data.pr4e.org/clown.txt') should return "HTTP/1.1 200 OK\nDate: Thu, 12 Aug 2021 01:24:15 GMT\nServer: Apache/2.4.18 (Ubuntu)\nLast-Modified: Sat, 13 May 2017 11:22:22 GMT\nETag: '6a-54f6609240717'\nAccept-Ranges: bytes\nContent-Length: 106\nCache-Control: max-age=0, no-cache, no-store, must-revalidate\nPragma: no-cache\nExpires: Wed, 11 Jan 1984 05:00:00 GMT\nConnection: close\nContent-Type: text/plain\n\nthe clown ran after the car and the car ran into the tent and the tent fell down on the clown and the car\n".
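A sketch in the spirit of the course's socket examples; it assumes a plain 'http://host/path' link served on port 80 and collects up to 10000 bytes:

```python
import socket

def contents(link):
    # Extract the host name from a link like 'http://data.pr4e.org/clown.txt'.
    host = link.split('/')[2]
    mysock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    mysock.connect((host, 80))
    mysock.send(('GET ' + link + ' HTTP/1.0\r\n\r\n').encode())
    data = b''
    while len(data) < 10000:
        chunk = mysock.recv(10000 - len(data))
        if len(chunk) < 1:     # server closed the connection
            break
        data = data + chunk
    mysock.close()
    return data.decode()       # headers plus body, as in the example
```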
Write a function called reg_num_links that takes in a parameter url and returns the number of ‘href’ attributes that start with ‘http’ using regular expressions. Since websites are frequently updated, the returned number may change as links get added and deleted.
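One possible sketch; it assumes the href values in the page are double-quoted, e.g. href="http://...":

```python
import re
import urllib.request

def reg_num_links(url):
    # Find every double-quoted href value that starts with 'http'.
    html = urllib.request.urlopen(url).read().decode()
    return len(re.findall('href="(http[^"]*)"', html))
```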
Write a function called bsoup_num_links that takes in a parameter url and returns the number of ‘href’ attributes that start with ‘http’ using BeautifulSoup. Since websites are frequently updated, the returned number may change as links get added and deleted.
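A sketch of one solution; it needs the bs4 package installed and counts every tag that carries an href attribute, not just anchor tags:

```python
import urllib.request
from bs4 import BeautifulSoup

def bsoup_num_links(url):
    soup = BeautifulSoup(urllib.request.urlopen(url).read(), 'html.parser')
    count = 0
    for tag in soup.find_all(href=True):   # every tag with an href attribute
        if tag['href'].startswith('http'):
            count = count + 1
    return count
```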
Write a function called img_links that takes in a parameter url and returns a list that contains all image links using BeautifulSoup. Since websites are frequently updated, the returned list of image links may change as image links get added and deleted.
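A sketch assuming "image links" means the src attribute of each img tag; other readings of the prompt are possible:

```python
import urllib.request
from bs4 import BeautifulSoup

def img_links(url):
    soup = BeautifulSoup(urllib.request.urlopen(url).read(), 'html.parser')
    # Collect the src attribute of every <img> tag on the page.
    return [img.get('src') for img in soup('img')]
```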
Write a function called span_attrs that takes in a parameter url and returns a list of dictionaries using BeautifulSoup. Each dictionary is equivalent to each span tag. The keys of the dictionary are the attributes of the span tag, and the values of the dictionary are the values of the attributes. Since websites are frequently updated, the returned list of dictionaries may change as span tags, attributes, and values get added, deleted, or modified.
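A minimal sketch; note that BeautifulSoup already exposes a tag's attributes as a dictionary via tag.attrs, and multi-valued attributes such as class appear as lists:

```python
import urllib.request
from bs4 import BeautifulSoup

def span_attrs(url):
    soup = BeautifulSoup(urllib.request.urlopen(url).read(), 'html.parser')
    # One attribute dictionary per <span> tag, copied out of tag.attrs.
    return [dict(span.attrs) for span in soup('span')]
```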
|
2023-03-28 23:47:15
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.39662933349609375, "perplexity": 4313.106853566043}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296948900.50/warc/CC-MAIN-20230328232645-20230329022645-00706.warc.gz"}
|
http://ibmaths4u.com/viewtopic.php?f=11&t=294&view=print
|
### IB Maths SL Trigonometric equations
Posted: Sat Apr 13, 2013 4:43 am
IB Mathematics SL – Trigonometry, Trigonometric equations
How can we solve the following trigonometric equation?
$2\cos^2 x = \cos x + 1$ for $-\pi \leq x \leq \pi$
Thanks
### Re: IB Maths SL Trigonometric equations
Posted: Sat Apr 13, 2013 4:51 am
IB Maths SL – Trigonometry, Trigonometric equations
By treating the equation as a quadratic in $\cos x$ and then factoring.
$2\cos^2 x = \cos x + 1 \Leftrightarrow 2\cos^2 x - \cos x - 1 = 0 \Leftrightarrow$
$\Leftrightarrow (2\cos x + 1)(\cos x - 1) = 0 \Leftrightarrow \cos x - 1 = 0 \ \text{or} \ 2\cos x + 1 = 0$
Now, you have to solve two trigonometric equations
$\cos x - 1 = 0$ and $2\cos x + 1 = 0$ in the interval $[-\pi, \pi]$
For the first equation we have:
$\cos x - 1 = 0 \Leftrightarrow \cos x = 1$
The above equation has only one solution in the interval $[-\pi, \pi]$:
$x = 0$
For the second equation we have:
$2\cos x + 1 = 0 \Leftrightarrow \cos x = -\frac{1}{2}$
The above equation has two solutions in the interval $[-\pi, \pi]$:
$x = -\pi + \frac{\pi}{3} = -\frac{2\pi}{3} \ \text{or} \ x = \pi - \frac{\pi}{3} = \frac{2\pi}{3}$
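If you want to double-check the solutions with a computer algebra system, here is a quick sketch using Python's sympy library (assuming it is installed):

```python
import sympy as sp

# Solve 2*cos(x)**2 = cos(x) + 1 on the interval [-pi, pi].
x = sp.symbols('x')
equation = sp.Eq(2 * sp.cos(x)**2, sp.cos(x) + 1)
solutions = sp.solveset(equation, x, domain=sp.Interval(-sp.pi, sp.pi))
print(solutions)  # expected: {-2*pi/3, 0, 2*pi/3}
```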
Hope these help!!
### Re: IB Maths SL Trigonometric equations
Posted: Sat Apr 13, 2013 5:07 am
Thanks a lot!
|
2018-08-17 19:55:37
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 14, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9204367995262146, "perplexity": 2014.690908494557}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-34/segments/1534221212768.50/warc/CC-MAIN-20180817182657-20180817202657-00241.warc.gz"}
|
https://ftp.aimsciences.org/article/doi/10.3934/jimo.2017023
|
# American Institute of Mathematical Sciences
October 2017, 13(4): 1883-1899. doi: 10.3934/jimo.2017023
## A new analytical model for optimized cognitive radio networks based on stochastic geometry
Department of Mathematical Sciences, Korea Advanced Institute of Science and Technology, Daejeon, Republic of Korea
The reviewing process of the paper was handled by Wuyi Yue and Yutaka Takahashi as Guest Editors
Received September 2015; Revised June 2016; Published April 2017
In this paper, we consider an underlay-type cognitive radio network with multiple secondary users who contend to access multiple heterogeneous licensed channels. With the help of stochastic geometry, we develop a new analytical model to analyze a random channel access protocol in which each secondary user decides whether to access a licensed channel according to a given access probability. In our analysis we introduce the so-called interference-free region to derive the coverage probability for an arbitrary secondary user. The interference-free region lets us approximate, in a simple way, the interference at an arbitrary secondary user from primary users as well as from other secondary users. Based on our analytical model, we obtain the optimal access probabilities that maximize the throughput. Numerical examples are provided to validate our analysis.
Citation: Seunghee Lee, Ganguk Hwang. A new analytical model for optimized cognitive radio networks based on stochastic geometry. Journal of Industrial and Management Optimization, 2017, 13 (4) : 1883-1899. doi: 10.3934/jimo.2017023
[Figures omitted from the text extraction; their captions: the interference-free region, the probability that the sensed channel is idle, the coverage probability, and the throughput.]
The optimal point obtained from analysis under parameter sets (a) to (d)
| Parameter set | Optimal point $\mathbf{b}_A^*$ |
|---|---|
| (a) $\lambda_{s}=0.001$, $T_1=0.0001$ | (0.5722, 0.4278) |
| (b) $\lambda_{s}=0.001$, $T_1=0.001$ | (0.5198, 0.4802) |
| (c) $\lambda_{s}=0.005$, $T_1=0.0001$ | (0.3714, 0.4143) |
| (d) $\lambda_{s}=0.005$, $T_1=0.001$ | (0.3688, 0.3925) |
Throughput over $b_1$ and $b_2$ ($\lambda_s=0.001$, $T_1=0.0001$)

| $b_1 \backslash b_2$ | 0.41 | 0.42 | 0.43 | 0.44 | 0.45 |
|---|---|---|---|---|---|
| 0.55 | 0.611956 | 0.616004 | 0.620971 | 0.626117 | 0.630063 |
| 0.56 | 0.616728 | 0.621303 | 0.626249 | 0.630435 | - |
| 0.57 | 0.621165 | 0.625826 | 0.630827 | - | - |
| 0.58 | 0.626057 | 0.630756 | - | - | - |
| 0.59 | 0.630405 | - | - | - | - |
Throughput over $b_1$ and $b_2$ ($\lambda_s=0.001$, $T_1=0.001$)

| $b_1 \backslash b_2$ | 0.46 | 0.47 | 0.48 | 0.49 | 0.50 |
|---|---|---|---|---|---|
| 0.50 | 0.656962 | 0.661976 | 0.667183 | 0.671202 | 0.675827 |
| 0.51 | 0.662102 | 0.666946 | 0.672465 | 0.676458 | - |
| 0.52 | 0.667741 | 0.672843 | 0.677074 | - | - |
| 0.53 | 0.672488 | 0.676938 | - | - | - |
| 0.54 | 0.676835 | - | - | - | - |
Throughput over $b_1$ and $b_2$ ($\lambda_s=0.005$, $T_1=0.0001$)

| $b_1 \backslash b_2$ | 0.39 | 0.4 | 0.41 | 0.42 | 0.43 |
|---|---|---|---|---|---|
| 0.35 | 0.236905 | 0.237417 | 0.238089 | 0.237907 | 0.23775 |
| 0.36 | 0.23791 | 0.237729 | 0.237724 | 0.238186 | 0.238567 |
| 0.37 | 0.237818 | 0.237955 | 0.237926 | 0.2384 | 0.238395 |
| 0.38 | 0.237836 | 0.238351 | 0.238569 | 0.238292 | 0.238104 |
| 0.39 | 0.238228 | 0.238257 | 0.238475 | 0.238197 | 0.238343 |
Throughput over $b_1$ and $b_2$ ($\lambda_s=0.005$, $T_1=0.001$)

| $b_1 \backslash b_2$ | 0.37 | 0.38 | 0.39 | 0.4 | 0.41 |
|---|---|---|---|---|---|
| 0.35 | 0.243218 | 0.244021 | 0.243855 | 0.244004 | 0.243705 |
| 0.36 | 0.243233 | 0.243758 | 0.243913 | 0.243568 | 0.243585 |
| 0.37 | 0.243763 | 0.244133 | 0.243675 | 0.243949 | 0.243908 |
| 0.38 | 0.243935 | 0.243805 | 0.243465 | 0.243912 | 0.2435 |
| 0.39 | 0.24342 | 0.2437 | 0.243563 | 0.243473 | 0.243394 |
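As a quick cross-check of the tables above, here is a small Python sketch (not from the paper) that scans the complete 5x5 grid for parameter set (c) and reports the pair with the largest tabulated throughput. It assumes rows index $b_1$ and columns index $b_2$, which is consistent with the analytical optimum (0.3714, 0.4143) in the table of optimal points:

```python
# Scan the throughput grid for lambda_s = 0.005, T_1 = 0.0001
# (parameter set (c)) for its maximizing (b1, b2) pair.
b1_vals = [0.35, 0.36, 0.37, 0.38, 0.39]   # assumed row labels
b2_vals = [0.39, 0.40, 0.41, 0.42, 0.43]   # assumed column labels
grid = [
    [0.236905, 0.237417, 0.238089, 0.237907, 0.23775],
    [0.23791, 0.237729, 0.237724, 0.238186, 0.238567],
    [0.237818, 0.237955, 0.237926, 0.2384, 0.238395],
    [0.237836, 0.238351, 0.238569, 0.238292, 0.238104],
    [0.238228, 0.238257, 0.238475, 0.238197, 0.238343],
]
best = max(
    ((b1, b2, grid[i][j])
     for i, b1 in enumerate(b1_vals)
     for j, b2 in enumerate(b2_vals)),
    key=lambda triple: triple[2],
)
print(best)  # (0.38, 0.41, 0.238569), near the analytical optimum (0.3714, 0.4143)
```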
|
2022-06-28 10:05:02
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.21832771599292755, "perplexity": 491.9443241049708}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103360935.27/warc/CC-MAIN-20220628081102-20220628111102-00580.warc.gz"}
|