https://roxxcloud.com/spintronic-reservoir-computing-without-driving-current-or-magnetic-field/
# Spintronic reservoir computing without driving current or magnetic field
### Definition of magnetic field and relaxation time
The magnetic field $$\mathbf H$$ relates to the energy density $$\varepsilon$$ as $$\mathbf H=-\partial \varepsilon /\partial (M \mathbf m)$$, and therefore, is obtained from Eq. (1) as
$$\mathbf H=\begin{pmatrix} -4\pi M N_x m_x \\ -4\pi M N_y m_y \\ \left[ \left( 2K_1/M \right) -4\pi M N_z \right] m_z + \left( 4K_2/M \right) \left( 1-m_z^2 \right) m_z \end{pmatrix}.$$
(9)
We should note that the magnetization dynamics described by the LLG equation are unchanged by adding a term proportional to $$\mathbf m$$ to $$\mathbf H$$, because the LLG equation conserves the magnitude of $$\mathbf m$$. Adding such a term corresponds to shifting the origin of the energy density $$\varepsilon$$ by a constant. In the present case, we added the term $$4\pi M N_x\mathbf m$$ to $$\mathbf H$$ and obtained Eq. (2); recall that $$N_x=N_y$$ because we assume a cylinder-shaped MTJ. The added term shifts the origin of the energy density $$\varepsilon$$ by the constant $$-2\pi M^2 N_x \mathbf m^2=-2\pi M^2 N_x$$ (since $$\mathbf m^2=1$$) and makes it depend on $$m_z$$ only.
The LLG equation in the present system can be integrated as
$$t=\frac{1+\alpha ^2}{\alpha \gamma \left( H_{\mathrm{K}1}+H_{\mathrm{K}2}\right) }\left[ \log \left( \frac{\cos \theta _{\mathrm f}}{\cos \theta _{\mathrm i}}\right) -\frac{H_{\mathrm{K}1}+H_{\mathrm{K}2}}{H_{\mathrm{K}1}}\log \left( \frac{\sin \theta _{\mathrm f}}{\sin \theta _{\mathrm i}}\right) +\frac{H_{\mathrm{K}2}}{2H_{\mathrm{K}1}}\log \left( \frac{H_{\mathrm{K}1}+H_{\mathrm{K}2} \sin ^2\theta _{\mathrm f}}{H_{\mathrm{K}1}+H_{\mathrm{K}2} \sin ^2\theta _{\mathrm i}}\right) \right] ,$$
(10)
where $$\theta _{\mathrm i}$$ and $$\theta _{\mathrm f}$$ are the initial and final values of $$\theta =\cos ^{-1}m_z$$. Equation (10) provides the relaxation time from $$\theta =\theta _{\mathrm i}$$ to $$\theta =\theta _{\mathrm f}$$. Note that the relaxation time is scaled by $$\alpha \gamma H_{\mathrm{K}1}/(1+\alpha ^2)$$ and $$H_{\mathrm{K}2}/H_{\mathrm{K}1}$$, which can be manipulated by the voltage control of magnetic anisotropy. We also note that Eq. (10) diverges logarithmically because the relaxation dynamics are asymptotic: the magnetization approaches a stable state only as $$t\rightarrow \infty$$.
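As a sanity check, Eq. (10) can be evaluated numerically. The sketch below uses illustrative parameter values ($$\alpha =0.005$$, $$\gamma =1.764\times 10^7$$ rad/(Oe s), $$H_{\mathrm{K}1}=-50$$ Oe, $$H_{\mathrm{K}2}=500$$ Oe); these are assumptions for illustration, not necessarily the paper's exact settings.

```python
import math

def relaxation_time(theta_i, theta_f, hk1, hk2, alpha=0.005, gamma=1.764e7):
    """Relaxation time of Eq. (10); angles in radians, fields in Oe,
    gamma in rad/(Oe*s). Default parameter values are illustrative."""
    prefactor = (1.0 + alpha**2) / (alpha * gamma * (hk1 + hk2))
    bracket = (
        math.log(math.cos(theta_f) / math.cos(theta_i))
        - (hk1 + hk2) / hk1 * math.log(math.sin(theta_f) / math.sin(theta_i))
        + hk2 / (2.0 * hk1) * math.log(
            (hk1 + hk2 * math.sin(theta_f) ** 2)
            / (hk1 + hk2 * math.sin(theta_i) ** 2)
        )
    )
    return prefactor * bracket

# Relaxation from 5 degrees toward the tilted state (m_z^2 = 0.9, ~18.4 degrees)
t = relaxation_time(math.radians(5), math.radians(15), hk1=-50.0, hk2=500.0)
```

For these values the tilted equilibrium lies at $$m_z^2=0.9$$, and the time to relax from $$\theta =5^\circ$$ to $$15^\circ$$ comes out on the order of a few hundred nanoseconds.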
### Role of spin-transfer torque
We neglected spin-transfer torque in the main text because the current magnitude in a typical MTJ used for the voltage control of magnetic anisotropy effect is usually small. For example, using typical values47,49 for the voltage (0.4 V), resistance (60 k$$\Omega$$), and cross-sectional area ($$\pi \times 60^2$$ nm$$^2$$), the current density is about 0.06 MA/cm$$^2$$ (6.7 $$\mu$$A in terms of current). Such a value is sufficiently small compared with that used in spin-transfer torque switching experiments54. To verify the argument, we perform numerical simulations, where spin-transfer torque, $$-H_{\mathrm s} \mathbf m \times (\mathbf p \times \mathbf m)$$, is added to the right-hand side of Eq. (3). We fix $$H_{\mathrm{K}2}=500$$ Oe and $$H_{\mathrm{K}1}=-0.1 H_{\mathrm{K}2}=-50$$ Oe. The unit vector $$\mathbf p$$ along the magnetization direction in the reference layer points in the positive z direction. The spin polarization P in the spin-transfer torque strength, $$H_{\mathrm s}=\hbar P j/(2eMd)$$, is assumed to be 0.5. Figure 4a shows the time evolution of $$\mathbf m$$ for a current density j of 0.06 MA/cm$$^2$$. Although the magnetization moves slightly from the initial (stable) state due to spin-transfer torque, the change of the magnetization direction is small compared with that shown in Fig. 1b. Therefore, we do not consider that spin-transfer torque plays a major role in physical reservoir computing, although the current cannot be completely zero in experiments.
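The macrospin simulation described above can be sketched as follows. This is a minimal sketch, not the paper's code: it integrates the LLG equation with the anisotropy field of Eq. (2) and the spin-transfer torque $$-H_{\mathrm s}\mathbf m\times (\mathbf p\times \mathbf m)$$ by the fourth-order Runge-Kutta method; the damping constant, gyromagnetic ratio, and field values are assumed for illustration.

```python
import math

GAMMA = 1.764e7  # gyromagnetic ratio in rad/(Oe*s); assumed value

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def llg_rhs(m, hk1, hk2, alpha, hs, p=(0.0, 0.0, 1.0)):
    """dm/dt with anisotropy field H = [H_K1 + H_K2(1 - m_z^2)] m_z e_z
    plus a spin-transfer torque term (a sketch, not the paper's Eq. (3))."""
    h = (0.0, 0.0, (hk1 + hk2 * (1.0 - m[2]**2)) * m[2])
    mxh = cross(m, h)
    mxmxh = cross(m, mxh)
    mxpxm = cross(m, cross(p, m))
    c = -GAMMA / (1.0 + alpha**2)
    return tuple(c * (mxh[i] + alpha * mxmxh[i]) - GAMMA * hs * mxpxm[i]
                 for i in range(3))

def integrate(m, t_end, dt=1e-12, hk1=-50.0, hk2=500.0, alpha=0.05, hs=0.0):
    """Fourth-order Runge-Kutta integration of the unit magnetization;
    alpha is set large here only to shorten the illustration."""
    for _ in range(int(t_end / dt)):
        k1 = llg_rhs(m, hk1, hk2, alpha, hs)
        k2 = llg_rhs(tuple(m[i] + 0.5*dt*k1[i] for i in range(3)), hk1, hk2, alpha, hs)
        k3 = llg_rhs(tuple(m[i] + 0.5*dt*k2[i] for i in range(3)), hk1, hk2, alpha, hs)
        k4 = llg_rhs(tuple(m[i] + dt*k3[i] for i in range(3)), hk1, hk2, alpha, hs)
        m = tuple(m[i] + dt*(k1[i] + 2*k2[i] + 2*k3[i] + k4[i])/6.0 for i in range(3))
    return m
```

Starting near the easy axis with $$H_{\mathrm s}=0$$, the magnetization relaxes toward the tilted state $$m_z^2=1+H_{\mathrm{K}1}/H_{\mathrm{K}2}=0.9$$ while the RK4 scheme keeps $$|\mathbf m|$$ essentially constant.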
For comprehensiveness, however, we also show the magnetization dynamics when the current density j is increased by one order of magnitude. Figure 4b shows the dynamics for $$j=0.6$$ MA/cm$$^2$$, where magnetization switching by spin-transfer torque is observed. We note that this current density is still sufficiently small compared with that used in typical MTJs in nonvolatile memory54. Nevertheless, the switching is observed because of the small magnetic anisotropy field in the present system. We assume that $$H_{\mathrm{K}2}$$ is finite and $$|H_{\mathrm{K}1}|<H_{\mathrm{K}2}$$ to make a tilted state of the magnetization [$$m_z=\pm \sqrt{1-(-H_{\mathrm{K}1}/H_{\mathrm{K}2})}$$] stable, for the following reason. Recall that there are other stable states, such as $$m_z=\pm 1$$ for $$H_{\mathrm{K}1}>0$$ and $$m_z=0$$ for $$H_{\mathrm{K}1}<0$$, when $$H_{\mathrm{K}2}=0$$. Note that these states ($$m_z=\pm 1$$ or $$m_z=0$$) are always local extrema of the energy landscape. Accordingly, once the magnetization saturates to one of these states, it cannot change direction even if another input is injected. This conclusion can be understood in a different way: the relaxation time given by Eq. (10) diverges when $$\theta _{\mathrm i}=0$$ ($$m_z=+1$$), $$\pi$$ ($$m_z=-1$$), or $$\pi /2$$ ($$m_z=0$$) is substituted. On the other hand, for a finite $$H_{\mathrm{K}2}$$, the magnetization can move from the state $$m_z=\pm \sqrt{1-(-H_{\mathrm{K}1}/H_{\mathrm{K}2})}$$ when an input signal changes the value of $$H_{\mathrm{K}1}$$ and makes the state no longer an extremum. We note that the assumption $$|H_{\mathrm{K}1}|<H_{\mathrm{K}2}$$ restricts the magnitude of the magnetic field. In fact, the magnitude of $$\mathbf H$$ is small due to the small value of $$H_{\mathrm{K}2}=500$$ Oe found in experiments47,48 and the restriction $$|H_{\mathrm{K}1}|<H_{\mathrm{K}2}$$.
Since the critical current density destabilizing the magnetization by the spin-transfer effect is proportional to the magnitude of the magnetic field, a small $$\mathbf H$$ implies that even the small current mentioned above might induce large-amplitude magnetization dynamics.
In summary, the magnitude of the current density is sufficiently small, and the magnetization dynamics are mainly driven by the voltage control of magnetic anisotropy effect. The condition to stabilize a tilted state, however, might make the magnitude of the magnetic field, as well as the critical current density of spin-transfer torque switching, small. Thus, even a small current may cause non-negligible dynamics. At the same time, it is practically difficult to increase the current magnitude by one order of magnitude, and therefore, in the present study, we still consider the voltage control of magnetic anisotropy effect to be the main driving force of the magnetization dynamics.
### Evaluation method of memory capacity
The memory capacity corresponds to the number of data which can be reproduced from the output data, as mentioned in the main text. The evaluation of the memory capacity consists of two processes. During the first process called training (or learning), weights are determined to reproduce the target data from the output data. In the second process, the reproducibility of the target data defined from other input data is evaluated.
Let us first describe the training process. We inject random binary inputs $$b_k=0$$ or 1 into the MTJ as voltage pulses. The number of random inputs is N. The input $$b_k$$ is converted to the first-order magnetic anisotropy field through the voltage control of magnetic anisotropy, which is described by Eq. (6). We choose $$m_z$$ as the output data, which can be measured experimentally through the magnetoresistance effect. Figure 5a shows an example of the time evolution of $$m_z$$ in the presence of several random binary inputs, where the values of the parameters are those at the maximum STM capacity conditions, i.e., the pulse width and the first-order magnetic anisotropy field are $$t_{\mathrm p}=69$$ ns and $$H_{\mathrm{K}1}^{(1)}=-430$$ Oe. As can be seen, the injection of the random inputs drives the dynamics of $$m_z$$.
The dynamical response $$m_z(t)$$ during the presence of the kth input $$b_k$$ is divided into nodes, where the number of nodes is $$N_{\mathrm{node}}$$. We denote the $$i(=1,2,\ldots ,N_{\mathrm{node}})$$th output with respect to the kth input as $$u_{k,i}=m_z(t_0+(k-1)t_{\mathrm p}+i(t_{\mathrm p}/N_{\mathrm{node}}))$$, where $$t_0$$ is the time for washout. The output $$u_{k,i}$$ is regarded as the status of the ith neuron at a discrete time k. Figure 5b shows an example of the time evolution of $$m_z$$ with respect to an input pulse, whereas the dots in the inset of the figure are the nodes $$u_{k,i}$$ defined from $$m_z$$. The method of defining such virtual neurons is called the time-multiplexing method15,20,21. We also introduce the bias term $$u_{k,N_{\mathrm{node}}+1}=1$$. In the training process, we introduce weights $$w_{D,i}$$ and evaluate their values to minimize the error,
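The time-multiplexing sampling above can be sketched as follows. The function and its argument layout are illustrative (a uniformly sampled $$m_z$$ trace is assumed), not the paper's implementation.

```python
import numpy as np

def extract_nodes(mz, dt, t0, tp, n_inputs, n_node):
    """Sample a uniformly spaced m_z(t) trace (one value per time step dt)
    into virtual nodes u[k, i] = m_z(t0 + k*tp + i*(tp/n_node)), with k
    zero-based here, and append the constant bias node in the last column."""
    u = np.empty((n_inputs, n_node + 1))
    for k in range(n_inputs):
        for i in range(1, n_node + 1):
            t = t0 + k * tp + i * (tp / n_node)
            u[k, i - 1] = mz[int(round(t / dt))]
    u[:, n_node] = 1.0  # bias term u_{k, N_node+1} = 1
    return u
```

Each row of `u` is then the reservoir state (the $$N_{\mathrm{node}}$$ virtual neurons plus the bias) at discrete time k.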
$$\sum _{k=1}^{N}\left( \sum _{i=1}^{N_{\mathrm{node}}+1}w_{D,i}u_{k,i} -y_{k,D}\right) ^2,$$
(11)
where $$y_{k,D}$$ are the target data defined by Eqs. (4) and (5). For simplicity, we omit the superscripts such as “STM” and “PC” in the target data because the difference between the evaluation methods of the STM and PC capacities lies merely in the definition of the target data. In the following, we add superscripts or subscripts, such as “STM” and “PC”, when distinguishing quantities related to these capacities is necessary. A weight should be introduced for each set of target data. Accordingly, we denote the weight used to evaluate the STM (PC) capacity as $$w_{D,i}^{\mathrm{STM(PC)}}$$ when necessary. Also, we note that the weights are different for each delay D.
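Minimizing Eq. (11) is an ordinary linear least-squares problem, so the weights have a closed-form solution. A minimal sketch (the paper does not specify its solver, so `lstsq` here is an assumption):

```python
import numpy as np

def train_weights(u, y):
    """Minimize sum_k (sum_i w_{D,i} u_{k,i} - y_{k,D})^2, Eq. (11), by
    ordinary least squares. u has shape (N, N_node+1), with the bias
    column already included; y has shape (N,)."""
    w, *_ = np.linalg.lstsq(u, y, rcond=None)
    return w
```

For the STM task the target would simply be the delayed input, $$y_{k,D}=b_{k-D}$$, with one weight vector per delay D.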
Once the weights are determined, we inject other random binary inputs $$b_k^\prime$$ into the reservoir, where the number of input data is $$N^\prime$$. Note that $$N^\prime$$ is not necessarily the same as N. Here, we use the prime symbol to distinguish these input data from those used in training. Similarly, we denote the output and target data with respect to $$b_k^\prime$$ as $$u_{n,i}^\prime$$ and $$y_{n,D}^\prime$$, respectively, where $$n=1,2,\ldots ,N^\prime$$. From the output data $$u_{n,i}^\prime$$ and the weights $$w_{D,i}$$, we define the system output $$v_{n,D}^\prime$$ as
$$v_{n,D}^\prime =\sum _{i=1}^{N_{\mathrm{node}}+1}w_{D,i}u_{n,i}^\prime .$$
(12)
Figure 5c shows an example of the comparison between the target data $$y_{n,D}^\prime$$ (red line) and the system output $$v_{n,D}^\prime$$ (blue dots) of the STM task with $$D=1$$. It is shown that the system output reproduces the target data well. The reproducibility of the target data is quantified by the correlation coefficient $$\mathrm{Cor}(D)$$ between $$y_{n,D}^\prime$$ and $$v_{n,D}^\prime$$ defined as
$$\mathrm{Cor}(D)\equiv \frac{ \sum _{n=1}^{N^\prime }\left( y_{n,D}^\prime - \langle y_{n,D}^\prime \rangle \right) \left( v_{n,D}^\prime - \langle v_{n,D}^\prime \rangle \right) }{ \sqrt{ \sum _{n=1}^{N^\prime }\left( y_{n,D}^\prime - \langle y_{n,D}^\prime \rangle \right) ^2 \sum _{n=1}^{N^\prime }\left( v_{n,D}^\prime - \langle v_{n,D}^\prime \rangle \right) ^2 } },$$
(13)
where $$\langle \cdots \rangle$$ represents the averaged value. Note that the correlation coefficients are defined for each delay D. We also note that the correlation coefficients are defined for each kind of capacity, as in the case of the weights and target data. In general, $$[\mathrm{Cor}(D)]^2 \le 1$$, where $$[\mathrm{Cor}(D)]^2=1$$ holds only when the system output completely reproduces the target data. Figure 5d shows an example of the dependence of $$[\mathrm{Cor}(D)]^2$$ for the STM task on the delay D. The results imply that the reservoir reproduces the target data well up to $$D=3$$, whereas the reproducibility drastically decreases as the delay D increases. The STM and PC capacities, $$C_{\mathrm{STM}}$$ and $$C_{\mathrm{PC}}$$, are defined as
$$C=\sum _{D=1}^{D_{\mathrm{max}}}\left[ \mathrm{Cor}(D)\right] ^2.$$
(14)
Note that this definition of the memory capacity follows, for example, Refs.18,20,21,25, where the memory capacity in Eq. (14) is defined by the correlation coefficients starting from $$D=1$$. In some papers, such as Refs.15,30, however, the square of the correlation coefficient at $$D=0$$ is added to the right-hand side of Eq. (14).
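Equations (13) and (14) together can be sketched compactly; the data layout below (one array of outputs and one of targets per delay) is an assumption for illustration.

```python
import numpy as np

def memory_capacity(v_by_delay, y_by_delay):
    """Eq. (14): sum over the delay D of the squared correlation
    coefficient, Eq. (13), between system output v'_{n,D} and target
    y'_{n,D}. v_by_delay[d] and y_by_delay[d] hold the data for delay
    D = d + 1 (i.e., D_max = len(v_by_delay))."""
    capacity = 0.0
    for v, y in zip(v_by_delay, y_by_delay):
        capacity += np.corrcoef(v, y)[0, 1] ** 2
    return capacity
```

`np.corrcoef` implements exactly the mean-subtracted ratio of Eq. (13), so each term lies in [0, 1] and the capacity is bounded by $$D_{\mathrm{max}}$$.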
In the present study, we introduce $$N_{\mathrm{node}}=250$$ nodes and use $$N=1000$$ and $$N^\prime =1000$$ random binary pulses for training of the weights and evaluation of the memory capacity, respectively. The number of nodes is chosen so that the value of the capacity saturates as the number of nodes increases; see the inset of Fig. 5d. We also use 300 random binary pulses before the training and between training and evaluation for washout. The maximum delay $$D_{\mathrm{max}}$$ is 20. Note that the value of each node should be sampled within a few hundred picoseconds: specifically, in the case of the example shown in Fig. 2c, it is necessary to sample data within $$t_{\mathrm p}/N_{\mathrm{node}}=69\ \mathrm{ns}/250 \simeq 276$$ ps. We emphasize that it is experimentally possible to sample data within such a short time. For example, in Ref.21, $$t_{\mathrm p}=20$$ ns and $$N_{\mathrm{node}}=200$$ were used, where the sampling step is 100 ps.
The evaluation procedure of the NMSE in the NARMA task is similar to that of the memory capacity. The binary input data, $$b_k=0$$ or 1, in the evaluation of the memory capacity are replaced by uniform random numbers $$r_k$$ in (0, 1). The variable $$z_k$$ in Eq. (7) is generally defined as $$z_k=\mu +\sigma r_k$$30, where the parameters $$\mu$$ and $$\sigma$$ are determined to make $$z_k$$ lie in (0, 0.2)15. As in the case of the evaluation of the memory capacity, the evaluation of the NMSE consists of two procedures. The first procedure is the training, where the weight is determined to reproduce the target data from the output data $$u_{k,i}$$. Secondly, we evaluate the reproducibility of another set of target data from the system output $$v_n^{\mathrm{NARMA2}}$$ defined from the weight and the output data. Then, the NMSE can be evaluated. Note that some papers13,27,30 define the NMSE in a slightly different way, where $$\sum _{n=1}^{N^\prime }\left( y_n^{\mathrm{NARMA2}} \right) ^2$$ in the denominator of Eq. (8) is replaced by $$\sum _{n=1}^{N^\prime }\left( y_n^{\mathrm{NARMA2}} - \overline{y^{\mathrm{NARMA2}}} \right) ^2$$, where $$\overline{y^{\mathrm{NARMA2}}}$$ is the average of the target data $$y_n^{\mathrm{NARMA2}}$$. In this work, we use the definition given by Eq. (8), which is used, for example, in Refs.12,15,18.
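For concreteness, a sketch of the NARMA2 target and the NMSE of Eq. (8). The recursion below is the standard NARMA2 form from the reservoir-computing literature; since Eq. (7) is not reproduced here, it may differ in detail from the paper's exact definition.

```python
import numpy as np

def narma2_target(z):
    """Standard NARMA2 recursion (assumed form), driven by z_k = mu + sigma*r_k,
    with z_k kept in (0, 0.2) as in the text."""
    y = np.zeros(len(z))
    for k in range(1, len(z) - 1):
        y[k + 1] = 0.4 * y[k] + 0.4 * y[k] * y[k - 1] + 0.6 * z[k] ** 3 + 0.1
    return y

def nmse(v, y):
    """Eq. (8): NMSE normalized by sum(y^2), i.e., without mean subtraction."""
    return np.sum((v - y) ** 2) / np.sum(y ** 2)
```

With $$z_k$$ confined to (0, 0.2), the recursion stays bounded, and the NMSE vanishes only when the system output equals the target exactly.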
We evaluated the conditional Lyapunov exponent as follows58. The LLG equation was solved by the fourth-order Runge-Kutta method with a time increment of $$\Delta t=1$$ ps. We added perturbations $$\delta \theta$$ and $$\delta \varphi$$ with $$\varepsilon =\sqrt{\delta \theta ^2+\delta \varphi ^2}=10^{-5}$$ to $$\theta (t)$$ and $$\varphi (t)$$ at time t. Let us denote the perturbed $$\theta (t)$$ and $$\varphi (t)$$ as $$\theta ^\prime (t)$$ and $$\varphi ^\prime (t)$$, respectively. Solving the LLG equation from time t to $$t+\Delta t$$, the time evolution of the perturbation is obtained as $$\varepsilon ^\prime (t)=\sqrt{[\theta ^\prime (t+\Delta t)-\theta (t+\Delta t)]^2+[\varphi ^\prime (t+\Delta t)-\varphi (t+\Delta t)]^2}$$. A temporal Lyapunov exponent is obtained as $$\lambda (t)=(1/\Delta t)\log [\varepsilon ^\prime (t)/\varepsilon ]$$. Repeating the procedure, the temporal Lyapunov exponent is averaged as $$\lambda (\mathcal N)=(1/\mathcal N)\sum _{i=1}^{\mathcal N}\lambda (t_i)=[1/(\mathcal N \Delta t)]\sum _{i=1}^{\mathcal N}\log \{\varepsilon ^\prime [t_0+(i-1) \Delta t]/\varepsilon \}$$, where $$t_0$$ is the time at which the first random input is injected, whereas $$\mathcal N$$ is the number of averaging steps. The Lyapunov exponent is given by $$\lambda =\lim _{\mathcal N \rightarrow \infty }\lambda (\mathcal N)$$. In the present study, we used the same time range as that used in the evaluations of the memory capacity and the NMSE and added uniform random inputs. Hence, notice that $$\mathcal N=\mathcal M t_{\mathrm p}/\Delta t$$ depends on the pulse width, where $$\mathcal M$$ is the total number of random inputs including washout, training, and evaluation. We confirmed that $$\lambda (\mathcal N)$$ monotonically saturates to zero; at least, $$|\lambda (\mathcal N)|$$ is one or two orders of magnitude smaller than $$1/t_{\mathrm p}$$. Thus, the timescale on which the perturbation expands, $$1/|\lambda (\mathcal N)|$$, is much longer than the injection interval of the input signal.
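The averaging procedure can be sketched generically. Here `step` is a placeholder for one Runge-Kutta step of the driven LLG equation acting on the state $$(\theta ,\varphi )$$; the fixed perturbation direction is an illustrative simplification of applying $$(\delta \theta ,\delta \varphi )$$ with norm $$\varepsilon$$.

```python
import math

def conditional_lyapunov(step, x0, n_steps, dt, eps=1e-5):
    """lambda(N) = [1/(N*dt)] * sum_i log(eps_i'/eps): at every step the
    state is perturbed by a vector of norm eps, both trajectories are
    advanced one step by `step`, and the growth of their separation is
    accumulated (a sketch of the procedure, not the paper's code)."""
    x = x0
    total = 0.0
    for _ in range(n_steps):
        xp = tuple(xi + eps / math.sqrt(len(x)) for xi in x)  # norm-eps perturbation
        x_next = step(x)
        xp_next = step(xp)
        d = math.sqrt(sum((a - b) ** 2 for a, b in zip(x_next, xp_next)))
        total += math.log(d / eps)
        x = x_next
    return total / (n_steps * dt)
```

A negative (positive) result indicates contraction (expansion) of the perturbation; a value saturating to zero, as reported in the text, signals the absence of chaos.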
Considering these facts, we concluded that the largest Lyapunov exponent can be regarded as zero, and therefore, chaos is absent. Note that the absence of chaos in the present system relates to the facts that the free layer is axially symmetric and the applied voltage modifies the perpendicular anisotropy only. When there are factors breaking the symmetry, such as spin-transfer torque with an in-plane spin polarization, chaos will appear30.
https://dsnielsen.com/page/2/
# Consistency strength of forcing axioms
Previously I’ve only been talking about large cardinals and determinacy theories as if they were the only consistency hierarchies around. There is another important class of axioms, which has the added benefit of being, dare I say it, more useful to mathematicians not working in set theory. The reason for this is probably that these forcing axioms have a handful of consequences of a non-set theoretic nature, making them easier to apply in (mathematical) practice. When it comes to the consistency strength of these axioms though, things get a lot more hazy: we know very little about the strength of (almost all of) these axioms. I’ll introduce these axioms here and state what is known to date.
# Jónsson successors of singulars
We currently don’t know whether or not there can exist a singular cardinal $\rho$ such that $\rho^+$ is a Jónsson cardinal. I’ll try to survey some of the properties these strange things satisfy and how much is known about the consistency strength of the existence of them.
# Limitations of ZFC determinacy
I was recently playing a (set-theoretic) game and the question of whether it was determined slowly emerged. As I was working in a ZFC context, most of the determinacy results were of no use to me, so I tried to investigate how much we really know about ZFC determinacy. Of course we can’t have full determinacy (AD), but how about definable variants, where we alter both the objects played and the length of the game?
# HODs of models of determinacy
HOD is the proper class of all sets $x$ such that both $x$ and all the elements of the transitive closure of $x$ are definable using ordinal parameters. HOD is a model of ZFC, but in general its structure is not really known. In the late 90's it was shown by Steel and Woodin that $\textsf{HOD}^{L(\mathbb R)}$ exhibits mouse-like behaviour, and since then there's been great interest in finding the HODs of models other than $L(\mathbb R)$. I'll here give a (non-exhaustive) overview of both which HODs have been shown to have this mouse-like structure and also explain the general strategy used so far in finding these mice.
# Borel Determinacy
The proof of Borel determinacy doesn't seem to have the best reputation, as it's rather long, quite technical, and it's easy to lose track of what's going on. I've noticed that the same proof can be presented in a more structural setting, making the core ideas of the proof slightly clearer. I'll try here to present what's going on in the proof, using the structural framework of games I set up in my previous post. The full proof can be found in my determinacy project.
# The structure of games
Games in set theory are usually informally described by describing the rules of the two players and the winning condition. Sometimes we need to describe an interaction between games, which then becomes quite ad hoc, describing certain functions with properties that are desired in the specific proof in question. This is especially prominent in the proof of Borel determinacy, where unraveling coverings are used to transfer determinacy statements between games. I’ll here attempt to describe this framework in a more abstract setting, viewing games as objects in their own right, which at the very least might make the proof of Borel determinacy clearer.
# From Determinacy to a Woodin II
In the prequel I sketched a proof of how determinacy hypotheses could imply the measurability of both $(\bf\delta^1_1)^{L(\mathbb R)}$ and $(\bf\delta^2_1)^{L(\mathbb R)}$ inside $L(\mathbb R)$. The latter is really the first step in showing the much stronger assertion that $\Theta^{L(\mathbb R)}$ is Woodin. I’ll here sketch what main ideas are involved in the proof of this fact.
https://mathoverflow.net/questions/313039/duality-between-coalgebras-and-pseudocompact-algebras-uniqueness
# Duality between coalgebras and (pseudocompact) algebras - uniqueness
The following result is well-known. It can for example be found in [Iovanov: The representation theory of profinite algebras, Theorem 1.0.2]. For definitions, see below.
Let $$k$$ be a field. The following are equivalent for a $$k$$-algebra $$A$$:
• $$A$$ is pseudocompact,
• $$A$$ is profinite,
• $$A\cong C^*$$ where $$C$$ is a coalgebra (and $$(-)^*=\operatorname{Hom}_k(-,k)$$).
The questions I have are about uniqueness:
• What is an explicit example of a profinite algebra where $$C$$ is not unique, i.e. two coalgebras $$C$$ and $$C'$$ whose duals are isomorphic as $$k$$-algebras (but of course not as pseudocompact algebras, as the categories of coalgebras and pseudocompact algebras are dual)?
• Formulated another way: What is an example of an algebra $$A$$ with two pseudocompact topologies on it?
• What about (infinitely generated) modules over a (noetherian) pseudocompact algebra?
Some background:
A pseudocompact $$k$$-algebra is a Hausdorff linear topological $$k$$-algebra ($$k$$ here has the discrete topology) $$A$$ having a basis $$\mathcal{F}$$ consisting of ideals of finite codimension such that the natural morphism $$A\to \varprojlim A/I$$ is an isomorphism.
There is a duality of categories between the category of pseudocompact $$k$$-algebras and the category of coalgebras, given by the $$k$$-dual in one direction and the continuous dual in the other direction.
Given any profinite algebra $$A$$, i.e. an algebra which can be written as $$\varprojlim A_i$$ for finite dimensional algebras $$A_i$$, the initial topology for $$A\to \varprojlim A_i$$ (where the $$A_i$$ are given the discrete topology) gives $$A$$ a pseudocompact topology.
A priori there is no reason that an infinite dimensional algebra should not have more than one pseudocompact topology. Definitely a profinite algebra can usually be written as $$\varprojlim$$ in many different ways.
In [Gastel-Van den Bergh: Graded modules of Gelfand-Kirillov dimension one over three-dimensional Artin-Schelter regular algebras, 1997, Proposition 3.21] I found the result that for a noetherian pseudocompact ring there is a unique pseudocompact topology, namely the $$J$$-adic topology, where $$J$$ denotes the Jacobson radical. I assume that this works in the $$k$$-enriched setting as well. Thus an answer to the second question should be non-noetherian.
Similarly in [Schneider: p-adic Lie groups, Corollary 22.4] I found that a similar uniqueness result holds for finitely generated pseudocompact modules over noetherian pseudocompact rings.
I also found that the similar case of profinite groups was only settled in 2007 in [Nikolov-Segal: On finitely generated profinite groups, I: strong completeness and uniform bounds] proving that there is a unique pseudocompact topology on each finitely generated profinite group. In this area there are some explicit counterexamples, see e.g. this mathoverflow question.
http://www.mathnet.ru/php/archive.phtml?wshow=paper&jrnid=vmj&paperid=428&option_lang=eng
Vladikavkaz. Mat. Zh., 2012, Volume 14, Number 3, Pages 63–73 (Mi vmj428)
This article is cited in 4 scientific papers.
The Riemann–Hilbert boundary value problem for generalized analytic functions in Smirnov classes
S. B. Klimentov (a, b)
a Southern Federal University, Russia, Rostov-on-Don
b South Mathematical Institute of VSC RAS, Russia, Vladikavkaz
Abstract: Under study is the Riemann–Hilbert boundary value problem for generalized analytic functions of a Smirnov class in a bounded simply connected domain whose boundary is a Lyapunov curve or a Radon curve without cusps. The coefficient of the boundary value condition is assumed continuous and perturbed by a bounded measurable function or continuous and perturbed by a bounded variation function. The paper uses the special representation for generalized analytic functions of Smirnov classes from the author's paper [16], which reduces the problem to that for holomorphic functions. The problem for the holomorphic functions was under study in the author's papers [1,2].
Key words: Riemann–Hilbert boundary value problem, generalized analytic functions, Smirnov classes.
UDC: 517.518.234+517.548.3
Received: 28.08.2011
Citation: S. B. Klimentov, “The Riemann–Hilbert boundary value problem for generalized analytic functions in Smirnov classes”, Vladikavkaz. Mat. Zh., 14:3 (2012), 63–73
This publication is cited in the following articles:
1. Kokilashvili V., Paatashvili V., “The Riemann Boundary Value Problem in Variable Exponent Smirnov Class of Generalized Analytic Functions”, Proc. A Razmadze Math. Inst., 169 (2015), 105–118
2. V. Paatashvili, “Certain properties of generalized analytic functions from Smirnov class with a variable exponent”, Mem. Differ. Equ. Math. Phys., 69 (2016), 77–91
3. V. Kokilashvili, V. Paatashvili, “On the Riemann-Hilbert boundary value problem for generalized analytic functions in the framework of variable exponent spaces”, Math. Meth. Appl. Sci., 40:18 (2017), 7267–7286
4. Pozzi E., “Hardy Spaces of Generalized Analytic Functions and Composition Operators”, Concr. Operators, 5:1 (2018), 9–23
https://quantumcomputing.stackexchange.com/questions/20879/what-is-the-best-notation-to-write-pairs-of-one-qubit-ket-states/20880#20880
# What is the best notation to write pairs of one-qubit ket states?
I am working on coming up with practice problems for a QC course. I have a problem that considers two qubits like so:
$$|\psi_a\rangle = \alpha_a |0\rangle + \beta_a |1\rangle$$
$$|\psi_b\rangle = \alpha_b |0\rangle + \beta_b |1\rangle$$
But I found the alphas and betas combined with a's and b's very confusing. I was just wondering if there was a standard notation for this type of thing or any notation others have found easier to understand.
• The metacharacters \langle and \rangle give you the left and right angle brackets. Aug 20 '21 at 0:05
• $|\psi_a\rangle=a_0|0\rangle + a_1|1\rangle$ and $|\psi_b\rangle=b_0|0\rangle + b_1|1\rangle$? Aug 20 '21 at 13:13
I am not sure if this helps in any way but you could consider the state of an arbitrary qubit $$i$$ as:
$$|\psi_i \rangle = \alpha_i |0\rangle + \beta_i |1\rangle$$
So if $$|\psi_1 \rangle$$ and $$|\psi_2 \rangle$$ correspond to the state of qubit $$q_1$$ and $$q_2$$ then we have $$|\psi_1 \rangle = \alpha_1 |0\rangle + \beta_1 |1\rangle$$ $$|\psi_2 \rangle = \alpha_2 |0\rangle + \beta_2 |1\rangle$$
This avoids mixing $$\alpha$$, $$\beta$$ with $$a$$ and $$b$$. Again, these are just dummy variables so you can change them to whatever you want.
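For instance, in this indexed notation a two-qubit product state expands cleanly (just an illustration, not from the thread):

```latex
|\psi_1\rangle \otimes |\psi_2\rangle
  = \left( \alpha_1 |0\rangle + \beta_1 |1\rangle \right) \otimes \left( \alpha_2 |0\rangle + \beta_2 |1\rangle \right)
  = \alpha_1\alpha_2 |00\rangle + \alpha_1\beta_2 |01\rangle
  + \beta_1\alpha_2 |10\rangle + \beta_1\beta_2 |11\rangle
```

Every coefficient carries its qubit index, so nothing collides with the basis-state labels.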
• Yep, this is what I would do. Sometimes I go even further and use just a single symbol $c$ for all coefficients, with an extra subscript giving the basis state. For example: $|\psi_1\rangle=c_{01}|0\rangle+c_{11}|1\rangle$. But this has a high risk of getting confused with multi-qubit basis states, so the answer above is probably best for a course. Aug 20 '21 at 0:03
http://micromagnetics.org/magnum.fd/controller.html
# Controllers
magnum.create_controller(run, params, *args, **kwargs)
Create a controller object, depending on the environment in which the script was executed: TODO: Explain.
This function creates a controller object that is responsible for starting multiple simulations e.g. for parameter sweeps. Depending on how the simulation script is started, an instance of one of the following controller classes is created:
• LocalController
• (more to follow)
If the script is started locally, a LocalController instance is returned.
Example on how to use controllers:
mesh = RectangularMesh(...)
world = World(mesh, ...)
def run(n):
print "Running with parameter", n
solver = Solver(world)
# etc...
solver.solve(...)
controller = create_controller(run, [100, 200, 300, 400])
controller.start()
This will run 4 simulations, printing:
Running with parameter 100
Running with parameter 200
Running with parameter 300
Running with parameter 400
https://www.menuiserie-mac.fr/hemp-beer-bfvr/viewtopic.php?1289fb=convergence-uniforme-s%C3%A9rie
# Uniform convergence of series

**Pointwise vs. uniform convergence.** In analysis, uniform convergence is a stronger form of convergence than pointwise convergence. A sequence of functions converges uniformly, $$f_n(z) \to f(z)$$, if for every $$\varepsilon > 0$$ there is an $$N(\varepsilon)$$ such that for all $$n > N(\varepsilon)$$ we have $$|f_n(z) - f(z)| < \varepsilon$$ for all $$z$$ in the domain. The crucial difference from pointwise convergence is that a single $$N(\varepsilon)$$ must work for every $$z$$ at once: a pointwise approximation may be good at each point individually, yet get worse the larger the interval over which the variable ranges.

In the last few sections we have seen several functions defined via series or integrals. We now want to develop tools that will allow us to show that these functions are analytic, and uniform convergence is the key ingredient.

**Weierstrass M-test.** A standard sufficient condition for the uniform convergence of a series or sequence of functions compares it with an appropriate series or sequence of numbers: if $$|f_n(z)| \le M_n$$ for all $$z$$ in the domain and $$\sum M_n$$ converges, then $$\sum f_n(z)$$ converges uniformly. This criterion was established by K. Weierstrass.

**Why uniform convergence matters.** Suppose that $$(f_n)$$ is a sequence of functions, each continuous on $$E$$, and that $$f_n \to f$$ uniformly on $$E$$. Then $$f$$ is continuous on $$E$$. Conversely, if the limit function $$f$$ fails to be continuous, the convergence cannot have been uniform on $$(-\infty, \infty)$$. Uniform convergence is also what allows term-by-term manipulation of series, and it can be used to construct a nowhere-differentiable continuous function.

A standard exercise is to prove uniform convergence for the sequence $$f_n(x) = x/(1 + nx^2)$$, which converges pointwise to zero.
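The example $$f_n(x) = x/(1 + nx^2)$$ can be probed numerically: calculus gives $$\sup_x |f_n(x)| = 1/(2\sqrt{n})$$, attained at $$x = 1/\sqrt{n}$$, so the convergence to zero is in fact uniform on the whole real line. A minimal Python sketch (the grid bounds and step count are arbitrary choices of mine):

```python
import math

def f_n(x, n):
    return x / (1 + n * x * x)

def sup_norm(n, lo=-10.0, hi=10.0, steps=20000):
    # crude grid estimate of sup_x |f_n(x) - 0|
    return max(abs(f_n(lo + i * (hi - lo) / steps, n)) for i in range(steps + 1))

for n in (1, 4, 100):
    # the grid maximum should track the exact value 1/(2*sqrt(n))
    print(n, sup_norm(n), 1 / (2 * math.sqrt(n)))
```

The printed pairs agree to grid precision, and the sup-norm shrinks like $$1/(2\sqrt{n}) \to 0$$, which is exactly the uniform-convergence criterion.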
https://math.stackexchange.com/questions/829636/can-we-find-a-bound-so-that-we-can-conclude-g-is-a-p-group
# Can we find a bound so that we can conclude $G$ is a $p$-group?
Let $$n_p$$ be the number of elements of order $$p$$ in a group $$G$$.
My motivation is that if $$n_2\ge\dfrac 34 |G|$$ then $$G$$ is a $$2$$-group. You can check it from this.
Is there a comparable general bound on $$n_p$$ from which we can conclude that $$G$$ is a $$p$$-group?
• Well, there is the somewhat trivial bound $n_p\ge |G|-1$ ... – Hagen von Eitzen Jun 10 '14 at 19:18
• @HagenvonEitzen: as a constant ratio pleas :) – mesel Jun 10 '14 at 19:19
• We would obviously want the best bound, so effectively we're searching for the supremum of $n_p/|G|$ over all non-$p$-groups $G$. Perhaps some numerical exploration could be done. – blue Jun 10 '14 at 19:21
• As I pointed out in comment to the previous post about $n_2$, a Frobenius group with complement of order $p$ provides an example of a non $p$-group with $n_p = (p-1)|G|/p$, and you can increase the ratio slightly by taking a direct product with a large elementary abelian $p$-group. – Derek Holt Jun 10 '14 at 21:56
• In case people are curious, here are some large values for $n_p/|G|$ I've found (where I take the supremum over $G \times C_p^n$, so these values are not attained in my examples, just approached): 2: 2/3, 3: 3/4, 5: 9/11, 7: 7/8, 11: 21/23, 13: 25/27, 17: 97/103, 19: 3611/3629. They are all from Frobenius groups as Derek suggested, and are best possible for $|G| < 500$ (larger $G$ give more opportunity for variety, but cannot take as much advantage of the $C_p^n$ trick). – Jack Schmidt Jun 11 '14 at 4:59
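For small groups the ratio $$n_p/|G|$$ is easy to explore by brute force. A minimal pure-standard-library Python sketch (symmetric groups only; the helper name is mine) that computes element orders from cycle lengths:

```python
from itertools import permutations
from math import lcm

def element_orders(n):
    # orders of all elements of the symmetric group S_n,
    # computed as the lcm of the cycle lengths of each permutation
    orders = []
    for p in permutations(range(n)):
        seen, lengths = set(), []
        for i in range(n):
            if i not in seen:
                j, length = i, 0
                while j not in seen:
                    seen.add(j)
                    j = p[j]
                    length += 1
                lengths.append(length)
        orders.append(lcm(*lengths))
    return orders

orders = element_orders(3)
n2 = sum(1 for o in orders if o == 2)
print(n2, len(orders))  # 3 of 6 elements have order 2
```

For $$S_3$$ this gives $$n_2/|G| = 1/2 < 3/4$$, consistent with the bound in the question, since $$S_3$$ is not a $$2$$-group.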
http://en.wikibooks.org/wiki/Prealgebra_for_Two-Year_Colleges/To_the_instructor
# Prealgebra for Two-Year Colleges/To the instructor
This is a prealgebra text for adult students at a two-year college. This text is meant to facilitate teaching via the Socratic method. The book also endeavors to connect topics through themes that are useful to adults, as opposed to covering fractions, decimals, and percents in three separate chapters, as if they were unconnected concepts.
## Target Audience
This is a prealgebra text for adult students at a two-year college. Most of these students have passed a prealgebra class (or higher) in the past, but do not remember enough of it to accurately solve linear equations in one variable that contain fractions and decimals, without the aid of a calculator.
These students remember many mathematical procedures and rules, but they do not remember them correctly, or they do not know when to use which procedure. For example, these students remember that "two negatives make a positive," so they will tell you that (-3) + (-5) = +8. Also, they will tell you that
${1 \over 3} + {5 \over 8} = {6 \over 11}$,
because they are misapplying the rule
${1 \over 3} \times {5 \over 8} = {1 \times 5 \over 3 \times 8}$.
When asked to draw a circle and shade in two fifths, these students are likely to draw one of the pictures below.
## Socratic workbook
This text is meant to facilitate teaching via the Socratic method. Our premise is that use of the Socratic method will help students understand the reasons behind the procedures and rules. Students will then be able to figure out which rule to use, or (if they have forgotten the rule altogether) they will be able to reason their way to an answer.
In order to facilitate the Socratic method, this text is in the style of a workbook. There is a teacher's edition, with both halves of the Socratic dialogue (suggestions for teacher's half are in red italics). The student edition has white space for the student's work in place of the italic font.
## Scope
This book includes the content of the traditional prealgebra curriculum, but emphasizes themes that are useful to adults.
### Content
Expository material explaining procedures and rules is relegated to the appendices. Students have already seen the exposition. The expository material includes
• procedures for adding, subtracting, multiplying, and dividing rational numbers represented both as fractions and as decimals;
• procedures for solving linear equations in one variable, where the coefficients and constant terms may be rational numbers represented both as fractions and as decimals
• geometrical formulas for
• perimeter of a polygon and circumference of a circle
• area of triangles, rectangles, and circles
• volumes of cuboids, right circular cylinders, and right triangular prisms.
• metric system, U.S. customary units, and unit conversion
### Themes
• Problem solving, following the approach of George Polya
• Estimation
• Explaining one's reasoning clearly, accurately, and logically
• Flexibility in shifting between representations of a number
• decimal
• fraction
• percent
• number line
• area
• words
• Flexibility in shifting between representations of a binary operation (viz., addition, subtraction, multiplication, division, exponentiation)
• pictures
• formula
• verbal descriptions
• Flexibility in shifting between representations of a function
• input-output table
• Cartesian graph
• formula
• verbal input-output rule
http://mathamaze.co.uk/circles/simple_canvas.html
# Some arcs
Sometimes it seems a bit annoying that whenever (exaggerating) I write code, I have to write it three times... python, javascript, tikzpicture, SVG, DXF, whatever... (I often generate the initial svg in python; write up papers including tikzpicture)
So this is just to show conversion between an arc in canvas, in SVG, and in tikzpicture. In canvas, the default direction of the arc is clockwise.
click on the red dots on the canvas to move and change the arc (does not work on the SVG). The code below the figures is a stand alone html.
HTML/JavaScript/Canvas; format for arc: ctx.arc(cx,cy,r, ang1, ang2)
Canvas code:
SVG; format is "A rx ry x-axis-rotation large-arc-flag sweep-flag x y".
SVG code:
For latex tikzpicture, the simplest format is: \draw (x,y) arc (start:stop:radius); I'll put a version with this below.
Latex code:
However, sometimes I prefer to use \usetikzlibrary{angles}, and use the format "\pic [draw, angle radius=radius]{angle=begin--center--end};" so I'll add that too. One is closer to the canvas version, the other closer to the SVG version. Bear in mind that the y-axis is measured in the opposite direction from SVG and canvas. The following you will have to cut and paste and compile yourself. Working out the scale is a bit weird, and I think maybe different when the latex is standalone, perhaps. Tikz is so powerful! There is so much built in!
Latex code:
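Bridging the formats above can be automated. Here is a Python sketch (the function name and rounding are my own choices, not from the page) that converts canvas-style arc parameters into an SVG path arc:

```python
import math

def canvas_arc_to_svg(cx, cy, r, ang1, ang2):
    """Convert canvas-style arc(cx, cy, r, ang1, ang2) (default clockwise,
    y axis pointing down) into an SVG path string using the 'A' command."""
    x1, y1 = cx + r * math.cos(ang1), cy + r * math.sin(ang1)  # arc start point
    x2, y2 = cx + r * math.cos(ang2), cy + r * math.sin(ang2)  # arc end point
    # large-arc-flag: 1 if the swept angle exceeds pi
    large = 1 if (ang2 - ang1) % (2 * math.pi) > math.pi else 0
    # sweep-flag 1 = positive-angle direction, which is clockwise in screen
    # coordinates, matching the canvas default
    sweep = 1
    return f"M {x1:.3f} {y1:.3f} A {r} {r} 0 {large} {sweep} {x2:.3f} {y2:.3f}"

print(canvas_arc_to_svg(0, 0, 1, 0, math.pi / 2))
```

For a quarter arc from angle 0 to π/2 this yields `M 1.000 0.000 A 1 1 0 0 1 0.000 1.000`. A tikzpicture version would additionally flip the y axis, as noted above.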
http://mathoverflow.net/questions/87136/real-roots-strictly-between-two-adjacent-integers-for-monic-polynomials-in-mat
# Real roots strictly between two adjacent integers for monic polynomials in $\mathbb{Z}[x]$
I am stuck trying to prove a problem that seems obvious: namely, given a monic polynomial with integral coefficients $$f(x)= x^n +a_1x^{n-1}+\ldots+a_1x+a_0,$$ $a_i\in \mathbb{Z}$, is it true that no pair of real roots lies strictly between any two adjacent integers?
I was able to prove it when $n=2$ and I have also sketched a proof for some $n$ that if such roots exist, then they will not be rational. Does this boil down to a nontrivial question or am I missing something?
Thank you.
any two adjacent integers? – darij grinberg Jan 31 '12 at 14:35
Why not choose two polynomials with roots between your favourite two consecutive integers, and then multiply them together? The product will have all the roots that the factors have. – James Cranch Jan 31 '12 at 14:45
@James: What about $n=3$? – Ramiro de la Vega Jan 31 '12 at 15:24
The cubic $x^3-7x-7$ has two roots between $-2$ and $-1$. – Richard Stanley Jan 31 '12 at 19:04
Richard Stanley's solution generalizes for all $n\geq 3$; that is, $x^{n-3}(x^3-7x-7)$. – Unknown Feb 1 '12 at 0:31
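The cubic counterexample is easy to check numerically. A small Python sketch (the sign-change counter is my own helper, not from the thread):

```python
def count_sign_changes(f, lo, hi, steps=1000):
    # count sign changes of f on a uniform grid over [lo, hi];
    # each sign change brackets at least one real root
    xs = [lo + i * (hi - lo) / steps for i in range(steps + 1)]
    vals = [f(x) for x in xs]
    return sum(1 for a, b in zip(vals, vals[1:]) if a * b < 0)

f = lambda x: x**3 - 7*x - 7
print(count_sign_changes(f, -2, -1))  # 2: two real roots strictly between -2 and -1
print(count_sign_changes(f, 3, 4))    # 1: the remaining real root
```

Since $f(-2) = -1$, $f(-1.5) = 0.125$, and $f(-1) = -1$, the two sign changes confirm Richard Stanley's example directly.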
http://openstudy.com/updates/5051fbd3e4b0821420445ae2
## anonymous 4 years ago A car travelling at 7.0 m/s accelerates uniformly at 2.5 m/s² to reach a speed of 12.0 m/s. How long does it take for this acceleration to occur?
• This Question is Open
1. anonymous
2.0 seconds.
2. anonymous
3. anonymous
Soooo
4. PaxPolaris
$\Large a={v-u \over t}$ where $u$ = initial velocity and $v$ = final velocity, so $\Large \implies t= {v-u \over a}$
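Plugging the given numbers into the rearranged formula (a quick check):

```python
# given: initial velocity u, final velocity v (m/s), acceleration a (m/s^2)
u, v, a = 7.0, 12.0, 2.5
t = (v - u) / a   # rearranged from a = (v - u) / t
print(t, "seconds")  # 2.0 seconds
```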
https://math.stackexchange.com/questions/1078377/defining-sine-and-cosine
# Defining sine and cosine
We know the following are true about sine and cosine (and that they can be proven geometrically):
• $\sin(a+b)=\sin(a)\cos(b)+\sin(b)\cos(a)$
• $\cos(a+b)=\cos(a)\cos(b)-\sin(a)\sin(b)$
• $\lim\limits_{x\to0}\dfrac{\sin x}x=1$
• $\lim\limits_{x\to0}\dfrac{\cos x-1}x=0$
• They are continuous
Let's say we have two real functions: $s(x)$ and $c(x)$. If we know that the above are true for $s$ and $c$ (i.e. $s(a+b)=s(a)c(b)+s(b)c(a)$, etc.), can we conclude that $s$ and $c$ are equal to $\sin$ and $\cos$ respectively? In other words, are sine and cosine the only two functions that satisfy the above? Do the five points above uniquely define the sine and cosine?
I was thinking of the unit circle definition of sine and cosine, and I knew that there are many non-geometric definitions of them. I was wondering if the four facts shown above were enough to count as a non-geometric definition.
(Without the third point, stuff like $\sin(x \text{ degrees})$ and $\cos(x \text{ degrees})$ would also work; in other words, the third point specifies that we're using radians.)
EDIT: Added fourth point, since $s(x)=e^x\sin(x)$, $c(x)=e^x\cos(x)$ would work if it was omitted.
• If not, what if we threw $\lim_{x\to0}\frac{\cos x-1}x=1$ into the mix? – Akiva Weinberger Dec 23 '14 at 3:37
• I think you mean $\lim_{x \rightarrow 0} \frac{\cos x - 1}{x} = 0$ :-) – Bungo Dec 23 '14 at 3:42
• @Bungo close enough – Akiva Weinberger Dec 23 '14 at 3:43
• @Bungo It just makes the question all the more interesting if we throw a different definition in. – Milo Brandt Dec 23 '14 at 3:43
• What about $s(x) = \sinh x$? – MathMajor Dec 23 '14 at 3:58
Yes, this uniquely defines the sine and cosine functions. In particular, let's write $$f(x)=\cos(x)+i\sin(x)$$ where $i$ is the imaginary unit. Then, the sum identities will yield, after a bit of computation that $f(x)f(y)=f(x+y)$. This is somewhat tedious, but easy to verify. But guess what! The only continuous functions which can satisfy $f(x)f(y) = f(x+y)$ are exponential functions; to prove this, notice that, for integer $n$, it is clear that $f(nx)=f(x)^n$. You can use this to show that, over the rationals, $f$ is an exponential function (i.e. $f(x)=e^{ax}$).*
Since we have $\sin(0)=0$, $\sin'(0)=1$, $\cos(0)=1$ and $\cos'(0)=0$, this implies that $f'(0)=i$. The only exponential function satisfying this is $f(x)=e^{ix}$. Extracting real and imaginary parts yields, uniquely, cosine and sine.
*If you wish to be formal about this, it would be wise to prove that $|f(x)|=e^{\alpha x}$ first, then prove that $\arg(f(x))\equiv \beta x$ - you can avoid the issue of $n^{th}$ roots being non-unique in the complex plane by separating your argument into these two sections.
• Nice! Thank you! (Interesting: my counter example—the one that made me edit my post—corresponds to $f(x)=e^{(1+i)x}$.) – Akiva Weinberger Dec 23 '14 at 4:18
• @columbus Yeah, I noticed that; when you posted the comment with that solution on your question, I'd written the first two paragraphs basically the same as above and had been unsuccessfully trying to prove that $\cos'(0)=0$. I probably would have been at that forever without every actually checking for extraneous solutions if you hadn't commented. – Milo Brandt Dec 23 '14 at 4:20
• This may be a stupid question, but what if I replaced the fourth, "new" bullet-point with $\sin(-x)=-\sin(x),\cos(-x)=\cos(x)$? EDIT: Given the other bullet-points, is that equivalent to $\sin^2(x)+\cos^2(x)=1$? I'm too tired to check. – Akiva Weinberger Dec 23 '14 at 4:20
• Well, you'd still have that $f$ is exponential, and thus that $$f(-x)=\frac{1}{f(x)}$$ However, your condition forces $|f(-x)|=|f(x)|$ which, for them to be reciprocals, implies that $|f(x)|=1$. Therefore, the solutions for $f$ are of the form $f(x)=e^{\alpha i x}$ for $\alpha$ with real part $0$, since $f$ is restricted to the unit circle. (Of note is the solution $f(x)=1$). If you keep that $\sin'(0)=1$, then yes, that would also uniquely define $f$. – Milo Brandt Dec 23 '14 at 4:24
If you add the condition $\lim_{x \to 0}\frac{\cos(x) - 1}{x}=0$ and require that $c(x),s(x)$ be continuously differentiable (i.e., $c,s \in C^1(\mathbb{R})$), we can then force $c(x) = \cos(x)$ and $s(x) = \sin(x)$.
Writing $f(x) = c(x)+i s(x)$, the first two relations become simply: $f(x+y) = f(x)f(y)$ (comparing real and imaginary parts on both sides). In particular, note that $f(0)=f(0)^2$, so $f(0)=0$ or $f(0)=1$. If $f(0)=0$, then $f(x)=f(x+0)=f(x)f(0)=0$, so $f(x)=0$ identically, contradicting $\lim_{x \to 0} \frac{\sin(x)}{x} = 1$. So $f(0)=1$.
Taking the derivative with respect to $x$ gives $f'(x+y)=f'(x)f(y)$ and so:
$$f'(y)=f'(0)f(y)$$ Which is an ODE whose solution is uniquely determined by the values $f(0), f'(0)$.
We showed above that $f(0)=1$.
And $$f'(0) = \lim_{x \to 0} \frac{f(x)-f(0)}{x} = \lim_{x \to 0} \frac{c(x)-1 + is(x)}{x} = i$$
We know (by construction) that $f(0)$ and $f'(0)$ match what they would be if $c(x)=\cos(x)$ and $s(x) = \sin(x)$, and since $f(0)$ and $f'(0)$ uniquely determine $f(x)$ (from the ODE), this completes the proof.
Note that we do need to know $\lim_{x \to 0}\frac{\cos(x)-1}{x}$ to determine $f'(0)$, which is why without that condition you get the counterexamples mentioned in the comments.
I'm not sure if there are any weird counterexamples if you omit the differentiability condition, but that assumption seems like a natural one to make.
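As a quick sanity check (not part of the original argument), one can confirm numerically that $f(x)=e^{ix}$ satisfies the functional equation and the two defining limits:

```python
import cmath

f = lambda x: cmath.exp(1j * x)  # f(x) = cos(x) + i sin(x)

# functional equation f(x) f(y) = f(x + y)
for x, y in [(0.3, 1.7), (-2.5, 0.9), (10.0, -3.3)]:
    assert abs(f(x) * f(y) - f(x + y)) < 1e-12

# the defining limits: sin(x)/x -> 1 and (cos(x) - 1)/x -> 0
x = 1e-8
print(f(x).imag / x)        # close to 1
print((f(x).real - 1) / x)  # close to 0
```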
https://chem.libretexts.org/Bookshelves/General_Chemistry/Map%3A_Structure_and_Properties_(Tro)/02%3A_Matter_Measurement_and_Problem_Solving/2.06%3A_Problem-Solving_Strategies
# 2.6: Problem-Solving Strategies
We know a conversion factor is correct when the units cancel appropriately, but a conversion factor is not unity. Rather, it is a physical quantity (or the reciprocal of a physical quantity) related to the two other quantities we are interconverting. The conversion factor works because of that relationship, not because it has a value of one. Once we have established that a relationship exists, it is no longer necessary to memorize a mathematical formula: the units tell us whether to use the conversion factor or its reciprocal. Without such a relationship, however, mere cancellation of units does not guarantee that we are doing the right thing.
A simple way to remember relationships among quantities and conversion factors is a “road map“of the type shown below:
$\text{Mass }\overset{density}{\longleftrightarrow}\text{ volume or }m\overset{\rho }{\longleftrightarrow}V\text{ }$
This indicates that the mass of a particular sample of matter is related to its volume (and the volume to its mass) through the conversion factor, density. The double arrow indicates that a conversion may be made in either direction, provided the units of the conversion factor cancel those of the quantity which was known initially. In general the road map can be written
$\text{First quantity }\overset{\text{conversion factor}}{\longleftrightarrow}\text{ second quantity}$
General Steps in Performing Dimensional Analysis
1. Identify the "given" information in the problem. Look for a number with units to start this problem with.
2. What is the problem asking you to "find"? In other words, what unit will your answer have?
3. Use ratios and conversion factors to cancel out the units that aren't part of your answer, and leave you with units that are part of your answer.
4. When your units cancel out correctly, you are ready to do the math. You are multiplying fractions, so you multiply the top numbers and divide by the bottom numbers in the fractions.
As we come to more complicated problems, where several steps are required to obtain a final result, such road maps will become more useful in charting a path to the solution.
Example 2.6.1: Volume to Mass Conversion
Black ironwood has a density of 67.24 lb/ft³. If you had a sample whose volume was 47.3 mL, how many grams would it weigh? (1 lb = 454 g; 1 ft = 30.5 cm).
Solution
The road map
$V\xrightarrow{\rho }m\text{ } \nonumber$
tells us that the mass of the sample may be obtained from its volume using the conversion factor, density. Since milliliters and cubic centimeters are the same, we use the SI units for our calculation:
$\text{Mass} = m = 47.3 \text{cm}^{3} \times \dfrac{\text{67}\text{.24 lb}}{\text{1 ft}^{3}} \nonumber$
Since the volume units are different, we need a unity factor to get them to cancel:
$m\text{ = 47}\text{.3 cm}^{\text{3}}\text{ }\times \text{ }\left( \dfrac{\text{1 ft}}{\text{30}\text{.5 cm}} \right)^{\text{3}}\text{ }\times \text{ }\dfrac{\text{67}\text{.24 lb}}{\text{1 ft}^{\text{3}}}\text{ = 47}\text{.3 cm}^{\text{3}}\text{ }\times \text{ }\dfrac{\text{1 ft}^{\text{3}}}{\text{30}\text{.5}^{\text{3}}\text{ cm}^{\text{3}}}\text{ }\times \text{ }\dfrac{\text{67}\text{.24 lb}}{\text{1 ft}^{\text{3}}} \nonumber$
We now have the mass in pounds, but we want it in grams, so another unity factor is needed:
$m\text{ = 47}\text{.3 cm}^{\text{3}}\text{ }\times \text{ }\dfrac{\text{1 ft}^{\text{3}}}{\text{30}\text{.5}^{\text{3}}\text{ cm}^{\text{3}}}\text{ }\times \text{ }\dfrac{\text{67}\text{.24 lb}}{\text{1 ft}^{\text{3}}}\text{ }\times \text{ }\dfrac{\text{454 g}}{\text{ 1 lb}}\text{ = 50}\text{.9 g} \nonumber$
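The same chain of unity factors can be checked numerically. A short sketch of the calculation above (variable names are ours, not part of the original text):

```python
# Numerical check of the road-map calculation: volume -> mass via density.
# Conversion factors from the example: 1 lb = 454 g, 1 ft = 30.5 cm.
CM_PER_FT = 30.5
G_PER_LB = 454.0

volume_cm3 = 47.3            # sample volume (1 mL = 1 cm^3)
density_lb_per_ft3 = 67.24   # density of black ironwood

# Chain the factors so the units cancel: cm^3 -> ft^3 -> lb -> g
mass_g = volume_cm3 * (1.0 / CM_PER_FT) ** 3 * density_lb_per_ft3 * G_PER_LB
print(f"{mass_g:.1f} g")  # -> 50.9 g
```

Writing each conversion factor as a named constant makes the unit-cancellation chain explicit, mirroring the road map $V\xrightarrow{\rho }m$.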
In subsequent chapters we will establish a number of relationships among physical quantities. Formulas will be given which define these relationships, but we do not advocate slavish memorization and manipulation of those formulas. Instead we recommend that you remember that a relationship exists, perhaps in terms of a road map, and then adjust the quantities involved so that the units cancel appropriately. Such an approach has the advantage that you can solve a wide variety of problems by using the same technique.
## Contributors and Attributions
2.6: Problem-Solving Strategies is shared under a not declared license and was authored, remixed, and/or curated by LibreTexts.
https://www.nature.com/articles/s41598-019-40649-9?error=cookies_not_supported&code=f33f6f65-05e6-4afc-8050-bdcee67cf110
## Introduction
Perceptual decision making is the act of choosing an option based on the evaluation of sensory evidence1. To understand how the brain translates the interpretation of sensory information into behavior, it is essential to study the mechanism by which this psychophysical judgment process occurs2,3,4. To address this issue, human behavior in visual tasks such as motion detection has been studied extensively2,5,6. In such studies, a net motion direction discrimination task has been frequently implemented with a dynamic random dot display and observers’ response characteristics (i.e., reaction time, accuracy, decision confidence) were measured2,7,8,9,10,11. Thereafter, neurophysiological studies examined the relationship between neural activity patterns and psychophysical behavior in monkeys, revealing a strong correlation between the neuronal and behavioral data2,5,7,12. Similarly, computational models suggested that perceptual decision making arises through the integration of sensory information8,10,11 and can be described by the diffusion-to-boundary process model9,13,14.
In a number of studies of visual perception, however, human behavioral data indicate substantial variation across observers even when an identical stimulus is given1,11,15,16. This inter-individual variability in perceptual behavior, often ignored or treated as noise, has recently been studied more carefully using brain imaging techniques, and individual variability appears to be related to local structure or connectivity of the brain17,18. Further research is required, as the notion that inter-individual differences in perceptual decisions reflect structural variations of neural circuits, as opposed to mere statistical noise, remains under debate.
A recent study on the perceptual decision making process during a motion perception task11 suggested that subjective decision times reflect different profiles of evidence accumulated by each individual and showed that the bounded evidence accumulation model13,14 could predict subject behavior from observed decision times. This suggests that inter-individual variability in perceptual decision time may be due to the diverse computation of the decision variable and the wide variation of decision threshold across individuals, and may be of particular importance for those investigating the origin of inter-individual variability in perceptual behavior.
Given this, we hypothesized that if perceptual decisions reflect individual characteristics of each brain circuit, then each individual has a unique sensory integration profile for making a perceptual decision. Specifically, we assumed that the time course of sensory integration needed to make a single perceptual decision – termed a “sensory integration kernel” – will be consistent within a subject, independent of instantaneous stimulus dynamics. We anticipate that this intrinsic kernel size may vary across subjects and may be an origin of inter-individual variability in perceptual behavior. Therefore, we suggest that wide variation in perceptual behavior originates from the intrinsic characteristics of individuals' brain circuits for sensory integration and that this should be considered crucial information about subject-specific characteristics of perception.
To validate our hypothesis, we performed a series of psychophysics experiments using a random dot display whose motion coherence varies over time – a coherence-varying motion discrimination task. We measured a temporal sensory integration kernel of each individual by estimating the motion coherence pattern that triggered a perceptual decision, using stimuli of various temporal dynamics. We observed that each subject has a very consistent length and profile of temporal kernel, independent of the stimulus dynamics given. Observed kernel size (the maximum integration interval) varied largely across subjects and accurately predicted the inter-individual variability in responses. Additionally, we found that a kernel-size-matched motion stimulus maximized the probability of correct response in each individual's performance. Furthermore, we found that subjects’ characteristics of illusory motion perception were highly correlated with the observed intrinsic kernel. Therefore, our results suggest that an intrinsic perceptual kernel is a critical factor in studying sensory perception and that inter-individual variability can be considered a subject-specific trait arising from this sensory integration kernel.
## Results
### Perceptual decision making during coherence-varying motion discrimination task
To characterize individual motion perception sensory integration, we designed a coherence-varying motion discrimination task with a random dot display. For a motion stimulus, black dots were positioned in a circular annulus and a certain portion of the dots in each frame was shifted to new rotated positions (clockwise or counter-clockwise) in the next movie frame, while other dots were randomly positioned. During a single 60 s trial, the ratio of rotating dots (motion coherence, c) and the direction of rotation (sign of c) change over time (Fig. 1a,b, see the Methods section for details, see also ref.10). Because we are interested in how instantaneous fluctuating motion dynamics drive the perceptual decision, we designed the motion coherence pattern so it can fluctuate at four different frequencies. By applying Gaussian filters of four different widths to a random number sequence, the 60 s coherence pattern can fluctuate with central frequency varying from 0.15 Hz (F1; lowest) to 1.24 Hz (F4; highest), which was fixed for a single trial (Fig. 1c, see details in Method, Supplementary Fig. S1, Movies S1–S4). The estimated motion energy in the angular direction10,19,20,21 confirmed that the designed motion coherence pattern is well embedded in the random dot stimulus. This motion coherence pattern was used to represent the global rotational motion in further analysis (Supplementary Fig. S2, see details in Method). During the observation, subjects were asked to report the perceived direction of global rotation – clockwise (CW), counter-clockwise (CCW) or ambiguous – as soon as they perceived a rotational motion. Thus, the subjects reported the direction of rotation whenever their perceived direction switched from the previously perceived rotational direction (e.g., report CCW when the perceived direction switches from CW to CCW). As a result, individual perceptual response patterns to a given motion coherence pattern were obtained (Fig. 1d, middle).
To quantify the subject’s perceptual sensory integration kernel, we measured the average motion coherence pattern that triggered perceptual responses using the reverse correlation method22,23,24. We captured the motion coherence pattern within the 10-second window prior to the time point whenever the subject reported the direction of the perceived motion (Fig. 1d). Then, the sampled motion coherence patterns were averaged together, creating the response-triggered average stimulus (RTA). The RTA measured in each subject allowed us to find the temporal profile of sensory integration for a perceptual decision, which we defined as the “sensory integration kernel” of the subject (Fig. 1e). The temporal profile of the RTA showed two windows of opposite sign: the stimulus just before a response drives the decision in the same (positive) direction as the given motion, while the stimulus further in the past drives the decision in the opposite (negative) direction (see Supplementary Fig. S3 for control analysis). We found that an individual RTA curve fit well to a superposition of two alpha functions, similar to the quantification of the temporal receptive field structure of retinal neurons25.
$$RTA(t)={A}_{1}{(\frac{t}{{\tau }_{1}})}^{n}{e}^{-\frac{(n-1)t}{{\tau }_{1}}}-{A}_{2}{(\frac{t}{{\tau }_{2}})}^{n}{e}^{-\frac{(n-1)t}{{\tau }_{2}}}$$
(1)
We focused on the parameter T0, i.e., the time at which the RTA first crosses zero coherence (the width of the temporal window of positive sensory integration), because this value reveals the size of the temporal window of effective sensory integration for decision making. Another parameter, Apos, i.e., the positive amplitude of the RTA, reveals how strong the instantaneous motion signal at time t must be on average to induce a perceptual decision, which also reveals the individual characteristics of sensory integration along with T0. Thus, the integral of the RTA amplitude, or its area, illustrates how strongly a visual motion signal can induce a perceptual decision in each individual. Although four parameters are required to describe a complete kernel profile (e.g., amplitude of positive/negative peaks and width of positive/negative windows), we assumed that T0 can represent the individual characteristics of the kernel in this study, as it directly indicates the maximum size of the sensory integration window (see Supplementary Fig. S4 for detailed shape parameters and their characteristics).
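Equation (1) and the kernel-size parameter T0 can be evaluated numerically. A minimal sketch, with hypothetical parameter values chosen only to produce a kernel whose T0 falls in the observed 0.8–3.5 s range:

```python
import numpy as np

def rta_kernel(t, A1, tau1, A2, tau2, n):
    """Eq. (1): difference of two alpha functions; t is time before response (s)."""
    pos = A1 * (t / tau1) ** n * np.exp(-(n - 1) * t / tau1)
    neg = A2 * (t / tau2) ** n * np.exp(-(n - 1) * t / tau2)
    return pos - neg

# Hypothetical parameters, chosen only to illustrate the kernel shape.
t = np.linspace(1e-3, 10.0, 10_000)
k = rta_kernel(t, A1=1.0, tau1=0.5, A2=0.6, tau2=1.5, n=2)

# T0: first zero crossing after the positive peak, i.e. the size of the
# positive sensory-integration window.
peak = int(np.argmax(k))
first_nonpos = int(np.where(k[peak:] <= 0)[0][0])
T0 = float(t[peak + first_nonpos])
```

With these illustrative parameters the positive lobe peaks before 1 s and T0 lands near 2 s; fitting A1, τ1, A2, τ2 to a subject's measured RTA would give that subject's own T0.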
We first compared the observed RTA curves across different stimulus dynamics conditions and found that T0 values (the positive kernel sizes) were consistent across stimulus conditions, even though the frequency of motion fluctuation changed 8-fold (Fig. 1f). We confirmed that the differences in T0 across the stimulus conditions were insignificant in our sample (p = 0.11, F(3, 114) = 2.02, repeated measure ANOVA, Bayes factor = 0.054, N = 39, T0 = 1.71 ± 0.64, 1.82 ± 0.65, 1.75 ± 0.75, and 1.65 ± 0.53 for F1, F2, F3, and F4, respectively, mean ± S.D). This suggests that the time course of motion integration within an individual is fairly consistent and independent of the stimulus dynamics. We then averaged the RTAs from all four conditions to obtain an average sensory integration kernel for each subject. In the averaged RTA – sensory integration kernel, we found that the kernel size T0 varied noticeably from 0.8 to 3.5 sec across individuals (Fig. 1g). We also confirmed this result from the analysis of local motion energy of actual stimuli presented to the subjects. The temporal profile of the motion energy kernel in the local spatial segments was not significantly different from that obtained from the global motion coherence pattern (repeated-measures ANOVA, F(4, 160) = 1.66, p = 0.16, N = 41, Bayes factor = 0.032), which confirms that the observed kernel is an intrinsic characteristic of each subject (See Supplementary Fig. S2). Additionally, we also observed that the intrinsic profile of the kernel is maintained even when there exists an imbalance between the CW and CCW decisions or inter-decision-intervals (See Supplementary Fig. S5).
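The response-triggered averaging itself (the reverse-correlation step described above) can be sketched as follows; the window length follows the 10 s capture window in the text, while the sampling step and array names are our assumptions, not the authors' code:

```python
import numpy as np

def response_triggered_average(coherence, response_times, dt=0.05, window_s=10.0):
    """Average the signed motion-coherence trace over the window preceding
    each reported response (reverse correlation). Responses that occur too
    early to have a full window of history are skipped."""
    n_win = int(window_s / dt)
    windows = [
        coherence[int(rt / dt) - n_win : int(rt / dt)]
        for rt in response_times
        if int(rt / dt) >= n_win
    ]
    return np.mean(windows, axis=0)

# Synthetic 60 s trial sampled at dt = 0.05 s with assumed response times.
rng = np.random.default_rng(1)
coh = 0.1 * rng.standard_normal(1200)
rta = response_triggered_average(coh, response_times=[15.0, 32.5, 48.0])
```

Averaging over many responses (and, as in the study, over trials of all four frequency conditions) makes the stimulus noise cancel, leaving the subject's kernel.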
Using the observed kernels, we tried to predict the subjects’ perceptual response to the stimulus in Fig. 1. From a linear convolution of the stimulus pattern and the observed kernel, we were able to successfully reproduce the perceptual response pattern and, in particular, Nswitch, defined as the average number of perceptual switches within a 60-second trial, in each subject (Fig. 2a, see Supplementary Fig. S6 for detailed illustration). Our model predicted that the Nswitch value of the subject would be inversely related to the observed kernel size T0, and this was confirmed by our observed data (Fig. 2b,c, r = 0.86, p < 2.20 × 10−13, between the observed Nswitch and 1/T0, r = 0.79, p = 3.36 × 10−10 between the predicted Nswitch and 1/T0, Pearson’s correlation coefficient, N = 42). In addition, our model predicted that subjects with small T0 would show a large increment of Nswitch as stimulus frequency increases, while subjects with large T0 would have fewer changes in Nswitch across different stimulus frequency conditions. We measured the ΔNswitch of each subject (Fig. 2b) and confirmed that ΔNswitch is inversely related to the observed kernel size T0, as our model predicted (Fig. 2d, r = 0.75, p < 8.77 × 10−9, between observed ΔNswitch and 1/T0, r = 0.75, p < 1.35 × 10−8 between predicted ΔNswitch and 1/T0, Pearson’s correlation coefficient, N = 42).
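The switch-count prediction can be illustrated by convolving a coherence trace with a kernel and counting sign changes of the resulting decision variable (a sketch under assumed signals, not the authors' pipeline):

```python
import numpy as np

def predict_n_switch(coherence, kernel, dt=0.05):
    """Convolve the stimulus with the kernel to obtain a decision variable,
    then count its sign changes (predicted perceptual switches)."""
    # dv[n] = sum_tau coherence[n - tau] * kernel[tau] * dt
    dv = np.convolve(coherence, kernel)[: len(coherence)] * dt
    signs = np.sign(dv)
    signs = signs[signs != 0]  # ignore exact zeros
    return int(np.sum(signs[1:] != signs[:-1]))
```

Under this linear readout, a longer (larger-T0) kernel smooths the decision variable more and yields fewer predicted switches, consistent with the inverse relation between Nswitch and T0 reported above.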
If the individual sensory integration kernel size determines the number of perceptual switches during the task, we may then assume that the motion detection performance and the response delay of each subject are also governed by the kernel size T0. For instance, an individual with small T0 may better detect a fast change of rotational direction than an individual with large T0. To validate this hypothesis, we defined the motion detection performance and the response delay using the cross-correlation between the stimulus and response patterns (Fig. 2e,f). Cross-correlation between the motion coherence pattern S(t) and the response pattern R(t) estimates how similar the two patterns are. Because the subjects’ responses must occur after a stimulus is given, we calculated the correlation value as we increased the delay of the response pattern (Fig. 2e), to find the optimal performance and response delay. In this cross-correlation curve with time delay, motion detection performance was defined as the maximum amplitude of the curve, and response delay was defined as the time point at the maximum amplitude (Fig. 2f, see also Supplementary Fig. S7). We then tested whether the kernel size T0 could predict the motion detection performance or the response delay of the individual, using 75% of the trials to extract the kernel and the other 25% of trials to measure the behavior (see details in Methods). As expected, the kernel size T0 of individual subjects was negatively correlated with motion detection performance (Fig. 2g) and the response delay was also strongly correlated with T0 (Supplementary Fig. S7e). Taken together, the RTA could precisely measure the individual time course of perceptual decisions with intrinsic kernel size T0. We then expected that the observed subject-specific sensory integration kernel may be responsible for inter-individual variability in perceptual behavior and might enable us to predict individual performances under a given stimulus condition.
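The delayed cross-correlation analysis can be sketched like this; the lag grid and maximum delay are assumptions for illustration:

```python
import numpy as np

def performance_and_delay(stimulus, response, dt=0.05, max_delay_s=5.0):
    """Correlate the stimulus with increasingly delayed copies of the
    response pattern; return (peak correlation, delay at the peak)."""
    max_lag = int(max_delay_s / dt)
    corrs = [np.corrcoef(stimulus, response)[0, 1]]
    for lag in range(1, max_lag + 1):
        corrs.append(np.corrcoef(stimulus[:-lag], response[lag:])[0, 1])
    corrs = np.asarray(corrs)
    best = int(np.argmax(corrs))
    return float(corrs[best]), best * dt
```

The maximum of the curve plays the role of motion detection performance and its lag the role of response delay, as defined in Fig. 2f.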
### Kernel-matched stimulus optimizes motion discrimination performance
Based on the observations that subjects have various timescales of sensory integration, we predicted that the performance of subjects might be optimized by matching the stimulus pattern to the observed kernel profile. Our assumption was that if the evidence accumulation is governed by the observed kernel, integrated motion information would be maximized when the stimulus duration perfectly matches the size of the positive portion of the kernel, T0. According to the temporal profile of our observed decision kernel, when the stimulus duration is shorter than the kernel size, the integrated information will increase as the stimulus duration increases. On the other hand, when the stimulus duration exceeds T0, the sum of integrated information decreases because the negative portion of the kernel starts to contribute (Fig. 3a, top). Thus, the probability of a correct decision would be maximized when the stimulus duration matches T0, and would decrease when the stimulus duration exceeds T0 (Fig. 3a, bottom). To validate this hypothesis, we designed our next experiment to have random dots generate a motion with a fixed direction (clockwise or counter-clockwise). The motion coherence was set at a constant level (5%), but the motion duration varied from 0.5 to 5 seconds. Subjects were asked to observe the stimulus until the end of the movie and then to report the motion direction perceived at the last moment of the stimulation, while the subjects were unaware that the stimulus had a fixed rotational direction (Fig. 3b). If our assumption is correct, the accuracy of the perceptual decision will be highest when the stimulus duration matches the subject’s own T0, and lower when the stimulus duration is shorter or longer than the subject’s T0.
Our experimental results confirmed that the probability of correct response, pcorrect, did not simply increase with stimulus duration; rather, it showed a peak at a certain stimulus duration in more than half of the subjects (Fig. 3c, subjects 3 and 4). To rule out a hazard-rate effect on the task, we checked whether pcorrect increased with trial number or with stimulus duration. We found that pcorrect showed no increasing tendency as the trial number increased, and that the population-average pcorrect did not increase with stimulus duration. In fact, only three subjects out of 19 showed maximum pcorrect for the longest stimulus duration (5 s). This suggests that there exists an optimal size of evidence accumulation for making the correct decision (see Supplementary Fig. S8).
To examine whether optimal perception occurs when the stimulus duration matches the intrinsic kernel size, we fit the pcorrect curve to an alpha function that can describe both the increment and decrement of the pcorrect curve. We then estimated Topt, the stimulus duration that induces the maximum pcorrect in each subject, and compared it with the individual kernel size, T0. As expected, subjects’ Topt was strongly correlated with T0 (Fig. 3d, r = 0.55, p = 0.0105, N = 21, Pearson’s correlation coefficient). We observed that the value of Topt varied across subjects, according to their kernel sizes (Fig. 3e, left, orange and blue). As a result, when the stimulus duration was given as a single fixed value, each subject would show a noticeably different performance.
When we normalized the time axis of each subject’s performance curve with their intrinsic kernel size T0, the performance curves instead showed a similar trend, which increased toward 1 (Tstim = Topt) and gradually decreased after (Fig. 3e, right, Fig. 3f, see Supplementary Fig. S8 for details). As a result, in the normalized time scale, the population average showed a peak around 1 (Fig. 3f, red solid line), suggesting that most subjects showed the maximum pcorrect when the stimulus duration matched their intrinsic sensory integration kernel size. Taken together, these results confirm that sensory integration in an individual is governed by the observed non-linear kernel profile and the performance of a perceptual task may also vary, depending on the difference between the kernel size and stimulus duration.
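The predicted non-monotonic performance curve follows directly from integrating the kernel over the stimulus duration; a sketch with a hypothetical double-alpha kernel (parameter values are ours, chosen only for illustration):

```python
import numpy as np

# Hypothetical kernel: positive window followed by a negative overshoot
# (same double-alpha form as Eq. 1, with n = 2).
dt = 0.001
t = np.arange(dt, 10.0, dt)
kernel = (t / 0.5) ** 2 * np.exp(-t / 0.5) - 0.6 * (t / 1.5) ** 2 * np.exp(-t / 1.5)

# For a constant-coherence stimulus of duration T, the accumulated evidence
# is the integral of the kernel from 0 to T; it peaks exactly where the
# kernel crosses zero, so the optimal duration T_opt coincides with T0.
evidence = np.cumsum(kernel) * dt
T_opt = float(t[np.argmax(evidence)])
```

Accumulated evidence rises while the stimulus duration stays inside the positive window, then falls once the negative lobe contributes, reproducing the peak-then-decline shape of Fig. 3a.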
### Illusory motion perception and sensory integration kernel
Thus far, our sensory integration kernel has been estimated from apparent motion signals. We further examined the notion that the observed intrinsic kernel may predict subjects’ behavior for illusory motion perception. Previous studies have shown that random dots scattered in an annulus induce an illusory rotational motion26,27 and that the perceived motion direction switches spontaneously between clockwise and counter-clockwise, showing a typical bistable perception dynamic26,28,29. We hypothesized that this periodic alternation in bistable perception might also be governed by the intrinsic kernel of subjects. To validate this hypothesis, we analyzed the response behavior when subjects were asked to report the direction of the perceived motion while completely random dots (coherence, c = 0) were shown (Fig. 4a, see Movie S5). Consistent with previous studies, most subjects reported illusory rotational motion in this condition and the direction of perceived motion alternated periodically and spontaneously26. To quantify temporal features of this bistable perception, we measured the phase duration, τ, of illusory motion in one direction. Similar to a previous report30, we fit the measured τ values of a subject to a log-normal distribution and estimated the peak value $$\bar{\tau }$$, as a representation of individual dynamics of bistable perception.
The bistable phase duration, or $$\bar{\tau }$$, remained consistent within an individual, but varied across individuals. For example, Subject 5’s periodic alternation (Fig. 4b, top) appeared relatively fast compared with that of Subject 6 (Fig. 4b, bottom). The distributions of individual τ values were fit well by a log-normal distribution in most cases (Fig. 4c); thus the peak value of the distribution, $$\bar{\tau }$$, was compared across subjects ($$\bar{\tau }$$ = 1.90 for Subject 5, 3.21 for Subject 6). The peak value, $$\bar{\tau }$$, varied greatly, from 0.5 to 5 seconds across subjects ($$\bar{\tau }$$ = 2.25 ± 1.10 seconds, mean ± S.D, see Supplementary Fig. S9). However, subjects who had a long intrinsic sensory integration time, T0, also tended to have slow switching dynamics with a large $$\bar{\tau }$$, while subjects who had a short intrinsic sensory integration interval tended to have fast switching dynamics with a small $$\bar{\tau }$$ (Fig. 4d). As predicted, we observed a strong positive correlation between the values of $$\bar{\tau }$$ and T0 (Fig. 4e, r = 0.53, p = 2.99 × 10−4, Pearson correlation coefficient, N = 42). This strong correlation between the observed kernel size and the switching dynamics in bistable perception suggests that the observed intrinsic time of sensory integration may govern the perceptual response to illusory motions, as well as apparent motions.
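The log-normal fit and its peak $$\bar{\tau }$$ can be sketched as follows (synthetic durations; we use the closed-form mode exp(μ − σ²) of the log-normal density rather than the authors' fitting routine):

```python
import numpy as np

def tau_bar(durations):
    """Fit a log-normal to bistable phase durations and return its mode,
    exp(mu - sigma^2), used as the representative phase duration."""
    logs = np.log(np.asarray(durations))
    mu, sigma = logs.mean(), logs.std()
    return float(np.exp(mu - sigma ** 2))

# Synthetic example: phase durations scattered around ~2 s.
rng = np.random.default_rng(0)
taus = rng.lognormal(mean=np.log(2.0), sigma=0.4, size=5000)
tb = tau_bar(taus)  # expected near 2 * exp(-0.16), about 1.70 s
```

Using the mode rather than the mean makes $$\bar{\tau }$$ robust to the long right tail that log-normal phase-duration distributions typically show.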
## Discussion
Previous studies of motion perception have suggested that perceptual decisions arise through an accumulation of evidence, thus this process can be characterized by the drift-diffusion model13,14. In this bounded-evidence-accumulation model, the inter-individual variability in perceptual decisions is frequently explained by various conceptual parameters such as a decision boundary threshold, evidence accumulation rate, and choice bias10,11. The model can partially predict observed experimental results such as individual accuracy of perception. However, it still remains unclear what physical variables may indeed represent those decision parameters and if any of them are intrinsically consistent to characterize individual variance of subject behavior. Our result quantitatively describes the temporal profile of sensory integration for perceptual decisions, providing new insight into how decision variables should be implemented in the drift-diffusion model. Specifically, our result suggests that sensory integration is highly nonlinear in the temporal dimension, as observed in our kernel estimation, while the drift-diffusion model suggests that evidence accumulation (or the drift rate) is uniform over time. Moreover, our model can describe the decision variable as integrated sensory evidence in the drift-diffusion model, and the observed nonlinear integration kernel can then precisely describe how the drift rate changes nonlinearly, which is diverse across individuals. Thus, our finding of an intrinsic kernel suggests an alternative description of the drift-diffusion model and provides direct evidence that the intrinsic sensory integration interval is subject-specific and stimulus independent. Another notable issue here is that the amplitude of the integration kernel, Apos, varies with the stimulus condition (frequency), while the kernel size, T0, remains consistent (Supplementary Fig. S4). 
This suggests that subjects can modulate the total amount of motion integration depending on the stimulus condition while keeping the integration time constant during a response decision. Our results also suggest that the inter-individual variability in perceptual decisions may originate from this intrinsic sensory integration timescale and therefore may be considered a predictable trait.
One of the notable features of our sensory integration kernel is the existence of a negative portion of the profile. Similar to the current result, previous studies have also examined the extent to which presented stimuli contribute to current decisions20,31, but only a positive contribution of the evidence was considered in such cases. Those studies reported that stimuli presented early (1 s prior to the decision) affect the perceptual decision more strongly than those presented later. Indeed, if we look at a small temporal window of the kernel 1–2 s before the response, our observation showed trends identical to those in the earlier findings (see Supplementary Fig. S10). Most likely, previous studies did not observe a negative portion because they only used stimuli of short duration and were thus unable to observe the integration process more than one second before the decision. The kernel observed in our study suggests that far before the decision (>1 s), the evidence does not always positively affect sensory integration. One might then ask by what mechanism the negative portion of the decision kernel emerges. One possible candidate is the motion aftereffect. Because our motion stimulus is very long and consistent in time, it may generate a strong aftereffect that reverses the direction of the perceived rotational motion. Similar to previous findings that visual orientation perception can be negatively affected by the recent stimulus history via the tilt aftereffect32, our motion perception can also be negatively affected by the motion aftereffect. If the motion aftereffect were strong enough to generate negative integration, our integration kernel must contain information on the diverse motion aftereffect profiles33 of individuals, which might be confirmed in a subsequent study.
We were able to demonstrate that the observed sensory integration kernel can accurately predict diverse characteristics of perceptual behavior. In our first experiment, the number of perceived motion switching under the same stimulus conditions varied across the subjects (Fig. 2b) and this number was inversely related to the observed subject’s kernel size (Fig. 2c). Moreover, it was noticeable that subjects with shorter kernel size could detect the motion direction better than the subjects with the longer kernel size when the motion coherence of the stimulus fluctuated with different frequencies (Fig. 2g, Supplementary Fig. S7). Regardless of the stimulus frequency, subjects with the shorter kernel perceived the change of motion direction better than those with the longer kernel, potentially because a shorter integration kernel may induce less sampling error in integrating noisy coherent signals than a longer sampling kernel and therefore may be advantageous for encoding highly varying stimuli (see Supplementary Fig. S7b). Another noticeable result is the strong correlation between the response delay and the observed kernel size. In our observations, the response delay and the kernel size were almost identical; thus the response delay appeared very consistent within a subject and diverse across subjects, similar to the kernel profile (Supplementary Fig. S7e). In accordance with the previous observation of the relationship between decision time and motion discrimination accuracy11, this suggests that the timing of the subjects’ decision provides information about an individual’s decision process.
Contrary to anecdotal observations, we demonstrated that a longer duration of constant motion stimulus did not enhance subject performance. Indeed, when the stimulus contains a constant motion with a fixed direction, a longer stimulus duration would generate more information accumulated in the correct direction of the decision variable; therefore the drift-diffusion model predicts a higher correct ratio of the decision. In contrast, our observed sensory integration kernel has a highly non-linear structure with a positive peak and a negative overshoot thereafter. Thus, stimulus information provided within the positive part of the kernel would enhance the performance, while a longer stimulus duration may induce negative drift and degrade the decision performance (Fig. 3a). As predicted by the observed kernel, our experiments showed that there exists an optimal stimulus duration for each subject and the subject’s performance became worse when the stimulus duration became longer than this length. Note that it is a sub-optimal strategy34,35 to weight any evidence negatively in the second experiment, because subjects should not be affected by early evidence for optimal performance. However, our results show that most subjects negatively integrate early presented evidence when the stimulus duration is sufficiently long. This negative weighting may have been induced by motion aftereffects, indicating that the observed kernel profile reflects the complicated characteristics of sensory integration for motion perception instead of simple linear integration. Therefore, our second experiment suggests that sensory integration is not a simple linear accumulation, but can be predicted by each subject’s observed non-linear kernel (Fig. 3e,f).
This result raises an important issue; often, human psychophysics experiments are performed with fixed stimulus parameters for all subjects and the responses are averaged across subjects, ignoring inter-individual variation. Under these conditions, each subject will exhibit distinct decision behavior governed by their intrinsic kernel, and the analysis could be misguided if we ignore the subject-specific traits. For example, if we simply average all the subject responses from a fixed timescale of stimuli, the averaged result may not show any clear trend (Fig. 3e, left). But if we account for the subject-specific traits by kernel size, so that the stimulus parameters are matched to the individual integration time, a common tendency of responses can be properly observed (Fig. 3e, right). This suggests that psychophysics experiments should be designed and performed carefully with a consideration of subject-specific differences.
Lastly, we showed that the observed kernel could predict the temporal features of bistable perception. The bistable perception in our third experiment is of a dynamic illusory motion, where subjects perceive a rotational motion of quasi-consistent duration from a totally random signal. For decades, it has been of interest to find the underlying mechanism of bistable perception36,37,38,39, particularly the origin of the periodic alternation of perceived states. It has been reported that the bistable switching frequencies from different types of stimuli are correlated within each subject, suggesting a common mechanism of bistable alternation40,41,42. We found a strong linear correlation between the phase duration of bistable perception and the sensory integration kernel size (Fig. 4e). Based on this, we argue that the origin of the subject-specific motion integration dynamics may be relevant to previous findings pertaining to bistable perception. First, it was reported that the gray matter volumes of the bilateral superior parietal lobes (SPL) were significantly correlated with the perceptual switching of a rotating structure-from-motion stimulus17,18,43. Specifically, individuals’ gray matter volumes of the anterior SPL (aSPL) were positively correlated with the phase duration and individuals’ gray matter volumes of the posterior SPL (pSPL) were negatively correlated with the phase duration. These outcomes suggest that the gray matter volumes of an individual’s superior parietal lobes determine the motion integration time. Second, it was also reported that the phase duration of bistable switching was significantly increased when lorazepam, a GABAa receptor agonist, was given to humans44. In line with this result, computational models reported that inhibition can slow down the switching of bistable perception44,45,46,47,48. This suggests that the inhibition level of an individual brain may reflect the temporal scale of motion integration.
Future studies should be conducted to confirm these notions.
In conclusion, we were able to verify an individual profile of sensory integration kernel from our controlled random dot stimulus and showed that human perceptual behaviors are governed by this kernel. The size of the kernel predicted an optimal stimulus duration for correct perceptual decision and the temporal characteristics of response under bistable conditions. Overall, our findings suggest that perceptual decisions arise in the intrinsic timescale of the sensory integration process.
## Methods
### Participants
Forty-five subjects (23 females, 22 males, ranging in age from 20 to 29 years, with normal or corrected-to-normal vision) were enrolled in this study. All experimental procedures were approved by the Institutional Review Board (IRB) of KAIST (KH2017-05) and all procedures were carried out in accordance with approved guidelines. Written informed consent was obtained from all subjects.
### Display and visual stimulus
Visual stimuli were presented on an LCD monitor screen (DELL U3014, 29.8 inches, 2560 × 1600, 60 Hz temporal resolution) for all experiments. Subjects were positioned 160 cm away from the monitor and were asked to report their perception of the stimulus using buttons on the keyboard. At each frame of the stimulus, black dots were distributed in a circular annulus. The inner and outer radii of the annulus were at a 3.5 degree and 5 degree visual angle, respectively, from the center of the screen. The individual dots were 5 arcmin of visual angle in diameter and the dot density was set to 5 dots/deg2. The refresh rate of the visual stimuli was 20 Hz; thus, dots at each frame lasted for 50 ms and the dot locations were repositioned in the next frame. A black cross appeared at the center of the screen and each subject was asked to fix his or her eyes on the cross during the experiment. Stimulus conditions – including viewing distance, radii of the annulus, dot size, dot density, refresh rate, and the angle of rotation – were optimized based on the results from preliminary trials and previous references26 and were applied to all subjects. All visual stimuli were generated with MATLAB Psychtoolbox 3.049.
### First experiment: Coherence-varying motion discrimination task
The 1st experiment was comprised of five conditions. In one condition, the motion coherence level of the stimulus was set to 0 for a duration of 60 seconds (Fig. 4). In this condition, all of the dots in every frame were randomly located in the annulus and did not produce any global rotational motion. In the other four conditions, the motion coherence level of the stimulus, S(t), was set to fluctuate over time (Figs 1 and 2). In these conditions, S(t) was calculated from the following equation:
$${\rm{S}}({\rm{t}})={A}_{1}{\int }_{\tau =0}^{60}{C}_{0}(\tau )g(t-\tau )d\tau$$
where C0(τ) is a random number drawn from the normal distribution N(0, 0.05) and g(t) is a Gaussian filter:
$$g({\rm{t}})=\frac{1}{{\sigma }_{filter}\sqrt{2\pi }}{e}^{\frac{-{t}^{2}}{2{{\sigma }_{filter}}^{2}}}$$
with four different σfilter values of 100, 200, 400, and 800 ms. A1 is a constant that normalizes the amplitude of S(t) (A1 = 5.4, 7.6, 10.75, or 15.20 for σfilter values of 100, 200, 400, and 800 ms, respectively), chosen so that the average absolute coherence amplitude (i.e., mean |S(t)|) is 8% under all four frequency conditions (see Supplementary Fig. S1). The sign of S(t) determined the rotation direction (clockwise for positive, counter-clockwise for negative values). At each frame, a fraction |S(t)| of the dots was rotated by an angle θrotate = ±5° in the next frame. The detailed statistics of S(t) are shown in Fig. S1.
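As an illustration of this stimulus construction (not the authors' generation code): white noise drawn from N(0, 0.05) at the 20 Hz frame rate is smoothed with a Gaussian of width σfilter and rescaled so that the mean absolute coherence is 8%. Rather than the precomputed constants A1, this sketch normalizes empirically; the function name and defaults are illustrative.

```python
import numpy as np

def coherence_signal(duration_s=60.0, dt=0.05, sigma_filter=0.2,
                     target_mean_abs=0.08, seed=0):
    """Generate a smoothed random motion-coherence signal S(t).

    White noise C0 ~ N(0, 0.05) sampled on the 20 Hz frame grid is
    convolved with a Gaussian filter of width sigma_filter (seconds),
    then rescaled so the mean absolute coherence equals target_mean_abs.
    """
    rng = np.random.default_rng(seed)
    t = np.arange(0.0, duration_s, dt)
    c0 = rng.normal(0.0, 0.05, size=t.size)

    # Gaussian kernel on the same time grid, truncated at +/- 4 sigma
    k = np.arange(-4 * sigma_filter, 4 * sigma_filter + dt, dt)
    g = np.exp(-k**2 / (2 * sigma_filter**2))
    g /= g.sum()

    s = np.convolve(c0, g, mode="same")
    s *= target_mean_abs / np.mean(np.abs(s))  # normalize mean |S(t)| to 8%
    return t, s
```

The sign of the returned signal gives the rotation direction at each frame, and its magnitude gives the fraction of coherently rotated dots.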
### Second experiment: Kernel-matched motion discrimination task
In the stimulus of the 2nd experiment (Fig. 3), the black cross appears in the center of the screen for fixation. After 3 seconds, the black cross turns red to cue the upcoming stimulus. After 1 second of cueing, the black cross reappears for 2 seconds, and then the black dots in the annulus are plotted on the screen. The dots were set to have a fixed rotational direction, clockwise (CW) or counter-clockwise (CCW), which lasted for Tstim. During Tstim (stimulus duration), the coherence level was fixed at 5%. The stimulus duration, Tstim, was randomly chosen from the pool [0.5, 1, 1.5, 2, 3, 5] seconds (Fig. 3b). The sequence of stimulus duration conditions was randomized so that subjects could not predict the stimulus duration of the current trial.
### Behavior
In the first experiment (Figs 1, 2 and 4), subjects viewed rotating dots on the screen and were asked to report the direction of rotation by pressing the arrow keys on the keyboard whenever they perceived a change in the rotational direction of the dots (the right arrow key for clockwise rotation, the left arrow key for counter-clockwise rotation, and the down arrow key for mixed or ambiguous rotation). After the first direction report, subjects were asked to report the changes in direction via the corresponding key of a keyboard. Subjects were instructed to press the down arrow key (mixed or ambiguous button) whenever they perceived no motion, strong motion in a non-angular direction, or both clockwise and counter-clockwise motion simultaneously. During the entire experiment, subjects rarely reported mixed/ambiguous rotation (less than 0.15% of time on average) and all of the subjects perceived rotational motion while watching26,27. Prior to data acquisition, subjects watched 30 s of random dots with no coherent motion (i.e., motion coherence level = 0 during stimulation), and before the main experiment, subjects performed one training session containing three 60 s trials to become familiar with the keyboard report. In the first experiment, each subject performed a total of 80 sequences of the trials: 64 trials (16 trials × 4 frequency conditions) of a coherence-varying motion condition and 16 trials of a random motion condition (S(t) = 0), with a random sequence of conditions.
In the second experiment, subjects were asked to fixate on the center of the screen and be aware of the upcoming stimulus when the red cross appeared. When dots appeared on the screen, subjects were asked to concentrate on the stimulus for the entire duration with no keyboard response. After visual stimulation ended, subjects were asked to report the rotational direction of the stimulus perceived at the last moment of the stimulus. Specifically, they were instructed to report the perceived direction for two possible cases; first, if they perceived no change in the motion direction through the stimulus duration, they simply reported the perceived direction, and second, if they perceived changes in the motion direction, they reported the very last direction perceived. To ensure that the subjects attended during the entire stimulus duration, we asked them to attend fully after the appearance of the red cross. Subjects were not informed that the given motion direction was fixed or that motion coherence was held constant. In the second experiment, each subject performed 50 perceptual decisions under six conditions of varying stimulus duration (300 total trials), with a randomly assigned sequence of the conditions.
Subjects performed two experimental sessions in a single day: the 1st session consisted of 20 trials of the 1st experiment and the 2nd session consisted of 30 trials of the 2nd experiment. Behavioral data were acquired over several days. In both the 1st and 2nd experiments, subjects did not receive any feedback during the experiments. Subjects were not informed about the experimental conditions (e.g., stimulus frequency in the 1st experiment, constant motion coherence in the 2nd experiment, etc.) or the objective of the experiments.
### Analysis
#### Extracting sensory integration kernel from coherence-varying motion discrimination task: Response-Triggered Average
To explore the subject-specific profile of the sensory integration kernel, the time course of sensory integration for the perceptual decision was extracted from the 1st experiment for each subject (Fig. 1). To extract a subject’s kernel, we first measured the time points at which perceptual switches were reported, tswitch. For a single frequency condition Fi of motion coherence fluctuation, we extracted the motion coherence pattern 10 seconds prior to each jth switching time, tswitch,j, and averaged these response-triggering motion coherence patterns as follows:
$${{\rm{RTA}}}_{{F}_{i}}=\frac{1}{{N}_{switch}}\sum _{j=1}^{{N}_{switch}}sgn(j)\,{S}_{{F}_{i}}({t}_{switch,j}-10\,\sim \,{t}_{switch,j})$$
To obtain the average integration kernel of a subject, the RTAs from four different frequency conditions were averaged:
$${{\rm{RTA}}}_{{\rm{average}}}=\sum _{i=1}^{4}RT{A}_{{F}_{i}}/4$$
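The response-triggered averaging above can be sketched as follows, assuming the coherence trace is sampled on the 20 Hz frame grid and switch times arrive in seconds with signs +1 (CW) and −1 (CCW); the function and argument names are illustrative, not the authors' analysis code.

```python
import numpy as np

def response_triggered_average(s, switch_times, switch_signs, dt=0.05,
                               window_s=10.0):
    """Average the coherence trace over the 10 s window preceding each
    perceptual switch, sign-flipped by the reported direction."""
    n_win = int(window_s / dt)
    segments = []
    for t_sw, sign in zip(switch_times, switch_signs):
        i = int(round(t_sw / dt))
        if i >= n_win:                        # require a full 10 s of history
            segments.append(sign * s[i - n_win:i])
    return np.mean(segments, axis=0)          # oldest sample first
```

The per-frequency RTAs returned by this function would then simply be averaged across the four frequency conditions, as in the second equation above.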
To minimize the possibility that the long and short RTAs arose from differences in switching numbers during the experiment, we generated a control response in which the responses were shuffled to random times but with the same distribution of inter-response intervals. Then, the power of the kernel, P = Σ RTA(t)2, of the actual observed RTA and of the control RTA were compared (see Supplementary Fig. S3 for details).
After we obtained the RTA of each individual, we further investigated several factors that might generate bias on the shape of the RTA. First, we checked to see if the clockwise and counter-clockwise responses induced any differences in the RTA profile. We extracted an RTA using only CW or only CCW responses and compared two separate RTAs. Second, we checked to see if prior decisions could affect the RTA profile. To remove any contribution from prior decisions, we extracted the RTA using the condition that there was no other response between t = 0 (current decision) and t = 5, which is the RTA for a single decision. We compared the parameters of the kernel for this condition to the original kernel we observed. See the detailed results in Supplementary Fig. S5.
### Parameters to describe the shape of sensory integration kernel
Four parameters of the sensory integration kernel were defined. Apositive is defined as a positive amplitude of an individual RTA - sensory integration kernel, Anegative as a negative amplitude of the RTA, T0, as the first zero-coherence crossing of RTA, Tkernel, as the timing when the negative RTA amplitude became less than 10% of the Anegative value. Correlations between each parameter were calculated and are reported in Supplementary Fig. S4.
### Motion energy analysis of the visual stimuli
While global rotational motion strength can be described by the designed motion coherence pattern, actual rotational motion strength presented to the subjects may vary locally in the annulus. To examine the net motion strength of the presented dot stimulus, we computed the motion energy of the stimulus in an angular dimension, following the previously published procedure21. Because the rotational motion is in an angular direction, we first summed the luminance value of each image frame radially, leaving the luminance value as a function of angular direction and time:
$$\bar{S}(\theta ,t)=\sum _{r=3.5^\circ }^{5^\circ }Stimulus\,Luminance(r,\theta ,t)$$
Then, to extract the local motion energy, a 24-degree pie window was chosen, and this window was slid by 6 degrees with an 18-degree overlap between adjacent spatial blocks. In total, we calculated the motion energy in 60 pie segments. The pie width (24 degrees) was chosen to match the size of the spatial filter we applied for motion energy analysis (approximately 2 visual degrees), and the sliding step (6 degrees) was chosen to make enough segments (360°/6° = 60 segments) to reduce the boundary artifact. Thus, the luminance value of the ith pie segment was defined as:
$${\bar{S}}_{\angle {i}^{th}}(\theta ,t)=\bar{S}(6(i-1)^\circ < \theta < 24+6(i-1)^\circ ,t)$$
From the luminance value of each ith pie segment ($${\bar{S}}_{\angle {i}^{th}}(\theta ,\,t)$$), the opponent motion energy was calculated by subtracting the counter-clockwise (CCW) rotational energy from the clockwise (CW) rotational energy:
$${\rm{ME}}{({\rm{t}})}_{\angle {i}^{th}}={\int }^{}CW\,energ{y}_{\angle {i}^{th}}(\theta ,t)d\theta -{\int }^{}CCW\,energ{y}_{\angle {i}^{th}}(\theta ,t)d\theta$$
The CW energy and CCW energy were calculated by squaring the linear convolution between the stimulus and spatiotemporal filter:
$$CW\,energ{y}_{\angle {i}^{th}}(\theta ,t)={({\bar{S}}_{\angle {i}^{th}}(\theta ,t)\ast C{W}_{1}(\theta ,t))}^{2}+{({\bar{S}}_{\angle {i}^{th}}(\theta ,t)\ast C{W}_{2}(\theta ,t))}^{2}$$
$$CCW\,energ{y}_{\angle {i}^{th}}(\theta ,t)={({\bar{S}}_{\angle {i}^{th}}(\theta ,t)\ast CC{W}_{1}(\theta ,t))}^{2}+{({\bar{S}}_{\angle {i}^{th}}(\theta ,t)\ast CC{W}_{2}(\theta ,t))}^{2}$$
where * denotes the linear convolution and $$C{W}_{i}(\theta ,t)$$ and $$CC{W}_{i}(\theta ,t)$$ are two pairs of spatiotemporal filters selective for each direction, defined as:
$$\begin{array}{rcl}{{\rm{CW}}}_{1}({\rm{\theta }},{\rm{t}}) & = & fast(t)O(\theta )+slow(t)E(\theta )\\ {{\rm{CW}}}_{2}(\theta ,t) & = & fast(t)E(\theta )-slow(t)O(\theta )\\ {{\rm{CCW}}}_{1}({\rm{\theta }},{\rm{t}}) & = & -fast(t)O(\theta )+slow(t)E(\theta )\\ {{\rm{CCW}}}_{2}(\theta ,t) & = & fast(t)E(\theta )+slow(t)O(\theta )\end{array}$$
Following the references, fast(t) and slow(t) denote the temporal filters19,21:
$${\rm{fast}}({\rm{t}})={(kt)}^{6}{e}^{-kt}[\frac{1}{6!}-\frac{\beta {(kt)}^{2}}{(8)!}]$$
$${\rm{slow}}({\rm{t}})={(kt)}^{9}{e}^{-kt}[\frac{1}{9!}-\frac{\beta {(kt)}^{2}}{(11)!}]$$
and $$O(\theta )$$ and $$\,E(\theta )$$ are the odd and even Gabor spatial filters:
$${\rm{E}}({\rm{\theta }})=\,\cos (2\pi f\theta ){e}^{-{(\theta /\sigma )}^{2}}$$
$${\rm{O}}({\rm{\theta }})=\,\sin (2\pi f\theta ){e}^{-{(\theta /\sigma )}^{2}}$$
with parameters β = 0.9 and k = 60 for the temporal filter, and f = 0.5 cpd and σ = 0.6° for the spatial filter.
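A compact numpy sketch of this opponent-energy computation, under stated assumptions: the pie-segment windowing is omitted (a single luminance array stim[θ, t] is processed), each space-time filter is a sum of two separable outer products so the 2-D convolution can be done axis by axis, and the sampling steps dt and dθ are illustrative choices, not the paper's values.

```python
import numpy as np
from math import factorial

def temporal_filters(dt=0.005, dur=0.3, k=60.0, beta=0.9):
    """Fast and slow temporal impulse responses with k = 60, beta = 0.9."""
    t = np.arange(0.0, dur, dt)
    fast = (k * t) ** 6 * np.exp(-k * t) * (1 / factorial(6) - beta * (k * t) ** 2 / factorial(8))
    slow = (k * t) ** 9 * np.exp(-k * t) * (1 / factorial(9) - beta * (k * t) ** 2 / factorial(11))
    return fast, slow

def spatial_filters(dtheta=0.05, half_width=2.0, f=0.5, sigma=0.6):
    """Even (cosine) and odd (sine) Gabor filters over angular position."""
    th = np.arange(-half_width, half_width + dtheta, dtheta)
    env = np.exp(-(th / sigma) ** 2)          # Gaussian envelope, negative exponent
    return np.cos(2 * np.pi * f * th) * env, np.sin(2 * np.pi * f * th) * env

def _conv_sep(stim, sp, tp):
    """Convolve stim[theta, t] with the separable filter outer(sp, tp)."""
    a = np.apply_along_axis(np.convolve, 0, stim, sp, "same")
    return np.apply_along_axis(np.convolve, 1, a, tp, "same")

def opponent_energy(stim, dt=0.005, dtheta=0.05):
    """Opponent (CW minus CCW) motion energy of stim[theta, t], using
    two quadrature filter pairs per direction, integrated over theta."""
    fast, slow = temporal_filters(dt)
    even, odd = spatial_filters(dtheta)
    cw1 = _conv_sep(stim, odd, fast) + _conv_sep(stim, even, slow)
    cw2 = _conv_sep(stim, even, fast) - _conv_sep(stim, odd, slow)
    ccw1 = -_conv_sep(stim, odd, fast) + _conv_sep(stim, even, slow)
    ccw2 = _conv_sep(stim, even, fast) + _conv_sep(stim, odd, slow)
    return ((cw1**2 + cw2**2) - (ccw1**2 + ccw2**2)).sum(axis=0)
```

A useful sanity check on this construction: a temporally static stimulus carries no motion, and the quadrature algebra makes the CW and CCW energies cancel exactly in that case.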
Having this motion energy pattern in each local spatial segment $$({\rm{ME}}{({\rm{t}})}_{\angle {i}^{th}})$$, we took three actions: (1) calculated the similarity between the motion coherence pattern and the global motion energy, (2) compared the kernel obtained from the motion energy to the kernel obtained from the motion coherence pattern (Fig. 1d), and (3) compared the extracted kernel in the four quadrants (upper, lower, right, and left segments) to investigate possible spatial bias while integrating the motion. To compare the extracted kernel in the four quadrants, we fit the coefficients A0–A4 using a linear regression model:
$${{\rm{RTA}}}_{{\rm{average}}}({\rm{t}})={A}_{0}+{A}_{1}RT{A}_{upperME}(t)+{A}_{2}RT{A}_{rightME}(t)+{A}_{3}RT{A}_{lowerME}(t)+{A}_{4}RT{A}_{leftME}(t)$$
where RTAME(t) is the response-triggered-average in each quadrant using motion energy, and RTAaverage(t) is the response-triggered-average using motion coherence level (sensory integration kernel). For details, see illustration and analysis results in Supplementary Fig. S2.
### Predicting the subject’s perceptual response with observed sensory integration kernel
To predict a subject’s perceptual response with the observed individual kernel, the data sets were divided into two subsets: 75% of the trials were sampled to estimate individual kernels, RTAsubset(t), and the other 25% of the trials were used to measure the behavioral response pattern R(t) as a validation set. This random sampling of estimation and validation sets was repeated 100 times, and the profile of RTAsubset(t) was used to predict R(t) at each sampling. To predict a subject’s response R(t) to a given stimulus S(t), we took a linear convolution of the motion coherence pattern S(t) with the individual motion integration kernel RTAsubset:
$$L(t)={\int }^{}RT{A}_{subset}(\tau )S{(t-\tau )}_{{F}_{i}}d\tau$$
thus, L(t) is the linear response to the stimulus.
We predicted that the perceived direction switches when the integrated response L(t) exceeds a threshold value Lth, as follows:
$${R}_{predicted}(t)=\{\begin{array}{c}+1(CW)\,when\,\,L(t)\ge {L}_{th}\\ -1(CCW)\,when\,\,L(t)\le -\,{L}_{th}\\ R(t-1)\,when-{L}_{th} < L(t) < {L}_{th}\end{array}$$
and the threshold value Lth was calculated from the observed kernel as:
$${L}_{th}=\sum _{t=-10}^{0}RT{A}_{subset}{(t)}^{2}$$
As a result, predicted response pattern Rpredicted(t) can be obtained from RTAsubset(t), and it was compared to observed response, Robserved(t). To examine the goodness-of-prediction, the cross-correlation between the Rpredicted(t) and the Robserved(t) was calculated (see Supplementary Fig. S6). High positive peak denotes good prediction of response pattern. As a control, the perceptual response was switched at random times, while maintaining the same inter-response-interval of the actual response.
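The prediction rule above (linear convolution, symmetric threshold ±Lth, and holding the previous percept in between) can be sketched as follows. This is a minimal illustration, assuming the kernel and stimulus share one sampling grid and the RTA array is stored oldest-sample-first (so it is reversed to act as a causal kernel); names are illustrative.

```python
import numpy as np

def predict_response(rta, s):
    """Predict the perceived direction from a stimulus trace s and an
    observed kernel rta via threshold-with-hysteresis dynamics."""
    kernel = rta[::-1]                       # lag tau runs newest-sample-first
    L = np.convolve(s, kernel)[: s.size]     # causal L(t) = sum RTA(tau) S(t - tau)
    Lth = np.sum(rta ** 2)                   # threshold from the kernel power
    r = np.zeros(s.size)
    state = 0.0
    for i, l in enumerate(L):
        if l >= Lth:
            state = 1.0                      # clockwise percept
        elif l <= -Lth:
            state = -1.0                     # counter-clockwise percept
        r[i] = state                         # otherwise keep the previous percept
    return r, L, Lth
```

The hysteresis band (-Lth, Lth) is what makes the predicted percept persist between threshold crossings, mirroring the piecewise rule in the equation above.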
### Estimating the number of perceptual switching with kernel size T0
To show that the number of perceptual switches is predictable from a subject’s sensory integration kernel, we counted subjects’ switch responses (CW to CCW; CCW to CW) during each 60 s trial (Fig. 2a) under each of the four frequency conditions. Then, the average number of responses in each frequency condition – $${N}_{switch;{F}_{i}}$$ – was obtained for each subject. To quantify the inverse relation between T0 and Nswitch, we fitted the relationship between the subjects’ average number of responses $${\bar{N}}_{switch}=\sum _{i=1}^{4}{N}_{switch;{F}_{i}}/4$$ and the inverse of the subjects’ kernel size 1/T0 to the formula $${\bar{N}}_{switch}=\frac{{\rm{C}}}{{{\rm{T}}}_{0}}$$, where C is a subject-specific constant of response rate (units of count). The same fitting was applied to the predicted response Rpredicted(t) from the kernel (Fig. 2a). For example, the observed population data estimated C as 26.0 and the predicted response data estimated C as 21.2 (Fig. 2c). Pearson’s correlation coefficient between $${\bar{N}}_{switch}$$ and 1/T0 was calculated to show significance. To show the trend of Nswitch changes under the four frequency conditions, $${\rm{\Delta }}{N}_{switch}=\sum _{i=1}^{3}({N}_{switch;{F}_{i+1}}-{N}_{switch;{F}_{i}})/3$$ was calculated for the observed data and the predicted response. Then, the relationship between ΔNswitch and T0 was fitted to the formula $${{\rm{\Delta }}N}_{{\rm{switch}}}=\frac{{{\rm{C}}}_{1}}{{{\rm{T}}}_{0}}+{C}_{2}$$, where C1 and C2 are fitting parameters from the population data (Fig. 2d). The observed data estimates of (C1, C2) were (3.93, −1.46) and the predicted response estimates were (5.42, −2.99). Pearson’s correlation coefficient between $${\rm{\Delta }}{N}_{switch}$$ and 1/T0 was calculated to show significance.
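The fit of N̄switch = C/T0 is a line through the origin in the variable x = 1/T0, so C has a closed-form least-squares solution. A minimal sketch (function name and inputs are illustrative, not the authors' fitting code):

```python
import numpy as np

def fit_switch_rate(T0, N_switch):
    """Least-squares fit of N_switch = C / T0, i.e. a line through the
    origin in x = 1/T0; returns C and the Pearson r between x and N."""
    x = 1.0 / np.asarray(T0, dtype=float)
    y = np.asarray(N_switch, dtype=float)
    C = np.sum(x * y) / np.sum(x * x)        # closed-form through-origin slope
    r = np.corrcoef(x, y)[0, 1]
    return C, r
```

With the reported population estimate C ≈ 26, a subject with T0 = 2 s would be expected to report roughly 13 switches per 60 s trial.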
### Predicting motion detection performance and response delay with kernel size T0
To examine the motion detection performance and response delay of a subject’s behavior in the 1st experiment, the cross-correlation curve between the stimulus S(t) and the response R(t) pair was calculated (Fig. 2e). Here, S(t) is the motion coherence level at each frame and R(t) is perceived direction at each frame (+1 for clockwise rotation, −1 for counter-clockwise rotation, and 0 for mixed rotation). The normalized cross-correlation CC(t) between the S(t) and R(t) was calculated (Fig. 2e and Supplementary Fig. S7). Motion detection performance was defined as the maximum value of CC(t) at t = 0~5 seconds (since all of the subjects’ cross-correlation curve showed positive maximum value before 5 seconds) and response time was defined as the time lag at which CC(t) reaches a maximum value (see Fig. 2e and Supplementary Fig. S7 and ref.26).
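A sketch of these two measures, assuming the 20 Hz frame sampling of the first experiment; the per-lag normalization used here is a simplified stand-in for a full normalized cross-correlation, and the function name is illustrative.

```python
import numpy as np

def detection_perf_and_delay(s, r, dt=0.05, max_lag_s=5.0):
    """Detection performance = peak of the normalized cross-correlation
    between stimulus S(t) and response R(t) within lags 0..5 s;
    response delay = the lag at which that peak occurs."""
    s = (s - s.mean()) / s.std()
    r = (r - r.mean()) / r.std()
    n = s.size
    max_lag = int(round(max_lag_s / dt))
    # correlate the stimulus with the response shifted later by k frames
    cc = np.array([np.mean(s[: n - k] * r[k:]) for k in range(max_lag + 1)])
    best = int(np.argmax(cc))
    return float(cc[best]), best * dt
```

Because the response trails the stimulus, only non-negative lags are searched, matching the 0 to 5 second window used in the analysis above.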
### Second experiment: Perceptual response to a motion of different duration
In the experiment with a short visual stimulation (Fig. 3), a trial was counted as correct if the reported direction matched the stimulus rotational direction. The probability of a correct response, pcorrect, was then defined as the number of correct responses per total trial number. Because pcorrect does not simply increase with stimulus duration, the fitting function describing the relation between the correct ratio and stimulus duration must allow both an increase and a decrease as the stimulus duration grows. Thus, the fitting function was chosen as an alpha function:
$${{\rm{p}}}_{{\rm{correct}}}({T}_{stim})={C}_{1}{(\frac{{T}_{stim}}{\tau })}^{n}{e}^{-(n-1)\frac{{T}_{stim}}{\tau }}+{C}_{2}$$
where n describes the slope of the curve, τ describes the time constant, and C1 and C2 determine the curve amplitude and the baseline pcorrect. We compared the root mean square error (RMSE) and the coefficient of determination of a linear function ($${{\rm{p}}}_{{\rm{correct}}}({T}_{stim})={p}_{1}{T}_{stim}+{p}_{2}$$), a Weibull function ($${{\rm{p}}}_{{\rm{correct}}}({T}_{stim})=1-0.5{{\rm{e}}}^{-{({T}_{stim}/\alpha )}^{\beta }}+c$$), and the alpha function for fitting the pcorrect curve over stimulus duration, confirming that the alpha function fits the pcorrect curve better than the other two. The fitting quality (coefficient of determination, R2) of the subjects’ pcorrect curve to the alpha function was 0.56 ± 0.21, which is higher than the R2 of the linear function (0.22 ± 0.24) and the Weibull function (−0.10 ± 0.96) (see examples in Fig. 3c and Supplementary Fig. S8).
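The alpha function and the peak it implies can be written down directly: setting its derivative to zero gives Topt = nτ/(n − 1), so the fitted n and τ determine the optimal stimulus duration. A sketch (the nonlinear fitting of C1, τ, n, and C2 itself is omitted; something like scipy.optimize.curve_fit could serve):

```python
import numpy as np

def alpha_fn(T, C1, tau, n, C2):
    """Alpha-function model of p_correct vs. stimulus duration T:
    rises to a single peak, then decays back toward the baseline C2."""
    return C1 * (T / tau) ** n * np.exp(-(n - 1) * T / tau) + C2

def optimal_duration(tau, n):
    """Peak location: d/dT of the alpha function vanishes at T = n*tau/(n-1)."""
    return n * tau / (n - 1)
```

This closed-form peak is how a fitted pcorrect curve yields each subject's Topt in the analysis that follows.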
In each individual’s fitted pcorrect curve, the time point of the maximum pcorrect, Topt, was estimated (Fig. 3c). The Pearson’s correlation between Topt and kernel size T0 was calculated to determine if motion integration is governed by the observed kernel (Fig. 3d). Note that Topt is estimated solely from the 2nd experiment and T0 solely from the 1st experiment.
Next, we investigated the general trend of each subject’s behavior to determine whether the average pcorrect was maximized at T0 (see Supplementary Fig. S8). From the fitted pcorrect curve, we Z-scored the pcorrect and then rescaled the time axis Tstim with respect to the subject’s kernel size, T0. After we obtained the normalized pcorrect curve in the time domain, we averaged all subject curves. As a control, we rescaled each subject curve with shuffled T0 of another subject. See Fig. 3e,f, and Supplementary Fig. S8 for details.
In such experiments with variable stimulus durations, the hazard rate may affect the subject’s behavior. To avoid any unexpected effects of the hazard rate, we examined two possible scenarios. First, pcorrect may increase as trial number increases in a single session. A single experimental session consisted of 30 trials of randomly assigned sequence of six different durations. Therefore, a subject may predict the stimulus duration later in the session. We measured the slope of pcorrect as a function of the trial number, and the result did not show any meaningful trend of a correctness change (the slope was close to 0), rejecting the first scenario. Second, the correctness could increase over time in a single trial. When the stimulus duration exceeds a certain length, the subject can presumably predict the length of the stimulus because candidate durations longer than that would be limited. We investigated whether pcorrect differed across the stimulus durations, and the results showed there were no significant differences, rejecting the second scenario, too. We summarize these results in Supplementary Fig. S8.
### Perceptual responses to illusory motion in bistable condition
For the condition S(t) = 0 (Fig. 4), phase duration τ was defined as the time interval between each switch of the perceived state. For each 60-second trial, the initial 10 seconds of data were excluded for the adaptation stage and the lower 1% and upper 5% of τ data points were excluded. Measured phase durations were converted into a cumulative density function, then fit to a log-normal distribution as:
$${F}_{\tau }=\frac{1}{2}[1+{\rm{erf}}(\frac{\mathrm{ln}\,\tau -\mu }{\sigma \sqrt{2}})],$$
where
$${\rm{erf}}(x)=\frac{2}{\sqrt{\pi }}{\int }_{0}^{x}{e}^{-{t}^{2}}\,dt$$
The log-normal distribution is the distribution of a variable whose logarithm is normally distributed; thus the peak (mode) of the τ distribution plays the role that the mean plays for a normal distribution. Therefore, $$\bar{\tau }$$ was used as the representative value of the perceptual switching distribution and was estimated from the fitted function as:
$$\bar{\tau }={e}^{\mu -{\sigma }^{2}}$$
Fitting was performed using the MATLAB function ‘NonlinearLeastSquares’.
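The log-normal characterization can be sketched as below. Note this uses moment-based maximum-likelihood estimates of μ and σ on ln τ rather than the cumulative-density fit described above, and the function name is illustrative.

```python
import numpy as np

def lognormal_mode(durations):
    """MLE of (mu, sigma) for log-normally distributed phase durations,
    and the mode exp(mu - sigma^2) used as the representative duration."""
    logd = np.log(np.asarray(durations, dtype=float))
    mu, sigma = logd.mean(), logd.std()       # MLE on the log of the data
    return np.exp(mu - sigma ** 2), mu, sigma
```

The returned mode corresponds to $$\bar{\tau }$$ in the equation above.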
### Statistical test
P-values and the type of statistical test used in the analysis are denoted in each figure caption and in the main text. We used a repeated-measure ANOVA to examine individual differences across the frequency conditions. Pearson’s correlation was used for the analysis of all linear correlations. We used a random shuffling method for comparison between the control and observed data, as described in the main text and figure legends. We calculated the ANOVA-Bayes Factor to determine the consistency of the T0 values across different frequency conditions (Fig. 1f) and across the local quadrants (Fig. S2) using the BayesFactor package of R.
### Data exclusion
Forty-five subjects participated in the 1st experiment. Data from three subjects were discarded from the analysis because these subjects reported an extremely small number of responses within the 60 s trials (average response < 5 per 60-second trial), leaving N = 42 subjects for further analysis. Twenty-four of these subjects also participated in the 2nd experiment. Data from three subjects were discarded from the analysis using the same criterion as in the 1st experiment, leaving a total of N = 21.
|
2023-03-26 07:30:11
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5581722259521484, "perplexity": 1558.8073827738322}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296945433.92/warc/CC-MAIN-20230326044821-20230326074821-00084.warc.gz"}
|
https://socratic.org/questions/what-is-the-domain-of-f-g-x
|
# What is the domain of (f@g)(x)?
Jul 24, 2015
If $g : A \to B$ and $f : B \to C$, then the domain of $f \circ g$ is
${\overline{g}}^{- 1} \circ {\overline{f}}^{- 1} \left(C\right)$
using the notation described below...
#### Explanation:
If $g$ is a function that maps some elements of a set $A$ to elements of a set $B$, then the domain of $g$ is the subset of $A$ for which $g \left(a\right)$ is defined.
More formally:
$g \subseteq A \times B :$
$\forall a \in A \forall {b}_{1} , {b}_{2} \in B$
$\left(\left(a , {b}_{1}\right) \in g \wedge \left(a , {b}_{2}\right) \in g\right) \implies {b}_{1} = {b}_{2}$
Use the notation ${2}^{A}$ to represent the set of subsets of $A$ and ${2}^{B}$ the set of subsets of $B$.
Then we can define the pre-image function:
${\overline{g}}^{- 1} : {2}^{B} \to {2}^{A}$ by ${\overline{g}}^{- 1} \left({B}_{1}\right) = \left\{a \in A : g \left(a\right) \in {B}_{1}\right\}$
Then the domain of $g$ is simply ${\overline{g}}^{- 1} \left(B\right)$
If $f$ is a function that maps some elements of set $B$ to elements of a set $C$, then:
${\overline{f}}^{- 1} : {2}^{C} \to {2}^{B}$ is defined by ${\overline{f}}^{- 1} \left({C}_{1}\right) = \left\{b \in B : f \left(b\right) \in {C}_{1}\right\}$
Using this notation, the domain of $f \circ g$ is simply
${\overline{g}}^{- 1} \left({\overline{f}}^{- 1} \left(C\right)\right) = \left({\overline{g}}^{- 1} \circ {\overline{f}}^{- 1}\right) \left(C\right)$
https://literaryquotation.net/blah-blah-blah-blah-at-speaking-or-in-singing/
# Blah Blah Blah Blah at speaking or in singing
Have you ever heard words like “blah blah blah” from someone while they were speaking? Of course you have. But what do they mean? Have you ever thought about it? We use “blah blah blah” in speech to refer to something that is boring or without meaningful content.
People say “blah blah blah” when they want to dismiss talk as empty or tedious, which has made it one of the most discussed filler phrases in everyday conversation.
But you may be surprised to learn that there is a song titled “Blah Blah Blah.” Very impressive, isn’t it? “Blah Blah Blah” is a song composed and first performed by Armin van Buuren, a Dutch DJ and record producer. It was released on 18 May 2018 by the labels Armada Music and Armind.
## Blah Blah Blah Song
Anyway, the “Blah Blah Blah” song was published on YouTube in May 2018, and the video has reached 512 million views. The lyrics run as follows:
Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah.
ALL WE EVER HEAR FROM YOU IS BLAH BLAH BLAH
ALL WE EVER DO IS GO JA JA JA
AND WE DON’T EVEN CARE ABOUT WHAT THEY SAY
CAUSE IT’S JA JA JA JA
BLAH BLAH BLAH BLAH
https://doc.dataiku.com/dss/8.0/operations/disk-usage.html
# Managing DSS disk usage¶
Various subsystems of DSS consume disk space in the DSS data directory. Some of this disk space is automatically managed and reclaimed by DSS (like temporary files), but some requires administrator decision and management. For example, job logs are not automatically garbage-collected, because a user or administrator may want to access them an arbitrary amount of time later.
## Automating cleanup tasks through DSS macros¶
For many of the things that can be cleaned up, DSS provides “macros” to semi-automate the process. These macros can be run manually, but you can also automate them using a scenario.
A common setup is thus to create a dedicated “Administration project”, accessible only to DSS administrators. In this project, you create scheduled scenarios that call the macros.
Note
Most of these macros can only be run with full DSS Administrator privileges
### Running the cleanup macros manually¶
• Select the appropriate macro
• Select parameters. Most cleanup macros default to running only for the current project, but have an option to run for all projects (which you can only set if you are a DSS administrator)
• Run the macro
### Running the cleanup macros automatically¶
• Create a new scenario
• Add a “Run a macro” step
• Select a time-based trigger for the scenario, configure it and activate your scenario
## Job logs¶
• Location in data dir: jobs
• Kind of data: historical logs
• Include in backups: As desired, not strictly required
Each time a job is run in DSS, DSS makes a snapshot of the project configuration/flow/code, runs the job, and keeps various logs and diagnostic information for this job.
This information is extremely useful for understanding job issues, and is not automatically garbage-collected by DSS, in case a user wants to investigate what happened with a job at a later point.
For each job, a subfolder is created as jobs/PROJECT_KEY/JOB_ID.
It is safe to remove folders of jobs that are not currently running. Logs of these jobs will not be available anymore, but the existence of the job will still be registered in the DSS UI.
### Manual cleanup¶
Job folders that correspond to not-active-anymore jobs can be removed manually.
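A minimal sketch of such a manual cleanup, assuming the default `jobs/PROJECT_KEY/JOB_ID` layout described above and an age-based retention policy; the root path and age threshold are illustrative, and you should verify that no listed job is still running before deleting anything:

```python
import os
import time

def stale_job_dirs(jobs_root, max_age_days=30, now=None):
    """List jobs/PROJECT_KEY/JOB_ID folders older than max_age_days."""
    now = now if now is not None else time.time()
    cutoff = now - max_age_days * 86400
    stale = []
    for project in sorted(os.listdir(jobs_root)):
        project_dir = os.path.join(jobs_root, project)
        if not os.path.isdir(project_dir):
            continue
        for job in sorted(os.listdir(project_dir)):
            job_dir = os.path.join(project_dir, job)
            if os.path.isdir(job_dir) and os.path.getmtime(job_dir) < cutoff:
                stale.append(job_dir)
    return stale

# Review the list before deleting, and confirm none of the jobs are running:
# import shutil
# for d in stale_job_dirs("/path/to/DATA_DIR/jobs"):
#     shutil.rmtree(d)
```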
## Scenario logs¶
• Location in data dir: scenarios
• Kind of data: historical logs
• Include in backups: As desired, not strictly required
Each time a scenario is run in DSS, DSS makes a snapshot of the project configuration/flow/code, runs the scenario (which, in turn, generally runs one or several jobs), and keeps various logs and diagnostic information for this scenario run.
This information is extremely useful for understanding scenario issues, and is not automatically garbage-collected by DSS, in case a user wants to investigate what happened with a scenario run at a later point.
For each scenario run, a subfolder is created as scenarios/PROJECT_KEY/SCENARIO_ID/SCENARIO_RUN_ID.
It is safe to remove folders of scenario runs that are not running anymore. Logs of these scenario runs will not be available anymore, but the existence of the scenario run will still be registered in the DSS UI.
### Manual cleanup¶
Scenario folders that correspond to not-active-anymore scenario runs can be removed manually.
## Saved models¶
• Location in data dir: saved_models
• Kind of data: machine learning models
• Include in backups: Yes
When a machine learning model is deployed from a ML Task onto the Flow of a project, a copy of the data for this saved model is made in the saved_models folder.
Each time the saved model is retrained by running it from the Flow, a new version of the saved model is made, and a new copy of the model data is kept.
The size of a saved model version is highly dependent on the algorithm and characteristics of the data, and can range from hundreds of kilobytes to gigabytes.
The saved_models folder on disk is structured as saved_models/PROJECT_KEY/SAVED_MODEL_ID/versions/VERSION_ID
DSS never automatically removes old versions of saved models, as the user may elect to revert to a previous version at any time (for example, if they notice that the newer version does not perform as expected). Old versions can be removed by the user from the UI of the saved model.
### Manual cleanup¶
It is safe to delete (without going through the DSS UI) the VERSION_ID folder of versions that are not currently the active version of the saved model.
Warning
Deleting the VERSION_ID folder corresponding to the currently active version would render the saved model unusable.
## Analysis data¶
• Location in data dir: analysis-data
• Kind of data: machine learning models and machine learning staging data
• Include in backups: Yes
When a model is trained in a visual analysis, by creating a ML Task, various kinds of information are kept in the analysis-data folder.
This folder is structured like:
• analysis-data/ANALYSIS_ID/MLTASK_ID/
• sessions
• splits
### sessions¶
The sessions subfolder contains the actual data of the machine learning models, for each model trained within this ML task. The size of a machine learning model is highly dependent on the algorithm and characteristics of the data, and can range from hundreds of kilobytes to gigabytes.
DSS never automatically removes old models trained in previous sessions, as the user may elect to deploy or compare any of the previous versions at any time.
Most of the data in sessions is cleared when the user deletes models or sessions from the DSS UI. In addition, the whole folder of the ML Task (including sessions) is removed when the user deletes a MLTask (or its containing visual analysis) from the DSS UI.
### splits¶
If you use the Python (in-memory) machine learning model, the splits folder contains the train and test splits data, which can become big.
DSS does not automatically remove old splits data, as the user may want to reuse them at a later time by reusing train/test split settings with the same configuration. The whole folder of the ML Task (including splits) is removed when the user deletes a MLTask (or its containing visual analysis) from the DSS UI.
It is possible to manually remove old CSV files in each splits folder, but you will lose some of the ability to compare models exactly, since they will no longer be based on the same splits. In addition, you might lose the Predicted Data and Charts screens.
## Temporary files¶
• Location in data dir: tmp
• Kind of data: temporary data
• Include in backups: No
The tmp folder contains various temporary data. Most of it is automatically cleared as needed. Cleanup is not generally required.
### Manual cleanup¶
This folder can be cleared (i.e. remove all the files within the folder, but not the folder itself) while DSS is not running.
./bin/dss stop
rm -rf tmp/*
./bin/dss start
Warning
Removing tmp or altering its content while DSS is running may render DSS inoperative
## Caches¶
• Location in data dir: caches
• Kind of data: cached data
• Include in backups: No
The cache contains precomputed data, used notably for the Explore and Charts features. This folder can be cleared (i.e. remove all the files within the folder, but not the folder itself) while DSS is not running.
### Manual cleanup¶
Removing data from the caches folder can lead to increased display time for Explore and Charts the first time they are used after the removal.
./bin/dss stop
rm -rf caches/*
./bin/dss start
## Exports¶
• Location in data dir: exports
• Kind of data: temporary data
• Include in backups: As desired
The exports folder contains download files for exports made by users.
There is one subfolder of exports per stored export data.
### Manual cleanup¶
You can remove old subfolders within this folder. Removing them will make the exports unavailable for download by users.
https://electronics.stackexchange.com/questions/72127/understanding-rc-time-constant?noredirect=1
# Understanding rc time constant
I get that if you put a resistor in series with a capacitor and add DC voltage, the cap will take longer to charge up.
How can I calculate, how long it takes for the capacitor to charge up/discharge with the given capacitance and resistance and how can I calculate the voltage at a given time?
• Did you try searching on Google or Wikipedia? – Kevin Chen Jun 9 '13 at 5:09
• This probably isn't an exact duplicate, but the following question is very similar and has good answers: electronics.stackexchange.com/questions/4951/… – PeterJ Jun 9 '13 at 12:42
• Its a very basic thing about the capacitor for which you get so many materials to go through. Just try searching on the net else go for few basic electronics books where you will get a clear understanding on the capacitor and its related equations. – Durgaprasad Jun 9 '13 at 14:38
• +1 to counteract someones downvote. It's a better Q than that. – user3624 Jun 11 '13 at 1:50
The voltage across a capacitance $C$ at time $t$, which was initially at voltage $V_0$, which is discharging through a resistance $R$, is given by:
$$V(t) = V_0 e^{\frac{-t}{RC}}$$
Charging a capacitor with a battery of voltage $V_b$ through a series resistor is similar:
$$V(t) = V_b(1-e^{\frac{-t}{RC}})$$
From these equations, you can see as time goes on, the capacitor voltage approaches its final value ($0V$ for discharging, $V_b$ for charging) but never reaches it. So, if you want to know how long it takes to (dis)charge, you first have to decide at what point to call the capacitor (dis)charged.
Let's say we want to define 99% discharged as "discharged". How long will this take? Say we had a capacitor charged to $1V$; when it is "discharged" it will be at $0.01V$. We can substitute these values into the first equation and solve for $t$:
$$0.01V = 1V\cdot e^{\frac{-t}{RC}}$$
$$\require{cancel} \frac{0.01\cancel{V}}{\cancel{1V}} = e^{\frac{-t}{RC}}$$
$$\ln(0.01) = \frac{-t}{RC}$$
$$-\ln(0.01)\, RC = t$$
$$4.6 RC \approx t$$
$RC$ is the time constant, so this tells us that after about 4.6 time constants, the capacitor will be 99% discharged. The same is true for charging.
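A quick numeric check of these formulas, with illustrative values R = 1 kΩ and C = 100 µF (so RC = 0.1 s):

```python
import math

R = 1_000     # ohms (illustrative value)
C = 100e-6    # farads (illustrative value)
tau = R * C   # time constant RC = 0.1 s

def v_discharge(t, v0):
    """Voltage of a capacitor discharging from v0 through R."""
    return v0 * math.exp(-t / tau)

def v_charge(t, vb):
    """Voltage of a capacitor charging toward vb through R."""
    return vb * (1 - math.exp(-t / tau))

# Time to reach 99% discharged: t = -ln(0.01) * RC, about 4.6 time constants
t99 = -math.log(0.01) * tau
```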
• succinct and perfect +1 – Andy aka Jun 9 '13 at 14:03
https://stacks.math.columbia.edu/tag/07WT
## 96.6 Deformation categories
We match the notation introduced above with the notation from the chapter “Formal Deformation Theory”.
Lemma 96.6.1. Let $S$ be a locally Noetherian scheme. Let $\mathcal{X}$ be a category fibred in groupoids over $(\mathit{Sch}/S)_{fppf}$ satisfying (RS). For any field $k$ of finite type over $S$ and any object $x_0$ of $\mathcal{X}$ lying over $k$ the predeformation category $p : \mathcal{F}_{\mathcal{X}, k, x_0} \to \mathcal{C}_\Lambda$ (96.3.0.2) is a deformation category, see Formal Deformation Theory, Definition 88.16.8.
Proof. Set $\mathcal{F} = \mathcal{F}_{\mathcal{X}, k, x_0}$. Let $f_1 : A_1 \to A$ and $f_2 : A_2 \to A$ be ring maps in $\mathcal{C}_\Lambda$ with $f_2$ surjective. We have to show that the functor
$\mathcal{F}(A_1 \times _ A A_2) \longrightarrow \mathcal{F}(A_1) \times _{\mathcal{F}(A)} \mathcal{F}(A_2)$
is an equivalence, see Formal Deformation Theory, Lemma 88.16.4. Set $X = \mathop{\mathrm{Spec}}(A)$, $X' = \mathop{\mathrm{Spec}}(A_2)$, $Y = \mathop{\mathrm{Spec}}(A_1)$ and $Y' = \mathop{\mathrm{Spec}}(A_1 \times _ A A_2)$. Note that $Y' = Y \amalg _ X X'$ in the category of schemes, see More on Morphisms, Lemma 37.14.3. We know that in the diagram of functors of fibre categories
$\xymatrix{ \mathcal{X}_{Y'} \ar[r] \ar[d] & \mathcal{X}_ Y \times _{\mathcal{X}_ X} \mathcal{X}_{X'} \ar[d] \\ \mathcal{X}_{\mathop{\mathrm{Spec}}(k)} \ar@{=}[r] & \mathcal{X}_{\mathop{\mathrm{Spec}}(k)} }$
the top horizontal arrow is an equivalence by Definition 96.5.1. Since $\mathcal{F}(B)$ is the category of objects of $\mathcal{X}_{\mathop{\mathrm{Spec}}(B)}$ with an identification with $x_0$ over $k$ we win. $\square$
Remark 96.6.2. Let $S$ be a locally Noetherian scheme. Let $\mathcal{X}$ be fibred in groupoids over $(\mathit{Sch}/S)_{fppf}$. Let $k$ be a field of finite type over $S$ and $x_0$ an object of $\mathcal{X}$ over $k$. Let $p : \mathcal{F} \to \mathcal{C}_\Lambda$ be as in (96.3.0.2). If $\mathcal{F}$ is a deformation category, i.e., if $\mathcal{F}$ satisfies the Rim-Schlessinger condition (RS), then we see that $\mathcal{F}$ satisfies Schlessinger's conditions (S1) and (S2) by Formal Deformation Theory, Lemma 88.16.6. Let $\overline{\mathcal{F}}$ be the functor of isomorphism classes, see Formal Deformation Theory, Remarks 88.5.2 (10). Then $\overline{\mathcal{F}}$ satisfies (S1) and (S2) as well, see Formal Deformation Theory, Lemma 88.10.5. This holds in particular in the situation of Lemma 96.6.1.
https://adrian.ng/SQL/cte/dedupe/
Sometimes we don’t want repetitions of certain values across multiple tuples in our dataset. For instance, we might not want to see an email address repeated more than once, and simply using DISTINCT is not an option.
In this case, I will invariably use a CTE to remove the offending repetitions.
### Row Number
This solution uses ROW_NUMBER(), which returns an integer that increments (from 1, by 1) as it traverses the dataset. It is also a window function, so we can partition our dataset over identical values if we so wish.
### Subquery
Suppose we include ROW_NUMBER() like so:
SELECT
Email
, ROW_NUMBER() OVER (PARTITION BY Email ORDER BY Email) AS rn
FROM dbo.Account;
### CTE
This means we can delete any row where RN > 1. However, we cannot use ROW_NUMBER() in WHERE or HAVING clauses. Therefore we use a CTE.
WITH cteDedupe AS (
SELECT
ROW_NUMBER() OVER (PARTITION BY Email ORDER BY Email) AS rn
FROM dbo.Account
)
DELETE cteDedupe
WHERE rn > 1;
http://humancommunications.wikia.com/wiki/Transmit_Beamforming
The transmit beamforming technique in a wireless communication system is a beamforming technique applied to the downlink channel where the downlink is the transmission link from the transmitter to a single receiver or multiple receivers.
In order to describe the beamforming process efficiently, we assume that the transmitter has $M$ transmit antennas and each receiver has a single antenna. The received signal at the receiver in the transmit beamforming system is modelled as
$\mathbf{x} = \sum_{i=1}^K \mathbf{w}_i s_i = \mathbf{W} \mathbf{s}$
where $s_i$ in $\mathbf{s} = [s_1, s_2, \ldots, s_K]$ is the message signal (chosen from a complex Gaussian alphabet with a constraint of power $P/M$) intended for the $i$th receiver, and $\mathbf{w}_i \in \mathbb{C}^{M \times 1}$ in $\mathbf{W} = [\mathbf{w}_1, \mathbf{w}_2, \ldots, \mathbf{w}_K]$ is the corresponding beamforming vector.
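As a numeric sketch of $\mathbf{x} = \mathbf{W}\mathbf{s}$ with illustrative sizes $M = K = 2$ (the beamforming vectors and symbols below are made up, and no power normalization is applied):

```python
M, K = 2, 2  # transmit antennas, receivers (illustrative sizes)

# Columns of W are the beamforming vectors w_i, one per receiver;
# row m corresponds to transmit antenna m.
W = [[1 + 0j, 0 + 1j],
     [0 - 1j, 1 + 0j]]
s = [1 + 0j, -1 + 0j]  # message symbols s_1, s_2 (made up)

# Superposed signal x = sum_i w_i * s_i = W s, one entry per antenna
x = [sum(W[m][i] * s[i] for i in range(K)) for m in range(M)]
# x == [(1-1j), (-1-1j)]
```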
https://tutorbin.com/questions-and-answers/text-for-the-matrix-aleftbeginarrayrrr-4-8-0-1-2-0-0-0-2-endarrayright
Question
For the matrix
$$A=\left[\begin{array}{rrr} 4 & 8 & 0 \\ -1 & -2 & 0 \\ 0 & 0 & 2 \end{array}\right]$$
find all the eigenvalues, and for each eigenvalue the general solution to the eigenvector problem.
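A worked sketch of the eigenvalue part: the third row and column of $A$ decouple, so $\lambda = 2$ is one eigenvalue, and the other two come from the upper-left $2\times 2$ block $B$ via its characteristic polynomial $\lambda^2 - (\operatorname{tr} B)\lambda + \det B = 0$:

```python
import math

B = [[4, 8], [-1, -2]]  # upper-left 2x2 block of A

tr = B[0][0] + B[1][1]                        # trace of B = 2
det = B[0][0] * B[1][1] - B[0][1] * B[1][0]   # det of B = -8 + 8 = 0

# Roots of lambda^2 - tr*lambda + det = 0 by the quadratic formula
disc = math.sqrt(tr * tr - 4 * det)
eigs = sorted([(tr - disc) / 2, (tr + disc) / 2, 2.0])
# eigs == [0.0, 2.0, 2.0]: eigenvalue 2 is repeated
```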
https://www.physicsforums.com/threads/node-analysis-help.396211/
# Homework Help: Node analysis help
1. Apr 18, 2010
### James889
Hi,
I have the following circuit
(circuit diagram image: original link broken)
I need to write a KCL equation to solve for $$I_s$$
I'm really bad at this, but here's what I tried.
$$\frac{v_1}{5} + \frac{v_1 - v_2}{5} + 1 = 0$$
I'm not sure how to write an equation for the $$v_2$$ node. Some of the current will travel down the 10 ohm resistor.
Please help
/james
Last edited by a moderator: May 4, 2017
2. Apr 18, 2010
### xcvxcvvc
You can write out your nodal equations, which will have the three unknowns $$I_s$$, $$V_1$$, and $$V_2$$. Next, you can solve for $$V_1$$ in terms of $$V_2$$ to reduce your number of unknowns to two, allowing you to solve them both.
$$V_1 = V_2 + 10$$
If you have trouble with applying nodal analysis, always start with the big picture and substitute smaller pieces into it:
for v1's node:
$$I_{R_1} + I_{R_2} + I_s = 0$$
where $$R_1$$ is the topmost 5 ohm resistor and $$R_2$$ is the left, vertical 5 ohm resistor.
for v2's node:
$$I_s + 1 = I_{5 ohm} + I_{10 ohm}$$
Last edited by a moderator: May 4, 2017
3. Apr 19, 2010
### James889
Can you really mix voltages and currents like that?
4. Apr 19, 2010
### xcvxcvvc
Nodal analysis uses the sum of currents into or out of a node to find voltages (thanks, berkeman). Your next step is to substitute voltages divided by resistances in place of those currents. Note that a voltage divided by a resistance is a current.
Example: For node v1, the current headed toward the top 5 ohm resistor is
$$\frac{v_1 - v_2}{5}$$
because v1 - v2 is the + to - voltage across the resistor that would cause current to leave the node. By convention, each resistor's voltage is calculated so that the current leaves. Example:
the current for that same resistor during the nodal analysis of node with v2 is equal to
$$\frac{v_2 - v_1}{5}$$
leaving the node. You could just as easily say that
$$\frac{v_1 - v_2}{5}$$ is entering the node, but keeping a consistent approach for all nodes reduces error.
edit: I just noticed you highlighted the ten when you asked "can you mix currents and voltages?" The answer is that no mixing happened in that relationship between V1 and V2. Look at the diagram: a battery has ten volts across it. Another way to say it: the voltage at the positive sign (relative to ground) minus the voltage at the minus sign (relative to ground) equals the voltage across the component. Therefore:
$$V_{12} = 10 = V_1 - V_2$$
Last edited: Apr 19, 2010
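Once the two node equations are written out, they are just two linear equations in $$v_1$$ and $$v_2$$. A generic sketch of solving such a pair by Cramer's rule (the coefficients below are illustrative, not taken from this circuit):

```python
def solve_2x2(a11, a12, b1, a21, a22, b2):
    """Solve a11*v1 + a12*v2 = b1 and a21*v1 + a22*v2 = b2 by Cramer's rule."""
    det = a11 * a22 - a12 * a21
    if det == 0:
        raise ValueError("singular system")
    v1 = (b1 * a22 - b2 * a12) / det
    v2 = (a11 * b2 - a21 * b1) / det
    return v1, v2

# Illustrative system: v1 - v2 = 10 and v1 + 2*v2 = 4
v1, v2 = solve_2x2(1, -1, 10, 1, 2, 4)
# (v1, v2) == (8.0, -2.0)
```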
5. Apr 19, 2010
### Staff: Mentor
Small typo -- should read "sum of currents into or out of a node". That's how you are setting up the equations, just that one sentence came out wrong I think.
https://support.gurobi.com/hc/en-us/community/posts/360059965972-TSP-SYS
After reading some documentation on sys, I now know it gives back a list. In the case of the TSP problem, what is the list of?
Hi Jose,
sys.argv is a list of the command-line arguments used when running the program. The first argument is always the name of the program.
The tsp.py example is meant to be run from the command line with the syntax
python tsp.py N
where N is an integer representing the number of random cities to use when solving the TSP. The second argument is used to construct a list of random points:
n = int(sys.argv[1])
random.seed(1)
points = [(random.randint(0, 100), random.randint(0, 100)) for i in range(n)]
If you want to fix the number of cities within the code rather than specifying it via command-line argument, you can replace the lines
if len(sys.argv) < 2:
    print('Usage: tsp.py npoints')
    sys.exit(1)
n = int(sys.argv[1])
with (e.g.):
n = 25
Thanks!
Eli
Thank you so much for your answer, it clarifies a lot. No wonder it showed an error message when I tried to run it from a Jupyter notebook.
https://cdl-prep.com/district-of-columbia-air-brake-test
# Free District of Columbia CDL Air Brakes Practice Test 2023
Do you need an Air Brakes (L) endorsement for your commercial driver's license? The District of Columbia CDL Air Brake test differs from other endorsement tests in that your license will receive a restriction mark if you fail, so good preparation before exam day is essential. To ensure that our DC CDL practice test questions are relevant, all of them are based on the DC CDL Manual, and each question has a detailed explanation so you can thoroughly learn the format and the topic. Don't be afraid of getting a restriction on your license: try our CDL practice test to get ready to pass the District of Columbia CDL Air Brake Test now.
Our CDL practice tests:
Based on 2021 DC commercial driver's license manual
Full answers + detailed explanations
Perfect for first-time, renewal applicants
DC CDL Air Brakes Test format:
25 questions
20 correct answers to pass
80% passing score
List of questions
1.
In case of emergency when driving a vehicle equipped with dual parking control valves, you should:
2.
The driver must be able to see a low air pressure warning, which comes on before pressure in the service air tanks falls below ____ psi.
3.
All air brake equipped vehicles have:
4.
At 55 mph on dry pavement, the air brake lag distance is how many feet?
5.
On a long or steep downgrade, the foot brakes should be used:
6.
The spring brakes used on the chambers in a straight truck will bring you to a stop when air pressure drops below ___ psi.
7.
The application pressure gauge shows how much air pressure you:
8.
When should the spring brakes activate?
9.
What should a driver do when the low air pressure warning signal comes on?
10.
The safety valve reduces pressure at ___ psi.
11.
If you are driving down a steep downgrade and have reached your safe speed of 40 mph, you would apply the service brake until your speed dropped to ____ mph.
12.
Average stopping distance under normal conditions at 55 mph is:
13.
If your vehicle has an alcohol evaporator, every day during cold weather you should:
14.
To stop the vehicle, the brake shoes and linings are pushed against:
15.
In case of air pressure loss, a mechanical means of preventing the vehicle from moving is called:
16.
To test the parking brake, stop the vehicle, put the parking brake on, and:
17.
The service brake system ________.
18.
On newer vehicles what color is the parking brake knob?
19.
Some vehicles are equipped with a separate air tank used specifically to release the spring brakes in an emergency, called a:
20.
Brake drums or discs _______.
21.
How often should alcohol containers be checked during cold weather?
22.
What is used to put alcohol into the system and reduce ice risk?
23.
What is the purpose of a supply pressure gauge?
24.
In a dual air brake system, one system will typically control the rear brakes, and the other:
25.
What is an alternative name for a brake pedal?
https://openmdao.org/newdocs/versions/latest/examples/beam_optimization_example_part_2.html
Revisiting the Beam Problem - Minimizing Stress with KS Constraints and BSplines¶
The following example shows the optimization of a simple beam with rectangular cross section. The beam has been subdivided into elements along the length of the beam, and one end is fixed. The goal is to minimize the volume (and hence the mass of the homogeneous beam) by varying the thickness in each element without exceeding a maximum stress constraint while the beam is subject to multiple load cases, each one being a distributed force load that varies sinusoidally along the span.
Constraining the bending stress on each element leads to a more computationally expensive derivative calculation, so we will use the KSComp to aggregate the stress vector for each load case into a single scalar value, expressed so that a negative value means the constraint is satisfied and a positive value means it is violated.
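The KS function referred to here is Kreisselmeier–Steinhauser aggregation: a smooth, conservative approximation to the maximum of a constraint vector. A minimal numpy sketch of the idea (the aggregation parameter rho = 50 is an assumed illustrative value, not taken from this example):

```python
import numpy as np

def ks_aggregate(g, rho=50.0):
    """Smooth upper bound on max(g); larger rho tracks the max more tightly."""
    m = np.max(g)
    return m + np.log(np.sum(np.exp(rho * (g - m)))) / rho

# Stress margins in the "negative means satisfied" convention described above.
g = np.array([-3.0, -1.0, -0.5, -2.0])
ks = ks_aggregate(g)  # slightly above max(g) = -0.5, still a single scalar
```

Constraining this one scalar instead of every element keeps the constraint conservative (the KS value is never below the true maximum) while requiring only a single derivative per load case.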
The problem presented here is also an example of a multi-point implementation, where we create a separate instance of the parts of the calculation that are impacted by different load cases. This enables our model to take advantage of multiple processors when run in parallel.
If we allow the optimizer to vary the thickness of each element, then we have a design variable vector that is as wide as the number of elements in the model. This may perform poorly if we have a large number of elements in the beam. If we assume that the optimal beam thickness is going to have a smooth continuous variation over the length of the beam, then it is a good candidate for using an interpolation component like SplineComp to reduce the number of design variables we need.
For this example, we have 25 elements, but can reduce that to 5 control points for the optimizer’s design variables by including the SplineComp.
The code for the top system is this:
"""
This is a multipoint implementation of the beam optimization problem.
This version minimizes volume while satisfying a max bending stress constraint in each element
"""
import numpy as np
import openmdao.api as om
from openmdao.test_suite.test_examples.beam_optimization.components.local_stiffness_matrix_comp import LocalStiffnessMatrixComp
from openmdao.test_suite.test_examples.beam_optimization.components.moment_comp import MomentOfInertiaComp
from openmdao.test_suite.test_examples.beam_optimization.components.multi_stress_comp import MultiStressComp
from openmdao.test_suite.test_examples.beam_optimization.components.volume_comp import VolumeComp
from openmdao.utils.spline_distributions import sine_distribution
def divide_cases(ncases, nprocs):
"""
Divide up load cases among available procs.
Parameters
----------
ncases : int
nprocs : int
Number of processors.
Returns
-------
list of list of int
Integer case numbers for each proc.
"""
data = []
for j in range(nprocs):
data.append([])
wrap = 0
for j in range(ncases):
idx = j - wrap
if idx >= nprocs:
idx = 0
wrap = j
data[idx].append(j)
return data
class MultipointBeamGroup(om.Group):
"""
System setup for minimization of volume (i.e., mass) subject to KS aggregated bending stress constraints.
"""
def initialize(self):
self.options.declare('E')
self.options.declare('L')
self.options.declare('b')
self.options.declare('volume')
self.options.declare('max_bending')
self.options.declare('num_elements', 5)
self.options.declare('num_cp', 50)
self.options.declare('parallel_derivs', False, types=bool, allow_none=True)
def setup(self):
E = self.options['E']
L = self.options['L']
b = self.options['b']
volume = self.options['volume']
max_bending = self.options['max_bending']
num_elements = self.options['num_elements']
num_nodes = num_elements + 1
num_cp = self.options['num_cp']
parallel_derivs = self.options['parallel_derivs']
x_interp = sine_distribution(num_elements)
comp = om.SplineComp(method='bsplines', num_cp=num_cp, x_interp_val=x_interp)
I_comp = MomentOfInertiaComp(num_elements=num_elements, b=b)
comp = LocalStiffnessMatrixComp(num_elements=num_elements, E=E, L=L)
# Parallel Subsystem for load cases.
# Determine how to split cases up over the available procs.
nprocs = self.comm.size
for j, this_proc in enumerate(divide):
num_rhs = len(this_proc)
name = 'sub_%d' % j
# Load is a sinusoidal distributed force of varying spatial frequency.
force_vector = np.zeros((2 * num_nodes, num_rhs))
for i, k in enumerate(this_proc):
end = 1.5 * np.pi
end += k * 0.5 * np.pi / (num_load_cases - 1)
x = np.linspace(0, end, num_nodes)
f = - np.sin(x)
force_vector[0:-1:2, i] = f
comp = MultiStatesComp(num_elements=num_elements, force_vector=force_vector,
num_rhs=num_rhs)
comp = MultiStressComp(num_elements=num_elements, E=E, num_rhs=num_rhs)
self.connect('local_stiffness_matrix_comp.K_local',
'parallel.%s.states_comp.K_local' % name)
for k in range(num_rhs):
sub.connect('states_comp.d_%d' % k,
'stress_comp.displacements_%d' % k,
src_indices=np.arange(2 *num_nodes))
if parallel_derivs:
color = 'red_%d' % k
else:
color = None
comp = om.KSComp(width=num_elements, upper=max_bending,
parallel_deriv_color=color)
sub.connect('stress_comp.stress_%d' % k,
'KS_%d.g' % k)
parallel_deriv_color=color)
comp = VolumeComp(num_elements=num_elements, b=b, L=L)
self.connect('interp.h', 'I_comp.h')
self.connect('interp.h', 'volume_comp.h')
self.connect('I_comp.I', 'local_stiffness_matrix_comp.I')
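As a quick standalone sanity check of the round-robin division that divide_cases performs (the helper is restated here so the snippet runs by itself):

```python
def divide_cases(ncases, nprocs):
    # Round-robin assignment of case indices to processors.
    data = [[] for _ in range(nprocs)]
    wrap = 0
    for j in range(ncases):
        idx = j - wrap
        if idx >= nprocs:
            idx = 0
            wrap = j
        data[idx].append(j)
    return data

print(divide_cases(5, 2))  # -> [[0, 2, 4], [1, 3]]
```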
Next we run the model, choosing the ScipyOptimizeDriver with the SLSQP optimizer. At the conclusion of the optimization, we print out the design variable, which is the thickness of each element.
from openmdao.test_suite.test_examples.beam_optimization.components.multi_states_comp import MultiStatesComp
E = 1.
L = 1.
b = 0.1
volume = 0.01
max_bending = 100.0

num_cp = 5
num_elements = 25
num_load_cases = 2

model = MultipointBeamGroup(E=E, L=L, b=b, volume=volume, max_bending=max_bending,
                            num_elements=num_elements, num_cp=num_cp,
                            num_load_cases=num_load_cases)

prob = om.Problem(model=model)

prob.driver = om.ScipyOptimizeDriver()
prob.driver.options['optimizer'] = 'SLSQP'
prob.driver.options['tol'] = 1e-9
prob.driver.options['disp'] = True

prob.setup(mode='rev')

prob.run_driver()

print(prob['interp.h'][0])
Optimization terminated successfully (Exit mode 0)
Current function value: [0.0349489]
Iterations: 46
Function evaluations: 170
http://star-www.dur.ac.uk/~pdraper/splat/sun243.htx/sun243su23.html
#### 3.20 Region statistics window
Using this window you can get simple constantly updated statistics on regions of the current spectrum, or fuller statistics for the whole spectrum, or collections of regions.
Constantly updated statistics for regions are:
• Mean
• Standard deviation
• Minimum
• Maximum
• Integrated flux
• TSYS
The integrated flux is a numerical estimate and does not require even coordinate spacing. It can be removed from this list by deselecting the Options->Show flux integral item. This can be used to quickly estimate the flux in a line, providing your spectra are background subtracted.
The TSYS value is a system temperature estimate and requires data from the JCMT, or you can set the necessary parameters by enabling the Options->Set TSYS parameters item, which reveals a set of controls not seen by default. Note that the effective exposure time is:
$t_\mathrm{eff} = t_\mathrm{on} t_\mathrm{off} / (t_\mathrm{on} + t_\mathrm{off})$
where $t_\mathrm{on}$ and $t_\mathrm{off}$ are the integration times spent on and off the source. The backend degradation factor is instrument specific.
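As a sketch, the effective exposure time above can be computed directly (the function name is ours, not part of SPLAT):

```python
def effective_exposure_time(t_on, t_off):
    # t_eff = t_on * t_off / (t_on + t_off), as used in the TSYS estimate.
    return t_on * t_off / (t_on + t_off)

# Equal on- and off-source times give half the individual integration time.
t_eff = effective_exposure_time(30.0, 30.0)  # -> 15.0
```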
When you use the buttons along the bottom of the window these statistics are extended to also include:
• Median
• Mode
• Sum
• Number of data points
Note that the Mode is determined as $3 \times \mathrm{median} - 2 \times \mathrm{mean}$, not from a distribution, so it will not be correct for highly skewed distributions.
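This is Pearson's empirical mode rule; a small sketch (illustrative values only) showing how it misbehaves for a skewed sample:

```python
import statistics

def mode_estimate(values):
    # Mode estimated as 3 * median - 2 * mean, as in the note above.
    return 3 * statistics.median(values) - 2 * statistics.fmean(values)

# A heavily right-skewed sample: the estimate lands at -1.2,
# nowhere near the most frequent value, 2.
skewed = [1.0, 2.0, 2.0, 3.0, 10.0]
estimate = mode_estimate(skewed)  # -> -1.2
```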
If you select the Options->Show extra stats menu item you can also see:
• Sum of squares
• RMS (square root of the mean square)
• Variance
• Standard error
• 25% and 75% quantiles
• Skew
• Kurtosis
and if you select the Error stats-> menu you can get basic statistics on the values that are being used as the spectrum errors (or variances).
To get statistics for the whole of the current spectrum just press the Whole stats button. If you want statistics for parts of the spectrum you will need to define the coordinate ranges you want to use (see the next part), and then press the Selected stats button to get statistics for the selected regions combined together (if none are selected then all regions are used), or All stats to get statistics for all the regions combined together.
Coordinate ranges
This part of the window is used to define the regions of the spectra that you want to see statistics for. Once defined you will see simple, continuously updated, statistics. You can also get fuller statistics using the Selected stats and All stats buttons.
To add a region press the Add button and then drag out a region in the display area of the plot window. This should result in the creation of a green rectangular figure.
You can interact with the figure, moving it side-to-side and resizing it. To do this point at the figure and press the left mouse button. This ‘selects’ the figure and adds grips to its exterior. Note that it also becomes the selected row in the Coordinate ranges: table. To move the figure just drag it and to resize it drag a grip (the little black squares). The associated coordinate range in the table updates with these changes.
To add a second range just press Add and repeat. The ranges can be overlapped or not.
To fine tune the ranges you can edit the values in the ranges table, just point at the coordinate you want to change and double click the left mouse button. This should enable the text editing cursor. Just make the modifications you want and press <Return> to make the changes permanent. (Note: if your spectra have sky coordinates shown for the X axis, then you should use the same format for your edits).
To read a set of ranges from a disk file choose the File->Read ranges menu item. The format of the input file is simple. It should have at least two fields separated by whitespace or commas. Comments are indicated by lines starting with a hash (#) and are ignored. You can also save the ranges and simple statistics to disk file using (File->Save ranges). It is also possible to append the contents of the Full stats log: region to the file SPLATStats.log by pressing the Save to log file button.
Accelerator keys
• Control-s Show full stats for all the selected ranges.
• Control-l Show full stats for all the ranges.
• Control-h Show full stats for the whole spectrum.
• Control-d Add a coordinate range (interactive or non-interactive).
• Control-e Delete the selected coordinate ranges.
• Control-w Close the window.
• F1 Display help on SPLAT-VO.
• Shift-F1 Display help on window.
https://www.gamedev.net/forums/topic/64131-ripple-effects/
#### Archived
This topic is now archived and is closed to further replies.
# ripple effects
This topic is 5933 days old which is more than the 365 day threshold we allow for new replies. Please post a new topic.
## Recommended Posts
Ripple, not Nipple. Is there any specific way to make a ripple effect in direct3d? i was supposed to post this in my other topic but forgot.
##### Share on other sites
check out Game Programming Gems for cool water ripple effects. they use opengl, but ripple effects are graphics API independent anyway.
a2k
##### Share on other sites
If your ripple effect ends up sucking try coding a nipple effect, it might distract your viewer away from the awful ripple effect.
------------
- outRider -
##### Share on other sites
TASTELESS ! IMPERIALISTIC ! EXPLOITATIVE ! DEGRADING TO WOMEN !
Oh...excuse me...that was the feminist mime.
good one
Actually... I think combining a circle and a sine function would make it work. I've toyed with how the different equations would work in my mind, but I've never coded a ripple.
I came, I saw, I got programmers block.
~V''''lion
##### Share on other sites
How is that degrading to women, men have nipples too!
I just did this for a demo I sent in to a company. There's actually a decent tut on GameDev, though the one in Gems is a little better.
No trig is needed, you just use a weighted average of points on a grid.
I had problems with the rendering: if the waves got too big, it didn't look very smooth at all. I started fiddling with bezier interpolations, but ran outta time. It was a lot of fun loading various pictures and making them ripple.
Magmai Kai Holmlor
- Not For Rent
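The weighted-average grid scheme described above is usually implemented with two height buffers; a numpy sketch (our illustration, not the poster's actual demo code):

```python
import numpy as np

def ripple_step(prev, curr, damping=0.99):
    # Each interior cell becomes the average of its four neighbours in the
    # current buffer, doubled, minus its own value in the previous buffer;
    # everything is then damped, and the buffers are swapped each step.
    nxt = prev.copy()
    nxt[1:-1, 1:-1] = (curr[:-2, 1:-1] + curr[2:, 1:-1] +
                       curr[1:-1, :-2] + curr[1:-1, 2:]) / 2.0 - prev[1:-1, 1:-1]
    nxt *= damping
    return curr, nxt  # the new (prev, curr) pair

# Drop a "stone" in the middle of a flat grid and step the simulation.
prev = np.zeros((16, 16))
curr = np.zeros((16, 16))
curr[8, 8] = 1.0
for _ in range(5):
    prev, curr = ripple_step(prev, curr)
```

Rendering then just maps each cell's height (or its gradient) to a vertex offset or a texture-coordinate perturbation, which is why the technique is graphics-API independent.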
##### Share on other sites
Hmmmm.... Does that work for 3d images as well?
that's the thing I want rippled.
##### Share on other sites
3D? Like a floating sphere of water?
It would take a more complicated control grid - you'd need a grid along the surface of the object.... and you'd need to know which other points were around it....
Instead of using x,y,z you could use spherical notation, theta, phi, rho - and make rho go in and out (as opposed to z up & down). Then the tesselator needs to create the correct triangles from the spherical control points.
I think I'd start with a planar surface before I tried a sphere.
##### Share on other sites
this topic may help to show you what i want to do.
http://www.gamedev.net/community/forums/topic.asp?topic_id=63908
##### Share on other sites
quote:
If your ripple effect ends up sucking try coding a nipple effect
I thought "sucking" is the nipple effect )
##### Share on other sites
I believe lactating would be the correct effect; otherwise you've got the equivalent of a mouth in terms of functionality.
If you can read this, All your base are belong to us!
https://dsp.stackexchange.com/questions/23662/k-means-for-2d-point-clustering-in-python/23663
# K-means for 2D point clustering in python
I have a set of 2D points (pixels that are set in an image) and want to perform k-means on them. Is clustering the 2D coordinates the right way?
If so, can that be done using any libraries in python ?
It can be done very easily with scikit-learn. Examples are easy to find on their website, e.g. here. In my opinion it is the best way to go.
Modified code example from the above link:
import numpy as np
import matplotlib.pyplot as plt
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
##############################################################################
# Generate sample data
np.random.seed(0)
batch_size = 45
centers = [[1, 1], [-1, -1], [1, -1]]
n_clusters = len(centers)
X, labels_true = make_blobs(n_samples=3000, centers=centers, cluster_std=0.7)
##############################################################################
# Compute clustering with KMeans
k_means = KMeans(init='k-means++', n_clusters=3, n_init=10)
k_means.fit(X)
k_means_labels = k_means.labels_
k_means_cluster_centers = k_means.cluster_centers_
k_means_labels_unique = np.unique(k_means_labels)
##############################################################################
# Plot result
colors = ['#4EACC5', '#FF9C34', '#4E9A06']
plt.figure()
for k, col in zip(range(n_clusters), colors):
    my_members = k_means_labels == k
    cluster_center = k_means_cluster_centers[k]
    plt.plot(X[my_members, 0], X[my_members, 1], 'w',
             markerfacecolor=col, marker='.')
    plt.plot(cluster_center[0], cluster_center[1], 'o', markerfacecolor=col,
             markeredgecolor='k', markersize=6)
plt.title('KMeans')
plt.grid(True)
plt.show()
Yielding:
• How do we give the dataset of 2D points for clustering in this function? – 1010101 May 26 '15 at 9:43
• Please check the variable X in the line no. 14. It's simply a numpy array of a shape (n_samples, n_dim). – jojek May 26 '15 at 9:47
• Can we find the values in each cluster? i.e. the data and the corresponding centroids? – 1010101 May 26 '15 at 15:56
• I suggest you to go through the code slowly, trying to debug every line. What you are trying to achieve is done in line 32 and used later in 34. – jojek May 26 '15 at 16:00
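To make the comment thread concrete: the members of each cluster are recovered with a boolean mask on the labels array, exactly as in the plotting loop above. A self-contained numpy sketch with toy data (nearest-centre assignment stands in for KMeans .labels_):

```python
import numpy as np

# Toy 2D points and two cluster centres (illustrative values).
X = np.array([[1.0, 1.0], [1.1, 0.9], [-1.0, -1.0], [-0.9, -1.1]])
centers = np.array([[1.0, 1.0], [-1.0, -1.0]])

# Assign each point to its nearest centre -- the role played by k_means.labels_.
labels = np.argmin(((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2), axis=1)

# The data belonging to cluster k is simply X[labels == k].
members_0 = X[labels == 0]
members_1 = X[labels == 1]
```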
https://possiblywrong.wordpress.com/2011/05/29/generalizing-fitch-cheneys-five-card-trick/
## Generalizing Fitch Cheney’s Five-Card Trick
Ok, this should be the last of the card magic for a while. But there is actually some relevance to recent discussion, as we will see; I was reminded of this trick by the prisoner puzzle from the last couple of posts. I will describe the effect, including a “computer version” that makes for an interesting classroom exercise, and finally present a generalization of some mathematical results about how the trick works.
First, the original effect, as credited to William Fitch Cheney, Jr. The magician turns his back (or leaves the room), and asks a spectator to look through a standard 52-card poker deck and select any 5 cards. For example:
Cards selected by the spectator.
The magician’s assistant takes the chosen cards, then hands four of them, one at a time, back to the spectator who announces each card in turn to the magician, until one card remains, known only to the spectator and the assistant:
Cards presented to the magician.
After hearing the four cards, the magician immediately announces the fifth chosen card, the ace of diamonds!
It is an interesting problem to determine how the trick is done. The relationship to last week’s prisoner puzzle is clearly seen: in both cases, we are presented with an arbitrarily selected configuration of objects, and we must communicate some information about that configuration by modifying it in some limited way.
The trick obviously requires two parties, both of whom know how it works: the assistant to select and order the subset of 4 cards, and the magician to determine the remaining card. However, we can replace either or both of the assistant and the magician with a computer, which can be convenient for experimenting with the trick repeatedly, particularly in the classroom where students can search for patterns in the way cards are selected and ordered. In fact, performing the trick does not even require an actual deck of cards; the spectators/students can simply name their chosen cards.
To that end, following are two (deliberately comment-free) Python scripts that substitute for the assistant and the magician:
The assistant:
import itertools

ranks = ['ace', 'two', 'three', 'four', 'five', 'six', 'seven', 'eight', 'nine',
         'ten', 'jack', 'queen', 'king']
suits = ['clubs', 'diamonds', 'hearts', 'spades']

print("I am the magician's assistant. Please select any five cards from a")
print("standard deck, e.g., 'ace of clubs', 'two of diamonds', etc.")
cards = []
for i in range(5):
    while True:
        try:
            card = input('Enter card {0}: '.format(i + 1))
            card = card.lower().split()
            cards.append((ranks.index(card[0]), suits.index(card[-1])))
            break
        except (IndexError, ValueError):
            pass

for (i, j) in itertools.combinations(range(5), 2):
    if cards[i][1] == cards[j][1]:
        offset = (cards[j][0] - cards[i][0]) % 13
        if offset > 6:
            (i, j) = (j, i)
            offset = 13 - offset
        order = [cards[k] for k in (set(range(5)) - set((i, j)))]
        order.sort()
        order = list(list(itertools.permutations(order))[offset - 1])
        order.insert(2, cards[i])
        break

print('Read the following cards in order to the magician:')
for (rank, suit) in order:
    print('The {0} of {1}.'.format(ranks[rank], suits[suit]))
The magician:
import itertools

ranks = ['ace', 'two', 'three', 'four', 'five', 'six', 'seven', 'eight', 'nine',
         'ten', 'jack', 'queen', 'king']
suits = ['clubs', 'diamonds', 'hearts', 'spades']

print("I am the magician. Please tell me four of the cards that you selected")
print("in order as given to you by my assistant.")
cards = []
for i in range(4):
    while True:
        try:
            card = input('Enter card {0}: '.format(i + 1))
            card = card.lower().split()
            cards.append((ranks.index(card[0]), suits.index(card[-1])))
            break
        except (IndexError, ValueError):
            pass

rank, suit = cards[2]
order = (cards[0], cards[1], cards[3])
offset = list(itertools.permutations(sorted(order))).index(order) + 1
rank = (rank + offset) % 13

print('The remaining card is the {0} of {1}!'.format(ranks[rank], suits[suit]))
There is a lot of interesting mathematics in this trick. See the references below for two great detailed discussions; Simonson and Holm in particular describe the trick as a tool for student investigation of the many involved areas of discrete mathematics.
One question that has been addressed frequently is, with how large a deck of cards may the trick be performed? It turns out that 52 cards is nowhere near the limit; the trick is possible even with a deck of 124 cards. More generally, the papers below show that, given any m-subset of n cards, it is possible to determine the remaining card from an appropriate arrangement of $m-1$ of them, if and only if $n \leq m! + m - 1$.
The main point of this post is to generalize this further. It is often the case in mathematics that a problem may be easier to tackle– or at least more elegant– if we try to solve a slightly harder problem instead. I think this is one of those cases. Consider the following more general form of the trick: for a triple $(n, m, k)$, with $n > m > k$, given any m-subset of n cards, arrange k of them so that the remaining $m-k$ cards may be determined. Thus, for example, Fitch’s original trick corresponds to the triple (52, 5, 4). Then we will show that the trick is possible if and only if
${n \choose m} \leq (n)_k$
where the left-hand side is the binomial coefficient and the right-hand side is the falling factorial. More simply, the number of subsets of m cards must be at most the number of arrangements of k cards.
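Before the proof, the boundary cases can be checked numerically with Python's math module (a quick sketch of the counting condition above):

```python
from math import comb, perm

def trick_possible(n, m, k):
    # The (n, m, k) trick is possible iff C(n, m) <= n! / (n - k)!.
    return comb(n, m) <= perm(n, k)

# Fitch Cheney's original trick, the maximal 124-card deck, and one card too many.
print(trick_possible(52, 5, 4), trick_possible(124, 5, 4), trick_possible(125, 5, 4))
# -> True True False
```

At n = 124 the two counts are exactly equal, matching the bound n ≤ m! + m − 1 quoted earlier.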
Proof: That the condition is necessary is straightforward, since each possible subset of m cards chosen by the spectator must be “communicated” to the magician by a distinct k-arrangement. To show TONCAS (the obvious necessary condition is also sufficient; I was a student of West :)), consider the bipartite graph G with partite sets S and T, where S is the set of all m-subsets of n cards, and T is the set of all k-arrangements, with edges defined by inclusion. A matching saturating S would correspond to a strategy for performing the trick.
The König-Egerváry theorem states that the maximum size of a matching in a bipartite graph equals the minimum size of a vertex cover. A simple corollary guarantees a matching of size at least $e(G)/\Delta(G)$, since each vertex in a cover is incident to at most $\Delta(G)$ edges. Assuming $|S| \leq |T|$— the condition in the theorem– this ratio is $|S|$, thus G has a matching saturating S.
To wrap up, note that the theorem proved in the references corresponds to the particular case $k = m - 1$, in which case the graph G for the bounding condition is regular and $|S| = |T|$. Here we are simply making use of the more general version of Hall’s marriage theorem mentioned by Simonson-Holm… or the Birkhoff–von Neumann theorem for doubly-stochastic matrices used by Kleber, or the König-Egerváry theorem, or Menger’s theorem about graph connectivity, or the Ford-Fulkerson theorem about network flows, or Dilworth’s theorem about partially ordered sets… all of which are equivalent!
References:
1. Kleber, M., The Best Card Trick. Mathematical Intelligencer, 24 (2002). [PDF]
2. Simonson, S. and Holm, T., Using a Card Trick to Teach Discrete Mathematics. Primus: Problems, Resources and Issues in Mathematics Undergraduate Studies, 13 (2003):248-269. [PDF]
This entry was posted in Uncategorized. Bookmark the permalink.
https://ncatlab.org/nlab/show/Content
# nLab Content
The three areas of interest at the $n$Lab are mathematics, physics, and philosophy, and their interaction with category theory and higher category theory.
http://bcaffo.github.io/courses/06_StatisticalInference/homework/hw2.html
Homework 2 for Stat Inference
Brian Caffo
Johns Hopkins Bloomberg School of Public Health
• These are some practice problems for Statistical Inference Quiz 2
• They were created using slidify interactive which you will learn in Creating Data Products
• Please help improve this with pull requests here (https://github.com/bcaffo/courses)
The probability that a manuscript gets accepted to a journal is 12% (say). However, given that a revision is asked for, the probability that it gets accepted is 90%. Is it possible that the probability that a manuscript has a revision asked for is 20%?
1. Yeah, that's totally possible.
2. No, it's not possible.
3. It's not possible to answer this question.
$A = accepted$, $B = revision$. $P(A) = .12$, $P(A | B) = .90$. $P(B) = .20$
$P(A \cap B) = P(A | B) * P(B) = .9 \times .2 = .18$ this is larger than $P(A) = .12$, which is not possible since $A \cap B \subset A$.
Suppose that the number of web hits to a particular site are approximately normally distributed with a mean of 100 hits per day and a standard deviation of 10 hits per day. What's the probability that a given day has fewer than 93 hits per day expressed as a percentage to the nearest percentage point?
1. 76%
2. 24%
3. 47%
4. 94%
Let $X$ be the number of hits per day. We want $P(X \leq 93)$ given that $X$ is $N(100, 10^2)$.
round(pnorm(93, mean = 100, sd = 10) * 100)
[1] 24
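For readers without R handy, Python's standard library gives the same number (a sketch using `statistics.NormalDist`):

```python
from statistics import NormalDist

# P(X <= 93) for X ~ N(100, 10^2), expressed as a percentage
p = NormalDist(mu=100, sigma=10).cdf(93)
print(round(p * 100))  # 24, matching R's pnorm
```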
Suppose 5% of housing projects have issues with asbestos. The sensitivity of a test for asbestos is 93% and the specificity is 88%. What is the probability that a housing project has no asbestos given a negative test expressed as a percentage to the nearest percentage point?
1. 0%
2. 5%
3. 10%
4. 20%
5. 50%
6. 100%
$A = asbestos$, $T_+ = tests positive$, $T_- = tests negative$. $P(T_+ | A) = .93$, $P(T_- | A^c) = .88$, $P(A) = .05$.
We want $P(A^c | T_-) = \frac{P(T_- | A^c) P(A^c)}{P(T_- | A^c) P(A^c) + P(T_- | A) P(A)}$
(.88 * .95) / (.88 * .95 + .07 * .05)
[1] 0.9958
Suppose that the number of web hits to a particular site are approximately normally distributed with a mean of 100 hits per day and a standard deviation of 10 hits per day.
1. What number of web hits per day represents the number so that only 5% of days have more hits? Express your answer to 3 decimal places.
Let $x$ be the number of hits per day. We want $x$ so that $F(x) = 0.95$.
116.449
round(qnorm(.95, mean = 100, sd = 10), 3)
[1] 116.449
round(qnorm(.05, mean = 100, sd = 10, lower.tail = FALSE), 3)
[1] 116.449
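The same quantile via Python's `statistics.NormalDist.inv_cdf` (a sketch):

```python
from statistics import NormalDist

# 95th percentile of N(100, 10^2): only 5% of days have more hits
x = NormalDist(mu=100, sigma=10).inv_cdf(0.95)
print(round(x, 3))  # 116.449
```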
Suppose that the number of web hits to a particular site are approximately normally distributed with a mean of 100 hits per day and a standard deviation of 10 hits per day.
1. Imagine taking a random sample of 50 days. What number of web hits would be the point so that only 5% of averages of 50 days of web traffic have more hits? Express your answer to 3 decimal places.
Let $\bar X$ be the average number of hits per day for 50 randomly sampled days. $\bar X$ is $N(100, 10^2 / 50)$.
102.326
round(qnorm(.95, mean = 100, sd = 10 / sqrt(50) ), 3)
[1] 102.326
round(qnorm(.05, mean = 100, sd = 10 / sqrt(50), lower.tail = FALSE), 3)
[1] 102.326
You don't believe that your friend can discern good wine from cheap. Assuming that you're right, in a blind test where you randomize 6 paired varieties (Merlot, Chianti, ...) of cheap and expensive wines
1. What is the chance that she gets 5 or 6 right, expressed as a percentage to one decimal place?
Let $p = .5$ and $X$ be the number she gets right, so that $X$ is binomial with $n = 6$
10.9
round(pbinom(4, prob = .5, size = 6, lower.tail = FALSE) * 100, 1)
[1] 10.9
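The binomial tail can also be computed from scratch (a Python sketch using `math.comb`):

```python
from math import comb

# P(X >= 5) for X ~ Binomial(n=6, p=0.5): she gets 5 or 6 right by luck
n, p = 6, 0.5
tail = sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in (5, 6))
print(round(tail * 100, 1))  # 10.9  (= 7/64)
```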
Consider a uniform distribution. If we were to sample 100 draws from a uniform distribution (which has mean 0.5 and variance 1/12) and take their mean, $\bar X$
1. What is the approximate probability of getting as large as 0.51 or larger expressed to 3 decimal places?
Use the central limit theorem that says $\bar X \sim N(\mu, \sigma^2/n)$
0.365
round(pnorm(.51, mean = 0.5, sd = sqrt(1 / 12 / 100), lower.tail = FALSE), 3)
[1] 0.365
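A cross-check of the CLT calculation in Python (`statistics.NormalDist`; the variable names are ours):

```python
from math import sqrt
from statistics import NormalDist

# By the CLT, the sample mean is approximately N(0.5, (1/12)/100).
se = sqrt(1 / 12 / 100)  # standard error of the mean
p = 1 - NormalDist(mu=0.5, sigma=se).cdf(0.51)
print(round(p, 3))  # 0.365
```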
If you roll ten standard dice, take their average, then repeat this process over and over and construct a histogram,
1. what would it be centered at?
$E[X_i] = E[\bar X]$ where $\bar X = \frac{1}{n}\sum_{i=1}^n X_i$
The answer will be 3.5 since the mean of the sampling distribution of iid draws will be the population mean that the individual draws were taken from.
If you roll ten standard dice, take their average, then repeat this process over and over and construct a histogram,
1. what would be its variance expressed to 3 decimal places?
$Var(\bar X) = \sigma^2 /n$
The answer will be 0.292 since the variance of the sampling distribution of the mean is $\sigma^2/10$ where $\sigma^2$ is the variance of a single die roll, which is
mean((1 : 6 - 3.5)^2 / 10)
[1] 0.2917
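The same computation in Python (a sketch):

```python
# Variance of a single fair die roll, then of the mean of 10 rolls
faces = range(1, 7)
var_one = sum((k - 3.5) ** 2 for k in faces) / 6  # 35/12, about 2.917
var_mean = var_one / 10
print(round(var_mean, 3))  # 0.292
```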
The number of web hits to a site is Poisson with mean 16.5 per day.
1. What is the probability of getting 20 or fewer in 2 days expressed as a percentage to one decimal place?
Let $X$ be the number of hits in 2 days; then $X \sim \mathrm{Poisson}(2\lambda)$
1
round(ppois(20, lambda = 16.5 * 2) * 100, 1)
[1] 1
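A stdlib-only Python cross-check, summing the Poisson pmf directly (a sketch):

```python
from math import exp, factorial

# P(X <= 20) for X ~ Poisson(lambda = 16.5 * 2 = 33)
lam = 33
p = sum(exp(-lam) * lam**k / factorial(k) for k in range(21))
print(round(p * 100, 1))  # 1.0 (R prints this as 1)
```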
https://www.physicsforums.com/threads/convergence-of-sequences.169000/
Homework Help: Convergence of sequences
1. May 5, 2007
pivoxa15
1. The problem statement, all variables and given/known data
Does
A subsequence of a sequence X converges to a point in I => The sequence X in I converges to a point in I
?
3. The attempt at a solution
I think yes, because the subsequence is the sequence itself minus a finite number of points. Since they both are in the same set I, I can't see why not.
Last edited: May 5, 2007
2. May 5, 2007
Office_Shredder
Staff Emeritus
A subsequence can be lacking an infinite number of points in I. If a sequence is 1,-1,1,-1,1,-1,...
the subsequence
1,1,1,1,1...
certainly converges. You can tell me what you think about the statement
3. May 5, 2007
pivoxa15
Good example.
The question should be
Does
A subsequence of a sequence X converges to a point in I <= The sequence X in I converges to a point in I
?
Now it should be yes.
But we generally refer to sequences and subsequences as containing an infinite number of points.
4. May 5, 2007
quasar987
If $$f:\mathbb{N}\rightarrow I$$ is a sequence in a set I, then a subsequence of f is a sequence of the form h = f o g, where $$g:\mathbb{N}\rightarrow\mathbb{N}$$ is a strictly increasing sequence of natural numbers.
I like to think of g as a discriminating function that picks which guys from f it wants on its kickball team, or which girls the Maharajah wants in his harem, or any such pictorial analogy to remember the definition.
Last edited: May 5, 2007
5. May 5, 2007
quasar987
And it goes farther too: The sequence X in I converges to a point y in I <=> Every subsequence of X converges to y.
6. May 6, 2007
pivoxa15
It's a little subtle. "Every" is essential. I didn't have "every" in my original statement, so there is no if-and-only-if.
https://forum.poshenloh.com/category/644/bonus-2
@The-Darkin-Blade Hi again! The variable $$s$$ is there just to make the expression look a little bit cleaner and to make Heron's Formula easier to memorize. See, $$s$$ is just half of the triangle's perimeter, or the semiperimeter: $$s = \frac{ a + b + c}{2}$$ If we tried to remember Heron's Formula only in terms of $$a, b,$$ and $$c$$ (without $$s$$), the expression would look a lot more ugly. It would look like this: $$\sqrt{\left(\frac{a + b + c}{2} \right)\left(\frac{-a + b + c}{2} \right)\left(\frac{a - b + c}{2} \right)\left(\frac{a + b - c}{2}\right)}$$ There's another nice thing about $$s$$: since it is equal to the perimeter divided by $$2,$$ it takes us a long way toward the area formula for a triangle. Remember the area formula is $$\frac{1}{2} \times \text{ base } \times \text{ height}$$ So all you have to do to get the area of the triangle is take $$s$$ and multiply it by the sum of the heights of each triangle!
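To see Heron's Formula agree with the base-times-height formula on concrete numbers, here is a quick check (a Python sketch; the 3-4-5 right triangle is our example, not from the post):

```python
from math import sqrt

# Heron's Formula for the 3-4-5 right triangle
a, b, c = 3, 4, 5
s = (a + b + c) / 2  # semiperimeter = 6
area = sqrt(s * (s - a) * (s - b) * (s - c))
print(area)  # 6.0, same as (1/2) * base * height = 0.5 * 3 * 4
```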
https://www.physicsforums.com/threads/9-10-people-get-this-wrong.420725/
9/10 people get this wrong
1. Aug 6, 2010
elfboy
evaluate the real part: (-1-2i)^(1/3)
and
show why (1-2i)^(1/3) is not equal to (-1)^(1/3)*(-1+2i)^(1/3)
you will be pulling your hair out
Last edited: Aug 6, 2010
2. Aug 6, 2010
Mentallic
You know, I learnt something today. My hair doesn't come out easily!
14 pages of working, and I hit a dead end :grumpy:
3. Aug 6, 2010
gomunkul51
A. The real part is about 0.2 ? :)
B. Because multiplying by (-1)^(1/3) will turn the second complex number by 45 degrees and will not give you (different quarters) (1-2i)^(1/3).
Last edited: Aug 6, 2010
4. Aug 6, 2010
hgfalling
For part a...
Well, converting this complex number to polar coordinates, we get
$$r = \sqrt{5}$$
$$\theta = \arctan 2 - \pi$$
and for the cube root we have
$$r = \sqrt[6]{5}$$
$$\theta = \frac{\arctan 2 - \pi}{3}$$
so the real part is
$$\Re z = r \cos \theta = \sqrt[6]{5} \cos \frac{\arctan 2 - \pi}{3} \approx 1.018$$
5. Aug 6, 2010
hgfalling
$$(1 - 2i)^\frac{1}{3} = \sqrt[6]{5} \left ( \cos \frac{-\arctan{2}}{3} + i \sin \frac{-\arctan{2}}{3}\right)$$
by the same logic as above.
$$(-1)^\frac{1}{3} (-1 + 2i)^\frac{1}{3} = \left(\frac{1}{2} + i \frac{\sqrt{3}}{2}\right) \sqrt[6]{5} \left( \cos \frac{\pi - \arctan{2}}{3} + i \sin \frac{\pi - \arctan{2}}{3}\right) \approx -0.2013 + 1.292 i$$
so those things aren't equal. It's not clear why they would be, since $a^c b^c = (ab)^c$ isn't an identity for the complex numbers.
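hgfalling's numbers are easy to confirm with Python's complex arithmetic, which takes the principal branch for `**` (a sketch; a negative float base with a fractional exponent yields a complex result in Python 3):

```python
# Principal cube root of -1 - 2i: real part is about 1.018
z = (-1 - 2j) ** (1 / 3)
print(round(z.real, 3))  # 1.018

# (1 - 2i)^(1/3) != (-1)^(1/3) * (-1 + 2i)^(1/3) on the principal branch
lhs = (1 - 2j) ** (1 / 3)
rhs = (-1) ** (1 / 3) * (-1 + 2j) ** (1 / 3)
print(abs(lhs - rhs) > 1)  # True: the two values differ substantially
```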
6. Aug 6, 2010
jackmell
You made no mention of principal value so the real part is actually multi-valued:
$$(-1-2i)^{1/3}=5^{1/6}e^{i/3(-\pi+\arctan(2)+2k\pi)},\quad k=0,1,2$$
7. Aug 6, 2010
...
...
...
8. Aug 6, 2010
elfboy
Yup. For part one, it's the minus pi after the arctangent that tripped me up.
9. Aug 6, 2010
DaveC426913
I am in the top 10 percentile of this teaser.
I definitely did not get it wrong.
10. Aug 6, 2010
Dick
Yet more proof, as if any were needed, that you ARE Mr. Smartypants! The problem could be a little less trap inducing by pointing out, as jackmell did, that "^(1/3)" has to be defined. Otherwise, it's multivalued. I am curious what Mentallic spent 14 pages on. Trying to find a solution in radicals?
11. Aug 7, 2010
Mentallic
Converting into a+ib form.
More specifically, I assumed $$\cos(x), \sin(x)$$ where $$x=\frac{\tan^{-1}2-\pi}{3}$$ wasn't a sufficient answer, just as it wouldn't be sufficient to leave an answer as $$\sin(\cos^{-1}(1/2))$$ since it should be simplified.
This led me down a long and treacherous road... I thought I was going to finally get the answer in the end but it didn't seem like it was going to simplify the way I hoped. I do have an idea of how to solve it, but what I'm thinking of doing next will be the death of me. It'll require at least another 10 pages of working and simplifying :/
Last edited: Aug 7, 2010
12. Aug 7, 2010
Staff: Mentor
Do I classify as 1/10 or 9/10 if I have not even attempted to solve (other than entering the expression into wolfram alpha)?
13. Aug 7, 2010
Mentallic
Obviously the 1/10; if you were in the 9/10 and then checked it, you would change your answer.
https://math.tutorvista.com/number-system/absolute-value-function.html
# Absolute Value Function
Absolute value is a term used in mathematics to indicate the distance of a number from the zero of a number line. Absolute value of a number is always positive. The absolute value of x is denoted as |x|, gives the distance between x and 0. Absolute value of 5, |5|, gives the distance between 0 and 5.
## Absolute Value Function Definition
Suppose x is a real number. Then the absolute value of x is defined as follows:
|x| = x, if x ≥ 0
|x| = -x, if x < 0
It is also called a piecewise function.
Definition of absolute value function is read as:
If x is greater than or equal to zero, then use |x| = x
That is, if a number is non negative, then its absolute value is itself.
For example:
| 3 | = 3, | 6 | = 6.
If x is less than zero, then use |x| = - x
That is, if a number is negative, then its absolute value is its opposite.
For example:
| - 3 | = 3, | - 6 | = 6.
## Absolute Value Functions and Graphs
The function f(x) = |x| is called the absolute value function.
The domain of the absolute value function is the set of real numbers, and the range is the set of non-negative real numbers.
If x is positive then f(x) = x, and if x is negative then f(x) = -x, so f(x) is never negative. The graph of an absolute value function is always "V" shaped.
Graph of f(x) = |x| or y = |x|
## Graphing Absolute Value Functions
A function of the form f(x) = |ax + b| + c, where a$\neq$0, is an absolute value function. Graphs of absolute value functions do not look like the graphs of linear functions. Because of the behavior of absolute values, it is important to include negative inputs when graphing absolute value functions.
Steps for graphing an absolute value function:
Step 1: Write two equations using the definition of absolute value.
Step 2: Solve both equations for y.
Step 3: Sketch both pieces to form the "V".
### Solved Examples
Question 1: Graph f(x) = |x + 3|
Solution:
Step 1: Put y = |x + 3|
Step 2: When x ≥ -3, y = x + 3

| x | -3 | -1 | 0 | 1 | 2 | 3 |
| --- | --- | --- | --- | --- | --- | --- |
| y = x + 3 | 0 | 2 | 3 | 4 | 5 | 6 |

Step 3: When x < -3, y = -(x + 3)

| x | -4 | -5 | -6 |
| --- | --- | --- | --- |
| y = -(x + 3) | 1 | 2 | 3 |
Step 4: Graph the values
Question 2: Graph f(x) = |x| + 2
Solution:
Step 1: Put y = |x| + 2
Step 2: When x ≥ 0, y = x + 2

| x | 0 | 1 | 2 |
| --- | --- | --- | --- |
| y = x + 2 | 2 | 3 | 4 |

Step 3: When x < 0, y = -x + 2

| x | -1 | -2 | -3 |
| --- | --- | --- | --- |
| y = -x + 2 | 3 | 4 | 5 |
Step 4: Graph the values
## Derivative of Absolute Value Function
Derivatives of absolute values can be computed using the formula:
$\frac{\mathrm{d} }{\mathrm{d} x}$ |f(x)| = $\frac{f(x)}{|f(x)|} \cdot f'(x)$
where f(x) $\neq$ 0.
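The formula can be sanity-checked against a symmetric finite difference (a Python sketch; the function f, the step size h, and the sample points are our choices):

```python
# Check d/dx |f(x)| = f(x) * f'(x) / |f(x)| for f(x) = x + 9
def f(x):
    return x + 9

def formula(x):
    return f(x) * 1 / abs(f(x))  # f'(x) = 1 here

def finite_diff(x, h=1e-6):
    # symmetric difference quotient of |f(x)|
    return (abs(f(x + h)) - abs(f(x - h))) / (2 * h)

print(formula(2), round(finite_diff(2)))      # 1.0 1   (x > -9)
print(formula(-12), round(finite_diff(-12)))  # -1.0 -1 (x < -9)
```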
### Solved Examples
Question 1: Find the derivative of |x + 9|
Solution:
Let f(x) = x + 9
Derivative of |f(x)| using the formula
$\frac{\mathrm{d} }{\mathrm{d} x}$ |f(x)| = $\frac{f(x) \cdot f'(x)}{|f(x)|}$
f '(x) = $\frac{\mathrm{d} }{\mathrm{d} x}$(x + 9) = 1
$\frac{\mathrm{d} }{\mathrm{d} x}$ |x + 9| = $\frac{(x + 9) \cdot 1}{|x + 9|}$ = $\frac{x + 9}{|x + 9|}$
If x > -9 then |x + 9| = x + 9, so
$\frac{\mathrm{d} }{\mathrm{d} x}$ |x + 9| = $\frac{x + 9}{x + 9}$ = 1
If x < -9 then |x + 9| = -(x + 9), so
$\frac{\mathrm{d} }{\mathrm{d} x}$ |x + 9| = $\frac{x + 9}{-(x + 9)}$ = -1
The derivative does not exist at x = -9.
Question 2: Find the derivative of |2x + 1|
Solution:
Let f(x) = 2x + 1
Derivative of |f(x)| using the formula
$\frac{\mathrm{d} }{\mathrm{d} x}$ |f(x)| = $\frac{f(x) \cdot f'(x)}{|f(x)|}$
f '(x) = $\frac{\mathrm{d} }{\mathrm{d} x}$(2x + 1) = 2
$\frac{\mathrm{d} }{\mathrm{d} x}$ |2x + 1| = $\frac{2x + 1}{|2x + 1|} \cdot 2$ = $\frac{2(2x + 1)}{|2x + 1|}$
If x > $-\frac{1}{2}$ then |2x + 1| = 2x + 1, so
$\frac{2(2x + 1)}{|2x + 1|}$ = $\frac{2(2x + 1)}{2x + 1}$ = 2
If x < $-\frac{1}{2}$ then |2x + 1| = -(2x + 1), so
$\frac{2(2x + 1)}{|2x + 1|}$ = $\frac{2(2x + 1)}{-(2x + 1)}$ = -2
The derivative does not exist at x = $-\frac{1}{2}$.
Question 3: Find the derivative of x + |x +7|
Solution:
Let g(x) = x + |f(x)|, where f(x) = x + 7
g'(x) = 1 + $\frac{\mathrm{d} }{\mathrm{d} x}$ |f(x)|
Derivative of |f(x)| using the formula
$\frac{\mathrm{d} }{\mathrm{d} x}$ |f(x)| = $\frac{f(x) \cdot f'(x)}{|f(x)|}$
f '(x) = $\frac{\mathrm{d} }{\mathrm{d} x}$(x + 7) = 1, and the derivative of |x + 7| does not exist at x = -7
$\frac{\mathrm{d} }{\mathrm{d} x}$ |x + 7| = $\frac{(x + 7) \cdot 1}{|x + 7|}$ = $\frac{x + 7}{|x + 7|}$
If x > -7 then |x + 7| = x + 7, so
$\frac{x + 7}{|x + 7|}$ = $\frac{x + 7}{x + 7}$ = 1
and $\frac{\mathrm{d} }{\mathrm{d} x}$ g(x) = 1 + 1 = 2
If x < -7 then |x + 7| = -(x + 7), so
$\frac{x + 7}{|x + 7|}$ = $\frac{x + 7}{-(x + 7)}$ = -1
and $\frac{\mathrm{d} }{\mathrm{d} x}$ g(x) = 1 - 1 = 0
https://www.physicsforums.com/threads/the-last-big-toughy.12036/
# The last big toughy
Ok, the last question on those pages of questions is this, and I have got stuck and can't figure it out. I need to show that (ds = degrees)
sin(45ds + x) + cos(45ds + x) = sqrt(2) cos x.
Ok, so I did the left side first and changed it to
Sin45Cosy + Cos45Siny + Cos45Cosy - Sin45Siny
Now what I want to do is divide out the CosY and SinY, but even if I do that it doesn't make square root two and I don't know what to do from there. Can you help me solve this last practice question?
They've given you a trig problem that includes the sine and cosine of
a special angle. That should prod you in the right direction. If not, look at the attached text file.
#### Attachments
• sincos.txt
275 bytes · Views: 160
HallsofIvy
Homework Helper
What you are doing doesn't make a whole lot of sense. What is Y?
sin(45+ x)= sin(45)cos(x)+ cos(45)sin(x) and
cos(45+ x)= cos(45)cos(x)- sin(45)sin(x).
Do you know what sin(45) and cos(45) are?
Plug those into the two equations above and add.
Oh ok i get it, ya i thought of putting the 1/root2 in but wasnt sure how to go from there but i understand now. Thankyou so much!
Originally posted by majinknight
Ok, the last question on those pages of questions is this, and I have got stuck and can't figure it out. I need to show that (ds = degrees)
sin(45ds + x) + cos(45ds + x) = sqrt(2) cos x.
Ok, so I did the left side first and changed it to
Sin45Cosy + Cos45Siny + Cos45Cosy - Sin45Siny
Now what I want to do is divide out the CosY and SinY, but even if I do that it doesn't make square root two and I don't know what to do from there. Can you help me solve this last practice question?
how old are you buddy?
16 currently in grade 11. Why?
Originally posted by PrudensOptimus
how old are you buddy?
Haven't you discovered the "Profile" button yet, buddy? :)
Just testing out the "math typesetting" thing:
\begin{align*} \sin(45^\circ + x) &= \sin(45^\circ) \cos x + \cos(45^\circ) \sin x \\ &= \frac{\cos x}{\sqrt{2}} + \frac{\sin x}{\sqrt{2}} \\ &= \frac{\cos x + \sin x}{\sqrt{2}} \end{align*}
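And a quick numerical spot-check of the full identity (a Python sketch; the sample angles are arbitrary):

```python
from math import sin, cos, sqrt, pi, isclose

# sin(45deg + x) + cos(45deg + x) == sqrt(2) * cos(x), for any x (radians)
for x in (0.0, 0.3, 1.0, -2.5):
    lhs = sin(pi / 4 + x) + cos(pi / 4 + x)
    rhs = sqrt(2) * cos(x)
    assert isclose(lhs, rhs)
print("identity holds at the sampled points")
```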
Originally posted by majinknight
16 currently in grade 11. Why?
lol which state are you in?
I'm in Ontario, Canada. It's a province, not a state.
are you in the canadian school system?
Yes i am, i am glad you are all so interested in me. I feel so special
Heh, it sounded a lot like one of our A-level questions; I was curious about it.
http://cms.math.ca/cjm/kw/Waring's%20problem
Canadian Mathematical Society www.cms.math.ca
Search results
Search: All articles in the CJM digital archive with keyword Waring's problem
Results 1 - 2 of 2
1. CJM 2006 (vol 58 pp. 476)
Chipalkatti, Jaydeep
Apolar Schemes of Algebraic Forms. This is a note on the classical Waring's problem for algebraic forms. Fix integers $(n,d,r,s)$, and let $\Lambda$ be a general $r$-dimensional subspace of degree $d$ homogeneous polynomials in $n+1$ variables. Let $\mathcal{A}$ denote the variety of $s$-sided polar polyhedra of $\Lambda$. We carry out a case-by-case study of the structure of $\mathcal{A}$ for several specific values of $(n,d,r,s)$. In the first batch of examples, $\mathcal{A}$ is shown to be a rational variety. In the second batch, $\mathcal{A}$ is a finite set of which we calculate the cardinality. Keywords: Waring's problem, apolarity, polar polyhedron. Categories: 14N05, 14N15.
2. CJM 2002 (vol 54 pp. 417)
Wooley, Trevor D.
Slim Exceptional Sets for Sums of Cubes. We investigate exceptional sets associated with various additive problems involving sums of cubes. By developing a method wherein an exponential sum over the set of exceptions is employed explicitly within the Hardy-Littlewood method, we are better able to exploit excess variables. By way of illustration, we show that the number of odd integers not divisible by $9$, and not exceeding $X$, that fail to have a representation as the sum of $7$ cubes of prime numbers, is $O(X^{23/36+\epsilon})$. For sums of eight cubes of prime numbers, the corresponding number of exceptional integers is $O(X^{11/36+\epsilon})$. Keywords: Waring's problem, exceptional sets. Categories: 11P32, 11P05, 11P55.
https://crypto.stackexchange.com/questions/34179/what-asymmetric-key-exchange-algorithms-are-known-besides-dh
# What asymmetric key exchange algorithms are known besides DH?
On Wikipedia, a lot of the subjects that are said to be different key exchange methods are often just protocols that incorporate the Diffie-Hellman algorithm. The only other key exchange algorithm I know of besides DH is Algebraic Eraser, which I don't know much about. Are there any others? And I don't mean key exchange schemes based on symmetric key primitives.
• I think there is some super singular elliptic curve isomorphism thingy. No idea how practical it is. – CodesInChaos Apr 2 '16 at 11:13
Using lattices/ring-LWE, there is Lattice Cryptography for the Internet (by me), which inherits from Ring-LWE encryption, and has been implemented by Bos et al. with further improvements by Alkim et al.
The underlying mechanism is conceptually DH-like, but uses completely different mathematics. We start with a uniformly random $a \in R_q = R/qR$, which can be chosen by one of the parties or by a trusted third party. Here $R$ is an appropriate choice of ring, e.g., $R=\mathbb{Z}[X]/(X^n+1)$ for power-of-two $n$ (in the few hundreds, for current security estimates).
The basic protocol works as follows (oversimplifying a bit): to establish a key, the first party chooses a "short" random $e \in R$, and announces $E \approx e \cdot a \in R_q$, where the approximation hides some short random error. Similarly, the second party chooses a short $f \in R$, and announces $F \approx a \cdot f \in R_q$. The first party can then compute $e \cdot F \approx e \cdot a \cdot f \in R_q$, and the second party can compute $E \cdot f \approx e \cdot a \cdot f \in R_q$. The parties then use some kind of "reconciliation" mechanism to extract a common secret key from their shared "noisy" versions of $e \cdot a \cdot f$.
The above mechanism can be proved secure against passive eavesdroppers assuming the hardness of the corresponding Ring-LWE problem (which itself can be proved quantumly as hard as worst-case problems on ideal lattices, for appropriate parameters). Of course, in reality we need authenticated key exchange and other properties; these can be obtained using additional techniques that originated in the DH setting (see the first link for details).
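To make the "noisy DH" shape concrete, here is a deliberately oversimplified scalar toy (integers mod q in place of ring elements, fixed small numbers in place of random ones). It illustrates only the approximate-agreement-plus-reconciliation idea and has no security whatsoever; all names and values are ours:

```python
q = 12289  # modulus (a value often seen in Ring-LWE work; here just a scalar)

a = 4000          # public value, known to everyone
e, e_err = 3, 1   # first party's short secret and short error
f, f_err = 5, 2   # second party's short secret and short error

E = (a * e + e_err) % q  # announced by the first party
F = (a * f + f_err) % q  # announced by the second party

s1 = (e * F) % q  # first party's noisy shared value,  ~ e*a*f
s2 = (E * f) % q  # second party's noisy shared value, ~ e*a*f

# Toy "reconciliation": keep the most significant bit of the value.
# It works here because s1 and s2 differ by far less than q/4.
def reconcile(x):
    return round(2 * x / q) % 2

assert reconcile(s1) == reconcile(s2)
print(reconcile(s1))  # the shared key bit
```

A real protocol draws the secrets and errors from a distribution over a polynomial ring and uses a reconciliation mechanism that also handles values near the rounding boundaries; see the linked papers for the actual construction.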
• My layman's question: If an adversary is capable to tap exactly all bits that are being transferred between the communication partners, wouldn't he be able to employ the same "reconciliation" mechanism to obtain the common secret key? – Mok-Kong Shen Apr 3 '16 at 11:53
• No, because the eavesdropper sees only $a, E,F$. There's no apparent way to get anything close to $e \cdot a \cdot f$ from this: e.g., $E \cdot F$ doesn't work (just expand it out). In fact, one can prove that the reconciled key is indistinguishable from uniform to the eavesdropper, under the Ring-LWE assumption. – Chris Peikert Apr 3 '16 at 12:12
• I don't understand the math behind lattices. Diffie-Hellman is very simple, requiring only a few operations learned by me as early as sixth grade. – Melab Apr 6 '16 at 9:06
• If you know how to add and multiply polynomials, you can understand this KE protocol. – Chris Peikert Apr 6 '16 at 11:28
• @ChrisPeikert That I do know how to do, but I'm guessing that this requires a more exotic form of them. Polynomials, for one thing, are not numbers, so there must be some sort of novel mapping between integers and polynomials in this system. What are the key sizes necessary for strength levels of 128, 192, and 256 bits? – Melab Apr 7 '16 at 4:35
different key exchange methods ... Are there any others?
More key exchange methods:
• Kirchhoff's-law-Johnson-(like)-noise (KLJN) secure key distribution
• quantum key distribution
(These rely more on hardware than on algorithms)
http://achloan.tk/radiometric-dating-equation-910897.html
What is Carbon (14C) Dating? Carbon Dating Definition
Jump to the age equation: the equation is most conveniently expressed in terms of the measured quantity N(t) rather than the constant initial value N_0.
Rad Pro Calculator: Free Online Radioactive Isotopes Decay Calculator
In the instrument, the ions set up a very weak current that can be measured to determine the rate of impacts and the relative concentrations of different atoms in the beams. Uranium-lead radiometric dating involves using uranium-235 or uranium-238 to date a substance's absolute age.
Radiometric dating, often called radioactive dating, is a technique used to determine the age of materials such as rocks. It is based on a comparison between the.
Earth sciences - Radiometric dating,
This is the International Radiocarbon Dating Standard. Ninety-five percent of the activity of Oxalic Acid from the year is equal to the measured activity of the.
Precise dating has been accomplished since … The expression that relates radioactive decay to geologic time is called the age equation. … and shale are related to the radiometric time scale by bracketing them within time zones.
Radiometric Dating Equation Clarification, Physics Forums
In radiometric dating, the decaying matter is called the parent isotope and the stable The equation (called the 'age equation') below shows the relationship of .
How To Radiometric Dating Equation – No Interracial Dating
Radiocarbon dating (usually referred to simply as carbon dating) is a concept than $$k$$ for radioactivity, so although Equation $$\ref{E3}$$ is.
An Essay on Radiometric Dating
Principles of Radiometric Dating. The dating equation used for K-Ar is $t = \frac{1}{\lambda}\ln\left(1 + \frac{\lambda}{\lambda_e}\frac{^{40}\mathrm{Ar}}{^{40}\mathrm{K}}\right)$, where $\lambda_e/\lambda$ refers to the fraction of $^{40}$K that decays to $^{40}$Ar. Some of the problems associated with K-Ar dating are excess argon; this is only a problem when dating very young rocks or in …
Radiometric dating - Simple English Wikipedia, the free encyclopedia
The equation is most conveniently expressed in terms of the measured quantity N(t) rather than the constant initial value N_0.
homework and exercises - Radiometric dating calculation - Physics Stack Exchange
Although we now recognize lots of problems with that calculation, the age of 25 my was accepted by most Principles of Radiometric Dating.
Dating Methods Using Radioactive Isotopes
Radiocarbon dating can be used on samples of bone, cloth, wood and plant fibers. 35% of its carbon 14 still, then we can substitute values into our equation.
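As a concrete illustration of substituting into the age equation, here is a minimal sketch that solves $N(t) = N_0 e^{-\lambda t}$ for $t$, assuming the commonly quoted carbon-14 half-life of 5730 years and the 35% remaining fraction mentioned above:

```python
from math import log

# A sketch of the age equation N(t) = N0 * exp(-lambda * t), assuming the
# commonly quoted carbon-14 half-life of 5730 years.
half_life = 5730.0
lam = log(2) / half_life              # decay constant lambda

fraction_remaining = 0.35             # N(t) / N0, from the example above
age = log(fraction_remaining) / -lam  # solve the age equation for t
print(f"{age:.0f} years")
```

A sample with 35% of its carbon-14 remaining comes out at roughly 8,700 years old.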
Radiometric dating - Wikipedia
Useful for calculating today's activity for any radioactive isotope. You may also back decay sources to find out the original activity (or for any date), knowing the current activity.
Absolute Geologic Time
Radiometric dating (often called radioactive dating) is a way to find out how old something is. The method compares the amount of a naturally occurring radioactive isotope and its decay products, in samples. The method uses known decay rates.
Radiometric dating is a means of determining the "age" of a mineral specimen; knowing the amounts of the radioactive element and its decay products in a mineral, it would be a simple matter to calculate its age by the formula.
Radiometric dating Facts for Kids
Lectures will focus on absolute dating techniques. Radiometric Dating. Our ability to interpret and The Radiometric Decay Equation. A constant-rate process.
Carbon 14 Dating - Math Central
Radiometric Dating - Graphical Method. The purpose of Mathematical calculation of radiometric dating involves the use of a simple equation.
Nov 23: Calculate the age of a sample using radiometric dating.
Earlham College - Geology - Radiometric Dating
This activity leads students through derivations of the equations associated with radiometric dating: the radioactive decay equation, the half-life.
Separable Differential Equations
DETERMINING AGE OF ROCKS AND FOSSILS
Lecture 3: Radiometric Dating – Simple Decay This is the basic radioactive decay equation used for determining ages of rocks, minerals and.
K-Ar dating calculation (video), Khan Academy
Radiometric dating is a means of determining the "age" of a mineral specimen by determining the relative amounts present of certain radioactive elements. By "age" we mean the elapsed time from when the mineral specimen was formed. Radioactive elements "decay" (that is, change into other elements) by.
Carbon dating, scientific technology,
For geologic dating, where the time span is on the order of the age of the earth From the radioactive decay equations, an expression for elapsed time can be.
http://joelmoreira.wordpress.com/2012/06/08/convergence-along-ultrafilters/
## Convergence along ultrafilters
— 1. Introduction —
On my previous post about recurrence theorems I stated Khintchine's theorem and Sárközy's theorem. There I classified Khintchine's theorem as a theorem about large intersections and Sárközy's theorem as a theorem about large recurrence times.
This will be the first in a series of two posts where I will prove the following result, which I would classify as a theorem about large recurrent times for large intersections. Recall that a measure preserving system (shortened to m.p.s.) is a quadruple ${(X,{\cal B},\mu, T)}$, where ${(X,{\cal B},\mu)}$ is a probability space and ${T:X\rightarrow X}$ preserves the measure, i.e. for each ${A\in {\cal B}}$ we have ${\mu(T^{-1}A)=\mu(A)}$.
Theorem 1 Let ${(X,{\cal B},\mu, T)}$ be a m.p.s. and let ${A\in {\cal B}}$ have positive measure. Let ${q\in {\mathbb Z}[x]}$ be a polynomial such that ${q(0)=0}$. Then for any ${\lambda<1}$, the set
$\displaystyle \{n\in {\mathbb Z}:\mu(T^{-q(n)}A\cap A)>\lambda\mu(A)^2\}$
is syndetic, i.e. has bounded gaps.
It turns out we just need the polynomial ${q}$ to be divisible, i.e., for each ${k\in {\mathbb N}}$ there is some ${n}$ such that ${q(n)}$ is divisible by ${k}$ (If ${q(0)=0}$, or actually if ${q(n)=0}$ for some ${n}$ then ${q}$ is automatically divisible). Also, in order to prove that the set is syndetic we prove the stronger fact that it is indeed IP${^*}$.
The main tool to prove theorem 1 is that of limits along ultrafilters (which I mentioned in the end of my previous post on different ways of taking limits). We denote this as ${p}$-lim, where ${p}$ is an ultrafilter on ${{\mathbb N}}$ (which explains why the polynomial was given the name “${q}$“). The proof follows this survey by Bergelson (it’s Theorem 3.11 there), where there is much more information about this and similar results.
In this post I define and prove (most of) the results about ultrafilters that we will need. I will complete the proof of theorem 1 in my next post.
Definition 2 (Ultrafilter) An ultrafilter on ${{\mathbb N}}$ is a collection ${p}$ of subsets of ${{\mathbb N}}$ satisfying the following ${4}$ conditions.
1. ${\emptyset\notin p}$.
2. If ${A\in p}$ and ${A\subset B}$ then ${B\in p}$.
3. If ${A}$ and ${B}$ are in ${p}$ then also ${A\cap B\in p}$.
4. If ${A\notin p}$ then ${{\mathbb N}\setminus A\in p}$.
Given a natural number ${n}$, it is not hard to check that the collection of all subsets of ${{\mathbb N}}$ that contain ${n}$ forms an ultrafilter. Such ultrafilters are called principal. However, these are rather uninteresting ultrafilters, so we will only consider non-principal ultrafilters. The existence of a non-principal ultrafilter requires the axiom of choice (in the form of Zorn's lemma): one considers some non-principal collection of subsets satisfying the first ${3}$ conditions (such a collection is called a filter; for instance, the sets that contain some set of the form $\{n,n+1,n+2,...\}$) and then considers the family of all filters that contain it. It is not hard to check that the hypotheses of Zorn's lemma are satisfied and that a maximal element is a filter that also satisfies the fourth ultrafilter axiom. This explains why ultrafilters are sometimes called maximal filters.
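As a sanity check on Definition 2, the four axioms can be verified by brute force for a principal ultrafilter. The Python sketch below uses a five-element ground set standing in for ${\mathbb N}$; note that on a finite set every ultrafilter is principal, so this only illustrates the axioms, not the (non-constructive) non-principal case.

```python
from itertools import combinations

N = frozenset(range(5))          # small ground set standing in for the naturals

def powerset(s):
    s = list(s)
    return [frozenset(c) for r in range(len(s) + 1)
            for c in combinations(s, r)]

def principal(n):
    """The principal ultrafilter generated by the point n."""
    return {A for A in powerset(N) if n in A}

def is_ultrafilter(p):
    sets = powerset(N)
    return (frozenset() not in p                                  # axiom 1
            and all(B in p for A in p for B in sets if A <= B)    # axiom 2: upward closed
            and all(A & B in p for A in p for B in p)             # axiom 3: closed under meets
            and all((A in p) != (N - A in p) for A in sets))      # axiom 4: maximality

assert is_ultrafilter(principal(3))
```

The maximality check encodes axiom 4 together with axiom 1: for each set, exactly one of it and its complement belongs to the ultrafilter.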
Another way to think about ultrafilters is to see them as finitely additive probability measures on ${{\mathbb N}}$ that take only the values ${0}$ and ${1}$. More precisely ${p(A)=1\iff A\in p}$ and ${p(A)=0\iff A\notin p}$. This motivates the following definition:
Definition 3 (Convolution of ultrafilters) Let ${p}$ and ${r}$ be ultrafilters. We define their convolution as
$\displaystyle A\in p*r\qquad\iff\qquad \{n\in{\mathbb N}:A-n\in p\}\in r$
One can check that this corresponds to the usual convolution of measures. One type of ultrafilters that are of special interest are the idempotent ultrafilters:
Definition 4 (Idempotent Ultrafilters) An ultrafilter ${p}$ is called idempotent if ${p*p=p}$.
The fact that idempotent ultrafilters exist is a consequence of a theorem of Ellis and uses topological properties of the set of all ultrafilters, namely that this set with the convolution is a compact left continuous semigroup.
Given an ultrafilter ${p}$, and a sequence ${\{x_n\}_{n\in{\mathbb N}}}$ in a topological space, one can consider the limit of ${\{x_n\}}$ along ${p}$:
Definition 5 (Convergence along an ultrafilter) Let ${p}$ be an ultrafilter and let ${\{x_n\}_{n\in{\mathbb N}}}$ be a sequence in some topological space ${X}$. Let ${x\in X}$. We say that ${x_n}$ converges to ${x}$ along ${p}$ (or that ${ p}$-${\displaystyle\lim_{n\rightarrow\infty} x_n=x}$) if for each neighborhood ${U}$ of ${x}$, the set ${\{n\in{\mathbb N}: x_n\in U\}}$ is in ${p}$.
The most fascinating aspect of this method of convergence is that if the topological space ${X}$ is compact, then any sequence has limit along ${p}$:
Proposition 6 Let ${p}$ be an ultrafilter, let ${X}$ be a compact Hausdorff space and let ${\{x_n\}_{n\in{\mathbb N}}}$ be any sequence taking values on ${X}$. Then there exists exactly one point ${x\in X}$ such that ${ p}$-${\displaystyle\lim_{n\rightarrow\infty} x_n=x}$.
Proof: First we prove existence. If no such ${x}$ exists, then for each point ${y\in X}$ there is an open neighborhood ${U_y}$ of ${y}$ such that ${\{n\in{\mathbb N}: x_n\in U_y\}\notin p}$. The cover ${\{U_y\}}$ has a finite subcover by compactness, and so we can partition ${{\mathbb N}}$ into finitely many disjoint pieces, according to which atom of the subcover contains ${x_n}$ (if ${x_n}$ belongs to more than one atom of the finite subcover, choose one of those atoms arbitrarily). By construction, no piece of this partition is in ${p}$, and we can easily see that this contradicts the fact that ${p}$ is an ultrafilter.
To prove uniqueness, let ${x\neq y}$ be two distinct points in ${X}$. Choose two disjoint neighborhoods ${U_x}$ of ${x}$ and ${U_y}$ of ${y}$. The sets ${\{n:x_n\in U_x\}}$ and ${\{n:x_n\in U_y\}}$ are also disjoint and so they can’t both be in ${p}$, so ${x}$ and ${y}$ can’t both be a $p$-${\lim}$ of ${\{x_n\}}$. $\Box$
We will use this fact on closed balls in the ${L^2}$ space (which are compact in the weak topology by the Banach-Alaoglu theorem). Finally we need the following result, relating ${p}$-${\lim}$ with the convolution of ultrafilters:
Proposition 7 Let ${p}$ and ${r}$ be ultrafilters, let ${X}$ be a compact Hausdorff space and let ${\{x_n\}_{n\in{\mathbb N}}}$ be a sequence taking values in ${X}$. Then
$\displaystyle (p*r)\text{-}\lim_{n\rightarrow\infty} x_n=r\text{-}\lim_{t\rightarrow\infty} \left(p\text{-}\lim_{m\rightarrow\infty} x_{t+m}\right)$
Proof: Let ${\displaystyle x=(p*r)\text{-}\lim_{n\rightarrow\infty} x_n}$ and let ${\displaystyle y_t=p\text{-}\lim_{m\rightarrow\infty} x_{t+m}}$. Then for each neighborhood ${U}$ of ${x}$, we have
$\displaystyle \begin{array}{rcl} \{n:x_n\in U\}\in p*r&\iff&\displaystyle \{t:\{n:x_n\in U\}-t\in p\}\in r\\&\iff&\displaystyle \{t:\{n-t:x_n\in U\}\in p\}\in r\\&\iff& \displaystyle \{t:\{m:x_{t+m}\in U\}\in p\}\in r\\&\iff&\displaystyle \{t:y_t\in U\}\in r \end{array}$
Since this happens for every neighborhood of ${x}$ we conclude that ${r}$-${\displaystyle\lim_{t\rightarrow\infty} y_t=x}$.
$\Box$
We will also need another fact about convergence along ultrafilters, roughly speaking it says that passing to certain “subsequences” doesn’t change the limit. First let’s make a definition
Definition 8 Let ${p}$ be an ultrafilter, and let ${A\subset {\mathbb N}}$. Let ${\{x_n\}}$ be some sequence taking values in a topological space. We denote the ${p}$-${\lim}$ over ${A}$ to be ${\displaystyle p\text{-}\lim_{n\rightarrow\infty;n\in A}x_n=x}$ and this means that for each neighborhood ${U}$ of ${x}$, the set ${\{n\in A: x_n\in U\}\in p}$.
Note that if ${A\in p}$ then ${p}$-${\lim_{n\rightarrow\infty;n\in A}x_n=p}$-${\lim_{n\rightarrow\infty}x_n}$ and if ${A\notin p}$ then the ${p}$-${\lim}$ over ${A}$ doesn’t exist.
Corollary 9 Let ${p}$ be an idempotent ultrafilter and let ${a\in{\mathbb N}}$. Also let ${\{x_n\}}$ be a sequence in a compact Hausdorff space. We have:
1. If ${B\in p}$ then ${B\cap(a{\mathbb N})\in p}$.
2. ${p}$-${\displaystyle \lim_{n\rightarrow\infty, n\in a{\mathbb N}}x_n=p}$-${\displaystyle \lim_{n\rightarrow\infty}x_n}$.
Proof:
1. We can partition ${{\mathbb N}=a{\mathbb N}\cup(a{\mathbb N}+1)\cup...\cup (a{\mathbb N}+(a-1))}$, so by condition (4) in the definition of ultrafilters and a simple induction we conclude that exactly one of the sets ${a{\mathbb N}+i}$ is in ${p}$; we now prove that ${i=0}$. Since ${p}$ is idempotent we have that ${\{x\in{\mathbb N}:(a{\mathbb N}+i+x)\in p\}\in p}$. But the set ${(a{\mathbb N}+i+x)}$ is in ${p}$ exactly when ${a|x}$, and so ${a{\mathbb N}\in p}$ as desired. The intersection ${(a{\mathbb N})\cap B}$ is then in ${p}$ as well.
2. This follows from part (1) and the comment before this corollary.
$\Box$
http://math.stackexchange.com/questions/7729/lower-bound-for-probability-distribution-of-a-random-variable
# lower bound for probability distribution of a random variable
If $X$ is a random variable with finite mean $\mu$ and variance $\sigma^2$, how do I show that the estimate
\begin{equation*} P[\mu - d\sigma < X < \mu + d\sigma] \geq 1 - 1/d^2 \quad \forall\, d>1 \end{equation*}
holds? I found this in a book but unable to see the proof. Note that $X$ may not be normal.
## 1 Answer
This is Chebyshev's inequality, which holds for any probability distribution. There are two proofs given on the linked Wikipedia page - a measure-theoretic one, and one that uses Markov's inequality.
Your expression is in a different form, though, than the one on the Wikipedia page. To see how they are the same, observe that
$$P[\mu - d \sigma < X < \mu + d \sigma] \geq 1 - 1/d^2$$ is equivalent to $$P[|X - \mu| < d \sigma] \geq 1 - 1/d^2,$$ which is equivalent to $$P[|X - \mu| > d \sigma] \leq 1/d^2,$$
which is the one on the Wikipedia page.
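A quick Monte Carlo check illustrates the bound for a decidedly non-normal distribution. This is only a sketch; the exponential distribution, sample size, and values of $d$ are arbitrary illustrative choices.

```python
import random

random.seed(42)
# Exponential(1) has mean 1 and variance 1 -- a deliberately non-normal choice.
mu, sigma = 1.0, 1.0
samples = [random.expovariate(1.0) for _ in range(100_000)]

for d in (1.5, 2.0, 3.0):
    inside = sum(mu - d * sigma < x < mu + d * sigma for x in samples)
    p_hat = inside / len(samples)          # empirical P[mu - d*sigma < X < mu + d*sigma]
    bound = 1 - 1 / d**2                   # Chebyshev lower bound
    print(f"d={d}: empirical {p_hat:.4f} >= bound {bound:.4f}")
    assert p_hat >= bound
```

For the exponential the empirical probabilities comfortably exceed the Chebyshev bounds, as expected: the inequality holds for any distribution with finite variance, but it is rarely tight.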
"which holds for any probability distribution" - this is only true of distributions whose variance is defined. – R R Feb 3 '15 at 3:06
http://math.stackexchange.com/questions/173973/limit-of-a-subsequence
# Limit of a subsequence
I studied a definition, and I didn't find it in any other book (besides those I use). It's like a point of closure, but for sequences. We call $a$ a "value of closure" of $(x_n)$ when $a$ is the limit of a subsequence of $(x_n)$.
The question is:
For a real number $a$ to be a "value of closure" it is necessary and sufficient that for all $\epsilon > 0$ and all $k \in \mathbb{N}$ there is $n > k$ such that $|x_n - a| < \epsilon$.
I could do the first part (if $a$ is a value of closure then the condition holds) but not the converse ($\Leftarrow$).
Thanks for any help!
Such an $a$ is sometimes called a subsequential limit of the sequence. – Brian M. Scott Jul 22 '12 at 21:00
oh!That´s it!Thanks!!!! – Charlie Jul 22 '12 at 21:39
To expand the comment, fix $n_1=1$ for example. We can find $n_2>1$ such that $|x_{n_2}-a|<\frac 12$. Assume that $n_1<n_2<\dots<n_k$ have been constructed. For $\varepsilon=2^{-(k+1)}$, we can find $n_{k+1}>n_k$ such that $$|x_{n_{k+1}}-a|\leq 2^{-(k+1)}.$$ Hence we have constructed a subsequence $\{x_{n_k}\}$ such that $|x_{n_k}-a|\leq 2^{-k}$ for every integer $k$. This proves that $a$ is a value of closure of $\{x_n\}$.
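The greedy extraction in this answer can be carried out numerically. The sketch below uses the illustrative sequence $x_n = (-1)^n(1+1/n)$ (not from the question), which has $1$ as a value of closure, and picks indices exactly as in the proof:

```python
# Greedy extraction mirroring the proof: given a value of closure a, pick
# n_{k+1} > n_k with |x_{n_{k+1}} - a| <= 2^{-(k+1)}.  The sequence below
# is an illustrative choice with a = 1 as a value of closure.
def x(n):
    return (-1) ** n * (1 + 1 / n)   # x_n = (-1)^n (1 + 1/n)

a = 1.0
indices, n = [], 0
for k in range(1, 11):
    n += 1
    while abs(x(n) - a) > 2 ** (-k):
        n += 1
    indices.append(n)

print(indices)
assert all(i < j for i, j in zip(indices, indices[1:]))    # strictly increasing
assert all(abs(x(m) - a) <= 2 ** (-k) for k, m in enumerate(indices, 1))
```

The extracted subsequence $x_{n_k}$ then satisfies $|x_{n_k} - 1| \leq 2^{-k}$, so it converges to $1$.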
http://mymathforum.com/differential-equations/339295-help-needed-partial-differentiation-equation-matrices.html
Help needed with partial differentiation equation of matrices

March 2nd, 2017, 01:01 AM #1 (mikeraj)
With reference to Eq. 16 in the attachment, I need help in understanding how the partial differentiation is done to obtain the answers in Eq. 17.
Eqs. 10, 11, and 15 are the inputs needed to solve Eq. 16.
I am puzzled how a matrix could be differentiated with respect to another matrix. I would highly appreciate it if an example calculation can be shown. Thanks in advance!
Attached: Equations.JPG (32.1 KB)
March 2nd, 2017, 01:43 AM #2 (romsek)

They give you the formula to use

$R_{i\_{st}} = \displaystyle{\sum_{k=1}^3}~R_k \dfrac{\partial v_k}{\partial v_{i \_{st}}}$

A bit of examination reveals that

$\dfrac{\partial v_k}{\partial v_{i \_{st}}} = v_k \cdot v_{i \_ st}$
March 2nd, 2017, 04:09 AM #3 (mikeraj)

Hi romsek, thanks for responding. I have some questions:
1) Which maths principle did you use to arrive at the dot product?
2) $v_k$ and $v_{i\_st}$ are both 1x3 matrices. Isn't it impossible to perform the dot product?
Thanks
March 2nd, 2017, 09:30 AM #4 (romsek)
Quote:
Originally Posted by mikeraj Hi romsek, thanks for responding. I have some questions: 1) Which maths principle did you use to arrive at the dot product? 2) vk and vi_st are both 1x3 matrices. Isn't it not possible to perform the dot product? Thanks
1) I used the ancient method of looking at the results and the formula used to get them to identify the unknown piece.
March 3rd, 2017, 02:20 AM #5 (mikeraj)

The dot product will work if $v_{i\_st}$ is transposed to a 3x1 matrix. With this I can get the answers in Eq. 17.
https://solvedlib.com/binomial-distribution-a-college-finds-that-40,351703
# Binomial Distribution: A college finds that 40 percent of all students take a course in statistics....
###### Question:
Binomial Distribution:
A college finds that 40 percent of all students take a course in statistics. Success is defined as a student not taking a course in statistics.
P(Success) = P(student does not take statistics) = 1 - 0.4000 = 0.6000.
If a group of 8 students is considered: what is the probability that fewer than 3 students do not take statistics? (Use the table in the back of your textbook; do not round.)
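Instead of a textbook table, the same cumulative probability can be computed directly from the binomial formula $P(X=k) = \binom{n}{k} p^k (1-p)^{n-k}$; a minimal sketch:

```python
from math import comb

n, p = 8, 0.6   # 8 students; "success" = student does not take statistics
# P(X < 3) = P(X = 0) + P(X = 1) + P(X = 2)
prob = sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(3))
print(round(prob, 4))  # → 0.0498
```

So the probability that fewer than 3 of the 8 students do not take statistics is 0.04980736.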
#### Similar Solved Questions
##### 1 the V B 6o hy 2
1 the V B 6o hy 2...
##### Please answer these questions Design Thinking Assignment (HW2) Observations to Insights Now it's time to practice...
please answer these questions Design Thinking Assignment (HW2) Observations to Insights Now it's time to practice a little design thinking. When talking about observation as a core tenet of design thinking, it's easy to say, "I've observed all my life. I don't need to practi...
##### Problem 8 Your laptop consumes 2.0 Watts of power operating on its 10V Battery a) Find...
Problem 8 Your laptop consumes 2.0 Watts of power operating on its 10V Battery a) Find I, the current flowing in the laptop. b) Find R, the resistance of the laptop. c) Find Q, the charge moving through your laptop if it is ON for 10 minutes....
##### If when conducting the experiment, 43.5 grams of Fe2O3 are collected is the percent yield 10....
If when conducting the experiment, 43.5 grams of Fe2O3 are collected is the percent yield 10. Give the formulas for the following acids (Extra Credit 5 pts) Sufuric acid Phosphoric acid Hydrosulfuric acid Hydrochloric acid Chloric acid Table 9-2 Activity Series Will replace H, from liquid water, ste...
##### X2–2x-3 Let f(x) = Find the X-value(s) for which the graph of y = f(x) has...
x2–2x-3 Let f(x) = Find the X-value(s) for which the graph of y = f(x) has a horizontal tangent line. x+2...
##### 4. (16 points) Given the two planes21 2=321 + 3y + 2 = 0,find:the cosine of the angle between them; al equation for the line of their intersection_
4. (16 points) Given the two planes 21 2=3 21 + 3y + 2 = 0, find: the cosine of the angle between them; al equation for the line of their intersection_...
##### Warm upCircle exists, cquation, y I6y +x 10r =-64 Find for this cquation, using implicit differentiation. [4 points]This circle includes the point (9,1) Use A to find the slope of the tangent line through this pointMg =6, then determine Hint: This can be done cithcr by implicit diflerentintion OR by rewriting asy = 6/x.
Warm up Circle exists, cquation, y I6y +x 10r =-64 Find for this cquation, using implicit differentiation. [4 points] This circle includes the point (9,1) Use A to find the slope of the tangent line through this point Mg =6, then determine Hint: This can be done cithcr by implicit diflerentintion ...
##### Consider the power series ax(x 1)k . Assume that we know that the series @k converges; but that k_0 k 0ak (3k) diverges. k=0Explain how we know that the power series converges at2Docs the scrics ak(-4)k converge or diverge? Or do we not have cnough information to answer this k=0 question? Briefly explain your reasoning in words_c) Doecs the serics (-1)kak converge O diverge? Or do we not have cnough information to answer this k=u question? Briefly explain your reasoning in words_State & poss
Consider the power series ax(x 1)k . Assume that we know that the series @k converges; but that k_0 k 0 ak (3k) diverges. k=0 Explain how we know that the power series converges at 2 Docs the scrics ak(-4)k converge or diverge? Or do we not have cnough information to answer this k=0 question? Briefl...
##### The unexplained variance in the regression analysis is also known as: Regression variance Total variance Predicted...
The unexplained variance in the regression analysis is also known as: Regression variance Total variance Predicted variance Residual variance...
##### Area V4 is important for color constancy. What is color constancy?
Area V4 is important for color constancy. What is color constancy?...
##### 2et9) Skekk the negtmn_cfinteqtatimn andswkch thcondo & Inteqale b find thivalue c Xht_inteqnal se x*dzdy
2et9) Skekk the negtmn_cfinteqtatimn andswkch thcondo & Inteqale b find thivalue c Xht_inteqnal se x*dzdy...
##### Refer to the table below. Of the 36 possible outcomes, determine the number for which the sum (for both dice) is 2. One can roll a sum of 2 in ___ way(s).
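The count can be checked by brute force over the 36-outcome table (a quick sketch, not part of the original question):

```python
from itertools import product

# Enumerate all 36 (die1, die2) outcomes and count those summing to 2.
ways = sum(1 for a, b in product(range(1, 7), repeat=2) if a + b == 2)
print(ways)  # 1 (only the roll 1 + 1)
```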
##### Assume that acceleration due to gravity is −32 ft/s². A rock is thrown straight downward with a velocity of −10 ft/s from a bridge which is 110 feet above a river. [20 points] How long does it take for the rock to hit the water, and how fast is the rock traveling when it hits the water?
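One way to set this up (a sketch of my own, taking upward as positive so the height is y(t) = 110 − 10t − 16t²; the question itself supplies no solution):

```python
import math

# Height above the river: y(t) = y0 + v0*t + (a/2)*t^2, with a = -32 ft/s^2, v0 = -10 ft/s.
a, v0, y0 = -32.0, -10.0, 110.0

# Solve 0 = y0 + v0*t + (a/2)*t^2 for the positive root of the quadratic.
t = (-v0 - math.sqrt(v0**2 - 2.0 * a * y0)) / a
v = v0 + a * t  # velocity at impact

print(round(t, 2))  # 2.33 (seconds)
print(round(v, 1))  # -84.5 (ft/s, downward)
```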
##### Question 17. Select the 4 true statements from the choices below:
- Cardiac output is the same at rest as it is when you are exercising.
- Cardiac output tracks the events associated with a heartbeat.
- The time between one heartbeat and the next is slightly less than a second.
- Stroke volume measures the amount of blood pumped per beat.
- The majority of the cardiac cycle is spent in diastole.
- An average heart rate for adults is around 70-75 beats per minute.
##### Briefly describe anodic stripping voltammetry and cathodic stripping voltammetry.
##### Please write your answer in the proper space given for each problem. Problem 4: Let {x_n} and {y_n} be sequences such that: (a) x_n < y_n for all n; (b) {x_n} is increasing; (c) {y_n} is decreasing. Show that {x_n} and {y_n} are convergent and lim x_n ≤ lim y_n. Problem 5 (14): Let c ∈ R and let f : R → R be such that lim_{x→c} (f(x))² = L. (a) Show that if L = 0, then lim_{x→c} f(x) = 0. (b) Show by example that if L ≠ 0, then f may not have a limit at c.
##### In Java (The Person, Student, Employee, Faculty, and Staff classes): Design a class named Person and its two derived classes named Student and Employee. Make Faculty and Staff derived classes of Employee. A person has a name, address, phone number, and e-mail address. A student has a class status (fr...
##### The yield to maturity of a $1,000 bond with a 7% coupon rate, semiannual coupons, and two years to maturity is 7.6% APR, compounded semiannually. What must its price be? ... 11. Assume the current Treasury spot rates for six months, one year, and 18 months are 1%, 1.1%, and 1.3%, all quoted as semiannually compounded APRs. What is the price of ...
##### Take Home 3, Math 1520. 1. Consumer and Producer Surplus. Round answers to the nearest whole number. (a) Find the consumer and producer surpluses at the equilibrium (x̄, p̄). Demand price: P_D(x) = 678 − ... Supply price: P_S(x) = 37102 + x. (b) Find the new equilibrium and surpluses for the outward shift in demand ...
##### 3. Remember that for projective geometry, the dual of a statement is found by exchanging "point" with "line". (a) Write the dual of the following statement, and then sketch pictures illustrating both the statement and its dual: "Two distinct points are on one and only one line." (b) Draw...
##### Question 10 (10 pts). Find the fraction of association (α) for 1.00e-2 M and 1.00e-12 M sodium acetate, respectively. Does α increase or decrease with dilution? (Kb = 5.7e-10)
- 0.57% and 0.024%. The more dilute the solution, the smaller is alpha.
- 0.057% and 0.0076%. The more dilute the solution, the smaller is alpha.
- 0.0076% and 0.057%. The more dilute the solution, the greater is alpha.
- 0.024% and 0.57%. The more dilute the solution, the greater is alpha.
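A quick numeric sketch of the two limits. The approximations are mine, not the question's: the usual weak-base square-root estimate at 0.01 M, and [OH-] pinned near 1.0e-7 M by water itself at extreme dilution.

```python
import math

KB = 5.7e-10  # Kb for acetate, from the question

def fraction_associated(c):
    """Fraction of acetate taken up as HOAc: alpha = [HOAc] / C."""
    alpha = math.sqrt(KB / c)   # weak-base approximation: [HOAc] ~ sqrt(Kb * C)
    if alpha * c < 1e-7:        # so dilute that water's own OH- (~1e-7 M) dominates:
        alpha = KB / 1e-7       # alpha = Kb / [OH-]
    return alpha

print(round(100 * fraction_associated(1.00e-2), 3))   # 0.024 (percent)
print(round(100 * fraction_associated(1.00e-12), 2))  # 0.57 (percent)
```

Dilution increases the associated fraction here, consistent with the last answer choice.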
##### Question 6 (20 marks). (a) Outline any five circumstances when it is time to discontinue CPR. (b) Describe the five steps to combat or begin treatment of shock.
##### Use strain energy increments in the OWL Table Reference (see References button, Strain Energy Increments) to calculate the energy difference between the two chair conformations of the compound below. Specify substituent positions (axial or equatorial) in the more stable chair. Estimate the percent of the more stable chair at equilibrium at 25°C. (To determine the percent of the more stable chair at equilibrium, first calculate K_eq, and then use this value to find the percentage.) [Structure: a cyclohexane ring bearing Cl and CH3 substituents] Answers
##### Convert the polar equation to a rectangular equation: 2 − sin θ. Convert the complex number to polar form: 5 + 13i. Give the polar form of the complex number (7, 327°). 13) Perform operations on the complex numbers in polar form; please leave your answers in polar form: (3, 137°) × (14, 263°); (26, 147°) ÷ (2, 217°); [(2, 23°)]^5. Answer in polar form with 0° ≤ θ < 360°. 14) Find both solutions to ...
##### [5 marks] Find a unit normal to the surface described by z = x²y³ at the point (2, 1, 3) using the ∇ (grad) operator. 5. [8 marks] Consider the integral ∫∫ xy dy dx. (a) Sketch the region of integration. (b) Evaluate the integral. (c) Reverse the order of integration and compute the integral.
##### In right-handed coordinates, let a = [1, 2, 4], b = [3, 1 3 55], and c = [5, 4, 0]. Calculate the following expressions: (a × b) × c = [ ... ]; a × (b × c) = [ ... ].
##### Table 2. Most recent amount of cigarettes smoked daily before onset of the present illness, lung cancer cases and matched controls with other diseases, Great Britain, 1948-1952.

| Daily number of cigarettes | Cases | Controls | Odds Ratio |
|---|---|---|---|
| 0 | 7 | 61 | 1.0 (referent) |
| 1-14 | 565 | 706 | |
| 15-24 | 445 | 408 | |
| 25+ | 340 | 182 | |
| All smokers | 1350 | 1296 | |
| Total | 1357 | 1357 | |

Question: Compute the odds ratio by category of daily cigarette consumption, comparing each smoking category to nonsmokers. Hint: start by drawing a 2-by-2 table.
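The hint can be carried out mechanically once each 2-by-2 table is set up. A sketch (the nonsmoker row of 7 cases and 61 controls is an assumption, implied by the column totals):

```python
def odds_ratio(cases_exp, controls_exp, cases_ref, controls_ref):
    """Odds ratio from a 2-by-2 table: exposure category vs. referent, cases vs. controls."""
    return (cases_exp * controls_ref) / (controls_exp * cases_ref)

nonsmokers = (7, 61)  # cases, controls (assumed from the totals)
for label, cases, controls in [("1-14", 565, 706),
                               ("15-24", 445, 408),
                               ("25+", 340, 182)]:
    print(label, round(odds_ratio(cases, controls, *nonsmokers), 2))
# 1-14 6.97
# 15-24 9.5
# 25+ 16.28
```

The dose-response trend (odds ratio rising with daily consumption) is the classic finding of this study design.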
##### Electronics and inhabitants of the International Space Station generate a significant amount of thermal energy that the station must get rid of. The only way that the station can exhaust thermal energy is by radiation, which it does using thin, 1.5 m-by-3.1 m panels that have a working temperature ...
##### Q-1. A solid circular rod with a diameter d = 18 mm is shown in Figure Q-3. The rod is made of an aluminum alloy that has an elastic modulus E = 72 GPa and a Poisson's ratio ν = 0.32. When subjected to the axial load P, the diameter of the rod decreases by 0.025 mm. For a factor of safety of 1.75, ...
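The load implied by the measured diameter change can be backed out from the Poisson ratio and Hooke's law. This sketch covers only those first steps; the factor-of-safety part of the question is truncated above:

```python
import math

d = 18.0          # mm
E = 72_000.0      # MPa (72 GPa)
nu = 0.32
delta_d = -0.025  # mm (diameter decreases under tension)

eps_lat = delta_d / d             # lateral strain
eps_ax = -eps_lat / nu            # axial strain, via Poisson's ratio
sigma = E * eps_ax                # axial stress in MPa (Hooke's law)
P = sigma * math.pi * d**2 / 4.0  # axial load in N (MPa * mm^2 = N)

print(round(sigma, 1))     # 312.5 (MPa)
print(round(P / 1000, 1))  # 79.5 (kN)
```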
##### Given the information below for StartUp.Com, compute the expected share price at the end of 2017 using price ratio analysis. (Do not round intermediate calculations. Round your answer to 2 decimal places.)

| Year | Price | EPS |
|---|---|---|
| 2013 | N/A | N/A |
| 2014 | $71.12 | −7.80 |
| 2015 | $98.32 | −... |
| 2016 | $107.18 | ... |
##### 1. The following table depicts the yield curve for various bond maturities. Use the yield curve to calculate the one-year expected yields (i) using the unbiased expectations theory. Time to maturity vs. multi-year return: 2%, 3%, 4%, 4.5%, 5.5%.
##### 54.2. Suppose X is a random sample of size n = 1 from a uniform distribution defined on the interval (0, θ). Construct a 98% confidence interval for θ.
##### Make it clear and sure. EXAMPLE 6.8: A concrete beam with a symmetrical I-section has flange width and depth of 200 mm and 60 mm, respectively. The thickness of the web is 80 mm and the overall depth is 400 mm. The beam is prestressed by a cable carry...
##### The data shown represent the age (in weeks) at which babies first crawl, based on a survey of 12 mothers. Complete parts (a) and (b).

52 37 44 35 31 39 47 37 52 26 39 26

Does the boxplot suggest that it is reasonable to construct a confidence interval for the population mean? Yes, because the distribution is roughly symmetric wi...
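For a partial check, here are the sample mean and standard deviation of the 12 values (reading the run "35_3139" in the source as 35, 31, 39 is an assumption):

```python
import statistics

# Ages (weeks) at first crawl; '35_3139' read as 35, 31, 39.
ages = [52, 37, 44, 35, 31, 39, 47, 37, 52, 26, 39, 26]

mean = statistics.mean(ages)
sd = statistics.stdev(ages)  # sample standard deviation (n - 1 denominator)

print(mean)          # 38.75
print(round(sd, 2))  # 8.8
```

A t-based confidence interval for the mean would then use these two numbers with the critical value from the t-table the question links to.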
http://physics.stackexchange.com/tags/supersymmetry/hot
|
# Tag Info
## Hot answers tagged supersymmetry
3
There are obviously differing genus types according to the partition function in d-dimensional QFT. At the very outset, in $0$-d QFT the index is the pushforward in ordinary de Rham cohomology; in other words, the integration of differential forms. The genus, as you put it, is non-existent here. In $1$-d QFT the index of the Dirac operator is ...
2
The variation $\delta F$ for any field (or degree of freedom) $F$, given an infinitesimal transformation, is always calculated as the commutator $$\delta F = [ \bar\epsilon Q, F ]$$ where $\bar \epsilon$ is a parameter ("angle" or "shift" or some generalization) of the transformation and $Q$ is the generator. (Those may be replaced by other letters.) ...
1
Not only must supersymmetry be broken, it must be broken in a way that doesn't lead to phenomenological disasters. In the MSSM, $D$-term breaking with a Fayet-Iliopoulos term achieves neither: it doesn't break supersymmetry but it does lead to phenomenological disasters! Consider again your potential $V=\sum_i |m_i|^2 |\phi_i|^2 + \frac{1}{2}...$
1
The bracket you have written is of the form $$[\delta,\delta] A_\mu = v^\nu \partial_\nu A_\mu + \partial_\mu (v^\nu A_\nu).$$ As you pointed out, the first term corresponds to a translation by $v^\mu$. The second term corresponds to a gauge transformation $\delta A_\mu = \partial_\mu \lambda$ with $\lambda = v^\nu A_\nu$. So the algebra closes up to a ...
https://www.daniweb.com/programming/software-development/threads/207765/running-a-file-in-and-exe-program-through-python
|
i need help trying to open an .exe file then opening a file in said exe program i.e i wanna open a file in flash after opening flash.
so far i can open the exe file with no problem but from there I'm stuck.
this is what i have
import os
os.system("flash.exe")
## All 4 Replies
Flash can take command-line arguments. So you can do os.system("flash.exe myFileToOpen.fla") . That'll open Flash and have it open the document "myFileToOpen.fla".
If the FLA is in a different directory however, you'll need to give the path in front of the filename in the command line argument. It can be relative or absolute. And if it contains spaces, you'll need to wrap it in quotes.
# absolute path, quoted because of the spaces (a raw string keeps the backslashes intact)
os.system(r'flash.exe "C:\the\directory\to my file\here.fla"')
# or a relative path, using the .. syntax
os.system("flash.exe ../doc.fla")
i tried that and am getting can not open file. and i wanna be able to run a jsfl file after that
i figured it out thanks so much
Just for my confirmation, try this if it works:
import subprocess
subprocess.call(["flash.exe", path_to_file])  # where path_to_file is the location of the file
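The list form sidesteps shell quoting entirely, which is why it copes with spaces in paths. A small sketch (the flash.exe name and .fla path here are placeholders, not from the thread):

```python
import subprocess

def build_command(exe, document=None):
    """Return an argv list for subprocess; each element is passed as one argument."""
    cmd = [exe]
    if document is not None:
        cmd.append(document)  # spaces in the path are fine: no shell parsing happens
    return cmd

cmd = build_command("flash.exe", r"C:\my projects\demo.fla")
print(cmd)  # ['flash.exe', 'C:\\my projects\\demo.fla']

# To actually launch it (Windows, with flash.exe on PATH):
# subprocess.call(cmd)
```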
http://www.ceus-now.com/how-to-make-this-following-table/
|
# How to make this following table?
I am using the slashbox package in a table as shown next. However as you can see, the diagonal line in the first cell is not the real diagonal of that cell. This is because of the existence of more than a row in that cell. So how to make the diagonal line reach from the top left corner to the bottom right corner of the first cell to the left?
Here is my code:
\documentclass[12pt]{article}
\usepackage{slashbox}
\begin{document}
\begin{table}[htbp]
\begin{center}
\begin{tabular}[htp]{||l|c|c|c|c||}
\hline
\backslashbox{Adult}{Motion types} & can definitely & \multicolumn{2}{c|}{can definitely } & can definitely\\
person & walk & \multicolumn{2}{c|}{jump} & run\\
\cline {3-4} & & more than & more than & \\
& & 10 cm & 20 cm & \\[4mm]
\hline \hline
Joe & Yes & No & No & No \\[4mm]
\hline
\end{tabular}
\end{center}
\end{table}
\end{document}
Thank you.
To get a cell spanning several rows, you can use the multirow package. \backslashbox works well with this within its implementation limits: the slash is constructed as a LaTeX picture, so only a limited number of slopes for the line are allowed.
\documentclass[12pt]{article}
\usepackage{slashbox,multirow}
\begin{document}
\begin{table}[htbp]
\begin{center}
\begin{tabular}[htp]{||l|c|c|c|c||}
\hline
& can definitely & \multicolumn{2}{c|}{can definitely } & can definitely\\
& walk & \multicolumn{2}{c|}{jump} & run\\
\cline {3-4} & & more than & more than & \\
& & 10 cm & 20 cm &\\
\hline \hline
Joe & Yes & No & No & No \\
Margaret & Yes & Yes & Yes & Yes \\
\hline
\end{tabular}
\end{center}
\end{table}
\end{document}
A more sophisticated solution, with more flexible choices of slash line, would use something like tikz (see Diagonal lines in table cell for some such approaches), but would still require such a \multirow command.
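As an aside, slashbox is unmaintained; its successor package diagbox draws the diagonal more flexibly and copes better with tall cells. A sketch of the same header cell (untested against this exact table, so treat it as a starting point):

```latex
\documentclass[12pt]{article}
\usepackage{diagbox}
\begin{document}
\begin{tabular}{||l|c|c|c|c||}
\hline
% \diagbox accepts \\ in its labels and sizes the box to the cell,
% so the line runs corner to corner even for a two-line label.
\diagbox{Adult\\person}{Motion\\types}
  & can definitely & \multicolumn{2}{c|}{can definitely} & can definitely \\
  & walk & \multicolumn{2}{c|}{jump} & run \\
\hline
Joe & Yes & No & No & No \\
\hline
\end{tabular}
\end{document}
```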
http://physics.stackexchange.com/questions/79664/wicks-theorem-again
|
# Wick's theorem again
Could someone please elaborate on the accepted answer to this mathoverflow post?
I'm working on a problem that looks like this $$I=\int d^{n} x\, f(\vec x)\, e^{-\frac{1}{2} \vec x \cdot \Sigma^{-1}\cdot \vec x}$$ I thought about expanding $f(\vec x)$ as a series and using Wick's theorem for each term, but I can't figure how to do the resulting sum. The post in question seems like it may answer this, but I don't really understand the notation in the first equation of his. In my example, $f(\vec x)=\prod_{i}f_{i}(x_{i})$ if that helps. Any clarification of the aforementioned answer would be awesome.
In the particular case where $f(\vec x)$ is a sum of expressions, each with an even total power in the $x^i$, like $f(x) = x_1x_2 + x_1^2 x_2 x_3 + ...$, we may present general expressions. We have:
$$I(\Sigma)=\int d^{n} x\, e^{-\frac{1}{2} \vec x \cdot \Sigma^{-1}\cdot \vec x} = \sqrt{(2 \pi)^n \det \Sigma}\tag{1}$$
We have : $$\langle x_{i_1}\,x_{i_2} x_{i_3}\,x_{i_4}...x_{i_{2n-1}}\,x_{i_{2n}}\rangle= \frac{\int d^{n} x\,(x_{i_1}\,x_{i_2} x_{i_3}\,x_{i_4}...x_{i_{2n-1}}\,x_{i_{2n}})\, e^{-\frac{1}{2} \vec x \cdot \Sigma^{-1}\cdot \vec x}}{\int d^{n} x\, e^{-\frac{1}{2} \vec x \cdot \Sigma^{-1}\cdot \vec x}} \\= \Sigma_{contractions} (\Sigma_{j_1j_2})(\Sigma_{j_3j_4})....(\Sigma_{j_{2n-1}j_{2n}})\tag{2}$$ where a contraction is a repartition 2 by 2 of indices $i_1...i_{2n}$ in pairs $(j_1j_2), (j_3j_4)...(j_{2n-1}j_{2n})$, with no double counting ($(j_1j_2) = (j_2j_1)$)
Now, with $1$ and $2$, you may theorically calculate your integrals, for instance, with $f(x) = x_1x_2x_3x_4$, you have :
$$I=\int d^{n} x\, f(\vec x)\, e^{-\frac{1}{2} \vec x \cdot \Sigma^{-1}\cdot \vec x} = I(\Sigma)\left(\Sigma_{12}\Sigma_{34} + \Sigma_{13}\Sigma_{24}+\Sigma_{14}\Sigma_{23}\right)\tag{3}$$
I get that, but the post seemed to say that the resulting sum of contractions could be represented in terms of the even coefficients of the original function. That's what I'd like to know. – TeeJay Oct 5 '13 at 23:39
@TeeJay : In the math article, you have the very particular choice $\Sigma^{-1} = \mathbb{Id}$. It is obvious that, with $\Sigma^{-1} = \mathbb{Id}$, the odd terms of the function disappear in the integration. But your question was for a general $\Sigma^{-1}$. If you take my last example, with $f(x) = a^{1234} x_1x_2x_3x_4$, the result would be $I=\int d^{n} x\, f(\vec x)\, e^{-\frac{1}{2} \vec x \cdot \Sigma^{-1}\cdot \vec x} = I(\Sigma)\,a^{1234}(\Sigma_{12}\Sigma_{34} + \Sigma_{13}\Sigma_{24}+\Sigma_{14}\Sigma_{23})$, .... – Trimok Oct 6 '13 at 7:38
@TeeJay : ...... more generally, if you know the decomposition of $f$ into a sum of expressions of total even power in the $x_i$, $f(x) = \sum_{i_1...i_{2n}} a_{i_1...i_{2n}} x_{i_1}\cdots x_{i_{2n}}$, the result would be $I= I(\Sigma)\sum_{i_1...i_{2n}} a_{i_1...i_{2n}} \left(\sum_{\text{contractions }(i_1...i_{2n}) \to (j_1j_2)...(j_{2n-1}j_{2n})}\Sigma_{j_1j_2}\cdots\Sigma_{j_{2n-1}j_{2n}}\right)$ – Trimok Oct 6 '13 at 7:39
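The three-contraction formula is easy to sanity-check numerically. A sketch of my own, not from the thread: draw $x = Az$ with $z$ standard normal, so the covariance is $\Sigma = AA^T$, then compare the Monte Carlo average of $x_1x_2x_3x_4$ (the normalized moment, i.e. the ratio of the numerator to $I(\Sigma)$) against Wick's theorem:

```python
import random

# Covariance via a lower-triangular factor: Sigma = A A^T (values are arbitrary).
A = [[1.0, 0.0, 0.0, 0.0],
     [0.5, 1.0, 0.0, 0.0],
     [0.3, 0.2, 1.0, 0.0],
     [0.1, 0.4, 0.2, 1.0]]

def cov(i, j):
    # Sigma_ij = sum_k A_ik A_jk
    return sum(A[i][k] * A[j][k] for k in range(4))

# Wick / Isserlis: <x1 x2 x3 x4> = S12 S34 + S13 S24 + S14 S23
wick = cov(0, 1) * cov(2, 3) + cov(0, 2) * cov(1, 3) + cov(0, 3) * cov(1, 2)

# Monte Carlo estimate of the same moment: x = A z with z ~ N(0, 1)^4.
random.seed(0)
N = 200_000
acc = 0.0
for _ in range(N):
    z = [random.gauss(0.0, 1.0) for _ in range(4)]
    x = [sum(A[i][k] * z[k] for k in range(4)) for i in range(4)]
    acc += x[0] * x[1] * x[2] * x[3]
mc = acc / N

print(round(wick, 3))        # 0.325
print(abs(mc - wick) < 0.1)  # True (estimate agrees within Monte Carlo noise)
```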
http://nrich.maths.org/public/leg.php?code=-68&cl=2&cldcmpid=4773
# Search by Topic
#### Resources tagged with Visualising similar to Circle Panes:
##### Other tags that relate to Circle Panes
Area. Rectangles. Investigations. Squares. Working systematically. Generalising. Perimeters. Circles. Visualising. Length/distance.
### Framed
##### Stage: 3 Challenge Level:
Seven small rectangular pictures have one inch wide frames. The frames are removed and the pictures are fitted together like a jigsaw to make a rectangle of length 12 inches. Find the dimensions of. . . .
### Dissect
##### Stage: 3 Challenge Level:
It is possible to dissect any square into smaller squares. What is the minimum number of squares a 13 by 13 square can be dissected into?
### More Pebbles
##### Stage: 2 and 3 Challenge Level:
Have a go at this 3D extension to the Pebbles problem.
### Rolling Around
##### Stage: 3 Challenge Level:
A circle rolls around the outside edge of a square so that its circumference always touches the edge of the square. Can you describe the locus of the centre of the circle?
### A Square in a Circle
##### Stage: 2 Challenge Level:
What shape has Harry drawn on this clock face? Can you find its area? What is the largest number of square tiles that could cover this area?
### Tied Up
##### Stage: 3 Challenge Level:
In a right angled triangular field, three animals are tethered to posts at the midpoint of each side. Each rope is just long enough to allow the animal to reach two adjacent vertices. Only one animal. . . .
### Rati-o
##### Stage: 3 Challenge Level:
Points P, Q, R and S each divide the sides AB, BC, CD and DA respectively in the ratio of 2 : 1. Join the points. What is the area of the parallelogram PQRS in relation to the original rectangle?
### The Old Goats
##### Stage: 3 Challenge Level:
A rectangular field has two posts with a ring on top of each post. There are two quarrelsome goats and plenty of ropes which you can tie to their collars. How can you secure them so they can't. . . .
### World of Tan 8 - Sports Car
##### Stage: 2 Challenge Level:
Can you fit the tangram pieces into the outline of this sports car?
### Ten Hidden Squares
##### Stage: 2 Challenge Level:
These points all mark the vertices (corners) of ten hidden squares. Can you find the 10 hidden squares?
### Jomista Mat
##### Stage: 2 Challenge Level:
Looking at the picture of this Jomista Mat, can you describe what you see? Why not try and make one yourself?
### World of Tan 9 - Animals
##### Stage: 2 Challenge Level:
Can you fit the tangram pieces into the outline of this goat and giraffe?
### World of Tan 13 - A Storm in a Tea Cup
##### Stage: 2 Challenge Level:
Can you fit the tangram pieces into the outline of these convex shapes?
### Making Tangrams
##### Stage: 2 Challenge Level:
Here's a simple way to make a Tangram without any measuring or ruling lines.
### Sea Defences
##### Stage: 2 and 3 Challenge Level:
These are pictures of the sea defences at New Brighton. Can you work out what a basic shape might be in both images of the sea wall and work out a way they might fit together?
### Zooming in on the Squares
##### Stage: 2 and 3
Start with a large square, join the midpoints of its sides, you'll see four right angled triangles. Remove these triangles, a second square is left. Repeat the operation. What happens?
### Fractional Triangles
##### Stage: 2 Challenge Level:
Use the lines on this figure to show how the square can be divided into 2 halves, 3 thirds, 6 sixths and 9 ninths.
### Isosceles Triangles
##### Stage: 3 Challenge Level:
Draw some isosceles triangles with an area of $9$cm$^2$ and a vertex at (20,20). If all the vertices must have whole number coordinates, how many is it possible to draw?
### All in the Mind
##### Stage: 3 Challenge Level:
Imagine you are suspending a cube from one vertex (corner) and allowing it to hang freely. Now imagine you are lowering it into water until it is exactly half submerged. What shape does the surface. . . .
### Conway's Chequerboard Army
##### Stage: 3 Challenge Level:
Here is a solitaire type environment for you to experiment with. Which targets can you reach?
### World of Tan 5 - Rocket
##### Stage: 2 Challenge Level:
Can you fit the tangram pieces into the outline of the rocket?
### Triangle Inequality
##### Stage: 3 Challenge Level:
ABC is an equilateral triangle and P is a point in the interior of the triangle. We know that AP = 3cm and BP = 4cm. Prove that CP must be less than 10 cm.
### Construct-o-straws
##### Stage: 2 Challenge Level:
Make a cube out of straws and have a go at this practical challenge.
### Three Squares
##### Stage: 1 and 2 Challenge Level:
What is the greatest number of squares you can make by overlapping three squares?
### World of Tan 12 - All in a Fluff
##### Stage: 2 Challenge Level:
Can you fit the tangram pieces into the outline of these rabbits?
### World of Tan 3 - Mai Ling
##### Stage: 2 Challenge Level:
Can you fit the tangram pieces into the outline of Mai Ling?
### More Building with Cubes
##### Stage: 2 Challenge Level:
Here are more buildings to picture in your mind's eye. Watch out - they become quite complicated!
### Efficient Cutting
##### Stage: 3 Challenge Level:
Use a single sheet of A4 paper and make a cylinder having the greatest possible volume. The cylinder must be closed off by a circle at each end.
### Midpoint Triangle
##### Stage: 2 Challenge Level:
Can you cut up a square in the way shown and make the pieces into a triangle?
### Concrete Wheel
##### Stage: 3 Challenge Level:
A huge wheel is rolling past your window. What do you see?
### Khun Phaen Escapes to Freedom
##### Stage: 3 Challenge Level:
Slide the pieces to move Khun Phaen past all the guards into the position on the right from which he can escape to freedom.
### World of Tan 7 - Gat Marn
##### Stage: 2 Challenge Level:
Can you fit the tangram pieces into the outline of this plaque design?
### Like a Circle in a Spiral
##### Stage: 2, 3 and 4 Challenge Level:
A cheap and simple toy with lots of mathematics. Can you interpret the images that are produced? Can you predict the pattern that will be produced using different wheels?
### Fence It
##### Stage: 3 Challenge Level:
If you have only 40 metres of fencing available, what is the maximum area of land you can fence off?
### World of Tan 6 - Junk
##### Stage: 2 Challenge Level:
Can you fit the tangram pieces into the outline of this junk?
### Coloured Edges
##### Stage: 3 Challenge Level:
The whole set of tiles is used to make a square. This has a green and blue border. There are no green or blue tiles anywhere in the square except on this border. How many tiles are there in the set?
### On the Edge
##### Stage: 3 Challenge Level:
If you move the tiles around, can you make squares with different coloured edges?
### World of Tan 16 - Time Flies
##### Stage: 2 Challenge Level:
Can you fit the tangram pieces into the outlines of the candle and sundial?
### World of Tan 19 - Working Men
##### Stage: 2 Challenge Level:
Can you fit the tangram pieces into the outline of this shape? How would you describe it?
### Dice, Routes and Pathways
##### Stage: 1, 2 and 3
This article for teachers discusses examples of problems in which there is no obvious method but in which children can be encouraged to think deeply about the context and extend their ability to. . . .
### World of Tan 20 - Fractions
##### Stage: 2 Challenge Level:
Can you fit the tangram pieces into the outlines of the chairs?
### World of Tan 18 - Soup
##### Stage: 2 Challenge Level:
Can you fit the tangram pieces into the outlines of Mai Ling and Chi Wing?
### World of Tan 17 - Weather
##### Stage: 2 Challenge Level:
Can you fit the tangram pieces into the outlines of the watering can and man in a boat?
### Muggles Magic
##### Stage: 3 Challenge Level:
You can move the 4 pieces of the jigsaw and fit them into both outlines. Explain what has happened to the missing one unit of area.
### Take Ten
##### Stage: 3 Challenge Level:
Is it possible to remove ten unit cubes from a 3 by 3 by 3 cube made from 27 unit cubes so that the surface area of the remaining solid is the same as the surface area of the original 3 by 3 by 3. . . .
### An Unusual Shape
##### Stage: 3 Challenge Level:
Can you maximise the area available to a grazing goat?
### Hidden Squares
##### Stage: 3 Challenge Level:
Rectangles are considered different if they vary in size or have different locations. How many different rectangles can be drawn on a chessboard?
### Diagonal Dodge
##### Stage: 2 and 3 Challenge Level:
A game for 2 players. Can be played online. One player has 1 red counter, the other has 4 blue. The red counter needs to reach the other side, and the blue needs to trap the red.
### Convex Polygons
##### Stage: 3 Challenge Level:
Show that among the interior angles of a convex polygon there cannot be more than three acute angles.
### World of Tan 21 - Almost There Now
##### Stage: 2 Challenge Level:
Can you fit the tangram pieces into the outlines of the lobster, yacht and cyclist?
https://mathoverflow.net/questions/165860/reference-for-existence-and-blow-up-results-in-transport-like-pdes
# reference for existence and blow up results in transport-like PDEs
This question was originally posted by me on math.stackexchange but I didn't get any answers, and I thought that perhaps it would be better off here. I hope it's appropriate; I've encountered the problem while reading research articles, but it's still possible that the theorem I'm asking for is actually elementary. I'm trying to work my way through some articles on global existence of solutions to the Vlasov equation, and I'm having trouble seeing why a $C^1$-norm bound on our density function contradicts the maximal time interval being bounded. Anyway, here we go:
I'm looking for references to results regarding the maximal time of existence of solutions of a certain transport-like PDE, more precisely this one (I'm working in three-dimensional space, that is, $x$ and $v$ are three-dimensional, if it matters): $$\partial_t f + v \cdot \nabla_x f + E(f,t,x) \cdot \nabla_v f = 0$$ where the important fact is that $E$ depends on $f$ (on its integral with respect to $v$, to be exact). I'm not very experienced with nonlinear equations, but I was told there was a general theorem for situations like this one saying that if the maximal interval of existence $[0, T)$ is bounded, i.e. $T<\infty$ and the solution cannot be extended past $T$, then the $C^1$ norm of the solution must tend to $\infty$ as $t \rightarrow T$. I would very much appreciate any pointers to sources in which I can look for a result like this one, or to anything relevant in fact. Thanks!
• Viewing the PDE as an abstract ODE in Banach spaces can give a hint/flavour compared to the following usual result for finite dimensional ODE's: if $\frac{dy}{dt}=F(t,y)$ for a 'nice' $F$, then the only possibility for the maximal existence time $T$ to be finite is $\lim\limits_{t\to T^-}|y(t)|=\infty$. Then adapting to infinite dimensional PDE's usually requires to find a suitable functional setting (you mentioned the $C^1$ norm, but weaker Sobolev spaces are also very frequent). Apart from that I'm no expert in transport equations so I don't have a precise reference to point you to, sorry. – leo monsaingeon May 11 '14 at 18:26
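The finite-dimensional criterion mentioned in the comment can be seen concretely on the scalar ODE $\dot y = y^2$, $y(0)=1$, whose exact solution $y(t) = 1/(1-t)$ leaves every bounded set as $t \to 1^-$: the maximal existence time is finite precisely because the norm blows up. The sketch below (all names are mine, chosen for illustration) integrates it with explicit Euler and reports the time at which the numerical solution exceeds a large threshold:

```java
// Finite-time blow-up of dy/dt = y^2, y(0) = 1: the exact solution
// y(t) = 1/(1 - t) tends to infinity as t -> 1, so the maximal
// existence time is T = 1 and |y(t)| -> infinity as t -> T^-.
public class BlowUp {
    // Explicit Euler until y exceeds `threshold`; returns the time reached.
    public static double blowUpTime(double y0, double dt, double threshold) {
        double y = y0, t = 0.0;
        while (y < threshold) {
            y += dt * y * y; // Euler step for y' = y^2
            t += dt;
        }
        return t;
    }

    public static void main(String[] args) {
        double t = blowUpTime(1.0, 1e-4, 1e6);
        System.out.printf("numerical blow-up time ~ %.4f (exact: 1.0)%n", t);
    }
}
```

Shrinking the threshold or the step size does not push the escape time past $t = 1$, which is the numerical signature of a bounded maximal interval.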
https://codereview.stackexchange.com/questions/36074/guess-a-random-number-between-a-selected-interval
# Guess a random number between a selected interval
My project for my class is to create a Java-based game where the user must enter a number between 1-20 and a number between 250 and 300. The computer randomly chooses a number between those 2 numbers. Then the user has 10 guesses to correctly guess what number the computer is "thinking of."
There is a catch, though. We cannot use while loops, only if and else-if statements. I have started this code and was wondering if I'm on the right track. Please point out anything that might help me!
package guessthenumber;
import java.util.Scanner;
import java.util.Random;
public class GuessTheNumber {
public static void main(String[] args)
{
int lowBoundary, highBoundary, secretNumber, boundaryDifference,
g1, g2, g3, g4, g5, g6, g7, g8,
g9, g10;
//g1, g2,... = guess 1, guess2,...
Scanner keyboard = new Scanner(System.in);
System.out.println("Can you guess the number I'm thinking of?\nLet's "
+ "see if you can guess the right number within 10 guesses.\n\n"
+ "First please enter a number between 1 and 20.");
lowBoundary = keyboard.nextInt();
System.out.println("Excellent! Next enter a number between 250 and "
+ "350.");
highBoundary = keyboard.nextInt();
boundaryDifference = highBoundary - lowBoundary;
Random randomNumber = new Random();
secretNumber = randomNumber.nextInt(boundaryDifference) + lowBoundary;
System.out.println("The secret number has been chosen. Now you must "
+ "guess\nthe secret number within 10 guesses or else you lose."
+ "\n(Hint: The secret number is between " + lowBoundary +
" and " + highBoundary + ".)");
g1 = keyboard.nextInt();
if (g1 == secretNumber) {
System.out.println("CONGRATULATIONS! YOU'VE CORRECTLY GUESSED\nTHE"
+ "SECRET NUMBER ON YOUR FIRST GUESS!!");
}
else if (g1 < secretNumber) {
System.out.println("Higher than " + g1 + ". Guess again!");
g2 = keyboard.nextInt();
{
if (g2 == secretNumber) {
System.out.println("Awesome! You've correctly guessed\nthe"
+ "secret number in 2 guesses!");
}
else if (g2 < secretNumber) {
System.out.println("Higher than " + g2 + ". Guess again!");
g3 = keyboard.nextInt();
if (g3 == secretNumber) {
System.out.println("Great job! You've correctly guessed\n"
+ "the secert number in 3 guesses!");
}
else if (g3 < secretNumber) {
System.out.println("Higher than " + g3 + ". Guess again!");
g4 = keyboard.nextInt();
if (g4 == secretNumber) {
System.out.println("Good job! You've correctly guessed\nthe"
+ " secret number in 4 guesses!");
}
else if (g4 < secretNumber) {
System.out.println("Higher than " + g4 + ". Guess again!");
g5 = keyboard.nextInt();
if (g5 == secretNumber) {
System.out.println("Nice work! You've correctly guessed\nthe"
+ " secret number in 5 guesses!");
}
else if (g4 > secretNumber) {
System.out.println("Lower than " + g4 + ". Guess again!");
}
}
}
else if (g3 > secretNumber) {
System.out.println("Lower than " + g3 + ". Guess again!");
}
}
else if (g2 > secretNumber) {
System.out.println("Lower than " + g2 + ". Guess again!"
);
g3 = keyboard.nextInt();
}
}
}
else if (g1 > secretNumber) {
System.out.println("Lower than" + g1 + ". Guess again!");
{
}
}
}
}
I'm guessing you're not allowed to use any kind of loop, because then you could just use a for-loop.
Anyhow, I would suggest you encapsulate more of your code in methods. Think about what kind of operations you use repeatedly.
Also, you don't need multiple guessing variables (g1, g2, g3, ... , g10); you can make do with one.
Now to solve the main problem. I would suggest using a recursive method (a method that calls for itself). It can very well act as a loop and you could, for example, count the guesses with it among other things.
Here's an example of a recursive method that returns the sum of all integers from 1 up to a given number:
public static int sumIntegers(int number){
if (number == 1)
return number;
return number + sumIntegers(number - 1);
}
If you would call this method like this:
sumIntegers(5);
You would get: 5 + 4 + 3 + 2 + 1 = 15
Hope it helps!
• +1 for giving a push in the right direction without revealing too much (considering the homework tag on the question) – Simon Forsberg Nov 25 '13 at 21:12
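To make the recursion idea above concrete for the guessing game itself, here is a minimal sketch (method and variable names are illustrative, and the interactive `Scanner` input is replaced by a fixed secret so it runs standalone): a binary-search guesser that calls itself instead of looping, counting how many guesses it needs.

```java
// Recursive replacement for the chain of if/else-if blocks: each call
// makes one guess, then recurses on the half of the range that can
// still contain the secret. The guessing logic itself uses no loops.
public class RecursiveGuess {
    // Returns the number of guesses needed to find `secret` in [lo, hi].
    public static int guessesNeeded(int lo, int hi, int secret, int used) {
        int guess = (lo + hi) / 2;       // always guess the midpoint
        if (guess == secret) return used + 1;
        if (guess < secret)              // "Higher!" -> search the upper half
            return guessesNeeded(guess + 1, hi, secret, used + 1);
        return guessesNeeded(lo, guess - 1, secret, used + 1); // "Lower!"
    }

    public static void main(String[] args) {
        // With bounds 1..350, binary search needs at most 9 guesses,
        // comfortably inside the game's 10-guess limit.
        int worst = 0;
        for (int s = 1; s <= 350; s++)
            worst = Math.max(worst, guessesNeeded(1, 350, s, 0));
        System.out.println("worst case guesses for 1..350: " + worst);
    }
}
```

In the actual assignment the midpoint guess would come from `keyboard.nextInt()` and the comparisons would print the "Higher"/"Lower" hints, but the recursive shape is the same.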
This code is somewhat broken (slightly):
boundaryDifference = highBoundary - lowBoundary;
Random randomNumber = new Random();
secretNumber = randomNumber.nextInt(boundaryDifference) + lowBoundary;
This is one of the most common duplicate problems on StackOverflow, and it's a lesson you should just learn....
The above code will never produce the value highBoundary.
This is because nextInt(boundaryDifference) produces a result from 0 to boundaryDifference, excluding boundaryDifference.
The right way to get a random number from a given range including both limits of the range is:
int val = min + randomNumber.nextInt((max - min) + 1);
Never forget the +1!
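The off-by-one is easy to confirm empirically. The sketch below (names are mine, for illustration) draws repeatedly using the corrected formula and checks that both endpoints actually occur and that nothing falls outside the range:

```java
import java.util.Random;

// Demonstrates that min + nextInt((max - min) + 1) covers the full
// closed range [min, max], whereas nextInt(max - min) alone could
// never return the value max.
public class RangeDemo {
    // Returns {sawMin, sawMax} after `draws` samples from [min, max].
    public static boolean[] endpointsSeen(int min, int max, int draws, long seed) {
        Random rng = new Random(seed);
        boolean sawMin = false, sawMax = false;
        for (int i = 0; i < draws; i++) {
            int v = min + rng.nextInt((max - min) + 1); // inclusive of both ends
            if (v < min || v > max)
                throw new IllegalStateException("out of range: " + v);
            sawMin |= (v == min);
            sawMax |= (v == max);
        }
        return new boolean[] { sawMin, sawMax };
    }

    public static void main(String[] args) {
        boolean[] seen = endpointsSeen(20, 300, 200_000, 7L);
        System.out.println("saw min: " + seen[0] + ", saw max: " + seen[1]);
    }
}
```

Dropping the `+ 1` makes the `sawMax` flag stay false no matter how many samples are drawn, which is exactly the bug in the original boundaryDifference code.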
https://eccc.weizmann.ac.il/title/Q
Under the auspices of the Computational Complexity Foundation (CCF)
Q
TR19-015 | 7th February 2019
William Kretschmer
#### QMA Lower Bounds for Approximate Counting
We prove a query complexity lower bound for $QMA$ protocols that solve approximate counting: estimating the size of a set given a membership oracle. This gives rise to an oracle $A$ such that $SBP^A \not\subset QMA^A$, resolving an open problem of Aaronson [2]. Our proof uses the polynomial method to ... more >>>
TR05-129 | 30th October 2005
Scott Aaronson
#### QMA/qpoly Is Contained In PSPACE/poly: De-Merlinizing Quantum Protocols
This paper introduces a new technique for removing existential quantifiers
over quantum states. Using this technique, we show that there is no way
to pack an exponential number of bits into a polynomial-size quantum
state, in such a way that the value of any one of those bits ... more >>>
TR03-021 | 4th April 2003
Mikhail Vyalyi
#### QMA=PP implies that PP contains PH
We consider possible equality QMA=PP and give an argument
against it. Namely, this equality implies that PP contains PH. The argument is based on the strong form of Toda's theorem and
the strengthening of the proof for inclusion $QMA\subseteq PP$ due to Kitaev and Watrous.
more >>>
TR11-084 | 23rd May 2011
Madhur Tulsiani, Julia Wolf
#### Quadratic Goldreich-Levin Theorems
Decomposition theorems in classical Fourier analysis enable us to express a bounded function in terms of few linear phases with large Fourier coefficients plus a part that is pseudorandom with respect to linear phases. The Goldreich-Levin algorithm can be viewed as an algorithmic analogue of such a decomposition as it ... more >>>
TR15-205 | 15th December 2015
Emanuele Viola
#### Quadratic maps are hard to sample
This note proves the existence of a quadratic GF(2) map
$p : \{0,1\}^n \to \{0,1\}$ such that no constant-depth circuit
of size $\mathrm{poly}(n)$ can sample the distribution $(u,p(u))$
for uniform $u$.
more >>>
TR17-031 | 15th February 2017
Thomas Watson
#### Quadratic Simulations of Merlin-Arthur Games
Revisions: 1
The known proofs of $\text{MA}\subseteq\text{PP}$ incur a quadratic overhead in the running time. We prove that this quadratic overhead is necessary for black-box simulations; in particular, we obtain an oracle relative to which $\text{MA-TIME}(t)\not\subseteq\text{P-TIME}(o(t^2))$. We also show that 2-sided-error Merlin--Arthur games can be simulated by 1-sided-error Arthur--Merlin games with quadratic ... more >>>
TR17-123 | 2nd August 2017
Dmitry Gavinsky, Rahul Jain, Hartmut Klauck, Srijita Kundu, Troy Lee, Miklos Santha, Swagato Sanyal, Jevgenijs Vihrovs
#### Quadratically Tight Relations for Randomized Query Complexity
Let $f:\{0,1\}^n \rightarrow \{0,1\}$ be a Boolean function. The certificate complexity $C(f)$ is a complexity measure that is quadratically tight for the zero-error randomized query complexity $R_0(f)$: $C(f) \leq R_0(f) \leq C(f)^2$. In this paper we study a new complexity measure that we call expectational certificate complexity $EC(f)$, which is ... more >>>
TR05-036 | 28th March 2005
Hubie Chen
#### Quantified Constraint Satisfaction, Maximal Constraint Languages, and Symmetric Polymorphisms
The constraint satisfaction problem (CSP) is a convenient framework for modelling search problems; the CSP involves deciding, given a set of constraints on variables, whether or not there is an assignment to the variables satisfying all of the constraints. This paper is concerned with the quantified constraint satisfaction problem (QCSP), ... more >>>
TR05-024 | 8th February 2005
Michael Bauland, Elmar Böhler, Nadia Creignou, Steffen Reith, Henning Schnoor, Heribert Vollmer
#### Quantified Constraints: The Complexity of Decision and Counting for Bounded Alternation
We consider constraint satisfaction problems parameterized by the set of allowed constraint predicates. We examine the complexity of quantified constraint satisfaction problems with a bounded number of quantifier alternations and the complexity of the associated counting problems. We obtain classification results that completely solve the Boolean case, and we show ... more >>>
TR17-145 | 19th September 2017
Roei Tell
#### Quantified derandomization of linear threshold circuits
Revisions: 2
One of the prominent current challenges in complexity theory is the attempt to prove lower bounds for $TC^0$, the class of constant-depth, polynomial-size circuits with majority gates. Relying on the results of Williams (2013), an appealing approach to prove such lower bounds is to construct a non-trivial derandomization algorithm for ... more >>>
TR18-156 | 8th September 2018
Mark Bun, Robin Kothari, Justin Thaler
#### Quantum algorithms and approximating polynomials for composed functions with shared inputs
Revisions: 1
We give new quantum algorithms for evaluating composed functions whose inputs may be shared between bottom-level gates. Let $f$ be a Boolean function and consider a function $F$ obtained by applying $f$ to conjunctions of possibly overlapping subsets of $n$ variables. If $f$ has quantum query complexity $Q(f)$, we give ... more >>>
TR04-045 | 15th April 2004
Hartmut Klauck, Robert Spalek, Ronald de Wolf
#### Quantum and Classical Strong Direct Product Theorems and Optimal Time-Space Tradeoffs
A strong direct product theorem says that if we want to compute
k independent instances of a function, using less than k times
the resources needed for one instance, then our overall success
probability will be exponentially small in k.
We establish such theorems for the classical as well as ... more >>>
TR04-023 | 21st January 2004
Yaoyun Shi
#### Quantum and Classical Tradeoffs
We initiate the study of quantifying the quantumness of
a quantum circuit by the number of gates that do not preserve
the computational basis, as a means to understand the nature
of quantum algorithmic speedups.
Intuitively, a reduction in the quantumness requires
an increase in the amount of classical computation, ... more >>>
TR13-010 | 4th January 2013
Yang Liu, Shengyu Zhang
#### Quantum and randomized communication complexity of XOR functions in the SMP model
Communication complexity of XOR functions $f (x \oplus y)$ has attracted increasing attention in recent years, because of its connections to Fourier analysis, and its exhibition of exponential separations between classical and quantum communication complexities of total functions. However, the complexity of certain basic functions still seems elusive, especially in the ... more >>>
TR02-013 | 30th January 2002
Chris Pollett, Farid Ablayev, Cristopher Moore, Chris Pollett
#### Quantum and Stochastic Programs of Bounded Width
Revisions: 1
We prove upper and lower bounds on the power of quantum and stochastic
branching programs of bounded width. We show any NC^1 language can
be accepted exactly by a width-2 quantum branching program of
polynomial length, in contrast to the classical case where width 5 is
necessary unless $\mathrm{NC}^1 = \mathrm{ACC}$. ... more >>>
TR03-005 | 28th December 2002
Scott Aaronson
#### Quantum Certificate Complexity
Given a Boolean function f, we study two natural generalizations of the certificate complexity C(f): the randomized certificate complexity RC(f) and the quantum certificate complexity QC(f). Using Ambainis' adversary method, we exactly characterize QC(f) as the square root of RC(f). We then use this result to prove the new relation ... more >>>
TR99-032 | 7th July 1999
Cristopher Moore
#### Quantum Circuits: Fanout, Parity, and Counting
We propose definitions of $\mathrm{QAC}^0$, the quantum analog of the
classical class $\mathrm{AC}^0$ of constant-depth circuits with AND and OR
gates of arbitrary fan-in, and $\mathrm{QACC}^0[q]$, the analog of the class
$\mathrm{ACC}^0[q]$ where $\mathrm{Mod}_q$ gates are also allowed. We show that it is
possible to make a 'cat' state on ... more >>>
TR05-003 | 23rd December 2004
Scott Aaronson
#### Quantum Computing, Postselection, and Probabilistic Polynomial-Time
I study the class of problems efficiently solvable by a quantum computer, given the ability to "postselect" on the outcomes of measurements. I prove that this class coincides with a classical complexity class called PP, or Probabilistic Polynomial-Time. Using this result, I show that several simple changes to the axioms ... more >>>
TR05-146 | 25th November 2005
Gábor Erdèlyi, Tobias Riege, Jörg Rothe
#### Quantum Cryptography: A Survey
Revisions: 2
We survey some results in quantum cryptography. After a brief
introduction to classical cryptography, we provide the physical and
mathematical background needed and present some fundamental protocols
from quantum cryptography, including quantum key distribution and
quantum bit commitment protocols.
more >>>
TR07-032 | 27th March 2007
Pavel Pudlak
#### Quantum deduction rules
We define propositional quantum Frege proof systems and compare it
with classical Frege proof systems.
more >>>
TR17-011 | 22nd January 2017
Boaz Barak, Pravesh Kothari, David Steurer
#### Quantum entanglement, sum of squares, and the log rank conjecture
For every constant $\epsilon>0$, we give an $\exp(\tilde{O}(\sqrt{n}))$-time algorithm for the $1$ vs $1-\epsilon$ Best Separable State (BSS) problem of distinguishing, given an $n^2\times n^2$ matrix $M$ corresponding to a quantum measurement, between the case that there is a separable (i.e., non-entangled) state $\rho$ that $M$ accepts with probability $1$, ... more >>>
TR10-165 | 4th November 2010
Dmitry Gavinsky, Tsuyoshi Ito
#### Quantum Fingerprints that Keep Secrets
We introduce a new type of cryptographic primitive that we call hiding fingerprinting. No classical fingerprinting scheme is hiding. We construct quantum hiding fingerprinting schemes and argue their optimality.
more >>>
TR06-020 | 10th February 2006
Akinori Kawachi, Tomoyuki Yamakami
#### Quantum Hardcore Functions by Complexity-Theoretical Quantum List Decoding
Revisions: 1
We present three new quantum hardcore functions for any quantum one-way function. We also give a "quantum" solution to Damgard's question (CRYPTO'88) on his pseudorandom generator by proving the quantum hardcore property of his generator, which has been unknown to have the classical hardcore property.
Our technical tool is ... more >>>
TR19-041 | 7th March 2019
Srinivasan Arunachalam, Alex Bredariol Grilo, Aarthi Sundaram
#### Quantum hardness of learning shallow classical circuits
In this paper we study the quantum learnability of constant-depth classical circuits under the uniform distribution and in the distribution-independent framework of PAC learning. In order to attain our results, we establish connections between quantum learning and quantum-secure cryptosystems. We then achieve the following results.
1) Hardness of learning ... more >>>
TR20-066 | 28th April 2020
Scott Aaronson, Shalev Ben-David, Robin Kothari, Avishay Tal
#### Quantum Implications of Huang's Sensitivity Theorem
Based on the recent breakthrough of Huang (2019), we show that for any total Boolean function $f$, the deterministic query complexity, $D(f)$, is at most quartic in the quantum query complexity, $Q(f)$: $D(f) = O(Q(f)^4)$. This matches the known separation (up to log factors) due to Ambainis, Balodis, Belovs, Lee, ... more >>>
TR05-038 | 10th April 2005
Ran Raz
#### Quantum Information and the PCP Theorem
We show how to encode $2^n$ (classical) bits $a_1,...,a_{2^n}$
by a single quantum state $|\Psi \rangle$ of size $O(n)$ qubits,
such that:
for any constant $k$ and any $i_1,...,i_k \in \{1,...,2^n\}$,
the values of the bits $a_{i_1},...,a_{i_k}$ can be retrieved
from $|\Psi \rangle$ by a one-round Arthur-Merlin interactive ... more >>>
TR20-185 | 1st December 2020
Srinivasan Arunachalam, Alex Grilo, Tom Gur, Igor Oliveira, Aarthi Sundaram
#### Quantum learning algorithms imply circuit lower bounds
We establish the first general connection between the design of quantum algorithms and circuit lower bounds. Specifically, let $\mathrm{C}$ be a class of polynomial-size concepts, and suppose that $\mathrm{C}$ can be PAC-learned with membership queries under the uniform distribution with error $1/2 - \gamma$ by a time $T$ quantum algorithm. ... more >>>
TR18-201 | 30th November 2018
Anurag Anshu, Naresh Boddu, Dave Touchette
#### Quantum Log-Approximate-Rank Conjecture is also False
Comments: 1
In a recent breakthrough result, Chattopadhyay, Mande and Sherif [ECCC TR18-17] showed an exponential separation between the log approximate rank and randomized communication complexity of a total function $f$, hence refuting the log approximate rank conjecture of Lee and Shraibman [2009]. We provide an alternate proof of their randomized communication ... more >>>
TR20-087 | 8th June 2020
Uma Girish, Ran Raz, Wei Zhan
#### Quantum Logspace Algorithm for Powering Matrices with Bounded Norm
Revisions: 1
We give a quantum logspace algorithm for powering contraction matrices, that is, matrices with spectral norm at most 1. The algorithm gets as an input an arbitrary $n\times n$ contraction matrix $A$, and a parameter $T \leq poly(n)$ and outputs the entries of $A^T$, up to (arbitrary) polynomially small additive ... more >>>
TR18-087 | 20th April 2018
Kun He, Qian Li, Xiaoming Sun, Jiapeng Zhang
#### Quantum Lovász Local Lemma: Shearer's Bound is Tight
Lovász Local Lemma (LLL) is a very powerful tool in combinatorics and probability theory to show the possibility of avoiding all "bad" events under some "weakly dependent" condition. Over the last decades, the algorithmic aspect of LLL has also attracted lots of attention in theoretical computer science \cite{moser2010constructive, kolipaka2011moser, harvey2015algorithmic}. ... more >>>
TR18-137 | 7th August 2018
Scott Aaronson
#### Quantum Lower Bound for Approximate Counting Via Laurent Polynomials
We consider the following problem: estimate the size of a nonempty set $S\subseteq\left[ N\right]$, given both quantum queries to a membership oracle for $S$, and a device that generates equal superpositions $\left\vert S\right\rangle$ over $S$ elements. We show that, if $\left\vert S\right\vert$ is neither too large nor ... more >>>
TR14-109 | 14th August 2014
Aran Nayebi, Scott Aaronson, Aleksandrs Belovs, Luca Trevisan
#### Quantum lower bound for inverting a permutation with advice
Revisions: 1
Given a random permutation $f: [N] \to [N]$ as a black box and $y \in [N]$, we want to output $x = f^{-1}(y)$. Supplementary to our input, we are given classical advice in the form of a pre-computed data structure; this advice can depend on the permutation but \emph{not} on ... more >>>
TR02-072 | 12th November 2002
Scott Aaronson
#### Quantum Lower Bound for Recursive Fourier Sampling
We revisit the oft-neglected 'recursive Fourier sampling' (RFS) problem, introduced by Bernstein and Vazirani to prove an oracle separation between BPP and BQP. We show that the known quantum algorithm for RFS is essentially optimal, despite its seemingly wasteful need to uncompute information. This implies that, to place BQP outside ... more >>>
TR19-062 | 18th April 2019
Scott Aaronson, Robin Kothari, William Kretschmer, Justin Thaler
#### Quantum Lower Bounds for Approximate Counting via Laurent Polynomials
Revisions: 2
This paper proves new limitations on the power of quantum computers to solve approximate counting---that is, multiplicatively estimating the size of a nonempty set $S\subseteq [N]$.
Given only a membership oracle for $S$, it is well known that approximate counting takes $\Theta(\sqrt{N/|S|})$ quantum queries. But what if a quantum algorithm ... more >>>
TR96-003 | 4th December 1995
Alexei Kitaev
#### Quantum measurements and the Abelian Stabilizer Problem
We present a polynomial quantum algorithm for the Abelian stabilizer problem
which includes both factoring and the discrete logarithm. Thus we extend the famous
results of Shor. Our method is based on a procedure for measuring an eigenvalue
of a unitary operator. Another application of this
procedure is a polynomial ... more >>>
TR12-024 | 25th March 2012
Scott Aaronson, Paul Christiano
#### Quantum Money from Hidden Subspaces
Forty years ago, Wiesner pointed out that quantum mechanics raises the striking possibility of money that cannot be counterfeited according to the laws of physics. We propose the first quantum money scheme that is (1) public-key, meaning that anyone can verify a banknote as genuine, not only the bank that ... more >>>
TR14-151 | 13th November 2014
Debajyoti Bera
#### Quantum One-Sided Exact Error Algorithms
Revisions: 2
We define a complexity class for randomized algorithms with one-sided error that is exactly equal to a constant (unlike the usual definitions, in which the error is only bounded above or below by a constant). We show that the corresponding quantum classes (one each for a different error probability) are ... more >>>
TR10-143 | 19th September 2010
Bo'az Klartag, Oded Regev
#### Quantum One-Way Communication is Exponentially Stronger Than Classical Communication
In STOC 1999, Raz presented a (partial) function for which there is a quantum protocol
communicating only $O(\log n)$ qubits, but for which any classical (randomized, bounded-error) protocol requires $\mathrm{poly}(n)$ bits of communication. That quantum protocol requires two rounds of communication. Ever since Raz's paper it was open whether the ... more >>>
TR18-105 | 30th May 2018
Joseph Fitzsimons, Zhengfeng Ji, Thomas Vidick, Henry Yuen
#### Quantum proof systems for iterated exponential time, and beyond
We show that any language in nondeterministic time $\exp(\exp(\cdots\exp(n)))$, where the number of iterated exponentials is an arbitrary function $R(n)$, can be decided by a multiprover interactive proof system with a classical polynomial-time verifier and a constant number of quantum entangled provers, with completeness $1$ and soundness $1 - \exp(-C\exp(\cdots\exp(n)))$, ... more >>>
TR09-102 | 21st October 2009
Andrew Drucker, Ronald de Wolf
#### Quantum Proofs for Classical Theorems
Alongside the development of quantum algorithms and quantum complexity theory in recent years, quantum techniques have also proved instrumental in obtaining results in classical (non-quantum) areas. In this paper we survey these results and the quantum toolbox they use.
more >>>
TR08-086 | 9th July 2008
Vikraman Arvind, Partha Mukhopadhyay
#### Quantum Query Complexity of Multilinear Identity Testing
Motivated by the quantum algorithm in \cite{MN05} for testing
commutativity of black-box groups, we study the following problem:
Given a black-box finite ring $R=\angle{r_1,\cdots,r_k}$ where
$\{r_1,r_2,\cdots,r_k\}$ is an additive generating set for $R$ and a
multilinear polynomial $f(x_1,\cdots,x_m)$ over $R$ also accessed as
a ... more >>>
TR19-136 | 23rd September 2019
Sourav Chakraborty, Arkadev Chattopadhyay, Nikhil Mande, Manaswi Paraashar
#### Quantum Query-to-Communication Simulation Needs a Logarithmic Overhead
Buhrman, Cleve and Wigderson (STOC'98) observed that for every Boolean function $f : \{-1, 1\}^n \to \{-1, 1\}$ and $\bullet : \{-1, 1\}^2 \to \{-1, 1\}$ the two-party bounded-error quantum communication complexity of $(f \circ \bullet)$ is $O(Q(f) \log n)$, where $Q(f)$ is the bounded-error quantum query complexity of $f$. ... more >>>
TR07-013 | 6th February 2007
Andris Ambainis, Joseph Emerson
#### Quantum t-designs: t-wise independence in the quantum world
A t-design for quantum states is a finite set of quantum states with the property of simulating the Haar-measure on quantum states w.r.t. any test that uses at most t copies of a state. We give efficient constructions for approximate quantum t-designs for arbitrary t.
We then show that an ... more >>>
TR06-055 | 10th April 2006
Scott Aaronson, Greg Kuperberg
#### Quantum Versus Classical Proofs and Advice
This paper studies whether quantum proofs are more powerful than
classical proofs, or in complexity terms, whether QMA=QCMA. We prove
two results about this question. First, we give a "quantum oracle
separation" between QMA and QCMA. More concretely, we show that any
quantum algorithm needs order sqrt(2^n/(m+1)) queries to find ... more >>>
TR12-136 | 26th October 2012
Dan Boneh, Mark Zhandry
#### Quantum-Secure Message Authentication Codes
Revisions: 2
We construct the first Message Authentication Codes (MACs) that are existentially unforgeable against a quantum chosen message attack. These chosen message attacks model a quantum adversary’s ability to obtain the MAC on a superposition of messages of its choice. We begin by showing that a quantum secure PRF is sufficient ... more >>>
TR16-001 | 9th January 2016
Eli Ben-Sasson, Alessandro Chiesa, Ariel Gabizon, Madars Virza
#### Quasi-Linear Size Zero Knowledge from Linear-Algebraic PCPs
Revisions: 1
The seminal result that every language having an interactive proof also has a zero-knowledge interactive proof assumes the existence of one-way functions. Ostrovsky and Wigderson (ISTCS 1993) proved that this assumption is necessary: if one-way functions do not exist, then only languages in BPP have zero-knowledge interactive proofs.
Ben-Or et ... more >>>
TR17-135 | 10th September 2017
Ramprasad Saptharishi, Anamay Tengse
#### Quasi-polynomial Hitting Sets for Circuits with Restricted Parse Trees
Revisions: 1
We study the class of non-commutative Unambiguous circuits or Unique-Parse-Tree (UPT) circuits, and a related model of Few-Parse-Trees (FewPT) circuits (which were recently introduced by Lagarde, Malod and Perifel [LMP16] and Lagarde, Limaye and Srinivasan [LLS17]) and give the following constructions:
• An explicit hitting set of quasipolynomial size for ... more >>>
TR12-113 | 7th September 2012
Manindra Agrawal, Chandan Saha, Nitin Saxena
#### Quasi-polynomial Hitting-set for Set-depth-$\Delta$ Formulas
We call a depth-$4$ formula $C$ $\textit{set-depth-4}$ if there exists an (unknown) partition $X_1\sqcup\cdots\sqcup X_d$ of the variable indices $[n]$ that the top product layer respects, i.e. $C(\mathbf{x})=\sum_{i=1}^k {\prod_{j=1}^{d} {f_{i,j}(\mathbf{x}_{X_j})}}$, where $f_{i,j}$ is a $\textit{sparse}$ polynomial in $\mathbb{F}[\mathbf{x}_{X_j}]$. Extending this definition to any depth - we call ... more >>>
TR19-090 | 27th June 2019
Ronen Shaltiel, Swastik Kopparty, Jad Silbak
#### Quasilinear time list-decodable codes for space bounded channels
Revisions: 2
We consider codes for space bounded channels. This is a model for communication under noise that was studied by Guruswami and Smith (J. ACM 2016) and lies between the Shannon (random) and Hamming (adversarial) models. In this model, a channel is a space bounded procedure that reads the codeword in ... more >>>
TR12-115 | 11th September 2012
Michael Forbes, Amir Shpilka
#### Quasipolynomial-time Identity Testing of Non-Commutative and Read-Once Oblivious Algebraic Branching Programs
Revisions: 1
We study the problem of obtaining efficient, deterministic, black-box polynomial identity testing (PIT) algorithms for read-once oblivious algebraic branching programs (ABPs). This class has an efficient, deterministic, white-box polynomial identity testing algorithm (due to Raz and Shpilka), but prior to this work had no known such black-box algorithm. Here we ... more >>>
TR12-002 | 4th January 2012
Akinori Kawachi, Benjamin Rossman, Osamu Watanabe
#### Query Complexity and Error Tolerance of Witness Finding Algorithms
Revisions: 3
We propose an abstract framework for studying search-to-decision reductions for NP. Specifically, we study the following witness finding problem: for a hidden nonempty set $W\subseteq\{0,1\}^n$, the goal is to output a witness in $W$ with constant probability by making randomized queries of the form "is $Q\cap W$ nonempty?" where $Q\subseteq\{0,1\}^n$. ... more >>>
TR10-126 | 12th August 2010
Thomas Watson
#### Query Complexity in Errorless Hardness Amplification
Revisions: 2
An errorless circuit for a boolean function is one that outputs the correct answer or "don't know" on each input (and never outputs the wrong answer). The goal of errorless hardness amplification is to show that if $f$ has no size $s$ errorless circuit that outputs "don't know" on at ... more >>>
TR20-133 | 8th September 2020
Noga Ron-Zewi, Ronen Shaltiel, Nithin Varma
#### Query complexity lower bounds for local list-decoding and hard-core predicates (even for small rate and huge lists)
A binary code $\text{Enc}:\{0,1\}^k \rightarrow \{0,1\}^n$ is $(\frac{1}{2}-\varepsilon,L)$-list decodable if for every $w \in \{0,1\}^n$, there exists a set $\text{List}(w)$ of size at most $L$, containing all messages $m \in \{0,1\}^k$ such that the relative Hamming distance between $\text{Enc}(m)$ and $w$ is at most $\frac{1}{2}-\varepsilon$.
A $q$-query local list-decoder is ... more >>>
TR10-067 | 14th April 2010
Sourav Chakraborty, Eldar Fischer, Arie Matsliah
#### Query Complexity Lower Bounds for Reconstruction of Codes
We investigate the problem of {\em local reconstruction}, as defined by Saks and Seshadhri (2008), in the context of error correcting codes.
The first problem we address is that of {\em message reconstruction}, where given oracle access to a corrupted encoding $w \in \zo^n$ of some message $x \in \zo^k$ ... more >>>
TR20-108 | 19th July 2020
Arijit Bishnu, Arijit Ghosh, Gopinath Mishra, Manaswi Paraashar
#### Query Complexity of Global Minimum Cut
Revisions: 1
In this work, we resolve the query complexity of the global minimum cut problem for a graph by designing a randomized algorithm for approximating the size of the minimum cut in a graph, where the graph can be accessed through local queries like \textsc{Degree}, \textsc{Neighbor}, and \textsc{Adjacency} queries.
Given $\epsilon \in (0,1)$, ... more >>>
TR12-063 | 17th May 2012
Raghav Kulkarni, Miklos Santha
#### Query complexity of matroids
Let $\mathcal{M}$ be a bridgeless matroid on ground set $\{1,\ldots, n\}$ and $f_{\mathcal{M}}: \{0,1\}^n \to \{0, 1\}$ be the indicator function of its independent sets. A folklore fact is that $f_\mathcal{M}$ is "evasive," i.e., $D(f_\mathcal{M}) = n$ where $D(f)$ denotes the deterministic decision tree complexity of $f.$ Here we prove ... more >>>
TR10-173 | 9th November 2010
Yeow Meng Chee, Tao Feng, San Ling, Huaxiong Wang, Liang Feng Zhang
#### Query-Efficient Locally Decodable Codes
A $k$-query locally decodable code (LDC)
$\textbf{C}:\Sigma^{n}\rightarrow \Gamma^{N}$ encodes each message $x$ into
a codeword $\textbf{C}(x)$ such that each symbol of $x$ can be probabilistically
recovered by querying only $k$ coordinates of $\textbf{C}(x)$, even after a
constant fraction of the coordinates have been corrupted.
Yekhanin (2008)
constructed a $3$-query LDC ... more >>>
TR17-053 | 22nd March 2017
Mika Göös, Toniann Pitassi, Thomas Watson
#### Query-to-Communication Lifting for BPP
Revisions: 1
For any $n$-bit boolean function $f$, we show that the randomized communication complexity of the composed function $f\circ g^n$, where $g$ is an index gadget, is characterized by the randomized decision tree complexity of $f$. In particular, this means that many query complexity separations involving randomized models (e.g., classical vs.\ ... more >>>
TR17-024 | 16th February 2017
Mika Göös, Pritish Kamath, Toniann Pitassi, Thomas Watson
#### Query-to-Communication Lifting for P^NP
Revisions: 1
We prove that the $\text{P}^{\small\text{NP}}$-type query complexity (alternatively, decision list width) of any boolean function $f$ is quadratically related to the $\text{P}^{\small\text{NP}}$-type communication complexity of a lifted version of $f$. As an application, we show that a certain "product" lower bound method of Impagliazzo and Williams (CCC 2010) fails to ... more >>>
TR19-103 | 7th August 2019
Arkadev Chattopadhyay, Yuval Filmus, Sajin Koroth, Or Meir, Toniann Pitassi
#### Query-to-Communication Lifting Using Low-Discrepancy Gadgets
Revisions: 2
Lifting theorems are theorems that relate the query complexity of a function $f:\left\{ 0,1 \right\}^n\to \left\{ 0,1 \right\}$ to the communication complexity of the composed function $f\circ g^n$, for some “gadget” $g:\left\{ 0,1 \right\}^b\times \left\{ 0,1 \right\}^b\to \left\{ 0,1 \right\}$. Such theorems allow transferring lower bounds from query complexity to ... more >>>
https://studyforce.com/course/mathematics-i-1132/lessons/solve-proportions/
# Mathematics I (Math 1132)
Durham College, Mathematics
Free
• 42 lessons
• 0 quizzes
• 10 week duration
• ##### Measurements
An introduction to numerical computation. Emphasis is placed on scientific and engineering notation, the rule of significant figures, and converting between SI and Imperial units.
• ##### Fractions, Percentage, Ratios and Proportion
Emphasis here is placed on understanding fractions, percent, and using ratios to compare quantities and set up proportions to solve problems.
• ##### Geometry
This unit focuses on analyzing and understand the characteristics of various shapes, both 2D and 3D.
## Mathematics I (Math 1132)
### Solve Proportion Equations
You were introduced to ratios and proportions a few units ago, but you hadn’t acquired the skills to solve equations. Now that you know the techniques to solving equations, you can finally begin to solve proportion-related equations and word problems.
The simplest type of proportion equation to solve is when you have one ratio equal to another, as shown below:
To solve this, cross-multiply: multiply each numerator by the denominator of the opposite fraction and set the two products equal:
\[\frac{4}{5}=\frac{3}{x}\]
\[5(3)=4x\]
\[15=4x\]
\[x=\frac{15}{4}=3.75\]
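The cross-multiplication step can be automated. This small sketch (an added illustration, not part of the lesson) solves a/b = c/x exactly with Python's `fractions` module:

```python
from fractions import Fraction

def solve_proportion(a, b, c):
    """Solve a/b = c/x for x: cross-multiplying gives a*x = b*c, so x = b*c/a."""
    return Fraction(b * c, a)

x = solve_proportion(4, 5, 3)   # the example above: 4/5 = 3/x
print(x, float(x))              # 15/4 3.75
```

Using `Fraction` keeps the answer exact (15/4) while `float` gives the decimal form (3.75).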
For more complicated proportions, such as the one below, you still use the same technique.
To solve problems involving proportions, you need to know, first and foremost, how to convert verbal statements into equations. To change sentences into mathematical expressions and equations, look for key words like these:
Increased: +
Sum: +
More: +
Decreased: –
Difference: –
Less: –
Twice: ×2
Doubled: ×2
Tripled: ×3
The same: =
The first example in the video below can be solved without knowing how to do this, but most questions in this lesson, including Example 2, do require it.
https://datascience.stackexchange.com/questions/32638/xgboost-results-are-not-invariant-under-monotone-predictor-transformations
# XGBoost results are not invariant under monotone predictor transformations?
It is believed by many that tree-based methods are invariant under monotone transformations of the predictors. But recently I've read a paper (https://arxiv.org/pdf/1611.04561.pdf, referred to as the arxiv paper later) that says whether it's invariant depends on how the split threshold is chosen (there are three methods), and according to this paper, xgboost would be invariant under transformations because it uses the sweep left method. This is mentioned in the last paragraph on pp.2 and first paragraph on pp.3.
But when I read the original xgboost paper by Chen, the split algorithm looks much more sophisticated than any method mentioned in the arxiv paper, and it looks like it should be sensitive to transformations. I've tried xgboost on regression for a few data sets, and if I have column subsampling turned on, I do see different results with predictor transformations.
Can anyone give me some confirmation on this topic? I'm confused mostly by the arxiv paper.
• Do you confirm that making no change also has no change in your outputs? You may be seeing randomness changing things. – kbrose Jun 4 '18 at 22:45
• Yes. Seed is set, and results don't change if I run multiple times with the same set of parameters. – hooyeh Jun 5 '18 at 0:22
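To see concretely how the split rule matters, here is a toy sketch (my own illustration, not code from either paper): an exhaustive-search regression stump that places each candidate threshold at the midpoint between consecutive sorted values. Under a monotone transform such as x ↦ x³, the chosen split separates the same training points, but the threshold lands in a different place, so a new test point can fall on different sides of it.

```python
def fit_stump(xs, ys):
    """Exhaustive-search regression stump; threshold = midpoint between neighbors."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    xs_s = [xs[i] for i in order]
    ys_s = [ys[i] for i in order]
    best = None
    for i in range(1, len(xs_s)):
        thr = (xs_s[i - 1] + xs_s[i]) / 2           # midpoint rule
        left, right = ys_s[:i], ys_s[i:]
        ml, mr = sum(left) / len(left), sum(right) / len(right)
        err = sum((y - ml) ** 2 for y in left) + sum((y - mr) ** 2 for y in right)
        if best is None or err < best[0]:
            best = (err, thr, ml, mr)
    return best[1:]                                  # (threshold, left_mean, right_mean)

def predict(stump, x):
    thr, ml, mr = stump
    return ml if x < thr else mr

xs, ys = [1.0, 2.0, 10.0], [0.0, 0.0, 1.0]
s_raw = fit_stump(xs, ys)                            # threshold at (2 + 10) / 2 = 6
s_cubed = fit_stump([x ** 3 for x in xs], ys)        # threshold at (8 + 1000) / 2 = 504
# Identical predictions on the training points...
print(all(predict(s_raw, x) == predict(s_cubed, x ** 3) for x in xs))
# ...but not on a new point: 7 >= 6, while 7**3 = 343 < 504.
print(predict(s_raw, 7.0), predict(s_cubed, 7.0 ** 3))
```

This is the distinction the arxiv paper draws: the fitted partition of the training data is invariant, but predictions on unseen points depend on where the threshold is placed within the gap.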
https://stacks.math.columbia.edu/tag/0A7S
Lemma 47.16.9. Let $(A, \mathfrak m, \kappa )$ be a Noetherian local ring with normalized dualizing complex. Let $I \subset \mathfrak m$ be an ideal of finite length. Set $B = A/I$. Then there is a distinguished triangle
$\omega _ B^\bullet \to \omega _ A^\bullet \to \mathop{\mathrm{Hom}}\nolimits _ A(I, E)[0] \to \omega _ B^\bullet [1]$
in $D(A)$ where $E$ is an injective hull of $\kappa$ and $\omega _ B^\bullet$ is a normalized dualizing complex for $B$.
Proof. Use the short exact sequence $0 \to I \to A \to B \to 0$ and Lemmas 47.16.4 and 47.16.2. $\square$
https://rw.mathematicstip.com/5497-42-complex-line-integrals.html
# 4.2: Complex Line Integrals
Line integrals are also called path or contour integrals. Given the ingredients we define the complex line integral \(\int_{\gamma} f(z)\, dz\) by

\[\int_{\gamma} f(z)\, dz := \int_{a}^{b} f(\gamma(t))\, \gamma'(t)\, dt. \tag{4.2.1}\]

You should note that this notation looks just like integrals of a real variable. We don't need the vectors and dot products of line integrals in \(\mathbb{R}^2\). Also, make sure you understand that the product \(f(\gamma(t))\, \gamma'(t)\) is just a product of complex numbers.

An alternative notation uses \(dz = dx + i\,dy\) to write

\[\int_{\gamma} f(z)\, dz = \int_{\gamma} (u + iv)(dx + i\,dy) \tag{4.2.2}\]

Let's check that Equations 4.2.1 and 4.2.2 are the same. Equation 4.2.2 is really a multivariable calculus expression, so thinking of \(\gamma(t)\) as \((x(t), y(t))\) it becomes

\[\int_{\gamma} f(z)\, dz = \int_a^b \left[ u(x(t), y(t)) + iv(x(t), y(t)) \right] (x'(t) + iy'(t))\, dt\]

but

\[u(x(t), y(t)) + iv(x(t), y(t)) = f(\gamma(t))\]

and

\[x'(t) + iy'(t) = \gamma'(t)\]

so the right hand side of Equation 4.2.2 is

\[\int_{a}^{b} f(\gamma(t))\, \gamma'(t)\, dt.\]

That is, it is exactly the same as the expression in Equation 4.2.1.

Example 1

Compute \(\int_{\gamma} z^2\, dz\) along the straight line from 0 to \(1 + i\).

Solution

We parametrize the curve as \(\gamma(t) = t(1 + i)\) with \(0 \le t \le 1\). So \(\gamma'(t) = 1 + i\). The line integral is

\[\int_{\gamma} z^2\, dz = \int_{0}^{1} t^2 (1 + i)^2 (1 + i)\, dt = \frac{2i(1 + i)}{3}.\]

Example 2

Compute \(\int_{\gamma} \overline{z}\, dz\) along the straight line from 0 to \(1 + i\).

Solution

We can use the same parametrization as in the previous example. So,

\[\int_{\gamma} \overline{z}\, dz = \int_{0}^{1} t(1 - i)(1 + i)\, dt = 1.\]

Example 3

Compute \(\int_{\gamma} z^2\, dz\) along the unit circle.

Solution

We parametrize the unit circle by \(\gamma(\theta) = e^{i\theta}\), where \(0 \le \theta \le 2\pi\). We have \(\gamma'(\theta) = ie^{i\theta}\). So, the integral becomes

\[\int_{\gamma} z^2\, dz = \int_{0}^{2\pi} e^{2i\theta}\, ie^{i\theta}\, d\theta = \int_{0}^{2\pi} ie^{3i\theta}\, d\theta = \left. \frac{e^{3i\theta}}{3} \right|_{0}^{2\pi} = 0.\]

Example 4

Compute \(\int_{\gamma} \overline{z}\, dz\) along the unit circle.

Solution

Parametrize \(C\): \(\gamma(t) = e^{it}\), with \(0 \le t \le 2\pi\). So, \(\gamma'(t) = ie^{it}\). Putting this into the integral gives

\[\int_{C} \overline{z}\, dz = \int_{0}^{2\pi} \overline{e^{it}}\, ie^{it}\, dt = \int_{0}^{2\pi} i\, dt = 2\pi i.\]
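These circle integrals can also be checked numerically. The sketch below (an added illustration, not part of the original lesson) approximates Examples 3 and 4 with a midpoint Riemann sum using Python's built-in complex arithmetic:

```python
import cmath

def line_integral(f, gamma, dgamma, a, b, n=20000):
    """Midpoint Riemann sum approximating the integral of f(gamma(t)) * gamma'(t) dt."""
    dt = (b - a) / n
    total = 0j
    for k in range(n):
        t = a + (k + 0.5) * dt
        total += f(gamma(t)) * dgamma(t) * dt
    return total

circle = lambda t: cmath.exp(1j * t)          # gamma(t) = e^{it}
dcircle = lambda t: 1j * cmath.exp(1j * t)    # gamma'(t) = i e^{it}

# Example 3: z^2 around the unit circle integrates to 0.
v3 = line_integral(lambda z: z * z, circle, dcircle, 0.0, 2 * cmath.pi)
# Example 4: conj(z) around the unit circle integrates to 2*pi*i.
v4 = line_integral(lambda z: z.conjugate(), circle, dcircle, 0.0, 2 * cmath.pi)
print(abs(v3) < 1e-9, abs(v4 - 2j * cmath.pi) < 1e-9)
```

The `line_integral` helper is just the Riemann sum behind Equation 4.2.1, so it works for any smooth parametrized contour, not only circles.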
## Integration using Euler's formula
In integral calculus, Euler's formula for complex numbers may be used to evaluate integrals involving trigonometric functions. Using Euler's formula, any trigonometric function may be written in terms of the complex exponential functions \(e^{ix}\) and \(e^{-ix}\) and then integrated. This technique is often simpler and faster than using trigonometric identities or integration by parts, and is sufficiently powerful to integrate any rational expression involving trigonometric functions. [1]
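As a brief worked illustration (added here, not part of the original passage), the technique rewrites the integrand in exponentials, integrates term by term, and converts back:

```latex
\int \cos^2 x \, dx
  = \int \left( \frac{e^{ix} + e^{-ix}}{2} \right)^{2} dx
  = \frac{1}{4} \int \left( e^{2ix} + 2 + e^{-2ix} \right) dx
  = \frac{x}{2} + \frac{e^{2ix} - e^{-2ix}}{8i} + C
  = \frac{x}{2} + \frac{\sin 2x}{4} + C.
```

No half-angle identity is needed; the squaring and the integration both happen on plain exponentials.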
## Properties of Fourier Series Spectrum
A signal's Fourier series spectrum \(c_k\) has interesting properties.

If \(s(t)\) is real, then \(c_{-k} = c_k^*\):

Real-valued periodic signals have conjugate-symmetric spectra.

This result follows from the integral that calculates the \(c_k\) from the signal. Furthermore, this result means that the real part of the Fourier coefficients for real-valued signals is even: \(\mathrm{Re}(c_{-k}) = \mathrm{Re}(c_k)\). Similarly, the imaginary parts of the Fourier coefficients have odd symmetry: \(\mathrm{Im}(c_{-k}) = -\mathrm{Im}(c_k)\). Consequently, if you are given the Fourier coefficients for positive indices and zero and are told the signal is real-valued, you can find the negative-indexed coefficients, hence the entire spectrum. This kind of symmetry, \(c_{-k} = c_k^*\), is known as conjugate symmetry.

If \(s(-t) = s(t)\), which says the signal has even symmetry about the origin, then \(c_{-k} = c_k\). Given the previous property for real-valued signals, the Fourier coefficients of even signals are real-valued. A real-valued Fourier expansion amounts to an expansion in terms of only cosines, which is the simplest example of an even signal.

If \(s(-t) = -s(t)\), which says the signal has odd symmetry, then \(c_{-k} = -c_k\). Therefore, the Fourier coefficients are purely imaginary. The square wave is a great example of an odd-symmetric signal.

The spectral coefficients for a periodic signal with period \(T\) delayed by \(\tau\), \(s(t - \tau)\), are \(c_k e^{-i 2\pi k \tau / T}\), where \(c_k\) denotes the spectrum of \(s(t)\).

Delaying a signal by \(\tau\) seconds results in a spectrum having a linear phase shift of \(-\frac{2\pi k \tau}{T}\) in comparison to the spectrum of the undelayed signal. Note that the spectral magnitude is unaffected. Showing this property is easy: the range of integration extends over a period of the integrand, and consequently it does not matter over which interval of length \(T\) we integrate.

The complex Fourier series obeys Parseval's theorem, one of the most important results in signal analysis. This general mathematical result says you can calculate a signal's power in either the time domain or the frequency domain:

\[\frac{1}{T} \int_{0}^{T} |s(t)|^2\, dt = \sum_{k=-\infty}^{\infty} |c_k|^2.\]
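The symmetry and power relations above can be checked numerically with a plain discrete Fourier sum. The sketch below is an added illustration with an arbitrarily chosen real test signal, not an example from the original text:

```python
import cmath
import math

N = 64
# Arbitrary real test signal: two harmonics of the fundamental.
s = [math.cos(2 * math.pi * 3 * n / N) + 0.25 * math.sin(2 * math.pi * 5 * n / N)
     for n in range(N)]

def coeff(k):
    """Fourier coefficient c_k of the sampled signal, via a discrete Fourier sum."""
    return sum(s[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N)) / N

# Conjugate symmetry: c_{-k} = conj(c_k) for a real-valued signal.
sym_ok = all(abs(coeff(-k) - coeff(k).conjugate()) < 1e-9 for k in range(1, 8))

# Parseval: average power in time equals the sum of |c_k|^2 over one period of k.
power_time = sum(x * x for x in s) / N
power_freq = sum(abs(coeff(k)) ** 2 for k in range(-N // 2, N // 2))
print(sym_ok, abs(power_time - power_freq) < 1e-9)
```

Both checks pass to floating-point precision; changing the signal's amplitudes moves the power but never breaks either identity.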
## Math Insight
In the introduction to scalar line integrals, we derived the formula for $slint$, the line integral of a function $f$ over a curve parametrized by $dllp(t)$ for $a le t le b$: egin slint =int_a^b dlsi(dllp(t))| dllp,'(t) | dt. end
However, the value of this integral should not depend on the specific parametrization $dllp$, as it is designed to capture quantities like the total mass of a wire with density $dlsi$. The integral should only depend on the density function $dlsi(vc)$ and the image curve, denoted by $dlc$, which is the set of points $dllp(t)$ for all values of $t$ in the interval $[a,b]$.
For the reason, we can think of the line integral as being over the curve $dlc$, rather than the particular parametrization given by $dllp(t)$. To reflect this viewpoint, we could write the integral that gives the mass of the slinky as egin dslint = pslint, end where the only difference from above is that we replaced $dllp$ with $dlc$.
The notation $dslint$ makes it explicit that line integrals are independent of the parametrization $dllp(t)$ (after all, the notation does not mention $dllp(t)$). You may remember that the same curve $dlc$ can be parametrized by many functions. For example, we gave two different parametrizations of the unit circle. Above, we parametrized the slinky by $dllp(t) = (cos t, sin t, t)$, for le t le 2pi$. We could have equally well parametrized the same slinky by$adllp(t) = (cos 2t, sin 2t, 2t)$for le t le pi$. (If $dllp(t)$ and $adllp(t)$ were the positions at time $t$ of two particles traveling along the slinky, the particle given by $adllp(t)$ travels twice as fast, covering the slinky in half the time, compared to the particle given by $dllp(t)$.)
As long as the density $dlsi(vc)$ is unchanged, the mass of the slinky had better be the same, no matter which parametrization we use. Hence, it must be true that the mass is both egin dslint = int_ dlsi , dals = pslint<0><2pi> end and egin dslint = int_ dlsi , dals = pslint<0>. end
Can you see why the integral is the same for both parametrizations $dllp(t)$ and $adllp(t)$, i.e., why egin pslint<0><2pi> = pslint<0>? end
In the first case, you are integrating over an interval that is twice as long (from 0 to $2pi$ versus from 0 to $pi$), but the speed $| dllp'(t)|$ is half the speed $| adllp'(t) |$. The two effects cancel and the integrals are equal.
Since the line integral of a vector field $dlvf$ over the curve is based on the line integral of a scalar function $f = dlvf cdot vc$, where $vc$ is the unit tangent vector of the curve, we expect that line integrals of vector fields should also be independent of the parametrization $dllp(t)$. Indeed, this is the case, with one important exception. Since $vc = dllp'(t)/dllp(t)$, the unit tangent vector, and hence the integral $dlint$, will be independent of the speed $|dllp'(t)|$ of the parametrization. The unit tangent vector, as its name implies will always be of length 1. But, at any point along the curve, there are two unit tangent vectors pointing in opposite directions. If we call one of them $vc$, then the other is $-vc$. The choice of these unit tangent vectors will depend on the direction that $dllp(t)$ transverses the curve $dlc$ as $t$ increases.
We refer to the choice of the unit tangent vector, or equivalently, the choice of the direction to traverse $C$, as the orientation of the curve. Every simple curve has two orientations, one corresponding to one unit tangent vector $\mathbf{T}$ and the other corresponding to its opposite $-\mathbf{T}$. Scalar line integrals are independent of curve orientation, but vector line integrals switch sign if you switch the orientation of the curve. This makes sense intuitively: the mass of the slinky shouldn't change, but the work done by a force field changes sign if you move in the opposite direction.
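The sign flip can be illustrated numerically. The sketch below integrates a vector field around the unit circle with both orientations; the field $\mathbf{F} = (-y, x, 0)$ is an illustrative choice, not from the text.

```python
import math

def vector_line_integral(F, c, dc, a, b, n=1000):
    """Midpoint-rule approximation of the vector line integral
    of F over the curve c on [a, b]: integral of F(c(t)) . c'(t) dt."""
    h = (b - a) / n
    total = 0.0
    for i in range(n):
        t = a + (i + 0.5) * h
        Fx, Fy, Fz = F(*c(t))
        dx, dy, dz = dc(t)
        total += (Fx * dx + Fy * dy + Fz * dz) * h
    return total

F = lambda x, y, z: (-y, x, 0.0)

# Unit circle traversed counterclockwise
c_ccw = lambda t: (math.cos(t), math.sin(t), 0.0)
dc_ccw = lambda t: (-math.sin(t), math.cos(t), 0.0)

# Same circle, opposite orientation
c_cw = lambda t: (math.cos(-t), math.sin(-t), 0.0)
dc_cw = lambda t: (math.sin(-t), -math.cos(-t), 0.0)

w_ccw = vector_line_integral(F, c_ccw, dc_ccw, 0.0, 2 * math.pi)
w_cw = vector_line_integral(F, c_cw, dc_cw, 0.0, 2 * math.pi)
```

The two results are equal in magnitude and opposite in sign, while a scalar line integral computed over the same two parametrizations would be identical.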
The examples of line integrals of scalar functions and vector fields include calculations of the same line integral with different parametrizations.
## SLI 4.2
Conversion is achieved thanks to high-quality transformers that preserve the entire bandwidth, and to STOP NOISE technology, which offers total rejection of electrical interference: the transformers provide galvanic separation between source and amplifier.
Robust and compact shielded aluminium chassis with connection diagram on the bottom and connection panel on one side that facilitates installation even in the rear dashboard.
Wiring labelled and terminated with Molex* connector featuring a tear-proof plastic strip, for a long-lasting and easy connection.
4. TOTAL OEM INTEGRATION
DSR (Double Step-Down Ratio)
High-Power (8:1) and Low-Power (4:1) step-down conversion ratios, selectable via DIP switch independently for each input (front and rear), to correctly interface with either traditional OEM sources (Low-Power) or OEM amplifiers featuring high-voltage outputs (High-Power) up to 35 V RMS.
USS (UNIVERSAL SPEAKERS SIMULATOR)
It simulates the load of the speakers on the 4 input channels (front and rear), for complete compatibility with OEM sources that detect the original speaker load (impedance) connected to their outputs.
ART TM (AUTOMATIC REMOTE TURN-ON)**
A function compatible with all OEM sources, including latest-generation ones, that creates a 12 V output for the remote turn-on of the amplifier when the SLI is powered.
*Molex is a registered trademark owned by Molex, LLC in the United States
** In countries where the legislation on electromagnetic compatibility is not mandatory under ECE / UN Regulation no. 10/5
## Integration in Proton NMR
• Contributed by Chris Schaller
• Professor (Chemistry) at College of Saint Benedict/Saint John's University
There is additional information obtained from 1H NMR spectroscopy that is not typically available from 13C NMR spectroscopy. Chemical shift can show how many different types of hydrogens are found in a molecule; integration reveals the number of hydrogens of each type. An integrator trace (or integration trace) can be used to find the ratio of the numbers of hydrogen atoms in different environments in an organic compound.
An integrator trace is a computer-generated line which is superimposed on a proton NMR spectrum. In the diagram, the integrator trace is shown in red.
An integrator trace measures the relative areas under the various peaks in the spectrum. When the integrator trace crosses a peak or group of peaks, it gains height. The height gained is proportional to the area under the peak or group of peaks. You measure the height gained at each peak or group of peaks by measuring the distances shown in green in the diagram above - and then find their ratio.
For example, if the heights were 0.7 cm, 1.4 cm and 2.1 cm, the ratio of the peak areas would be 1:2:3. That in turn shows that the ratio of the hydrogen atoms in the three different environments is 1:2:3.
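This normalization is easy to automate; a minimal sketch (using the example heights above):

```python
heights = [0.7, 1.4, 2.1]  # measured integral rises, in cm

# Divide every height by the smallest one to get the relative ratio
smallest = min(heights)
ratio = [round(h / smallest) for h in heights]  # -> 1:2:3
```

The same ratio would result from any spectrum in which the peak areas stand in a 1:2:3 relationship.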
Figure NMR16. 1 H NMR spectrum of ethanol with solid integral line. Source: Spectrum taken in CDCl3 on a Varian Gemini 2000 Spectrometer with 300 MHz Oxford magnet.
Looking at the spectrum of ethanol, you can see that there are three different kinds of hydrogens in the molecule. You can also see by integration that there are three hydrogens of one type, two of the second type, and one of the third type -- corresponding to the CH3 or methyl group, the CH2 or methylene group and the OH or hydroxyl group. That information helps narrow down the number of possible structures of the sample, and so it makes structure elucidation of an unknown sample much easier.
Integral data can be given in different forms. You should be aware of all of them. In raw form, an integral is a horizontal line running across the spectrum from left to right. Where the line crosses the frequency of a peak, the area of the peak is measured. This measurement is shown as a jump or step upward in the integral line; the vertical distance that the line rises is proportional to the area of the peak. The area is related to the amount of radio waves absorbed at that frequency, and the amount of radio waves absorbed is proportional to the number of hydrogen atoms absorbing the radio waves.
Sometimes, the integral line is cut into separate integrals for each peak so that they can be compared to each other more easily.
Figure NMR17. 1 H NMR spectrum of ethanol with broken integral line. Source: Spectrum taken in CDCl3 on a Varian Gemini 2000 Spectrometer with 300 MHz Oxford magnet.
Often, instead of displaying raw data, the integrals are measured and their heights are displayed on the spectrum.
Figure NMR18. 1 H NMR spectrum of ethanol with numerical integrals.
Source: Spectrum taken in CDCl3 on a Varian Gemini 2000 Spectrometer with 300 MHz Oxford magnet.
Sometimes the heights are "normalized". They are reduced to a lowest common factor so that their ratios are easier to compare. These numbers could correspond to numbers of hydrogens, or simply to their lowest common factors. Two peaks in a ratio of 1H:2H could correspond to one and two hydrogens, or they could correspond to two and four hydrogens, etc.
Figure NMR19. 1 H NMR spectrum of ethanol with normalized integral numbers.
Source: Spectrum taken in CDCl3 on a Varian Gemini 2000 Spectrometer with 300 MHz Oxford magnet.
## 4.2: Complex Line Integrals
In the descriptions of the following functions, z is the complex number x + i y , where i is defined as sqrt (-1) .
: abs ( z )
Compute the magnitude of z .
The magnitude is defined as | z | = sqrt (x^2 + y^2) .
: arg ( z ) : angle ( z )
Compute the argument, i.e., angle of z .
This is defined as, theta = atan2 ( y , x ) , in radians.
: conj ( z )
Return the complex conjugate of z .
The complex conjugate is defined as conj ( z ) = x - i y .
: cplxpair ( z ) : cplxpair ( z , tol ) : cplxpair ( z , tol , dim )
Sort the numbers z into complex conjugate pairs ordered by increasing real part.
The negative imaginary complex numbers are placed first within each pair. All real numbers (those with abs (imag ( z ) / z ) < tol ) are placed after the complex pairs.
tol is a weighting factor which determines the tolerance of matching. The default value is 100 and the resulting tolerance for a given complex pair is 100 * eps (abs ( z (i))) .
By default the complex pairs are sorted along the first non-singleton dimension of z . If dim is specified, then the complex pairs are sorted along this dimension.
Signal an error if some complex numbers could not be paired. Signal an error if all complex numbers are not exact conjugates (to within tol ). Note that there is no defined order for pairs with identical real parts but differing imaginary parts.
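The same basic quantities can be cross-checked in Python's standard library (a sketch using cmath, which is analogous to, but not identical with, the Octave functions above):

```python
import cmath
import math

z = 3 + 4j

magnitude = abs(z)           # sqrt(x^2 + y^2)
argument = cmath.phase(z)    # atan2(y, x), in radians
conjugate = z.conjugate()    # x - i*y
```

For z = 3 + 4i the magnitude is 5, the argument is atan2(4, 3), and the conjugate is 3 - 4i.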
## Integration segment limits
Depending on whether an integration method uses the endpoints of the segment, open or closed rules are distinguished.
Open rules do not use the endpoints. Open integration methods can be used in cases where the integrand function is undefined at some points.
E.g., using the midpoint rectangle rule we can approximate the definite integral of ln(x) on the segment (0, 1), even though ln(0) is undefined.
In contrast, closed rules evaluate the integrand at the endpoints as well as at interior points.
Half-open rules (e.g., the left rectangle rule or right rectangle rule) can also be used to approximate an integral on a segment that is open on one side only.
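A sketch of the open-rule idea in Python: the midpoint rectangle rule evaluates the integrand only at interior midpoints, so it can approximate the known value $\int_0^1 \ln x \, dx = -1$ without ever touching the undefined endpoint $x = 0$.

```python
import math

def midpoint_rule(f, a, b, n):
    """Open (midpoint) rectangle rule: f is evaluated only at the
    midpoints of the n subintervals, never at the endpoints a or b."""
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

# ln(0) is undefined, but the midpoint rule never asks for it
approx = midpoint_rule(math.log, 0.0, 1.0, 10_000)
```

A closed rule such as the trapezoidal rule would fail here, because it would try to evaluate ln at 0.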
The expression builder is a graphical tool for building expressions. It is available in the Conditional Split Transformation Editor and Derived Column Transformation Editor dialog boxes, as well as in the Expression Builder dialog box.
The expression builder provides folders that contain package-specific elements, and folders that contain the functions, type casts, and operators that the expression language provides. The package-specific elements include system variables and user-defined variables. In the Conditional Split Transformation Editor and Derived Column Transformation Editor dialog boxes, you can also view data columns. To build expressions for the transformations, you can drag items from the folders to the Condition or Expression column or you can type the expression directly in the column. The expression builder automatically adds needed syntax elements such as the @ prefix on variable names.
The names of user-defined and system variables are case-sensitive.
Variables have scope, and the Variables folder in the expression builder lists only variables that are in scope and available to use. For more information, see Integration Services (SSIS) Variables.
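For illustration, a small sketch of the kind of expression one might build this way for a Derived Column transformation (the variable names are hypothetical, not from the documentation; note the @ prefix the expression builder adds for variables):

```
@[User::UnitPrice] * (1 - @[User::DiscountRate])
```

Dragging the two user-defined variables from the Variables folder into the Expression column would produce the prefixed, bracketed references shown here.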
https://math.stackexchange.com/questions/1927769/proving-matrix-factorization-using-schur-complement
# Proving matrix factorization using Schur complement
In my studies of matrix analysis I came across this problem:
Let $A$ be a real symmetric matrix which is also positive semidefinite. We are asked to prove, by induction on the dimension of $A$, that there exist factorizations $A=LL^T$ and $A=UU^T$, where $L,U$ are lower and upper triangular matrices, respectively. We are instructed to prove this fact using the Schur complement: for the block matrix $A = \left( \begin{array}{cc} B & C \\ C^T & E \end{array} \right)$, where $E$ is a principal square submatrix of $A$, the Schur complement is defined as $A/E = B-CE^{\dagger}C^T$, where $E^{\dagger}$ is the Moore-Penrose generalized inverse.
To be honest I have no idea how to prove this factorization using the assumption A is positive semidefinite and I have no idea how to carry out the induction using the Schur complement. I appreciate all help on this.
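Not the requested induction proof, but a quick numerical sanity check of the claim (a sketch using NumPy; the random matrix is illustrative): for a symmetric positive definite $A$, a lower-triangular $L$ with $A = LL^T$ does exist.

```python
import numpy as np

rng = np.random.default_rng(0)
B = rng.standard_normal((5, 5))
A = B @ B.T  # symmetric positive definite by construction

# Lower-triangular Cholesky factor: A = L @ L.T
L = np.linalg.cholesky(A)
```

An upper-triangular factorization $A = UU^T$ can be obtained similarly by reversing the row and column order before and after factoring.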
http://nicolasshu.com/zotero_and_papis.html
Zotero and papis
So Zotero can be very helpful for citation management. There's a guide I previously wrote to set up Zotero. That setup allows Zotero to be easily backed up. However, the way that Zotfile operates is that it only moves the PDF file to your synced folder. That setup is written so that it creates a flat directory for your synced folder.
Recently, I also found a new repository called papis. It provides a pretty nice TUI for your citation management; it is also nicely maintained and has a variety of companion repositories, including rofi, dmenu, and Emacs packages. It's pretty great!
It also has a Zotero interface which allows you to pull information from your Zotero and create your own papis library. But the way it currently operates, it creates the new library using the keys from Zotero's SQLite database. In other words, it would have the structure of
library/
├── 9Q2KKMZL
| ├── publication_A.pdf
| └── info.yaml
└── EINISJX9
├── publication_B.pdf
└── info.yaml
papis is able to pick this up, but it's hard to look over if you want to browse it with a regular file manager. The current papis-zotero has a few bugs and documentation that could be improved, so I wanted to build a workflow that works nicely with both.
Proposed Workflow
The proposed workflow is meant to use the Zotero connector or the Zotero application.
PDFs & Files -> Zotero -> papers -> papis
This will require a few pieces of software, described below.
The synchronizing software takes care of backing up the data, Zotero provides a database which you can use with a GUI, and we copy the relevant information to a desired directory which holds the organized PDF files, any other files attached to Zotero, and the info.yaml needed for papis.
Prerequisites
Zotero
In order to install Zotero, you can use
pacman -S zotero-bin
Papis
For papis, you can use AUR's aur/papis, but at the time of this writing it is a bit out of date. So it's best to use pip:
pip3 install papis
# or
sudo pip3 install papis
SyncThing
For SyncThing, you can use either community/syncthing or use a Docker container for it. For the Docker container, you may either use linuxserver’s syncthing which has the Docker compose file:
---
version: "2.1"
services:
syncthing:
image: lscr.io/linuxserver/syncthing:latest
container_name: syncthing
hostname: syncthing #optional
environment:
- PUID=1000
- PGID=1000
- TZ=Europe/London
volumes:
- /path/to/appdata/config:/config
- /path/to/data1:/data1
- /path/to/data2:/data2
ports:
- 8384:8384
- 22000:22000/tcp
- 22000:22000/udp
- 21027:21027/udp
restart: unless-stopped
or Syncthing’s Syncthing, which I think it is more often updated, and has the following Docker compose file below, which is taken from here
---
version: "3"
services:
syncthing:
image: syncthing/syncthing
container_name: syncthing
hostname: my-syncthing
environment:
- PUID=1000
- PGID=1000
volumes:
- /wherever/st-sync:/var/syncthing
ports:
- 8384:8384 # Web UI
- 22000:22000/tcp # TCP file transfers
- 22000:22000/udp # QUIC file transfers
- 21027:21027/udp # Receive local discovery broadcasts
restart: unless-stopped
Read through this entire guide to know where to put the volumes first.
Setup
Goal
The goal is to have the following:
~/sync/
├── papers/
│ ├── publication_A/
│ │ ├── publication_A.pdf
│ │ └── info.yaml
│ └── publication_B/
│ ├── publication_B.pdf
│ ├── code.py
│ └── info.yaml
└── zotero/
├── locate/
├── pipes/
├── storage/
│ ├── GSFPIRZY/
│ │ └── (used to have publication_A.pdf)
│ └── RSERDEPY/
│ ├── (used to have publication_A.pdf)
│ └── code.py
├── styles/
├── translators/
└── zotero.sqlite
Directories
Now, how will we set this up. Let us say I have two computers. I will create folder structures as follows:
~/sync/
├── papers/
└── zotero/
You may wish to keep those folders synced individually or together. Individually might be best if one of the computers stores the directories in different relative paths to each other.
Zotero
Now, follow the setup steps for properly installing Zotero.
Note: Since this is written in Markdown and using Jekyll, I can’t write a curly bracket followed by a percent sign. So I’ll indicate curly brackets as square brackets [ ]
1. Install Zotero
2. Install the Zotfile plugin
3. Under Zotfile Preferences, leave General Settings > Source Folder for Attaching New Files blank
4. Under Zotfile Preferences, go to General Settings > Location of Files > Custom Location and set it as ~/sync/papers. Additionally, check the Use subfolder defined by and use a model for storing your files. You may use:
• /[%y_][%t] for ~/sync/papers/2016_Name of the Paper/
• [%t] for ~/sync/papers/Name of the Paper/
Now, every time you add a new file via the Zotero connector, it will automatically be sent to ~/sync/papers/{name_of_paper}/{name_of_paper}.pdf
Next, go to Edit > Preferences > Advanced > Data Directory Location and set a custom location to ~/sync/zotero. It will ask you:
The directory you selected is empty. To move an existing Zotero data directory, you will need to manually move files from the existing data directory to the new location after Zotero has closed.
Use new directory?
Click on “Yes”, close Zotero, and move the files from ~/Zotero into that new folder.
At this point, you should start to have the following:
~/sync/
├── papers/
└── zotero/
├── locate/
├── pipes/
├── storage/
├── styles/
├── translators/
└── zotero.sqlite
Then once you start to add your files via the Zotero connector and via the GUI, you will get the following:
~/sync/
├── papers/
│ ├── publication_A/
│ │ └── publication_A.pdf
│ └── publication_B/
│ └── publication_B.pdf
└── zotero/
├── locate/
├── pipes/
├── storage/
│ ├── GSFPIRZY/
│ │ └── (used to have publication_A.pdf)
│ └── RSERDEPY/
│ ├── (used to have publication_A.pdf)
│ └── code.py
├── styles/
├── translators/
└── zotero.sqlite
As you can see, the PDF files will be automatically moved to the correct location. However, as you may also notice, the other attachments are not moved to the correct location, nor is the ~/sync/papers compatible with papis yet.
Transfer Information from Zotero to Papis
Here, I wrote a script which allows you to gather the data from your local Zotero's SQLite database and parse it to an output directory where it can be used by papis. This can be found in the [zotero2papis repository](https://github.com/nicolasshu/zotero2papis). In order to use it, take a look through the documentation or follow this.
Install it by first cloning the repository
git clone https://github.com/nicolasshu/zotero2papis
Then install it via
pip install -e .
# or
pip install .
# or
python3 setup.py install
Once that is done, you may use the tool on your terminal by running:
zotero2papis -z {zotero_directory} -o {output_directory}
where -z / --zotdir will be the parent directory where the Zotero SQLite database is located (e.g. ~/Zotero or ~/sync/zotero) and -o / --outdir is a destination directory where your library will be formed. If the destination directory does not exist, it will be created for you.
Once you do that, you will have the following structure:
~/sync/
├── papers/
│ ├── publication_A/
│ │ ├── publication_A.pdf
│ │ └── info.yaml
│ └── publication_B/
│ ├── publication_B.pdf
│ ├── code.py
│ └── info.yaml
└── zotero/
├── locate/
├── pipes/
├── storage/
│ ├── GSFPIRZY/
│ │ └── (used to have publication_A.pdf)
│ └── RSERDEPY/
│ ├── (used to have publication_A.pdf)
│ └── code.py
├── styles/
├── translators/
└── zotero.sqlite
Once done, ensure that the papis configuration file, which is found in ~/.config/papis/config, has a library set to dir = ~/sync/papers, and you can start to use papis.
http://mathhelpforum.com/calculus/104397-newton-quotient-law.html
1. ## Newton Quotient Law
I am so confused with #11. I did some practice problems with actual numerical values but I'm just not getting this question right.
Any help would be greatly appreciated!
$\lim_{h\to 0}\frac{\frac{2}{x+h-4}-\frac{2}{x-4}}{h}$
https://brilliant.org/problems/im-baaaack-problem/
# I'm back problem (Double Euler's?)
Algebra Level 4
$\large \ln(k) = \exp\left[ -\ln(3) + i \sin^{-1} \left( \frac{24}{25} \right) \right]$
Suppose we define $$k$$ such that the equation above is fulfilled, and if the imaginary part of $$k$$ can be expressed in the form of $\large e^{\frac AB} \sin\left( \frac CD\right)$
where $$A,B,C$$ and $$D$$ are positive integers such that $$\gcd(A,B) = \gcd(C,D) = 1$$, find the value of $$A+B+C+D$$.
Details and Assumptions
We define $$\exp (x)$$ as $$e^x$$.
https://www.zigya.com/study/book?class=11&board=bsem&subject=Physics&book=Physics+Part+I&chapter=Motion+in+A+Plane&q_type=&q_topic=Scalars+And+Vectors&q_category=&question_id=PHEN11039064
Can three vectors of different magnitudes ever be added to get a null vector? (from Physics, Motion in a Plane, Class 11, Manipur Board)
Can ever three vectors of different magnitude be added to get null vector?
Yes. Three vectors of different magnitudes can be added to give a null vector if they all lie in the same plane and can be arranged to form a closed triangle.
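A numerical illustration (the three vectors are an example, not from the text): coplanar vectors of magnitudes 3, 4, and 5 that form a closed triangle sum to the null vector.

```python
import math

a = (3.0, 0.0)
b = (0.0, 4.0)
c = (-3.0, -4.0)  # closes the triangle

# Component-wise sum of the three vectors
total = tuple(x + y + z for x, y, z in zip(a, b, c))

# Their magnitudes are all different: 3, 4, and 5
magnitudes = [math.hypot(*v) for v in (a, b, c)]
```

The sum is (0, 0) even though no two of the magnitudes are equal.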
Give three examples of vector quantities.
Force, impulse and momentum.
What are the basic characteristics that a quantity must possess so that it may be a vector quantity?
A quantity must possess a direction and must follow the vector axioms. Any quantity that follows the vector axioms is classified as a vector.
What is a vector quantity?
A physical quantity that requires direction along with magnitude, for its complete specification is called a vector quantity.
What is a scalar quantity?
A physical quantity that requires only magnitude for its complete specification is called a scalar quantity.
Give three examples of scalar quantities.
Mass, temperature and energy
https://www.lmfdb.org/L/rational/16/300%5E8/1.1/c1e8-0?Submit=magma&download=1&query=%7B'degree':%2016,%20'conductor':%2065610000000000000000,%20'rational':%20True,%20'spectral_label':%20'c1e8-0'%7D
All entries in this table share $\alpha = 1.54$, $A = 1.08\times 10^{3}$, $d = 16$, $N = 2^{16} \cdot 3^{8} \cdot 5^{16}$, $\chi = 1.1$, $\mu = [1.0]^{8}$, $w = 1$, $\epsilon = 1$, and $r = 0$; they differ only in the first zero and the origin:

| Label | First zero | Origin |
|---|---|---|
| 16-300e8-1.1-c1e8-0-0 | $0.0858872$ | Modular form 300.2.j.a |
| 16-300e8-1.1-c1e8-0-1 | $0.177931$ | Modular form 300.2.h.b |
| 16-300e8-1.1-c1e8-0-2 | $0.250773$ | Modular form 300.2.m.b |
| 16-300e8-1.1-c1e8-0-3 | $0.306086$ | Modular form 300.2.e.c |
| 16-300e8-1.1-c1e8-0-4 | $0.330493$ | Modular form 300.2.e.d |
| 16-300e8-1.1-c1e8-0-5 | $0.359595$ | Modular form 300.2.m.a |
| 16-300e8-1.1-c1e8-0-6 | $0.370407$ | Modular form 300.2.e.e |
| 16-300e8-1.1-c1e8-0-7 | $0.521523$ | Modular form 300.2.h.a |
| 16-300e8-1.1-c1e8-0-8 | $0.568355$ | Modular form 300.2.j.c |
| 16-300e8-1.1-c1e8-0-9 | $0.571585$ | Modular form 300.2.j.b |
https://rdrr.io/cran/pracma/man/gaussHermite.html
# gaussHermite: Gauss-Hermite Quadrature Formula In pracma: Practical Numerical Math Functions
## Description
Nodes and weights for the n-point Gauss-Hermite quadrature formula.
## Usage
gaussHermite(n)
## Arguments
n: Number of nodes in the interval ]-Inf, Inf[.
## Details
Gauss-Hermite quadrature is used for integrating functions of the form
$$\int_{-\infty}^{\infty} f(x)\, e^{-x^2}\, dx$$
over the infinite interval $]-\infty, \infty[$.
x and w are obtained from a tridiagonal eigenvalue problem. The value of such an integral is then sum(w*f(x)).
## Value
List with components x, the nodes or points in ]-Inf, Inf[, and w, the weights applied at these nodes.
## Note
The basic quadrature rules are well known and can, e. g., be found in Gautschi (2004) — and explicit Matlab realizations in Trefethen (2000). These procedures have also been implemented in Matlab by Geert Van Damme, see his entries at MatlabCentral since 2010.
## References
Gautschi, W. (2004). Orthogonal Polynomials: Computation and Approximation. Oxford University Press.
Trefethen, L. N. (2000). Spectral Methods in Matlab. SIAM, Society for Industrial and Applied Mathematics.
## See Also

gaussLegendre, gaussLaguerre
## Examples
```r
cc <- gaussHermite(17)

# Integrate exp(-x^2) from -Inf to Inf
sum(cc$w)               #=> 1.77245385090552 == sqrt(pi)

# Integrate x^2 exp(-x^2)
sum(cc$w * cc$x^2)      #=> 0.88622692545276 == sqrt(pi)/2

# Integrate cos(x) * exp(-x^2)
sum(cc$w * cos(cc$x))   #=> 1.38038844704314 == sqrt(pi)/exp(1)^0.25
```
### Example output
[1] 1.772454
[1] 0.8862269
[1] 1.380388
pracma documentation built on Dec. 11, 2021, 9:57 a.m.
https://tex.stackexchange.com/questions/323986/how-to-draw-a-partial-or-incomplete-box-around-one-or-more-words-within-a-paragr
# How to draw a partial or incomplete box around one or more words within a paragraph
I am trying to draw a partial box around one or more words within a paragraph--that is, a rectangle with one or more sides missing. I have read the documentation for the fancybox package and it doesn't seem to be able to do what I want. I was able to produce this ugly hack:
That's just a vertical bar | followed by a word with \overline and \uline. The trick is to connect the vertical and horizontal lines so that you have a rectangle missing just one side.
Is there an easy, general way to do this, perhaps with Tikz? I don't know enough about that package to be able to do this myself, but I'm wondering if there might be other solutions too.
This solution EDITED to allow multiple sides to be stricken.
Made into a macro \partbox{<sides>}{<content>}, where <sides> are the sides to be stricken, in any combination (in any order) of l, b, r, or t. By using \fbox, it is customizable with the use of \fboxsep and \fboxrule.
Being a box, the words, if more than one, will not be permitted to break across a line.
\documentclass{article}
\usepackage{trimclip}
\newif\iflclip
\newif\ifbclip
\newif\ifrclip
\newif\iftclip
\def\CLIP{\dimexpr\fboxrule+.2pt\relax}
\def\nulclip{0pt}
\newcommand\partbox[2]{%
\lclipfalse\bclipfalse\rclipfalse\tclipfalse%
\let\lkern\relax\let\rkern\relax%
\let\lclip\nulclip\let\bclip\nulclip\let\rclip\nulclip\let\tclip\nulclip%
\parseclip#1\relax\relax%
\iflclip\def\lkern{\kern\CLIP}\def\lclip{\CLIP}\fi
\ifbclip\def\bclip{\CLIP}\fi
\ifrclip\def\rkern{\kern\CLIP}\def\rclip{\CLIP}\fi
\iftclip\def\tclip{\CLIP}\fi
\lkern\clipbox{\lclip{} \bclip{} \rclip{} \tclip}{\fbox{#2}}\rkern%
}
\def\parseclip#1#2\relax{%
\ifx l#1\lcliptrue\else
\ifx b#1\bcliptrue\else
\ifx r#1\rcliptrue\else
\ifx t#1\tcliptrue\else
\fi\fi\fi\fi
\ifx\relax#2\relax\else\parseclip#2\relax\fi
}
\parskip 1ex
\begin{document}
\partbox{l}{dans} \partbox{b}{dans} \partbox{r}{dans} \partbox{t}{dans}
\partbox{lt}{dans} \partbox{lr}{dans} \partbox{lb}{dans}
\partbox{tb}{dans} \partbox{tr}{dans}
\partbox{br}{dans}
\partbox{rlt}{dans} \partbox{rbt}{dans} \partbox{blt}{dans} \partbox{blr}{dans}
\end{document}
Just to show the ability to use \fboxsep and \fboxrule, here is the identical result, but with \fboxsep=0pt\relax\fboxrule=1pt\relax set at the beginning of the document:
If one wishes it not to interfere with linespacing, then this tweak should work, adding a \vphantom and \smash. Of course, it will not prevent overlap, if \fboxsep and/or \fboxrule are set large enough (NOTE: this solution is still the original variety, only allowing a single side to be stricken):
\documentclass{article}
\usepackage[nopar]{lipsum}
\usepackage{trimclip}
\def\CLIP{\dimexpr\fboxrule+.2pt\relax}
\newcommand\partbox[2]{\leavevmode\vphantom{#2}\smash{%
\ifx#1l\clipbox{\CLIP{} 0pt 0pt 0pt}{\fbox{#2}}\else
\ifx#1b\clipbox{0pt \CLIP{} 0pt 0pt}{\fbox{#2}}\else
\ifx#1r\clipbox{0pt 0pt \CLIP{} 0pt}{\fbox{#2}}\else
\ifx#1t\clipbox{0pt 0pt 0pt \CLIP{}}{\fbox{#2}}\else
\fi\fi\fi\fi
}}
\parskip 1ex
\begin{document}
\lipsum[4] \partbox{l}{dans}
\lipsum[4] \partbox{b}{dans}
\lipsum[4] \partbox{r}{dans}
\lipsum[4] \partbox{t}{dans}
\lipsum[4]
\end{document}
• I wonder how does this look like in a paragraph, surrounded by running text. – Matsmath Aug 9 '16 at 11:44
• @Matsmath It will behave like \fbox, subject to changes in \fboxrule and \fboxsep. – Steven B. Segletes Aug 9 '16 at 11:46
• Beautiful solution. I wonder, is there any easy way to modify this code to remove two of the sides--say, the top and left, at one time? – twoblackboxes Aug 9 '16 at 12:48
• @twoblackboxes Please see revision of 1st answer to see how it can be done. – Steven B. Segletes Aug 9 '16 at 13:26
• @StevenB.Segletes This is really excellent work. Thank you so much! – twoblackboxes Aug 9 '16 at 13:33
Is there an easy, general way to do this, perhaps with Tikz?
I cannot understand the tendency of people to use a sledgehammer to crack a nut. Perhaps with tabular?
\documentclass{article}
\begin{document}
\tabcolsep.2em
Un voyage \begin{tabular}{|c }\hline dans\\\hline \end{tabular} l'espace.
Un voyage \begin{tabular}{|c|}\hline dans\\ \end{tabular} l'espace.
Un voyage \begin{tabular}{ c|}\hline dans\\\hline \end{tabular} l'espace.
Un voyage \begin{tabular}{|c|} dans\\\hline \end{tabular} l'espace.
\end{document}
If you hate tabular, some \makebox, \rule and \vrule commands can do the same work. However, it is tedious to find the right widths of the horizontal rules manually. But this is solved using \widthof of the calc package. This makes a complete framed box with automatic width:
\vrule%
\makebox[0pt][l]{\rule[-.25em]{\widthof{dans}+.2em}{.4pt}}%
\makebox[0pt][l]{\rule[.85em]{\widthof{dans}+.2em}{.4pt}}%
\makebox[\widthof{dans}+.2em][c]{dans}%
\vrule\
Simply removing lines 1, 2, 3 or 5 in the above code, you can obtain the incomplete boxes.
Converting both solutions into macros is straightforward. For simplicity, instead of a single macro with two arguments, I suggest making four macros where the text is the only argument. For instance:
\documentclass{article}
\usepackage{calc}
\newcommand\openleftbox[1]{%
\vrule
\makebox[0pt][l]{\rule[-.25em]{\widthof{#1}+.2em}{.4pt}}%
\makebox[0pt][l]{\rule[.85em]{\widthof{#1}+.2em}{.4pt}}%
\makebox[\widthof{#1}+.2em][c]{#1}%
%\vrule\
}
\begin{document}
Un voyage \openleftbox{dans} l'space.
\end{document}
• Sledgehammer? If you go to my profile, you will see that "I enjoy... reinventing the wheel" Perhaps I should add that I enjoy cracking nuts with a sledgehammer. – Steven B. Segletes Aug 10 '16 at 3:22
• @StevenB.Segletes In this way? – Fran Aug 10 '16 at 3:38
Another solution with \tcbox (from the tcolorbox package).
\documentclass{article}
\usepackage{tcolorbox}
\usepackage{lmodern}
\newtcbox{\lbox}[1][]{on line, sharp corners, colback=white, colframe=black, size=small, leftrule=0pt,#1}
\newtcbox{\rbox}[1][]{on line, sharp corners, colback=white, colframe=black, size=small, rightrule=0pt,#1}
\newtcbox{\tbox}[1][]{on line, sharp corners, colback=white, colframe=black, size=small, toprule=0pt,#1}
\newtcbox{\bbox}[1][]{on line, sharp corners, colback=white, colframe=black, size=small, bottomrule=0pt,#1}
\begin{document}
The \lbox{quick} \tbox{brown} \bbox{fox} \rbox{jumps} over the \lbox[colback=red!30, colframe=blue]{lazy dog}.
\end{document}
This can be achieved with the efbox package.
Examples from the documentation:
\documentclass[convert]{standalone}
\usepackage{efbox}
\begin{document}
\efbox{Foo}
\efbox[rightline=false,topline=false]{Foo}
\efbox[topline,backgroundcolor=red]{Foo}
\efbox[linewidth=2pt,font=\Large]{Large Foo}
\efbox[rightline=false,topline=false,linecolor=blue,linewidth=2pt]{Foo}
\efbox[margin=10pt,backgroundcolor=yellow,font=\ttfamily\itshape]{Italic Typewriter Foo}
\efbox[linewidth=3pt,margin=5pt,backgroundcolor=red]{Foo}
\efbox[hidealllines,backgroundcolor=red]{Foo}
\efbox{Foo}
\efbox[hidealllines,backgroundcolor=red,margin=15pt]{Foo}
\efbox[margin=15pt,linewidth=5pt]{Foo}
\efbox[bottomline=false,rightline=false,linewidth=2pt,margin=1pt,backgroundcolor=yellow]{Foo}
\efbox{Foo}
\end{document}
• This is the easiest method presented here. However, I don't know why, but it is bugged. I managed to draw what I want after many experiments, adding backslashes in front, etc. – Nikos Feb 22 '17 at 10:27
Package fbox has been recently added to CTAN. This package adds an optional parameter to \fbox. This new parameter allows declaring which box sides should be drawn.
\documentclass{article}
\usepackage{fbox}
\begin{document}
The \fbox[tb]{quick} \fbox[lr]{brown} \fbox[ltb]{fox} \fbox[trb]{jumps} \fbox[lb]{over} \fbox[tr]{the} \fbox{lazy dog}.
\end{document}
https://chemistry.stackexchange.com/questions/98226/molar-heat-of-combustion-with-different-fuels
# Molar heat of combustion with different fuels
I'm not sure exactly how to attack this question, I'm sure that you have to use equations such as
$$Q=mc\,\Delta T$$ $$n=\frac{m}{M}$$
But I don't know where to use them. Any help would be appreciated
I just attempted the question, I'm not sure if this is the correct working.
Assuming 1 mol of each substance
Ethyne: \begin{align}n&=\frac{m}{M}\\[6pt] 1&=\frac{m}{26.038\ \mathrm{g/mol}}\\[6pt] m&=26.038\ \mathrm g\end{align} producing 1630 kJ
Hexane: Same working as above so: $$m=86.178\ \mathrm g$$ producing 1300 kJ
Ethane: $$m=30.07\ \mathrm g$$ producing 1560 kJ
Dividing the energy by the mass for each substance gives:
Ethyne: 62.6 kJ per gram
Hexane: 15.09 kJ per gram
Ethane: 51.88 kJ per gram
This would imply that ethyne would be the best fuel, as it releases the greatest amount of energy per unit of mass.
Not sure if this is correct. Any tips would be great.
• I think you are on the right track and already have the answer (which is the highest energy density per unit mass). Also this is an exemplary way to pose questions like this on this site as you showed your working (many similar questions get closed because people don't bother to show their thoughts). – matt_black Jun 13 '18 at 8:49
$$\ce{C2H2}$$ has a mass of $$26.04\ \mathrm{g/mol}$$. Since the $$\Delta H$$ in its combustion is $$1\,630\ \mathrm{kJ/mol}$$, you will be looking at a heat density of $$\frac{1\,630\ \mathrm{kJ/mol}}{26.04\ \mathrm{g/mol}} = 62.6\ \mathrm{\frac{kJ}{g}}.$$
Similarly, for $$\ce{C6H14}$$, you get $$\frac{1\,300\ \mathrm{kJ/mol}}{86.178\ \mathrm{g/mol}} = 15.1\ \mathrm{\frac{kJ}{g}}.$$
For $$\ce{C2H6}$$, you get $$\frac{1\,560\ \mathrm{kJ/mol}}{30.07\ \mathrm{g/mol}} = 51.9\ \mathrm{\frac{kJ}{g}}.$$
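The same arithmetic can be checked programmatically; a quick Python sketch using the molar heats and molar masses quoted above:

```python
# Heat density (kJ/g) = molar heat of combustion (kJ/mol) / molar mass (g/mol)
fuels = {
    "ethyne": (1630, 26.04),    # (kJ/mol, g/mol)
    "hexane": (1300, 86.178),
    "ethane": (1560, 30.07),
}

density = {name: q / m for name, (q, m) in fuels.items()}
best = max(density, key=density.get)
print({k: round(v, 1) for k, v in density.items()}, best)
```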
http://mathoverflow.net/questions/81780/nonseparable-disintegration-theory-references
# Nonseparable disintegration theory: references
I mean a theorem of the following kind. Let $A$ be a C*-algebra, and let $\pi: A\to B(H)$ be its representation. Then there exist a set $P$ with a positive measure $\mu$, a field of Hilbert spaces such that $H\simeq \int_P H_p d\mu(p)$, and irreducible representations $\pi_p: A\to B(H_p)$ such that $\pi=\int_P \pi_p d\mu(p)$.
In classical references (Dixmier/Takesaki/Kadison...) both $A$ and $H$ are assumed to be separable. Is there a canonical reference for the nonseparable case?
I have found two articles, not counting particular cases: S. Teleman, On reduction theory, Rev. Roumaine Math. Pures Appl. 21, no. 4 (1976), 465–486, and R. Henrichs, Decomposition of invariant states and nonseparable C*-algebras, Publ. Res. Inst. Math. Sci. 18 (1982), 159–181. Both use the definition of fields of Hilbert spaces given by W. Wils in Direct integrals of Hilbert spaces I, Math. Scand. 26 (1970), 73–88.
Both prove the theorem above (Henrichs for the unital case), with one main difference: in Teleman's version, $P$ is a subset of pure states of $A$, but $\mu$ may not be regular (not every set is approximated by compacts from inside). In Henrichs', $\mu$ is regular but one and the same irrep can repeat, even for every $p$.
In the history of this question there were lots of erroneous articles, so I treat these two also with caution. I've gone through Teleman's proof (because it is self-contained). It seems correct, but it turns out that $\pi_p$ may be zero, and this is not indicated in the paper. Through Henrichs' I didn't go in detail. He relies on a rarely used theorem of Tomita, for which he however gives an independent proof.
So this is my question: do you use this theory, and if yes, what authors do you refer to?
A related problem (if you look at it from the viewpoint of disintegration of states in C*-algebras) is the problem of disintegration of measures in a nonseparable setting. There is a recent paper which revisits this old problem by M. Kosiek and K. Rudol, "Fibers of the $L^\infty$ Algebra and Disintegration of Measures". Archiv der Mathematik 97 (2011) 559-567. Supposing it is also correct, it may help...
https://project.auto-multiple-choice.net/boards/2/topics/8391?r=8392
## Print page numbers on pages
I have tried a few things such as setting pagestyle and changing documentclass, but I am unable to have the page number (and preferably number of pages) at the bottom of each page. Can anyone lead me in the right direction? - I don't see anyone else having this problem on the forums, so it might just be me?
### Replies (2)
#### RE: Print page numbers on pages - Added by Gérard Carpeaux about 1 month ago
A part of my template:
\documentclass[12pt,a4paper,french]{article}
\usepackage[utf8]{inputenc}
\usepackage[T1]{fontenc}
\usepackage[francais,bloc,chiffres,outsidebox]{automultiplechoice}
\usepackage{babel}
\begin{document}
\AMCsetFoot{\hfill{}sujet \no\AMCStudentNumber \hfill{}page~\thepage{} sur \AMCpageref{lastpage}\hfill{}}
\onecopy{1}{ %
\AMClabel{lastpage}
}
\end{document}
#### RE: Print page numbers on pages - Added by Arne Vilsen about 1 month ago
Thanks a lot for this, Gérard!
https://holoeye.com/2020/02/
### Super-resolved angular displacement estimation based upon a Sagnac interferometer and parity measurement
Author(s):
Jian-Dong Zhang, Zi-Jing Zhang, Long-Zhu Cen, Jun-Yan Hu and Yuan Zhao
Abstract:
“Super-resolved angular displacement estimation is of crucial significance to the field
of quantum information processing. Here we report an estimation protocol based on a Sagnac
interferometer fed by a coherent state carrying orbital angular momentum. In a lossless scenario,
through the use of parity measurement, our protocol can achieve a 4-fold super-resolved output
with quantum number ℓ; meanwhile, a shot-noise-limited sensitivity saturating the quantum
Cramér-Rao bound is reachable. We also consider the effects of several realistic factors, including
nonideal state preparation, photon loss, and inefficient measurement. Finally, with mean photon
number N̄ = 2.297 and ℓ = 1 taken, we experimentally demonstrate a super-resolved effect of
angular displacement with a factor of 7.88.”
Publication: Optics Express
Issue/Year: Vol. 28, Issue 3, pp. 4320-4332
DOI: 10.1364/OE.384082
### Three-Dimensional Holographic Reconstruction of Brain Tissue Based on Convolution Propagation
Author(s):
Rania M. Abdelazeem and Doaa Youssef and Jala El-Azab and Salah Hassab-Elnaby and Mostafa Agour
Abstract:
“In this study, a dynamic holographic projection system for brain tissue and its anatomical structures extracted from Magnetic Resonance (MR) plane slice is reported. Computer holograms are calculated using a modified Gerchberg-Saxton (GS) iterative algorithm where the projection is based on the plane wave decomposition. First, brain anatomy includes white matter (WM), grey matter (GM) and brain tissue are extracted. Then, phase holograms using the proposed method are generated. Finally, single phase hologram for the whole brain anatomy is generated and is optically reconstructed by a phase-only spatial light modulator (SLM) at different depths. The obtained results revealed that the three-dimensional holographic projection of MR brain tissue can aid to provide better interpretation of brain anatomical
structure to achieve better diagnostic results.”
Publication: Journal of Physics: Conference Series
Issue/Year: Vol. 1472
DOI: 10.1088/1742-6596/1472/1/012008
### Microchannels inside bulk PMMA generated by femtosecond laser using adaptive beam shaping
Author(s):
Roth, Gian-Luca; Rung, Stefan; Esen, Cemal & Hellmann, Ralf
Abstract:
“In this contribution, we report on the generation of internal microchannels with basically unlimited channel length inside of PMMA bulk material by femtosecond laser. A precisely controllable and stable circular channel cross section is obtained by using a spatial light modulator to compensate the writing depth depending spherical aberration. Furthermore, the generation of a rotatable elliptical input beam by adaptive optics ensures a fitting of the beam shaping to the writing direction. In this study, we report on both, the effect of the ellipticity of the input beam and the effect of a correction of the spherical aberration on the circularity of the resulting internal microchannels. Moreover, we demonstrate the application of this writing technique by creating microfluidic testing structures inside of a transparent standard polymer.”
https://stats.stackexchange.com/questions/392310/tuning-distance-threshold-in-online-clustering
# Tuning distance threshold in online clustering
In online clustering there are approaches where a threshold $$r$$ on the distance to the nearest cluster is used to determine whether a new data point should be associated to an existing cluster or become its own cluster.
This kind of hyperparameter appears to me to be the kind that is somewhat difficult to tune, as it is not only dependant on the feature space itself but also the actual density of instances within it.
While the context of this question in particular is that I need to employ the approach by Souza et al. referenced below, which proposes a data stream classification model based on such a clustering method, this question doesn't need to be restricted to it:
Assuming there is a criterion $$s(r)$$ that assesses the quality of such a distance-threshold-based model, how could a set of threshold values to be evaluated be constructed given an observed data sample $$X$$ (and $$\mathbf{y}$$, in my case) ?
There is a question about a particular parameter choice for this type of approach, but I couldn't find any references to the presumed threshold value.
As is sadly often the case, the authors of the referenced article provide no information as to how they determined the used value for $$r$$ in their experiments.
Reference:
Souza, V. M., Silva, D. F., Batista, G. E., & Gama, J. (2015, December). Classification of evolving data streams with infinitely delayed labels. In Machine Learning and Applications (ICMLA), 2015 IEEE 14th International Conference on (pp. 214-219). IEEE.
• Much of the stream clustering stuff seems to be impractical nonsense, yes. Never tested on real data, often even on stupid things like poker hands data artificially "streamed"... – Anony-Mousse Feb 16 at 21:56
• To break a lance for the literature, there are sadly very few real-world data sets available for research purposes in this field. – deemel Feb 17 at 18:37
• Which isn't really a good excuse for just making up data such as the poker hands "stream", isn't it? – Anony-Mousse Feb 18 at 0:55
1. Given a training set $$X_{train}$$ or (if computation power doesn't allow) a random sample from it, compute the pairwise distances matrix $$D_X$$.
2. Evaluate the empirical CDF $$\hat{F}(D_X)$$.
3. Define the lower and upper bounds of your parameter space for the threshold $$r$$ based on this.
I selected the minimum and maximum of the set of values for which $$f=\hat{F}(d)$$ satisfied both $$d>0$$ and $$f\le 0.4$$, although one could of course extend this.
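The steps above can be sketched in NumPy; the sample data and the 0.4 quantile cutoff here are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))  # illustrative stand-in for (a sample of) X_train

# 1. pairwise Euclidean distance matrix
diff = X[:, None, :] - X[None, :, :]
D = np.sqrt((diff ** 2).sum(axis=-1))

# 2. empirical CDF over the upper-triangular (i < j) distances
d = np.sort(D[np.triu_indices_from(D, k=1)])
F = np.arange(1, d.size + 1) / d.size

# 3. candidate thresholds: distances d > 0 with F(d) <= 0.4
candidates = d[(d > 0) & (F <= 0.4)]
r_lo, r_hi = candidates.min(), candidates.max()
print(r_lo, r_hi)
```

A grid of threshold values for r can then be evaluated between r_lo and r_hi against the chosen quality criterion s(r).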
https://discuss.codedrills.io/t/stack-of-boxes-editorial/99
# Stack of Boxes - Editorial
## About the Problem
Setter(s): Janmansh Agarwal
Tester(s): Vichitr Gandas
Difficulty: Cakewalk
Topics: Adhoc, Maths
## Solution Idea
In one operation we remove a total of 4 boxes (2 from one stack and 1 from each of the other two), hence the sum of the numbers of boxes in the stacks should be divisible by 4. Now, suppose we remove 2 boxes from the first stack a total of x times, 2 boxes from the second stack y times, and 2 boxes from the third stack z times. If we are able to clear all the boxes from all three stacks, then the following conditions hold -
A = 2*x + y + z,
B = x + 2*y + z,
C = x + y + 2*z .
because the total number of boxes removed from each stack must equal its initial number of boxes.
From above three conditions we get,
A + B + C = 2*x + y + z + x + 2*y + z + x + y + 2*z = 4*(x+y+z).
From here, the total number of moves is
x+y+z = (A+B+C)/4.
The total number of moves must be an integer, hence (A+B+C) must be divisible by 4.
Also, since every move removes at least one box from every stack, the minimum element must be at least the total number of moves, hence
min(A,B,C) >= x+y+z
=> min(A,B,C) >= (A+B+C)/4
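The two conditions translate directly into code; a short Python sketch (the function name is my own):

```python
def can_clear(a, b, c):
    """Each move removes 2 boxes from one stack and 1 from each of the others."""
    total = a + b + c
    # Condition 1: total boxes divisible by 4; Condition 2: min stack >= number of moves
    return total % 4 == 0 and min(a, b, c) >= total // 4

print(can_clear(1, 1, 2))  # True: one move clears everything
print(can_clear(2, 2, 2))  # False: sum 6 is not divisible by 4
```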
### Complexity Analysis
Time Complexity: \mathcal{O}(1) as we just need to check two conditions.
Space Complexity: \mathcal{O}(1) as no extra space needed.
## Codes
Setter's Code
#include<bits/stdc++.h>
using namespace std;
#define ll long long
void solve(){
ll a[3];
cin>>a[0]>>a[1]>>a[2];
ll sum=0;
sum += a[0]+a[1]+a[2];
sort(a,a+3);
ll k=sum;
if(sum%4){
cout<<"NO\n";
return;
}
k/=4;
if(a[0]>=k){
cout<<"YES\n";
return;
}
cout<<"NO\n";
return;
}
signed main(){
ios_base::sync_with_stdio(false); cin.tie(NULL); cout.tie(NULL);
ll t=1;
cin>>t;
while (t--){
solve();
}
return 0;
}
Tester's Code
/***************************************************
@author: vichitr
Compiled On: 02 Mar 2021
*****************************************************/
#include<bits/stdc++.h>
#define int long long
#define fast ios::sync_with_stdio(0); cin.tie(NULL); cout.tie(NULL)
using namespace std;
signed main()
{
fast;
int t = 1;
cin >> t;
assert(t <= 1e5);
for(int i = 1; i <= t; i++)
{
int a, b, c; cin >> a >> b >> c;
assert(a > 0 and a <= 1e9);
assert(b > 0 and b <= 1e9);
assert(c > 0 and c <= 1e9);
if((a + b + c) % 4 == 0 and min({a, b, c}) >= (a + b + c) / 4)
cout<<"YES\n";
else
cout<<"NO\n";
}
return 0;
}
If you have used any other approach, share your approach in comments!
If you have any doubts, ask them in comments!
3 Likes
Hi @vichitr are we not allowed to test our solutions now? Can u pls say if that’s so, because am trying to submit the solution of this problem for past 2 days and the judge keeps running forever, if it’s not so can u please check if there’s some issue, thank u
Hey, you should be able to submit on both the practice and contest pages. Will ask the team to look into the issue!
Am sorry @vichitr (also for the ping), it seems to be a mistake from my side, after going through the submissions I see the verdicts there, extremely sorry for the trouble.
3 Likes
Good to hear that its working! Might have happened because of low internet speed!
https://www.aimsciences.org/article/doi/10.3934/dcdsb.2007.8.943
# American Institute of Mathematical Sciences
November 2007, 8(4): 943-970. doi: 10.3934/dcdsb.2007.8.943
## Homoclinic trajectories and chaotic behaviour in a piecewise linear oscillator
1 Department of Applied Mathematics, University College, Cork, Ireland
Received September 2006 Revised July 2007 Published August 2007
In this paper we consider the equation $\ddot x+x=\sin(\sqrt{2}t)+s(x)\,$ where $s(x)$ is a piece-wise linear map given by min$\{5x,1\}$ if $x\ge0$ and by max$\{-1, 5x\}$ if $x<0$. The existence of chaotic behaviour in the Smale sense inside the instability area is proven. In particular, a transversal homoclinic fixed point is found. The results follow from the application of topological degree theory and the computer-assisted verification of a set of inequalities. Usually such proofs cannot be verified by hand due to the vast amount of computations, but the simplicity of our system leads to a small set of inequalities that can be verified by hand.
Citation: Alexei Pokrovskii, Oleg Rasskazov, Daniela Visetti. Homoclinic trajectories and chaotic behaviour in a piecewise linear oscillator. Discrete & Continuous Dynamical Systems - B, 2007, 8 (4) : 943-970. doi: 10.3934/dcdsb.2007.8.943
https://labs.tib.eu/arxiv/?author=Lisboa
• ### The wind and the magnetospheric accretion onto the T Tauri star S Coronae Australis at sub-au resolution(1709.01348)
Sept. 5, 2017 astro-ph.SR
To investigate the inner regions of protoplanetary disks, we performed near-infrared interferometric observations of the classical T Tauri binary system S CrA. We present the first VLTI-GRAVITY high spectral resolution ($R\sim$4000) observations of a classical T Tauri binary, S CrA (composed of S CrA N and S CrA S, separated by $\sim$1.4"), combining the four 8-m telescopes in dual-field mode. Our observations in the near-infrared K-band continuum reveal a disk around each binary component, with similar half-flux radii of about 0.1 au at d$\sim$130 pc, inclinations ($i=$28$\pm$3$^o$ and $i=$22$\pm$6$^o$), and position angles (PA=0$^o\pm$6$^o$ and PA=-2$^o\pm$12$^o$), suggesting that they formed from the fragmentation of a common disk. The S CrA N spectrum shows bright HeI and Br$\gamma$ line emission exhibiting inverse P-Cygni profiles, typically associated with infalling gas. The continuum-compensated Br$\gamma$ line visibilities of S CrA N show the presence of a compact Br$\gamma$-emitting region whose radius is $\sim$0.06 au, about twice the truncation radius. This component mostly traces a wind. Moreover, a slight radius change between the blue- and red-shifted Br$\gamma$ line components is marginally detected. The presence of an inverse P-Cygni profile in the HeI and Br$\gamma$ lines, along with the tentative detection of a slightly larger size of the blue-shifted Br$\gamma$ line component, hints at the simultaneous presence of a wind and magnetospheric accretion in S CrA N.
• ### Distinguishing black-hole spin-orbit resonances by their gravitational wave signatures. II: Full parameter estimation(1507.05587)
Feb. 15, 2016 gr-qc, astro-ph.HE
Gravitational waves from coalescing binary black holes encode the evolution of their spins prior to merger. In the post-Newtonian regime and on the precession timescale, this evolution has one of three morphologies, with the spins either librating around one of two fixed points ("resonances") or circulating freely. In this work we perform full parameter estimation on resonant binaries with fixed masses and spin magnitudes, changing three parameters: a conserved "projected effective spin" $\xi$ and resonant family $\Delta\Phi=0,\pi$ (which uniquely label the source), the inclination $\theta_{JN}$ of the binary's total angular momentum with respect to the line of sight (which determines the strength of precessional effects in the waveform), and the signal amplitude. We demonstrate that resonances can be distinguished for a wide range of binaries, except for highly symmetric configurations where precessional effects are suppressed. Motivated by new insight into double-spin evolution, we introduce new variables to characterize precessing black hole binaries which naturally reflects the timescale separation of the system and therefore better encode the dynamical information carried by gravitational waves.
• SNO+ is a large liquid scintillator-based experiment located 2km underground at SNOLAB, Sudbury, Canada. It reuses the Sudbury Neutrino Observatory detector, consisting of a 12m diameter acrylic vessel which will be filled with about 780 tonnes of ultra-pure liquid scintillator. Designed as a multipurpose neutrino experiment, the primary goal of SNO+ is a search for the neutrinoless double-beta decay (0$\nu\beta\beta$) of 130Te. In Phase I, the detector will be loaded with 0.3% natural tellurium, corresponding to nearly 800 kg of 130Te, with an expected effective Majorana neutrino mass sensitivity in the region of 55-133 meV, just above the inverted mass hierarchy. Recently, the possibility of deploying up to ten times more natural tellurium has been investigated, which would enable SNO+ to achieve sensitivity deep into the parameter space for the inverted neutrino mass hierarchy in the future. Additionally, SNO+ aims to measure reactor antineutrino oscillations, low-energy solar neutrinos, and geoneutrinos, to be sensitive to supernova neutrinos, and to search for exotic physics. A first phase with the detector filled with water will begin soon, with the scintillator phase expected to start after a few months of water data taking. The 0$\nu\beta\beta$ Phase I is foreseen for 2017.
• ### Localized starbursts in dwarf galaxies produced by impact of low metallicity cosmic gas clouds(1509.00180)
Sept. 1, 2015 astro-ph.GA
Models of galaxy formation predict that gas accretion from the cosmic web is a primary driver of star formation over cosmic history. Except in very dense environments where galaxy mergers are also important, model galaxies feed from cold streams of gas from the web that penetrate their dark matter haloes. Although these predictions are unambiguous, the observational support has been indirect so far. Here we report spectroscopic evidence for this process in extremely metal-poor galaxies (XMPs) of the local Universe, taking the form of localized starbursts associated with gas having low metallicity. Detailed abundance analyses based on Gran Telescopio Canarias (GTC) optical spectra of ten XMPs show that the galaxy hosts have metallicities around 60 % solar on average, while the large star-forming regions that dominate their integrated light have low metallicities of some 6 % solar. Because gas mixes azimuthally in a rotation timescale (a few hundred Myr), the observed metallicity inhomogeneities are only possible if the metal-poor gas fell onto the disk recently. We analyze several possibilities for the origin of the metal-poor gas, favoring the metal-poor gas infall predicted by numerical models. If this interpretation is correct, XMPs trace the cosmic web gas in their surroundings, making them probes to examine its properties.
• ### The calibration system for the photomultiplier array of the SNO+ experiment(1411.4830)
Jan. 16, 2015 hep-ex, physics.ins-det
A light injection system using LEDs and optical fibres was designed for the calibration and monitoring of the photomultiplier array of the SNO+ experiment at SNOLAB. Large volume, non-segmented, low-background detectors for rare event physics, such as the multi-purpose SNO+ experiment, need a calibration system that allow an accurate and regular measurement of the performance parameters of their photomultiplier arrays, while minimising the risk of radioactivity ingress. The design implemented for SNO+ uses a set of optical fibres to inject light pulses from external LEDs into the detector. The design, fabrication and installation of this light injection system, as well as the first commissioning tests, are described in this paper. Monte Carlo simulations were compared with the commissioning test results, confirming that the system meets the performance requirements.
• ### The long bar as seen by the VVV survey: I. Colour-magnitude diagrams(1209.4370)
Sept. 27, 2012 astro-ph.GA
The VISTA Variable Survey (VVV) is able to map the Galaxy at l<0 with an unmatched depth (at least 3 mag deeper than 2MASS), opening new possibilities for studying the inner structure of the Milky Way. In this paper we concentrate on exploiting these data to better understand the spatial disposition and distribution of the structures present in the inner Milky Way, particularly the Long Bar and its interaction with the inner disc. The observations show the presence of a clear overdensity of stars with associated recent stellar formation that we interpret as the traces of the Long Bar, and we derive an angle for it of 41+/-5 degrees with the Sun-Galactic centre line, touching the disc near l=27 and l=-12. The colour-magnitude diagrams presented here also show a lack of disc stars in several lines of sight, a fact that we associate with the truncation of the disc by the potential of this bar at Galactocentric radii less than 5 kpc.
• ### Effect of the frequency chirp on laser wakefield acceleration(1112.4380)
The role of laser frequency chirps in the laser wakefield accelerator is examined. We show that in the linear regime, the evolution of the laser pulse length is affected by the frequency chirp, and that positive (negative) chirp compresses (stretches) the laser pulse, thereby increasing (decreasing) the peak vector potential and wakefield amplitude. In the blowout regime, the frequency chirp can be used to fine-tune the localized etching rates at the front of the laser. In our simulations, chirped laser pulses lead to up to 15% more self-trapped electrons and 10% higher peak energies compared to the transform-limited pulse. Chirps may be used to control the phase velocity of the wake, and to relax the self-guiding conditions at the front of the laser. Our predictions are confirmed by multi-dimensional particle-in-cell simulations with OSIRIS.
• Ensuring the radiation hardness of PbWO4 crystals was one of the main priorities during the construction of the electromagnetic calorimeter of the CMS experiment at CERN. The production on an industrial scale of radiation hard crystals and their certification over a period of several years represented a difficult challenge both for CMS and for the crystal suppliers. The present article reviews the related scientific and technological problems encountered.
• ### Images IV: Strong evolution of the oxygen abundance in gaseous phases of intermediate mass galaxies since z=0.8(0810.0272)
Oct. 2, 2008 astro-ph
Intermediate-mass galaxies (log M(Msun)>10) at z~0.6 are the likeliest progenitors of the present-day numerous population of spirals. There is growing evidence that they have evolved rapidly over the last 6 to 8 Gyr and have likely formed a significant fraction of their stellar mass, often showing perturbed morphologies and kinematics. We have gathered a representative sample of 88 such galaxies and provided robust estimates of their gas-phase metallicity. To do so, we used moderate-spectral-resolution spectroscopy at VLT/FORS2 with unprecedentedly high S/N, allowing us to remove biases coming from interstellar absorption lines and extinction and to establish robust values of R23=([OII]3727 + [OIII]4959,5007)/Hbeta. We definitively confirm that the predominant population of z~0.6 starbursts and luminous IR galaxies (LIRGs) are, on average, two times less metal-rich than local galaxies at a given stellar mass. We find that the metal abundance of the gaseous phase of galaxies evolves linearly with time from z=1 to z=0 and, after comparing with other studies, from z=3 to z=0. Combining our results with the reported evolution of the Tully-Fisher relation, we find that such an evolution requires that ~30% of the stellar mass of local galaxies was formed through an external supply of gas, thus excluding the closed-box model. Distant starbursts and LIRGs have properties (metal abundance, star-formation efficiency and morphologies) similar to those of local LIRGs. Their underlying physics is likely dominated by gas infall, probably through merging or interactions. Our study further supports the rapid evolution of z~0.4-1 galaxies. Gas exchange between galaxies is likely the main cause of this evolution.
• ### Characterization of the hot Neptune GJ 436b with Spitzer and ground-based observations(0707.3809)
Sept. 14, 2007 astro-ph
We present Spitzer Space Telescope infrared photometry of a secondary eclipse of the hot Neptune GJ436b. The observations were obtained using the 8-micron band of the InfraRed Array Camera (IRAC). The data spanning the predicted time of secondary eclipse show a clear flux decrement with the expected shape and duration. The observed eclipse depth of 0.58 mmag allows us to estimate a blackbody brightness temperature of T_p = 717 +- 35 K at 8 microns. We compare this infrared flux measurement to a model of the planetary thermal emission, and show that this model reproduces properly the observed flux decrement. The timing of the secondary eclipse confirms the non-zero orbital eccentricity of the planet, while also increasing its precision (e = 0.14 +- 0.01). Additional new spectroscopic and photometric observations allow us to estimate the rotational period of the star and to assess the potential presence of another planet.
• ### Accurate Spitzer infrared radius measurement for the hot Neptune GJ 436b(0707.2261)
July 24, 2007 astro-ph
We present Spitzer Space Telescope infrared photometry of a primary transit of the hot Neptune GJ 436b. The observations were obtained using the 8 microns band of the InfraRed Array Camera (IRAC). The high accuracy of the transit data and the weak limb-darkening in the 8 microns IRAC band allow us to derive (assuming M = 0.44 +- 0.04 Msun for the primary) a precise value for the planetary radius (4.19 +0.21-0.16 Rearth), the stellar radius (0.463 +0.022-0.017 Rsun), the orbital inclination (85.90 +0.19-0.18 degrees) and transit timing (2454280.78186 +0.00015-0.00008 HJD). Assuming current planet models, an internal structure similar to that of Neptune with a small H/He envelope is necessary to account for the measured radius of GJ 436b.
• ### Generating multi-GeV electron bunches using single stage laser wakefield acceleration in a 3D nonlinear regime(physics/0612227)
The extraordinary ability of space-charge waves in plasmas to accelerate charged particles at gradients that are orders of magnitude greater than in current accelerators has been well documented. We develop a phenomenological framework for Laser WakeField Acceleration (LWFA) in the 3D nonlinear regime, in which the plasma electrons are expelled by the radiation pressure of a short pulse laser, leading to nearly complete blowout. Our theory provides a recipe for designing a LWFA for given laser and plasma parameters and estimates the number and the energy of the accelerated electrons, whether self-injected or externally injected. These formulas apply for self-guided as well as externally guided pulses (e.g. by plasma channels). We demonstrate our results by presenting a sample Particle-In-Cell (PIC) simulation of a 30 fs, 200 TW laser interacting with a 0.75 cm long plasma with density 1.5*10^18 cm^-3 to produce an ultra-short (10 fs) mono-energetic bunch of self-injected electrons at 1.5 GeV with 0.3 nC of charge. For future higher-energy accelerator applications we propose a parameter space, distinct from that described by Gordienko and Pukhov [Physics of Plasmas 12, 043109 (2005)], in that it involves lower densities and wider spot sizes while keeping the intensity relatively constant. We find that this helps increase the output electron beam energy while keeping the efficiency high.
• ### Multiwavelength analysis of the young open cluster NGC 2362(astro-ph/0602487)
Feb. 22, 2006 astro-ph
We present a multiwavelength analysis of the young open cluster NGC 2362. UBVRcIc CCD photometric observations, together with available data in the Chandra data base, near-infrared data from the Two Micron All Sky Survey (2MASS), and recently published Halpha spectroscopy were used to get information about the evolutionary stage of the cluster and the main physical properties of its stellar content. Cluster membership is estimated for every individual star by means of ZAMS and isochrone fitting. The cluster is confirmed to host a richly populated pre-main sequence (PMS), and to contain a large number of X-ray emitting stars, ranging from the PMS members of GK spectral type up to the most luminous OB-type main sequence (MS) members. The PMS cluster members show no significant age spread, and the comparison to both PMS and post-MS isochrones suggests a younger age for the more massive MS than for lower mass PMS members. The analysis allows us to assess the validity of currently used pre-main-sequence evolutionary models, and supports the suggestion of a well-defined positive correlation of the X-ray emission from PMS stars with their bolometric luminosity. Clear differences are found, on the other hand, between the X-ray activity properties of MS and PMS cluster members, both in the relation between X-ray luminosity and bolometric luminosity and in their spectral properties.
• ### Size distribution of circumstellar disks in the Trapezium cluster(astro-ph/0506585)
June 24, 2005 astro-ph
In this paper we present results on the size distribution of circumstellar disks in the Trapezium cluster as measured from HST/WFPC2 data. Direct diameter measurements of a sample of 135 bright proplyds and 14 silhouettes disks suggest that there is a single population of disks well characterized by a power-law distribution with an exponent of -1.9 +- 0.3 between disk diameters 100-400 AU. For the stellar mass sampled (from late G to late M stars) we find no obvious correlation between disk diameter and stellar mass. We also find that there is no obvious correlation between disk diameter and the projected distance to the ionizing Trapezium OB stars. We estimate that about 40% of the disks in the Trapezium have radius larger than 50 AU. We suggest that the origin of the Solar system's (Kuiper belt) outer edge is likely to be due to the star formation environment and disk destruction processes (photoevaporation, collisions) present in the stellar cluster on which the Sun was probably formed. Finally, we identified a previously unknown proplyd and named it 266-557, following convention.
• ### The CORALIE survey for southern extra-solar planets. XII. Orbital solutions for 16 extra-solar planets discovered with CORALIE(astro-ph/0310316)
Nov. 10, 2003 astro-ph
This paper summarizes the information gathered for 16 still unpublished exoplanet candidates discovered with the CORALIE echelle spectrograph mounted on the Euler Swiss telescope at La Silla Observatory. Amongst these new candidates, 10 are typical extrasolar Jupiter-like planets on intermediate- or long-period (100<P<1350d) and fairly eccentric (0.2<e<0.5) orbits (HD19994, HD65216, HD92788, HD111232, HD114386, HD142415, HD147513, HD196050, HD216437, HD216770). Two of these stars are in binary systems. The next 3 candidates are shorter-period planets (HD6434, HD121504) with lower eccentricities among which a hot Jupiter (HD83443). More interesting cases are finally given by the multiple-planet systems HD82943 and HD169830. The former is a resonant P_2/P_1=2/1 system in which planet-planet interactions are influencing the system evolution. The latter is more hierarchically structured.
• ### Soliton generation of two-component Bose-Einstein condensates in optical lattices(cond-mat/0307156)
July 8, 2003 cond-mat.soft
Coupled nonlinear Schrödinger equations (CNLS) with an external elliptic-function potential model a quasi-one-dimensional interacting two-component Bose-Einstein condensate trapped in a standing light wave. New families of stationary solutions of the CNLS with a periodic potential are presented and their stability is studied by direct numerical simulations. Some of these solutions allow reduction to the Manakov system. From a physical point of view these solutions can be interpreted as exact Bloch states at the edge of the Brillouin zone. Some of them are stable, while others are found to be unstable against modulations of long wavelength. The solutions which are modulationally unstable are shown to lead to the formation of localized ground states of the coupled BEC system.
• ### Scalar Mesons within a model for all non-exotic mesons(hep-ph/0201006)
Jan. 2, 2002 hep-ph
We describe a four-parameter model for non-exotic meson-meson scattering, which accommodates all non-exotic mesons, hence also the light scalar mesons, as resonances and bound states characterised by complex singularities of the scattering amplitude as a function of the total invariant mass. The majority of the full $S$-matrix mesonic poles stem from an underlying confinement spectrum. However, the light scalar mesons K0*(830), a0(980), f0(400-1200), and f0(980) do not, but instead originate in 3P0-barrier semi-bound states. In the case of bound states, wave functions can be determined. For ccbar and bbbar, radiative transitions have been calculated. Here we compare the results to the data.
• ### Probing the pc- and kpc-scale environment of the powerful radio galaxy Hercules A(astro-ph/0111607)
Nov. 30, 2001 astro-ph
We present the kpc-scale behaviour of the powerful extragalactic radio source Hercules A and the behaviour of the intracluster gas in which the radio source is situated. We have found that Hercules A exhibits a strong Laing-Garrington effect. The X-ray observations have revealed an extended X-ray emission elongated along the radio galaxy axis. The estimated temperature of the cluster is kT = 2.45 keV and the central electron density is n_0 ~ 7.8 x 10^(-3) cm^(-3), which reveals the hot, dense environment in which Hercules A is situated. From the combined study of the radio and X-ray data we have estimated a central value of 3 < B_0 (muG) < 9. We also present the most recent results from the analysis of the radio data on the pc-scale structure of the radio galaxy, observed at 18 cm by the EVN-MERLIN array. A faint but compact radio source, coincident with the optical centre of Hercules A, was detected by the EVN at 18 mas resolution. The total flux density of the EVN core is 14.6 mJy. Its angular size is 18 x 7 mas with a position angle of ~139 degrees. There is also evidence for extended emission in the NW-SE direction, most probably from the eastern pc-scale jet. If this is true then there is a misalignment of ~35 degrees between the direction of the pc-scale eastern jet and the aligned kpc-scale jets.
• ### Quasiparticle spectrum of a type-II superconductor in a high magnetic field with randomly pinned vortices(cond-mat/9905180)
May 13, 1999 cond-mat.supr-con
We show that gapless superconductivity of a strongly type-II superconductor in a high magnetic field prevails in the presence of disorder, suggesting a topological nature. We calculate the density of states of the Bogoliubov-de Gennes quasiparticles for a two-dimensional inhomogeneous system in both cases of weak and strong disorder. In the limit of very weak disorder, the effect is very small and the density of states is not appreciably changed. As the disorder increases, the density of states at low energies increases and the ratio of the low-energy density of states to its maximum increases significantly.
• ### Hall conductance of a pinned vortex lattice in a high magnetic field(cond-mat/9905181)
May 13, 1999 cond-mat.supr-con
We calculate the quasiparticle contribution to the zero temperature Hall conductance of two-dimensional extreme type-II superconductors in a high magnetic field, using the Landau basis. As one enters the superconducting phase the Hall conductance is renormalized to smaller values, with respect to the normal state result, until a quantum level-crossing transition is reached. At high values of the order parameter, where the quasiparticles are bound to the vortex cores, the Hall conductance is expected to tend to zero due to a theorem of Thouless.
• ### Neutrino Magnetic Moment Upper Bound From Solar Neutrino Observations(hep-ph/9712462)
Dec. 19, 1997 hep-ph
Using the data from the SuperKamiokande, Kamiokande and Homestake solar neutrino experiments we derive an upper bound on the magnetic moment of the neutrino and find $\mu_{\nu_e}\leq(2.2-2.3)\times 10^{-10}\mu_{B}$, within four different standard solar models. We assume equal magnetic moments for all neutrino flavours. This limit is obtained when neutrinos do not undergo any "disappearance" mechanism other than the magnetic moment conversion due to the solar magnetic field, and for a total or nearly total suppression of the intermediate-energy neutrinos. In our work we consider an energy-dependent suppression of solar neutrinos. We also point out that the limit may be further reduced if the detector threshold energy in $\nu_{e,x}e^{-}$ scattering with solar neutrinos is decreased.
• ### Characteristic functions and process identification by neural networks(physics/9712035)
Dec. 17, 1997 physics.data-an
Principal component analysis (PCA) algorithms use neural networks to extract the eigenvectors of the correlation matrix from the data. However, if the process is non-Gaussian, PCA algorithms or their higher-order generalisations provide only incomplete or misleading information on the statistical properties of the data. To handle such situations we propose neural network algorithms, with a hybrid (supervised and unsupervised) learning scheme, which construct the characteristic function of the probability distribution and the transition functions of the stochastic process. Illustrative examples are presented, which include Cauchy and Levy-type processes.
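As background for the contrast drawn above, the classic neural-PCA baseline can be sketched with Oja's single-neuron learning rule, whose weight vector converges (up to sign) to the leading eigenvector of the data covariance. This is an illustration of the baseline the abstract criticises, not the authors' hybrid characteristic-function algorithm; the Gaussian data and learning rate are invented for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic zero-mean Gaussian data with an anisotropic covariance.
cov = np.array([[3.0, 1.0],
                [1.0, 2.0]])
X = rng.multivariate_normal(mean=[0.0, 0.0], cov=cov, size=20000)

# Oja's rule: Hebbian update with a built-in normalisation term,
# w <- w + eta * y * (x - y * w), where y = w . x.
w = rng.normal(size=2)
w /= np.linalg.norm(w)
eta = 0.005
for x in X:
    y = w @ x
    w += eta * y * (x - y * w)

# Reference answer from an eigendecomposition of the sample covariance.
evals, evecs = np.linalg.eigh(X.T @ X / len(X))
top = evecs[:, -1]                       # leading eigenvector
alignment = abs(w @ top) / np.linalg.norm(w)
```

For Gaussian data this works well; the abstract's point is that for Cauchy- or Levy-type processes second-moment methods like this are incomplete, which motivates learning the characteristic function instead.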
• ### Non-commutative time-frequency tomography(physics/9712022)
Dec. 12, 1997 physics.data-an
The characterization of non-stationary signals requires joint time and frequency information. However, since time (t) and frequency (omega) are non-commuting variables, there cannot be a joint probability density in the (t,omega) plane, and the time-frequency distributions that have been proposed have difficult interpretation problems arising from negative or complex values and spurious components. As an alternative we propose to obtain time-frequency information by looking at the marginal distributions along rotated directions in the (t,omega) plane. The rigorous probability interpretation of the marginal distributions avoids all interpretation ambiguities. Applications to signal analysis and signal detection are discussed, as well as an extension of the method to other pairs of non-commuting variables.
• ### On the computation of quantum characteristic exponents(quant-ph/9711068)
Nov. 26, 1997 quant-ph
A quantum characteristic exponent may be defined, with the same operational meaning as the classical Lyapunov exponent when the latter is expressed as a functional of densities. Existence conditions and supporting measure properties are discussed as well as the problems encountered in the numerical computation of the quantum exponents. Although an example of true quantum chaos may be exhibited, the taming effect of quantum mechanics on chaos is quite apparent in the computation of the quantum exponents. However, even when the exponents vanish, the functionals used for their definition may still provide a characterization of distinct complexity classes for quantum behavior.
• ### Boundary-layer control by electric fields: A feasibility study(physics/9705020)
May 15, 1997 physics.flu-dyn
A problem of great concern in aviation and submarine propulsion is the control of the boundary layer and, in particular, the methods to extend the laminar region as a means to decrease noise and fuel consumption. In this paper we study the flow of air along an airfoil when a layer of ionized gas and a longitudinal electric field are created in the boundary layer region. By deriving scaling solutions and more accurate numerical solutions we discuss the possibility of achieving significant boundary layer control for realistic physical parameters. Practical design formulas and criteria are obtained. We also discuss the perspectives for active control of the laminar-to-turbulent transition fluctuations by electromagnetic field modulation.
https://motls.blogspot.com/2006/03/consequences-of-outage.html
## Friday, March 17, 2006 ... //
### Consequences of an outage
What happens when a server ("filer") at blogger.com fails and the guys have a hard time replacing it, as happened to this blog in the last 16 hours? Among other things, the following events could be expected to occur:
• Readers in 120 countries are doomed and outraged
• A global conflict is imminent
• Instead of 2,500 page views during the period, they receive 2,500 "403 Forbidden" error messages
https://www.nag.com/numeric/nl/nagdoc_26.2/nagdoc_cl26.2/html/f08/f08bhc.html
# NAG C Library Function Document
## 1 Purpose
nag_dtzrzf (f08bhc) reduces the $m$ by $n$ ($m\le n$) real upper trapezoidal matrix $A$ to upper triangular form by means of orthogonal transformations.
## 2 Specification
#include <nag.h>
#include <nagf08.h>
void nag_dtzrzf (Nag_OrderType order, Integer m, Integer n, double a[], Integer pda, double tau[], NagError *fail)
## 3 Description
The $m$ by $n$ ($m\le n$) real upper trapezoidal matrix $A$ given by
$$A = \begin{pmatrix} R_1 & R_2 \end{pmatrix} ,$$
where $R_1$ is an $m$ by $m$ upper triangular matrix and $R_2$ is an $m$ by $(n-m)$ matrix, is factorized as
$$A = \begin{pmatrix} R & 0 \end{pmatrix} Z ,$$
where $R$ is also an $m$ by $m$ upper triangular matrix and $Z$ is an $n$ by $n$ orthogonal matrix.
## 4 References
Anderson E, Bai Z, Bischof C, Blackford S, Demmel J, Dongarra J J, Du Croz J J, Greenbaum A, Hammarling S, McKenney A and Sorensen D (1999) LAPACK Users' Guide (3rd Edition) SIAM, Philadelphia http://www.netlib.org/lapack/lug
## 5 Arguments
1: $\mathbf{order}$ – Nag_OrderType – Input
On entry: the order argument specifies the two-dimensional storage scheme being used, i.e., row-major ordering or column-major ordering. C language defined storage is specified by ${\mathbf{order}}=\mathrm{Nag_RowMajor}$. See Section 3.3.1.3 in How to Use the NAG Library and its Documentation for a more detailed explanation of the use of this argument.
Constraint: ${\mathbf{order}}=\mathrm{Nag_RowMajor}$ or $\mathrm{Nag_ColMajor}$.
2: $\mathbf{m}$ – Integer – Input
On entry: $m$, the number of rows of the matrix $A$.
Constraint: ${\mathbf{m}}\ge 0$.
3: $\mathbf{n}$ – Integer – Input
On entry: $n$, the number of columns of the matrix $A$.
Constraint: ${\mathbf{n}}\ge {\mathbf{m}}$.
4: $\mathbf{a}\left[\mathit{dim}\right]$ – double – Input/Output
Note: the dimension, dim, of the array a must be at least
• $\mathrm{max}\phantom{\rule{0.125em}{0ex}}\left(1,{\mathbf{pda}}×{\mathbf{n}}\right)$ when ${\mathbf{order}}=\mathrm{Nag_ColMajor}$;
• $\mathrm{max}\phantom{\rule{0.125em}{0ex}}\left(1,{\mathbf{m}}×{\mathbf{pda}}\right)$ when ${\mathbf{order}}=\mathrm{Nag_RowMajor}$.
The $\left(i,j\right)$th element of the matrix $A$ is stored in
• ${\mathbf{a}}\left[\left(j-1\right)×{\mathbf{pda}}+i-1\right]$ when ${\mathbf{order}}=\mathrm{Nag_ColMajor}$;
• ${\mathbf{a}}\left[\left(i-1\right)×{\mathbf{pda}}+j-1\right]$ when ${\mathbf{order}}=\mathrm{Nag_RowMajor}$.
On entry: the leading $m$ by $n$ upper trapezoidal part of the array a must contain the matrix to be factorized.
On exit: the leading $m$ by $m$ upper triangular part of a contains the upper triangular matrix $R$, and elements ${\mathbf{m}}+1$ to n of the first $m$ rows of a, with the array tau, represent the orthogonal matrix $Z$ as a product of $m$ elementary reflectors (see Section 3.3.6 in the f08 Chapter Introduction).
5: $\mathbf{pda}$ – Integer – Input
On entry: the stride separating row or column elements (depending on the value of order) in the array a.
Constraints:
• if ${\mathbf{order}}=\mathrm{Nag_ColMajor}$, ${\mathbf{pda}}\ge \mathrm{max}\phantom{\rule{0.125em}{0ex}}\left(1,{\mathbf{m}}\right)$;
• if ${\mathbf{order}}=\mathrm{Nag_RowMajor}$, ${\mathbf{pda}}\ge \mathrm{max}\phantom{\rule{0.125em}{0ex}}\left(1,{\mathbf{n}}\right)$.
6: $\mathbf{tau}\left[\mathit{dim}\right]$ – double – Output
Note: the dimension, dim, of the array tau must be at least $\mathrm{max}\phantom{\rule{0.125em}{0ex}}\left(1,{\mathbf{m}}\right)$.
On exit: the scalar factors of the elementary reflectors.
7: $\mathbf{fail}$ – NagError * – Input/Output
The NAG error argument (see Section 3.7 in How to Use the NAG Library and its Documentation).
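The two storage formulas given for the array a above can be sanity-checked with a small Python sketch. The helper `flat_index` is a hypothetical illustration written for this document, not part of the NAG API; it maps a 1-based $(i,j)$ pair to the 0-based flat index used by `a[]`:

```python
def flat_index(i, j, pda, order):
    # Map 1-based (i, j) to the 0-based flat index of a[], per Section 5:
    #   Nag_ColMajor: a[(j-1)*pda + i-1]
    #   Nag_RowMajor: a[(i-1)*pda + j-1]
    if order == "col":
        return (j - 1) * pda + i - 1
    return (i - 1) * pda + j - 1

m, n = 2, 3
A = [[11, 12, 13],
     [21, 22, 23]]
col = [0] * (m * n)  # column-major copy: pda >= max(1, m)
row = [0] * (m * n)  # row-major copy:    pda >= max(1, n)
for i in range(1, m + 1):
    for j in range(1, n + 1):
        col[flat_index(i, j, m, "col")] = A[i - 1][j - 1]
        row[flat_index(i, j, n, "row")] = A[i - 1][j - 1]
print(col)  # columns stored contiguously
print(row)  # rows stored contiguously
```

With Nag_ColMajor the columns of $A$ end up contiguous in memory; with Nag_RowMajor the rows do, which is why the `pda` constraints differ between the two orders.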
## 6 Error Indicators and Warnings
NE_ALLOC_FAIL
Dynamic memory allocation failed.
See Section 2.3.1.2 in How to Use the NAG Library and its Documentation for further information.
NE_BAD_PARAM
On entry, argument $〈\mathit{\text{value}}〉$ had an illegal value.
NE_INT
On entry, ${\mathbf{m}}=〈\mathit{\text{value}}〉$.
Constraint: ${\mathbf{m}}\ge 0$.
On entry, ${\mathbf{pda}}=〈\mathit{\text{value}}〉$.
Constraint: ${\mathbf{pda}}>0$.
NE_INT_2
On entry, ${\mathbf{n}}=〈\mathit{\text{value}}〉$ and ${\mathbf{m}}=〈\mathit{\text{value}}〉$.
Constraint: ${\mathbf{n}}\ge {\mathbf{m}}$.
On entry, ${\mathbf{pda}}=〈\mathit{\text{value}}〉$ and ${\mathbf{m}}=〈\mathit{\text{value}}〉$.
Constraint: ${\mathbf{pda}}\ge \mathrm{max}\phantom{\rule{0.125em}{0ex}}\left(1,{\mathbf{m}}\right)$.
On entry, ${\mathbf{pda}}=〈\mathit{\text{value}}〉$ and ${\mathbf{n}}=〈\mathit{\text{value}}〉$.
Constraint: ${\mathbf{pda}}\ge \mathrm{max}\phantom{\rule{0.125em}{0ex}}\left(1,{\mathbf{n}}\right)$.
NE_INTERNAL_ERROR
An internal error has occurred in this function. Check the function call and any array sizes. If the call is correct then please contact NAG for assistance.
See Section 2.7.6 in How to Use the NAG Library and its Documentation for further information.
NE_NO_LICENCE
Your licence key may have expired or may not have been installed correctly.
See Section 2.7.5 in How to Use the NAG Library and its Documentation for further information.
## 7 Accuracy
The computed factorization is the exact factorization of a nearby matrix $A+E$, where
$$\left\| E \right\|_2 = O(\epsilon) \left\| A \right\|_2$$
and $\epsilon$ is the machine precision.
## 8 Parallelism and Performance
nag_dtzrzf (f08bhc) makes calls to BLAS and/or LAPACK routines, which may be threaded within the vendor library used by this implementation. Consult the documentation for the vendor library for further information.
Please consult the x06 Chapter Introduction for information on how to control and interrogate the OpenMP environment used within this function. Please also consult the Users' Note for your implementation for any additional implementation-specific information.
## 9 Further Comments
The total number of floating-point operations is approximately $4m^{2}\left(n-m\right)$.
The complex analogue of this function is nag_ztzrzf (f08bvc).
## 10 Example
This example solves the linear least squares problems
$$\min_x \left\| b_j - A x_j \right\|_2 , \quad j=1,2$$
for the minimum norm solutions ${x}_{1}$ and ${x}_{2}$, where ${b}_{j}$ is the $j$th column of the matrix $B$,
$$A = \begin{pmatrix} -0.09 & 0.14 & -0.46 & 0.68 & 1.29 \\ -1.56 & 0.20 & 0.29 & 1.09 & 0.51 \\ -1.48 & -0.43 & 0.89 & -0.71 & -0.96 \\ -1.09 & 0.84 & 0.77 & 2.11 & -1.27 \\ 0.08 & 0.55 & -1.13 & 0.14 & 1.74 \\ -1.59 & -0.72 & 1.06 & 1.24 & 0.34 \end{pmatrix} \quad \text{and} \quad B = \begin{pmatrix} 7.4 & 2.7 \\ 4.2 & -3.0 \\ -8.3 & -9.6 \\ 1.8 & 1.1 \\ 8.6 & 4.0 \\ 2.1 & -5.7 \end{pmatrix} .$$
The solution is obtained by first obtaining a $QR$ factorization with column pivoting of the matrix $A$, and then the $RZ$ factorization of the leading $k$ by $k$ part of $R$ is computed, where $k$ is the estimated rank of $A$. A tolerance of $0.01$ is used to estimate the rank of $A$ from the upper triangular factor, $R$.
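As an independent cross-check (not the NAG routine itself), the same pair of least squares problems can be sketched with `numpy.linalg.lstsq`, which also returns minimum norm solutions for rank-deficient systems; its `rcond` argument plays a role loosely analogous to the rank tolerance of $0.01$ mentioned above:

```python
import numpy as np

# Matrices A (6 by 5) and B (6 by 2) from the example above.
A = np.array([
    [-0.09,  0.14, -0.46,  0.68,  1.29],
    [-1.56,  0.20,  0.29,  1.09,  0.51],
    [-1.48, -0.43,  0.89, -0.71, -0.96],
    [-1.09,  0.84,  0.77,  2.11, -1.27],
    [ 0.08,  0.55, -1.13,  0.14,  1.74],
    [-1.59, -0.72,  1.06,  1.24,  0.34],
])
B = np.array([
    [ 7.4,  2.7],
    [ 4.2, -3.0],
    [-8.3, -9.6],
    [ 1.8,  1.1],
    [ 8.6,  4.0],
    [ 2.1, -5.7],
])

# Singular values below rcond * (largest singular value) are treated as zero,
# which is how the effective rank of A is estimated here.
X, residuals, rank, sv = np.linalg.lstsq(A, B, rcond=0.01)
print("estimated rank:", rank)
print("minimum norm solutions:\n", X)
```

The columns of `X` correspond to the minimum norm solutions $x_1$ and $x_2$; the numerical route differs (SVD rather than $QR$ with column pivoting followed by an $RZ$ factorization), so intermediate quantities are not comparable, only the solutions.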
### 10.1 Program Text
Program Text (f08bhce.c)
### 10.2 Program Data
Program Data (f08bhce.d)
### 10.3 Program Results
Program Results (f08bhce.r)
https://www.sbt-durabi.org/articles/xml/OzEX/
General Article
International Journal of Sustainable Building Technology and Urban Development. 30 December 2022. 472-487
https://doi.org/10.22712/susb.20220034
• Introduction
• Literature Review
• Identification of the heat vulnerability areas
• Designation of cooling centers
• Gaps in literature review
• Material and Methodology
• Data and Variables
• Results and Discussions
• Vulnerability-Resilience Index (VRI)
• Analysis of Variance
• Hotspot analysis
• Conclusions
Introduction
Heat waves are becoming more frequent and extreme due to global warming. High-density development and population aging at an unprecedentedly rapid pace are worsening the impacts of heat waves in many cities. Heat waves are one of the natural disasters that negatively affect people's health, and heat exposure leads to increased incidences of heat-related illnesses such as heatstroke, sunstroke, heat exhaustion, and heat cramps. According to the Emergency Room Monitoring System Operation Results Report prepared by the Korea Centers for Disease Control and Prevention in 2021, the number of heat wave days from June to August 2021 averaged 11.8 days, an increase of 53.2% compared to the previous year [1]. The number of patients with heat-related illnesses reported to the emergency room monitoring system was 1,376, an increase of 27.6% from 1,078 people in 2020. A total of 20 deaths was reported in 2021, the second-largest annual toll since 2011.
It is noteworthy that the levels of vulnerability to the effects of heat waves vary with people's social, physical, and economic characteristics. To prevent heat-related illnesses, it is important for people to have access to cooling centers with air conditioning. However, not everyone has good access to cooling shelters, owing to the absence of such facilities nearby. In other words, the lack of facilities that can lower body temperature in the event of a heat wave increases the incidence of heat-related diseases by lowering individuals' ability to adapt to the heat. Previous research explained that the negative health impacts caused by heat waves were high among the socially vulnerable population. The 2020 Heat Wave Impact Report by the Korea Environment Institute defined low-income people, those aged 65 or older, outdoor workers, and single-person households as populations vulnerable to heat waves [2]. Ahn et al. defined vulnerable groups as the elderly, children, outdoor workers, single-person households, residents of vulnerable housing, pet owners, low-income households, and the chronically ill, considering physical, environmental, and social factors [3].
To reduce the adverse impacts of extreme heat events, the Seoul metropolitan government has planned and operated cooling centers since 2018. A cooling center is a facility designated to allow people exposed to extreme heat events to get relief in air conditioning. After heat waves were designated as a natural disaster by the Framework Act on the Management of Disasters and Safety in 2018 [4], the Korean government began proposing heat response plans that open cooling shelters and extend their operating hours. However, the current cooling centers are designated by local governments without sufficient consideration of the degree of vulnerability in each region, such as the distribution of groups vulnerable to extreme heat events [5]. According to the 2021 Senior Cooling Center Designation and Support Standards released by the Seoul Welfare Policy Office, a cooling center should be designated at a place that is available for use and easily accessible to people exposed to heat waves [6]. Areas at risk of landslides, tsunamis, and flooding should also be avoided. Such standards do not closely consider the degree of vulnerability to heat waves or the distribution of vulnerable groups in the target area. In particular, there is a lack of consideration of accessibility for the disabled and the elderly. Designating cooling centers without considering vulnerable areas and vulnerable groups leaves those groups without adequate access to the facilities. Previous studies likewise emphasized that the accessibility of vulnerable groups should be considered when designating cooling centers [7, 8, 9, 10].
For cooling centers to play a significant role in reducing the adverse impacts of heat waves, more spatial analyses of where to install them are needed, taking into account both the heat-vulnerable population and spatial vulnerability within cities. Therefore, this study analyzes whether the current cooling centers have been sited with sufficient consideration of the vulnerability to heat waves of each administrative dong in Seoul. The results should also contribute to improving existing guidelines for installing cooling shelters with better accessibility, especially for populations vulnerable to heat waves.
Literature Review
Identification of the heat vulnerability areas
Cho et al. derived vulnerable areas to thermal environment in Seoul and analyzed the region’s physical environment, population, social, and economic characteristics [11]. Variables were collected by dividing them into climate, population, socio-economic, and land use characteristics. Heat island areas were identified through hotspot analysis, variance analysis, and logistic regression analysis, and further physical and social characteristics of the region were studied. As a result of the study, vulnerable areas of the thermal environment were mainly clustered around the old city center and the sub-city center. The areas had higher air temperatures than the surrounding areas, and the vulnerable socio-economic groups were concentrated.
Another study considering the socio-economic characteristics of the population is a study by Bae et al. that analyzed the spatial relationship between vulnerable areas and the risk of heat waves in Cheongju-si [12]. This study evaluated heat wave vulnerable areas by conducting spatial autocorrelation analysis with the average radiant temperature, a temperature variable, and the possibility of residence of the vulnerable class. In addition, nine variables classified as vulnerable factors in physical, economic, and living conditions were selected to analyze the possibility of residence of the vulnerable. As a result of the analysis, the vulnerable group was also likely to live in areas with a high risk of exposure to heat waves, and the vulnerability to heat waves was exceptionally high in the old city center.
Designation of cooling centers
Kim et al. analyzed the degrees of exposure to heat waves and the responsive behaviors of heat-vulnerable groups [9]. They sought to improve cooling center implementation plans, as a heat wave response system, by reflecting the characteristics of vulnerable groups. Indoor and outdoor temperature observations and a site survey were conducted in areas with a high incidence of heat-related diseases. The analysis showed that the vulnerable were likely to be exposed to high temperatures, and their ability to cope with heat waves was remarkably low. In particular, it was concluded that the effect of the policy might be limited because actual usage of cooling centers and accessibility to the facilities were limited, especially for the disabled and for people without transportation.
Yoon studied whether the spatial equity of public services was considered for disaster-prone groups by overlapping the distribution of those groups with the service areas of cooling centers in Busan Metropolitan City [14]. The study confirmed that people vulnerable to disaster were clustered in 24 administrative dongs in Busan, yet these areas were not covered by the service areas of existing cooling centers. The study was meaningful in that it emphasized the urgent need to designate such facilities and prepare operating guidelines in the most vulnerable areas, and in arguing that the spatial distribution of the vulnerable should inform equity in public services. However, since the aging index in that research was based on the resident registration population, only the static census population was considered; the actual needs of the floating population in these districts during extreme heat waves were not fully captured. The living population refers to the de facto population, consisting of all persons present in a given area at a specific time, estimated from open-source spatial big data and mobile phone signal-based data. It includes the census population and the floating population from outside the Seoul metropolitan area on short-term and mid-to-long-term stays. Using these data, it is possible to analyze sensitivity to heat waves more accurately by considering both the census population and the floating population in the target area during a heat wave. Our study captured the spatio-temporal pattern changes of the vulnerable population by using the living population dataset.
Gaps in literature review
Our study filled the gaps in the literature in two main aspects. First, S-DoT data was used to obtain temperature readings across Seoul. S-DoT data is environmental information measured by 1,100 sensors installed throughout Seoul, collecting variables such as fine dust, temperature, noise, and wind speed. Previous studies on heat waves generally used the average, maximum, and minimum temperature data released by the Korea Meteorological Administration. However, since those data are measured at one observatory per administrative gu, temperature differences between administrative dongs cannot be resolved. S-DoT data, by contrast, can capture a more specific temperature for each administrative dong because the sensors are distributed across dongs. Recently, several studies have analyzed environmental characteristics between areas utilizing S-DoT data [15, 16, 17]. Therefore, our study used S-DoT data as the climate exposure measure to reflect each region's detailed temperature over the study period.
Second, areas vulnerable to heat waves were analyzed considering the floating population. Much previous research used the resident registration population as the population variable for heat wave vulnerability analysis. However, this can yield misleading results because the population actually present in an area during heat wave hours is not considered. For example, outdoor workers are likely to suffer heat-related diseases in places other than where they are registered, because they usually work outdoors during the day; considering only the resident registration population misses this. Therefore, our study analyzed the characteristics of the floating population, which previous studies omitted, by using the living population data provided by the Seoul metropolitan government.
Material and Methodology
Data and Variables
To analyze the degrees of vulnerability to heat waves in Seoul, South Korea, all 425 administrative dongs in Seoul were considered. As of May 2022, the total number of designated cooling centers in Seoul was 33,161. Heat waves in South Korea occur during summer daytime, and heat-related diseases have been reported mostly during that time [18]. Therefore, data from 12:00 to 17:00, July to August 2020, were analyzed for this study.
The empirical analyses for this study were conducted as follows. Based on literature review, the variables for this research were carefully identified. These variables were standardized using the Vulnerability-Resilience Index (VRI) to find spatial distribution of the vulnerability to heat waves in each administrative dong using ArcGIS. Further, ANOVA (Analysis of Variance) test and LISA (Local Indicators of Spatial Association) cluster analysis were conducted to identify the spatial relationship between vulnerable areas to heat waves and locations of cooling centers.
To investigate the vulnerability to heat waves in the 425 administrative dongs in Seoul, the variables listed in Table 1 were used. The variables were categorized into climate exposure, sensitivity, and adaptability based on the concept of the climate change vulnerability index [19]. Climate exposure refers to the degree to which the system is exposed to significant climatic variations caused by climate change, and S-DoT (Smart Seoul Data of Things) data was used to measure it. As mentioned above, S-DoT sensors collect temperature data every hour, enabling detailed temperature measurement at the administrative dong level. Only the data from 12:00 to 17:00, July to August 2020, were analyzed. Figure 1 shows the average temperature in each administrative dong based on S-DoT data. Sensitivity refers to the degree to which the system is affected by or responsive to climate stimuli. The sensitivity variables for this research were the hourly average floating population under the age of 10, the hourly average floating population over the age of 65, the number of people living alone, the number of basic livelihood recipients, the number of low-income seniors, the number of disabled people, childhood support expenses, and elderly support expenses. The floating populations under 10 and over 65 were calculated from the population data between 12:00 and 17:00, July 1 to August 31, each divided by the total floating population of the administrative dong. Finally, adaptability means the ability of the system to respond to damage caused by climate change. Many previous studies measured adaptability with financial ability (household income), social infrastructure (healthcare, education, public facilities), and physical environment (green area, etc.).
This study considered household income, the number of leisure welfare facilities for the elderly, medical institutions, medical personnel, and green areas for measuring adaptability to extreme heat exposure.
### The average temperature of the administrative dongs in Seoul.
For this study, the Vulnerability-Resilience Index (VRI) was used to standardize the climate exposure, sensitivity, and adaptability variables of each dong onto an identical scale. Building on the concept of climate change vulnerability assessment proposed in previous studies, researchers have developed and applied various vulnerability assessment indicators, especially for the built environment in South Korea [20, 21, 22, 23, 24]. Yoo and Kim developed a Vulnerability-Resilience Index (VRI) to evaluate climate change vulnerability by modifying the vulnerability concept proposed by Moss et al. [25, 26]. Our study evaluated vulnerability-resilience to heat waves by administrative dong based on these modified indicators. To standardize the variables onto one scale, each variable was converted to a Z-value using its mean and standard deviation and then arithmetically averaged within each category (climate exposure, sensitivity, adaptability). The Vulnerability-Resilience Index (VRI) was then calculated for each administrative dong.
##### (1)
$$VRI = \frac{exposure + sensitivity}{2} + \frac{adaptability}{2}$$
Equation (1) calculates the VRI index by reflecting each of the three concepts explained by IPCC. Climate exposure and sensitivity are combined to represent the potential risk of a disaster. These potential risks are combined with the adaptability index to offset the disaster risk. Therefore, the VRI index derived through Equation (1) represents the overall vulnerability-resilience of the region. Since the VRI value contains both a negative aspect of the concept of vulnerability, and a positive attribute of the concept of resilience, it should be interpreted carefully by considering conceptual signs to standardized variables [24]. Climate exposure and sensitivity variables are interpreted as negative (-) aspects, and variables of adaptability are interpreted as positive (+) signs. In other words, the larger the Z-value of the climate exposure and sensitivity variables means more vulnerability. Therefore, the vulnerability increases and the total VRI value decreases. On the other hand, the larger the Z-value of the adaptability means more resilience, so the vulnerability decreases and the overall VRI value increases.
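A minimal sketch of the standardization and VRI computation described above, using invented toy values for five hypothetical dongs (not the study's data); the negative sign applied to exposure and sensitivity implements the sign convention explained in the text:

```python
import numpy as np

# Invented toy values for five hypothetical dongs (illustrative only).
exposure     = np.array([31.2, 30.8, 32.1, 29.9, 30.5])  # e.g. mean S-DoT temperature
sensitivity  = np.array([0.18, 0.25, 0.31, 0.12, 0.20])  # e.g. elderly share
adaptability = np.array([4.0, 2.0, 1.0, 6.0, 3.0])       # e.g. welfare facilities

def z(v):
    # Standardize to a Z-value using the mean and standard deviation.
    return (v - v.mean()) / v.std()

# Exposure and sensitivity are vulnerability (negative) aspects, adaptability
# a resilience (positive) aspect, so a larger exposure/sensitivity Z-value
# lowers the index and a larger adaptability Z-value raises it.
vri = -(z(exposure) + z(sensitivity)) / 2 + z(adaptability) / 2
print(vri)
```

In this toy data the third dong (hottest, most sensitive, fewest facilities) receives the lowest VRI and the fourth dong the highest, matching the interpretation given above.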
For this study, based on the VRI index of each administrative dong, further spatial and statistical analyses were conducted: clustering of administrative dongs based on the VRI, an analysis of variance to test differences in the distribution of cooling centers among the clustered groups, and hot spot analysis to find spatially clustered vulnerable areas.
Results and Discussions
Vulnerability-Resilience Index (VRI)
This study analyzed spatial patterns of areas vulnerable to heat waves by grouping administrative dongs according to their VRI values. Choi et al. showed the spatial distribution of heat wave vulnerability by dividing the VRI index of administrative dongs into quartiles [13]. Applying the same quartile method, this study sorted the VRI values in ascending order and classified administrative dongs into vulnerability stages G1 to G4. The G1 group consists of the administrative dongs most vulnerable to heat waves (the bottom 25% of VRI values), G2 covers the 25-50% range, G3 the 50-75% range, and G4 contains the administrative dongs relatively least vulnerable to heat waves. Figure 2 shows the resulting classification of the Vulnerability-Resilience Index of administrative dongs in Seoul.
### Classified Vulnerability-Resilience Index (VRI) of administrative dongs in Seoul.
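The quartile classification into G1-G4 can be sketched with `pandas.qcut`; the VRI values below are randomly generated stand-ins for the 425 dong values, not the study's data:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
vri = pd.Series(rng.normal(size=425))  # stand-ins for the 425 dong VRI values

# Ascending quartiles: G1 = most vulnerable (lowest VRI), G4 = least vulnerable.
groups = pd.qcut(vri, q=4, labels=["G1", "G2", "G3", "G4"])
print(groups.value_counts().sort_index())
```

Quartile binning yields four groups of roughly 106 dongs each, mirroring the 106/106/107/106 split reported below.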
As a result of this analysis, 106 of the 425 administrative dongs were classified as G1. The administrative dongs in the G1 group, the most vulnerable to heat waves, are listed below: Yongsan-gu (Huam-dong, Yongsan 2-ga-dong, Namyeong-dong, Yongmun-dong, Itaewon 1-dong, Itaewon 2-dong, Cheongpa-dong, Hangang-ro dong), Jung-gu (Sogong-dong, Hoehyeon-dong, Pil-dong, Jangchung-dong, Gwanghui-dong, Jungnim-dong, Sindang-dong, Dasan-dong, Cheonggu-dong), Seongbuk-gu (Donam 1-dong, Donam 2-dong, Jeongneung 1-dong, Jeongneung 2-dong, Jeongneung 3-dong, Seongbuk-dong, Dongseon-dong, Anam-dong, Bomun-dong, Jangwi 1-dong, Jangwi 3-dong, Seokwan-dong), and Gwanak-gu (Cheongnim-dong, Haengun-dong, Nakseongdae-dong, Jungang-dong, Seowon-dong, Inheon-dong, Seorim-dong, Sillim-dong, Nanhyang-dong, Jowon-dong, Daehak-dong, Seonghyeon-dong, Sinsa-dong, Cheongryong-dong). On the other hand, 106 of the 425 administrative dongs were classified as G4; parts of the southern and northern districts of Seoul were relatively less vulnerable to heat waves. The G4 administrative dongs in the southern districts are Gangdong-gu (Cheonho 1-dong, Cheonho 3-dong, Seongnae 2-dong, Seongnae 3-dong, Dunchon 2-dong, Gangil-dong, Amasa 1-dong, Gil-dong, Cheonho 2-dong), Songpa-gu (Pungnap 1-dong, Geoyeo 1-dong, Macheon-1dong, Macheon-2dong, Garak 2-dong, Jamsilbon-dong, Jamsil 4-dong, Jangji-dong), Gangnam-gu (Sinsa-dong, Nonhyeon 1-dong, Nonhyeon 2-dong, Yeoksam 1-dong, Dogok 1-dong, Gaepo 4-dong, Ilwonbon-dong, Suseo-dong, Segok-dong, Cheongdam-dong, Apgujeong-dong), and Seocho-gu (Seocho 2-dong, Seocho 3-dong, Seocho 4-dong, Banpo 3-dong, Banpo 4-dong, Bangbae 4-dong, Naegok-dong, Yangjae 1-dong).
Also, administrative dongs in Northern district are Eunpyeong-gu (Galhyeon 1-dong, Bulgwang 1-dong, Daejo-dong, Jingwan-dong, Bulgwang 2-dong, Eungam 3-dong, Yeokchon-dong), Dongdaemun-gu (Cheongnyangni-dong, Yongshin-dong, Jegi-dong, Jeonnong 1-dong, Jangan 1-dong, Jangan 2-dong), Dobong-gu (Ssangmun 2-dong, Banghak 1-dong, Banghak 2-dong, Banghak 3-dong, Dobong 1-dong), and Nowon-gu (Gongneung 2-dong, Sanggye 1-dong, Sanggye 2-dong, Sanggye 5-dong, Sanggye 6∙7-dong, Junggye 2∙3-dong, Gongneung 1-dong). Using the result of the spatial disparities in vulnerability based on VRI values, this study further analyzed the spatial relationship between cooling centers and vulnerable areas to extreme heat exposure.
Analysis of Variance
An analysis of variance was conducted to test differences among the four groups classified by VRI values. The model was used to examine the relationship between the number of cooling centers and the vulnerability groups in Seoul. A total of 33,161 cooling shelters were assigned to the four groups, from the administrative dongs most vulnerable to heat waves (G1) to the relatively least vulnerable areas (G4). The ANOVA was based on the hypothesis that the average number of cooling centers differs among the four groups classified by the VRI index.
The ANOVA found a statistically significant difference in the average number of cooling centers across the four groups in Seoul (see Figure 3 and Table 2). The average number of cooling shelters was smallest in G1, the group of areas most vulnerable to heat waves, at 58.53, while it was largest in G4, the group relatively least susceptible to extreme heat exposure, at 95.23. In other words, cooling centers are concentrated in relatively less vulnerable areas rather than in the areas most vulnerable to heat waves. Since cooling centers are designated to increase adaptability to heat waves, more shelters should be installed in the administrative dongs grouped as G1, the most vulnerable in Seoul. The ANOVA result, however, shows the opposite distribution; it is therefore necessary to provide additional cooling centers to the administrative dongs grouped as G1 and G2.
### ANOVA test of the average number of cooling centers in the groups.
#### Descriptive statistics and results of ANOVA test 1
| Groups | Number of observations | Sum of cooling shelters | Average | Dispersion |
|---|---|---|---|---|
| G1 | 106 | 6,204 | 58.53 | 857.26 |
| G2 | 106 | 7,492 | 70.68 | 1,090.20 |
| G3 | 107 | 7,780 | 72.71 | 1,614.02 |
| G4 | 106 | 10,094 | 95.23 | 2,628.16 |

| Variable factor | Sum of squares | Degree of freedom | Square average | F ratio | P-value | F reject |
|---|---|---|---|---|---|---|
| Processing | 74,446.02 | 3 | 24,815.34 | 16.04 | 6.8871E-10 | 2.6261 |
| Residual | 651,526.1 | 421 | 1,547.57 | | | |
| System | 725,972.1 | 424 | | | | |
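For readers who want to check the figures, the F ratio in Table 2 can be reproduced from the per-group summary statistics alone. The following pure-Python sketch is an illustration, not part of the study's analysis; it assumes the reported dispersions are sample variances with n−1 denominators:

```python
# One-way ANOVA F ratio recomputed from per-group summary statistics.
def anova_f(sizes, means, variances):
    n_total = sum(sizes)
    grand_mean = sum(n * m for n, m in zip(sizes, means)) / n_total
    # between-group sum of squares from the group means
    ss_between = sum(n * (m - grand_mean) ** 2 for n, m in zip(sizes, means))
    # within-group sum of squares from the (sample) variances
    ss_within = sum((n - 1) * v for n, v in zip(sizes, variances))
    df_between = len(sizes) - 1
    df_within = n_total - len(sizes)
    return (ss_between / df_between) / (ss_within / df_within)

sizes = [106, 106, 107, 106]
means = [58.53, 70.68, 72.71, 95.23]
variances = [857.26, 1090.20, 1614.02, 2628.16]
print(anova_f(sizes, means, variances))  # close to the F ratio of 16.04 in Table 2
```

The small discrepancy from the tabulated sums of squares comes from the rounding of the group means in the table.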
For further analysis, a second ANOVA was conducted on the distribution of cooling shelters per square kilometer, to check whether the cooling shelters are properly distributed given the area of each administrative dong in Seoul. The number of cooling shelters per square kilometer was calculated by dividing the number of cooling centers by the area of each administrative dong.
As a result of this ANOVA test of whether there were statistically significant differences between groups in the number of cooling centers per square kilometer, the average differences in the spatial distribution of cooling shelters per unit area (1 km2) from G1 to G4 were found to be statistically significant. The average number of shelters per unit area for each group was as follows: 73.94 for G1, 91.90 for G2, 72.84 for G3, and 71.48 for G4 (see Figure 4 and Table 3). Except for the G2 group, the differences in the average number of cooling shelters among the three remaining groups were small. To give people better access to cooling centers, more facilities should be placed in the most vulnerable area (G1). However, as the differences among the groups were not large, it can be interpreted that cooling centers in Seoul have been supplied with a focus on their total number, without considering the spatial disparities in vulnerability across administrative dongs.
### ANOVA test of the average number of cooling centers/km2 in the groups.
#### Descriptive statistics and results of ANOVA test 2
| Groups | Number of observations | Sum of cooling shelters | Average | Dispersion |
|---|---|---|---|---|
| G1 | 106 | 7,838 | 73.94 | 2,344.19 |
| G2 | 106 | 9,741 | 91.90 | 3,812.80 |
| G3 | 107 | 7,793 | 72.84 | 2,134.59 |
| G4 | 106 | 7,577 | 71.48 | 2,604.66 |

| Variable factor | Sum of squares | Degree of freedom | Square average | F ratio | P-value | F reject |
|---|---|---|---|---|---|---|
| Processing | 29,492.09 | 3 | 9,830.69 | 3.6106 | 0.0134 | 2.6260 |
| Residual | 1,146,240 | 421 | 2,722.66 | | | |
| System | 1,175,732 | 424 | | | | |
Cooling shelters, as one of the adaptation strategies to extreme heat exposure, should be provided in consideration of the spatial disparities in vulnerability across administrative dongs to increase their effectiveness as a policy instrument. However, the results of this study showed that only a quantitative supply of cooling centers has been implemented, without careful consideration of vulnerability in its various aspects. Many cooling centers are located in relatively less vulnerable areas rather than in the administrative dongs most vulnerable to heat waves. Therefore, cooling shelters should be supplied with vulnerability to heat waves in mind, especially in the areas classified into the relatively vulnerable G1 and G2 groups.
Hotspot analysis
Based on the results of the heat wave vulnerability analysis, this study conducted a LISA analysis to find the areas where the supply of cooling centers is needed most. LISA is a local spatial autocorrelation method that compares a region's index value with those of its neighbors to derive spatially clustered regions. The spatial associations derived through LISA analysis fall into four categories: HH (high-high) regions adjacent to regions with high index values, LL (low-low) regions adjacent to regions with low values, and HL (high-low) and LH (low-high) regions, which are considered spatial outliers [27]. Therefore, this study conducted the LISA analysis especially to find spatial clusters of the HH and LL types. The LISA results are shown in Figure 5. Since this study conducted the LISA analysis using the VRI, careful interpretation is necessary. As the VRI is an indicator that combines both vulnerability and resilience concepts, a higher VRI means lower vulnerability, while a lower VRI means higher vulnerability. Therefore, compared to the typical interpretation of LISA results, the administrative dongs belonging to HH (high-high) clusters should be interpreted as the least vulnerable to heat waves, and those belonging to LL (low-low) clusters as the most vulnerable. This is summarized as follows.
• High-high (HH): Areas where both the administrative dong's VRI and the surrounding areas' VRI are high. Relatively less vulnerable administrative dongs.
• Low-low (LL): Areas where both the VRI of the district and the VRI of the surrounding region are low. The most vulnerable administrative dongs.
• High-low (HL): Areas with a high VRI surrounded by areas with a relatively low VRI.
• Low-high (LH): Areas with a low VRI surrounded by areas with a relatively high VRI.
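The classification step behind these four categories can be illustrated with a small Python sketch. This is a simplified toy example with hypothetical values and adjacency; a full local Moran's I analysis also involves spatial weights and significance testing, which this sketch omits:

```python
# Toy LISA-style classification: each unit is labeled HH/LL/HL/LH by comparing
# its own deviation from the mean against the mean deviation of its neighbors.
def lisa_classes(values, neighbors):
    mean = sum(values) / len(values)
    z = [v - mean for v in values]  # deviations from the global mean
    labels = []
    for i in range(len(values)):
        # spatial lag: average deviation of unit i's neighbors
        lag = sum(z[j] for j in neighbors[i]) / len(neighbors[i])
        if z[i] >= 0:
            labels.append("HH" if lag >= 0 else "HL")
        else:
            labels.append("LL" if lag < 0 else "LH")
    return labels

# four toy districts: two high-VRI neighbors and two low-VRI neighbors
vri = [10, 9, 1, 2]
adjacency = {0: [1], 1: [0], 2: [3], 3: [2]}
print(lisa_classes(vri, adjacency))  # ['HH', 'HH', 'LL', 'LL']
```

Under the VRI interpretation above, the two LL districts in this toy example would be the most vulnerable cluster.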
### LISA cluster analysis results of each administrative dong in Seoul.
As a result of the LISA analysis, as shown in Table 4, HIGH-HIGH areas appeared in some administrative dongs of Gangnam-gu, Gangdong-gu, Nowon-gu, Seocho-gu, Songpa-gu, and Eunpyeong-gu. This means that the administrative dongs clustered in the HIGH-HIGH areas are relatively less vulnerable to heat waves. On the other hand, some administrative dongs in Jongno-gu, Jung-gu, Yongsan-gu, Gwanak-gu, Dongjak-gu, Dongdaemun-gu, Seongbuk-gu, Yangcheon-gu, and Yeongdeungpo-gu were classified as LOW-LOW areas. In particular, the largest cluster appeared in the administrative dongs of Jung-gu (Sogong-dong, Hoehyeon-dong, Pil-dong, Jangchung-dong, Gwanghui-dong, Sindang 5-dong, Jungnim-dong, Sindang-dong, Dasan-dong, Cheonggu-dong, Donghwa-dong), Yongsan-gu (Huam-dong, Yongsan 2-ga-dong, Namyeong-dong, Hyochang-dong, Yongmun-dong, Itaewon 1-dong, Itaewon 2-dong, Cheongpa-dong, Wonhyo-ro 1-dong, Hangang-ro 1-dong), Seongbuk-gu (Donam 1-dong, Donam 2-dong, Jeongneung 1-dong, Jeongneung 2-dong, Jeongneung 3-dong, Gileum 1-dong, Gileum 2-dong, Seongbuk-dong, Dongseon-dong), and Gwanak-gu (Cheongnim-dong, Haengun-dong, Nakseongdae-dong, Jungang-dong, Inheon-dong, Seorim-dong, Sinsa-dong, Cheongryong-dong, and Samseong-dong). This means that relatively vulnerable administrative dongs are concentrated in Jung-gu, Yongsan-gu, Seongbuk-gu, and Gwanak-gu. Therefore, cooling centers that can increase the ability to adapt to heat waves should be provided to these areas first.
#### Classification of administrative dongs by LISA cluster analysis
Classes Corresponding Administrative Dongs High-High Hagye 1-dong, Galhyeon 1-dong, Galhyeon 2-dong, Bulgwang 2-dong, Seocho 2-dong, Seocho 3-dong, Seocho 4-dong, Jamwon-dong, Yangjae 2-dong, Sinsa-dong, Nonhyeon 1-dong, Nonhyeon 2-dong, Samseong 2-dong, Yeoksam 1-dong, Yeoksam 2-dong , Dogok 1-dong, Gaepo 1-dong, Apgujeong-dong, Pungnap 1-dong, Geocheon-dong, Macheon-dong, Macheon-dong, Cheonho 3-dong, Seongnae 1-dong, Seongnae 2-dong, Seongnae 3-dong, Dunchon 2-dong, Cheonho 2-dong, Geoyeo 1-dong, Wirye-dong. Low-Low Sajik-dong, Gyonam-dong, Sogong-dong, Hoehyeon-dong, Pil-dong, Jangchung-dong, Gwanghui-dong, Sindang 5-dong, Jungnim-dong, Sindang-dong, Dasan-dong, Cheonggu-dong, Donghwa-dong, Huam-dong, Yongsan 2-ga-dong, Namyeong-dong, Hyochang-dong, Yongmun-dong, Itaewon 1-dong, Itaewon 2-dong, Cheongpa-dong, Wonhyo-ro 1-dong, Hangang-ro 1-dong, Kumho 1-ga-dong, Songjeong-dong, Kumho 2.3-ga-dong, Imun 1-dong, Imun 2-dong, Junghwa 2-dong, Donam 1-dong, Donam 2-dong, Jeongneung 1-dong, Jeongneung 2-dong, Jeongneung 3-dong, Gileum 1-dong, Gileum 2-dong, Seongbuk-dong, Dongseon-dong, Samyang-dong, Chunghyeon-dong, Mok 2-dong, Mok 3-dong, Hwagok 6-dong, Singil 4-dong, Singil 6-dong, Daelim 1-dong, Sadang 1-dong, Sadang 3-dong, Sadang 4-dong, Sadang 5-dong, Sindaebang 1-dong, Cheongnim-dong, Haengun-dong, Nakseongdae-dong, Jungang-dong, Inheon-dong, Seorim-dong, Sinsa-dong, Cheongryong-dong, and Samseong-dong High-Low Pyeongchang-dong, Muak-dong, Ewha-dong, Cheongunhyoja-dong, Myeong-dong, Euljiro-dong, Yaksu-dong, Jangwi 2-dong, Jongam-dong, Bukhyeon-dong, Gongdeok-dong, Sinwon-dong, Euncheon-dong, Nangok-dong, Miseong-dong, Jamsil Bon-dong Low-High Mangwoo 3-dong, Gusan-dong, Seocho 1-dong, Banpo 1-dong, Dogok 2-dong, and Dunchon 1-dong
Conclusions
This study evaluated the vulnerability-resilience to heat waves of the 425 administrative dongs in Seoul and analyzed its relationship with the existing spatial distribution of cooling shelters. A variance analysis was conducted to confirm whether the heat wave vulnerability of administrative dongs and the spatial distribution of cooling centers had a significant relationship. The analysis showed that the average number of cooling centers was smallest in the group most vulnerable to heat waves. The analysis of the number of cooling centers per unit area showed similar differences among the vulnerability groups. This can be interpreted to mean that only a quantitative supply of cooling centers has been made, without considering the spatial disparities in vulnerability of each administrative dong in Seoul.
In addition, the LISA cluster analysis revealed the areas vulnerable to heat waves in which additional cooling shelters are required. The analysis found that relatively vulnerable administrative dongs are concentrated in Jung-gu, Yongsan-gu, Seongbuk-gu, and Gwanak-gu. Considering regional heat wave vulnerability when supplying cooling shelters can increase the service accessibility of populations vulnerable to extreme heat exposure. As these people gain better access to the facilities, the effectiveness of the city's mitigation strategy for heat waves will be enhanced. Therefore, to increase each district's ability to adapt to heat wave events, additional cooling centers should be provided to administrative dongs in priority order based on vulnerability.
This study makes the following contributions. First, it confirmed that cooling centers in Seoul have been implemented across administrative districts, but that they are spatially distributed without considering each administrative dong's vulnerability to heat waves. This can lead to ineffective utilization of cooling shelters by lowering the access of populations vulnerable to extreme heat. Therefore, to increase the effectiveness of the adaptation strategy for extreme heat exposure, additional cooling shelters are suggested for the areas vulnerable to heat waves. Based on the results of this study, the priority administrative dongs for additional cooling centers are listed in Table 5. The priority areas are the LL-cluster and G1 administrative dongs, where the most vulnerable administrative dongs are concentrated. Additional cooling centers in these areas can effectively mitigate heat wave vulnerability.
#### Priority administrative dongs for cooling centers
Administrative Dongs Priority administrative dongs for cooling centers Sajik-dong, Sogong-dong, Hoehyeon-dong, Pil-dong, Jangchung-dong, Gwanghui-dong, Jungnim-dong, Sindang-dong, Dasan-dong, Cheonggu-dong, Huam-dong, Yongsan 2-ga-dong, Namyeong-dong, Yongmun-dong, Itaewon 1-dong, Itaewon 2-dong, Cheongpa-dong, and Hangang-ro dong, Imun 2-dong, Donam 1-dong, Donam 2-dong, Jeongneung 1-dong, Jeongneung 2-dong, Jeongneung 3-dong, Seongbuk-dong, Dongseon-dong, Chunghyeon-dong, Singil 4-dong, Daelim 1-dong, Sadang 3-dong, Sadang 5-dong, Cheongnim-dong, Haengun-dong, Nakseongdae-dong, Jungang-dong, Inheon-dong, Seorim-dong, Sinsa-dong, Cheongryong-dong
Second, this study utilized floating population and S-DoT spatial big datasets that were not fully considered in previous studies. Since previous research relied on static census population data and data from the Korea Meteorological Administration, it was difficult to analyze the actual influence of heat waves on the floating population or to consider detailed climate information at the administrative dong level. Utilizing these spatial big datasets makes it possible to identify areas vulnerable to heat waves more precisely. Compared to the study by Choi et al., which derived vulnerable areas for administrative dongs in Seoul, our study found similar vulnerable districts in Yongsan-gu, Jung-gu, Eunpyeong-gu, and Seocho-gu [13]. However, unlike the previous research, our results showed that Gangnam-gu was a less vulnerable area and Gwanak-gu a vulnerable area to heat waves. This difference suggests that the floating population, household income, and S-DoT data used in our study were significant factors in analyzing spatial disparities in heat wave vulnerability.
This study analyzed the areas that require additional cooling centers considering the spatial distribution of cooling centers and heat wave vulnerability at the administrative dong level. However, actual accessibility to the cooling shelters was not carefully considered, because only the number of cooling centers distributed in each administrative dong was taken into account. There is also a limitation that the results may differ from the actual vulnerability to heat waves because the variables in the VRI are not weighted. Therefore, future studies should consider the service areas of the cooling centers and derive more accurate vulnerable areas by weighting the variables in the VRI analysis. Furthermore, this study used only temperature data among the climate information provided by S-DoT. Further research considering both temperature and relative humidity would evaluate vulnerability to extreme heat exposure more accurately.
## References
1
Korea Disease Control and Prevention Agency (KCDC), Results of the 2021 heat-related illness surveillance. Public Health Weekly Report, 14(46) (2021), pp. 3251-3263.
2
Korea Environment Institute, Heat Wave Impact Report. (2020).
3
H.C. Ahn, C.W. Shon, I.C. Shin, I.H. Jang, H.C. Pahk, H.M. Cho, J.A. Kim, J.H. Lee, H.R. Lee, and J.W. Choi, How does the heat wave change the Seoulite's lifestyle. The Seoul Institute, (2021).
4
D.W. Kim, C.Y. Kwon, J.E. Kim, and J.S. Lee, Characteristics of Heat Waves From a Disaster Perspective. Journal of Preventive Medicine and Public Health. 53(1) (2020), pp. 26-28. DOI: 10.3961/jpmph.19.315.
5
C.Y. Kwon, J.E. Kim, M.H. Lee, and D.W. Kim, Development of an Evaluation Methodology for Accessibility of Cooling Centers in Ulsan using Landsat Images. KCIS a collection of academic papers. 2020(2) (2020), pp. 496-497.
6
Seoul Welfare Policy Office, 2021 Designation and Support Criteria for Senior Citizens' Hot Shelters [WWW Document], n.d. [Online], 2021. Available at: https://opengov.seoul.go.kr/sanction/22840083 [Accessed 15/07/2022].
7
S.G. Nayak, S. Shrestha, S.C. Sheridan, W.H. Hsu, N.A. Muscatiello, C.I. Pantea, Z. Ross, P.L. Kinney, M. Zdeb, S.A. Hwang, and S. Lin, Accessibility of cooling centers to heat-vulnerable populations in New York State. Journal of Transport & Health. 14 (2019), 100563. DOI: 10.1016/j.jth.2019.05.002.
8
J.W. Kim, A Study on the Cooling Center Manual of Facility and Maintenance for Extreme Heat Disaster. Journal of The Korean Society of Hazard Mitigation. 8(4) (2008), pp. 17-22.
9
D.S. Kim, J.C. Park, and Y.R. Chae, The Policy Measures to Reduce Heat-Wave Damage of Vulnerable Groups in Korea. Journal of Environmental Policy and Administration. 28(2) (2020), pp. 211-230. DOI: 10.15301/jepa.2020.28.2.211.
10
Y.R. Chae, Y.J. Ahn, and D.S. Kim, A Study on the Effectiveness of Heat Shelter in preparation for Heat Wave. KEI Focus. 4(1) (2016), pp. 1-28.
11
H.M. Cho, J.H. Ha, and S.G. Lee, Exploring Physical Environments, Demographic and Socioeconomic Characteristics of Urban Heat Island Effect Areas in Seoul. Journal of the Korean Regional Science Association. 35(4) (2019). DOI: 10.22669/krsa.2019.35.4.061.
12
M.K. Bae, B.E. Kim, and C.Y. Lee, Analysis on the Spatial Relationship between the Residential Area of the Vulnerable Groups and the Hazardous Area during the Heat Wave. Journal of Environmental Policy and Administration. 28(3) (2020). DOI: 10.15301/jepa.2020.28.3.243.
13
Y.S. Choi, J.W. Kim, and U. Lim, An Analysis on the Spatial Patterns of Heat Wave Vulnerable Areas and Adaptive Capacity Vulnerable Areas in Seoul. Journal of Korea Planning Association. 53(7) (2018), pp. 87-107. DOI: 10.17208/jkpa.2018.12.53.7.87.
14
S.B. Yoon, The Spatial Distribution Characteristics of Disaster-vulnerable Population and Cooling center. Master's thesis at the Graduate School of Pusan National University. (2022).
15
H.S. Ahn, J.W. Lee, and A. Hong, Urban form and air pollution: Clustering patterns of urban form factors related to particulate matter in Seoul, Korea. Sustainable Cities and Society. 81 (2022), 103859. DOI: 10.1016/j.scs.2022.103859.
16
D.Y. Yoo, Analysis of fine dust exposure mechanisms and estimation of vulnerable walking areas using IoT city big data. Master's thesis at University of Seoul. (2022).
17
G.S. Shin, Assessing the risk of sidewalks due to freezing: a case study in Mia-dong. Master's thesis at Seoul National University General Graduate School. (2021).
18
Korea Disease Control and Prevention Agency (KCDC), Annual Report on the Notified Patients with Heat-related illness in Korea, Seoul [WWW Document], [Online], 2020. Available at: https://www.kdca.go.kr/contents.es?mid=a20308040106 [Accessed 18/08/2022].
19
J.J. McCarthy, O.F. Canziani, N.A. Leary, D.J. Dokken, and K.S. White, Climate Change 2001: Impacts, Adaptation, and Vulnerability: contribution of working group II to the third assessment report of the Intergovernmental Panel on Climate Change. 2001, Cambridge: Cambridge University Press.
20
J.K. Koh and H.S. Kim, A Study on Vulnerability Assessment to Climate Change in Gyeonggi-Do. Gyeonggi Development Institute. (2009).
21
H.S. Hwang and B.S. Byun, Building Vulnerability Index on Climate Change: Focused on Seoul Metropolitan City. Journal of Environmental Policy and Administration. 19(4) (2011), pp. 93-119.
22
J.S. Lee and H.I. Choi, Comparison of Flood Vulnerability Assessment Outcomes by Classification Schemes for Vulnerability Components to Climate Change. Journal of The Korean Society of Hazard Mitigation. 18(3) (2018), pp. 221-229. DOI: 10.9798/KOSHAM.2018.18.3.221.
23
K.W. Kim, B.C. Park, J.B. Heo, J.Y. Kang, and I.J. Lee, Assessment of Heat Wave Vulnerability in Busan Using the IPCC Climate Change Vulnerability Assessment Framework. The Korea Spatial Planning Review. 104 (2020), pp. 23-38.
24
D.W. Kim, J.E. Kim, C.R. Jang, and M.Y. Jang, Assessment of Heatwave Vulnerability in Korea Considering Socio-economic Indices. Journal of The Korean Society of Hazard Mitigation. 21(5) (2021), pp. 39-47. DOI: 10.9798/KOSHAM.2021.21.5.39.
25
G.Y. Yoo and I.A. Kim, Development and application of a climate change vulnerability index. Korea Environment Institute. (2008).
26
R.H. Moss, A.L. Brenkert, and E.L. Malone, Vulnerability to Climate Change: A Quantitative Approach. Report No. PNNL-SA-33642, Pacific Northwest National Laboratory, Washington DC. (2001).
27
H.Y. Lee and J.H. Shim, GIS Geographic Information: Theory and Practice. 2011, Seoul, Korea: Beopmunsa.
https://www.ctan.org/ctan-ann/pkg/gincltex
# Announcements for gincltex
## gincltex – Include TeX files as graphics (.tex support for \includegraphics)
The package builds on the standard packages graphics and/or graphicx and allows external source files to be included, in the same way as graphic files, by \includegraphics. In effect, the package adds support for the .tex extension.
Some of the lower level operations like clipping and trimming are implemented using the adjustbox package which includes native pdf support and uses the pgf package for other output formats.
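A minimal usage sketch of what the announcement describes — the file name `diagram.tex` and the option values here are illustrative assumptions, not taken from the package documentation:

```latex
\documentclass{article}
\usepackage{graphicx}
\usepackage{gincltex} % adds .tex support to \includegraphics

\begin{document}
% diagram.tex is compiled and placed like any other graphic file;
% clip/trim are handled through the adjustbox package
\includegraphics[width=.5\linewidth, clip, trim=2pt 2pt 2pt 2pt]{diagram.tex}
\end{document}
```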
Package: gincltex. Version: 0.3. Copyright: 2009 Martin Scharrer. Maintainer: Martin Scharrer.
Atom 1.0 feed with announcements for package gincltex.
http://www.cs.cmu.edu/~15110-s13/pa3/index.html
# 15110 Spring 2013 [Kaynar/Gunawardena]
## Programming Assignment 3 - due Tuesday, February 5 by 11:59 pm
Note: You are responsible for protecting your solutions to these problems from being seen by other students both physically (e.g., by looking over your shoulder) and electronically. In particular, since the lab machines use the Andrew File System to share files worldwide, you need to be careful that you do not put files in a place that is publicly accessible.
If you are doing the assignment on the Gates-Hillman Cluster machines we use in lab or on unix.andrew.cmu.edu, our recommendation is that you create a pa3 directory under ~/private/15110 for this assignment. That is, the new directory pa3 is inside a directory called 15110, which is inside the directory called private, which is inside your home directory. (Every new andrew account is set up with a private subdirectory within the account's home directory that only that account can access.) Please refer to Setting up Directories for information about managing your directories.
### Overview
For this assignment, you will create a Ruby source file (that is, a text file containing Ruby code) for each of the problems below. You should store all of these files in a folder named pa3, which you will zip up and submit.
As you will discover this semester, computer programs are rarely correct when they are first written. They often have bugs. In addition to writing the code, you should test your code on multiple inputs, for which you independently know the correct output (e.g., by plugging the inputs into a calculator).
### The Algorithms
1. [2 point] DNA has the shape of a double helix which consists of two bonded nucleotide chains. Each base in these chains can bond with only one other base. Therefore, the sequence of these two chains must match precisely to form a double helix. The four bases are A (adenine), T (thymine), C (cytosine), and G (guanine). A can bond with T, and C can bond with G.
Write a Ruby function called DNA_match(base), in DNA_match.rb, that takes in a character and returns the complementary base as a character. Your function should follow the algorithm below:
• If base is "A", return "T".
• Otherwise, if base is "T", return "A".
• Otherwise, if base is "C", return "G".
• Otherwise, return "C".
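The four branches above amount to a lookup with "C" as the fallback. For comparison only, here is a Python sketch (the assignment itself must be written in Ruby):

```python
# The branch structure is equivalent to a dictionary lookup, with "C" as the
# default for any other input (including "G").
def dna_match(base):
    return {"A": "T", "T": "A", "C": "G"}.get(base, "C")

print(dna_match("A"))  # T
print(dna_match("G"))  # C
```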
Example Usage:
=> true
>> DNA_match("A")
=> "T"
>> DNA_match("T")
=> "A"
2. [2 point] Recall from pa2 that $/$ and $\%$ can be used to isolate digits in an integer. Write the function digits_squared_sum(n), in digits_squared_sum.rb, that takes a non-negative integer and returns the sum of the squares of the digits.
For example, the integer $152$ becomes $1^2 + 5^2 + 2^2 = 1 + 25 + 4 = 30$.
Use the following algorithm:
1. Set sum to $0$.
2. While n is not $0$, do the following:
1. Isolate the rightmost digit in n and set digit to this isolated value.
2. Divide n by $10$ so that the rightmost digit is no longer there and set n to this new value.
3. Add $digit^2$ to sum and set sum to this new value.
3. Return sum.
Hint: Apply the above algorithm to the given example (number 152) to observe how it works before coding your function in Ruby.
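The loop translates almost line-for-line into code. For comparison only, here is a Python sketch (the assignment itself must be written in Ruby):

```python
def digits_squared_sum(n):
    total = 0
    while n != 0:
        digit = n % 10       # step 2.1: isolate the rightmost digit
        n //= 10             # step 2.2: drop the rightmost digit
        total += digit ** 2  # step 2.3: accumulate its square
    return total

print(digits_squared_sum(152))   # 30
print(digits_squared_sum(3700))  # 58
```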
Example Usage:
=> true
>> digits_squared_sum(152)
=> 30
>> digits_squared_sum(3700)
=> 58
3. [2 points] ASCII-art.
By printing out different characters at different locations, it is possible to create images. This is sometimes called ASCII art and works best in a terminal that uses a fixed-width font. Regular shapes, such as the right triangle shown below, are easy to create - even at different sizes - algorithmically.
*
* *
*   *
*     *
*       *
* * * * * *
This right triangle can be created using the following algorithm, which requires the triangle's height (that is, the number of lines in the triangle). It assumes that height is non-negative.
First, print out 1 asterisk("*") on the first line (this is row 1.). Then, for the next $height - 2$ lines, print out one asterisk, print out $2*row - 3$ spaces where $row$ represents the current row number that is being printed, and then print one asterisk. Finally, for the last line, print out $height$ copies of an asterisk followed by a space ("* ").
Note that a degenerate case of a right triangle arises when $height = 1$. In that case, we would not need to perform all of the steps described above; we would print a single asterisk.
Create a Ruby function right_triangle(height) (in right_triangle.rb) that implements the algorithm described above. Your function must return nil.
Hint: You will probably need a loop inside of a loop for part of your solution. If you use a loop inside of another loop, make sure to use a different loop variable for each loop (e.g. if the outer loop uses "i", the inner loop can use "j").
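For comparison only, here is a Python sketch of the algorithm (the assignment must be written in Ruby; the helper triangle_lines is an illustrative addition that makes the rows easy to inspect):

```python
def triangle_lines(height):
    # build the rows of the right triangle as a list of strings
    if height == 1:
        return ["*"]              # degenerate case: a single asterisk
    rows = ["*"]                  # row 1
    for row in range(2, height):  # middle rows 2 .. height-1
        rows.append("*" + " " * (2 * row - 3) + "*")
    rows.append("* " * height)    # last row: height copies of "* "
    return rows

def right_triangle(height):
    for line in triangle_lines(height):
        print(line)

right_triangle(5)
```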
Example usage:
>> right_triangle(5)
*
* *
*   *
*     *
* * * * *
=> nil
>> right_triangle(8)
*
* *
*   *
*     *
*       *
*         *
*           *
* * * * * * * *
=> nil
4. Imagine how a soda machine works. Before the machine dispenses a can, it expects to be paid for the can (often in exact change). The machine displays how much you owe, and as you feed it coins, it tells you how much you still need to pay. Say a can of soda costs $\$0.95$. You feed the machine a quarter. It will now read that you owe it $\$0.70$. You keep giving the machine change until you have paid the entire amount and receive your soda. Assume for the following questions that soda machines only take quarters, dimes, nickels, and pennies.
1. [2 points] Write the function count_coins(price), in coins.rb, that returns the fewest number of coins needed to pay for a can of soda that costs price dollars. Use the following algorithm:
UPDATED!!!
1. Set amount_not_paid equal to (price * 100).to_i()
2. Set coins_needed equal to $0$.
3. While amount_not_paid is greater than $0.1$, do the following:
1. If amount_not_paid is greater than or equal to $25$, do the following:
1. Decrease amount_not_paid by $25$.
2. Add $1$ to coins_needed
2. Otherwise, if amount_not_paid is greater than or equal to $10$ and less than $25$, do the following:
1. Decrease amount_not_paid by $10$.
2. Add $1$ to coins_needed
3. Otherwise, if amount_not_paid is greater than or equal to $5$ and less than $10$, do the following:
1. Decrease amount_not_paid by $5$.
2. Add $1$ to coins_needed
4. Otherwise, do the following:
1. Decrease amount_not_paid by $1$.
2. Add $1$ to coins_needed
4. Return coins_needed.
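The four branches of step 3 repeat the same pattern for each denomination, so they can be collapsed into a loop. For comparison only, here is a Python sketch (the assignment must be written in Ruby; rounding before truncating is an extra safeguard against floating-point artifacts and deviates slightly from the (price * 100).to_i() in step 1):

```python
def count_coins(price):
    # rounding guards against float artifacts like 1.17 * 100 == 116.999...
    amount_not_paid = int(round(price * 100))
    coins_needed = 0
    # greedily spend the largest denomination that still fits
    for denom in (25, 10, 5, 1):
        coins_needed += amount_not_paid // denom
        amount_not_paid %= denom
    return coins_needed

print(count_coins(1.17))  # 8
print(count_coins(0.11))  # 2
```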
2. [2 points] Write the function which_coins(price) in coins.rb (in addition to the code for count_coins), that prints out how many of each coin type is needed. Use the same basic format as in part a, except instead of keeping track of coins_needed, keep variables for quarter, dime, nickle, and penny. Don't forget to return nil.
Example Usage:
=> true
>> count_coins(1.17)
=> 8
>> which_coins(1.17)
You need 4 quarter(s), 1 dime(s), 1 nickle(s), and 2 penny(s).
=> nil
>> count_coins(0.11)
=> 2
>> which_coins(0.11)
You need 0 quarter(s), 1 dime(s), 0 nickle(s), and 1 penny(s).
=> nil
### Submission
You should now have a pa3 directory that contains the files DNA_match.rb, digits_squared_sum.rb, right_triangle.rb, and coins.rb each containing the corresponding function(s). Zip up your directory and hand it in.
https://izbicki.me/blog/fast-nearest-neighbor-queries-in-haskell.html
# Fast Nearest Neighbor Queries in Haskell
posted on 2015-07-21
Two weeks ago at ICML, I presented a method for making nearest neighbor queries faster. The paper is called Faster Cover Trees and discusses some algorithmic improvements to the cover tree data structure. You can find the code in the HLearn library on github.
The implementation was written entirely in Haskell, and it is the fastest existing method for nearest neighbor queries in arbitrary metric spaces. (If you want a non-Haskell interface, the repo comes with a standalone program called hlearn-allknn for finding nearest neighbors.) In this blog post, I want to discuss four lessons learned about using Haskell to write high performance numeric code. But before we get to those details, let’s see some benchmarks.
### The benchmarks
The benchmark task is to find the nearest neighbor of every point in the given dataset. We’ll use the four largest datasets in the mlpack benchmark suite and the Euclidean distance. (HLearn supports other distance metrics, but most of the compared libraries do not.) You can find more details about the compared libraries and instructions for reproducing the benchmarks in the repo’s bench/allknn folder.
the runtime of finding every nearest neighbor
Notice that HLearn’s cover trees perform the fastest on each dataset except for YearPredict. But HLearn can use all the cores on your computer to parallelize the task. Here’s how performance scales with the number of CPUs on an AWS EC2 c3.8xlarge instance with 16 true cores:
With parallelism, HLearn is now also the fastest on the YearPredict dataset by a wide margin. (FLANN is the only other library supporting parallelism, but in my tests with this dataset parallelism actually slowed it down for some reason.)
You can find a lot more cover tree specific benchmarks in the Faster Cover Trees paper.
### The lessons learned
The Haskell Wiki’s guide to performance explains the basics of writing fast code. But unfortunately, there are many details that the wiki doesn’t cover. So I’ve selected four lessons from this project that I think summarize the state-of-the-art in high performance Haskell coding.
Lesson 1: Polymorphic code in GHC is slower than it needs to be.
Haskell makes generic programming using polymorphism simple. My implementation of the cover tree takes full advantage of this. The CoverTree_ type implements the data structure that speeds up nearest neighbor queries. It is defined in the module HLearn.Data.SpaceTree.CoverTree as:
```haskell
data CoverTree_
        ( childC :: * -> * ) -- the container type to store children in
        ( leafC  :: * -> * ) -- the container type to store leaves in
        ( dp     :: * )      -- the type of the data points
    = Node
        { nodedp                :: !dp
        , level                 :: {-# UNPACK #-} !Int
        , nodeWeight            :: !(Scalar dp)
        , numdp                 :: !(Scalar dp)
        , maxDescendentDistance :: !(Scalar dp)
        , children              :: !(childC (CoverTree_ childC leafC dp))
        , leaves                :: !(leafC dp)
        }
```
Notice that every field except for level is polymorphic. A roughly equivalent C++ struct (using higher kinded templates) would look like:
```cpp
template < template <typename> typename childC
         , template <typename> typename leafC
         , typename dp
         >
struct CoverTree_
{
    dp                  *nodedp;
    int                  level;
    typename dp::Scalar *nodeWeight;
    typename dp::Scalar *numdp;
    typename dp::Scalar *maxDescendentDistance;
    childC<CoverTree_<childC,leafC,dp>> *children;
    leafC<dp>           *leaves;
};
```
Notice all of those nasty pointers in the C++ code above. These pointers destroy cache performance for two reasons. First, the pointers take up a significant amount of memory. This memory fills up the cache, blocking the data we actually care about from entering cache. Second, the memory the pointers reference can be in arbitrary locations. This causes the CPU prefetcher to load the wrong data into cache.
The solution to make the C++ code faster is obvious: remove the pointers. In Haskell terminology, we call this unpacking the fields of the Node constructor. Unfortunately, due to a bug in GHC (see issues #3990 and #7647 and a reddit discussion), these polymorphic fields cannot currently be unpacked. In principle, GHC’s polymorphism can be made a zero-cost abstraction similar to templates in C++; but we’re not yet there in practice.
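To make the cost concrete, here is a toy sketch (illustrative types, not HLearn's): GHC happily unpacks a monomorphic strict field, storing the value directly in the constructor, but a polymorphic strict field stays behind a pointer.

```haskell
-- Toy types (not HLearn's): with monomorphic strict fields, GHC stores the
-- Floats directly in the constructor when asked to UNPACK them...
data PointUnpacked = PointUnpacked {-# UNPACK #-} !Float {-# UNPACK #-} !Float

-- ...but a polymorphic strict field cannot currently be unpacked, so each
-- `a` stays behind a pointer (GHC issues #3990 and #7647):
data Pair a = Pair !a !a

dist :: PointUnpacked -> PointUnpacked -> Float
dist (PointUnpacked x1 y1) (PointUnpacked x2 y2) =
  sqrt ((x1 - x2) ^ (2 :: Int) + (y1 - y2) ^ (2 :: Int))

main :: IO ()
main = print (dist (PointUnpacked 0 0) (PointUnpacked 3 4))  -- 5.0
```

Every `PointUnpacked` lives in one contiguous heap object, while each `Pair` pays for two indirections even when `a` turns out to be `Float`.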
As a temporary work around, HLearn provides a variant of the cover tree specialized to work on unboxed vectors. It is defined in the module HLearn.Data.SpaceTree.CoverTree_Specialized as:
```haskell
data CoverTree_
        ( childC :: * -> * ) -- must be set to: BArray
        ( leafC  :: * -> * ) -- must be set to: UArray
        ( dp     :: * )      -- must be set to: Labeled' (UVector "dyn" Float) Int
    = Node
        { nodedp                :: {-# UNPACK #-} !(Labeled' (UVector "dyn" Float) Int)
        , nodeWeight            :: {-# UNPACK #-} !Float
        , level                 :: {-# UNPACK #-} !Int
        , numdp                 :: {-# UNPACK #-} !Float
        , maxDescendentDistance :: {-# UNPACK #-} !Float
        , children              :: {-# UNPACK #-} !(BArray (CoverTree_ childC leafC dp))
        , leaves                :: {-# UNPACK #-} !(UArray dp)
        }
```
Since the Node constructor no longer has polymorphic fields, all of its fields can be unpacked. The hlearn-allknn program imports this specialized cover tree type, giving a 10-30% speedup depending on the dataset. It’s a shame that I have to maintain two separate versions of the same code to get this speedup.
Lesson 2: High performance Haskell code must be written for specific versions of GHC.
Because Haskell code is so high level, it requires aggressive compiler optimizations to perform well. Normally, GHC combined with LLVM does an amazing job with these optimizations. But in complex code, sometimes these optimizations don’t get applied when you expect them. Even worse, different versions of GHC apply these optimizations differently. And worst of all, debugging problems related to GHC’s optimizer is hard.
I discovered this a few months ago when GHC 7.10 was released. I decided to upgrade HLearn’s code base to take advantage of the new compiler’s features. This upgrade caused a number of performance regressions which took me about a week to fix. The most insidious example happened in the findNeighbor function located within the HLearn.Data.SpaceTree.Algorithms module. The inner loop of this function looks like:
```haskell
go (Labeled' t dist) (Neighbor n distn) = if dist*ε > maxdist
    then Neighbor n distn
    else inline foldr' go leafres
        $ sortBy (\(Labeled' _ d1) (Labeled' _ d2) -> compare d2 d1)
        $ map (\t' -> Labeled' t' (distanceUB q (stNode t') (distnleaf+stMaxDescendentDistance t)))
        $ toList
        $ stChildren t
    where
        leafres@(Neighbor _ distnleaf) = inline foldr'
            (\dp n@(Neighbor _ distn') -> cata dp (distanceUB q dp distn') n)
            (cata (stNode t) dist (Neighbor n distn))
            (stLeaves t)
        maxdist = distn + stMaxDescendentDistance t
```
For our purposes right now, the important thing to note is that go contains two calls to foldr': one folds over the CoverTree_’s childC, and one over the leafC. In GHC 7.8, this wasn’t a problem. The compiler correctly specialized both functions to the appropriate container type, resulting in fast code.
But for some reason, GHC 7.10 did not want to specialize these functions. It decided to pass around huge class dictionaries for each function call, which is a well known cause of slow Haskell code. In my case, it resulted in more than a 20 times slowdown! Finding the cause of this slowdown was a painful exercise in reading GHC’s intermediate language core. The typical tutorials on debugging core use trivial examples of only a dozen or so lines of core code. But in my case, the core of the hlearn-allknn program was several hundred thousand lines long. Deciphering this core to find the slowdown’s cause was one of my more painful Haskell experiences. A tool that analyzed core to find function calls that contained excessive dictionary passing would make writing high performance Haskell code much easier.
Once I found the cause of the slowdown, fixing it was trivial. All I had to do was call the inline function on both calls to foldr'. In my experience, this is a common theme in writing high performance Haskell code: finding the cause of problems is hard, but fixing them is easy.
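The general shape of the fix can be sketched with a hypothetical polymorphic fold (sumElems is an illustrative name, not an HLearn function):

```haskell
-- Hypothetical illustration: a polymorphic fold that GHC may otherwise
-- compile with run-time Foldable/Num dictionaries at every call site.
sumElems :: (Foldable t, Num a) => t a -> a
sumElems = foldr (+) 0

-- Either pragma removes the dictionary passing: INLINE copies the body to
-- the call site, where it gets specialized in place...
{-# INLINE sumElems #-}
-- ...or SPECIALIZE emits a monomorphic copy for a known type:
-- {-# SPECIALIZE sumElems :: [Int] -> Int #-}

main :: IO ()
main = print (sumElems [1 .. 10 :: Int])  -- 55
```

Whether GHC does this on its own depends on optimization flags and compiler version, which is exactly why regressions like the one above can appear silently after an upgrade.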
Lesson 3: Immutability and laziness can make numeric code faster.
The standard advice in writing numeric Haskell code is to avoid laziness. This is usually true, but I want to provide an interesting counter example.
This lesson relates to the same go function above, and in particular the call to sortBy. sortBy is a standard Haskell function that uses a lazy merge sort. Lazy merge sort is a slow algorithm—typically more than 10 times slower than in-place quicksort. Profiling hlearn-allknn shows that the most expensive part of nearest neighbor search is calculating distances (taking about 80% of the run time), and the second most expensive part is the call to sortBy (taking about 10% of the run time). But I nevertheless claim that this lazy merge sort is actually making HLearn’s nearest neighbor queries much faster due to its immutability and its laziness.
We’ll start with immutability since it is pretty straightforward. Immutability makes parallelism easier and faster because there’s no need for separate threads to place locks on any of the containers.
Laziness is a bit trickier. If the explanation below doesn’t make sense, reading Section 2 of the paper where I discuss how a cover tree works might help. Let’s say we’re trying to find the nearest neighbor to a data point we’ve named q. We can first sort the children according to their distance from q, then look for the nearest neighbors in the sorted children. The key to the cover tree’s performance is that we don’t have to look in all of the subtrees. If we can prove that a subtree will never contain a point closer to q than a point we’ve already found, then we “prune” that subtree. Because of pruning, we will usually not descend into every child. So sorting the entire container of children is a waste of time—we should only sort the ones we actually visit. A lazy sort gives us this property for free! And that’s why lazy merge sort is faster than an in-place quick sort for this application.
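Here is a small self-contained demonstration of that effect (instrumentation only: it uses unsafePerformIO to count comparator calls, and assumes the lazy merge sort behind GHC's Data.List.sortBy):

```haskell
import Data.IORef (IORef, newIORef, modifyIORef', readIORef)
import Data.List (sortBy)
import System.IO.Unsafe (unsafePerformIO)

-- Instrumentation only: a global counter for comparator calls.
counter :: IORef Int
counter = unsafePerformIO (newIORef 0)
{-# NOINLINE counter #-}

countingCompare :: Int -> Int -> Ordering
countingCompare x y = unsafePerformIO $ do
  modifyIORef' counter (+ 1)
  return (compare x y)
{-# NOINLINE countingCompare #-}

main :: IO ()
main = do
  -- a scrambled permutation of 0..999 (37 is coprime to 1000)
  let xs = [ (i * 37) `mod` 1000 | i <- [1 .. 1000] ] :: [Int]
  print (head (sortBy countingCompare xs))   -- demand only the minimum
  afterHead <- readIORef counter
  print (last (sortBy countingCompare xs))   -- force a complete sort
  afterFull <- readIORef counter
  putStrLn $ "comparisons to get the head: " ++ show afterHead
          ++ "; for the full sort: " ++ show (afterFull - afterHead)
```

Demanding only the head does O(n) comparisons, while forcing the whole result does O(n log n); the cover tree's pruning gets the cheap case automatically.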
Lesson 4: Haskell’s standard libraries were not designed for fast numeric computing.
While developing this cover tree implementation, I encountered many limitations in Haskell’s standard libraries. To work around these limitations, I created an alternative standard library called SubHask. I have a lot to say about these limitations, but here I’ll restrict myself to the most important point for nearest neighbor queries: SubHask lets you safely create unboxed arrays of unboxed vectors, but the standard library does not. (Unboxed containers, like the UNPACK pragma mentioned above, let us avoid the overhead of indirections caused by the Haskell runtime. The Haskell wiki has a good explanation.) In my experiments, this simple optimization let me reduce cache misses by around 30%, causing the program to run about twice as fast!
The distinction between an array and a vector is important in SubHask—arrays are generic containers, and vectors are elements of a vector space. This distinction is what lets SubHask safely unbox vectors. Let me explain:
In SubHask, unboxed arrays are represented using the UArray :: * -> * type in SubHask.Algebra.Array. For example, UArray Int is the type of an unboxed array of ints. Arrays can have arbitrary length, and this makes it impossible to unbox an unboxed array. Vectors, on the other hand, must have a fixed dimension. Unboxed vectors in SubHask are represented using the UVector :: k -> * -> * type in SubHask.Algebra.Vector. The first type parameter k is a phantom type that specifies the dimension of the vector. So a vector of floats with 20 dimensions could be represented using the type UVector 20 Float. But often the size of a vector is not known at compile time. In this case, SubHask lets you use a string to identify the dimension of a vector. In hlearn-allknn, the data points are represented using the type UVector "dyn" Float. The compiler then statically guarantees that every variable of type UVector "dyn" Float will have the same dimension. This trick is what lets us create the type UArray (UVector "dyn" Float).
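A toy sketch of the phantom-dimension trick (greatly simplified; these are not SubHask's actual definitions):

```haskell
{-# LANGUAGE DataKinds, KindSignatures #-}
import GHC.TypeLits (Symbol)

-- The dimension tag `k` is a phantom type: it never appears on the
-- right-hand side, so it costs nothing at run time, but the compiler
-- rejects mixing vectors tagged with different dimensions.
newtype UVector (k :: Symbol) a = UVector [a]

-- Both arguments must carry the same dimension tag `k`.
addV :: Num a => UVector k a -> UVector k a -> UVector k a
addV (UVector xs) (UVector ys) = UVector (zipWith (+) xs ys)

main :: IO ()
main = do
  let u = UVector [1, 2] :: UVector "dyn" Int
      v = UVector [3, 4] :: UVector "dyn" Int
      UVector ws = addV u v
  print ws  -- [4,6]
  -- addV u (UVector [5] :: UVector "xy" Int)  -- rejected: "dyn" /= "xy"
```

Because every value of type `UVector "dyn" Float` is guaranteed to have the same length, an array of them can store the payloads contiguously without per-element size headers.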
The hlearn-allknn program exploits this unboxing by setting the leafC parameter of CoverTree_ to UArray. Then, we call the function packCT which rearranges the nodes in leafC to use the cache oblivious van Emde Boas tree layout. Unboxing by itself gives a modest 5% performance gain from the reduced overhead in the Haskell run time system; but unboxing combined with this data rearrangement actually cuts runtime in half!
Unfortunately, due to some current limitations in GHC, I’m still leaving some performance on the table. The childC parameter to CoverTree_ cannot be UArray because CoverTree_s can have a variable size depending on their number of children. Therefore, childC is typically set to the boxed array type BArray. The GHC limitation is that the run time system gives us no way to control where the elements of a BArray exist in memory. Therefore, we do not get the benefits of the CPU prefetcher. I’ve proposed a solution that involves adding new primops to the GHC compiler (see feature request #10652). Since there are typically more elements within the childC than the leafC on any given cover tree, I estimate that the speedup due to better cache usage of BArrays would be even larger than the speedup reported above.
### Conclusion
My experience is that Haskell can be amazingly fast, but it takes a lot of work to get this speed. I’d guess that it took about 10 times as much work to create my Haskell-based cover tree than it would have taken to create a similar C-based implementation. (I’m roughly as proficient in each language.) Furthermore, because of the current limitations in GHC I pointed out above, the C version would be just a bit faster.
So then why did I use Haskell? To make cover trees 10 times easier for programmers to use.
Cover trees can make almost all machine learning algorithms faster. For example, they’ve sped up SVMs, clustering, dimensionality reduction, and reinforcement learning. But most libraries do not take advantage of this optimization because it is relatively time consuming for a programmer to do by hand. Fortunately, the fundamental techniques are simple enough that they can be implemented as a compiler optimization pass, and Haskell has great support for libraries that implement their own optimizations. So Real Soon Now (TM) I hope to show you all how cover trees can speed up your Haskell-based machine learning without any programmer involvement at all.
http://www.lmfdb.org/knowledge/show/st_group.roadmap
## Sato-Tate groups
• Add generators for all groups
• Add formula for the Haar measure
## Available data
• Sato-Tate groups for all Dirichlet characters in the LMFDB
• Sato-Tate groups for all Artin representations in the LMFDB
https://ja.overleaf.com/articles/the-mechanics-of-a-spiral-water-pump/cfwrtqddfygq
# The Mechanics of a Spiral Water Pump
Author
Alex Garcia, Ann Lin, Michael Yep
Abstract: The fundamental necessity for water is a widespread issue affecting many communities across the globe. In this project, our team sought to provide an innovative solution to this problem for a small community with access to a relatively close source of flowing water. The resultant flow rate of water was calculated to be 0.498 L/min. Stress and strain analyses on individual subsections of the system were as follows: -44.2 MPa for the bending moment in the rod; -7.04 MPa for shear stresses in the rod of the axis; -8.88E-5 for shear strain in the rod; -1.11 MPa and 0.429 MPa of shear stresses in the L brackets; -0.123 MPa for radial stress of the spiral, -0.422 MPa for hoop stress of the spiral, -2.10 MPa for the bending moment of the spiral, and -0.176 MPa for the maximum shear stress of the spiral pump on the rotating wheel. The design remained focused on water acquisition; however, further additions can be made for water purification.
https://electronics.stackexchange.com/questions/16383/lipo-3s-11-1v-power-regulator
# LiPo 3S 11.1V Power Regulator
I'm currently working on a small hobby robot, where I want to use a FEZ Panda II along with a RX433 to remote control it. My power source is a Turnigy 3000mAh 3S 20C Lipo which gives me 11.1V to work with, which is too much for the FEZ, but perfect for the motors. I could easily use a resistor to get the right voltage for the FEZ, but I'd like a more generic solution, so I don't have to worry about varying current consumption from different sensors and what not. Using an LM317, I can regulate the voltage to my needs. My question is this:
When I turn on/control the motors, will this affect the input of the power regulator and in turn affect the output, and if it does, how do I stabilize it? I could test it, but I don't want to risk frying my FEZ just yet.
Also, I see people talking about an LDO regulator for the Arduino, is this more appropriate than a simple LM317 regulator?
- Motor Specs
Voltage = 12 V
Normal Load Current = 120 mA
Stall Current = 360 mA
- FEZ Panda II Specs
Voltage = Through USB port or an external DC 6-9V power supply (connecting both is safe).
Active Current = 103 mA.
Idle Current = 65 mA.
Hibernate Current = 3.75mA.
• The motor specs are interesting, but not that important. We need to know what current and voltage this FEZ thing requires. – Olin Lathrop Jul 5 '11 at 23:18
It's not the motor specs that matter, assuming you're happy with how it runs from your 11V source. The issue is how much current this FEZ thing needs at what voltage.
A linear regulator should work well enough as long as the motor doesn't temporarily make the 11V supply dip so low that the regulator can't maintain the output voltage anymore.
In theory, a regulator puts out a constant voltage as long as the input voltage is above some threshold. If you're trying to get 5V or 3.3V, then 11V is plenty in that regard. However, in reality regulators can only cope with a certain speed of input voltage change. Motors can cause large and sudden spikes, which can get onto the regulated output to some extent even if the regulator input stays above its minimum threshold.
Fortunately the solution to both problems is simple. Put a diode followed by electrolytic storage cap between the 11V supply and the regulator input. That will prevent negative going spikes from making it to the regulator input. The capacitor will hold up the regulator input voltage for the short duration of any spike.
In a generally noisy environment like what you describe, it's a good idea to put some high frequency filtering in front of the regulator. I don't know how much current this FEZ thing draws, but if it's just a microcontroller at, let's say, 200mA, then a ferrite bead or "chip inductor" followed by a 10uF ceramic cap right across the regulator input and ground terminals will do fine. This will attenuate the high frequencies that the regulator is not so good at dealing with.
Another point is that a linear regulator will be rather inefficient in this application. Even if this FEZ thing wants 5V, that's still 6V it will drop for about 45% efficiency. That by itself may not be a big deal since the power wasted as heat may be small compared to what the motors use. It's probably more an issue of dealing with the heat. Again, that depends on the specs of the FEZ. If it draws 50mA at 5V, then the regulator will only dissipate 300mW. Not a big deal for a TO-220 case in free air. If on the other hand it draws 400mA at 3.3V, then the regulator will dissipate over 3W, which needs to be specifically dealt with.
It might therefore be worth looking into a switcher. At 80% efficiency it would only dissipate 330mW with 400mA at 3.3V out.
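For concreteness, the arithmetic behind those two estimates can be sketched as follows (values taken from this answer):

```haskell
-- A quick check of the numbers above. A linear regulator dissipates the
-- full voltage drop times the load current; a switcher at efficiency eta
-- draws Pout / eta from the battery, wasting only the difference.
linearDissipation :: Double -> Double -> Double -> Double
linearDissipation vin vout i = (vin - vout) * i

switcherInputPower :: Double -> Double -> Double -> Double
switcherInputPower eta vout i = vout * i / eta

main :: IO ()
main = do
  print (linearDissipation 11.1 3.3 0.4)   -- ~3.1 W of heat in the linear case
  print (switcherInputPower 0.8 3.3 0.4)   -- ~1.65 W drawn; only ~0.33 W lost
```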
• Any recommended ICs for a switch-mode supply? A quick google found me an LM2675, but I'm unsure exactly what I need to look for. – William Mariager Jul 5 '11 at 22:11
• @mindworx: There are many possible ICs and other solutions, but as I already asked, we need to know the current and voltage requirements of the FEZ. – Olin Lathrop Jul 5 '11 at 23:20
• Added the FEZ Panda II specs which were oddly not in the manual I linked, sorry about that. – William Mariager Jul 6 '11 at 0:14
• @mindworx: It's your job to supply the basic specs explicitly, whether they are in the link or not. I'm not likely to follow a link just to answer a question. – Olin Lathrop Jul 6 '11 at 0:22
• @mindworx: In any case, 103mA at let's say 7V would cause a linear regulator to dissipate 420mW. That's doable in free air with the right case. It would also draw the whole 103mA from the 11.1V supply, so would cost you 1.14W. If that's OK, then a linear will be easier. A 7V switcher with 80% efficiency only costs you 900mW in total power. If the extra 1/4 Watt matters, then that's worth looking into. If it doesn't, I'd stay with the simpler linear regulator. – Olin Lathrop Jul 6 '11 at 0:24
It's unlikely to do any harm.
My reasoning: You have a 20C LiPo. That means it is capable of outputting 20x its capacity (3Ah) in amps. So you could draw up to 60A from it with no ill effects. In order to do this, the internal resistance must be low.
I fly my model planes with much smaller Rhino 750mAh 3S batteries. These are capable of giving 15A continuously. Drawing around 6.5A for my motor, I notice a drop of around 0.3V. 350mA is very unlikely to have a significant effect especially from such a big battery.
However, all voltage regulators have something called line regulation and line response characterised in the datasheet. A linear regulator can be modelled as a variable resistor controlled by an op-amp to stabilise the output (ignoring, for now, the short circuit/over temperature protections.) This resistor can't change value instantly, so if you increase the voltage by 10V you will get a little spike on the output, maybe 100mV. The good news: a decrease in voltage will very likely result in a decrease of output voltage. So you're unlikely to fry your device. Line regulation is Vout change over Vin change. Line response is the amount of variation due to transient events.
To ensure maximum stability and minimum spiking, install good electrolytic capacitors on the input and output as well as some small 100n ceramics.
|
2019-11-22 07:44:25
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3711334764957428, "perplexity": 1875.70089711579}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496671245.92/warc/CC-MAIN-20191122065327-20191122093327-00498.warc.gz"}
|
JFIF>CREATOR: gd-jpeg v1.0 (using IJG JPEG v62), default quality C \$.' ",#(7),01444'9=82<.342C 2!!22222222222222222222222222222222222222222222222222XX" }!1AQa"q2#BR\$3br %&'()*456789:CDEFGHIJSTUVWXYZcdefghijstuvwxyz w!1AQaq"2B #3Rbr \$4%&'()*56789:CDEFGHIJSTUVWXYZcdefghijstuvwxyz ?( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( (³Fe^d~-Q@T7Qng;(zk2f@\$bƶo?\$UzK[b%sx`:vG/u8)#?~ڋkXe<̬=cu{hG\-bDQL'GҪF]/0Y';koOޕ CpC[?ZKzUuɍ?{o\]WR%, Rf娸Ӯ>Sè?Zѯti a 9E{@|EGs2^DLW(wЌi5`:(Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@ '[VeAm؍Ow3@OM,,q3\AK^[*OxCi^E|Gº+~"ޞS{ S|U?t4WySLU1vE^yxk:jޮMJ<#R?碎w(7Z.|0F+[wX3\FF0z@#\$EDrI[\$vk}(3e,"F!}>nNkkEu231{dZmŰ»!O0O@Ǭ,`_ZxòʁAv-K" FGq X>Ās]/VQ@ʼ(`p7)u?xfh1}#K 0)p9=k(Pgo! z{P#ب ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( (K[˙}*EeO7>(XIm}~81b?N뺯wO+JF=yrhaVv ,cm[ߊ5giwk\-`i[+m{ߤ/~F'}#=ivFV5\$H6^;_ށ-^I[k d{e:ez 'HƠn㏡:5FQn]吱'rhԾ6t":4FwC C1C\>xšRdovcWzl=jn1E3/afap*ke h*0&ߜոr23o"LDʪ:?iVЂ>ܝ8_~}FqbaKN CݲF~Y+*tQ@!8ܽoH{l#W;:;#.M~nCϠ5B)f }h ,FZZ:
h U;[7"kӳpx57z[H.i6[d s^ݠˬK&8~km4uG)\4,̀CPŕƻ#f-ZØ@bފWxVqzn(((((((((((((((((((((((((X qg?@9nƬikss`Hp+ZUPLnen7=\%s{J1NcϬnc0Svun"w~n%{3a]+&[8Suz9zS/^']]cS\$LA'uk>3\$wSS +]GI&?,i<+jp1\$?xPs-Fht^=JK tKEdV8nivu:qn q]HgqYխ[Iΰ[;v͙Fy==I56[@ں^YFpdI[jT}:q]Cj|Un-<1>:2VR9)/bcVmά?iY]\Hoky3XS;MWk<"?Q:בʟSfiGax31ȟ*g{f\q-9٨KkgW,@wtp(ڋ9ߍPl(Ih_IR&Y?{1M>_Cd ?_R6ǖiQvM z;Z]+NnI§8&X!Wޱ |cpP KE3xrCi[ a.]ck^)QEQEQEQEej#ѴUݨV Cբ> S]AcrXBaىL-?wTTq,:l2 ( ( ( ( (W h \+_Z%u *Zs\c]RqoD<@Rqwmhpz>QR%쇃-]Dž>%KW*&魇?1=*g뻨%Z;u g_b8"m^PI&3EJ|q>sP-..8GzO\$AEoonX*LIgJׯt)NIL#r-z4PGo͕ělBg)9;#ّD2 E:Fr[cA<%x 4c=cQ1^aZQ̲!j+QT (((((Yc&i8d5^Cu>p[7I\Laݤ:EKYͫXPCIW53JM}*3\&j6{"j7+Wٟ@I'ԟQKow0r gc8~#`fi,|=m/-"ȣ~gV϶85 \$s\UōJ8ȞsI-cysj(}GXMNHb@v;X^WJ_IY/TwZarO- q!ul9O; NLּvH]1GBGOÃLgh؝'\$篇Pkj3X?Zw CԚ#ejUL+qZeJ|\$ZPiAXc{sZqX*.pG&1`FR{sZR8du7 =1@Eam%@*f( 8> 6Zɸ؞:YuEN?R+&YO3g.Ijݳ9 TFOf|sQi?X^іպĊMy56߽XjIn7;p o~?=\$VȺQ~5Z m[-5\$j>lt1]M+ :d &ҽ'JѴZv60q֯H((((*ivkBԷw xMՃL8?\%M 0Fr(H!7m)\B JeYbyhҙ5[. ];EռE4\$}1/qM墈4uYfLJC3ٮXšz)q}[oo.!Wn`~z[[C"#jjKKxi,VK#H Fk6]iAU%TB}Wz^~+3 w"e&f #_^#"ӵK \$XUWSAk@>B^tFr/j?<ܢ,wRl{uҫ;NccUm e^]4+-Kxh`C pzk]-bσf2Eݤ >pEuhzvB<= }e26:-ݽWDꔝ^MjZ̋-!ߗ}7V(ЎaEEGJ6\$]q0' ƹb#f%ʴ͑=Xr͞jΓj!4Ӝ?v>pzO9pKK}z:d Ԓ:BקxL"IgW4*-ﲄdק??V]V~d"^E [;ku(!^āT~@QEQEQEQEQMfTRTu\$RNyկص#КdWe9K,\$z}W?z[dH4a<؏ý+ǖ:\R%\$HؿzWZTBd8Vϥ-ƦWܜi f<|vyƶֵ\$9c.^Oܳt= 69UIOhYZյ{gBKmy_wn~/U^D[[]Jɱ.PT?_R\Y,FMwU!m3t~˹diGQ6u?V&\#zCEuQ_Z:>ڧ9l'f^Z'wy-O\$rjzV"! HX =`K6 ?N:`lETR-b\$T+յ;yN#9rE k6uƧeBVVNO =rjV@Oy{/dԭ>+Ha%⋍FAk[rDM>qqڼƺğ|_@T*nuB'ő?r|NuZutȥ(N>NH4#k^4]2{"(c: fQ|y,6cb}IҸd"^{i3%O"t5\ pa4DSM/4Cr;s&7m_5Z[l#`pS0ln> P 2\$wQ:Ub1p+:7F;я imZLFUcd}\$=@Y ZF q[ q*@1:B"NTQ.A̬?Cӥi*( ui%7QE((w#*':/|l gndƟSQ`:|QcΜqWzAG?_ӓ:=NW(xMh9t4irhX`qKؐ`Gҵ|D:b}n+HT6b{úVW2w,ڞ jj\M. 
SMg]2%йUBut}KZ-'/ԏjiW7SXX5ij1idf';tKN8KFuH},Loޭi6p1Z]N@DbVnk dG|vFqN"KUGPG [(Kȱ%'g rJ=KpQWlାE{m=b1:?u5_LBR>Y;%Cb+-/;X\"\ƬTv fX2 e[{"EAF=HβuM%2_VKF Rb Z XYU T~/YPmm\$r q#Rצ&p#ƮIɧ7`VvBoA/43Ƽmb6W()O t rwU W#xy SE\$ ȉ>iQҵ WiI'9\$ kwjݙn9N}7y7ڑERDE W t&SNh盼Jqx3`,qw?]:.I';#_GҽG :Nohou?j5 2`6V={@uIX-1HzF?^cxkw{[ch\}a^}1t~qYǮ:Ҹ],I\AbO\Eحj3ז>[)\?d4}.fVҤ[*I0MBmRYSr=0?w<0IocKˈl_қ #a?Eh"`Yn>QVznls&9H~UýشHX,Xҟ(&F4\$HɌ}[\oFFK-;NnJ|4KC峓,@,' M[R! M>x?n)7u;SM?t!\.}xkJtU p᷏u d7N fj_5m{uX崎" 1?6:t-ͭ)8 xծmtGU ?3]SJ :Lv*P+nfQ { :~9D>?Y~^i{I˱14 khv8XBOrkxZK-Z\z?Fiz-L`.~ZX*>&9ukNUo*#>BҴ #BҴ[4R~:)QEQEQEW rxj++u\$acs?J揋ړPq+H6?sMύTj:pO^(K"Hx<+]KXȱ-B_>щ\πvOaf~̹UV K`uL4XuN9dĚyU##1xlG;uyX6,.~h{»䑈ޭxNag!g,1{qt[k&EGpj]2Kf847By[69b2`:CKR~nh)Q^oe`}Ee jY/P*'ۻ{=CFAՅۆ@?*R5+,-c cmR̛19?wZM7&}k[D,⹝K쟿H8?y-˖wxcdwp a fWͨCv}}>ōd^!;K~`&X{Nxb©zYO[:*ȗ=,l__- U)o ɸ%-%*sa:nEvJ *,1\$z*{&ݜ's{7߄+u}zh f8[ğàQ㩈E| XQ&&j-&̄X:eq|gږ0.5gCprcCg7z41&<vJ\X"? }:%w\``){Z)<\$-K}o=H,֢k/B _@U,l#ɪxW@eУǗH\$j?-:.cRh ༭Unc>nmFQryc#S{ci!\$5X17Y_KKwv2 \$rW'=e[&L#!W\$_hKiU}C[vBGAORwrzsS#4cccAW]@m/W-S@`rjC3m!m yt@=+NR:wcE?η>3K#6xӇKHY5kIuL"ۻN7#?:?GMEVe9_9WgQ ij0~x~3xFE~ug{H ]H֊Ͳ-&'VrU }O_=|x!o_?*hòch&EHȎe5vЎ=햧RG ώ=8\/uSH26ȓ# yq{vYdf1 CҴ{Pд˿ \Zu dۖ\ba6O9ݎkż\Ewg"l@ꇨW'+`ۊ l 9^S4Ow2",8ǁ5av(OjKS8|W;>H8lֲZ@t:Rvc< u=aYs\Jʻ8\$@ET sϨbKrNMq֎V{^{{ׯN,MII'-=>KY'x;WKVϑ Q/ 5LGq9~yfbw1ޢȎ5i\$8UHԳ&O(kcGHyhѼ;l-600Li7պƹ'G1=}q{HSt?R>>G} yS\5/Gz(yNRWTPNQ@Q@Q@Q@Q@s4#<:ge>#oVƅ¢O⇈ ao8a"6GzLeԺL\$ӗ=sɤѴӮwܩ{kl8=Uz]+(.f§ZfzڞY6: \$5.&mFH!yc֮gMS6z:l .*s22ǀ>ҕoHMD]03k>9C[Oa*B{q\'87CCemR{yq/آʠᾝI(wQvϛ{s9s7*@;͊>1X֮X*Dn.os!P,Ki_k},!7%c8--/IV}\6-r(듀&C=XBB?v%މ8}okEe^߂:mr4O>Lxc?uG~ͤiqwkRUO@qoLIn@n'z~+&AER(((((K1'szύ)Ycqq((?^h#0E,I=ּoijay>#^jlFu3z~y PԮV3-' U7G G#NO}y]0qwp
http://www.zora.uzh.ch/id/eprint/76558/
# Males wander to look for females - from the model species Arabidopsis thaliana to its related species in the natural field -
Shimizu, Kentaro K (2002). Males wander to look for females - from the model species Arabidopsis thaliana to its related species in the natural field -. Biohistory, 32:14-15.
http://mathhelpforum.com/advanced-algebra/8237-basis-col-nul.html
1. ## Basis for Col(A)/Nul(A)
Given the matrix A = [[1,3,-4,2,-1,6],[0,0,1,-3,6,0],[0,0,0,1,4,-3],[0,0,0,0,0,0]] (this matrix is 4x6):
1.) Find a basis for col(A^T)
2.) Show that nul(A) is orthogonal to col(A^T)
2. Ok so my hint for this was:
For 1.) Reduce (A^T)^T = A to echelon form (for your problem it already
is in echelon form) then transpose the non-zero rows. Or, reduce A^T to
echelon form then choose the columns of A^T which contain leading row
entries (i.e. pivots). There is, in this case, a third way: eyeball
A^T.
For 2.) Let w be in null(A) and v be in col(A^T). Then Aw = 0 and
v = A^Tx for some x. Compute w dot v = w^Tv = ...
For #1, I found the transpose of A to be:
[[1,0,0,0],[3,0,0,0],[-4,1,0,0],[2,-3,1,0],[-1,6,4,0],[6,0,-3,0]]
(6x4 matrix)
When I rref the transpose of A I get:
[1,0,0,0], [0,1,0,0], [0,0,1,0], [0,0,0,0], [0,0,0,0], [0,0,0,0]
The first 3 columns contain pivots, if I did that correctly... but what's the basis? The first 3 columns in the original?
And for #2
What's nul(A)? And when I multiply them I should get 0. So I will have some m x 3 matrix times another matrix? Or dot-product them?
3. Well I figured it out.
I'm just confused if col(A^T) is:
the vectors: [1,3,-4,2,-1,6], [0,0,1,-3,6,0], [0,0,0,1,4,-3] <--- 3 6x1 vectors
OR: if it is [1,3,-4,2,-1,6], [0,0,1,-3,6,0], [0,0,0,1,4,-3] <-- 3 1x6 vectors...
For #2 I was able to find nul(A), and when multiplying each of the vectors from nul(A) with the vectors from col(A^T) I got 0, so I am positive I did this right... which leads me to think the 6x1 vectors are right for col(A^T), but I want to make sure...
but... col(A^T) is the same as row(A), which makes me think of the 2nd... so I'm confused.
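For anyone checking this later: a quick SymPy sketch (mine, not from the thread) that verifies both parts mechanically.

```python
# Verify a basis for col(A^T) = row(A) and its orthogonality to nul(A).
from sympy import Matrix

A = Matrix([[1, 3, -4, 2, -1, 6],
            [0, 0, 1, -3, 6, 0],
            [0, 0, 0, 1, 4, -3],
            [0, 0, 0, 0, 0, 0]])

# A is already in echelon form, so its nonzero rows (transposed into
# 6x1 column vectors) form a basis for col(A^T).
basis = [A.row(i).T for i in range(3)]

# Every null-space vector w satisfies A*w = 0, hence w is orthogonal
# to every row of A, i.e. to every basis vector of col(A^T).
for w in A.nullspace():
    for v in basis:
        assert (w.T * v)[0, 0] == 0
```

Since rank(A) = 3 and A has 6 columns, nul(A) is 3-dimensional, and each of its basis vectors dots to zero with each basis vector of col(A^T).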
https://nigerianscholars.com/past-questions/mathematics/question/192168/
# Find the value of x for which the function f(x) = 2x³ - x² - 4x + 4 has a maximum value
### Question
Find the value of x for which the function f(x) = 2x³ - x² - 4x + 4 has a maximum value
### Options
A) $$\frac{2}{3}$$
B) 1
C) -1
D) -$$\frac{2}{3}$$
### Explanation:
f(x) = 2x³ - x² - 4x + 4
f′(x) = 6x² - 2x - 4; at a turning point, f′(x) = 0
6x² - 2x - 4 = 0, so 3x² - x - 2 = 0, i.e. 3x² - 3x + 2x - 2 = 0
(3x + 2)(x - 1) = 0, so x = -$$\frac{2}{3}$$ or x = 1
f″(x) = 12x - 2
when x = -$$\frac{2}{3}$$: f″(-$$\frac{2}{3}$$) = 12(-$$\frac{2}{3}$$) - 2 = -10 < 0
$$\to$$ f(x) has a maximum at x = -$$\frac{2}{3}$$
when x = 1: f″(1) = 12(1) - 2 = 10 > 0
$$\to$$ f(x) has a minimum at x = 1
Hence the maximum occurs at x = -$$\frac{2}{3}$$ (option D).
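The working above can be double-checked mechanically; a small SymPy sketch (not part of the original solution):

```python
# Verify the turning points and their nature for f(x) = 2x^3 - x^2 - 4x + 4.
from sympy import symbols, diff, solve, Rational

x = symbols('x')
f = 2*x**3 - x**2 - 4*x + 4

critical = solve(diff(f, x), x)   # roots of f'(x) = 6x^2 - 2x - 4
f2 = diff(f, x, 2)                # f''(x) = 12x - 2

assert set(critical) == {Rational(-2, 3), 1}
assert f2.subs(x, Rational(-2, 3)) < 0   # concave down: maximum at x = -2/3
assert f2.subs(x, 1) > 0                 # concave up: minimum at x = 1
```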
http://support.ircam.fr/docs/Antescofo/manuals/Reference/exp_variable/
## Variables¶
Antescofo variables are imperative variables: they are like a box that holds a value. The assignment of a variable consists of changing the value stored in the box:
$v := expr
let $v := expr
The two forms are equivalent, but the let keyword is sometimes mandatory, see below.
An assignment is an action and like other actions, it can be done after a delay. We stress that variable assignments are actions and not expressions. However, they are instantaneous and they can appear in extended expressions in the body of a function.
Variables are named with a $-identifier. By default, a variable is global - that is, it can be referred to in an expression anywhere in a score. Note that variables are not typed: the same variable may hold an integer and later a string, for example. User variables are assigned within an augmented score using Assignment Actions, see assignment. However, they can also be assigned by the external environment, using a dedicated API. Also see the section accessing scoped variable below.

### Histories: Accessing the Past Values of a Variable¶

Variables are managed in an imperative manner. The assignment of a variable is seen as an internal event that occurs at some date. Such an event is associated to a logical instant. Each variable has a time-stamped history. So, the value of a variable at a given date can be recovered from the history, achieving the notion of stream of values. Thus, $v corresponds to the last value (or the current value) of the stream. It is possible to access the value of a variable at some date in the past using the dated access:
[date]:$v returns the value of variable $v at date date. The date can be expressed in three different ways:
• as an update count: for instance, expression [2#]:$v returns the antepenultimate value of the stream;
• as an absolute date: expression [3s]:$v returns the value of $v three seconds ago;
• and as a relative date: expression [2.5]:$v returns the value of $v 2.5 beats ago.

For each variable, the programmer may specify the size $n$ of its history in the variable declaration. So, only the $n$ "last values" of the variable are recorded. Accessing the value of a variable beyond the recorded values returns an undefined value.

#### Dates functions¶

Two functions let the composer know the date of the logical instant associated to the assignment of a variable:

@date([n#]:$v)

returns the date in absolute time of the $n$th-to-last assignment of $v, and
@rdate([n#]:$v)

returns the date in relative time (relative to the musician). These forms mimic the form of functions but they are not; they are special forms and only accept a variable or the dated access to a variable.

### Variable Declaration¶

Variables are global by default, that is, visible everywhere in the score, or they are declared local to a sequence of actions, which limits their scope and puts a constraint on their lifetime. For instance, the scope of a variable declared local in a loop body is restricted to one instance of the loop body, so two loop bodies refer to two different instances of the local variable. This is also the case for the body of a whenever or of a process.

#### Local Variables¶

To make a variable local to a scope, it must be explicitly declared using a @local declaration. A scope is introduced for the body of each compound action. The declaration may appear anywhere in the scope and takes a comma-separated list of variables:

@local $a, $i, $j, $k

There can be several declarations in the same scope, and all local variables can be accessed from the beginning of the scope, regardless of the location of their declaration. A local variable may hide a global variable and there is no warning if it does. A local variable can be accessed only within its scope. For instance

$x := 1
group {
    @local $x
    $x := 2
    print "local var $x: " $x
}
print "global var $x: " $x
will print

local var $x: 2
global var $x: 1
A local variable can be referred to as soon as its nearest enclosing scope is started, but it can persist beyond the enclosing scope's lifetime. For instance, consider this example:
Group G
{
    @local $x
    Loop L 2
    {
        ... $x ...
    }
}

The loop L nested in the group runs forever and accesses the local variable after "the end" of the group G (the group ends with the launch of its last action, see Action Sequence). This use of $x is perfectly legal. Antescofo manages variables efficiently and the memory allocated for $x persists as long as needed by the children of G but no more.
#### History Length of a Variable¶
For each variable, Antescofo only records a history of limited size. This size is predetermined, when the score is loaded, as the maximum of the history sizes that appear statically in expressions and in variable declarations.
In a declaration, the specification of a history size for the variable takes the form:
n:$v

where n is an integer. This syntax specifies that the variable has a history of length at least n. To make it possible to specify the size of a global variable's history, there is a declaration @global

@global $x, 100:$y

similar to the declaration @local. Global variable declarations may appear anywhere an action may appear. Variables are global by default; thus, the sole purpose of a global declaration, besides documentation, is to specify history lengths.

The occurrence of a variable in an expression is also used to determine the length of its history. In an expression, the nth past value of a variable $v is accessed using the dated access construction (see above):

[n#]:$v

When n is a constant integer, the length of the history is assumed to be at least n. When there is no declaration and no dated access with constant integers, the history size has an implementation-dependent default size. The special form @history_length($x) returns the length of the history of the variable $x.

### History Reflected in a Map or in a Tab¶

The history of a variable may be accessed also through a map or a tab. Six special functions are used to build a map (resp. a tab) from the history of a variable:

• @map_history($x) returns a map where key $n$ refers to the $n-1$-to-last value of $x. In other words, the element associated to $1$ in the map is the current value, the previous value is associated to element $2$, etc. The size of this list is the size of the variable history, see the paragraph History Length of a Variable above. However, if the number of updates to the variable is less than the history length, the corresponding undefined values are not recorded in the map.
• @tab_history($x) is similar to the previous function but returns a tab whose $i$th element refers to the $n-1$-to-last value of $x.
• @map_history_date($x) returns a map where the value of key $n$ is the date (physical time) of the $n-1$-to-last update of $x. The previous remark on the map size applies here too.
• @tab_history_date($x) builds a tab (instead of a map) of the dates in physical time of the updates of the var $x.
• @map_history_rdate($x) returns a map where the value associated to key $n$ is the relative date of the $n-1$-to-last update of $x. The previous remark on the map size applies here too.
• @tab_history_rdate($x) builds a tab (instead of a map) of the dates in relative time of the updates of the var $x.

These six functions are special forms: they only accept a variable as an argument. These functions build a snapshot of the history at the time they are called. Later, the same call will eventually build different maps and tabs.
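As a toy illustration (my own sketch, not from this manual; the variable name and printed strings are made up), a score fragment could record a history of at least 8 values and snapshot it on each update:

```
@global 8:$pitch

whenever ($pitch)
{
    print "recent values: " @tab_history($pitch)
    print "at dates: " @tab_history_date($pitch)
}
```

Each firing of the whenever takes a fresh snapshot, so successive prints can show different tabs as the ring buffer fills.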
Beware that the history of a variable is managed as a ring buffer: when the buffer is full, any new update takes the place of the oldest value.

#### Plotting the history of a variable¶

The history of a variable can be plotted in absolute or in relative time using the commands @plot and @rplot. These two functions are special forms accepting only a list of variables as arguments. They return true if the plot succeeded and false otherwise. If there is only one argument $x, the referred values can be a tab (of numeric values) and each element in the history of the tab is plotted as a time series on the same window. If there is more than one argument, each variable must refer to a numeric value and the time series of the variables' values are plotted on the same window.

Note that only the values stored in the history are plotted: so usually one has to specify the length of the history to record, using a @global or @local declaration.
The @plot and @rplot special forms expand to a call to the function @gnuplot1. For example, the expression @plot($x, $y) expands into

@gnuplot( "$x", @tab_history_date($x), @tab_history($x), "$y", @tab_history_date($y), @tab_history($y) )
See description of @gnuplot in Library Functions.
### Accessing a Local Variable “From Outside its Scope of Definition”¶
A local variable can be accessed in its scope of definition, or from one of its child scopes, using its identifier. It is possible to access the variable from “outside its scope” using the dot notation through an exec. Here, “outside” means “not in the scope of definition nor in one of its children”. Beware that accessing a local variable from outside its definition scope:
• is correct only within the lifetime of the variable,
• does not extend the lifetime of the variable which is still bound to the lifetime of its definition scope and its children.
If the scope of definition of the variable is not alive at the time of the access, an undefined value is returned and an error is signaled. Else, if there is no variable with this identifier locally defined in the scope, then the variable is looked up in the enclosing scope. The process is iterated until the top-level is reached. At this point, if there is no global variable with the specified identifier, an undefined value is returned and an error is signaled.
#### The Dot Notation¶
To access the variable defined in one specific instance of a group, or more generally of a compound action introducing a scope (@whenever, loop, process call, etc.), one must use the dot notation through the exec referring to this instance. Exec are introduced in section Exec.
It is possible to read the value of a local variable through the dot notation:
$p := ::P()
$x_of_p := $p.$x

Expression $p.$x gets the value of the local variable $x in the process ::P launched at the previous line. The instance of the process is accessed through its exec, see section Exec. The expression at the left of the dot operator may be more complex than just a variable:

$p := [ ::P() | (10) ]
$x_of_p2 := $p[2].$x

The first line launches 10 instances of process ::P using a tab comprehension. The second line gets the local variable of the third instance of ::P.

#### Assigning a Variable From Outside its Scope¶

As previously mentioned, a variable can be assigned from "outside", see:

• the reception of an OSC message OSCreceive,
• the message setvar,
• the function @loadvar,
• the assignment using the dot notation.

The OSCreceive and the setvar command can be used only for global variables. But local variables can be the target of the two other mechanisms. The assignment of a local variable through the dot notation is similar to a usual assignment:

$p := ::P()
let $p.$x := 33 // assign the local variable $x in the process ::P

The expression at the left of the dot operator may be more complex than just a variable:

$p := [ ::P() | (10) ]
let $p[2].$x := 33
The first line launches 10 instances of process ::P. The second line sets the local variable of the third instance of ::P.
Notice the let keyword: it is needed in an assignment when the expression in the left hand side of the assignment is more complex than a variable.
These assignments are monitored by the whenever constructs where the local variable $x appears. But an expression $p.$x does not monitor the local variable of the process. See section Reference to a scoped variable.

### Antescofo System Variables¶

System variables are internal variables managed directly by Antescofo and are updated automatically by the system. They are useful for interacting with, for example, the machine listener during performances and creating interactive setups. The following variables are managed as ordinary variables:

• $BEAT_POS is the position of the last detected event in the score.
• $DURATION is the duration of the last detected event, as specified in the score.
• $ENERGY is the current normalized energy of the audio signal from the listening machine. The returned value is always between $0.0$ and $1.0$ and is equivalent to the Calibration Output of the Antescofo object in Max and Pd. NOTE: The variable is updated with high frequency (equal to the analysis hop size). Use it with care inside processes and Whenever constructs.
• $LAST_EVENT_LABEL is the label of the last event seen. This variable is updated only if the event has a label.
• $PITCH is the pitch (in MIDI Cents) of the current event. This value is well defined in the case of a NOTE and is not meaningful for the other kinds of event.
• $RT_TEMPO represents the tempo currently inferred by the listening machine from the input audio stream.
• $SCORE_TEMPO returns the tempo constant in the score at the exact score position where it is called.
• $RCNOW is the date in relative time (in beats) of the "current instant". It can be interpreted as the current position in the score. This position is continuously updated between the occurrence of two events as specified by the current tempo. Thus, if the next event occurs later than anticipated, the value of $RCNOW will jump backward.
• $RNOW is the date in relative time (in beats) of the "current instant". It can be interpreted as the current position in the score. This position is continuously updated between two events as specified by the current tempo. But, contrary to $RCNOW, the increase stops when the position in the score of the next awaited event is reached, and $RNOW is stuck until the occurrence of this event or the detection of a subsequent event (making this one missed). Thus, $RNOW cannot decrease.
Note that when an event occurs, several system variables are likely to change simultaneously. Notice that, as for all variables, they are case-sensitive.
### Special Variables¶
These variables are similar to system variables, but they cannot be watched by a whenever:
• $NOW corresponds to the absolute date of the "current instant" in seconds. The "current instant" is the instant at which the value of $NOW is required.
• $MYSELF denotes the exec of the enclosing compound action.
• $THISOBJ may appear in method definitions, where it refers to the object on which the method is applied, or in the clauses of an object definition, where it refers to the current instance.

### Variables and Notifications¶

In Antescofo, a set of entities to be notified is associated to each variable. The notification mechanism is the core device used by the reactive engine to implement the computations. Notification of events from the machine listening module drops down to the more general case of variable-change notification from an external environment. Actions associated to a musical event are notified through the variable $BEAT_POS. This is also the case for the group, loop and curve constructions, which need the current position in the score to launch their actions with the @loose synchronization strategy. The whenever construction is notified by all the variables that appear in its condition. The scheduler must also be globally notified upon any update of the tempo computed by the listening module and on the update of variables appearing in the local tempi expressions.
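For example (a minimal sketch of my own, not from this manual; the printed string is illustrative), a whenever whose condition mentions both $a and $b is re-evaluated each time either variable is assigned:

```
whenever ($a + $b > 10)
{
    print "threshold crossed with " $a " and " $b
}
```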
Temporal Shortcuts. The notification of a variable change may trigger a computation that may end, directly or indirectly, in the assignment of the same variable. This is known as a “temporal shortcut” or a “non causal” computation. The reactive engine takes care of stopping the propagation when a cycle is detected. See section Causal Score and Temporal Shortcuts.
The next section, temporal variables, investigates the use of a variable to track a process and to infer a tempo.
1. The gnuplot program is a portable command-line driven graphing utility for Linux, MS Windows, Mac OSX, and many other platforms. Cf. www.gnuplot.info. It must be installed on the system to have a working @gnuplot function.
https://www.aimsciences.org/article/doi/10.3934/nhm.2016012
# American Institute of Mathematical Sciences
December 2016, 11(4): 627-653. doi: 10.3934/nhm.2016012
## Homogenization of nonlinear dissipative hyperbolic problems exhibiting arbitrarily many spatial and temporal scales
1 Department of Quality Technology and Management, Mechanical Engineering and Mathematics, Mid Sweden University, S-83125 Östersund, Sweden
Received May 2015 Revised April 2016 Published October 2016
This paper concerns the homogenization of nonlinear dissipative hyperbolic problems \begin{gather*} \partial _{tt}u^{\varepsilon }\left( x,t\right) -\nabla \cdot \left( a\left( \frac{x}{\varepsilon ^{q_{1}}},\ldots ,\frac{x}{\varepsilon ^{q_{n}}},\frac{t }{\varepsilon ^{r_{1}}},\ldots ,\frac{t}{\varepsilon ^{r_{m}}}\right) \nabla u^{\varepsilon }\left( x,t\right) \right) \\ +g\left( \frac{x}{\varepsilon ^{q_{1}}},\ldots ,\frac{x}{\varepsilon ^{q_{n}} },\frac{t}{\varepsilon ^{r_{1}}},\ldots ,\frac{t}{\varepsilon ^{r_{m}}} ,u^{\varepsilon }\left( x,t\right) ,\nabla u^{\varepsilon }\left( x,t\right) \right) =f(x,t) \end{gather*} where both the elliptic coefficient $a$ and the dissipative term $g$ are periodic in the $n+m$ first arguments where $n$ and $m$ may attain any non-negative integer value. The homogenization procedure is performed within the framework of evolution multiscale convergence which is a generalization of two-scale convergence to include several spatial and temporal scales. In order to derive the local problems, one for each spatial scale, the crucial concept of very weak evolution multiscale convergence is utilized since it allows less benign sequences to attain a limit. It turns out that the local problems do not involve the dissipative term $g$ even though the homogenized problem does and, due to the nonlinearity property, an important part of the work is to determine the effective dissipative term. A brief illustration of how to use the main homogenization result is provided by applying it to an example problem exhibiting six spatial and eight temporal scales in such a way that $a$ and $g$ have disparate oscillation patterns.
Citation: Liselott Flodén, Jens Persson. Homogenization of nonlinear dissipative hyperbolic problems exhibiting arbitrarily many spatial and temporal scales. Networks and Heterogeneous Media, 2016, 11 (4) : 627-653. doi: 10.3934/nhm.2016012
Olsson, Reiterated homogenization of some linear and nonlinear monotone parabolic operators, Can. Appl. Math. Q., 14 (2006), 149-183. [12] L. Flodén and M. Olsson, Homogenization of some parabolic operators with several time scales, Appl. Math., 52 (2007), 431-446. doi: 10.1007/s10492-007-0025-2. [13] M. Hairer, E. Pardoux and A. Piatnitski, Random homogenisation of a highly oscillatory singular potential, Stoch. Partial Differ. Equ. Anal. Comput., 1 (2013), 571-605. doi: 10.1007/s40072-013-0018-y. [14] A. Holmbom, Homogenization of parabolic equations. An alternative approach and some corrector-type results, Appl. Math., 42 (1997), 321-343. doi: 10.1023/A:1023049608047. [15] J.-L. Lions, Quelques Méthodes de Résolution des Problèmes aux Limites Non Linéaires, Dunod; Gauthier-Villars, Paris, 1969. [16] D. Lukkassen, G. Nguetseng and P. Wall, Two-scale convergence, Int. J. Pure Appl. Math., 2 (2002), 35-86. [17] A. K. Nandakumaran and M. Rajesh, Homogenization of a nonlinear degenerate parabolic differential equation, Electron. J. Differential Equations, (2001), 19 pp. [18] G. Nguetseng, A general convergence result for a functional related to the theory of homogenization, SIAM J. Math. Anal., 20 (1989), 608-623. doi: 10.1137/0520043. [19] G. Nguetseng, Deterministic homogenization of a semilinear elliptic partial differential equation of order $2m$, Math. Rep. (Bucur.), 8 (2006), 167-195. [20] G. Nguetseng, H. Nnang and N. Svanstedt, $G$-convergence and homogenization of monotone damped hyperbolic equations, Banach J. Math. Anal., 4 (2010), 100-115. doi: 10.15352/bjma/1272374674. [21] G. Nguetseng, H. Nnang and N. Svanstedt, Asymptotic analysis for a weakly damped wave equation with application to a problem arising in elasticity, J. Funct. Spaces Appl., 8 (2010), 17-54. doi: 10.1155/2010/291670. [22] G. Nguetseng, H. Nnang and N. Svanstedt, Deterministic homogenization of quasilinear damped hyperbolic equations, Acta Math. Sci. Ser. B Engl. 
Ed., 31 (2011), 1823-1850. doi: 10.1016/S0252-9602(11)60364-0. [23] G. Nguetseng and J. L. Woukeng, Deterministic homogenization of parabolic monotone operators with time dependent coefficients, Electron. J. Differential Equations, (2004), 23 pp. [24] G. Nguetseng and J. L. Woukeng, $\Sigma$-convergence of nonlinear parabolic operators, Nonlinear Anal., 66 (2007), 968-1004. doi: 10.1016/j.na.2005.12.035. [25] H. Nnang, Deterministic homogenization of weakly damped nonlinear hyperbolic-parabolic equations, NoDEA Nonlinear Differential Equations Appl., 19 (2012), 539-574. doi: 10.1007/s00030-011-0142-1. [26] L. S. Pankratov and I. D. Chueshov, Averaging of attractors of nonlinear hyperbolic equations with asymptotically degenerate coefficients, Mat. Sb., 190 (1999), 99-126. doi: 10.1070/SM1999v190n09ABEH000427. [27] E. Pardoux and A. Piatnitski, Homogenization of a singular random one-dimensional PDE with time-varying coefficients, Ann. Probab., 40 (2012), 1316-1356. doi: 10.1214/11-AOP650. [28] J. Persson, Selected Topics in Homogenization, Mid Sweden University Doctoral Thesis 127, 2012. (URL: http://www.diva-portal.org/smash/get/diva2:527223/FULLTEXT01.pdf.) [29] J. Persson, Homogenization of monotone parabolic problems with several temporal scales, Appl. Math., 57 (2012), 191-214. doi: 10.1007/s10492-012-0013-z. [30] N. Svanstedt, Convergence of quasi-linear hyperbolic equations, J. Hyperbolic Differ. Equ., 4 (2007), 655-677. doi: 10.1142/S0219891607001306. [31] N. Svanstedt and J. L. Woukeng, Periodic homogenization of strongly nonlinear reaction-diffusion equations with large reaction terms, Appl. Anal., 92 (2013), 1357-1378. doi: 10.1080/00036811.2012.678334. [32] M. I. Vishik and B. Fidler, Quantative averaging of global attractors of hyperbolic wave equations with rapidly oscillating coefficients, Uspekhi Mat. Nauk., 57 (2002), 75-94. doi: 10.1070/RM2002v057n04ABEH000534. [33] J. L. Woukeng and D. 
Dongo, Multiscale homogenization of nonlinear hyperbolic equations with several time scales, Acta Math. Sci. Ser. B Engl. Ed., 31 (2011), 843-856. doi: 10.1016/S0252-9602(11)60281-6. [34] E. Zeidler, Nonlinear Functional Analysis and its Applications IIA. Linear Monotone Operators, Springer Verlag, New York, 1990. doi: 10.1007/978-1-4612-0985-0.
[1] Fabio Camilli, Claudio Marchi. On the convergence rate in multiscale homogenization of fully nonlinear elliptic problems. Networks and Heterogeneous Media, 2011, 6 (1) : 61-75. doi: 10.3934/nhm.2011.6.61 [2] Y. Efendiev, B. Popov. On homogenization of nonlinear hyperbolic equations. Communications on Pure and Applied Analysis, 2005, 4 (2) : 295-309. doi: 10.3934/cpaa.2005.4.295 [3] Alexander Mielke. Weak-convergence methods for Hamiltonian multiscale problems. Discrete and Continuous Dynamical Systems, 2008, 20 (1) : 53-79. doi: 10.3934/dcds.2008.20.53 [4] Jie Zhao. Convergence rates for elliptic reiterated homogenization problems. Communications on Pure and Applied Analysis, 2013, 12 (6) : 2787-2795. doi: 10.3934/cpaa.2013.12.2787 [5] Jean Louis Woukeng. $\sum$-convergence and reiterated homogenization of nonlinear parabolic operators. Communications on Pure and Applied Analysis, 2010, 9 (6) : 1753-1789. doi: 10.3934/cpaa.2010.9.1753 [6] Dag Lukkassen, Annette Meidell, Peter Wall. Multiscale homogenization of monotone operators. Discrete and Continuous Dynamical Systems, 2008, 22 (3) : 711-727. doi: 10.3934/dcds.2008.22.711 [7] Mogtaba Mohammed, Mamadou Sango. Homogenization of nonlinear hyperbolic stochastic partial differential equations with nonlinear damping and forcing. Networks and Heterogeneous Media, 2019, 14 (2) : 341-369. doi: 10.3934/nhm.2019014 [8] Assyr Abdulle, Yun Bai, Gilles Vilmart. Reduced basis finite element heterogeneous multiscale method for quasilinear elliptic homogenization problems. Discrete and Continuous Dynamical Systems - S, 2015, 8 (1) : 91-118. doi: 10.3934/dcdss.2015.8.91 [9] Nils Svanstedt. Multiscale stochastic homogenization of monotone operators. Networks and Heterogeneous Media, 2007, 2 (1) : 181-192. doi: 10.3934/nhm.2007.2.181 [10] Walter Allegretto, Yanping Lin, Zhiyong Zhang. Convergence to convection-diffusion waves for solutions to dissipative nonlinear evolution equations. 
Conference Publications, 2009, 2009 (Special) : 11-23. doi: 10.3934/proc.2009.2009.11 [11] Patrick Henning. Convergence of MsFEM approximations for elliptic, non-periodic homogenization problems. Networks and Heterogeneous Media, 2012, 7 (3) : 503-524. doi: 10.3934/nhm.2012.7.503 [12] Zhanying Yang. Homogenization and correctors for the hyperbolic problems with imperfect interfaces via the periodic unfolding method. Communications on Pure and Applied Analysis, 2014, 13 (1) : 249-272. doi: 10.3934/cpaa.2014.13.249 [13] Patrick Henning, Mario Ohlberger. Error control and adaptivity for heterogeneous multiscale approximations of nonlinear monotone problems. Discrete and Continuous Dynamical Systems - S, 2015, 8 (1) : 119-150. doi: 10.3934/dcdss.2015.8.119 [14] Erik Kropat. Homogenization of optimal control problems on curvilinear networks with a periodic microstructure --Results on $\boldsymbol{S}$-homogenization and $\boldsymbol{Γ}$-convergence. Numerical Algebra, Control and Optimization, 2017, 7 (1) : 51-76. doi: 10.3934/naco.2017003 [15] Gunther Uhlmann, Jian Zhai. Inverse problems for nonlinear hyperbolic equations. Discrete and Continuous Dynamical Systems, 2021, 41 (1) : 455-469. doi: 10.3934/dcds.2020380 [16] Eric Cancès, Claude Le Bris. Convergence to equilibrium of a multiscale model for suspensions. Discrete and Continuous Dynamical Systems - B, 2006, 6 (3) : 449-470. doi: 10.3934/dcdsb.2006.6.449 [17] Miroslav Grmela, Michal Pavelka. Landau damping in the multiscale Vlasov theory. Kinetic and Related Models, 2018, 11 (3) : 521-545. doi: 10.3934/krm.2018023 [18] Marco Squassina. Preface: Recent progresses in the theory of nonlinear nonlocal problems. Discrete and Continuous Dynamical Systems - S, 2018, 11 (3) : i-i. doi: 10.3934/dcdss.201803i [19] Bruno Fornet, O. Guès. Penalization approach to semi-linear symmetric hyperbolic problems with dissipative boundary conditions. Discrete and Continuous Dynamical Systems, 2009, 23 (3) : 827-845. 
doi: 10.3934/dcds.2009.23.827 [20] Joel Fotso Tachago, Giuliano Gargiulo, Hubert Nnang, Elvira Zappale. Multiscale homogenization of integral convex functionals in Orlicz Sobolev setting. Evolution Equations and Control Theory, 2021, 10 (2) : 297-320. doi: 10.3934/eect.2020067
2021 Impact Factor: 1.41
|
2022-07-04 00:10:33
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6423677802085876, "perplexity": 3516.338562888204}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104277498.71/warc/CC-MAIN-20220703225409-20220704015409-00644.warc.gz"}
|
https://www.physicsforums.com/threads/integration-by-u-substitution.385842/
# Integration by u-substitution
1. Mar 11, 2010
### b_roberts
1. The problem statement, all variables and given/known data
2. Relevant equations
3. The attempt at a solution
I just started integration. This is the first example from the lesson. I get to the part where they take the derivative of u with respect to x. But the next step, where it says dx=du/6x, I'm totally lost. I don't understand how they got that, or what it means, or even how it is read.
2. Mar 11, 2010
### Dustinsfl
du/dx=6x
Basic algebra rules: multiplying by dx and then dividing by 6x gives us dx.
3. Mar 11, 2010
### Hurkyl
Staff Emeritus
Basic rules of algebra don't apply -- this is a change of variable in an integral substitution (or maybe algebra of differential forms, depending on your POV). Happily, the result is similar to basic rules of algebra, so it's easy to remember/manipulate.
4. Mar 11, 2010
### b_roberts
How can you multiply and divide to get dx=du/6x if du/dx is a symbol of notation and has nothing to do with division? I just want to understand where the dx = du/6x comes from, and how it is supposed to be read. Is it read "with respect to x = the derivative of u divided by 6x"?
5. Mar 11, 2010
### Dustinsfl
In the integration when you do u sub, you still have a dx. How can you integrate variable u with a dx?
In order to do so, you need to know what dx is in terms of u.
To find that, you take (du/dx)*dx = 6x*dx, which yields du = 6x*dx. Now divide both sides by 6x: du/(6x) = (6x*dx)/(6x), so dx = du/(6x).
Lastly, substitute back in so you have an integral that is in terms of u only.
6. Mar 11, 2010
### b_roberts
I don't understand how you can multiply du/dx by dx and then cancel the dx from the numerator and denominator, though. I thought du/dx was a symbol of notation?
7. Mar 11, 2010
### Dustinsfl
How is one supposed to integrate u with a differential of dx?
8. Mar 11, 2010
### benhou
dx and du are read how they look, e.g. dx --> [Dee Ex]. And from du/dx=6x to dx=du/6x, think about the definition of a derivative: $$\frac{dy}{dx}=\lim_{\Delta x\rightarrow 0}\frac{\Delta y}{\Delta x}$$
Just like $$\Delta y$$ and $$\Delta x$$, dy and dx can be separated. Think of a curve, the slope of a tangent line at an arbitrary point can be represented by rise/run. Actually draw the rise and run, you will get dy as the rise and dx as the run.
9. Mar 11, 2010
### Dustinsfl
This may help: when you do sub, you don't have to change the variables. Instead we can change the differential.
Your new dx would be (6x dx), as we see there is an x in the integral and an x in the differential, which is what we needed. Since we now have a multiple of 6, we need to divide by 6 to make up for the difference.
The new integral is now (2/6) int (3x^2 - 5)^7 (6x dx). It is 2/6 since I brought the 2 out of the integral.
Now when you integrate, just integrate (3x^2 - 5)^7 as one term.
10. Mar 11, 2010
### vela
Staff Emeritus
It is, and you aren't really canceling the dx's. But as Hurkyl said above, everything works as if you can. Think of it as a mnemonic device for changing variables in an integral.
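The substitution discussed above can be checked symbolically. A short sketch, assuming the thread's integral is ∫2x(3x^2 - 5)^7 dx as the later posts suggest:

```python
import sympy as sp

x = sp.symbols('x')
integrand = 2*x*(3*x**2 - 5)**7

# u = 3x^2 - 5 gives du = 6x dx, so 2x dx = du/3 and the
# integral becomes (1/3) * u^8/8 = (3x^2 - 5)^8 / 24.
antiderivative = sp.integrate(integrand, x)

# Differentiating the result recovers the original integrand,
# which is exactly what the u-substitution argument predicts.
assert sp.simplify(sp.diff(antiderivative, x) - integrand) == 0
```

The assertion passing is the symbolic version of "the cancellation works": whatever notation you use for du/dx, the antiderivative produced by the substitution differentiates back to the integrand.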
https://www.enotes.com/homework-help/f-x-2x-3-30x-2-144x-6-what-2-critcal-numbers-and-b-418310
# f(x)=-2x^3+30x^2-144x+6. What are the 2 critical numbers A & B, then find f''(A) and f''(B). Where are the local max and min?
You need to determine the critical values A and B, hence, you need to solve for x the equation f'(x) = 0, such that:
`f'(x) = (-2x^3 + 30x^2 - 144x + 6)'`
`f'(x) = -6x^2 + 60x - 144`
You need to solve the equation f'(x) = 0, hence, you need to substitute -`6x^2 + 60x - 144` for `f'(x)` , such that:
`-6x^2 + 60x - 144 = 0`
Factoring out by -6 yields:
`-6(x^2 - 10x + 24) = 0`
Dividing by -6 yields:
`x^2 - 10x + 24 = 0`
You may substitute `-10x` with `-4x - 6x` , such that:
`x^2 -4x - 6x + 24 = 0`
Grouping the terms, yields:
`(x^2 -4x) - (6x - 24) = 0`
Factoring out x in the first group and 6 in the second group yields:
`x(x - 4) - 6(x - 4) = 0 => (x - 4)(x - 6) = 0`
`x - 4 = 0 => x_1 = 4`
`x - 6= 0 => x_2 = 6`
Hence, evaluating the critical values yields `A = 4` and `B = 6.`
You need to evaluate the second order derivative, such that:
`f''(x) = (f'(x))' => f''(x) = (-6x^2 + 60x - 144)'`
`f''(x) = -12x + 60`
You need to evaluate `f''(A)` and `f''(B)` , such that:
`f''(A) = f''(4) => f''(A) = -12*A + 60 => f''(4) = -12*4 + 60 => f''(4) = 60 - 48 = 12`
`f''(B) = f''(6) => f''(B) = -12*B + 60 => f''(6) = -12*6 + 60 = -12`
Hence, evaluating f''(A) and `f''(B)` , under the given conditions, yields `f''(A) = f''(4) = 12` and` f''(B) = f''(6) = -12.` Since `f''(4) = 12 > 0` , the function has a local minimum at `x = 4` , and since `f''(6) = -12 < 0` , it has a local maximum at `x = 6.`
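As a quick numerical cross-check (a sketch, not part of the original answer), the critical numbers and the second-derivative signs can be verified directly:

```python
# f(x) = -2x^3 + 30x^2 - 144x + 6
def f_prime(x):
    """First derivative: f'(x) = -6x^2 + 60x - 144."""
    return -6*x**2 + 60*x - 144

def f_double_prime(x):
    """Second derivative: f''(x) = -12x + 60."""
    return -12*x + 60

A, B = 4, 6
assert f_prime(A) == 0 and f_prime(B) == 0   # both are critical numbers
assert f_double_prime(A) == 12               # f''(4) > 0: local minimum at x = 4
assert f_double_prime(B) == -12              # f''(6) < 0: local maximum at x = 6
```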
http://www.sceneadvisor.com/New-York/mean-squared-error-proof.html
# Mean squared error proof
Mean squared error is the negative of the expected value of one specific utility function, the quadratic utility function, which may not be the appropriate utility function to use in every circumstance. Since an MSE is an expectation, it is not technically a random variable. The definition of an MSE differs according to whether one is describing an estimator or a predictor. In an analogy to standard deviation, taking the square root of the MSE yields the root-mean-square error or root-mean-square deviation (RMSE or RMSD), which has the same units as the quantity being estimated. For an unbiased estimator, the MSE is the variance of the estimator.
The MSE of an estimator $\hat{\theta}$ with respect to an unknown parameter $\theta$ is defined as $\mathrm{MSE}(\hat{\theta}) = E[(\hat{\theta}-\theta)^2]$. Values of MSE may be used for comparative purposes: in statistical modelling the MSE, representing the difference between the actual observations and the observation values predicted by the model, is used to determine the extent to which the model fits the data.
Mean squared error (MSE) of an estimator: let $\hat{X}=g(Y)$ be an estimator of the random variable $X$, given that we have observed the random variable $Y$. We can then define the mean squared error of this estimator by \begin{align} E[(X-\hat{X})^2]=E[(X-g(Y))^2]. \end{align} Among all estimators, the conditional expectation $\hat{X}_M=E[X|Y]$ has the lowest MSE. The corresponding estimation error $\tilde{X}=X-\hat{X}_M$ is a zero-mean random variable: \begin{align} E[\tilde{X}]=E[X]-E[\hat{X}_M]=0. \end{align}
The basic idea behind the Bayesian approach to estimation stems from practical situations where we often have some prior information about the parameter to be estimated. Such prior information is captured by the prior probability density function of the parameter; based directly on Bayes' theorem, it allows us to make better posterior estimates. Thus Bayesian estimation provides yet another alternative to the MVUE. Every new measurement simply provides additional information which may modify our original estimate, so a recursive method is desired where the new measurements can modify the old estimates. Linear MMSE estimators are a popular choice since they are easy to use, calculate, and very versatile, and the MMSE estimator is unbiased under the regularity assumptions of the underlying model.
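A small simulation sketch of the central claim, that the conditional expectation $E[X|Y]$ achieves the lowest MSE. The jointly normal model and the factor 1/2 below are illustrative assumptions, not taken from the page:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Assumed model: X ~ N(0, 1) observed through Y = X + Z with Z ~ N(0, 1).
x = rng.normal(0.0, 1.0, size=n)
y = x + rng.normal(0.0, 1.0, size=n)

# For this jointly normal pair, E[X|Y] = Y/2 (variance ratio 1/(1+1)).
mmse_estimate = y / 2
naive_estimate = y  # use the raw observation as the estimate

mse_mmse = np.mean((x - mmse_estimate) ** 2)    # theory: 0.5
mse_naive = np.mean((x - naive_estimate) ** 2)  # theory: 1.0

# The conditional-expectation estimator achieves the lower MSE.
assert mse_mmse < mse_naive
```

The empirical MSEs come out near the theoretical values 0.5 and 1.0, matching the claim that no estimator based on $Y$ can beat $E[X|Y]$ in mean squared error.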
https://socratic.org/questions/how-do-you-solve-the-inequality-2-x-5-32
How do you solve the inequality 2| x -5| > 32?
Jun 11, 2018
Solving an inequality involves finding all the possible values of x that satisfy it; in this example $x > 21$ or $x < -11$
Explanation:
To solve any modulus equation, you need to do two separate calculations - one where the bit in the modulus is positive, and one where it is negative. Hence, you will always get two solutions for any (linear) modulus function.
Calculation 1 (positive):
$2 \left(x - 5\right) > 32$
$2 x - 10 > 32$
$2 x > 42$
$x > 21$
Calculation 2 (negative):
$- 2 \left(x - 5\right) > 32$
$- 2 x + 10 > 32$
$- 2 x > 22$
$- x > 11$
$x < - 11$
Notice that when you change the negative signs, you must also flip the direction of the inequality.
You could also solve this graphically by plotting the modulus graph, but as this is more complex (and presents more opportunities for errors) I always stick to this method for solving.
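As a sanity check (a brute-force sketch, not part of the original answer), the claimed solution set can be compared against the inequality on a grid of values:

```python
# Verify that 2|x - 5| > 32 holds exactly when x > 21 or x < -11.
grid = [k * 0.5 for k in range(-200, 200)]  # x from -100.0 to 99.5 in steps of 0.5
for x in grid:
    holds = 2 * abs(x - 5) > 32
    expected = (x > 21) or (x < -11)
    # Boundary points x = 21 and x = -11 give 2|x - 5| = 32 exactly,
    # so they are excluded on both sides, and the two sets agree.
    assert holds == expected
```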
https://docs.microsoft.com/en-us/dotnet/csharp/language-reference/compiler-options/output
C# Compiler Options that control compiler output
The following options control compiler output generation. In each entry, the MSBuild property name is listed first, followed by the older csc.exe option.
• DocumentationFile / -doc: Generate XML doc file from /// comments.
• OutputAssembly / -out: Specify the output assembly file.
• PlatformTarget / -platform: Specify the target platform CPU.
• ProduceReferenceAssembly / -refout: Generate a reference assembly.
• TargetType / -target: Specify the type of the output assembly.
DocumentationFile
The DocumentationFile option allows you to place documentation comments in an XML file. To learn more about documenting your code, see Recommended Tags for Documentation Comments. The value specifies the path to the output XML file. The XML file contains the comments in the source code files of the compilation.
<DocumentationFile>path/to/file.xml</DocumentationFile>
The source code file that contains Main or top-level statements is output first into the XML. You'll often want to use the generated .xml file with IntelliSense. The .xml filename must be the same as the assembly name, and the .xml file must be in the same directory as the assembly; when the assembly is referenced in a Visual Studio project, the .xml file is found as well. For more information about generating code comments, see Supplying Code Comments. Unless you compile with <TargetType>module</TargetType>, the file will contain <assembly> and </assembly> tags specifying the name of the file containing the assembly manifest for the output file. For examples, see How to use the XML documentation features.
Note
The DocumentationFile option applies to all files in the project. To disable warnings related to documentation comments for a specific file or section of code, use #pragma warning.
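For example, a minimal project-file fragment enabling XML documentation output (the output path and file name here are illustrative assumptions, not from the article):

```xml
<!-- Illustrative .csproj fragment: emit XML doc comments for the assembly.
     The .xml name should match the assembly name, per the rules above. -->
<PropertyGroup>
  <DocumentationFile>bin\MyLibrary.xml</DocumentationFile>
</PropertyGroup>
```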
OutputAssembly
The OutputAssembly option specifies the name of the output file. The output path specifies the folder where compiler output is placed.
<OutputAssembly>folder</OutputAssembly>
Specify the full name and extension of the file you want to create. If you don't specify the name of the output file, MSBuild uses the name of the project to specify the name of the output assembly. Old style projects use the following rules:
• An .exe will take its name from the source code file that contains the Main method or top-level statements.
• A .dll or .netmodule will take its name from the first source code file.
Any modules produced as part of a compilation become files associated with any assembly also produced in the compilation. Use ildasm.exe to view the assembly manifest to see the associated files.
The OutputAssembly compiler option is required in order for an exe to be the target of a friend assembly.
PlatformTarget
Specifies which version of the CLR can run the assembly.
<PlatformTarget>anycpu</PlatformTarget>
• anycpu (default) compiles your assembly to run on any platform. Your application runs as a 64-bit process whenever possible and falls back to 32-bit when only that mode is available.
• anycpu32bitpreferred compiles your assembly to run on any platform. Your application runs in 32-bit mode on systems that support both 64-bit and 32-bit applications. You can specify this option only for projects that target .NET Framework 4.5 or later.
• ARM compiles your assembly to run on a computer that has an Advanced RISC Machine (ARM) processor.
• ARM64 compiles your assembly to run by the 64-bit CLR on a computer that has an Advanced RISC Machine (ARM) processor that supports the A64 instruction set.
• x64 compiles your assembly to be run by the 64-bit CLR on a computer that supports the AMD64 or EM64T instruction set.
• x86 compiles your assembly to be run by the 32-bit, x86-compatible CLR.
• Itanium compiles your assembly to be run by the 64-bit CLR on a computer with an Itanium processor.
On a 64-bit Windows operating system:
• Assemblies compiled with x86 execute on the 32-bit CLR running under WOW64.
• A DLL compiled with anycpu executes on the same CLR as the process into which it's loaded.
• Executables compiled with anycpu execute on the 64-bit CLR.
• Executables compiled with anycpu32bitpreferred execute on the 32-bit CLR.
The anycpu32bitpreferred setting is valid only for executable (.EXE) files, and it requires .NET Framework 4.5 or later. For more information about developing an application to run on a Windows 64-bit operating system, see 64-bit Applications.
You set the PlatformTarget option from Build properties page for your project in Visual Studio.
The behavior of anycpu has some additional nuances on .NET Core and .NET 5 and later releases. When you set anycpu, publish your app and execute it with either the x86 dotnet.exe or the x64 dotnet.exe. For self-contained apps, the dotnet publish step packages the executable for the configured RID.
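The options above are ordinary MSBuild properties, so they can be combined in one PropertyGroup. A minimal project-file sketch, assuming the property names used throughout this article; the "MyLib" file names and paths are illustrative placeholders, not a prescribed layout:

```xml
<!-- Illustrative sketch only: combining the output options discussed above. -->
<PropertyGroup>
  <TargetType>library</TargetType>
  <OutputAssembly>bin/MyLib.dll</OutputAssembly>
  <!-- DocumentationFile must match the assembly name and directory. -->
  <DocumentationFile>bin/MyLib.xml</DocumentationFile>
  <PlatformTarget>anycpu</PlatformTarget>
  <!-- MSBuild convention: reference assembly in a ref/ subfolder. -->
  <ProduceReferenceAssembly>bin/ref/MyLib.dll</ProduceReferenceAssembly>
</PropertyGroup>
```

Note how the DocumentationFile value follows the rule stated earlier: same base name as the output assembly, same directory.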
ProduceReferenceAssembly
The ProduceReferenceAssembly option specifies a file path where the reference assembly should be output. It translates to metadataPeStream in the Emit API. The filepath specifies the path for the reference assembly. It should generally match that of the primary assembly. The recommended convention (used by MSBuild) is to place the reference assembly in a "ref/" subfolder relative to the primary assembly.
<ProduceReferenceAssembly>filepath</ProduceReferenceAssembly>
Reference assemblies are a special type of assembly that contains only the minimum amount of metadata required to represent the library's public API surface. They include declarations for all members that are significant when referencing an assembly in build tools. Reference assemblies exclude all member implementations and declarations of private members. Those members have no observable impact on their API contract. For more information, see Reference assemblies in the .NET Guide.
The ProduceReferenceAssembly and ProduceOnlyReferenceAssembly options are mutually exclusive.
TargetType
The TargetType compiler option can be specified in one of the following forms:
• library: to create a code library. library is the default value.
• exe: to create an .exe file.
• module: to create a module.
• winexe: to create a Windows program.
• winmdobj: to create an intermediate .winmdobj file.
• appcontainerexe: to create an .exe file for Windows 8.x Store apps.
Note
For .NET Framework targets, unless you specify module, this option causes a .NET Framework assembly manifest to be placed in an output file. For more information, see Assemblies in .NET and Common Attributes.
<TargetType>library</TargetType>
The compiler creates only one assembly manifest per compilation. Information about all files in a compilation is placed in the assembly manifest. When producing multiple output files at the command line, only one assembly manifest can be created and it must go into the first output file specified on the command line.
If you create an assembly, you can indicate that all or part of your code is CLS-compliant with the CLSCompliantAttribute attribute.
library
The library option causes the compiler to create a dynamic-link library (DLL) rather than an executable file (EXE). The DLL will be created with the .dll extension. Unless otherwise specified with the OutputAssembly option, the output file name takes the name of the first input file. When building a .dll file, a Main method isn't required.
exe
The exe option causes the compiler to create an executable (EXE), console application. The executable file will be created with the .exe extension. Use winexe to create a Windows program executable. Unless otherwise specified with the OutputAssembly option, the output file name takes the name of the input file that contains the entry point (Main method or top-level statements). One and only one entry point is required in the source code files that are compiled into an .exe file. The StartupObject compiler option lets you specify which class contains the Main method, in cases where your code has more than one class with a Main method.
module
This option causes the compiler to not generate an assembly manifest. By default, the output file created by compiling with this option will have an extension of .netmodule. A file that doesn't have an assembly manifest cannot be loaded by the .NET runtime. However, such a file can be incorporated into the assembly manifest of an assembly with AddModules. If more than one module is created in a single compilation, internal types in one module will be available to other modules in the compilation. When code in one module references internal types in another module, then both modules must be incorporated into an assembly manifest, with AddModules. Creating a module isn't supported in the Visual Studio development environment.
winexe
The winexe option causes the compiler to create an executable (EXE), Windows program. The executable file will be created with the .exe extension. A Windows program is one that provides a user interface from either the .NET library or with the Windows APIs. Use exe to create a console application. Unless otherwise specified with the OutputAssembly option, the output file name takes the name of the input file that contains the Main method. One and only one Main method is required in the source code files that are compiled into an .exe file. The StartupObject option lets you specify which class contains the Main method, in cases where your code has more than one class with a Main method.
winmdobj
If you use the winmdobj option, the compiler creates an intermediate .winmdobj file that you can convert to a Windows Runtime binary (.winmd) file. The .winmd file can then be consumed by JavaScript and C++ programs, in addition to managed language programs.
The winmdobj setting signals to the compiler that an intermediate module is required. The .winmdobj file can then be fed through the WinMDExp export tool to produce a Windows metadata (.winmd) file. The .winmd file contains both the code from the original library and the WinMD metadata that is used by JavaScript or C++ and by the Windows Runtime. The output of a file that's compiled by using the winmdobj compiler option is used only as input for the WinMDExp export tool. The .winmdobj file itself isn't referenced directly. Unless you use the OutputAssembly option, the output file name takes the name of the first input file. A Main method isn't required.
appcontainerexe
If you use the appcontainerexe compiler option, the compiler creates a Windows executable (.exe) file that must be run in an app container. This option is equivalent to -target:winexe but is designed for Windows 8.x Store apps.
To require the app to run in an app container, this option sets a bit in the Portable Executable (PE) file. When that bit is set, an error occurs if the CreateProcess method tries to launch the executable file outside an app container. Unless you use the OutputAssembly option, the output file name takes the name of the input file that contains the Main method.
# Find the length of the chord and the distance between parallel chords, given their angles
The radius of a circle is 21.4 meter. Find the length of the chord subtended by a central angle of 110 degrees 40 minutes and the distance between two parallel chords on the same side of the center subtended by central angles 118 degrees 40 minutes and 52 degrees 20 minutes.
Progress: I know the radius is 21.4 meter, I set up a triangle but my final answer was wrong.
• Well I know the radius is 21.4 meter I set up a triangle but my final answer was wrong because I am not sure how to set it up. – Fernando Martinez Jul 3 '12 at 21:35
• draw perpendicular to the chord, the perpendicular will bisect the angle angle subtended at the center. – Santosh Linkha Jul 3 '12 at 21:39
• Please stop making titles that are descriptions followed by a question mark? – rschwieb Jul 3 '12 at 23:45
You have $\dfrac{a}{r} = \sin \alpha$ and $\dfrac{b}{r} = \cos \alpha$, where $\alpha$ is half the central angle, $a$ is half the chord, and $b$ is the distance from the center to the chord.
• @Bishop: Try working out $b$ for both chords – Henry Jul 4 '12 at 6:39
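Following the hinted setup (with $2\alpha$ the central angle, the chord is $2r\sin\alpha$ and its distance from the center is $r\cos\alpha$), a quick numeric check for the problem's data:

```python
import math

def chord_length(r, central_deg):
    """Chord subtended by a central angle (degrees): 2*r*sin(angle/2)."""
    return 2 * r * math.sin(math.radians(central_deg) / 2)

def chord_distance(r, central_deg):
    """Distance from the circle's center to the chord: r*cos(angle/2)."""
    return r * math.cos(math.radians(central_deg) / 2)

r = 21.4  # meters
# Chord subtended by 110 degrees 40 minutes:
chord = chord_length(r, 110 + 40 / 60)
# Two parallel chords on the same side of the center: their separation
# is the difference of their distances from the center.
d = chord_distance(r, 52 + 20 / 60) - chord_distance(r, 118 + 40 / 60)
print(round(chord, 2), round(d, 2))  # about 35.2 m and 8.29 m
```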
# Rose (mathematics)
Roses specified by the sinusoid ${\displaystyle r=\cos(k\theta )}$ for various rational values of the angular frequency k=n/d. Roses specified by ${\displaystyle r=\sin(k\theta )}$ are rotations of these roses by one-quarter period of the sinusoid in a counter-clockwise direction about the pole (origin). For proper mathematical analysis, ${\displaystyle k}$ must be expressed in irreducible form.
In mathematics, a rose or rhodonea curve is a sinusoid specified by either the cosine or sine functions with no phase angle that is plotted in polar coordinates. Rose curves or "rhodonea" were named by the Italian mathematician who studied them, Guido Grandi, between the years 1723 and 1728.[1]
## General overview
### Specification
A rose is the set of points in polar coordinates specified by the polar equation
${\displaystyle r=a\cos(k\theta )}$[2]
or in Cartesian coordinates using the parametric equations
${\displaystyle x=r\cos(\theta )=a\cos(k\theta )\cos(\theta )}$
${\displaystyle y=r\sin(\theta )=a\cos(k\theta )\sin(\theta )}$.
Roses can also be specified using the sine function.[3] Since
${\displaystyle \sin(k\theta )=\cos \left(k\theta -{\frac {\pi }{2}}\right)=\cos \left(k\left(\theta -{\frac {\pi }{2k}}\right)\right)}$.
Thus, the rose specified by ${\displaystyle \,r=a\sin(k\theta )}$ is identical to that specified by ${\displaystyle \,r=a\cos(k\theta )}$ rotated counter-clockwise by ${\displaystyle \pi /2k}$ radians, which is one-quarter the period of either sinusoid.
Since they are specified using the cosine or sine function, roses are usually expressed as polar coordinate (rather than Cartesian coordinate) graphs of sinusoids that have angular frequency of ${\displaystyle k}$ and an amplitude of ${\displaystyle a}$ that determine the radial coordinate ${\displaystyle (r)}$ given the polar angle ${\displaystyle (\theta )}$ (though when ${\displaystyle k}$ is a rational number, a rose curve can be expressed in Cartesian coordinates since those can be specified as algebraic curves[4]).
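The quarter-period rotation relationship above is easy to check numerically; a small sketch (amplitude and angular frequency chosen arbitrarily):

```python
import math

a, k = 2.0, 3.0  # arbitrary amplitude and angular frequency

def rose_cos(theta):
    return a * math.cos(k * theta)

def rose_sin(theta):
    return a * math.sin(k * theta)

# sin(k*theta) equals cos(k*(theta - pi/(2k))): the sine rose is the
# cosine rose rotated counter-clockwise by pi/(2k) radians.
for i in range(100):
    theta = i * 2 * math.pi / 100
    assert abs(rose_sin(theta) - rose_cos(theta - math.pi / (2 * k))) < 1e-12
print("sine rose = cosine rose rotated by pi/(2k)")
```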
### General properties
Roses are directly related to the properties of the sinusoids that specify them.
#### Petals
• Graphs of roses are composed of petals. A petal is the shape formed by the graph of a half-cycle of the sinusoid that specifies the rose. (A cycle is a portion of a sinusoid that is one period ${\displaystyle T=2\pi /k}$ long and consists of a positive half-cycle, the continuous set of points where ${\displaystyle r\geq 0}$ and is ${\displaystyle T/2=\pi /k}$ long, and a negative half-cycle is the other half where ${\displaystyle r\leq 0}$.)
• The shape of each petal is the same because the graphs of half-cycles have the same shape. The shape is given by the positive half-cycle with crest at ${\displaystyle (a,0)}$ specified by ${\displaystyle r=a\cos(k\theta )}$ (that is bounded by the angle interval ${\displaystyle -T/4\leq \theta \leq T/4}$). The petal is symmetric about the polar axis. All other petals are rotations of this petal about the pole, including those for roses specified by the sine function with same values for ${\displaystyle a}$ and ${\displaystyle k}$.[5]
• Consistent with the rules for plotting points in polar coordinates, a point in a negative half-cycle cannot be plotted at its polar angle because its radial coordinate ${\displaystyle r}$ is negative. The point is plotted by adding ${\displaystyle \pi }$ radians to the polar angle with a radial coordinate ${\displaystyle |r|}$. Thus, positive and negative half-cycles can be coincident in the graph of a rose. In addition, roses are inscribed in the circle ${\displaystyle r=a}$.
• When the period ${\displaystyle T}$ of the sinusoid is less than or equal to ${\displaystyle 4\pi }$, the petal's shape is a single closed loop. A single loop is formed because the angle interval for a polar plot is ${\displaystyle 2\pi }$ and the angular width of the half-cycle is less than or equal to ${\displaystyle 2\pi }$. When ${\displaystyle T>4\pi }$ (or ${\displaystyle |k|<1/2}$) the plot of a half-cycle can be seen as spiraling out from the pole in more than one circuit around the pole until plotting reaches the inscribed circle, where it spirals back to the pole, intersecting itself and forming one or more loops along the way. Consequently, each petal forms 2 loops when ${\displaystyle 4\pi <T\leq 8\pi }$ (or ${\displaystyle 1/4\leq |k|<1/2}$), 3 loops when ${\displaystyle 8\pi <T\leq 12\pi }$ (or ${\displaystyle 1/6\leq |k|<1/4}$), etc. Roses with only one petal with multiple loops are observed for ${\displaystyle k=1/3,k=1/5,k=1/7}$, etc. (See the figure in the introduction section.)
• A rose's petals will not intersect each other when the angular frequency ${\displaystyle k}$ is a non-zero integer; otherwise, petals intersect one another.
#### Symmetry
All roses display one or more forms of symmetry due to the underlying symmetric and periodic properties of sinusoids.
• A rose specified as ${\displaystyle r=a\cos(k\theta )}$ is symmetric about the polar axis (the line ${\displaystyle \theta =0}$) because of the identity ${\displaystyle a\cos(k\theta )=a\cos(-k\theta )}$ that makes the roses specified by the two polar equations coincident.
• A rose specified as ${\displaystyle r=a\sin(k\theta )}$ is symmetric about the vertical line ${\displaystyle \theta =\pi /2}$ because of the identity ${\displaystyle a\sin(k\theta )=a\sin(\pi -k\theta )}$ that makes the roses specified by the two polar equations coincident.
• Only certain roses are symmetric about the pole.
• Individual petals are symmetric about the line through the pole and the petal's peak, which reflects the symmetry of the half-cycle of the underlying sinusoid. Roses composed of a finite number of petals are, by definition, rotationally symmetric since each petal is the same shape, with successive petals rotated by the same angle about the pole.
## Roses with non-zero integer values of k
The rose ${\displaystyle r=\cos(4\theta )}$. Since ${\displaystyle k=4}$ is an even number, the rose has ${\displaystyle 2k=8}$ petals. Line segments connecting successive peaks lie on the circle ${\displaystyle r=1}$ and will form an octagon. Since one peak is at ${\displaystyle (1,0)}$ the octagon makes sketching the graph relatively easy after the half-cycle boundaries (corresponding to apothems) are drawn.
The rose specified by ${\displaystyle r=\cos(7\theta )}$. Since ${\displaystyle k=7}$ is an odd number, the rose has ${\displaystyle k=7}$ petals. Line segments connecting successive peaks lie on the circle ${\displaystyle r=1}$ and will form a heptagon. The rose is inscribed in the circle ${\displaystyle r=1}$.
When ${\displaystyle k}$ is a non-zero integer, the curve will be rose-shaped with ${\displaystyle 2k}$ petals if ${\displaystyle k}$ is even, and ${\displaystyle k}$ petals when ${\displaystyle k}$ is odd.[6] The properties of these roses are a special case of roses with angular frequencies ${\displaystyle (k)}$ that are rational numbers discussed in the next section of this article.
• The rose is inscribed in the circle ${\displaystyle r=a}$, corresponding to the radial coordinate of all of its peaks.
• Because a polar coordinate plot is limited to polar angles between ${\displaystyle 0}$ and ${\displaystyle 2\pi }$, there are ${\displaystyle 2\pi /T=k}$ cycles displayed in the graph. No additional points need be plotted because the radial coordinate at ${\displaystyle \theta =0}$ is the same as at ${\displaystyle \theta =2\pi }$ (which are crests for two different positive half-cycles for roses specified by the cosine function).
• When ${\displaystyle k}$ is even (and non-zero), the rose is composed of ${\displaystyle 2k}$ petals, one for each peak in the ${\displaystyle 2\pi }$ interval of polar angles displayed. Each peak corresponds to a point lying on the circle ${\displaystyle r=a}$. Line segments connecting successive peaks will form a regular polygon with an even number of vertices that has its center at the pole and a radius through each peak, and likewise:
• The roses are symmetric about the pole.
• The roses are symmetric about each line through the pole and a peak (through the "middle" of a petal) with the polar angle between the peaks of successive petals being ${\displaystyle 2\pi /2k=\pi /k}$ radians. Thus, these roses have rotational symmetry of order ${\displaystyle 2k}$.
• The roses are symmetric about each line that bisects the angle between successive peaks, which corresponds to half-cycle boundaries and the apothem of the corresponding polygon.
• When ${\displaystyle k}$ is odd, the rose is composed of ${\displaystyle k}$ petals, one for each crest (or trough) in the ${\displaystyle 2\pi }$ interval of polar angles displayed. Each peak corresponds to a point lying on the circle ${\displaystyle r=a}$. These roses' positive and negative half-cycles are coincident, which means that in graphing them, only the positive half-cycles or only the negative half-cycles need to be plotted in order to form the full curve. (Equivalently, a complete curve will be graphed by plotting any continuous interval of polar angles that is ${\displaystyle \pi }$ radians long such as ${\displaystyle \theta =0}$ to ${\displaystyle \theta =\pi }$.[7]) Line segments connecting successive peaks will form a regular polygon with an odd number of vertices, and likewise:
• The roses are symmetric about each line through the pole and a peak (through the "middle" of a petal) with the polar angle between the peaks of successive petals being ${\displaystyle 2\pi /k}$ radians. Thus, these roses have rotational symmetry of order ${\displaystyle k}$.
• The rose’s petals do not overlap.
• The roses can be specified by algebraic curves of order ${\displaystyle k+1}$ when k is odd, and ${\displaystyle 2(k+1)}$ when k is even.[8]
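The petal-counting rule for non-zero integer angular frequencies can be stated compactly; a minimal sketch:

```python
def petal_count(k: int) -> int:
    """Number of petals of r = a*cos(k*theta) for non-zero integer k:
    2k petals when k is even, k petals when k is odd."""
    k = abs(k)
    if k == 0:
        raise ValueError("k must be a non-zero integer")
    return 2 * k if k % 2 == 0 else k

# Examples matching the figures in this section:
print(petal_count(4), petal_count(7))  # 8 petals and 7 petals
```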
### The circle
A rose with ${\displaystyle k=1}$ is a circle that lies on the pole with a diameter that lies on the polar axis when ${\displaystyle r=a\cos(\theta )}$. The circle is the curve's single petal. (See the circle being formed at the end of the next section.) In Cartesian coordinates, the equivalent cosine and sine specifications are ${\displaystyle (x-a/2)^{2}+y^{2}=(a/2)^{2}}$ and ${\displaystyle x^{2}+(y-a/2)^{2}=(a/2)^{2}}$, respectively.
A rose with ${\displaystyle k=2}$ is called a quadrifolium because it has 4 petals. In Cartesian Coordinates the cosine and sine specifications are ${\displaystyle (x^{2}+y^{2})^{3}=a^{2}(x^{2}-y^{2})^{2}}$ and ${\displaystyle (x^{2}+y^{2})^{3}=4(axy)^{2}}$, respectively.
### The trifolium
A rose with ${\displaystyle k=3}$ is called a trifolium[9] because it has 3 petals. The curve is also called the Paquerette de Mélibée. In Cartesian Coordinates the cosine and sine specifications are ${\displaystyle (x^{2}+y^{2})^{2}=a(x^{3}-3xy^{2})}$ and ${\displaystyle (x^{2}+y^{2})^{2}=-a(x^{3}-3xy^{2})}$, respectively.[10] (See the trifolium being formed at the end of the next section.)
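The polar and Cartesian specifications of the trifolium can be cross-checked numerically; a sketch with a = 1, sampling points on the polar curve and substituting them into the Cartesian equation:

```python
import math

a = 1.0
for i in range(200):
    theta = i * 2 * math.pi / 200
    r = a * math.cos(3 * theta)               # polar trifolium
    x, y = r * math.cos(theta), r * math.sin(theta)
    # Cartesian (cosine) form: (x^2 + y^2)^2 = a*(x^3 - 3*x*y^2)
    assert abs((x * x + y * y) ** 2 - a * (x ** 3 - 3 * x * y * y)) < 1e-12
print("polar and Cartesian trifolium specifications agree")
```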
### Total and petal areas
The total area of a rose with polar equation of the form
${\displaystyle r=a\cos(k\theta )}$ or ${\displaystyle r=a\sin(k\theta )\,}$, where ${\displaystyle k}$ is a non-zero integer, is
${\displaystyle {\frac {1}{2}}\int _{0}^{2\pi }(a\cos(k\theta ))^{2}\,d\theta ={\frac {a^{2}}{2}}\left(\pi +{\frac {\sin(4k\pi )}{4k}}\right)={\frac {\pi a^{2}}{2}}}$, when ${\displaystyle k}$ is even; and
${\displaystyle {\frac {1}{2}}\int _{0}^{\pi }(a\cos(k\theta ))^{2}\,d\theta ={\frac {a^{2}}{2}}\left({\frac {\pi }{2}}+{\frac {\sin(2k\pi )}{4k}}\right)={\frac {\pi a^{2}}{4}}}$, when ${\displaystyle k}$ is odd.[11]
When ${\displaystyle k}$ is even, there are ${\displaystyle 2k}$ petals; and when ${\displaystyle k}$ is odd, there are ${\displaystyle k}$ petals, so the area of each petal is ${\displaystyle {\frac {\pi a^{2}}{4k}}}$.
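The closed-form areas above can be confirmed with a simple midpoint Riemann sum of ${\textstyle {\frac {1}{2}}\int r^{2}\,d\theta }$; a sketch:

```python
import math

def rose_area(a, k, n=200000):
    """Total area of r = a*cos(k*theta) for non-zero integer k, by a
    midpoint Riemann sum of (1/2) * integral of r^2 dtheta.
    Odd k traces the full rose over [0, pi], so integrate only that far."""
    upper = 2 * math.pi if k % 2 == 0 else math.pi
    h = upper / n
    s = sum((a * math.cos(k * (i + 0.5) * h)) ** 2 for i in range(n))
    return 0.5 * s * h

a = 1.0
print(abs(rose_area(a, 4) - math.pi * a * a / 2) < 1e-6)  # even k: pi*a^2/2
print(abs(rose_area(a, 7) - math.pi * a * a / 4) < 1e-6)  # odd k:  pi*a^2/4
```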
## Roses with rational number values for k
In general, when ${\displaystyle k}$ is a rational number in the irreducible fraction form ${\displaystyle k=n/d}$, where ${\displaystyle n}$ and ${\displaystyle d}$ are non-zero integers, the number of petals is the denominator of the expression ${\displaystyle 1/2-1/(2k)=(n-d)/2n}$.[12] This means that the number of petals is ${\displaystyle n}$ if both ${\displaystyle n}$ and ${\displaystyle d}$ are odd, and ${\displaystyle 2n}$ otherwise.[13]
• In the case when both ${\displaystyle n}$ and ${\displaystyle d}$ are odd, the positive and negative half-cycles of the sinusoid are coincident. The graph of these roses are completed in any continuous interval of polar angles that is ${\displaystyle d\pi }$ long.[14]
• When ${\displaystyle n}$ is even and ${\displaystyle d}$ is odd, or vice versa, the rose will be completely graphed in a continuous polar angle interval ${\displaystyle 2d\pi }$ long.[15] Furthermore, the roses are symmetric about the pole for both cosine and sine specifications.[16]
• In addition, when ${\displaystyle n}$ is odd and ${\displaystyle d}$ is even, roses specified by the cosine and sine polar equations with the same values of ${\displaystyle a}$ and ${\displaystyle k}$ are coincident. For such a pair of roses, the rose with the sine function specification is coincident with the crest of the rose with the cosine specification on the polar axis, either at ${\displaystyle \theta =d\pi /2}$ or at ${\displaystyle \theta =3d\pi /2}$. (This means that roses ${\displaystyle r=a\cos(k\theta )}$ and ${\displaystyle r=a\sin(k\theta )}$ with non-zero integer values of ${\displaystyle k}$ are never coincident.)
• The rose is inscribed in the circle ${\displaystyle r=a}$, corresponding to the radial coordinate of all of its peaks.
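The petal count for rational k = n/d follows the parity rule stated at the start of this section; a sketch using Python's Fraction to reduce n/d to lowest terms first, as the rule requires:

```python
from fractions import Fraction

def petal_count(n: int, d: int) -> int:
    """Petals of r = a*cos((n/d)*theta): reduce n/d to lowest terms;
    n petals if n and d are both odd, 2n petals otherwise."""
    k = Fraction(n, d)  # automatically reduced to lowest terms
    n, d = k.numerator, k.denominator
    return n if (n % 2 == 1 and d % 2 == 1) else 2 * n

print(petal_count(4, 5))  # 8: matches the k = 4/5 rose pictured below
print(petal_count(1, 3))  # 1: the limacon trisectrix has a single petal
print(petal_count(1, 2))  # 2 (n odd, d even)
```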
### The Dürer folium
A rose with ${\displaystyle k=1/2}$ is called the Dürer folium, named after the German painter and engraver Albrecht Dürer. The roses specified by ${\displaystyle r=a\cos(\theta /2)}$ and ${\displaystyle r=a\sin(\theta /2)}$ are coincident even though ${\displaystyle a\cos(\theta /2)\neq a\sin(\theta /2)}$. In Cartesian Coordinates the rose is specified as ${\displaystyle (x^{2}+y^{2})[2(x^{2}+y^{2})-a^{2}]^{2}=a^{4}x^{2}}$.[17]
The Dürer folium is also a trisectrix, a curve that can be used to trisect angles.
### The limaçon trisectrix
A rose with ${\displaystyle k=1/3}$ is a limaçon trisectrix that has the property of trisectrix curves that can be used to trisect angles. The rose has a single petal with two loops. (See the animation below.)
Examples of roses ${\displaystyle r=\cos(k\theta )}$ created using gears with different ratios.
The rays displayed are the polar axis and ${\displaystyle \theta =\pi /2}$.
Graphing starts at ${\displaystyle \theta =2\pi }$ when ${\displaystyle k}$ is an integer, ${\displaystyle \theta =2d\pi }$ otherwise, and proceeds clockwise to ${\displaystyle \theta =0}$.
The circle, k=1 (n=1, d=1). The rose is complete when ${\displaystyle \theta =\pi }$ is reached (one-half revolution of the lighter gear).
The limaçon trisectrix, k=1/3 (n=1, d=3), has one petal with two loops. The rose is complete when ${\displaystyle \theta =3\pi }$ is reached (one and one-half revolution of the lighter gear).
The trifolium, k=3 (n=3, d=1). The rose is complete when ${\displaystyle \theta =\pi }$ is reached (one-half revolution of the lighter gear).
Each of the 8 petals of the rose with k=4/5 (n=4, d=5) is a single loop that intersects other petals. The rose is symmetric about the pole. The rose is complete at ${\displaystyle \theta =0}$ (five revolutions of the lighter gear).
## Roses with irrational number values for k
A rose curve specified with an irrational number for ${\displaystyle k}$ has an infinite number of petals[18] and will never complete. For example, the sinusoid ${\displaystyle r=a\cos(\pi \theta )}$ has a period ${\displaystyle T=2}$, so, it has a petal in the polar angle interval ${\displaystyle -1/2\leq \theta \leq 1/2}$ with a crest on the polar axis; however there is no other polar angle in the domain of the polar equation that will plot at the coordinates ${\displaystyle (a,0)}$. Overall, roses specified by sinusoids with angular frequencies that are irrational constants form a dense set (i.e., they come arbitrarily close to specifying every point in the disk ${\displaystyle r\leq a}$).
## Notes
1. ^ O'Connor, John J.; Robertson, Edmund F., "Rhodonea", MacTutor History of Mathematics archive, University of St Andrews
2. ^ Mathematical Models by H. Martyn Cundy and A.P. Rollett, second edition, 1961 (Oxford University Press), p. 73.
3. ^ "Rose (Mathematics)". Retrieved 2021-02-02.
4. ^ Robert Ferreol. "Rose". Retrieved 2021-02-03.
5. ^ Xah Lee. "Rose Curve". Retrieved 2021-02-12.
6. ^ Eric W. Weisstein. "Rose (Mathematics)". Wolfram MathWorld. Retrieved 2021-02-05.
7. ^ "Number of Petals of Odd Index Rhodonea Curve". ProofWiki.org. Retrieved 2021-02-03.
8. ^ Robert Ferreol. "Rose". Retrieved 2021-02-03.
9. ^ "Trifolium". Retrieved 2021-02-02.
10. ^ Eric W. Weisstein. "Paquerette de Mélibée". Wolfram MathWorld. Retrieved 2021-02-05.
11. ^ Robert Ferreol. "Rose". Retrieved 2021-02-03.
12. ^ Jan Wassenaar. "Rhodonea". Retrieved 2021-02-02.
13. ^ Robert Ferreol. "Rose". Retrieved 2021-02-05.
14. ^ Xah Lee. "Rose Curve". Retrieved 2021-02-12.
15. ^ Xah Lee. "Rose Curve". Retrieved 2021-02-12.
16. ^ Jan Wassenaar. "Rhodonea". Retrieved 2021-02-02.
17. ^ Robert Ferreol. "Dürer Folium". Retrieved 2021-02-03.
18. ^ Eric W. Weisstein. "Rose (Mathematics)". Wolfram MathWorld. Retrieved 2021-02-05.
Exponents Issue
02-04-2020, 11:23 AM
Post: #1
dimchansky Junior Member Posts: 6 Joined: Feb 2020
Exponents Issue
Expecting the expression
Code:
x^(1/9)*(x*x^(1/3))^(1/6)
to be simplified to
Code:
x^(1/3)
, but the calculator does not simplify it (tried collect, simplify functions with no effect). TI-89 Titanium and TI-Nspire both calculators simplify it.
Any ideas?
02-05-2020, 12:26 AM
Post: #2
CyberAngel Member Posts: 231 Joined: Jul 2018
RE: Exponents Issue
(02-04-2020 11:23 AM)dimchansky Wrote: Expecting the expression
Code:
x^(1/9)*(x*x^(1/3))^(1/6)
to be simplified to
Code:
x^(1/3)
, but the calculator does not simplify it (tried collect, simplify functions with no effect). TI-89 Titanium and TI-Nspire both calculators simplify it.
Any ideas?
A funny try in the middle of the night:
Code:
#cas
expcollect(f):=
BEGIN
  LOCAL g,s,v;
  v:=mat2list(algvar(f));
  s:=SIZE(v);
  v:=head(mat2list(v(s)));
  g:="'solve(y_="+f+","+v+")'";
  g:=expr(g);
  f:="'solve("+v+"="+g+",y_)'";
  RETURN expr(f);
END;
#end
VPN
02-05-2020, 10:42 PM
Post: #3
Claudio L. Senior Member Posts: 1,677 Joined: Dec 2013
RE: Exponents Issue
(02-04-2020 11:23 AM)dimchansky Wrote: Expecting the expression
Code:
x^(1/9)*(x*x^(1/3))^(1/6)
to be simplified to
Code:
x^(1/3)
, but the calculator does not simplify it (tried collect, simplify functions with no effect). TI-89 Titanium and TI-Nspire both calculators simplify it.
Any ideas?
I don't think both expressions are equivalent with x<0? I don't know, it seems to me some of the multiple solutions might get lost in the shuffle and that's why the CAS refuses to simplify. Wolfram Alpha seems to think it's a bad idea to simplify this too.
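Claudio's point can be illustrated numerically. With principal-branch complex powers (which is what Python's ** operator uses for a negative base and a fractional exponent), the two expressions agree for x > 0 but differ for x < 0 — a sketch:

```python
def f(x):
    # The expression from the original post; ** takes the principal
    # branch for negative bases, so the result is complex for x < 0.
    return x ** (1 / 9) * (x * x ** (1 / 3)) ** (1 / 6)

# For x > 0 the simplification to x^(1/3) holds:
assert abs(f(5.0) - 5.0 ** (1 / 3)) < 1e-9

# For x = -8 the principal cube root is 1 + sqrt(3)*i, not -2 ...
print((-8.0) ** (1 / 3))  # roughly 1 + 1.732j
# ... while f(-8) evaluates to 2 (real), so f(x) != x^(1/3) there:
print(f(-8.0))
```

So a CAS that tracks principal branches is justified in refusing the blanket simplification, even though it is valid on the positive reals.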
02-05-2020, 11:05 PM
Post: #4
CyberAngel Member Posts: 231 Joined: Jul 2018
RE: Exponents Issue
(02-05-2020 10:42 PM)Claudio L. Wrote:
(02-04-2020 11:23 AM)dimchansky Wrote: Expecting the expression
Code:
x^(1/9)*(x*x^(1/3))^(1/6)
to be simplified to
Code:
x^(1/3)
, but the calculator does not simplify it (tried collect, simplify functions with no effect). TI-89 Titanium and TI-Nspire both calculators simplify it.
Any ideas?
I don't think both expressions are equivalent with x<0? I don't know, it seems to me some of the multiple solutions might get lost in the shuffle and that's why the CAS refuses to simplify. Wolfram Alpha seems to think it's a bad idea to simplify this too.
Right!
Fractional or rational exponents using Power function differ from the Root function.
Note that negative real numbers can have odd roots. A third root of -8 is -2.
He asked for the TI-like simplification.
02-06-2020, 12:21 AM
Post: #5
Albert Chan Senior Member Posts: 825 Joined: Jul 2018
RE: Exponents Issue
(02-05-2020 10:42 PM)Claudio L. Wrote: Wolfram Alpha seems to think it's a bad idea to simplify this too.
Mathematica "^" only simplify if c is integer:
• (a*b)^c → a^c * b^c
• (a^b)^c → a ^ (b*c)
To force above rules for other c, use PowerExpand, but result might not be correct.
x^(1/9)*(x*x^(1/3))^(1/6) // PowerExpand → x^(1/3)
(02-05-2020 11:05 PM)CyberAngel Wrote: Fractional or rational exponents using Power function differ from the Root function.
I tried XCas using root, but it made no difference.
From XCas help, it says root(a,b) returns b^(1/a) (root(2,3)=sqrt(3)).
02-06-2020, 04:06 AM
Post: #6
CyberAngel Member Posts: 231 Joined: Jul 2018
RE: Exponents Issue
(02-06-2020 12:21 AM)Albert Chan Wrote:
(02-05-2020 11:05 PM)CyberAngel Wrote: Fractional or rational exponents using Power function differ from the Root function.
I tried XCas using root, but it made no difference.
From XCas help, it says root(a,b) returns b^(1/a) (root(2,3)=sqrt(3)).
Try on an emulator and on a real calculator
CAS
a third root or cubic root of minus eight
A] NTHROOT(3,-8)
(minus eight) ←the surrounding () are mandatory
to power one 3rd
B] (-8)^(1/3)
Check the CAS complex off/ use i off and principal off
What do you get?
– –
VPN
02-06-2020, 09:19 PM
Post: #7
swagner53 Junior Member Posts: 15 Joined: Jun 2018
RE: Exponents Issue
Hi,
I entered the equation in algebraic mode exactly as you had it, pressed enter, and got x^(1/3). I am on hardware C, version 2.1.14425, CAS 1.5. There seem to be some improvements in the new version.
02-07-2020, 09:03 AM
Post: #8
dimchansky Junior Member Posts: 6 Joined: Feb 2020
RE: Exponents Issue
(02-06-2020 09:19 PM)swagner53 Wrote: Hi,
I entered the equation in algebraic mode exactly as you had it, pressed enter, and got x^(1/3). I am on hardware C, version 2.1.14425, CAS 1.5. There seem to be some improvements in the new version.
That's interesting.. Because I get the different result on the same version, but on Emu.
02-07-2020, 09:38 AM
Post: #9
dimchansky Junior Member Posts: 6 Joined: Feb 2020
RE: Exponents Issue
(02-05-2020 10:42 PM)Claudio L. Wrote: I don't think both expressions are equivalent with x<0? I don't know, it seems to me some of the multiple solutions might get lost in the shuffle and that's why the CAS refuses to simplify. Wolfram Alpha seems to think it's a bad idea to simplify this too.
Yeah, it can be the case.
But then, assume function does not help:
02-07-2020, 12:47 PM
Post: #10
DrD Senior Member Posts: 1,102 Joined: Feb 2014
RE: Exponents Issue
wxmaxima:
(%i1) x^(1/9)*(x*x^(1/3))^(1/6);
(%o1) x^(1/3)
For students studying laws of exponents, (with x being strictly symbolic), this result confirms a manually-derived result.
02-07-2020, 02:21 PM
Post: #11
roadrunner Member Posts: 283 Joined: Jun 2015
RE: Exponents Issue
Perhaps this is helps:
-road
02-07-2020, 02:26 PM
Post: #12
dimchansky Junior Member Posts: 6 Joined: Feb 2020
RE: Exponents Issue
(02-07-2020 02:21 PM)roadrunner Wrote: Perhaps this is helps:
But how did you get the first simplification result? Doesn't work for me:
What's the version of your calculator? Hardware/Emu?
02-07-2020, 04:26 PM
Post: #13
roadrunner Member Posts: 283 Joined: Jun 2015
RE: Exponents Issue
On the emulator:
1. copy the expression from your first post in this thread and paste it onto the command line, click enter;
2. click on the expression;
3. click simplify;
4. click on the answer;
5. repeat from step 3.
I just now tried it on the handheld but keyed in the expression manually, it returned identical results.
hardware info:
-road
02-07-2020, 04:29 PM
Post: #14
dimchansky Junior Member Posts: 6 Joined: Feb 2020
RE: Exponents Issue
(02-07-2020 04:26 PM)roadrunner Wrote: On the emulator:
1. copy the expression from your first post in this thread and paste it onto the command line, click enter;
2. click on the expression;
3. click simplify;
4. click on the answer;
5. repeat from step 3.
I just now tried it on the handheld but keyed in the expression manually, it returned identical results.
doesn't work on emulator, it returns the same answer after simplification.
02-07-2020, 04:41 PM
Post: #15
roadrunner Member Posts: 283 Joined: Jun 2015
RE: Exponents Issue
Here's something interesting:
Copy your expression into the emulator and click enter.
Clicking the expression on the right and clicking simplify gives a different answer than clicking the expression on the left?!??!?!?!
-road
02-07-2020, 04:52 PM
Post: #16
roadrunner Member Posts: 283 Joined: Jun 2015
RE: Exponents Issue
Here's a better example:
-road
02-07-2020, 08:41 PM
Post: #17
thenozone Junior Member Posts: 15 Joined: Mar 2017
RE: Exponents Issue
wolfram alpha give me:- see attachments,
but i could have mistyped.
Attached File(s) Thumbnail(s)
02-07-2020, 08:54 PM
Post: #18
swagner53 Junior Member Posts: 15 Joined: Jun 2018
RE: Exponents Issue
If I enter the equation by itself and press enter, then I get x^(1/3) - no simplify. If I start with simplify, I get the same results as Roadrunner (cycle 3 times with simplify and finally get x^(1/3)). Note I have set the Simplify setting in CAS to Maximum.
02-07-2020, 10:15 PM
Post: #19
dimchansky Junior Member Posts: 6 Joined: Feb 2020
RE: Exponents Issue
(02-07-2020 04:41 PM)roadrunner Wrote: Here's something interesting:
Copy your expression into the emulator and click enter.
Clicking the expression on the right and clicking simplify gives a different answer than clicking the expression on the left?!??!?!?!
yes, this way it works! it's some kind of black magic!
02-07-2020, 10:58 PM
Post: #20
Claudio L. Senior Member Posts: 1,677 Joined: Dec 2013
RE: Exponents Issue
(02-07-2020 08:41 PM)thenozone Wrote: wolfram alpha give me:- see attachments,
but i could have mistyped.
You got it right, except that's the "mobile version". If you look at the desktop result, you'll see above the plots another box that says "result", and it shows the expression as x^(1/9)*(x^(4/3))^(1/6). So it only collapsed the x*x^(1/3) into x^(4/3).
Also notice on your own pictures, the result at the bottom reads "Alternate form assuming x>0:", so it's not quite the same.
The problem is that when x<0, any fractional power will choose the principal root, but depending on the order of operators, it may choose a different root. For example, assume x=-8
Basic algebra tells you:
x*x^(1/3) = x^(4/3) = (x^(1/3))^4 = (x^4)^(1/3)
The principal root of (-8)^(1/3) = 2*e^(i*pi/3) = (1+√3*i)
(x^(1/3))^4 = (1+√3*i)^4 = (-8-8*√3*i) = 16*e^(-2/3*pi*i)
That's a valid solution. Now the other form:
(x^4)^(1/3) = 4096^(1/3) = 16
which is also a completely valid solution.
If we had a system that calculates all roots no matter what, we'd have all 3 roots of x^(1/3) at the end on all the different possible simplification paths (we'd have 9 different roots with x^(1/9) then 6 of them will collapse into the same result, ending with only 3 roots, precisely the roots of x^(1/3)).
So the simplification is not wrong per se, but evaluation of the two expressions may return different roots, which can be very confusing.
To make it worse, it may change the chosen root as you change the value of x, so the plot may appear to be discontinuous when the expression is actually not.
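Claudio's branch argument is easy to check numerically. A minimal Python sketch (not HP Prime code, and only an illustration) that uses the principal branch of the complex logarithm, with x = -8 as in the post:

```python
import cmath

x = -8
# Principal cube root via the principal branch of log: 2*e^(i*pi/3)
r = cmath.exp(cmath.log(x) / 3)

# Path A: (x^(1/3))^4 -- take the principal root first, then the 4th power
path_a = r ** 4                            # 16*e^(-i*2*pi/3) = -8 - 8*sqrt(3)*i

# Path B: (x^4)^(1/3) -- x^4 = 4096 is a positive real, so the root is real
path_b = cmath.exp(cmath.log(x ** 4) / 3)  # 16

print(path_a, path_b)
```

Both values are cube roots of 4096, which is exactly why a CAS that tracks principal branches refuses to merge the two forms.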
« Next Oldest | Next Newest »
User(s) browsing this thread: 1 Guest(s)
|
2020-02-27 05:32:57
|
|
http://math.stackexchange.com/questions/97671/odds-of-getting-specific-color-of-jelly-beans-in-a-handful/97674
|
Odds of getting specific color of Jelly Beans in a handful?
I have a bag of jelly beans with approx 1190 Jelly Belly's in it. There are 50 different flavors. Assuming the amount of Jelly Belly's per flavor are equal (so, 23.8 of each bean):
If I pull 6 Jelly Beans from my unopened bag, what are the odds that 3 of them will be the same color?
I'm asking because I got a bag of jelly beans for Christmas and got 3 of the same color in the handful I just pulled out... and it's been many many years since my statistics class in college.
Thank you for the help. It's driving me batty and nobody at work cares about my Jelly Belly question except me =(
I would have tried to figure this out on my own as it seems very easy, but I don't even know where to begin looking.
-
23.8 of each bean??? – Rasmus Jan 9 '12 at 17:43
Can we be even more approximate and say the bag has 1200 beans? – David Mitra Jan 9 '12 at 17:44
Sure? Does it even matter though since there are only 50 colors? You can be as approximate as you want. I won't know the difference. ..and yeah, one of the beans was missing a corner, hence the .8! – Bryan Jan 9 '12 at 17:45
@Rasmus Some people decide they don't like the flavor of a bean after all and put it back in the bag :) – David Mitra Jan 9 '12 at 17:45
ps jelly beans don't have corners. – Bryan Jan 9 '12 at 17:50
Let's pretend the bag is very large, so drawing one bean of a flavor doesn't change the probabilities. There are $50^6$ possible draws, all the same probability. The ways to get exactly $3$ of a flavor can be counted as $\binom 63=15$ choices for which $3$ will match times $50$ choices for which flavor to match times $49^3$ for the other $3$. There is a tiny error as we double count the case you get two three of a kinds. So the chance is $\frac {15*50*49^3}{50^6}=352947/62500000 = 0.005647152$ or about $1$ in $177$.
|
2015-10-10 03:53:15
|
|
https://physics.stackexchange.com/questions/101887/do-gamma-matrices-form-a-basis/102078
|
# Do gamma matrices form a basis?
Do the four gamma matrices form a basis for the set of matrices $GL(4,\mathbb{C})$? I was actually trying to evaluate a term like $\gamma^0 M^\dagger \gamma^0$ in a representation independent way, where $M, M^\dagger$ are $4\times 4$ matrices.
• To form a complete basis of $M(4,\mathbb{C})$, are the coefficients of the 16-element vector real or complex? May 3, 2020 at 15:27
As previous answers have correctly noted, gamma matrices do not form a basis of $$M(4,\mathbb{C})$$. Nevertheless you can construct one from them in the following way
• 1 the identity matrix $$\mathbb{1}$$
• 4 matrices $$\gamma^\mu$$
• 6 matrices $$\sigma^{\mu\nu}=\gamma^{[\mu}\gamma^{\nu]}$$
• 4 matrices $$\sigma^{\mu\nu\rho}=\gamma^{[\mu}\gamma^{\nu}\gamma^{\rho]}$$
• 1 matrix $$\sigma^{\mu\nu\rho\delta}=\gamma^{[\mu}\gamma^{\nu}\gamma^{\rho}\gamma^{\delta]}=i\epsilon^{\mu\nu\rho\delta}\gamma^5$$
these 16 matrices form the basis that we were looking for.
Furthermore they are used to construct the spinor bilinears multiplying by $$\bar{\psi}$$ on the left and $$\psi$$ on the right, which transform in the Lorentz indices as follows
• $$\bar{\psi}\psi$$ scalar
• $$\bar{\psi}\gamma^\mu\psi$$ vector
• $$\bar{\psi}\sigma^{\mu\nu}\psi$$ 2nd rank (antisymmetric) tensor
• $$\bar{\psi}\sigma^{\mu\nu\rho}\psi$$ pseudovector
• $$\bar{\psi}\gamma^5\psi$$ pseudoscalar
the fact that they form a basis of $$M(4,\mathbb{C})$$ is very important because these are then the only independent spinor bilinears (i.e. $$\bar{\psi}M\psi$$) that can be constructed; any other can be expressed as a linear combination of these. A different issue is whether it would make any sense to sum any of these, since they are different types of tensors under Lorentz group transformations.
• I can't help but notice that a 2nd rank symmetric tensor is conspicuously absent can you form one from the anticommutator and the spinors as in the above? Dec 19, 2018 at 7:18
• What does the bracket notation $^{[\mu}\gamma^{\nu]}$ in $\gamma^{[\mu}\gamma^{\nu]}$ means? Aug 1, 2019 at 13:09
• @AlexandreH.Tremblay It's just the antisymmetrization $\gamma^{[\mu} \gamma^{\nu]} = \frac{1}{2} (\gamma^{\mu} \gamma^{\nu} - \gamma^{\nu} \gamma^{\mu})$
– jpm
Aug 9, 2019 at 8:33
To complement V. Moretti's excellent answer, it's worth emphasizing that the dimension of the four-by-four complex matrices $\mathbb C^{4\times 4}$, when seen as a vector space over $\mathbb C$, is $4\!\times\!4=16$. As such, a set of four matrices (i.e. vectors in $\mathbb C^{4\times 4}$) can never be a basis for it.
It's also worth saying that the general linear group $\text{GL}(4,\mathbb C)\subset\mathbb C^{4\times 4}$, i.e. the four-by-four matrices with nonzero determinant, is not a vector space (for one, it doesn't have a zero), and it is therefore misleading to speak of a basis for it. That said, it is still possible to ask for a minimal set of vectors (i.e. matrices) whose span will contain $\text{GL}(4,\mathbb C)$; this turns out to require a full basis of $\mathbb C^{4\times 4}$ because the matrices you 'skip', $\mathbb C^{4\times 4}\setminus\text{GL}(4,\mathbb C)$, have measure zero, so $\text{GL}(4,\mathbb C)$ is a complex manifold of dimension 16 and requires that many parameters to describe.
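The claim that the 16 antisymmetrized products span all of $\mathbb C^{4\times 4}$ can be checked numerically. Below is a sketch in Python/NumPy using the Dirac representation (one common convention; any representation works). For distinct indices in increasing order, the plain product of gammas equals the antisymmetrized product up to a constant factor, so it suffices for a linear-independence test:

```python
import numpy as np
from itertools import combinations

# Pauli matrices and the Dirac-representation gamma matrices
sig = [np.array([[0, 1], [1, 0]], dtype=complex),
       np.array([[0, -1j], [1j, 0]]),
       np.array([[1, 0], [0, -1]], dtype=complex)]
g0 = np.kron(np.diag([1, -1]), np.eye(2)).astype(complex)
gammas = [g0] + [np.kron(np.array([[0, 1], [-1, 0]]), s) for s in sig]

# 1, gamma^mu, and products of 2, 3, 4 distinct gammas: 1+4+6+4+1 = 16 matrices
basis = [np.eye(4, dtype=complex)]
for r in (1, 2, 3, 4):
    for idx in combinations(range(4), r):
        m = np.eye(4, dtype=complex)
        for i in idx:
            m = m @ gammas[i]
        basis.append(m)

# Flatten each 4x4 matrix into a 16-vector and check linear independence
rank = np.linalg.matrix_rank(np.array([b.flatten() for b in basis]))
print(rank)  # 16, so the 16 matrices form a basis of M(4, C)
```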
No they do not, due to dimensional reasons, but they are generators of the algebra. That is, $I$ and the products of $\gamma^a$ (products of one, two, three and four matrices) form such a basis.
NOTE ADDED. As Emilio Pisanty correctly remarked (also making some further interesting comments) $GL(4, \mathbb C)$ is not a linear space so questions about bases of it are inappropriate. In fact I implicitly interpreted that $GL(4,\mathbb C)$ as $M(4, \mathbb C)$, the complex algebra of $4\times 4$ complex matrices which, by definition, is also a complex vector space.
• What is the difference between the notations $GL(4,\mathbb{C})$ and $M(4,\mathbb{C})$? Is the objection of basis due to the fact that former is a group and not a linear vector space?
– SRS
Nov 8, 2016 at 10:08
• Yes, $GL(n, K)$ is a group, $M(n, K)$ is an associative unital algebra, but it is not a group when forgetting the linear structure since it also includes non-invertible matrices differently from $GL(n, K)$. Nov 8, 2016 at 10:18
|
2022-09-27 15:19:22
|
|
http://www.jeanpaulvidal.fr/q345r-tank/2a166e245484f3d53b35578.html
|
### Horizontal cylindrical tank volume
Tank Volume Calculator: Calculate the filled volume of a horizontal cylinder tank by first finding the area, A, of a circular segment and multiplying it by the length, l. Area of the circular segment, the grey shaded area, is A = (1/2)r²(θ - sin θ) where θ = 2*arccos(m/r) and θ is in radians.
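The segment formula translates directly into code. A sketch in Python (the function name and argument order are my own, not from the calculator site; m = r - depth is the distance from the axis to the liquid surface):

```python
import math

def filled_volume(diameter, length, depth):
    """Filled volume of a horizontal cylindrical tank with flat heads.

    Uses the circular-segment area A = (1/2) r^2 (theta - sin theta)
    with theta = 2*acos(m/r), where m = r - depth.
    """
    r = diameter / 2
    if not 0 <= depth <= diameter:
        raise ValueError("depth must lie between 0 and the diameter")
    theta = 2 * math.acos((r - depth) / r)
    return 0.5 * r * r * (theta - math.sin(theta)) * length

# Sanity check: filled to the axis (depth = r) gives exactly half the tank
half = filled_volume(24, 30, 12)
full = math.pi * 12 ** 2 * 30
print(abs(half - full / 2) < 1e-9)  # True
```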
### (PDF) HOW TO CALCULATE THE VOLUMES OF PARTIALLY
In a horizontal cylindrical tank with flat tops having the following dimensions: L = 7.62 m, D = 2.54 m, H = 0.762 m. Total volume of the tank. Sloped Bottom Tank (arachnoid): The easy part is the cylindrical section above the slope, which has a volume of: (1) $\displaystyle v = \pi r^2 h$, where v = volume, r = tank radius, h = cylindrical section height. More difficult is the tank's sloped section, which lies between the tank's bottom and the top of the slope where the tank
### 3 Ways to Work Out Water Tank Capacity - wikiHow
Nov 19, 2019 · Find the volume of your tank. To determine the volume of a rectangular tank, multiply the length (l) times the width (w) times the height (h). The width is the horizontal distance from side to side. The length is the longest dimension, and the height is the vertical length from top to bottom. Horizontal Cylindrical Tank Volume Calculator: horizontal cylindrical tank volume side view diagram with dip chart, depth in inches, increments of 1/8", 1/4", 1/3", 1/2", 1", 2", 3", 6", 12". Highland Tank - Gauge Charts: Please note that these charts are theoretical and are intended as a guide for estimating tank/vessel volumes. Required: choose tank style: Horizontal Cylindrical, Horizontal Cylindrical (Elliptical Heads), Horizontal Rectangular, Vertical Flat Bottom, Vertical Dished Bottom, Vertical Coned Bottom
### Cylindrical Tank Calculator
Example: Inputting liquid level = 3, diameter = 24, tank length = 30, then clicking "Inches" will display the total tank volume in cubic inches and US Gallons and will also show the volume at the 3 inch level. In addition, a dipstick chart is automatically generated. The default dipstick chart is in increments of one but you can change that by clicking on one of the increment buttons. Horizontal Cylindrical Tank Volume Calculator - Metric: horizontal cylindrical tank volume end view diagram; fill rate and fill times in litres/minute; total tank fill time, current time to fill, current time to empty. All inch inputs and dimensions are actual physical finished sizes (unless otherwise noted).
### Horizontal Cylinder Formula
A horizontal cylinder has a length of 200 centimeters and a diameter of 100 centimeters. It is partially filled to a depth of 20 centimeters. What is the volume when filled to this depth? First, we'll make some calculations to be used later: Total Cylinder Volume = π · r² · length = π · 2,500 · 200 = 1,570,796 cm³. Tank Volume Calculator - Inch Calculator: For example, let's find the volume of a cylinder tank that is 36 in diameter and 72 long. radius = 36 ÷ 2 = 18; tank volume = π × 18² × 72 = 73,287 cu in. Thus, the capacity of this tank is 73,287 cubic inches. Tank Volume Calculator - Oil Tanks: The tank size calculator on this page is designed for measuring the capacity of a variety of fuel tanks. Alternatively, you can use this tank volume calculator as a water volume calculator if you need to calculate some specific water volume. The functionality of
### Tank Calculation | MrExcel Message Board
Feb 12, 2020 · I'm new to this, but here's my question. I'm trying to make a spreadsheet where I can put feet in one cell, inches in the other, and have it calculate the gallons. It's a horizontal tank with cylindrical end caps, with a total of 62,926 gallons at 10' 8 7/8". Has anyone done this? Volume of Horizontal Cylinder - MATH: How do we find the volume of a cylinder like this one, when we only know its length and radius, and how high it is filled? First we work out the area at one end (explanation below): Area = cos⁻¹((r - h)/r) · r² - (r - h) · √(2rh - h²), where r is the cylinder's radius and h is the height the cylinder is filled to. Volume of a Cylinder - Web Formulas: The volume of a cylinder is found by multiplying the area of its top or base by its height and is defined as: V = π · r² · h. Example 1: A cylindrical water storage tank has an inside base radius of 7 m and depth of 11 m. Find the capacity of the tank in kiloliters (1 kl = 1 m³). Solution: Base radius: r = 7 m; Height: h = 11 m
### Volume of a cylinder with calculator - Math Open Reference
Volume of a partially filled cylinder. One practical application is where you have a horizontal cylindrical tank partly filled with liquid. Using the formula above you can find the volume of the cylinder, which gives its maximum capacity, but you often need to know the volume of liquid in the tank. A downloadable spreadsheet simplifies the use of these: Sep 05, 2017 · both horizontal and vertical tanks with spherical heads. The calculation of the liquid in the heads is approximate. The graph shows lines for tank diameters from 4 to 10 ft, and tank lengths from 1 to 50 ft. The accuracy of the liquid volume depends on certain approximations and the precision of interpolations that may be required. API 650 ABOVEGROUND STORAGE TANKS, Part I: aboveground cylindrical tanks, steel fabricated and welded, with nominal capacities ranging from 79.5 m³ to 1590 m³ (in standard sizes). 1.1.6) API 12F: The requirements of this standard are similar to API 12D, in this case for tanks that will be manufactured at workshops, with nominal capacities ranging from
### Calculating Tank Volume
Horizontal Cylindrical Tanks: Fluid volume as a function of fluid height can be calculated for a horizontal cylindrical tank with either conical, ellipsoidal, guppy, spherical, or torispherical heads, where the fluid height, h, is measured from the tank bottom to the fluid surface; see Figs. 1 and 2. A guppy head is a conical head where the apex. Calculation of Liquid Volume in a Horizontal Cylindrical Container: This calculator calculates the volume of liquid inside a horizontal cylindrical container at any given height of liquid. The other required dimensions are the diameter and length of the tank. Content of Horizontal or Sloped Cylindrical Tank and Pipe: Volume of partly filled horizontal or sloped cylindrical tanks and pipes, an online calculator. The online calculator below can be used to calculate the volume and mass of liquid in a partly filled horizontal or sloped cylindrical tank if you know the inside diameter and the level of the liquid in the tank.
### Cylindrical Tank Calculator
Example: Inputting liquid level = 3, diameter = 24, tank length = 30, then clicking "Inches" will display the total tank volume in cubic inches and US Gallons and will also show the volume at the 3 inch level. In addition, a dipstick chart is automatically generated. The default dipstick chart is in increments of one but you can change that by clicking on one of the increment buttons. DESIGN RECOMMENDATION FOR STORAGE TANKS: There are several types of storage tanks, e.g., above-ground, flat-bottomed, cylindrical tanks for the storage of refrigerated liquefied gases, petroleum, etc., steel or. HOW TO CALCULATE THE VOLUMES OF PARTIALLY FULL TANKS: 2.1. Cylindrical Tanks. The majority of tanks used in the chemical industry are cylindrical tanks, either in horizontal or vertical configuration. Consider, for example, a cylindrical tank with length L and radius R, filling up to a height H. If you want to obtain the volume of the liquid that partially fills the tank
### Horizontal Cylindrical Tank Volume and Level Calculator
Volume calculation on a partially filled cylindrical tank. Some theory; using the theory. Use this calculator for computing the volume of partially-filled horizontal cylinder-shaped tanks. With horizontal cylinders, volume changes are not linear and in fact are rather complex, as the theory above shows. Fortunately you have this tool to do the work for you. Liquid Volume of Horizontal Tank with Dished Ends: Jan 18, 2013 · I am the Head of Department for quality control & environment. We calculated our horizontal tank volume (torispherical head); as far as we can tell it is giving spurious readings. Tank dia is 2400 mm, LOS is 3000 mm, torispherical head length from center is 472 mm. Internal lining material is natural rubber with thickness 4.3 mm (three layers). Spill Prevention Control and Countermeasure (SPCC) Plan: m is the tank volume (Tank B); l is the Tank B volume fraction for the H/D ratio (table). Calculate the displacement of each additional horizontal cylindrical tank within the same secondary containment: height of Tank C below containment wall = 24 in.
### Storage Tanks - ULS Corporate
Storage tanks are available in many shapes: vertical and horizontal cylindrical; open top and closed top; flat bottom, cone bottom, slope bottom and dish bottom. Large tanks tend to be vertical cylindrical. TANK VOLUME CALCULATOR [How to Calculate Tank Volume]: Jun 20, 2019 · Horizontal Capsule Tank. Let's now say that I have a water capsule tank (a cylindrical tank with circular ends) which measures 10 inches in diameter and 30 inches in horizontal side length. I want to know its volume in cubic inches and therefore its liquid capacity (how much water I can fit in the tank).
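The capsule example (10 in diameter, 30 in cylindrical side length) works out to a cylinder plus one full sphere, since the two hemispherical end caps combine. A quick Python sketch (function name is my own):

```python
import math

def capsule_volume(diameter, side_length):
    """Capsule tank volume: cylinder of the given side length plus the
    two hemispherical end caps, which together make one full sphere."""
    r = diameter / 2
    return math.pi * r ** 2 * side_length + (4 / 3) * math.pi * r ** 3

print(round(capsule_volume(10, 30), 1))  # 2879.8 cubic inches
```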
### Tank Volume Calculator - Horizontal Elliptical - Metric
Horizontal Cylindrical Tank Volume; Horizontal Elliptical Tank Volume; Vertical Cylindrical Tank Volume; Rectangular Tank Volume. Horizontal Elliptical Tank Volume, Dip Chart and Fill Times - Metric: hemispherical ends, ellipsoidal ends (adjustable), flat ends; side ellipse width. Tank Volume Calculator - Inch Calculator: For example, let's find the volume of a cylinder tank that is 36 in diameter and 72 long. radius = 36 ÷ 2 = 18; tank volume = π × 18² × 72 = 73,287 cu in. Thus, the capacity of this tank is 73,287 cubic inches. Tank Volume Calculator for Ten Various Tank Shapes: Apr 17, 2019 · Cylindrical tank volume formula. To calculate the total volume of a cylindrical tank, all we need to know is the cylinder diameter (or radius) and the cylinder height (which may be called length, if it's lying horizontally). Vertical cylinder tank: The total volume of a cylindrical tank may be found with the standard formula for volume - the area of the base multiplied by height.
### Tank Volume Calculator
Calculate the filled volume of a horizontal cylinder tank by first finding the area, A, of a circular segment and multiplying it by the length, l. Area of the circular segment, the grey shaded area, is A = (1/2)r²(θ - sin θ) where θ = 2*arccos(m/r) and θ is in radians. Vessel Volume & Level Calculation: Estimates volume filled in a vessel with ellipsoidal (2:1 elliptical), spherical (hemispherical), torispherical (ASME F&D, standard F&D, 80:10 F&D) and flat heads. Data orientation: horizontal.
### Volume in horizontal round tank [SOLVED]
Oct 11, 2005 · If the tank is cylindrical (i.e., flat on both sides like a drum), the formula for volume is pi*D^2*L/4, where D is the diameter and L is the length. If the tank is rounded on both sides (and if it can be assumed that the rounded sides are hemispherical), the formula for volume is pi*D^2*(3L-D)/12. Conversion factors for cubic ft to gallon(?) are: calculus - Volume of a horizontal cylinder using height of liquid: I had the same problem. Calculus is nice, but there's a much simpler way. For a given horizontal cylinder: V = pi/4 * D^2 * L * h/D, where V is the volume of the cylinder. Horizontal Hemispherical Cylinder Tank (Capsule Tank): Dec 09, 2011 · The formula used is similar to the formula for a horizontal cylindrical tank with both ends flat. There are only additions to calculate the volume of the hemispherical heads at both ends. If the length of the cylindrical part is zero, then the formula will calculate the volume inside a spherical tank or ball-shaped tank.