| url | text | date | metadata |
|---|---|---|---|
https://networkx.org/documentation/networkx-1.9.1/_modules/networkx/algorithms/centrality/katz.html
|
Warning
This documents an unmaintained version of NetworkX. Please upgrade to a maintained version and see the current NetworkX documentation.
# Source code for networkx.algorithms.centrality.katz

```python
"""
Katz centrality.
"""
#    Aric Hagberg <hagberg@lanl.gov>
#    Dan Schult <dschult@colgate.edu>
#    Pieter Swart <swart@lanl.gov>
import networkx as nx
from networkx.utils import not_implemented_for

__author__ = "\n".join(['Aric Hagberg (aric.hagberg@gmail.com)',
                        'Pieter Swart (swart@lanl.gov)',
                        'Sasha Gutfraind (ag362@cornell.edu)',
                        'Vincent Gauthier (vgauthier@luxbulb.org)'])

__all__ = ['katz_centrality',
           'katz_centrality_numpy']


@not_implemented_for('multigraph')
def katz_centrality(G, alpha=0.1, beta=1.0, max_iter=1000, tol=1.0e-6,
                    nstart=None, normalized=True, weight='weight'):
r"""Compute the Katz centrality for the nodes of the graph G.
Katz centrality is related to eigenvalue centrality and PageRank.
The Katz centrality for node i is
.. math::
x_i = \alpha \sum_{j} A_{ij} x_j + \beta,
where A is the adjacency matrix of the graph G with eigenvalues \lambda.
The parameter \beta controls the initial centrality and
.. math::
\alpha < \frac{1}{\lambda_{max}}.
Katz centrality computes the relative influence of a node within a
network by measuring the number of the immediate neighbors (first
degree nodes) and also all other nodes in the network that connect
to the node under consideration through these immediate neighbors.
Extra weight can be provided to immediate neighbors through the
parameter :math:\beta. Connections made with distant neighbors
are, however, penalized by an attenuation factor \alpha which
should be strictly less than the inverse largest eigenvalue of the
adjacency matrix in order for the Katz centrality to be computed
    Parameters
    ----------
    G : graph
        A NetworkX graph.

    alpha : float
        Attenuation factor.

    beta : scalar or dictionary, optional (default=1.0)
        Weight attributed to the immediate neighborhood. If not a scalar,
        the dictionary must have a value for every node.

    max_iter : integer, optional (default=1000)
        Maximum number of iterations in power method.

    tol : float, optional (default=1.0e-6)
        Error tolerance used to check convergence in power method iteration.

    nstart : dictionary, optional
        Starting value of Katz iteration for each node.

    normalized : bool, optional (default=True)
        If True, normalize the resulting values.

    weight : None or string, optional
        If None, all edge weights are considered equal.
        Otherwise holds the name of the edge attribute used as weight.

    Returns
    -------
    nodes : dictionary
        Dictionary of nodes with Katz centrality as the value.
    Examples
    --------
    >>> import math
    >>> G = nx.path_graph(4)
    >>> phi = (1 + math.sqrt(5)) / 2.0  # largest eigenvalue of adj matrix
    >>> centrality = nx.katz_centrality(G, 1/phi - 0.01)
    >>> for n, c in sorted(centrality.items()):
    ...    print("%d %0.2f" % (n, c))
    0 0.37
    1 0.60
    2 0.60
    3 0.37
    Notes
    -----
    This algorithm uses the power method to find the eigenvector
    corresponding to the largest eigenvalue of the adjacency matrix of G.
    The constant alpha should be strictly less than the inverse of the
    largest eigenvalue of the adjacency matrix for the algorithm to
    converge. The iteration will stop after max_iter iterations or when
    an error tolerance of number_of_nodes(G)*tol has been reached.

    When :math:`\alpha = 1/\lambda_{max}` and :math:`\beta = 1`, Katz
    centrality is the same as eigenvector centrality.

    For directed graphs this finds "left" eigenvectors which correspond
    to the in-edges in the graph. For out-edge Katz centrality,
    first reverse the graph with G.reverse().

    References
    ----------
    .. [1] M. Newman, Networks: An Introduction. Oxford University Press,
       USA, 2010, p. 720.

    See Also
    --------
    katz_centrality_numpy
    eigenvector_centrality
    eigenvector_centrality_numpy
    pagerank
    hits
    """
    from math import sqrt

    if len(G) == 0:
        return {}

    nnodes = G.number_of_nodes()
    if nstart is None:
        # choose starting vector with entries of 0
        x = dict([(n, 0) for n in G])
    else:
        x = nstart

    try:
        b = dict.fromkeys(G, float(beta))
    except (TypeError, ValueError):
        b = beta
        if set(beta) != set(G):
            raise nx.NetworkXError('beta dictionary '
                                   'must have a value for every node')

    # make up to max_iter iterations
    for i in range(max_iter):
        xlast = x
        x = dict.fromkeys(xlast, 0)
        # do the multiplication y^T = alpha * x^T A + beta
        for n in x:
            for nbr in G[n]:
                x[nbr] += xlast[n] * G[n][nbr].get(weight, 1)
        for n in x:
            x[n] = alpha * x[n] + b[n]

        # check convergence
        err = sum([abs(x[n] - xlast[n]) for n in x])
        if err < nnodes * tol:
            if normalized:
                # normalize vector
                try:
                    s = 1.0 / sqrt(sum(v ** 2 for v in x.values()))
                # this should never be zero?
                except ZeroDivisionError:
                    s = 1.0
            else:
                s = 1
            for n in x:
                x[n] *= s
            return x

    raise nx.NetworkXError('Power iteration failed to converge in '
                           '%d iterations.' % max_iter)


@not_implemented_for('multigraph')
def katz_centrality_numpy(G, alpha=0.1, beta=1.0, normalized=True,
                          weight='weight'):
r"""Compute the Katz centrality for the graph G.
Katz centrality is related to eigenvalue centrality and PageRank.
The Katz centrality for node i is
.. math::
x_i = \alpha \sum_{j} A_{ij} x_j + \beta,
where A is the adjacency matrix of the graph G with eigenvalues \lambda.
The parameter \beta controls the initial centrality and
.. math::
\alpha < \frac{1}{\lambda_{max}}.
Katz centrality computes the relative influence of a node within a
network by measuring the number of the immediate neighbors (first
degree nodes) and also all other nodes in the network that connect
to the node under consideration through these immediate neighbors.
Extra weight can be provided to immediate neighbors through the
parameter :math:\beta. Connections made with distant neighbors
are, however, penalized by an attenuation factor \alpha which
should be strictly less than the inverse largest eigenvalue of the
adjacency matrix in order for the Katz centrality to be computed
    Parameters
    ----------
    G : graph
        A NetworkX graph.

    alpha : float
        Attenuation factor.

    beta : scalar or dictionary, optional (default=1.0)
        Weight attributed to the immediate neighborhood. If not a scalar,
        the dictionary must have a value for every node.

    normalized : bool
        If True, normalize the resulting values.

    weight : None or string, optional
        If None, all edge weights are considered equal.
        Otherwise holds the name of the edge attribute used as weight.

    Returns
    -------
    nodes : dictionary
        Dictionary of nodes with Katz centrality as the value.
    Examples
    --------
    >>> import math
    >>> G = nx.path_graph(4)
    >>> phi = (1 + math.sqrt(5)) / 2.0  # largest eigenvalue of adj matrix
    >>> centrality = nx.katz_centrality_numpy(G, 1/phi)
    >>> for n, c in sorted(centrality.items()):
    ...    print("%d %0.2f" % (n, c))
    0 0.37
    1 0.60
    2 0.60
    3 0.37
    Notes
    -----
    This algorithm uses a direct linear solver to solve the above equation.
    The constant alpha should be strictly less than the inverse of the
    largest eigenvalue of the adjacency matrix for there to be a solution.
    When :math:`\alpha = 1/\lambda_{max}` and :math:`\beta = 1`, Katz
    centrality is the same as eigenvector centrality.

    For directed graphs this finds "left" eigenvectors which correspond
    to the in-edges in the graph. For out-edge Katz centrality,
    first reverse the graph with G.reverse().

    References
    ----------
    .. [1] M. Newman, Networks: An Introduction. Oxford University Press,
       USA, 2010, p. 720.

    See Also
    --------
    katz_centrality
    eigenvector_centrality_numpy
    eigenvector_centrality
    pagerank
    hits
    """
    try:
        import numpy as np
    except ImportError:
        raise ImportError('Requires NumPy: http://scipy.org/')

    if len(G) == 0:
        return {}

    try:
        nodelist = beta.keys()
        if set(nodelist) != set(G):
            raise nx.NetworkXError('beta dictionary '
                                   'must have a value for every node')
        b = np.array(list(beta.values()), dtype=float)
    except AttributeError:
        nodelist = G.nodes()
        try:
            b = np.ones((len(nodelist), 1)) * float(beta)
        except (TypeError, ValueError):
            raise nx.NetworkXError('beta must be a number')

    # adjacency matrix of G, transposed so "left" eigenvectors are computed
    # (the extracted source dropped the definition of A; this line is
    # restored from context)
    A = nx.adj_matrix(G, nodelist=nodelist, weight=weight).todense().T
    n = np.array(A).shape[0]
    centrality = np.linalg.solve(np.eye(n, n) - (alpha * A), b)
    if normalized:
        norm = np.sign(sum(centrality)) * np.linalg.norm(centrality)
    else:
        norm = 1.0
    centrality = dict(zip(nodelist, map(float, centrality / norm)))
    return centrality


# fixture for nose tests
def setup_module(module):
    from nose import SkipTest
    try:
        import numpy
        import scipy
    except ImportError:
        raise SkipTest("SciPy not available")
```
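One practical note beyond the module source: both routines need $\alpha$ strictly below $1/\lambda_{max}$. A minimal sketch of picking a safe value first (my illustration, not part of the module; `margin` is an arbitrary safety factor):

```python
import networkx as nx
import numpy as np

def safe_alpha(G, margin=0.9):
    """Return an attenuation factor strictly below 1 / lambda_max."""
    # spectral radius of the (dense) adjacency matrix
    lambda_max = max(abs(np.linalg.eigvals(nx.to_numpy_matrix(G))))
    return margin / lambda_max

G = nx.path_graph(4)
print(nx.katz_centrality(G, safe_alpha(G)))
```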
|
2023-02-06 12:18:18
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6795488595962524, "perplexity": 12020.492685738314}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500339.37/warc/CC-MAIN-20230206113934-20230206143934-00565.warc.gz"}
|
http://pyleecan.org/pyleecan.Methods.Geometry.SurfLine.check.html
|
# check (method)

check(self)

Assert that the Surface is correct (the radius > 0).

Parameters: self (Surface) – a Surface object
Returns: None
|
2019-05-27 06:01:01
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6810398101806641, "perplexity": 12788.29938736204}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232261326.78/warc/CC-MAIN-20190527045622-20190527071622-00555.warc.gz"}
|
https://projecteuclid.org/euclid.jca/1420466340
|
## Journal of Commutative Algebra
### SURVEY ARTICLE: Simplicial complexes satisfying Serre's condition: A survey with some new results
#### Abstract
The problem of finding a characterization of Cohen--Macaulay simplicial complexes has been studied intensively by many authors. There are several attempts at this problem available for some special classes of simplicial complexes satisfying some technical conditions. This paper is a survey, with some new results, of some of these developments. The new results about simplicial complexes with Serre's condition are an analogue of the known results for Cohen--Macaulay simplicial complexes.
#### Article information
Source
J. Commut. Algebra, Volume 6, Number 4 (2014), 455-483.
Dates
First available in Project Euclid: 5 January 2015
https://projecteuclid.org/euclid.jca/1420466340
Digital Object Identifier
doi:10.1216/JCA-2014-6-4-455
Mathematical Reviews number (MathSciNet)
MR3294858
Zentralblatt MATH identifier
1345.13014
#### Citation
Pournaki, M.R.; Fakhari, S.A. Seyed; Terai, N.; Yassemi, S. SURVEY ARTICLE: Simplicial complexes satisfying Serre's condition: A survey with some new results. J. Commut. Algebra 6 (2014), no. 4, 455--483. doi:10.1216/JCA-2014-6-4-455. https://projecteuclid.org/euclid.jca/1420466340
|
2019-10-14 08:28:16
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5772912502288818, "perplexity": 3478.286287943536}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986649841.6/warc/CC-MAIN-20191014074313-20191014101313-00161.warc.gz"}
|
http://math.stackexchange.com/questions/147779/deformation-of-3d-objects-along-curve-or-surface
|
# Deformation of 3D objects along curve or surface?
This is a general question regarding the problem of deforming or forcing some 3-dimensional mathematical shape to flow along some 3D curve or surface.
What kind of mathematics would one need to describe deforming any object around, say, a cylinder of radius $r$? Is there a general function for applying such a process to any shape, B-spline surface or polyhedra (not taking into account physical parameters such as elasticity)?
I know that the CAD software Rhino 3D has built-in functions "flow along curve" and "flow along surface".
I am looking for some hints on how to achieve such operations where the target 3D curve is, say, a circle, without the use of CAD software. This could be online tutorials, worked math examples, DOIs of good papers on the subject, etc.
I have tried this myself once, flowing a function around a circle in 2D. To me it seemed like it was just a change of coordinates: you express the coordinates in polar coordinates, so the x-coordinate flows around radially and the y-coordinate is the length of the arc. But I am looking for the general mapping solution.
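In code, that 2D change of coordinates might look like this (my sketch, not from the question; `R` is the target circle's radius, and the axis roles follow the convention above):

```python
import numpy as np

def flow_around_circle(points, R):
    """2D 'flow along a circle': y becomes arc length along the circle
    of radius R, x becomes a radial offset from it."""
    pts = np.asarray(points, dtype=float)
    theta = pts[:, 1] / R          # arc length -> angle
    r = R + pts[:, 0]              # radial offset from the circle
    return np.column_stack((r * np.cos(theta), r * np.sin(theta)))

# a thin vertical strip wraps into an annular arc
strip = [(x, y) for x in (-0.1, 0.1) for y in np.linspace(0.0, 3.0, 7)]
print(flow_around_circle(strip, R=2.0))
```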
The idea actually goes back to Bezier's thesis, though it's almost always attributed to Sederberg and Parry (Sederberg, Thomas and Parry, Scott. "Free Form Deformation of Solid Geometric Models." SIGGRAPH, Association of Computing Machinery. Volume 20, Number 4, 1986. 151-159). Google their original paper -- it's easy to find. Many CAD packages are essentially just implementing these old ideas. Intuitively, you embed your target object in a cube of jello, and deform the jello, and this carries your object along with it. Or, for bending deformations, you do essentially what you described -- "attach" your object to a straight line, and bend the line, which bends your object, too. As you say, it's not much more than a change of coordinates.
There's a huge amount of literature on the subject. Look up "free-form deformation". The wikipedia entry is pretty miserable, but the references it gives are quite good.
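For concreteness, a minimal sketch of the Sederberg-Parry style Bezier-lattice deformation (my illustration, not from the answer; points are assumed already mapped into the unit lattice cube):

```python
import numpy as np
from math import comb

def ffd(points, lattice):
    """Trivariate Bezier free-form deformation.

    points: (n, 3) array with coordinates already in [0, 1]^3 (lattice space).
    lattice: (l+1, m+1, n+1, 3) array of (possibly displaced) control points.
    """
    L, M, N = (s - 1 for s in lattice.shape[:3])
    def bernstein(k, n, t):
        return comb(n, k) * (1 - t) ** (n - k) * t ** k
    out = np.zeros_like(np.asarray(points, dtype=float))
    for idx, (s, t, u) in enumerate(np.asarray(points, dtype=float)):
        for i in range(L + 1):
            for j in range(M + 1):
                for k in range(N + 1):
                    w = bernstein(i, L, s) * bernstein(j, M, t) * bernstein(k, N, u)
                    out[idx] += w * lattice[i, j, k]
    return out

# identity lattice: control points at their rest positions reproduce the input;
# move lattice entries to deform the embedded object ("the cube of jello")
grid = np.stack(np.meshgrid(*[np.linspace(0, 1, 3)] * 3, indexing='ij'), axis=-1)
pts = np.array([[0.5, 0.5, 0.5], [0.25, 0.1, 0.9]])
print(ffd(pts, grid))
```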
Thanks. I took a look at the paper, and I have actually seen it before, some years ago. I find it fascinating what they were doing already in 1986 - wonder how long it took to raytrace the included graphics :) I was taken aback by the math in the paper back then and I still am. I think I will try with the simpler "change of coordinate" approach. Or just use Rhino 3D. – Ole Thomsen Buus May 22 '12 at 14:22
> I find it fascinating what they were doing already in 1986 ... well Pierre Bezier was probably doing it in 1966, but he hardly ever gets credit for it. – bubba May 22 '12 at 14:51
The CAD software with the strongest capabilities in this area is probably Think3. Better hurry, though -- I get the impression that the company is on its last legs. – bubba May 22 '12 at 14:54
Well, a mathematical description of splines were named after him... that's credit after all. The Think3 website currently looks ... sad. – Ole Thomsen Buus May 22 '12 at 18:28
This is too long for a comment, so I'm posting an answer. There doesn't seem to be a definite answer, nor do I claim that what is suggested below is the most computationally optimal way of accomplishing the task, but it is the first thing that comes to mind:
Let $O$ be a 3D object that we wish to flow along a surface $S$. Let us assume that $S$ is the image under some map $F$ of a plane $\Pi$. For simplicity, we don't care about specific orientation of $O$. Foliate $O$ by copies of $\Pi$ along the direction perpendicular to $\Pi$, say along some line segment $l\subset O$. Then $O = O\bigcap \left[\cup_{p\in l}\Pi_p\right]$ where $\Pi_p$ is the plane passing through $p$. Now define $\widetilde{F}: \bigcup_{p\in l}\Pi_p \rightarrow\mathbb{R}^3$ by $\widetilde{F}(\pi, p) = (F(\pi), p)$, where $\pi$ is a point on $\Pi_p$. Then $\widetilde{O} := \widetilde{F}(O)$ is the object $O$ "flow along" the surface $S$. Notice in particular that $\widetilde{F}(\bigcup_{p\in l}\Pi_p) = \bigcup_{p\in l}S\times\{p\}$.
Thank you very much, but it did not help. I am an engineer :) What I was looking for was actually one of those very useful encyclopedic answers. I would imagine that since CAD programs are working directly with naturally sub-divided shapes (triangular meshes or patches of NURBS surfaces), any governed deformation becomes easier somehow. – Ole Thomsen Buus May 22 '12 at 8:27
@Ole: I just read what is described in the other answer. Maybe I'm missing something, but in my answer I am saying exactly the same thing in a mathematically formal way. :) – William May 22 '12 at 18:34
I believe it is the "foliate" word that confuses me. I am sure that you were sincere in your answer, it is just a bit too mathematical for me. You also define $O = O\bigcap \left[\cup_{p\in l}\Pi_p\right]$ ... is there a mathematical shortcut there? Not sure how to even interpret that set construct. – Ole Thomsen Buus May 23 '12 at 8:02
@Ole: No no, I am not defining $O$ that way; $O$ has already been defined. I am saying that $O$ sits inside of the three-dimensional space that is sliced up into planes $\Pi_p$. Basically, you can think of a foliation as a stack of layers. This is a formal way of saying that $O$ is immersed in a cube of jello, in bubba's words, where the cube is made up of layers, which I'm calling $\Pi_p$. Anyway, it's OK, no offense was taken on my part :). – William May 23 '12 at 8:28
|
2016-02-11 18:02:09
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7119482159614563, "perplexity": 342.5832764651904}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-07/segments/1454701162094.74/warc/CC-MAIN-20160205193922-00061-ip-10-236-182-209.ec2.internal.warc.gz"}
|
https://webwork.libretexts.org/webwork2/html2xml?answersSubmitted=0&sourceFilePath=Library/Rochester/setLimitsRates2Limits/ur_lr_2_10.pg&problemSeed=1234567&courseID=anonymous&userID=anonymous&course_password=anonymous&showSummary=1&displayMode=MathJax&problemIdentifierPrefix=102&language=en&outputformat=libretexts
|
[Graphs of $f(x)$ and $g(x)$]

The graphs of $f$ and $g$ are given above. Use the graphs to evaluate each quantity below. Write DNE if the limit or value does not exist (or if it's infinity).
1. $\displaystyle \lim_{x\to 0^-} [f( g(x) ) ]$
2. $\displaystyle \lim_{x\to 1^+} [f(x) + g(x) ]$
3. $f(0)/g(0)$
4. $f( g(1) )$
|
2022-06-30 01:18:06
|
{"extraction_info": {"found_math": true, "script_math_tex": 8, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7244054675102234, "perplexity": 426.98105399862163}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103646990.40/warc/CC-MAIN-20220630001553-20220630031553-00234.warc.gz"}
|
http://www.lastfm.se/user/fwojtek/library/music/Linkin+Park/_/When+They+Come+for+Me?setlang=sv
|
# Library

## When They Come for Me

35 played tracks | Go to track page

Tracks (35)

| Track | Album | Length | Date |
|---|---|---|---|
| When They Come for Me | | 4:55 | 25 Feb 2012, 07:47 |
| When They Come for Me | | 4:55 | 27 Jan 2012, 10:25 |
| When They Come for Me | | 4:55 | 30 Jul 2011, 07:29 |
| When They Come for Me | | 4:55 | 11 Jul 2011, 06:01 |
| When They Come for Me | | 4:55 | 22 May 2011, 10:47 |
| When They Come for Me | | 4:55 | 22 May 2011, 07:08 |
| When They Come for Me | | 4:55 | 21 May 2011, 13:48 |
| When They Come for Me | | 4:55 | 1 May 2011, 09:15 |
| When They Come for Me | | 4:55 | 28 Apr 2011, 11:03 |
| When They Come for Me | | 4:55 | 13 Apr 2011, 13:06 |
| When They Come for Me | | 4:55 | 10 Apr 2011, 10:53 |
| When They Come for Me | | 4:55 | 7 Mar 2011, 17:01 |
| When They Come for Me | | 4:55 | 7 Feb 2011, 20:05 |
| When They Come for Me | | 4:55 | 7 Feb 2011, 16:02 |
| When They Come for Me | | 4:55 | 6 Feb 2011, 11:22 |
| When They Come for Me | | 4:55 | 6 Feb 2011, 09:05 |
| When They Come for Me | | 4:55 | 20 Jan 2011, 13:19 |
| When They Come for Me | | 4:55 | 14 Jan 2011, 13:01 |
| When They Come for Me | | 4:55 | 3 Jan 2011, 07:35 |
| When They Come for Me | | 4:55 | 21 Dec 2010, 12:46 |
| When They Come for Me | | 4:55 | 21 Dec 2010, 11:59 |
| When They Come for Me | | 4:55 | 21 Dec 2010, 09:31 |
| When They Come for Me | | 4:55 | 12 Dec 2010, 10:12 |
| When They Come for Me | | 4:55 | 12 Dec 2010, 09:24 |
| When They Come for Me | | 4:55 | 4 Nov 2010, 14:38 |
| When They Come for Me | | 4:55 | 18 Oct 2010, 11:22 |
| When They Come for Me | | 4:55 | 18 Oct 2010, 07:38 |
| When They Come for Me | | 4:55 | 18 Oct 2010, 06:50 |
| When They Come for Me | | 4:55 | 17 Oct 2010, 15:03 |
| When They Come for Me | | 4:55 | 17 Oct 2010, 14:15 |
| When They Come for Me | | 4:55 | 10 Oct 2010, 11:29 |
| When They Come for Me | | 4:55 | 10 Oct 2010, 10:41 |
| When They Come for Me | | 4:55 | 10 Oct 2010, 09:56 |
| When They Come for Me | | 4:55 | 10 Oct 2010, 09:04 |
| When They Come for Me | | 4:55 | 10 Oct 2010, 08:17 |
|
2014-03-11 17:24:15
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8535162806510925, "perplexity": 13782.400874399731}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1394011237144/warc/CC-MAIN-20140305092037-00032-ip-10-183-142-35.ec2.internal.warc.gz"}
|
https://blog.qarnot.com/distributed_monte_carlo_simulations/
|
< Back
# Distributed Monte Carlo simulations
by Paul - January 19, 2017 - Basics
In this article, we'll describe how to use the Qarnot computing HPC platform to perform distributed Monte Carlo simulations. As a 'hello world' Monte Carlo workload, we'll use a basic $\pi$ estimation.
## Estimating $\pi$ using Monte Carlo Simulation
Monte Carlo methods are a class of computational algorithms that rely on repeated random sampling to obtain numerical results. To estimate $\pi$, we randomly place $N$ points in a square of side $1$ and count $P$, the number of points that lie at a distance $d<1$ from the origin.
When $N$ is large, $P/N$ converges towards $\pi/4$, the area of the quarter circle. This being said, the following code will help you test this method.
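A minimal plain-Python sketch of such a test (the function body here is my reconstruction; only the name piMc comes from the post):

```python
import random

def piMc(n_samples):
    """Estimate pi by sampling points uniformly in the unit square."""
    hits = 0
    for _ in range(n_samples):
        x, y = random.random(), random.random()
        if x * x + y * y < 1.0:     # point falls inside the quarter circle
            hits += 1
    return 4.0 * hits / n_samples

for n in (1000, 100000, 1000000):
    print(n, piMc(n))
```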
You can notice the convergence toward $\pi$ when you increase the number of samples, but also the non-deterministic behaviour of the piMc function.
To make the piMc function deterministic, you can seed the random generator. Seeds can then be used to split the computation into multiple independent sampling rounds.
As you can see, the precision increases with both the number of samples per split and the number of splits.
## Let’s use Qarnot computing HPC service
First let’s save the following script in a piMc.py file.
This script simply accepts the seed and number of samples as arguments.
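A sketch of what piMc.py might look like (my reconstruction; the post only says it takes the seed and the number of samples as arguments, so the argument order is a guess):

```python
#!/usr/bin/env python
# piMc.py -- estimate pi from <n_samples> points using <seed>
import random
import sys

def piMc(seed, n_samples):
    rng = random.Random(seed)   # seeded generator: deterministic per seed
    hits = sum(1 for _ in range(n_samples)
               if rng.random() ** 2 + rng.random() ** 2 < 1.0)
    return 4.0 * hits / n_samples

if __name__ == '__main__':
    seed, n_samples = int(sys.argv[1]), int(sys.argv[2])
    print(piMc(seed, n_samples))
```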
Now, let's use the Qarnot Python SDK to launch the distributed computation. You'll need to launch the following qPiMc.py script after replacing the placeholder with the API key you obtained by registering at Qarnot computing.
By executing this script, you will:
• connect to the Qarnot API using your API token
• create a task composed of 10 frames numbered from 1 to 10
• upload piMc.py to be executed on each node
• select the right docker container to be deployed on each node (here, we use anaconda, a sledgehammer to crack a nut…)
• define the command to be launched within the containers
• launch the task and print the aggregated stdout when all nodes have exited (approx. 1 minute from cold boot to shutdown)
During the execution, you can follow the progress on the Qarnot console. Here is an example with 100,000,000 samples per seed:
## Next steps!
With Qarnot HPC services, you can go well beyond what could be described in this article. You will probably come up with many more interesting use cases. So, go ahead! Try it now, and share your feedback with the community, so that we can keep improving on this product.
[1] Monte Carlo method: wikipedia
[2] Parallel Pi Calculation using Python’s multiprocessing module: github
[3] Qarnot computing Developer & API documentation: qarnot
|
2020-01-21 21:02:14
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.37609100341796875, "perplexity": 1680.3940766166727}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250605075.24/warc/CC-MAIN-20200121192553-20200121221553-00264.warc.gz"}
|
https://physics.stackexchange.com/questions/247690/about-the-holographic-principle
|
I read this quote in a book:
"As the degrees of freedom of a particle are the product of all the degrees of freedom of its sub-particles, were a particle to have infinite subdivisions into lower-level particles, the degrees of freedom of the original particle would be infinite, violating the maximal limit of entropy density. The holographic principle thus implies that the subdivisions must stop at some level, and that the fundamental particle is a bit (1 or 0) of information."
I know, like everybody else here, that it's wrong, because fundamental particles are not bits: they are not pure information, nor are they information itself. But what exactly is wrong in the quote that makes it fallacious?
To address this specific claim (and here I apologize for the use of technical jargon, but I don't presently have the time to break it down further), I will simply state that for a given physical macrostate of a statistical system, its entropy in terms of the possible corresponding microstates can be calculated as $$S = -\sum_{i \in \text{microstates}} p_i \log[p_i]$$ where $p_i$ is the probability of the system finding itself in microstate $i$. The sum would be replaced by an integral or multiple integral when the possible microstates are continuously distributed, but schematically, this shows what we want to see: it is completely possible to specify the entropy of a system made up of sub-parts, without reference to those subparts, so long as the probability of states in which those subparts are excited is sufficiently small.

To give a completely practical example, one can compute the entropy of a gas of hydrogen molecules without considering that they are each made up of protons and electrons, so long as the temperature of the system is known to be low enough that the chance of any significant portion of the molecules being excited out of the ground state is small. Essentially, in this limit, we know that the state of a given molecule is completely specified by giving its position (and orientation, since molecular hydrogen is not spherically symmetric), without having to consider internal degrees of freedom.

It would certainly be onerous, and make all of statistical mechanics untenable, if we needed to know the ultimate, fundamental structure of matter before we could do any calculations. Rather, the concepts of classical statistical mechanics, including entropy, were formulated long before even the atomic theory of matter was accepted science!
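As a small numerical illustration of that point (my addition, not the answerer's): coarse-graining away improbable internal states barely changes the entropy.

```python
import math

def entropy(probs):
    """Gibbs/Shannon entropy S = -sum(p * log p), skipping zero-probability states."""
    return -sum(p * math.log(p) for p in probs if p > 0.0)

# two macrostates, each hiding an internal excited state of tiny probability eps
eps = 1e-12
fine = [0.5 - eps, eps, 0.5 - eps, eps]   # subparts resolved
coarse = [0.5, 0.5]                        # subparts ignored
print(entropy(fine), entropy(coarse))      # nearly identical (both ~ ln 2)
```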
|
2019-12-11 07:47:07
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8648319244384766, "perplexity": 195.26734760468756}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540530452.95/warc/CC-MAIN-20191211074417-20191211102417-00164.warc.gz"}
|
http://mathcentral.uregina.ca/QQ/database/QQ.09.11/h/boyong1.html
|
Question from boyong, a student: 6/2(1+2)=?
Boyong,
The subject line of your email was "what is the right answer?,9 or 1". The real question is "Who would write such a statement?" It looks like it is written with a deliberate attempt to be confusing. Mathematicians take great care in ensuring that what they write is clear and unambiguous. If it were written (6/2)(1+2) then you perform the operations inside the parentheses first giving $3 \times 3 = 9.$ If it were written 6/(2(1+2)) you would again perform the operations inside the parentheses first giving $6/6 = 1.$ As it stands it is not clear what is meant.
If this is a textbook problem then the text is probably using the PEDMAS or BEDMAS convention, under which division and multiplication have equal precedence and are performed from left to right, so the expression is read as (6/2)(1+2). In that case you would get an answer of 9.
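A quick way to see both readings (my illustration, using Python's left-to-right rule for `*` and `/`):

```python
# Python applies the usual left-to-right rule for * and /:
print(6 / 2 * (1 + 2))    # 9.0 -- parsed as (6 / 2) * (1 + 2)
print(6 / (2 * (1 + 2)))  # 1.0 -- explicit grouping changes the result
```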
Harley
|
2020-07-06 09:05:58
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5471984148025513, "perplexity": 627.6965933018714}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655890157.10/warc/CC-MAIN-20200706073443-20200706103443-00488.warc.gz"}
|
http://hopeschoolkw.net/2020/04/23/florida-highest-rated-online-dating-website-no-subscription-needed/
|
# Florida Highest Rated Online Dating Website No Subscription Needed
It was covered in lank, dank fur from head to toe and it dripped with a foul smelling water. Hofmeister was launched successfully in the united kingdom, targeted at young, working-class males, a closed shop where beer is concerned, and group conformity is the key. Often talks to illieus, and includes the elk in events far more than his humanoid companions. Description s = struct creates a scalar (1-by-1) structure with no fields. example s = struct(field,value) creates a structure array with the specified field and value. 2. sound: well i have been doing some a/b comparisons with the 3910 and the opus. Utami, mentari tri and pangestu, eko and sutrisno, sutrisno (2016) status mineral mangaan pada sapi potong di daerah aliran sungai jratunseluna. It infuriated me to the point that i will not buy any ac game, nor will i play games similar to it, because of that one 25 minute experience. Allowed unsecured claims will be paid in cash, a pro-rata payment of at least $100,000 from current available cash and further cash available from ksl. Influence of types and forms of food on production performances of fattetning pigs. A case study of new weaknesses that have emerged in our era, this book offers an argument for why we cannot wait for the next disaster before we apply the lessons that should be learned from katrina. Even if the hornets donot win the division, they could sneak into a home playoff series in the first round. It comes as a shock to such persons that we are merely trying to say only what we mean! Their lives are like dammed up rivers where the pressure continues to build but there is no way of releasing it. Mctear had an eagle 3 at the 487yd par-5 13th and birdies at the second, seventh, eighth and ninth. This is the second time i have had this problem, i think it occurs when i plug in the jack, possibly because i didnt turn of bluetooth first. Darwin spent his life in a society where the common belief was that all species have been created by god. In contrast, in result-oriented cultures members are familiar with constant challenges, have fewer absences and several hierarchical levels are missing. This deeper step is about florida highest rated online dating website no subscription needed pre-emption: stopping crime before it starts. The data variables that can be used for quantitation of the analyte are the peak areas, peak heights, or the ratio of peak areas (heights) of analyte to the internal standard peak. Accepting that one is an alcoholic can occur immediately or after a long process. *^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^* salma kuwa tun bayan data kashe safiya,hankalintah ya kwanta ta zama tauraruwar bash dan har gidansa take zuwa tayi kwanaki acewarta taje wata kasar agida. What do you do differently to write a historical rather than a contemporary novel? (les larmes aux yeux, au nagi qui ne comprend pas) gotta say last goodbye, huh ? Bindings are often the last piece of ski gear you choose, but remain a very important one. Both sight systems may be used in conjunction, co-witnessed or offset. This is florida highest rated online dating website no subscription needed the result of a fair bit of trial and error for me, and im really happy with how it works. Just like no two snowflakes are alike, each embryo is unique, and a gift from heaven. Tools are developed florida highest rated online dating website no subscription needed to binarize, skew correct, and de-noise the forms. She is distanced from her murder, taken away from her actions in history. 
I decided to buy mini server cause i thought it was a better deal than regular mini. It bulged slightly (editor’s note: uh…lens? what was it that bulged slighted???), as if something had been stuffed into the closed, sealed pack. After sonic’s return, snively helped florida highest rated online dating website no subscription needed the knothole freedom fighters halt the nanites destructive expansion and joined the brain trust group. Avalanche offered, in business terms, “cybercrime as a service,” supporting a broad digital underground economy. The antenatal health care records were revised every 4.5 years on average. I have no doubt they will emerge out of your grasp and stronger, no matter what happens here. If you experience any difficulty in accessing this web site, please do not hesitate to contact us at: 916-777-4800 or by mail at: po box 758, isleton, ca 95641. We talk about the team, the labor of love, what got left on the cutting floor, and various other bits and bobs. (remembering that i noticed the coincidence of dates does not strike me as being as sick as remembering an obsolete birthday.) We must all prepare to make the sacrifices that florida highest rated online dating website no subscription needed the emergency–as serious as war itself–demands. Then the topological products $x \times y$ and $y \times x$ are homeomorphic and an explicit homeomorphism is given by $f : x \times y \to y \times x$ defined by $f(x, y) = (y, x)$. The coast guard on sunday night located debris florida highest rated online dating website no subscription needed believed associated with a bell 407 that went missing over the weekend off the louisiana coast. The following summarized pointers will come in handy when growing strawberry from scratch. February 13 the u.s. army begins to deploy anti-aircraft cannons, to protect nuclear stations and military targets. We know what works and what does not when it comes to nfl travel packages, college football travel packages, mlb vacation packages, nhl ticket packages, and all other sports travel packages. Cannabis wars, one man lay dead in his car with another sprawled wounded in the passenger seat. You did not choose where to be born, you did not raise yourself, or provide the genes that made you who you are. Control messages the websocket protocol defines three types of control messages: close, ping and pong. David litchfield specializes in searching for new threats to database systems and florida highest rated online dating website no subscription needed web applications. Race route along the route the race starts at 5:30 am at the dataran merdeka. Afterwards, change bit price and pattern charge to regulate florida highest rated online dating website no subscription needed sound quality. Later arrested, the attacker who turned out to be an executive – and he was summarily fired… A large living room plus a separate dining room and an eat florida highest rated online dating website no subscription needed in kitchen! What we will look at before we start is the net commands for the console in nt. Islam also recognizes “epicenes” belonging florida highest rated online dating website no subscription needed to one sex physically but having characteristics of the other, or of neither; e.g. effeminate men or manly women. Company in 2014 at pebble beach and amelia island in 2015.
1969 lamborghini miura p400 s brian henniker / gooding & But the idea that jesus comes back and is on land is absolutely false, for christ is not coming on the ground but is coming in the air, which is just what we read in verse 27. The x5 bracket does a wonderful job at reducing vibration caused by running the motor.
|
2020-12-01 05:10:43
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.18773587048053741, "perplexity": 8852.992215706767}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141652107.52/warc/CC-MAIN-20201201043603-20201201073603-00167.warc.gz"}
|
https://repo.pw.edu.pl/info/bachelor/WUT746acee9411143f0b4b5364a86d6411f/
|
# Knowledge base: Warsaw University of Technology
## Stability analysis of thin walled C-profile beam
### Michał Maciej Bielski
#### Abstract
The following thesis is dedicated to the problem of global stability loss caused by local buckling, and its influence on the structure. The strength of a beam structure depends strongly on a proper stress distribution, which is disturbed when buckling occurs. The thesis is divided into three main parts, which concern, respectively, the analysis of a beam of unspecified profile (as an introduction to the problem), the main analysis of a thin-walled C-profile beam, and a numerical analysis.

In the first part, the stability issue is discussed in a comprehensive manner, using the general example of a beam to show the influence of geometric and load imperfections on stability loss. Separate calculations are carried out with two different approaches: the method of static equilibrium equations, and the energetic method of determining the minimum of total potential energy. Comparing them allows conclusions to be drawn about their effectiveness and applicability to the problem discussed.

The next analysis concerns the behaviour of a thin-walled C-profile beam loaded with a bending moment, which reflects the usual load on the beam as a structural element. Based on careful observation of a series of cardboard models with different dimension ratios (length of the bottom wall compared to the height of the side walls), several possible types of local deformation were identified, and their mathematical models were created. This approach is called the geometric method of structure analysis. The internal energy for each local deflection state was then specified, and the corresponding curves of total bending moment versus deflection angle were plotted for several different parameters. From these plots, the calculated results and the principle of minimum total potential energy, the most probable type of deformation was determined, together with the specific bifurcation point marking the transition between the pre- and post-buckling deformation states. Knowing the critical load at which buckling occurs is crucial in the structural design process. The thesis traces the subsequent stages of the calculations and the approximations used, starting from a general approach in which the deflection is represented with sharp edges, and continuing with arcs rounded with a constant or linearly changing radius, all based on experimental observation of the cardboard models mentioned earlier.

To verify some of the conclusions, a numerical analysis using the finite element method and the Ansys Workbench 14.0 software was conducted. The subject of these calculations was a beam model with the same dimensions as in the analytical part. The first results concern the linear buckling of a beam with perfect geometry; the study then focused on non-perfect geometry with different sizes of imperfection. Again, the main purpose was to determine the possible buckling modes. Additionally, the numerical analysis considers the stress distribution along the beam: enlarging the imperfection clearly shows how the stress distribution changes from the typical general bending state to a state modified by local deflection and side-wall buckling of the C-profile beam. As with every engineering analysis, the purpose is to create a model as close as possible to the real structure in terms of its response to the applied physical conditions. The study therefore ends with suggestions for further improvements of the proposed model, which may make the results more accurate.
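For orientation only (this example is mine, not from the thesis, which treats local buckling of a bent C-profile rather than simple columns): the flavour of a closed-form critical-load calculation can be seen in the classical Euler column formula.

```python
import math

def euler_critical_load(E, I, L, K=1.0):
    """Classical Euler buckling load P_cr = pi^2 * E * I / (K * L)^2.

    E: Young's modulus [Pa], I: second moment of area [m^4],
    L: column length [m], K: effective-length factor (1.0 = pinned-pinned).
    """
    return math.pi ** 2 * E * I / (K * L) ** 2

# illustrative steel column: E = 210 GPa, I = 8e-7 m^4, L = 2.5 m
print(euler_critical_load(210e9, 8e-7, 2.5))  # about 2.65e5 N
```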
Diploma type
Engineer's / Bachelor of Science (Engineer's thesis)
Author
Michał Maciej Bielski, Faculty of Power and Aeronautical Engineering (FPAE)
Title in Polish
Analiza stateczności belki cienkościennej o profilu C-owym
Supervisor
Krzysztof Kozakiewicz, The Institute of Aeronautics and Applied Mechanics (FPAE/IAAM), Faculty of Power and Aeronautical Engineering (FPAE)
Certifying unit
Faculty of Power and Aeronautical Engineering (FPAE)
Affiliation unit
The Institute of Aeronautics and Applied Mechanics (FPAE/IAAM)
Study subject / specialization
Lotnictwo i Kosmonautyka (Aeronautics and Astronautics)
Language
(en) English
Status
Finished
Defense Date
25-01-2016
Issue date (year)
2016
Pages
115
Internal identifier
PD; MEL-3418
Reviewers
Tomasz Zagrajek, The Institute of Aeronautics and Applied Mechanics (FPAE/IAAM), Faculty of Power and Aeronautical Engineering (FPAE); Krzysztof Kozakiewicz, The Institute of Aeronautics and Applied Mechanics (FPAE/IAAM), Faculty of Power and Aeronautical Engineering (FPAE)
Keywords in Polish
stateczność, wyboczenie, profil cienkościenny, praca belki zginanej, punkt krytyczny pracy ustroju, metoda geometryczna
Keywords in English
stability, buckling, thin-walled profile, bending load, critical point of load, bifurcation, geometric method
File
MBielski_eng_dipl_thesis.pdf
Local fields
Identyfikator pracy APD (APD thesis identifier): 8725
Uniform Resource Identifier
https://repo.pw.edu.pl/info/bachelor/WUT746acee9411143f0b4b5364a86d6411f/
URN
urn:pw-repo:WUT746acee9411143f0b4b5364a86d6411f
|
2021-06-18 17:59:48
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5092535018920898, "perplexity": 3014.656574256964}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487640324.35/warc/CC-MAIN-20210618165643-20210618195643-00286.warc.gz"}
|
http://marktheballot.blogspot.com/2015/08/betting-market-update.html
|
## Saturday, August 15, 2015
### Betting market update
Extracting the odds from bookmaker webpages is a dark art. The bookies make it hard to extract this data, so for each bookie I need a bespoke web-scraper. While these take only a dozen lines of Python code each, every scraper is different. Furthermore, they are fragile and need to be reconfigured often as the bookmakers modify their web delivery strategies. Anyway, I have now written six little scrapers, which I run every morning.
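For illustration only, a generic skeleton of such a scraper (hypothetical page structure and CSS selector, not any real bookmaker's; every site needs its own selectors):

```python
import requests
from bs4 import BeautifulSoup

def scrape_odds(url, css_selector):
    """Fetch a bookmaker page and pull the two head-to-head prices from it."""
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, 'html.parser')
    # assume the two outcome prices sit in elements matched by the selector
    prices = [float(el.get_text(strip=True)) for el in soup.select(css_selector)]
    return prices[0], prices[1]   # (coalition_odds, labor_odds)
```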
This morning's odds follow. Links to these bookmakers can be found in the right-hand column.
| Date | House | Coalition Odds ($) | Labor Odds ($) | Coalition Win Probability (%) |
|---|---|---|---|---|
| 2015-08-15 | CrownBet | 1.65 | 2.25 | 57.692308 |
| 2015-08-15 | Luxbet | 1.66 | 2.20 | 56.994819 |
| 2015-08-15 | TABtouch | 1.76 | 2.04 | 53.684211 |
| 2015-08-15 | Sportsbet | 1.65 | 2.25 | 57.692308 |
| 2015-08-15 | William Hill | 1.77 | 2.05 | 53.664921 |
To calculate the Coalition's win probabilities from these odds, I use the following formula (where $$C$$ is the Coalition's odds and $$L$$ is Labor's odds). Of course, this formula simply normalises the probabilities for the bookmaker's over-round.
$p=\frac{\frac{1}{C}}{\frac{1}{C}+\frac{1}{L}}$
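In code, the normalisation looks like this (my sketch; the odds are hard-coded from the table above rather than scraped):

```python
def coalition_win_probability(coalition_odds, labor_odds):
    """Normalise two-outcome bookmaker odds to a win probability,
    removing the bookmaker's over-round."""
    c, l = 1.0 / coalition_odds, 1.0 / labor_odds
    return c / (c + l)

odds = {'CrownBet': (1.65, 2.25), 'Luxbet': (1.66, 2.20),
        'TABtouch': (1.76, 2.04), 'Sportsbet': (1.65, 2.25),
        'William Hill': (1.77, 2.05)}
for house, (c_odds, l_odds) in sorted(odds.items()):
    print('%-12s %.6f' % (house, coalition_win_probability(c_odds, l_odds)))
```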
Looking back over the past week, all of the movement has been in Labor's favour (noting that I only ran scrapers for four bookies on 9, 10 and 11 August). On Monday, Sportsbet had reduced its Coalition win probability from 60.2 to 57.7 per cent. On Tuesday, TABtouch reduced from 57.7 to 53.7 and William Hill also reduced from 57.7 to 53.7 per cent. This morning, Luxbet was reduced from 57.7 to 57.0 per cent.
In charts, we can see the probability of a Coalition win at the next election has fallen from somewhere between 57 and 60 per cent to somewhere between 54 and 57 per cent.
|
2017-06-29 12:28:36
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4405094385147095, "perplexity": 3840.8238087261084}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-26/segments/1498128323970.81/warc/CC-MAIN-20170629121355-20170629141355-00360.warc.gz"}
|
http://mathoverflow.net/questions/57649/applications-of-koszuls-formula-other-than-the-fundamental-lemma-of-riemannian?sort=oldest
|
# Applications of Koszul's formula other than the fundamental lemma of Riemannian geometry
I'm wondering what else one can do with Koszul's formula
$$2\langle\nabla_XA,B\rangle = X\langle A,B\rangle-B\langle X,A\rangle + A\langle X,B\rangle - \langle A,[X,B]\rangle + \langle[B,X],A\rangle - \langle B,[A,X]\rangle$$
beyond proving existence and uniqueness of the Levi-Civita connection. I haven't yet seen anybody using it for anything else, which would be quite curious.
Here's a pretty and simple example. I don't know if it is known...
Let $\nabla^{LC}$ be the Levi-Civita connection and $\nabla$ be some other metric connection with torsion $T$. Then
$$2\langle\nabla_XA,B\rangle - 2\langle\nabla_X^{LC}A,B\rangle= \langle T(A,X),B\rangle -\langle T(A,B),X\rangle - \langle T(B,X),A\rangle$$
An application of this would be to compute the LC-connection for the metric $\langle K\cdot,K\cdot\rangle$ in terms of the endomorphism $K$ and the LC-connection for $\langle \cdot,\cdot\rangle$. The computation starts with the connection $K^{-1}\nabla^{LC} K$, which is metric for $\langle K\cdot,K\cdot\rangle$...
I can't guarantee correct letters and signs :-)
-
Riemannian geometry IS the study of the Levi-Civita connection and the Koszul formula is an explicit expression for the connection. So ultimately anything you do in Riemannian geometry is bound to be using it at some level. If, on the other hand, you mean what other formulae you can get by formally manipulating the Koszul formula, then there are plenty, but I would not necessarily call them "applications". Killing's equation comes to mind, as well as the O'Neill formulae for submersions, the basic formulae for homogeneous geometry,... Pick any book on differential geometry: e.g., do Carmo. – José Figueroa-O'Farrill Mar 7 '11 at 11:18
Indeed. Except when the glory breaks down and coordinates get introduced (or used implicitly). (If not, wait till play with Laplacians begins and frames are introduced...) BTW, one application of my application would be the coordinate representation of the LC connection. PLEASE NOTE: I'm far away from the library and made it there only a few times this century - and it was open on a few of these occasions. But I've seen and possibly remember quite a few of these books, e.g. none on Ricci flow without ungeometrically resorting to coordinate stuff when things get most basic – Martin Gisser Mar 7 '11 at 15:09
So let me get this straight: you make a distinction between the Koszul formula and the formula for the Christoffel symbols? I don't think there is any conceptual difference between the two. I suppose it's a question of how much pain are you willing to endure in order to keep everything coordinate-free. Analysts as a general rule seem to have a low pain threshold :) – José Figueroa-O'Farrill Mar 7 '11 at 17:40
Anything proved using local co-ordinates in Riemannian geometry can be proved without using local co-ordinates and vice versa. It's usually a matter of habit and/or taste. Some of us find it advantageous to know how to prove almost anything at least three different ways (using co-ordinates, an arbitrary frame of vector fields, or using differential forms and Cartan's moving frame approach). – Deane Yang Mar 7 '11 at 18:22
Quite a borderline comment but I find it funny. There is (at least) one thing I don't know how to prove without coordinates. When a metric evolves by the Ricci flow, the evolution of the curvature operator is given by: $\partial_t R=\Delta R + R^2+R^\#$, where $R^\#$ can be defined in a coordinate-free way using the Lie algebra structure of $\Lambda^2 TM$. A substantial part of the proof can be done without coordinates (as in Topping's lectures) but I never saw anyone go from the expression you find in Topping's book to the Lie-algebraic expression without coordinates. – Thomas Richard Mar 10 '11 at 7:51
Koszul's formula simply expresses the Levi-Civita connection explicitly in terms of the Riemannian metric. It is quite useful any time you want to eliminate the connection from a formula and write the formula in terms of the metric only. José cited some nice examples. I haven't checked, but I bet the book by Cheeger and Ebin discusses these examples and maybe more quite explicitly.
But it is no different from writing a formula in local co-ordinates and replacing all appearances of Christoffel symbols by their formula in terms of partial derivatives of the Riemannian metric. This is often an equally useful thing to do when, for example, you want to apply PDE techniques or theorems that are stated in terms of co-ordinates to a problem in Riemannian geometry.
-
Deane, 1) my example even involves 2 connections... 2a) I find it disadvantageous to have to learn at least 3 calculi for the whole picture. 2b) I found myself developing my own calculus to sanely get into Hilbert space via the Stokes theorem (for doing geometric PDE geometrically, like Bochner theorems). Indeed José has an excellent answer. There's surely much more. Alas I had to suffer through O'Neill's formulae etc. in Christoffelian (by a famous German differential ring theorist, no less). – Martin Gisser Mar 8 '11 at 3:05
Martin, I also prefer to understand everything from a single approach. And to some extent I have succeeded in developing a single way to do everything I need to do in Riemannian geometry. But it appears that each of us ends up developing our own preferred approach and even notation. But even then my approach does not accomplish absolutely everything I want, just most of it. Depending on the context, I still sometimes find either the differential form or local co-ordinate approach the easiest way to do what I want. So I find it quite handy to be able to use them, too. – Deane Yang Mar 8 '11 at 3:22
P.S.: Said German differential ring theorist's lectures had at least a complete proof of the fundamental lemma of Riem. geom., completely proving tensoriality on the r.h.s. P.P.S.: My answer-comment to Jose is hidden (anyhow syntactically mangled) - Killing fields indeed serving as an example of Koszul vs. not-Koszul. P.P.P.S.: Something learned today. Thanks, sirs. – Martin Gisser Mar 8 '11 at 3:24
What's the "fundamental lemma of Riemannian geometry"? – Deane Yang Mar 8 '11 at 4:09
@Deane: Connections with torsion are popular in physics. They also arise naturally in pure geometry. If one considers Hermitian non-Kahler manifolds, then the Chern connection (compatible with the Hermitian metric and the complex structure) has torsion. In fact, its torsion vanishes if and only if the manifold is indeed Kahler. There is an analogous "Obata" connection in quaternionic geometry. When one has an additional structure on a Riemannian manifold (a special tensor, usually a form), then it is more natural to study connections which make that form parallel. Usually these have torsion. – Spiro Karigiannis Mar 26 '11 at 16:08
I've meanwhile found out that my innocent example ...
1.) ... amounts to a rediscovery of Schouten's contorsion tensor. (See e.g. this note).
This concept is important in torsion gravity (which isn't as exotic as it may sound - except for some charlatanry derived from it...) My example equation seems to express the equivalence principle (cf. link eq. (255)). See also Rodrigues & Oliveira: The many faces of Maxwell, Dirac and Einstein equations.
As I've already hinted here, contorsion stuff (or what I would term the contorsion operator) can also occur in computations with torsion-free connections.
2.) ... leads to a generalization of the fundamental "lemma", a.k.a. Levi-Civita theorem: For any given vector-valued 2-form $T$ there exists a unique metric connection with torsion $T$.
-
|
2015-05-23 16:58:39
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8279337882995605, "perplexity": 522.731283505138}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-22/segments/1432207927824.26/warc/CC-MAIN-20150521113207-00018-ip-10-180-206-219.ec2.internal.warc.gz"}
|
https://zbmath.org/?q=an:0791.60025
|
# zbMATH — the first resource for mathematics
Harmonizability, $$V$$-boundedness, and stationary dilation of Banach-valued processes. (English) Zbl 0791.60025
Dudley, Richard M. (ed.) et al., Probability in Banach spaces, 8: Proceedings of the eighth international conference, held at Bowdoin College in summer of 1991. Boston, MA: Birkhäuser. Prog. Probab. 30, 189-205 (1992).
For Banach spaces $$X$$, $$Y$$ we consider the problem of factoring a family $$\{Z(\Delta):\Delta \in \Sigma\} \subseteq L(X,Y)$$, indexed by a $$\sigma$$-algebra, through a single Hilbert space $$H$$. We obtain a condition for such a factorization through a reproducing kernel Hilbert space involving an indexed family of positive operators taking values in the space of conjugate linear functionals on $$X$$. Under our condition, we get that the family $$\{Z(\Delta):\Delta \in \Sigma\}$$ has a spectral dilation in the sense that $$Z(\Delta)=SE(\Delta)R$$ where $$R:X \to H$$, $$S:H \to Y$$ are continuous linear operators and $$\{E(\Delta):\Delta \in \Sigma\}$$ is a countably additive spectral measure in $$H$$. As a consequence of this result we obtain necessary and sufficient conditions for the existence of an orthogonal dilation of a Banach space-valued measure $$\{Z(\Delta):\Delta \in \Sigma\}$$ of finite semivariation; $$Z(\Delta)=S\xi(\Delta)$$, $$\xi$$ an orthogonally scattered measure taking values in a Hilbert space $$H$$ and $$S:H \to Y$$ is continuous and linear taking values in the Banach space $$Y$$. As an application we obtain a representation of harmonizable stable processes in terms of stationary second order processes. Additionally, we propose definitions for harmonizable and $$V$$-bounded Banach space-valued processes indexed by a separable locally compact Abelian group. It has been brought to the attention of the author that our representation in Corollary 4.3 had been obtained by C. Houdré whose work was not referenced in this paper. The author welcomes the opportunity to correct this oversight.
For the entire collection see [Zbl 0773.00018].
##### MSC:
60G10 Stationary stochastic processes
60B10 Convergence of probability measures
|
2021-10-20 07:23:03
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8621941804885864, "perplexity": 297.2480479965792}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585302.91/warc/CC-MAIN-20211020055136-20211020085136-00693.warc.gz"}
|
https://math.stackexchange.com/questions/1451926/what-is-a-nice-way-to-show-that-developable-surfaces-must-have-a-principal-curva
|
# what is a nice way to show that developable surfaces must have a principal curvature=0?
So a developable surface can be parametrized as
$x(s, t)=\alpha(s)+t \beta(s)$
I can see that $\beta(s)$ is the direction of the principal curvature plane with k=0, but why is it the minimum or maximum curvature plane cutting through that point? Is $\alpha(s)$ a plane curve on the other principal curvature plane?
• What is definition of developable surface that you use? Wikipedia defines it precisely as surface with zero curvature – Blazej Sep 26 '15 at 8:18
• I'm using this definition: x(s,t)=α(s)+tβ(s) – UXkQEZ7 Sep 26 '15 at 8:36
The parametrization you have given, $x(s, t)=\alpha(s)+t \beta(s)$, is a ruled surface with generator $\beta(s)$.
It is developable if the scalar triple product $(T, \beta(s), \beta'(s)) = 0$, and a non-developable (skew) ruled surface if
$(T, \beta(s), \beta'(s)) \ne 0.$
$K = k_1\cdot k_2 = 0$ is a necessary and sufficient condition for a developable surface. When the parametric lines are lines of principal curvature and $k_1 = 0$ (so that $K = 0$), that parameter direction traces the straight rulings, i.e., the edge of regression of the developable surface.
|
2019-07-18 11:03:05
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9320616126060486, "perplexity": 347.00156516850814}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195525627.38/warc/CC-MAIN-20190718104512-20190718130512-00024.warc.gz"}
|
http://mathoverflow.net/feeds/question/38864
|
# Visualizing Orthogonal Polynomials

**Question (jd long, 2010-09-15):** Recently I was introduced to the concept of [orthogonal polynomials](http://en.wikipedia.org/wiki/Orthogonal_polynomials) through the poly() function in the R programming language. These were introduced to me in the context of polynomial transformations in order to do a linear regression. Bear in mind that I'm an economist and, as should be obvious, am not all that smart (choice of profession has an odd signaling characteristic). I'm really trying to wrap my head around what orthogonal polynomials are and how, if possible, to visualize them. Is there any way to visualize orthogonal polynomials vs. simple polynomials?

**Answer (Piero D'Ancona):** Start [here](http://dlmf.nist.gov/18.4).

**Answer (Tom Smith):** I think of the space of polynomials on $\mathbb{R}$ as a set of graphs arranged round the real line like the pages of a book round its axis. Polynomials which have almost the same graph are close to each other; orthogonal polynomials are then those which fall at right angles in this picture, and their linear combinations generate the space just as the basis vectors of $\mathbb{R}^n$ generate $\mathbb{R}^n$.

**Answer (Helge):** I am not sure of your math background, so I am trying to keep it simple, without oversimplifying some ideas. First off, polynomials are nice for various reasons, e.g. a polynomial of degree $n$ has at most $n$ zeros. However, there are still many polynomials and it makes sense to choose VERY nice ones: orthogonal polynomials.

To choose orthogonal polynomials, one has a problem at hand, which comes with a way to measure functions $f: \Bbb R \to \Bbb R$ by an expression of the form $$\mathcal{E}(f) = \int_{-\infty}^{\infty} f(x)^2 w(x)\, dx,$$ where $w(x) > 0$ is a weight that satisfies $\int w(x) dx = 1$. One should think of $\mathcal{E}$ as an energy.

Now the orthogonal polynomial of degree $n$ can be defined as the polynomial $P_n(x) = x^n + a_{n-1} x^{n-1} + \dots + a_1 x + a_0$, where $a_{n-1}, \dots, a_0$ are real numbers, that minimizes $\mathcal{E}(P_n)$. It is this minimization property that is responsible for some of the power of orthogonal polynomials.

At this point let me also say that it is through the weight $w$ that your problem enters the definition of orthogonal polynomials. One also has orthonormal polynomials, which satisfy $\mathcal{E}(p_n) = 1$; these are given by $p_n = \frac{1}{\sqrt{\mathcal{E}(P_n)}} P_n$.

**Answer (J. M., 2010-09-16):** Helge presented the *continuous* case in his answer; for the purposes of data fitting in statistics, one usually deals with *discrete* orthogonal polynomials. Associated with a set of abscissas $x_i$, $i=1\dots n$ is the discrete inner product

$$\langle f,g\rangle=\sum_{i=1}^n w(x_i)f(x_i)g(x_i)$$

where $w(x)$ is a weight function, a function that associates a "weight" or "importance" to each abscissa. A frequently occurring case is one where the $x_i$ are equispaced, $x_{i+1}-x_i=h$ where $h$ is a constant, and the weight function is $w(x)=1$; for this special case, special polynomials called Gram polynomials are used as the basis set for polynomial fitting. (I won't be dealing with the nonequispaced case in the rest of this answer, but I'll add a few words on it if asked.)

(The original answer included a figure comparing the regular monomials $x^k$, on the left, with the Gram polynomials, on the right.) The "bad" thing about using the monomials in data fitting is that for $k$ high enough, $x^k$ and $x^{k+1}$ are nigh-indistinguishable, and this spells trouble for data-fitting methods since the matrix associated with the linear system describing the fit is dangerously close to becoming singular. Each member of the Gram family, by contrast, does not resemble its predecessor or successor, and thus the underlying matrix used for fitting is a lot less likely to be close to singularity. This is the reason why discrete orthogonal polynomials are of interest in data fitting.
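For readers who want to experiment, here is a small sketch of mine (not from the thread) that builds a discrete orthonormal basis on an equispaced grid by QR-factorizing the Vandermonde matrix, and exhibits the conditioning advantage J. M. describes:

```python
import numpy as np

n, degree = 50, 8
x = np.linspace(-1.0, 1.0, n)

# Monomial (Vandermonde) design matrix: columns are x^0, x^1, ..., x^degree.
V = np.vander(x, degree + 1, increasing=True)

# Q's columns are orthonormal in the discrete inner product with w(x) = 1,
# and column k is a degree-k polynomial evaluated on the grid (a Gram-type basis).
Q, R = np.linalg.qr(V)

print("condition number, monomial basis:  ", np.linalg.cond(V))  # large
print("condition number, orthogonal basis:", np.linalg.cond(Q))  # ~1
```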
|
2013-05-22 23:03:47
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8149612545967102, "perplexity": 578.6396464704977}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368702454815/warc/CC-MAIN-20130516110734-00023-ip-10-60-113-184.ec2.internal.warc.gz"}
|
http://www.math-only-math.com/proof-by-mathematical-induction.html
|
# Proof by Mathematical Induction
To use the principle of proof by mathematical induction we need to follow the techniques and steps exactly as shown.
We note that a proof by mathematical induction consists of three steps.
• Step 1. (Basis) Show that P(n₀) is true.
• Step 2. (Inductive hypothesis). Write the inductive hypothesis: Let k be an integer such that k ≥ n₀ and P(k) be true.
• Step 3. (Inductive step). Show that P(k + 1) is true.
In mathematical induction we can prove a statement about infinitely many natural numbers without having to prove it for each number separately.
We use only two steps, the base step and the inductive step, to prove the statement in all cases. It is practically impossible to verify a mathematical statement, formula, or equation for every natural number, but we can generalize the statement by proving it with the induction method: if the statement is true for P(k), it will be true for P(k + 1); so if it is true for P(1), it can be proved for P(1 + 1) = P(2), and similarly for P(3), P(4), and so on through all the natural numbers.
The first principle of proof by mathematical induction is that if the base step and the inductive step are proved, then P(n) is true for all natural numbers. In the inductive step we assume that P(k) is true; this assumption is called the induction hypothesis. Using this assumption, we prove that P(k + 1) is true. For the base case we may take P(0) or P(1).
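To see concretely what these two steps buy us, here is a small Python check of mine (not part of the original text) for one of the statements proved below, namely that 7^n - 3^n is divisible by 4. A finite check like this is evidence, not a proof: it can only ever cover finitely many cases, while the induction argument covers them all.

```python
# Spot-check P(n): 4 divides 7**n - 3**n, for n = 1..20.
# This is illustration only; the induction proof handles every n.
def p(n):
    return (7**n - 3**n) % 4 == 0

assert all(p(n) for n in range(1, 21))
print("P(n) holds for n = 1..20")
```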
Proof by mathematical induction uses deductive reasoning, not inductive reasoning. An example of deductive reasoning: all trees have leaves; a palm is a tree; therefore a palm must have leaves.
When a proof by mathematical induction establishes a statement for every element of a countable inductive set, it is called weak induction. This is normally used for the natural numbers. It is the simplest form of mathematical induction, where the base step and the inductive step are used to prove the statement.
In reverse induction the assumption proves a backward step from the inductive step: if P(k + 1) is assumed to be true as the induction hypothesis, we prove that P(k) is true. These steps are the reverse of weak induction, and the method is also applicable to countable sets. From this it can be proved that the statement is true for all numbers ≤ n, so the proof ends at 0 or 1, which is the base step of weak induction.
Strong induction is similar to weak induction, but in the inductive step we assume that all of P(1), P(2), P(3), ..., P(k) are true in order to prove that P(k + 1) is true. When weak induction fails to prove a statement in all cases, we use strong induction. Any statement provable by weak induction is, of course, also provable by strong induction.
Questions with solutions to Proof by Mathematical Induction
### 1. Let a and b be arbitrary real numbers. Using the principle of mathematical induction, prove that (ab)^n = a^n b^n for all n ∈ N.
Solution:
Let the given statement be P(n). Then,
P(n): (ab)^n = a^n b^n.
When n = 1, LHS = (ab)^1 = ab and RHS = a^1 b^1 = ab
Therefore LHS = RHS.
Thus, the given statement is true for n = 1, i.e., P(1) is true.
Let P(k) be true. Then,
P(k): (ab)^k = a^k b^k. ... (i)
Now, (ab)^(k + 1) = (ab)^k (ab)
= (a^k b^k)(ab) [using (i)]
= (a^k ∙ a)(b^k ∙ b) [by commutativity and associativity of multiplication on real numbers]
= a^(k + 1) ∙ b^(k + 1).
Therefore P(k + 1): (ab)^(k + 1) = a^(k + 1) ∙ b^(k + 1)
⇒ P(k + 1) is true, whenever P(k) is true.
Thus, P(1) is true and P(k + 1) is true, whenever P(k) is true.
Hence, by the principle of mathematical induction, P(n) is true for all n ∈ N.
More examples to Proof by Mathematical Induction
### 2. Using the principle of mathematical induction, prove that (x^n - y^n) is divisible by (x - y) for all n ∈ N.
Solution:
Let the given statement be P(n). Then,
P(n): (x^n - y^n) is divisible by (x - y).
When n = 1, the given statement becomes: (x^1 - y^1) is divisible by (x - y), which is clearly true.
Therefore P(1) is true.
Let P(k) be true. Then,
P(k): (x^k - y^k) is divisible by (x - y). ... (i)
Now, x^(k + 1) - y^(k + 1) = x^(k + 1) - x^k y + x^k y - y^(k + 1)
= x^k(x - y) + y(x^k - y^k), which is divisible by (x - y) [using (i)]
⇒ P(k + 1): x^(k + 1) - y^(k + 1) is divisible by (x - y)
⇒ P(k + 1) is true, whenever P(k) is true.
Thus, P(1) is true and P(k + 1) is true, whenever P(k) is true.
Hence, by the principle of mathematical induction, P(n) is true for all n ∈ N.
3. Using the principle of mathematical induction, prove that
a + ar + ar^2 + ....... + ar^(n - 1) = {a(r^n - 1)}/(r - 1) for r > 1 and all n ∈ N.
Solution:
Let the given statement be P(n). Then,
P(n): a + ar + ar^2 + ….... + ar^(n - 1) = {a(r^n - 1)}/(r - 1).
When n = 1, LHS = a and RHS = {a(r^1 - 1)}/(r - 1) = a
Therefore LHS = RHS.
Thus, P(1) is true.
Let P(k) be true. Then,
P(k): a + ar + ar^2 + …… + ar^(k - 1) = {a(r^k - 1)}/(r - 1) ... (i)
Now, (a + ar + ar^2 + …... + ar^(k - 1)) + ar^k = {a(r^k - 1)}/(r - 1) + ar^k [using (i)]
= {a(r^k - 1) + ar^k(r - 1)}/(r - 1)
= {a(r^(k + 1) - 1)}/(r - 1).
Therefore,
P(k + 1): a + ar + ar^2 + …….. + ar^(k - 1) + ar^k = {a(r^(k + 1) - 1)}/(r - 1)
⇒ P(k + 1)is true, whenever P(k) is true.
Thus, P(1) is true and P(k + 1) is true, whenever P(k) is true.
Hence, by the principle of mathematical induction, P(n) is true for all n ∈ N.
4. Using the principle of mathematical induction, prove that (10^(2n - 1) + 1) is divisible by 11 for all n ∈ N.
Solution:
Let P(n): (10^(2n - 1) + 1) is divisible by 11.
For n = 1, the given expression becomes {10^(2 × 1 - 1) + 1} = 11, which is divisible by 11.
So, the given statement is true for n = 1, i.e., P(1) is true.
Let P(k) be true. Then,
P(k): (10^(2k - 1) + 1) is divisible by 11
⇒ (10^(2k - 1) + 1) = 11m for some natural number m. ... (i)
Now, {10^(2(k + 1) - 1) + 1} = (10^(2k + 1) + 1) = {10^2 ∙ 10^(2k - 1) + 1}
= 100 × {10^(2k - 1) + 1} - 99
= (100 × 11m) - 99 [using (i)]
= 11 × (100m - 9), which is divisible by 11
⇒ P(k + 1): {10^(2(k + 1) - 1) + 1} is divisible by 11
⇒ P (k + 1) is true, whenever P(k) is true.
Thus, P (1) is true and P(k + 1) is true , whenever P(k) is true.
Hence, by the principle of mathematical induction, P(n) is true for all n ∈ N.
5. Using the principle of mathematical induction, prove that (7^n – 3^n) is divisible by 4 for all n ∈ N.
Solution:
Let P(n): (7^n – 3^n) is divisible by 4.
For n = 1, the given expression becomes (7^1 - 3^1) = 4, which is divisible by 4.
So, the given statement is true for n = 1, i.e., P(1) is true.
Let P(k) be true. Then,
P(k): (7^k - 3^k) is divisible by 4.
⇒ (7^k - 3^k) = 4m for some natural number m. ... (i)
Now, {7^(k + 1) - 3^(k + 1)} = 7^(k + 1) – 7 ∙ 3^k + 7 ∙ 3^k - 3^(k + 1)
(on subtracting and adding 7 ∙ 3^k)
= 7(7^k - 3^k) + 3^k (7 - 3)
= (7 × 4m) + 4 ∙ 3^k [using (i)]
= 4(7m + 3^k), which is clearly divisible by 4.
∴ P(k + 1): {7^(k + 1) - 3^(k + 1)} is divisible by 4.
⇒ P(k + 1) is true, whenever P(k) is true.
Thus, P(1) is true and P(k + 1) is true, whenever P(k) is true.
Hence, by the principle of mathematical induction, P(n) is true for all n ∈ N.
Solved examples to Proof by Mathematical Induction
6. Using the principle of mathematical induction, prove that
(2 ∙ 7^n + 3 ∙ 5^n - 5) is divisible by 24 for all n ∈ N.
Solution:
Let P(n): (2 ∙ 7^n + 3 ∙ 5^n - 5) is divisible by 24.
For n = 1, the given expression becomes (2 ∙ 7^1 + 3 ∙ 5^1 - 5) = 24, which is clearly divisible by 24.
So, the given statement is true for n = 1, i.e., P(1) is true.
Let P(k) be true. Then,
P(k): (2 ∙ 7^k + 3 ∙ 5^k - 5) is divisible by 24.
⇒ (2 ∙ 7^k + 3 ∙ 5^k - 5) = 24m for some m ∈ N ... (i)
Now, (2 ∙ 7^(k + 1) + 3 ∙ 5^(k + 1) - 5)
= (2 ∙ 7^k ∙ 7 + 3 ∙ 5^k ∙ 5 - 5)
= 7(2 ∙ 7^k + 3 ∙ 5^k - 5) - 6 ∙ 5^k + 30
= (7 × 24m) - 6(5^k - 5) [using (i)]
= (24 × 7m) - 6 × 4p, where (5^k - 5) = 5(5^(k - 1) - 1) = 4p
[since (5^(k - 1) - 1) is divisible by (5 - 1)]
= 24 × (7m - p)
= 24r, where r = (7m - p) ∈ N
⇒ P(k + 1): (2 ∙ 7^(k + 1) + 3 ∙ 5^(k + 1) - 5) is divisible by 24.
⇒ P(k + 1) is true, whenever P(k) is true.
Thus, P(1) is true and P(k + 1) is true, whenever P(k) is true.
Hence, by the principle of mathematical induction, P(n) is true for all n ∈ N.
|
2018-05-25 05:10:11
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6566082239151001, "perplexity": 1404.7882572474987}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-22/segments/1526794867041.69/warc/CC-MAIN-20180525043910-20180525063910-00576.warc.gz"}
|
http://mathematica.stackexchange.com/questions/6366/how-to-take-part-and-then-sum-over-of-specific-level-of-an-array
|
# How to take part (and then sum over) of specific level of an array?
Specifically, I want to take some elements of the lowest level of an array, and then sum over these elements while holding the higher level. Sorry for my vague expression, maybe an example would make it clear: An array of 3 levels:
try = {{0,{a,b,c}},{{d,e,f,g},{h,i,j}}}
I want to obtain {{0,{a,b}},{{d,e},{h,i}}}, and then sum over to obtain {{0,a+b},{d+e,h+i}}.
Following illustration of Mathematica, i tried:
try[[All,All,1;;2]]
but it could not work. As to summation,
Total[try,{-1}]
could sum over all elements of the last level, however I have no idea how to sum over part of the last level.
Solution?
-
Here's a simple solution that'll act on the deepest level as desired in the question.
Apply[#1 + #2 &, #, {Depth@# -2}]&@ try
(* {{0, a + b}, {d + e, h + i}} *)
Depth gives you the maximum number of indices required to index any level in the expression, plus 1. Since the indexing starts at 0 (the head), the deepest indexable level is Depth[expr]-2.
-
Does your Depth method do something that Apply[# + #2 &, try, {-2}] does not? – Mr.Wizard Jun 3 '12 at 9:50
Yes. {-2} only gets the level that is 2 deep. Using Depth as in my answer gets the deepest level, which is what the OP wanted. In this case {-2} happens to also be the correct depth, but for arbitrary expressions, it won't work. Compare the two with try = {{0, {a, b, c, {p, q, r}}}, {{d, e, f, g, {s, t, u}}, {h, i, j}}} – rm -rf Jun 3 '12 at 9:52
Ah yes I see. I hadn't considered that case. +1 – Mr.Wizard Jun 3 '12 at 9:58
Impressive! Two more questions. 1) How do I generally sum over the first n (which could be very large) elements? 2) Could Map similarly be used, as the responder rguler did? Actually I am not clear about the differences between Map and Apply. – Mathieu Jun 3 '12 at 14:03
@Mathieu Yes, you can do Apply[Total@Take[{##}, n] &, #, {Depth[#] - 2}] &@ try to sum over the first n. The Map version will be the same as Mr.Wizard's answer, except that instead of giving an explicit level, use {Depth@try-2} like in mine. – rm -rf Jun 3 '12 at 16:06
Map[Total@Take[#, 2] &, try, {-2}]
(* ==> {{0, a + b}, {d + e, h + i}} *)
Note: from docs on Map (section More Information):
A negative level -n consists of all parts of expr with depth n.
-
Perhaps this?
f[x_List] := Take[x, 2]
f[x_] := x
Total[Map[f, try, {2}], {-1}]
{{0, a + b}, {d + e, h + i}}
Alternatively:
Replace[try, {a_, b_, ___} :> {a, b}, {2}] ~Total~ {-1}
Or perhaps more generally:
Replace[try, x_List :> x[[;; 2]], {2}] ~Total~ {-1}
More directly:
Apply[# + #2 &, try, {-2}]
-
|
2014-04-18 03:44:53
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.37800997495651245, "perplexity": 4441.486078888711}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00319-ip-10-147-4-33.ec2.internal.warc.gz"}
|
https://math.stackexchange.com/questions/2692786/how-do-i-prove-if-a-set-of-operations-is-complete
|
# How do I prove if a set of operations is complete?
For example $\{\land, \lnot\}$ apparently forms a functionally complete set as it can form any logical expression.
But I don't really know what this means or how you show it. Do we just show that some set is complete and then in the future if we have a new set, show that we can replicate the known operations with the new ones?
Do you have to find these strategically or is there a systematic way?
Do we just show that some set is complete and then in the future if we have a new set, show that we can replicate the known operations with the new ones?
Yeah, that's how we typically show that $\{ \land , \neg \}$ is complete: we already know that $\{ \land, \lor, \neg \}$ is functionally complete, and we just point out that $\lor$ can be rewritten in terms of $\land$ and $\neg$:
$$P \lor Q \Leftrightarrow \neg (\neg P \land \neg Q)$$
Also, here is a post explaining why $\{ \land , \lor, \neg \}$ is complete
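As a quick sanity check (an illustration of mine, not part of the original answer), the equivalence can be verified by brute force over all truth assignments in Python:

```python
from itertools import product

# Verify P ∨ Q  ≡  ¬(¬P ∧ ¬Q) by exhaustive truth-table enumeration.
for P, Q in product([False, True], repeat=2):
    assert (P or Q) == (not ((not P) and (not Q)))
print("P ∨ Q and ¬(¬P ∧ ¬Q) agree on all assignments")
```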
• How do you figure out how to write $\lor$ in terms of $\land$ and $\lnot$? – user539262 Mar 15 '18 at 20:50
• @user539262 The DeMorgan's laws are well-known: $$\neg (P \lor Q) \Leftrightarrow \neg P \land \neg Q$$ ... so negating both sides that also means that $$P \lor Q \Leftrightarrow \neg(\neg P \land \neg Q)$$. You can use truth tables to check that all of this is indeed the case – Bram28 Mar 15 '18 at 20:56
• Is showing a relation from one operator to the next more of an art than a science? – user539262 Mar 15 '18 at 21:16
• @user539262 Interesting question ... the binary operators have been studied extensively, so for those it is all pretty well known which can be reduced to which, and which not. And also note that something like DeMorgan's laws is completely intuitive: if it cannot be true that either $P$ or $Q$ are true then obviously both are false, and vice versa. So, grasping the intuitive meaning of the operators certainly helps the search for these kinds of reductions. But if you gave me a couple of arbitrary ternary operators and asked me if I could make reductions, I would probably have a hard time, yes. – Bram28 Mar 15 '18 at 21:39
Here, once again - as the other page was marked as duplicate - a systematic way of showing completeness without relying on an already given complete set of operators.
I use the following notations: $$a \lor b = a + b,\, a \land b = a\cdot b = ab \mbox{ and } \neg a = \bar a, \, T = 1, F = 0$$
It is to be shown that $\forall n \in \mathbb{N}$ any truth function $f:\, \{0,1\}^n \rightarrow \{0,1\}$ can be written using only $\cdot$ and $\bar{}$.
• $n = 1$: So, $f:\, \{0,1\} \rightarrow \{0,1\}$. $f(a) = a$ or $f(a) = \bar a$ or $f(a) = a\bar a$ or $f(a) = \overline{a \bar a}$ are doing the job.
• $n \rightarrow n+1$: So, $f:\, \{0,1\}^{n+1} \rightarrow \{0,1\}$. Split $f(a_1,\ldots ,a_n, a_{n+1})$ into $$g_0(a_1,\ldots ,a_n) = f(a_1,\ldots ,a_n, 0) \mbox{ and } g_1(a_1,\ldots ,a_n) = f(a_1,\ldots ,a_n, 1)$$ According to induction hypothesis $g_0$ and $g_1$ can be expressed using only $\cdot$ and $\bar{}$. Now we have $$f(a_1,\ldots ,a_n, a_{n+1}) = \bar a_{n+1}g_0(a_1,\ldots ,a_n) + a_{n+1}g_1(a_1,\ldots ,a_n)= \overline{\overline{ \bar a_{n+1}g_0(a_1,\ldots ,a_n) + a_{n+1}g_1(a_1,\ldots ,a_n)}}= \overline{\overline{\bar a_{n+1}g_0(a_1,\ldots ,a_n)}\cdot \overline{a_{n+1}g_1(a_1,\ldots ,a_n)}}$$ So, we have written $f$ as an expression using only $\cdot$ and $\bar{}$.
Done.
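The induction above is constructive, so one can mechanize it. Below is a short Python sketch of mine (the names and string representation are my own choices, not part of the answer) that applies exactly this splitting on the last variable to emit a formula using only AND (`&`) and NOT (`~`), then checks it by brute force. The final `& 1` works because bit 0 of Python's bitwise `~` and `&` on integers tracks boolean NOT and AND exactly.

```python
from itertools import product

def formula(f, names):
    """Express truth function f (taking len(names) booleans) using only & and ~."""
    if len(names) == 1:
        a = names[0]
        return {(False, False): "(%s & ~%s)" % (a, a),   # constant 0
                (False, True): a,                        # identity
                (True, False): "~%s" % a,                # negation
                (True, True): "~(%s & ~%s)" % (a, a)}[(f(False), f(True))]
    *rest, last = names
    g0 = lambda *xs: f(*xs, False)   # f with the last variable fixed to 0
    g1 = lambda *xs: f(*xs, True)    # f with the last variable fixed to 1
    # f = NOT( NOT(~last & g0) & NOT(last & g1) ): the De Morgan'd case split
    return "~(~(~%s & %s) & ~(%s & %s))" % (
        last, formula(g0, rest), last, formula(g1, rest))

xor = lambda p, q: p != q
s = formula(xor, ["p", "q"])
print(s)  # a formula for XOR using only & and ~
for p, q in product([0, 1], repeat=2):
    assert (eval(s, {"p": p, "q": q}) & 1) == xor(p, q)
```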
|
2019-06-18 05:17:01
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7353783249855042, "perplexity": 157.9842972203329}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627998607.18/warc/CC-MAIN-20190618043259-20190618065259-00068.warc.gz"}
|
https://asvabtestpro.com/quiz/frac27-times-2823-627227e8db5cd741a80b4cec/
|
$$\frac{{{2^7} \times {2^8}}}{{{2^3}}} = ?$$
$$2^{12}$$
Explanation
The rules for multiplying and dividing terms with exponents are:
$$a^{m} \times a^{n}=a^{m+n} ; \qquad \frac{a^{m}}{a^{n}}=a^{m-n}$$
$$\text { So } \frac{2^{7} \times 2^{8}}{2^{3}}=\frac{2^{15}}{2^{3}}=2^{15-3}=2^{12}$$
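As a quick sanity check (mine, not part of the quiz page), integer arithmetic in Python confirms the result:

```python
# (2^7 * 2^8) / 2^3 = 2^(7+8-3) = 2^12
assert (2**7 * 2**8) // 2**3 == 2**12 == 4096
```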
|
2022-12-05 18:39:10
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9590619802474976, "perplexity": 3152.3908656393205}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711042.33/warc/CC-MAIN-20221205164659-20221205194659-00453.warc.gz"}
|
https://www.gradesaver.com/textbooks/math/geometry/CLONE-68e52840-b25a-488c-a775-8f1d0bdf0669/chapter-4-section-4-4-the-trapezoid-exercises-page-200/2
|
## Elementary Geometry for College Students (6th Edition)
Published by Brooks Cole
# Chapter 4 - Section 4.4 - The Trapezoid - Exercises - Page 200: 2
#### Answer
m$\angle$C = 117$^{\circ}$, m$\angle$A = 62$^{\circ}$
#### Work Step by Step
The angles along each leg of the trapezoid (between the two parallel sides) are supplementary, so:
m$\angle$B + m$\angle$C = 180
63 + m$\angle$C = 180
m$\angle$C = 180 - 63 = 117$^{\circ}$
m$\angle$A + m$\angle$D = 180
m$\angle$A + 118 = 180
m$\angle$A = 180 - 118 = 62$^{\circ}$
|
2022-05-17 09:03:25
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4642047882080078, "perplexity": 12049.167209738445}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662517018.29/warc/CC-MAIN-20220517063528-20220517093528-00012.warc.gz"}
|
https://www.physicsforums.com/threads/about-four-momentum.367030/
|
1. Jan 4, 2010
snoopies622
In section 2.4 of Schutz's, "A First Course in General Relativity", he states that the conservation of four-momentum law
"has the status of an extra postulate...However, like the two fundamental postulates of SR, this one is amply verified by experiment."
Is this correct? I was under the impression that all of SR can be derived from "the two fundamental postulates" and that no more were necessary.
2. Jan 4, 2010
Fredrik
Staff Emeritus
The proper way to define SR is by writing down a list of axioms about how to interpret the mathematics of Minkowski space as predictions of results of experiments. (See e.g. this post for one of my usual rants about Einstein's postulates). This defines a framework in which you should be able to define theories of matter and interactions*, and those theories should tell you that momentum is conserved.
*) Right now I'm really confused about this, because Meopemuk mentioned the Currie-Jordan-Sudarshan "no interaction theorem" in another thread the other day. It appears to be saying that interactions aren't possible in SR, but surely there can't be anything wrong with just writing down a relativistic version of Newton's second law? (I don't know the answer. Like I said, I'm very confused right now).
3. Jan 4, 2010
atyy
I know I said I wasn't going to continue that discussion in this forum, but I can't help myself. I think CJS is usually not applicable because the only "fundamental" relativistic theories we have are quantum field theories. Meopemuk's claim is interesting because it's a quantum mechanics theory, and he cites Weinberg, whom I've always thought was wrong on this, but of course, I'm a biologist and Weinberg isn't ...
4. Jan 4, 2010
bcrowell
Staff Emeritus
The problem with Schutz's approach is that then you have to have dozens of "postulates." After all, conservation of energy and momentum aren't the only laws of physics that we want to preserve in SR. I think a better way of looking at it is that in addition to the postulates of SR, we assume the correspondence principle. Then I think the vast majority of the nonrelativistic physics has a unique generalization to SR.
5. Jan 4, 2010
Staff: Mentor
I agree with this. I would obtain the conservation of four-momentum by the requirement that the relativistic theory recover the non-relativistic expressions for conservation of energy and momentum in the appropriate limit. I don't know if I would elevate that general approach to the level of a postulate or not, but the approach could certainly be used for more than just this one particular law without requiring a long laundry list of postulates.
6. Jan 4, 2010
atyy
I think it depends.
Fundamentally, all you need are the 2 postulates (as Fredrik says); then you have no force law, just a bunch of dynamics for fields that are Poincare invariant, e.g. take Maxwell's equations w/o the Lorentz force law - that's a special relativistic theory, and there is no particle 4-momentum conservation (there are no particles anyway, at least none with definite position and momentum).
However, if we wish to have a force law for particles with definite position and momentum then we need to have a relativistic generalization of Newton's second law of motion, and this requires additional postulates like Schutz, bcrowell, and DaleSpam say. The various possible forms of the additional postulates are discussed in section 6.2 of http://books.google.com.sg/books?id=fUj_LW51GfQC&dq=rindler+relativity&source=gbs_navlinks_s, see especially footnote 1 of p109.
While we are listing additional postulates, I think Fredrik's favourite is the clock one?
Last edited: Jan 4, 2010
7. Jan 4, 2010
snoopies622
Goodness - so much to think about!
My first question is for Fredrik: I want to be clear - are you saying that
1. Einstein's two postulates imply Minkowski spacetime
and
2. Minkowski spacetime implies conservation of four-momentum
?
edit: I just read what you posted in the "Difference between Theory and Law" thread and now I'm pretty sure your answer to my question would be "no", but I am still having trouble following your reasoning. Is there really ambiguity in the concept of an inertial reference frame?
Last edited: Jan 4, 2010
8. Jan 5, 2010
atyy
(i) Principle of Relativity ---> Galilean or Lorentzian relativity
(ii) Finite maximum velocity for transmission of signals ---> Lorentzian relativity
(iii) Write a Lagrangian consistent with Lorentzian relativity, by Noether's theorem some function of the Lagrangian will be conserved, and by definition that quantity is the energy/momentum of the field. This can be done for any Lagrangian consistent with Lorentzian relativity, so although you will use additional axioms to pick a particular Lagrangian, the existence of a conserved quantity that can be defined to be the energy/momentum of the field is not dependent on what the additional axioms are. So the first two postulates are enough if you have no particles with definite position and momentum.
Take a look at Eq 1.39 - Eq 1.43 in David Tong's http://www.damtp.cam.ac.uk/user/tong/qft.html
Last edited: Jan 5, 2010
9. Jan 5, 2010
snoopies622
The full quotation from Schutz is,
"This law has the status of an extra postulate, since it is only one of many whose nonrelativistic limit is correct. However, like the two fundamental postulates of SR, this one is amply verified by experiment."
Apparently in this case the generalization is not unique and the conservation law therefore not obtainable only by assuming the correspondence principle along with Einstein's original postulates. I am beginning to agree that some other postulate is necessary. Or to put it another way, if the conservation law can be derived using only Einstein's postulates, I've never seen it done and I certainly don't know how to do it.
Last edited: Jan 5, 2010
10. Jan 5, 2010
bcrowell
Staff Emeritus
Interesting. Does he suggest what the others are?
11. Jan 5, 2010
snoopies622
I'm afraid not. Maybe he's just guessing.
12. Jan 5, 2010
bcrowell
Staff Emeritus
He's probably discovered a truly marvelous proof of this, which his book is too narrow to contain.
13. Jan 5, 2010
meopemuk
In the Wigner-Dirac-Weinberg approach to (quantum) relativistic physics (see S. Weinberg, "The quantum theory of fields" vol. 1) there is no need for any additional postulate to guarantee the conservation of the energy-momentum 4-vector.
The generator of time translations is the Hamiltonian H (the operator of energy). Therefore, energy conservation follows from the trivial fact that the Hamiltonian commutes with itself, [H,H] = 0 (in classical theory, the commutator should be replaced with the Poisson bracket).
The conservation of the total momentum P (which is the generator of space translations) follows from the structure of the Poincare Lie algebra, in which H and P commute [H,P]=0. The same is true for the conservation of the total angular momentum J, because [H,J]=0.
The time evolution of any dynamical variable is determined by its commutator with the full Hamiltonian. From this you can obtain all 3 Newton's laws of dynamics.
Eugene.
Last edited: Jan 6, 2010
14. Jan 6, 2010
Fredrik
Staff Emeritus
There's no ambiguity in Newtonian mechanics, but we can't use that definition, because then the theory would be Newtonian mechanics. So all we know at the start of the "derivation" is what an inertial frame isn't. I suppose the name "inertial frame" suggests that our new inertial frames have something in common with Galilean inertial frames, because otherwise they wouldn't deserve the name "inertial frame", but a suggestion isn't a fact.
I really don't understand why so many people are asking me this. I'm not trying to be rude here, but I have to say that I think it should be immediately obvious to anyone who has seen a few mathematical proofs that Einstein's postulates aren't mathematical axioms.
15. Jan 6, 2010
meopemuk
Fredrik, I am still puzzled. Why do you keep talking about difficulties with the definition of inertial frames? Imagine a person holding three mutually perpendicular sticks and wearing a watch on his wrist. Why is it not a good model of an inertial reference frame?
Note that the difference between relativistic and non-relativistic frames is not in their internal setup, but in the structure of the group of transformations between them. The exact group is the Poincare one. When only low-speed frames are considered, then the Galilei group is a decent approximation.
Eugene.
16. Jan 6, 2010
snoopies622
To me an inertial frame of reference is one in which objects don't accelerate unless they are pushed or pulled. Are you saying that this is not the case in the inertial reference frames of special relativity?
17. Jan 6, 2010
Fredrik
Staff Emeritus
A mathematical proof uses a mathematical statement as the starting point. What you just said isn't a mathematical statement. To "derive" Minkowski space from a non-mathematical statement is like trying to define real numbers from what a biologist can tell you about frogs.
That's not to say that we don't need statements like the one you made. We absolutely do. They can be used as loosely stated guidelines that might help us guess what mathematical structure to use in a new theory. And when we have found a mathematical structure that seems to do the job, they can be used to define a theory of physics. The mathematical structure alone can't define a theory. The theory is defined by a set of axioms that tells us how to interpret the mathematics as predictions about results of experiments. Your statement can be used to define at least two different theories of physics. When the mathematical structure is Minkowski spacetime, we get the special relativistic theory of motion in inertial frames (which in this theory are identified with members of the Poincaré group), and when the mathematical structure is Galilean spacetime, we get the non-relativistic theory of motion in inertial frames (which in this theory are identified with members of the Galilei group).
When we're done with the definition of SR, and done choosing stuff in the mathematics that can represent the things you're talking about (like "acceleration" or "objects"), then we can show that the theory does say what you just said. But even a word like "push" is ill-defined until we have chosen what mathematics to use. The claim I'm objecting to is that you can prove that we have to use Minkowski space from statements using concepts that we intend to define properly once we have figured out what mathematical structure to use.
18. Jan 6, 2010
meopemuk
I don't think that such mathematical structures as "Minkowski spacetime" or "Galilean spacetime" are needed in physics. In my opinion, they are not just useless, they are misleading. In order to build a complete physical theory you need to know just a few things: (i) a rather vague definition of observer (like the person holding three sticks), (ii) the principle of relativity (the equivalence of different observers), (iii) the postulate that the group of transformations between observers is the Poincare group, and (iv) postulates of quantum mechanics. Then simple logic and math leads you to quantum relativistic physics as described in works of Wigner, Dirac, and Weinberg.
Approximately, you can replace the Poincare group by the Galilei group. Then your physics will be non-relativistic. This is a valid approximation, and it does not depend on the definition of inertial observers at all.
Eugene.
19. Jan 6, 2010
Fredrik
Staff Emeritus
This is mostly true, but it doesn't have a lot to do with what we've been talking about so far. The claim I'm defending is that you can't derive anything from Einstein's postulates, and that in particular, you can't derive the Poincaré transformation. Now you're suddenly talking about relativistic quantum mechanics, and you have also replaced one of the postulates with something that includes a mathematical version of both postulates and a mathematical definition of an inertial frame. (Inertial frames can simply be identified with the Poincaré transformations. You introduced the Poincaré group in a way that guarantees that there's an invariant speed. And you will eventually have to specify that a "law of physics" is "the same" in all inertial frames if it's a relationship between tensor components). You are clearly not deriving a result from Einstein's postulates. In fact, you're doing precisely the sort of thing I've been saying that you have to do if you're going to do something that resembles a derivation.
Einstein's postulates aren't well-defined, but if we're really nice, we can interpret them as representing a set of well-defined statements, one for each definition of "inertial frame", each definition of "law of physics", each definition of what it means for a law of physics to "be the same" in two inertial frames, and each definition of "light" or "the speed of light". And the closest thing to a derivation that we can do, is to find out which of the well-defined statements are consistent with all the other assumptions we'd like to make.
I strongly disagree with the claim that Minkowski spacetime is useless and misleading. Without it, we'd be stuck with the old fashioned definition of a tensor, which just makes me angry each time I see it. It's just so dumb and awkward compared to the modern definition, that this fact alone is enough to justify the use of Minkowski spacetime. There are lots of other reasons to use it, e.g. the fact that it really helps to understand it when you start studying GR.
If your dislike for Minkowski space comes from a belief that spacetime should be a result of some kind of interactions rather than just a passive stage on which the interactions occur, then I can understand it to some extent, but even if this idea is correct, it's not a reason not to use Minkowski space in classical SR.
Another problem with your approach is that it doesn't include any coordinate systems that aren't inertial frames. I'm wondering if you want to eliminate those specifically because you want to consider particles as more fundamental than fields? (The Unruh effect can be interpreted as saying that the number of particles in a region of space depends on your acceleration, and that makes it hard to think of particles as fundamental). This seems futile, because even if we can eliminate non-inertial frames from SR, we still aren't getting rid of them from GR.
20. Jan 6, 2010
meopemuk
No. Most physical observables (in an interacting system) do not have tensor transformation laws. The CJS theorem is just one piece of evidence for this statement.
This argument against particles is rather weak. First, there is nothing shocking about different observers seeing different numbers of particles. Second, nobody has ever observed the Unruh effect. Perhaps it does not exist after all.
Eugene.
https://www.scienceopen.com/document?vid=bca591b4-dacb-4d95-9f83-8e71138898a3
# Replica Conditional Sequential Monte Carlo
Preprint
### Abstract
We propose a Markov chain Monte Carlo (MCMC) scheme to perform state inference in non-linear non-Gaussian state-space models. Current state-of-the-art methods to address this problem rely on particle MCMC techniques and their variants, such as the iterated conditional Sequential Monte Carlo (cSMC) scheme, which uses a Sequential Monte Carlo (SMC) type proposal within MCMC. A deficiency of standard SMC proposals is that they only use observations up to time $t$ to propose states at time $t$, even when an entire observation sequence is available. More sophisticated SMC schemes based on lookahead techniques could be used, but they can be difficult to put into practice. We propose here replica cSMC, where we build SMC proposals for one replica using information from the entire observation sequence by conditioning on the states of the other replicas. This approach is easily parallelizable and we demonstrate its excellent empirical performance when compared to the standard iterated cSMC scheme at fixed computational complexity.
### Most cited references
• Particle Markov chain Monte Carlo methods (2010)
• Smoothing algorithms for state–space models (2010)
• The Iterated Auxiliary Particle Filter (2017)
### Author and article information
Date: 13 May 2019
Article ID: 1905.05255
https://ggplot2-book.org/scales-guides.html
15 Scales and guides
The scales toolbox in Chapters 10 to 12 provides extensive guidance for how to work with scales, focusing on solving common data visualisation problems. The practical goals of the toolbox mean that topics are introduced when they are most relevant: for example, scale transformations are discussed in relation to continuous position scales (Section 10.1.7) because that is the most common situation in which you might want to transform a scale. However, because ggplot2 aims to provide a grammar of graphics, there is nothing preventing you from transforming other kinds of scales (see Section 15.6). This chapter aims to illustrate these concepts: We’ll discuss the theory underpinning scales and guides, and give examples showing how concepts that we’ve discussed specifically for position or colour scales also apply elsewhere.
15.1 Theory of scales and guides
Formally, each scale is a function from a region in data space (the domain of the scale) to a region in aesthetic space (the range of the scale). The axis or legend is the inverse function, known as the guide: it allows you to convert visual properties back to data. You might find it surprising that axes and legends are the same type of thing, but while they look very different they have the same purpose: to allow you to read observations from the plot and map them back to their original values. The commonalities between the two are illustrated below:
Argument name Axis Legend
name Label Title
breaks Ticks & grid line Key
labels Tick label Key label
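To make the parallel concrete, here is a minimal sketch (not from the book): the same three arguments are set on a position scale and on a colour scale, and surface as an axis and a legend respectively.
library(ggplot2)

# name/breaks/labels drive both guide types:
# the x scale renders them as an axis, the colour scale as a legend.
ggplot(mpg, aes(displ, hwy, colour = hwy)) +
  geom_point() +
  scale_x_continuous(
    name = "Displacement (L)",
    breaks = c(2, 4, 6),
    labels = c("two", "four", "six")
  ) +
  scale_colour_continuous(
    name = "Highway mpg",
    breaks = c(20, 30, 40),
    labels = c("low", "mid", "high")
  )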
However, legends are more complicated than axes, and consequently there are a number of topics that are specific to legends:
1. A legend can display multiple aesthetics (e.g. colour and shape), from multiple layers (Section 15.7.1), and the symbol displayed in a legend varies based on the geom used in the layer (Section 15.8)
2. Axes always appear in the same place. Legends can appear in different places, so you need some global way of positioning them. (Section 11.7)
3. Legends have more details that can be tweaked: should they be displayed vertically or horizontally? How many columns? How big should the keys be? This is discussed in (Section 15.5)
15.1.1 Scale specification
An important property of ggplot2 is the principle that every aesthetic in your plot is associated with exactly one scale. For instance, when you write this
ggplot(mpg, aes(displ, hwy)) +
geom_point(aes(colour = class))
ggplot2 adds a default scale for each aesthetic used in the plot:
ggplot(mpg, aes(displ, hwy)) +
geom_point(aes(colour = class)) +
scale_x_continuous() +
scale_y_continuous() +
scale_colour_discrete()
The choice of default scale depends on the aesthetic and the variable type. In this example hwy is a continuous variable mapped to the y aesthetic so the default scale is scale_y_continuous(); similarly class is discrete so when mapped to the colour aesthetic the default scale becomes scale_colour_discrete(). Specifying these defaults would be tedious so ggplot2 does it for you. But if you want to override the defaults, you’ll need to add the scale yourself, like this:
ggplot(mpg, aes(displ, hwy)) +
geom_point(aes(colour = class)) +
scale_x_continuous(name = "A really awesome x axis label") +
scale_y_continuous(name = "An amazingly great y axis label")
In practice you would typically use labs() for this, discussed in Section 8.1, but it is conceptually helpful to understand that axis labels and legend titles are both examples of scale names: see Section 15.2.
The use of + to “add” scales to a plot is a little misleading because if you supply two scales for the same aesthetic, the last scale takes precedence. In other words, when you + a scale, you’re not actually adding it to the plot, but overriding the existing scale. This means that the following two specifications are equivalent:
ggplot(mpg, aes(displ, hwy)) +
geom_point() +
scale_x_continuous(name = "Label 1") +
scale_x_continuous(name = "Label 2")
#> Scale for 'x' is already present. Adding another scale for 'x', which will
#> replace the existing scale.
ggplot(mpg, aes(displ, hwy)) +
geom_point() +
scale_x_continuous(name = "Label 2")
Note the message when you add multiple scales for the same aesthetic, which makes it harder to accidentally overwrite an existing scale. If you see this in your own code, you should make sure that you’re only adding one scale to each aesthetic.
If you’re making small tweaks to the scales, you might continue to use the default scales, supplying a few extra arguments. If you want to make more radical changes you will override the default scales with alternatives:
ggplot(mpg, aes(displ, hwy)) +
geom_point(aes(colour = class)) +
scale_x_sqrt() +
scale_colour_brewer()
Here scale_x_sqrt() changes the scale for the x axis scale, and scale_colour_brewer() does the same for the colour scale.
15.1.2 Naming scheme
The scale functions intended for users all follow a common naming scheme. You’ve probably already figured out the scheme, but to be concrete, it’s made up of three pieces separated by “_“:
1. scale
2. The name of the primary aesthetic (e.g., colour, shape or x)
3. The name of the scale (e.g., continuous, discrete, brewer).
The naming structure is often helpful, but can sometimes be ambiguous. For example, it is immediately clear that scale_x_*() functions apply to the x aesthetic, but it takes a little more thought to recognise that they also govern the behaviour of other aesthetics that describe a horizontal position (e.g., the xmin, xmax, and xend aesthetics). Similarly, while the name scale_colour_continuous() clearly refers to the colour scale associated with a continuous variable, it is less obvious that scale_colour_distiller() is simply a different method for creating colour scales for continuous variables. An example of the former point is sketched below.
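A small sketch (the data frame here is made up for illustration) showing that scale_x_log10() also transforms the xmin and xmax aesthetics used by horizontal error bars:
library(ggplot2)

df <- data.frame(
  y    = c("a", "b"),
  x    = c(10, 100),
  xmin = c(5, 50),
  xmax = c(20, 200)
)
# scale_x_log10() governs x, xmin and xmax together,
# so the error bars stay consistent with their points.
ggplot(df, aes(x, y, xmin = xmin, xmax = xmax)) +
  geom_point() +
  geom_errorbarh(height = 0.2) +
  scale_x_log10()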
15.1.3 Fundamental scale types
It is useful to note that internally all scale functions in ggplot2 belong to one of three fundamental types: continuous scales, discrete scales, and binned scales. Each fundamental type is handled by one of three scale constructor functions: continuous_scale(), discrete_scale(), and binned_scale(). Although you should never need to call these constructor functions, they provide the organising structure for scales and it is useful to know about them.
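One way to see the three types (a sketch; the exact class names are an implementation detail of current ggplot2 and may change) is to inspect the class of a scale object:
library(ggplot2)

class(scale_x_continuous())    # built on continuous_scale()
class(scale_colour_discrete()) # built on discrete_scale()
class(scale_x_binned())        # built on binned_scale()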
15.2 Scale names
Extend discussion of labs() in Section 8.1.
15.3 Scale breaks
Discussion of what unifies the concept of breaks across continuous, discrete and binned scales: they are specific data values at which the guide needs to display something. Include additional detail about break functions.
15.4 Scale limits
Section 15.1 introduced the concept that a scale defines a mapping from the data space to the aesthetic space. Scale limits are an extension of this idea: they dictate the region of the data space over which the mapping is defined. At a theoretical level this region is defined differently depending on the fundamental scale type. For continuous and binned scales, the data space is inherently continuous and one-dimensional, so the limits can be specified by two end points. For discrete scales, however, the data space is unstructured and consists only of a set of categories: as such the limits for a discrete scale can only be specified by enumerating the set of categories over which the mapping is defined.
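A short sketch of the difference (not from the book): continuous limits are two end points, while discrete limits enumerate the categories (and, as a side effect, fix their order):
library(ggplot2)

# Continuous scale: limits are two end points.
ggplot(mpg, aes(displ, hwy)) +
  geom_point() +
  scale_x_continuous(limits = c(2, 6))

# Discrete scale: limits enumerate the categories to keep.
ggplot(mpg, aes(class, hwy)) +
  geom_boxplot() +
  scale_x_discrete(limits = c("2seater", "compact", "midsize"))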
The toolbox chapters outline the common practical goals for specifying the limits: for position scales the limits are used to set the end points of the axis, for example. This leads naturally to the question of what ggplot2 should do if the data set contains “out of bounds” values that fall outside the limits.
The default behaviour in ggplot2 is to convert out of bounds values to NA, the logic for this being that if a data value is not part of the mapped region, it should be treated as missing. This can occasionally lead to unexpected behaviour, as illustrated in Section 10.1.2. You can override this default by setting the oob argument of the scale, a function that is applied to all observations outside the scale limits. The default is scales::oob_censor(), which replaces any value outside the limits with NA. Another option is scales::oob_squish(), which squishes all values into the range. An example using a fill scale is shown below:
df <- data.frame(x = 1:6, y = 8:13)
base <- ggplot(df, aes(x, y)) +
geom_col(aes(fill = x)) + # bar chart
geom_vline(xintercept = 3.5, colour = "red") # for visual clarity only
base
base + scale_fill_gradient(limits = c(1, 3))
base + scale_fill_gradient(limits = c(1, 3), oob = scales::oob_squish)
On the left the default fill colours are shown, ranging from dark blue to light blue. In the middle panel the scale limits for the fill aesthetic are reduced so that the values for the three rightmost bars are replaced with NA and are mapped to a grey shade. In some cases this is desired behaviour, but often it is not: the right panel addresses this by modifying the oob function appropriately.
15.5 Scale guides
Scale guides are more complex than scale names: where the name argument (and labs()) takes text as input, the guide argument (and guides()) requires a guide object created by a guide function such as guide_colourbar() or guide_legend(). The arguments to these functions offer additional fine control over the guide.
The table below summarises the default guide functions associated with different scale types:
Scale type Default guide type
continuous scales for colour/fill aesthetics colourbar
binned scales for colour/fill aesthetics coloursteps
position scales (continuous, binned and discrete) axis
discrete scales (except position scales) legend
binned scales (except position/colour/fill scales) bins
Each of these guide types has appeared earlier in the toolbox:
• guide_colourbar() is discussed in Section 11.2.5
• guide_coloursteps() is discussed in Section 11.4.2
• guide_axis() is discussed in Section 10.3.2
• guide_legend() is discussed in Section 11.3.6
• guide_bins() is discussed in Section 12.1.2
In addition to the functionality discussed in those sections, the guide functions have many arguments that are equivalent to theme settings like text colour, size, font etc, but only apply to a single guide. For information about those settings, see Chapter 18.
New stuff: show examples where something other than the default guide is used…
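As one possible illustration (an assumption, not the book's code): replace the default colourbar with a legend guide, and use guide_axis() to angle the axis labels.
library(ggplot2)

ggplot(mpg, aes(displ, hwy, colour = hwy)) +
  geom_point() +
  # override the default colourbar with discrete legend keys
  guides(colour = guide_legend()) +
  # guide_axis() customises the axis guide, here rotating tick labels
  scale_x_continuous(guide = guide_axis(angle = 45))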
15.6 Scale transformation
The most common use for scale transformations is to adjust a continuous position scale, as discussed in Section 10.1.7. However, they can sometimes be helpful when applied to other aesthetics. Often this is purely a matter of visual emphasis. An example of this for the Old Faithful density plot is shown below. The linearly mapped scale on the left makes it easy to see the peaks of the distribution, whereas the transformed representation on the right makes it easier to see the regions of non-negligible density around those peaks:
base <- ggplot(faithfuld, aes(waiting, eruptions)) +
geom_raster(aes(fill = density)) +
scale_x_continuous(NULL, NULL, expand = c(0, 0)) +
scale_y_continuous(NULL, NULL, expand = c(0, 0))
base
base + scale_fill_continuous(trans = "sqrt")
Transforming size aesthetics is also possible:
df <- data.frame(x = runif(20), y = runif(20), z = sample(20))
base <- ggplot(df, aes(x, y, size = z)) + geom_point()
base
base + scale_size(trans = "reverse")
In the plot on the left, the z value is naturally interpreted as a “weight”: if each dot corresponds to a group, the z value might be the size of the group. In the plot on the right, the size scale is reversed, and z is more naturally interpreted as a “distance” measure: distant entities are scaled to appear smaller in the plot.
15.7 Legend merging and splitting
There is always a one-to-one correspondence between position scales and axes. But the connection between non-position scales and legends is more complex: one legend may need to draw symbols from multiple layers ("merging"), or one aesthetic may need multiple legends ("splitting").
15.7.1 Merging legends
Merging legends occurs quite frequently when using ggplot2. For example, if you’ve mapped colour to both points and lines, the keys will show both points and lines. If you’ve mapped fill colour, you get a rectangle. Note the way the legend varies in the plots below:
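A minimal sketch of the kind of plots meant here (the book shows figures; this toy data frame is an assumption):
library(ggplot2)

df <- data.frame(x = 1:4, y = c(2, 4, 3, 5), g = rep(c("a", "b"), each = 2))

# Colour mapped to points and lines: each key shows a point over a line.
ggplot(df, aes(x, y, colour = g)) + geom_point() + geom_line()

# Fill mapped to bars: each key is a filled rectangle.
ggplot(df, aes(x, y, fill = g)) + geom_col()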
By default, a layer will only appear in the legend if the corresponding aesthetic is mapped to a variable with aes(). You can override whether or not a layer appears in the legend with show.legend: FALSE prevents a layer from ever appearing in the legend; TRUE forces it to appear when it otherwise wouldn't. Using TRUE can be useful in conjunction with the following trick to make points stand out:
ggplot(toy, aes(up, up)) +
geom_point(size = 4, colour = "grey20") +
geom_point(aes(colour = txt), size = 2)
ggplot(toy, aes(up, up)) +
geom_point(size = 4, colour = "grey20", show.legend = TRUE) +
geom_point(aes(colour = txt), size = 2)
ggplot2 tries to use the smallest number of legends that will accurately convey the aesthetics used in the plot. It does this by combining legends where the same variable is mapped to different aesthetics. The figure below shows how this works for points: if both colour and shape are mapped to the same variable, then only a single legend is necessary.
base <- ggplot(toy, aes(const, up)) +
scale_x_continuous(NULL, breaks = NULL)
base + geom_point(aes(colour = txt))
base + geom_point(aes(shape = txt))
base + geom_point(aes(shape = txt, colour = txt))
In order for legends to be merged, they must have the same name. So if you change the name of one of the scales, you'll need to change it for all of them. One way to do this is by using the labs() helper function:
base <- ggplot(toy, aes(const, up)) +
geom_point(aes(shape = txt, colour = txt)) +
scale_x_continuous(NULL, breaks = NULL)
base
base + labs(shape = "Split legend")
base + labs(shape = "Merged legend", colour = "Merged legend")
15.7.2 Splitting legends
Splitting a legend is a much less common data visualisation task. In general it is not advisable to map one aesthetic (e.g. colour) to multiple variables, and so by default ggplot2 does not allow you to "split" the colour aesthetic into multiple scales with separate legends. Nevertheless, there are exceptions to this general rule, and it is possible to override this behaviour using the ggnewscale package. The ggnewscale::new_scale_colour() command acts as an instruction to ggplot2 to initialise a new colour scale: scale and guide commands that appear above the new_scale_colour() command will be applied to the first colour scale, and commands that appear below are applied to the second colour scale.
To illustrate this the plot on the left uses geom_point() to display a large marker for each vehicle make in the mpg data, with a single colour scale that maps to the year. On the right, a second geom_point() layer is overlaid on the plot using small markers: this layer is associated with a different colour scale, used to indicate whether the vehicle has a 4-cylinder engine.
base <- ggplot(mpg, aes(displ, hwy)) +
geom_point(aes(colour = factor(year)), size = 5) +
scale_colour_brewer("year", type = "qual", palette = 5)
base
base +
ggnewscale::new_scale_colour() +
geom_point(aes(colour = cyl == 4), size = 1, fill = NA) +
scale_colour_manual("4 cylinder", values = c("grey60", "black"))
Additional details, including functions that apply to other scale types, are available on the package website, https://github.com/eliocamp/ggnewscale.
15.8 Legend key glyphs
In most cases the default glyphs shown in the legend key will be appropriate to the layer and the aesthetic. Line plots of different colours will show up as lines of different colours in the legend, boxplots will appear as small boxplots in the legend, and so on. Should you need to override this behaviour, the key_glyph argument can be used to associate a particular layer with a different kind of glyph. For example:
base <- ggplot(economics, aes(date, psavert, color = "savings"))
base + geom_line()
base + geom_line(key_glyph = "timeseries")
More precisely, each geom is associated with a function such as draw_key_boxplot() or draw_key_path() which is responsible for drawing the key when the legend is created. You can pass the desired key drawing function directly: for example, base + geom_line(key_glyph = draw_key_timeseries) would also produce the plot shown above right.
https://mathematica.stackexchange.com/questions/120338/is-there-sauvolas-or-niblacks-threshold-binarization-algorithms/120350
# Is there Sauvola's or Niblack's threshold binarization algorithms?
It seems that there are no built-in Sauvola or Niblack algorithms for image binarization. Maybe someone has ready-to-use Mathematica code that implements one of these algorithms. I need one of them as a starting point for the next steps of a binarisation procedure for old handwritten documents, as described in: Ntirogiannis, Gatos, Pratikakis, "A combined approach for the binarization of handwritten document images". If someone can suggest other code for segmenting handwritten text from an image like this, I will appreciate it.
• Have you tried any methods with Binarize[] at all? There are lots of options. – dr.blochwave Jul 9 '16 at 12:17
• e.g. LocalAdaptiveBinarize[img, 25, {.95, 0, .05}] – dr.blochwave Jul 9 '16 at 12:26
• Yes, I have looked but nothing is satisfying. But, just now I tried LocalAdaptiveBinarize and it seems OK. – Dragutin Jul 9 '16 at 12:27
Please enjoy my own implementations of the two requested algorithms. Feel free to replace "WVM" with "C" in case you have a supported C compiler installed. Please note that your example image is of extraordinarily bad quality, so once you have found appropriate parameters for the two algorithms, please let us know what you have obtained.
Please note that my implementations lack any Image3D functionality. Also, multichannel images cannot be processed; in other words, only scalar single-channel image data can be handled so far. However, a specialty is the option to select a circular approximation of the filter kernel. In an update below I also add a sketch of the two algorithms using ImageFilter. That lifts the restrictions mentioned before, but then the kernel will have a box shape.
Niblack 1986
Assumes white text on black background
Clear[NiblackKernel];
NiblackKernel = Compile[{{list, _Real, 1}, {sdc, _Real}},
Module[{mean, stddev, thr},
mean = Mean[list];
stddev = StandardDeviation[list];
thr = mean + sdc*stddev;
UnitStep[list[[Ceiling[Length[list]/2]]] - thr]
], CompilationTarget -> "WVM"];
Clear[NiblackFilter];
Options[NiblackFilter] = {"StdDevCoefficient" -> 0.2,
"WindowHalfWidth" -> 15, "Mask" -> "Box"};
NiblackFilter[im_Image, OptionsPattern[]] :=
 Module[{flatelpos, padim, whw, sdc, el},
  whw = Round[OptionValue["WindowHalfWidth"]];
  sdc = OptionValue["StdDevCoefficient"];
  (* structuring element: full box window or circular approximation;
     the opening of this Switch was lost in the page extraction *)
  el = Switch[OptionValue["Mask"],
    "Box", Table[1, {2 whw + 1}, {2 whw + 1}],
    "Circle",
    Ceiling[Rescale[
      Sign[Table[(x^2 + y^2), {x, -whw, whw}, {y, -whw, whw}] -
        whw*(whw + 1/2)], {1, -1}]],
    _, Table[1, {2 whw + 1}, {2 whw + 1}]
    ];
  flatelpos = Flatten[Position[Flatten[el], 1]];
  (* pad the data so the window fits at the margins; this line is
     reconstructed, ArrayPad with "Reversed" padding is an assumption *)
  padim = ArrayPad[ImageData[im], Ceiling[Dimensions[el]/2], "Reversed"];
  Image@Developer`PartitionMap[(NiblackKernel[Flatten[#, 1][[flatelpos]],
      sdc]) &, padim, Dimensions[el], {1, 1},
    Ceiling[Dimensions[el]/2]]
  ]
Sauvola and Pietikäinen 2000
Assumes black text on white background
Clear[SauvolaKernel];
SauvolaKernel =
Compile[{{list, _Real, 1}, {sdc, _Real}, {dr, _Real}},
Module[{mean, stddev, thr},
mean = Mean[list];
stddev = StandardDeviation[list];
thr = mean*(1. + sdc*(stddev/dr - 1.));
UnitStep[list[[Ceiling[Length[list]/2]]] - thr]
], CompilationTarget -> "WVM"];
Clear[SauvolaFilter];
Options[SauvolaFilter] = {"StdDevCoefficient" -> 0.2,
"DynamicRange" -> 128, "WindowHalfWidth" -> 15, "Mask" -> "Box"};
SauvolaFilter[im_Image, OptionsPattern[]] :=
 Module[{flatelpos, padim, whw, sdc, dr, el},
  whw = Round[OptionValue["WindowHalfWidth"]];
  sdc = OptionValue["StdDevCoefficient"];
  dr = N[OptionValue["DynamicRange"]/255];
  (* structuring element: full box window or circular approximation;
     the opening of this Switch was lost in the page extraction *)
  el = Switch[OptionValue["Mask"],
    "Box", Table[1, {2 whw + 1}, {2 whw + 1}],
    "Circle",
    Ceiling[Rescale[
      Sign[Table[(x^2 + y^2), {x, -whw, whw}, {y, -whw, whw}] -
        whw*(whw + 1/2)], {1, -1}]],
    _, Table[1, {2 whw + 1}, {2 whw + 1}]
    ];
  flatelpos = Flatten[Position[Flatten[el], 1]];
  (* pad the data so the window fits at the margins; this line is
     reconstructed, ArrayPad with "Reversed" padding is an assumption *)
  padim = ArrayPad[ImageData[im], Ceiling[Dimensions[el]/2], "Reversed"];
  Image@Developer`PartitionMap[(SauvolaKernel[Flatten[#, 1][[flatelpos]],
      sdc, dr]) &, padim, Dimensions[el], {1, 1},
    Ceiling[Dimensions[el]/2]]
  ]
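A hypothetical usage sketch (the file name and parameter values are assumptions, not from the answer):
(* hypothetical input file; any single-channel grayscale scan will do *)
img = ColorConvert[Import["manuscript.png"], "Grayscale"];
NiblackFilter[img, "WindowHalfWidth" -> 15, "StdDevCoefficient" -> 0.2]
SauvolaFilter[img, "WindowHalfWidth" -> 15, "StdDevCoefficient" -> 0.3,
 "Mask" -> "Circle"]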
Update
I add some sketches using ImageFilter (without the careful handling of the margins above, sorry). I expect them to work for multichannel images as well as for 3D images:
Niblack
whw = 15;
sdc = 0.2;
image = Import[...];
center = Sequence @@ Table[whw + 1, {Length[ImageDimensions[image]]}];
ImageFilter[
UnitStep[#[[center]] - (Mean[Flatten[#]] +
sdc*StandardDeviation[Flatten[#]])] &,
image, whw, Padding -> "Fixed"]
whw is the kernel radius, sdc is the standard deviation coefficient
Sauvola
whw = 15;
sdc = 0.05;
dr = 128./255.;
image = Import[...];
center = Sequence @@ Table[whw + 1, {Length[ImageDimensions[image]]}];
ImageFilter[
UnitStep[#[[center]] - (Mean[
Flatten[#]]*(1. +
sdc*(StandardDeviation[Flatten[#]]/dr - 1.)))] &,
image, whw, Padding -> "Fixed"]
whw is the kernel radius, sdc is the standard deviation coefficient, dr is the dynamic range (referred to 8 bit, so normalized to 255)
Hope everything is correct now.
• Thank you very much. It may seem that Niblack's result is poor, but this is only for creating the Inpaint mask for the procedure described in Ntirogiannis. When I finish the process I will post results. In the meantime I experimented, and I get this result with LocalAdaptiveBinarize[image, 72, {0.928, 0., 0.034}] – Dragutin Jul 9 '16 at 21:21
• @Dragutin: have corrected a few minor errors in my code; also have added comments, plus some attempts to implement the algorithms using in-mathematica means, i.e. ImageFilter – UDB Jul 10 '16 at 11:40
• I wanted to add a link to Niblack, but it seems he described the procedure in a book, and I can't seem to access that in Google Books. – J. M. is away Jul 10 '16 at 13:13
https://docs.relational.ai/rel/concepts/integrity-constraints
# Integrity Constraints
This concept guide covers integrity constraints in Rel.
## Goal#
Integrity Constraints (ICs) are used to ensure accuracy and consistency of the data in a relational database. They can guarantee that data insertion, updating, and other processes are performed in such a way that data integrity is not affected. The goal of this concept guide is to familiarize the reader with Rel’s IC functionality through a number of examples. After reading this concept guide you should expect to know how to write Integrity Constraints in Rel.
## Introduction#
An integrity constraint (IC) ensures, as the name suggests, the integrity of the database, requiring that its relations and data obey the constraint specified. We can broadly classify ICs into three categories:
• Type Constraints are used to check data types (e.g.: String(name)). These constraints are inherent to the data model (aka schema) and usually can be evaluated statically. They are discussed in detail in Section Type Constraints.
• Value Constraints are used to check that individual entries in a relation hold a specific value or lie within a specific range (e.g.: age >= 0). They are discussed in detail in Section Value Constraints.
• Logical Constraints are used to ensure that relations adhere to specific logical rules. They often involve multiple relations. Logical constraints are also a great way to express expert knowledge in our RKGMS. They are discussed in detail in Section Logical Constraints.
In Rel, an integrity constraint is defined in a similar way as any relation. Consequently, the same language constructs (i.e.: Relational Abstraction, Relational Application, and more) can be used. The technical details of how integrity constraints are defined can be found in the Integrity Constraints section in the Language reference.
In contrast to defining a relation, where variables are kept only if they adhere to the logical statement, an IC probes the opposite, i.e., variable combinations are kept only when they don’t fulfill the logical statement.
ICs may involve a single fact, a single relation, or multiple relations. In the Simple Example section, you can find the basics for writing integrity constraints.
More advanced ICs are discussed in the Common Use Cases section. More specifically, this discusses type, value, and logical constraints.
When an IC is violated, the running transaction is aborted and the appropriate information is logged in the output relation. It is even possible to write ICs that report back counter-examples that violate the IC.
In the Case study: Mini-IMDb Dataset section, you can see how ICs can be enforced on real-world data.
As we will see, the best practices when writing ICs are:
• to give descriptive names to ICs and
• to use arguments in the IC declaration.
Following both points will be most helpful when analyzing IC violations. Naming the IC will help us identify which IC is violated; and IC arguments will help us understand the reason for an IC violation, because counter-examples will be reported back for the specified arguments. In Section Best Practices we discuss in detail the advantages of these two points.
## Simple Example#
A very simple integrity constraint is:
// query
ic {
1 < 2
}
which checks the correctness of the formula 1 < 2. This IC, of course, holds independently of any particular data in the database.
Generally, an IC involves one or more relations because its purpose is to ensure the integrity of the data, or the correctness of definitions and rules. To check, for instance, that two relations are the same we can write:
// query
def R = {1;2}, {'a'; 'b'}
def S = {(1, 'a'); (1, 'b'); (2, 'a'); (2, 'b')}
ic {
equal(R, S)
}
Note that you should use equal(R, S) to check that two relations are the same, instead of =. For more details see the definition of equal.
To ensure that the reasoning between two facts holds you can use implies:
// query
def square(x, y) {
x = range[-3, 3, 1] and
y = x^2
}
ic is_square_positive(x, y) {
square(x, y) implies y >= 0
}
def output = square
In this example, the IC guarantees that all squares y are non-negative (y >= 0), as every square of a real number is a non-negative number.
The IC name/identifier is_square_positive(x, y) will be displayed in the output in case the IC doesn’t hold, in other words if y is negative. This is optional: If no name is provided by the user, the system assigns a unique identifier to the IC.
Giving descriptive names to ICs will help identify them if they are violated, as we discuss next.
## Common Use Cases#
This section demonstrates the various use cases for integrity constraints. You can start by defining a person relation that contains people and their properties. As you progress, you will define more and more person properties as needed to learn different IC use cases.
### Type Constraints#
You can start by defining the name property — in particular, person[:name, :first] and person[:name, :last] — to store first and last names.
// install
def person[:name, :first] = {
(1, "Fred");
(2, "David");
(3, "Brian");
(4, "Jacques");
(5, "Julie");
}
def person[:name, :last] = {
(1, "Smith");
(2, "Johnson");
(3, "Williams");
(4, "Brown");
(5, "Jones");
}
Now you can query it to see how your data looks.
// query
person
The name property is modeled in the highly normalized Graph Normal Form (GNF). GNF encourages dividing :name into the more basic properties :first and :last. You can now check that the first and last names are strings:
// query
ic person_type_check(id, sub_property){
forall(x:
person[:name, sub_property](id, x) implies String(x)
)
}
The IC person_type_check checks that all sub-properties under person[:name] have value type String. Let’s say that you want to add another sub-property to person[:name] called :MI, with the person’s middle initial. You would also have to store it as String to avoid violating the IC. For instance, you could not store it as Char without violating or modifying the IC.
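For instance, a hypothetical :MI sub-property (the values here are made up) stored as String keeps the IC satisfied:
// install
def person[:name, :MI] = {
    (1, "A");
    (2, "J");
}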
Type ICs can often be evaluated statically, based only on the schema, without having to actually evaluate the relation. With the @static annotation, you can tell the RKGMS that you expect that this IC can be statically evaluated.
// query
@static
ic person_type_check(id, sub_property) {
forall(x :
person[:name, sub_property](id, x) implies String(x)
)
}
If an IC named ic_name is annotated as @static but cannot be statically evaluated, the system will issue a warning:
warn: declaration 'ic_name' is marked @static but not computed statically
### Value Constraints#
The next thing to check is value constraints. For that, add an age property, :age, to the person relation:
// install
def person[:age] = {
(1, 37);
(2, 31);
(3, 47);
(4, 27);
(5, 22);
}
def person_age = person[:age]
An IC can check that all individuals are 18 or older:
// query
ic adult_check(id, age) {
    person[:age](id, age)
    implies
    age >= 18
}
Value constraints can also be used to check the arity of relations. You can check that person[:age] has arity 2:
// query
ic arity_check(x){
arity[person_age] = x implies x = 2
}
Note that by using the IC argument x, you will be informed of the actual value of any arity other than 2 if the IC is violated.
Note that arity is in general an over-approximation that is statically evaluated. In practice, false alarms can sometimes happen if predicates are overloaded by arity. In such cases, the IC can be adjusted accordingly, as the sketch below illustrates.
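A hypothetical illustration (the relation r is made up): a relation overloaded by arity has more than one possible arity, so an arity IC must allow for each of them.
// query
def r = {(1, 2); (1, 2, 3)}
ic arity_check_overloaded(x) {
    arity[r](x) implies (x = 2 or x = 3)
}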
The cardinality of a relation can also be checked using value constraints. You can check that person[:age] has five records:
// query
ic cardinality_check(x){
count[person_age] = x implies x = 5
}
### Logical Constraints#
#### Disjointness#
Next, you can define a :gender property in person and check the disjoint property of the derived male (‘M’), female (‘F’), and non_binary (‘X’) relations.
// install
def person[:gender] = {
(1, 'M');
(2, 'M');
(3, 'F');
(4, 'F');
(5, 'X');
}
def male(p) = person(:gender, p, 'M')
def female(p) = person(:gender, p, 'F')
def non_binary(p) = person(:gender, p, 'X')
def gender(p) = person(:gender, p, _)
// query
ic is_disjoint() {
disjoint(female, male) and
disjoint(non_binary, male ∪ female)
}
This checks that male and female are disjoint relations and that non_binary is disjoint from the union (indicated by ∪) of the male and female relations.
#### Subset#
You can use the same gender property and check whether the derived gender relations are subsets of gender.
// query
ic is_subset() {
subset(female, gender) and
male ⊆ gender and
non_binary ⊆ gender
}
In Rel, the built-in subset relation can be called either by its name or by the symbol ⊆.
#### Unique Value#
Using the :age property in person you can define a unique value constraint to check that each person only has one age value.
// install
def person[:age] = {
(1, 37);
(2, 31);
(3, 47);
(4, 27);
(5, 22);
}
// query
ic is_unique(id) {
forall(age1, age2:
(person[:age](id, age1) and
person[:age](id, age2))
implies
age1 = age2
)
}
In the IC above, the id is an IC argument and in case of a violation, the error report returns all id values that violate the IC.
The uniqueness of a person’s age can also be checked using the built-in relation function, which holds when the last argument of a relation is uniquely determined by the preceding ones:
// query
ic is_unique {
function(person[:age])
}
Another alternative to check the uniqueness is to count all ages associated with a person:
// query
ic is_unique(id, x) {
count[person[:age, id]] = x implies x = 1
}
Caution: Intuitively we may want to write the IC this way:
ic (id) {
count[person[:age, id]] = 1
}
However, this will lead to an unbound error for the argument id, as discussed in the Avoiding Unbound Argument Errors When Evaluating ICs section.
#### Symmetry#
Now we want to add a logical constraint to check the symmetry of a relation. To demonstrate this, we define an is_friends relation that collects pairs of people that are friends with each other. Obviously, friendship is mutual and consequently the relation is_friends should reflect that. We do so by checking that the relation is symmetric w.r.t. both arguments:
// install
def is_friends = {
(1, 2);
(1, 5);
(2, 1);
(3, 4);
(4, 3);
(5, 1);
}
ic is_symmetric(id1, id2) {
is_friends(id1, id2) implies is_friends(id2, id1)
}
The symmetry of a relation can also be checked in a point-free way (which lets us skip the id1 and id2 variables) with the built-in transpose relation defined in the Standard Library.
// query
ic is_symmetric {
equal(is_friends, transpose[is_friends])
}
While this formulation is more compact, it has no arguments, so it will not show us the counter-example values for id1, id2 if the IC is violated.
#### Transitivity#
Another logical constraint that might be of interest is transitivity. To demonstrate this, define a property :dob which refers to the date of birth of a person, and an is_older relation containing pairs (a, b) where person a is older than person b.
// install
def person[:dob] = {
(1, 1984);
(2, 1990);
(3, 1974);
(4, 1994);
(5, 1999);
}
def is_older(id1, id2) { person[:dob][id1] < person[:dob][id2]}
We know that the relation is_older should be transitive: if person a is older than person b which is older than person c, then a is also older than c. So let’s check that this is true by writing the following IC:
// query
ic is_transitive {
is_older.is_older ⊆ is_older
}
A relation that doesn’t have to be transitive is the friendship relation. If a is friends with b and b is friends with c then we can’t conclude that a is (or isn’t) friends with c. We can check whether or not our is_friends relation is transitive.
ic is_transitive(id1, id2, id3) {
is_friends(id1, id2) and is_friends(id2, id3)
implies
is_friends(id1, id3)
}
We use the point-wise notation, where we state explicitly the relation variables id1, id2, and id3, so we can capture friend combinations where the transitivity condition doesn’t hold, and indeed we get the following IC violation:
ERROR: LoadError: ClientException: Transaction aborted. The following problems caused the abort:
-- INTEGRITY_CONSTRAINT_VIOLATION ---------------------------------------------------
Integrity constraints are violated. The ICs are defined as follows:
15| ic is_transitive(id1, id2, id3) {
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
16| is_friends(id1, id2) and is_friends(id2, id3)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
17| implies
^^^^^^^^^^^^
18| is_friends(id1,id3)
^^^^^^^^^^^^^^^^^^^^^^^^
19| }
^^
error: ICs violated
Transaction output:
abort() => [()]
output(:is_transitive, Int64, Int64, Int64) => [(1, 2, 1), (1, 5, 1), (2, 1, 2), (2, 1, 5), (3, 4, 3), (4, 3, 4), (5, 1, 2), (5, 1, 5)]
From the error report you can see that there are eight tuples (a, b, c) that violate the transitivity condition. In particular, there are two classes of violations:
• Trivial violations like (1, 2, 1) where the first and last persons are the same and you can’t be friends with yourself.
• More interesting triples like (2, 1, 5) where all three people are different.
This example clearly highlights why arguments in your IC definition can be very helpful in understanding the data.
## Integrity Constraint Violations#
This section will discuss IC violations and what information about the violation is reported back to the user. The Best Practices section shows how to ensure you get the most actionable information back when an IC is violated.
Look what happens when you define an IC with a statement that evaluates to false:
ic violation_check {
1 = 2
}
As you can see below, the transaction is aborted due to the IC violation. In general, any transaction that triggers an IC violation will be aborted. This may happen because (1) you try to define an IC that does not hold given the current data, or (2) you try to change the data in a way that violates an existing IC.
ERROR: LoadError: ClientException: Transaction aborted. The following problems caused the abort:
-- INTEGRITY_CONSTRAINT_VIOLATION ---------------------------------------------------
Integrity constraints are violated. The ICs are defined as follows:
2| ic violation_check = {
^^^^^^^^^^^^^^^^^^^^^^^
3| 1 = 2
^^^^^^^^^^^^^^^^
4| }
^^
error: ICs violated
Transaction output:
abort() => [()]
output(:violation_check) => [()]
The error message highlights which IC is violated. Violated ICs are also reported in the output relation. The line output(:violation_check) => [()] states that the relation violation_check, called the IC relation, holds the tuple (), which represents true. Because it is not empty, the IC is violated.
🔎
An IC is violated when its IC relation is not empty: the IC relation is the negation of the actual expression defined in the IC declaration.
Concretely, in the above example, the IC relation violation_check is equivalent to the expression: not 1 = 2, which is true. Hence violation_check holds a logical true, represented by the tuple ().
The Integrity Constraints with Arguments section shows that if an IC is defined with arguments then the IC relation contains all counter-examples that violate the IC.
## Case Study: Mini-IMDb Dataset#
This section uses the Mini-IMDb data as an example of a real world dataset and shows how ICs can enforce a schema and encode expert knowledge in your database.
To load the Mini-IMDb dataset, we use the load_csv functionality. First we define a schema and a path to our data files:
// install
def url = "s3://relationalai-documentation-public/dataset/imdb_job/data_mini/"
def config_name[:path] = concat[url, "name.csv"]
def config_name[:schema, :person_id] = "int"
def config_name[:schema, :name] = "string"
def config_title[:path] = concat[url, "title.csv"]
def config_title[:schema, :movie_id] = "string"
def config_title[:schema, :title] = "string"
def config_title[:schema, :year] = "int"
def config_cast_info[:path] = concat[url, "cast_info.csv"]
def config_cast_info[:schema, :movie_id] = "int"
def config_cast_info[:schema, :person_id] = "int"
def name_csv = load_csv[config_name]
def title_csv = load_csv[config_title]
def cast_info_csv = load_csv[config_cast_info]
Next we define relations to access the data in Rel:
// install
def movie[:title] = title_csv[:movie_id, pos], title_csv[:title, pos] from pos
def movie[:year] = title_csv[:movie_id, pos], title_csv[:year, pos] from pos
def movie[:id] = first[movie[_]]
def actor[:name] = name_csv[:person_id, pos], name_csv[:name, pos] from pos
def cast = cast_info_csv[:movie_id, pos], cast_info_csv[:person_id, pos] from pos
def co_actors(id1, id2) =
cast(movie_id, id1) and
cast(movie_id, id2) and
id1 != id2
from movie_id
### Defining Mini-IMDb Integrity Constraints#
Now, let’s define ICs on this Mini-IMDb dataset. First we check that the actor name is a String and the actor identifier is a Number.
Note that there is an advantage in creating entities for concepts like actors or movies and using the entity keys to refer to them compared to using simple identifiers like integers. For details see the Entities concept guide.
// query
@static
ic person_type_check(name, id) {
actor(:name, id, name)
implies
Number(id) and String(name)
}
In general, we should check all the other data types. For the purpose of this concept guide, we will consider only this one example.
As discussed above, we can also insert expert knowledge into our database using ICs. For instance, we can check that no movie has a year earlier than 1874, because we know the oldest entry in IMDb is the documentary “Passage de Venus”, made in 1874.
// query
ic value_check(mid, year) {
movie(:year, mid, year)
implies
year >= 1874
}
Next, you can also check that the co-actor relation, co_actors, is symmetric, as it should be:
// query
ic is_symmetric {
equal(co_actors, transpose[co_actors])
}
and indeed it is.
The Mini-IMDb dataset is quite small, so the number of useful ICs here is limited. As the dataset grows and more information is stored in the knowledge graph, the need for integrity constraints also grows, to ensure that the knowledge is stored in a correct and consistent manner.
## Best Practices#
This section describes some best practices for writing ICs so that you have the most actionable information to hand when an IC is violated.
### Integrity Constraints with Arguments#
In the previous examples, each IC had no arguments and the IC violation could only show that the IC was violated. To gain more information about why and for which values the IC was violated, you need to add arguments to the IC.
For example, when checking that the relation person[:age]
def person[:age] = {
(1, 37);
(2, 31);
(3, 47);
(4, 27);
(5, 12);
}
only contains people who are older than 18
ic value_violation(id, age) {
person[:age](id, age)
implies
age >= 18
}
you get the following error report:
ERROR: LoadError: ClientException: Transaction aborted. The following problems caused the abort:
-- INTEGRITY_CONSTRAINT_VIOLATION ---------------------------------------------------
Integrity constraints are violated. The ICs are defined as follows:
10| ic value_violation(id, age) {
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
11| person[:age](id, age)
^^^^^^^^^^^^^^^^^^^^^^^^^^
12| implies
^^^^^^^^^^^^
13| age >= 18
^^^^^^^^^^^^^^
14| }
^^
error: ICs violated
Transaction output:
value_violation(Int64, Int64) => [(5, 12)]
The report points out that the person with id = 5 has age 12, and this fact is the cause of the IC violation, since 12 < 18.
If you write the same IC without arguments, expecting it to hold for all id and age values,
ic value_violation{
forall(id, age:
person[:age](id, age)
implies
age >= 18
)
}
the IC violation is reported as follows:
ERROR: LoadError: ClientException: Transaction aborted. The following problems caused the abort:
-- INTEGRITY_CONSTRAINT_VIOLATION ---------------------------------------------------
Integrity constraints are violated. The ICs are defined as follows:
10| ic value_violation() {
^^^^^^^^^^^^^^^^^^^^^^^
11| forall(id, age:
^^^^^^^^^^^^^^^^^^^^
12| person[:age](id, age)
^^^^^^^^^^^^^^^^^^^^^^^^^^
13| implies
^^^^^^^^^^^^^^^^
14| age >= 18)
^^^^^^^^^^^^^^^^^^^
15| }
^^^^^^
error: ICs violated
Transaction output:
value_violation() => [()]
There is no additional information about which data record caused the IC violation.
In particular, the only information reported back is that the IC evaluated to true (the relation {()}), which means the expression in the IC evaluated to false.
### Avoiding Unbound Argument Errors in ICs#
When checking an IC, the system evaluates its negation. Any instance that satisfies the negation will be a counter-example. For the evaluation to take place, the variables must be bound to a finite domain.
Otherwise, it is possible that an infinite number of values can violate the IC. This happens when arguments are not bound, which triggers an unbound variable error.
Consider this example:
def R = {1; 2}
ic check_R_values(id) {
R(id) and id <= 2
}
It results in an error that tells us that (1) the IC is violated and (2) the variable id is unbound:
INTEGRITY_CONSTRAINT_VIOLATION
error: ICs violated
-- UNBOUND_VARIABLE -----------------------------------------------------------------
3| ic check_R_values(id) {
^^^^^^^^^^^^^^^^^^^^^^^^^^
4| R(id) and id <= 2
^^^^^^^^^^^^^^^^^^^^^^
5| }
^^
error: expression is not bound. Maybe this is a typo, or consider adding '@inline' if the definition is intentionally infinite?
-- UNBOUND_VARIABLE -----------------------------------------------------------------
3| ic check_R_values(id) {
^^^
error: 'id' is not bound. Maybe this is a typo, or consider adding '@inline' if the definition is intentionally infinite?
Transaction output:
abort() => [()]
At first glance, these error reports might be surprising, because all values in R fulfill the condition id <= 2. However, because an IC relation is the negation of the specified expression, the IC relation check_R_values(id) is equivalent to
id: not (R(id) and id <= 2)
and there are an infinite number of id values that make the statement not (R(id) and id <= 2) true. In particular, nothing here says that the values of id must be in R. For instance, values like id = 3 or id = "abc" are valid counter-examples that violate the IC.
You can rewrite the above IC using implies to bind the variable id to relation R and get the desired integrity check, which is valid:
// query
def R = {1; 2}
ic check_R_values(id) {
R(id) implies id <= 2
}
### Named Integrity Constraints#
Similar considerations apply to naming ICs. The system creates a unique identifier for ICs that lack a user-provided name. If you have multiple ICs without a name, it will be hard to know which IC was violated.
Take the following toy example:
ic (id){
id = {1;2;3}
implies
id <= 2
}
ic (id){
id = {1;2;3}
implies
id < 2
}
Now both ICs are violated, and the system reports the id values that make them fail:
ERROR: LoadError: ClientException: Transaction aborted. The following problems caused the abort:
-- INTEGRITY_CONSTRAINT_VIOLATION ---------------------------------------------------
Integrity constraints are violated. The ICs are defined as follows:
1| ic (id){
^^^^^^^^^
2| id = {1;2;3}
^^^^^^^^^^^^^^^^^
3| implies
^^^^^^^^^^^^
4| id <= 2
^^^^^^^^^^^^
5| }
^^
7| ic (id){
^^^^^^^^^
8| id = {1;2;3}
^^^^^^^^^^^^^^^^^
9| implies
^^^^^^^^^^^^
10| id < 2
^^^^^^^^^^^
11| }
^^
error: ICs violated
Transaction output:
abort() => [()]
output(:rel-query-action##3385#constraint#0, Int64) => [(3,)]
output(:rel-query-action##3385#constraint#1, Int64) => [(2,), (3,)]
However, the ICs were not named and therefore you can’t infer which counter-examples correspond to which IC.
To summarize the best practices:
• Give ICs descriptive names and
• Define ICs with useful arguments such that the error reports, when IC violations occur, provide the most actionable information for analyzing why the IC violation was triggered.
## Tips and Tricks#
### IC Warnings#
You may encounter a NON_EMPTY_INTEGRITY_CONSTRAINT warning. This alerts you to cases where an IC is trivially satisfied because a relation is empty; it is not an error. For example:
// query
def minors(id) = person[:age](id, age) and age < 18 from age
ic minors_IC {
forall(id : minors(id) implies Int(person:name:first[id]))
}
Given the person relation data we installed above, minors is empty, so it is true that all minors have a first name of type Int — which is not what we might have intended to write. Hence, the warning: “warn: integrity constraint ‘minors_IC’ is (trivially) true when the relation minors is empty”.
This warning can also appear when we require a relation to be false that we do expect to be false (e.g. load errors after a CSV import). In these cases, it can be safely ignored.
### Disabling Integrity Constraints#
All integrity constraints can be disabled with the :disable_integrity_constraints control relation, by adding it to relconfig in an update transaction:
def insert[:relconfig] = {:disable_integrity_constraints}
After this, all ICs are ignored, including any previously installed ICs, and future ones. This feature should be used with care, but can be useful, for example, to improve performance if very expensive ICs are present.
Because we can update base relations for the duration of a read-only query, we can also use this control relation to disable ICs for a particular query, rather than permanently. For instance, this query will not fail, but any installed ICs will still be active afterwards, because the query is not an update:
// query
def insert[:relconfig] = {:disable_integrity_constraints}
def foo = 1; 2; 3
ic { count[foo] = 7 }
def output = foo
http://physics.stackexchange.com/questions/15282/quantum-entanglement-faster-than-speed-of-light?answertab=active
# Quantum entanglement faster than speed of light?
Recently I was watching a video on quantum computing where the narrator describes quantum entanglement information travelling faster than light!
Is it really possible for anything to move faster than light? Or is the narrator just wrong?
Regards,
I must unfortunately state that, at the present day, anything you read or hear in the popular media about quantum computing should be treated with deep suspicion. (I say this as someone who works in the field!) The problem is the media is absolutely full of total garbage about the subject, in part because of the existing culture surrounding popular presentations of QM (which is also largely garbage, with a few notable exceptions: e.g. Penrose, Hawking, and other such luminaries). If something said about QC sounds fantastic, then you should expect that it is close to being totally false! – Niel de Beaudrap Oct 1 '11 at 18:33
(I would like to add: models of quantum computing do have intriguing properties which surpass anything we know how to do with classical computers, and it's realistic to hope that we build them some day. However, they are not magical, nor paradoxical. Their properties are just bold extensions of the properties of classical computers, when you add one or two extra ingredients. Entanglement, for instance, is an exotic sort of correlation; but that's all that it is --- correlation of random results --- albeit one of a peculiar sort, which one could not even describe in "classical" probability.) – Niel de Beaudrap Oct 1 '11 at 18:39
@Niel: The problem with describing entanglement as probability correlation (although it is the direct quantum analog) is that correlation can be always interpreted as ignorance of hidden variables, while quantum entanglement has no local ignorance interpretation. – Ron Maimon Dec 13 '11 at 14:06
@Ron: I am not describing it as being a merely classical correlation, though. If we define "correlated" as just being "not independent", the fact that entanglement is a form of correlation immediately follows. The fact that there is no intuitive ignorance interpretation doesn't really affect this. – Niel de Beaudrap Dec 13 '11 at 20:08
Collapsing an entangled pair occurs instantaneously but can never be used to transmit information faster than light. If you have an entangled pair of particles, A and B, making a measurement on some entangled property of A will give you a random result and B will have the complementary result. The key point is that you have no control over the state of A, and once you make a measurement you lose entanglement. You can infer the state of B anywhere in the universe by noting that it must be complementary to A.
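To make the "random but complementary" behaviour concrete, here is a toy classical simulation; it deliberately ignores the genuinely quantum part (the Bell-inequality-violating statistics) and only illustrates why B's marginal results carry no signal:
import random

def measure_pair():
    # toy model of a maximally anticorrelated pair: A's outcome is
    # uniformly random, and B's outcome is the complement
    a = random.randint(0, 1)
    return a, 1 - a

# B alone sees a 50/50 stream of bits no matter what happens at A,
# so no message can be read off from B's results by themselves.
b_results = [measure_pair()[1] for _ in range(10000)]
print(sum(b_results) / len(b_results))  # close to 0.5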
The no-cloning theorem stops you from employing any sneaky tricks like making a bunch of copies of B and checking if they all have the same state or a mix of states, which would otherwise allow you to send information faster than light by choosing to collapse the entangled state or not.
On a personal note, it irks me when works of sci-fi invoke quantum entanglement for superluminal communication (incorrectly) and then ignore the potential consequences of implied causality violation...
So it means that whatever you do with quantum entanglement, the information flow is always bound by $c$. A change of state of spin (for example) travels at the speed of light to have an effect at the other end. – Vineet Menon Oct 2 '11 at 5:18
You cannot change the spin on one particle and get a corresponding change on the second particle. Niel de Beaudrap sums it up very well as a 'correlation of random results'. Once you make a measurement (i.e. interact meaningfully with one of the entangled particles) the entanglement is collapsed. – Richard Terrett Oct 2 '11 at 5:42
So is it that one cannot put entanglement to any use, since a single measurement destroys the entanglement itself? – Vineet Menon Oct 2 '11 at 6:50
Sure, you can use entanglement. You can use it for the same things that you can use correlated random results for: for example, you can use it to turn insecure public communication between distant parties, into secure private communication. And a number of other intriguing theoretical applications. Just not for instantaneous communication, or anything similar to it. – Niel de Beaudrap Oct 2 '11 at 9:24
@KonradRudolph - Both appear to be correct: the no-cloning theorem forbids superluminal communication through cloning of states. However this is 'sufficient but not necessary', as the article states, as the theorem does not say anything about other possible techniques that don't employ cloning of states. Perhaps I should have said 'any sneaky tricks (that employ cloning of states)' to be more clear. – Richard Terrett Feb 27 '12 at 3:05
There are many things that happen faster than the speed of light. For example, in the Big Bang at the beginning of the universe, the expansion of the Universe was faster than the speed of light. If you have studied Bell's theorem, it states, and experiment has confirmed, that nature itself is fundamentally nonlocal; the nonlocality takes the form of the instantaneous collapse of the wave function. Another example: if a bug flies across the beam of a movie projector, the speed of its shadow is proportional to the distance to the screen. In principle that distance can be as large as you like, so the shadow can move at an arbitrarily high velocity; the shadow of the bug moves across the screen at a velocity greater than c, provided the screen is far enough away. It's true. However, the shadow does not carry any energy or transmit any message. Another example is the ethereal influences in the EPR experiment. There are many such examples, but the important point is that nothing carries energy or a message from point A to point B.
I don't think this really clarifies anything. I appreciate that you are saying that nothing "physical" is transmitted at light-speed or faster (which is true). But you undermine yourself by talking about "ethereal influences" in the EPR experiment. Which is it: is there an actual influence, or not? – Niel de Beaudrap Oct 1 '11 at 18:48
There are ethereal influences. That's how the conservation of angular momentum is done. Yes there is an actual influence. – deepthought Oct 1 '11 at 19:36
The usual interpretation of Bell's inequality violations is that the universe is non-real (i.e. no hidden variables), not non-local. – user2963 Oct 1 '11 at 20:49
Even though I'm not a physicist, I don't agree with "deepthought". A change in the shadow is a change in the propagation of light, and does carry information; it is not possible for the change of light propagation to travel faster than light, as it travels at light speed.
Back to the OP: from my understanding of quantum entanglement, it is not a transfer of energy. You have no control whatsoever over the collapse of the wave function, so you can't transfer information either. It is akin to the following analogy. Suppose you and your friend have two boxes, each containing an apple, one green and one red, and you both know that for sure. Each takes a box and travels to a distant location; the moment one of you opens his box, he knows what is in the other box instantaneously. There is a slight difference between the two scenarios, though, since the interpretation of quantum mechanics says that the wave function collapses only at the experiment, while here the two apples had a definite state prior to the experiment. What happens at a deeper level, and how two distant related events happen instantaneously, I know nothing about.
The shadow can move faster than light speed; but the shadow is in no way controlled by the objects that it falls upon, but by the fly which is far away. The shadow carries information about the fly, which is propagating at light speed from the fly, not about the objects it falls upon. So the shadow can sweep across distances faster than light, but the shadow is not an information-bearing signal across those distances. – Niel de Beaudrap Oct 1 '11 at 18:52
-1: shadows can move faster than light, it is obvious. The shadow is an object which is inferred from the way light hits a surface, it is not a physical information carrier. – Ron Maimon Dec 13 '11 at 14:05
https://www.r-bloggers.com/2018/03/introducing-geofacet/
I released an R package over 9 months ago called geofacet, and have long promised a blog post about the approach. This is the first post in what I plan to be a series of two or three posts. In this post I’ll introduce what the package does and compare it to some other approaches for visualizing geographic data.
## geofacet
The geofacet package extends ggplot2 in a way that makes it easy to create geographically faceted visualizations in R. To geofacet is to take data representing different geographic entities and apply a visualization method to the data for each entity, with the resulting set of visualizations being laid out in a grid that mimics the original geographic topology as closely as possible.
This idea is probably easiest to explain with an example. The visualization below shows a bar chart of the ranking of U.S. states in six quality-of-life categories, where a state with a rank of 1 is doing the best in the category and a state with a rank of 51 the worst (Washington DC is included). The data comes from this article.
library(geofacet)
library(ggplot2)
ggplot(state_ranks, aes(variable, rank, fill = variable)) +
geom_col() +
coord_flip() +
theme_bw() +
facet_geo(~ state, grid = "us_state_grid2")
As can be seen, the U.S. states are arranged in a way that is faithful to the underlying geography, but each state gets equal space to have its data visualized in whatever way we might envision. Here, we use a bar chart to illustrate each of the 6 categories. States with very low rankings across most categories (HI, VT, CO, MN) stand out, and geographical trends, such as the southern states consistently showing up at the bottom of the rankings, stand out as well. Many interesting insights and questions come from spending some time looking at the plot.
There are many favorable aspects of this approach to visualizing geographic data. This article will talk about this approach in comparison to other approaches and will focus on methods rather than code. For a more technical introduction to the package and a full overview of how to use it, follow this link.
The “geofaceting” approach itself isn’t new. There are many examples in the wild of this idea being applied in an ad hoc manner (here are some examples at the Wall Street Journal and the Washington Post). People have done this in R as well (see here for example). In fact, the idea for this package came from a colleague of mine, J Hathaway, while we were working together at PNNL 4 or 5 years ago. He will be writing a post about how the idea came about which I’ll link to when it’s up.
What’s new about this R package is that it formalizes the “geofaceting” approach, gives it a name, and makes it available in a user-friendly way. Also, it provides the basis for creating a library of community-contributed grids, which can be used elsewhere outside the package. Another post in this series will be about different ways to make your own grids.
## Geofaceting vs. Other Approaches
There are many reasons why you might want to consider using geofacet vs. other approaches. Here I’ll describe a few alternative approaches. Note that geographical visualization is a well-explored area and the list of things I’m comparing to will not be exhaustive. If there is something major that I missed I’ll be happy to consider follow-up posts discussing those in comparison to geofaceting.
### Choropleth Maps
A choropleth map plots the raw geographic topology and colors each geographic entity according to the value of the variable being visualized.
For example, suppose we want to visualize the 2016 unemployment rate for each state in the United States:
It is quickly evident which states have the highest and lowest unemployment. However, based on color alone, it is difficult to make quantitative comparisons. For example, how much lower is the unemployment rate in Oregon (OR) than in Washington (WA)? Also, small states are more difficult to see, and the area of a state does not reflect its population, which might be important context for this plot. Compare Massachusetts (MA) and North Dakota (ND), for example.
These plots can be created with the choroplethr R package, although it does not seem to be quite up to date with the latest version of ggplot2 as of this writing. You can also create these plots on your own, for example with ggplot2 or plotly or leaflet.
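For instance, a minimal ggplot2 sketch of such a choropleth (assuming a data frame unemp with a lowercase state name column region, the convention used by map_data(), and a rate column; the maps and mapproj packages are needed for map_data() and coord_map()):
library(ggplot2)
library(dplyr)
states <- map_data("state")
# left_join preserves the polygon drawing order, unlike merge()
choro <- left_join(states, unemp, by = "region")
ggplot(choro, aes(long, lat, group = group, fill = rate)) +
  geom_polygon(color = "white") +
  coord_map() +
  scale_fill_viridis_c(name = "unemployment (%)")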
### Disadvantages of Choropleth Maps
• Only visualize one variable and one value per entity: A major deficiency of choropleth maps is that they can only display a single value of a single variable for each geographic entity. What if we want to look at how the unemployment rate has changed over time for each state, or compare the raw vs. seasonally-adjusted unemployment rate?
• Only use color for visual encoding: With choropleth maps, the data are visually encoded only with color, and color is one of the least effective ways to visually encode information. In the example above, the use of color is helpful for getting a general feel for regions of high and low unemployment, but it is very difficult to make quantitative judgements of how different the unemployment rate is between different states.
• Favor large geographic entities: A well-known issue with choropleth maps is that they visually favor geographically large regions over small regions. It is very hard to notice what the unemployment rate is in small regions like Rhode Island or Washington DC. There are many ways to deal with this problem and we’ll see a few below.
### Rectangular / Hex Tile Maps
To deal with the issue of choropleth maps favoring large geographic entities, we can translate the geographic topology into a rectangular or hexagonal grid, in the same way the geofacet package does, so that each geographic entity is represented by shapes of the same size. Rectangular / hex tile maps color the grid of rectangles or hexagons according to the value of a variable in the data. Some R packages that will create these plots include a recently-updated statebins package (see related post) and another one that makes more interactive plots but hasn’t been updated in a while, rcstatebin.
Below is a plot obtained from using statebins on the 2016 unemployment data:
Here, we can now see Washington D.C. much better, for example.
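A minimal sketch with the updated statebins package (assuming a data frame unemp_states with columns state and rate; the exact arguments may differ across package versions):
library(statebins)
statebins(unemp_states, state_col = "state", value_col = "rate") +
  theme_statebins()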
### Disadvantages of Rectangular / Hex Tile Maps
While hex / tile maps deal with the deficiency of choropleth maps that favor large geographic entities, they still suffer from the other two choropleth map disadvantages, namely only visualizing one variable and only using color to visually encode the information.
This NPR blog post provides a nice commentary for follow-up reading on rectangular and hex tile maps as well as choropleth maps.
### Faceted Choropleth or Tile Maps
One suggestion for using choropleth or “statebin” charts to visualize multiple values is by faceting on the variables instead of the geography. For example, a case of this approach is shown in a 2014 Washington Post article about state workforces that are threatened by trade. The change in share of workforce over three time periods is illustrated as three statebin charts.
A reproduction of their plot is shown below:
While this approach may have a good use case in certain circumstances, it is generally not very effective visually: it is already difficult to judge differences in value from color encoding within a single map, and it is even more difficult to judge differences in color when comparing across maps. Still, there are cases where this can be a useful approach.
### Cartograms
Instead of using color to encode the values of the data, cartograms use size. Cartograms enlarge or shrink each geographic entity based on the size of the related values of the variable being visualized.
For example, below is a screenshot for an interactive cartogram I created for a project I’m working on that displays the amount of different kinds of data that are available about countries in the world.
In this plot, countries that are large have more data available than those that are small. There is a lot of distortion, but hopefully it is evident that this is based on a map of the world.
### Disadvantages of Cartograms
• Too much warping: Maintaining geographic orientation becomes very difficult when things are really out of proportion. Above, without the help of tooltips, it would be extremely difficult to say what several of the countries are. Inside the interactive application represented by the above screenshot, animated transitions are provided between the original map and the cartogram, and this can alleviate the warping problem a little.
• Shapes are arbitrary: It is hard for the human to make comparisons of size based on arbitrary shapes. Ideally we would be using something like squares or rectangles if we wanted to be able to make comparisons of size across shapes.
• Doesn’t highlight both extremes: Often you want geographic entities with both very large and very small values to stand out on the same plot, which is difficult to do with cartograms as small values result in little to no space being used to represent the entity.
• Difficult to create: There's not very good support for creating cartograms in R, and outside of R it is difficult to find an easy-to-use tool that provides good results. There are a few R packages (cartogram, Rcartogram / getcartr, topogRam), but I've found most of them difficult to install and the results to not look the way I expect, perhaps due to user error. A minimal sketch with the cartogram package is shown after this list.
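Here is that sketch with the cartogram package (assuming world is an sf polygon object with a numeric pop column; cartogram_cont() requires a projected coordinate system, hence the reprojection):
library(cartogram)
library(sf)
world_proj <- st_transform(world, crs = 3857)
carto <- cartogram_cont(world_proj, weight = "pop", itermax = 5)
plot(st_geometry(carto))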
Another cartogram option that deals with the “arbitrary shapes” disadvantage is rectangular statistical cartograms.
### Tilegrams
Another interesting approach is the “Tiled cartogram”, or “tilegram”. Tilegrams use hexagons, but unlike hex tile maps, instead of using one hexagon to represent a geographic entity, multiple hexagons are used, with the number of hexagons representing the value of the variable being visualized.
Here is a screenshot taken directly from the “tilegramR” R package showing a tilegram of the 2016 US population by state.
Tilegrams are a nice option when wanting to visualize a single variable and when you care about using a larger area to represent larger values of the variable. A nice article about tilegrams can be found here.
Tilegrams have some of the same disadvantages as other approaches we have seen before, namely that you can only visualize one variable at a time and that it is difficult to make both large and small values of a variable clearly evident.
Tilegrams are also difficult to create. There is an R package for tilegrams, and you can read about using it here, but it only provides a way to plot pre-created tilegrams; you can't create your own. To actually create a tilegram you have to start from a base tilegram (there's just the US, Germany, and France), upload some data in a predetermined format that's not very well documented, and then still do manual manipulation of the result. So while the approach is generally good, the technology for creating tilegrams is not in a good state for quick exploratory analysis.
## So Why is Geofaceting Useful?
By looking at some of the alternatives, hopefully some of the advantages of geofaceting are clear. These include:
• We can plot multiple variables or values per geographic entity – you can plot practically anything you can imagine inside each panel.
• We can use more effective visual encoding schemes than just color.
• Each geographic entity is given the same amount of screen real estate (although this may not be desirable in all situations).
• Faceting is in general a powerful visualization technique. People familiar with my work know how big a fan I am of this approach.
For example, we can use geofacet to improve on the 2016 unemployment rate plots above by using a bar instead of color to denote the unemployment rate. We can even go further and visualize how the unemployment rate has varied over time for each state:
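A sketch of the code behind such a time-series version, using the state_unemp dataset that ships with geofacet (its columns are assumed here to be state, year and rate):
library(geofacet)
library(ggplot2)
ggplot(state_unemp, aes(year, rate)) +
  geom_line() +
  facet_geo(~ state, grid = "us_state_grid2", label = "name") +
  labs(y = "seasonally adjusted unemployment rate (%)") +
  theme_bw()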
## When is Geofaceting Not Useful?
There are some cases when geofaceting might not be useful:
• Sometimes the original geography has entities that are so irregularly organized and have such large size disparities that it is difficult to represent it as a regular grid.
• Sometimes exact preservation of entity boundaries and neighbors is essential.
• A geofacet grid is only meaningful if the person already has an understanding of the underlying original geography. One way to help with this issue in the future would be to have the option for the original geography to be plotted as a reference alongside the geofaceted plot.
• Geofaceting is only useful for data that represents values recorded for different geographic entities. For example, geofaceting is not appropriate for geographical data representing a spatial point process.
## Getting Started
If you have applications that might benefit from geofaceting, you can get started here!
## Next Post
In a future post, I’ll talk more about the community library of grids and show you how to make your own grids.
https://socratic.org/questions/how-do-you-solve-x-2-2x-3-0-2
# How do you solve x^2+2x-3>0?
$x^2 + 2x - 3 = 0 \to (x - 1)(x + 3) = 0 \to x = 1 \text{ or } x = -3$
These roots are the boundary points. For any $x$-value in between we get a negative $y$ (the parabola opens upward, so it is negative between its roots), so the answer must be:
$x < -3 \text{ or } x > 1$
https://jkcs.or.kr/journal/view.php?number=4880
Journal of the Korean Ceramic Society 1999;36(6): 571.
Evaluation of Radar Absorbing Coating Materials in the Microwave Frequencies
임종인 (Jong-In Im), 김찬욱 (Chan-Wook Kim), 오택수 (Taek-Soo Oh); Metal Coating Materials Research Team, Materials Processing Research Center, Research Institute of Industrial Science and Technology (RIST), Pohang
ABSTRACT
This paper describes both the electromagnetic properties and the electromagnetic-wave absorbing (EMA) characteristics of radar absorbing coating materials in the microwave frequencies. The coating materials are prepared by mixing ferrite powder with epoxy resin polymers. The electromagnetic properties and the EMA characteristics of the coating materials are measured in the frequency range between 0.2 GHz and 15 GHz using the S-parameter method. The results show that the typical coating material has $\mu_r'$ of about 2 and $\varepsilon_r'$ of about 4. The coating materials show weak EMA characteristics in the frequency range from 4 to 6 GHz and more improved EMA characteristics around 12 GHz. For improvement of the EMA characteristics of the coating materials, it is desirable to properly select the electromagnetic properties of the constituent materials in the frequency ranges of interest.
Key words: Coating material, Radar absorbing material, S-parameter, Electromagnetic property, Two-port method
https://math.answers.com/math-and-arithmetic/What_are_regular_fractions
# What are regular fractions?
Regular fractions are fractions whose numerator is less than the denominator, and irregular fractions are fractions whose denominator is less than the numerator. (These are more commonly called proper and improper fractions, respectively.)
https://labs.tib.eu/arxiv/?author=Hong-wei%20Zhao
• ### Analytical Solution to the Transient Beam Loading Effects of the Superconducting Cavity(1707.00043)
June 30, 2017 physics.acc-ph
Transient beam loading is one of the key issues in any superconducting accelerator and needs to be carefully investigated. The core problem in the analysis is to obtain the time evolution of the cavity voltage under transient beam loading. To simplify the problem, the second-order ordinary differential equation describing the behavior of the cavity voltage is commonly simplified to a first-order one with the aid of two critical approximations that lack a proof of their validity. In this paper, the validity is examined mathematically in some specific cases, resulting in a criterion for the simplification. It is popular to solve the approximated equation for the cavity voltage numerically, while this paper shows that it can also be solved analytically under a step-function approximation for the driving term. With the analytical solution for the cavity voltage, the transient power reflected from the cavity and the energy gain of the central particle in the bunch can also be calculated analytically. The validity of the step-function approximation for the driving term is examined by direct evaluation. After that, the analytical results are compared with the numerical ones.
• ### Direct mass measurements of neutron-rich $^{86}$Kr projectile fragments and the persistence of neutron magic number $N$ = 32 in Sc isotopes(1603.06404)
March 21, 2016 nucl-ex
In this paper, we present direct mass measurements of neutron-rich $^{86}$Kr projectile fragments conducted at the HIRFL-CSR facility in Lanzhou by employing the Isochronous Mass Spectrometry (IMS) method. The new mass excesses of the $^{52-54}$Sc nuclides are determined to be -40492(82), -38928(114), and -34654(540) keV, which show a significant increase in binding energy compared to the values reported in the Atomic Mass Evaluation 2012 (AME12). In particular, $^{53}$Sc and $^{54}$Sc are more bound by 0.8 MeV and 1.0 MeV, respectively. The behavior of the two-neutron separation energy with neutron number indicates a strong sub-shell closure at neutron number $N$ = 32 in Sc isotopes.
• ### Design Study on Medium beta SC Half-Wave Resonator at IMP(1510.01836)
Oct. 7, 2015 physics.acc-ph
A superconducting half-wave resonator has been designed with the frequency of 325 MHz and beta of 0.51. Different geometry parameters and shapes of inner conductors (racetrack, ring-shape and elliptical-shape) were optimized to decrease the peak electromagnetic fields to obtain higher accelerating gradients and minimize the dissipated power on the cavity walls. To suppress the operation frequency shift caused by the helium pressure fluctuations and maximize the tuning ranges, the frequency shifts and mechanical properties were studied on the electric and magnetic areas separately. At the end, the helium vessel was also designed to keep the mechanical structure as robust as possible. The fabrication and test of the prototype will be completed in the beginning of 2016.
• ### Study of magnetic alloy cores for HIRFL-CSRm compressor cavity(1012.0897)
Dec. 4, 2010 physics.acc-ph
To select the proper magnetic alloy (MA) material to load the RF compression cavity, measurements of MA cores produced by the Liyuan Company have been carried out at IMP. We measured 4 kinds of MA core materials: types V1, V2, A1 and A2, focusing mainly on the permeability, quality factor (Q value) and shunt impedance of the cores. MA cores with higher permeability, lower Q value and higher shunt impedance will be selected to load the RF compression cavity. According to the measurement results, the type V1, V2 and A2 materials were chosen as candidates to load the RF cavity.
https://proofwiki.org/wiki/Definition:Separated_by_Function
Definition:Separated by Function
Definition
Let $\left({X, \vartheta}\right)$ be a topological space.
Let $A, B \subseteq X$.
Then $A$ and $B$ are separated by function iff there exists an Urysohn function for $A$ and $B$.
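For reference, an Urysohn function for $A$ and $B$ is a continuous mapping $f: X \to \left[{0, 1}\right]$ such that $f \left({a}\right) = 0$ for all $a \in A$ and $f \left({b}\right) = 1$ for all $b \in B$. Thus $A$ and $B$ are separated by function precisely when there is a continuous $\left[{0, 1}\right]$-valued function that is constantly $0$ on $A$ and constantly $1$ on $B$.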
$A$ and $B$ may well be singleton sets $A = \left\{{a}\right\}, B = \left\{{b}\right\}$.
In this case $a$ and $b$ are separated by function iff $A$ and $B$ are separated by function.
https://physics.stackexchange.com/questions/439777/symmetry-breaking-with-kinetic-term-dominating-the-potential-on-a-lagrangian
# Symmetry breaking with Kinetic term dominating the Potential on a Lagrangian
Suppose a Lagrangian $$L$$ for a scalar field $$\phi$$, consists of a kinetic term and a Mexican-hat type potential. I am aware that if the vacuum has symmetry $$H \subset G$$ while the Lagrangian has symmetry $$G \subset O(n)$$, there will be symmetry breaking. By Goldstone's theorem there would appear dim $$G$$ - dim $$H$$ massless spinless bosons in a perturbation around the vacuum.
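For concreteness, a standard Lagrangian of this type (written here purely as an illustration, not taken from a specific model) is $$L = \partial_\mu \phi^\dagger \partial^\mu \phi - \lambda \left(\phi^\dagger \phi - v^2\right)^2$$ with $$\lambda, v > 0$$; its minima form the orbit $$\phi^\dagger \phi = v^2$$, on which $$G$$ breaks down to $$H$$.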
When the kinetic term dominates over the potential, meaning the norm of the derivatives of our field is larger than the value of the potential, does it still make sense to expand our field around the vacuum to extract information about the spectrum of particles of the theory? I guess the expansion has to be done on a stationary point of the Lagrangian with the same energy as our field and therefore not. In this case, how can one explain the Higgs mechanism?
https://socratic.org/questions/how-do-you-solve-the-following-linear-system-y-3x-8-y-2x-3
# How do you solve the following linear system: y = - 3x + 8, y = 2x + 3 ?
$y = -3x + 8 \qquad \ldots (i)$
$y = 2x + 3 \qquad \ldots (ii)$
Since the L.H.S. of $(i)$ and $(ii)$ are equal to the same variable $y$, the R.H.S. of $(i)$ and $(ii)$ must also be equal:
$\implies 2x + 3 = -3x + 8$
$\implies 5x = 5$
$\implies x = 1$
Put $x = 1$ in $(ii)$:
$\implies y = 2 \cdot 1 + 3$
$\implies y = 5$
So the solution of the system is $(x, y) = (1, 5)$.
https://codereview.stackexchange.com/questions/137870/pixel-manipulation-performance
# Pixel manipulation performance
The algorithm applies two rules to every pixel (it draws a simple diagonal drop shadow):
1. If either x or y is 0, ignore the pixel.
2. If the current pixel is completely transparent and the pixel at its top left corner is not, set the current pixel's alpha value to 0.1.
This algorithm runs from top to bottom, left to right. Here's the relevant part, where rendered is a canvas that already contain rendered content.
function getIndex(x, y){
return (y * width + x) * 4;
}
function isClear(x, y){
var index = getIndex(x, y);
return data[index + 3] === 0;
}
// the declaration of this predicate was lost in extraction; "isShadow" is a stand-in name
function isShadow(x, y){
if(x < 1 || y < 1) return false;
if(isClear(x, y) && !isClear(x - 1, y - 1)) return true;
return false;
}
var width = rendered.width, height = rendered.height,
imgData = renderedCtx.getImageData(0, 0, width, height),
data = imgData.data;
for(var x = 0; x < width; x++){
for(var y = 0; y < height; y++){
// the guard around the assignment was lost in extraction; rule 2 above implies it
if(isShadow(x, y)){
data[getIndex(x, y) + 3] = 256 * 0.1;
}
}
}
renderedCtx.putImageData(imgData, 0, 0);
As you can see, this function performs per pixel manipulation and is quite slow in performance. What is a good way to improve performance, probably without checking each pixel independently?
• Also, the shadow is infinite, right? So you can simply continue it diagonally to the bottom-right until an occupied pixel is encountered. Start in the bottom left corner and work the diagonals until the top right corner is reached. And again, inline everything, increment the index directly without multiplication. – wOxxOm Aug 4 '16 at 23:43
• I'm confused on what you mean by "Start in the bottom left corner and work the diagonals until the top right corner is reached". Are you suggesting to treat each diagonal as one line? How does that enable it to skip checking for certain pixels? – Derek 朕會功夫 Aug 4 '16 at 23:46
• Ah, I see what you mean. Going to test that out. – Derek 朕會功夫 Aug 5 '16 at 0:05
• @wOxxOm Is there an easy and fast way to iterate through the 2D array diagonally? It sounds good in theory but it seems difficult to implement. – Derek 朕會功夫 Aug 5 '16 at 1:36
• Do you have a jsFiddle or something to test? Am curious to try a web worker mini-library to see if it helps cases like this. – Sergio Aug 5 '16 at 11:54
Use diagonals. The idea is to have only one pixel check, compared to the two you have now:
1. find the first occupied pixel on a diagonal
2. skip past the run of occupied pixels
3. fill with 0.1 while unoccupied
4. repeat 1-3 until out of bounds
Considering everything should be inlined, the code would be like this:
var x, y, index, index0,
len = width * height * 4,
stride = width * 4,
strideDia = stride + 4; // distance to the sibling pixel towards bottom-right
// walk diagonals from the bottom-left corner to the top-left corner
for (x = 0, y = height, index0 = (y * width + x)*4 + 3; --y >= 0; ) {
index = index0 -= stride;
while (index < len && x < width) {
// find an occupied pixel
while (index < len && x < width && !data[index])
index += strideDia, x++;
while (index < len && x < width && data[index])
index += strideDia, x++;
// fill with 0.1 all unoccupied
while (index < len && x < width && !data[index]) {
data[index] = 25; // 0.1 * 256
index += strideDia;
x++;
}
}
x = 0;
}
// walk diagonals from the top-left corner to the top-right corner
// (0,0) is skipped as it was processed in the previous walk block
for (x = 0, y = 0, index0 = (y * width + x)*4 + 3; ++x < width; ) {
index = index0 += 4;
var x0 = x;
while (index < len && x < width) {
// find an occupied pixel
while (index < len && x < width && !data[index])
index += strideDia, x++;
while (index < len && x < width && data[index])
index += strideDia, x++;
// fill with 0.1 all unoccupied
while (index < len && x < width && !data[index]) {
data[index] = 25; // 0.1 * 256
index += strideDia;
x++;
}
}
x = x0;
}
Another potentially promising idea would be to use asm.js.
• So I tested the code in your answer but it probably went into an infinite loop since it froze my page... – Derek 朕會功夫 Aug 6 '16 at 5:06
• I've tested it again and indeed it's a little bit faster :) – Derek 朕會功夫 Aug 6 '16 at 6:16
• I've inlined everything in my original code (and reordered the loops, not sure if cache misses is a thing in JS) and amazingly it reduced the running time quite significantly. – Derek 朕會功夫 Aug 7 '16 at 1:08
• It'd be reasonable if you post your code and accept it as a solution. – wOxxOm Aug 7 '16 at 5:04
• I have posted my code however your answer will stay as marked as solution. – Derek 朕會功夫 Aug 8 '16 at 17:16
This is my attempt of optimizing the performance of the loop:
var width = rendered.width, height = rendered.height,
imgData = renderedCtx.getImageData(0, 0, width, height),
data = new Uint8Array(imgData.data.buffer),
arrayWidth = width * 4, arrayHeight = height * 4;
for(var y = 0; y < arrayHeight; y+=4){
for(var x = 0; x < arrayWidth; x+=4){
// x and y are pre-scaled by 4, so y * width + x is already the byte offset of
// the pixel; + 3 is its alpha channel, and (y - 4) * width + x - 1 is the
// alpha byte of the top-left neighbour
if(x > 0 && y > 0 && !data[y * width + x + 3] && data[(y - 4) * width + x - 1]){
data[y * width + x + 3] = 25; // ~0.1 * 256
}
}
}
renderedCtx.putImageData(imgData, 0, 0);
I eliminated all function calls, since calling a separate function requires more operations, and the loops are also flipped so that the inner loop walks memory sequentially, avoiding possible cache misses. The result is that the loop's running time is greatly decreased, making getImageData the most expensive operation in this function.
http://alidoc.cern.ch/AliRoot/v5-08-13q-p5/_r_e_a_d_m_e_mchview.html
AliRoot Core d69033e (d69033e)
MUON Tracker visualisation program
A visualisation program, mchview, is now available to display the tracker chambers in two dimensions (3D visualisation being done within the EVE framework).
mchview should be installed together with the rest of AliRoot. One point should be noted though: mchview uses one external file to run properly, a resource file $HOME/.mchviewrc, used to configure the program and keep some history of the user's interaction with it (e.g. the list of recent data sources used). If you install a new version of mchview, it is recommended to delete this file first, before launching mchview again.
# Navigating the display
When starting mchview (and after the padstore.root file has been created for the first time), you'll be presented with two tabs. The first one allows you to navigate within the detector, and the second one to select data sources to be displayed.
The first tab offers a global view of the 10 tracking chambers. On the right you'll see a color palette (used in the display of data sources, see later). On the top (from left to right) are navigation buttons (backward and forward), and radio buttons to select the kind of view you'd like: you can view things for a given cathode or for a given plane. Note that in some instances those buttons may be inactive.
On the bottom are three groups of radio buttons, labelled "responder", "outline" and "plot", followed by data source buttons (see later). Each group will contain a number of buttons corresponding to different view levels of the detector (e.g. detection element, manu, buspatch, etc.). In mchview jargon, what is displayed on screen is called a "painter". The meaning of responder, outline and plot is as follows:
• responder (only one selected at a time) is the type of painter that responds to mouse events. When you mouse over responder painters, they'll be highlighted (bold yellow line around them), and some information about them will be displayed in the top right corner. If you click on a responder painter, a new view will open to show only this painter. If you meta-click (button 2 on a mouse, or alt-click on a Mac for instance) on a responder painter, a new view will open, showing the clicked painter and its "dual" (the other cathode or other plane, depending on how the first one was defined).
• outline (multiple selection possible) indicates which painter(s) should be outlined. When starting the program, only manus are outlined, but you can outline the detection elements, the chambers, etc.
• plot (see later about plotting and data sources) indicates at which level you want to see the data.
On the bottom left is a group button used to select which data source should be displayed (empty until you select a data source, see next section). Next to it will be a list of buttons to select exactly what to plot from a data source (once you've selected one).
Note that whenever you click on a painter and get a new view, you can use the navigation buttons (top left) to go forward and backward, as in a web browser. Note also that the mchview menu bar contains a "History" menu where you can see (and pick) all the views that were opened.
Even before selecting something to plot, at this stage you could use the program to familiarize yourself with the detector structure (aka mapping).
# Specifying the data source
The second tab of mchview allows you to select one or several data sources. Each data source, in turn, will provide one or more "things" to be plotted; the number of "things" actually depends on the data source. For instance, a raw data source will allow plotting the mean and the sigma of the pad charges.
Be warned that this part of the program is likely to evolve, as you'll for sure notice that the interface is quite crude for the moment.
From top to bottom, you'll see groups of frames used to:
• select from a list of recently used sources
• select a raw data source (either by typing in its full pathname, or opening a file dialog). The second line in this group is to specify that you want to calibrate the data: check one of the calibrate buttons, and specify the location of the OCDB to be used. If that field is not empty (and the corresponding entry is correct, of course), the raw data will be calibrated. The last line in that group is a single check button, to instruct the program to produce histograms of the data (see Histogramming).
• select an OCDB data source (pedestals)
In all the frames, once you've selected or entered the needed information, you'll click on the "Create data source" button, and a new data source line will appear in the bottom of that tab (and also in the first tab, where that data source will now be selectable for plotting). Each data source line indicates the short name of the data source, the full path, and a list of buttons to run, stop, rewind and remove. Run/Stop/Rewind is only selectable for data sources where the notion of event means something (e.g. for pedestals it won't).
The short name of the data source is as follows:
• RAW# : raw data for run #
• RAW(#) : raw data for a simulated run (where the run number is always 0, so # here is the number of such data sources opened at the same time)
• HRAW# (or HRAW(#)) : as above, but with histogramming turned on
• (H)CALZ# (or (H)CALZ(#)) : as above, but for data where pedestal subtraction has been done (and no gain correction whatsoever)
Note that all the file paths can be local ones or alien ones, if you have a correctly installed alien, and you use a short wrapper to call the mchview program. For instance:
alias mchl $HOME/mchview.alien
where mchview.alien is a little script:
#!/bin/sh
test=`alien-token-info | grep -c expired`
if [ $test -gt 0 ]; then
  echo "Token expired. Getting a new token"
  alien-token-destroy
  alien-token-init
elif [ ! -e /tmp/gclient_env_$UID ]; then
  echo "Getting a token"
  alien-token-init
fi
if [ ! -e /tmp/gclient_env_$UID ]; then
  echo "No token. Exiting"
  exit
fi
source /tmp/gclient_env_$UID
mchview $*
# Histogramming
Starting at version 0.9 of the mchview program, you can now produce histograms of the raw ADC values while running over the data. For this you have to check the "histogram" button when creating the data source. Please note that turning on histogramming will slow down the data reading a bit.
Histograms produced by the program are as compact as possible in order to fit in memory (so they are not plain TH1 objects). Plain TH1 objects are produced later on (on request only), and should be deleted as soon as possible (you have to realize that 1 million TH1s of 4096 channels have no chance of fitting in memory...).
Access to the histograms can be done through the GUI, using a right click on any painter. For extra flexibility, you can also use the ROOT prompt (of the mchview program itself): first get the data object, and then ask the data object to create the histogram(s) you want. Remember to delete those histograms as soon as you no longer need them:
AliMUONPainterRegistry* reg = AliMUONPainterRegistry::Instance();
reg->Print();
AliMUONVTrackerData* data = reg->FindDataSource("HRAW(1)");
TH1* h = data->CreateChannelHisto(707, 1025, 63);
h->Draw();
delete h;
h = data->CreateManuHisto(707, 1025);
And so on: you can get histograms for all levels (except PCB): channel, manu, bus patch, detection element, chamber. See the AliMUONVTrackerData documentation for the methods.
# Saving and printing
From the File menu of the mchview application, you can use the SaveAs and PrintAs popups to respectively save the current data sources (meaning you can quit the program and start again with the same filled data sources, without having to rerun on the source) and print the current display. Printing needs a little bit of polishing (e.g. getting a nice and descriptive title would help a lot), but it's better than nothing. Note that the mchview application now has a --use option to reload a previously saved .root file (same effect as using the File/Open menu).
# Resource file format
The resource file $HOME/.mchviewrc is a normal ROOT resource file (see TEnv), i.e. a list of "Key: value(s)" lines.
You should avoid editing it by hand, as most of it is handled by the mchview program itself.
For information, the keys defined so far are:
disableAutoPedCanvas: 1
Use this one to disable the feature that will open automatically 4 canvases each time you open a data source of type "Pedestals".
defaultRange: PED;Mean;0;500|PED;Sigma;0;10|OCC;occ;0;0.01
Use this one to define default ranges for some data sources and their dimensions. In the example above all the pedestals data source will get a display ranging from 0 to 500 for the mean value and 0 to 10 for the sigma value; the occupancy data source (occ dimension) will range from 0 to 0.01. Those defaults are normally set using, from mchview program (painter master frame tab), using the "Set as default" button, located below the color palette.
In addition, the NumberOfDataSources and DataSource.# keys (where # ranges from 0 to NumberOfDataSources-1) are used to describe the recently opened sources. Those ones should not be edited by hand unless you really know what you are doing.
https://docs.pennylane.ai/projects/rigetti/en/latest/devices/wavefunction.html
# The Wavefunction device
The rigetti.wavefunction device provides an interface between PennyLane and the Forest SDK wavefunction simulator. Because the wavefunction simulator allows access and manipulation of the underlying quantum state vector, rigetti.wavefunction is able to support the full suite of PennyLane and Quil quantum operations and observables.
In addition, it is generally faster than running equivalent simulations on the QVM, as the final state can be inspected and the expectation value calculated analytically, rather than by sampling measurements.
Note
By default, rigetti.wavefunction is initialized with shots=0, indicating that the exact analytic expectation value is to be returned.
If the number of trials or shots provided to the rigetti.wavefunction is instead non-zero, a spectral decomposition is performed and a Bernoulli distribution is constructed and sampled. This allows the rigetti.wavefunction device to ‘approximate’ the effect of sampling the expectation value.
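For instance, instantiating the device with a finite number of shots looks like this (the shot count here is arbitrary):
import pennylane as qml
# expectation values are now estimated from samples rather than computed exactly
dev_sampled = qml.device('rigetti.wavefunction', wires=2, shots=1000)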
## Usage
You can instantiate the device in PennyLane as follows:
import pennylane as qml
dev_wfun = qml.device('rigetti.wavefunction', wires=2)
This device can then be used just like other devices for the definition and evaluation of QNodes within PennyLane.
A simple quantum function that returns the expectation value and variance of a measurement and depends on three classical input parameters would look like:
@qml.qnode(dev_wfun)
def circuit(x, y, z):
    qml.RZ(z, wires=[0])
    qml.RY(y, wires=[0])
    qml.RX(x, wires=[0])
    qml.CNOT(wires=[0, 1])
    return qml.expval(qml.PauliZ(0)), qml.var(qml.PauliZ(1))
You can then execute the circuit like any other function to get the quantum mechanical expectation value and variance:
>>> circuit(0.2, 0.1, 0.3)
array([0.97517033, 0.04904283])
## Supported operations¶
All devices support all PennyLane operations and observables, with the exception of the PennyLane QubitStateVector state preparation operation.
## QVM and quilc server configuration¶
Note
If using the downloadable Rigetti SDK with the default server configurations for the QVM and the Quil compiler (i.e., you launch them with the commands qvm -S and quilc -R), then no special configuration is needed. If using a non-default port or host for either of the servers, see the pyQuil configuration documentation for details on how to override the default values.
https://mathematica.stackexchange.com/questions/277041/returning-an-interpolatingfunction
# Returning an InterpolatingFunction
I am trying to write a function using Block that generates an InterpolatingFunction and then generates a second InterpolatingFunction that is a function of the first. I can return and use the first function but not the second. Below is a minimal version, with the negative sign standing in for a more general function. The result is that I can return values and plot f1 but not f2. How can one modify this to return a usable f2?
{f1, f2} =
Block[{dat1, dat2}, dat1 = ListInterpolation[{1, 2, 3, 4}];
dat2 = -dat1;
{dat1, dat2}];
{f1[1], f2[1]}
Plot[{f1[t], f2[t]}, {t, 1, 4}]
• Maybe dat2 = - dat1[#] &? Dec 7, 2022 at 18:12
• I just tried this, and that did not work, alas. (It returns -dat1[t] for f2[t].) Dec 7, 2022 at 18:20
• Ah, then try dat2 = Evaluate[- dat1[#]] &. Dec 8, 2022 at 14:36
• Thanks. This does work. An advantage of the Reinterpolation method below is that one ends up with a simpler InterpolatingFunction object that, for complicated functions, may be faster to evaluate. Dec 8, 2022 at 21:02
• Simpler, but perhaps less accurate? Dec 8, 2022 at 23:52
## 2 Answers
Based on this old question of mine, I wrote a function for this task in general, called Reinterpolation:
Reinterpolation::usage="Reinterpolation[f] reinterpolates a function containing one or more InterpolatingFunctions.";
Reinterpolation[f_,opts___?OptionQ]:=Module[{
(* options *)
interpolationopts,interpolationpoints,
(* other variables *)
xmin,xmax,ifs,grid,tmp},
(* handle options *)
interpolationopts=FilterRules[Flatten[{opts,Options[Reinterpolation]}],Options[Interpolation]];
interpolationpoints=Evaluate[InterpolationPoints/.Flatten[{opts,Options[Reinterpolation]}]];
ifs=Cases[f,_InterpolatingFunction,{0,\[Infinity]}];
If[ifs=={},Return[f]];
If[interpolationpoints===Automatic,
grid=Union[Flatten[Through[ifs["Grid"]],1]],
{xmin,xmax}=ifs[[1,1,1]];
grid=Table[x,{x,xmin,xmax,(xmax-xmin)/(interpolationpoints-1)}];
];
Quiet[
tmp=Interpolation[Table[{Sequence@@val,f/.(if_InterpolatingFunction->if[Sequence@@val])},{val,grid}],Evaluate[Sequence@@interpolationopts]]
,{InterpolatingFunction::dmval}];
tmp[[1]]=ifs[[1,1]]; (* fix domain *)
Return[tmp]
];
Options[Reinterpolation]={InterpolationPoints->Automatic};
It works applied to your problem:
{f1, f2} =
Block[{dat1, dat2}, dat1 = ListInterpolation[{1, 2, 3, 4}];
dat2 = Reinterpolation[-dat1];
{dat1, dat2}];
{f1[1], f2[1]}
Plot[{f1[t], f2[t]}, {t, 1, 4}]
(* {1, -1} *)
I'd of course be interested in any improvements folks could suggest.
Addition 1:
OP @JohnBechhoefer asked whether this function could be modified to allow the explicit use of the independent variable. This seems to work already:
f1 = ListInterpolation[{1, 2, 3, 4}];
f2 = Reinterpolation[Piecewise[{{f1, t < 2.5}, {-f1, t >= 2.5}}]];
Plot[{f1[t], f2[t]}, {t, 1, 4}]
This was surprising, because I didn't try to build that functionality in. To see what's going on, we can look inside f2:
f2["ValuesOnGrid"]
An unintended happy side-effect! Is this sufficient for you? Otherwise it might be possible to calculate it for each point in the new InterpolatingFunction, if you have an example where this fails.
• Thanks. Yes, it does work, but I think the simpler solution above will be enough. If I follow a bit what you are doing, it is choosing different points to base an interpolation on. I don't think that will be necessary in my application. (But we'll see....) Dec 7, 2022 at 19:42
• Yes, in my original application I wanted to transform the output of NDSolve, so I use the same points as in the original InterpolatingFunction. Dec 7, 2022 at 19:58
• As mentioned above, I realized that a point about your function is that it can condense a complicated function of an InterpolatingFunction into a single object, which should save time when subsequently using that function (in my case, as a "drive" in a system of ODEs in NDSolve). But one extension that would make it even more useful would be the ability to deal with Piecewise functions. This would require generalizing your routine to allow the "time" variable (implicit in my example) to appear explicitly. Have you thought about that case? Dec 8, 2022 at 21:16
• @JohnBechhoefer See "addition 1" above. Dec 8, 2022 at 23:09
• Thanks! But when I try this with the original syntax, it does not work: {f1, f2} = Block[{dat1, dat2}, dat1 = ListInterpolation[{1, 2, 3, 4}]; dat2 = Reinterpolation[Piecewise[{{f1, t < 2.5}, {-f1, t >= 2.5}}]]; {dat1, dat2}]; Plot[{f1[t], f2[t]}, {t, 1, 4}] gives, for f2, just a line up to 2.5 and then nothing. [Sorry, I don't know how to include images and proper formatting in a comment.] Dec 9, 2022 at 6:32
Try
{f1, f2} = Block[{dat1, dat2}, dat1 = ListInterpolation[{1, 2, 3, 4}];
dat2 = Apply[Function, { {u}, -dat1[u]}];
{dat1, dat2}];
{f1[1], f2[1]}(* {1,-1} *)
Plot[{f1[t], f2[t]}, {t, 1, 4}]
• Could you comment as to why this works when the original does not? I am not quite getting the issue. Also, I may have slightly oversimplified, as I would like to replace the minus-sign function with an arbitrary function computed elsewhere. The following modification does not work: myfunc[x_] := -x; {f1, f2} = Block[{dat1, dat2, t}, dat1 = ListInterpolation[{1, 2, 3, 4}]; dat2 = Apply[Function, {{dat1}, myfunc[dat1]}]; {dat1, dat2}]; Plot[{f1[t], f2[t]}, {t, 1, 4}] Dec 7, 2022 at 18:53
• Using dat2 = Apply[Function, {{t}, myfunc[dat1[t]]}]; instead works, so this does count as an answer to my question. Many thanks! Still, I'd be grateful to have a better understanding of what is going on. Why does it have to be {t} and not simply t, for example? Dec 7, 2022 at 19:26
• Naming of the function argument t or u makes no difference! Apply is often useful in applying a function to an argument list. Dec 7, 2022 at 21:18
https://www.gradesaver.com/textbooks/math/algebra/algebra-and-trigonometry-10th-edition/chapter-1-1-3-modeling-with-linear-equations-1-3-exercises-page-98/44
## Algebra and Trigonometry 10th Edition
Least score required on the fourth test: $187$
Maximum score I can get: $3\times100+200=500$.
To get an A, I need $90\%$ of $500$: $0.9\times500=450$.
My scores on the first three tests: $87, 92, 84$. Total: $87+92+84=263$.
Least score required on the fourth test: $450-263=187$.
http://www.encyclopediaofmath.org/index.php/Undetermined_coefficients,_method_of
# Undetermined coefficients, method of
The determination of an unknown function in the form of an exact or approximate linear combination (finite or infinite) of known functions. This linear combination is taken with unknown coefficients, which are determined in one way or another from the conditions of the problem in question. As a rule, one obtains a system of algebraic equations for them.
A classic example of the method of undetermined coefficients is its use in the expansion of a regular rational function in a complex or real domain in elementary fractions. Let $P(z)$ and $Q(z)$ be algebraic polynomials with complex coefficients, where the degree of $P$ is less than the degree $n$ of $Q$ and the coefficient of the highest term of $Q$ is 1; let $z_j$ be a root of $Q$ of multiplicity $\alpha_j$, $j=1,\dots,k$, $\alpha_1+\dots+\alpha_k=n$, so that

$$Q(z)=(z-z_1)^{\alpha_1}\cdots(z-z_k)^{\alpha_k}.$$

The regular rational function $P(z)/Q(z)$ can be represented uniquely in the form

$$\frac{P(z)}{Q(z)}=\sum_{j=1}^{k}\sum_{s=1}^{\alpha_j}\frac{A_{js}}{(z-z_j)^{s}},\qquad (1)$$

where the $A_{js}$ are yet unknown complex numbers (altogether $n$). To find them, both parts of the equality are brought to a common denominator. After this, by disregarding inessential terms and reduction of similar terms on the right-hand side, one obtains an equality on each side of which there stand polynomials of degree at most $n-1$: on the left-hand side with known coefficients, on the right-hand side in the form of linear combinations of the unknown numbers $A_{js}$. By equating coefficients at equal powers of $z$, one obtains a system of linear equations in the $A_{js}$ which, owing to the existence and uniqueness of the expansion (1), has a unique solution. Occasionally it is convenient to use a somewhat different device for finding the coefficients $A_{js}$. For example, suppose that all roots of $Q$ are simple, so that (1) takes the form

$$\frac{P(z)}{Q(z)}=\sum_{j=1}^{n}\frac{A_j}{z-z_j}.$$

After bringing the two sides to a common denominator and reduction of similar terms, one obtains the equality

$$P(z)=\sum_{j=1}^{n}A_j\prod_{i\ne j}(z-z_i).$$

When one sets in it in succession $z=z_1,\dots,z=z_n$, one readily obtains

$$A_j=\frac{P(z_j)}{\prod_{i\ne j}(z_j-z_i)}=\frac{P(z_j)}{Q'(z_j)},\qquad j=1,\dots,n.$$

In the general case it is useful to combine these two devices for finding the coefficients $A_{js}$. A worked check with a computer-algebra system is sketched below.
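As an illustration (not part of the original article), the simple-roots formula can be checked with SymPy's partial-fraction routine; the rational function below is an arbitrary example:

# Expand (3z + 1)/((z - 1)(z + 2)); by the formula above,
# A_1 = P(1)/Q'(1) = 4/3 and A_2 = P(-2)/Q'(-2) = 5/3.
from sympy import symbols, apart

z = symbols('z')
expr = (3*z + 1) / ((z - 1)*(z + 2))
print(apart(expr, z))  # 4/(3*(z - 1)) + 5/(3*(z + 2))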
Let $P(x)$ and $Q(x)$ be polynomials with real coefficients,

$$Q(x)=(x-x_1)^{\alpha_1}\cdots(x-x_r)^{\alpha_r}(x^2+p_1x+q_1)^{\beta_1}\cdots(x^2+p_sx+q_s)^{\beta_s},$$

where $x_1,\dots,x_r$ are the real roots of $Q$ of multiplicities $\alpha_1,\dots,\alpha_r$, and the quadratic trinomial $x^2+p_jx+q_j$ with real coefficients $p_j$ and $q_j$ is the product $(x-\zeta_j)(x-\bar{\zeta}_j)$, where $\zeta_j$ is a complex root of multiplicity $\beta_j$, $j=1,\dots,s$, of $Q$.

Then for the regular rational function $P(x)/Q(x)$ there is one and only one expansion of the form

$$\frac{P(x)}{Q(x)}=\sum_{j=1}^{r}\sum_{k=1}^{\alpha_j}\frac{A_{jk}}{(x-x_j)^{k}}+\sum_{j=1}^{s}\sum_{k=1}^{\beta_j}\frac{B_{jk}x+C_{jk}}{(x^2+p_jx+q_j)^{k}},\qquad (2)$$

where the coefficients $A_{jk}$, $B_{jk}$ and $C_{jk}$ are real numbers. The method for finding them is the same as in the complex case described above: the equality (2) is brought to a common denominator, inessential terms are disregarded, and after collecting similar terms the coefficients at equal powers of $x$ on both sides are equated. As a result one obtains a system of equations in the unknowns $A_{jk}$, $B_{jk}$, $C_{jk}$, which has a unique solution.

The expansion of regular rational functions into elementary ones is applied, for example, to find their Laurent series (in particular, their Taylor series), and to integrate them. The method of undetermined coefficients is also used to integrate rational functions by means of the Ostrogradski method, and to integrate functions of the form $P(x)/\sqrt{ax^2+bx+c}$, where $P$ is a polynomial. In this case the integral becomes

$$\int\frac{P(x)}{\sqrt{ax^2+bx+c}}\,dx=P_1(x)\sqrt{ax^2+bx+c}+\lambda\int\frac{dx}{\sqrt{ax^2+bx+c}},\qquad (3)$$

where the degree of the polynomial $P_1$ is one less than that of $P$. To find the coefficients of $P_1$ and the number $\lambda$ one differentiates (3). Then one brings both sides to a common denominator, disregards inessential terms, collects similar terms, and equates coefficients at equal powers of $x$. As a result one obtains again a system of linear equations with a unique solution. Similar methods of integration can be applied in certain other cases.
The method of undetermined coefficients is applied in finding solutions of (ordinary and partial) differential equations in the form of power series. For this purpose, in a neighbourhood of the point in question a power series with undetermined coefficients is substituted in the given equation. Sometimes one obtains as a result relations between the coefficients of the series from which by means of given initial or boundary conditions one succeeds in finding these coefficients and, consequently, a solution of the equation in the form of a series. For example, when one solves the hypergeometric equation in this manner, one can obtain an expansion in a series of hypergeometric functions (cf. Hypergeometric function).
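To make the power-series device concrete, here is a small sketch (not from the article) that determines the coefficients of a truncated series solution of $y'=y$, $y(0)=1$ by equating coefficients at equal powers of $x$:

# Power-series ansatz y = a0 + a1*x + ... + a5*x^5 substituted into y' - y = 0
from sympy import symbols, solve

x = symbols('x')
a = symbols('a0:6')
y = sum(a[k]*x**k for k in range(6))
resid = (y.diff(x) - y).expand()
# coefficients of x^0..x^4 must vanish; the initial condition fixes a0
eqs = [resid.coeff(x, k) for k in range(5)] + [y.subs(x, 0) - 1]
sol = solve(eqs, a)
print(y.subs(sol))  # 1 + x + x**2/2 + x**3/6 + x**4/24 + x**5/120

The recovered coefficients are exactly the Taylor coefficients of $e^x$, as expected.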
The method of undetermined coefficients is also applied in other ways when solving differential equations, for example, the Galerkin method, the Ritz method and the Trefftz method; it is also used in numerical methods: in Krylov's method for obtaining the coefficients of the secular equation, and in the approximate solution of integral equations.
https://lucatrevisan.wordpress.com/tag/polynomial-hierarchy/
# CS245 Lecture 6 – Karp-Lipton
In this lecture we prove the Karp-Lipton theorem that if all NP problems have polynomial size circuits then the polynomial hierarchy collapses. A nice application is a theorem of Kannan, showing that, for every ${k}$, there are languages in ${\Sigma_2}$ requiring circuits of size ${\Omega(n^k)}$. The next result we wish to prove is that all approximate combinatorial counting problems can be solved within the polynomial hierarchy. Before introducing counting problems and the hashing techniques that will yield this result, we prove the Valiant-Vazirani theorem that solving SAT on instances with exactly one satisfying assignment is as hard as solving SAT in general.
# CS254 Lecture 5 – The Polynomial Hierarchy
Last revised 4/29/2010
In this lecture, we first continue to talk about polynomial hierarchy. Then we prove the Gács-Sipser-Lautemann theorem that BPP is contained in the second level of the hierarchy.
# CS254 Lecture 4 – The Polynomial Hierarchy
Today we show how to reduce the error probability of probabilistic algorithms, prove Adleman’s theorem that polynomial time probabilistic algorithms can be simulated by polynomial size circuits, and we give the definition of the polynomial hierarchy
https://www.lmfdb.org/EllipticCurve/Q/479808ni/
# Properties
Label: 479808ni
Number of curves: $6$
Conductor: $479808$
CM: no
Rank: $1$
# Related objects
sage: E = EllipticCurve("479808.ni1")
sage: E.isogeny_class()
## Elliptic curves in class 479808ni
sage: E.isogeny_class().curves
| LMFDB label | Cremona label | Weierstrass coefficients | Torsion structure | Modular degree | Optimality |
|---|---|---|---|---|---|
| 479808.ni5 | 479808ni1 | [0, 0, 0, -1982567244, -33934881170288] | [2] | 283115520 | $$\Gamma_0(N)$$-optimal* |
| 479808.ni4 | 479808ni2 | [0, 0, 0, -2560594764, -12533759058800] | [2, 2] | 566231040 | $$\Gamma_0(N)$$-optimal* |
| 479808.ni2 | 479808ni3 | [0, 0, 0, -24245658444, 1443184565779600] | [2, 2] | 1132462080 | $$\Gamma_0(N)$$-optimal* |
| 479808.ni6 | 479808ni4 | [0, 0, 0, 9876028596, -98580268761968] | [2] | 1132462080 | |
| 479808.ni1 | 479808ni5 | [0, 0, 0, -387193879884, 92734389270274192] | [2] | 2264924160 | $$\Gamma_0(N)$$-optimal* |
| 479808.ni3 | 479808ni6 | [0, 0, 0, -8258455884, 3317952650942608] | [2] | 2264924160 | |
*optimality has not been proved rigorously for conductors over 400000. In this case the optimal curve is certainly one of the 4 curves highlighted, and conditionally curve 479808ni1.
## Rank
sage: E.rank()
The elliptic curves in class 479808ni have rank $$1$$.
## Modular form 479808.2.a.ni
sage: E.q_eigenform(10)
$$q + 2q^{5} - 4q^{11} - 2q^{13} + q^{17} - 4q^{19} + O(q^{20})$$
## Isogeny matrix
sage: E.isogeny_class().matrix()
The $$i,j$$ entry is the smallest degree of a cyclic isogeny between the $$i$$-th and $$j$$-th curve in the isogeny class, in the Cremona numbering.
$$\left(\begin{array}{rrrrrr} 1 & 2 & 4 & 4 & 8 & 8 \\ 2 & 1 & 2 & 2 & 4 & 4 \\ 4 & 2 & 1 & 4 & 2 & 2 \\ 4 & 2 & 4 & 1 & 8 & 8 \\ 8 & 4 & 2 & 8 & 1 & 4 \\ 8 & 4 & 2 & 8 & 4 & 1 \end{array}\right)$$
## Isogeny graph
sage: E.isogeny_graph().plot(edge_labels=True)
The vertices are labelled with Cremona labels.
https://support.bioconductor.org/u/7355/
## User: meeta.mistry
Reputation: 20
Status: New User
Location: United States
Last seen: 2 months ago
Joined: 3 years, 6 months ago
Email: m***********@gmail.com
#### Posts by meeta.mistry
... Hi, I am trying to use mogsa to analyze ATAC -seq and RNA-seq data on the same samples. In the vignette example, the data matrices are multiple microarray data. If I use ATAC-seq data, what should I use as input? I was thinking a count matrix for a set of consensus peaks across all samples. However ...
written 9 weeks ago by meeta.mistry20 • updated 8 weeks ago by mengchen180
... Hi Johannes, Thank you for your quick reply! Both of those alternatives are good to know and very helpful since I use this package often for cross-database annotations. Best, Meeta ...
written 7 months ago by meeta.mistry20
... Hello, I encountered a problem when mapping Ensembl genes to Entrez IDs and was wondering if there was a way around this. For a list of Ensembl gene IDs I used the select function to return to me gene symbols and Entrez IDs. common_genes <- select(EnsDb.Mmusculus.v79, keys=common, co ...
written 7 months ago by meeta.mistry20
Comment: C: ChIPQC failed on chrM
... Hello I get this same error at the start of my run. Was there a solution to this? My sessionInfo is attached below but also I get a message when I load the library, not sure if this has anything to do with it: No methods found in "RSQLite" for requests: dbGetQuery Bam file has 93 contigs Error ...
written 13 months ago by meeta.mistry20
Comment: C: ChIPQC report missing figures
... Hi Tom, Adding the Tissue and Condition columns appears to have solved the problem. Thanks very much! Best, Meeta ...
written 14 months ago by meeta.mistry20
... Hi, I am trying to run ChIPQC on my data but for my final report I get blank plots for CrossCoverage and the Signal profile. I don't get any errors when creating the Chip object nor do I get errors when generating the report so it's hard to troubleshoot. Here is a link to my report using only chr ...
written 14 months ago by meeta.mistry20 • updated 14 months ago by Thomas Carroll390
... Thanks for the help. My design is setup like the latter with the different patients as separate factor levels. ...
written 2.2 years ago by meeta.mistry20
... Ok will do, thanks! ...
written 2.2 years ago by meeta.mistry20
... I just realized what you meant. Age is already being controlled for within each pair since the samples for treatment and control come from the same individual. Should have seen that, sorry, Thanks! ...
written 2.2 years ago by meeta.mistry20
... Sorry, that code is incorrect, it should read: # Setup design matrix age <- pData(data.norm)$Age treat <- pData(data.norm)$Treatment sample <- pData(data.norm)$Patient design <- model.matrix(~ sample + age + treat) # Fit model fit <- lmFit(exprs(data.norm), design) ...
written 2.2 years ago by meeta.mistry20
#### Latest awards to meeta.mistry
Popular Question 2.2 years ago, created a question with more than 1,000 views. For DESeq2 with high dispersions
Popular Question 2.9 years ago, created a question with more than 1,000 views. For Problems getting Bioc 3.1 after upgrade to R_3.2.0
https://mattermodeling.stackexchange.com/tags/mathematical-modeling/new
# Tag Info
## New answers tagged mathematical-modeling
### How to apply FIRE to many atoms where P = F · v seems to be a vector rather than a scalar?
As Susi noted, for your 2D case (as a subset of the more general 3D case) this dot product is assumed to be using the velocity/force written as $2N$ dimensional vectors rather than $(2,N)$ arrays. ...
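As a concrete illustration of that flattening (a sketch, not from the answer; the shapes and data are made up):

# FIRE power P = F · v as a scalar: flatten the (2, N) arrays into
# 2N-dimensional vectors before taking the dot product.
import numpy as np

rng = np.random.default_rng(0)
F = rng.normal(size=(2, 5))  # forces on 5 atoms in 2D
v = rng.normal(size=(2, 5))  # velocities
P = float(F.ravel() @ v.ravel())  # equivalently np.vdot(F, v)
print(P)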
### How to apply FIRE to many atoms where P = F · v seems to be a vector rather than a scalar?
Your code is wrong. Your excerpt of the FIRE paper states "a simple mixing of the global (3Natoms dimensional) velocity and force vectors", meaning $\mathbf{F}$ and $\mathbf{v}$ are both ...
### Do the cc/pc/def2 basis sets mathematically converge to the CBS limit, assuming exact CI/DFT?
To supplement Nike's answer, I would like to point out that the situation is even worse when you look at heavier atoms. While Gaussians do a pretty good job for the first periods of the periodic table,...
Accepted
### Do the cc/pc/def2 basis sets mathematically converge to the CBS limit, assuming exact CI/DFT?
"Assuming exact CI/DFT, can the sequences of the results of an ever increasing cardinal number of the cc/pc/def2 basis sets mathematically shown to converge exactly to the CBS limit, according to ...
https://myriverside.sd43.bc.ca/jaydenb2016/tag/burtonpcalc11/
## Week 16 – Precalculus 11
This week was super cool! We had SFU come in to create a video starring us, and our Sr Girls Soccer team went to Provincials 🙂 During my time at provincials I missed two lessons of math… So, I decided to do my blog post on what we learned on one of the days I was absent!! Enjoy.
So if we started with the equation:
The first thing we would do is find a common denominator, like what we have been doing thus far.
Once we have found our common denominator we need to multiply it on both sides to create zero pairs:
Once I wrote out my common denominator I then cancelled out like terms, then I wrote out the new equations:
From there, I began by distributing. After that, I moved like terms to each side of the equation to make it easier to solve:
Here is the movement to either side of the equation…
Then, I combined the like terms on both sides of the equation and solved to get one term on each side…
From there I divided what was common and what could get my variable by itself.
Which then resulted in getting the final answer….
## Week 13 – Precalculus 11
Step by step Reciprocal Functions! Enjoy.
## Week 9 – Precalculus 11
This week felt like a lot of review. Whether it was review for our chapter test tomorrow or our midterm on Thursday, it was packed with previous learning. At the start of the week we focused on working 'between' different equations and changing them from one form into another. The three forms we covered are General Form, Factored Form and Standard / Vertex Form. We can interchange these formulas to give us different pieces we need for graphing. Therefore, if you were given an equation in General Form and you were asked to change it to Vertex Form, you would use the method of completing the square. Say the equation was $y=-4x^2-24x-7$. First, you would factor -4 out of the $x$ terms, then you would re-write your equation as $y=-4(x^2+6x+blank-blank)-7$. In order to find the numbers that go in the blanks you need to take half of the middle coefficient and square it (the middle coefficient being 6). That gives you 9, and you put the 9 in the 2 blank spots. Your equation will now be $y=-4(x^2+6x+9-9)-7$. Next you create your binomial square and multiply the -9 by -4 to move it outside the brackets. Your new equation will now be $y=-4(x+3)^2+36-7$. The next step is to subtract the 7 from 36, leaving you with 29. Your final equation is $y=-4(x+3)^2+29$. With this new equation we are now able to find the vertex of the parabola, whether it is positive or negative (opens up or down), and whether it is congruent to the 1,3,5 pattern used when graphing.
Here is the same equation demonstrated on paper.
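In symbols, the steps above are:

$y=-4x^2-24x-7=-4(x^2+6x+9-9)-7=-4(x+3)^2+36-7=-4(x+3)^2+29$

so the vertex is $(-3, 29)$ and the parabola opens down.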
## Week 8 – Precalculus 11
First week back from Spring Break :/ … Here is my video on Graphing Quadratics!!
https://vaibhavkarve.github.io/boolean_satisfiability.html
# Boolean Satisfiability notes Home
These are my notes from Donald Knuth's The Art of Computer Programming: Satisfiability, Volume 4, Fascicle 6.
## 1 Introduction
An efficient algorithm for SAT would decide a CNF on $$N$$ variables in $$N^{O(1)}$$ steps. It is possible (and D.K. believes so) that SAT is in $$P$$ but the algorithm is "unknowable", meaning we might only get an existence proof of such an algorithm.
D.K. defines a literal to be either a variable or its complement. Two literals are distinct if $$l_1 \neq l_2$$. They are strictly distinct if $$|l_1| \neq |l_2|$$.
### 1.1 SAT is equivalent to a covering problem
def Tₙ (n : ℕ) : CNF := {{1, ¬1}, {2, ¬2}, ..., {n, ¬n}}
variable (S : CNF)
def is_sat S : Prop :=
  let n := number_of_variables S in
  ∃ L : set Literal,
    -- (1) L has size n
    L.size = n
    -- (2) every clause has a literal that is in L.
    ∧ ∀ (c : Clause) ∈ S, ∃ l ∈ L, l ∈ c
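The same covering formulation can also be brute-forced directly; here is a small Python sketch (mine, not from the notes; exponential by design):

# A CNF S over variables 1..n is satisfiable iff some set L containing
# exactly one of {v, -v} per variable intersects every clause.
from itertools import product

def is_sat(S, n):
    for choice in product(*[(v, -v) for v in range(1, n + 1)]):
        L = set(choice)
        if all(L & set(c) for c in S):
            return True
    return False

print(is_sat([{1, 2}, {-1, 2}], n=2))  # True: x2 = true satisfies both clauses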
http://content.myhometuition.com/2018/06/07/13-2-1-graphs-of-functions-pt3-focus-practice-2/
# 13.2.2 Graphs of Functions, PT3 Focus Practice
Question 4:
Use graph paper to answer this question.
Table below shows the values of two variables, x and y, of a function.
| x | –3 | –2 | –1 | 0 | 1 | 2 | 3 |
|---|---|---|---|---|---|---|---|
| y | –19 | –3 | 1 | –1 | –3 | 1 | 17 |
The x-axis and the y-axis are provided on the graph paper on the answer space.
(a) By using a scale of 2 cm to 5 units, complete and label the y-axis.
(b) Based on the table above, plot the points on the graph paper.
(c) Hence, draw the graph of the function.
Answer:
Solution:
Question 5:
Use graph paper to answer this question.
Table below shows the values of two variables, x and y, of a function.
| x | –4 | –3 | –2 | –1 | 0 | 1 | 2 |
|---|---|---|---|---|---|---|---|
| y | 31 | 17 | 7 | 1 | –1 | 1 | 7 |
The x-axis and the y-axis are provided on the graph paper on the answer space.
(a) By using a scale of 2 cm to 5 units, complete and label the y-axis.
(b) Based on the table above, plot the points on the graph paper.
(c) Hence, draw the graph of the function.
Solution:
Question 6:
(a) Complete the table below in the answer space for the equation L = x^2 + 5x by writing the value of L when x = 2.
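For reference, substituting x = 2 into L = x^2 + 5x gives L = 4 + 10 = 14.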
(b) Use graph paper to answer this part of the question. You may use a flexible curve rule.
By using a scale of 2 cm to 1 unit on the x-axis and 2 cm to 5 units on the y-axis, draw the graph of L = x^2 + 5x for 0 ≤ x ≤ 4.
https://me.gateoverflow.in/596/gate2016-2-10
# GATE2016-2-10
A single degree of freedom mass-spring-viscous damper system with mass $m$, spring constant $k$ and viscous damping coefficient $q$ is critically damped. The correct relation among $m$, $k$, and $q$ is
1. $q=\sqrt{2km} \\$
2. $q=2\sqrt{km} \\$
3. $q=\sqrt{\dfrac{2k}{m}} \\$
4. $q=2\sqrt{\dfrac{k}{m}}$
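For reference, critical damping of $m\ddot{x} + q\dot{x} + kx = 0$ corresponds to a repeated root of the characteristic equation $ms^2 + qs + k = 0$, i.e. a vanishing discriminant:

$$q^2 - 4km = 0 \quad\Longrightarrow\quad q = 2\sqrt{km},$$

which is option 2.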
## Related questions
The system shown in the figure consists of block A of mass $5$ $kg$ connected to a spring through a massless rope passing over pulley B of radius $r$ and mass $20$ $kg$. The spring constant $k$ is $1500$ $N/m$. If there is no slipping of the rope over the pulley, the natural frequency of the system is_____________ $rad/s$.
The rod $AB$, of length $1$ $m$, shown in the figure is connected to two sliders at each end through pins. The sliders can slide along $QP$ and $QR$. If the velocity $V_A$ of the slider at $A$ is $2$ $m/s$, the velocity of the midpoint of the rod at this instant is ___________ $m/s$.
A mass of $2000$ $kg$ is currently being lowered at a velocity of $2$ $m/s$ from the drum as shown in the figure. The mass moment of inertia of the drum is $150$ $kg$-$m^2$. On applying the brake, the mass is brought to rest in a distance of $0.5$ $m$. The energy absorbed by the brake (in $kJ$) is __________
The forces $F_1$ and $F_2$ in a brake band and the direction of rotation of the drum are as shown in the figure. The coefficient of friction is $0.25$. The angle of wrap is $3\pi /2$ radians. It is given that $R$ = $1$ $m$ and $F_2$ = $1$ $N$. The torque (in $N$-$m$) exerted on the drum is _________
https://zbmath.org/authors/?q=ai%3Aarsenin.vasiliy-y
# zbMATH — the first resource for mathematics
## Arsenin, Vasiliy Y.
Author ID: arsenin.vasiliy-y
Published as: Arsenin, V.; Arsenin, Vasiliy Y.
Documents Indexed: 2 Publications since 1940, including 1 Book
#### Co-Authors
0 single-authored 1 Tikhonov, Andreĭ Nikolaevich
#### Fields
1 Partial differential equations (35-XX) 1 Numerical analysis (65-XX)
#### Cited by 1,558 Authors
13 Pillonetto, Gianluigi 12 Hasanoǧlu, Alemdar 11 Hon, Yiu-Chung 11 Xiong, Xiangtuan 9 Wei, Ting 8 Pourgholi, Reza 7 Loli Piccolomini, Elena 7 Sanguineti, Marcello 6 Gnecco, Giorgio 6 Groetsch, Charles W. 6 Klibanov, Michael V. 6 Landi, Germana 6 Lv, Xiaoguang 6 Mas, André 6 Scherzer, Otmar 6 Wang, Yanfei 5 Chen, Tianshi 5 Cheng, Jin 5 de Campos Velho, Haroldo Fraga 5 De Nicolao, Giuseppe 5 Golubev, Yuriĭ K. 5 Hofmann, Bernd 5 Liu, Jun 5 Ljung, Lennart 5 Masouri, Zahra 5 Nguyen Huy Tuan 5 Qin, Haihua 5 Trong, Dang Duc 4 Ascher, Uri M. 4 Cawley, Gavin C. 4 Chen, Weidong 4 Clempner, Julio B. 4 Dahmani, Abdelnasser 4 Engl, Heinz W. 4 Fu, Chuli 4 Hatamzadeh-Varmazyar, Saeed 4 He, Guoqiang 4 Kreinovich, Vladik Yakovlevich 4 Maleknejad, Khosrow 4 Nair, M. Thamban 4 Shen, Lixin 4 Shidfar, Abdollah 4 Yang, Chingyu 4 Zama, Fabiana 3 Averbuch, Amir Z. 3 Baranger, Thouraya Nouri 3 Beck, Amir 3 Beilina, Larisa 3 Bertozzi, Andrea Louise 3 Bocharov, Gennady A. 3 Cheng, Hao 3 Cherkaev, Elena 3 Chung, Julianne M. 3 Cimetière, Alain 3 Delvare, Franck 3 Freeden, Willi 3 Galybin, Alexander N. 3 Guerri, Luciano 3 Haltmeier, Markus 3 Hanke-Bourgeois, Martin 3 Hegland, Markus 3 Huang, Ting-Zhu 3 Iakovidis, Ilias 3 Iqbal, Muhammad Asad 3 Jin, Qinian 3 Kabanikhin, Sergeĭ Igorevich 3 Karniadakis, George Em 3 Lei, Jing 3 Lesnic, Daniel 3 Li, Ming 3 Li, Ming 3 Liu, Shi 3 Liu, Zhenhai 3 Lou, Yifei 3 Maksimov, Vyacheslav Ivanovich 3 Mangasarian, Olvi L. 3 Mukanova, Balgaisha 3 Nagy, James Gerard 3 Neubauer, Andreas 3 Ng, Michael Kwok-Po 3 Ngo Van Hoa 3 Ou, Yunhua 3 Pektas, Burhan 3 Pereverzev, Sergei V. 3 Poggio, Tomaso A. 3 Poznyak, Aleksandr Semënovich 3 Quan, Pham Hoang 3 Raissi, Maziar 3 Ramos, Fernando Manuel 3 Sheela, Suresh Mandir 3 Singh, Arindama 3 Talbot, Nicola L. C. 3 Tautenhahn, Ulrich 3 Turco, Emilio 3 Vapnik, Vladimir Naumovich 3 Wang, Linjun 3 Xu, Yuesheng 3 Yildiz, Bunyamin 3 Zabaras, Nicholas J. 3 Zhang, Dali ...and 1,458 more Authors
#### Cited in 234 Serials
54 Applied Mathematics and Computation 45 Journal of Computational and Applied Mathematics 31 Journal of Computational Physics 28 Engineering Analysis with Boundary Elements 26 Automatica 22 Applied Numerical Mathematics 18 Computers & Mathematics with Applications 18 Applied Mathematical Modelling 17 Computer Methods in Applied Mechanics and Engineering 14 Journal of Mathematical Analysis and Applications 14 Numerical Functional Analysis and Optimization 12 Mathematical Biosciences 12 Computational Mathematics and Mathematical Physics 12 Journal of Mathematical Imaging and Vision 11 Journal of Optimization Theory and Applications 11 Neural Networks 9 International Journal of Heat and Mass Transfer 9 Mathematics of Computation 9 Journal of Scientific Computing 8 Applicable Analysis 8 Mathematical and Computer Modelling 8 Pattern Recognition 8 Journal of Inverse and Ill-Posed Problems 7 Journal of Integral Equations and Applications 7 Linear Algebra and its Applications 7 Computational Optimization and Applications 7 Inverse Problems and Imaging 6 Zhurnal Vychislitel’noĭ Matematiki i Matematicheskoĭ Fiziki 6 Journal of Approximation Theory 6 Numerische Mathematik 6 Applied Mathematics Letters 6 Neural Computation 6 Numerical Algorithms 6 Computational Statistics and Data Analysis 6 Mathematical Problems in Engineering 6 Optimization Methods & Software 5 Computer Physics Communications 5 Inverse Problems 5 Journal of Statistical Planning and Inference 5 Journal of Complexity 5 Journal of Global Optimization 5 Communications in Statistics. Theory and Methods 5 Journal of Computer and Systems Sciences International 5 Applied and Computational Harmonic Analysis 5 Journal of Mathematical Sciences (New York) 5 Doklady Mathematics 5 Computational Geosciences 4 Journal of Mathematical Biology 4 The Annals of Statistics 4 Information Sciences 4 Journal of Differential Equations 4 Journal of Multivariate Analysis 4 Mathematics and Computers in Simulation 4 Nonlinear Analysis. Theory, Methods & Applications. Series A: Theory and Methods 4 Acta Mathematicae Applicatae Sinica. English Series 4 Signal Processing 4 Machine Learning 4 Computational and Applied Mathematics 4 Journal of Mathematical Chemistry 4 Journal of Shanghai University 4 Communications in Nonlinear Science and Numerical Simulation 4 Vestnik Samarskogo Gosudarstvennogo Tekhnicheskogo Universiteta. Seriya Fiziko-Matematicheskie Nauki 3 Biological Cybernetics 3 Journal of Fluid Mechanics 3 Wave Motion 3 Applied Mathematics and Optimization 3 BIT 3 Calcolo 3 Computing 3 Fuzzy Sets and Systems 3 Circuits, Systems, and Signal Processing 3 Applied Mathematics and Mechanics. (English Edition) 3 Acta Applicandae Mathematicae 3 Physica D 3 Computational Mechanics 3 SIAM Journal on Matrix Analysis and Applications 3 Mathematical Programming. Series A. Series B 3 Advances in Computational Mathematics 3 Mathematics and Mechanics of Solids 3 Electronic Journal of Statistics 3 SIAM Journal on Imaging Sciences 3 Science China. Information Sciences 2 Acta Mechanica 2 International Journal of Engineering Science 2 Journal of Engineering Mathematics 2 Journal of the Franklin Institute 2 Physics Letters. A 2 Annals of the Institute of Statistical Mathematics 2 Integral Equations and Operator Theory 2 International Journal for Numerical Methods in Engineering 2 Journal of Econometrics 2 SIAM Journal on Control and Optimization 2 Transactions of the American Mathematical Society 2 Optimal Control Applications & Methods 2 Systems & Control Letters 2 Zeitschrift für Analysis und ihre Anwendungen 2 Statistics & Probability Letters 2 M$$^3$$AS. Mathematical Models & Methods in Applied Sciences 2 Automation and Remote Control 2 European Journal of Operational Research ...and 134 more Serials
#### Cited in 44 Fields
453 Numerical analysis (65-XX) 167 Partial differential equations (35-XX) 114 Computer science (68-XX) 99 Statistics (62-XX) 85 Integral equations (45-XX) 85 Information and communication theory, circuits (94-XX) 75 Operator theory (47-XX) 75 Operations research, mathematical programming (90-XX) 68 Calculus of variations and optimal control; optimization (49-XX) 64 Systems theory; control (93-XX) 60 Mechanics of deformable solids (74-XX) 52 Biology and other natural sciences (92-XX) 47 Fluid mechanics (76-XX) 38 Classical thermodynamics, heat transfer (80-XX) 34 Optics, electromagnetic theory (78-XX) 34 Geophysics (86-XX) 24 Ordinary differential equations (34-XX) 16 Functional analysis (46-XX) 14 Harmonic analysis on Euclidean spaces (42-XX) 14 Probability theory and stochastic processes (60-XX) 13 Approximations and expansions (41-XX) 13 Integral transforms, operational calculus (44-XX) 12 Linear and multilinear algebra; matrix theory (15-XX) 12 Game theory, economics, finance, and other social and behavioral sciences (91-XX) 10 Statistical mechanics, structure of matter (82-XX) 8 Quantum theory (81-XX) 4 Functions of a complex variable (30-XX) 4 Dynamical systems and ergodic theory (37-XX) 4 Differential geometry (53-XX) 3 Real functions (26-XX) 3 Potential theory (31-XX) 3 Special functions (33-XX) 3 Mechanics of particles and systems (70-XX) 2 Mathematical logic and foundations (03-XX) 2 Global analysis, analysis on manifolds (58-XX) 2 Relativity and gravitational theory (83-XX) 2 Astronomy and astrophysics (85-XX) 1 General and overarching topics; collections (00-XX) 1 History and biography (01-XX) 1 Nonassociative rings and algebras (17-XX) 1 Measure and integration (28-XX) 1 Several complex variables and analytic spaces (32-XX) 1 Difference and functional equations (39-XX) 1 General topology (54-XX)
https://math.stackexchange.com/questions/521220/maximize-trace-as-standard-semidefinite-optimization
# Maximize Trace as standard semidefinite optimization
Let $A$ be a symmetric matrix and $X$ a symmetric positive definite matrix, then the following standard semidefinite optimization problem is convex:
min $tr (AX)$ subject to $X>0$
Now I wonder if $tr ((-A)X)$ is convex or concave?
The reason is the following. I want to solve: max $tr (AX)$ subject to $X>0$
Can that be written as: min $tr ((-A)X)$ subject to $X>0$ ? Is that convex and hence solvable?
• Any linear function of $X$ is convex. – littleO Oct 11 '13 at 11:02
It is still convex. Note that for semi-definite optimization, the requirement is that the objective function and constraints (except the semi-definite constraint) are linear in terms of the entries of $\mathbf{X}$.
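Reading the constraint $X > 0$ as the semidefinite-cone constraint, the minimization can be handed to any SDP-capable modelling tool; here is a minimal CVXPY sketch (the matrix $A$ is an arbitrary example):

# min tr(AX) s.t. X positive semidefinite; linear objective, so convex
import cvxpy as cp
import numpy as np

A = np.array([[1.0, 0.5], [0.5, 2.0]])
X = cp.Variable((2, 2), PSD=True)
prob = cp.Problem(cp.Minimize(cp.trace(A @ X)))
prob.solve()
print(prob.value)  # ~0 here: A is positive definite, so X = 0 is optimal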
https://uniteasy.com/formula/formulas/
Cotangent of the sum of two angles
The cotangent of the sum of two angles equals the product of the cotangents of the angles minus 1, divided by the sum of the cotangents of the two angles:

$$\cot(\alpha+\beta) = \frac{\cot\alpha\cot\beta - 1}{\cot\alpha + \cot\beta}$$
Triple angle identity for cotangent
The triple angle identity for the cotangent function is used to represent the cotangent of a triple angle in terms of the cotangent of one-third of that angle. More specifically, the cotangent of a triple angle equals the difference between three times the cotangent of one-third of the triple angle and the cube of that cotangent, divided by 1 minus three times the square of that cotangent:

$$\cot 3\theta = \frac{3\cot\theta - \cot^{3}\theta}{1 - 3\cot^{2}\theta}$$
Triple angle identity for tangent
The triple angle identity for the tangent function is used to represent the tangent of a triple angle in terms of the tangent of one-third of that angle. More specifically, the tangent of a triple angle equals the difference between three times the tangent of one-third of the triple angle and the cube of that tangent, divided by 1 minus three times the square of that tangent:

$$\tan 3\theta = \frac{3\tan\theta - \tan^{3}\theta}{1 - 3\tan^{2}\theta}$$
Triple angle identity for cosine function
The triple angle identity for the cosine function is used to represent the cosine of a triple angle in terms of the cosine of one-third of that angle. More specifically, the cosine of a triple angle equals four times the cube of the cosine of one-third of the triple angle minus three times that cosine:

$$\cos 3\theta = 4\cos^{3}\theta - 3\cos\theta$$

The triple angle identity for the cosine function is often used to simplify trigonometric expressions or to solve specific cubic equations.

Taking the cosine of one-third of the triple angle as a variable, a given value of the cosine of the triple angle yields three solutions, in accordance with de Moivre's formula. The same approach finds the solutions of cubic equations that match the form of the triple angle identity.
Triple angle identity for sine function
The triple angle identity for the sine function is used to represent the sine of a triple angle in terms of the sine of one-third of that angle. More specifically, the sine of a triple angle equals three times the sine of one-third of the triple angle minus four times the cube of that sine:

$$\sin 3\theta = 3\sin\theta - 4\sin^{3}\theta$$

The triple angle identity for the sine function is often used to simplify trigonometric expressions or to solve specific cubic equations.

Given the sine of a triple angle, there are three solutions for the sine of one-third of the triple angle, in accordance with de Moivre's formula. The same approach finds the solutions of cubic equations that match the form of the triple angle identity.
Quotient Rule for Derivatives
The Quotient Rule says that the derivative of a quotient is the denominator times the derivative of the numerator minus the numerator times the derivative of the denominator, all divided by the square of the denominator:

$$\left(\frac{f}{g}\right)' = \frac{g\,f' - f\,g'}{g^{2}}$$
https://tex.stackexchange.com/questions/390763/pgfplots-how-to-get-a-legend-of-fill-area
# pgfplots: How to get a legend of fill area?
I want to make a bar plot and put the average, with an error band, behind it. So far I have it working, except I can't get the filled area to show up in the legend. Here is my working example:
\documentclass{article}
\usepackage{pgfplots}
\usepgfplotslibrary{units,
fillbetween}
\begin{document}
\begin{tikzpicture}
\begin{axis}[
ymajorgrids,
legend style={at={(0.5,-0.2)},
anchor=north,legend columns=-1},
ylabel={Si pro Zelle},
ymin=0,
ytick={0,2,...,16},
symbolic x coords={0, 5 Tage, 6 Tage, 7 Tage, 8 Tage, 1},
xtick=data,
x tick label style={rotate=45,
anchor=east},
]
\addplot [ybar, error bars/.cd, y dir=both, y explicit,
]
table[x=x,y=y,y error=error,col sep=comma] {test.csv};
\addplot [sharp plot,update limits=false,
] coordinates { (0,7.524) (1,7.524) }
node [above] at (6 Tage,7.524) {Average};
\addplot [transparent,name path=B,sharp plot, update limits=false,
] coordinates {(0,5.474) (1,5.474) };
\addplot [transparent,name path=A,sharp plot, update limits=false,
] coordinates { (0,9.574) (1,9.574) };
\addplot [red!10!white,area legend] fill between [
of=A and B];
\end{axis}
\end{tikzpicture}
test
\end{document}
I have tried the package manual and google but didn't find something. Does anyone have an idea how to get the fill area/color to show up in the legend?
I have two more minor issues:
1. Is there a more elegant way to let the average line start at the axis, not outside the plot?
2. How can I change only the color of the error bars?
\begin{filecontents*}{test.csv}
x, y, error
5 Tage, 4.031, 0.457
6 Tage, 6.205, 0.065
7 Tage, 14.275, 0.869
8 Tage, 5.585, 0.229
\end{filecontents*}
• Please add filecontents and the settings that you use to your code to make it a real MWE. Sep 10, 2017 at 22:00
• Does my answer answer your questions or do you need further assistance? In the first case please consider upvoting it (with the upward pointing arrow to the left of it) and accepting it (by clicking on the checkmark ✓). In the later case, please edit your question accordingly. Thank you. Sep 14, 2017 at 5:04
You only made one small mistake to achieve what you want: \addlegendentry does not add a legend entry to the previous \addplot command; it just collects entries. So only the order of these commands matters for the legend, not the place where you write them. That said, the picture of the "Error" entry in the legend actually shows the style of the "name path=B" \addplot, which you defined as transparent.
To see this, just add two more \addlegendentry commands and the "last one" will show (correctly) the style of the "fill between" \addplot command.
But this is not a real solution, because even if you give empty arguments to the \addlegendentry command, these will still occupy space in the legend. To prevent this, you can either use the \legend command instead (which I did in the solution shown below) or you can use legend entries in the axis options, where empty entries will not be added to the legend.
% used PGFPlots v1.15
\begin{filecontents*}{test.csv}
x, y, error
5 Tage, 4.031, 0.457
6 Tage, 6.205, 0.065
7 Tage, 14.275, 0.869
8 Tage, 5.585, 0.229
\end{filecontents*}
\documentclass[border=5pt]{standalone}
\usepackage{pgfplots}
\usepgfplotslibrary{fillbetween}
\pgfplotsset{compat=1.3}
\begin{document}
\begin{tikzpicture}
\begin{axis}[
ymajorgrids,
legend style={
at={(0.5,-0.2)},
anchor=north,
legend columns=-1,
},
ylabel={Si pro Zelle},
ymin=0,
ytick distance=2, % <-- replaced `ytick` key
symbolic x coords={
0, 5 Tage, 6 Tage, 7 Tage, 8 Tage, 1},
xtick=data,
x tick label style={
rotate=45,
anchor=east,
},
axis on top, % <-- added
]
\addplot [
ybar,
ybar legend,
blue,
fill=blue!30!white,
error bars/.cd,
y dir=both,
y explicit,
] table [
x=x,
y=y,
y error=error,
col sep=comma,
] {test.csv};
\addplot [
red,
line legend,
update limits=false,
] coordinates {
(0,7.524)
(1,7.524)
}
% specified relative positioning, rather than an absolute one
node [above,pos=0.5] {Average}
;
\addplot [
transparent,
name path=B,
update limits=false,
] coordinates {
(0,5.474)
(1,5.474)
};
\addplot [
transparent,
name path=A,
update limits=false,
] coordinates {
(0,9.574)
(1,9.574) }
;
\addplot [
red!10!white,
area legend,
] fill between [
of=A and B,
];
\legend{
Si-Pool,
Average,
,
,
Error,
}
\end{axis}
\end{tikzpicture}
\end{document}
|
2022-11-29 00:34:11
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7879873514175415, "perplexity": 5811.7952728676155}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710684.84/warc/CC-MAIN-20221128235805-20221129025805-00520.warc.gz"}
|
http://weblib.cern.ch/collection/PBC%20Notes?ln=es
|
# PBC Notes
2019-04-16
12:51
Physics opportunities for a fixed-target programme in the ALICE experiment / Galluccio, Francesca (Universita e sezione INFN di Napoli (IT)) ; Hadjidakis, Cynthia (Centre National de la Recherche Scientifique (FR)) ; Kurepin, Alexander (Institute for Nuclear Research, Moscow) ; Massacrier, Laure (Centre National de la Recherche Scientifique (FR)) ; Porteboeuf, S (Centre national de la recherche scientifique (FR)) ; Pressard, Kevin (Centre national de la recherche scientifique (FR)) ; Scandale, Walter (CERN) ; Topilskaya, Natalia (Institute for Nuclear Research, Moscow) ; Trzeciak, Barbara (Utrecht University) ; Uras, Antonio (Centre national de la recherche scientifique (FR)) et al. A fixed-target programme in the ALICE experiment using the LHC proton and lead beams offers many physics opportunities related to the parton content of the nucleon and nucleus at high-$x$, the nucleon spin and the Quark-Gluon Plasma. We investigate two solutions that would allow ALICE to run in a fixed-target mode: the internal solid target coupled to a bent crystal and the internal gas target. [...] CERN-PBC-Notes-2019-004.- Geneva : CERN, 2019 - Published in : link to original file in ESPP strategy update PDF document submitted to ESPP strategy update: PDF;
2019-03-14
08:31
VELO with SMOG2 Impedance-based Heating Localization Analysis / Popovic, Branko Kosta (CERN) In connection to the proposed upgrade (SMOG2) of the existing gas injection system for the LHCb experiment (SMOG), the potential heating due to induced longitudinal beam impedance of the VELO with the SMOG2 is investigated. The analysis looks at heating of the individual components of the object in the open position, especially potential heating of the SMOG2 cell. [...] CERN-PBC-Notes-2019-003.- Geneva : CERN, 2019
2019-02-13
15:58
Note on the Efficiency of Differential Pumping in the LHC Polarized Gas Target (PGT) / Steffens, Erhard (University of Erlangen) For a sufficient density of a window-less gas target based on a storage cell, a substantial gas flow rate is required, exceeding that of UHV vacuum systems. Therefore, differential pumping has to be applied in order to limit the gas flow into neighbouring sections. [...] CERN-PBC-Notes-2019-002.- Geneva : CERN, 2019 Fulltext: PDF;
2019-02-08
15:08
Study of beam-gas interaction at the LHC for the Physics Beyond Collider Fixed-Target study / Boscolo Meneguolo, Caterina (Universita e INFN, Padova (IT)) ; Bruce, Roderik (CERN) ; Cerutti, Francesco (CERN) ; Ferro-Luzzi, Massimiliano (CERN) ; Mereghetti, Alessio (CERN) ; Molson, James (CERN) ; Redaelli, Stefano (CERN) ; Abramov, Andrey (University of London (GB)) Among several working groups formed in the framework of Physics Beyond Colliders study, launched at CERN in September 2016, there is one investigating some specific fixed-target experiment proposals. Of particular interest is the study of a high-density unpolarized or polarized gas target to be installed upstream the LHCb detector using storage cells to enhance the target density. [...] CERN-PBC-Notes-2019-001.- Geneva : CERN, 2019 beamgas note v2019-04-18: PDF;
2018-12-14
14:45
Calculation of the allowed aperture for a gas storage cell in IP8 / Boscolo Meneguolo, Caterina (CERN) ; Bruce, Roderik (CERN) ; Ferro-Luzzi, Massimiliano (CERN) ; Giovannozzi, Massimo (CERN) ; Redaelli, Stefano (CERN) In the framework of the Physics Beyond Collider studies, a working group was created in order to study some specific proposals to perform fixed-target physics experiments at the Large Hadron Collider. Among these proposals, the possibility has been described of installing an internal gas target, injecting unpolarized or polarized gas inside a storage cell (SC) located around the beam, close to the LHCb detector. [...] CERN-PBC-Notes-2018-008.- Geneva : CERN, 2018 Main document PDF file for CERN-PBC-Notes-2018-008: PDF;
2018-12-14
11:58
The SMOG2 project / Di Nezza, Pasquale (INFN Frascati) ; Carassiti, Vittore (Universita e INFN, Ferrara (IT)) ; Ciullo, Giuseppe (Universita e INFN, Ferrara (IT)) ; Lenisa, Paolo (Universita e INFN, Ferrara (IT)) ; Pappalardo, Luciano Libero (Universita e INFN, Ferrara (IT)) ; Steffens, Erhard (University of Erlangen (Germany)) ; Bruce, Roderik (CERN) ; Vasilyev, Alexander (Petersburg Nuclear Physics Institute, Gatchina (Russia)) ; Boscolo Meneguolo, Caterina (Universita e INFN, Padova (IT)) ; Bregliozzi, Giuseppe (CERN) et al. "A proposal for an upgraded version of the existing gas injection system for the LHCb experiment (SMOG) is presented. The core idea of the project, called SMOG2, is the use of a storage cell for the injected gas to be installed upstream of the VELO detector [...] CERN-PBC-Notes-2018-007.- Geneva : CERN, 2018 Fulltext: PDF;
2018-12-06
15:27
The report of the Conventional Beams Working Group to the Physics Beyond Collider Study and to the European Strategy for Particle Physics / Gatignon, Lau (CERN) This document summarises the main conclusions of the Conventional Beams Working group, which has analysed the beam related and technical requirements and requests in the proposals to the Physics Beyond Colliders study for the North Area at the CERN SPS. We present results from studies on feasibility, requirements, compatibility between proposals and, where possible, the order of magnitude of the costs. [...] CERN-PBC-Notes-2018-005.- Geneva : CERN, 2018 Fulltext: PDF;
2018-12-05
17:48
REDTOP DISCUSSIONS IN THE CONVENTIONAL BEAMS WORKING GROUP / Gatignon, Lau (CERN) In an EN-EA internal meeting of the Conventional Beams WG on February 19th Reyes Alemany Fernandez presented a preliminary study (together with Brennan Goddard) on possibilities for REDTOP at LEIR. The slides are attached. [...] CERN-PBC-Notes-2018-004.- Geneva : CERN, 2018 - 3. Fulltext: DOCX;
2018-11-30
12:20
STUDIES FOR FUTURE FIXED-TARGET EXPERIMENTS AT THE LHC IN THE FRAMEWORK OF THE CERN PHYSICS BEYOND COLLIDERS STUDY A study on prospects for Physics Beyond Colliders at CERN was launched in September 2016 to assess the capabilities of the existing accelerators complex. Among several other working groups, this initiative triggered the creation of a working group with the scope of studying a few specific proposals to perform fixed-target physics experiments at the Large Hadron Collider (LHC). [...] CERN-PBC-Notes-2018-003.- Geneva : CERN, 2018 - 4. - Published in : Fulltext: PDF;
2018-10-12
14:30
Target studies for the proposed KLEVER experiment / Van Dijk, Maarten (CERN) ; Rosenthal, Marcel (CERN) The KLEVER experiment is being proposed to measure the branching ratio $K_L \to \pi^0 \nu\bar{\nu}$ at the K12 beamline at the CERN SPS. This study presents considerations for the target required for such an experiment, specifically the production angle, target length and material. [...] CERN-PBC-Notes-2018-002.- Geneva : CERN, 2018 - 28. Fulltext: DOCX;
|
2019-08-20 12:32:04
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4621593654155731, "perplexity": 13448.861067784655}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027315329.55/warc/CC-MAIN-20190820113425-20190820135425-00094.warc.gz"}
|
http://physics.stackexchange.com/questions/11622/what-corresponds-to-this-lagrangian-density
|
# What corresponds to this Lagrangian density?
Is there a physical example of a field that would have the following Lagrangian density $$L= \sqrt{1+\phi_x^2 +\phi_y^2+\phi_z^2}$$ where the subscripts denote partial derivatives and $\phi$ is a scalar field?
-
Can I ask where you found this Lagrangian? It looks interesting. – Kasper Meerts Jun 27 '11 at 17:59
By the way, in physics nomenclature these types of Lagrangian densities are often called (static) non-linear sigma models. Some people would also refer to the particular form that you wrote down as one that is of "Born-Infeld type". – Willie Wong Jun 28 '11 at 1:58
This looks a lot like soap film statics, but with an extra dimension. Consider a soap film glued to a ring. The film is described by a function $z = \phi(x,y)$, with $z$ the height of the film above the xy-plane. We want to minimise the potential energy of the film, which means to a good approximation minimising the surface area. The total area of the film is given by
$A = \int \sqrt{1 + \phi_x^2 + \phi_y^2}$
where the integration is over the entire xy-plane. This functional looks very similar to your Lagrangian density. The solution can be derived by applying the Euler-Lagrange equations and the solution is confusingly also called Lagrange's equation
$(1 + \phi_y^2) \phi_{xx} - 2 \phi_x \phi_y \phi_{xy} + (1 + \phi_x^2) \phi_{yy} = 0$
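Equivalently, the Euler-Lagrange equation can be written in divergence form,

$\nabla \cdot \left( \frac{\nabla \phi}{\sqrt{1 + |\nabla \phi|^2}} \right) = 0,$

and expanding the derivatives reproduces the second-order equation above.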
To give you an idea of what the solutions look like, if the field doesn't change too rapidly we can make some approximations ($\phi_x \approx \phi_y \approx \phi_{xy} \approx 0$) and get Laplace's equation
$\phi_{xx} + \phi_{yy} = \Delta \phi = 0$
This also provides an interesting interpretation for solutions to the Laplace equation, as they are approximate minimal-area surfaces.
Most of this carries over to three dimensions, but the minimal-surface interpretation isn't so clear anymore.
-
The Euler-Lagrange equation for what you wrote down as the action is not $\triangle \phi = 0$. It should instead be Lagrange's equation. The higher-dimensional generalisation does not have a harmonic field as its solution either. – Willie Wong Jun 27 '11 at 16:51
Also, the Lorentzian version of the equation (with density $\sqrt{1 - (\partial_t\phi)^2 + (\partial_x\phi)^2 + (\partial_y\phi)^2}$ ) is sometimes called the "relativistic membrane equation"; you can see this article of Hoppe for a discussion and further info. – Willie Wong Jun 27 '11 at 17:02
@Willie Wong: Rats, I forgot to take the derivative of the numerator. I'll edit my answer. – Kasper Meerts Jun 27 '11 at 17:50
@Kasper I encountered this Lagrangian in the book "Emmy Noether's Wonderful Theorem" by Neuenschwander. – Noah Jun 27 '11 at 21:39
Okay, +1 now that you fixed the error. :-) Note that the minimal surface interpretation is still valid: it is just not a surface any more, but a hypersurface in $\mathbb{R}^4$ (so you extremize 3-dimensional volume). – Willie Wong Jun 28 '11 at 1:52
|
2015-12-02 05:33:57
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9163334965705872, "perplexity": 429.8725759584237}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-48/segments/1448399326483.8/warc/CC-MAIN-20151124210846-00081-ip-10-71-132-137.ec2.internal.warc.gz"}
|
https://openstax.org/books/statistics/pages/9-formula-review
|
Statistics
# Formula Review
### 9.1Null and Alternative Hypotheses
| If H0 has: | then Ha has: |
|---|---|
| equal (=) | not equal (≠) or greater than (>) or less than (<) |
| greater than or equal to (≥) | less than (<) |
| less than or equal to (≤) | greater than (>) |
Table 9.4
If α ≤ p-value, then do not reject H0.
If α > p-value, then reject H0.
α is preconceived. Its value is set before the hypothesis test starts. The p-value is calculated from the data.
### 9.2Outcomes and the Type I and Type II Errors
α = probability of a Type I error = P(Type I error) = probability of rejecting the null hypothesis when the null hypothesis is true.
β = probability of a Type II error = P(Type II error) = probability of not rejecting the null hypothesis when the null hypothesis is false.
### 9.3Distribution Needed for Hypothesis Testing
If there is no given preconceived α, then use α = 0.05.
Types of Hypothesis Tests
• Single population mean, known population variance (or standard deviation): Normal test.
• Single population mean, unknown population variance (or standard deviation): Student's t-test.
• Single population proportion: Normal test.
• For a single population mean, we may use a normal distribution with the following mean and standard deviation. Means: $\mu = \mu_{\bar{x}}$ and $\sigma_{\bar{x}} = \frac{\sigma_x}{\sqrt{n}}$.
• For a single population proportion, we may use a normal distribution with the following mean and standard deviation. Proportions: $\mu = p$ and $\sigma = \sqrt{\frac{pq}{n}}$.
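For instance, with a sample of $n = 25$ from a population with $\sigma_x = 2$, the sampling distribution of the mean has standard deviation $\sigma_{\bar{x}} = 2/\sqrt{25} = 0.4$; for a proportion with $p = 0.3$, $q = 0.7$, and $n = 100$, $\sigma = \sqrt{(0.3)(0.7)/100} \approx 0.046$.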
|
2020-10-28 14:46:03
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 3, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8513080477714539, "perplexity": 1414.4822317637463}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107898577.79/warc/CC-MAIN-20201028132718-20201028162718-00718.warc.gz"}
|
http://mbb-team.github.io/VBA-toolbox/wiki/BMS-for-group-studies/
|
Here, we address the problem of Bayesian model selection (BMS) at the group level. First of all, note that one could perform such analysis under two qualitatively distinct assumptions:
• fixed-effect analysis (FFX): a single model best describes all subjects
• random-effect analysis (RFX): models are treated as random effects that could differ between subjects, with an unknown population distribution (described in terms of model frequencies/proportions).
Note: In classical statistics, random effects models refer to situations in which data have two sources of variability: within-subject variance and between-subject variance, respectively. The latter is typically captured in terms of the spread (over subjects) of within-subject parameter estimates. This is not equivalent to RFX-BMS, where one assumes that the model that best describes a given subject may depend upon the subject.
We first recall how to perform an FFX analysis. We then explain how to perform an RFX analysis. Finally, we address the problem of between-groups and between-conditions model comparisons. The key idea here is to quantify the evidence for a difference in model labels or frequencies across groups or conditions.
## FFX-BMS
In brief, FFX-BMS assumes that the same model generated the data of all subjects. NB: Subjects might still differ from each other through different model parameters. The corresponding FFX generative model is depicted in the following graph, where $$m$$ is the group’s label (it assigns the group to a given model) and $$y$$ are (within-subject) experimentally measured datasets.
Under FFX assumptions, the posterior probability of a given model is expressed as:
$p(m\mid y_1,\dots,y_n )\propto p(y_1,\dots,y_n\mid m)p(m)= p(y_1\mid m)\dots p(y_n\mid m)p(m)$
Thus, FFX-BMS simply proceeds as follows:
1. for each subject, invert each model and get the corresponding (log-) model evidence,
2. sum the log-evidences over subjects (cf. equation above),
3. compare models as usual (i.e. as in a single-subject study) based upon summed log-evidences.
FFX-BMS is valid whenever one may safely assume that the group of subjects is homogeneous (i.e., subjects are best described by the same model $$m$$).
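As a minimal sketch of steps 2 and 3 above (assuming a flat prior over models; the variable names are illustrative, with L a Kxn array of log-evidences):

sumL = sum(L, 2) ; % step 2: summed log-evidence per model
sumL = sumL - max(sumL) ; % rescale for numerical stability
postM = exp(sumL) ./ sum(exp(sumL)) ; % step 3: posterior p(m|y) under a flat prior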
## RFX-BMS
In RFX-BMS, models are treated as random effects that could differ between subjects and have a fixed (unknown) distribution in the population. Critically, the relevant statistical quantity is the frequency with which any model prevails in the population. The corresponding RFX generative model is depicted in the following graph:
where $$r$$ is the population frequency profile, $$m$$ is the subject-specific model label (it assigns each subject to a given model) and $$y$$ are (within-subject) experimentally measured datasets.
In VBA, the above RFX generative model can be inverted as follows:
[posterior,out] = VBA_groupBMC(L) ;
where the I/O arguments of VBA_groupBMC are summarized as follows:
• L: Kxn array of log-model evidences (K models; n subjects)
• posterior: a structure containing the sufficient statistics (moments) of the posterior distributions over unknown model variables (i.e. subjects’ labels and model frequencies).
• out: a structure containing inversion diagnostics, e.g.: RFX log-evidence, exceedance probabilities (see below), etc…
In Stephan et al. (2009), we introduced the notion of exceedance probability (EP), which measures how likely it is that any given model is more frequent than all other models in the comparison set:
$$EP_i = P\left(r_i > r_j, \; \forall j\neq i \mid y\right)$$.
Estimated model frequencies and EPs are the two summary statistics that typically constitute the results of RFX-BMS. They can be retrieved as follows:
f = out.Ef ;
EP = out.ep ;
Protected exceedance probabilities (PEPs) are an extension of this notion. They correct EPs for the possibility that observed differences in model evidences (over subjects) are due to chance (Rigoux et al., 2014). They can be retrieved as follows: PEP = (1-out.bor)*out.ep + out.bor/length(out.ep), where out.bor is the Bayesian Omnibus Risk (BOR), i.e. the posterior probability that model frequencies are all equal.
The graphical output of VBA_groupBMC.m is appended below (with random log-evidences, with K=4 and n=16):
Upper-left panel: log-evidences (y-axis) over each model (x-axis). NB: Each line/colour identifies one subjects within the group. Middle-left panel: exceedance probabilities (y-axis) over models (x-axis). Lower-left panel: RFX free energy (y-axis) over VB iterations (x-axis). The log-evidence (+/- 3) of the FFX (resp., “null”) model is shown in blue (resp. red) for comparison purposes. NB: here, the observed log-evidence are better explained by chance than by the RFX generative model (cf. simulated random log-evidences)! Upper-right panel: model attributions (subjects’ labels), in terms of the posterior probability (colour code) of each model (x-axis) to best explain each subject (y-axis). Middle-right panel: estimated model frequencies (y-axis) over models (x-axis). NB: the red line shows the “null” frequency profile over models.
Optional arguments can be passed to the function, which can be used to control the convergence of the VB scheme.
Importantly, one may want to partition the model space into distinct subsets (i.e. model families). For example, a given family of models would contain all models that share a common feature (e.g., some non-zero parameter). Information Re: model families are passed through the options variable (see header of VBA_groupBMC.m):
options.families = {[1,2], [3,4]} ;
[posterior, out] = VBA_groupBMC(L, options) ;
The above script effectively forces RFX-BMS to perform family inference at the group-level, where the first (resp. second) family contains the first and second (resp., third and fourth) model. Querying the family frequencies and EPs can be done as follows:
ff = out.families.Ef ;
fep = out.families.ep ;
(Same format as before). NB: in the lower-left panel, one can also eyeball the “family null” log-evidence (here, it is confounded with the above “model null”). Lower-right panel: model space partition and estimated frequencies (y-axis) over families (x-axis).
Check demo_modelComparison.m for a more detailed example.
## Between-conditions RFX-BMS
Now what if we are interested in the difference between treatment conditions; for example, when dealing with one group of subjects measured under two conditions? One could think that it would suffice to perform RFX-BMS independently for the different conditions, and then check to see whether the results of RFX-BMS were consistent. However, this approach is limited, because it does not test the hypothesis that the same model describes the two conditions. In this section, we address the issue of evaluating the evidence for a difference - in terms of models - between conditions.
Let us assume that the experimental design includes p conditions, to which a group of n subjects were exposed. Subject-level model inversions were performed prior to the group-level analysis, yielding the log-evidence of each model, for each subject under each condition. One can think of the conditions as inducing an augmented model space composed of model “tuples” that encode all combinations of candidate models and conditions. Here, each tuple identifies which model underlies each condition (e.g., tuple 1: model 1 in both conditions 1 and 2, tuple 2: model 1 in condition 1 and model 2 in condition 2, etc…). The log-evidence of each tuple (for each subject) can be derived by appropriately summing up the log evidences over conditions.
Note that the set of induced tuples can be partitioned into a first subset, in which the same model underlies all conditions, and a second subset containing the remaining tuples (with distinct condition-specific models). One can then use family RFX-BMS to ask whether the same model underlies all conditions. This is the essence of between-condition RFX-BMS, which is performed automatically as follows:
[ep, out] = VBA_groupBMC_btwConds(L) ;
where the I/O arguments of VBA_groupBMC_btwConds are summarized as follows:
• L: Kxnxp array of log-model evidences (K models; n subjects; p conditions)
• ep: exceedance probability of no difference in models across conditions.
• out: diagnostic variables (see the header of VBA_groupBMCbtw.m).
Now, one may be willing to ask whether the same model family underlies all conditions. For example, one may not be interested in knowing that different conditions may induce some variability in models that do not cross the borders of some relevant model space partition. This can be done as follows:
options.families = {[1,2], [3,4]} ;
[ep, out] = VBA_groupBMC_btwConds(L, options) ;
Here, the EP will be high if, for most subjects, either family 1 (models 1 and 2) or family 2 (models 3 and 4) are most likely, irrespective of conditions.
If the design is factorial (e.g., conditions vary along two distinct dimensions), one may be willing to ask whether there is a difference in models along each dimension of the factorial design. For example, let us consider a 2x2 factorial design:
factors = [[1,2]; [3,4]] ;
[ep,out] = VBA_groupBMC_btwConds(L, [], factors) ;
Here, the input argument factors is the (2x2) factorial condition attribution matrix, whose entries contain the index of the corresponding condition (p=4). The output argument ep is a 2x1 vector, quantifying the EP that models are identical along each dimension of the factorial design.
Of course, one may want to combine family inference with factorial designs, as follows:
options.families = {[1,2], [3,4]} ;
factors = [[1,2]; [3,4]] ;
[ep, out] = VBA_groupBMC_btwConds(L, options, factors) ;
Note that the ensuing computational cost scales linearly with the number of dimensions in the factorial design, but is an exponential function of the number of conditions (there are K^p tuples).
## Between-groups RFX-BMS
Assessing between-group model comparison in terms of random effects amounts to asking whether model frequencies are the same or different between groups. In other words, one wants to compare the two following hypotheses (at the group level):
• $$H_=$$: data y come from the same population, i.e. model frequencies are the same for all subgroups:
• $$H_{\neq}$$: subjects’ data y come from different populations, i.e. they have distinct model frequencies:
Under $$H_=$$ , the group-specific datasets can be pooled to perform a standard RFX-BMS, yielding a single evidence $$p(y\mid H_{=})$$:
L = [L1, L2] ;
[posterior, out] = VBA_groupBMC(L) ;
Fe = out.F ;
where L1 (resp. L2) is the subject-level log-evidence matrix of the first (resp. second) group of subjects, and Fe is the log-evidence of the group-hypothesis $$H_=$$.
Under $$H_{\neq}$$, datasets are marginally independent. In this case, the evidence $$p(y\mid H_{\neq})$$ is the product of group-specific evidences:
[posterior1, out1] = VBA_groupBMC(L1) ;
[posterior2, out2] = VBA_groupBMC(L2) ;
Fd = out1.F + out2.F ;
where Fd is the log-evidence of the group-hypothesis $$H_{\neq}$$.
The posterior probability $$P\left(H_= \mid y \right)$$ that the two groups have the same model frequencies is thus simply given by:
p = 1/(1+exp(Fd-Fe))
Note that one can directly test for a group difference with the function VBA_groupBMC_btwGroups which directly performs the above analysis:
[h, p] = VBA_groupBMC_btwGroups({L1, L2})
where p is $$P\left(H_= \mid y \right)$$.
|
2021-09-20 01:35:14
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7698768973350525, "perplexity": 2479.141482482827}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780056974.30/warc/CC-MAIN-20210920010331-20210920040331-00524.warc.gz"}
|
https://ora.ox.ac.uk/objects/uuid:dfe8fbaa-3411-40ea-8068-05bb67fa253e
|
Journal article
### Eigenvalue density of correlated complex random Wishart matrices.
Abstract:
Using a character expansion method, we calculate exactly the eigenvalue density of random matrices of the form $M^\dagger M$ where $M$ is a complex matrix drawn from a normalized distribution $P(M) \sim \exp(-\mathrm{Tr}[A M B M^\dagger])$ with $A$ and $B$ positive definite (square) matrices of arbitrary dimensions. Such so-called correlated Wishart matrices occur in many fields ranging from information theory to multivariate analysis.
Publication status:
Published
### Access Document
Publisher copy:
10.1103/physreve.69.065101
### Authors
Institution:
University of Oxford
Department:
Oxford, MPLS, Physics, Theoretical Physics
Role:
Author
Journal:
Physical review. E, Statistical, nonlinear, and soft matter physics
Volume:
69
Issue:
6 Pt 2
Pages:
065101
Publication date:
2004-06-05
DOI:
EISSN:
1550-2376
ISSN:
1539-3755
URN:
uuid:dfe8fbaa-3411-40ea-8068-05bb67fa253e
Source identifiers:
168171
Local pid:
pubs:168171
Language:
English
|
2021-07-29 20:03:20
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8224712610244751, "perplexity": 7042.577395291689}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046153892.74/warc/CC-MAIN-20210729172022-20210729202022-00406.warc.gz"}
|
http://math.stackexchange.com/questions/138657/squaring-an-arbitrary-summation?answertab=votes
|
# Squaring an arbitrary summation?
I'm trying to find a recurrence relation for the coefficients of the Maclaurin series for $\tan(x)$ by substituting $y=\sum_{k=0}^{\infty}C_{2k+1}x^{2k+1}$ into the differential equation $y'=1+y^2$. This is because $\tan(x)$ is the solution to the initial value problem for the aforementioned DE with the initial condition $y(0)=0$; this is also where the form $\sum_{k=0}^{\infty}C_{2k+1}x^{2k+1}$ comes from (the fact that $\tan(x)$ is an odd function and that $y(0)=0$, which implies $C_0=0$). But I have no clue how to work "around" the expression $y^2=\big(\sum_{k=0}^{\infty}C_{2k+1}x^{2k+1}\big)^2$. How can I find a recurrence relation with an infinite squared summation? Any help is appreciated, thank you.
-
The coefficients will get convolved i.e. $(\sum_{k\ge0}C_kx^k)^2=\sum_{k\ge0}(\sum_{i=0}^kC_iC_{k-i})x^k$. – sai Apr 30 '12 at 0:11
Take the Cauchy product:
\begin{align*} \left(\sum_{k\ge 0}C_{2k+1}x^{2k+1}\right)^2&=x^2\left(\sum_{k\ge 0}C_{2k+1}x^{2k}\right)^2\\\\ &=x^2\sum_{k\ge 0}D_{2k}x^{2k}\;, \end{align*}
where $$D_{2k}=\sum_{i=0}^kC_{2i+1}C_{2(k-i)+1}\;.$$ Thus, the differential equation becomes
\begin{align*} \sum_{k\ge 0}C_{2k+1}(2k+1)x^{2k}&=1+x^2\sum_{k\ge 0}\sum_{i=0}^kC_{2i+1}C_{2(k-i)+1}x^{2k}\\ &=1+\sum_{k\ge 1}\sum_{i=0}^{k-1}C_{2i+1}C_{2(k-i)-1}x^{2k}\;, \end{align*}
and we have $C_1=1$ and $$C_{2k+1}=\frac1{2k+1}\sum_{i=0}^{k-1}C_{2i+1}C_{2(k-i)-1}$$ for $k\ge 1$.
E.g.,
\begin{align*} C_3&=\frac13 C_1^2=\frac13\;,\\ C_5&=\frac15(2C_1C_3)=\frac2{15}\;,\text{ and}\\ C_7&=\frac17(2C_1C_5+C_3^2)=\frac{17}{315}\;. \end{align*}
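These values match the familiar expansion $\tan x = x + \frac{x^3}{3} + \frac{2x^5}{15} + \frac{17x^7}{315} + \cdots$, as expected.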
-
In other contexts, the coefficients of the squared function would be what is termed a self-convolution, or autoconvolution. – J. M. Apr 30 '12 at 0:41
Say we have $\big(\sum_{i=0}^{n}a_i\big)\big(\sum_{j=0}^{n}b_j\big) = a_0\big(\sum_{j=0}^{n}b_j\big)+a_1\big(\sum_{j=0}^{n}b_j\big)+\dots+a_n\big(\sum_{j=0}^{n}b_j\big)$, which equals $\sum_{i=0}^{n}\sum_{j=0}^{n}a_ib_j$. Why do we shift the index of the second summation? – Hautdesert Apr 30 '12 at 1:39
@Hautdesert: I don’t understand. This example is very different from the product of two series (or for that matter two polynomials), and I don’t know what index you think is being shifted. – Brian M. Scott Apr 30 '12 at 1:44
@BrianM.Scott: Is the reasoning in my previous comment not a generalization of the product of two arbitrary polynomials? I thought we could just set $a_i=c_ix^i$ and derive the Cauchy product. – Hautdesert Apr 30 '12 at 1:49
@Hautdesert: Not if you want to group the product terms according to powers of $x$ so as to be able to match coefficients in $y'$ and $1+y^2$. That’s why the coefficient $D_{2k}$ of $x^{2k}$ above is itself a sum of products. Similarly, in $(a+bx+cx^2)(d+ex+fx^2)$ the coefficient of $x^3$ is $bf+ce$, since there are two $x^3$ terms. – Brian M. Scott Apr 30 '12 at 1:55
|
2013-05-23 07:25:06
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9202293157577515, "perplexity": 237.4391520732961}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368703001356/warc/CC-MAIN-20130516111641-00010-ip-10-60-113-184.ec2.internal.warc.gz"}
|
https://golem.ph.utexas.edu/category/2009/08/computation_and_the_periodic_t_1.html
|
## August 8, 2009
### Computation and the Periodic Table II
#### Posted by John Baez
On Tuesday my wife Lisa and I are flying back from Paris to Los Angeles, and then taking the long shuttle ride from the airport back home to Riverside. But on Wednesday we’re driving back into LA. And then on Thursday morning at 8:30, if I don’t oversleep, I’m giving a talk here:
It’s called Computation and the periodic table, and you can see the slides now. Comments and corrections are welcome!
Devotees of the $n$-Café will note that I gave a talk with a suspiciously similar title over a year ago. What’s new about this one?
Well, it’s not drastically different. But I’ve thought a lot more about 2-categories and the $\lambda$-calculus, so this goes into a bit more detail about that. And also there’s more of a focus on physics. Last year’s conference was on algebraic topology and computer science; now it’s logic and computer science — but I’ve been invited to talk about connections between these subjects and physics. So, I’m trying hard to explain how a 2-category of
• data types,
• terms, and
• rewrite rules
can resemble a 2-category of
• $D$-branes,
• states of a topological open string theory, and
• operators between open string states
and how this fits into the overall perspective of the ‘Periodic Table’ of $n$-categories. Doing this with any precision in one hour seems impossible, so I’m just trying to convey a rough flavor of the idea, while pointing people to a webpage for references that provide more details.
As with the previous incarnation of this talk, you understand it once you fully grok what these pictures mean:
Posted at August 8, 2009 6:22 PM UTC
TrackBack URL for this Entry: http://golem.ph.utexas.edu/cgi-bin/MT-3.0/dxy-tb.fcgi/2031
### Re: Computation and the Periodic Table II
Any idea what the registration fee is for nonmembers?
The registration page will not show the fees until AFTER you fill out the form (?!)
Posted by: Eric Forgy on August 8, 2009 7:16 PM | Permalink | Reply to this
### Re: Computation and the Periodic Table II
Posted by: Toby Bartels on August 8, 2009 8:18 PM | Permalink | Reply to this
### Re: Computation and the Periodic Table II

As much as I love to see John’s talks, $750 is a bit steep :) Thanks for finding that info.

PS: John, you and Lisa have an open invitation to visit us any time you’re in LA. Beers? :)

Posted by: Eric Forgy on August 8, 2009 9:34 PM | Permalink | Reply to this

### Re: Computation and the Periodic Table II

You know, I half expected you to have an IEEE membership, Eric. (Not that that would have made this cheap!)

Posted by: Toby Bartels on August 9, 2009 4:39 PM | Permalink | Reply to this

### Re: Computation and the Periodic Table II

I was no longer surprised about the fees when I saw it was an IEEE conference. It still seems excessive though. The fees are not much lower than those of the IEEE conferences I used to attend, which hosted several thousand people in tens of parallel tracks in elegant hotels. As far as membership, I gave up my engineering badges years ago :)

Posted by: Eric Forgy on August 9, 2009 5:23 PM | Permalink | Reply to this

### Re: Computation and the Periodic Table II

Eric wrote: PS: John, you and Lisa have an open invitation to visit us any time you’re in LA. Beers? :)

Thanks! But I’m afraid I’ll be sufficiently busy and jet-lagged that I’m worried about how I’ll survive and stay awake even without extra fun. I have to get up at 3 am the day after tomorrow to catch my flight to LA — and from then on it’ll be go, go, go. What I’ll need is not beer but sleep. In fact I’m feeling sleepy just thinking about it! I hope sometime I can see you in LA on some more peaceful trip into town.

Posted by: John Baez on August 9, 2009 6:49 PM | Permalink | Reply to this

### Re: Computation and the Periodic Table II

I am in LA for the next week and I was planning on getting up early and driving over. It is showing a price of $400 for a student.

Does anyone know why they charge so much and how it is justified?
Posted by: Alex Hoffnung on August 9, 2009 12:53 AM | Permalink | Reply to this
### Re: Computation and the Periodic Table II
Does anyone know why they charge so much and how it is justified?
Well, it comes with 8 meals, not to mention that John's isn't the only talk.
Posted by: Toby Bartels on August 9, 2009 1:15 AM | Permalink | Reply to this
### Re: Computation and the Periodic Table II
Sorry for the high price — I didn’t know it was so bad. When prices get this high, the question becomes: how carefully will they check badges?
Of course it makes sense to charge money for meals, but what about people who are willing to take their own sandwich?
I found this year’s Joint Math Meetings painfully expensive, especially since I’d let my AMS membership lapse, and they seemed determined to punish me for that. But LICS is run by the Institute of Electrical and Electronics Engineers (IEEE), which probably makes things even worse. As you probably know, there’s a lot more money flowing through the system in engineering than in math — their pay scale is higher, but most people attending this conference will simply pay for it using a grant.
Be glad they let me put the slides of my talk on the web, where it’s free!
Posted by: John Baez on August 9, 2009 12:38 PM | Permalink | Reply to this
Post a New Comment
|
2018-12-19 09:37:49
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 4, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4612525403499603, "perplexity": 1626.165518103131}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376831933.96/warc/CC-MAIN-20181219090209-20181219112209-00216.warc.gz"}
|
http://mathhelpforum.com/calculus/135637-lagrange-multipliers-print.html
|
# Lagrange Multipliers
• March 25th 2010, 11:33 AM
davesface
Lagrange Multipliers
The plane $x+y+z=4$ intersects the paraboloid $z=x^2+y^2$ in an ellipse. Find the points on the ellipse nearest to and farthest from the origin.
So is the goal here to just minimize and then maximize the distance formula $f(x,y,z)=\sqrt{x^2+y^2+z^2}$ with the constraints of $g_1:x+y+z=4$ and $g_2:z=x^2+y^2$?
• March 25th 2010, 12:11 PM
Soroban
Hello, davesface!
Quote:
The plane $x+y+z\:=\:4$ intersects the paraboloid $z\:=\:x^2+y^2$ in an ellipse.
Find the points on the ellipse nearest to and farthest from the origin.
So is the goal to minimize and maximize the distance formula $f(x,y,z)\:=\:\sqrt{x^2+y^2+z^2}$
with the constraints of $g_1:x+y+z\:=\:4$ and $g_2:z\:=\:x^2+y^2$ ? . . . . Yes!
We have: . $F(x,y,z,\lambda,\mu) \;=\;\left(x^2+y^2+z^2\right)^{\frac{1}{2}} + \lambda(x+y+z-4) + \mu\left(z-x^2-y^2\right)$
Find the five partial derivatives, equate to zero, and solve the system.
. . $\begin{array}{ccccc}\dfrac{\partial F}{\partial x} &=& \dfrac{x}{\sqrt{x^2+y^2+z^2}} + \lambda - 2\mu x &=& 0 \\ \\
\dfrac{\partial F}{\partial y} &=& \dfrac{y}{\sqrt{x^2+y^2+z^2}} + \lambda - 2\mu y &=&0 \\ \\
\dfrac{\partial F}{\partial z} &=& \dfrac{z}{\sqrt{x^2+y^2+z^2}} + \lambda + \mu &=&0 \end{array}$
. . . . $\begin{array}{ccccc}\dfrac{\partial F}{\partial \lambda} &=& x + y + z - 4 &=& 0 \\ \\
\dfrac{\partial F}{\partial \mu} &=& z - x^2 - y^2 &=& 0 \end{array}$
I'll wait in the car . . .
.
• March 25th 2010, 01:16 PM
davesface
I haven't ever seen someone take $\frac{\partial F}{\partial \lambda}$ or $\frac{\partial F}{\partial \mu}$ (and since those are constants, do those derivatives make sense?). According to my notes, it's set up as $\nabla f= \lambda \nabla g_1+ \mu \nabla g_2$, which I think for this problem would be:
$\left [ \begin{array}{cc} \frac{x}{\sqrt{x^2+y^2+z^2}} \\ \frac{y}{\sqrt{x^2+y^2+z^2}}\\ \frac{z}{\sqrt{x^2+y^2+z^2}} \end{array} \right ]= \lambda \left [ \begin{array}{cc} 1 \\ 1\\1 \end{array} \right ] + \mu \left [ \begin{array}{cc} 2x \\ 2y \\-1 \end{array} \right ]$
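(Either setup leads to the same system. A quick way to finish: subtracting the $x$ and $y$ equations gives $(x-y)\left(\frac{1}{\sqrt{x^2+y^2+z^2}} - 2\mu\right) = 0$, and the second factor is impossible here, since it forces $\lambda = 0$ and then $z = -\frac{1}{2} < 0$, contradicting $z = x^2+y^2$. So $x = y$, and the constraints give $2x + 2x^2 = 4$, i.e. $x^2 + x - 2 = 0$, so $x = 1$ or $x = -2$. The candidates are $(1,1,2)$ at distance $\sqrt{6}$, the nearest point, and $(-2,-2,8)$ at distance $6\sqrt{2}$, the farthest.)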
|
2016-08-28 23:28:39
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 17, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.947950005531311, "perplexity": 585.4132602764688}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-36/segments/1471982948216.97/warc/CC-MAIN-20160823200908-00015-ip-10-153-172-175.ec2.internal.warc.gz"}
|
https://labs.tib.eu/arxiv/?author=H.Berrehrah
|
• ### Parton-hadron dynamics in heavy-ion collisions(1312.7578)
Dec. 29, 2013 hep-ph, nucl-th
The dynamics of partons and hadrons in relativistic nucleus-nucleus collisions is analyzed within the novel Parton-Hadron-String Dynamics (PHSD) transport approach, which is based on a dynamical quasiparticle model for the partonic phase (DQPM) including a dynamical hadronization scheme. The PHSD approach is applied to nucleus-nucleus collisions from low SPS to LHC energies. The traces of partonic interactions are found in particular in the elliptic flow of hadrons and in their transverse mass spectra. We investigate also the equilibrium properties of strongly-interacting infinite parton-hadron matter characterized by transport coefficients such as shear and bulk viscosities and the electric conductivity in comparison to lattice QCD results.
|
2020-01-26 02:49:02
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.820977509021759, "perplexity": 3789.2364885019847}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579251684146.65/warc/CC-MAIN-20200126013015-20200126043015-00233.warc.gz"}
|
https://math.stackexchange.com/questions/1615216/stuck-proving-that-if-m-and-n-are-perfect-squares-then-mn2-sqrtmn-is
|
# Stuck proving that if $m$ and $n$ are perfect squares. Then $m+n+2\sqrt{mn}$ is also a perfect square.
I am relatively new to proofs and can't seem to figure out how to solve an exercise.
I am trying to prove:
Suppose that $m$ and $n$ are perfect squares. Then $m+n+2\sqrt{mn}$ is also a perfect square.
I know that, per the definition of a perfect square, $m=a^2$ and $n=b^2$ for some positive integers $a$ and $b$.
I can then use substitution to rewrite the statement as:
$$a^2+b^2+2\sqrt{a^2b^2}$$
I also know that $2\sqrt{a^2b^2}$ can be simplified to $2ab$, giving: $$a^2+b^2+2ab$$
I am stuck after this point though. I don't know how to eliminate the $2ab$.
You don't need to eliminate the $2ab$ term.
Notice that $(a+b)^2 = (a+b)(a+b) = a^2+ab+ba+b^2 = a^2+b^2+2ab$.
Now use the fact that $a^2+b^2+2ab = (a+b)^2$.
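For example, with $m = 4$ and $n = 9$: $m + n + 2\sqrt{mn} = 4 + 9 + 2\sqrt{36} = 25 = (2+3)^2$.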
|
2019-07-22 09:32:26
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9419093132019043, "perplexity": 39.25851504868621}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195527907.70/warc/CC-MAIN-20190722092824-20190722114824-00024.warc.gz"}
|
https://aliquote.org/micro/2021-11-17-15-59-29/
|
# aliquote
## < a quantity that can be divided into another a whole number of time />
The proper use of macros in Lisp. #lisp
|
2022-05-26 12:07:53
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8579360842704773, "perplexity": 2622.1371012947457}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662604794.68/warc/CC-MAIN-20220526100301-20220526130301-00564.warc.gz"}
|
https://testbook.com/question-answer/a-4-m-long-solid-round-bar-is-used-as-a-column-hav--5ef062f3a0ae8e1e94b44e48
|
# A 4 m long solid round bar is used as a column having one end fixed and the other end free. If Euler’s critical load on this column is found as 10 kN and E = 210 GPa for the material of the bar, the diameter of the bar is
This question was previously asked in
ESE Mechanical 2014 Official Paper - 2
1. 50 mm
2. 40 mm
3. 60 mm
4. 45 mm
Option 1 : 50 mm
## Detailed Solution
Assumptions of Euler’s theory:
1. The column is initially perfectly straight.
2. The flexural rigidity is unchanged along the length.
3. Stresses in the structure are within the elastic limit.
4. The compressive load is perfectly axially applied.
5. The material is considered isotropic and homogeneous.
| End conditions | Both ends hinged | One end fixed, other free | One end fixed, other hinged | Both ends fixed |
|---|---|---|---|---|
| Effective length | $$L$$ | $$2L$$ | $$\frac{L}{\sqrt 2}$$ | $$L/2$$ |
$${P_e} = \frac{{{\pi ^2}\; \times \;E\; \times \;I}}{{{{\left( {{L_e}} \right)}^2}}}$$
$${P_e} = Buckling\;load = 10kN$$
$${L_e} = effective\;length = 2L$$
$$I = moment\;of\;inertia\;about\;centroid\;axis$$
$$10 \times 10^3 = \frac{\pi^2 \times 210 \times 10^9 \times I}{(2 \times 4)^2}$$
$$I = 3.088 \times 10^{-7}\;m^4$$
$$I = \frac{\pi }{{64}}{d^4}$$
$$3.088 \times {10^{ - 7}}\;m = \;\frac{\pi }{{64}}{d^4}$$
$$d = 0.050081\;m \approx 50\;mm$$
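The same arithmetic can be checked with a few lines of MATLAB-style code (a sketch; SI units, variable names illustrative):

P = 10e3 ; % Euler critical load [N]
E = 210e9 ; % Young's modulus [Pa]
Le = 2*4 ; % effective length 2L for a fixed-free column [m]
I = P*Le^2/(pi^2*E) ; % from P_e = pi^2*E*I/Le^2, gives 3.088e-7 m^4
d = (64*I/pi)^(1/4) ; % from I = pi*d^4/64, gives about 0.050 m = 50 mm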
|
2021-10-18 07:15:40
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.784256637096405, "perplexity": 4104.521203150054}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585199.76/warc/CC-MAIN-20211018062819-20211018092819-00477.warc.gz"}
|
https://usa.cheenta.com/tag/cheenta/
|
Categories
Clocky Rotato Arithmetic
Do you know that CLOCKS add numbers in a different way than we do? Do you know that ROTATIONS can also behave as numbers, with an arithmetic of their own? Well, this post is about how a clock adds numbers and how rotations behave like numbers. Let’s learn about clock and rotation arithmetic today.
Consider the clock on earth.
So, there are 12 numbers {1,2, …, 12 } are written on the clock. But let’s see how clocks add them.
What is 3+ 10 ?
Well, to the clock it is nothing else than 1. Why?
Say it is 3 am, so the clock shows 3. Now you add 10 hours to 3 am. You get the 13th hour of the day. But to the clock, it is 1 pm.
So, 3 + 10 = 1.
If you take any other addition, say 9 + 21, the clock gives 6 ( 9 am + 21 hours = 6 am the next day ).
Now, you can write any other Clocky addition. But you will essentially see that the main idea is :
The clock counts 12 = 0.
Isn’t it easy? 0 comes as an integer just before 1, but on the clock, it is 12 written. So 12 must be equal to 0. Yes, it is that easy.
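In the language of modular arithmetic, the clock simply adds modulo 12: for example, $3 + 10 = 13 \equiv 1 \pmod{12}$.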
Cayley’s Table
This is a handsome and sober way to write the arithmetic of a set. It is useful if the set is finite like the numbers of the CLOCK Arithmetic.
Let me show you by an example.
Consider the planet Cheenta. A day on Cheenta consists of 6 earth hours.
So, how will the clock on Cheenta look like?
Let us construct the Cayley Table for Cheenta’s Clocky Arithmetic, and check that it really works as you expect. Here, for the Cheenta Clock, 3 = 0.
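Concretely, the table reads as follows (addition where 3 counts as 0):

| + | 1 | 2 | 3 |
|---|---|---|---|
| 1 | 2 | 3 | 1 |
| 2 | 3 | 1 | 2 |
| 3 | 1 | 2 | 3 |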
Exercise: Draw the Cayley Table for the Earth (24 hours a day) and Jupiter (10 hours a day).
Nice, let’s move on to the Rotato part. I mean the arithmetic of Rotation part.
Let’s go through the following image.
Well, let's measure the symmetry of the figure. But how?
Which is more symmetric: the Triskelion or the Square (imagine one)?
The Square seems more symmetric, right? But what is the thing that is catching our eyes?
It is the set of all the symmetric positions, that capture the overall symmetry of a figure.
For the Triskelion, observe that there are three symmetry operations possible that don't alter the picture:
• Rotation by 120 degrees. $r_1$
• Rotation by 240 degrees. $r_2$
• Rotation by 360 degrees. $r_3$
For the Square, the symmetries are:
• Rotation by 90 degrees.
• Rotation by 180 degrees.
• Rotation by 270 degrees.
• Rotation by 360 degrees.
• Four Reflections along the Four axes
For a square there are eight symmetries, hence the eyes feel that too.
So, what about the arithmetic of these? Let’s consider the Triskelion.
Just as 1 interacts with 3 via + to give 4,
we say $r_1$ interacts with $r_2$ when $r_1$ acts on the figure after $r_2$, i.e. (120 + 240 = 360 degrees of rotation = $r_3$).
Hence, this is the arithmetic of the rotations. To give a sober look to this arithmetic, we draw a Cayley Table for this arithmetic.
Well, check it out.
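If you'd like to generate that table programmatically, here is a minimal Python sketch that composes the three rotations by adding their angles modulo 360 degrees:

```python
# Rotations of the Triskelion, labelled by their angles in degrees
angles = {"r1": 120, "r2": 240, "r3": 360}
names = {120: "r1", 240: "r2", 0: "r3"}  # 0 and 360 degrees are the same rotation

def compose(a, b):
    """Rotate by b first, then by a: angles add modulo 360."""
    return names[(angles[a] + angles[b]) % 360]

for a in angles:
    print("  ".join(compose(a, b) for b in angles))
# e.g. r1 after r2: 120 + 240 = 360 degrees of rotation, i.e. r3
```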
Exercise: Can you see any similarity of this table with that of anything before?
Challenge Problem: Can you draw the Cayley Table for the Square?
You may explore this link:- https://www.cheenta.com/tag/level-2/
And this video:- https://www.youtube.com/watch?v=UaGsKzR_KVw
Don’t stop investigating.
All the best.
Hope, you enjoyed. 🙂
Passion for Mathematics.
Categories
Natural Geometry of Natural Numbers
How does this sound?
The numbers 18 and 30 together look like a chair.
The Natural Geometry of Natural Numbers is something that is never advertised, rarely talked about. Just feel how they feel!
Let’s revise some ideas and concepts to understand the natural numbers more deeply.
We know by Unique Prime Factorization Theorem that every natural number can be uniquely represented by the product of primes.
So, a natural number is entirely known by the primes and their powers dividing it.
Also if you think carefully the entire information of a natural number is also entirely contained in the set of all of its divisors as every natural number has a unique set of divisors apart from itself.
We will discover the geometry of a natural number by adding lines between these divisors to form some shape and we call that the natural geometry corresponding to the number.
Let’s start discovering by playing a game.
Take a natural number n and all its divisors including itself.
Consider two divisors a < b of n. Now draw a line segment between a and b based on the following rules:
• a divides b.
• There is no divisor c of n such that a < c < b, where a divides c and c divides b.
Also write the number $\frac{b}{a}$ over the line segment joining a and b.
Let’s draw for number 6.
Now, whatever shape we get, we call it the natural geometry of that particular number. Here we call that 6 has a natural geometry of a square or a rectangle. I prefer to call it a square because we all love symmetry.
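Here is a minimal Python sketch of the rules of the game, listing the labelled edges of a number's natural geometry; for 6 it returns exactly the 4-cycle (the square) just described:

```python
def natural_geometry_edges(n):
    """Edges (a, b, label) between divisors a < b of n, following the two rules."""
    divs = [d for d in range(1, n + 1) if n % d == 0]
    edges = []
    for a in divs:
        for b in divs:
            if a < b and b % a == 0:
                # rule 2: no divisor c of n with a < c < b, a | c and c | b
                if not any(a < c < b and c % a == 0 and b % c == 0 for c in divs):
                    edges.append((a, b, b // a))  # the label b/a is always prime
    return edges

print(natural_geometry_edges(6))   # [(1, 2, 2), (1, 3, 3), (2, 6, 3), (3, 6, 2)]
print(natural_geometry_edges(30))  # 12 edges: the skeleton of a cube
```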
What about all the numbers? Isn't it interesting to know the geometry of all the natural numbers?
Let's draw it for some other number, say 30.
Observe this carefully: 30 has a complicated structure if seen in two dimensions, but its natural geometric structure is actually that of a cube, right?
The red numbers denote the divisors and the black numbers denote the numbers to be written on the line segment.
Beautiful right!
Have you observed something interesting?
• The numbers on the line segments are always primes.
Exercise: Prove from the rules of the game that the numbers on the line segment always correspond to prime numbers.
Did you observe this?
• In the pictures above, the parallel lines have the same prime number on it.
Exercise: Prove that parallel line segments always carry the same prime number.
Actually, each prime number corresponds to a different direction; if you draw these directions perpendicular to one another, you get the natural geometry of the number.
Let’s observe the geometry of other numbers.
Try to draw the geometry of the number 210. It will look like the following:
Obviously, this is not the natural geometry as shown, but neither can we visualize it: the number 210 lies in four dimensions. If you try to discover this structure, you will find that it has four different directions corresponding to the four different primes dividing it. Also, you will see that it is actually a four-dimensional cube, which is called a tesseract. What you see above is a two-dimensional projection of the tesseract, which we call a graph.
A person acquainted with graph theory can understand that the graph of a number is always k-regular, where k is the number of primes dividing the number.
Now it’s time for you to discover more about the geometry of all the numbers.
I leave some exercises to help you along the way.
Exercise: Show that the natural geometry of $p^k$ is a long straight line consisting of k small straight lines, where p is a prime number and k is a natural number.
Exercise: Show that all the numbers of the form $p.q$ where p and q are two distinct prime numbers always have the natural geometry of a square.
Exercise: Show that all the numbers of the form $p.q.r$ where p, q and r are three distinct prime numbers always have the natural geometry of a cube.
Research Exercise: Find the natural geometry of the numbers of the form $p^2.q$ where p and q are two distinct prime numbers. Also, try to generalize and predict the geometry of $p^k.q$ where k is any natural number.
Research Exercise: Find the natural geometry of $p^a.q^b.r^c$ where p, q, and r are three distinct prime numbers and a, b, and c are natural numbers.
Let's end the discussion with the geometry of {18, 30}. First let us define what I mean by it.
We define the natural geometry of two natural numbers quite naturally as a natural extension from that of a single number.
Take two natural numbers a and b. Consider the divisors of both a and b and follow the rules of the game on the set of divisors of both a and b. The shape that we get is called the natural geometry of {a, b}.
You can try it yourself and find out that the natural geometry of {18, 30} looks like the following:
Sit on this chair, grab a cup of coffee and set off to discover.
The numbers are eagerly waiting for your comments. 🙂
Please mention your observations, ideas, and proofs of the exercises in the comments section. Also think about what different shapes we can get from the numbers.
Also visit: Thousand Flowers Program
|
2021-10-21 16:56:18
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5762816071510315, "perplexity": 498.5360671150418}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585439.59/warc/CC-MAIN-20211021164535-20211021194535-00383.warc.gz"}
|
https://www.gradesaver.com/textbooks/science/chemistry/chemistry-and-chemical-reactivity-9th-edition/chapter-6-the-structure-of-atoms-study-questions-page-247c/60
|
## Chemistry and Chemical Reactivity (9th Edition)
Published by Cengage Learning
# Chapter 6 The Structure of Atoms - Study Questions - Page 247c: 60
#### Answer
See the answer below.
#### Work Step by Step
$\Delta E=-Rhc(1/n_f^2-1/n_i^2)$
a) $\Delta E = -0.980\,Rhc$
b) $\Delta E = -0.0074\,Rhc$
c) $\Delta E = -0.75\,Rhc$
A releases the most energy, thus having the biggest frequency (ii) and the shortest wavelength (iii). B releases the least amount of energy (i).
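A small numeric sketch of that comparison (only the three coefficients of Rhc from the answer are used; the transitions themselves are not specified here):

```python
# Energy released, in units of R*h*c (magnitudes of the Delta E values above)
released = {"A": 0.980, "B": 0.0074, "C": 0.75}

# E = h*nu and lambda = c/nu, so more energy => higher frequency => shorter wavelength
ranked = sorted(released, key=released.get, reverse=True)
print("most energy, highest frequency, shortest wavelength:", ranked[0])  # A
print("least energy:", ranked[-1])                                        # B
```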
|
2022-08-09 16:52:15
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6413010954856873, "perplexity": 2516.794099615935}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571056.58/warc/CC-MAIN-20220809155137-20220809185137-00182.warc.gz"}
|
https://rlang.r-lib.org/reference/rst_abort.html
|
The abort restart is the only restart that is established at top level. It is used by R as a top-level target, most notably when an error is issued (see abort()) that no handler is able to deal with (see with_handlers()).
rst_abort()
## Life cycle
All the restart functions are in the questioning stage. It is not clear yet whether we want to recommend restarts as a style of programming in R.
## See also
rst_jump(), abort()
## Examples
# The abort restart is a bit special in that it is always
# registered in a R session. You will always find it on the restart
# stack because it is established at top level:
rst_list()
#> [[1]]
#> <restart: abort >
#>
# You can use the above restart to jump to top level without
# signalling an error:
if (FALSE) {
fn <- function() {
cat("aborting...\n")
rst_abort()
cat("This is never called\n")
}
{
fn()
cat("This is never called\n")
}
}
# The above restart is the target that R uses to jump to top
# level when critical errors are signalled:
if (FALSE) {
{
abort("error")
cat("This is never called\n")
}
}
# If another abort restart is specified, errors are signalled as
# usual but then control flow resumes from the new restart:
if (FALSE) {
out <- NULL
{
out <- with_restarts(abort("error"), abort = function() "restart!")
cat("This is called\n")
}
cat("out has now become:", out, "\n")
}
|
2020-01-20 19:18:26
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3517833650112152, "perplexity": 7242.702552553034}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250599718.13/warc/CC-MAIN-20200120165335-20200120194335-00502.warc.gz"}
|
https://tex.stackexchange.com/questions/458802/beamer-navigation-sidebar-change-colors-of-name-and-title
|
# Beamer navigation sidebar: change colors of name and title
I set up a navigation sidebar on the right using: \useoutertheme[right,width = 3cm]{sidebar}
Now it looks like this:
First Question:
How can I change the color of the title "The Title"?
In the manual I only found \setbeamercolor{section in sidebar}{...}, but I couldn't find the name for the title; title in sidebar did not work.
Second Question:
Is it possible to change the alignment of the sections (testsection, hallo) to centered and leave out the subsections, or to draw a vertical line on the left side of the sidebar?
EDIT: Minimum Working Example:
%%% For normal presentations
%\documentclass{beamer}
%%%
%%% For handouts with lots of extra notes
\documentclass[handout]{beamer}
\usepackage{pgfpages}
\title{The Title}
\author{The Author}
%\usetheme{Copenhagen}
\useoutertheme[right,width = 3cm]{sidebar}
\begin{document}
\section{testsection}
\begin{frame}
Here's some content, with no notes added.
\end{frame}
\section{hallo}
\begin{frame}
Here's some content, with notes added.
\end{frame}
\section{asdf}
\end{document}
• Welcome to TeX.SE. It would be helpful if you composed a fully compilable minimal working example (MWE) including \documentclass and the appropriate packages that sets up the problem. While solving problems can be fun, setting them up is not. Then, those trying to help can simply cut and paste your MWE and get started on solving the problem. – samcarter_is_at_topanswers.xyz Nov 7 '18 at 15:29
• I edited the question. Could somebody unhold it? – musthave Nov 11 '18 at 11:42
• Thank you for taking the time to add an example document that reproduces the issue; an MWE makes it much easier to investigate your issue and is often even necessary to make your question answerable in the first place. I'm sure the question will be reopened and answered in no time. For future questions you may want to keep in mind that it is generally preferred on this site to ask only one question per question. So in future you may want to split questions like this with two problems into two separate questions. – moewe Nov 11 '18 at 12:01
1. To change the colour of the title:
\setbeamercolor{title in sidebar}{fg=red}
2. The option hideallsubsections will hide the subsections
3. To centre the content of the sidebar, you can add center to the sidebar format
\documentclass{beamer}
\title{The Title}
\author{The Author}
\useoutertheme[right,width = 3cm,hideallsubsections]{sidebar}
\setbeamercolor{title in sidebar}{fg=red}
\makeatletter
\def\beamer@sidebarformat#1#2#3{%
\begin{beamercolorbox}[wd=\beamer@sidebarwidth,leftskip=#1,rightskip=1ex plus1fil,vmode,center]{#2}
\vbox{}%
#3\par%
\vbox{}%
\vskip-1.5ex%
\end{beamercolorbox}
}
\makeatother
\begin{document}
\section{testsection}
\subsection{subsection name}
\begin{frame}
Here's some content, with no notes added.
\end{frame}
\section{hallo}
\begin{frame}
Here's some content, with notes added.
\end{frame}
\end{document}
• I like it. Thank you. May I ask where you got that information from? I searched the whole beamer user guide but couldn't find title in sidebar. For example: where would I search if I also wanted to change the color of the current section in the sidebar? – musthave Nov 11 '18 at 15:23
• @musthave see tex.stackexchange.com/a/458193/36296 for a summary how to find out colour names – samcarter_is_at_topanswers.xyz Nov 11 '18 at 16:17
|
2021-01-21 05:24:51
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5642637014389038, "perplexity": 1323.2831420371801}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703522242.73/warc/CC-MAIN-20210121035242-20210121065242-00411.warc.gz"}
|
https://socratic.org/questions/how-do-you-find-the-definite-integral-for-xcos-5x-2-dx-for-the-intervals-0-4sqrt#248100
|
# How do you find the definite integral of xcos(5x^2) dx on the interval [0, 4sqrtpi]?
Apr 1, 2016
Zero
#### Explanation:
${\int}_{0}^{4 \sqrt{\pi}} x \cos \left(5 {x}^{2}\right) \mathrm{dx}$
Use ${x}^{2}$ as your variable of integration instead of $x$:
$d \left({x}^{2}\right) = 2 x \mathrm{dx}$
Hence
${\int}_{0}^{4 \sqrt{\pi}} x \cos \left(5 {x}^{2}\right) \frac{1}{2 x} d \left({x}^{2}\right)$
$\frac{1}{2} {\int}_{0}^{4 \sqrt{\pi}} \cos \left(5 {x}^{2}\right) d \left({x}^{2}\right)$
$\frac{1}{10} \sin \left(5 {x}^{2}\right)$ from $0 \to 4 \sqrt{\pi}$
You can immediately see that the sine term will give zero value for both limits.
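A quick numerical sanity check in Python (a minimal sketch using scipy; the integrand oscillates, so the subdivision limit is raised):

```python
import numpy as np
from scipy.integrate import quad

f = lambda x: x * np.cos(5 * x**2)
value, err = quad(f, 0, 4 * np.sqrt(np.pi), limit=500)
print(value)  # ~0

# Antiderivative check: F(x) = sin(5 x^2) / 10 vanishes at both limits
F = lambda x: np.sin(5 * x**2) / 10
print(F(4 * np.sqrt(np.pi)) - F(0))  # 0 up to floating-point error
```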
|
2021-10-16 15:55:00
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 8, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9476823806762695, "perplexity": 1155.796752109937}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323584886.5/warc/CC-MAIN-20211016135542-20211016165542-00640.warc.gz"}
|
https://gyre.readthedocs.io/en/v6.0.1/appendices/comp-ptrope/solution-method.html
|
# Solution Method¶
## Specification¶
The structure of a composite polytrope is specified completely by
• a set of $$\nreg$$ polytropic indices $$n_{i}$$
• a set of $$\nreg-1$$ boundary coordinates $$z_{i-1/2}$$
• a set of $$\nreg$$ density jumps $$\Delta_{i-1/2} \equiv \ln [\rho_{i}(z_{i-1/2})/\rho_{i-1}(z_{i-1/2})]$$
Although the normalizing densities $$\rho_{i,0}$$ have so far been left unspecified, it’s convenient to choose them as the density at the beginning of their respective regions.
## Solution¶
The structure equations may be solved as an initial value problem. In the first region ($$i=1$$) this IVP involves integrating the Lane-Emden equation (12) from the center $$z=0$$ to the first boundary $$z=z_{3/2}$$, with the initial conditions
$\begin{split}\left. \begin{gathered} \theta_{i} = 1, \\ \theta'_{i} = 0, \\ B_{1} = 1, \\ t_{1} = 1 \end{gathered} \right\} \quad \text{at}\ z=0\end{split}$
(here, $$t_{i} \equiv \rho_{i,0}/\rho_{1,0}$$).
The IVP in the intermediate regions ($$i = 2,\ldots,\nreg-1$$) involves integrating from $$z=z_{i-1/2}$$ to $$z=z_{i+1/2}$$, with initial conditions established from the preceding region via
$\begin{split}\left. \begin{gathered} \theta_{i} = 1, \\ \theta'_{i} = \frac{n_{i-1} + 1}{n_{i} + 1} \frac{\theta_{i-1}^{n_{i-1}+1}}{\theta_{i}^{n_{i}+1}} \frac{t_{i}}{t_{i-1}} \, \theta'_{i-1}, \\ B_{i} = \frac{n_{i-1} + 1}{n_{i} + 1} \frac{\theta_{i}^{n_{i}+1}}{\theta_{i-1}^{n_{i-1}+1}} \frac{t_{i}^{2}}{t_{i-1}^{2}} \, B_{i-1}, \\ \ln t_{i} = \ln t_{i-1} + n_{i-1} \ln \theta_{i-1} - n_{i} \ln \theta_{i} + \Delta_{i-1/2}. \end{gathered} \right\} \quad \text{at}\ z=z_{i-1/2}\end{split}$
The IVP in the final region ($$i=\nreg$$) involves integrating from $$z_{\nreg-1/2}$$ until $$\theta_{\nreg} = 0$$. This point defines the stellar surface, $$z=z_{\rm s}$$. For some choices of $$n_{i}$$, $$z_{i-1/2}$$ and/or $$\Delta_{i-1/2}$$, the point $$\theta=0$$ can arise in an earlier region $$i = \nreg_{\rm t} < \nreg$$; in such cases, the model specification must be truncated to $$\nreg_{\rm t}$$ regions.
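As an illustration of such an IVP, here is a minimal Python sketch for the classical single-region Lane-Emden problem (an independent sketch, not GYRE code; starting slightly off-center with the series θ ≈ 1 - z²/6 is a standard workaround for the 2/z coordinate singularity):

```python
import numpy as np
from scipy.integrate import solve_ivp

n = 1.5  # polytropic index of the single region

def lane_emden(z, y):
    """y = [theta, theta']; theta'' = -theta^n - (2/z) theta'."""
    theta, dtheta = y
    return [dtheta, -max(theta, 0.0)**n - 2.0 * dtheta / z]

def surface(z, y):
    return y[0]          # theta = 0 defines the surface z_s
surface.terminal = True  # stop the integration there

z0 = 1e-6                            # start just off-center
y0 = [1.0 - z0**2 / 6.0, -z0 / 3.0]  # series expansion about z = 0

sol = solve_ivp(lane_emden, (z0, 20.0), y0, events=surface, rtol=1e-10, atol=1e-12)
print("surface at z_s =", sol.t_events[0][0])  # ~3.6538 for n = 1.5
```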
|
2023-03-31 11:57:23
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9614562392234802, "perplexity": 582.8444092093695}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296949642.35/warc/CC-MAIN-20230331113819-20230331143819-00432.warc.gz"}
|
https://lavelle.chem.ucla.edu/forum/viewtopic.php?f=150&t=43632&p=150483
|
## Elementary Reactions and Reaction Mechanisms
$K = \frac{k_{forward}}{k_{reverse}}$
Kimberly 1H
Posts: 63
Joined: Fri Sep 28, 2018 12:17 am
Been upvoted: 1 time
### Elementary Reactions and Reaction Mechanisms
Can someone please explain how elementary reactions relate to reaction mechanisms? And how do the two terms explain the decomposition of O3 to O2?
Camille Marangi 2E
Posts: 60
Joined: Fri Sep 28, 2018 12:26 am
### Re: Elementary Reactions and Reaction Mechanisms
Elementary reactions are the step-by-step sequences that make up overall reaction mechanisms. For example, 3 elementary steps can add up to one overall reaction mechanism with intermediates, catalysts, etc. The slow step of a mechanism determines the rate law, and the overall reaction formula can be determined from the reaction mechanism.
EllerySchlingmann1E
Posts: 76
Joined: Fri Sep 28, 2018 12:24 am
### Re: Elementary Reactions and Reaction Mechanisms
To answer the second part of your question: the decomposition of O3 to O2 can be hypothesized as occurring by different reaction mechanisms that are each made up of different elementary reactions. The decomposition can occur by a one-step mechanism composed of just one elementary reaction between O3 molecules to produce O2, or it can take place by a two-step mechanism where O acts as an intermediate between the two elementary reactions which make up this mechanism. Hope that helps!
|
2020-02-19 18:04:13
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 1, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5042865872383118, "perplexity": 2038.529644297627}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875144165.4/warc/CC-MAIN-20200219153707-20200219183707-00451.warc.gz"}
|
http://umj.imath.kiev.ua/volumes/issues/?lang=en&year=1996%CE%BDmber=6&number=2
|
Article (Ukrainian)
### Turning point in a system of differential equations with analytic operator
Ukr. Mat. Zh. - 1996. - 48, № 2. - pp. 147-160
We construct uniform asymptotics for a solution of a system of singularly perturbed differential equations with turning point. We consider the case where the boundary operator analytically depends on a small parameter.
Article (Ukrainian)
### Global solutions of a two-dimensional initial boundary-value problem for a system of semilinear magnetoelasticity equations
Ukr. Mat. Zh. - 1996. - 48, № 2. - pp. 161-167
We prove the theorem on the existence and uniqueness of global solutions of a system of semilinear magnetoelasticity equations in a two-dimensional space.
Article (Russian)
### Approximation properties of systems of exponentials in one space of analytic functions
Ukr. Mat. Zh. - 1996. - 48, № 2. - pp. 168-183
We obtain a criterion of completeness of a system of exponentials in the Hardy-Smirnov spaces in unbounded convex polygons and study the properties of incomplete systems of exponentials.
Article (Ukrainian)
### Representation and investigation of solutions of a nonlocal boundary-value problem for a system of partial differential equations
Ukr. Mat. Zh. - 1996. - 48, № 2. - pp. 184-194
We study the boundary-value problem for a system of partial differential equations with constant coefficients with conditions nonlocal in time. By using a metric approach, we prove the well-posedness of the problem in the scale of Sobolev spaces of functions periodic in space variables. By using matrix calculus, we construct an explicit representation of a solution.
Article (Ukrainian)
### On the maximum principle for ultraparabolic equations
Ukr. Mat. Zh. - 1996. - 48, № 2. - pp. 195-201
We prove the maximum principle and various modifications of it for one class of degenerate parabolic equations.
Article (Russian)
### Space-time localization in problems with free boundaries for a nonlinear second-order equation
Ukr. Mat. Zh. - 1996. - 48, № 2. - pp. 202-211
For thermal and diffusion processes in active media described by nonlinear evolution equations, we study the phenomena of space localization and stabilization for finite time.
Article (Russian)
### On one approach to the discretization of the Lavrent’ev method
Ukr. Mat. Zh. - 1996. - 48, № 2. - pp. 212-219
We propose a new scheme of discretization of the Lavrent’ev method for operator equations of the first kind with self-adjoint nonnegative operators of certain “smoothness.” This scheme is more economical in the sense of the amount of used discrete information as compared with traditional approaches.
Article (Ukrainian)
### Nonlinear integrable systems related to the elliptic Lie-Baxter algebra
Ukr. Mat. Zh. - 1996. - 48, № 2. - pp. 220-235
We construct a hierarchy of Poisson Hamiltonian structures related to an “elliptic” spectral problem and determine the generating operators for the equation of the asymmetric chiral O(3) field.
Article (Russian)
### Approximation of classes of analytic functions by algebraic polynomials and Kolmogorov widths
Ukr. Mat. Zh. - 1996. - 48, № 2. - pp. 236-250
We obtain estimates of the best polynomial approximations, uniform in the closure $B$ of Faber domains of the complex plane $ℂ$, for functions continuous in $B$ and defined by Cauchy-type integrals with densities possessing certain generalized differential properties. We establish estimates exact in order for the Kolmogorov widths of classes of such functions in relevant functional spaces.
Article (Ukrainian)
### On Isotopes of Groups. III
Ukr. Mat. Zh. - 1996. - 48, № 2. - pp. 251-259
We describe normal congruences of group isotopes, establish criteria of homomorphism and isomorphism, and select the methods for description of isotopes up to isomorphism. In addition, we establish a criterion for a subset to be a subquasigroup of a group isotope and describe subquasigroups of certain classes of group isotopes. The obtained results are applied to the investigation of left distributive quasi-groups.
Article (Russian)
### Strong summability of orthogonal expansions of summable functions. I
Ukr. Mat. Zh. - 1996. - 48, № 2. - pp. 260-277
We study the problem of strong summability of Fourier series in orthonormal systems of polynomial-type functions and establish local characteristics of the points of strong summability of series of this sort for summable functions. It is shown that the set of these points is a set of full measure in the region of uniform boundedness of systems under consideration.
Article (Russian)
### On the Erugin and Floquet-Lyapunov theorems for countable systems of difference equations
Ukr. Mat. Zh. - 1996. - 48, № 2. - pp. 278-284
For linear difference equations in the space of bounded number sequences, we prove an analog of the Erugin theorem on reducibility and present sufficient conditions for the reducibility of countable linear systems of difference equations with periodic coefficients.
Anniversaries (Ukrainian)
### Evgenii Yakovlevich Remez
Ukr. Mat. Zh. - 1996. - 48, № 2. - pp. 285-286
Brief Communications (Russian)
### Application of ateb-functions to the construction of solutions of some nonlinear partial differential equations
Ukr. Mat. Zh. - 1996. - 48, № 2. - pp. 287-288
We construct asymptotic approximations of one-frequency solutions of some nonlinear partial differential equations by using periodic Ateb-functions.
|
2020-02-19 16:28:35
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4983859658241272, "perplexity": 974.7134370299304}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875144165.4/warc/CC-MAIN-20200219153707-20200219183707-00128.warc.gz"}
|
https://help.altair.com/hwsolvers/ms/topics/solvers/ms/optimization_capabilities_response_variables_r.htm
|
# Response Variables
Model behavior is captured in terms of Response Variables.
These are the functions $\psi \left(...\right)$ in the equations in the Optimization Problem Formulation section. The optimizer in MotionSolve works with Response Variables, which are used to define the cost and constraint functions for the optimization problem. The MotionSolve optimization toolkit provides a library of responses that you can use to define cost and constraint functions. You can, if you wish, create your own response variables and use them.
The following responses are available in MotionSolve:
|
2023-04-02 12:28:54
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 1, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.34789717197418213, "perplexity": 604.0179728589299}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296950528.96/warc/CC-MAIN-20230402105054-20230402135054-00466.warc.gz"}
|
https://www.mapleprimes.com/posts/1606
|
## Document attachments using links in MaplePrimes...
by: MaplePrimes
The above doc/worksheet can be clicked and viewed.
The YouTube URL is given below. In case you cannot click and view it,
you can copy and paste it into a new window.
https://youtu.be/0pDa4FWMSQo
## Maple Evaluation Order Example
by: Maple 2020
The order in which expressions evaluate in Maple is something that occasionally causes even advanced users to make syntax errors.
I recently saw a single line of Maple code that provided a good example of a command not evaluating in the order the user desired.
The command in question (after calling with(plots):) was
animate(display, [arrow(<cos(t), sin(t)>)], t = 0 .. 2*Pi)
This resulted in the error:
Error, (in plots/arrow) invalid input: plottools:-arrow expects its 3rd argument, pv, to be of type {Vector, list, vector, complexcons, realcons}, but received 0.5000000000e-1*(cos(t)^2+sin(t)^2)^(1/2)
This error indicates that the issue in the animation is the arrow command
arrow(<cos(t), sin(t)>)
On its own, the above command gives the same error. However, the animate command takes values of t from 0 to 2*Pi and substitutes them in, so at first glance, you wouldn't expect the same error to occur.
What is happening is that the command
arrow(<cos(t), sin(t)>)
in the animate expression is evaluating fully, BEFORE the call to animate happens. This is due to Maple's automatic simplification (if this expression contained large subexpressions, or values calculated from other function calls that could be simplified down, you would want that to happen first, to prevent unneeded calculation time at each step).
So the question is how do we stop it evaluating ahead of time since that isn't what we want to happen in this case?
In order to do this we can use uneval quotes (the single quotes on the keyboard - not to be confused with the backticks).
animate(display, ['arrow'(<cos(t), sin(t)>)], t = 0 .. 2*Pi)
By surrounding the arrow function call in the uneval quotes, the evaluation is delayed by a level, allowing the animate call to happen first so that each value of t is then substituted before the arrow command is called.
Maple_Evaluation_Order_Example.mw
## One more way of inverse kinematics
by:
As a continuation of the posts:
https://www.mapleprimes.com/posts/208958-Determination-Of-The-Angles-Of-The-Manipulator
https://www.mapleprimes.com/posts/209255-The-Use-Of-Manipulators-As-Multiaxis
https://www.mapleprimes.com/posts/210003-Manipulator-With-Variable-Length-Of
But this time without Draghilev's method.
Motion along straight lines can replace motion along any spatial path (with any practical precision), which means that solving the inverse problem of the manipulator's kinematics can be reduced to solving the movement along a sequential set of segments. Thus, another general method for solving the manipulator inverse problem is proposed.
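The idea can be illustrated with a far simpler device than the manipulator below: here is a minimal Python sketch (not the attached Maple code) that drags a planar 2-link arm along a straight segment, solving the inverse kinematics at each interior point with a root finder and warm-starting each solve from the previous solution:

```python
import numpy as np
from scipy.optimize import fsolve

L1, L2 = 1.0, 1.0  # link lengths (illustrative values)

def forward(q):
    """End-effector position of a planar 2-link arm with joint angles q."""
    return np.array([L1 * np.cos(q[0]) + L2 * np.cos(q[0] + q[1]),
                     L1 * np.sin(q[0]) + L2 * np.sin(q[0] + q[1])])

# Walk a straight segment from A to B, solving IK at each interior point;
# each solution seeds the next, so the arm moves continuously along the path.
A, B = np.array([1.5, 0.0]), np.array([0.5, 1.2])
q = np.array([0.1, 0.5])  # initial guess
for s in np.linspace(0.0, 1.0, 21):
    target = (1 - s) * A + s * B
    q = fsolve(lambda q: forward(q) - target, q)
    print(f"s={s:.2f}  target={np.round(target, 3)}  q={np.round(q, 3)}")
```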
An example: a three-link manipulator with 5 degrees of freedom. Its last link, like the first link, geometrically corresponds to the radius of a sphere. We calculate the coordinates of the ends of its links when traversing a straight line segment. We do this in a loop over the interior points of the segment, using the procedure for finding real roots of polynomial systems of equations, RootFinding[Isolate]. First, we “remove” two “extra” degrees of freedom by adding two equations to the system. There can be an infinite set of options for the additional equations, as long as they correspond to the technical capabilities of the device. In this case, two of the simplest possible conditions were taken: one equation corresponds to the perpendicularity of the last (third) link to the trajectory segment itself, and the second corresponds to perpendicularity to the vector with coordinates <1,1,1>. As a result, we got four ways to move the manipulator along the same segment. Each of these ways is selected as one of the RootFinding[Isolate] solutions (in the text jj=1,2,3,4).
In this text jj=4
without_Draghilev_method.mw
As you can see, everything is very simple: there is practically no programming, and everything is performed exclusively by Maple procedures.
## The Physics Examples and LaTeX
by: Maple
One of the most interesting help page about the use of the Physics package is Physics,Examples. This page received some additions recently. It is also an excellent example of the File -> Export -> LaTeX capabilities under development.
Below you see the sections and subsections of this page. At the bottom, you have links to the updated PhysicsExample.mw worksheet, together with PhysicsExamples.PDF.
The PDF file has 74 pages and is obtained by going File -> Export -> LaTeX (FEL) on this worksheet to get a .tex version of it using an experimental version of Maple under development. The .tex file that results from FEL (used to get the PDF using TexShop on a Mac) has no manual editing. This illustrates the new automatic line-breaking, equation labels, colours, plots, and the new LaTeX translation of sophisticated mathematical physics notation used in the Physics package (command Latex in the Maplesoft Physics Updates, to be renamed latex in the upcoming Maple release).
In brief, this LaTeX project aims at writing entire course lessons or scientific papers directly in the Maple worksheet, which combines what-you-see-is-what-you-get editing capabilities with the Maple computational engine to produce mathematical results, and from there getting a LaTeX version of the work in two clicks, optionally hiding all the input (View -> Show/Hide -> Input).
PS: MANY THANKS to all of you who provided so-valuable feedback on the new Latex here in Mapleprimes.
Edgardo S. Cheb-Terrab
Physics, Differential Equations and Mathematical Functions, Maplesoft
## Polar Point Plot and Zooming
by:
A customer wondered if they could create a plot of points on a polar axis, and if they could zoom a polar plot.
We suggested a couple of options for making these plots, and (although zooming is not supported in polar coordinates in Maple 2020) an alternative to manually scale a polar plot.
Hope it's a useful demonstration.
polar-point-plot-with-zoom.mw
## 2-line tickmarks on Maple plots
by: Maple
A customer wondered if it was possible to create 2-line tickmarks on Maple plots like so
We achieved this using typeset, fractions, and backticks. Worksheet follows.
2line-tickmarks-mprimes.mw
## To export a worksheet as pdf on a Mac, select...
by: Maple
'Export as pdf' is not one of the 'Export as' options on a Mac. Instead, you should follow these instructions, from
?pdf:
1. Open the worksheet that you want to export.
2. From the File menu, select Print.
3. Specify options for export to PDF as described in Section Options for Export to PDF below.
4. From the PDF drop-down list, select Save as PDF...
5. In the Save As dialog, enter a filename.
6. Click Save.
## small note about the new Maple 2020.2 and Latex...
by: Maple 2020
I am testing Maple 2020.2 with the new Latex, with the latest Physics version 879.
The generated LaTeX now inserts \! between a symbol and the () next to it to improve the spacing. This post is just to let anyone using the mleftright package in LaTeX know that this will cause a problem, so it is better to remove that package if you are already using it.
Here is an example
eq:=y(x)=exp(x/2);
Latex(eq)
y \! \left(x \right) = {\mathrm e}^{\frac{x}{2}}
In earlier version of Physics:-Latex (now it is just Latex), the above generated this
y \left(x \right) = {\mathrm e}^{\frac{x}{2}}
Notice, no \! in earlier version.
If you happen to be using \usepackage{mleftright} to improve the spacing for \left and \right, as I was, you'll get a negative side effect. A solution is to remove this package. Here is an example showing the above LaTeX compiled with the package added and without it, so you can see the difference.
\documentclass[12pt]{book}
\usepackage{amsmath}
\usepackage{mleftright}
\mleftright
\begin{document}
which gives using latest Latex V 879. Maple 2020.2
$y \! \left(x \right) = {\mathrm e}^{\frac{x}{2}}$
And which gives using earlier Physics Latex. Using Maple 2020.1
$y \left(x \right) = {\mathrm e}^{\frac{x}{2}}$
\end{document}
This is the output without the package, obtained by removing the \usepackage{mleftright} line in the above LaTeX code and not calling \mleftright. Now the problem is gone:
I like the effect added by \!, which is a manual way to improve the spacing; it is what this package was doing.
Just be careful not to use the mleftright package now. It is a somewhat popular LaTeX package and was recommended some time ago to improve the spacing, but it will now overcorrect the spacing and appears to be no longer needed with the latest Maple LaTeX.
Maple 2020.2 includes corrections and improvements to printing and export to PDF, support for macOS 11.0, more MATLAB connectivity, resolves issues with the installation of the Maplesoft Physics Updates, and more. We recommend that all Maple 2020 users install these updates.
This update is available through Tools>Check for Updates in Maple, and is also available from our website on the Maple 2020.2 download page, where you can also find more details.
## Something about one degree of freedom for testing...
by: Maple 17
One forum had a topic related to such a platform. You can download a video of the movement of this platform from the picture at this link. The manufacturer calls it a three-degree platform, that is, one having three degrees of freedom. Three cranks rotate, and the platform is connected to them by connecting rods through ball joints. The movable beam (rocker arm) has torsion springs. I counted 4 degrees of freedom, because when all three cranks are locked the platform remains mobile, which is camouflaged by the springs of the rocker arm. Actually, the topic on the forum arose due to problems with the operation of this platform. Neither the designers nor those who operate the platform take into account this additional fourth, so-called parasitic, degree of freedom. Obviously, if we move the rocker with the cranks locked, the platform will move.
Based on this parasitic movement and a similar platform design, a very simple device is proposed that has one degree of freedom and is, in fact, a spatial linkage mechanism. We remove the 3 cranks, keep the connecting rods, convert the rocker arm into a crank, and get movements that are no worse than those of a platform with 6 degrees of freedom. And by changing the length of the crank, the plane of its rotation, etc., we can create simple structures with the required design trajectories of movement and one degree of freedom.
Two examples (two pictures for each example). The crank rotates in the vertical plane (side view and top view)
PLAT_1.mw
and the crank rotates in the horizontal plane (side view and top view).
The program consists of three parts: 1) choice of the starting position, 2) calculation of the trajectory, 3) construction of the picture. It is similar to the program in this topic.
## A little about controlled platforms (parallel...
by: Maple 17
Controlled platform with 6 degrees of freedom. It has three rotary-inclined racks of variable length:
and an example of movement parallel to the base:
Perhaps the Stewart platform may not reproduce such trajectories, but that is not the point. There is a way to select a design for those specific functions that our platform will perform. That is, first we consider the required trajectories of the platform movement, and only then we select a driving device that can reproduce them. For example, we can fix the extreme positions of the actuators during the movement of the platform and compare them with the capabilities of existing designs, or simulate your own devices.
In this case, the program consists of three parts. (The text of the program directly for the first figure : PLATFORM_6.mw) In the first part, we select the starting point for the movement of a rigid body with six degrees of freedom. Here three equations f6, f7, f8 are responsible for the six degrees of freedom. The equations f1, f2, f3, f4, f5 define a trajectory of motion of a rigid body. The coordinates of the starting point are transmitted via disk E for the second part of the program. In the second part of the program, the trajectory of a rigid body is calculated using the Draghilev method. Then the trajectory data is transferred via the disk E for the third part of the program.
In the third part of the program, the visualization is executed and the platform motion drive device is modeled.
It is like a sketch of a possible way to create controlled platforms with six degrees of freedom. Any device that can provide the desired trajectory can be inserted into the third part. At the same time, it is obvious that the geometric parameters of the movement of this device with the control of possible emergency positions and the solution of the inverse kinematics problem can be obtained automatically if we add the appropriate code to the program text.
Equations can be of any kind and can be combined with each other, and they must be continuously differentiable. But first, the equations must be reduced to uniform variables in order to apply the Draghilev method.
(These examples use implicit equations for the coordinates of the vertices of the triangle.)
## What to take care of when entering a tetrad
by: Maple 2020
In the study of the Gödel spacetime model, a tetrad was suggested in the literature [1]. Alas, upon entering the tetrad in question, Maple's Tetrads package complained that the matrix was not a tetrad! What went wrong? After an exchange with Edgardo S. Cheb-Terrab, Edgardo provided us with awfully useful comments regarding the use of the package and suggested that the problem together with its solution be presented in a post, as others may find it of some use for their work as well.
The Gödel spacetime solution to Einsten's equations is as follows.
>
(1)
>
(2)
Working with Cartesian coordinates,
>
(3)
the Gödel line element is
>
(4)
Setting the metric
>
(5)
The problem appeared upon entering the matrix M below supposedly representing the alleged tetrad.
>
>
(6)
Each of the rows of this matrix is supposed to be one of the null vectors . Before setting this alleged tetrad, Maple was asked to settle the nature of it, and the answer was that M was not a tetrad! With the Physics Updates v.857, a more detailed message was issued:
>
(7)
So there were actually three problems:
1 The entered entity was a null tetrad, while the default of the Physics package is an orthonormal tetrad. This can be seen in the form of the tetrad metric, or using the library commands:
>
(8)
>
(9)
>
(10)
2 The matrix M would only be a tetrad if the spacetime index is contravariant. On the other hand, the command IsTetrad will return true only when M represents a tetrad with both indices covariant. For instance, if the command IsTetrad is issued about the tetrad automatically computed by Maple, but is passed the matrix corresponding to with the spacetime index contravariant, false is returned:
>
(11)
>
(12)
3 The matrix M corresponds to a tetrad with different signature, (+---), instead of Maple's default (---+). Although these two signatures represent the same physics, they differ in the ordering of rows and columns: the timelike component is respectively in positions 1 and 4.
The issue, then, became how to correct the matrix M to be a valid tetrad: either change the setup, or change the matrix M. Below the two courses of action are provided.
First the simplest: change the settings. According to the message (7), setting the tetrad to be null, changing the signature to be (+---) and indicating that M represents a tetrad with its spacetime index contravariant would suffice:
>
(13)
The null tetrad metric is now as in the reference used.
>
(14)
Checking now with the spacetime index contravariant
>
(15)
At this point, the command IsTetrad provided with the equation (15), where the left-hand side has the information that the spacetime index is contravariant
>
(16)
Great! one can now set the tetrad M exactly as entered, without changing anything else. In the next line it will only be necessary to indicate that the spacetime index, , is contravariant.
>
(17)
The tetrad is now the matrix M. In addition to checking this tetrad making use of the IsTetrad command, it is also possible to check the definitions of tetrads and null vectors using TensorArray.
>
(18)
>
(19)
For the null vectors:
>
(20)
>
(21)
From its Weyl scalars, this tetrad is already in the canonical form for a spacetime of Petrov type "D": only
>
(22)
>
(23)
Attempting to transform it into canonical form returns the tetrad (17) itself
>
(24)
Let's now obtain the correct tetrad without changing the signature as done in (13).
Start by changing the signature back to
>
(25)
So again, M is not a tetrad, even if the spacetime index is specified as contravariant.
>
(26)
By construction, the tetrad M has its rows formed by the null vectors with the ordering . To understand what needs to be changed in M, define those vectors, independent of the null vectors (with underscore) that come with the Tetrads package.
>
and set their components using the matrix M taking into account that its spacetime index is contravariant, and equating the rows of M using the ordering :
>
(27)
>
(28)
Check the covariant components of these vectors towards comparing them with the lines of the Maple's tetrad
>
(29)
This shows the null vectors (with underscore) that come with Tetrads package
>
(30)
So (29) computed from M is the same as (30) computed from Maple's tetrad.
But, from (30) and the form of Maple's tetrad
>
(31)
for the current signature
>
(32)
we see the ordering of the null vectors is , not used in [1] with the signature (+ - - -). So the adjustment required in M, resulting in , consists of reordering M's rows to be
>
(33)
>
(34)
Comparing with the tetrad computed by Maple (equations (24) and (31)), they are actually the same.
References
[1] Rainer Burghardt, "Constructing the Gödel Universe", arXiv:gr-qc/0106070, 2001.
[2] Frank Grave and Michael Buser, "Visiting the Gödel Universe", IEEE Trans. Vis. Comput. Graph., 14(6):1563-70, 2008.
## Computing a tetrad in canonical form - automatically...
by: Maple
In a recent question in Mapleprimes, a spacetime (metric) solution to Einstein's equations, from chapter 27 of the book of Exact Solutions to Einstein's equations [1] was discussed. One of the issues was about computing a tetrad for that solution [27, 37, 1] such that the corresponding Weyl scalars are in canonical form. This post illustrates how to do that, with precisely that spacetime metric solution, in two different ways: 1) automatically, all in one go, and 2) step-by-step. The step-by-step computation is useful to verify results and also to compute different forms of the tetrads or Weyl scalars. The computation below is performed using the latest version of the Maplesoft Physics Updates.
>
>
(1)
The starting point is this image of page 421 of the book of Exact Solutions to Einstein's equations, formulas (27.37)
Load the solution [27, 37, 1] from Maple's database of solutions to Einstein's equations
>
(2)
>
(3)
The assumptions on the metric's parameters are
>
The line element is as shown in the second line of the image above
>
(4)
>
(5)
The Petrov type of this spacetime solution is
>
(6)
The null tetrad computed by the Maple system using a general algorithms is
>
>
(7)
According to the help page TransformTetrad , the canonical form of the Weyl scalars for each different Petrov type is
So for type II, when the tetrad is in canonical form, we expect only and different from 0. For the tetrad computed automatically, however, the scalars are
>
(8)
The question is, how to bring the tetrad (equation (7)) into canonical form. The plan for that is outlined in Chapter 7, by Chandrasekhar, page 388, of the book "General Relativity, an Einstein centenary survey", edited by S.W. Hawking and W.Israel. In brief, for Petrov type II, use a transformation of to make , then a transformation of making , finally use a transformation of making . For an explanation of these transformations see the help page for TransformTetrad . This plan, however, is applicable if and only if the starting tetrad results in , which we see in (8) it is not the case, so we need, in addition, before applying this plan, to perform a transformation of making
In what follows, the transformations mentioned are first performed automatically, in one go, letting the computer deduce each intermediate transformation, by passing to TransformTetrad the optional argument canonicalform. Then the same result is obtained by transforming the starting tetrad one step at a time, arriving at the same Weyl scalars. That illustrates both how to get the result exploiting advanced functionality and how to verify it by performing each step, as well as how to get any desired different form of the Weyl scalars.
Although it is possible to perform both computations, automatically and step-by-step, departing from the tetrad (7), that tetrad and the corresponding Weyl scalars (8) have radicals, making the readability of the formulas at each step less clear. Both computations, can be presented in more readable form without radicals departing from the tetrad shown in the book, that is
>
(9)
>
(10)
The corresponding Weyl scalars free of radicals are
>
(11)
So set this tetrad as the starting point
>
(12)
All the transformations performed automatically, in one go
To arrive in one go, automatically, to a tetrad whose Weyl scalars are in canonical form as in (31), use the optional argument canonicalform:
>
>
(13)
Note the length of
>
(14)
That length corresponds to several pages. This happens frequently: you get Weyl scalars with a minimum of residual invariance, at the cost of a more complicated tetrad.
The transformations step-by-step leading to the same canonical form of the Weyl scalars
Step 0
As mentioned above, to apply the plan outlined by Chandrasekhar, the starting point needs to be a tetrad with , not the case of (9), so in this step 0 we use a transformation of making . This transformation introduces a complex parameter E and to get any value of E suffices. We use :
>
(15)
>
(16)
>
(17)
Step 1
Next is a transformation of to make , that in the case of Petrov type II also implies on . According to the help page TransformTetrad, this transformation introduces a parameter B that, according to the plan outlined by Chandrasekhar in Chapter 7, page 388, is one of the two identical roots (out of the four roots) of the principal polynomial. To see the principal polynomial or, directly, its roots, you can use the PetrovType command:
>
(18)
The first two are the same and equal to -1
>
(19)
>
(20)
Check this result and the corresponding Weyl scalars to verify that we now have and
>
(21)
>
(22)
Step 2
Next is a transformation of that makes . This transformation introduces a parameter E that, according to Chandrasekhar's plan, can be taken equal to one of the roots of the Weyl scalar that corresponds to the transformed tetrad. So we need to proceed in four steps:
a. transform the tetrad, introducing a parameter E in the tetrad's components
b. compute the Weyl scalars for that transformed tetrad
c. take and solve for E
d. apply the resulting value of E to the transformed tetrad obtained in step a.
a. Transform the tetrad and, for simplicity, take E real
>
(23)
>
(24)
>
(25)
c. Solve, discarding the case which implies no transformation
>
(26)
d. Apply this result to the tetrad (23). In doing so, do not display the result, just measure its length (corresponds to two+ pages)
>
>
(27)
Check the scalars, we expect
>
(28)
Step 3
Use a transformation of making . Such a transformation changes , where we need to take , and without loss of generality we can take
Check first the value of in the last tetrad computed
>
(29)
So, the transformed tetrad whose corresponding Weyl scalars are in canonical form, with  and , is
>
>
(30)
>
(31)
These are the same scalars computed in one go in (13)
>
(32)
>
Edgardo S. Cheb-Terrab
Physics, Differential Equations and Mathematical Functions, Maplesoft
## Workaround for the problem of installing MapleCloud...
by: Maple 2020
Hi,
Some people using the Windows platform have had problems installing MapleCloud packages, including the Maplesoft Physics Updates. This problem does not happen on Macintosh or Linux/Unix, and it does not affect all Windows computers, only some of them; it is not a problem of the MapleCloud packages themselves, but of the package installer.
I understand that a solution to this problem will be presented within an upcoming Maple dot release.
Meantime, there is a solution: install a helper library, after which MapleCloud packages install without problems on all Windows machines. So if you are having trouble installing MapleCloud packages on Windows and prefer not to wait until that dot release, and would rather install this helper library, please email me at physics@maplesoft.com
Edgardo S. Cheb-Terrab
Physics, Differential Equations and Mathematical Functions, Maplesoft
## 2D Input operator assignment syntax
by: Maple
Caution: certain kinds of earlier input can affect the results of using the 2D Input syntax for operator assignment.
>
>
>
The following now produces a remember-table assignment,
instead of assigning a procedure to name f, even though by default
Typesetting:-Settings(functionassign)
is set to true.
>
>
>
>
>
With the previous line of code commented-out the following
line assigns a procedure to name f, as expected.
If you uncomment the previous line, and re-execute the whole
worksheet using !!! from the menubar, then the following will
https://www.quizover.com/course/section/introduction-conditional-probability-by-openstax
Conditional probability
The probability P(A) of an event A is a measure of the likelihood that the event will occur on any trial. New, but partial, information determines a conditioning event C , which may call for reassessing the likelihood of event A. For a fixed conditioning event C, this new assignment to all events constitutes a new probability measure. In addition, because of the way it is derived from the original, or prior, probability, the conditional probability measure has a number of special properties which are important in applications. Determination of the conditioning event is key.
Introduction
The probability $P\left(A\right)$ of an event A is a measure of the likelihood that the event will occur on any trial. Sometimes partial information determines that an event C has occurred. Given this information, it may be necessary to reassign the likelihood for each event A . This leads to the notion of conditional probability. For a fixed conditioning event C , this assignment to all events constitutes a new probability measure which has all the properties of the original probability measure. In addition, because of the way it is derived from the original, the conditional probability measure has a number of special properties which are important in applications.
Conditional probability
The original or prior probability measure utilizes all available information to make probability assignments $P\left(A\right),\phantom{\rule{0.277778em}{0ex}}P\left(B\right)$ , etc., subject to the defining conditions (P1), (P2), and (P3) . The probability $P\left(A\right)$ indicates the likelihood that event A will occur on any trial.
Frequently, new information is received which leads to a reassessment of the likelihood of event A . For example
• An applicant for a job as a manager of a service department is being interviewed. His résumé shows adequate experience and other qualifications. He conducts himself with ease and is quite articulate in his interview. He is considered a prospect highly likely to succeed. The interview is followed by an extensive background check. His credit rating, because of bad debts, is found to be quite low. With this information, the likelihood that he is a satisfactory candidate changes radically.
• A young woman is seeking to purchase a used car. She finds one that appears to be an excellent buy. It looks “clean,” has reasonable mileage, and is a dependable model of a well known make. Before buying, she has a mechanic friend look at it. He finds evidence that the car has been wrecked with possible frame damage that has been repaired. The likelihood the car will be satisfactory is thus reduced considerably.
• A physician is conducting a routine physical examination on a patient in her seventies. She is somewhat overweight. He suspects that she may be prone to heart problems. Then he discovers that she exercises regularly, eats a low fat, high fiber, variegated diet, and comes from a family in which survival well into their nineties is common. On the basis of this new information, he reassesses the likelihood of heart problems.
New, but partial, information determines a conditioning event C , which may call for reassessing the likelihood of event A . For one thing, this means that A occurs iff the event $AC$ occurs. Effectively, this makes C a new basic space. The new unit of probability mass is $P\left(C\right)$ . How should the new probability assignments be made? One possibility is to make the new assignment to A proportional to the probability $P\left(AC\right)$ . These considerations and experience with the classical case suggest the following procedure for reassignment. Although such a reassignment is not logically necessary, subsequent developments give substantial evidence that this is the appropriate procedure.
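The reassignment just described is the familiar conditional probability measure, stated here for reference: $$P\left(A\mid C\right)=\frac{P\left(AC\right)}{P\left(C\right)},\qquad P\left(C\right)>0.$$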
http://openstudy.com/updates/50f27495e4b0694eaccf5f54
1. UnkleRhaukus
\begin{align*} \operatorname{erf} x &=\frac2{\sqrt\pi}\int_0^x e^{-t^2}\,dt\\ &\,\vdots\\ &=\frac2{\sqrt\pi}\left\{x-\frac{x^3}3+\frac{x^5}{2!\,5}+\frac{x^7}{3!\,7}+\dots\right\} \end{align*}
2. UnkleRhaukus
I'm not sure how to turn the integral into the infinite series,
3. AravindG
as always this is above my level always wish i could help !
4. ParthKohli
Looks like a Maclaurin Series bro.
5. UnkleRhaukus
are you series ?
6. ParthKohli
lol $f(x) = \sum_{n = 0}^{\infty}\dfrac{f^{(n)}(0)}{n!}x^n$ Definitely not a Maclaurin.
7. ParthKohli
Not sure. I am not good at this :-(
8. UnkleRhaukus
ah yes Maclaurin Series (13) http://mathworld.wolfram.com/MaclaurinSeries.html
9. wio
looks like the series is $\frac{x^{2n+1}}{n!(2n+1)}$
10. wio
11. UnkleRhaukus
12. wio
Why not use: $\Large e^{x} = \sum_{n=0}^{\infty } \frac{x^n}{n!} \implies e^{-x^2} = \sum_{n=0}^{\infty } \frac{(-x^2)^n}{n!}$
13. wio
It's really easy to find the anti derivative
14. UnkleRhaukus
i think i got it now ! thanks!
15. wio
No problem,
16. wio
the part that threw me off was that there wasn't an alternating - +
17. UnkleRhaukus
oh dear i made an error in the question , it should be alternating , like you say @wio
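Putting the thread together, with the sign restored, the standard derivation is termwise integration of wio's expansion (stated here for completeness): $$\operatorname{erf} x = \frac{2}{\sqrt\pi}\int_0^x e^{-t^2}\,dt = \frac{2}{\sqrt\pi}\int_0^x \sum_{n=0}^{\infty}\frac{(-t^2)^n}{n!}\,dt = \frac{2}{\sqrt\pi}\sum_{n=0}^{\infty}\frac{(-1)^n\,x^{2n+1}}{n!\,(2n+1)}$$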
18. wio
@UnkleRhaukus Where are you getting the questions from anyway?
19. UnkleRhaukus
20. ParthKohli
What is all this \newcommand thing? Are we allowed to introduce new variables in MathJax?
21. wio
Is it a class you're getting it from?
22. wio
Or a book or what?
23. wio
@ParthKohli you're allowed to make command shortcuts, but I think it only works for your own post.
24. UnkleRhaukus
MATH202 Differential equations sophomore subject $\emph{Exercise 4D}1(d)$
25. UnkleRhaukus
26. experimentX
don't expand that series ... just integrate inside summation sign.
27. UnkleRhaukus
28. experimentX
there are many cases where you have to change functions into infinite series to evaluate integral with real methods. I think ... expanding the series will just mess up.
29. UnkleRhaukus
have i done something wrong?
30. experimentX
no it's okay ... just advised for simplicity.
31. experimentX
32. UnkleRhaukus
https://www.physicsforums.com/threads/force-and-acceleration.132285/
# Force and acceleration
1. Sep 16, 2006
### Warrzie
This one has me clueless. The problem reads:
"An object of mass m1 on a frictionless horizontal table is connected to an object of mass m2 through a very light pulley P1 and a light fixed pulley P2."
It wants me to state the relationship between the accelerations, with a1 and a2 corresponding to m1 and m2 respectively.
It then wants the tension in both strings, and finally the acceleration of each block in terms of m1, g, and m2
I've stared at the attached figure for a while and don't even understand how m2's string is connected to the floating pulley, or if that pulley even has an effect on the accelerations/tension. I want to say that a1=a2 since the blocks are attached to each other.
Any hints?
#### Attached Files:
• ###### p4-38.gif
File size:
10.3 KB
Views:
119
2. Sep 16, 2006
You may want to put a link to the image instead of an attachment, since these usually don't work.
3. Sep 16, 2006
### Astronuc
Staff Emeritus
The acceleration of m2 = acceleration of pulley P1. One has to determine the acceleration of m1 and the acceleration of P1. When P1 moves (translates) $\Delta{x}$, by what distance does m1 translate? One may assume the string tied to m2 is fixed to the axis of P1 and is of fixed length.
Last edited: Sep 16, 2006
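A sketch of the constraint Astronuc is hinting at, assuming the string running around P1 is inextensible and its far end is effectively anchored: when P1 translates $\Delta{x}$, a length $\Delta{x}$ of string is freed on each side of the pulley, so m1 translates $2\Delta{x}$. Hence $a_1 = 2a_2$: the block on the table accelerates twice as fast as the hanging block.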
http://math.stackexchange.com/questions/348066/normalizing-a-matrix
# Normalizing a matrix
I came across a step in an numerical algebra algorithm that says "Normalize the rows of matrix A such that they are unit-norm. Call U the normalized matrix."
I do something like this:
for i = 1:no_of_rows
    U(i,:) = A(i,:) ./ norm(A(i,:));
end
My question is what norm should I use? Will $2$ norm as below : $$\|X\|_2=\sqrt{\sum_{k=1}^n|x_k|^2}$$ work here? Do I need to satisfy UU*=I ?
## 4 Answers
Most likely, they mean the Euclidean $2$ norm you mention. But the procedure makes sense for any norm on the row space.
The resulting matrix need not be unitary if the size of the matrix is $\geq 2$, i.e. you don't get $UU^*=I$ in general. Just start with the matrix whose coefficients are all equal to $35$, for instance. But that's ok. "Normalizing" the rows does not even require making the matrix "normal", a fortiori not unitary.
Isn't the row sum norm more natural here ? – Dominic Michaelis Apr 1 '13 at 11:41
Generally speaking, when the norm is not specified, the Euclidean norm is assumed, yes.
Note that if you are using Matlab, the bsxfun function can eliminate the for loop, which is always good:
U = bsxfun( @rdivide, A, sqrt(sum(abs(A).^2,2)) );
This does not necessarily result in a matrix that satisfies $UU^*=I$. But that is not being requested in the quoted statement.
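For what it's worth, here is a NumPy sketch of the same row normalization (our own illustration, not from the answers; the function name is arbitrary):

import numpy as np

def normalize_rows(A):
    """Return a copy of A with each row scaled to unit Euclidean (2-) norm."""
    norms = np.linalg.norm(A, axis=1, keepdims=True)  # shape (m, 1): one 2-norm per row
    return A / norms                                  # broadcasting divides row i by norms[i]

A = np.array([[3.0, 4.0],
              [1.0, 1.0]])
U = normalize_rows(A)
print(np.linalg.norm(U, axis=1))  # -> [1. 1.]

Like the bsxfun version, this assumes no row of A is identically zero.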
The more natural choice would be the row sum norm, in my opinion. Because you scale the matrix to get a better condition number, you have to scale it with a norm of the matrix, and you need to know which one you are using. The algorithm in Mathematica syntax would then be this one:
scale[m_] :=
  DiagonalMatrix[
    Table[
      1/Sum[Abs[m[[pinguin, affe]]], {affe, 1, Dimensions[m][[2]]}],
      {pinguin, 1, Dimensions[m][[1]]}
    ]
  ].m
This algorithm is more for intuition than for efficiency; I just multiply the rows of the matrix by the inverse of the row norms.
As for a unitary matrix: you need the norm induced by the Hermitian scalar product to be $1$ (and even much more), so you will get unitary matrices only by coincidence.
@julien The Row sum norm for matrices is for a matrix $\in \mathbb{K}^{m \times n}$ $$\max_{i} \sum_{k=1}^n |a_{ik}|$$ so it's NOT the euclidean norm, so the $\|\cdot\|_1$ norm would be the more natural choice for scaling – Dominic Michaelis Apr 1 '13 at 12:00
Ah, I see, that's what you meant. It is not more natural. It depends on the purpose, the context, etc... If I'm free to choose a norm, I will always pick a Euclidean one. – 1015 Apr 1 '13 at 12:01
Let $X_i$ denote the $i$-th row of the matrix $X$. Then $$XX^\ast=(X_{ij})_{n\times n}(X_{ij})_{n\times n}^\ast=(\langle X_i, X_j\rangle)_{n\times n}$$ Note that \begin{align} \| X \|_F = & \sqrt{\mbox{trace}\left[ XX^\ast\right]} \\ = & \sqrt{\sum_{i=1}^n\sum_{k=1}^n X_{ik}\cdot \overline{X_{ik}}} \\ = & \sqrt{\sum_{i=1}^n\sum_{k=1}^n |X_{ik}|^2} \\ = & \sqrt{\sum_{i=1}^n\langle X_i,X_i\rangle} \\ = & \sqrt{\sum_{i=1}^n\| X_i\|^2} \end{align} Then $XX^\ast=I$ implies $\| X_i\|=1$ for all $i=1,\ldots, n$. But the converse is not true: it holds if, and only if, the rows of $X$ form an orthonormal basis of $\mathbb{R}^n$.
https://sizespectrum.org/mizerExperimental/reference/getYieldVsFcurve.html
Calculates the yield of a species for a range of fishing mortalities for species across all gears at each simulation time step. Note: The function just returns the yield at the last time step of the simulation, which might not have converged, or might oscillate.
getYieldVsFcurve(params, ixSpecies, nSteps = 10, Fmax = 3, bPlot = FALSE)
## Arguments
params: An object of class MizerParams.
ixSpecies: The species number to make the calculation over.
nSteps: The number of steps in F to calculate.
Fmax: The maximum fishing mortality.
bPlot: Boolean that indicates whether a plot is to be made.
## Value
A list with yields and fishing mortalities.
## Examples
# \dontrun{
params <- newMultispeciesParams(NS_species_params_gears, inter)
#> Note: No h provided for some species, so using f0 and k_vb to calculate it.
#> Note: Because you have n != p, the default value is not very good.
#> Note: No ks column so calculating from critical feeding level.
#> Note: Using z0 = z0pre * w_inf ^ z0exp for missing z0 values.
#> Note: Using f0, h, lambda, kappa and the predation kernel to calculate gamma.
y <- getYieldVsFcurve(params, 11, bPlot = TRUE)
# }
https://plainmath.net/2445/solution-inequality-interval-notation-given-displaystyle-plus-less-then
Question
# The solution of the inequality and interval notation. Given: $$10+x<6x-10$$
The solution of the inequality and interval notation.
Given:
$$10+x<6x-10$$
2020-12-25
Concept used:
Consider the following steps to solve one variable linear inequality:
If an equation contains fractions or decimals, multiply both sides by the LCD to clear the equation of fractions or decimals.
Use the distributive property to remove parentheses if they are present.
Simplify each side of the inequality by combining like terms.
Get all variable terms on one side and all numbers on the other side by using the addition property of inequality.
Get the variable alone by using the multiplication property of equality.
The given inequality is
$$10+x<6x-10$$
Subtract $$6x$$ from both sides:
$$10+x-6x<6x-10-6x$$
$$10-5x<-10$$
Subtract $$10$$ from both sides:
$$10-5x-10<-10-10$$
Simplify further,
$$-5x<-20$$
Divide both sides by $$-5$$, reversing the inequality:
$$\frac{-5x}{-5}>\frac{-20}{-5}$$
$$x>4$$
The interval notation of the inequality is written as $$(4,\infty).$$
Hence, the solution of the inequality is $$x>4$$
and the interval notation is $$(4,\infty).$$
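A quick check: substituting $$x=5$$ gives $$10+5=15$$ and $$6(5)-10=20$$, and indeed $$15<20$$, consistent with $$x>4$$.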
http://mathhelpforum.com/math-topics/26538-substitution-word-problem.html
# Math Help - Substitution word problem
1. ## Substitution word problem
I realise I posted this in the wrong forum sorry
the problem is:
a silversmith has alloys that contain 40% silver and others that have 50% silver. A custom order requires 150g of 44% silver. How much of each alloy should be melted together to make the bracelet?
so far i got:
Let x represent the 40% alloy
Let y represent the 50% alloy.
x+y=150
0.4x+0.5y=0.44
I rearranged the first equation to be y=-x+150.
Then subbed it into the second to be 0.4x+0.5(-x+150)=0.44
but when I solved this, i got 745.6. Where did I go wrong?
2. Originally Posted by questions4u
I realise I posted this in the wrong forum sorry
the problem is:
a silversmith has alloys that contain 40% silver and others that have 50% silver. A custom order requires 150g of 44% silver. How much of each alloy should be melted together to make the bracelet?
so far i got:
Let x represent the 40% alloy
Let y represent the 50% alloy.
x+y=150
0.4x+0.5y=0.44
I rearranged the first equation to be y=-x+150.
Then subbed it into the second to be 0.4x+0.5(-x+150)=0.44
but when I solved this, i got 745.6. Where did I go wrong?
Your second equation is wrong. It should be equated to 66 (which is 44% of 150g), not 0.44. You let x and y represent grams of alloy in the first equation and then percentages of silver in the second; this inconsistency gave rise to your incorrect solution.
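Carrying the fix through (our arithmetic, for completeness):
x+y=150
0.4x+0.5y=66
Substituting y=-x+150 into the second equation: 0.4x+0.5(-x+150)=66, so 75-0.1x=66, giving x=90 and y=60.
Check: 0.4(90)+0.5(60)=36+30=66, which is 44% of 150. So melt 90g of the 40% alloy with 60g of the 50% alloy.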
3. thank you
https://blog.theleapjournal.org/2021/12/keshav-desiraju.html?m=0
## Sunday, December 05, 2021
### Keshav Desiraju
by Naman Shah.
Honoring the dead is tricky, more so when their loss is so unexpected. I imagined decades more of dialogue and friendship with Keshav before having to reflect this way. We all know what he accomplished, so I won’t dwell on that. Besides, legacy making is problematic and I think Keshav would’ve found it to be quite a bore. Instead, I would like to remember what made him special so we might imbibe those qualities ourselves:
1. Keshav spent time with young people and he had a blast doing so. He was curious and carried no airs about him. On the receiving end, whether he agreed or disagreed with you, it was such a rush to be taken seriously. One could also not escape noticing how very kind that act was and wanting to do the same for others. I remember a brisk winter walk in the Purana Qila zoo with him and another friend, a designer without an inkling of the policy world. Keshav still tried, for some time, to explain to her what his typical day entailed and in the end accepted with good humor and grace her synopsis “so you just sit in meetings and sign lots of papers?” as the superior if unflattering summary.
2. Keshav could acknowledge heartbreak. We know this from having seen it. His pre-election, overnight dismissal as health secretary by Congress, the party of his honored grandfather no less, broke him. He did not need the position to define himself, rather it was the sense of betrayal and losing a long awaited opportunity to do something of meaning (Keshav had turned down promotion twice to lead other ministries for what he saw as his best chance to make a difference in the misery of many). To outward appearances, he may have masked his emotions well. He remained active in many groups and found other new meanings. But underneath, there was pain, and his energy was never the same. For me, and I’m sure many of us, his experience had personal import. I had to reorient my theory of change, for if he couldn’t move the beast from within, what chance did I have?
3. Keshav always made space for others. I mean this both at the level of the individual and in his role in governance. For the former, Keshav was never too busy. We met during work hours for chai. He could spend the weekend sitting with his elderly aunt. He didn’t succumb to the modern plague of busyness and never invoked time scarcity as a status symbol. He thought it appropriate to write back to others who reached out to him. Unlike many of his peers, he identified as Keshav and never referred to himself as ‘the Government’. In the social realm, Keshav was a pluralist. He brought in a broader perspective to Nirman Bhawan, i.e. beyond the typical multilateral and Ansari Nagar crowd that are mustered for such things. He also made it a point to travel outside those confines. I can recall a vigorous nighttime meeting with our malaria team in Orissa. More than simple diversity, I think he sought the credibility and competency from those doing daily, direct work. Whether or not he had read Sen’s Positional Objectivity, his actions lived its message. The view from Delhi is limited. Also, ‘small’ things outside the Centre mattered. Or, at least, there was some justice in getting them resolved. In the midst of drafting national law, he pushed the NBME to expand nascent Family Medicine training at a handful of wonderful, small hospitals scattered in our hinterland.
4. Keshav loved to learn. While I admire and share the drive to fix problems, seeing the world in that lens alone gets bleak. Only making terrible situations passable misses the full range of human experience. He explored the beauty of literature, music, and museums. Keshav was joyous about these and, in sharing that joy, enriched those around him. At least ten books, both gifted by and borrowed from him, remain with me. To learn is to be open and to be open is to be liberal. From his philosophy, policy approach, and here, culturally, Keshav was the consummate Nehruvian Indian. The Oxbridge accent helped that impression too I guess. I am grateful to him as my introduction and living interlocutor to the rich, though diminishing, world of liberal Indian thought and politics.
5. Finally, Keshav was a terrific listener. There’s nothing to add here, as all the prior traits attest to this. His ability to listen was the foundation of the rest of the above. It was the simple basis for his widespread admiration and now its absence for our collective mourning.
I’ll end with what I think was the peculiar, though perhaps not for him, way we met. I had dropped by IIC for a mental health conference organized by colleagues. At some point they introduced me to Keshav, one of the speakers, and we found ourselves discussing literature, including how much we both enjoyed Beteille’s essay My Two Grandmothers. He then asked me what I was currently reading; it was Guha’s The Last Liberal. Then, sheepishly, as we bid farewell, he inquired whether I had read the dedication. I hadn’t, and imagine my surprise when I did so after reaching home. The most civilized of civil servants indeed.
http://mathoverflow.net/questions/95396/multivariate-gaussian-approximation-in-total-variation-distance/95401
# multivariate Gaussian approximation in total variation distance
I'm wondering if there's any general technique that gives the total variation distance between a distribution on $\mathbb{R}^n$ and $N(0, I_n)$.
My understanding is that Stein's method gives only Wasserstein distance in higher dimensions because the characterization of the multivariate Gaussian is a second-order differential equation (while it is a first-order differential equation in the one-dimensional case), so more regularity is required on test functions and thus it yields a weaker distance. And I understand that it is possible to improve Wasserstein distance to total variation distance if the distribution is log-concave.
What is the usual way to handle the total variation distance to multivariate Gaussian? I'm primarily interested in approximating $N(0,I_n)$ but the approximating distribution is not necessarily log-concave. Perhaps there's some easy way for this special case? Or is there any impossibility result?
Unfortunately I cannot help you, since I just found out about the Stein's method. However, I was wondering if you could point me in the direction of the material behind the second paragraph of your question. I have a similar problem: I am trying to upper-bound the total variation distance between $N(0,I_n)$ and another distribution. My other distribution happens to be a mixture of multivariate Gaussians with unit variance, but non-zero vectors of means. Thus, I am wondering about the applicability of the last sentence of the second paragraph to mixtures of log-concave distributions. – Bullmoose May 31 '13 at 8:30
Stein's method doesn't give total variation approximation in one dimension, either, without some kind of additional assumptions. This has nothing to do with Stein's method; for an impossibility result, any discrete distribution has maximal (1 or 2 depending on your normalization convention) total variation distance to any continuous (e.g. Gaussian) distribution. But of course you can approximate any distribution by a discrete distribution, in Wasserstein distance for example.
Stein's method can be used to derive total-variation bounds. This is because $d_{TV}(p_1,p_2) = \max_{\|h\| \leq 1} (E_{p_1}(h) - E_{p_2}(h))$. The properties of the solution of the "Stein differential equation" for Gaussian approximation in 1D are given in Lemma 2.4 of Chen, Goldstein, Shao (2011). – Guillaume Dehaene Feb 4 at 10:25
@Guillaume: You seem to have misread what I wrote. What I said is that total variation approximation is simply false without some kind of additional assumptions, so it is equally out of reach for any method. This is the kind of "impossibility result" the OP asked about. On the other had, if you do have appropriate additional assumptions, then Stein's method is often the best way to reach total variation bounds. – Mark Meckes Feb 4 at 14:11
Incidentally, the OP's question suggests that they might be familiar with arxiv.org/abs/math/0606073, from which you can see that I know perfectly well that Stein's method can be used to derive total variation bounds. – Mark Meckes Feb 4 at 14:16
I am not sure the following helps.
If $f$ is a prob.distribution on $\mathbb{R}^n$ having the same covariance matrix as $N(0, I_n)$, then \begin{eqnarray*}D(f\|N(0, I_n))&=& H(N(0, I_n))-H(f)\\\ &=& \frac{1}{2}\log((2\pi e)^n)-H(f)\end{eqnarray*} where $D(\cdot \| \cdot)$ denotes the KL divergence $H$ denotes the Shannon entropy.
Now, by Pinsker's inequality, \begin{align} \|f-N(0, I_n)\|_1 &\le \sqrt{2 D(f\|N(0, I_n))}\\ &= \sqrt{\log((2\pi e)^n)-2 H(f)} \end{align}
So the conclusion is that if the entropy of $f$ is close to that of the Gaussian, $f$ will be close to $N(0, I_n)$ in total variation. But it works only for those $f$ which also have covariance matrix $I_n$.
https://vrcacademy.com/calculator/sine-wave-angular-speed-frequency-calculator/
Sine Wave Angular Speed / Frequency Calculator
## Formula
### ω = 2 π f
Where,
ω = Angular Speed
2π ≈ 2 × 3.14159
f = Frequency
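Example: for f = 60 Hz, ω = 2π × 60 ≈ 376.99 rad/s.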
https://byjus.com/question-answer/a-ladder-20-ft-long-leans-against-a-vertical-wall-the-top-end-slides-downwards-1/
Question
A ladder $$20$$ ft long leans against a vertical wall. The top end slides downwards at the rate of $$2$$ ft per second. The rate at which the lower end moves on a horizontal floor when it is $$12$$ ft from the wall is
A
$$\dfrac{8}{3}$$
B
$$\dfrac{6}{5}$$
C
$$\dfrac{3}{2}$$
D
$$\dfrac{17}{4}$$
Solution
The correct option is A: $$\cfrac { 8 }{ 3 }$$.
Since ABC is a right-angled triangle, $$AC^2+AB^2=BC^2\Rightarrow x^2+y^2=20^2 \quad (1)$$
When the foot of the ladder is 12 ft from the wall (that is, $$x=12$$): $$AC^2+12^2=20^2 \Rightarrow AC = 16 \Rightarrow y=16$$
Differentiating (1) w.r.t. time $$t$$: $$2x\dfrac{dx}{dt}+2y\dfrac{dy}{dt}=0$$
Putting $$x=12$$, $$y=16$$, $$\dfrac{dx}{dt}=v$$ and $$\dfrac{dy}{dt}=-2$$ (negative because $$y$$ is decreasing), we get $$2\times12\times v-2\times16\times2=0\Rightarrow v=\dfrac83$$
Therefore the answer is $$A$$.
https://discuss.workflowgen.com/t/how-to-get-back-to-iis-authentication-mode/1354/1
# How to get back to IIS authentication mode
In the Configuration Panel, you might have chosen by mistake WorkflowGen authentication mode instead of IIS authentication mode and now you are stuck because there are no passwords set for any user.
To revert back to IIS authentication, go to the `\wfgen\web.config` file and remove the following inside the system.webServer node:
```
<modules>
  <add name="ApplicationSecurityAuthenticationModule" type="Advantys.Security.Http.AuthenticationModule" />
</modules>
```
This should set you back to IIS authentication mode.
https://codereview.stackexchange.com/questions/14397/windows-application-that-draws-indias-flag/14407
# Windows application that draws India's flag
I have written a small Windows application that draws India's flag in the window. I am very new to using Visual C++ and I want to understand if my code can be improved further.
#include "afxwin.h"
#include <math.h>
class CMyApp : public CWinApp
{
public:
virtual BOOL InitInstance ();
};
class CMainWindow : public CFrameWnd
{
public:
CMainWindow();
protected:
afx_msg void OnPaint();
DECLARE_MESSAGE_MAP();
};
CMyApp myAPP;
BOOL CMyApp::InitInstance()
{
m_pMainWnd = new CMainWindow;
m_pMainWnd->ShowWindow(m_nCmdShow);
m_pMainWnd->UpdateWindow();
return TRUE;
}
BEGIN_MESSAGE_MAP (CMainWindow, CFrameWnd)
ON_WM_PAINT()
END_MESSAGE_MAP()
CMainWindow::CMainWindow ()
{
Create(NULL,_T("India's Flag"), WS_OVERLAPPEDWINDOW );
}
void IndiaFlag(CDC &dc, int x, int y)
{
dc.SetBkMode(TRANSPARENT);
CRect rect;
CPen pen(PS_SOLID, 1, RGB(0,0,0));
CPen *oldPen = dc.SelectObject(&pen);
{
CBrush brush(RGB(255,130,0));
CBrush *oldBrush = dc.SelectObject(&brush);
dc.Rectangle(x,y,x+200,(y+50));
}
{
CBrush brush(RGB(255,255,255));
CBrush *oldBrush = dc.SelectObject(&brush);
dc.Rectangle(x,(y+50),(x+200),(y+100));
CPen pen2(PS_SOLID, 1,RGB(0,0,255));
CPen *oldPen = dc.SelectObject(&pen2);
dc.Ellipse((x+75),(y+50),(x+125),(y+100));
double Nx,Ny;
for (int angle=0;angle<360; angle+=15)
{
int angle2 = angle;
double length = 25*(cos(double(angle2 *(3.14159265 / 180))));
double len2 = 25*(sin(double(angle2 *(3.14159265 / 180))));
int originX = (x+100);
int originY = (y+75);
Nx = originX + length;
Ny = originY - len2;
dc.MoveTo(originX,originY);
dc.LineTo(Nx,Ny);
}
}
{
CBrush brush(RGB(34,139,34));
CBrush *oldBrush = dc.SelectObject(&brush);
dc.Rectangle(x,(y+100),(x+200),(y+150));
}
}
void CMainWindow::OnPaint ()
{
CPaintDC dc(this);
IndiaFlag(dc, 150, 150);
}
In case you wondered, this is what the Indian flag looks like:
Thanks to your method IndiaFlag, I guess you're drawing a flag of India. The problem is, I don't know what it looks like. OK, I'm gonna google it.
It consists of 3 stripes: orange, white and green (top to bottom) and a sort of a wheel in the center. So, why don't you just say:
int const width = ...
int const height = ...
int const stripeHeight = height / 3;
drawStripe(0, 0, width, stripeHeight, Orange);
drawStripe(0, stripeHeight, width, 2 * stripeHeight, White);
drawStripe(0, 2 * stripeHeight, width, 3 * stripeHeight, Green);
drawWheel(width / 2, height / 2)
?
This code describes what you're drawing, in fact. There are no brushes or pencils here, because if I ask you what the flag looks like, for sure you're not going to tell me about pencils and brushes and how GDI works. You'll probably say there are 3 stripes and a wheel in the center, and that's what I want to hear.
So, why do you write the code like you're not going to tell anyone what happens in there?
Looking at your code, I'd say that all the names you give are terrible:
CBrush brush(RGB(255,255,255));
What does it mean? You're only going to have a single brush, so that's why you call it a "brush"? Nothing specific about it? If I see a name brush 20 lines later, do you expect me to remember it is white or any specific at all?
CBrush *oldBrush = dc.SelectObject(&brush);
An old brush? When I see oldBrush 20 lines later, do you expect me to remember what this brush is? Do YOU know why you need this oldBrush? I see no code where you switch back to it, so why do you even do it?
dc.Rectangle(x,(y+50),(x+200),(y+100));
What is x here? Why y+50? What is 500? What is 200?
CPen pen2(PS_SOLID, 1,RGB(0,0,255));
And once again, something called pen2. Is this pen any specific, or you're just telling me you're using 2 pens? Should I care? Etc.
P.S. Procedures/Functions/Methods describe actions, so their names are traditionally DoWhat. Your IndiaFlag violates the principle, because it says What.
• "I see no code where you switch back to it, so why do you even do it?" That's a bug. You must reselect the original object into the DC before releasing it. – Cody Gray Jan 25 '17 at 10:05
Consider replacing any literal constants with symbols (macros, enums, functions). This should help understand the code better.
For example:
#define BLACK RGB(0,0,0)
In combination with:
CPen pen(PS_SOLID, 1, BLACK)
• Good point! Didn't notice that at first read. – Jean-François Côté Aug 7 '12 at 15:00
• Why don't just use CPen blackSolidPen(PS_SOLID, 1, BLACK)? It's not the case for #define. – Andrey Agibalov Aug 7 '12 at 17:38
• This might be a bad example. Frankly, I prefer to stay away from #define, but it's the first thing I could think of. And I'm a lousy C/C++ programmer. I like to read it though. – Henk Langeveld Aug 7 '12 at 18:36
• #define PI (3.14) is for C, because there is no const in C. In C++ there are real consts. – Andrey Agibalov Aug 7 '12 at 19:01
• You don't need colors normally. You need "tools for drawing colorful things", like pencils and brushes (blackSolidBrush). Even in case you think you really want it, the "more C++" approach is like this: COLORREF const blackColor = RGB(0, 0, 0);. COLORREF is in fact just a number. – Andrey Agibalov Aug 7 '12 at 19:14
IMO, this code seems ok!
To have better clarity, you could create some functions for the drawing parts so that your India flag is made of smaller functions. For example, for the India flag, I would create a "drawColoredRectangle" and a "drawCircleStar" just to make it clearer.
But for the rest, I think it's ok!
• For sure this code doesn't seem to be any OK. – Andrey Agibalov Aug 7 '12 at 17:36
https://physics.stackexchange.com/questions/117384/gravity-a-weak-force
# Gravity, a weak force? [duplicate]
Why is gravity such a weak force?
It becomes strong for particles only at the Planck scale, around $10^{19}$ $\text{GeV}$, much above the electroweak scale ($100$ $\text{GeV}$, the energy scale dominating physics at low energies).
Why are these scales so different from each other? What prevents quantities at the electroweak scale, such as the Higgs boson mass, from getting quantum corrections on the order of the Planck scale?
Is the solution super symmetry, extra dimensions, or just anthropic fine-tuning?
Can we relate a few problems of quantum mechanics to gravity?
Despite the fact that there is no experimental evidence that conflicts with the predictions of general relativity, physicists have found compelling reasons to suspect that general relativity may be only a good approximation to a more fundamental theory of gravity. The central issue is reconciling general relativity with the demands of quantum mechanics. Well tested by experiment, quantum mechanics is the theory that describes the microscopic behavior of particles. In the quantum world, particles are also waves, the results of measurements are probabilistic in nature, and an uncertainty principle forbids knowing certain pairs of measurable quantities, such as position and momentum, to arbitrary precision. The Standard Model is the unified picture of the strong, weak, and electromagnetic forces within the framework of quantum mechanics. Nonetheless, theoretical physicists have found it to be extremely difficult to construct a theory of quantum gravity that incorporates both general relativity and quantum mechanics.
At the atomic scale, gravity is some $40$ orders of magnitude weaker than the other forces in nature. In both general relativity and Newtonian gravity, the strength of gravity grows at shorter and shorter distances, while quantum effects prevent the other forces from similarly increasing in strength. At a distance of approximately $10^{-35}$ $\text{m}$, called the Planck length, gravity becomes as strong as the other forces. At the Planck length, gravity is so strong and spacetime is so highly distorted that our common notions of space and time lose meaning. Quantum fluctuations at this length scale produce energies so large that microscopic black holes would pop into and out of existence. A theory of quantum gravity is needed to provide a description of nature at the Planck length. Yet, attempts by researchers to construct such a theory, analogous to the Standard Model of particle physics, have led to serious inconsistencies.
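To put a number on that "some $40$ orders of magnitude" (our own back-of-envelope estimate for an electron-proton pair, with standard values of the constants): $$\frac{F_{\text{grav}}}{F_{\text{em}}}=\frac{G m_e m_p}{e^2/4\pi\varepsilon_0}\approx\frac{(6.67\times 10^{-11})(9.11\times 10^{-31})(1.67\times 10^{-27})}{(8.99\times 10^{9})(1.60\times 10^{-19})^2}\approx 4\times 10^{-40},$$ independent of the separation, since both forces scale as $1/r^2$.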
## marked as duplicate by Qmechanic♦ Jul 31 '18 at 15:11
• What exactly are you asking here? As it stands, there is no clear question, and rather a general invitation to discuss supersymmetry and quantum gravity. Please make it clearer what specific question you have. – Flint72 Jun 8 '14 at 12:57
• @Flint72 Why are these scales so different from each other? What prevents quantities at the electroweak scale, such as the Higgs boson mass, from getting quantum corrections on the order of the Planck scale? – Murtuza Vadharia Jun 8 '14 at 12:59
• Related: physics.stackexchange.com/q/4243/2451 , physics.stackexchange.com/q/24314/2451 and links therein. – Qmechanic Jun 8 '14 at 13:13
https://artofproblemsolving.com/wiki/index.php?title=2010_AIME_I_Problems/Problem_6&oldid=33917
# 2010 AIME I Problems/Problem 6
## Problem
Let $P(x)$ be a quadratic polynomial with real coefficients satisfying $x^2 - 2x + 2 \le P(x) \le 2x^2 - 4x + 3$ for all real numbers $x$, and suppose $P(11) = 181$. Find $P(16)$.
## Solution
$[asy] import graph; real min = -0.5, max = 2.5; pen dark = linewidth(1); real P(real x) { return 8*(x-1)^2/5+1; } real Q(real x) { return (x-1)^2+1; } real R(real x) { return 2*(x-1)^2+1; } draw(graph(P,min,max),dark); draw(graph(Q,min,max),linetype("6 2")+linewidth(0.7)); draw(graph(R,min,max),linetype("6 2")+linewidth(0.7)); dot((1,1)); label("P(x)",(max,P(max)),E,fontsize(10)); label("Q(x)",(max,Q(max)),E,fontsize(10)); label("R(x)",(max,R(max)),E,fontsize(10)); /* axes */ Label f; f.p=fontsize(8); xaxis(-2, 3, Ticks(f, 5, 1)); yaxis(-1, 5, Ticks(f, 6, 1)); [/asy]$
Let $Q(x) = x^2 - 2x + 2$, $R(x) = 2x^2 - 4x + 3$. Completing the square, we have $Q(x) = (x-1)^2 + 1$, and $R(x) = 2(x-1)^2 + 1$, so it follows that $P(x) \ge Q(x) \ge 1$ for all $x$ (by the Trivial Inequality).
Also, $1 = Q(1) \le P(1) \le R(1) = 1$, so $P(1) = 1$, and $P$ obtains its minimum at the point $(1,1)$. Then $P(x)$ must be of the form $c(x-1)^2 + 1$ for some constant $c$; substituting $P(11) = 181$ yields $c = \frac 95$. Finally, $P(16) = \frac 95 \cdot (16 - 1)^2 + 1 = \boxed{406}$.
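A quick symbolic check of the algebra (a SymPy sketch for verification; not part of the original solution):

```python
import sympy as sp

x = sp.symbols('x')
P = sp.Rational(9, 5)*(x - 1)**2 + 1   # the form c(x-1)^2 + 1 with c = 9/5
Q = x**2 - 2*x + 2
R = 2*x**2 - 4*x + 3

assert P.subs(x, 11) == 181
# P - Q and R - P are nonnegative multiples of (x-1)^2, so Q <= P <= R everywhere
assert sp.simplify(P - Q - sp.Rational(4, 5)*(x - 1)**2) == 0
assert sp.simplify(R - P - sp.Rational(1, 5)*(x - 1)**2) == 0
print(P.subs(x, 16))   # 406
```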
|
2021-12-08 07:59:17
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 22, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9586273431777954, "perplexity": 215.76511233268303}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964363445.41/warc/CC-MAIN-20211208053135-20211208083135-00094.warc.gz"}
|
https://ham.stackexchange.com/questions/17019/can-the-parallel-capacitance-of-a-quartz-crystal-be-directly-measured
|
# Can the parallel capacitance of a quartz crystal be directly measured?
A quartz crystal is often represented by this equivalent schematic:
(Schematic: the parallel capacitance $$C_p$$ in parallel with a series motional branch consisting of $$R$$, $$L$$, and $$C_s$$.)
How to measure quartz crystal motional parameters using a VNA? discusses measuring these parameters with a VNA, and there the most mathematically complex step is determining $$C_p$$. Is it feasible to measure $$C_p$$ directly with an LCR meter?
The impedance of $$C_p$$ is
$$Z_p(\omega) = {1 \over i \omega C_p }$$
and the impedance of the other branch is
$$Z_s(\omega) = R + i \omega L + {1 \over i \omega C_s}$$
and the impedance of the entire crystal is the parallel combination of these:
$$Z_x(\omega) = \left( {1\over Z_p(\omega)} + {1\over Z_s(\omega)} \right)^{-1}$$
Typical values for an approximately 14 MHz crystal are:
\begin{align} R &= 7.37 \:\Omega\\ C_s &= 18.8\:\mathrm{fF}\\ C_p &= 4.15\:\mathrm{pF}\\ L &= 6.57\:\mathrm{mH} \end{align}
At 1.4 MHz:
\begin{align} Z_p(2\pi 1400000) &= 0-27393i\\ Z_s(2\pi 1400000) &= 7.37 - 5989128i\\ Z_x(2\pi 1400000) &= 0.000152779-27269i \end{align}
An LCR meter is probably just measuring the magnitude of the voltage when applying a known AC current, so it will be off by a factor of:
$${|Z_x(2\pi 1400000)| \over |Z_p(2\pi 1400000)|} = 0.995447$$
So for this particular crystal, measuring the impedance with an LCR meter (assuming no other inaccuracy in the device) at approximately 1/10th the series resonant frequency of the crystal yields an error of less than 1%.
So, given an LCR meter that can be accurate with a reactance on the order of 30kΩ, this is not a bad way to go.
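A minimal numeric check of the figures above (a sketch; the component values are those given in the question):

```python
import numpy as np

R, Cs, Cp, L = 7.37, 18.8e-15, 4.15e-12, 6.57e-3
w = 2*np.pi*1.4e6                      # roughly 1/10 of the series resonance

Zp = 1/(1j*w*Cp)                       # parallel (shunt) capacitance branch
Zs = R + 1j*w*L + 1/(1j*w*Cs)          # motional branch
Zx = 1/(1/Zp + 1/Zs)                   # the whole crystal

print(abs(Zx)/abs(Zp))                 # ~0.9954, i.e. an error under 1%
```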
Short answer: measure $$C_p$$ at a frequency far from any one of the crystal's resonant frequencies.
Be aware that not only is a crystal resonant on harmonics of its primary (printed) frequency, but spurious resonances also appear.
It is probably safest to measure the parallel-plate capacitance below resonant frequency. So yes, LCR bridges often use a signal source at low frequency. Since $$C_p$$ may be in the single-digit picofarad range, care should be taken to reduce stray capacitance of the measurement fixture.
You might measure $$C_p$$ with the crystal mounted in the LCR meter, then carefully remove the crystal without disturbing the environment, and measure stray capacitance. The true(er) $$C_p$$ is the difference between the two measurements.
For quartz AT-cut crystals, $$C_p$$ is very roughly 250 times larger than $$C_s$$.
You're unlikely to measure $$C_p$$ so accurately in a LCR meter or bridge that this factor need be considered.
Ceramic resonators are not as piezo-active as quartz, so the disturbing effect of $$C_s$$ requires factoring-in its effect on a $$C_p$$ measurement.
For example, an ultrasonic transducer was measured where $$C_p$$ was only 7.5 times larger than $$C_s$$.
Quartz has amazing qualities: very piezo-active, temperature-stable. Most other materials that are piezo-active pale in comparison.
|
2022-01-18 00:44:11
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 20, "wp-katex-eq": 0, "align": 2, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9835773706436157, "perplexity": 2508.8748981386493}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320300658.84/warc/CC-MAIN-20220118002226-20220118032226-00028.warc.gz"}
|
https://proofwiki.org/wiki/Power_Set_of_Singleton
|
# Power Set of Singleton
## Theorem
Let $x$ be an object.
Then the power set of the singleton $\set x$ is:
$\powerset {\set x} = \set {\O, \set x}$
## Proof
Since $\O \subseteq \set x$, it follows that $\O \in \powerset {\set x}$.
Let $A \in \powerset {\set x}$ be such that $A \ne \O$.
That is:
$$\begin{array}{rll} & A \subseteq \set x \land A \ne \O & \\ \leadsto & A \subseteq \set x \land \exists y : y \in A & \text{Definition of Empty Set} \\ \leadsto & A \subseteq \set x \land \exists y : y \in A \land y \in \set x & \text{Definition of Subset} \\ \leadsto & A \subseteq \set x \land \exists y : y \in A \land y = x & \text{Definition of Singleton} \\ \leadsto & A \subseteq \set x \land x \in A & \\ \leadsto & A \subseteq \set x \land \set x \subseteq A & \text{Singleton of Element is Subset} \\ \leadsto & A = \set x & \text{Definition of Set Equality} \end{array}$$
So a subset of $\set x$ is either $\O$ or $\set x$.
$\blacksquare$
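A finite sanity check of the theorem (a short Python sketch; the proof above is the actual argument):

```python
from itertools import chain, combinations

def powerset(s):
    # all subsets of s, as frozensets, in order of increasing size
    items = list(s)
    subsets = chain.from_iterable(combinations(items, r) for r in range(len(items) + 1))
    return [frozenset(c) for c in subsets]

print(powerset({'x'}))   # [frozenset(), frozenset({'x'})]
```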
|
2021-03-01 00:47:01
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9423348903656006, "perplexity": 259.8854671192411}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178361808.18/warc/CC-MAIN-20210228235852-20210301025852-00580.warc.gz"}
|
https://www.physicsforums.com/threads/what-do-i-do-with-z.36807/
|
# What do i do with z?
1. Jul 26, 2004
### Phymath
what do i do with z!?
OK, no idea what to do with this z in this problem. Since I'm teaching this to myself, I don't know what to do, and I'm asking for some help. Thanks, anyone...
OK, well, I'm trying to compute this problem with Stokes' theorem.
the surface is $$z = \sqrt{4-x^2-y^2}$$ above the xy-plane
and $$\vec{F} = <2x-y,yz^2,y^2z>$$
OK, using Stokes' theorem I get $$\nabla \times \vec{F} = \vec{k}$$
and...
Then, instead of finding the normal vector with the surface z, I did $$f(x,y,z) = 4-x^2-y^2-z^2$$ which gives $$\vec{n} = \nabla f = <-2x,-2y,-2z>$$ but we want a normal vector pointing out from the surface, so... (also note I didn't put in the normalizing sqrt, because it cancels in the dS -> dA change) $$-\vec{n} = <2x,2y,2z>$$ and then, following Stokes': $$\vec{k} \cdot \vec{n} = 2z \Rightarrow \iint_{A}2z\,dy\,dx$$ But my understanding of a surface integral is that z is 0, so that the surface is actually an area projected onto the xy-plane. What am I to do in these situations? Or did I just do it wrong?
Last edited: Jul 26, 2004
2. Jul 26, 2004
### speeding electron
OK, after a bit of contemplation about doing surface integrals in different ways, I see you've made a mistake in your calculations, which I will come back to. This does not affect your main question though. The answer is that you are just projecting the surface onto the xy-plane, and it is on the surface that the integration is actually happening. Here (on the surface), z is not zero in general. Remember that here z = sqrt(4 - x^2 - y^2).
Now to my first point. It has to do with you mixing two methods of calculating the unit normal vector. First you put the equation for the surface into the form: f(x,y,z) = C, and asserted that:
$$\mathbf{ \hat{n}} = \frac{ \nabla f }{| \nabla f|}$$
which is fine, but then the the square root factors do not quite cancel anymore, because:
$$\cos \theta = \mathbf{ \hat{n} \cdot k} = \frac{ \frac{ \partial f }{ \partial x } \mathbf{i} + \frac{ \partial f }{ \partial y } \mathbf{j} + \frac{ \partial f }{ \partial z } \mathbf{k} }{ \sqrt{ \left( \frac{ \partial f }{ \partial x } \right)^2 + \left( \frac{ \partial f }{ \partial y} \right)^2 + \left( \frac{ \partial f }{ \partial z} \right)^2 }} \mathbf{ \cdot k }$$
$$\Rightarrow \frac{1}{ \mathbf{ \hat{n} \cdot k} } = \frac{ \sqrt{ \left( \frac{ \partial f }{ \partial x } \right)^2 + \left( \frac{ \partial f }{ \partial y} \right)^2 + \left( \frac{ \partial f }{ \partial z} \right)^2 }}{ \frac{ \partial f }{ \partial z }}$$
The square roots only cancel when we keep the equation for the surface in the form z = f(x,y). So we have to divide the integrand by 2z, making it 1, and your surface integral is just the area of the circle below. Your line integral should show that.
P.S. Apologies for the latex which won't show ATM. It's late now, but I'll fix it ASAP. It just shows why the roots don't cancel.
Last edited: Jul 28, 2004
3. Jul 26, 2004
### Phymath
Oh
Oh, I see: like $$dS = \sqrt{f_{x}^2+f_{y}^2+1}\,dA$$ is not... the normalizer of the f function. Thanks much, though I don't know if the 2z will drop to 1 because of that. But that's good, I'll take a look and post if that worked.
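(Added for verification, not part of the original thread: a SymPy check of both claims above, that the curl is the unit vector k and that, by Stokes' theorem, the flux equals the boundary line integral, which comes out to the area of the projected disk, 4*pi.)

```python
import sympy as sp

x, y, z, t = sp.symbols('x y z t')
F = sp.Matrix([2*x - y, y*z**2, y**2*z])

# curl F -- should be (0, 0, 1), i.e. the unit vector k
curl = sp.Matrix([
    sp.diff(F[2], y) - sp.diff(F[1], z),
    sp.diff(F[0], z) - sp.diff(F[2], x),
    sp.diff(F[1], x) - sp.diff(F[0], y),
])
print(curl.T)                        # Matrix([[0, 0, 1]])

# boundary of the hemisphere: the circle x^2 + y^2 = 4 in the plane z = 0
r = sp.Matrix([2*sp.cos(t), 2*sp.sin(t), 0])
Ft = F.subs({x: r[0], y: r[1], z: r[2]})
print(sp.integrate(Ft.dot(r.diff(t)), (t, 0, 2*sp.pi)))   # 4*pi, the disk's area
```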
|
2017-12-18 21:29:13
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9042425751686096, "perplexity": 618.7251764196411}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948623785.97/warc/CC-MAIN-20171218200208-20171218222208-00473.warc.gz"}
|
https://www.physicsforums.com/threads/need-serious-help.23288/
|
# Need Serious help
1. Apr 29, 2004
### frenika
Problem: A 64-lb block is released from the top of a plane inclined at 30 degrees to the horizontal. As the block slides down the plane, its coefficient of friction is 0.25, and it experiences a drag force due to air resistance equal to one-half its velocity in ft/sec. (a) Determine the equation for the velocity of the block. (b) Determine the equation for the displacement of the block. (c) Calculate the displacement and velocity of the block 5 sec after it is released.
2. Apr 29, 2004
### Staff: Mentor
3. Apr 29, 2004
### frenika
I understand the formulas for the force and everything but I need help setting up the differential equation
4. Apr 29, 2004
### gnome
After you have expressions for each of the forces and for the acceleration, write
$$F = ma$$
Then, beneath that, in the corresponding positions, rewrite it using the expressions you found. That will give you the differential equation you have to solve.
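(The thread ends before the equation is written out. As a sketch of one consistent setup, assuming g = 32 ft/s^2, so the mass is W/g = 2 slugs, and taking the drag force in pounds to be v/2, the resulting linear ODE can be solved directly:)

```python
import sympy as sp

t = sp.symbols('t', nonnegative=True)
v = sp.Function('v')

g, W, mu = 32, 64, sp.Rational(1, 4)
theta = sp.pi/6                          # 30 degrees
m = sp.Rational(W, g)                    # weight / g = 2 slugs

# Newton's second law along the incline:
#   m v' = W sin(theta) - mu W cos(theta) - v/2
ode = sp.Eq(m*sp.Derivative(v(t), t), W*sp.sin(theta) - mu*W*sp.cos(theta) - v(t)/2)
vsol = sp.dsolve(ode, v(t), ics={v(0): 0}).rhs        # part (a)
xsol = sp.integrate(vsol, (t, 0, 5))                  # distance covered in 5 s
print(sp.N(vsol.subs(t, 5)), sp.N(xsol))              # ~25.9 ft/s and ~77.9 ft
```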
|
2017-02-23 23:03:33
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5517157316207886, "perplexity": 527.337413131172}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171251.27/warc/CC-MAIN-20170219104611-00419-ip-10-171-10-108.ec2.internal.warc.gz"}
|
https://fr.mathworks.com/help/econ/vecm.html
|
# vecm
Create vector error-correction (VEC) model
## Description
The vecm function returns a vecm object specifying the functional form and storing the parameter values of a (p – 1)-order, cointegrated, multivariate vector error-correction (VEC(p – 1)) model.
The key components of a vecm object include the number of time series (response-variable dimensionality), the number of cointegrating relations among the response variables (cointegrating rank), and the degree of the multivariate autoregressive polynomial composed of first differences of the response series (short-run polynomial), which is p – 1. That is, p – 1 is the maximum lag with a nonzero coefficient matrix, and p is the order of the vector autoregression (VAR) model representation of the VEC model. Other model components include a regression component to associate the same exogenous predictor variables to each response series, and constant and time trend terms.
Another important component of a VEC model is its Johansen form, because it dictates how MATLAB® includes deterministic terms in the model. This specification has implications for the estimation procedure and the allowable equality constraints. For more details, see Johansen Form and [2].
Given the response-variable dimensionality, cointegrating rank, and short-run polynomial degree, all coefficient matrices and innovation-distribution parameters are unknown and estimable unless you specify their values by using name-value pair argument syntax. To choose the Johansen form that is suitable for your data, and to estimate a model containing all or partially unknown parameter values given the data, use estimate. To work with an estimated or fully specified vecm model object, pass it to an object function.
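As an aside for readers working outside MATLAB: the Python statsmodels library exposes a broadly comparable VEC interface. The following sketch uses toy data and is not part of this documentation; the mapping to vecm properties is only approximate.

```python
import numpy as np
from statsmodels.tsa.vector_ar.vecm import VECM

# toy integrated data standing in for real response series
rng = np.random.default_rng(0)
y = np.cumsum(rng.standard_normal((200, 3)), axis=0)

# k_ar_diff plays the role of numlags (p - 1); coint_rank is the cointegrating rank
model = VECM(y, k_ar_diff=1, coint_rank=1, deterministic="ci")
res = model.fit()
print(res.alpha)   # adjustment speeds, the analogue of Adjustment (A)
print(res.beta)    # cointegration matrix, the analogue of Cointegration (B)
```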
## Creation
### Description
Mdl = vecm(numseries,rank,numlags) creates a VEC(numlags) model composed of numseries time series containing rank cointegrating relations. The maximum nonzero lag in the short-run polynomial is numlags. All lags and the error-correction term have numseries-by-numseries coefficient matrices composed of NaN values.
This shorthand syntax allows for easy model template creation in which you specify the model dimensions explicitly. The model template is suited for unrestricted parameter estimation, that is, estimation without any parameter equality constraints. After you create a model, you can alter property values using dot notation.
Mdl = vecm(Name,Value) sets properties or additional options using name-value pair arguments. Enclose each name in quotes. For example, 'Lags',[1 4],'ShortRun',ShortRun specifies the two short-run coefficient matrices in ShortRun at lags 1 and 4.
This longhand syntax allows for creating more flexible models. However, vecm must be able to infer the number of series (NumSeries) and cointegrating rank (Rank) from the specified name-value pair arguments. Name-value pair arguments and property values that correspond to the number of time series and cointegrating rank must be consistent with each other.
### Input Arguments
The shorthand syntax provides an easy way for you to create model templates that are suitable for unrestricted parameter estimation. For example, to create a VEC(2) model composed of three response series containing one cointegrating relation and unknown parameter values, enter:
Mdl = vecm(3,1,2);
To impose equality constraints on parameter values during estimation, set the appropriate property values using dot notation.
Number of time series m, specified as a positive integer. numseries specifies the dimensionality of the multivariate response variable yt and innovation εt.
numseries sets the NumSeries property.
Data Types: double
Number of cointegrating relations, specified as a nonnegative integer. The adjustment and cointegration matrices in the model have rank linearly independent columns, and are numseries-by-rank matrices composed of NaN values.
Data Types: double
Number of first differences of responses to include in the short-run polynomial of the VEC(p – 1) model, specified as a nonnegative integer. That is, numlags = p – 1. Consequently, numlags specifies the number of short-run terms associated with the corresponding VAR(p) model.
All lags have numseries-by-numseries short-run coefficient matrices composed of NaN values.
Data Types: double
Name-Value Pair Arguments
Specify optional comma-separated pairs of Name,Value arguments. Name is the argument name and Value is the corresponding value. Name must appear inside quotes. You can specify several name and value pair arguments in any order as Name1,Value1,...,NameN,ValueN.
The longhand syntax enables you to create models in which some or all coefficients are known. During estimation, estimate imposes equality constraints on any known parameters. Specify enough information for vecm to infer the number of response series and the cointegrating rank.
Example: 'Adjustment',nan(3,2),'Lags',[4 8] specifies a three-dimensional VEC(8) model with two cointegrating relations and nonzero short-run coefficient matrices at lags 4 and 8.
Short-run polynomial lags, specified as the comma-separated pair consisting of 'Lags' and a numeric vector containing at most P – 1 elements of unique positive integers.
The matrix ShortRun{j} is the coefficient of lag Lags(j).
Example: 'Lags',[1 4]
Data Types: double
## Properties
You can set writable property values when you create the model object by using name-value pair argument syntax, or after you create the model object by using dot notation. For example, to create a VEC(1) model in the H1 Johansen form suitable for simulation, and composed of two response series of cointegrating rank one and no overall time trend term, enter:
'Cointegration',[1; -4],'ShortRun',{[0.3 -0.15 ; 0.1 -0.3]},...
'Covariance',eye(2));
Mdl.Trend = 0;
Number of time series m, specified as a positive integer. NumSeries specifies the dimensionality of the multivariate response variable yt and innovation εt.
Data Types: double
Number of cointegrating relations, specified as a nonnegative integer. The adjustment and cointegration matrices in the model have Rank linearly independent columns and are NumSeries-by-Rank matrices.
Data Types: double
Corresponding VAR model order, specified as a nonnegative integer. P – 1 is the maximum lag in the short-run polynomial that has a nonzero coefficient matrix. Lags in the short-run polynomial that have degree less than P – 1 can have coefficient matrices composed entirely of zeros.
P specifies the number of presample observations required to initialize the model.
Data Types: double
Model description, specified as a string scalar or character vector. vecm stores the value as a string scalar. The default value describes the parametric form of the model, for example "2-Dimensional Rank = 1 VEC(1) Model".
Example: 'Description','Model 1'
Data Types: string | char
Response series names, specified as a NumSeries length string vector. The default is ['Y1' 'Y2' ... 'YNumSeries'].
Example: 'SeriesNames',{'CPI' 'Unemployment'}
Data Types: string
Overall model constant (c), specified as a NumSeries-by-1 numeric vector.
The value of Constant, and whether estimate supports equality constraints on it during estimation, depend on the Johansen form of the VEC model.
Example: 'Constant',[1; 2]
Data Types: double
Overall linear time trend (d), specified as a NumSeries-by-1 numeric vector.
The value of Trend, and whether estimate supports equality constraints on it during estimation, depend on the Johansen form of the VEC model.
Example: 'Trend',[0.1; 0.2]
Data Types: double
Cointegration adjustment speeds (A), specified as a NumSeries-by-Rank numeric matrix.
If you specify a matrix of known values, then all columns must be linearly independent (that is, Adjustment must be a matrix of full column rank).
For estimation, you can impose equality constraints on the cointegration adjustment speeds by supplying a matrix composed entirely of numeric values or a mixture of numeric and missing (NaN) values.
If Rank = 0, then Adjustment is an empty NumSeries-by-0 vector.
For more details on specifying Adjustment, see Algorithms.
Data Types: double
Cointegration matrix (B), specified as a NumSeries-by-Rank numeric matrix.
If you specify a matrix of known values, then all columns must be linearly independent (that is, Cointegration must be a matrix of full column rank).
Cointegration cannot contain a mixture of missing (NaN) values and numeric values. Supported equality constraints on the cointegration matrix during estimation depend on the Johansen form of the VEC model.
If Rank = 0, then Cointegration is an empty NumSeries-by-0 vector.
For more details on specifying Cointegration, see Algorithms.
Example: 'Cointegration',NaN(2,1)
Data Types: double
Impact, or long-run level, matrix (Π), specified as a NumSeries-by-NumSeries numeric matrix. The rank of Impact must be Rank.
For estimation of full-rank models (Rank = NumSeries), you can impose equality constraints on the impact matrix by supplying a matrix containing a mixture of numeric and missing values (NaN).
If 1 ≤ Rank ≤ NumSeries – 1, then the default value is Adjustment*Cointegration'.
If Rank = 0, then Impact is a matrix of zeros. Consequently, the model does not have an error-correction term.
For more details on specifying Impact, see Algorithms.
Example: 'Impact',[0.5 0.25 0; 0.3 0.15 0; 0 0 0.9]
Data Types: double
Constant (intercept) in the cointegrating relations (c0), specified as a Rank-by-1 numeric vector. You can set CointegrationConstant only by using dot notation after you create the model.
CointegrationConstant cannot contain a mixture of missing (NaN) values and numeric values. Supported equality constraints on the cointegration constant vector during estimation depend on the Johansen form of the VEC model.
If Rank = 0, then CointegrationConstant is a 0-by-1 vector of zeros.
Example: Mdl.CointegrationConstant = [1; 0]
Data Types: double
Time trend in the cointegrating relations (d0), specified as a Rank-by-1 numeric vector. You can set CointegrationTrend only by using dot notation after you create the model.
CointegrationTrend cannot contain a mixture of missing (NaN) values and numeric values. Supported equality constraints on the cointegration linear trend vector during estimation depend on the Johansen form of the VEC model.
If Rank = 0, then CointegrationTrend is a 0-by-1 vector of zeros.
Example: Mdl.CointegrationTrend = [0; 0.5]
Data Types: double
Short-run coefficient matrices associated with the lagged response differences, specified as a cell vector of NumSeries-by-NumSeries numeric matrices.
Specify coefficient signs corresponding to those coefficients in the VEC model expressed in difference-equation notation. The property P is numel(ShortRun) + 1.
• If you set the 'Lags' name-value pair argument to Lags, the following conditions apply.
• The lengths of ShortRun and Lags must be equal.
• ShortRun{j} is the coefficient matrix of lag Lags(j).
• By default, ShortRun is a numel(Lags)-by-1 cell vector of matrices composed of NaN values.
• Otherwise, the following conditions apply.
• ShortRun{j} is the coefficient matrix of lag j.
• By default, ShortRun is a (P – 1)-by-1 cell vector of matrices composed of NaN values.
MATLAB assumes that the coefficient of the current, differenced response (Δyt) is the identity matrix. Therefore, exclude this coefficient from ShortRun.
Example: 'ShortRun',{[0.5 -0.1; 0.1 0.2]}
Data Types: cell
Regression coefficient matrix associated with the predictor variables, specified as a NumSeries-by-NumPreds numeric matrix. NumPreds is the number of predictor variables, that is, the number of columns in the predictor data.
Beta(j,:) contains the regression coefficients for each predictor in the equation of response yj,t. Beta(:,k) contains the regression coefficient in each response equation for predictor xk. By default, all predictor variables are in the regression component of all response equations. You can exclude certain predictors from certain equations by specifying equality constraints to 0.
Example: In a model that includes 3 responses and 4 predictor variables, to exclude the second predictor from the third equation and leave the others unrestricted, specify [NaN NaN NaN NaN; NaN NaN NaN NaN; NaN 0 NaN NaN].
The default value specifies no regression coefficient in the model. However, if you specify predictor data when you estimate the model using estimate, then MATLAB sets Beta to an appropriately sized matrix of NaN values.
Example: 'Beta',[2 3 -1 2; 0.5 -1 -6 0.1]
Data Types: double
Innovations covariance matrix of the NumSeries innovations at each time t = 1,...,T, specified as a NumSeries-by-NumSeries numeric, positive definite matrix.
Example: 'Covariance',eye(2)
Data Types: double
Note
NaN-valued elements in properties indicate unknown, estimable parameters. Specified elements indicate equality constraints on parameters in model estimation. The innovations covariance matrix Covariance cannot contain a mix of NaN values and real numbers; you must fully specify the covariance or it must be completely unknown (NaN(NumSeries)).
## Object Functions
estimate Fit vector error-correction (VEC) model to data fevd Generate vector error-correction (VEC) model forecast error variance decomposition (FEVD) filter Filter disturbances through vector error-correction (VEC) model forecast Forecast vector error-correction (VEC) model responses infer Infer vector error-correction (VEC) model innovations irf Generate vector error-correction (VEC) model impulse responses simulate Monte Carlo simulation of vector error-correction (VEC) model summarize Display estimation results of vector error-correction (VEC) model varm Convert vector error-correction (VEC) model to vector autoregression (VAR) model
## Examples
Suppose that a VEC model with cointegrating rank of 4 and a short-run polynomial of degree 2 is appropriate for modeling the behavior of seven hypothetical macroeconometric time series.
Create a VEC(7,4,2) model using the shorthand syntax.
Mdl = vecm(7,4,2)
Mdl =
vecm with properties:
Description: "7-Dimensional Rank = 4 VEC(2) Model with Linear Time Trend"
SeriesNames: "Y1" "Y2" "Y3" ... and 4 more
NumSeries: 7
Rank: 4
P: 3
Constant: [7×1 vector of NaNs]
Cointegration: [7×4 matrix of NaNs]
Impact: [7×7 matrix of NaNs]
CointegrationConstant: [4×1 vector of NaNs]
CointegrationTrend: [4×1 vector of NaNs]
ShortRun: {7×7 matrices of NaNs} at lags [1 2]
Trend: [7×1 vector of NaNs]
Beta: [7×0 matrix]
Covariance: [7×7 matrix of NaNs]
Mdl is a vecm model object that serves as a template for parameter estimation. MATLAB® considers the NaN values as unknown parameter values to be estimated. For example, the Adjustment property is a 7-by-4 matrix of NaN values. Therefore, the adjustment speeds are active model parameters to be estimated.
By default, MATLAB® includes overall and cointegrating linear time trend terms in the model. You can create a VEC model in H1 Johansen form by removing the time trend terms, that is, by setting the Trend property to 0 using dot notation.
Mdl.Trend = 0
Mdl =
vecm with properties:
Description: "7-Dimensional Rank = 4 VEC(2) Model"
SeriesNames: "Y1" "Y2" "Y3" ... and 4 more
NumSeries: 7
Rank: 4
P: 3
Constant: [7×1 vector of NaNs]
Cointegration: [7×4 matrix of NaNs]
Impact: [7×7 matrix of NaNs]
CointegrationConstant: [4×1 vector of NaNs]
CointegrationTrend: [4×1 vector of NaNs]
ShortRun: {7×7 matrices of NaNs} at lags [1 2]
Trend: [7×1 vector of zeros]
Beta: [7×0 matrix]
Covariance: [7×7 matrix of NaNs]
MATLAB® expands Trend to the appropriate length, a 7-by-1 vector of zeros.
Consider this VEC(1) model for three hypothetical response series.
$$\begin{aligned} \Delta y_t &= c + A B' y_{t-1} + \Phi_1 \Delta y_{t-1} + \varepsilon_t \\ &= \begin{bmatrix} -1 \\ -3 \\ -30 \end{bmatrix} + \begin{bmatrix} -0.3 & 0.3 \\ -0.2 & 0.1 \\ -1 & 0 \end{bmatrix} \begin{bmatrix} 0.1 & -0.2 & 0.2 \\ -0.7 & 0.5 & 0.2 \end{bmatrix} y_{t-1} + \begin{bmatrix} 0 & 0.1 & 0.2 \\ 0.2 & -0.2 & 0 \\ 0.7 & -0.2 & 0.3 \end{bmatrix} \Delta y_{t-1} + \varepsilon_t. \end{aligned}$$
The innovations are multivariate Gaussian with a mean of 0 and the covariance matrix
$$\Sigma = \begin{bmatrix} 1.3 & 0.4 & 1.6 \\ 0.4 & 0.6 & 0.7 \\ 1.6 & 0.7 & 5 \end{bmatrix}.$$
Create variables for the parameter values.
Adjustment = [-0.3 0.3; -0.2 0.1; -1 0];
Cointegration = [0.1 -0.7; -0.2 0.5; 0.2 0.2];
ShortRun = {[0. 0.1 0.2; 0.2 -0.2 0; 0.7 -0.2 0.3]};
Constant = [-1; -3; -30];
Trend = [0; 0; 0];
Covariance = [1.3 0.4 1.6; 0.4 0.6 0.7; 1.6 0.7 5];
Create a vecm model object representing the VEC(1) model using the appropriate name-value pair arguments.
Mdl = vecm('Adjustment',Adjustment,'Cointegration',Cointegration,...
'Constant',Constant,'ShortRun',ShortRun,'Trend',Trend,...
'Covariance',Covariance)
Mdl =
vecm with properties:
Description: "3-Dimensional Rank = 2 VEC(1) Model"
SeriesNames: "Y1" "Y2" "Y3"
NumSeries: 3
Rank: 2
P: 2
Constant: [-1 -3 -30]'
Cointegration: [3×2 matrix]
Impact: [3×3 matrix]
CointegrationConstant: [2×1 vector of NaNs]
CointegrationTrend: [2×1 vector of NaNs]
ShortRun: {3×3 matrix} at lag [1]
Trend: [3×1 vector of zeros]
Beta: [3×0 matrix]
Covariance: [3×3 matrix]
Mdl is, effectively, a fully specified vecm model object. That is, the cointegration constant and linear trend are unknown. However, they are not needed for simulating observations or forecasting, given that the overall constant and trend parameters are known.
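Because the parameters above are fully specified, simulation amounts to iterating the model equation. A minimal NumPy sketch of that recursion (an illustration, not MATLAB's simulate function):

```python
import numpy as np

rng = np.random.default_rng(1)

c    = np.array([-1.0, -3.0, -30.0])                      # overall constant
A    = np.array([[-0.3, 0.3], [-0.2, 0.1], [-1.0, 0.0]])  # Adjustment
B    = np.array([[0.1, -0.7], [-0.2, 0.5], [0.2, 0.2]])   # Cointegration
Phi1 = np.array([[0.0, 0.1, 0.2], [0.2, -0.2, 0.0], [0.7, -0.2, 0.3]])
Sig  = np.array([[1.3, 0.4, 1.6], [0.4, 0.6, 0.7], [1.6, 0.7, 5.0]])

T = 100
y = np.zeros((T + 2, 3))                 # P = 2, so two presample rows
for t in range(2, T + 2):
    eps = rng.multivariate_normal(np.zeros(3), Sig)
    dy = c + A @ (B.T @ y[t-1]) + Phi1 @ (y[t-1] - y[t-2]) + eps
    y[t] = y[t-1] + dy                   # accumulate the first differences
```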
By default, vecm attributes the short-run coefficient to the first lag in the short-run polynomial. Consider another VEC model that attributes the short-run coefficient matrix ShortRun to the fourth lag term, specifies a matrix of zeros for the first lag coefficient, and treats all else as being equal to Mdl. Create this VEC(4) model.
Mdl.ShortRun(4) = ShortRun;
Mdl.ShortRun(1) = {0}
Mdl =
vecm with properties:
Description: "3-Dimensional Rank = 2 VEC(4) Model"
SeriesNames: "Y1" "Y2" "Y3"
NumSeries: 3
Rank: 2
P: 5
Constant: [-1 -3 -30]'
Cointegration: [3×2 matrix]
Impact: [3×3 matrix]
CointegrationConstant: [2×1 vector of NaNs]
CointegrationTrend: [2×1 vector of NaNs]
ShortRun: {3×3 matrix} at lag [4]
Trend: [3×1 vector of zeros]
Beta: [3×0 matrix]
Covariance: [3×3 matrix]
Alternatively, you can create another model object using vecm and the same syntax as for Mdl, but additionally specify 'Lags',4.
Consider a VEC model for the following seven macroeconomic series, and then fit the model to the data.
• Gross domestic product (GDP)
• GDP implicit price deflator
• Paid compensation of employees
• Nonfarm business sector hours of all persons
• Effective federal funds rate
• Personal consumption expenditures
• Gross private domestic investment
Suppose that a cointegrating rank of 4 and one short-run term are appropriate, that is, consider a VEC(1) model.
For more information on the data set and variables, enter Description at the command line.
Determine whether the data needs to be preprocessed by plotting the series on separate plots.
figure;
subplot(2,2,1)
plot(FRED.Time,FRED.GDP);
title('Gross Domestic Product');
ylabel('Index');
xlabel('Date');
subplot(2,2,2)
plot(FRED.Time,FRED.GDPDEF);
title('GDP Deflator');
ylabel('Index');
xlabel('Date');
subplot(2,2,3)
plot(FRED.Time,FRED.COE);
title('Paid Compensation of Employees');
ylabel('Billions of \$');
xlabel('Date');
subplot(2,2,4)
plot(FRED.Time,FRED.HOANBS);
ylabel('Index');
xlabel('Date');
figure;
subplot(2,2,1)
plot(FRED.Time,FRED.FEDFUNDS);
title('Federal Funds Rate');
ylabel('Percent');
xlabel('Date');
subplot(2,2,2)
plot(FRED.Time,FRED.PCEC);
title('Consumption Expenditures');
ylabel('Billions of \$');
xlabel('Date');
subplot(2,2,3)
plot(FRED.Time,FRED.GPDI);
title('Gross Private Domestic Investment');
ylabel('Billions of \$');
xlabel('Date');
Stabilize all series, except the federal funds rate, by applying the log transform. Scale the resulting series by 100 so that all series are on the same scale.
FRED.GDP = 100*log(FRED.GDP);
FRED.GDPDEF = 100*log(FRED.GDPDEF);
FRED.COE = 100*log(FRED.COE);
FRED.HOANBS = 100*log(FRED.HOANBS);
FRED.PCEC = 100*log(FRED.PCEC);
FRED.GPDI = 100*log(FRED.GPDI);
Create a VEC(1) model using the shorthand syntax. Specify the variable names.
Mdl = vecm(7,4,1);
Mdl.SeriesNames = FRED.Properties.VariableNames
Mdl =
vecm with properties:
Description: "7-Dimensional Rank = 4 VEC(1) Model with Linear Time Trend"
SeriesNames: "GDP" "GDPDEF" "COE" ... and 4 more
NumSeries: 7
Rank: 4
P: 2
Constant: [7×1 vector of NaNs]
Cointegration: [7×4 matrix of NaNs]
Impact: [7×7 matrix of NaNs]
CointegrationConstant: [4×1 vector of NaNs]
CointegrationTrend: [4×1 vector of NaNs]
ShortRun: {7×7 matrix of NaNs} at lag [1]
Trend: [7×1 vector of NaNs]
Beta: [7×0 matrix]
Covariance: [7×7 matrix of NaNs]
Mdl is a vecm model object. All properties containing NaN values correspond to parameters to be estimated given data.
Estimate the model using the entire data set and the default options.
EstMdl = estimate(Mdl,FRED.Variables)
EstMdl =
vecm with properties:
Description: "7-Dimensional Rank = 4 VEC(1) Model"
SeriesNames: "GDP" "GDPDEF" "COE" ... and 4 more
NumSeries: 7
Rank: 4
P: 2
Constant: [14.1329 8.77841 -7.20359 ... and 4 more]'
Cointegration: [7×4 matrix]
Impact: [7×7 matrix]
CointegrationConstant: [-28.6082 109.555 -77.0912 ... and 1 more]'
CointegrationTrend: [4×1 vector of zeros]
ShortRun: {7×7 matrix} at lag [1]
Trend: [7×1 vector of zeros]
Beta: [7×0 matrix]
Covariance: [7×7 matrix]
EstMdl is an estimated vecm model object. It is fully specified because all parameters have known values. By default, estimate imposes the constraints of the H1 Johansen VEC model form by removing the cointegrating trend and linear trend terms from the model. Parameter exclusion from estimation is equivalent to imposing equality constraints to zero.
Display a short summary from the estimation.
results = summarize(EstMdl)
results = struct with fields:
Description: "7-Dimensional Rank = 4 VEC(1) Model"
Model: "H1"
SampleSize: 238
NumEstimatedParameters: 112
LogLikelihood: -1.4939e+03
AIC: 3.2118e+03
BIC: 3.6007e+03
Table: [133x4 table]
Covariance: [7x7 double]
Correlation: [7x7 double]
The Table field of results is a table of parameter estimates and corresponding statistics.
This example follows from Estimate VEC Model.
Create and estimate the VEC(1) model. Treat the last ten periods as the forecast horizon.
FRED.GDP = 100*log(FRED.GDP);
FRED.GDPDEF = 100*log(FRED.GDPDEF);
FRED.COE = 100*log(FRED.COE);
FRED.HOANBS = 100*log(FRED.HOANBS);
FRED.PCEC = 100*log(FRED.PCEC);
FRED.GPDI = 100*log(FRED.GPDI);
Mdl = vecm(7,4,1)
Mdl =
vecm with properties:
Description: "7-Dimensional Rank = 4 VEC(1) Model with Linear Time Trend"
SeriesNames: "Y1" "Y2" "Y3" ... and 4 more
NumSeries: 7
Rank: 4
P: 2
Constant: [7×1 vector of NaNs]
Cointegration: [7×4 matrix of NaNs]
Impact: [7×7 matrix of NaNs]
CointegrationConstant: [4×1 vector of NaNs]
CointegrationTrend: [4×1 vector of NaNs]
ShortRun: {7×7 matrix of NaNs} at lag [1]
Trend: [7×1 vector of NaNs]
Beta: [7×0 matrix]
Covariance: [7×7 matrix of NaNs]
Y = FRED{1:(end - 10),:};
EstMdl = estimate(Mdl,Y)
EstMdl =
vecm with properties:
Description: "7-Dimensional Rank = 4 VEC(1) Model"
SeriesNames: "Y1" "Y2" "Y3" ... and 4 more
NumSeries: 7
Rank: 4
P: 2
Constant: [14.5023 8.46791 -7.08266 ... and 4 more]'
Cointegration: [7×4 matrix]
Impact: [7×7 matrix]
CointegrationConstant: [-32.8433 -101.126 -84.2373 ... and 1 more]'
CointegrationTrend: [4×1 vector of zeros]
ShortRun: {7×7 matrix} at lag [1]
Trend: [7×1 vector of zeros]
Beta: [7×0 matrix]
Covariance: [7×7 matrix]
Forecast 10 responses using the estimated model and in-sample data as presample observations.
YF = forecast(EstMdl,10,Y);
On separate plots, plot part of the GDP and GPDI series with their forecasted values.
figure;
plot(FRED.Time(end - 50:end),FRED.GDP(end - 50:end));
hold on
plot(FRED.Time((end - 9):end),YF(:,1))
h = gca;
fill(FRED.Time([end - 9 end end end - 9]),h.YLim([1,1,2,2]),'k',...
'FaceAlpha',0.1,'EdgeColor','none');
legend('True','Forecasted','Location','NW')
title('Quarterly Scaled GDP: 2004 - 2016');
ylabel('Billions of \$ (scaled)');
xlabel('Year');
hold off
figure;
plot(FRED.Time(end - 50:end),FRED.GPDI(end - 50:end));
hold on
plot(FRED.Time((end - 9):end),YF(:,7))
h = gca;
fill(FRED.Time([end - 9 end end end - 9]),h.YLim([1,1,2,2]),'k',...
'FaceAlpha',0.1,'EdgeColor','none');
legend('True','Forecasted','Location','NW')
title('Quarterly Scaled GPDI: 2004 - 2016');
ylabel('Billions of \$ (scaled)');
xlabel('Year');
hold off
## References
[1] Hamilton, James D. Time Series Analysis. Princeton, NJ: Princeton University Press, 1994.
[2] Johansen, S. Likelihood-Based Inference in Cointegrated Vector Autoregressive Models. Oxford: Oxford University Press, 1995.
[3] Juselius, K. The Cointegrated VAR Model. Oxford: Oxford University Press, 2006.
[4] Lütkepohl, H. New Introduction to Multiple Time Series Analysis. Berlin: Springer, 2005.
Introduced in R2017b
|
2021-12-04 19:33:35
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 6, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.767966091632843, "perplexity": 10057.740729670104}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964363006.60/warc/CC-MAIN-20211204185021-20211204215021-00584.warc.gz"}
|
https://physics.stackexchange.com/questions/673881/molecular-dipole-moment-transition-in-bra-ket-notation
|
# Molecular dipole moment transition in bra-ket notation
Below is the equation for a dipole moment transition: $$\langle \Psi_k| p |\Psi_i \rangle = \langle \Phi_k \chi_{N,k}|p_e+p_N|\Phi_i \chi_{N,i} \rangle = \langle \chi_{N,k}|\langle\Phi_k|p_e|\Phi_i\rangle |\chi_{N,i}\rangle + \langle \chi_{N,k}|p_N\langle\Phi_k|\Phi_i\rangle |\chi_{N,i}\rangle$$ Where $$\chi$$ is a nuclear wave function, $$\Phi$$ is an electronic wave function, and $$p_e, p_N$$ are the dipole moment operators of the electrons and nuclei, respectively.
My question is: why do we put the expression $$\langle\Phi_k|p_e|\Phi_i\rangle$$ between the nuclear wave functions $$\chi$$? As far as I understand, $$\langle\Phi_k|p_e|\Phi_i\rangle$$ has nothing in common with them. My confusion comes from the fact that we do not put the nuclear dipole moment $$p_N$$ between the electronic wave functions.
• Just a hunch, but if you start with something like a Born-Oppenheimer approximation, the nuclear degrees of freedom are much slower than the electronic degrees of freedom. Therefore, as far as the electrons are concerned, the nuclear degrees of freedom are constants, and so the nuclear stuff can be pulled out of any electronic overlap integrals. However, those electronic overlap integrals are functions of nuclear coordinates, and so they have to stay inside the nuclear overlap integrals. Oct 27, 2021 at 16:50
|
2022-07-02 15:35:19
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 8, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.908081591129303, "perplexity": 246.47645404776762}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104141372.60/warc/CC-MAIN-20220702131941-20220702161941-00769.warc.gz"}
|
https://www.zigya.com/study/book?class=11&board=bsem&subject=Physics&book=Physics+Part+I&chapter=Motion+in+A+Plane&q_type=&q_topic=Scalars+And+Vectors&q_category=&question_id=PHEN11039492
|
Read each statement below carefully and state, with reasons and examples, if it is true or false:
A scalar quantity is one that
(a) is conserved in a process.
(b) can never take negative values.
(c) must be dimensionless.
(d) does not vary from one point to another in space.
(e) has the same value for observers with different orientations of axes.
(a) False - Temperature is a scalar quantity, yet in an adiabatic process it is not conserved.
In inelastic collisions, kinetic energy (a scalar quantity) is not conserved.
(b) False - A scalar quantity can take negative values, e.g. work, W = Fs cos θ. If the angle between the force and the displacement is greater than 90°, then W is negative; e.g. the work done by the gravitational force when a body is raised upward is negative.
(c) False - A scalar quantity may have dimensions, e.g. the dimensional formula of work is [M^1 L^2 T^-2].
(d) False - Temperature is a scalar quantity and it changes as we go higher; the gravitational potential is a scalar quantity and it varies with height.
(e) True.
What is a vector quantity?
A physical quantity that requires direction along with magnitude, for its complete specification is called a vector quantity.
What is a scalar quantity?
A physical quantity that requires only magnitude for its complete specification is called a scalar quantity.
Give three examples of scalar quantities.
Mass, temperature and energy
What are the basic characteristics that a quantity must possess so that it may be a vector quantity?
A quantity must possess a direction and must follow the vector axioms. Any quantity that follows the vector axioms is classified as a vector.
Give three examples of vector quantities.
Force, impulse and momentum.
|
2018-10-24 00:58:40
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.503743588924408, "perplexity": 2316.709243744533}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583517628.91/warc/CC-MAIN-20181024001232-20181024022732-00037.warc.gz"}
|
http://tex.stackexchange.com/questions/94303/a-matrix-inside-a-table?answertab=active
|
# A matrix inside a table [duplicate]
Possible Duplicate:
I am trying to insert a matrix inside a table, and I am looking for a good way to do this. Usually when I write matrices in equations, I use array environment, so I naively defined an array inside tabular, and it sort of works. The only problem is that the matrix itself seems to take up a huge amount of space inside the cell. An example is
\begin{tabular}{|c|c|}
\hline
\textbf{EKF} & \textbf{UKF}\\
\hline
$Q = \left[ \begin{array}{cc} 0.1 & 0 \\ 0 & 0.1 \end{array}\right]$ & $Q = \left[ \begin{array}{cc} 0.01 & 0 \\ 0 & 0.01 \end{array}\right]$\\
\hline
$R = 5 \times 10^{-3}$ & $R = 5 \times 10^{-5}$ \\
\hline
\end{tabular}
Is there any way to increase the spacing in the cell containing the matrix without that matrix taking up all the space?
## marked as duplicate by Werner, Kurt, Thorsten, Guido, Martin SchröderJan 18 '13 at 7:34
Welcome to TeX.sx! – Kurt Jan 17 '13 at 23:24
A couple of possibilities:
\documentclass{article}
\usepackage{array}
\begin{document}
\begingroup
\renewcommand{\arraystretch}{4}
\begin{tabular}{|*2{>{\renewcommand{\arraystretch}{1}}c|}}
\hline
\textbf{EKF} & \textbf{UKF}\\
\hline
$Q = \left[ \begin{array}{cc} 0.1 & 0 \\ 0 & 0.1 \end{array}\right]$ & $Q = \left[ \begin{array}{cc} 0.01 & 0 \\ 0 & 0.01 \end{array}\right]$\\
\hline
$R = 5 \times 10^{-3}$ & $R = 5 \times 10^{-5}$ \\
\hline
\end{tabular}
\endgroup
\bigskip
\begin{tabular}{|*2{>{\centering\arraybackslash}p{.3\textwidth}|}}
\hline
\textbf{EKF} & \textbf{UKF}\\
\hline
$Q = \left[ \begin{array}{cc} 0.1 & 0 \\ 0 & 0.1 \end{array}\right]$ & $Q = \left[ \begin{array}{cc} 0.01 & 0 \\ 0 & 0.01 \end{array}\right]$\\
\hline
$R = 5 \times 10^{-3}$ & $R = 5 \times 10^{-5}$\\
\hline
\end{tabular}
\end{document}
*{2}{c|} just repeats c| or in general the second argument (p{.3\textwidth} in this case) that many times, that is standard latex, not requiring any package.
> is extended syntax from the array package (which is part of the core LaTeX distribution) which inserts declarations into every cell in that column, in this case \centering\arraybackslash the \centering inserts \centering into each of the p column cells so the text centres. Unfortunately \centering locally defines \\ to be a newline command for centred text so it no longer ends a table row, \arraybackslash re-asserts the tabular/array definition of \\.
Thanks! Can you make a quick comment on how the statement |*2{>{\centering\arraybackslash}p{.3\textwidth}|} works? – Mr. Fegur Jan 18 '13 at 0:25
Too much for a comment I added a note to the answer – David Carlisle Jan 18 '13 at 0:37
I think you should eliminate the vertical rules and use the booktabs package with vertical spacing controlled after each \\.
Furthermore, as egreg commented, you get better matrix spacing with a amsmath's bmatrix environment:
## Code:
\documentclass{article}
\usepackage{booktabs}
\usepackage{amsmath}
\begin{document}
\begin{tabular}{ c c }
\toprule
\textbf{EKF} & \textbf{UKF}\\
\midrule\\
$Q = \begin{bmatrix} 0.1 & 0 \\ 0 & 0.1 \end{bmatrix}$ &
$Q = \begin{bmatrix} 0.01 & 0 \\ 0 & 0.01 \end{bmatrix}$\\
$R = 5 \times 10^{-3}$ & $R = 5 \times 10^{-5}$ \\
\bottomrule
\end{tabular}
\end{document}
|
2015-06-30 10:14:17
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9837467074394226, "perplexity": 2251.304639193404}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-27/segments/1435375093400.45/warc/CC-MAIN-20150627031813-00127-ip-10-179-60-89.ec2.internal.warc.gz"}
|
http://tensornetwork.org/mps/algorithms/
|
Algorithms for Matrix Product States / Tensor Trains
A wide variety of efficient algorithms have been developed for MPS/TT tensor networks.
Solving Linear Equations
The following algorithms involve solving equations such as $A x = \lambda x$ or $A x = b$ where $x$ is a tensor in MPS/TT form.
Summing MPS/TT networks
The following are algorithms for summing two or more MPS/TT networks and approximating the result by a single MPS/TT.
Multiplying a MPS/TT by an MPO
The following are algorithms for multiplying a given MPS/TT tensor network by an MPO tensor network, resulting in a new MPS/TT that approximates the result.
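To make that operation concrete, here is a minimal NumPy sketch of the exact (uncompressed) MPO-times-MPS product. Note how the bond dimensions multiply, which is exactly why the approximation algorithms referenced here matter.

```python
import numpy as np

def apply_mpo(W_list, A_list):
    # W[i] has shape (wl, d, d, wr); A[i] has shape (al, d, ar).
    # The exact product has bond dimensions wl*al and wr*ar.
    B_list = []
    for W, A in zip(W_list, A_list):
        B = np.einsum('wpqv,aqb->wapvb', W, A)   # contract the shared physical index
        wl, al, d, wr, ar = B.shape
        B_list.append(B.reshape(wl*al, d, wr*ar))
    return B_list

# toy example: 4 sites, physical dimension 2, trivial outer bonds
rng = np.random.default_rng(0)
A = [rng.standard_normal((1 if i == 0 else 3, 2, 1 if i == 3 else 3)) for i in range(4)]
W = [rng.standard_normal((1 if i == 0 else 2, 2, 2, 1 if i == 3 else 2)) for i in range(4)]
print([T.shape for T in apply_mpo(W, A)])   # [(1, 2, 6), (6, 2, 6), (6, 2, 6), (6, 2, 1)]
```

In practice the result is immediately recompressed (for example with SVD truncation) to keep the bond dimension bounded.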
Time Evolution Algorithms
One reason MPS are very useful in quantum physics applications is that they can be efficiently evolved in real or imaginary time. This capability is useful for studying quantum dynamics and thermalization, and directly simulating finite-temperature systems.
|
2018-11-18 00:44:34
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8359636068344116, "perplexity": 2116.5163452711136}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039743913.6/warc/CC-MAIN-20181117230600-20181118012600-00455.warc.gz"}
|
http://cms.math.ca/cjm/kw/composition
|
Search results
Search: All articles in the CJM digital archive with keyword composition
Results 1 - 11 of 11
1. CJM Online first
Wu, Xinfeng
Weighted Carleson Measure Spaces Associated with Different Homogeneities. In this paper, we introduce weighted Carleson measure spaces associated with different homogeneities and prove that these spaces are the dual spaces of weighted Hardy spaces studied in a forthcoming paper. As an application, we establish the boundedness of composition of two Calderón-Zygmund operators with different homogeneities on the weighted Carleson measure spaces; this, in particular, provides the weighted endpoint estimates for the operators studied by Phong-Stein. Keywords: composition of operators, weighted Carleson measure spaces, duality. Categories: 42B20, 42B35
2. CJM 2013 (vol 66 pp. 387)
Mashreghi, J.; Shabankhah, M.
Composition of Inner Functions. We study the image of the model subspace $K_\theta$ under the composition operator $C_\varphi$, where $\varphi$ and $\theta$ are inner functions, and find the smallest model subspace which contains the linear manifold $C_\varphi K_\theta$. Then we characterize the case when $C_\varphi$ maps $K_\theta$ into itself. This case leads to the study of the inner functions $\varphi$ and $\psi$ such that the composition $\psi\circ\varphi$ is a divisor of $\psi$ in the family of inner functions. Keywords: composition operators, inner functions, Blaschke products, model subspaces. Categories: 30D55, 30D05, 47B33
3. CJM 2012 (vol 65 pp. 241)
Aguiar, Marcelo; Lauve, Aaron
Lagrange's Theorem for Hopf Monoids in Species. Following Radford's proof of Lagrange's theorem for pointed Hopf algebras, we prove Lagrange's theorem for Hopf monoids in the category of connected species. As a corollary, we obtain necessary conditions for a given subspecies $\mathbf k$ of a Hopf monoid $\mathbf h$ to be a Hopf submonoid: the quotient of any one of the generating series of $\mathbf h$ by the corresponding generating series of $\mathbf k$ must have nonnegative coefficients. Other corollaries include a necessary condition for a sequence of nonnegative integers to be the dimension sequence of a Hopf monoid in the form of certain polynomial inequalities, and of a set-theoretic Hopf monoid in the form of certain linear inequalities. The latter express that the binomial transform of the sequence must be nonnegative. Keywords: Hopf monoids, species, graded Hopf algebras, Lagrange's theorem, generating series, Poincaré-Birkhoff-Witt theorem, Hopf kernel, Lie kernel, primitive element, partition, composition, linear order, cyclic order, derangement. Categories: 05A15, 05A20, 05E99, 16T05, 16T30, 18D10, 18D35
4. CJM 2011 (vol 64 pp. 1329)
Izuchi, Kei Ji; Nguyen, Quang Dieu; Ohno, Shûichi
Composition Operators Induced by Analytic Maps to the Polydisk. We study properties of composition operators induced by symbols acting from the unit disk to the polydisk. This result will be involved in the investigation of weighted composition operators on the Hardy space on the unit disk and moreover be concerned with composition operators acting from the Bergman space to the Hardy space on the unit disk. Keywords: composition operators, Hardy spaces, polydisk. Categories: 47B33, 32A35, 30H10
5. CJM 2011 (vol 63 pp. 862)
Hosokawa, Takuya; Nieminen, Pekka J.; Ohno, Shûichi
Linear Combinations of Composition Operators on the Bloch Spaces
We characterize the compactness of linear combinations of analytic composition operators on the Bloch space. We also study their boundedness and compactness on the little Bloch space.
Keywords: composition operator, compactness, Bloch space
Categories: 47B33, 30D45, 47B07
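For reference, the Bloch space named here is the standard one (not defined in the abstract): $\mathcal B$ consists of the analytic functions $f$ on the unit disk $\mathbb D$ with
$$\|f\|_{\mathcal B} = |f(0)| + \sup_{z\in\mathbb D} (1-|z|^2)\,|f'(z)| < \infty,$$
and the little Bloch space $\mathcal B_0$ is the closed subspace with $(1-|z|^2)|f'(z)| \to 0$ as $|z|\to 1^-$.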
6. CJM 2009 (vol 62 pp. 305)
Hua, He; Yunbai, Dong; Xianzhou, Guo
Approximation and Similarity Classification of Stably Finitely Strongly Irreducible Decomposable Operators
Let $\mathcal H$ be a complex separable Hilbert space and ${\mathcal L}({\mathcal H})$ denote the collection of bounded linear operators on ${\mathcal H}$. In this paper, we show that for any operator $A\in{\mathcal L}({\mathcal H})$ there exists a stably finitely (SI) decomposable operator $A_\epsilon$ such that $\|A-A_{\epsilon}\|<\epsilon$ and ${{\mathcal A}'(A_{\epsilon})}/\operatorname{rad}{{\mathcal A}'(A_{\epsilon})}$ is commutative, where $\operatorname{rad}{{\mathcal A}'(A_{\epsilon})}$ is the Jacobson radical of ${{\mathcal A}'(A_{\epsilon})}$. Moreover, we give a similarity classification of the stably finitely decomposable operators that generalizes the result on the similarity classification of Cowen-Douglas operators given by C. L. Jiang.
Keywords: $K_{0}$-group, strongly irreducible decomposition, Cowen-Douglas operators, commutant algebra, similarity classification
Categories: 47A05, 47A55, 46H20
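For reference, the standard notation left undefined in the abstract: ${\mathcal A}'(A) = \{B\in{\mathcal L}({\mathcal H}) : AB = BA\}$ is the commutant algebra of $A$, so the approximation statement says that the commutant of $A_\epsilon$ becomes commutative after quotienting by its Jacobson radical.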
7. CJM 2009 (vol 62 pp. 182)
Prajs, Janusz R.
Mutually Aposyndetic Decomposition of Homogeneous Continua
A new decomposition, the \emph{mutually aposyndetic decomposition}, of homogeneous continua into closed, homogeneous sets is introduced. This decomposition is respected by homeomorphisms and is topologically unique. Its quotient is a mutually aposyndetic homogeneous continuum, and in all known examples, as well as in some general cases, the members of the decomposition are semi-indecomposable continua. As applications, we show that hereditarily decomposable homogeneous continua and path connected homogeneous continua are mutually aposyndetic. A class of new examples of homogeneous continua is defined. The mutually aposyndetic decomposition of each of these continua is non-trivial and different from Jones' aposyndetic decomposition.
Keywords: ample, aposyndetic, continuum, decomposition, filament, homogeneous
Categories: 54F15, 54B15
8. CJM 2006 (vol 58 pp. 877)
Selick, P.; Theriault, S.; Wu, J.
Functorial Decompositions of Looped Coassociative Co-$H$ Spaces
Selick and Wu gave a functorial decomposition of $\Omega\Sigma X$ for path-connected, $p$-local CW-complexes $X$, which yields the smallest nontrivial functorial retract $A^{\min}(X)$ of $\Omega\Sigma X$. This paper uses methods developed by the second author in order to extend such functorial decompositions to the loops on coassociative co-$H$ spaces.
Keywords: homotopy decomposition, coassociative co-$H$ spaces
Category: 55P53
9. CJM 2003 (vol 55 pp. 1000)
Graczyk, P.; Sawyer, P.
Some Convexity Results for the Cartan Decomposition
In this paper, we consider the set $\mathcal{S} = a(e^X K e^Y)$, where $a(g)$ is the abelian part in the Cartan decomposition of $g$. This is exactly the support of the measure intervening in the product formula for the spherical functions on symmetric spaces of noncompact type. We give a simple description of that support in the case of $\mathrm{SL}(3,\mathbf{F})$, where $\mathbf{F} = \mathbf{R}$, $\mathbf{C}$ or $\mathbf{H}$. In particular, we show that $\mathcal{S}$ is convex. We also give an application of our result to the description of the singular values of a product of two arbitrary matrices with prescribed singular values.
Keywords: convexity theorems, Cartan decomposition, spherical functions, product formula, semisimple Lie groups, singular values
Categories: 43A90, 53C35, 15A18
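For context, the standard facts behind the abstract (not stated in it): in the Cartan decomposition $G = KAK$, every $g\in G$ factors as
$$g = k_1\, e^{a(g)}\, k_2, \qquad k_1, k_2 \in K,$$
with $a(g)$ uniquely determined in a fixed closed Weyl chamber. For $G = \mathrm{SL}(n,\mathbf{R})$ with $K = \mathrm{SO}(n)$ this is the singular value decomposition, $e^{a(g)} = \operatorname{diag}(\sigma_1,\dots,\sigma_n)$, which is what links $\mathcal{S}$ to products of matrices with prescribed singular values.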
10. CJM 1999 (vol 51 pp. 850)
Muhly, Paul S.; Solel, Baruch
Tensor Algebras, Induced Representations, and the Wold Decomposition
Our objective in this sequel to \cite{MSp96a} is to develop extensions, to representations of tensor algebras over $C^{*}$-correspondences, of two fundamental facts about isometries on Hilbert space: the Wold decomposition theorem and Beurling's theorem, and to apply these to the analysis of the invariant subspace structure of certain subalgebras of Cuntz-Krieger algebras.
Keywords: tensor algebras, correspondence, induced representation, Wold decomposition, Beurling's theorem
Categories: 46L05, 46L40, 46L89, 47D15, 47D25, 46M10, 46M99, 47A20, 47A45, 47B35
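For reference, the two classical facts referred to (standard statements, not given in the abstract): the Wold decomposition writes every isometry $V$ on a Hilbert space as
$$V = U \oplus S,$$
a unitary part plus a direct sum of copies of the unilateral shift; and Beurling's theorem says the nonzero shift-invariant subspaces of $H^2$ are exactly $\theta H^2$ for inner functions $\theta$.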
11. CJM 1998 (vol 50 pp. 525)
Brockman, William; Haiman, Mark
Nilpotent orbit varieties and the atomic decomposition of the $q$-Kostka polynomials
We study the coordinate rings $k[\overline{C_\mu}\cap\mathfrak{t}]$ of scheme-theoretic intersections of nilpotent orbit closures with the diagonal matrices $\mathfrak{t}$. Here $\mu'$ gives the Jordan block structure of the nilpotent matrix. de Concini and Procesi \cite{deConcini&Procesi} proved a conjecture of Kraft \cite{Kraft} that these rings are isomorphic to the cohomology rings of the varieties constructed by Springer \cite{Springer76,Springer78}. The famous $q$-Kostka polynomial $K_{\lambda\mu}(q)$ is the Hilbert series for the multiplicity of the irreducible symmetric group representation indexed by $\lambda$ in the ring $k[\overline{C_\mu}\cap\mathfrak{t}]$. Lascoux and Schützenberger \cite{L&S:Plaxique,Lascoux} gave combinatorially a decomposition of $K_{\lambda\mu}(q)$ as a sum of "atomic" polynomials with non-negative integer coefficients, and Lascoux proposed a corresponding decomposition in the cohomology model. Our work provides a geometric interpretation of the atomic decomposition. The Frobenius-splitting results of Mehta and van der Kallen \cite{Mehta&vanderKallen} imply a direct-sum decomposition of the ideals of nilpotent orbit closures, arising from the inclusions of the corresponding sets. We carry out the restriction to the diagonal using a recent theorem of Broer \cite{Broer}. This gives a direct-sum decomposition of the ideals yielding the rings $k[\overline{C_\mu}\cap\mathfrak{t}]$, and a new proof of the atomic decomposition of the $q$-Kostka polynomials.
Keywords: $q$-Kostka polynomials, atomic decomposition, nilpotent conjugacy classes, nilpotent orbit varieties
Categories: 05E10, 14M99, 20G05, 05E15
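For background, the combinatorial description underlying Lascoux and Schützenberger's work (a standard statement, up to normalization conventions, and not part of the abstract):
$$K_{\lambda\mu}(q) = \sum_{T \in \mathrm{SSYT}(\lambda,\mu)} q^{\operatorname{charge}(T)},$$
the sum running over semistandard Young tableaux of shape $\lambda$ and content $\mu$; the atomic decomposition refines this positivity.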