EuDML | Geometric K-homology and controlled paths.

Keswani, Navin. "Geometric K-homology and controlled paths." The New York Journal of Mathematics [electronic only] 5 (1999): 53-81. <http://eudml.org/doc/120114>.

Keywords: Dirac-type operator; finite propagation speed; trace class operator; K-homology; Baum's geometric theory; smooth oriented Riemannian manifold; Clifford bundle; trace class perturbation; Schwartz class; Fourier transform; Dirac operators.
Eric Leichtnam, Paolo Piazza, Elliptic operators and higher signatures
Classification: homological methods (exact sequences, right inverses, lifting, etc.); C*- and W*-algebras; K-theory and operator algebras; Ext and K-homology; Kasparov theory (KK-theory).
Dimension - Simple English Wikipedia, the free encyclopedia
maximum number of independent directions within a mathematical space
Dimensions are the way we see, measure and experience our world: up and down, right to left, back to front, hot and cold, how heavy and how long, as well as more advanced concepts from mathematics and physics. One way to define a dimension is to look at the degrees of freedom, that is, the ways an object can move in a specific space. The term dimension is used in several different concepts, each with its own definition; no single definition satisfies all of them.
From left to right, the square, the cube, and the tesseract. The square is a 2-dimensional object, the cube is a 3-dimensional object, and the tesseract is a 4-dimensional object. A 1-dimensional object is just a line. A projection of the cube is given since it is viewed on a two-dimensional screen. The same applies to the tesseract, which additionally can only be shown as a projection even in three-dimensional space.
A diagram of the first four spatial dimensions.
In a vector space V (with vectors being "arrows" with directions), the dimension of V, written dim(V),[1] is equal to the cardinality (the number of vectors) of a basis of V[2][3] (a set which indicates how many unique directions V actually has). It is also equal to the number of the largest group of straight line directions of that space. "Normal" objects in everyday life are specified by three dimensions, which are usually called length, width and depth. Mathematicians call this concept Euclidean space.
Dimensions can be used to measure position too. The distance to a position from a starting place can be measured in the length, width and height directions. These distances are a measure of the position.
On some occasions, a fourth dimension (4D), time, is used to show the position of an event in time and space.
In modern science, people use other dimensions. Dimensions like temperature and weight can be used to show the position of something in less simple spaces. Scientists study those dimensions with dimensional analysis.
Mathematicians also use dimensions. In mathematics, dimensions are more general. Dimensions in mathematics might not measure things in the world. The rules for doing arithmetic with dimensions in mathematics may differ from the usual arithmetic rules.
Dimensions and vectors
Vectors are used to show distances and directions. Vectors are often used in engineering and science, and sometimes in mathematics.
A vector is a list of numbers. There is one number for each dimension. There are arithmetic rules for vectors.
For example, if Jane wants to know the position of Sally, Sally can give Jane a vector to show the position. If Jane and Sally are in the world, there are three dimensions. Therefore, Sally gives Jane a list of three numbers to show her position. The three numbers in the vector Sally gives Jane might be:
Sally's distance north of Jane
Sally's distance east of Jane
Sally's height above Jane
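The Sally-and-Jane example above can be sketched in a few lines of Python (the numbers are made up for illustration):

```python
# One number per dimension: [north, east, up], in metres.
# Hypothetical position of Sally relative to Jane.
sally_from_jane = [3.0, 4.0, 0.0]

# Vector arithmetic works component by component.
def add(v, w):
    """Add two vectors of the same dimension."""
    return [a + b for a, b in zip(v, w)]

def scale(c, v):
    """Multiply every component of v by the number c."""
    return [c * a for a in v]

# If Sally walks 1 m further north and 2 m up, her new position is:
new_position = add(sally_from_jane, [1.0, 0.0, 2.0])
print(new_position)  # [4.0, 4.0, 2.0]

# The straight-line distance uses all three dimensions (Pythagoras):
distance = sum(a * a for a in sally_from_jane) ** 0.5
print(distance)  # 5.0
```

The length of the list is the dimension of the space: two numbers for a flat map, three for the everyday world, four if time is included.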
Hypercube, generalization of square and cube beyond three dimensions
Minkowski spacetime, a four-dimensional manifold
↑ Weisstein, Eric W. "Dimension". mathworld.wolfram.com. Retrieved 2020-09-07.
↑ "Basis and Dimension". people.math.carleton.ca. Retrieved 2020-09-07.
EuDML | Modularity of the Rankin-Selberg L-series, and multiplicity one for SL(2).

Ramakrishnan, Dinakar. "Modularity of the Rankin-Selberg L-series, and multiplicity one for SL(2)." Annals of Mathematics. Second Series 152.1 (2000): 45-111. <http://eudml.org/doc/121894>.

Keywords: isobaric automorphic representations; L-factor; ε-factor; cuspidality; L-functions; multiplicity one; Tate conjecture.
Guy Henniart, Progrès récents en fonctorialité de Langlands
Cristian Virdol, Non-solvable base change for Hilbert modular representations and zeta functions of twisted quaternionic Shimura varieties
Classification: representations of Lie and linear algebraic groups over global fields and adèle rings; representation-theoretic methods; automorphic representations over local and global fields.
EuDML | On Kazhdan's property (T) and Kazhdan constants associated to a Laplacian for SL(3,R).

Bekka, M.E.B., and Mayer, M. "On Kazhdan's property (T) and Kazhdan constants associated to a Laplacian for SL(3,R)." Journal of Lie Theory 10.1 (2000): 93-105. <http://eudml.org/doc/120568>.

Keywords: Kazhdan's property (T); SL(3,k); locally compact group; simple Lie group; connected Lie group; Kazhdan constants; Laplacian; unitary representation.
Malayalam: Failed to parse (syntax error): \frac{ക}{ച}
Tamil: Failed to parse (syntax error): \frac{க}{த}
Hindi: Failed to parse (syntax error): \frac{क}{च}
Also, if one keeps the code as "<math>\text{क्ष}</math>", then it is rendered as {\displaystyle {\text{क्ष}}} instead of "क्ष". This is because "क्ष" is a conjunct consonant, i.e., क्ष = क् + ष.
Simple forms like Failed to parse (syntax error): ക also fail!
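The conjunct behaviour described above can be verified directly: in Unicode, क्ष is not a single character but three code points, with a virama joining क and ष. A short standard-library Python check:

```python
import unicodedata

# "क्ष" looks like one glyph but is three Unicode code points:
# KA + VIRAMA + SSA. The virama joins क and ष into the conjunct.
conjunct = "\u0915\u094d\u0937"   # क्ष

print(len(conjunct))  # 3
for ch in conjunct:
    print(f"U+{ord(ch):04X} {unicodedata.name(ch)}")
```

This is why \text{क्ष} trips up the renderer: the shaping of the three code points into one glyph is a font/layout operation that the math pipeline does not perform.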
Another problem: {\displaystyle {\cfrac {\text{ച}}{\angle 4{\text{ത്ര}}^{3}}}=} fails when rendering in MathML.
Consider the example of the non-standard text character ä (a-umlaut). It has to be mentioned that Failed to parse (syntax error): ä is not valid, even though it is rendered in a way that could be considered correct. {\displaystyle {\text{ä}}} and Failed to parse (syntax error): \text{\"a} are both valid TeX input, but the second is not rendered correctly (it is supposed to look like the first).
Estimate parameters of regression models with ARIMA errors - MATLAB - MathWorks 한국
Estimate Parameters of Regression Model Containing ARIMA Errors Without Initial Values
Estimate Parameters of a Regression Model with ARIMA Errors Using Initial Values
Class: regARIMA
Estimate parameters of regression models with ARIMA errors
[EstMdl,EstParamCov,logL,info] = estimate(Mdl,y)
[EstMdl,EstParamCov,logL,info] = estimate(Mdl,y,Name,Value)
EstMdl = estimate(Mdl,y) uses maximum likelihood to estimate the parameters of the regression model with ARIMA time series errors, Mdl, given the response series y. EstMdl is a regARIMA model that stores the results.
[EstMdl,EstParamCov,logL,info] = estimate(Mdl,y) additionally returns EstParamCov, the variance-covariance matrix associated with estimated parameters, logL, the optimized loglikelihood objective function, and info, a data structure of summary information.
[EstMdl,EstParamCov,logL,info] = estimate(Mdl,y,Name,Value) estimates the model using additional options specified by one or more Name,Value pair arguments.
Mdl — Regression model with ARIMA errors
Regression model with ARIMA errors, specified as a regARIMA model returned by regARIMA or estimate.
y — Single path of response data
Single path of response data to which the model is fit, specified as a numeric column vector. The last observation of y is the latest.
AR0 — Initial estimates of ARIMA error model nonseasonal autoregressive coefficients
Initial estimates of ARIMA error model nonseasonal autoregressive coefficients, specified as the comma-separated pair consisting of 'AR0' and a numeric vector.
The number of coefficients in AR0 must equal the number of lags associated with nonzero coefficients in the nonseasonal autoregressive polynomial.
Beta0 — Initial estimates of regression coefficients
Initial estimates of regression coefficients, specified as the comma-separated pair consisting of 'Beta0' and a numeric vector.
The number of coefficients in Beta0 must equal the number of columns of X.
DoF0 — Initial t-distribution degree-of-freedom estimate
Initial t-distribution degree-of-freedom estimate, specified as the comma-separated pair consisting of 'DoF0' and a positive scalar. DoF0 must exceed 2.
E0 — Presample innovations
Presample innovations that have mean 0 and provide initial values for the ARIMA error model, specified as the comma-separated pair consisting of 'E0' and a numeric column vector. E0 must contain at least Mdl.Q rows. If E0 contains extra rows, then estimate uses the latest Mdl.Q presample innovations. The last row contains the latest presample innovation.
By default, estimate sets the necessary presample innovations to 0.
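The documented trimming rule ("use the latest Mdl.Q presample innovations") amounts to taking a trailing slice. The helper below is a hypothetical Python illustration of that rule, not MathWorks code:

```python
def trim_presample(e0, q):
    """Keep only the latest q presample values; raise if too few.

    Mirrors the documented rule: if e0 has extra rows, only the
    last q are used, with the final entry being the most recent.
    """
    if len(e0) < q:
        raise ValueError(f"need at least {q} presample values, got {len(e0)}")
    return e0[-q:]

print(trim_presample([0.1, -0.2, 0.05, 0.3], 2))  # [0.05, 0.3]
```

The same logic applies to U0 with Mdl.P in place of Mdl.Q.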
Intercept0 — Initial regression model intercept estimate
Initial regression model intercept estimate, specified as the comma-separated pair consisting of 'Intercept0' and a scalar.
MA0 — Initial estimates of ARIMA error model nonseasonal moving average coefficients
Initial estimates of ARIMA error model nonseasonal moving average coefficients, specified as the comma-separated pair consisting of 'MA0' and a numeric vector.
The number of coefficients in MA0 must equal the number of lags associated with nonzero coefficients in the nonseasonal moving average polynomial.
SAR0 — Initial estimates of ARIMA error model seasonal autoregressive coefficients
Initial estimates of ARIMA error model seasonal autoregressive coefficients, specified as the comma-separated pair consisting of 'SAR0' and a numeric vector.
The number of coefficients in SAR0 must equal the number of lags associated with nonzero coefficients in the seasonal autoregressive polynomial.
SMA0 — Initial estimates of ARIMA error model seasonal moving average coefficients
Initial estimates of ARIMA error model seasonal moving average coefficients, specified as the comma-separated pair consisting of 'SMA0' and a numeric vector.
The number of coefficients in SMA0 must equal the number of lags with nonzero coefficients in the seasonal moving average polynomial.
U0 — Presample unconditional disturbances
Presample unconditional disturbances that provide initial values for the ARIMA error model, specified as the comma-separated pair consisting of 'U0' and a numeric column vector. U0 must contain at least Mdl.P rows. If U0 contains extra rows, then estimate uses the latest Mdl.P presample unconditional disturbances. The last row contains the latest presample unconditional disturbance.
By default, estimate backcasts for the necessary number of presample unconditional disturbances.
Variance0 — Initial estimate of ARIMA error model innovation variance
Initial estimate of ARIMA error model innovation variance, specified as the comma-separated pair consisting of 'Variance0' and a positive scalar.
X — Predictor data
Predictor data in the regression model, specified as the comma-separated pair consisting of 'X' and a matrix.
The columns of X are separate, synchronized time series, with the last row containing the latest observations. The number of rows of X must be at least the length of y. If the number of rows of X exceeds the number required, then estimate uses the latest observations.
If you do not specify X, then estimate excludes the regression component, regardless of its presence in Mdl.
NaNs in y, E0, U0, and X indicate missing values, and estimate removes them. The software merges the presample data (E0 and U0) separately from the effective sample data (X and y), then uses list-wise deletion to remove any NaNs. Removing NaNs in the data reduces the sample size, and can also create irregular time series.
estimate assumes that you synchronize the data (presample separately from effective sample) such that the latest observations occur simultaneously.
The intercept of a regression model with ARIMA errors having nonzero degrees of seasonal or nonseasonal integration is not identifiable. In other words, estimate cannot estimate an intercept of a regression model with ARIMA errors that has nonzero degrees of seasonal or nonseasonal integration. If you pass in such a model for estimation, estimate displays a warning in the Command Window and sets EstMdl.Intercept to NaN.
EstMdl — Model containing parameter estimates
Model containing the parameter estimates, returned as a regARIMA model. estimate uses maximum likelihood to calculate all parameter estimates not constrained by Mdl (that is, all parameters in Mdl that you set to NaN).
EstParamCov — Variance-covariance matrix of maximum likelihood estimates
Variance-covariance matrix of maximum likelihood estimates of model parameters known to the optimizer, returned as a matrix.
The rows and columns contain the covariances of the parameter estimates. The standard errors of the parameter estimates are the square root of the entries along the main diagonal. The rows and columns associated with any parameters held fixed as equality constraints contain 0s.
Nonzero AR coefficients at positive lags
Nonzero SAR coefficients at positive lags
Nonzero MA coefficients at positive lags
Nonzero SMA coefficients at positive lags
Regression coefficients (when you specify X in estimate)
Innovations variance
Degrees of freedom for the t distribution
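The relationship between EstParamCov and the standard errors described above can be sketched in Python; the covariance matrix here is hypothetical, with the zero row and column standing for a parameter held fixed:

```python
import math

def standard_errors(cov):
    """Standard errors are the square roots of the diagonal entries of
    the parameter covariance matrix; rows and columns of parameters held
    fixed are all zeros, giving a standard error of 0."""
    return [math.sqrt(cov[i][i]) for i in range(len(cov))]

# Hypothetical 3x3 covariance matrix; the third parameter is held fixed,
# so its row and column contain 0s, as the text above describes.
cov = [
    [0.04, 0.001, 0.0],
    [0.001, 0.09, 0.0],
    [0.0, 0.0, 0.0],
]
print(standard_errors(cov))
```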
info — Summary information
Summary information, returned as a structure.
For example, you can display the vector of final estimates by typing info.X in the Command Window.
Fit this regression model with ARMA(2,1) errors to simulated data:
{\displaystyle y_{t}=X_{t}{\begin{bmatrix}0.1\\-0.2\end{bmatrix}}+u_{t},\qquad u_{t}=0.5u_{t-1}-0.8u_{t-2}+\varepsilon _{t}-0.5\varepsilon _{t-1},}

where {\displaystyle \varepsilon _{t}} is Gaussian with mean 0 and variance 0.1.
Specify the regression model with ARMA(2,1) errors. Simulate responses from the model and two predictor series.
Mdl0 = regARIMA('Intercept',0,'AR',{0.5 -0.8}, ...
'MA',-0.5,'Beta',[0.1 -0.2],'Variance',0.1);
rng(1); % For reproducibility
X = randn(100,2);
y = simulate(Mdl0,100,'X',X);
Specify a regression model with ARMA(2,1) errors with no intercept, and unknown coefficients and variance.
Mdl = regARIMA(2,0,1);
Mdl.Intercept = 0 % Exclude the intercept
Description: "ARMA(2,1) Error Model (Gaussian Distribution)"
The AR coefficients, MA coefficients, and the innovation variance are NaN values. estimate estimates those parameters, but not the intercept. The intercept is held fixed at 0.
Fit the regression model with ARMA(2,1) errors to the data.
EstMdl = estimate(Mdl,y,'X',X,'Display','params');
Parameter     Value        StandardError   TStatistic   PValue
Intercept     0            0               NaN          NaN
AR{1}         0.6203       0.10419         5.9534       2.6267e-09
AR{2}         -0.69717     0.079575        -8.7612      1.9315e-18
MA{1}         -0.55808     0.1319          -4.2312      2.3243e-05
Beta(1)       0.10367      0.021735        4.7696       1.8456e-06
Beta(2)       -0.20945     0.024188        -8.659       4.7574e-18
Variance      0.074885     0.0090358       8.2876       1.1558e-16
The result, EstMdl, is a new regARIMA model. The estimates in EstMdl resemble the parameter values that generated the simulated data.
Fit a regression model with ARMA(1,1) errors by regressing the log GDP onto the CPI and using initial values.
Load the US Macroeconomic data set and preprocess the data.
load Data_USEconModel;
logGDP = log(DataTable.GDP);
dlogGDP = diff(logGDP); % For stationarity
dCPI = diff(DataTable.CPIAUCSL); % For stationarity
T = length(dlogGDP); % Effective sample size
Specify a regression model with ARMA(1,1) errors in which all estimable parameters are unknown.
EstMdl = regARIMA(1,0,1);
Fit the model to the first half of the data.
EstMdl0 = estimate(EstMdl,dlogGDP(1:ceil(T/2)),...
'X',dCPI(1:ceil(T/2)),'Display','off');
The result is a new regARIMA model with the estimated parameters.
Use the estimated parameters as initial values for fitting the second half of the data.
Intercept0 = EstMdl0.Intercept;
AR0 = EstMdl0.AR{1};
MA0 = EstMdl0.MA{1};
Variance0 = EstMdl0.Variance;
Beta0 = EstMdl0.Beta;
[EstMdl,~,~,info] = estimate(EstMdl,dlogGDP(floor(T/2)+1:end),...
'X',dCPI(floor(T/2)+1:end),'Display','params',...
'Intercept0',Intercept0,'AR0',AR0,'MA0',MA0,...
'Variance0',Variance0,'Beta0',Beta0);
Parameter     Value        StandardError   TStatistic   PValue
Intercept     0.011174     0.002102        5.3158       1.0619e-07
AR{1}         0.78684      0.036229        21.718       1.376e-104
MA{1}         -0.47362     0.06554         -7.2264      4.9601e-13
Beta(1)       0.0021933    0.00058327      3.7604       0.00016966
Variance      4.8349e-05   4.1705e-06      11.593       4.4716e-31
Display all of the parameter estimates using info.X.
info.X
The order of the parameter estimates in info.X matches the order that estimate displays in its output table.
estimate estimates the parameters as follows:
Infer the unconditional disturbances from the regression model.
Infer the residuals of the ARIMA error model.
Use the distribution of the innovations to build the likelihood function.
Maximize the loglikelihood function with respect to the parameters using fmincon.
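The estimation steps above can be sketched in pure Python for the simplest case, a regression with AR(1) errors. The sketch uses iterated Cochrane-Orcutt least squares as a stand-in for the likelihood maximisation that estimate performs with fmincon; all names and the simulated data are illustrative, not MathWorks code:

```python
import random

# Model: y_t = beta * x_t + u_t,  u_t = phi * u_{t-1} + eps_t.
random.seed(7)
n, beta_true, phi_true = 600, 2.0, 0.7

x = [random.gauss(0, 1) for _ in range(n)]
u, y = [0.0], []
for t in range(n):
    u.append(phi_true * u[-1] + random.gauss(0, 1))
    y.append(beta_true * x[t] + u[-1])
u = u[1:]  # drop the u_0 = 0 seed

def ols_slope(xs, ys):
    """Slope of a no-intercept least-squares fit of ys on xs."""
    return sum(a * b for a, b in zip(xs, ys)) / sum(a * a for a in xs)

phi = 0.0
for _ in range(25):
    # 1. Quasi-difference the data with the current phi.
    xs = [x[t] - phi * x[t - 1] for t in range(1, n)]
    ys = [y[t] - phi * y[t - 1] for t in range(1, n)]
    # 2. Re-estimate beta on the transformed data.
    beta = ols_slope(xs, ys)
    # 3. Infer the disturbances and re-estimate phi from them.
    resid = [y[t] - beta * x[t] for t in range(n)]
    phi = ols_slope(resid[:-1], resid[1:])

print(beta, phi)  # should be near the true values 2.0 and 0.7
```

The structure mirrors the documented procedure: infer the disturbances from the regression, infer the AR residuals, and iterate the fit; a full ML implementation would instead maximise the Gaussian loglikelihood over all parameters at once.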
[3] Enders, W. Applied Econometric Time Series. Hoboken, NJ: John Wiley & Sons, Inc., 1995.
[5] Pankratz, A. Forecasting with Dynamic Regression Models. John Wiley & Sons, Inc., 1991.
forecast | infer | simulate | summarize |
Flexible AC transmission system - Wikipedia
A flexible alternating current transmission system (FACTS) is a system composed of static equipment used for the alternating current (AC) transmission of electrical energy. It is meant to enhance controllability and increase power transfer capability of the network. It is generally a power electronics-based system.
FACTS is defined by the Institute of Electrical and Electronics Engineers (IEEE) as "a power electronic based system and other static equipment that provide control of one or more AC transmission system parameters to enhance controllability and increase power transfer capability".[1]
According to Siemens, "FACTS increase the reliability of AC grids and reduce power delivery costs. They improve transmission quality and efficiency of power transmission by supplying inductive or reactive power to grid."[2]
Transmission on a no-loss line.
Series compensation.
Shunt compensation.
Shunt compensation
This method is used to improve the power factor. Whenever an inductive load is connected to the transmission line, power factor lags because of lagging load current. To compensate, a shunt capacitor is connected which draws the current leading the source voltage. The net result is improvement in power factor.
{\displaystyle P=\left({\frac {EV}{X}}\right)\sin(\delta )}

where {\displaystyle \delta } is the power angle. In the case of a no-loss line, the voltage magnitude at the receiving end is the same as the voltage magnitude at the sending end: Vs = Vr = V. Transmission results in a phase lag {\displaystyle \delta } that depends on the line reactance X.
{\displaystyle {\begin{aligned}{\underline {V_{s}}}&=V\cos \left({\frac {\delta }{2}}\right)+jV\sin \left({\frac {\delta }{2}}\right)\\[3pt]{\underline {V_{r}}}&=V\cos \left({\frac {\delta }{2}}\right)-jV\sin \left({\frac {\delta }{2}}\right)\\[3pt]{\underline {I}}&={\frac {{\underline {V_{s}}}-{\underline {V_{r}}}}{jX}}={\frac {2V\sin {\left({\frac {\delta }{2}}\right)}}{X}}\end{aligned}}}
As it is a no-loss line, active power P is the same at any point of the line:
{\displaystyle P_{s}=P_{r}=P=V\cos \left({\frac {\delta }{2}}\right)\cdot {\frac {2V\sin {\left({\frac {\delta }{2}}\right)}}{X}}={\frac {V^{2}}{X}}\sin(\delta )}
Reactive power at sending end is the opposite of reactive power at receiving end:
{\displaystyle Q_{s}=-Q_{r}=Q=V\sin \left({\frac {\delta }{2}}\right)\cdot {\frac {2V\sin \left({\frac {\delta }{2}}\right)}{X}}={\frac {V^{2}}{X}}(1-\cos \delta )}
As {\displaystyle \delta } is very small, active power mainly depends on {\displaystyle \delta }, whereas reactive power mainly depends on voltage magnitude.
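The no-loss line derivation above can be checked numerically; the per-unit values below are illustrative:

```python
import math

# Illustrative per-unit values: equal end voltages V, reactance X,
# power angle delta, for the lossless line of the derivation above.
V, X, delta = 1.0, 0.5, math.radians(30)

I = 2 * V * math.sin(delta / 2) / X        # line current magnitude
P = V * math.cos(delta / 2) * I            # active power, step form
P_closed = (V ** 2 / X) * math.sin(delta)  # closed form (V^2/X) sin(delta)
Q = V * math.sin(delta / 2) * I            # reactive power at sending end
Q_closed = (V ** 2 / X) * (1 - math.cos(delta))

print(round(P, 3), round(Q, 3))
# The step forms agree with the closed forms via the identities
# sin(d) = 2 sin(d/2) cos(d/2) and 1 - cos(d) = 2 sin^2(d/2).
```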
Series compensation
FACTS for series compensation modify line impedance: X is decreased so as to increase the transmittable active power. However, more reactive power must be provided.
{\displaystyle {\begin{aligned}P&={\frac {V^{2}}{X-Xc}}\sin(\delta )\\[3pt]Q&={\frac {V^{2}}{X-Xc}}(1-\cos(\delta ))\end{aligned}}}
Reactive current is injected into the line to maintain voltage magnitude. Transmittable active power is increased, but more reactive power must be provided.
{\displaystyle {\begin{aligned}P&={\frac {2V^{2}}{X}}\sin \left({\frac {\delta }{2}}\right)\\[3pt]Q&={\frac {4V^{2}}{X}}\left[1-\cos \left({\frac {\delta }{2}}\right)\right]\end{aligned}}}
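The effect of both kinds of compensation on transmittable power can be illustrated numerically; the per-unit values are assumptions for the sketch:

```python
import math

# Illustrative sketch: effect of series and shunt compensation on the
# transmittable active power of a lossless line (per-unit values assumed).

def p_series(V, X, Xc, delta):
    """Active power with a series capacitor cancelling reactance Xc."""
    return V ** 2 / (X - Xc) * math.sin(delta)

def p_shunt(V, X, delta):
    """Active power with ideal midpoint shunt voltage support."""
    return 2 * V ** 2 / X * math.sin(delta / 2)

V, X, delta = 1.0, 0.5, math.radians(30)
p0 = V ** 2 / X * math.sin(delta)            # uncompensated baseline
print(round(p0, 3))                          # 1.0
print(round(p_series(V, X, 0.2, delta), 3))  # higher than p0
print(round(p_shunt(V, X, delta), 3))        # also higher than p0
```

Both compensated values exceed the uncompensated baseline for the same power angle, which is the point of the two formula sets above.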
Examples of series compensation
Examples of FACTS for series compensation (schematic)
Thyristor-controlled series capacitor (TCSC): a series capacitor bank is shunted by a thyristor-controlled inductor reactor
Thyristor-controlled series reactor (TCSR): a series reactor bank is shunted by a thyristor-controlled reactor
Thyristor-switched series capacitor (TSSC): a series capacitor bank is shunted by a thyristor-switched reactor
Thyristor-switched series reactor (TSSR): a series reactor bank is shunted by a thyristor-switched reactor
Examples of shunt compensation
Examples of FACTS for shunt compensation (schematic)
Static synchronous compensator (STATCOM); previously known as a static condenser (STATCON)
Static VAR compensator (SVC). Most common SVCs are:
Thyristor-controlled reactor (TCR): a reactor is connected in series with a bidirectional thyristor valve. The thyristor valve is phase-controlled, so the equivalent reactance is varied continuously.
Thyristor-switched reactor (TSR): same as the TCR, but the thyristor is either in zero or full conduction, so the equivalent reactance is varied in a stepwise manner.
Thyristor-switched capacitor (TSC): a capacitor is connected in series with a bidirectional thyristor valve. The thyristor is either in zero or full conduction, so the equivalent reactance is varied in a stepwise manner.
Mechanically switched capacitor (MSC): a capacitor switched by a circuit breaker, aimed at compensating steady-state reactive power; it is switched only a few times a day.
^ Proposed terms and definitions for flexible AC transmission system(FACTS), IEEE Transactions on Power Delivery, Volume 12, Issue 4, October 1997, pp. 1848–1853. doi:10.1109/61.634216
^ Flexible AC Transmission Systems (FACTS) - Siemens
Narain G. Hingorani, Laszlo Gyugyi Understanding FACTS: Concepts and Technology of Flexible AC Transmission Systems, Wiley-IEEE Press, December 1999. ISBN 978-0-7803-3455-7
Xiao-Ping Zhang, Christian Rehtanz, Bikash Pal, Flexible AC Transmission Systems: Modelling and Control, Springer, March 2006. ISBN 978-3-540-30606-1. https://link.springer.com/book/10.1007%2F3-540-30607-2
Xiao-Ping Zhang, Christian Rehtanz, Bikash Pal, Flexible AC Transmission Systems: Modelling and Control, 2nd Edition, Springer, Feb 2012, ISBN 978-3-642-28240-9 (Print) 978-3-642-28241-6 (Online), https://link.springer.com/book/10.1007%2F978-3-642-28241-6
Upper tails of self-intersection local times of random walks: survey of proof techniques
Wolfgang König1
1 Technical University Berlin, Str. des 17. Juni 136, 10623 Berlin, and Weierstrass Institute for Applied Analysis and Stochastics, Mohrenstr. 39, 10117 Berlin, Germany
Actes des rencontres du CIRM, Volume 2 (2010) no. 1, pp. 15-24.
The asymptotics of the probability that the self-intersection local time of a random walk on ℤ^d exceeds its expectation by a large amount is a fascinating subject because of its relation to some models from Statistical Mechanics, to large-deviation theory and variational analysis, and because of the variety of the effects that can be observed. However, the proof of the upper bound is notoriously difficult and requires various sophisticated techniques. We survey some heuristics and some recently elaborated techniques and results. This is an extended summary of a talk held at the CIRM conference on excess self-intersection local times and related topics in Luminy, 6-10 Dec. 2010.
DOI: 10.5802/acirm.18
Classification: 60K37, 60F10, 60J55
Keywords: Self-intersection local time, upper tail, Donsker-Varadhan large deviations, variational formula
Wolfgang König. Upper tails of self-intersection local times of random walks: survey of proof techniques. Actes des rencontres du CIRM, Volume 2 (2010) no. 1, pp. 15-24. doi : 10.5802/acirm.18. https://acirm.centre-mersenne.org/articles/10.5802/acirm.18/
Why is adsorption always exothermic?
Name the method that is used for refining of nickel.
Why does NO2 dimerise?
Based on molecular forces, what type of polymer is neoprene?
What are the products of hydrolysis of maltose?
Write the structure of 4-chloropentan-2-one.
(i) Terylene
(ii) Nylon-6,6
(i) SiO2 in the extraction of copper from copper matte
(ii) NaCN in froth flotation process
\mathrm{Ag}+{\mathrm{PCl}}_{5} \to
{\mathrm{CaF}}_{2}+{\mathrm{H}}_{2}{\mathrm{SO}}_{4} \to
(ii) HClO4
(i) Write the type of magnetism observed when the magnetic moments are oppositely aligned and cancel out each other.
(ii) Which stoichiometric defect does not change the density of the crystal?
(i) Fuel cell
(ii) Limiting molar conductivity
\left({\wedge }_{m}^{0}\right)
{\mathrm{CH}}_{3}{\mathrm{CH}}_{2}\mathrm{OH} \stackrel{\mathrm{HBr}}{\to }{\mathrm{CH}}_{3}{\mathrm{CH}}_{2}\mathrm{Br}+{\mathrm{H}}_{2}\mathrm{O}
(ii) What is the slope of the curve?
(iii) Oligosaccharides
{\mathrm{C}}_{6}{\mathrm{H}}_{5}{\mathrm{NO}}_{2} \stackrel{ \mathrm{Sn} + \mathrm{HCl} }{\to } \mathrm{A} \underset{273 \mathrm{K}}{\overset{ {\mathrm{NaNO}}_{2} + \mathrm{HCl} }{\to }} \mathrm{B} \stackrel{ {\mathrm{H}}_{2}\mathrm{O} }{\to }
{\mathrm{CH}}_{3}\mathrm{CN} \stackrel{{\mathrm{H}}_{2}\mathrm{O}/{\mathrm{H}}^{+}}{\to } \mathrm{A} \underset{∆}{\overset{ {\mathrm{NH}}_{3} }{\to }} \mathrm{B} \stackrel{ {\mathrm{Br}}_{2}+\mathrm{KOH} }{\to } \mathrm{C}
(a) Draw the structures of major monohalo products in each of the following reactions:
Given : E°cell = +2.71 V, 1 F = 96500 C mol−1
Experiment Time/s Total pressure/atm
(Given : log 4 = 0.6021, log 2 = 0.3010)
{\mathrm{Cr}}_{2}{{\mathrm{O}}_{7}}^{2-} + 2{\mathrm{OH}}^{-} \to
{{\mathrm{MnO}}_{4}}^{-} + 4{\mathrm{H}}^{+} + 3{\mathrm{e}}^{-} \to
{{\mathrm{MnO}}_{4}}^{-} + 8{\mathrm{H}}^{+} + 5{\mathrm{e}}^{-}\to
(ii) Please check that this Question Paper contains 26 Questions.
(iii) Marks for each question are indicated against it.
(iv) Questions 1 to 6 in Section-A are Very Short Answer Type Questions carrying one mark each.
(v) Questions 7 to 19 in Section-B are Long Answer I Type Questions carrying 4 marks each.
(vi) Questions 20 to 26 in Section-C are Long Answer II Type Questions carrying 6 marks each.
(vii) Please write down the serial number of the Question before attempting it.
\stackrel{\to }{\mathrm{a}} . \left(\stackrel{\to }{\mathrm{b} } × \stackrel{\to }{\mathrm{a}}\right).
\stackrel{\to }{\mathrm{a}}= \stackrel{^}{\mathrm{i}} + 2 \stackrel{^}{\mathrm{j}} - \stackrel{^}{\mathrm{k}}, \stackrel{\to }{\mathrm{b}} = 2 \stackrel{^}{\mathrm{i}} + \stackrel{^}{\mathrm{j}} + \stackrel{^}{\mathrm{k}}
\stackrel{\to }{\mathrm{c}} = 5 \stackrel{^}{\mathrm{i}} - 4 \stackrel{^}{\mathrm{j}} + 3 \stackrel{^}{\mathrm{k}}
\left(\stackrel{\to }{\mathrm{a}} + \stackrel{\to }{\mathrm{b}}\right). \stackrel{\to }{\mathrm{c}}.
Write the direction ratios of the following line :
\mathrm{x} = -3, \frac{\mathrm{y}-4}{3} = \frac{2 -\mathrm{z}}{1}
\mathrm{A} = \left[\begin{array}{cc}2& 3\\ 5& -2\end{array}\right]
, then write A−1.
Find the differential equation representing the curve y = cx + c2.
Write the integrating factor of the following differential equation:
\left(1+{y}^{2}\right) dx-\left({\mathrm{tan}}^{-1} y-x\right) dy=0
Using the properties of determinants, prove the following:
\left|\begin{array}{ccc}1& x& x+1\\ 2x& x\left(x-1\right)& x\left(x+1\right)\\ 3x\left(1-x\right)& x\left(x-1\right) \left(x-2\right)& x\left(x+1\right) \left(x-1\right)\end{array}\right|=6{x}^{2} \left(1-{x}^{2}\right)
x=\alpha \mathrm{sin} 2t \left(1 + \mathrm{cos} 2t\right) \mathrm{and} y=\beta \mathrm{cos} 2t \left(1-\mathrm{cos} 2t\right)
\frac{dy}{dx}=\frac{\beta }{\alpha }\mathrm{tan} t
.
\frac{d}{dx}{\mathrm{cos}}^{-1} \left(\frac{x-{x}^{-1}}{x+{x}^{-1}}\right)
Find the derivative of the following function f(x) w.r.t. x, at x = 1 :
{\mathrm{cos}}^{-1} \left[\mathrm{sin} \sqrt{\frac{1+x}{2}}\right]+{x}^{x}
\underset{0}{\overset{\frac{\mathrm{\pi }}{2}}{\int }}\frac{{2}^{\mathrm{sin} x}}{{2}^{\mathrm{sin} x}+{2}^{\mathrm{cos} x}}dx
\underset{0}{\overset{3/2}{\int }} \left|x·\mathrm{cos} \left(\mathrm{\pi }x\right)\right|dx
To raise money for an orphanage, students of three schools A, B and C organised an exhibition in their locality, where they sold paper bags, scrap-books and pastel sheets made by them using recycled paper, at the rate of Rs 20, Rs 15 and Rs 5 per unit respectively. School A sold 25 paper bags, 12 scrap-books and 34 pastel sheets. School B sold 22 paper bags, 15 scrap-books and 28 pastel sheets while School C sold 26 paper bags, 18 scrap-books and 36 pastel sheets. Using matrices, find the total amount raised by each school.
By such exhibition, which values are generated in the students?
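As a cross-check on the matrix arithmetic in this question, the amounts raised can be computed as a matrix–vector product. The quantities and prices below are the ones stated in the problem; the code is a plain-Python sketch rather than a worked solution in matrix notation.

```python
# Each row: [paper bags, scrap-books, pastel sheets] sold by schools A, B, C.
quantities = [
    [25, 12, 34],  # School A
    [22, 15, 28],  # School B
    [26, 18, 36],  # School C
]
prices = [20, 15, 5]  # rupees per unit

# Matrix-vector product: total amount raised by each school.
totals = [sum(q * p for q, p in zip(row, prices)) for row in quantities]
# totals == [850, 805, 970]  (Rs 850 for A, Rs 805 for B, Rs 970 for C)
```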
2 {\mathrm{tan}}^{-1}\left(\sqrt{\frac{a-b}{a+b}}\mathrm{tan}\frac{\mathrm{x}}{2}\right)={\mathrm{cos}}^{-1}\left(\frac{a \mathrm{cos} x+b}{a+b \mathrm{cos} x}\right)
Solve the following for x :
{\mathrm{tan}}^{-1}\left(\frac{x-2}{x-3}\right)+{\mathrm{tan}}^{-1}\left(\frac{x+2}{x+3}\right)=\frac{\mathrm{\pi }}{4}, \left|x\right|<1.
\left(\begin{array}{ccc}2& 0& 1\\ 2& 1& 3\\ 1& -1& 0\end{array}\right)
, find A2 − 5 A + 16 I.
Show that four points A, B, C and D whose position vectors are
4\stackrel{^}{\mathrm{i}}+5\stackrel{^}{\mathrm{j}}+\stackrel{^}{\mathrm{k}}, -\stackrel{^}{\mathrm{j}} -\stackrel{^}{\mathrm{k}}, 3\stackrel{^}{\mathrm{i}}+9\stackrel{^}{\mathrm{j}}+4\stackrel{^}{\mathrm{k}} \mathrm{and} 4\left(-\stackrel{^}{\mathrm{i}}+\stackrel{^}{\mathrm{j}}+\stackrel{^}{\mathrm{k}}\right)
respectively are coplanar.
Show that the following two lines are coplanar:
\frac{x-a+d}{\alpha -\delta }= \frac{y-a}{\alpha }=\frac{z-a-d}{\alpha +\delta } and \frac{x-b+c}{\beta -\gamma }=\frac{y-b}{\beta }=\frac{z-b-c}{\beta +\gamma }
Find the acute angle between the plane 5x − 4y + 7z − 13 = 0 and the y-axis.
A and B throw a die alternatively till one of them gets a number greater than four and wins the game. If A starts the game, what is the probability of B winning?
A die is thrown three times. Events A and B are defined as below:
A : 5 on the first and 6 on the second throw.
B: 3 or 4 on the third throw.
Find the probability of B, given that A has already occurred.
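Both probabilities above can be checked with exact rational arithmetic. The first uses a geometric series over successive rounds of the game; the second uses the independence of the three throws. This is a verification sketch, not the expected written solution.

```python
from fractions import Fraction

p = Fraction(2, 6)          # P(number > 4 on one throw)
q = 1 - p
# A throws first, so B wins if the first success occurs on an even throw:
#   q*p + q^3*p + q^5*p + ... = q*p / (1 - q^2)
p_b_wins = q * p / (1 - q * q)   # == 2/5

# Second question: the third throw is independent of the first two,
# so P(B | A) = P(B) = P(3 or 4 on one throw).
p_b_given_a = Fraction(2, 6)     # == 1/3
```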
\int \left(\sqrt{\mathrm{cot} x}+\sqrt{\mathrm{tan} x}\right) dx
\int \frac{{x}^{3}-1}{{x}^{3}+x}dx
Using integration, find the area of the region bounded by the lines y = 2 + x, y = 2 – x and x = 2.
Show that the differential equation
2xy\frac{dy}{dx}={x}^{2}+3{y}^{2}
is homogeneous and solve it.
Find the direction ratios of the normal to the plane, which passes through the points (1, 0, 0) and (0, 1, 0) and makes angle
\frac{\mathrm{\pi }}{4}
with the plane x + y = 3. Also find the equation of the plane.
If the function f : R → R be defined by f(x) = 2x − 3 and g : R → R by g(x) = x3 + 5, then find the value of (fog)−1 (x).
Let A = Q ✕ Q, where Q is the set of all rational numbers, and * be a binary operation defined on A by
(a, b) * (c, d) = (ac, b + ad), for all (a, b), (c, d) ∈ A.
(i) the identity element in A
(ii) the invertible element of A.
f\left(x\right)=2{x}^{3}-9m{x}^{2}+12{m}^{2}x+1
m>0
attains its maximum and minimum at p and q respectively such that
{p}^{2}=q
, then find the value of m.
The postmaster of a local post office wishes to hire extra helpers during the Deepawali season, because of a large increase in the volume of mail handling and delivery. Because of the limited office space and the budgetary conditions, the number of temporary helpers must not exceed 10. According to past experience, a man can handle 300 letters and 80 packages per day, on the average, and a woman can handle 400 letters and 50 packets per day. The postmaster believes that the daily volume of extra mail and packages will be no less than 3400 and 680 respectively. A man receives Rs 225 a day and a woman receives Rs 200 a day. How many men and women helpers should be hired to keep the pay-roll at a minimum ? Formulate an LPP and solve it graphically.
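Because only whole helpers can be hired and the limit is small, the LPP above can be cross-checked by brute force over the integer staffing levels (a sketch for verification only; the question itself asks for the graphical method).

```python
# Minimize 225m + 200w subject to:
#   m + w <= 10, 300m + 400w >= 3400, 80m + 50w >= 680, m, w >= 0.
best = None
for men in range(11):
    for women in range(11 - men):          # enforces men + women <= 10
        if 300 * men + 400 * women >= 3400 and 80 * men + 50 * women >= 680:
            cost = 225 * men + 200 * women
            if best is None or cost < best[0]:
                best = (cost, men, women)
# best == (2150, 6, 4): hire 6 men and 4 women for a minimum payroll of Rs 2150.
```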
40% students of a college reside in hostel and the remaining reside outside. At the end of the year, 50% of the hostellers got A grade while from outside students, only 30% got A grade in the examination. At the end of the year, a student of the college was chosen at random and was found to have gotten A grade. What is the probability that the selected student was a hosteller ?
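The required probability is a direct application of Bayes' theorem; the exact fraction can be checked as follows (a verification sketch using the percentages stated in the question).

```python
from fractions import Fraction

p_hostel = Fraction(40, 100)          # P(hosteller)
p_a_given_hostel = Fraction(50, 100)  # P(A grade | hosteller)
p_a_given_day = Fraction(30, 100)     # P(A grade | day scholar)

# Bayes' theorem: P(hosteller | A grade)
numer = p_hostel * p_a_given_hostel
p_hosteller = numer / (numer + (1 - p_hostel) * p_a_given_day)
# p_hosteller == 10/19
```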
Convert radiation pattern from phi-theta coordinates to azimuth-elevation coordinates - MATLAB phitheta2azelpat - MathWorks 한국
Convert a radiation pattern to azimuth/elevation form, with the azimuth and elevation angles spaced 1° apart.
Define the pattern in terms of φ and θ.
The φ–θ samples in this example are spaced 5° apart, while the output azimuth–elevation grid is spaced 1° apart. The two coordinate systems are related by

\begin{array}{l}\mathrm{sin}\phantom{\rule{0.2em}{0ex}}el=\mathrm{sin}\phi \phantom{\rule{0.2em}{0ex}}\mathrm{sin}\theta \\ \mathrm{tan}\phantom{\rule{0.2em}{0ex}}az=\mathrm{cos}\phi \phantom{\rule{0.2em}{0ex}}\mathrm{tan}\theta \\ \mathrm{cos}\theta =\mathrm{cos}\phantom{\rule{0.2em}{0ex}}el\phantom{\rule{0.2em}{0ex}}\mathrm{cos}\phantom{\rule{0.2em}{0ex}}az\\ \mathrm{tan}\phi =\mathrm{tan}\phantom{\rule{0.2em}{0ex}}el/\mathrm{sin}\phantom{\rule{0.2em}{0ex}}az\end{array}

In the alternate phi–theta convention the angles are identified directly:

\begin{array}{l}\phi =az\\ \theta =90°-el\\ az=\phi \\ el=90°-\theta \end{array}
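Assuming the relations above are the usual spherical-coordinate identities, a minimal per-angle conversion might look like the sketch below. Note that `phitheta2azel` here is an illustrative helper, not the MATLAB function, which additionally interpolates the whole pattern onto the new 1° grid.

```python
import math

def phitheta2azel(phi_deg, theta_deg):
    """Convert a (phi, theta) direction to (azimuth, elevation), in degrees,
    using sin(el) = sin(phi)sin(theta) and tan(az) = cos(phi)tan(theta)."""
    phi, theta = math.radians(phi_deg), math.radians(theta_deg)
    el = math.degrees(math.asin(math.sin(phi) * math.sin(theta)))
    # atan2 keeps az in the correct quadrant; cos(phi)sin(theta)/cos(theta)
    # equals cos(phi)tan(theta).
    az = math.degrees(math.atan2(math.cos(phi) * math.sin(theta), math.cos(theta)))
    return az, el
```

For example, a direction in the x–z plane (φ = 0°, θ = 45°) maps to az = 45°, el = 0°, and one in the y–z plane (φ = 90°, θ = 30°) maps to az = 0°, el = 30°.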
CompressedSparseForm returns three Vectors, CB, R, and X, describing the nonzero entries of a Matrix A.

The compressed sparse column form of an m x n Matrix A with k nonzero entries is defined as follows. X is a Vector of length k holding the nonzero values of A, ordered first by column and then, within each column, by row. R is a Vector of length k whose entry R[i] is the row index in A of the value X[i]. CB is a Vector of length n+1: CB[i] gives the position within X and R of the first stored entry of column i of A, so the entries of column i occupy positions CB[i] through CB[i+1]-1 of X and R. By default CB[1] = 1 and CB[n+1] = k+1; if the cbbase option is supplied, the offsets are rebased so that CB[1] = cbbase and CB[n+1] = k + cbbase. The rbase option similarly rebases the row indices stored in R. The compressed sparse row form (option form = row) is entirely analogous, with the roles of the rows and columns of A interchanged: X is ordered by row, R holds column indices, and CB[i] marks the first stored entry of row i.

A must have one of the hardware datatypes sfloat, complex(sfloat), integer[1], integer[2], integer[4], integer[8], float[4], float[8], or complex[8]. If A additionally has sparse storage, that is, rtable_option(A, storage) is one of sparse, sparse[upper], or sparse[lower] (the latter two arising together with indexing functions, as reported by rtable_indfns(A), for example shape = symmetric with storage = sparse[upper]), the compressed form can be read off directly from A's internal representation.
\mathrm{with}\left(\mathrm{LinearAlgebra}\right):
m≔\mathrm{Matrix}\left(6,6,{\left(1,2\right)=-81,\left(2,3\right)=-55,\left(2,4\right)=-15,\left(3,1\right)=-46,\left(3,3\right)=-17,\left(3,4\right)=99,\left(3,5\right)=-61,\left(4,2\right)=18,\left(4,5\right)=-78,\left(5,6\right)=22},\mathrm{datatype}=\mathrm{integer}[4]\right)
\textcolor[rgb]{0,0,1}{m}\textcolor[rgb]{0,0,1}{≔}[\begin{array}{cccccc}\textcolor[rgb]{0,0,1}{0}& \textcolor[rgb]{0,0,1}{-81}& \textcolor[rgb]{0,0,1}{0}& \textcolor[rgb]{0,0,1}{0}& \textcolor[rgb]{0,0,1}{0}& \textcolor[rgb]{0,0,1}{0}\\ \textcolor[rgb]{0,0,1}{0}& \textcolor[rgb]{0,0,1}{0}& \textcolor[rgb]{0,0,1}{-55}& \textcolor[rgb]{0,0,1}{-15}& \textcolor[rgb]{0,0,1}{0}& \textcolor[rgb]{0,0,1}{0}\\ \textcolor[rgb]{0,0,1}{-46}& \textcolor[rgb]{0,0,1}{0}& \textcolor[rgb]{0,0,1}{-17}& \textcolor[rgb]{0,0,1}{99}& \textcolor[rgb]{0,0,1}{-61}& \textcolor[rgb]{0,0,1}{0}\\ \textcolor[rgb]{0,0,1}{0}& \textcolor[rgb]{0,0,1}{18}& \textcolor[rgb]{0,0,1}{0}& \textcolor[rgb]{0,0,1}{0}& \textcolor[rgb]{0,0,1}{-78}& \textcolor[rgb]{0,0,1}{0}\\ \textcolor[rgb]{0,0,1}{0}& \textcolor[rgb]{0,0,1}{0}& \textcolor[rgb]{0,0,1}{0}& \textcolor[rgb]{0,0,1}{0}& \textcolor[rgb]{0,0,1}{0}& \textcolor[rgb]{0,0,1}{22}\\ \textcolor[rgb]{0,0,1}{0}& \textcolor[rgb]{0,0,1}{0}& \textcolor[rgb]{0,0,1}{0}& \textcolor[rgb]{0,0,1}{0}& \textcolor[rgb]{0,0,1}{0}& \textcolor[rgb]{0,0,1}{0}\end{array}]
\mathrm{CompressedSparseForm}\left(m\right)
[\begin{array}{c}\textcolor[rgb]{0,0,1}{1}\\ \textcolor[rgb]{0,0,1}{2}\\ \textcolor[rgb]{0,0,1}{4}\\ \textcolor[rgb]{0,0,1}{6}\\ \textcolor[rgb]{0,0,1}{8}\\ \textcolor[rgb]{0,0,1}{10}\\ \textcolor[rgb]{0,0,1}{11}\end{array}]\textcolor[rgb]{0,0,1}{,}[\begin{array}{c}\textcolor[rgb]{0,0,1}{3}\\ \textcolor[rgb]{0,0,1}{1}\\ \textcolor[rgb]{0,0,1}{4}\\ \textcolor[rgb]{0,0,1}{2}\\ \textcolor[rgb]{0,0,1}{3}\\ \textcolor[rgb]{0,0,1}{2}\\ \textcolor[rgb]{0,0,1}{3}\\ \textcolor[rgb]{0,0,1}{3}\\ \textcolor[rgb]{0,0,1}{4}\\ \textcolor[rgb]{0,0,1}{5}\end{array}]\textcolor[rgb]{0,0,1}{,}[\begin{array}{c}\textcolor[rgb]{0,0,1}{-46}\\ \textcolor[rgb]{0,0,1}{-81}\\ \textcolor[rgb]{0,0,1}{18}\\ \textcolor[rgb]{0,0,1}{-55}\\ \textcolor[rgb]{0,0,1}{-17}\\ \textcolor[rgb]{0,0,1}{-15}\\ \textcolor[rgb]{0,0,1}{99}\\ \textcolor[rgb]{0,0,1}{-61}\\ \textcolor[rgb]{0,0,1}{-78}\\ \textcolor[rgb]{0,0,1}{22}\end{array}]
\mathrm{CompressedSparseForm}\left(m,'\mathrm{form}=\mathrm{row}'\right)
[\begin{array}{c}\textcolor[rgb]{0,0,1}{1}\\ \textcolor[rgb]{0,0,1}{2}\\ \textcolor[rgb]{0,0,1}{4}\\ \textcolor[rgb]{0,0,1}{8}\\ \textcolor[rgb]{0,0,1}{10}\\ \textcolor[rgb]{0,0,1}{11}\\ \textcolor[rgb]{0,0,1}{11}\end{array}]\textcolor[rgb]{0,0,1}{,}[\begin{array}{c}\textcolor[rgb]{0,0,1}{2}\\ \textcolor[rgb]{0,0,1}{3}\\ \textcolor[rgb]{0,0,1}{4}\\ \textcolor[rgb]{0,0,1}{1}\\ \textcolor[rgb]{0,0,1}{3}\\ \textcolor[rgb]{0,0,1}{4}\\ \textcolor[rgb]{0,0,1}{5}\\ \textcolor[rgb]{0,0,1}{2}\\ \textcolor[rgb]{0,0,1}{5}\\ \textcolor[rgb]{0,0,1}{6}\end{array}]\textcolor[rgb]{0,0,1}{,}[\begin{array}{c}\textcolor[rgb]{0,0,1}{-81}\\ \textcolor[rgb]{0,0,1}{-55}\\ \textcolor[rgb]{0,0,1}{-15}\\ \textcolor[rgb]{0,0,1}{-46}\\ \textcolor[rgb]{0,0,1}{-17}\\ \textcolor[rgb]{0,0,1}{99}\\ \textcolor[rgb]{0,0,1}{-61}\\ \textcolor[rgb]{0,0,1}{18}\\ \textcolor[rgb]{0,0,1}{-78}\\ \textcolor[rgb]{0,0,1}{22}\end{array}]
\mathrm{CompressedSparseForm}\left(m,'\mathrm{cbbase}'=0\right)
[\begin{array}{c}\textcolor[rgb]{0,0,1}{0}\\ \textcolor[rgb]{0,0,1}{1}\\ \textcolor[rgb]{0,0,1}{3}\\ \textcolor[rgb]{0,0,1}{5}\\ \textcolor[rgb]{0,0,1}{7}\\ \textcolor[rgb]{0,0,1}{9}\\ \textcolor[rgb]{0,0,1}{10}\end{array}]\textcolor[rgb]{0,0,1}{,}[\begin{array}{c}\textcolor[rgb]{0,0,1}{3}\\ \textcolor[rgb]{0,0,1}{1}\\ \textcolor[rgb]{0,0,1}{4}\\ \textcolor[rgb]{0,0,1}{2}\\ \textcolor[rgb]{0,0,1}{3}\\ \textcolor[rgb]{0,0,1}{2}\\ \textcolor[rgb]{0,0,1}{3}\\ \textcolor[rgb]{0,0,1}{3}\\ \textcolor[rgb]{0,0,1}{4}\\ \textcolor[rgb]{0,0,1}{5}\end{array}]\textcolor[rgb]{0,0,1}{,}[\begin{array}{c}\textcolor[rgb]{0,0,1}{-46}\\ \textcolor[rgb]{0,0,1}{-81}\\ \textcolor[rgb]{0,0,1}{18}\\ \textcolor[rgb]{0,0,1}{-55}\\ \textcolor[rgb]{0,0,1}{-17}\\ \textcolor[rgb]{0,0,1}{-15}\\ \textcolor[rgb]{0,0,1}{99}\\ \textcolor[rgb]{0,0,1}{-61}\\ \textcolor[rgb]{0,0,1}{-78}\\ \textcolor[rgb]{0,0,1}{22}\end{array}]
\mathrm{CompressedSparseForm}\left(m,'\mathrm{rbase}'=0\right)
[\begin{array}{c}\textcolor[rgb]{0,0,1}{1}\\ \textcolor[rgb]{0,0,1}{2}\\ \textcolor[rgb]{0,0,1}{4}\\ \textcolor[rgb]{0,0,1}{6}\\ \textcolor[rgb]{0,0,1}{8}\\ \textcolor[rgb]{0,0,1}{10}\\ \textcolor[rgb]{0,0,1}{11}\end{array}]\textcolor[rgb]{0,0,1}{,}[\begin{array}{c}\textcolor[rgb]{0,0,1}{2}\\ \textcolor[rgb]{0,0,1}{0}\\ \textcolor[rgb]{0,0,1}{3}\\ \textcolor[rgb]{0,0,1}{1}\\ \textcolor[rgb]{0,0,1}{2}\\ \textcolor[rgb]{0,0,1}{1}\\ \textcolor[rgb]{0,0,1}{2}\\ \textcolor[rgb]{0,0,1}{2}\\ \textcolor[rgb]{0,0,1}{3}\\ \textcolor[rgb]{0,0,1}{4}\end{array}]\textcolor[rgb]{0,0,1}{,}[\begin{array}{c}\textcolor[rgb]{0,0,1}{-46}\\ \textcolor[rgb]{0,0,1}{-81}\\ \textcolor[rgb]{0,0,1}{18}\\ \textcolor[rgb]{0,0,1}{-55}\\ \textcolor[rgb]{0,0,1}{-17}\\ \textcolor[rgb]{0,0,1}{-15}\\ \textcolor[rgb]{0,0,1}{99}\\ \textcolor[rgb]{0,0,1}{-61}\\ \textcolor[rgb]{0,0,1}{-78}\\ \textcolor[rgb]{0,0,1}{22}\end{array}]
\mathrm{m2}≔\mathrm{Matrix}\left([[0,1,0],[2,0,0],[3,0,4],[5,6,7]],'\mathrm{datatype}=\mathrm{float}'\right)
\textcolor[rgb]{0,0,1}{\mathrm{m2}}\textcolor[rgb]{0,0,1}{≔}[\begin{array}{ccc}\textcolor[rgb]{0,0,1}{0.}& \textcolor[rgb]{0,0,1}{1.}& \textcolor[rgb]{0,0,1}{0.}\\ \textcolor[rgb]{0,0,1}{2.}& \textcolor[rgb]{0,0,1}{0.}& \textcolor[rgb]{0,0,1}{0.}\\ \textcolor[rgb]{0,0,1}{3.}& \textcolor[rgb]{0,0,1}{0.}& \textcolor[rgb]{0,0,1}{4.}\\ \textcolor[rgb]{0,0,1}{5.}& \textcolor[rgb]{0,0,1}{6.}& \textcolor[rgb]{0,0,1}{7.}\end{array}]
\mathrm{cb},r,x≔\mathrm{CompressedSparseForm}\left(\mathrm{m2}\right)
\textcolor[rgb]{0,0,1}{\mathrm{cb}}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{r}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{x}\textcolor[rgb]{0,0,1}{≔}[\begin{array}{c}\textcolor[rgb]{0,0,1}{1}\\ \textcolor[rgb]{0,0,1}{4}\\ \textcolor[rgb]{0,0,1}{6}\\ \textcolor[rgb]{0,0,1}{8}\end{array}]\textcolor[rgb]{0,0,1}{,}[\begin{array}{c}\textcolor[rgb]{0,0,1}{2}\\ \textcolor[rgb]{0,0,1}{3}\\ \textcolor[rgb]{0,0,1}{4}\\ \textcolor[rgb]{0,0,1}{1}\\ \textcolor[rgb]{0,0,1}{4}\\ \textcolor[rgb]{0,0,1}{3}\\ \textcolor[rgb]{0,0,1}{4}\end{array}]\textcolor[rgb]{0,0,1}{,}[\begin{array}{c}\textcolor[rgb]{0,0,1}{2.}\\ \textcolor[rgb]{0,0,1}{3.}\\ \textcolor[rgb]{0,0,1}{5.}\\ \textcolor[rgb]{0,0,1}{1.}\\ \textcolor[rgb]{0,0,1}{6.}\\ \textcolor[rgb]{0,0,1}{4.}\\ \textcolor[rgb]{0,0,1}{7.}\end{array}]
The row indices of the entries in a given column can be selected from r using the corresponding range of cb. For example, for column 2:
\mathrm{column}≔2
\textcolor[rgb]{0,0,1}{\mathrm{column}}\textcolor[rgb]{0,0,1}{≔}\textcolor[rgb]{0,0,1}{2}
r[\mathrm{cb}[\mathrm{column}]..\mathrm{cb}[\mathrm{column}+1]-1]
[\begin{array}{c}\textcolor[rgb]{0,0,1}{1}\\ \textcolor[rgb]{0,0,1}{4}\end{array}]
and the corresponding values can be selected from x in the same way:
x[\mathrm{cb}[\mathrm{column}]..\mathrm{cb}[\mathrm{column}+1]-1]
[\begin{array}{c}\textcolor[rgb]{0,0,1}{1.}\\ \textcolor[rgb]{0,0,1}{6.}\end{array}]
\mathrm{m3}≔\mathrm{Matrix}\left(3,3,\left(i,j\right)↦i-j\right)
\textcolor[rgb]{0,0,1}{\mathrm{m3}}\textcolor[rgb]{0,0,1}{≔}[\begin{array}{ccc}\textcolor[rgb]{0,0,1}{0}& \textcolor[rgb]{0,0,1}{-1}& \textcolor[rgb]{0,0,1}{-2}\\ \textcolor[rgb]{0,0,1}{1}& \textcolor[rgb]{0,0,1}{0}& \textcolor[rgb]{0,0,1}{-1}\\ \textcolor[rgb]{0,0,1}{2}& \textcolor[rgb]{0,0,1}{1}& \textcolor[rgb]{0,0,1}{0}\end{array}]
\mathrm{CompressedSparseForm}\left(\mathrm{m3}\right)
No compressed form is produced here, because m3 was created without a hardware datatype; converting it, for example to datatype = integer[4], makes it acceptable:
\mathrm{m3}≔\mathrm{Matrix}\left(\mathrm{m3},\mathrm{datatype}=\mathrm{integer}[4]\right)
\textcolor[rgb]{0,0,1}{\mathrm{m3}}\textcolor[rgb]{0,0,1}{≔}[\begin{array}{ccc}\textcolor[rgb]{0,0,1}{0}& \textcolor[rgb]{0,0,1}{-1}& \textcolor[rgb]{0,0,1}{-2}\\ \textcolor[rgb]{0,0,1}{1}& \textcolor[rgb]{0,0,1}{0}& \textcolor[rgb]{0,0,1}{-1}\\ \textcolor[rgb]{0,0,1}{2}& \textcolor[rgb]{0,0,1}{1}& \textcolor[rgb]{0,0,1}{0}\end{array}]
\mathrm{cb},r,x≔\mathrm{CompressedSparseForm}\left(\mathrm{m3}\right)
\textcolor[rgb]{0,0,1}{\mathrm{cb}}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{r}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{x}\textcolor[rgb]{0,0,1}{≔}[\begin{array}{c}\textcolor[rgb]{0,0,1}{1}\\ \textcolor[rgb]{0,0,1}{3}\\ \textcolor[rgb]{0,0,1}{5}\\ \textcolor[rgb]{0,0,1}{7}\end{array}]\textcolor[rgb]{0,0,1}{,}[\begin{array}{c}\textcolor[rgb]{0,0,1}{2}\\ \textcolor[rgb]{0,0,1}{3}\\ \textcolor[rgb]{0,0,1}{1}\\ \textcolor[rgb]{0,0,1}{3}\\ \textcolor[rgb]{0,0,1}{1}\\ \textcolor[rgb]{0,0,1}{2}\end{array}]\textcolor[rgb]{0,0,1}{,}[\begin{array}{c}\textcolor[rgb]{0,0,1}{1}\\ \textcolor[rgb]{0,0,1}{2}\\ \textcolor[rgb]{0,0,1}{-1}\\ \textcolor[rgb]{0,0,1}{1}\\ \textcolor[rgb]{0,0,1}{-2}\\ \textcolor[rgb]{0,0,1}{-1}\end{array}]
\mathrm{m4}≔\mathrm{Matrix}\left(6,\mathrm{datatype}=\mathrm{integer}[4],\mathrm{shape}=\mathrm{antisymmetric},\mathrm{storage}=\mathrm{sparse}[\mathrm{lower}],\left(i,j\right)↦\mathrm{`if`}\left(\mathrm{irem}\left(i+j,2\right)=1,i-j,0\right)\right)
\textcolor[rgb]{0,0,1}{\mathrm{m4}}\textcolor[rgb]{0,0,1}{≔}[\begin{array}{cccccc}\textcolor[rgb]{0,0,1}{0}& \textcolor[rgb]{0,0,1}{-1}& \textcolor[rgb]{0,0,1}{0}& \textcolor[rgb]{0,0,1}{-3}& \textcolor[rgb]{0,0,1}{0}& \textcolor[rgb]{0,0,1}{-5}\\ \textcolor[rgb]{0,0,1}{1}& \textcolor[rgb]{0,0,1}{0}& \textcolor[rgb]{0,0,1}{-1}& \textcolor[rgb]{0,0,1}{0}& \textcolor[rgb]{0,0,1}{-3}& \textcolor[rgb]{0,0,1}{0}\\ \textcolor[rgb]{0,0,1}{0}& \textcolor[rgb]{0,0,1}{1}& \textcolor[rgb]{0,0,1}{0}& \textcolor[rgb]{0,0,1}{-1}& \textcolor[rgb]{0,0,1}{0}& \textcolor[rgb]{0,0,1}{-3}\\ \textcolor[rgb]{0,0,1}{3}& \textcolor[rgb]{0,0,1}{0}& \textcolor[rgb]{0,0,1}{1}& \textcolor[rgb]{0,0,1}{0}& \textcolor[rgb]{0,0,1}{-1}& \textcolor[rgb]{0,0,1}{0}\\ \textcolor[rgb]{0,0,1}{0}& \textcolor[rgb]{0,0,1}{3}& \textcolor[rgb]{0,0,1}{0}& \textcolor[rgb]{0,0,1}{1}& \textcolor[rgb]{0,0,1}{0}& \textcolor[rgb]{0,0,1}{-1}\\ \textcolor[rgb]{0,0,1}{5}& \textcolor[rgb]{0,0,1}{0}& \textcolor[rgb]{0,0,1}{3}& \textcolor[rgb]{0,0,1}{0}& \textcolor[rgb]{0,0,1}{1}& \textcolor[rgb]{0,0,1}{0}\end{array}]
\mathrm{CompressedSparseForm}\left(\mathrm{m4}\right)
[\begin{array}{c}\textcolor[rgb]{0,0,1}{1}\\ \textcolor[rgb]{0,0,1}{4}\\ \textcolor[rgb]{0,0,1}{6}\\ \textcolor[rgb]{0,0,1}{8}\\ \textcolor[rgb]{0,0,1}{9}\\ \textcolor[rgb]{0,0,1}{10}\\ \textcolor[rgb]{0,0,1}{10}\end{array}]\textcolor[rgb]{0,0,1}{,}[\begin{array}{c}\textcolor[rgb]{0,0,1}{2}\\ \textcolor[rgb]{0,0,1}{4}\\ \textcolor[rgb]{0,0,1}{6}\\ \textcolor[rgb]{0,0,1}{3}\\ \textcolor[rgb]{0,0,1}{5}\\ \textcolor[rgb]{0,0,1}{4}\\ \textcolor[rgb]{0,0,1}{6}\\ \textcolor[rgb]{0,0,1}{5}\\ \textcolor[rgb]{0,0,1}{6}\end{array}]\textcolor[rgb]{0,0,1}{,}[\begin{array}{c}\textcolor[rgb]{0,0,1}{1}\\ \textcolor[rgb]{0,0,1}{3}\\ \textcolor[rgb]{0,0,1}{5}\\ \textcolor[rgb]{0,0,1}{1}\\ \textcolor[rgb]{0,0,1}{3}\\ \textcolor[rgb]{0,0,1}{1}\\ \textcolor[rgb]{0,0,1}{3}\\ \textcolor[rgb]{0,0,1}{1}\\ \textcolor[rgb]{0,0,1}{1}\end{array}] |
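For readers more familiar with Python, the CSC layout used by CompressedSparseForm can be sketched in a few lines. `csc_form` below is an illustrative stand-in for Maple's CB/R/X vectors, with a `base` argument playing the role of the cbbase option (unlike Maple, it rebases CB and R together); it is not Maple code.

```python
def csc_form(A, base=1):
    """Compute the compressed sparse column form (CB, R, X) of a dense
    matrix A (given as a list of rows), using `base`-based indexing."""
    m, n = len(A), len(A[0])
    CB, R, X = [base], [], []
    for j in range(n):            # walk columns
        for i in range(m):        # within a column, walk rows
            if A[i][j] != 0:
                R.append(i + base)   # row index of this nonzero
                X.append(A[i][j])    # its value
        CB.append(len(X) + base)     # position of column j+1's first entry
    return CB, R, X

# The m2 example above reproduces Maple's output:
CB, R, X = csc_form([[0, 1, 0], [2, 0, 0], [3, 0, 4], [5, 6, 7]])
# CB == [1, 4, 6, 8], R == [2, 3, 4, 1, 4, 3, 4], X == [2, 3, 5, 1, 6, 4, 7]
```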
Price European swaption using Linear Gaussian two-factor model - MATLAB swaptionbylg2f
Price a European Swaption Using a Linear Gaussian Two-Factor Model
Price European swaption using Linear Gaussian two-factor model
Price = swaptionbylg2f(ZeroCurve,a,b,sigma,eta,rho,Strike,ExerciseDate,Maturity)
Price = swaptionbylg2f(___,Name,Value)
Price = swaptionbylg2f(ZeroCurve,a,b,sigma,eta,rho,Strike,ExerciseDate,Maturity) returns the European swaption price for a two-factor additive Gaussian interest-rate model.
Price = swaptionbylg2f(___,Name,Value) adds optional name-value pair arguments.
Define the ZeroCurve, a, b, sigma, eta, and rho parameters to compute the price of the swaption.
Settle = datenum('15-Dec-2007');
irdc = IRDataCurve('Zero',Settle,daysadd(Settle,360*[1 2 3 4 5 7 10 20],1),[.01 .018 .024 .029 .033 .034 .035 .034]);  % example zero curve
a = .07; b = .5; sigma = .01; eta = .006; rho = -.7; Strike = [.05;.05]; Reset = 1;  % example LG2F parameters
ExerciseDate = daysadd(Settle,360*5,1);
Maturity = daysadd(ExerciseDate,360*[3;4],1);
Price = swaptionbylg2f(irdc,a,b,sigma,eta,rho,Strike,ExerciseDate,Maturity,'Reset',Reset)
ZeroCurve — Zero-curve for Linear Gaussian two-factor model
Zero-curve for the Linear Gaussian two-factor model, specified using IRDataCurve or RateSpec.
Mean reversion for first factor for the Linear Gaussian two-factor model, specified as a scalar.
Mean reversion for second factor for the Linear Gaussian two-factor model, specified as a scalar.
Volatility for first factor for the Linear Gaussian two-factor model, specified as a scalar.
Volatility for second factor for the Linear Gaussian two-factor model, specified as a scalar.
rho — Scalar correlation of the factors
Strike — Swaption strike price
Swaption strike price, specified as a nonnegative integer using a NumSwaptions-by-1 vector.
ExerciseDate — Swaption exercise dates
vector of serial date numbers | character vector of dates
Swaption exercise dates, specified as a NumSwaptions-by-1 vector of serial date numbers or date character vectors.
Maturity — Underlying swap maturity date
Underlying swap maturity date, specified using a NumSwaptions-by-1 vector of serial date numbers or date character vectors.
Example: Price = swaptionbylg2f(irdc,a,b,sigma,eta,rho,Strike,ExerciseDate,Maturity,'Reset',1,'Notional',100,'OptSpec','call')
Reset — Frequency of swaption payments per year
Frequency of swaption payments per year, specified as the comma-separated pair consisting of 'Reset' and positive integers for the values 1,2,4,6,12 in a NumSwaptions-by-1 vector.
Notional — Notional value of swaption
Notional value of swaption, specified as the comma-separated pair consisting of 'Notional' and a nonnegative integer using a NumSwaptions-by-1 vector of notional amounts.
OptSpec — Option specification for the swaption
'call' (default) | character vector with value of 'call' or 'put' | cell array of character vectors with values of 'call' or 'put'
Option specification for the swaption, specified as the comma-separated pair consisting of 'OptSpec' and a character vector or a NumSwaptions-by-1 cell array of character vectors with a value of 'call' or 'put'.
A 'call' swaption or Payer swaption allows the option buyer to enter into an interest-rate swap in which the buyer of the option pays the fixed rate and receives the floating rate.
A 'put' swaption or Receiver swaption allows the option buyer to enter into an interest-rate swap in which the buyer of the option receives the fixed rate and pays the floating rate.
Price — Swaption price
Swaption price, returned as a scalar or an NumSwaptions-by-1 vector.
The following defines the swaption price for a two-factor additive Gaussian interest-rate model, given the ZeroCurve, a, b, sigma, eta, and rho parameters:
r\left(t\right)=x\left(t\right)+y\left(t\right)+\varphi \left(t\right)
dx\left(t\right)=-ax\left(t\right)dt+\sigma d{W}_{1}\left(t\right),\text{ }x\left(0\right)=0
dy\left(t\right)=-by\left(t\right)dt+\eta d{W}_{2}\left(t\right),\text{ }y\left(0\right)=0
d{W}_{1}\left(t\right)d{W}_{2}\left(t\right)=\rho dt
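The two-factor dynamics above can be illustrated with a small Euler–Maruyama simulation. `simulate_g2_paths` is a hypothetical helper, not part of the toolbox: it simulates only the stochastic factors x(t) + y(t) and omits the deterministic shift φ(t), so it is a sketch of the model, not a pricer.

```python
import math
import random

def simulate_g2_paths(a, b, sigma, eta, rho, T, steps, npaths, seed=0):
    """Euler simulation of dx = -a x dt + sigma dW1, dy = -b y dt + eta dW2,
    with dW1 dW2 = rho dt.  Returns samples of x(T) + y(T)."""
    rng = random.Random(seed)
    dt = T / steps
    out = []
    for _ in range(npaths):
        x = y = 0.0
        for _ in range(steps):
            z1 = rng.gauss(0.0, 1.0)
            # correlate the second Brownian increment with the first
            z2 = rho * z1 + math.sqrt(1.0 - rho * rho) * rng.gauss(0.0, 1.0)
            x += -a * x * dt + sigma * math.sqrt(dt) * z1
            y += -b * y * dt + eta * math.sqrt(dt) * z2
        out.append(x + y)
    return out
```

Since both factors start at zero and are mean-reverting, the sample mean of x(T) + y(T) should be close to zero for any parameter choice.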
capbylg2f | floorbylg2f | LinearGaussian2F |
We investigate the small deviations, in the {L}_{p} norms, of stable processes defined by the convolution of a function f:\phantom{\rule{0.277778em}{0ex}}\right]0,+\infty \left[\phantom{\rule{0.277778em}{0ex}}\to ℝ with an S\alpha S Lévy process. We show that the small ball exponent is uniquely determined by the norm and by the behaviour of f near zero.
Aurzada, Frank; Simon, Thomas. Small ball probabilities for stable convolutions. ESAIM: Probability and Statistics, Tome 11 (2007), pp. 327-343. doi : 10.1051/ps:2007022. http://www.numdam.org/articles/10.1051/ps:2007022/
Potentiometer - Wikipedia
Type of resistor, usually with three terminals
(IEC Standard)
Cutaway drawing of potentiometer showing parts: (A) shaft, (B) stationary carbon composition resistance element, (C) phosphor bronze wiper, (D) shaft attached to wiper, (E, G) terminals connected to ends of resistance element, (F) terminal connected to wiper. A mechanical stop (H) prevents rotation past end points.
Single-turn potentiometer with metal casing removed to expose wiper contacts and resistive track
Many inexpensive potentiometers are constructed with a resistive element (B in cutaway drawing) formed into an arc of a circle, usually a little less than a full turn, and a wiper (C) that slides on this element when the shaft is rotated, making electrical contact. The resistive element can be flat or angled. Each end of the resistive element is connected to a terminal (E, G) on the case. The wiper is connected to a third terminal (F), usually between the other two. On panel potentiometers, the wiper is usually the center terminal of three. For single-turn potentiometers, this wiper typically travels just under one revolution around the contact. The only point of ingress for contamination is the narrow space between the shaft and the housing it rotates in.
Another type is the linear slider potentiometer, which has a wiper which slides along a linear element instead of rotating. Contamination can potentially enter anywhere along the slot the slider moves in, making effective sealing more difficult and compromising long-term reliability. An advantage of the slider potentiometer is that the slider position gives a visual indication of its setting. While the setting of a rotary potentiometer can be seen by the position of a marking on the knob, an array of sliders can give a visual impression of settings as in a graphic equalizer or faders on a mixing console.
PCB mount trimmer potentiometers, or "trimpots", intended for infrequent adjustment
Electronic symbol for pre-set potentiometer
Resistance–position relationship: "taper"
Size scaled 10k and 100k pots that combine traditional mountings and knob shafts with newer and smaller electrical assemblies. The "B" designates a linear (USA/Asian style) taper.
A letter code may be used to identify which taper is used, but the letter code definitions are not standardized. Potentiometers made in Asia and the USA are usually marked with an "A" for logarithmic taper or a "B" for linear taper; "C" for the rarely seen reverse logarithmic taper. Others, particularly those from Europe, may be marked with an "A" for linear taper, a "C" or "B" for logarithmic taper, or an "F" for reverse logarithmic taper.[2] The code used also varies between different manufacturers. When a percentage is referenced with a non-linear taper, it relates to the resistance value at the midpoint of the shaft rotation. A 10% log taper would therefore measure 10% of the total resistance at the midpoint of the rotation; i.e. 10% log taper on a 10 kOhm potentiometer would yield 1 kOhm at the midpoint. The higher the percentage, the steeper the log curve.[3]
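The midpoint convention above (a 10% log taper on a 10 kOhm pot measuring 1 kOhm at half rotation) can be modeled with a simple exponential curve. The particular curve shape below is an illustrative assumption, not a manufacturer's specification; real tapers are often piecewise approximations.

```python
def log_taper_fraction(x, mid):
    """Fraction of total resistance at wiper position x in [0, 1] for an
    idealized log taper, modeled as f(x) = (b**x - 1) / (b - 1) with the
    base b chosen so that f(0.5) == mid (the midpoint fraction)."""
    b = ((1.0 - mid) / mid) ** 2   # solves f(0.5) = mid exactly
    return (b ** x - 1.0) / (b - 1.0)

# 10% log taper on a 10 kOhm pot: 1 kOhm at the midpoint of rotation.
r_mid = log_taper_fraction(0.5, 0.10) * 10_000   # == 1000.0 ohms
```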
Linear taper potentiometer
A linear taper potentiometer (linear describes the electrical characteristic of the device, not the geometry of the resistive element) has a resistive element of constant cross-section, resulting in a device where the resistance between the contact (wiper) and one end terminal is proportional to the distance between them. Linear taper potentiometers[4] are used when the division ratio of the potentiometer must be proportional to the angle of shaft rotation (or slider position), for example, controls used for adjusting the centering of the display on an analog cathode-ray oscilloscope. Precision potentiometers have an accurate relationship between resistance and slider position.
Beckman Helipot precision potentiometer
Logarithmic potentiometer
A logarithmic taper potentiometer has a bias built into the resistive element, so the resistance at the wiper's center position is not one half of the total value. The element is designed to follow a roughly logarithmic profile (often approximated by an exponential or "squared" curve): it either tapers in width from one end to the other, or is made from a material whose resistivity varies along its length. The result is a device whose output voltage is a logarithmic function of the slider position.
Logarithmic taper potentiometers are often used for volume or signal level in audio systems, as human perception of audio volume is logarithmic, according to the Weber–Fechner law.
Contactless potentiometer
Unlike mechanical potentiometers, contactless potentiometers use an optical disk to trigger an infrared sensor, or a magnet to trigger a magnetic sensor (other sensing principles, such as capacitive, could in principle be used as well); an electronic circuit then processes the sensor signal into an analog or digital output.
The AS5600 integrated circuit is an example of a contactless (magnetic) potentiometer. Absolute rotary encoders use similar principles, but as industrial components their cost generally makes them impractical for domestic appliances.
Rheostat
The most common way to vary the resistance in a circuit continuously is to use a rheostat,[6] a two-terminal variable resistor used to adjust the magnitude of the current by changing the effective length of the resistance element. The word rheostat was coined about 1845 by Sir Charles Wheatstone, from the Greek ῥέος rheos meaning "stream" and -στάτης -states (from ἱστάναι histanai, "to set, to cause to stand") meaning "setter, regulating device".[7][8][9] The term "rheostat" is becoming obsolete,[10] with the general term "potentiometer" replacing it. For low-power applications (less than about 1 watt) a three-terminal potentiometer is often used, with one terminal unconnected or connected to the wiper.
Where the rheostat must be rated for higher power (more than about 1 watt), it may be built with a resistance wire wound around a semicircular insulator, with the wiper sliding from one turn of the wire to the next. Sometimes a rheostat is made from resistance wire wound on a heat-resisting cylinder, with the slider made from a number of metal fingers that grip lightly onto a small portion of the turns of resistance wire. The "fingers" can be moved along the coil of resistance wire by a sliding knob thus changing the "tapping" point. Wire-wound rheostats made with ratings up to several thousand watts are used in applications such as DC motor drives, electric welding controls, or in the controls for generators. The rating of the rheostat is given with the full resistance value and the allowable power dissipation is proportional to the fraction of the total device resistance in circuit. Carbon-pile rheostats are used as load banks for testing automobile batteries and power supplies.
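The proportional derating rule above can be illustrated with a short calculation (the 100 Ω, 50 W rheostat is a hypothetical example, not a specific part):

```python
# A wire-wound rheostat's allowable dissipation scales with the fraction
# of its total resistance that is actually in circuit.
R_TOTAL = 100.0   # ohms, full element (hypothetical part)
P_RATED = 50.0    # watts, rating with the whole element in circuit

def allowable_power(r_in_circuit):
    """Allowable dissipation (W) when only part of the element is in use."""
    return P_RATED * (r_in_circuit / R_TOTAL)

print(allowable_power(40.0))   # 40% of the element in circuit -> 20 W allowed
```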
Charles Wheatstone's 1843 rheostat with a metal and a wooden cylinder
Charles Wheatstone's 1843 rheostat with a moving whisker
Digital potentiometer
Membrane potentiometers
A membrane potentiometer uses a conductive membrane that is deformed by a sliding element to contact a resistor voltage divider. Linearity can range from 0.50% to 5% depending on the material, design and manufacturing process. The repeat accuracy is typically between 0.1 mm and 1.0 mm with a theoretically infinite resolution. The service life of these types of potentiometers is typically 1 million to 20 million cycles depending on the materials used during manufacturing and the actuation method; contact and contactless (magnetic) methods are available (to sense position). Many different material variations are available such as PET, FR4, and Kapton. Membrane potentiometer manufacturers offer linear, rotary, and application-specific variations. The linear versions can range from 9 mm to 1000 mm in length and the rotary versions range from 20 to 450 mm in diameter, with each having a height of 0.5 mm. Membrane potentiometers can be used for position sensing.[11]
For touch-screen devices using resistive technology, a two-dimensional membrane potentiometer provides x and y coordinates. The top layer is thin glass spaced close to a neighboring inner layer. The underside of the top layer has a transparent conductive coating; the surface of the layer beneath it has a transparent resistive coating. A finger or stylus deforms the glass to contact the underlying layer. Edges of the resistive layer have conductive contacts. Locating the contact point is done by applying a voltage to opposite edges, leaving the other two edges temporarily unconnected. The voltage of the top layer provides one coordinate. Disconnecting those two edges, and applying voltage to the other two, formerly unconnected, provides the other coordinate. Alternating rapidly between pairs of edges provides frequent position updates. An analog-to-digital converter provides output data.
Potentiometers are rarely used to directly control significant amounts of power (more than a watt or so). Instead they are used to adjust the level of analog signals (for example, volume controls on audio equipment) and as control inputs for electronic circuits. For example, a light dimmer uses a potentiometer to control the switching of a TRIAC, and so indirectly controls the brightness of lamps.
User-actuated potentiometers are widely used as user controls, and may control a very wide variety of equipment functions. The widespread use of potentiometers in consumer electronics declined in the 1990s, with rotary incremental encoders, up/down push-buttons, and other digital controls now more common. However, they remain in many applications, such as volume controls and position sensors.
Audio control
Slide potentiometers (faders)
Low-power potentiometers, both slide and rotary, are used to control audio equipment, changing loudness, frequency attenuation, and other characteristics of audio signals.
The 'log pot', a potentiometer whose resistance taper (or "law") follows a logarithmic curve, is used as the volume control in audio power amplifiers, where it is also called an "audio taper pot", because the amplitude response of the human ear is approximately logarithmic. It ensures that on a volume control marked 0 to 10, for example, a setting of 5 sounds subjectively half as loud as a setting of 10. There is also an anti-log pot, or reverse audio taper, which is simply the reverse of a logarithmic potentiometer. It is almost always used in a ganged configuration with a logarithmic potentiometer, for instance in an audio balance control.
In audio systems, the word linear is sometimes applied in a confusing way to describe slide potentiometers, because of the straight-line nature of the physical sliding motion. Applied to a potentiometer of either type, slide or rotary, linear describes a linear relationship between the pot's position and the measured value at the pot's tap (wiper or electrical output) pin.
In television receivers, potentiometers were formerly used to control picture brightness, contrast, and color response. A potentiometer was often used to adjust "vertical hold", which affected the synchronization between the receiver's internal sweep circuit (sometimes a multivibrator) and the received picture signal, along with other adjustments such as audio-video carrier offset and tuning frequency (for push-button sets).
Potentiometers can be used as position feedback devices in order to create closed-loop control, such as in a servomechanism. This method of motion control is the simplest method of measuring the angle or displacement.
With a load resistance R_L connected across the output, the output voltage is

V_{\mathrm{L}}=\frac{R_{2}R_{\mathrm{L}}}{R_{1}R_{\mathrm{L}}+R_{2}R_{\mathrm{L}}+R_{1}R_{2}}\cdot V_{s}.

If the load is negligible (R_L much larger than R_1 and R_2), this reduces to the unloaded divider

V_{\mathrm{L}}=\frac{R_{2}}{R_{1}+R_{2}}\cdot V_{s}.

As an example, assume V_{\mathrm{S}}=10\ \mathrm{V}, R_{1}=1\ \mathrm{k\Omega}, R_{2}=2\ \mathrm{k\Omega}, and R_{\mathrm{L}}=100\ \mathrm{k\Omega}. Ignoring the load, the output voltage would be

\frac{2\ \mathrm{k\Omega}}{1\ \mathrm{k\Omega}+2\ \mathrm{k\Omega}}\cdot 10\ \mathrm{V}=\frac{2}{3}\cdot 10\ \mathrm{V}\approx 6.667\ \mathrm{V}.

Because of the load resistance, however, it will actually be slightly lower: ≈ 6.623 V.
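The loaded figure can be checked numerically; a small Python sketch using the component values from the example:

```python
# Loaded voltage divider: check the ~6.623 V figure from the example.
Vs, R1, R2, RL = 10.0, 1e3, 2e3, 100e3

# Unloaded divider
V_unloaded = Vs * R2 / (R1 + R2)          # ~6.667 V

# With the load connected, R2 appears in parallel with RL
R2_RL = R2 * RL / (R2 + RL)
V_loaded = Vs * R2_RL / (R1 + R2_RL)      # ~6.623 V

print(round(V_unloaded, 3), round(V_loaded, 3))
```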
^ The Authoritative Dictionary of IEEE Standards Terms (IEEE 100) (seventh ed.). Piscataway, New Jersey: IEEE Press. 2000. ISBN 0-7381-2601-2.
^ "Resistor Guide". Retrieved 3 January 2018.
^ Elliot, Rod. "Beginners' Guide to Potentiometers". Elliott Sound Products. Retrieved 7 June 2012.
^ Peterson, Phillip. "Linear Type Precision Potentiometer Diagram" (PDF). Precision Sensors. Betatronix. Retrieved 29 April 2015.
^ "Potentiometer taper". the Resistor Guide. Retrieved 19 November 2012.
^ Jhakki, Akki (2020). Concise Physics Class IX (ICSE). New Delhi: Selina Publishers Pvt. Ltd. p. 189. ISBN 9789388594387.
^ Brian Bowers (ed.), Sir Charles Wheatstone FRS: 1802-1875, IET, 2001 ISBN 0-85296-103-0 pp.104-105
^ "stat". Oxford English Dictionary (Online ed.). Oxford University Press. (Subscription or participating institution membership required.)
^ ῥέος, ἱστάναι. Liddell, Henry George; Scott, Robert; A Greek–English Lexicon at the Perseus Project.
^ Dolan, Alexander. "Potentiometer History and Terminology". Sensor Journal. Journal of Sensor History. Retrieved 29 April 2015.
^ Membrane Potentiometer White Paper
(t + 2)^{2/3} + 3(t + 2)^{1/3} - 10 = 0
Given equation is:
{\left(t+2\right)}^{2/3}+3{\left(t+2\right)}^{1/3}-10=0\dots \left(1\right)
Substitute
{\left(t+2\right)}^{1/3}=x
so that {\left(t+2\right)}^{2/3}={x}^{2}, and equation (1) becomes
{x}^{2}+3x-10=0\dots \left(2\right)
Solving the quadratic equation (2):
{x}^{2}+3x-10=0
{x}^{2}+5x-2x-10=0
x\left(x+5\right)-2\left(x+5\right)=0
\left(x+5\right)\left(x-2\right)=0
so x=2 or x=-5.
Putting back the value of x:
{\left(t+2\right)}^{1/3}=2
Cubing both sides:
t+2={2}^{3}=8
t=8-2=6
{\left(t+2\right)}^{1/3}=-5
Cubing both sides:
t+2={\left(-5\right)}^{3}=-125
t=-125-2=-127
Thus the values of t are 6 and -127.
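Both roots can be checked numerically (using a real-valued cube root so the negative case works):

```python
# Verify the two roots of (t+2)^(2/3) + 3*(t+2)^(1/3) - 10 = 0.
def cbrt(v):
    """Real cube root, valid for negative arguments too."""
    return v ** (1 / 3) if v >= 0 else -((-v) ** (1 / 3))

def lhs(t):
    u = cbrt(t + 2)            # u = (t+2)^(1/3)
    return u**2 + 3*u - 10     # (t+2)^(2/3) = u^2

for t in (6, -127):
    print(t, lhs(t))           # both values are ~0 up to rounding
```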
Let x and y be two positive real numbers, such that:
x+y+xy=3
x+y\ge 2
x\left(5x+2\right)=3
You are given a quadratic function
f\left(x\right)=a{x}^{2}+bx+c
and a linear function g(x).
The two functions intersect at
x=0
and also at an x with
g\left(x\right)=f\left(x\right)=0
x<0
Which of the two could, for some values of a,b,c, be an expression for g(x):
g\left(x\right)=bx+c
g\left(x\right)=ax+c
How do I factor this polynomial:
2{x}^{2}-5xy-{y}^{2}
Prove quadratic equation
\left(y=a{x}^{2}+bx+c\right)
has only one line of symmetry
Quadratics by factoring
2z\left(5z-2\right)=-5z+2
I'm learning how to convert quadratic equations from general form to standard form, in order to make them easier to graph. We know the general form is
a{x}^{2}+bx+c
, and the standard form is
a{\left(x-h\right)}^{2}+k
. To help with the conversion, we can expand the standard form, and see that it turns into the general form. I totally get how to go from standard to general. I can easily memorize what h and k are, and use them to consistently derive standard forms.
What I'm curious about is how to, a priori, go from the general form to the standard form? Is there a way to see that
a{x}^{2}+bx+c
can turn into
a{\left(x-h\right)}^{2}+k
without knowing that form ahead of time? How was the form
a{\left(x-h\right)}^{2}+k
discovered in the first place? How are alternate forms of equations discovered in general? I honestly wouldn't know where to begin.
I ask out of curiosity, and because I believe knowing how to go in the other direction will help really solidify this concept for me. Even if that knowledge is above my skillset at the moment, at least an overview of what kind of math is involved may supplement this concept for me. |
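One standard way to discover the standard form a priori is completing the square: ask which constant must be added inside the parentheses so that the quadratic and linear terms form a perfect square. A short sketch:

```latex
ax^{2}+bx+c
  = a\left(x^{2}+\frac{b}{a}x\right)+c
  = a\left(x^{2}+\frac{b}{a}x+\frac{b^{2}}{4a^{2}}\right)+c-\frac{b^{2}}{4a}
  = a\left(x+\frac{b}{2a}\right)^{2}+\left(c-\frac{b^{2}}{4a}\right)
```

Matching this against a(x - h)^2 + k gives h = -b/(2a) and k = c - b^2/(4a). The same computation, set equal to zero, is how the quadratic formula is derived, so one route to "discovering" the form is simply asking which horizontal shift of x^2 removes the linear term.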
Transform IIR lowpass filter to complex bandpass filter - MATLAB iirlp2bpc
[num,den,allpassNum,allpassDen] = iirlp2bpc(b,a,wo,wt)
[num,den,allpassNum,allpassDen] = iirlp2bpc(b,a,wo,wt) transforms an IIR lowpass filter to a complex bandpass filter.
The function transforms a real lowpass prototype filter, specified as the numerator and denominator coefficients b and a respectively, to a complex bandpass filter by applying a first-order real lowpass to complex bandpass frequency transformation.
The function returns the numerator and denominator coefficients of the transformed complex bandpass filter. The function also returns the numerator and denominator coefficients of the allpass mapping filter, allpassNum and allpassDen respectively.
For more details on the transformation, see IIR Lowpass to Complex Bandpass Transformation.
Transform a lowpass IIR filter to a complex bandpass filter using the iirlp2bpc function.
Transform Filter Using iirlp2bpc
Transform the prototype lowpass filter into a complex bandpass filter by placing the cutoff frequencies of the prototype filter at 0.25π and 0.75π.
[num,den] = iirlp2bpc(b,a,0.5,[0.25 0.75]);
[num2,den2] = iirlp2bpc(ss(:,1:3),ss(:,4:6),0.5,[0.25 0.75]);
H\left(z\right)=\frac{B\left(z\right)}{A\left(z\right)}=\frac{{b}_{0}+{b}_{1}{z}^{-1}+\cdots +{b}_{n}{z}^{-n}}{{a}_{0}+{a}_{1}{z}^{-1}+\cdots +{a}_{n}{z}^{-n}},
b=\left[\begin{array}{ccccc}{b}_{01}& {b}_{11}& {b}_{21}& \cdots & {b}_{Q1}\\ {b}_{02}& {b}_{12}& {b}_{22}& \cdots & {b}_{Q2}\\ ⋮& ⋮& ⋮& \ddots & ⋮\\ {b}_{0P}& {b}_{1P}& {b}_{2P}& \cdots & {b}_{QP}\end{array}\right]
H\left(z\right)=\prod _{k=1}^{P}{H}_{k}\left(z\right)=\prod _{k=1}^{P}\frac{{b}_{0k}+{b}_{1k}{z}^{-1}+{b}_{2k}{z}^{-2}+\cdots +{b}_{Qk}{z}^{-Q}}{{a}_{0k}+{a}_{1k}{z}^{-1}+{a}_{2k}{z}^{-2}+\cdots +{a}_{Qk}{z}^{-Q}},
a=\left[\begin{array}{ccccc}{a}_{01}& {a}_{11}& {a}_{21}& \cdots & {a}_{Q1}\\ {a}_{02}& {a}_{12}& {a}_{22}& \cdots & {a}_{Q2}\\ ⋮& ⋮& ⋮& \ddots & ⋮\\ {a}_{0P}& {a}_{1P}& {a}_{2P}& \cdots & {a}_{QP}\end{array}\right]
Frequency value to transform from the prototype filter, specified as a real scalar. Frequency wo should be normalized to be between 0 and 1, with 1 corresponding to half the sample rate.
num — Numerator coefficients of transformed complex bandpass filter
Numerator coefficients of the transformed complex bandpass filter, returned as one of the following:
den — Denominator coefficients of transformed complex bandpass filter
Denominator coefficients of the transformed complex bandpass filter, returned as one of the following:
IIR lowpass to complex bandpass transformation effectively places one feature of the original filter, located at frequency −wo, at the required target frequency location, wt1, and the second feature, originally at wo, at the new location, wt2. It is assumed that wt2 is greater than wt1.
Lowpass to bandpass transformation can also be used to transform other types of filters, for example real notch filters or resonators can be doubled and positioned at two distinct desired frequencies at any place around the unit circle, forming a pair of complex notches or resonators. You can use this transformation to design bandpass filters for radio receivers from the high-quality prototype lowpass filter.
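MATLAB's iirlp2bpc performs a first-order allpass frequency mapping; the underlying idea of turning a real lowpass prototype into a one-sided complex bandpass can also be illustrated with a simpler, different technique: coefficient rotation (complex modulation), which shifts the whole response so the passband lands at a target center frequency. The sketch below is Python/SciPy, not the function's actual algorithm, and the prototype filter and target frequency are arbitrary choices:

```python
import numpy as np
from scipy.signal import butter, freqz

# Real lowpass prototype (3rd order, cutoff at 0.5*pi rad/sample).
b, a = butter(3, 0.5)

# Complex modulation: substituting z -> z*exp(-1j*w0) multiplies the k-th
# coefficient by exp(1j*w0*k), so the response at frequency w equals the
# prototype's response at w - w0: a one-sided complex bandpass at w0.
w0 = 0.5 * np.pi
k = np.arange(len(b))
bc, ac = b * np.exp(1j * w0 * k), a * np.exp(1j * w0 * k)

w, h = freqz(bc, ac, worN=2048, whole=True)
print(abs(h[np.argmin(abs(w - w0))]))            # near 1: passband moved to w0
print(abs(h[np.argmin(abs(w - 1.5 * np.pi))]))   # near 0: no mirror passband
```

Unlike the allpass mapping, rotation shifts the response rigidly and cannot independently place two band edges, which is why the toolbox uses the more general transformation.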
iirftransf | allpasslp2bpc | zpklp2bpc |
EuDML | The limit theorem for mappings with bounded distortion on the Heisenberg group and the local homeomorphism theorem.
The limit theorem for mappings with bounded distortion on the Heisenberg group and the local homeomorphism theorem.
Dairbekov, N. S.. "The limit theorem for mappings with bounded distortion on the Heisenberg group and the local homeomorphism theorem.." Sibirskij Matematicheskij Zhurnal 41.2 (2000): 316-328 (2000); translation in Sib. Math. J. 41. <http://eudml.org/doc/51742>.
@article{Dairbekov2000,
author = {Dairbekov, N. S.},
keywords = {Heisenberg group; mapping with bounded distortion},
title = {The limit theorem for mappings with bounded distortion on the Heisenberg group and the local homeomorphism theorem.},
AU - Dairbekov, N. S.
TI - The limit theorem for mappings with bounded distortion on the Heisenberg group and the local homeomorphism theorem.
KW - Heisenberg group; mapping with bounded distortion
Heisenberg group, mapping with bounded distortion
Real-time state update by state-space model Kalman filtering - MATLAB update
Real-time state update by state-space model Kalman filtering
[nextState,NextStateCov] = update(Mdl,Y)
[nextState,NextStateCov] = update(Mdl,Y,currentState,CurrentStateCov)
[nextState,NextStateCov] = update(___,Name,Value)
[nextState,NextStateCov,logL] = update(___)
update efficiently updates the state distribution in real time by applying one recursion of the Kalman filter to compute state-distribution moments for the final period of the specified response data.
To compute state-distribution moments by recursive application of the Kalman filter for each period in the specified response data, use filter instead.
[nextState,NextStateCov] = update(Mdl,Y) returns the updated state-distribution moments at the final time T, conditioned on the current state distribution, by applying one recursion of the Kalman filter to the fully specified, standard state-space model Mdl given T observed responses Y. nextState and NextStateCov are the mean and covariance, respectively, of the updated state distribution.
[nextState,NextStateCov] = update(Mdl,Y,currentState,CurrentStateCov) initializes the Kalman filter at the current state distribution with mean currentState and covariance matrix CurrentStateCov.
[nextState,NextStateCov] = update(___,Name,Value) uses additional options specified by one or more name-value arguments, and uses any of the input-argument combinations in the previous syntaxes. For example, update(Mdl,Y,Params=params,SquareRoot=true) sets unknown parameters in the partially specified model Mdl to the values in params, and specifies use of the square-root Kalman filter variant for numerical stability.
[nextState,NextStateCov,logL] = update(___) also returns the loglikelihoods computed for each observation in Y.
{x}_{t}=0.5{x}_{t-1}+{u}_{t},

where {u}_{t} is a Gaussian disturbance series with mean 0 and variance 1, and {x}_{t} is the latent state series.
ARMdl = arima(AR=0.5,Constant=0,Variance=1);
x = simulate(ARMdl,T,Y0=x0);
{y}_{t}={x}_{t}+{\epsilon }_{t},

where {\epsilon }_{t} is a Gaussian measurement-error series.
Filter the observations through the state-space model, in real time, to obtain the state distribution for period 100.
[rtfX100,rtfXVar100] = update(Mdl,y)
rtfX100 = 1.2073
rtfXVar100 = 0.3714
update applies the Kalman filter to all observations in y, and returns the state estimate of only period 100.
Compare the result to the results of filter.
[fX,~,output] = filter(Mdl,y);
size(fX)
fX100 = fX(100)
fX100 = 1.2073
fXVar100 = output(end).FilteredStatesCov
fXVar100 = 0.3714
discrepancyMeans = fX100 - rtfX100;
discrepancyVars = fXVar100 - rtfXVar100;
areMeansEqual = norm(discrepancyMeans) < tol
areVarsEqual = norm(discrepancyVars) < tol
Like update, the filter function filters the observations through the model, but it returns all intermediate state estimates. Because update returns only the final state estimate, it is more suited to real-time calculations than filter.
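The update/filter relationship can be mimicked with a scalar stand-in model in Python (hypothetical parameters, not the MATLAB implementation): a pass that keeps every filtered estimate plays the role of filter, and a pass that carries only the final moments plays the role of update.

```python
import numpy as np

# Scalar state-space model: x_t = 0.5 x_{t-1} + u_t,  y_t = x_t + e_t,
# with unit-variance Gaussian noises (a stand-in for the example above).
A, Q, R = 0.5, 1.0, 1.0

def kstep(y, m, p):
    """One Kalman recursion on the state mean m and variance p."""
    m_pred, p_pred = A * m, A * A * p + Q     # forecast moments
    k = p_pred / (p_pred + R)                 # Kalman gain
    return m_pred + k * (y - m_pred), (1 - k) * p_pred

def filter_all(ys, m=0.0, p=4/3):            # like filter: keep every estimate
    out = []
    for y in ys:
        m, p = kstep(y, m, p)
        out.append((m, p))
    return out

def update_final(ys, m=0.0, p=4/3):          # like update: only final moments
    for y in ys:
        m, p = kstep(y, m, p)
    return m, p

rng = np.random.default_rng(0)
ys = rng.normal(size=100)
print(np.allclose(update_final(ys), filter_all(ys)[-1]))  # True
```

The initial variance 4/3 is the stationary variance of the AR(1) state, 1/(1 - 0.5^2).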
Consider the simulated data and state-space model in Compute Only Final State Distribution from Kalman Filter.
Suppose observations are available sequentially, and consider obtaining the updated state distribution by filtering each new observation as it is available.
Simulate the following procedure using a loop.
Create variables that store the initial state distribution moments.
Filter the incoming observation through the model specifying the current initial state distribution moments.
Overwrite the current state distribution moments with the new state distribution moments.
Repeat steps 2 and 3 as new observations are available.
currentState = Mdl.Mean0;
currentStateCov = Mdl.Cov0;
newState = zeros(T,1);
newStateCov = zeros(T,1);
[newState(j),newStateCov(j)] = update(Mdl,y(j),currentState,currentStateCov);
currentState = newState(j);
currentStateCov = newStateCov(j);
Plot the observations, true state values, and new state means of each period.
plot(1:T,x,'-k',1:T,y,'*g',1:T,newState,':r','LineWidth',2)
xlabel("Period")
legend(["True state values" "Observations" "New state values"])
Compare the results to the results of filter.
discrepancyMeans = fX - newState;
discrepancyVars = [output.FilteredStatesCov]' - newStateCov;
The real-time filter update, applied to the entire data set sequentially, returns the same state distributions as filter.
Consider that the linear relationship between the change in the unemployment rate and the nominal gross national product (nGNP) growth rate is of interest. Suppose the innovations of a mismeasured regression of the first difference of the unemployment rate onto the nGNP growth rate is an ARMA(1,1) series with Gaussian disturbances (that is, a regression model with ARMA(1,1) errors and measurement error). Symbolically, and in state-space form, the model is
\begin{array}{l}\left[\begin{array}{c}{x}_{1,t}\\ {x}_{2,t}\end{array}\right]=\left[\begin{array}{cc}\varphi & \theta \\ 0& 0\end{array}\right]\left[\begin{array}{c}{x}_{1,t-1}\\ {x}_{2,t-1}\end{array}\right]+\left[\begin{array}{c}1\\ 1\end{array}\right]{u}_{t}\\ {y}_{t}-\beta {Z}_{t}={x}_{1,t}+\sigma {\epsilon }_{t},\end{array}
where:
{x}_{1,t} is the ARMA error series in the regression model.
{x}_{2,t} is a dummy state equal to {u}_{t}, which carries the MA(1) term.
{y}_{t} is the observed series (the first difference of the unemployment rate) and {Z}_{t} is the row vector of predictors (an intercept and the nGNP growth rate).
{u}_{t} is a Gaussian series of disturbances having mean 0 and standard deviation 1.
{\epsilon }_{t} is a Gaussian series of measurement errors with scale \sigma.
Load the Nelson-Plosser data set, which contains the unemployment rate and nGNP series, among other measurements.
Remove the leading missing observations.
Convert the nGNP series to a return series by using price2ret.
Apply the first difference to the unemployment rate series.
vars = ["GNPN" "UR"];
DT = rmmissing(DataTable(:,vars));
T = size(DT,1) - 1; % Sample size after differencing
Z = [ones(T,1) price2ret(DT.GNPN)];
y = diff(DT.UR);
Though this example removes missing values, the Kalman filter accommodates series containing missing values.
Fit the model to all observations except for the final 10 observations (a holdout sample). Use a random set of initial parameter values for optimization. Specify the regression component and its initial value for optimization using the 'Predictors' and 'Beta0' name-value arguments, respectively. Restrict the estimate of \sigma to positive real numbers.
[EstMdl,estParams] = estimate(Mdl,y(1:T-fh),params0,'Predictors',Z(1:T-fh,:), ...
EstMdl is an ssm model.
Nowcast the unemployment rate into the forecast horizon. Simulate this procedure using a loop:
Compute the current state distribution moments by filtering all in-sample observations through the estimated model.
When an observation is available in the forecast horizon, filter it through the model. EstMdl does not store the regression coefficients, so you must pass them in using the name-value argument Beta.
Set the current state distribution state moments to the nowcasts.
Repeat steps 2 and 3 when new observations are available.
[currentState,currentStateCov] = update(EstMdl,y(1:T-fh), ...
Predictors=Z(1:T-fh,:),Beta=estParams(end-1:end));
unrateF = zeros(fh,2);
unrateCovF = cell(fh,1);
[unrateF(j,:),unrateCovF{j}] = update(EstMdl,y(T-fh+j),currentState,currentStateCov, ...
Predictors=Z(T-fh+j,:),Beta=estParams(end-1:end));
currentState = unrateF(j,:)';
currentStateCov = unrateCovF{j};
Plot the estimated, filtered states. Recall that the first state is the change in the unemployment rate, and the second state helps build the first.
plot(dates((end-fh+1):end),[unrateF(:,1) y((end-fh+1):end)]);
The filter function returns only the sum of the loglikelihoods for specified observations. To efficiently compute the loglikelihood of each observation, which can be convenient for custom estimation techniques, use update instead.
Evaluate the likelihood function for each observation.
[~,~,logLj] = update(Mdl,y);
logLj is a 100-by-1 vector; logLj(j) is the loglikelihood evaluated at observation j.
Use filter to evaluate the likelihood for the entire data set.
[~,logL] = filter(Mdl,y);
logL is a scalar representing the full data likelihood.
Because the software assumes the sample is randomly drawn, the likelihood for all observations is the sum of the individual loglikelihood values. Confirm this fact.
discrepancy = logL - sum(logLj);
areEqual = discrepancy < tol
areEqual = logical
If Mdl is partially specified (that is, it contains unknown parameters), specify estimates of the unknown parameters by using the 'Params' name-value argument. Otherwise, update issues an error.
currentState — Current mean of state distribution
Mdl.Mean0 (default) | numeric vector
The current mean of the state distribution (in other words, the mean at time 1 before the Kalman filter processes the specified observations Y), specified as an m-by-1 numeric vector. m is the number of states.
CurrentStateCov — Current covariance matrix of state distribution
Mdl.Cov0 (default) | numeric matrix
The current covariance matrix of the state distribution (in other words, the covariance matrix at time 1 before the Kalman filter processes the specified observations Y), specified as an m-by-m symmetric, positive semi-definite numeric matrix.
Example: update(Mdl,Y,Params=params,SquareRoot=true) sets unknown parameters in the partially specified model Mdl to the values in params, and specifies use of the square-root Kalman filter variant for numerical stability.
If Mdl is fully specified, update ignores Params.
Univariate — Flag for applying univariate treatment of multivariate series
Flag for applying the univariate treatment of a multivariate series (also known as sequential filtering), specified as true or false. A value of true applies the univariate treatment.
Example: Univariate=true
SquareRoot — Flag for applying square-root Kalman filter variant
Flag for applying the square-root Kalman filter variant, specified as true or false. A value of true applies the square-root filter when update implements the Kalman filter.
If you suspect that the eigenvalues of the filtered state or forecasted observation covariance matrices are close to zero, set SquareRoot=true. The square-root filter is robust to numerical issues arising from the finite precision of calculations, but it requires more computational resources.
Example: SquareRoot=true
Forecast uncertainty threshold, specified as a nonnegative scalar.
Example: Tolerance=1e-15
Predictor variables in the state-space model observation equation, specified as a T-by-d numeric matrix, where d is the number of predictor variables. Row t corresponds to the observed predictors at period t (Zt). The expanded observation equation is
{y}_{t}-{Z}_{t}\beta =C{x}_{t}+D{u}_{t}.
That is, update deflates the observations using the regression component. β is the time-invariant vector of regression coefficients that the software estimates with all other parameters.
Regression coefficients corresponding to predictor variables, specified as a d-by-n numeric matrix. d is the number of predictor variables (see Predictors).
If Mdl is an estimated state-space model, specify the estimated regression coefficients stored in estParams.
nextState — State mean after update applies Kalman filter
State mean after update applies the Kalman filter, returned as an m-by-1 numeric vector. Elements correspond to the order of the states defined in Mdl (either by the rows of A or as determined by Mdl.ParamMap).
NextStateCov — State covariance matrix after update applies Kalman filter
State covariance matrix after update applies the Kalman filter, returned as an m-by-m numeric matrix. Rows and columns correspond to the order of the states defined in Mdl (either by the rows of A or as determined by Mdl.ParamMap).
logL — Loglikelihood for each observation
Loglikelihood for each observation in Y, returned as a T-by-1 numeric vector.
The real-time state-distribution update applies one recursion of the Kalman filter to a standard state-space model given a length T response series and the state distribution at time T - 1, to compute the state distribution at time T.
Consider a state-space model expressed in compact form
\left[\begin{array}{c}{x}_{t}\\ {y}_{t}\end{array}\right]=\left[\begin{array}{c}{A}_{t}\\ {C}_{t}{A}_{t}\end{array}\right]{x}_{t-1}+\left[\begin{array}{cc}{B}_{t}& 0\\ {C}_{t}{B}_{t}& {D}_{t}\end{array}\right]\left[\begin{array}{c}{u}_{t}\\ {\epsilon }_{t}\end{array}\right].
The Kalman filter proceeds as follows for each period t:
Obtain the forecast distributions for each period in the data by recursively applying the conditional expectation to the state-space equation, given the initial state distribution moments {x}_{0|0} and {P}_{0|0} and all observations up to time t − 1 ({Y}_{1}^{t-1}). The resulting conditional distribution is
\left[\begin{array}{c}{x}_{t}\\ {y}_{t}\end{array}\right]|{Y}_{1}^{t-1}~Ν\left(\left[\begin{array}{c}{\stackrel{^}{x}}_{t|t-1}\\ {\stackrel{^}{y}}_{t|t-1}\end{array}\right],\left[\begin{array}{cc}{P}_{t|t-1}& {L}_{t|t-1}\\ {L}_{t|t-1}^{\prime }& {V}_{t|t-1}\end{array}\right]\right),
{\stackrel{^}{x}}_{t|t-1}={A}_{t}{\stackrel{^}{x}}_{t-1|t-1},
the state forecast for time t.
{\stackrel{^}{y}}_{t|t-1}={C}_{t}{\stackrel{^}{x}}_{t|t-1},
the forecasted response for time t.
{P}_{t|t-1}={A}_{t}{P}_{t-1|t-1}{A}_{t}^{\prime }+{B}_{t}{B}_{t}^{\prime },
the state forecast covariance.
{V}_{t|t-1}={C}_{t}{P}_{t|t-1}{C}_{t}^{\prime }+{D}_{t}{D}_{t}^{\prime },
the forecasted response covariance.
{L}_{t|t-1}={P}_{t|t-1}{C}_{t}^{\prime },
the state and response forecast covariance.
Filter observation t through the model to obtain the updated state distribution:
{x}_{t}|{Y}_{1}^{t}~Ν\left({\stackrel{^}{x}}_{t|t},{P}_{t|t}\right),
{\stackrel{^}{x}}_{t|t}={\stackrel{^}{x}}_{t|t-1}+{L}_{t|t-1}{V}_{t|t-1}^{-1}\left({y}_{t}-{\stackrel{^}{y}}_{t|t-1}\right),
the state filter estimator.
{P}_{t|t}={P}_{t|t-1}-{L}_{t|t-1}{V}_{t|t-1}^{-1}{L}_{t|t-1}^{\prime },
the state covariance filter estimator.
Here {\stackrel{^}{x}}_{t-1|t-1} is the current state mean and {P}_{t-1|t-1} is the current state covariance, while {\stackrel{^}{x}}_{t|t} is the new state mean and {P}_{t|t} is the new state covariance.
For explicitly defined state-space models, update applies all predictors to each response series. However, each response series has its own set of regression coefficients.
For efficiency, update does minimal input validation.
In theory, the state covariance matrix must be symmetric and positive semi-definite. update forces symmetry of the covariance matrix before it applies the Kalman filter, but it does not check whether the matrix is positive semi-definite.
To obtain filtered states for each period in the response data, call the filter function instead. Unlike update, filter performs comprehensive input validation.
filter | forecast | smooth |
Some Properties of Distances and Best Proximity Points of Cyclic Proximal Contractions in Metric Spaces
M. De La Sen, Asier Ibeas
This paper presents some results concerning the properties of distances and the existence and uniqueness of best proximity points of p-cyclic proximal contractions, weak proximal contractions, and some of their generalizations for the non-self-mapping T:{\bigcup }_{i\in \stackrel{-}{p}}{A}_{i}\to {\bigcup }_{i\in \stackrel{-}{p}}{B}_{i} (p\ge 2), where {A}_{i} and {B}_{i}, \forall i\in \stackrel{-}{p}=\left\{1,2,\dots ,p\right\}, are nonempty subsets of X, T\left({A}_{i}\right)\subseteq {B}_{i} for all i\in \stackrel{-}{p}, and \left(X,d\right) is a metric space. The boundedness and the convergence of the sequences of distances in the domains and in their respective image sets of the cyclic proximal and weak cyclic proximal non-self-mappings, and of some of their generalizations, are investigated. The existence and uniqueness of the best proximity points and the properties of convergence of the iterates to such points are also addressed.
M. De La Sen. Asier Ibeas. "Some Properties of Distances and Best Proximity Points of Cyclic Proximal Contractions in Metric Spaces." Abstr. Appl. Anal. 2014 (SI71) 1 - 11, 2014. https://doi.org/10.1155/2014/914915
Formulation and Solution of n th-Order Derivative Fuzzy Integrodifferential Equation Using New Iterative Method with a Reliable Algorithm
The nth-order derivative fuzzy integro-differential equation in parametric form is converted to its crisp form, and then the new iterative method with a reliable algorithm is used to obtain an approximate solution of this crisp form. The analysis is accompanied by numerical examples which confirm the efficiency and power of this method in solving fuzzy integro-differential equations.
A. A. Hemeda. "Formulation and Solution of nth-Order Derivative Fuzzy Integrodifferential Equation Using New Iterative Method with a Reliable Algorithm." J. Appl. Math. 2012 (SI06) 1 - 17, 2012. https://doi.org/10.1155/2012/325473
On Roberts rings
In 1985, P. C. Roberts [14] proved the vanishing theorem of intersection multiplicities for a local ring $A$ that satisfies $\tau_{A/S}([A]) = [\operatorname{Spec} A]_{\dim A}$, where $\tau_{A/S}$ is the Riemann-Roch map for $\operatorname{Spec} A$ with regular base scheme $\operatorname{Spec} S$. We refer to such rings as Roberts rings. For rings of positive characteristic, Roberts rings can be characterized by the Frobenius maps. For rings whose field of fractions has characteristic 0, Roberts rings can be characterized by certain Galois extensions. Basic properties and examples of Roberts rings are given in the paper.
Kazuhiko KURANO. "On Roberts rings." J. Math. Soc. Japan 53 (2) 333 - 355, April, 2001. https://doi.org/10.2969/jmsj/05320333
Primary: 13D15 , 14C40
Keywords: Riemann-Roch map , Roberts ring
In the Money Put Option: What It Means and How It Works
Put Options: An Overview
"In the Money" Put Option
Investing can be a very rewarding experience. But things can get a little daunting, not to mention intimidating, with all the options out there. Most investors start off with stocks, bonds, and mutual funds (among others), as they're the simplest and most common vehicles from which to choose.
Other investments require a little more experience and/or research to generate a profit. Options trading is one of them. The more you know about how they work, the easier it will be to recognize where opportunities exist.
A put option is the opposite of a call option. In the case of a call option, the holder has the right (but not the obligation) to buy an underlying security at a specified strike price before it reaches its expiration date. A put that is in the money has intrinsic value. In this article, we look at how put options work and how you can generate profits when they're in the money.
Investors with put options have the right but not the obligation to sell shares in an underlying security at a certain price by a specified date.
A put option is said to be in the money when the strike price is higher than the underlying security's market price.
Investors commonly use put options as downside protection, which cuts or prevents a drop in value.
Puts may give investors short market exposure with limited risk if the underlying asset's price rises.
A put option's time value, which is an extra premium that an investor will pay above the option's intrinsic value, can also affect the option's value.
An option contract is a financial derivative: a holder buys a contract sold by a writer. Options can be both calls and puts. Both of these can be used to trade any number of underlying assets or securities. These include stocks, bonds, commodities, currencies, indexes, and futures.
A put option gives the holder the right but not the obligation to sell a certain amount of the underlying asset or security by a certain date (the expiration date) at a certain price. This price is called the strike price. Both call and put options can be either out of the money (OTM), at the money, or in the money (ITM). This moneyness of options (whether they're calls or puts) describes a situation that relates the strike price of a derivative to the price of its underlying security.
A put option that is in the money is one whose strike price is greater than the market price of the underlying asset. This means that the put holder has the right to sell the underlying at a price that is greater than where it currently trades. When an option is in the money, it allows for an immediate profit: the holder can buy the shares at the market price and sell them at the higher strike price. Therefore, the price of an ITM put closely tracks changes in the underlying asset or security.
In the money options always have deltas whose absolute value is greater than 0.50 (positive for calls, negative for puts).
A put option buyer has the right but not the obligation to sell a specified quantity of the underlying security at a predetermined strike price on or before its expiration date. On the other hand, the seller or writer of a put option is obligated to buy the underlying security at a predetermined strike price if the corresponding put option is exercised.
Put options are used as downside protection: strategies that mitigate, if not completely prevent, a drop in a holding's value. The reason is that owning the underlying asset along with the right to sell it at some price effectively gives you a guaranteed floor price. Put options can also be used to speculate on an underlying if you think that it will go down in price. Thus, a put can give short market exposure with limited risk if the underlying security does, in fact, rise.
A put option is considered in the money (ITM) when the underlying security's current market price is below the put option's strike price. The put option is in the money because the holder has the right to sell the underlying security above its current market price. When there is a right to sell the underlying security at a price higher than its market price, that right has a value equal to at least the strike price less the current market price.
Therefore, an ITM put option is one whose strike price is above the current market price. When an investor holds an ITM put option at expiry, the stock price is below the strike price, so the option may well be worth exercising. The buyer of a put option wants the stock's price to fall far enough below the option's strike to at least cover the cost of the premium paid for the put.
The amount that a put option's strike price is greater than the current underlying security's price is known as intrinsic value because the put option is worth at least that amount.
Put options allow the contract holder to lock in a price to sell the underlying asset by a predetermined time. Remember, the put option gives the holder the right (but not the obligation) to sell the stock or asset by the expiration date at the strike price. When an option expires, it is settled. The option may expire worthless or with some value left. The underlying asset's price can make the value of a put (and a call) option fluctuate along with another factor, which is known as its time value.
The time value is an additional premium that investors are willing to pay above the option's intrinsic value. The basic formula to figure out an option's time value is to subtract its intrinsic value from the premium. So:
Time Value = Option Premium - Option's Intrinsic Value
Investors are often willing to pay this premium because they believe that the value of the option will increase before it expires. An option's time value is greater when there is a greater length of time until it expires. When the option gets deeper in the money, its intrinsic value increases. Investors can use the formula above to determine how much they're willing to spend for an option. For instance, you'd want to ensure that the premium is higher than the option's intrinsic value. If not, you'll end up losing on the purchase.
The intrinsic value of any financial instrument is the measure of its worth using objective calculations instead of the current market price. ITM options have some intrinsic value, by definition.
Example of an "in the Money" Put Option
Here's a hypothetical example to show how put options work when they're in the money. Assume that you have a put option for shares in Company XYZ. This contract gives you the right to sell 100 shares of the company at a strike price of $100. And you purchased the put option at a premium of $10 with the belief that the stock price would drop before the expiration date.
Your hunch proves right: at the expiration date the stock price dips to $75 per share, rendering the put option in the money. You could exercise the option and net a profit of $15 per share, which is the difference between the strike price and the actual price of the stock, minus the premium you paid ($25 - $10). Multiplying that by the number of shares (100) gives a profit of $1,500.
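The arithmetic in the Company XYZ example can be checked with a few lines of Python. This is purely illustrative; the function name is our own.

```python
def put_profit_per_share(strike, spot, premium):
    """Profit per share from a long put held to expiry."""
    intrinsic = max(strike - spot, 0.0)  # in the money only if strike > spot
    return intrinsic - premium

# Strike $100, stock at $75, premium $10: intrinsic $25, profit $15/share.
profit = put_profit_per_share(100, 75, 10)
total = profit * 100  # 100 shares per contract -> $1,500
print(profit, total)  # 15.0 1500.0
```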
What Happens If My Put Option Expires in the Money?
Options can be either out of the money, at the money, or in the money. When a put option expires in the money, the contract holder's stake in the underlying security is sold at the strike price, provided the investor owns shares. If the investor doesn't, a short position is initiated at the strike price. This allows the investor to buy the asset back at a lower price.
What Is an in the Money Put Option?
A put option is considered in the money when the price of the underlying asset is lower than the strike price at the expiration date. Therefore, the exercise price is above the current market price. Being in the money allows the holder of the contract to sell the related security at a price that is higher than where it trades when the put option contract expires.
What Happens If I Sell a Put Option in the Money?
When a put option is in the money, you can choose to exercise it. This means that you can sell the shares of the underlying asset as outlined in the contract at the strike price and make a profit. This is generated by subtracting the current price of the asset from the strike price, then subtracting the premium you paid. If you choose not to exercise it, you may choose to sell the contract to another buyer.
ITM options have both extrinsic (time) value and intrinsic value, making them more expensive in terms of premium. These options also have higher deltas, making them behave more like the underlying itself. For purposes of hedging and speculation, traders will sometimes prefer OTM options because they have lower premiums and smaller deltas.
Investing in options, whether you choose to invest in calls or puts, can seem very intimidating at first. That's because there are many fine nuances that you have to wade through before you can fully understand how they work. But once you get a fundamental understanding, you may be able to generate big returns and increase your bottom line. |
EllipticCE - Maple Help
Home : Support : Online Help : Mathematics : Mathematical Functions : EllipticCE
Incomplete and complete elliptic integrals of the second kind
EllipticE(z,k)
EllipticE(k)
EllipticCE(k)
The incomplete elliptic integral EllipticE is defined by
$\mathrm{EllipticE}(z,k)=\displaystyle\int_0^z \frac{\sqrt{1-k^2t^2}}{\sqrt{1-t^2}}\,dt$
The complete elliptic integrals EllipticE and EllipticCE are defined by
$\mathrm{EllipticE}(k)=\mathrm{EllipticE}(1,k)$
$\mathrm{EllipticCE}(k)=\mathrm{EllipticE}\!\left(1,\sqrt{1-k^2}\right)$
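The complete integrals can be cross-checked outside Maple. The plain-Python sketch below is our own Simpson's-rule helper, not Maple code; the substitution t = sin θ turns the defining integral (with z = 1) into the trigonometric form E(k) = ∫₀^{π/2} √(1 − k² sin²θ) dθ used here.

```python
import math

def complete_E(k, n=10000):
    """Complete elliptic integral of the second kind with modulus k,
    via Simpson's rule on E(k) = int_0^{pi/2} sqrt(1 - k^2 sin^2 t) dt."""
    a, b = 0.0, math.pi / 2
    h = (b - a) / n  # n must be even for Simpson's rule
    f = lambda t: math.sqrt(1.0 - (k * math.sin(t)) ** 2)
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

print(complete_E(0.3))                     # close to Maple's EllipticE(0.3)
print(complete_E(math.sqrt(1 - 0.3**2)))   # close to Maple's EllipticCE(0.3)
```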
EllipticE(0.2, 0.3) = 0.2012363833
EllipticE(0.3) = 1.534833465
EllipticCE(0.3) = 1.096477517 |
Support vector machine (SVM) for one-class and binary classification - MATLAB - MathWorks Korea
$f(x)=(x/s)'\beta+b.$
$\alpha_j\left[y_j f(x_j)-1+\xi_j\right]=0,\qquad \xi_j\left(C-\alpha_j\right)=0$
$f(x_j)=\varphi(x_j)'\beta+b,$
$0.5\sum_{jk}\alpha_j\alpha_k G(x_j,x_k)$
$\alpha_1,\dots,\alpha_n$
$\sum\alpha_j=n\nu$
$0\le\alpha_j\le 1$
$f(x)=x'\beta+b,$
$2/\|\beta\|.$
$\|\beta\|$
$0.5\|\beta\|^2+C\sum\xi_j$
$y_j f(x_j)\ge 1-\xi_j$
$\xi_j\ge 0$
$0.5\sum_{j=1}^{n}\sum_{k=1}^{n}\alpha_j\alpha_k y_j y_k\, x_j'x_k-\sum_{j=1}^{n}\alpha_j$
$\sum\alpha_j y_j=0$
$0\le\alpha_j\le C$
$\hat f(x)=\sum_{j=1}^{n}\hat\alpha_j y_j\, x'x_j+\hat b.$
$\hat b$
$\hat\alpha_j$
$\hat\alpha$
$\operatorname{sign}(\hat f(z)).$
$0.5\sum_{j=1}^{n}\sum_{k=1}^{n}\alpha_j\alpha_k y_j y_k G(x_j,x_k)-\sum_{j=1}^{n}\alpha_j$
$\sum\alpha_j y_j=0$
$0\le\alpha_j\le C$
$\hat f(x)=\sum_{j=1}^{n}\hat\alpha_j y_j G(x,x_j)+\hat b.$
$C_j=nC_0 w_j^{*},$
$x_j^{*}=\dfrac{x_j-\mu_j^{*}}{\sigma_j^{*}},$
$\mu_j^{*}=\dfrac{1}{\sum_k w_k^{*}}\sum_k w_k^{*}x_{jk},\qquad (\sigma_j^{*})^2=\dfrac{v_1}{v_1^2-v_2}\sum_k w_k^{*}(x_{jk}-\mu_j^{*})^2,\qquad v_1=\sum_j w_j^{*},\qquad v_2=\sum_j (w_j^{*})^2.$
$\sum_{j=1}^{n}\alpha_j=n\nu.$ |
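As a rough illustration of the primal soft-margin objective $0.5\|\beta\|^2+C\sum\xi_j$ shown above, the following sketch trains a linear SVM by batch subgradient descent. This is a pedagogical toy with our own data and hyperparameters, not MathWorks' solver (which uses SMO/ISDA-style optimization of the dual).

```python
def train_linear_svm(X, y, C=1.0, lr=0.01, epochs=500):
    """Batch subgradient descent on 0.5*||beta||^2 + C*sum_j hinge_j.
    X: list of feature lists; y: labels in {-1, +1}."""
    d = len(X[0])
    beta = [0.0] * d
    b = 0.0
    for _ in range(epochs):
        g_beta = beta[:]  # gradient of the 0.5*||beta||^2 regularizer
        g_b = 0.0
        for xi, yi in zip(X, y):
            margin = yi * (sum(bj * xj for bj, xj in zip(beta, xi)) + b)
            if margin < 1:  # hinge term active: subgradient is -y*x
                for j in range(d):
                    g_beta[j] -= C * yi * xi[j]
                g_b -= C * yi
        beta = [bj - lr * gj for bj, gj in zip(beta, g_beta)]
        b -= lr * g_b
    return beta, b

def predict(beta, b, x):
    return 1 if sum(bj * xj for bj, xj in zip(beta, x)) + b >= 0 else -1

# Toy separable data: class +1 around (2, 2), class -1 around (-2, -2).
X = [[2, 2], [3, 2], [2, 3], [-2, -2], [-3, -2], [-2, -3]]
y = [1, 1, 1, -1, -1, -1]
beta, b = train_linear_svm(X, y)
```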
Given $g(x)=x(x+2)(x-3)$, where are the asymptotes of $y=\frac{1}{g(x)}$ relative to the graph of $y=g(x)$? Recall that for the reciprocal function, the vertical asymptotes occur at the zeros of the original function, and the reciprocal function is positive where the original function is positive.
Sketch a prediction for the original function. Then check your prediction using a graphing calculator. Be sure to identify and correct any errors.
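A quick numerical check of the recalled facts (plain Python, our own helper names): the vertical asymptotes of 1/g sit at the zeros of g, and the two functions agree in sign everywhere else.

```python
def g(x):
    return x * (x + 2) * (x - 3)

# Zeros of g = vertical asymptotes of y = 1/g(x).
zeros = [-2, 0, 3]
print([g(z) for z in zeros])  # [0, 0, 0]

# Sign agreement: 1/g is positive exactly where g is positive.
samples = [-3, -1, 1, 4]
print([(x, g(x) > 0, (1 / g(x)) > 0) for x in samples])
```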
Sum and difference monopulse for ULA - MATLAB - MathWorks Benelux
phased.SumDifferenceMonopulseTracker
Monopulse Algorithm
Sum and difference monopulse for ULA
The SumDifferenceMonopulseTracker object implements a sum and difference monopulse algorithm on a uniform linear array.
Define and set up your sum and difference monopulse DOA estimator. See Construction.
Call step to estimate the DOA according to the properties of phased.SumDifferenceMonopulseTracker. The behavior of step is specific to each object in the toolbox.
H = phased.SumDifferenceMonopulseTracker creates a tracker System object, H. The object uses sum and difference monopulse algorithms on a uniform linear array (ULA).
H = phased.SumDifferenceMonopulseTracker(Name,Value) creates a ULA monopulse tracker object, H, with each specified property Name set to the specified Value. You can specify additional name-value pair arguments in any order as (Name1,Value1,...,NameN,ValueN).
step Perform monopulse tracking using ULA
$w_s=\left(1,\ e^{ikd\sin\varphi_0},\ e^{ik2d\sin\varphi_0},\ \dots,\ e^{ik(N-1)d\sin\varphi_0}\right)$
$v=\left(1,\ e^{ikd\sin\varphi},\ e^{ik2d\sin\varphi},\ \dots,\ e^{ik(N-1)d\sin\varphi}\right)$
$w_s^H v(\varphi)$
$w_d=-i\left(1,\ e^{ikd\sin\varphi_0},\ \dots,\ e^{ik(N/2)d\sin\varphi_0},\ -e^{ik(N/2+1)d\sin\varphi_0},\ \dots,\ -e^{ik(N-1)d\sin\varphi_0}\right)$
$w_d^H v(\varphi)$
$R(\varphi)=\operatorname{Re}\!\left(\dfrac{w_d^H v(\varphi)}{w_s^H v(\varphi)}\right)$
$z=\operatorname{Re}\!\left(\dfrac{w_d^H x}{w_s^H x}\right)$
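The sum/difference ratio above is easy to reproduce numerically. The sketch below (plain Python with cmath; the parameters N = 8 elements and half-wavelength spacing are our own choices, not toolbox defaults) verifies that the error signal vanishes when the arrival angle equals the steering angle φ₀.

```python
import cmath
import math

def monopulse_ratio(phi, phi0, N=8, d_over_lambda=0.5):
    """Real part of (w_d^H v) / (w_s^H v) for a ULA steered to phi0."""
    kd = 2 * math.pi * d_over_lambda  # k*d with d = lambda/2
    v  = [cmath.exp(1j * kd * n * math.sin(phi))  for n in range(N)]
    ws = [cmath.exp(1j * kd * n * math.sin(phi0)) for n in range(N)]
    # Difference weights: sign flip on the second half, overall factor -i.
    wd = [-1j * (1 if n < N // 2 else -1) * w for n, w in enumerate(ws)]
    num = sum(w.conjugate() * x for w, x in zip(wd, v))
    den = sum(w.conjugate() * x for w, x in zip(ws, v))
    return (num / den).real

phi0 = 0.1  # steering angle in radians
print(abs(monopulse_ratio(phi0, phi0)) < 1e-12)  # True: zero error on boresight
```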
phased.BeamscanEstimator | phased.SumDifferenceMonopulseTracker2D |
Baryon number - zxc.wiki
The baryon number $B$, a quantum number of elementary particles, is defined as the difference between the number of quarks and the number of antiquarks, divided by 3:
$B = \dfrac{n_q - n_{\bar q}}{3}$
+1 for baryons such as the proton and the neutron (each composed of 3 quarks),
+1/3 for quarks,
0 for leptons (such as the electron) and for mesons,
−1/3 for antiquarks, and
−1 for antibaryons (each composed of 3 antiquarks).
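The definition is easy to tabulate; the toy sketch below (our own helper, standard quark contents) reproduces the values listed above.

```python
from fractions import Fraction

def baryon_number(n_quarks, n_antiquarks):
    """B = (n_q - n_qbar) / 3, kept as an exact fraction."""
    return Fraction(n_quarks - n_antiquarks, 3)

print(baryon_number(3, 0))  # proton/neutron (3 quarks): 1
print(baryon_number(1, 1))  # meson (quark + antiquark): 0
print(baryon_number(0, 3))  # antibaryon (3 antiquarks): -1
print(baryon_number(1, 0))  # single quark: 1/3
```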
Baryon number as a conserved quantity
Experience has shown that the number of baryons in a closed system always remains constant, so it is an absolutely conserved quantity. This knowledge, a basic component of the Standard Model of elementary particle physics, makes the stability of matter understandable. Since, because of energy conservation, a spontaneous decay can only ever lead to lighter particles, the lightest baryon, the proton, is stable.
In many theories going beyond the Standard Model, such as grand unified theories (GUTs), the baryon number is not an exactly conserved quantity, so that protons decay over time, but with a very long half-life.
The currently assumed mechanisms of baryogenesis, the emergence of the imbalance between matter and antimatter in the early universe fractions of a second after the Big Bang, presuppose that the baryon number is not conserved.
In most versions of the GUT, however, at least the difference B − L of baryon and lepton numbers is strictly conserved.
(i) There are 26 questions in all. All questions are compulsory.
(ii) This question paper has five sections : Section A, Section B, Section C, Section D and Section E.
(iii) Section A contains five questions of one mark each, Section B contains five questions of two marks each, Section C contains twelve questions of three marks each, Section D contains one value based question of four marks and Section E contains three questions of five marks each.
(iv) There is no overall choice. However, an internal choice has been provided in one question of two marks, one question of three marks and all the three questions of five marks weightage. You have to attempt only one of the choices in such questions.
The line AB in the ray diagram represents a lens. State whether the lens is convex or concave.
Distinguish between emf and terminal voltage of a cell. VIEW SOLUTION
Draw a graph to show variation of capacitive-reactance with frequency in an a.c. circuit. VIEW SOLUTION
What is the function of a 'Repeater' used in communication system? VIEW SOLUTION
The field lines of a negative point charge are as shown in the figure. Does the kinetic energy of a small negative charge increase or decrease in going from B to A?
The equivalent wavelength of a moving electron has the same value as that of a photon of energy 6 × 10–17 J. Calculate the momentum of the electron. VIEW SOLUTION
What is ground wave communication? Explain why this mode cannot be used for long distance communication using high frequencies. VIEW SOLUTION
A ray of light passes through an equilateral glass prism such that the angle of incidence is equal to the angle of emergence and each of these angles is equal to 3/4 of angle of prism. Find the angle of deviation.
Calculate the speed of light in a medium whose critical angle is 45°. Does critical angle for a given pair of media depend on the wavelength of incident light ? Give reason. VIEW SOLUTION
How does one explain, using de Broglie hypothesis, Bohr's second postulate of quantization of orbital angular momentum? VIEW SOLUTION
In a meter bridge shown in the figure, the balance point is found to be 40 cm from end A. If a resistance of 10 Ω is connected in series with R, balance point is obtained 60 cm from A. Calculate the values of R and S.
State Lenz's law. Illustrate, by giving an example, how this law helps in predicting the direction of the current in a loop in the presence of a changing magnetic flux.
In a given coil of self-inductance of 5 mH, current changes from 4 A to 1 A in 30 ms. Calculate the emf induced in the coil.
In what way is Gauss's law in magnetism different from that used in electrostatics ? Explain briefly.
The Earth's magnetic field at the Equator is approximately 0.4 G. Estimate the Earth's magnetic dipole moment. Given : Radius of the Earth = 6400 km. VIEW SOLUTION
How are electromagnetic waves produced? What is the source of energy of these waves?
Draw a schematic sketch of the electromagnetic waves propagating along the + x-axis. Indicate the directions of the electric and magnetic fields. Write the relation between the velocity of propagation and the magnitudes of electric and magnetic fields. VIEW SOLUTION
Obtain the relation between the decay constant and half life of a radioactive sample.
The half life of a certain radioactive material against α-decay is 100 days. After how much time, will the undecayed fraction of the material be 6.25%? VIEW SOLUTION
Write two important considerations used while fabricating a Zener diode. Explain, with the help of a circuit diagram, the principle and working of a Zener diode as voltage regulator. VIEW SOLUTION
Find the equivalent capacitance of the network shown in the figure, when each capacitor is of 1 μF. When the ends X and Y are connected to a 6 V battery, find out (i) the charge and (ii) the energy stored in the network.
State the underlying principle of a potentiometer. Write two factors by which current sensitivity of a potentiometer can be increased. Why is a potentiometer preferred over a voltmeter for measuring the emf of a cell? VIEW SOLUTION
(a) Define the term 'intensity of radiation' in terms of photon picture of light.
(b) Two monochromatic beams, one red and the other blue, have the same intensity. In which case (i) the number of photons per unit area per second is larger, (ii) the maximum kinetic energy of the photoelectrons is more? Justify your answer.
(a) Give two reasons to explain why reflecting telescopes are preferred over refracting type.
(b) Use mirror equation to show that convex mirror always produces a virtual image independent of the location of the object.
(a) Write the necessary conditions to obtain sustained interference fringes.
(b) In Young's double slit experiment, plot a graph showing the variation of fringe width versus the distance of the screen from the plane of the slits keeping other parameters same. What information can one obtain from the slope of the curve?
(c) What is the effect on the fringe width if the distance between the slits is reduced keeping other parameters same?
A series LCR circuit is connected across an a.c. source of variable angular frequency 'ω'. Plot a graph showing variation of current 'i' as a function of 'ω' for two resistances R1 and R2 (R1 > R2).
Answer the following questions using this graph :
(a) In which case is the resonance sharper and why?
(b) In which case is the power dissipation more and why?
Draw the necessary energy band diagrams to distinguish between conductors, semiconductors and insulators.
How does the change in temperature affect the behaviour of these materials ? Explain briefly. VIEW SOLUTION
(a) What are the three basic units in communication systems ? Write briefly the function of each of these.
(b) Write any three applications of the Internet used in communication systems. VIEW SOLUTION
During a thunderstorm the 'live' wire of the transmission line fell down to the ground from the poles in the street. A group of boys who passed by noticed it, and some of them wanted to move the wire to the side. As they were approaching the wire and trying to lift the cable, Anuj noticed it and immediately pushed them away, thus preventing them from touching the live wire. In the process, some of them got hurt. Anuj took them to a doctor to get them medical aid.
Based on the above paragraph, answer the following questions :
(a) Write the two values which Anuj displayed during the incident.
(b) Why is it that a bird can sit on a suspended 'live' wire without any harm whereas touching it on the ground can give a fatal shock ?
(c) The electric power from a power plant is stepped up to a very high voltage before transmitting it to distant consumers. Explain why. VIEW SOLUTION
(a) Use Huygens' principle to show the propagation of a plane wavefront from a denser medium to a rarer medium. Hence find the ratio of the speeds of wavefronts in the two media.
(b) (i) Why does an unpolarised light incident on a polaroid get linearly polarised ?
(ii) Derive the expression of Brewster's law when unpolarised light passing from a rarer to a denser medium gets polarised on reflection at the interface.
A biconvex lens with its two faces of equal radius of curvature R is made of a transparent medium of refractive index μ1. It is kept in contact with a medium of refractive index μ2 as shown in the figure.
(a) Find the equivalent focal length of the combination.
(b) Obtain the condition when this combination acts as a diverging lens.
(c) Draw the ray diagram for the case μ1 > (μ2 + 1) / 2, when the object is kept far away from the lens. Point out the nature of the image formed by the system. VIEW SOLUTION
Two infinitely long straight parallel wires, '1' and '2', carrying steady currents I1 and I2 in the same direction are separated by a distance d. Obtain the expression for the magnetic field
\stackrel{\to }{\mathrm{B}}
due to the wire '1' acting on wire '2'. Hence find out, with the help of a suitable diagram, the magnitude and direction of this force per unit length on wire '2' due to wire '1'. How does the nature of this force change if the currents are in opposite directions? Use this expression to define the S.I. unit of current.
State any two causes of energy loss in actual transformers. VIEW SOLUTION
(a) State Kirchhoff's rules and explain on what basis they are justified.
(b) Two cells of emfs E1 and E2 and internal resistances r1 and r2 are connected in parallel. Derive the expression for the (i) emf and (ii) internal resistance of a single equivalent cell which can replace this combination.
(a) "The outward electric flux due to charge +Q is independent of the shape and size of the surface which encloses it." Give two reasons to justify this statement.
(b) Two identical circular loops '1' and '2' of radius R each have linear charge densities −λ and +λ C/m respectively. The loops are placed coaxially with their centres
R\sqrt{3}
distance apart. Find the magnitude and direction of the net electric field at the centre of loop '1'. VIEW SOLUTION |
Given the graph of $y=f(x)$ below, sketch a graph of each of the following transformations.
For $y=-2f(x)$:
1. Reflect vertically (across the x-axis).
2. Stretch vertically by a factor of 2.
For $y=f(-x)+1$:
1. Reflect horizontally (across the y-axis).
2. Shift up 1 unit.
y=\frac{1}{f(x)}
Sketch the vertical asymptotes: add two vertical dashed lines to the graph, at x = −2 and at x = 3.
Since $\frac{1}{1}=1$ and $\frac{1}{-1}=-1$, mark the points that "stay in place" on the graph: (−3, −1), (−1, 1), (0, 1), (1, 1), and (5, −1).
Where the original function is positive, the reciprocal function will also be positive.
Where the original function is negative, the reciprocal function will also be negative. |
EuDML | Algebraic shifting and sequentially Cohen-Macaulay simplicial complexes.
Algebraic shifting and sequentially Cohen-Macaulay simplicial complexes.
Duval, Art M.
Duval, Art M. "Algebraic shifting and sequentially Cohen-Macaulay simplicial complexes." The Electronic Journal of Combinatorics [electronic only] 3.1 (1996): Research paper R21, 14 p. <http://eudml.org/doc/119198>.
@article{Duval1996,
author = {Duval, Art M.},
keywords = {h-triangle; nonpure shellable complexes; Cohen-Macaulay conditions; algebraic shifting; h-triangle},
title = {Algebraic shifting and sequentially Cohen-Macaulay simplicial complexes.},
AU - Duval, Art M.
TI - Algebraic shifting and sequentially Cohen-Macaulay simplicial complexes.
KW - h-triangle; nonpure shellable complexes; Cohen-Macaulay conditions; algebraic shifting; h-triangle
h-triangle, nonpure shellable complexes, Cohen-Macaulay conditions, algebraic shifting, h-triangle
Combinatorial properties (number of faces, shortest paths, etc.)
Articles by Duval |
Phase response of digital filter - MATLAB phasez - MathWorks India
Use designfilt to design an FIR filter of order 54, normalized cutoff frequency
0.3\pi
rad/s, passband ripple 0.7 dB, and stopband attenuation 42 dB. Use the method of constrained least squares. Display the phase response of the filter.
Design a lowpass equiripple filter with normalized passband frequency
0.45\pi
rad/s, normalized stopband frequency
0.55\pi
rad/s, passband ripple 1 dB, and stopband attenuation 60 dB. Display the phase response of the filter.
Design an elliptic lowpass IIR filter with normalized passband frequency $0.4\pi$ rad/s and normalized stopband frequency $0.5\pi$ rad/s.
$H(e^{j\omega})=\dfrac{B(e^{j\omega})}{A(e^{j\omega})}=\dfrac{b(1)+b(2)\,e^{-j\omega}+b(3)\,e^{-j2\omega}+\cdots+b(M)\,e^{-j(M-1)\omega}}{a(1)+a(2)\,e^{-j\omega}+a(3)\,e^{-j2\omega}+\cdots+a(N)\,e^{-j(N-1)\omega}}.$ |
When We Visualize City Names
2. Visualizing Place Names in the World
2.2. Cities with Longer Words
2.3. Cities with More Words
2.4. Cities with a Higher Vowel-Consonant Ratio
3. Experimenting with Placenames in China
3.1. Zhongyuan Markets
3.2. Mountains and Plains
3.3. Lakes in the South
3.4. Tibetan and Minnan Dialect
4. Summary of Clusters
Note: this article is an excerpt from the final report of Project Placename Insights, a coursework study on toponymy. Below is a conclusive analysis of the visualization section.
Place names around the world have a subtle and close relationship with their location. This is usually because different regions have different languages and writing systems, hence the different spelling patterns.
Characteristics of place names differ as geographic location changes. To a certain extent, the changes are both continuous and discrete. They are continuous because place names are similar in neighboring regions; for instance, across the European continent, the proportion of consonants in spellings rises from the Mediterranean toward the north. As another example, within China, a single unified country, place names in the northwest look more "Arabic" than those in the east.
Figure 9-0. Continuous. The proportion of consonants across the European continent.
The discrete character of place names, on the other hand, is usually caused by geographical barriers, political boundaries, and cultural divisions. For example, Spain and Argentina, two countries in vastly different locations, have similar place names because of historical colonial activity. Consider also the border between China and Vietnam: the two sides share similar cultures, but the official language on one side is Chinese while Vietnamese is used on the other. The two writing systems have different Latin transcription standards, so place names on the Vietnamese side are usually split into many relatively short words. This “discrete” character is caused by political reasons.
Figure 9-1. Discrete. The difference in place names caused by different transliteration systems.
Some geographical regions have quite distinct place-name patterns. For instance, in Eastern Europe, especially Russia, place names usually consist of a single long word, which makes them relatively easy to recognize; some Slavic suffixes like “-sk” are commonly seen. In other areas, such as sub-Saharan Africa and the island countries of Oceania, cultures and languages are so diverse that place names are much more challenging to identify.
Visualizing Place Names in the World
Figure 9-2. All cities with a population of greater than 500.
The figure above shows all cities with a population of more than 500 (about 180,000 cities in total) scattered on a dark background, with each point representing a city. This figure gives a sense of where geographic information is densely aggregated or recorded and where it is not. Western Europe holds the most densely distributed populated cities; other densely populated areas include the USA, Central America, and Southeast China.
Cities with Longer Words
Figure 9-3. Cities with longer words.
Cities are represented with different colors on this map; to avoid interference from mixed colors, the color blend mode was set to Normal, i.e., no blending. In this figure, a blue dot marks a city whose name is made of short words (e.g., Ho Chi Minh City), and a yellow dot marks long words (e.g., Vladivostok).
To design a proper word_length -> color projection function, we first investigated the distribution of word lengths. The median word length is around six letters, with a minimum of 1 letter and a maximum of over 20. The distribution resembles a normal distribution with skewed (imbalanced) tails, so a log-normal distribution fits the data well.
Figure 9-4. The PDF of the distribution model used in mapping word length to colors.
Above is its PDF (Probability Density Function) plotted. To transform the distribution into a color projection function, we need its CDF (Cumulative Distribution Function) expressions:
\displaystyle CDF(x, \mu, \sigma) = \frac12 + \frac12\operatorname{erf}\Big[\frac{\ln x-\mu}{\sqrt{2}\sigma}\Big]
The corresponding part of the code is implemented as below:
// Taylor-series approximation of the error function:
//   erf(x) = 2/sqrt(pi) * sum_{n>=0} (-1)^n x^(2n+1) / (n! (2n+1))
function erf(x) {
    let m = 1.00;       // running n!
    let s = 1.00;       // running (-1)^n
    let sum = x * 1.0;  // n = 0 term
    for (let i = 1; i < 50; i++) {
        m *= i;
        s *= -1;
        sum += (s * Math.pow(x, 2.0 * i + 1.0)) / (m * (2.0 * i + 1.0));
    }
    return 2 * sum / Math.sqrt(Math.PI);
}
function logNormalCDF(x, mu, sigma) {
    let par = (Math.log(x) - mu) / (Math.sqrt(2) * sigma)
    return 0.5 + 0.5 * erf(par)
}
const projectColor = (x) => Math.round(logNormalCDF(x/5, 0, 1)*255)
The resulting color is concatenated as a string:
context.fillStyle = `rgb(${projectColor(wordLength)}, ${projectColor(wordLength)}, ${255 - projectColor(wordLength)})`
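As a cross-check of the JavaScript above (my addition, not part of the original pipeline), the same projection can be written in Python using the standard library's erf instead of the hand-rolled Taylor series:

```python
from math import erf, log, sqrt

# Log-normal CDF, matching logNormalCDF in the JavaScript implementation.
def log_normal_cdf(x, mu=0.0, sigma=1.0):
    return 0.5 + 0.5 * erf((log(x) - mu) / (sqrt(2) * sigma))

# Map a word length to a 0-255 color channel, matching projectColor.
def project_color(word_length):
    return round(log_normal_cdf(word_length / 5) * 255)
```

Longer words map monotonically to larger channel values, which is exactly the property the blue-to-yellow color ramp relies on.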
Cities with More Words
Figure 9-5. Cities with more words.
The idea behind this image is similar to the previous plot, whereas a red dot indicates a city name that consists of many words, and a green dot indicates the contrary.
Cities with a Higher Vowel-Consonant Ratio
Figure 9-6. An experiment with vowel-consonant ratio.
We experimented with the vowel-consonant ratio on this map. Vowels here include only the five Latin letters a, e, i, o, u; all other letters count as consonants. More vowels in a word tend to make the pronunciation softer (e.g., “Ieyouia”), while a lower vowel-consonant ratio, i.e., more consonants in the spelling (e.g., “Pszykt”), usually sounds harder. We can observe the general pattern that places at higher latitudes usually have hard-sounding names. Certain countries, such as Japan and Nigeria, have particularly high vowel-consonant ratios.
Experimenting with Placenames in China
Figure 9-7. Filtering certain patterns in Chinese place names.
In addition to the regular machine learning and statistics steps, we tried to find out whether specific “patterns” exist in place names. We limited the area of research to Mainland China, since Chinese is a language we are both familiar with. We used regular expressions to filter the points in the scatter plot and found that, for some specific words or characters, the place names containing them tend to aggregate within a particular area or show interesting distribution patterns.
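The filtering step can be sketched as follows; the three-city sample and coordinates are hypothetical stand-ins for the real ~180,000-city dataset:

```python
import re

# Hypothetical sample: (romanized name, latitude, longitude)
cities = [
    ("Zhangdian", 36.81, 118.05),
    ("Foshan", 23.02, 113.12),
    ("Wuhu", 31.33, 118.43),
]

# Keep only names containing the syllable "dian" ("market");
# the surviving points are then drawn on the scatter plot.
pattern = re.compile(r"dian", re.IGNORECASE)
matches = [c for c in cities if pattern.search(c[0])]
```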
Zhongyuan Markets
“Dian(店)” means “market” in Mandarin Chinese. This is a word that appears only in the Zhongyuan Mandarin dialect; therefore, the place names containing “Dian” gather in the Zhongyuan region, the central east of Mainland China.
Mountains and Plains
“Shan(山)” means “mountain” in Chinese. The place names containing “Shan,” as the visualization suggests, appear more frequently in mountainous terrain. The western part of China is also mountainous but is sparsely covered by geographic data in general. Still, we can observe that in most populated areas, “Shan” names gather in hills rather than plains; notice the dense distribution over the hilly regions of Shandong, and the strip of dark area of the North China Plain surrounding the mountains on the scatter plot.
Lakes in the South
“Hu(湖)” stands for “lake” in Mandarin Chinese. Lakes are more densely distributed in Southern China than in the north; therefore, in the visualization, the southern part of China is more densely lit.
Tibetan and Minnan Dialect
“Cuo(错)” stands for “lake,” too, but in Tibetan instead of Mandarin Chinese, which explains why this time the lakes in Tibet are highlighted. There is, however, a strange, unintentional aggregation of points on the southeast coastline of China. After investigation, we discovered another Chinese word, “Cuo(厝),” which is spelled identically to “Cuo(错)” and means “house” in the Minnan dialect of the Chinese language. Minnan is widely spoken by inhabitants of the southeast coastline of China, and that fully explains the result.
Summary of Clusters
The writing systems of East Asia are usually ideographic, which generates clear syllable boundaries in romanized place names. Some common spelling patterns, including “-eng” and “-ang,” can be seen.
The features of Southeast Asian and South Asian place names are somewhat similar to those of East Asia. However, the average word length is dramatically shorter because each ideograph is transcribed as a separate word; the “syllable boundary” inside East Asian place-name words becomes white space here.
Place names in Sub-Saharan Africa are diverse in character, making them harder to recognize. We can occasionally notice European influence, especially French, on these place names.
Place names in Western Europe typically demonstrate a low vowel-consonant ratio, which means the pronunciation sounds “harder.” Some common suffixes like “-burg / -bourg” and “-eaux” can be observed.
Place names in Eastern Europe usually consist of a single long word, which makes them relatively easy to recognize. Some Slavic suffixes like “-sk” are commonly seen here. Place names from English-speaking countries (North America, Australia, New Zealand, and the UK) show features of the English language.
Place names in the Latin region, including Latin America, Spain, and Portugal, are made up of relatively short words. Some common prefixes like “de,” “le,” and “san” can be observed.
Place names in the Arabic cultural region usually show some unique patterns, including “Al,” “Ah,” “-bad,” and “-j.”
Place names in Oceania are diverse in character and, at the same time, sparsely represented in the collected data. This diversity makes them nearly impossible to classify.
Cistron - Wikipedia
A cistron is an alternative term for "gene".[1] The word cistron is used to emphasize that genes exhibit a specific behavior in a cis-trans test; distinct positions (or loci) within a genome are cistronic.
The words cistron and gene were coined before the advancing state of biology made it clear that the concepts they refer to are practically equivalent. The same historical naming practices are responsible for many of the synonyms in the life sciences.
The term cistron was coined by Seymour Benzer in an article entitled The elementary units of heredity.[2] The cistron was defined by an operational test applicable to most organisms that is sometimes referred to as a cis-trans test, but more often as a complementation test.
For example, suppose a mutation at a chromosome position x is responsible for a change in a recessive trait in a diploid organism (where chromosomes come in pairs). We say that the mutation is recessive because the organism will exhibit the wild-type phenotype (ordinary trait) unless both chromosomes of a pair have the mutation (homozygous mutation). Similarly, suppose a mutation at another position, y, is responsible for the same recessive trait. The positions x and y are said to be within the same cistron when an organism that has the mutation at x on one chromosome and the mutation at position y on the paired chromosome exhibits the recessive trait, even though the organism is not homozygous for either mutation. When instead the wild-type trait is expressed, the positions are said to belong to distinct cistrons / genes. Simply put, mutations in the same cistron do not complement, whereas mutations in different cistrons may complement (see Benzer's T4 bacteriophage experiments on the T4 rII system).
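The decision logic of the trans half of the test can be sketched as a toy predicate (the cistron labels are hypothetical, purely for illustration):

```python
def trans_phenotype(cistron_of_x: str, cistron_of_y: str) -> str:
    """Phenotype of a trans heterozygote carrying recessive mutations x and y.

    Same cistron: neither chromosome carries a working copy of that gene,
    so the recessive (mutant) trait shows - no complementation.
    Different cistrons: each chromosome supplies the working copy the
    other lacks, restoring the wild type - complementation.
    """
    return "mutant" if cistron_of_x == cistron_of_y else "wild type"
```

For instance, trans_phenotype("rIIA", "rIIA") predicts the mutant phenotype, while trans_phenotype("rIIA", "rIIB") predicts the wild type.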
For example, an operon is a stretch of DNA that is transcribed to create a contiguous segment of RNA, but contains more than one cistron / gene. The operon is said to be polycistronic, whereas ordinary genes are said to be monocistronic.
^ Lewin B (2000). Genes VII. New York: Oxford University Press and Cell Press. p. 955. ISBN 0-19-879276-X.
^ Benzer S (1957). "The elementary units of heredity". In McElroy WD, Glass B (eds.). The Chemical Basis of Heredity. Baltimore, Maryland: Johns Hopkins Press. pp. 70–93. also reprinted in Benzer S (1965). "The elementary units of heredity". In Taylor JH (ed.). Selected papers on Molecular Genetics. New York: Academic Press. pp. 451–477.
expandoff - Maple Help
suppress expansion of function(s)
unsuppress expansion of function(s)
expandoff(f1, f2, ...)
expandon(f1, f2, ...)
A call to expandoff suppresses the expansion of the functions listed. If no arguments are passed, the expansion of all functions is suppressed.
Conversely, expandon unsuppresses the expansion of the functions listed. If no arguments are passed, expansion of all functions is reasserted.
Both functions return NULL as the result.
Note that expand uses option remember. See the examples below.
The expandoff function should be defined by the command expand(expandoff()) before it is used. The expandon function should be defined by the command expand(expandon()) before it is used.
> expand(expandoff());
                                 expandoff()
> expandoff(exp);
> expand(exp(a+b));
                                  exp(a + b)
> expand(expandon());
                                  expandon()
> expandon(exp);
> expand(exp(c+d));
                                exp(c) exp(d)
> expand(exp(a+b));
                                  exp(a + b)
The last example remains unexpanded because of option remember. |
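For readers without Maple, a rough analogue exists in SymPy (my comparison, not part of the Maple help page): there, expansion of exponentials is toggled per call with the power_exp hint rather than globally:

```python
from sympy import symbols, exp, expand

a, b = symbols('a b')

expanded = expand(exp(a + b))                     # exp(a)*exp(b)
suppressed = expand(exp(a + b), power_exp=False)  # stays as exp(a + b)
```

Unlike Maple's expandoff/expandon, the SymPy hint affects only the single call, so there is no remember-table interaction to worry about.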
In Part 1 of our Reinforcement Learning Tutorial series, we learned about the basics of reinforcement learning, and also about one of the most easy algorithms to implement it, called Q-Learning.
Q-Learning is a greedy algorithm in the sense that its main strategy is to perform the action expected to eventually yield the highest cumulative reward.
We also saw the Bellman equation, and how it forms the basis for Q-Learning.
If you want to refresh your concepts of Q-Learning, and get an intuitive introduction to how it all works, please refer to Part 1 of this series.
Now, let's get started with something at a little more advanced level - Deep Q-Learning!
In Q-Learning, we built a table with all possible states as columns and the actions that can be taken in each state as rows. But what happens in an environment where the number of states and possible actions becomes very large?
It turns out this is not rare. Even a simple game such as Tic-Tac-Toe has thousands of distinct states, which is then multiplied by 9, the number of possible actions.
So, how will vanilla Q-Learning help us solve complex problems like these?
The answer to this lies in Deep Q-Learning, an effort to combine Q-Learning and Deep Learning, the resultant being Deep Q Networks.
The idea is straightforward: where Q-Learning had a table of states and possible actions, Deep Q-Learning replaces it with a neural network that tries to approximate the Q-values.
It's usually referred to as the approximator or the approximating function, and denoted as Q\left(s, a; \theta \right), where \theta represents the trainable weights of the network.
It makes sense to use the Bellman Equation here too, but what exactly are we minimizing? Let's take another peek at it:

Q\left(s, a\right) = r + \gamma \max_{a'} Q\left(s', a'\right)

The "=" sign here stands for assignment, but is there any condition under which it also holds as a true equality? Yes: when the Q-value has reached its converged, final value. And that is exactly our goal, so we can minimize the difference between the LHS and the RHS to get our cost function.
This might look familiar, because it's the Mean Squared Error function, where the current Q-value is the prediction \left(y\right) and the immediate plus discounted future rewards are the target \left(y'\right). This is also why Q\left(s', a; \theta \right) is usually referred to as the Q-target.
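To make the loss concrete, here is a toy computation with made-up numbers (my illustration, not from the article):

```python
import numpy as np

gamma = 0.99
q_current = 2.0                      # prediction: Q(s, a; theta)
reward = 1.0                         # immediate reward r
q_next = np.array([0.5, 1.5, 0.2])   # network outputs Q(s', a'; theta), one per action

target = reward + gamma * q_next.max()   # r + gamma * max_a' Q(s', a')
loss = (q_current - target) ** 2         # squared TD error for one sample
```

Averaging this squared error over a minibatch gives exactly the MSE cost described above.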
Let's move on to Training, now. In Reinforcement Learning, the training set is created on the fly - we ask the Agent to try and select the best action using the present network, and we record the state, action, reward and the next state it ended up at.
We settle on a batch size \left(b\right), and each time \left(b\right) new records have been recorded, we select \left(b\right) records at random from the memory to train the network. These memory buffers are known as Experience Replay.
Many types of such memories exist - one of the most common is a Cyclic Memory Buffer, which makes sure the Agent keeps training over its new behavior rather than training on things that might no longer be relevant.
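A minimal sketch of such a cyclic buffer, using Python's collections.deque (the capacity of 3 is arbitrary):

```python
from collections import deque
import random

buf = deque(maxlen=3)        # cyclic: appending past capacity evicts the oldest
for step in range(5):
    buf.append(step)         # pretend each integer is one recorded experience

batch = random.sample(list(buf), 2)   # uniform random minibatch from the buffer
```

After the loop only the three most recent experiences remain, which is exactly the "keep training on new behavior" property described above.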
Talking about architecture: if we mirror the table, the network must receive the state and action as input, and should produce a Q-value as output:
Despite being correct, this architecture is inefficient from a technical point of view.
Note that the cost function requires the maximal future Q-Value, so we'll need several network predictions for a single cost calculation.
Thus, we could use the following architecture instead:
Here, we provide the network with only the state \left(s\right) as input, and receive Q-values for all possible actions at once.
This architecture is more efficient and much better than the previous one.
Congratulations, you just learned how Deep Q-Networks function!
Implementing a Deep Q-Network
To ensure smooth continuity of your understanding of Q Networks and Deep Q Networks, I'll demonstrate Deep Q Networks, starting from basic Q-Learning.
Here I'll share with you the full code and outputs, from scratch.
Importing the libraries and setting the seed
print('Seed: {}'.format(seed))
The game the Q-agents will need to learn is played on a board with 4 cells. The agent receives a reward of +1 every time it fills a vacant cell, and receives a penalty of -1 when it tries to fill an already occupied cell. The game ends when the board is full.
class Game:
    def __init__(self, board_size = 4):
        self.board_size = board_size
        self.reset()

    def reset(self):
        ## Restored helper: clear the board between games
        self.board = np.zeros(self.board_size)

    def play(self, cell):
        ## Returns a tuple: (reward, game_over?)
        if self.board[cell] == 0:
            self.board[cell] = 1
            game_over = len(np.where(self.board == 0)[0]) == 0
            return (1, game_over)
        else:
            return (-1, False)

def state_to_str(state):
    return str(list(map(int, state.tolist())))
all_states = list()
for i in range(2):
    for j in range(2):
        for k in range(2):
            for l in range(2):
                s = np.array([i, j, k, l])
                all_states.append(state_to_str(s))
print('All possible states:')
for s in all_states:
    print(s)
Initialize the game:
Starting with a table-based Q-learning algorithm
num_of_games = 2000
Initializing the Q-table:
q_table = pd.DataFrame(0,
index = np.arange(4),
columns = all_states)
Letting the agent play and learn:
gamma = 1      ## Discount factor; the network version below uses 0.99 instead
epsilon = 0.1  ## Exploration rate (epsilon-greedy)
r_list = []    ## Store the total reward of each game so we could plot it later
for g in range(num_of_games):
    game.reset()
    total_reward = 0
    game_over = False
    while not game_over:
        state = np.copy(game.board)
        if random.random() < epsilon:
            action = random.randrange(game.board_size)  ## Explore
        else:
            action = q_table[state_to_str(state)].idxmax()  ## Exploit
        reward, game_over = game.play(action)
        total_reward += reward
        if np.sum(game.board) == 4:  ## Terminal state
            next_state_max_q_value = 0
        else:
            next_state = np.copy(game.board)
            next_state_max_q_value = q_table[state_to_str(next_state)].max()
        q_table.loc[action, state_to_str(state)] = reward + gamma * next_state_max_q_value
    r_list.append(total_reward)
q_table
Let's verify that the agent indeed learned a correct strategy by seeing what action it will choose in each one of the possible states:
for i in range(2):
    for j in range(2):
        for k in range(2):
            for l in range(2):
                b = np.array([i, j, k, l])
                if len(np.where(b == 0)[0]) != 0:  ## Skip the full board
                    action = q_table[state_to_str(b)].idxmax()
                    pred = q_table[state_to_str(b)].tolist()
                    print('board: {b}\tpredicted Q values: {p}\tbest action: {a}\tcorrect action? {s}'
                          .format(b = b, p = pred, a = action, s = b[action] == 0))
We can see that the agent indeed picked up the right way to play the game. Still, when looking at the predicted Q values, we see that there are some states where it didn't pick up the correct Q values.
Q-Learning is a greedy algorithm: it prefers choosing the best-known action in each state rather than exploring. We can address this by increasing \epsilon (epsilon), which controls the amount of exploration and was set to 0.1 here, or by letting the agent play more games.
Let's plot the total reward the agent received per game:
plt.plot(range(len(r_list)), r_list)
Let's move on to neural network-based modeling. We'll design the Q network first.
Remember that the output of the network, self.output, is an array of predicted Q-values for each action taken from the input state, self.states. Compared with the Q-table algorithm, the output is an entire column of the table.
class QNetwork:
    def __init__(self, hidden_layers_size, gamma, learning_rate,
                 input_size = 4, output_size = 4):
        self.q_target = tf.placeholder(shape = (None, output_size), dtype = tf.float32)
        self.r = tf.placeholder(shape = None, dtype = tf.float32)
        self.states = tf.placeholder(shape = (None, input_size), dtype = tf.float32)
        self.enum_actions = tf.placeholder(shape = (None, 2), dtype = tf.int32)
        layer = self.states
        for l in hidden_layers_size:
            layer = tf.layers.dense(inputs = layer, units = l,  ## width of each hidden layer
                                    activation = tf.nn.relu,
                                    kernel_initializer = tf.contrib.layers.xavier_initializer(seed = seed))
        self.output = tf.layers.dense(inputs = layer, units = output_size,
                                      kernel_initializer = tf.contrib.layers.xavier_initializer(seed = seed))
        self.predictions = tf.gather_nd(self.output, indices = self.enum_actions)
        self.labels = self.r + gamma * tf.reduce_max(self.q_target, axis = 1)
        self.cost = tf.reduce_mean(tf.losses.mean_squared_error(labels = self.labels,
                                                                predictions = self.predictions))
        self.optimizer = tf.train.GradientDescentOptimizer(learning_rate = learning_rate).minimize(self.cost)
Designing the Experience Replay memory, implemented as a cyclic memory buffer:
class ReplayMemory:
    def __init__(self, size):
        self.memory = deque(maxlen = size)
    def append(self, element):
        self.memory.append(element)
    def sample(self, n):
        return random.sample(self.memory, n)
Setting up the parameters. Note that here we use gamma = 0.99 and not 1 like in the Q-table algorithm, as the literature recommends working with a discount factor of 0.9 \le \gamma \le 0.99. It probably won't matter much in this specific case, but it's good to get used to this.
Initializing the Q-network:
qnn = QNetwork(hidden_layers_size = [20, 20],
               gamma = 0.99,
               learning_rate = 0.001)
memory = ReplayMemory(memory_size)
Training the network. Compare this code to the above Q-table training.
r_list = []
c_list = []  ## Same as r_list, but for storing the cost
counter = 0  ## Will be used to trigger network training
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for g in range(num_of_games):
        game.reset()
        total_reward = 0
        game_over = False
        while not game_over:
            counter += 1
            state = np.copy(game.board)
            pred = np.squeeze(sess.run(qnn.output,
                                       feed_dict = {qnn.states: np.expand_dims(game.board, axis = 0)}))
            action = np.argmax(pred)
            reward, game_over = game.play(action)
            total_reward += reward
            next_state = np.copy(game.board)
            memory.append({'state': state, 'action': action,
                           'reward': reward, 'next_state': next_state,
                           'game_over': game_over})
            if counter % batch_size == 0:
                ## Network training
                batch = memory.sample(batch_size)
                q_target = sess.run(qnn.output, feed_dict = {qnn.states: np.array(list(map(lambda x: x['next_state'], batch)))})
                terminals = np.array(list(map(lambda x: x['game_over'], batch)))
                for i in range(terminals.size):
                    if terminals[i]:
                        ## Remember we use the network's own predictions for the next state while calculating loss.
                        ## Terminal states have no Q-value, so we manually set them to 0, as the network's predictions
                        ## for these states are meaningless
                        q_target[i] = np.zeros(game.board_size)
                _, cost = sess.run([qnn.optimizer, qnn.cost],
                                   feed_dict = {qnn.states: np.array(list(map(lambda x: x['state'], batch))),
                                                qnn.r: np.array(list(map(lambda x: x['reward'], batch))),
                                                qnn.enum_actions: np.array(list(enumerate(map(lambda x: x['action'], batch)))),
                                                qnn.q_target: q_target})
                c_list.append(cost)
        r_list.append(total_reward)
print('Final cost: {}'.format(c_list[-1]))
Again, let's verify that the agent indeed learned a correct strategy:
## Inside the same nested loops over boards as in the Q-table verification:
pred = np.squeeze(sess.run(qnn.output,
                           feed_dict = {qnn.states: np.expand_dims(b, axis = 0)}))
pred = list(map(lambda x: round(x, 3), pred))
action = np.argmax(pred)
print('board: {b} \tpredicted Q values: {p} \tbest action: {a} \tcorrect action? {s}'
      .format(b = b, p = pred, a = action, s = b[action] == 0))
Here too, we see that the agent learned a correct strategy, and again, Q values are not what we would've expected.
Let's plot the rewards the agent received:
Zooming in, so we can compare with the Q-table algorithm:
plt.xlabel('Game played')
plt.ylim(-2, 4.5)
Let's plot the cost too. Remember that here the X-axis reflects the number of training steps, not the number of games. The number of training steps depends on the number of actions taken, and the agent can take any number of actions during each game.
plt.plot(range(len(c_list)), c_list)
plt.xlabel('Trainings')
Bonus: You can play around with the code at this link.
That wraps up the demonstration of Q-Learning and Deep Q-Networks.
Stay tuned for upcoming articles of the Reinforcement Learning Tutorial series!
Follow me on LinkedIn to get updates on more such intuitive articles.
Love Data Science and am crazy about Deep Learning. Write poems and stories in my free time.
#ifdef can be read as "if the macro is defined, process the code that follows until #endif"; together with #ifndef it is commonly used for include guards. It is the opposite of the #ifndef directive.
#ifndef is a preprocessor directive in C that checks whether a macro has been defined previously. If it has not been defined, the block of code that follows is processed. The block is terminated by #endif.
Gaia_Sausage Knowpia
The Gaia Sausage or Gaia Enceladus is the remains of a dwarf galaxy (the Sausage Galaxy, or Gaia-Enceladus-Sausage, or Gaia-Sausage-Enceladus) that merged with the Milky Way about 8–11 billion years ago. At least eight globular clusters were added to the Milky Way along with 50 billion solar masses of stars, gas and dark matter.[1]
Gaia Sausage or Gaia Enceladus
Artist’s impression of debris from the Gaia-Sausage-Enceladus galaxy. Yellow arrows represent the positions and velocities of stars originating from the dwarf galaxy, the data taken from a simulated merger with the Milky Way with similar properties to the one believed to have occurred.
The "Gaia Sausage" is so-called because of the characteristic sausage shape of the population in a chart of velocity space, in particular a plot of radial (
{\displaystyle {\boldsymbol {v}}_{r}}
) versus azimuthal velocity (
{\displaystyle {\boldsymbol {v}}_{\theta }}
) of stars (See spherical coordinate system), using data from the Gaia Mission.[1] The stars that have merged with the Milky Way have orbits that are highly elongated. The outermost points of their orbits are around 20 kiloparsecs from the galactic centre at what is called the "halo break".[2] These stars had previously been seen in Hipparcos data [3] and identified as originаting from an accreted galaxy.[4]
The globular clusters firmly identified as former Sausage members are Messier 2, Messier 56, Messier 75, Messier 79, NGC 1851, NGC 2298, and NGC 5286.[1]
NGC 2808: Globular cluster or old core?
NGC 2808, possible old core of Gaia Sausage
NGC 2808 is another globular cluster associated with the Sausage. This cluster is composed of three generations of stars, all born within 200 million years of the formation of the cluster.[5]
One theory to account for three generations of stars is that NGC 2808 is the former core of Sausage.[1] This is also an explanation for the number of stars, more than a million, which is unusually large for a globular cluster.
The stars from this dwarf orbit the Milky Way core with extreme eccentricities of about 0.9. Their metallicity is also typically higher than that of other halo stars, with most having [Fe/H] > −1.7 dex, i.e., at least 2% of the solar value.[2][6]
The "Gaia Sausage" reconstructed the Milky Way by puffing up the thin disk to make it a thick disk, whilst the gas it brought into the Milky Way triggered a fresh round of star formation and replenished the thin disk. The debris from the dwarf galaxy provides most of the metal-rich part of the galactic halo.[1]
M32p, large galaxy merged into Andromeda Galaxy, responsible for its thick disc and most halo stars
^ a b c d e Myeong, G.C.; Evans, N.W.; Belokurov, V.; Sanders, J.L.; Koposov, S. (2018). "The Sausage globular clusters". The Astrophysical Journal. 863 (2): L28. arXiv:1805.00453. Bibcode:2018ApJ...863L..28M. doi:10.3847/2041-8213/aad7f7. S2CID 67791285.
^ a b Deason, Alis; Belokurov, Vasily; Koposov, Sergey; Lancaster, Lachlan (2018). "Apocenter Pile-Up: Origin of the stellar halo density break". The Astrophysical Journal. 862 (1): L1. arXiv:1805.10288. Bibcode:2018ApJ...862L...1D. doi:10.3847/2041-8213/aad0ee. S2CID 118936735.
^ Chiba, Masashi; Beers, Timothy C. (June 2000). "Kinematics of Metal-poor Stars in the Galaxy. III. Formation of the Stellar Halo and Thick Disk as Revealed from a Large Sample of Nonkinematically Selected Stars". The Astronomical Journal. 119 (6): 2843–2865. arXiv:astro-ph/0003087. Bibcode:2000AJ....119.2843C. doi:10.1086/301409. S2CID 16620828.
^ Brook, Chris B.; Kawata, Daisuke; Gibson, Brad K.; Flynn, Chris (10 March 2003). "Galactic Halo Stars in Phase Space: A Hint of Satellite Accretion?". The Astrophysical Journal. 585 (2): L125–L129. arXiv:astro-ph/0301596. Bibcode:2003ApJ...585L.125B. doi:10.1086/374306. S2CID 16936195.
^ Piotto, G.; et al. (May 2007). "A Triple Main Sequence in the Globular Cluster NGC 2808". The Astrophysical Journal. 661 (1): L53–L56. arXiv:astro-ph/0703767. Bibcode:2007ApJ...661L..53P. doi:10.1086/518503. S2CID 119376556.
^ Iorio, Giuliano; Belokurov, Vasily (2021). "Chemo-kinematcs of the Gaia RR Lyrae: the halo and the disc". Monthly Notices of the Royal Astronomical Society. 502 (4): 5686–5710. Bibcode:2021MNRAS.502.5686I. doi:10.1093/mnras/stab005.
Belokurov, V.; Erkal, D.; Evans, N.W.; Koposov, S.E.; Deason, A.J. (July 2018). "Co-formation of the disc and the stellar halo". Monthly Notices of the Royal Astronomical Society. 478 (1): 611–619. arXiv:1802.03414. Bibcode:2018MNRAS.478..611B. doi:10.1093/mnras/sty982.
Myeong, G.C.; Evans, N.W.; Belokurov, V.; Sanders, J.L.; Koposov, S.E. (April 2018). "The Milky Way halo in action space". The Astrophysical Journal Letters. 856 (2): L26. arXiv:1802.03351. Bibcode:2018ApJ...856L..26M. doi:10.3847/2041-8213/aab613. S2CID 73518200.
Myeong, G.C.; Evans, N.W.; Belokurov, V.; Sanders, J.L.; Koposov, S.E. (April 2018). "The Shards of ω Centauri". arXiv:1804.07050 [astro-ph.GA].
Chaplin, William J.; Serenelli, Aldo M.; Miglio, Andrea; Morel, Thierry; Mackereth, J. Ted; Vincenzo, Fiorenzo; Kjeldsen, Hans; Basu, Sarbani; Ball, Warrick H.; Stokholm, Amalie; Verma, Kuldeep (Jan 13, 2020). "Age dating of an early Milky Way merger via asteroseismology of the naked-eye star ν Indi". Nature Astronomy. 4 (4): 382–389. arXiv:2001.04653. Bibcode:2020NatAs...4..382C. doi:10.1038/s41550-019-0975-9. hdl:1721.1/128912. ISSN 2397-3366. S2CID 210166431.
Gaia Sausage Simulation on YouTube
Jocelyn Duffy (4 July 2018). "The Gaia Sausage: The major collision that changed the Milky Way galaxy". Carnegie Mellon University.
Sarah Collins (4 July 2018). "The Gaia Sausage: The major collision that changed the Milky Way". University of Cambridge. |
torch.symeig — PyTorch 1.11.0 documentation
torch.symeig(input, eigenvectors=False, upper=True, *, out=None)
This function returns eigenvalues and eigenvectors of a real symmetric or complex Hermitian matrix input or a batch thereof, represented by a namedtuple (eigenvalues, eigenvectors).
This function calculates all eigenvalues (and vectors) of input such that
\text{input} = V \text{diag}(e) V^T
The boolean argument eigenvectors defines computation of both eigenvectors and eigenvalues or eigenvalues only.
If it is False, only eigenvalues are computed. If it is True, both eigenvalues and eigenvectors are computed.
Since the input matrix input is supposed to be symmetric or Hermitian, only the upper triangular portion is used by default.
If upper is False, then the lower triangular portion is used.
torch.symeig() is deprecated in favor of torch.linalg.eigh() and will be removed in a future PyTorch release. The default behavior has changed from using the upper triangular portion of the matrix by default to using the lower triangular portion.
L, _ = torch.symeig(A, upper=upper) should be replaced with
UPLO = "U" if upper else "L"
L = torch.linalg.eigvalsh(A, UPLO=UPLO)
L, V = torch.symeig(A, eigenvectors=True, upper=upper) should be replaced with
L, V = torch.linalg.eigh(A, UPLO=UPLO)
The eigenvalues are returned in ascending order. If input is a batch of matrices, then the eigenvalues of each matrix in the batch are returned in ascending order.
Irrespective of the original strides, the returned matrix V will be transposed, i.e. with strides V.contiguous().mT.stride() .
Extra care needs to be taken when backpropagating through outputs. Such an operation is only stable when all eigenvalues are distinct, and it becomes less stable the smaller
\min_{i \neq j} |\lambda_i - \lambda_j|
is.
input (Tensor) – the input tensor of size
(*, n, n)
where * is zero or more batch dimensions consisting of symmetric or Hermitian matrices.
eigenvectors (bool, optional) – controls whether eigenvectors have to be computed
upper (boolean, optional) – controls whether to consider upper-triangular or lower-triangular region
out (tuple, optional) – the output tuple of (Tensor, Tensor)
eigenvalues (Tensor): Shape (*, m). The eigenvalues in ascending order.
eigenvectors (Tensor): Shape (*, m, m). If eigenvectors=False, it’s an empty tensor. Otherwise, this tensor contains the orthonormal eigenvectors of the input.
>>> a = torch.randn(5, 5)
>>> a = a + a.t()  # To make a symmetric
tensor([[-5.7827, 4.4559, -0.2344, -1.7123, -1.8330],
[-1.8330, -0.1798, 7.1988, 3.1036, -5.1453]])
>>> e, v = torch.symeig(a, eigenvectors=True)
tensor([-13.7012, -7.7497, -2.3163, 5.2477, 8.1050])
tensor([[ 0.1643, 0.9034, -0.0291, 0.3508, 0.1817],
[ 0.6415, -0.0447, -0.6381, -0.0193, -0.4230]])
>>> a_big = torch.randn(5, 2, 2)
>>> a_big = a_big + a_big.mT # To make a_big symmetric
>>> e, v = a_big.symeig(eigenvectors=True)
>>> torch.allclose(torch.matmul(v, torch.matmul(e.diag_embed(), v.mT)), a_big) |
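The contract described above — only one triangle of the matrix is read, eigenvalues come back in ascending order, and the input is reconstructed as V diag(e) Vᵀ — can also be illustrated without torch, since NumPy's `np.linalg.eigh` follows the same LAPACK convention (a sketch; the random 5×5 matrix is an arbitrary example):

```python
import numpy as np

# Build a random symmetric matrix (illustrative; any symmetric matrix works).
rng = np.random.default_rng(0)
a = rng.standard_normal((5, 5))
a = a + a.T  # symmetrize, as in the torch example above

# UPLO selects which triangle is read; eigenvalues come back ascending.
w_l, v_l = np.linalg.eigh(a, UPLO="L")
w_u, v_u = np.linalg.eigh(a, UPLO="U")

# For a genuinely symmetric matrix, both triangles give the same answer.
assert np.allclose(w_l, w_u)
assert np.all(np.diff(w_l) >= 0)                   # ascending order
assert np.allclose(v_l @ np.diag(w_l) @ v_l.T, a)  # a = V diag(e) V^T
```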
Japorized | Axiom of Choice
\forall \{A_i\}_{i \in I} \; \Big( \forall i \in I \; A_i \neq \emptyset \Big) \implies \prod_{i \in I} A_i \neq \emptyset
which says that for a family of non-empty sets, the Cartesian product of all such sets is non-empty.
An alternative definition was given with the use of the choice function:
X \neq \emptyset \implies \exists f:\mathcal{P}(X) \setminus \{\emptyset\} \to X \quad \forall A \in \mathcal{P}(X) \setminus \{\emptyset\} \quad f(A) \in A
which says that given a non-empty set X, there exists a choice function f mapping the power set of X, less the empty set, into X itself, such that for every set A in the domain, f(A) is an element of A.
What may be interesting in the alternative definition (hereafter referred to as AC’) is that the image of each set A is an element of A, rather than a subset of A or the set A itself.
Without further probing on such a weird question, notice that this definition makes sense if we believe that the axiom of choice states, in layman terms, that for any number of non-empty sets, we may pick an element from each set. In this case, the choice function acts as our “way” of picking the element.
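The axiom only has real force for infinite families; for a finite set X a choice function can simply be written down. A small Python illustration (the family and the rule f = min are arbitrary choices made for the example):

```python
from itertools import chain, combinations, product

# A small, hypothetical family of non-empty sets.
family = {"A": {1, 2}, "B": {3}, "C": {4, 5}}

# For *finite* families the axiom is a theorem: the Cartesian product
# of finitely many non-empty sets is non-empty.
assert list(product(*family.values()))

# A concrete choice function in the spirit of AC' on X = {1,...,5}:
# f(A) = min(A) picks an element of every non-empty subset A.
X = {1, 2, 3, 4, 5}

def f(A):
    return min(A)

nonempty_subsets = chain.from_iterable(
    combinations(sorted(X), r) for r in range(1, len(X) + 1)
)
assert all(f(set(A)) in set(A) for A in nonempty_subsets)
```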
The following is a proof that the two statements are equivalent.
(AC) \implies (AC')
Applying (AC) to the family of all non-empty subsets of X, the product is non-empty, so
\exists (x_A)_{A \in \mathcal{P}(X)\setminus\{\emptyset\}} \in \prod_{A \in \mathcal{P}(X)\setminus\{\emptyset\}} A
Thus we can just choose f(A) = x_A.
(AC') \implies (AC)
X = \bigsqcup_{i \in I} A_i \quad f: \mathcal{P}(X) \setminus \{\emptyset\} \to X \implies \Big( f(A_i) \Big)_{i \in I} \in \prod_{i \in I} A_i
Before going further to the other equivalent statements, we name the following definition:
Chain: Let (X, \leq) be a partially ordered set. A chain is a set S \subset X on which \leq is a total order, i.e. \forall x, y \in S, either x \leq y or y \leq x.
The Axiom of Choice is also equivalent to 3 other statements, namely:
Hausdorff’s Maximality Principle (HMP)
In any partially ordered set (S, \leq), there is a maximal chain, i.e. a chain M such that M \cup \{s\} is not a chain for any s \in S \setminus M.
Zorn’s Lemma (ZL)
Let (X, \leq) be a partially ordered set. If every chain in (X, \leq) has an upper bound, then there is a maximal element m \in X, i.e.
\forall s \in X \; m \leq s \implies m = s
Well-Ordering Principle (WOP)
Any non-empty set
X
has a well order, i.e.
\forall \emptyset \neq S \subseteq X \; \exists s \in S \; \forall t \in S \; s \leq t
, or in words, any non-empty subset of X has a least element.
This page is still incomplete
Change of Basis - Haifeng's Notes
A matrix can be seen as a linear transformation of a space. \mathbf{Ax} means transforming the vector \mathbf{x}, which lives in the original space, into the new space, while still using the original basis for coordinates. The original basis means the perpendicular unit vectors (the standard basis). In this case, \mathbf{x} should be a vector represented in the original coordinate system.
Let's consider another problem: \mathbf{x} is (3, 2), but it is not in the original coordinate system. It is in a new coordinate system, which uses \mathbf{A}'s column vectors as its basis; 3 and 2 are the scalars for the column vectors of \mathbf{A}. How can we translate the vector \mathbf{x} back to the original coordinate system? That is, what are the coordinates of \mathbf{x} in the original coordinate system? It is the same as the example above: the answer is \mathbf{Ax}.
So a matrix can have different meanings in different situations. It can either mean a transformation of the space, or a translation from one coordinate system to another. Which one depends on the meaning of the vector multiplied on the right: if \mathbf{x} is expressed in the new basis, then \mathbf{Ax} means translation; if in the old one, it means transformation.
Another problem is how to translate a vector \mathbf{x} represented in the original system into the new coordinate system. Just use \mathbf{A}^{-1}\mathbf{x}, where \mathbf{A} has the basis vectors of the new coordinate system as its columns. The reason is as follows. Suppose \mathbf{x} is expressed in the new coordinate system specified by \mathbf{A}. Then \mathbf{A}^{-1}\mathbf{Ax} is still \mathbf{x}, and \mathbf{v} = \mathbf{Ax} is the translation of \mathbf{x} into the original coordinate system. So \mathbf{v} is \mathbf{x} described in the original coordinate system. From \mathbf{A}^{-1}\mathbf{Ax}=\mathbf{x} we get \mathbf{A}^{-1}\mathbf{v}=\mathbf{x}, so \mathbf{A}^{-1} can translate \mathbf{v} to \mathbf{x}; that is, \mathbf{A}^{-1} is the opposite translation of \mathbf{A}.
If we want to apply an operation (say, rotating a vector by 90 degrees) to a vector in the new coordinate system, what we do is translate it to the old system, apply the operation, and translate it back to the new system: \mathbf{A}^{-1}\mathbf{BAx}, where \mathbf{B} is the operation matrix and \mathbf{A} is the translation matrix.
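The whole note can be checked numerically. A minimal NumPy sketch (the basis A, the vector (3, 2), and the 90-degree rotation B are the examples from the text; the particular matrix entries are arbitrary choices):

```python
import numpy as np

# New basis: columns of A (an assumed, invertible example basis).
A = np.array([[1.0, 1.0],
              [0.0, 1.0]])

# x = (3, 2) expressed in the new basis -> original coordinates are Ax.
x_new = np.array([3.0, 2.0])
x_orig = A @ x_new

# Going the other way uses A^{-1}.
assert np.allclose(np.linalg.inv(A) @ x_orig, x_new)

# Rotate by 90 degrees *in the new coordinate system*: translate to the
# original system, rotate, translate back -> A^{-1} B A.
B = np.array([[0.0, -1.0],
              [1.0,  0.0]])   # 90-degree rotation in the original system
rotate_in_new = np.linalg.inv(A) @ B @ A
y_new = rotate_in_new @ x_new

# Sanity check: re-expressing the result in the original system agrees
# with rotating the original-system vector directly.
assert np.allclose(A @ y_new, B @ (A @ x_new))
```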
porosity - zxc.wiki
The porosity is a dimensionless measurement and represents the ratio of the volume of the void to the total volume of a substance or mixture of substances. It serves as a classifying measure for the voids actually present. The size is used in the field of materials and construction technology as well as in the geosciences . The porosity has a great influence on the density of a material as well as on the resistance when flowing through a bed ( Darcy's law ).
Originally due to natural conditions and usually undesirable especially in the production of sophisticated cast products, there is now an artificially created, insofar desired porosity, primarily in the service of the production of lightweight building materials. Metal foam and lightweight concrete are examples of porosity, which as such is not the subject of this article.
The porosity \Phi is defined as 1 minus the quotient of the bulk density \rho (of a solid or of a pile) and the true density \rho_0:
\Phi = 1 - \frac{\rho}{\rho_0}
As a percentage it is calculated as follows:
\Phi = \left(1 - \frac{\rho}{\rho_0}\right) \times 100\,\%
Alternatively, the porosity can be defined as the ratio of the void volume V_H to the total volume V = V_H + V_F, where V_F is the net volume of the solid:
\Phi = \frac{V_H}{V} = \frac{V_H}{V_H + V_F}
In soil mechanics, the number of pores (void ratio) e = V_H / V_F, the ratio of void volume V_H to solid volume V_F, is also used as a key figure.
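The two definitions above, and their relation to the soil-mechanics void ratio, can be transcribed directly (a sketch; the sample densities and volumes are made-up example values):

```python
# Porosity from bulk and true density: Phi = 1 - rho/rho0.
def porosity_from_density(rho_bulk, rho_true):
    return 1.0 - rho_bulk / rho_true

# Porosity from volumes: Phi = V_H / (V_H + V_F).
def porosity_from_volumes(v_void, v_solid):
    return v_void / (v_void + v_solid)

# Soil-mechanics void ratio ("number of pores"): e = V_H / V_F.
def void_ratio(v_void, v_solid):
    return v_void / v_solid

# Example: a rock with bulk density 1.86 g/cm3 and true density 2.65 g/cm3.
phi = porosity_from_density(1.86, 2.65)
print(f"porosity = {phi:.3f}")   # about 0.298, i.e. ~30%

# The two volume-based figures are related by Phi = e / (1 + e).
e = void_ratio(30.0, 70.0)
assert abs(porosity_from_volumes(30.0, 70.0) - e / (1 + e)) < 1e-12
```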
Open and closed porosity
The total porosity of a substance is made up of the sum of the cavities that are connected to one another and to the environment ( open porosity , useful porosity) and the cavities that are not connected to one another ( cemented , closed or dead-end porosity ).
Material with high open porosity is called open-pored or, in the ideal case, a honeycomb structure; a purely closed-pore material is a foam.
The following geometrically determinable total porosities of an arrangement of massive spheres of equal size can be regarded as typical:
For an ordered, cubic closest (face-centered) packing of spheres as well as for an ordered, hexagonal closest packing of spheres , it is Φ = 0.26
for the body-centered cubic packing of the spheres it is Φ = 0.32.
These values follow directly from the packing density: the cubic and hexagonal closest packings of spheres both fill 74% of space. Kepler postulated that this is the greatest value any sphere packing can attain. This so-called Kepler conjecture was only confirmed by a computer-assisted proof; David Hilbert had included it in 1900 as part of the 18th problem in his list of 23 mathematical problems.
With a body-centered cubic lattice (like tungsten – bcc) the degree of space filling is only 0.68, and with a primitive cubic lattice (like alpha-polonium – sc) only 0.52.
For any packing of spheres made of a material that is not internally porous (solid spheres), the following rough estimate applies:
\Phi \approx \frac{\pi}{\text{coordination number}} \cong 0.4 \ldots 0.45
Occurrence of porosity
Asphalt layer with a high pore volume
In civil engineering, the term porosity refers to the voids in a bed or pile. Porosity and bulk density are related. The porosity is defined as the ratio of the void volume V_H to the total volume of the pile V_ges. The commonly used symbols are ε or P_W, while the Φ introduced above is less common.
The following definition is common:
\varepsilon = \frac{V_H}{V_\mathrm{ges}} = \frac{V_H}{V_H + V_s}
The total volume V_ges is composed of the solids volume V_s (corresponding to the pure volume V_F) and the cavity volume V_H.
In materials engineering, porous materials are classified according to the size of the pores :
microporous: pores < 2 nm
mesoporous: pores between 2 and 50 nm
macroporous: pores > 50 nm
For gray cast iron parts, but also for parts cast from copper alloys in sand molds, there is among other things a very characteristic pore shape known as pin-holes ("pinhole porosity"). It can be visible on the surface or just below it. It is caused by reactions of the melt with the moisture of the molding material or of the cores used, but also with their binders. Hydrogen pin-holes and hydrogen-nitrogen pin-holes are possible. Another type of porosity is found in aluminum cast in sand and permanent molds. The solidification of the metal in the mold can lead to porosity as it cools, because the hydrogen solubility of aluminum and aluminum alloys decreases with temperature; the released hydrogen is prevented from escaping and thus causes undesirable porosity with a considerable influence on the strength properties. Degassing measures as part of a melt treatment can help. Die-cast aluminum is less prone to porosity because of its very rapid mold filling and solidification. Porosity caused by air trapped during the casting process is avoided by using a vacuum casting process (VACURAL).
Soil constituents Solid, water and air
In geology , hydrogeology and soil science , porosity describes the ratio of the volume of all cavities in a porous soil or rock to its external volume. It is therefore a measure of how much space the actual soil or rock fills within a certain volume due to its grain size or fissures or which cavities it leaves behind in it. The pores or capillaries are usually filled with air and / or water . The porosity is usually given in percent or as a fraction (fractions of 1 = 100%) and is designated with the formula letter Φ.
The porosity of rocks describes the volume of voids that can be taken up by mobile, migrating media such as water and gases. Occasionally, a synonymous term is used for the porosity of rocks. There are also the rock-mechanical values of the number of pores (symbol e) and the proportion of pores (symbol n).
When considering the weathering resistance of natural stone, one starts with the open porosity ( π wi ). It only describes those pore spaces in which liquids and gases are involved in exchange processes.
Sediments and sedimentary rocks have a porosity of around 10 to 40%, while metamorphic and igneous rocks have only around 1 to 2%. Typical, actually measured total porosities are:
Sandstone : 5 to 40%, typically 30% (depending on grain size distribution, type of binding agent and consolidation)
Limestone or dolomite : 5 to 25% (depending on dissolution processes through groundwater and weathering)
Mudstone : 20 to 45% (due to the small diameter of the pores, however, no storage rock)
Slate : less than 10%
Loose sand and gravel: up to over 40%
Classification of porosities in deposit assessment:
Negligible: Φ < 4%
Low: 4% < Φ < 10%
Good: 10% < Φ < 20%
Excellent: Φ > 20%
In the oil / natural gas industry , mining geology and geothermal energy , the effective porosity plays a major role, since fluids (water, oil or gas) can only flow through the interconnected pores . In connection with the storage properties of a rock, the term usable porosity is also used in hydrogeology .
This page is based on the copyrighted Wikipedia article "Porosität" (Authors); it is used under the Creative Commons Attribution-ShareAlike 3.0 Unported License. You may redistribute it, verbatim or modified, providing that you comply with the terms of the CC-BY-SA.
Linear stability of projected canonical curves with applications to the slope of fibred surfaces
January, 2008
Miguel Ángel BARJA, Lidia STOPPINO
Let f: S ⟶ B be a non-locally trivial, relatively minimal fibred surface. We prove a lower bound for the slope of f, increasing with the relative irregularity of f and the Clifford index of the general fibres.
Miguel Ángel BARJA. Lidia STOPPINO. "Linear stability of projected canonical curves with applications to the slope of fibred surfaces." J. Math. Soc. Japan 60 (1) 171 - 192, January, 2008. https://doi.org/10.2969/jmsj/06010171
Keywords: Clifford index , fibration , relative irregularity , slope
Recurrence versus transience for weight-dependent random connection models
2022
Peter Gracar,1 Markus Heydenreich,2 Christian Mönch,3 Peter Mörters1
2Ludwig-Maximilians-Universität München, Germany
3Johannes Gutenberg-Universität Mainz, Germany
We investigate random graphs on the points of a Poisson process in d-dimensional space, which combine scale-free degree distributions and long-range effects. Every Poisson point carries an independent random mark and given marks and positions of the points we form an edge between two points independently with a probability depending via a kernel on the two marks and the distance of the points. Different kernels allow the mark to play different roles, like weight, radius or birth time of a vertex. The kernels depend on a parameter γ, which determines the power-law exponent of the degree distributions. A further independent parameter δ characterises the decay of the connection probabilities of vertices as their distance increases. We prove transience of the infinite cluster in the entire supercritical phase in regimes given by the parameters γ and δ, and complement these results by recurrence results if
d=2
. Our results are particularly interesting for the soft Boolean graph model discussed in the preprint [arXiv:2108.11252] and the age-dependent random connection model recently introduced by Gracar et al. [Queueing Syst. 93.3-4 (2019)].
We acknowledge support from DFG through the scientific network Stochastic Processes on Evolving Networks. The research of CM is funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) – 443916008/SPP2265.
We thank Noam Berger for correspondence concerning the recurrence results.
Peter Gracar. Markus Heydenreich. Christian Mönch. Peter Mörters. "Recurrence versus transience for weight-dependent random connection models." Electron. J. Probab. 27 1 - 31, 2022. https://doi.org/10.1214/22-EJP748
Keywords: Boolean model , preferential attachment , random-connection model , recurrence , Scale-free percolation , transience
The Round Earth Practice Problems Online | Brilliant
As humanity has come of age as a civilization, we’ve had to come to terms with a disconnect between how our gut tells us the world should work, and the way the world actually works. Geocentrism is a common example of this: the core belief that humanity must be special, and by association the Earth must hold a special place in the cosmos. Instead, as we’ve learned more and more about the universe, we find ourselves in an unexceptional arm of a standard spiral galaxy, of which there are billions in the visible universe.
Some gut feelings have proven remarkably difficult to shake, and have come back into the minds of people thousands of years after they’ve been disproved. The most common example is the so-called “Flat Earth Theory”, which has experienced a resurgence in popularity with internet trolls in recent years. Whether this renaissance of a disproved theory is solely due to internet trolling is difficult to confirm. After all, no matter the scientific evidence and videos from space, if you put a level on the ground it appears flat, and if you look around your neighborhood, there is no curvature to be seen.
But even if you only believe what you can see, all it takes is a bit of geometry and basic physics to prove that the Earth is a sphere. Just remember: don’t feed the trolls.
The Flat Earth “theory” that has taken hold recently in some corners of the internet is actually not a theory at all, strictly speaking. Flat Earthers actually have a collection of models, many of which contradict each other. Each is crafted in an ad hoc fashion to explain why one or two particular scientific observations that seem to suggest a round Earth really "prove" a flat Earth.
Even though they’re not consistent, we can paint a general picture of the Flat Earth “theory”:
Across most Flat Earth theories, the Earth is a disk with the North Pole located at the center, and the South Pole is actually an impassable mountain range that runs the perimeter of the Earth’s disk. The Sun and Moon rotate at a fixed distance above the Earth, and shine like spotlights on the surface, creating the appearance of day and night.
To those of us who live in the real world and acknowledge that Earth is a sphere, the evidence is all around us. We’ve all observed eclipses, videos and photos from space, and perhaps even seen the horizon recede as we take an elevator up a tall building.
But a constant onslaught of trolling Flat Earthers forces us to think critically about the evidence for a spherical Earth. In some cases, a tweaked Flat Earth model can adequately explain an observation that seems to obviously support a spherical Earth, in others, Flat Earthers can only respond with ad hominem.
Consider lunar eclipses, which occur when the Earth passes between the Sun and Moon. The circular shape of the Earth’s shadow seems like irrefutable evidence for a spherical Earth, but is this observation robust to Flat Earther scrutiny?
Note: some Flat Earthers believe that Earth never passes between the Sun and Moon, but for the sake of this question, assume that it does.
Yes, a round shadow would only be observed with a spherical Earth
No, a round shadow could be observed with a flat Earth
The shape of Earth’s shadow during a lunar eclipse is a common, seemingly obvious observation in support of a spherical Earth. But the equally common refutation that a disk-shaped flat earth would also cast a circular shadow should teach us a lesson: just because an observation supports one theory, doesn’t mean it can’t also support another, incorrect theory.
Consider a triumph of classical physics: Newton and Einstein’s laws of gravity. These theories predict a constant gravitational acceleration on the Earth’s surface towards its core, due to the gravitational field of the Earth attracting any nearby objects.
In most Flat Earth theories, gravity is explained in a much more pedestrian and intuitive way. When do you feel accelerated in everyday life? In an elevator. When an elevator accelerates up with acceleration a, anyone inside feels a downward force \mathbf{F}_a. This familiar experience is used to explain the appearance of gravity on Earth: the flat Earth must be accelerating upwards at exactly \SI[per-mode=symbol]{9.8}{\meter\per\second\squared}.
Can we disprove this explanation?
Let’s say you wake up in a rocket with no windows, designed to accelerate at \SI[per-mode=symbol]{9.8}{\meter\per\second\squared}, unsure whether you’ve taken off from the Earth yet or not. Can you devise a practical experiment that could distinguish the force from acceleration \mathbf{F}_a from the force of gravity \mathbf{F}_g?
Yes, a practical experiment exists.
No such experiment exists.
Two thousand years before Newton or Einstein, the Greeks were busy using the newly understood mathematics of angles and geometry (or "earth-measure" in ancient Greek) to nail down the shape and size of the Earth.
Eratosthenes is credited with inventing the discipline of geography. In the old world, the cities of Alexandria and Syene in Egypt each had a famed deep well to provide the city’s water. Most of the time, the bottoms of the wells are dark, since the light from the Sun enters at an angle.
On just one day a year, at the moment of the summer solstice, Eratosthenes noticed that at noon the Sun appears directly overhead, lighting up the bottom of the well in Syene.
Would you expect the bottom of the well in Alexandria, 800 km north of Syene, to be lit up at the same time?
Yes, the bottoms of both wells would be illuminated at the same time.
No, the bottoms of both wells would never be illuminated at the same time.
At the same time that the sunlight appeared to be coming from directly overhead in Syene, it had a more shallow angle of incidence in Alexandria, so the bottom of the well didn’t light up.
Eratosthenes noticed that the light coming into the well in Alexandria had an angle of incidence of 7.2^{\circ}. He concluded that, if the Earth were round, about 7.2^{\circ} of latitude separate Syene and Alexandria. This angle is about 1/50\,\textrm{th} the arc of an entire circle, so he concluded that \SI{800}{\kilo\meter} is 1/50\,\textrm{th} of the way around the Earth.
On first glance, this doesn’t seem to support any flat Earth theory: if the Earth were flat, with the Sun illuminating from a hundred million miles away, you would be able to see the bottom of both the well in Alexandria and Syene at the same time because the light would come straight into both wells.
What modification to the Flat Earth theory might explain this observation?
If the Earth faces the Sun at an oblique angle.
If the Earth were positioned closer and larger relative to the Sun.
If the Sun were much larger and further away.
If you imagine the Sun as a much smaller source of light floating only a few thousand kilometers above a flat Earth’s surface and moving throughout the sky, its illumination would appear to come in at an angle just like Eratosthenes observed in Alexandria.
Without today's accepted view of the Earth's position in the solar system with seven other planets, it might seem obvious that the Sun is situated above the Earth and floats around the sky. This flat Earth model requires the Sun to float just \SI{6322}{\kilo\meter} above the Earth, closer than most telecommunication satellites.
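The quoted altitude is one line of trigonometry: on a flat Earth, a 7.2° shadow angle at a point 800 km from the sub-solar point (Syene) fixes the Sun's height at h = 800 km / tan 7.2°. A quick check (the small difference from the 6322 km figure above is rounding in the inputs):

```python
import math

# Height of a "flat-Earth Sun" implied by Eratosthenes' data:
# a 7.2-degree shadow angle, 800 km from the sub-solar point.
h_sun = 800.0 / math.tan(math.radians(7.2))
print(f"h = {h_sun:.0f} km")  # roughly 6.3 thousand km
```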
The ancients thought that the Sun was a fiery chariot ridden around the sky by a god. Today we know that the thermonuclear fusion going on in the Sun is not something we'd want in low orbit of the Earth.
With just two cities and two wells, our observations allow enough wiggle room for two very different explanations to fit our observations.
It is only by adding a third city with a well that Eratosthenes would be able to completely rule out the flat Earth. If we observed a well in Athens, which lies another 800 km north of Alexandria, we would expect the angle of incidence on a spherical Earth to double to 14.4^{\circ}.
Less than.
Greater than.
The same.
One of the simplest observations of a spherical Earth is also the most robust.
Since far before Columbus’s time, sailors and their families have become familiar with the sight of sailing ships disappearing behind the horizon. As the ships get further away from the harbor, the bottom of the ship disappears progressively, until only the top of the mast is still visible.
If departing ships appear to lower as they get more and more distant, the Earth’s surface must be curved. A flat Earth would not explain this observation, unless every harbor on Earth was equipped with entrance and exit ramps.
We can estimate the distance at which departing ships would disappear using the knowledge of the curvature of the Earth collected by Eratosthenes.
He used the two wells to determine that 7.2^{\circ} of the Earth’s 360^{\circ} corresponded to about \SI{800}{\kilo\meter} of distance. From this observation, Eratosthenes estimated the total circumference of the Earth to be about \SI{40000}{\kilo\meter}, which is within 6\% of the value calculated with modern instruments.
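The arithmetic behind the estimate is just the scaling step described above:

```python
# Eratosthenes' scaling: 7.2 degrees is 1/50 of a full circle, so the
# 800 km between Syene and Alexandria is 1/50 of the circumference.
fraction = 7.2 / 360.0
circumference_km = 800.0 / fraction
print(f"{circumference_km:.0f} km")  # 40000 km
```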
About how far away would a ship with a \SI{10}{\meter} high mast disappear from view on Eratosthenes’s spherical Earth? Assume the observer's height is \SI{2}{\meter}.
\SI{1.5}{\kilo\meter}
\SI{15}{\kilo\meter}
\SI{150}{\kilo\meter}
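A back-of-the-envelope check uses the standard horizon-distance approximation d ≈ √(2Rh) for an eye height h much smaller than the radius R, summed for the observer and the mast (a sketch; the formula is the usual small-angle geometry, not spelled out in the problem):

```python
import math

# Eratosthenes' circumference gives the radius.
R_km = 40000.0 / (2 * math.pi)

def horizon_km(height_m):
    # Distance to the horizon for eye height h << R: d = sqrt(2 R h).
    return math.sqrt(2 * R_km * height_m / 1000.0)

# The mast top vanishes when the ship's distance equals the observer's
# horizon distance plus the mast top's horizon distance.
d = horizon_km(2.0) + horizon_km(10.0)
print(f"{d:.0f} km")  # about 16 km: the 15 km answer choice is the right scale
```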
We live in a world with flights from Johannesburg, South Africa to Perth, Australia and with live video feed from satellites and even interplanetary sportscars. The debate over the shape of the Earth is long over, but the trolling of Flat Earthers can still teach us a valuable lesson about making a robust scientific argument.
An important part of being a scientist is not only knowing how the world works, but being able to prove it at a party. |
CPM Homework Help : CCG Problem 11-110
Home > CCG > Chapter 11 > Lesson 11.2.3 > Problem 11-110
Solve for the variables in each of the diagrams below. Assume point C is the center of the circle in part (b).
x = 360° − 90° = 270°
90° + 90° + 48° + x = 360°
Since \text{SSS} ≅, the two triangles are congruent. Therefore, the corresponding angles of the two triangles are also congruent, which means the values of the angles are equal to 90°, 24° (half the value of the original 48°), and 66° (half the original value of x).
By the Law of Sines:
\frac{\sin24°}{7} = \frac{\sin66°}{y}
The two triangles are similar, which means:
\frac{3}{x} = \frac{6}{x + 2}
x = 2 |
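Both numerical steps can be checked in a few lines of Python (the 15.72 value is simply the evaluation of the Law of Sines ratio above; it is not given in the original hints):

```python
import math

# Law of Sines step: sin(24°)/7 = sin(66°)/y  =>  y = 7·sin(66°)/sin(24°).
y = 7 * math.sin(math.radians(66)) / math.sin(math.radians(24))
print(f"y ≈ {y:.2f}")  # about 15.72

# Similar-triangles step: 3/x = 6/(x + 2)  =>  3(x + 2) = 6x  =>  x = 2.
x = 2.0
assert math.isclose(3 / x, 6 / (x + 2))
```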
From J. D. Hooker [after 12 July 1845]1
Your collection was the main material—amounting to 153 species of flowering plants and (& +) 40 of Cryptog. i.e. 153 + 40 not 42 = 193 total, your,
Yours Total
193 : 225 :: 1 :
Macræ a collector of the Hort. Soc. formed the best part of the Albemarle Isld Collections ie. 42 phænogamic plants, you collected 7. Phænogamic there 4 of which are not in Macræ’s— Macræs total Galap are these of Albemarle Isld & one Fern. Douglas & Scouler on their way to the Columbia river collected some plants on James Isld about 15 species.2 Of these 15, 6 you did not get at all, & 5 you got on other Islds, (not on Jas.)— the other 5 both of you got on Jas Isld. Thus though you both collected on one small Isld. your collections are very different, 11 out of his 15 you did not collect & he only 5 out of the 60 that you got, this is terribly unsatisfactory.
The only other collector is Cuming who got one Scalesia which you did not but hardly another species of plant, if you see him ask him what Isld he landed on, I did ask him once but forget. I saw at Paris a few Galaps. from Petit Thouars I think late voyage.3
In all the numerical estimates I would exclude the Cryptogs. Even the Ferns except the prop. these bear to Phænog.—there are 28 species—Chas Isld: 10, James 20, Albem. 1. I make 6 new species, only one however is very remarkable it is from Jas. Isld— The rest 22 are almost all W. Indian— There are no limits to the diffusion of Ferns. (beasts)
Scalesia is a peculiar Galapagæan arborescent Compos containing 6 species—one from Cuming. 1 Chatham Isld:—1 Albemarle, 1 Chas Isld & two James Isld.— ! — it is no doubt a damp region tree, analogous no affinity to the arborescent Comp. of Juan Fernandez, St. Helena & I think Mauritius has some too.
I cannot remember the tortoises tree, but I think you noted it, if so I have also in the paper sent to Linn. Soc.4
I remarked also what you say of Chas’ Isld & took notice of the plants at the time. Compos Ageratum conyzoides, Nicotiana glutinosa, Teucrium inflatum, Salvia tiliæfolia, Scoparia dulci Lantana canescens?— Verbena littoralis, Boussingaultia baselloides, Brandesia echinocephala, Sandwich Islds. only previously maranthus celosioides & aracasanus, Phyllanthus obovatus, Urtica canadensis & divaricata, Hypoxis erecta, Cyperus strigosus, Panicum colonum— are all plants of S. America not found in the other Galapagos, those underlined are no doubt introduced by man, the rest I would expect in the other Islands.. & may or may not be introduced by man, I incline to think not. Take these 10 from the 68 Chas. Isld plants & the peculiar balance the common, is not this funny, upon my honor I have not cooked the result!—
Thus, as far as the collections go, the Florula of each Islet is 1/2 peculiar, but mark—; the very few species of Douglas & Macræ so disturb the results drawn from your special Herbarium, that there is no saying positively what a third collector might produce.
The instances of representative species on the several Islets may be divided into 2 groups, 1 st. of peculiar genera, as Scalesia see p. 25 & Galapagoa one Alb. & one Chas & Alb. & 2 nd. Extra Galapageian genera having peculiar Galapageian species. Thus, the
Euphorbiæ are very peculiar, only one species,
pilulifera (Jas. Isld.) is mundane, but of 7 others not one is common to 2 Islets.— Acalypha has 6 species, in the same predicament, (here the species of 2 very mundane genera are not widely dispersed) Cordia has 5 species,—2 Jas’ Isld. 1 Alb. & Chas (an a & b however) 1, Albem. & Jas. 1 Chat. & Alb.
Compos Lerontea has one Chat. & 2 species Albemarle.
Erigeron?. 1, Chas & Jas. & 1, Albemarle
Milleria? 1. Chas & 1 Jas.
Spilanthes? 1, Albem. 1 Jas.
Nov Gen. Compos, 1. Alb. & Jas;—1 Chat & Chas.
3 other unknown Compos are each confined to one Islet—
These compos, so far, very wonderfully peculiar to seperate Islets, add Scalesia & more so still.
Borreria is a mundane genus with 7 species all peculiar, only one of which is found on 2 Islets, & yet this is a genus of canaille all the world over. Chiococca 3 species 1 Chat, 1 Jas & 1 Albem. Ipomea 2 peculiar species both on Jas. one mundane on Chatham
These are the most marked instances of peculiarity in the distribution of the peculiar species of genera presenting more than one representative
There is still an enormous deal to be done with the materials—a comparison of the Islets with all the extra Galapageian species eliminated. A comparison of each with the coast Flora. The proportion of driftable & portable xtra Galap. plants in each:—those that fly or are flown by birds, those that salt water does & does not kill: that birds do & do not digest &c &c &c.
The collection is out & out S. American, & W. coast, but from the peculiarity of some genera & most species, I should not have known where to put it, supposing Galapagos not to xist. I know so much of the Flora of the coast as not to expect so much novelty from any 100 miles of it, if forced to assign the place it wd. probably be Panama
The opuntia is nowhere in the coll.6 The Flora is S. American throughout in character. What is the Flora of St. Felix Islds on the S. Tropic.
I hope you can read this: pray ask me about any difficulties without ceremony I will write ere long.
Ever yours Jos D Hooker
1.4 193 … 1:] ‘ie (32) not mine’ added ink
1.4 225] ‘193/32’ added pencil below
‘6.’added brown crayon, underl brown crayon
These answers to CD’s inquiries about Galápagos plants were written on the back of CD’s questions (enclosure with letter to J. D. Hooker, [11–12 July 1845]). See Journal of researches 2d ed., pp. 392–3 for CD’s use of these notes in his much revised Galápagos chapter.
David Douglas and John Scouler touched at James Island on the Hudson's Bay Company's expedition to the Columbia river, 1824–5.
Abel Aubert Du Petit-Thouars, who circumnavigated the globe, 1836–9. See Du Petit-Thouars 1840–3, 2: 313–22, for an account of his visit to the Galápagos Islands.
The tortoises feed on the lichen Usnea plicata which hangs from the branches of trees in the upper damp regions of the islands (J. D. Hooker 1845d, p. 164, and Journal of researches 2d ed., p. 382).
Hooker numbered the pages of his reply to CD and refers back to paragraph five.
CD’s Galápagos Opuntia specimens were described in Henslow 1837.
Du Petit-Thouars, Abel Aubert. 1840–3. Voyage autour du monde sur la frégate La Vénus, pendant les années 1836–1839. Relation. 4 vols. and atlas. Paris.
Answers CD’s questions relating to the flora of the Galapagos. [See 889.]
AL 3pp, JDH note 7pp †(by CD) |
Exercise - Pods
Options buyers can exercise their right during the window of expiration.
The event of exercising an option requires the following initial information:
1. Amount To Exercise
2. Owner
After the information was supplied, the exercise function will perform the following activities:
This function can only be called during the exerciseWindow period, depending on the exercise type.
1) Consult current StrikeReserves_n
This balance should reflect the new current strike reserves, considering the interest that accrued in the meanwhile. We call balanceOf() from the ERC20 strike asset to check the option contract's strike asset balance.
2) Calculate StrikeToSend
This step calculates how many strike assets the user will receive in a put option after requesting the exercise.
StrikeToSend=ExerciseAmount\cdot StrikePrice
3) Calculate UnderlyingToReceive
This step calculates the amount of underlying assets a user should receive after the exercise is requested, in the case of a call option.
UnderlyingToReceive=ExerciseAmount
4) Update reserves
Strike reserves: from the current StrikeReserves_n (accrued with interest from the last period), we deduct the StrikeToSend amount.
StrikeReserves_i=StrikeReserves_n-StrikeToSend
Underlying reserves: from the current UnderlyingReserves_n (accrued with interest from the last period), we add the amount of underlying to transfer. Now that expiration has arrived, one can either receive tokens or send tokens; in the case of a put option, this balance should increase if options were exercised, meaning an option buyer chose to exercise.
UnderlyingReserves_i=UnderlyingReserves_n+UnderlyingToReceive
5) Burn options
Burn exercised options.
Burn = ExerciseAmount
Note that exercise functions do not impact the TotalShares or OwnerShares balances.
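The bookkeeping in the steps above can be sketched in plain Python. The names follow the doc's formulas (StrikeToSend, UnderlyingToReceive); this is an illustrative sketch, not the actual Pods contract code.

```python
# Hypothetical sketch of the put-option exercise bookkeeping described above.
# Plain Python for illustration only -- not the Pods Solidity implementation.

def exercise_put(strike_reserves, underlying_reserves, total_supply,
                 exercise_amount, strike_price):
    """Pay strike assets out, take underlying assets in, burn the options."""
    if exercise_amount > total_supply:
        raise ValueError("cannot exercise more options than exist")
    strike_to_send = exercise_amount * strike_price       # StrikeToSend
    underlying_to_receive = exercise_amount               # UnderlyingToReceive
    strike_reserves -= strike_to_send                     # StrikeReserves_i
    underlying_reserves += underlying_to_receive          # UnderlyingReserves_i
    total_supply -= exercise_amount                       # burn exercised options
    return strike_reserves, underlying_reserves, total_supply

# Example: exercising 10 options at strike price 300
print(exercise_put(6000.0, 0.0, 20.0, 10.0, 300.0))  # (3000.0, 10.0, 10.0)
```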
Exercise options ✅ |
Construct an embedded.numerictype object describing fixed-point or floating-point data type - MATLAB numerictype - MathWorks France
Create a Default numerictype Object
Create a numerictype Object with Default Word Length and Scaling
Create a numerictype Object with Unspecified Scaling
Create a numerictype Object with Specified Word and Fraction Length
Create a numerictype Object with Slope and Bias Scaling
Create a numerictype Object with Specified Property Values
Create a numerictype Object with Unspecified Sign
Create a numerictype Object with Specified Data Type
Create a Double, Single, Half, or Boolean numerictype Object
Construct an embedded.numerictype object describing fixed-point or floating-point data type
T = numerictype creates a default numerictype object.
T = numerictype(s) creates a fixed-point numerictype object with unspecified scaling, a signed property value of s, and a 16-bit word length.
T = numerictype(s,w) creates a fixed-point numerictype object with unspecified scaling, a signed property value of s, and word length of w.
T = numerictype(s,w,f) creates a fixed-point numerictype object with binary point scaling, a signed property value of s, word length of w, and fraction length of f.
T = numerictype(s,w,slope,bias) creates a fixed-point numerictype object with slope and bias scaling, a signed property value of s, word length of w, slope, and bias.
T = numerictype(s,w,slopeadjustmentfactor,fixedexponent,bias) creates a fixed-point numerictype object with slope and bias scaling, a signed property value of s, word length of w, slopeadjustmentfactor, and bias.
T = numerictype(___,Name,Value) allows you to set properties using name-value pairs. All properties that you do not specify a value for are assigned their default values.
T = numerictype(T1,Name,Value) allows you to make a copy, T1, of an existing numerictype object, T, while modifying any or all of the property values.
T = numerictype('Double') creates a numerictype object of data type double.
T = numerictype('Single') creates a numerictype object of data type single.
T = numerictype('Half') creates a numerictype object of data type half.
T = numerictype('Boolean') creates a numerictype object of data type Boolean.
This example shows how to create a numerictype object with default property settings.
This example shows how to create a numerictype object with the default word length and scaling by omitting the arguments for word length, w, and fraction length, f.
The object is signed, with a word length of 16 bits and unspecified scaling.
You can use the signedness argument, s, to create an unsigned numerictype object.
The object has the default word length of 16 bits and unspecified scaling.
This example shows how to create a numerictype object with unspecified scaling by omitting the fraction length argument, f.
The object is signed, with a 32-bit word length.
This example shows how to create a signed numerictype object with binary-point scaling, a 32-bit word length, and 30-bit fraction length.
This example shows how to create a numerictype object with slope and bias scaling. The real-world value of a slope and bias scaled number is represented by:
\mathrm{realworldvalue}=\left(\mathrm{slope}×\mathrm{integer}\right)+\mathrm{bias}
Create a numerictype object that describes a signed, fixed-point data type with a word length of 16 bits, a slope of 2^-2, and a bias of 4.
Alternatively, the slope can be represented by:
\mathrm{slope}=\mathrm{slopeadjustmentfactor}×{2}^{\mathrm{fixedexponent}}
Create a numerictype object that describes a signed, fixed-point data type with a word length of 16 bits, a slope adjustment factor of 1, a fixed exponent of -2, and a bias of 4.
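The two slope-bias forms above are equivalent. As a quick numeric check (plain Python, for illustration only — not MATLAB or Fixed-Point Designer code), the real-world value of a stored integer under slope 2^-2 and bias 4:

```python
# Plain-Python illustration of the slope-and-bias formula above:
# realworldvalue = slope * storedinteger + bias, with
# slope = slopeadjustmentfactor * 2**fixedexponent.

def real_world_value(stored_int, slope, bias):
    return slope * stored_int + bias

slope = 1 * 2.0 ** -2           # slope adjustment factor 1, fixed exponent -2
print(real_world_value(100, slope, 4.0))  # 0.25 * 100 + 4 = 29.0
```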
This example shows how to use name-value pairs to set numerictype properties at object creation.
This example shows how to create a numerictype object with an unspecified sign by using name-value pairs to set the Signedness property to Auto.
This example shows how to create a numerictype object with a specific data type by using arguments and name-value pairs.
The returned numerictype object, T, is unsigned, and has a word length of 24 bits, a fraction length of 12 bits, and a data type set to scaled double.
This example shows how to create a numerictype object with data type set to double, single, half, or Boolean at object creation.
Create a numerictype object with the data type mode set to double.
Create a numerictype object with the data type mode set to single.
Create a numerictype object with the data type mode set to half.
Create a numerictype object with the data type mode set to Boolean.
s — Whether object is signed
Whether the object is signed, specified as a numeric or logical 1 (true) or 0 (false).
Example: T = numerictype(true)
Word length, in bits, of the stored integer value, specified as a positive integer.
Example: T = numerictype(true,16)
Fraction length, in bits, of the stored integer value, specified as an integer.
Fraction length can be greater than word length. For more information, see Binary Point Interpretation (Fixed-Point Designer).
Example: T = numerictype(true,16,15)
3.0518e-05 (default) | finite floating-point number greater than zero
Slope, specified as a finite floating-point number greater than zero.
The slope and the bias determine the scaling of a fixed-point number.
slope=slopeadjustmentfactor×{\text{2}}^{fixedexponent}
Example: T = numerictype(true,16,2^-2,4)
bias — Bias associated with object
Bias associated with the object, specified as a floating-point number.
Slope adjustment factor, specified as a positive scalar.
The slope adjustment factor must be greater than or equal to 1 and less than 2. If you input a slopeadjustmentfactor outside this range, the numerictype object automatically applies a scaling normalization to the values of slopeadjustmentfactor and fixedexponent so that the revised slope adjustment factor is greater than or equal to 1 and less than 2, and maintains the value of the slope.
slope=slopeadjustmentfactor×{\text{2}}^{fixedexponent}
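The normalization described above can be sketched in plain Python using `math.frexp`. This mimics the documented behavior (factor in [1, 2) with the slope preserved); it is an illustration, not Fixed-Point Designer's internal code.

```python
import math

# Rewrite a slope as slopeadjustmentfactor * 2**fixedexponent with
# 1 <= factor < 2, as the normalization above describes (illustrative only).

def normalize_slope(slope):
    m, e = math.frexp(slope)   # slope == m * 2**e with 0.5 <= m < 1
    return m * 2, e - 1        # factor now in [1, 2); exponent adjusted to match

factor, exponent = normalize_slope(3.0)
print(factor, exponent)        # 1.5 1  (since 3.0 == 1.5 * 2**1)
```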
fixedexponent — Fixed-point exponent
-15 (default) | integer
Fixed-point exponent associated with the object, specified as an integer.
Example: F = numerictype('DataTypeMode','Fixed-point: binary point scaling','DataTypeOverride','Inherit')
When you create a numerictype object by using name-value pairs, Fixed-Point Designer™ creates a default numerictype object, and then, for each property name you specify in the constructor, assigns the corresponding value. This behavior differs from the behavior that occurs when you use a syntax such as T = numerictype(s,w). See Example: Construct a numerictype Object with Property Name and Property Value Pairs.
Bias, specified as a floating-point number.
The slope and bias determine the scaling of a fixed-point number.
Example: T = numerictype('DataTypeMode','Fixed-point: slope and bias scaling','Bias',4)
DataType — Data type category
'Fixed' (default) | 'Boolean' | 'Double' | 'ScaledDouble' | 'Single' | 'Half'
Data type category, specified as one of these values:
'Fixed' – Fixed-point or integer data type
'Boolean' – Built-in MATLAB® Boolean data type
'Double' – Built-in MATLAB double data type
'ScaledDouble' – Scaled double data type
'Single' – Built-in MATLAB single data type
'Half' – MATLAB half-precision data type
Example: T = numerictype('Double')
DataTypeMode — Data type and scaling mode
'Fixed-point: binary point scaling' (default) | 'Fixed-point: slope and bias scaling' | 'Fixed-point: unspecified scaling' | 'Scaled double: binary point scaling' | 'Scaled double: slope and bias scaling' | 'Scaled double: unspecified scaling' | 'Double' | 'Single' | 'Half' | 'Boolean'
Data type and scaling mode associated with the object, specified as one of these values:
'Fixed-point: binary point scaling' – Fixed-point data type and scaling defined by the word length and fraction length
'Fixed-point: slope and bias scaling' – Fixed-point data type and scaling defined by the slope and bias
'Fixed-point: unspecified scaling' – Fixed-point data type with unspecified scaling
'Scaled double: binary point scaling' – Double data type with fixed-point word length and fraction length information retained
'Scaled double: slope and bias scaling' – Double data type with fixed-point slope and bias information retained
'Scaled double: unspecified scaling' – Double data type with unspecified fixed-point scaling
'Double' – Built-in double
'Single' – Built-in single
'Boolean' – Built-in boolean
Example: T = numerictype('DataTypeMode','Fixed-point: binary point scaling')
DataTypeOverride — Data type override settings
Data type override settings, specified as one of these values:
'Inherit' – Turn on DataTypeOverride
'Off' – Turn off DataTypeOverride
The DataTypeOverride property is not visible when its value is set to the default, 'Inherit'.
Example: T = numerictype('DataTypeOverride','Off')
Example: T = numerictype('FixedExponent',-12)
FractionLength — Fraction length of the stored integer value
best precision (default) | integer
The default value is the best precision fraction length based on the value of the object and the word length.
Example: T = numerictype('FractionLength',12)
Scaling — Fixed-point scaling mode
'BinaryPoint' (default) | 'SlopeBias' | 'Unspecified'
Fixed-point scaling mode of the object, specified as one of these values:
'BinaryPoint' – Scaling for the numerictype object is defined by the fraction length.
'SlopeBias' – Scaling for the numerictype object is defined by the slope and bias.
'Unspecified' – Temporary setting that is only allowed at numerictype object creation, and allows for the automatic assignment of a best-precision binary point scaling.
Example: T = numerictype('Scaling','BinaryPoint')
Signed — Whether the object is signed
Although the Signed property is still supported, the Signedness property always appears in the numerictype object display. If you choose to change or set the signedness of your numerictype object using the Signed property, MATLAB updates the corresponding value of the Signedness property.
Example: T = numerictype('Signed',true)
Signedness — Whether the object is signed
'Signed' (default) | 'Unsigned' | 'Auto'
Whether the object is signed, specified as one of these values:
'Signed' – Signed
'Unsigned' – Unsigned
'Auto' – Unspecified sign
Although you can create numerictype objects with an unspecified sign (Signedness: Auto), all fixed-point numerictype objects must have a Signedness of Signed or Unsigned. If you use a numerictype object with Signedness: Auto to construct a numerictype object, the Signedness property of the numerictype object automatically defaults to Signed.
Example: T = numerictype('Signedness','Signed')
Slope, specified as a finite, positive floating-point number.
slope=slopeadjustmentfactor×{\text{2}}^{fixedexponent}
Example: T = numerictype('DataTypeMode','Fixed-point: slope and bias scaling','Slope',2^-2)
slope=slopeadjustmentfactor×{\text{2}}^{fixedexponent}
Example: T = numerictype('DataTypeMode','Fixed-point: slope and bias scaling','SlopeAdjustmentFactor',1.5)
WordLength — Word length of the stored integer value
Example: T = numerictype('WordLength',16)
Fixed-point signals coming in to a MATLAB Function block from Simulink® are assigned a numerictype object that is populated with the signal's data type and scaling information.
Returns the data type when the input is a non-fixed-point signal.
Use to create numerictype objects in generated code.
All numerictype object properties related to the data type must be constant. |
h is related to one of the six parent functions.
a) Identify the parent function f.
b) Describe the sequence of transformations from f to h.
c) Sketch the graph of h by hand.
d) Use function notation to write h in terms of the parent function f.
h\left(x\right)={\left(-x\right)}^{2}-8
a) Parent function:
f\left(x\right)={x}^{2}
b) Reflection in the y-axis
Vertical shift 8 units downward
c) The graph is the parabola of f reflected in the y-axis and shifted 8 units downward (sketch omitted).
d) In function notation,
h\left(x\right)=f\left(-x\right)\text{ }-\text{ }8
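A quick numeric spot-check of the answer (plain Python, purely illustrative) that h(x) = f(−x) − 8 with the parent function f(x) = x²:

```python
# Numeric check that h(x) = (-x)^2 - 8 equals f(-x) - 8 for f(x) = x^2.

def f(x):
    return x ** 2

def h(x):
    return (-x) ** 2 - 8

for x in (-3, 0, 2.5):
    assert h(x) == f(-x) - 8
print(h(3))  # 9 - 8 = 1
```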
|
Radiative Properties of Multilayer Thin Films With Positive and Negative Refractive Indexes | IMECE | ASME Digital Collection
Ceji Fu,
Ceji Fu
Zhuomin M. Zhang,
Fu, C, Zhang, ZM, & Tanner, DB. "Radiative Properties of Multilayer Thin Films With Positive and Negative Refractive Indexes." Proceedings of the ASME 2002 International Mechanical Engineering Congress and Exposition. Heat Transfer, Volume 2. New Orleans, Louisiana, USA. November 17–22, 2002. pp. 191-192. ASME. https://doi.org/10.1115/IMECE2002-32771
In 1968, Veselago [1] predicted that there could still be propagating waves in a medium that had simultaneously negative permittivity ε and permeability μ, because the product of ε and μ would be positive. However, to ensure energy conservation, he concluded that the refractive index must use the negative square root of the product of ε and μ (i.e., n = −√(εμ)
). A consequence is certain unusual optical features in negative-index media. The electric field vector E, magnetic field vector H and wave vector k are a left-handed triplet, the basis for calling materials with simultaneously negative ε and μ “left-handed materials” (LHMs). LHMs would have novel optical properties. Light at non-normal incidence would bend to the side opposite that in a normal RHM; positive lenses would become negative; a flat slab could focus. The phase velocity of an electromagnetic wave would be opposite to the direction of energy flux, resulting in a reversed Doppler effect. Photons would have negative momentum and apply tension to the interface upon reflection. Recently, this kind of material has been demonstrated experimentally to exist. Shelby et al. [2] measured the scattering angle of the transmitted beam through a prism manufactured from a composite material consisting of a two-dimensional array of copper wires and split ring resonators and showed that the effective refractive index of the material is negative at microwave frequencies. Recent theoretical studies also showed that some photonic crystals might have negative-refraction properties in the near infrared spectral region [3].
Composite materials, Copper, Crystals, Doppler effect, Electric fields, Electrical properties, Electromagnetic scattering, Energy conservation, Magnetic fields, Microwaves, Momentum, Permeability, Photons, Prisms (Optics), Radiation scattering, Reflection, Refraction, Refractive index, Scattering (Physics), Slabs, Tension, Thin films, Waves, Wire
Temperature-Dependent Luminescence Quenching in Random Nano Porous Media
A Three-Flux Method for Predicting Radiative Transfer in Aqueous Suspensions |
What Is Parallel Computing in Optimization Toolbox? - MATLAB & Simulink - MathWorks Australia
Parallel Optimization Functionality
Parallel Estimation of Gradients
Parallel Central Differences
Nested Parallel Functions
Parallel computing is the technique of using multiple processors on a single problem. The reason to use parallel computing is to speed computations.
The following Optimization Toolbox™ solvers can automatically distribute the numerical estimation of gradients of objective functions and nonlinear constraint functions to multiple processors:
These solvers use parallel gradient estimation under the following conditions:
You have a license for Parallel Computing Toolbox™ software.
The option SpecifyObjectiveGradient is set to false, or, if there is a nonlinear constraint function, the option SpecifyConstraintGradient is set to false. Since false is the default value of these options, you don't have to set them; just don't set them both to true.
Parallel computing is enabled with parpool, a Parallel Computing Toolbox function.
The option UseParallel is set to true. The default value of this option is false.
When these conditions hold, the solvers compute estimated gradients in parallel.
Even when running in parallel, a solver occasionally calls the objective and nonlinear constraint functions serially on the host machine. Therefore, ensure that your functions have no assumptions about whether they are evaluated in serial or parallel.
One solver subroutine can compute in parallel automatically: the subroutine that estimates the gradient of the objective function and constraint functions. This calculation involves computing function values at points near the current location x. Essentially, the calculation is
\nabla f\left(x\right)\approx \left[\frac{f\left(x+{\Delta }_{1}{e}_{1}\right)-f\left(x\right)}{{\Delta }_{1}},\frac{f\left(x+{\Delta }_{2}{e}_{2}\right)-f\left(x\right)}{{\Delta }_{2}},\dots ,\frac{f\left(x+{\Delta }_{n}{e}_{n}\right)-f\left(x\right)}{{\Delta }_{n}}\right],
f represents objective or constraint functions
ei are the unit direction vectors
Δi is the size of a step in the ei direction
To estimate ∇f(x) in parallel, Optimization Toolbox solvers distribute the evaluation of (f(x + Δiei) – f(x))/Δi to extra processors.
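This distribution of the n perturbed evaluations can be sketched in plain Python, with a thread pool as a stand-in for parfor. It is an illustration of the forward-difference scheme above, not Optimization Toolbox code.

```python
from concurrent.futures import ThreadPoolExecutor

# Forward finite-difference gradient with the n perturbed evaluations
# f(x + delta*e_i) farmed out to a worker pool (illustrative stand-in
# for the solvers' parfor-based distribution).

def forward_diff_gradient(f, x, delta=1e-6):
    fx = f(x)
    def component(i):
        xp = list(x)
        xp[i] += delta
        return (f(xp) - fx) / delta
    with ThreadPoolExecutor() as pool:
        return list(pool.map(component, range(len(x))))

# Example: f(x) = x0^2 + 3*x1 has gradient (2*x0, 3); at (1, 5) about (2, 3)
grad = forward_diff_gradient(lambda x: x[0] ** 2 + 3 * x[1], [1.0, 5.0])
print([round(g, 4) for g in grad])  # [2.0, 3.0]
```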
You can choose to have gradients estimated by central finite differences instead of the default forward finite differences. The basic central finite difference formula is
\nabla f\left(x\right)\approx \left[\frac{f\left(x+{\Delta }_{1}{e}_{1}\right)-f\left(x-{\Delta }_{1}{e}_{1}\right)}{2{\Delta }_{1}},\dots ,\frac{f\left(x+{\Delta }_{n}{e}_{n}\right)-f\left(x-{\Delta }_{n}{e}_{n}\right)}{2{\Delta }_{n}}\right].
This takes twice as many function evaluations as forward finite differences, but is usually much more accurate. Central finite differences work in parallel exactly the same as forward finite differences.
Enable central finite differences by using optimoptions to set the FiniteDifferenceType option to 'central'. To use forward finite differences, set the FiniteDifferenceType option to 'forward'.
Solvers employ the Parallel Computing Toolbox function parfor (Parallel Computing Toolbox) to perform parallel estimation of gradients. parfor does not work in parallel when called from within another parfor loop. Therefore, you cannot simultaneously use parallel gradient estimation and parallel functionality within your objective or constraint functions.
Suppose, for example, your objective function userfcn calls parfor, and you wish to call fmincon in a loop. Suppose also that the conditions for parallel gradient evaluation of fmincon, as given in Parallel Optimization Functionality, are satisfied. When parfor Runs In Parallel shows three cases:
The outermost loop is parfor. Only that loop runs in parallel.
The outermost parfor loop is in userfcn. userfcn can use parfor in parallel.
Using Parallel Computing in Optimization Toolbox | Improving Performance with Parallel Computing | Minimizing an Expensive Optimization Problem Using Parallel Computing Toolbox |
The reduced row echelon form of the augmented matrix of a system of linear equations is given. Tell whether the system has one solution, no solution, or infinitely many solutions. Write the solutions or, if there is no solution, say the system is inconsistent.
\left[\begin{array}{ccccc}1& 0& -2& |& 6\\ 0& 1& 3& |& 1\end{array}\right]
The system is consistent and has infinitely many solutions, with {x}_{3}\in R free.
Row 2: {x}_{2}+3{x}_{3}=1\to {x}_{2}=1-3{x}_{3}
Row 1: {x}_{1}-2{x}_{3}=6\to {x}_{1}=6+2{x}_{3}
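The parametric solution can be verified numerically (plain Python, for illustration): every choice of the free parameter satisfies both rows of the augmented matrix.

```python
# Verify the parametric solution read off from the RREF above: for any t,
# x1 = 6 + 2t, x2 = 1 - 3t, x3 = t satisfies x1 - 2*x3 = 6 and x2 + 3*x3 = 1.

def solution(t):
    return 6 + 2 * t, 1 - 3 * t, t

for t in (-1.0, 0.0, 4.5):
    x1, x2, x3 = solution(t)
    assert x1 - 2 * x3 == 6
    assert x2 + 3 * x3 == 1
print(solution(1.0))  # (8.0, -2.0, 1.0)
```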
|
relation - Maple Help
convert Complex and Real ranges into relations expressed using =, <=, <, >=, and >
convert( expr, relation )
The convert(expr, relation) function converts Real ranges and Complex ranges found in expr into relations using =, <=, <, >=, and >, expressed where necessary in terms of the real part ℜ and the imaginary part ℑ.
z::\mathrm{ComplexRange}\left(-1-I,1+I\right)
\textcolor[rgb]{0,0,1}{z}\textcolor[rgb]{0,0,1}{::}\textcolor[rgb]{0,0,1}{\mathrm{ComplexRange}}\textcolor[rgb]{0,0,1}{}\left(\textcolor[rgb]{0,0,1}{-1}\textcolor[rgb]{0,0,1}{-}\textcolor[rgb]{0,0,1}{I}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{1}\textcolor[rgb]{0,0,1}{+}\textcolor[rgb]{0,0,1}{I}\right)
\mathrm{convert}\left(,\mathrm{relation}\right)
\textcolor[rgb]{0,0,1}{-1}\textcolor[rgb]{0,0,1}{\le }\textcolor[rgb]{0,0,1}{\mathrm{ℜ}}\textcolor[rgb]{0,0,1}{}\left(\textcolor[rgb]{0,0,1}{z}\right)\textcolor[rgb]{0,0,1}{\wedge }\textcolor[rgb]{0,0,1}{\mathrm{ℜ}}\textcolor[rgb]{0,0,1}{}\left(\textcolor[rgb]{0,0,1}{z}\right)\textcolor[rgb]{0,0,1}{\le }\textcolor[rgb]{0,0,1}{1}\textcolor[rgb]{0,0,1}{\wedge }\textcolor[rgb]{0,0,1}{-1}\textcolor[rgb]{0,0,1}{\le }\textcolor[rgb]{0,0,1}{\mathrm{ℑ}}\textcolor[rgb]{0,0,1}{}\left(\textcolor[rgb]{0,0,1}{z}\right)\textcolor[rgb]{0,0,1}{\wedge }\textcolor[rgb]{0,0,1}{\mathrm{ℑ}}\textcolor[rgb]{0,0,1}{}\left(\textcolor[rgb]{0,0,1}{z}\right)\textcolor[rgb]{0,0,1}{\le }\textcolor[rgb]{0,0,1}{1}
z\phantom{\rule[-0.0ex]{0.3em}{0.0ex}}\mathbf{in}\phantom{\rule[-0.0ex]{0.3em}{0.0ex}}\mathrm{ComplexRange}\left(-\mathrm{\infty }I,I\right)
\textcolor[rgb]{0,0,1}{z}\textcolor[rgb]{0,0,1}{∈}\textcolor[rgb]{0,0,1}{\mathrm{ComplexRange}}\textcolor[rgb]{0,0,1}{}\left(\textcolor[rgb]{0,0,1}{-}\textcolor[rgb]{0,0,1}{\mathrm{\infty }}\textcolor[rgb]{0,0,1}{}\textcolor[rgb]{0,0,1}{I}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{I}\right)
\mathrm{convert}\left(,\mathrm{RealRange}\right)
\textcolor[rgb]{0,0,1}{\mathrm{ℜ}}\textcolor[rgb]{0,0,1}{}\left(\textcolor[rgb]{0,0,1}{z}\right)\textcolor[rgb]{0,0,1}{∈}\textcolor[rgb]{0,0,1}{0}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{\mathrm{ℑ}}\textcolor[rgb]{0,0,1}{}\left(\textcolor[rgb]{0,0,1}{z}\right)\textcolor[rgb]{0,0,1}{∈}\left(\textcolor[rgb]{0,0,1}{-}\textcolor[rgb]{0,0,1}{\mathrm{\infty }}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{1}\right]
\mathrm{convert}\left([],\mathrm{relation}\right)
[\textcolor[rgb]{0,0,1}{\mathrm{ℜ}}\textcolor[rgb]{0,0,1}{}\left(\textcolor[rgb]{0,0,1}{z}\right)\textcolor[rgb]{0,0,1}{=}\textcolor[rgb]{0,0,1}{0}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{-}\textcolor[rgb]{0,0,1}{\mathrm{\infty }}\textcolor[rgb]{0,0,1}{\le }\textcolor[rgb]{0,0,1}{\mathrm{ℑ}}\textcolor[rgb]{0,0,1}{}\left(\textcolor[rgb]{0,0,1}{z}\right)\textcolor[rgb]{0,0,1}{\wedge }\textcolor[rgb]{0,0,1}{\mathrm{ℑ}}\textcolor[rgb]{0,0,1}{}\left(\textcolor[rgb]{0,0,1}{z}\right)\textcolor[rgb]{0,0,1}{\le }\textcolor[rgb]{0,0,1}{1}]
When constructions like z < a or z <= a are present, it is understood that z is real.
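The idea behind the conversion — a range becomes a conjunction of inequalities on the variable — can be sketched outside Maple. The following plain-Python sketch handles only the closed or unbounded real case and is illustrative, not Maple code.

```python
# Plain-Python sketch of turning a real range [a, b] into relations,
# mirroring the spirit of convert(expr, relation) for RealRange.

def real_range_to_relation(var, a, b):
    parts = []
    if a != float("-inf"):
        parts.append(f"{a} <= {var}")
    if b != float("inf"):
        parts.append(f"{var} <= {b}")
    return " and ".join(parts) if parts else "true"

print(real_range_to_relation("x", -1, 1))             # -1 <= x and x <= 1
print(real_range_to_relation("x", float("-inf"), 1))  # x <= 1
```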
\mathrm{FunctionAdvisor}\left(\mathrm{branch_cuts},\mathrm{arcsin}\right)
[\textcolor[rgb]{0,0,1}{\mathrm{arcsin}}\textcolor[rgb]{0,0,1}{}\left(\textcolor[rgb]{0,0,1}{z}\right)\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{z}\textcolor[rgb]{0,0,1}{\le }\textcolor[rgb]{0,0,1}{-1}\textcolor[rgb]{0,0,1}{\vee }\textcolor[rgb]{0,0,1}{1}\textcolor[rgb]{0,0,1}{\le }\textcolor[rgb]{0,0,1}{z}]
\mathrm{FunctionAdvisor}\left(\mathrm{branch_cuts},\mathrm{arccot}\right)
[\textcolor[rgb]{0,0,1}{\mathrm{arccot}}\textcolor[rgb]{0,0,1}{}\left(\textcolor[rgb]{0,0,1}{z}\right)\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{z}\textcolor[rgb]{0,0,1}{∈}\textcolor[rgb]{0,0,1}{\mathrm{ComplexRange}}\textcolor[rgb]{0,0,1}{}\left(\textcolor[rgb]{0,0,1}{-}\textcolor[rgb]{0,0,1}{\mathrm{\infty }}\textcolor[rgb]{0,0,1}{}\textcolor[rgb]{0,0,1}{I}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{-I}\right)\textcolor[rgb]{0,0,1}{\vee }\textcolor[rgb]{0,0,1}{z}\textcolor[rgb]{0,0,1}{∈}\textcolor[rgb]{0,0,1}{\mathrm{ComplexRange}}\textcolor[rgb]{0,0,1}{}\left(\textcolor[rgb]{0,0,1}{I}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{\mathrm{\infty }}\textcolor[rgb]{0,0,1}{}\textcolor[rgb]{0,0,1}{I}\right)]
\mathrm{convert}\left(,\mathrm{relation}\right)
\left[\operatorname{arccot}\left(z\right),\left(\Re \left(z\right)=0\wedge -\infty \le \Im \left(z\right)\wedge \Im \left(z\right)\le -1\right)\vee \left(\Re \left(z\right)=0\wedge 1\le \Im \left(z\right)\wedge \Im \left(z\right)\le \infty \right)\right]
{\mathrm{sin}}^{-1}\sqrt{P}
\pi /2-{\mathrm{sin}}^{-1}\sqrt{1-P}
, where P is mortality; therefore,
V\left({\mathrm{sin}}^{-1}\sqrt{P}\right)=V\left(\pi /2-{\mathrm{sin}}^{-1}\sqrt{1-P}\right)=V\left({\mathrm{sin}}^{-1}\sqrt{1-P}\right)
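The variance equality above rests on an exact pointwise identity: for every P in [0, 1], sin⁻¹√P and π/2 − sin⁻¹√(1 − P) are the same number, so the two transforms of the same data differ only by a constant and a sign, and Var(c − X) = Var(X). A quick numerical check (my own sketch, not from the paper):

```python
import math
import random

# For theta = asin(sqrt(P)) we have sin(theta) = sqrt(P) and
# cos(theta) = sqrt(1 - P), hence asin(sqrt(1 - P)) = pi/2 - theta.
random.seed(0)
for _ in range(1000):
    P = random.random()
    lhs = math.asin(math.sqrt(P))
    rhs = math.pi / 2 - math.asin(math.sqrt(1 - P))
    assert abs(lhs - rhs) < 1e-12
```

Because Var(c − X) = Var(X), the three variances in the displayed equation coincide exactly.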
1) As discussed by Charlesworth [21], population genetics models show that the intrinsic rate of natural increase,
{r}_{ij}
, defined as the real root of the Euler-Lotka equation
{\sum }_{x}{e}^{-{r}_{ij}x}{k}_{ij}\left(x\right)=1
, is an appropriate measure of fitness for a diploid genotype ij under density-independent conditions in many circumstances (p. 178). Here,
{k}_{ij}\left(x\right)
is the reproductive function: the net expectation of female offspring produced by a female aged x, weighted by the probability of survival from the zygote stage. Fitness is usually defined for individuals, unless group selection is being considered. Of course, group selection theories may be relevant for traits such as altruistic behaviors, but probably no one would consider them appropriate for insecticide resistance within a natural population of D. melanogaster. If it is accepted that fitness should be defined for individuals, our attempt to measure the intrinsic rate of natural increase at the genotype (individual) level is a reasonable approach.
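The real root of the Euler-Lotka equation can be computed numerically, since the left-hand side is strictly decreasing in the rate for positive ages x. A minimal sketch; the reproductive schedule below is hypothetical, chosen only to illustrate the root-finding:

```python
import math

def euler_lotka_r(k, lo=-5.0, hi=5.0, tol=1e-12):
    """Solve sum_x exp(-r*x) * k(x) = 1 for the real root r by bisection.

    k: dict mapping age x (> 0) to the reproductive function k(x).
    f(r) = sum_x exp(-r*x) * k(x) is strictly decreasing in r,
    so the real root is unique and bisection converges.
    """
    f = lambda r: sum(math.exp(-r * x) * kx for x, kx in k.items()) - 1.0
    assert f(lo) > 0 > f(hi), "root not bracketed"
    while hi - lo > tol:
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if f(mid) > 0 else (lo, mid)
    return (lo + hi) / 2

# Hypothetical reproductive schedule: ages 1..3, k(x) values for illustration only.
k = {1: 0.5, 2: 0.8, 3: 0.4}
r = euler_lotka_r(k)
# The root satisfies the Euler-Lotka equation to high accuracy:
assert abs(sum(math.exp(-r * x) * kx for x, kx in k.items()) - 1.0) < 1e-9
```

Since the total of k(x) here exceeds 1, the resulting r is positive, as expected for a growing genotype.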
G
{}^{c}
G
(..., ...)-and (..., ...)-connexions of an almost complex and an almost product space.
M. Prvanovic (1977)
3 Classification des plongements isotropes d'après A. Weinstein
D. Sondaz (1983)
3-submersions from QR-hypersurfaces of quaternionic Kähler manifolds
Gabriel Eduard Vîlcu (2010)
We study 3-submersions from a QR-hypersurface of a quaternionic Kähler manifold onto an almost quaternionic hermitian manifold. We also prove the non-existence of quaternionic submersions between quaternionic Kähler manifolds which are not locally hyper-Kähler.
A basic inequality for submanifolds in a cosymplectic space form.
Kim, Jeong-Sik, Choi, Jaedong (2003)
A boundary rigidity problem for holomorphic mappings.
Gentili, Graziano, Migliorini, Serena (1997)
M. De León, E. Merino, J. A. Oubiña, M. Salgado (1994)
A class of 3-dimensional almost Kenmotsu manifolds with harmonic curvature tensors
Yaning Wang (2016)
Let M3 be a three-dimensional almost Kenmotsu manifold satisfying ▽ξh = 0. In this paper, we prove that the curvature tensor of M3 is harmonic if and only if M3 is locally isometric to either the hyperbolic space ℍ3(-1) or the Riemannian product ℍ2(−4) × ℝ. This generalizes a recent result obtained by [Wang Y., Three-dimensional locally symmetric almost Kenmotsu manifolds, Ann. Polon. Math., 2016, 116, 79-86] and [Cho J.T., Local symmetry on almost Kenmotsu three-manifolds, Hokkaido Math. J., 2016,...
A class of locally symmetric Kähler Einstein structures on the nonzero cotangent bundle of a space form.
Poroşniuc, Dumitru Daniel (2004)
A Class of Non-Polarizable Symplectic Manifolds.
Mark J. Gotay (1987)
A classification of certain submanifolds of an S-manifold
José L. Cabrerizo, Luis M. Fernández, Manuel Fernández (1991)
A classification theorem is obtained for submanifolds with parallel second fundamental form of an 𝑆-manifold whose invariant f-sectional curvature is constant.
A classification of contact metric 3-manifolds with constant
\xi
-sectional and
\phi
-sectional curvatures.
Gouli-Andreou, F., Xenos, Ph.J. (2002)
V. V. Lychagin, V. N. Rubtsov, I. V. Chekalov (1993)
A collar neighborhood theorem for a complex manifold
C. Denson Hill, Mauro Nacinovich (1994)
A complete classification of four-dimensional paraKähler Lie algebras
Giovanni Calvaruso (2015)
We consider paraKähler Lie algebras, that is, even-dimensional Lie algebras g equipped with a pair (J, g), where J is a paracomplex structure and g a pseudo-Riemannian metric, such that the fundamental 2-form Ω(X, Y) = g(X, JY) is symplectic. A complete classification is obtained in dimension four.
A connection in a differential module
Kazimierz Cegiełka (1976)
A contact metric manifold satisfying a certain curvature condition
Jong Taek Cho (1995)
In the present paper we investigate a contact metric manifold satisfying (C):
\left({\overline{\nabla }}_{\stackrel{˙}{\gamma }}R\right)\left(·,\stackrel{˙}{\gamma }\right)\stackrel{˙}{\gamma }=0
for any
\overline{\nabla }
-geodesic
\gamma
, where
\overline{\nabla }
is the Tanaka connection. We classify the 3-dimensional contact metric manifolds satisfying (C) for any
\overline{\nabla }
-geodesic
\gamma
. Also, we prove a structure theorem for a contact metric manifold with
\xi
belonging to the
k
-nullity distribution and satisfying (C) for any
\overline{\nabla }
-geodesic
\gamma
A convex Darboux theorem
Pierre-André Chiappori, Ivar Ekeland (1997)
A Direct Extension Method for CR Structures.
Garo K. Kiremidjian (1979)
Consider the integral \int_0^1\frac{\sin(\pi x)}{1-x}dx; I want to do this via power series
Agaiepsh 2021-11-19 Answered
{\int }_{0}^{1}\frac{\mathrm{sin}\left(\pi x\right)}{1-x}dx
I want to do this via power series and obtain an exact solution.
In power series, I have
{\int }_{0}^{1}\left(\sum _{n=0}^{\mathrm{\infty }}{\left(-1\right)}^{n}\frac{{\left(\pi x\right)}^{2n+1}}{\left(2n+1\right)!}\cdot \sum _{n=0}^{\mathrm{\infty }}{x}^{n}\right)dx
My question is: how do I multiply these summations together? I have searched online, however, in all cases I found they simply truncated the series and found an approximation.
Drood1980
Let's take a more abstract case, trying to multiply
\sum _{n=0}^{\mathrm{\infty }}{a}_{n}\text{ }\text{ and }\text{ }\sum _{n=0}^{\mathrm{\infty }}{b}_{n}
. Note that in the resulting sum we will have
{a}_{i}{b}_{j}
for all possibilities of i,j
\in \mathbb{N}
One way to make it compact is to sum across diagonals. Think about an integer lattice in the first quadrant of
{\mathbb{R}}^{2}
. Drawing diagonals (the origin, then along x+y=1, then along x+y=2, etc.), note that the diagonal along the line x+y=n contains n+1 integer points, and at each of those points the indices sum to n, i.e.
(n,0),(n−1,1),…,(k,n−k)…,(0,n). So we can renumber the summation based on these diagonals, getting
\left(\sum _{n=0}^{\mathrm{\infty }}{a}_{n}\right)\left(\sum _{n=0}^{\mathrm{\infty }}{b}_{n}\right)=\sum _{n=0}^{\mathrm{\infty }}\sum _{j,k\text{ along }x+y=n}{a}_{k}{b}_{j}=\sum _{n=0}^{\mathrm{\infty }}\sum _{k=0}^{n}{a}_{k}{b}_{n-k}
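Applied to the integral in the question, the Cauchy product can be checked numerically: take a_n as the Maclaurin coefficients of sin(πx), b_n = 1 from the geometric series for 1/(1−x), form c_n = Σ_{k=0}^n a_k b_{n−k}, and compare the resulting series with sin(πx)/(1−x) at a test point. A sketch of mine, not from the thread:

```python
import math

N = 60
a = [0.0] * N                      # Maclaurin coefficients of sin(pi*x)
for n in range(N // 2):
    if 2 * n + 1 < N:
        a[2 * n + 1] = (-1) ** n * math.pi ** (2 * n + 1) / math.factorial(2 * n + 1)
b = [1.0] * N                      # coefficients of 1/(1-x) = sum x^n

# Cauchy product: c_n = sum_{k=0}^{n} a_k * b_{n-k}
c = [sum(a[k] * b[n - k] for k in range(n + 1)) for n in range(N)]

x = 0.5
series_value = sum(c[n] * x ** n for n in range(N))
exact_value = math.sin(math.pi * x) / (1 - x)   # = 2 at x = 0.5
assert abs(series_value - exact_value) < 1e-9
```

Here each c_n is a partial sum of the sine coefficients, which matches the diagonal-counting argument above.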
Onlaceing
I am trying to solve this and it does not work; if you can, please help
Using laws of logarithms, write the expression in the image as a single logarithm.
6\mathrm{log}\left(64-{x}^{2}\right)-\left(\mathrm{log}\left(8+x\right)+\mathrm{log}\left(8-x\right)\right)=\mathrm{log}\left(\right)
{\mathrm{log}}_{2}\left({\mathrm{log}}_{3}\left({\mathrm{log}}_{4}\left(x\right)\right)\right)=0
Proving Logarithmic identity
Today, I read these two new Logarithmic identity
{a}^{{\mathrm{log}}_{a}m}=m
{\mathrm{log}}_{{a}^{q}}{m}^{p}=\frac{p}{q}{\mathrm{log}}_{a}m
Both of them seem new to me, so even after solving some problems based on them directly, I haven't fully understood why they hold. Could anybody show me how to prove them?
Condense them to the same base before solving for x
{\mathrm{log}}_{16}\left(x\right)+{\mathrm{log}}_{4}\left(x\right)+{\mathrm{log}}_{2}\left(x\right)=7
{e}^{-2x+1}=13
Any tricky method to solve this one?
The question: prove that:
\text{ if }\text{ }y=2{x}^{2}-1,\text{ }\text{ then }\text{ }\frac{1}{y}+\frac{1}{3{y}^{3}}+\frac{1}{5{y}^{5}}+\cdots =\frac{1}{2}\left[\frac{1}{{x}^{2}}+\frac{1}{2{x}^{4}}+\frac{1}{3{x}^{6}}+\cdots \right]
Here I have modified the question; in the actual question (from my paper) four other options were given. My approach was to reduce the first expression to
\frac{1}{2}\mathrm{ln}\left(\frac{y+1}{y-1}\right)
and then to check each option for a match. Is there any other approach? Checking each option took me some time :( and the desired answer was the last option!
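For what it's worth, the match can be confirmed numerically: with y = 2x² − 1 and |x| > 1, both the odd series in 1/y and ½Σ 1/(n x^{2n}) evaluate to ½ ln((y+1)/(y−1)) = −½ ln(1 − 1/x²). A quick check of my own, with an arbitrary test value x = 1.7:

```python
import math

# Assumption: |x| > 1, so both series converge.
x = 1.7
y = 2 * x**2 - 1

# Left-hand side: sum 1/((2n+1) y^(2n+1)) = artanh(1/y)
lhs = sum(1.0 / ((2 * n + 1) * y ** (2 * n + 1)) for n in range(200))

# Right-hand side: (1/2) sum 1/(n x^(2n)) = -(1/2) ln(1 - 1/x^2)
rhs = 0.5 * sum(1.0 / (n * x ** (2 * n)) for n in range(1, 400))

closed = 0.5 * math.log((y + 1) / (y - 1))
assert abs(lhs - closed) < 1e-12
assert abs(lhs - rhs) < 1e-10
```

The key algebraic step is (y+1)/(y−1) = 2x²/(2x² − 2) = x²/(x² − 1), which turns one logarithm into the other.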
\mathrm{log}\left(x\right)=-0.123
Developing system of automatic control resonant mode of a vibrating machine | JVE Journals
Konstantin V. Krestnikovskiy1 , Grigory Ya. Panovko2 , Alexander E. Shokhin3
1, 2, 3Mechanical Engineering Research Institute of the RAS, Moscow, Russia
Received 4 July 2016; accepted 30 August 2016; published 7 October 2016
The paper is devoted to the experimental check of a control algorithm for automatic adjustment of oscillations of a mechanical system to the resonance mode. Oscillations of the system are excited by rotating an unbalanced rotor of an asynchronous electric motor. The working principle of the control system is based on measuring the phase shift between the oscillations of the system and the excitation force.
Keywords: vibrating machine, unbalanced rotor, asynchronous electric motor, resonant mode, control system.
One of the basic principles for creating power-efficient vibrational machines is to use a resonant mode of the working body's oscillations. However, in certain cases there is a problem of ensuring the stability of the resonant oscillation mode, associated with process load fluctuations, nonlinear characteristics of the mechanical parts, the nonlinear character of interaction between the working body and the work environment, and features of the interaction between an exciter and a vibrodrive. This applies most of all to machines with an inertial (unbalanced) vibration exciter. For this reason, most vibrational technological machines with inertial vibration exciters operate in below-resonance or above-resonance modes [1-3].
By now the problem of stabilizing the resonant vibration mode in machines with a DC motor has been studied well enough. Adjustment and maintenance of the resonant mode for machines of this type is provided by a system controlling the angular velocity of the motor through changes in the excitation current or supply voltage [4].
However, there are no reliable and effective control principles and algorithms for automatically tuning vibrational machines driven by asynchronous motors to the resonant mode, even though such motors are the most commonly used in machines of this purpose.
The aim of this work is to develop an experimental model of a system to control the rotational frequency of an asynchronous motor, which provides automatic adjustment of the machine's oscillations to the resonance mode, based on the control algorithms proposed by the authors in [5, 6].
2. Object of the study
Previously, in [5, 6] the authors proposed control algorithms to automatically adjust to a resonance mode a single-mass vibration machine, with its working body performing unidirectional oscillations excited by an unbalanced exciter driven by an asynchronous motor. For experimental testing of these algorithms, in the laboratory of vibromechanics of Mechanical Engineering Research Institute of the RAS, an experimental model of a vibration machine has been created as shown in Fig. 1. It represents a resiliently fastened platform with two almost identical asynchronous motors (type AИP-56B4) mounted on the platform. Both motors are supplied from a single inverter of type FR-D740 MITSUBISHI. Both ends of each motor shaft are equipped with discs of identical masses and eccentrics.
Due to the geometrical and force symmetry of the structure, only unidirectional oscillations of the platform are excited in the case of synchronous antiphase rotation of the debalances.
Fig. 1. Experimental model of a vibrational machine
We consider the situation when, for a given power supply frequency, the motors rotate synchronously at a frequency different from the resonant frequency of the platform oscillations, and the system must be automatically tuned to resonance. This formulation is similar to problems arising in practice when, after an initial resonance tuning, a change in the mass of the system causes a drift away from resonance, and the control system must tune to the new resonance regime automatically.
Fig. 2. Principle scheme of a vibrational machine with control system
A principle scheme of the machine with a control system is shown in Fig. 2. The platform 3 together with motors 1 and 2 is considered to be a perfectly rigid body of mass
m
symmetrically supported by viscoelastic dampers 4 and 5 with linear characteristics defined by stiffness coefficient
c
and damping coefficient
k
. Debalances of the rotors 6 and 7 have identical masses
{m}_{r}
and identical eccentricities
r.
Motion of the system is described relative to a fixed coordinate system
x0y
with the origin at the center of mass of the system in static balance. Vertical axis
0y
is directed downward, with the angles of rotation of the rotors
\phi
being measured from positive direction of the axis
0y
in a counter-clockwise direction.
It is assumed that the rotors rotate synchronously and in phase, but in opposite directions. Thus, due to geometrical and force symmetry, the platform performs only unidirectional vertical oscillations. In this case a steady motion of the system can be described by the linear equation:
\stackrel{¨}{y}+2\mathrm{\lambda }\stackrel{˙}{y}+{\mathrm{\Omega }}^{2}y=\frac{{m}_{r}}{m}r{\omega }^{2}\mathrm{c}\mathrm{o}\mathrm{s}\omega t,
y
– the coordinate of the platform's center of mass,
\mathrm{\lambda }=k/\left(2m\right)
\mathrm{\Omega }=\sqrt{c/m}
– the natural frequency of the system,
\omega
– a steady rotational speed of rotors.
The operation of the control system is based on an algorithm described in detail in [5]. According to the algorithm, we consider the phase angle
\epsilon
between the response displacement and the excitation force as a controlled variable which should be equal to
\frac{\pi }{2}
at resonance. As a control parameter we consider frequency
{\omega }_{e}
of the supply voltage, supplied to the motors via a frequency inverter 12 that controls the rotational speed
\omega ={\omega }_{e}/2
of the rotor. For the sake of simplicity of the control system, it is assumed that the slip of the asynchronous motor is negligible in the regimes considered. In this case
\omega
{\omega }_{e}
are related by ratio
\omega ={\omega }_{e}/{p}_{n}
{p}_{n}
– is the number of pole pairs of the asynchronous motor. Then a correction value of supplied voltage frequency is determined by formula [5]:
{\Delta \omega }_{e}={p}_{n}\left(\lambda \operatorname{ctg}\epsilon +\sqrt{{\left(\lambda \operatorname{ctg}\epsilon \right)}^{2}+1}\right).
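The criterion behind the controller is the standard phase response of the linear model above: the steady-state displacement lags the excitation force by ε with tan ε = 2λω/(Ω² − ω²), so ε passes through π/2 exactly at ω = Ω. A small check of this textbook relation; the parameter values are illustrative, not the experimental ones:

```python
import math

# Phase lag of the steady-state response of y'' + 2*lam*y' + Omega^2*y = F*cos(w*t):
# tan(eps) = 2*lam*w / (Omega^2 - w^2); atan2 keeps eps in (0, pi) for w > 0.
def phase_lag(w, Omega, lam):
    return math.atan2(2 * lam * w, Omega**2 - w**2)

Omega, lam = 2 * math.pi * 25.0, 3.0   # hypothetical natural frequency and damping
assert phase_lag(0.8 * Omega, Omega, lam) < math.pi / 2   # below resonance
assert abs(phase_lag(Omega, Omega, lam) - math.pi / 2) < 1e-12  # at resonance
assert phase_lag(1.2 * Omega, Omega, lam) > math.pi / 2   # above resonance
```

The sign of ε − π/2 thus tells the controller on which side of resonance the drive frequency lies.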
To determine the phase shift
\epsilon
simultaneous measurements of vertical displacements
y\left(t\right)
of the platform and exciting force
{F}_{y}\left(t\right)
are performed. For measuring vertical displacement
y\left(t\right)
a piezoelectric accelerometer 8 (type KD-35, MMF, GDR) with subsequent double analog integration of its signal by a matching amplifier 9 (type SM-10, RFT, GDR) is used. Synchronous in-phase rotation of both motors leads to excitation of unidirectional oscillation of the platform with a vertical force
{F}_{y}\left(t\right)=2{m}_{r}r{\omega }^{2}\mathrm{cos}\phi \left(t\right)
In steady-state oscillations of the system, this force is determined by the angle
\phi
of rotation of one of the rotors. The angle is measured by angular position sensor 10 – incremental encoder (type E40HB – 2000 pulses per revolution, Autonics, Korea), mounted on the right-hand motor shaft in the model considered (Fig. 2). Mathematical processing of measured signals is carried out in a programmable microcontroller Atmel SAM3X – block 11, in which according to algorithm [5] the phase angle
\epsilon
is determined in each cycle of oscillations. The moment for the system to reach a steady-state oscillation mode is determined by a change in value of the phase shift in the last
N
periods of oscillation of the system (this value affects both the accuracy of adjustment and the regulation time, and it is usually chosen empirically). In case of
\left|{\epsilon }_{N}-{\epsilon }_{1}\right|<{\epsilon }^{*}
{\epsilon }^{*}
– the predetermined accuracy of determining the steady state, the oscillation of the system is assumed to be steady and the regulation cycle starts.
First, the control system checks the resonance condition
\left|\frac{{\epsilon }_{N}-\pi /2}{\pi /2}\right|\le {\epsilon }^{**}
{\epsilon }^{**}
– the predetermined regulation accuracy. Then, if the system is not at resonance, the correction value of the supply voltage frequency
\Delta {\omega }_{e}
is estimated by the formula presented above.
Note that damping coefficient
\lambda
, which enters the formula for
\Delta {\omega }_{e}
, was found experimentally by analyzing the oscillogram of decaying free oscillations excited by a single impact impulse.
After correction value
\Delta {\omega }_{e}
is found, a corresponding control signal is sent to a frequency inverter 12. The regulation cycle is repeated if necessary.
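The regulation cycle just described can be sketched as a loop: measure ε, test the resonance condition, and, if it fails, step the supply frequency and repeat. The sketch below substitutes a simple bisection step of my own for the paper's correction formula from [5], so it only illustrates the structure of the cycle, not the actual algorithm:

```python
import math

def phase_lag(w, Omega, lam):
    # Standard phase lag of the linear oscillator model; stands in for the
    # measured phase shift between displacement and excitation force.
    return math.atan2(2 * lam * w, Omega**2 - w**2)

def tune_to_resonance(w, Omega, lam, eps2=1e-3, max_cycles=100):
    lo, hi = 0.5 * w, 3.0 * w          # assumed admissible frequency band
    for _ in range(max_cycles):        # one iteration = one regulation cycle
        eps = phase_lag(w, Omega, lam)          # "measured" phase shift
        if abs(eps - math.pi / 2) / (math.pi / 2) <= eps2:
            return w                            # resonance condition met
        if eps < math.pi / 2:
            lo = w                              # below resonance: raise frequency
        else:
            hi = w                              # above resonance: lower frequency
        w = 0.5 * (lo + hi)
    return w

Omega, lam = 2 * math.pi * 27.0, 3.0   # hypothetical plant parameters
w = tune_to_resonance(2 * math.pi * 25.0, Omega, lam)
assert abs(w - Omega) / Omega < 0.01
```

In the real rig the "measurement" step is the accelerometer/encoder processing described below, and the frequency step is sent to the inverter.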
Registration of both oscillations and control signals is provided by measuring module ADC/DAC (type E-14-440-D, L-Card, Russia) – pos. 13 in Fig. 2, connected to PC.
Results of experimental studies of the control system when tuning the system from the below-resonance mode to the resonance mode are shown in Figs. 3-5. Fig. 3 demonstrates the change of the power supply frequency while tuning the system to resonance. Fig. 4 shows the values of the phase (marked by dots) measured at the moments when the control system detected a steady-state mode and adjusted the supplied voltage frequency
{\omega }_{e}
. During the first two seconds the control system checks that the steady-oscillation condition holds for the initially given supply voltage frequency
{\omega }_{e}=
50 Hz (therewith the rotation frequency of the rotor
\omega =
25 Hz). Then the system starts tuning to near-resonance mode until the phase shift value of
\epsilon =
87° is reached. Note that the value of the phase shift angle measured after the first two seconds is
\epsilon =
54°; after this measurement, a new value of the power supply frequency is set:
{\omega }_{e}=
50.65 Hz. Then the regulation cycle is repeated. The system reaches the desired oscillation regime about 35 seconds after the start, which is also seen from the steady amplitude of the acceleration
\stackrel{~}{a}
in diagram in Fig. 5, where
\stackrel{~}{a}=a/g, g=
9.81 m/s².
Fig. 3. Power supply frequency
Fig. 4. Phase shift between oscillation of the system and excitation force
Fig. 5. Acceleration of platform oscillations
Experiments have shown that for other values of the initial frequency of the supply voltage starting from 45 Hz, the control system provides tuning to the required near-resonant mode. Moreover, the closer the initial frequency to the resonance, the less regulation time is required.
For initial frequencies below 45 Hz the control system was unable to detect steady-state oscillations. This is due to loss of the relevant synchronization of the unbalanced exciters as the drive moves away from the resonance frequency corresponding to the purely vertical oscillation mode. Furthermore, the duration of transient processes in the system also increases.
Experimental studies have shown that the proposed control algorithm allows the system to tune from a below-resonance oscillation mode to the near-resonant one with a predetermined phase shift value, with the mass and stiffness parameters of the oscillating system initially unknown. The regulation time depends significantly on the accuracy of measuring the phase shift, as well as on the value of the damping
\mathrm{\lambda }
, which is determined experimentally. It was found that the range of working frequencies of the control system is limited to the range where vertical oscillations dominate, which follows from the need to detect a steady-state oscillation regime. To extend the operating frequency range it is necessary to improve the algorithms for determining a steady-state oscillation regime and a phase shift value, taking into account the dynamic characteristics of a real vibrating machine.
The study was supported by a grant of the Russian Science Foundation (Project No. 15-19-30026).
Vibration in the Technique: Handbook. Vol. 4. Vibration Processes and Machines. Mechanical Engineering, Moscow, 1981.
Blekhman I. I. Synchronization in Science and Technology. ASME Press, New York, NY, USA, 1988.
Blekhman I. I., Fradkov A. L., Tomchina O. P., Bogdanov D. E. Self-synchronization and controlled synchronization: general definition and example design. Mathematics and Computers in Simulation, Vol. 58, Issues 4-6, 2002, p. 367-384.
Astashev V., Babitsky V., Vulfson I. Dynamics of Machines and Machine Control. Mechanical Engineering, Moscow, 1988.
Panovko G. Ya., Shokhin A. E., Eremeikin S. A. The control of the resonant mode of a vibrating machine that is driven by an asynchronous electro motor. Journal of Machinery Manufacture and Reliability, Vol. 44, Issue 2, 2015, p. 109-113.
Panovko G. Ya., Shokhin A. E., Barmina O. O., Gorbunov A. A. Method of Automatic Setting of Resonant Modes of Oscillations of Vibration Machine Driven by Induction Motor. RU Patent 2572657 C1, 2014.
Solve cos 30° cos 35° - sin 30° sin 35°
\mathrm{cos}{30}^{\circ }\mathrm{cos}{35}^{\circ }-\mathrm{sin}{30}^{\circ }\mathrm{sin}{35}^{\circ }
Use the sum identity for cosine:
\mathrm{cos}\left(A+B\right)=\mathrm{cos}A\mathrm{cos}B-\mathrm{sin}A\mathrm{sin}B,\text{ so }\mathrm{cos}{30}^{\circ }\mathrm{cos}{35}^{\circ }-\mathrm{sin}{30}^{\circ }\mathrm{sin}{35}^{\circ }=\mathrm{cos}\left({30}^{\circ }+{35}^{\circ }\right)
=\mathrm{cos}{65}^{\circ }
\mathrm{sin}x+\mathrm{sin}y=a\phantom{\rule{1em}{0ex}}\text{and}\phantom{\rule{1em}{0ex}}\mathrm{cos}x+\mathrm{cos}y=b
\mathrm{tan}\left(x-\frac{y}{2}\right)
Finding all a such that
f\left(x\right)=\mathrm{sin}2x-8\left(a+1\right)\mathrm{sin}x+\left(4{a}^{2}+8a-14\right)x
is increasing and has no critical points
Obviously, the first thing I did was to find the derivative of this function and simplify it a bit and I got:
{f}^{\prime }\left(x\right)=4\left({\mathrm{cos}}^{2}x-2\left(a+1\right)\mathrm{cos}x+\left({a}^{2}+2a-4\right)\right)
But now how do I proceed further? It would have been straightforward had it been a simple quadratic in x.
\frac{{e}^{jx}-j{e}^{-jx}}{j{e}^{jx}-{e}^{-jx}}=\frac{\mathrm{tan}x-1}{\mathrm{tan}x+1}
j=\sqrt{-1}
{\left(\mathrm{sin}x+\mathrm{cos}x\right)}^{2}=1+\mathrm{sin}2x
\mathrm{cos}135
4\mathrm{sin}\left(2y-0.3\right)+5\mathrm{cos}\left(2y-0.3\right)=0
Solving 2\sin(x+30^{\circ})=\cos(x+150^{\circ}) for x between 0^{\circ} \text{ and } 360^{\circ}
joygielymmeloiy 2022-01-24 Answered
2\mathrm{sin}\left(x+{30}^{\circ }\right)=\mathrm{cos}\left(x+{150}^{\circ }\right)
for x between
{0}^{\circ }\text{ }\text{ and }\text{ }{360}^{\circ }
spelkw
2\mathrm{sin}x{\mathrm{cos}30}^{\circ }+2{\mathrm{sin}30}^{\circ }\mathrm{cos}x=\mathrm{cos}x{\mathrm{cos}150}^{\circ }-\mathrm{sin}x{\mathrm{sin}150}^{\circ }
\sqrt{3}\mathrm{sin}x+\mathrm{cos}x=-\frac{\sqrt{3}}{2}\mathrm{cos}x-\frac{1}{2}\mathrm{sin}x
\mathrm{sin}t=,\mathrm{cos}t=,\text{ }\text{and}\text{ }\mathrm{tan}t=
In any triangle, is
\mathrm{sin}A+\mathrm{sin}B+\mathrm{sin}C=\frac{3\sqrt{3}}{2}
always true? Well, I came up with an interesting proof, but I just want to verify it. Applying concavity to the function
y=\mathrm{sin}x
\mathrm{sin}\left(\frac{A+B+C}{3}\right)\ge \frac{\mathrm{sin}A+\mathrm{sin}B+\mathrm{sin}C}{3}
From here we will get
\mathrm{sin}A+\mathrm{sin}B+\mathrm{sin}C\le \frac{3\sqrt{3}}{2}
Also by A.M.
\ge
G.M. in an acute angled triangle
\frac{\mathrm{sin}A+\mathrm{sin}B+\mathrm{sin}C}{3}\ge \sqrt[3]{\mathrm{sin}A\mathrm{sin}B\mathrm{sin}C}
⇒\mathrm{sin}A+\mathrm{sin}B+\mathrm{sin}C\ge 3\left(\sqrt[3]{\mathrm{sin}A\mathrm{sin}B\mathrm{sin}C}\right)
⇒\mathrm{sin}A+\mathrm{sin}B+\mathrm{sin}C\ge 3\left(\frac{\sqrt{3}}{2}\right)=\frac{3\sqrt{3}}{2}>2
and from this I get
\mathrm{sin}A+\mathrm{sin}B+\mathrm{sin}C\ge \frac{3\sqrt{3}}{2}
I have an impulse train given by
\frac{1}{R+1}+\frac{\sum _{k=1}^{R}\mathrm{cos}\left(\frac{2k\pi x}{R+1}\right)}{R+1}
It seems obvious to me that, for x=0, the function returns 1. This is because
\mathrm{cos}\left(0\right)=1
, and we therefore end up with
\frac{1}{R+1}+\frac{R}{R+1}=\frac{R+1}{R+1}=1
However, I get an indeterminate result at x=0. Usually this means there is a division by 0 somewhere, but I can't see any reason for this function to produce an indeterminate result.
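Evaluating the sum as written shows there is no indeterminacy in it at x = 0; a 0/0 can only come from a closed form of the sum (a Dirichlet-kernel-style ratio of sines), where numerator and denominator both vanish at multiples of R + 1. A direct check (mine):

```python
import math

# The impulse train exactly as given: 1/(R+1) + (1/(R+1)) * sum_{k=1}^{R} cos(2*k*pi*x/(R+1)).
def impulse_train(x, R):
    return 1 / (R + 1) + sum(math.cos(2 * k * math.pi * x / (R + 1))
                             for k in range(1, R + 1)) / (R + 1)

for R in (1, 5, 20):
    # At x = 0 every cosine is 1, so the sum is 1/(R+1) + R/(R+1) = 1 exactly.
    assert abs(impulse_train(0.0, R) - 1.0) < 1e-12
    # The train also peaks at 1 at integer multiples of (R+1).
    assert abs(impulse_train(R + 1, R) - 1.0) < 1e-9
```

So the indeterminate form must arise from whatever closed-form expression was substituted for the sum, not from the sum itself.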
\mathrm{sin}4\theta =4{\mathrm{cos}}^{3}\theta \mathrm{sin}\theta -4\mathrm{cos}\theta {\mathrm{sin}}^{3}\theta
Using De Moivre's formula:
{\left(\mathrm{cos}\theta +i\mathrm{sin}\theta \right)}^{4}=\mathrm{cos}4\theta +i\mathrm{sin}4\theta
How can one tell the period of
\mathrm{sin}\left({x}^{\frac{3}{2}}\right)
? Is it
{\left(2\pi \right)}^{\frac{2}{3}}
4{\mathrm{cos}}^{2}\frac{\pi }{5}-2\mathrm{cos}\frac{\pi }{5}-1=0
Tips on evaluating
\underset{x\to 0}{\mathrm{lim}}\frac{\sqrt{1-\mathrm{cos}\left({x}^{2}\right)}}{1-\mathrm{cos}x}
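As a sanity check before manipulating the limit symbolically, the ratio can be evaluated numerically; it appears to approach √2, consistent with the small-angle expansion 1 − cos u ≈ u²/2 applied to both numerator and denominator (my own exploration, not a given answer):

```python
import math

def f(x):
    # sqrt(1 - cos(x^2)) ~ x^2 / sqrt(2), while 1 - cos(x) ~ x^2 / 2,
    # so the ratio should tend to 2 / sqrt(2) = sqrt(2).
    return math.sqrt(1 - math.cos(x * x)) / (1 - math.cos(x))

for x in (0.1, 0.01, 0.001):
    assert abs(f(x) - math.sqrt(2)) < 10 * x  # error shrinks with x
```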
Continuous Random Variables - Definition | Brilliant Math & Science Wiki
Continuous random variables describe outcomes in probabilistic situations where the possible values some quantity can take form a continuum, which is often (but not always) the entire set of real numbers
\mathbb{R}
. They are the generalization of discrete random variables to uncountably infinite sets of possible outcomes.
Continuous random variables are essential to models of statistical physics, where the large number of degrees of freedom in systems mean that many physical properties cannot be predicted exactly in advance but can be well-modeled by continuous distributions. In particular, quantum mechanical systems often make use of continuous random variables, since physical properties in these cases might not even have definite values.
Definition of Continuous Random Variables
Examples of Continuous Random Variables
Recall that a random variable is a quantity which is drawn from a statistical distribution, i.e. it does not have a fixed value. A continuous random variable is a random variable whose statistical distribution is continuous. Formally:
A continuous random variable is a function
X
on the outcomes of some probabilistic experiment which takes values in a continuous set
V
That is, the possible outcomes lie in a set which is formally (in the sense of real analysis) continuous, which can be understood intuitively as having no gaps. The fact that
X
is technically a function can usually be ignored for practical purposes outside of the formal field of measure theory. In applications,
X
is treated as some quantity which can fluctuate, e.g. in repeated experiments, and which has statistical properties like mean and variance.
In the next article on continuous probability density functions, the meaning of
X
will be explored in a more practical setting.
Which of the following are continuous random variables?
(1) The sum of numbers on a pair of two dice.
(2) The possible sets of outcomes from flipping ten coins.
(3) The possible sets of outcomes from flipping (countably) infinite coins.
(4) The possible values of the temperature outside on any given day.
(5) The possible times that a person arrives at a restaurant.
(4) and (5) are the continuous random variables. Going through each case in order:
(1) Ignoring reordering of the dice and repeated values, there are at most 36 possible sets of values on the two dice. In reality the number is smaller, but finding it would require more careful counting. However, this is sufficient to note that this value is a discrete random variable, since the number of possible values is finite.
(2) Again, the possible sets of outcomes is larger (bounded above by
2^{10}
, certainly) but finite and the same logic applies as in (1).
(3) This case is more interesting because there are infinitely many coins. However, there are only countably many sets of outcomes. A countable set of real numbers is not continuous (consider the countable rational numbers, which are not continuous).
(4) The temperature outside on any given day could be any real number in a given reasonable range. In particular, on no two days is the temperature exactly the same number out to infinite decimal places. Thus, the temperature takes values in a continuous set.
(5) This case is similar to (4): no two people ever arrive at exactly the same time out to infinite precision. The precise time a person arrives is a value in the set of real numbers, which is continuous. Note that this implies that the probability of arriving at any one given time is zero, a fact which will be discussed in the next article.
Which of the following answers is the continuous random variable?
The minimum outcome from rolling infinitely many dice
The number of raindrops in a storm
The number of people that show up to class
The angle you face after spinning in a circle
See uniform random variables, normal distribution, and exponential distribution for more details.
A uniform random variable is one where every value is drawn with equal probability. For instance, a random variable that is uniform on the interval
[0,1]
is drawn from the distribution:
f(x) = \begin{cases} 1 \quad & x \in [0,1] \\ 0 \quad & \text{ otherwise} \end{cases}.
A random variable uniform on
[0,1]
A normal random variable is drawn from the classic "bell curve," the distribution:
f(x) = \frac{1}{\sqrt{2\pi \sigma^2}} e^{-\frac{(x-\mu)^2}{2\sigma^2}},
where
\mu
and
\sigma^2
are the mean and variance of the distribution, respectively. The peak of the normal distribution is centered at
\mu
, and
\sigma^2
characterizes the width of the peak.
A normal random variable with
\mu = 0
and
\sigma^2 = 1
An exponential random variable is drawn from the distribution:
f(x) = \lambda e^{-\lambda x},
\lambda
is the decay rate. This distribution has mean
\frac{1}{\lambda}
and variance
\frac{1}{\lambda^2}
. Exponential random variables are often useful in measuring the times between events like radioactive decays. In this case the formula for the mean makes sense: the larger the value of
\lambda
, the faster the decay rate and the less time expected on average for one decay to occur.
An exponential distribution with parameter
\lambda = 2
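These moment formulas are easy to confirm by simulation with the standard library (an illustrative check; note that `random.expovariate` takes the rate λ as its argument):

```python
import random

# An exponential random variable with rate lambda = 2 should have
# mean 1/lambda = 0.5 and variance 1/lambda^2 = 0.25.
random.seed(42)
lam = 2.0
samples = [random.expovariate(lam) for _ in range(200_000)]
mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
assert abs(mean - 1 / lam) < 0.01
assert abs(var - 1 / lam**2) < 0.01
```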
What is the mean of the normal distribution given by:
\large f(x)=\frac{1}{\sqrt{4\pi}} e^{-\frac{(x-1)^2}{4}}?
1
2
4
\sqrt{2}
Cite as: Continuous Random Variables - Definition. Brilliant.org. Retrieved from https://brilliant.org/wiki/continuous-random-variables-definition/
Ranging potion - OSRS Wiki
For the barbarian mix, see Ranging mix.
A ranging potion is a potion made by using a wine of Zamorak on a dwarf weed potion (unf), requiring 72 Herblore, yielding a ranging potion(3) and 162.5 Herblore experience. A dose of ranging potion provides a temporary skill boost to Ranged equal to 4 + 10% of the player's current Ranged level, rounded down.
Crystal dust may be added to ranging potion, 1 dust per dose of potion, to make a divine ranging potion, requiring 74 Herblore.
After Barbarian Herblore Training, caviar can be added to a ranging potion(2), requiring 80 Herblore, yielding a ranging mix and 54 Herblore experience. A ranging mix has 2 doses, boosts Ranged equal to 4 + 10% of the player's current Ranged level, rounded down, and heals 6 Hitpoints.
Ranged boost is calculated with:
{\displaystyle \lfloor RangedLevel\times {\frac {1}{10}}\rfloor +4}
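The boost formula above can be sketched in code (a hypothetical helper, not from the game itself):

```python
# Ranged boost from a ranging potion, per the formula above:
# boost = floor(RangedLevel / 10) + 4
def ranging_potion_boost(ranged_level: int) -> int:
    return ranged_level // 10 + 4

# e.g. at 99 Ranged the boost is 9 + 4 = 13 levels
boost_99 = ranging_potion_boost(99)
```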
Wine of zamorak 1 1,111
Ranging potion(3) 1 302
1 × Ranging potion(1)
1 × Crystal dust
Ranging mix(2)
Ranging potion(1) 106
Ranging potion(2)
Grotesque Guardians 1 2 × 6/137
Grubby Chest N/A 1 8/20
Spiritual ranger Zaros 158 1 11/128
Alchemical Hydra 426 1 2 × 7/101
Kalphite Queen 333 1 1/9
Zombie (Tarn's Lair) Medium level 61–80 1 5/128
Reward casket (elite) N/A 30 (noted) 1/28,750
The 4 dose potion was added to the RuneScape 2 Beta.
The 4 dose potion became permanently available with the launch of RuneScape 2.
Retrieved from ‘https://oldschool.runescape.wiki/w/Ranging_potion?oldid=14283792’ |
Motion In A Straight Line, Popular Questions: CBSE Class 11-science PHYSICS, Physics Part I - Meritnation
Ritesh Rawat asked a question
Purvi asked a question
A car is moving with a constant velocity of 60 m/s towards north and a truck is moving with a constant velocity of 80 m/s towards east. The magnitude of the relative velocity of the car w.r.t. the truck will be
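A worked sketch of this question: taking east as +x and north as +y, the relative velocity is the vector difference, whose magnitude is √(60² + 80²) = 100 m/s:

```python
import math

# Relative velocity of the car with respect to the truck.
v_car = (0.0, 60.0)    # 60 m/s due north (+y)
v_truck = (80.0, 0.0)  # 80 m/s due east (+x)

v_rel = (v_car[0] - v_truck[0], v_car[1] - v_truck[1])
magnitude = math.hypot(*v_rel)  # sqrt(80^2 + 60^2) = 100 m/s
```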
Saif Siddiqui asked a question
Siddharth Setia asked a question
Show that the instantaneous velocity of a particle at any instant of time is equal to the slope of the tangent drawn to displacement time curve at that instant.
Sv & 1 other asked a question
Sonal Patil asked a question
What is the difference between absolute velocity and relative velocity?
Babita Mehta asked a question
If the acceleration of a particle is constant in magnitude but not in direction what type of path does the particle follow?
Rachael Abraham asked a question
Sayantanee Roy asked a question
Alllu asked a question
Abhik Ganguly asked a question
What is the difference between delta x/delta t and dx/dt ?
Creative Mind asked a question
6. What is the moment of inertia of a rod of mass M, length L about an axis perpendicular to it through one end, given that the moment of inertia about the centre of mass is (1/12)ML^2?
Shriya Kohli asked a question
A spaceship going away from the earth at half the speed of light fires from its nose a rocket which travels with a speed of 0.4c with reference to the ship. The speed of the rocket with reference to earth is?
Apurva Arora & 1 other asked a question
A balloon starting from the ground has been ascending vertically at a uniform velocity for 4 sec and a stone let fall from it reaches the ground in 6 sec. Find the velocity of the balloon and its height when the stone was let fall. (g = 10 m/s².)
Please provide step by step solution to problem with the explanation of the question.
A ball is thrown horizontally from the top of a tower with a velocity of 40 ms-1. Take g = 10 ms-2
a) Find the horizontal and vertical displacement after 1,2,3,4,5 seconds, then the path of the motion of ball.
a train starts from a station with acceleration 0.2 m/s2 on a straight track and then comes to rest after attaining maximum speed on another station due to retardation 0.4m/s2. if total time spent is half an hour, then distance between two stations is {neglect length of train}
Sai Siddarth Murali asked a question
A particle moves along a semi circular path of radius R in time t with constant speed. For the particle, calculate the distance travelled, displacement, avg speed, avg velocity and avg acceleration.
Rocking Rajal asked a question
Avani Goel asked a question
Explain average velocity and average speed in terms of graphical representation
Coolm30@ Jani asked a question
A particle moves along x-axis in positive direction. Its acceleration 'a' is given as a = cx + d, where x denotes the x-coordinate of particle, c and d are positive constants. For velocity-position graph of particle to be of type as shown in figure, find the value of speed (in m/s) of particle at x = 0. Take c = 1s–2 and d = 3 ms–2
Lakshmy asked a question
Baibhav Kumar asked a question
A particle is in motion along a straight track. As it crosses a fixed point, a stop watch is started. The body travels a distance of 180 cm in the first 3 s and 220 cm in the next 5 s. What will be its velocity at the end of the ninth second?
Find dy/dt if y = sin(2 pi x + (pi/6)). In the answers, I am solving it. Please tell me my mistake. Also please tell me some shortcut method to solve it.
Utsav asked a question
On a two lane road, car A is travelling with a speed of 36 km/h. Two cars B and C approach car A in opposite directions with a speed of 54 km/h each. At a certain instant, when the distance AB is equal to AC, both being 1 km, B decides to overtake A before C does. What minimum acceleration of car B is required to avoid an accident?
Can position- time graph have a negative slope?
Shikhar Gupta asked a question
Integrate: ∫ sin(4x + 5) dx
Yatharth Batra asked a question
Two trains A and B, of length 400 m each, are moving on two parallel tracks with a uniform speed of 72 km/h in the same direction, with A ahead of B. The driver of B decides to overtake A and accelerates by 1 m/s^2. If after 50 s the guard of B just brushes past the driver of A, what was the original distance between them?
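A worked sketch of this two-train question using relative motion (variable names are illustrative). Both trains start at 72 km/h = 20 m/s, so their relative initial speed is zero and B gains on A only through its acceleration:

```python
# Relative-motion solution: distance B gains on A in time t.
a_rel = 1.0   # m/s^2, B's acceleration relative to A
t = 50.0      # s

# s = 0.5 * a * t^2 (relative initial speed is zero)
gap_closed = 0.5 * a_rel * t**2   # 1250 m

# The guard (rear) of B just brushes past the driver (front) of A,
# so the original separation between those two points equals gap_closed.
original_distance = gap_closed
```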
If unit vectors A and B are inclined at an angle θ, then prove that |A − B| = 2 sin(θ/2)
Arun Tej asked a question
Minni asked a question
Q. The position of an object moving along x-axis is given by x= a + bt2 where a=8.5m, b=2.5m/s2 and t is measured in seconds. What is its velocity at t=0s and t=2.0s. What is the average velocity b/w t=2.0s and t= 4.0s ?pls reply soon
Subhasri asked a question
A ball is allowed to fall from rest from height h. If it travels 9/25th of the total height in the last second of its fall, then the ball will hit the ground with speed
Shaurya Sharma asked a question
The initial velocity of a particle is u (at t = 0) and the acceleration a is given by a = αt^(3/2). Which of the following relations is valid?
1. v = u + αt^(3/2)
2. v = u + (3/2)αt^3
3. v = u + (2/5)αt^(5/2)
Deepak Chatti asked a question
Mugdha Abhyankar asked a question
A highway motorist travels at a constant velocity of 45 km/h in a 30 km/h zone. A motorcyclist police officer has been watching from behind a billboard, and at the same moment the speeding motorist passes the billboard, the police officer accelerates uniformly from rest to overtake her. If the acceleration of the police officer is 10 km h-2, how long does it take to reach the motorist?
A food packet is released from a helicopter which is rising steadily at 2 m/s.
After 2 sec, (i) what is the velocity of the packet? (ii) How far is it below the helicopter?
Q.15. Two cars move uniformly towards each other, the distance between them decreases by 50 m/s. If they move in same direction with different speeds, the distance between them increases by 10 m/s. The speed of two cars will be
(1) 30 m/s and 20 m/s
Christa asked a question
A police constable is chasing a thief who is initially 10m ahead of the constable.The uniform speeds of the constable and the thief are 10m/s and 8 m/s respectively.From the plot of the position time graph for the constable and the thief ,find the time the constable will take to catch the thief and the distance the constable has to run.
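A worked sketch of the chase above: the constable closes the 10 m head start at the relative speed of 2 m/s:

```python
# Constable-and-thief chase, solved with relative speed.
lead = 10.0          # m, thief's head start
v_constable = 10.0   # m/s
v_thief = 8.0        # m/s

closing_speed = v_constable - v_thief    # 2 m/s
t_catch = lead / closing_speed           # 5 s
distance_run = v_constable * t_catch     # 50 m run by the constable
```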
Nivedita Saha asked a question
A particle moving along the x-axis has acceleration f at time t given by f = f0[1 − t/T], where f0 and T are constants. The particle at t = 0 has zero velocity. In the time interval between t = 0 and the instant when f = 0, the particle's velocity vx is
A man bailed out of a balloon. After some time the parachute opened up and he could land on the earth's surface at a retardation of 2.4 m s-2, and took 4 times the time that elapsed before the parachute opened. If the balloon was at a height of 398.4 m, how long was he airborne? At what speed did he touch the ground?
Eshani Verma asked a question
Abraham asked a question
Drops of water fall at regular intervals from the roof of a building of height 16m, the first drop striking the ground at the same moment as the fifth drop detaches itself from the roof. Find the distance between the separate drops in the air, as the first drop reaches the ground.
Ajitesh asked a question
what are herbaceous mammals?
what are horizontal and vertical component of acceleration of a body thrown horizontally with uniform speed.
the v-t plot of a moving object is shown in the figure. the average velocity of the object during the first 10 seconds is
Devanshi Goel asked a question
Sudha Nair asked a question
Dinesh Kumar Jain asked a question
The displacement x of a particle moving along x-axis at time t is given by
{x}^{2} =2{t}^{2} +6t
. What is the velocity at any time t?
anshi1994 asked a question
A motor car is going due north at a speed of 50 km/h. It makes a 90° left turn without changing its speed. The change in velocity of the car is?
Kush Rustagi asked a question
A bullet loses 1/20 of its velocity in passing through a plank.What is the least number of planks required to stop the bullet?
1) A small coin is dropped down a well. The splash is heard 2.3 s later. How deep is the well? (Assuming sound travels quickly enough for any delaying effect to be ignored.)
2) A stone is thrown upwards from the top of a tower 85 m high. It reaches the ground in 5 seconds. Calculate:
1) the greatest height above the ground.
2) the velocity with which it reaches the ground.
3) the time taken to reach maximum height (g = 10 m/s^2).
3) A stone falls freely under gravity, starting from rest. Calculate the ratio of the distance travelled by the stone during the first half of any interval of time to the distance travelled during the second half of the same interval.
Lagan Jain asked a question
Velocity and acceleration of a particle at some instant of time are v = (2i-j+2k ) m/s and a = (i + 6j-k) m/s2 . then the speed of the particle is ....................... at a rate of ............................ m/s2.
(a)increasing , 2
(b)decreasing , 2
(c)increasing , 4
(d)decreasing , 4
Himanshu Pandey asked a question
A parachutist bails out from an aeroplane and, after dropping through a distance of 40 m, opens the parachute and decelerates at 2 m/s². If he reaches the ground with a speed of 2 m/s, how long is he in the air? At what height did he bail out from the plane?
Draw the position-time graph for an object moving with a) positive acceleration, b) negative velocity.
Atul asked a question
A car moving with a speed of 50 km/h can be stopped by brakes after at least 6 m. What will be the minimum stopping distance if the same car is moving at a speed of 100 km/h?
Draw the position-time graph for two bodies moving with different velocities in the same direction while they start from different positions. Do they meet? If yes, explain.
Anupall Handique asked a question
The distance x of a particle moving in 1 dimension under the action of a constant force is related to time t by the equation t = √x + 3, where x is in metres and t is in seconds. Find the displacement when its velocity is zero.
A body falls freely from rest for 6 seconds. Find the distance travelled in the last two seconds. Take g = 9.8 m/s².
A body is projected vertically upwards with a velocity of 20 m/s. Find the distance travelled by it in 3 seconds, taking g = 10 m/s².
Liniya asked a question
Medha Hegde asked a question
Is it possible for the velocity and the acceleration of an object to have opposite signs? If not, state a proof.If so, give an example of such a situation and sketch a velocity–time graph to prove your point.
Please explain the answer for this question.
Varneet asked a question
What is the difference between resultant and magnitude of resultant
Spurthi A A Srinivas asked a question
For instantaneous velocity, the delta 't' tends to zero. What does that mean? Please explain it to me through an example |
Breach growth formula (Water Overlay) - Tygron Support wiki
Water can flow through breaches into levee protected areas. These breaches often start small and grow over time[1].
The water flowing through breaches can originate from an external area outside the project area or an input area within the project area.
First, the difference in height of the water on either side of the breach is calculated.
{\displaystyle \Delta h=abs(max(0,w_{e,t}-H_{b,t})-max(0,w_{b,t}-H_{b,t}))}
Using the height difference, the breach width increase is calculated.
{\displaystyle \Delta W_{b,t}=f_{m}\cdot ({\sqrt {g}}\cdot {\sqrt {\Delta h^{3}}}/cs_{b})\cdot \log _{10}(1+(0.04\cdot g/cs_{b})\cdot \Delta t/3600)}
The current breach width is then equal to the last calculated breach width, plus the calculated breach width increment.
{\displaystyle W_{b,t}=W_{b,t-1}+\Delta W_{b,t}}
{\displaystyle W_{b}}
= The BREACH_WIDTH of the breach.
{\displaystyle H_{b,t}}
= The BREACH_HEIGHT of the breach at time t.
{\displaystyle W_{b,t}}
= The calculated breach width, initially equal to Wb.
{\displaystyle w_{b,t}}
= water level at breach at time t.
{\displaystyle w_{e,t}}
= water level at entry area (external or internal) at time t.
{\displaystyle \Delta h_{t}}
= The difference between the height of the water columns on either side of the breach at time t.
{\displaystyle f_{m}}
= Material factor, set to 1.3 (average for sand and clay levees).
{\displaystyle g}
= Gravity constant, defined for the Water Overlay.
{\displaystyle cs_{b}}
= The critical BREACH_SPEED of the breach (e.g. 0.2 for sand and 0.5 for clay).
{\displaystyle \Delta W_{b,t}}
= The calculated width increase of the breach at time t.
{\displaystyle \Delta t}
= Computational timestep.
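Putting the formulas above together, a minimal sketch of one growth step might look like this (the function name and the guard for a non-positive head difference are my additions; the parameter values are illustrative, using the sand values quoted above):

```python
import math

# Breach width increase per timestep, following the article's formula:
# dW = f_m * (sqrt(g) * sqrt(dh^3) / cs_b) * log10(1 + (0.04*g/cs_b) * dt/3600)
def breach_width_increase(dh, cs_b, dt, f_m=1.3, g=9.81):
    """Width increase (m) for head difference dh (m) over timestep dt (s)."""
    if dh <= 0:  # no head difference, no growth (added guard, assumption)
        return 0.0
    return (f_m * (math.sqrt(g) * math.sqrt(dh**3) / cs_b)
            * math.log10(1.0 + (0.04 * g / cs_b) * dt / 3600.0))

# Grow the breach over successive timesteps: W(t) = W(t-1) + dW(t)
width = 10.0  # initial BREACH_WIDTH in metres (illustrative)
for _ in range(10):
    width += breach_width_increase(dh=2.0, cs_b=0.2, dt=60.0)
```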
The breach height Hb,t can be defined over time, by configuring the BREACH_HEIGHT as an attribute array.
For an example of the breach growth, take a look at the Demo Breach Project available in all domains.
The following topics are related to this formula.
Breach flow formula
↑ Verheij, H.J. Aanpassen van het bresgroeimodel in HIS-OM: Bureaustudie. http://resolver.tudelft.nl/uuid:aedc8109-da43-4a03-90c3-44f706037774 (last visited 2019-03-08)
Retrieved from "https://support.tygron.com/w/index.php?title=Breach_growth_formula_(Water_Overlay)&oldid=44803" |
Dielectric - New World Encyclopedia
Various types of capacitors. Each capacitor includes a pair of conducting plates separated by a dielectric.
A dielectric, or electrical insulator, is a material that is highly resistant to the flow of an electric current. Dielectric materials can be solids, liquids, or gases. In addition, a vacuum is an excellent dielectric.
An important application of dielectrics is to separate the plates of capacitors. A capacitor's ability to store electric charge depends on the dielectric that separates its plates.
When a dielectric medium interacts with an applied electric field, charges are redistributed within its atoms or molecules. This redistribution alters the shape of an applied electrical field both within the dielectric medium and in the nearby region.
When two electric charges move through a dielectric medium, the interaction energies and forces between them are reduced. When an electromagnetic wave travels through a dielectric, its speed decreases and its wavelength shortens.
When an electric field is initially applied across a dielectric medium, a current flows. The total current flowing through a real dielectric is made up of two parts: a conduction and a displacement current. In good dielectrics, the conduction current will be extremely small. The displacement current can be considered the elastic response of the dielectric material to any change in the applied electric field. As the magnitude of the electric field is increased, a displacement current flows, and the additional displacement is stored as potential energy within the dielectric. When the electric field is decreased, the dielectric releases some of the stored energy as a displacement current. The electric displacement can be separated into a vacuum contribution and one arising from the dielectric by
{\displaystyle \mathbf {D} =\varepsilon _{0}\mathbf {E} +\mathbf {P} =\varepsilon _{0}\mathbf {E} +\varepsilon _{0}\chi \mathbf {E} =\varepsilon _{0}\mathbf {E} \left(1+\chi \right),}
where P is the polarization of the medium, E is the electric field, D is the electric flux density (or displacement), and
{\displaystyle \chi }
its electric susceptibility. It follows that the relative permittivity and susceptibility of a dielectric are related,
{\displaystyle \varepsilon _{r}=\chi +1}
Dielectric constants of selected materials (relative permittivity):
Teflon™: 2.1
Polystyrene: 2.4–2.7
Pyrex (glass): 4.7 (3.7–10)
Furfural: 42.0
Water: 88 (0 °C), 80.1 (20 °C), 55.3 (100 °C), 34.5 (200 °C)
Hydrofluoric acid: 83.6 (0 °C)
Formamide: 84.0 (20 °C)
Hydrogen peroxide: 128 (aq)–60
Barium strontium titanate: 15 (nc)–500
Barium titanate: 90 (nc)–1250–10,000
(La,Nb):(Zr,Ti)PbO3: 500–6000
The dielectric constant (or static permittivity) of a material (under given conditions) is a measure of the extent to which the material concentrates electrostatic lines of flux. In practice, it is measured as the "relative dielectric constant," which is defined as the ratio of the amount of electrical energy stored in an insulator when a static electric field is imposed across it, relative to the permittivity of a vacuum (which has a dielectric constant of 1).
The relative dielectric constant is represented as εr (or sometimes
{\displaystyle \kappa }
, K, or Dk). Mathematically, it is defined as:
{\displaystyle \varepsilon _{r}={\frac {\varepsilon _{s}}{\varepsilon _{0}}}}
where εs is the static permittivity of the material, and ε0 is vacuum permittivity. Vacuum permittivity is derived from Maxwell's equations by relating the electric field intensity E to the electric flux density D. In vacuum (free space), the permittivity ε is just ε0, so the dielectric constant is one.
Permittivity is a physical quantity that describes how an electric field affects and is affected by a dielectric medium, and is determined by the ability of a material to polarize in response to the field, and thereby reduce the field inside the material. Thus, permittivity relates to a material's ability to transmit (or "permit") an electric field.
It is directly related to electric susceptibility. For example, in a capacitor, an increased permittivity allows the same charge to be stored with a smaller electric field (and thus a smaller voltage), leading to an increased capacitance.
The term dielectric strength may be defined as follows:
For an insulating material, dielectric strength is the maximum electric field strength that the material can withstand intrinsically without breaking down, that is, without experiencing failure of its insulating properties.
For a given configuration of dielectric material and electrodes, dielectric strength is the minimum electric field that produces breakdown.
The theoretical dielectric strength of a material is an intrinsic property of the bulk material and is independent of the configuration of the material or the electrodes with which the field is applied. At breakdown, the electric field frees bound electrons. If the applied electric field is sufficiently high, free electrons may become accelerated to velocities that can liberate additional electrons during collisions with neutral atoms or molecules in a process called avalanche breakdown. Breakdown occurs quite abruptly (typically in nanoseconds), resulting in the formation of an electrically conductive path and a disruptive discharge through the material. For solid materials, a breakdown event severely degrades, or even destroys, their insulating capability.
Breakdown field strength
The field strength at which breakdown occurs in a given case is dependent on the respective geometries of the dielectric (insulator) and the electrodes with which the electric field is applied, as well as the rate of increase at which the electric field is applied. Because dielectric materials usually contain minute defects, the practical dielectric strength will be a fraction of the intrinsic dielectric strength seen for ideal, defect free, material. Dielectric films tend to exhibit greater dielectric strength than thicker samples of the same material. For instance, dielectric strength of silicon dioxide films of a few hundred nm to a few microns thick is approximately ten MV/cm. Multiple layers of thin dielectric films are used where maximum practical dielectric strength is required, such as high voltage capacitors and pulse transformers.
Dielectric strength of various common materials (MV/m):
Neoprene rubber: 12
Pyrex glass: 14
Silicone oil: 15
Dielectrics in Parallel-Plate Capacitors
The electrons in the molecules shift toward the positively charged left plate. The molecules then create a leftward electric field that partially cancels the field created by the plates. (The air gap is shown for clarity; in real capacitor, the dielectric is usually in direct contact with the plates.)
Putting a dielectric material between the plates in a parallel plate capacitor causes an increase in the capacitance in proportion to k, the relative permittivity of the material:
{\displaystyle C={\frac {k\epsilon _{0}A}{d}}}
{\displaystyle \epsilon _{0}}
is the permittivity of free space, A is the area covered by the capacitors, and d is the distance between the plates.
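A minimal numeric sketch of this capacitance formula (illustrative plate dimensions; the Pyrex value uses k ≈ 4.7 from the table above):

```python
# Parallel-plate capacitance with a dielectric: C = k * eps0 * A / d
EPS0 = 8.854e-12  # F/m, permittivity of free space

def capacitance(k, area_m2, gap_m):
    return k * EPS0 * area_m2 / gap_m

# Example: 1 cm^2 plates, 0.1 mm apart, vacuum (k = 1) vs Pyrex (k ~ 4.7)
c_vacuum = capacitance(1.0, 1e-4, 1e-4)  # ~8.85 pF
c_pyrex = capacitance(4.7, 1e-4, 1e-4)   # ~4.7x larger
```

The dielectric multiplies the capacitance by exactly k for the same geometry, which is the proportionality stated above.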
This happens because an electric field polarizes the bound charges of the dielectric, producing concentrations of charge on its surfaces that create an electric field opposed (antiparallel) to that of the capacitor. Thus, a given amount of charge produces a weaker electric field between the plates than it would without the dielectric, which reduces the electric potential. Considered in reverse, this argument means that, with a dielectric, a given electric potential causes the capacitor to accumulate a larger charge polarization.
The use of a dielectric in a capacitor presents several advantages. The simplest of these is that the conducting plates can be placed very close to one another without risk of contact. Also, if subjected to a very high electric field, any substance will ionize and become a conductor. Dielectrics are more resistant to ionization than dry air, so a capacitor containing a dielectric can be subjected to a higher operating voltage. Layers of dielectric are commonly incorporated in manufactured capacitors to provide higher capacitance in a smaller space than capacitors using only air or a vacuum between their plates, and the term dielectric refers to this application as well as the insulation used in power and RF cables.
Some practical dielectrics
Dielectric materials can be solids, liquids, or gases. In addition, a high vacuum can also be a useful, lossless dielectric even though its relative dielectric constant is only unity.
Solid dielectrics are perhaps the most commonly used dielectrics in electrical engineering, and many solids are very good insulators. Some examples include porcelain, glass, and most plastics. Air, nitrogen and sulfur hexafluoride are the three most commonly used gaseous dielectrics.
Industrial coatings such as parylene provide a dielectric barrier between the substrate and its environment.
Mineral oil is used extensively inside electrical transformers as a fluid dielectric and to assist in cooling. Dielectric fluids with higher dielectric constants, such as electrical grade castor oil, are often used in high voltage capacitors to help prevent corona discharge and increase capacitance.
Because dielectrics resist the flow of electricity, the surface of a dielectric may retain stranded excess electrical charges. This may occur accidentally when the dielectric is rubbed (the triboelectric effect). This can be useful, as in a Van de Graaff generator or electrophorus, or it can be potentially destructive as in the case of electrostatic discharge.
Specially processed dielectrics, called electrets, may retain excess internal charge or "frozen in" polarization. Electrets have a semipermanent external electric field, and are the electrostatic equivalent to magnets. Electrets have numerous practical applications in the home and industry.
Some dielectrics can generate a potential difference when subjected to mechanical stress, or change physical shape if an external voltage is applied across the material. This property is called piezoelectricity. Piezoelectric materials are another class of very useful dielectrics.
Some ionic crystals and polymer dielectrics exhibit a spontaneous dipole moment which can be reversed by an externally applied electric field. This behavior is called the ferroelectric effect. These materials are analogous to the way ferromagnetic materials behave within an externally applied magnetic field. Ferroelectric materials often have very high dielectric constants, making them quite useful for capacitors.
Retrieved from https://www.newworldencyclopedia.org/p/index.php?title=Dielectric&oldid=1011366 |
A New Conditional for Naive Truth Theory
Notre Dame J. Formal Logic 54(1): 87-104 (2013). DOI: 10.1215/00294527-1731407
In this paper a logic suitable for reasoning disquotationally about truth,
{\mathsf{TJK}}^{+}
, is presented and shown to have a standard model. This work improves on Hartry Field’s recent results establishing consistency and
\omega
-consistency of truth theories with strong conditional logics. A novel method utilizing the Banach fixed point theorem for contracting functions on complete metric spaces is invoked, and the resulting logic is shown to validate a number of principles which existing revision theoretic methods have so far failed to provide.
Secondary: 03Axx , 03B47 , 03B50
Keywords: contractionless logic, Curry's paradox, fixed point theorem, liar paradox, nonclassical logic, semantic paradoxes
Andrew Bacon. "A New Conditional for Naive Truth Theory." Notre Dame Journal of Formal Logic 54(1), 87-104 (2013) |
Nonlinear dynamic effect in synthetic fibres from semi- and rigid chain polymers | JVE Journals
Alla A. Romanova1 , Pavel P. Rymkevich2
1, 2Military Space Academy named after A. F. Mozhayskii, St. Petersburg, Russia
A study on elastic and relaxation properties of a set of oriented polymers in the dynamic strain mode is reported in this paper. It is established that all investigated objects show the beatings phenomenon at a certain range of stresses and temperatures. The model and possible mechanisms of the observed phenomena are offered.
Keywords: semicrystalline polymers, polymer fibres, dynamic mechanical properties, beatings, Terlon, Kevlar, polyethylene terephthalate.
High tenacity synthetic fibres based on semi- and rigid chain polymers and fibrous composite materials are widely used in the technical fields of application, particularly, in aerospace engineering, parachute production and anti-vibration systems. In real-life conditions of production and operation uniaxially oriented polymers undergo a complicated mechanical effect, including not only static loading, but also periodic strain. It is the dynamic nonlinear processes where the relaxation properties of polymers are clearly seen.
Here we have investigated synthetic fibres formed of semi- and rigid chain polymers, such as poly-
p
-phenylene terephthalamide (PPTA, trade mark Terlon, the Russian analogue of Kevlar), poly-
p
-benzimidazole (PBI, Russian trade mark SVM), and polyethylene terephthalate (PET, Russian trade mark Lavsan).
The investigation of elastic and relaxation properties of synthetic fibres in the dynamic non-destructive strain mode was conducted using the free longitudinal oscillation method at the maximum stresses not exceeding 40-50 % of ultimate tensile strength [1]. The temperature test was conducted in the temperature range of 20-450°C.
The range of mechanical stresses was chosen so that the stress in the fibres is limited to 40-50 % of the ultimate tensile strength. When determining the stress in a specimen, it was assumed that there is no significant change in the specimen cross-section at the various tensile loads, so this change may be ignored. The measurements were taken at a load step of 5-10 N, depending on the investigated material.
Before each test, all the specimens were dried in a desiccator at a relative humidity of 65 % for not less than 2 days; in order to remove part of the plastic strain and erase the mechanical "memory", the specimens were placed under a minimal test load of 2 N for 30 minutes.
The tests investigating the impact of the level of mechanical stress on the dynamic characteristics were conducted isothermally at an ambient temperature of 20±2 °C and a relative humidity of 65±5 %. The fibre clamping length was 200 mm.
The temperature test was conducted in the temperature range of 20-450 °C. The rate of temperature increase was ~5 °C/min. The temperature gradient along the specimen length did not exceed 3-6 °C.
The amplitude of oscillations imparted to the specimen did not exceed 0.5 % of the clamping length.
We have determined that all the investigated fibres, within a certain range of mechanical stress (or levels of static strain), exhibit a complicated nonexponential form of decaying oscillations, or beatings. This form, which is hard to explain within the traditional framework, is represented schematically in Fig. 1.
Fig. 1. Beatings (amplitude-modulated free oscillations)
The equation for the dynamic part of the strain can be written as follows:
{\epsilon }_{dyn}\left(t\right)={C}_{0}{e}^{-{B}_{0}t}+{C}_{1}{e}^{-{B}_{1}t}\mathrm{sin}\left({\omega }_{1}t+{\phi }_{1}\right)+{C}_{2}{e}^{-{B}_{2}t}\mathrm{sin}\left({\omega }_{2}t+{\phi }_{2}\right)+\delta \epsilon \left(t\right),
where {C}_{i} are the initial amplitudes of the oscillations; {\omega }_{bas}=\left({\omega }_{1}+{\omega }_{2}\right)/2 is the basic (beating) angular frequency of the oscillations; {\omega }_{mod}=\left({\omega }_{1}-{\omega }_{2}\right)/2 is the angular frequency of the amplitude modulation; \phi is the initial phase; and \delta \epsilon \left(t\right) forms the so-called "white noise". Analysis revealed that this noise does not exceed the error range of 1-3 %.
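The beating form sketched in Fig. 1 arises from the superposition of two modes with close frequencies: the sum of two equal-amplitude sines equals a carrier at ω_bas modulated by a slow envelope at ω_mod. A minimal Python check of this identity (the frequencies are arbitrary illustrative values, not measured ones):

```python
import math

w1, w2 = 10.0, 9.0       # two close angular frequencies, rad/s (illustrative)
w_bas = (w1 + w2) / 2    # basic (carrier) frequency
w_mod = (w1 - w2) / 2    # amplitude-modulation frequency

def two_modes(t):
    return math.sin(w1 * t) + math.sin(w2 * t)

def beats(t):
    # sin(w1 t) + sin(w2 t) = 2 sin(w_bas t) cos(w_mod t)
    return 2 * math.sin(w_bas * t) * math.cos(w_mod * t)

for i in range(200):
    t = i * 0.1
    assert abs(two_modes(t) - beats(t)) < 1e-12
```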
It should be noted that the value of the free exponent is typically low,
\left|{C}_{0}\right|\le 0.1\left|{C}_{1}\right|
. The presence of the free exponent was discovered earlier in study [5] using the torsional oscillation method for a number of caoutchoucs and was represented in a 3D model. In our opinion, the free exponent reflects a transition process; its existence has been shown in the works of Yu. N. Rabotnov [6].
The main behaviour characteristics of synthetic fibres in the nonexponential decay mode should also be noted.
1) The beatings are observed in all investigated fibres.
2) The beatings exist over a wide range of temperatures up to the glass transition temperature; at the glass transition temperature, however, the phenomenon disappears abruptly over the whole load interval.
3) The beatings exist within a certain stress interval (or range of static strain levels), appearing and disappearing abruptly.
4) When the frequencies {\omega }_{1} and {\omega }_{2} coincide, an acute maximum of the tangent of the mechanical loss angle is observed (Fig. 2(c)), which shows its resonant behaviour. Nevertheless, the elastic modulus E\text{'}, calculated in the traditional way [7] as E\text{'}=\frac{m}{F}{\omega }_{bas}^{2}, where F=S/l is the specimen form-factor and m, l and S are the mass, length and cross-sectional area of the fibres respectively, shows a minimum (Fig. 2(a)).
5) It is possible to shift the mechanical loss maximum along the stress scale in either direction by changing the length of the fibres (the base).
6) The frequency {\omega }_{bas} depends significantly on the load near the beatings, while outside the beating region this dependence is weak (Fig. 2(b)). The modulation frequency {\omega }_{mod} does not depend on the specimen length and cross-section.
Fig. 2. Dependencies of the dynamic modulus of elasticity {E}_{dyn} (a), the basic oscillation frequency {\omega }_{bas} (b), and the tangent \delta of mechanical losses (c) on the basic stress \sigma for PET fibres at T=
Applying the well-known method based on the Boltzmann-Volterra equation [7-9], the nonlinear complicated strain mode can be viewed as an interaction of the static and dynamic parts of the hereditary relaxation core. In this case the following applies to periodic processes:
{\sigma }_{dyn}\left(t\right)={E}_{0dyn}{\epsilon }_{dyn}\left(t\right)-\underset{0}{\overset{t}{\int }}{r}_{dyn}\left({\epsilon }_{st},t-\theta \right){\epsilon }_{dyn}\left(\theta \right)d\theta ,
where t is time; {\epsilon }_{st} is the level of static strain; {\sigma }_{dyn}\left(t\right) is the mechanical stress corresponding to the periodic strain {\epsilon }_{dyn}\left(t\right); {E}_{0dyn} is the initial dynamic modulus of elasticity; and {r}_{dyn}\left({\epsilon }_{st},t\right) is the hereditary dynamic core at the level of static strain {\epsilon }_{st}. In this connection it is assumed that the dynamic loading takes place about a quasiequilibrium static strain, for which {\stackrel{˙}{\epsilon }}_{st}{\tau }_{dyn}\ll 1, where {\tau }_{dyn} is the characteristic time of the dynamic process.
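The hereditary integral above lends itself to direct numerical evaluation. The sketch below discretizes it with the trapezoidal rule for an assumed simple exponential core and sinusoidal strain (the parameter choices are illustrative, not the authors' data or numerics):

```python
import math

def stress(t, eps, E0, core, n=1000):
    """sigma_dyn(t) = E0*eps(t) - integral_0^t core(t - theta)*eps(theta) d(theta),
    evaluated with the trapezoidal rule on n subintervals."""
    h = t / n
    s = 0.0
    for i in range(n + 1):
        theta = i * h
        w = 0.5 if i in (0, n) else 1.0   # trapezoid endpoint weights
        s += w * core(t - theta) * eps(theta)
    return E0 * eps(t) - h * s

eps = lambda t: math.sin(2.0 * t)        # assumed periodic strain
core = lambda t: 0.3 * math.exp(-t)      # assumed exponential relaxation core
print(stress(1.0, eps, 2.0, core))
```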
In order to quantitatively describe the complicated form of the observed oscillating process a simulation model may be suggested, which allows distinguishing the basic and additional oscillation frequencies and their amplitudes using the Fourier transform.
In the context of such an approach the Laplace transformation of dynamic strain is represented as follows:
\stackrel{-}{\epsilon }\left(s\right)=\frac{s\epsilon \left(0\right)+\stackrel{˙}{\epsilon }\left(0\right)}{{s}^{2}+{\stackrel{-}{{\omega }_{0}}}^{2}-\stackrel{-}{{r}_{dyn}}\left(s\right)} ,
where s is the Laplace parameter; \epsilon \left(0\right) and \stackrel{˙}{\epsilon }\left(0\right) are the initial (arbitrary) conditions; \stackrel{-}{{r}_{dyn}}\left(s\right) is the image of the hereditary dynamic core; and {\stackrel{-}{{\omega }_{0}}}^{2}=\frac{{S}_{0}{E}_{0dyn}}{ml}, where \frac{{S}_{0}}{l}=F is the form-factor of the investigated specimen.
Conceptually, the image of the dynamic relaxation core can be represented as follows:
\stackrel{-}{{r}_{dyn}}\left(s\right)=\sum _{\left(n\right)}{q}_{n}\frac{1}{1+s{\tau }_{n}},
where {q}_{n} is the spectral intensity of the corresponding relaxation process with relaxation time {\tau }_{n}.
It can be shown that the dynamic part of the core has the form:
{r}_{dyn}\left(t\right)={M}_{1}{e}^{-{\nu }_{1}t}+{e}^{-{\nu }_{2}t}\left({M}_{2}\mathrm{cos}\xi t+\frac{{M}_{3}-{M}_{2}{\nu }_{2}}{\xi }\mathrm{sin}\xi t\right),
where {M}_{i}\left(i=1,2,3\right), {\nu }_{i}\left(i=1,2,3\right) and \xi are the characteristics of the dynamic core. Overall, Eq. (5) corresponds with Yu. N. Rabotnov's views [6], which support the existence of relaxation cores of an oscillation type.
The following interrelation between the tangent of the mechanical loss angle and the parameters of the dynamic hereditary relaxation core has been found:
\mathrm{tan}\delta =\frac{\left[{M}_{2}\left({W}^{2}-{\omega }_{bas}^{2}\right)-2{\nu }_{2}{M}_{3}\right]{\omega }_{bas}^{2}}{\left[{\left({W}^{2}-{\omega }_{bas}^{2}\right)}^{2}+4{\omega }_{bas}^{2}{\nu }_{2}^{2}\right]{\stackrel{-}{{\omega }_{0}}}^{2}} ,
where {W}^{2}={\xi }^{2}+{\nu }_{2}^{2}; \xi is the core oscillation frequency; {\nu }_{2} is the core decay rate; and {\stackrel{-}{{\omega }_{0}}}^{2} denotes the frequency of free oscillations in the absence of relaxation contributions. On the basis of Eq. (6) it has been shown that, under free longitudinal oscillation, the modulus of the tangent of the mechanical loss angle \mathrm{tan}\delta may have one or more extreme values; when the core oscillation frequency \xi is close to the basic oscillation frequency {\omega }_{bas}, the well-known physical phenomenon of beating can manifest itself. In the case of {\omega }_{bas}=W an acute maximum of the tangent of the mechanical loss angle must be observed.
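The resonant character of Eq. (6) can be checked numerically. With hypothetical core parameters (M2, M3, ν2, W and ω̄0² below are illustrative, not fitted values), a scan over ω_bas shows the acute loss maximum near ω_bas = W:

```python
M2, M3, nu2 = 1.0, -1.0, 0.05   # hypothetical core parameters
W, w0_sq = 10.0, 100.0          # W^2 = xi^2 + nu2^2; w0_sq stands in for omega_0-bar squared

def tan_delta(w):
    # Eq. (6): [M2 (W^2 - w^2) - 2 nu2 M3] w^2 / ([(W^2 - w^2)^2 + 4 w^2 nu2^2] w0_sq)
    num = (M2 * (W**2 - w**2) - 2 * nu2 * M3) * w**2
    den = ((W**2 - w**2)**2 + 4 * w**2 * nu2**2) * w0_sq
    return num / den

ws = [1.0 + 0.001 * i for i in range(19001)]      # scan omega_bas over 1..20 rad/s
w_peak = max(ws, key=lambda w: abs(tan_delta(w)))
assert abs(w_peak - W) < 0.2   # acute |tan delta| maximum lies near omega_bas = W
```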
It may be suggested that the observed phenomenon is caused by the proximity of two free frequencies of the polymer fibres under investigation, which, owing to the heterogeneity of their amorphous-crystalline structure, may be attributed to the amorphous and crystalline regions of the polymer, where the crystallites play the role of "masses" connected by inter- and intrafibrillar amorphous interlayers.
Application of an external mechanical stress changes the elasticity (high-elasticity) constants as a result of the change in the number of macromolecular conformations in the presence of internal rotation, the breakage (and recombination) of intermolecular bonds, and the change in the external mobility of macromolecules corresponding to the basic polymer viscosity [10]; consequently, the modulus of elasticity grows.
Periodic strain within a certain stress range (which differs for polymers of various chemical structures) causes the free frequencies of the two oscillation modes to approach each other, resulting in a sudden growth of the oscillation decay rate. The narrowness of the resonance curve suggests a high cooperativity of the movements in the amorphous and crystalline regions, which may be connected with the formation of unusual clusters consisting of similar crystallites. A further increase of stress largely causes the breakage of intermolecular bonds, the straightening of molecules and an increase in the number of tie chains carrying the load [2, 10-13], resulting in the destruction of the clusters and asynchronous crystallite oscillation.
The absence of the phenomenon for the polymers displaying the state of high elasticity (polyethylene, polypropylene) and the disappearance of beatings at the glass transition temperature indirectly prove the suggested mechanism of periodic strain.
It is also possible that, at a certain level of static strain, the vibration component suddenly changes its character and transfers part of the developed forced high elasticity (which accumulates in vitrified polymers under static load [12]) into reversible elasticity. The two suggested versions do not contradict each other.
Thus, the observed phenomenon of amplitude-modulated oscillations (beatings) may be described using a nonlinear integral constitutive equation with an oscillating relaxation core. The interrelation between the tangent of the mechanical loss angle and the parameters of the dynamic hereditary relaxation core has been found.
The authors are very grateful to Prof. D. A. Indeitsev for helpful remarks and discussion.
Romanova A. A. Installation for definition of dynamic creep characteristics of polymer threads. Zavodskaia Laboratoria. Diagnostika Materialov, Vol. 74, Issue 9, 2008, p. 78-79 (in Russian).
Romanova A. A., Stalevich A. M., Rymkevich P. P., Gorschkov A. S., Ginzburg B. M. A new phenomenon – amplitude-modulated free oscillations (beatings) in the loaded highly oriented fibers from semicrystalline polymers. Journal of Macromolecular Science, Part B: Physics, Vol. 46, 2007, p. 467-474.
Romanova A. A., Rymkevich P. P., Stalevich A. M. Kinetic description of the relaxation of mechanical stress in the synthetic threads. Izvestia vuzov. Technology of Textile Industry, Vol. 1, 2000, p. 3-7 (in Russian).
Rymkevich P. P., Gorshkov A. S., Makarov A. G., Romanova A. A. Main constitutive equation of the viscoelastic behavior of uniaxial co-oriented polymers. Fibre Chemistry, Vol. 46, 2014, p. 28.
Ryszkova K. A., Dorfman I. Ya. Definition of viscoelastic behaviors of polymeric material by a dynamic method. Vysokomolecularnie Soedinenia, Series A (Polymer Science, Russia), Vol. 23A, Issue 11, 1981, p. 2615.
Rabotnov Yu. N. Mechanics of a Deformable Solid Body. Moscow, 1988 (in Russian).
Ferry J. D. Viscoelastic Properties of Polymers. New York, London, 1961 (Russian translation: IL, Moscow, 1963, p. 535).
Ward I. M., Hadley D. W. An Introduction to the Mechanical Properties of Solid Polymers. Wiley, Chichester, New York, Brisbane, Toronto, Singapore, 1993.
Demidov A. V., Makarov A. G., Stalevich A. M. Modelling variant of the nonlinear hereditary viscoelasticity of polymeric materials. Proceedings RAS (Russian Academy of Science), Mechanics of Solid, Vol. 1, 2009, p. 155-165.
Kargin V. A., Slonimskii G. L. Mechanical Properties. Encyclopedia of Polymer Science and Technology, Vol. 8, Wiley, New York, London, Sydney, Toronto, 1968.
Golovina V. V., Marikhin V. A., Slutsker G. Ya., Stalevich A. M. Broadening of relaxation and retardation spectra due to uniaxial orientational drawing of polyamide films. Vysokomolecularnie Soedinenia, Series A (Polymer Science, Russia), Vol. 49, Issue 6, 2007, p. 1-5.
Sanditov D. S., Bartenev G. M. Physical Properties of the Disordered Structures. Izdatel'stvo Nauka, Novosibirsk, 1982 (in Russian).
Stalevich A. M., Ginzburg B. M. Crystal-like bundles in intrafibrillar amorphous regions and non-linear viscoelasticity of oriented semicrystalline polymers. Journal of Macromolecular Science, Part B: Physics, Vol. 45, 2006, p. 377-383.
Approximating the Area of a Circle using Rectangles - Maple Help
The area of a circle can be approximated by rectangles. As the number of rectangles approaches infinity the total area of all the rectangles approaches the actual area of the circle.
Using integration, the exact area of the circle can be found. The exact area of a circle is
\mathrm{π}\cdot {r}^{2}
Let r be the radius of the circle, and let n be the number of approximating rectangles.
The height h of each rectangle can be defined as:
h = \frac{2\cdot r}{n}
The length {l}_{k} of the kth rectangle, located at height {y}_{k}, satisfies:
{\left(\frac{{l}_{k}}{2}\right)}^{2}={r}^{2}-{y}_{k}^{2}
The area of the kth rectangle is:
{A}_{k }= {l}_{k}^{}\cdot h\phantom{\rule[-0.0ex]{0.0em}{0.0ex}}= 2\cdot \sqrt{{r}^{2}-{y}_{k}^{2}} \cdot h
Therefore the total area of n rectangles is:
\mathrm{Area} = 2 \underset{k=1}{\overset{n}{∑}}\sqrt{{r}^{2}-{y}_{k}^{2}}\cdot h
As n → \infty, we get the area of the circle:
{A}_{\mathrm{circle}} =2 {∫}_{-r}^{r}\sqrt{{r}^{2}-{y}^{2}} \mathrm{dy}
= \mathrm{π}\cdot {r}^{2}
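The construction above translates directly into a short program. The sketch below places y_k at the midpoint of each strip (one reasonable choice; the help page does not fix a particular y_k):

```python
import math

def circle_area_rectangles(r, n):
    """Approximate the area of a circle of radius r with n horizontal
    rectangles of height h = 2r/n and length l_k = 2*sqrt(r^2 - y_k^2)."""
    h = 2.0 * r / n
    area = 0.0
    for k in range(n):
        y_k = -r + (k + 0.5) * h                  # midpoint of the k-th strip
        l_k = 2.0 * math.sqrt(r * r - y_k * y_k)  # rectangle length at height y_k
        area += l_k * h
    return area

print(circle_area_rectangles(1.0, 10))      # crude approximation of pi
print(circle_area_rectangles(1.0, 100000))  # much closer to pi
```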
In a General Social Survey of Americans in 1991, two variables, gender and finding life exciting or dull, were measured on 980 individuals. The two-way table below summarizes the results.
Let A = randomly chosen person is female
Let B = randomly chosen person finds life exciting
(a) Find P(A | B)
(b) Are the events A & B independent?
\begin{array}{ccccc}\text{Original Counts}& \text{Exciting}& \text{Routine}& \text{Dull}& \text{Total}\\ \text{Male}& 213& 200& 12& 425\\ \text{Female}& 221& 305& 29& 555\\ \text{Total}& 434& 505& 41& 980\end{array}
Step 1: Let A be the event that a randomly chosen person is female, and let B be the event that a randomly chosen person finds life exciting. The two events are independent if and only if
P\left(A|B\right)=P\left(A\right)
a) The conditional probability of A given B is computed from the "Exciting" column:
P\left(A|B\right)=\frac{221}{434}\approx 0.509
b) Here
P\left(A\right)=\frac{555}{980}\approx 0.566
so P\left(A|B\right)\ne P\left(A\right). Knowing that a person finds life exciting lowers the probability that the person is female; hence the events A and B are not independent.
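A quick numerical check of parts (a) and (b), using the counts from the table:

```python
# Counts from the two-way table above
counts = {"Male":   {"Exciting": 213, "Routine": 200, "Dull": 12},
          "Female": {"Exciting": 221, "Routine": 305, "Dull": 29}}

total = sum(sum(row.values()) for row in counts.values())             # 980
exciting = counts["Male"]["Exciting"] + counts["Female"]["Exciting"]  # 434

p_A = sum(counts["Female"].values()) / total           # P(female) = 555/980
p_A_given_B = counts["Female"]["Exciting"] / exciting  # P(female | exciting) = 221/434

print(round(p_A_given_B, 3))  # 0.509
print(round(p_A, 3))          # 0.566 -> not equal, so A and B are not independent
```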
In this exercise, a two-way table is shown for two groups, 1 and 2, and two possible outcomes, A and B.
\begin{array}{|cccc|}\hline & \text{Outcome A}& \text{Outcome B}& \text{Total}\\ \text{Group 1}& 30& 20& 50\\ \text{Group 2}& 40& 110& 150\\ \text{Total}& 70& 130& 200\\ \hline\end{array}\phantom{\rule{0ex}{0ex}}
a) What proportion of all cases had Outcome A?
b) What proportion of all cases are in Group 1?
c) What proportion of cases in group 1 had Outcome B?
d) What proportion of cases who had Outcome A were in group 2?
A group of children and adults were polled about whether they watch a particular TV show. The survey results, showing the joint relative frequencies and marginal relative frequencies, are shown in the two-way table.
\begin{array}{|cccc|}\hline & Yes& No& Total\\ Children& 0.3& 0.4& 0.7\\ Adults& 0.25& x& 0.3\\ Total& 0.55& 0.45& 1\\ \hline\end{array}
P\left(A|B\right)=\frac{P\left(B|A\right)\cdot P\left(A\right)}{P\left(B\right)}
Find the expected count and the contribution to the chi-square statistic for the (Group 1, Yes) cell in the two-way table below.
\begin{array}{|cccc|}\hline & \text{Yes}& \text{No}& \text{Total}\\ \text{Group 1}& 710& 277& 987\\ \text{Group 2}& 1175& 323& 1498\\ \text{ }\text{Total}& 1885& 600& 2485\\ \hline\end{array}
Round your answer for the excepted count to one decimal place, and your answer for the contribution to the chi-square statistic to three decimal places.
Expected count=?
contribution to the chi-square statistic=?
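For a two-way table, the expected count of a cell is (row total × column total) / grand total, and its chi-square contribution is (observed − expected)² / expected. Applying this to the (Group 1, Yes) cell:

```python
# (Group 1, Yes) cell of the two-way table above
observed = 710
row_total, col_total, grand_total = 987, 1885, 2485

expected = row_total * col_total / grand_total        # row total * column total / n
contribution = (observed - expected) ** 2 / expected  # (observed - expected)^2 / expected

print(round(expected, 1))  # 748.7
print(round(contribution, 3))
```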
"It is right to use animals for medical testing if it might save human lives." The General Social Survey asked 1152 adults to react to this statement Here is the two-way table of their responses:
EuDML | Approximation of {L}_{2}-processes by Gaussian processes.
Approximation of {L}_{2}-processes by Gaussian processes.
Akcoglu, M.A.; Baxter, J.R.; Ha, D.M.; Jones, R.L.
Akcoglu, M.A., et al. "Approximation of {L}_{2}-processes by Gaussian processes." The New York Journal of Mathematics [electronic only] 4 (1998): 75-82. <http://eudml.org/doc/119777>.
@article{Akcoglu1998,
author = {Akcoglu, M.A., Baxter, J.R., Ha, D.M., Jones, R.L.},
author = {Akcoglu, M.A., Baxter, J.R., Ha, D.M., Jones, R.L.},
keywords = {Gaussian processes; ergodic transformation; {L}_{2}-processes},
title = {Approximation of {L}_{2}-processes by Gaussian processes.},
AU - Akcoglu, M.A.
AU - Baxter, J.R.
AU - Ha, D.M.
TI - Approximation of {L}_{2}-processes by Gaussian processes.
KW - Gaussian processes; ergodic transformation; {L}_{2}-processes
Gaussian processes, ergodic transformation, {L}_{2}-processes
Articles by Akcoglu
Articles by Baxter
Articles by Ha |
EuDML | On-line algorithms for the q-adic covering of the unit interval and for covering a cube by cubes.
On-line algorithms for the q-adic covering of the unit interval and for covering a cube by cubes.
Lassak, Marek. "On-line algorithms for the q-adic covering of the unit interval and for covering a cube by cubes." Beiträge zur Algebra und Geometrie 43.2 (2002): 537-549. <http://eudml.org/doc/228479>.
keywords = {on-line covering; q-adic covering; sequence of segments; sequence of cubes},
title = {On-line algorithms for the q-adic covering of the unit interval and for covering a cube by cubes.},
TI - On-line algorithms for the q-adic covering of the unit interval and for covering a cube by cubes.
KW - on-line covering; q-adic covering; sequence of segments; sequence of cubes
on-line covering, q-adic covering, sequence of segments, sequence of cubes
Packing and covering in |
A 172-cm-tall person lies on a light (massless) board which is supported by two scales, one under the top of her head and one beneath the bottom of her feet (figure 9-53). The two scales read, respectively, 35.1 and 31.6 kg.
What distance is the center of gravity of this person from the bottom of her feet?
{m}_{A}=35.1\text{ }kg
{m}_{B}=31.6\text{ }kg
L=172\text{ }cm
Taking torques about the feet, the distance of the center of gravity from the bottom of her feet is
x=\frac{{m}_{A}}{{m}_{A}+{m}_{B}}L\approx 90.5\text{ }cm
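The solution is a torque balance about the feet: the head scale's reading acts at the full body length L, so x = m_A L / (m_A + m_B). As a quick numerical check:

```python
m_A, m_B = 35.1, 31.6  # scale readings at the head and the feet, kg
L = 172.0              # body length, cm

x_cg = m_A / (m_A + m_B) * L  # distance of the center of gravity from the feet
print(round(x_cg, 1))         # 90.5 cm
```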
The specific gravity of ice is 0.917, whereas that for seawater is 1.025. What fraction of an iceberg is above the surface of the water?
A force of 250 N is applied to a hydraulic jack piston that is 0.01 m in diameter. If the piston which supports the load hasa diameter of 0.10 m, approximately how much mass can belifted. Ignore any difference in height between the pistons.
An FM radio station broadcasts at a frequency of 100 MHz. What inductance should be paired with a 10 pF capacitor to build a receiver circuit for this station?
One thousand independent rolls of a fair die will be made. Compute an approximation to the probability that the number 6 will appear between 150 and 200 times inclusively. If the number 6 appears exactly 200 times, find the probability that the number 5 will appear less than 150 times.
Two charges are located on the x axis: {q}_{1}=+3.0\mu C at {x}_{1}=+4.0 cm and {q}_{2}=+3.0\mu C at {x}_{2}=-4.0 cm. Two other charges are located on the y axis: {q}_{3}=+3.0\mu C at {y}_{3}=+5.0 cm and {q}_{4}=-5.0\mu C at {y}_{4}=+7.0
FromCompressedSparseForm - Maple Help
translate compressed sparse row and column forms to native Maple form
FromCompressedSparseForm(CB, R, X, opts)
CB - integer vector of column bounds
R - integer vector of row coordinates
X - hardware datatype vector with the values
form = row or column
This option determines whether CB, R, and X are interpreted as the compressed sparse column form of A or as its compressed sparse row form. The default is compressed sparse column form.
otherdimension = n
This option determines the number of rows of A in the case of compressed sparse column form, and the number of columns of A in the case of compressed sparse row form. If the option is not specified, Maple uses the maximal entry in R.
rbase = n
This option determines at what number Maple starts numbering the rows, for compressed sparse column form, or the columns, for compressed sparse row form. The default is 1, corresponding to the standard Maple convention. Other values, in particular 0, are mainly useful if the data come from external code.
The inverse command, CompressedSparseForm, also has a cbbase option. This corresponds to the first entry of CB and cannot be set manually for FromCompressedSparseForm.
The FromCompressedSparseForm function constructs a sparse Matrix A from either its compressed sparse row form or its compressed sparse column form, performing the opposite function to CompressedSparseForm.
The compressed sparse column form of an m x n Matrix A with k nonzero entries consists of three vectors, CB, R, and X. The vector X has k entries, namely the nonzero values of A, sorted by column. The vector R also has k entries, namely the row indices of the corresponding entries of X. The vector CB has n+1 entries; {\mathrm{CB}}_{i} gives the position in X and R where column i begins, and {\mathrm{CB}}_{n+1}={\mathrm{CB}}_{1}+k. Thus, for each column i, the entries of X from position {\mathrm{CB}}_{i} up to (but not including) position {\mathrm{CB}}_{i+1} hold the nonzero values of A in column i, and the same positions of R hold their row indices.
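As an illustration of the convention just described, the following plain-Python sketch (not Maple's implementation) rebuilds a dense matrix from 1-based compressed sparse column data; the optional nrows argument plays the role of the otherdimension option, and rbase is assumed to be 1:

```python
def from_csc(cb, r, x, nrows=None):
    """Rebuild a dense matrix (list of lists) from compressed sparse
    column form: cb has n+1 entries, and column i (1-based) occupies
    positions cb[i]..cb[i+1]-1 of r (row indices) and x (values)."""
    n = len(cb) - 1                               # number of columns
    m = nrows if nrows is not None else max(r)    # rows, like 'otherdimension'
    A = [[0] * n for _ in range(m)]
    for col in range(n):                          # 0-based loop over the n columns
        for p in range(cb[col] - cb[0], cb[col + 1] - cb[0]):
            A[r[p] - 1][col] = x[p]               # r uses 1-based row numbering
    return A

# The 3x3 example matrix m1 used further down this page:
cb, r, x = [1, 3, 4, 5], [2, 3, 1, 3], [2.0, 3.0, 1.0, 4.0]
print(from_csc(cb, r, x))   # [[0, 1.0, 0], [2.0, 0, 0], [3.0, 0, 4.0]]
```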
The code for FromCompressedSparseForm relies on being able to construct A as a NAG-sparse Matrix; that is, the datatype of X must be one of \mathrm{sfloat},\mathrm{complex}\left(\mathrm{sfloat}\right),{\mathrm{integer}}_{1},{\mathrm{integer}}_{2},{\mathrm{integer}}_{4},{\mathrm{integer}}_{8},{\mathrm{float}}_{4},{\mathrm{float}}_{8},{\mathrm{complex}}_{8}.
\mathrm{with}\left(\mathrm{LinearAlgebra}\right):
m≔\mathrm{Matrix}\left(5,6,{\left(1,2\right)=-81,\left(2,3\right)=-55,\left(2,4\right)=-15,\left(3,1\right)=-46,\left(3,3\right)=-17,\left(3,4\right)=99,\left(3,5\right)=-61,\left(4,2\right)=18,\left(4,5\right)=-78,\left(5,6\right)=22},\mathrm{datatype}=\mathrm{integer}[4]\right)
\textcolor[rgb]{0,0,1}{m}\textcolor[rgb]{0,0,1}{≔}[\begin{array}{cccccc}\textcolor[rgb]{0,0,1}{0}& \textcolor[rgb]{0,0,1}{-81}& \textcolor[rgb]{0,0,1}{0}& \textcolor[rgb]{0,0,1}{0}& \textcolor[rgb]{0,0,1}{0}& \textcolor[rgb]{0,0,1}{0}\\ \textcolor[rgb]{0,0,1}{0}& \textcolor[rgb]{0,0,1}{0}& \textcolor[rgb]{0,0,1}{-55}& \textcolor[rgb]{0,0,1}{-15}& \textcolor[rgb]{0,0,1}{0}& \textcolor[rgb]{0,0,1}{0}\\ \textcolor[rgb]{0,0,1}{-46}& \textcolor[rgb]{0,0,1}{0}& \textcolor[rgb]{0,0,1}{-17}& \textcolor[rgb]{0,0,1}{99}& \textcolor[rgb]{0,0,1}{-61}& \textcolor[rgb]{0,0,1}{0}\\ \textcolor[rgb]{0,0,1}{0}& \textcolor[rgb]{0,0,1}{18}& \textcolor[rgb]{0,0,1}{0}& \textcolor[rgb]{0,0,1}{0}& \textcolor[rgb]{0,0,1}{-78}& \textcolor[rgb]{0,0,1}{0}\\ \textcolor[rgb]{0,0,1}{0}& \textcolor[rgb]{0,0,1}{0}& \textcolor[rgb]{0,0,1}{0}& \textcolor[rgb]{0,0,1}{0}& \textcolor[rgb]{0,0,1}{0}& \textcolor[rgb]{0,0,1}{22}\end{array}]
\mathrm{cb},r,x≔\mathrm{CompressedSparseForm}\left(m\right)
\textcolor[rgb]{0,0,1}{\mathrm{cb}}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{r}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{x}\textcolor[rgb]{0,0,1}{≔}[\begin{array}{c}\textcolor[rgb]{0,0,1}{1}\\ \textcolor[rgb]{0,0,1}{2}\\ \textcolor[rgb]{0,0,1}{4}\\ \textcolor[rgb]{0,0,1}{6}\\ \textcolor[rgb]{0,0,1}{8}\\ \textcolor[rgb]{0,0,1}{10}\\ \textcolor[rgb]{0,0,1}{11}\end{array}]\textcolor[rgb]{0,0,1}{,}[\begin{array}{c}\textcolor[rgb]{0,0,1}{3}\\ \textcolor[rgb]{0,0,1}{1}\\ \textcolor[rgb]{0,0,1}{4}\\ \textcolor[rgb]{0,0,1}{2}\\ \textcolor[rgb]{0,0,1}{3}\\ \textcolor[rgb]{0,0,1}{2}\\ \textcolor[rgb]{0,0,1}{3}\\ \textcolor[rgb]{0,0,1}{3}\\ \textcolor[rgb]{0,0,1}{4}\\ \textcolor[rgb]{0,0,1}{5}\end{array}]\textcolor[rgb]{0,0,1}{,}[\begin{array}{c}\textcolor[rgb]{0,0,1}{-46}\\ \textcolor[rgb]{0,0,1}{-81}\\ \textcolor[rgb]{0,0,1}{18}\\ \textcolor[rgb]{0,0,1}{-55}\\ \textcolor[rgb]{0,0,1}{-17}\\ \textcolor[rgb]{0,0,1}{-15}\\ \textcolor[rgb]{0,0,1}{99}\\ \textcolor[rgb]{0,0,1}{-61}\\ \textcolor[rgb]{0,0,1}{-78}\\ \textcolor[rgb]{0,0,1}{22}\end{array}]
\mathrm{FromCompressedSparseForm}\left(\mathrm{cb},r,x\right)
[\begin{array}{cccccc}\textcolor[rgb]{0,0,1}{0}& \textcolor[rgb]{0,0,1}{-81}& \textcolor[rgb]{0,0,1}{0}& \textcolor[rgb]{0,0,1}{0}& \textcolor[rgb]{0,0,1}{0}& \textcolor[rgb]{0,0,1}{0}\\ \textcolor[rgb]{0,0,1}{0}& \textcolor[rgb]{0,0,1}{0}& \textcolor[rgb]{0,0,1}{-55}& \textcolor[rgb]{0,0,1}{-15}& \textcolor[rgb]{0,0,1}{0}& \textcolor[rgb]{0,0,1}{0}\\ \textcolor[rgb]{0,0,1}{-46}& \textcolor[rgb]{0,0,1}{0}& \textcolor[rgb]{0,0,1}{-17}& \textcolor[rgb]{0,0,1}{99}& \textcolor[rgb]{0,0,1}{-61}& \textcolor[rgb]{0,0,1}{0}\\ \textcolor[rgb]{0,0,1}{0}& \textcolor[rgb]{0,0,1}{18}& \textcolor[rgb]{0,0,1}{0}& \textcolor[rgb]{0,0,1}{0}& \textcolor[rgb]{0,0,1}{-78}& \textcolor[rgb]{0,0,1}{0}\\ \textcolor[rgb]{0,0,1}{0}& \textcolor[rgb]{0,0,1}{0}& \textcolor[rgb]{0,0,1}{0}& \textcolor[rgb]{0,0,1}{0}& \textcolor[rgb]{0,0,1}{0}& \textcolor[rgb]{0,0,1}{22}\end{array}]
\mathrm{cb},r,x≔\mathrm{CompressedSparseForm}\left(m,'\mathrm{form}=\mathrm{row}'\right)
\textcolor[rgb]{0,0,1}{\mathrm{cb}}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{r}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{x}\textcolor[rgb]{0,0,1}{≔}[\begin{array}{c}\textcolor[rgb]{0,0,1}{1}\\ \textcolor[rgb]{0,0,1}{2}\\ \textcolor[rgb]{0,0,1}{4}\\ \textcolor[rgb]{0,0,1}{8}\\ \textcolor[rgb]{0,0,1}{10}\\ \textcolor[rgb]{0,0,1}{11}\end{array}]\textcolor[rgb]{0,0,1}{,}[\begin{array}{c}\textcolor[rgb]{0,0,1}{2}\\ \textcolor[rgb]{0,0,1}{3}\\ \textcolor[rgb]{0,0,1}{4}\\ \textcolor[rgb]{0,0,1}{1}\\ \textcolor[rgb]{0,0,1}{3}\\ \textcolor[rgb]{0,0,1}{4}\\ \textcolor[rgb]{0,0,1}{5}\\ \textcolor[rgb]{0,0,1}{2}\\ \textcolor[rgb]{0,0,1}{5}\\ \textcolor[rgb]{0,0,1}{6}\end{array}]\textcolor[rgb]{0,0,1}{,}[\begin{array}{c}\textcolor[rgb]{0,0,1}{-81}\\ \textcolor[rgb]{0,0,1}{-55}\\ \textcolor[rgb]{0,0,1}{-15}\\ \textcolor[rgb]{0,0,1}{-46}\\ \textcolor[rgb]{0,0,1}{-17}\\ \textcolor[rgb]{0,0,1}{99}\\ \textcolor[rgb]{0,0,1}{-61}\\ \textcolor[rgb]{0,0,1}{18}\\ \textcolor[rgb]{0,0,1}{-78}\\ \textcolor[rgb]{0,0,1}{22}\end{array}]
\mathrm{FromCompressedSparseForm}\left(\mathrm{cb},r,x,'\mathrm{form}=\mathrm{row}'\right)
[\begin{array}{cccccc}\textcolor[rgb]{0,0,1}{0}& \textcolor[rgb]{0,0,1}{-81}& \textcolor[rgb]{0,0,1}{0}& \textcolor[rgb]{0,0,1}{0}& \textcolor[rgb]{0,0,1}{0}& \textcolor[rgb]{0,0,1}{0}\\ \textcolor[rgb]{0,0,1}{0}& \textcolor[rgb]{0,0,1}{0}& \textcolor[rgb]{0,0,1}{-55}& \textcolor[rgb]{0,0,1}{-15}& \textcolor[rgb]{0,0,1}{0}& \textcolor[rgb]{0,0,1}{0}\\ \textcolor[rgb]{0,0,1}{-46}& \textcolor[rgb]{0,0,1}{0}& \textcolor[rgb]{0,0,1}{-17}& \textcolor[rgb]{0,0,1}{99}& \textcolor[rgb]{0,0,1}{-61}& \textcolor[rgb]{0,0,1}{0}\\ \textcolor[rgb]{0,0,1}{0}& \textcolor[rgb]{0,0,1}{18}& \textcolor[rgb]{0,0,1}{0}& \textcolor[rgb]{0,0,1}{0}& \textcolor[rgb]{0,0,1}{-78}& \textcolor[rgb]{0,0,1}{0}\\ \textcolor[rgb]{0,0,1}{0}& \textcolor[rgb]{0,0,1}{0}& \textcolor[rgb]{0,0,1}{0}& \textcolor[rgb]{0,0,1}{0}& \textcolor[rgb]{0,0,1}{0}& \textcolor[rgb]{0,0,1}{22}\end{array}]
If the same data are passed without the form = row option, they are interpreted as compressed sparse column form, and the transpose is obtained:
\mathrm{FromCompressedSparseForm}\left(\mathrm{cb},r,x\right)
[\begin{array}{ccccc}\textcolor[rgb]{0,0,1}{0}& \textcolor[rgb]{0,0,1}{0}& \textcolor[rgb]{0,0,1}{-46}& \textcolor[rgb]{0,0,1}{0}& \textcolor[rgb]{0,0,1}{0}\\ \textcolor[rgb]{0,0,1}{-81}& \textcolor[rgb]{0,0,1}{0}& \textcolor[rgb]{0,0,1}{0}& \textcolor[rgb]{0,0,1}{18}& \textcolor[rgb]{0,0,1}{0}\\ \textcolor[rgb]{0,0,1}{0}& \textcolor[rgb]{0,0,1}{-55}& \textcolor[rgb]{0,0,1}{-17}& \textcolor[rgb]{0,0,1}{0}& \textcolor[rgb]{0,0,1}{0}\\ \textcolor[rgb]{0,0,1}{0}& \textcolor[rgb]{0,0,1}{-15}& \textcolor[rgb]{0,0,1}{99}& \textcolor[rgb]{0,0,1}{0}& \textcolor[rgb]{0,0,1}{0}\\ \textcolor[rgb]{0,0,1}{0}& \textcolor[rgb]{0,0,1}{0}& \textcolor[rgb]{0,0,1}{-61}& \textcolor[rgb]{0,0,1}{-78}& \textcolor[rgb]{0,0,1}{0}\\ \textcolor[rgb]{0,0,1}{0}& \textcolor[rgb]{0,0,1}{0}& \textcolor[rgb]{0,0,1}{0}& \textcolor[rgb]{0,0,1}{0}& \textcolor[rgb]{0,0,1}{22}\end{array}]
\mathrm{cb},r,x≔\mathrm{CompressedSparseForm}\left(m,'\mathrm{cbbase}'=3,'\mathrm{rbase}'=-2\right)
\textcolor[rgb]{0,0,1}{\mathrm{cb}}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{r}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{x}\textcolor[rgb]{0,0,1}{≔}[\begin{array}{c}\textcolor[rgb]{0,0,1}{3}\\ \textcolor[rgb]{0,0,1}{4}\\ \textcolor[rgb]{0,0,1}{6}\\ \textcolor[rgb]{0,0,1}{8}\\ \textcolor[rgb]{0,0,1}{10}\\ \textcolor[rgb]{0,0,1}{12}\\ \textcolor[rgb]{0,0,1}{13}\end{array}]\textcolor[rgb]{0,0,1}{,}[\begin{array}{c}\textcolor[rgb]{0,0,1}{0}\\ \textcolor[rgb]{0,0,1}{-2}\\ \textcolor[rgb]{0,0,1}{1}\\ \textcolor[rgb]{0,0,1}{-1}\\ \textcolor[rgb]{0,0,1}{0}\\ \textcolor[rgb]{0,0,1}{-1}\\ \textcolor[rgb]{0,0,1}{0}\\ \textcolor[rgb]{0,0,1}{0}\\ \textcolor[rgb]{0,0,1}{1}\\ \textcolor[rgb]{0,0,1}{2}\end{array}]\textcolor[rgb]{0,0,1}{,}[\begin{array}{c}\textcolor[rgb]{0,0,1}{-46}\\ \textcolor[rgb]{0,0,1}{-81}\\ \textcolor[rgb]{0,0,1}{18}\\ \textcolor[rgb]{0,0,1}{-55}\\ \textcolor[rgb]{0,0,1}{-17}\\ \textcolor[rgb]{0,0,1}{-15}\\ \textcolor[rgb]{0,0,1}{99}\\ \textcolor[rgb]{0,0,1}{-61}\\ \textcolor[rgb]{0,0,1}{-78}\\ \textcolor[rgb]{0,0,1}{22}\end{array}]
\mathrm{FromCompressedSparseForm}\left(\mathrm{cb},r,x,'\mathrm{rbase}'=-2\right)
[\begin{array}{cccccc}\textcolor[rgb]{0,0,1}{0}& \textcolor[rgb]{0,0,1}{-81}& \textcolor[rgb]{0,0,1}{0}& \textcolor[rgb]{0,0,1}{0}& \textcolor[rgb]{0,0,1}{0}& \textcolor[rgb]{0,0,1}{0}\\ \textcolor[rgb]{0,0,1}{0}& \textcolor[rgb]{0,0,1}{0}& \textcolor[rgb]{0,0,1}{-55}& \textcolor[rgb]{0,0,1}{-15}& \textcolor[rgb]{0,0,1}{0}& \textcolor[rgb]{0,0,1}{0}\\ \textcolor[rgb]{0,0,1}{-46}& \textcolor[rgb]{0,0,1}{0}& \textcolor[rgb]{0,0,1}{-17}& \textcolor[rgb]{0,0,1}{99}& \textcolor[rgb]{0,0,1}{-61}& \textcolor[rgb]{0,0,1}{0}\\ \textcolor[rgb]{0,0,1}{0}& \textcolor[rgb]{0,0,1}{18}& \textcolor[rgb]{0,0,1}{0}& \textcolor[rgb]{0,0,1}{0}& \textcolor[rgb]{0,0,1}{-78}& \textcolor[rgb]{0,0,1}{0}\\ \textcolor[rgb]{0,0,1}{0}& \textcolor[rgb]{0,0,1}{0}& \textcolor[rgb]{0,0,1}{0}& \textcolor[rgb]{0,0,1}{0}& \textcolor[rgb]{0,0,1}{0}& \textcolor[rgb]{0,0,1}{22}\end{array}]
If the matrix has zero rows at the bottom, these are not reflected in the compressed sparse column form. (Similarly, zero columns at the right are not reflected in the compressed sparse row form.)
\mathrm{m1}≔\mathrm{Matrix}\left([[0,1,0],[2,0,0],[3,0,4]],'\mathrm{datatype}=\mathrm{float}'\right)
\textcolor[rgb]{0,0,1}{\mathrm{m1}}\textcolor[rgb]{0,0,1}{≔}[\begin{array}{ccc}\textcolor[rgb]{0,0,1}{0.}& \textcolor[rgb]{0,0,1}{1.}& \textcolor[rgb]{0,0,1}{0.}\\ \textcolor[rgb]{0,0,1}{2.}& \textcolor[rgb]{0,0,1}{0.}& \textcolor[rgb]{0,0,1}{0.}\\ \textcolor[rgb]{0,0,1}{3.}& \textcolor[rgb]{0,0,1}{0.}& \textcolor[rgb]{0,0,1}{4.}\end{array}]
\mathrm{m2}≔\mathrm{Matrix}\left([[0,1,0],[2,0,0],[3,0,4],[0,0,0]],'\mathrm{datatype}=\mathrm{float}'\right)
\textcolor[rgb]{0,0,1}{\mathrm{m2}}\textcolor[rgb]{0,0,1}{≔}[\begin{array}{ccc}\textcolor[rgb]{0,0,1}{0.}& \textcolor[rgb]{0,0,1}{1.}& \textcolor[rgb]{0,0,1}{0.}\\ \textcolor[rgb]{0,0,1}{2.}& \textcolor[rgb]{0,0,1}{0.}& \textcolor[rgb]{0,0,1}{0.}\\ \textcolor[rgb]{0,0,1}{3.}& \textcolor[rgb]{0,0,1}{0.}& \textcolor[rgb]{0,0,1}{4.}\\ \textcolor[rgb]{0,0,1}{0.}& \textcolor[rgb]{0,0,1}{0.}& \textcolor[rgb]{0,0,1}{0.}\end{array}]
\mathrm{cb1},\mathrm{r1},\mathrm{x1}≔\mathrm{CompressedSparseForm}\left(\mathrm{m1}\right)
\textcolor[rgb]{0,0,1}{\mathrm{cb1}}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{\mathrm{r1}}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{\mathrm{x1}}\textcolor[rgb]{0,0,1}{≔}[\begin{array}{c}\textcolor[rgb]{0,0,1}{1}\\ \textcolor[rgb]{0,0,1}{3}\\ \textcolor[rgb]{0,0,1}{4}\\ \textcolor[rgb]{0,0,1}{5}\end{array}]\textcolor[rgb]{0,0,1}{,}[\begin{array}{c}\textcolor[rgb]{0,0,1}{2}\\ \textcolor[rgb]{0,0,1}{3}\\ \textcolor[rgb]{0,0,1}{1}\\ \textcolor[rgb]{0,0,1}{3}\end{array}]\textcolor[rgb]{0,0,1}{,}[\begin{array}{c}\textcolor[rgb]{0,0,1}{2.}\\ \textcolor[rgb]{0,0,1}{3.}\\ \textcolor[rgb]{0,0,1}{1.}\\ \textcolor[rgb]{0,0,1}{4.}\end{array}]
\mathrm{cb2},\mathrm{r2},\mathrm{x2}≔\mathrm{CompressedSparseForm}\left(\mathrm{m2}\right)
\mathrm{cb2},\mathrm{r2},\mathrm{x2}≔\left[\begin{array}{c}1\\ 3\\ 4\\ 5\end{array}\right],\left[\begin{array}{c}2\\ 3\\ 1\\ 3\end{array}\right],\left[\begin{array}{c}2.\\ 3.\\ 1.\\ 4.\end{array}\right]
Therefore, to recover the original Matrix, you may need to use the otherdimension option.
\mathrm{FromCompressedSparseForm}\left(\mathrm{cb2},\mathrm{r2},\mathrm{x2}\right)
\left[\begin{array}{ccc}0.& 1.& 0.\\ 2.& 0.& 0.\\ 3.& 0.& 4.\end{array}\right]
\mathrm{FromCompressedSparseForm}\left(\mathrm{cb2},\mathrm{r2},\mathrm{x2},'\mathrm{otherdimension}'=4\right)
\left[\begin{array}{ccc}0.& 1.& 0.\\ 2.& 0.& 0.\\ 3.& 0.& 4.\\ 0.& 0.& 0.\end{array}\right]
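The behavior shown above can be reproduced outside Maple. Below is a minimal Python sketch (an illustration, not part of the Maple library) that builds the compressed sparse column form; it uses 0-based indices, whereas Maple's output above is 1-based. Note that the trailing zero row of m2 leaves no trace in the compressed form.

```python
def to_csc(matrix):
    """Compress a dense matrix (list of rows) into compressed sparse column
    form: column pointers, row indices and nonzero values (all 0-based)."""
    n_rows = len(matrix)
    n_cols = len(matrix[0]) if n_rows else 0
    col_ptr, row_idx, vals = [0], [], []
    for j in range(n_cols):
        for i in range(n_rows):
            if matrix[i][j] != 0:
                row_idx.append(i)
                vals.append(matrix[i][j])
        col_ptr.append(len(vals))  # column j ends here
    return col_ptr, row_idx, vals

m1 = [[0, 1, 0], [2, 0, 0], [3, 0, 4]]
m2 = m1 + [[0, 0, 0]]  # the same matrix with a zero row appended
print(to_csc(m1))  # ([0, 2, 3, 4], [1, 2, 0, 2], [2, 3, 1, 4])
print(to_csc(m2))  # identical: the trailing zero row is not recorded
```

Because the number of rows is only implied by the row indices, reconstruction needs the row count supplied separately, which is exactly what the otherdimension option provides.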
The LinearAlgebra[FromCompressedSparseForm] command was introduced in Maple 17.
Simplify each of the following expressions. Be sure to simplify each of your answers as much as possible. Write any answers greater than one as mixed numbers.
\quad \frac { 3 } { 5 } + \frac { 1 } { 4 }
Make the fractions have the same denominator.
\left ( \frac{3}{5} \right )\left ( \frac{4}{4} \right )=\frac{12}{20}\ \ \ \ \ \ \left ( \frac{1}{4} \right )\left ( \frac{5}{5} \right )= \frac{5}{20}
\frac{12}{20}+\frac{5}{20}=\frac{17}{20}
\frac{17}{20}
\frac{3}{4}-\frac{2}{3}
You will follow the same steps as in part (a), except that you will subtract two thirds from three fourths, rather than adding them.
\frac{1}{12}
5 \frac { 1 } { 2 } + 4 \frac { 1 } { 3 }
You can either convert these mixed numbers to fractions greater than one before adding, or you can add their parts. Remember that your answer should be expressed as a mixed number.
\frac { 7 } { 8 } \cdot \frac { 5 } { 6 }
To multiply fractions, multiply the numerators by one another to find the new numerator, and repeat the same process to find the new denominator.
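These hand computations can be checked with Python's standard fractions module; divmod converts an improper fraction back into a mixed number:

```python
from fractions import Fraction

print(Fraction(3, 5) + Fraction(1, 4))   # 17/20
print(Fraction(3, 4) - Fraction(2, 3))   # 1/12
print(Fraction(7, 8) * Fraction(5, 6))   # 35/48

# 5 1/2 + 4 1/3 as improper fractions, then back to a mixed number
s = Fraction(11, 2) + Fraction(13, 3)
whole, rem = divmod(s.numerator, s.denominator)
print(whole, Fraction(rem, s.denominator))  # 9 5/6
```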
Examine pentagon SMILE at right. Do any of its sides have equal length? How do you know? Be sure to provide convincing evidence. You might want to copy the figure onto graph paper.
Draw a slope triangle for each side of the pentagon. Now can you determine the lengths of the sides?
By the Pythagorean Theorem, if the corresponding legs of two right triangles are congruent, then their hypotenuses will be congruent.
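The figure is not reproduced here, but with hypothetical grid coordinates (an assumption for illustration only) the slope-triangle argument becomes a short computation: the legs of each slope triangle feed the Pythagorean Theorem, and equal squared lengths mean congruent sides.

```python
def side_length_sq(p, q):
    """Squared side length from the legs (dx, dy) of the slope triangle."""
    dx, dy = q[0] - p[0], q[1] - p[1]
    return dx * dx + dy * dy

# Hypothetical coordinates for two vertices pairs of the pentagon
S, M, I, L = (0, 0), (3, 4), (8, 4), (11, 0)
print(side_length_sq(S, M))  # 3^2 + 4^2 = 25, so side SM has length 5
print(side_length_sq(I, L))  # 3^2 + (-4)^2 = 25: same length, congruent sides
```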
Stable vector bundle
In mathematics, a stable vector bundle is a (holomorphic or algebraic) vector bundle that is stable in the sense of geometric invariant theory. Any holomorphic vector bundle may be built from stable ones using Harder–Narasimhan filtration. Stable bundles were defined by David Mumford in Mumford (1963) and later built upon by David Gieseker, Fedor Bogomolov, Thomas Bridgeland and many others.
One of the motivations for analyzing stable vector bundles is their nice behavior in families. In fact, moduli spaces of stable vector bundles can be constructed using the Quot scheme in many cases, whereas the stack of vector bundles {\displaystyle \mathbf {B} GL_{n}} is an Artin stack whose underlying set is a single point.
Here's an example of a family of vector bundles which degenerate poorly. If we tensor the Euler sequence of {\displaystyle \mathbb {P} ^{1}} by {\displaystyle {\mathcal {O}}(1)}, there is an exact sequence {\displaystyle 0\to {\mathcal {O}}(-1)\to {\mathcal {O}}\oplus {\mathcal {O}}\to {\mathcal {O}}(1)\to 0} which represents a non-zero element {\displaystyle v\in {\text{Ext}}^{1}({\mathcal {O}}(1),{\mathcal {O}}(-1))\cong k},[2] since the trivial exact sequence representing the {\displaystyle 0} vector is {\displaystyle 0\to {\mathcal {O}}(-1)\to {\mathcal {O}}(-1)\oplus {\mathcal {O}}(1)\to {\mathcal {O}}(1)\to 0}.
If we consider the family of vector bundles {\displaystyle E_{t}} given by the extension corresponding to {\displaystyle t\cdot v} for {\displaystyle t\in \mathbb {A} ^{1}}, there are short exact sequences {\displaystyle 0\to {\mathcal {O}}(-1)\to E_{t}\to {\mathcal {O}}(1)\to 0} which have Chern classes {\displaystyle c_{1}=0,c_{2}=0} generically, but have {\displaystyle c_{1}=0,c_{2}=-1} at the origin.
at the origin. This kind of jumping of numerical invariants does not happen in moduli spaces of stable vector bundles.[3]
Stable vector bundles over curves
The slope of a holomorphic vector bundle W over a nonsingular algebraic curve (or over a Riemann surface) is the rational number μ(W) = deg(W)/rank(W). A bundle W is stable if and only if {\displaystyle \mu (V)<\mu (W)} for all proper non-zero subbundles V of W, and is semistable if {\displaystyle \mu (V)\leq \mu (W)} for all such V. Informally this says that a bundle is stable if it is "more ample" than any proper subbundle, and is unstable if it contains a "more ample" subbundle.
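As a standard worked illustration of these definitions (not taken from this article itself), consider direct sums of line bundles on {\displaystyle \mathbb {P} ^{1}}:

```latex
% Slopes of line bundles and their direct sums on X = P^1:
\mu\big(\mathcal{O}(a)\big) = a, \qquad
\mu\big(\mathcal{O}(a)\oplus\mathcal{O}(b)\big) = \frac{a+b}{2}.
% O(-1) \oplus O(1) is unstable: the subbundle O(1) satisfies
\mu\big(\mathcal{O}(1)\big) = 1 > 0 = \mu\big(\mathcal{O}(-1)\oplus\mathcal{O}(1)\big).
% O \oplus O is semistable but not stable: every sub-line-bundle O(a) has
% a <= 0, while either factor O realizes equality with slope 0.
```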
If W and V are semistable vector bundles and μ(W) > μ(V), then there are no nonzero maps W → V.
Mumford proved that the moduli space of stable bundles of given rank and degree over a nonsingular curve is a quasiprojective algebraic variety. The cohomology of the moduli space of stable vector bundles over a curve was described by Harder & Narasimhan (1975) using algebraic geometry over finite fields and by Atiyah & Bott (1983) using the Narasimhan–Seshadri approach.
Stable vector bundles in higher dimensions
If X is a smooth projective variety of dimension m and H is a hyperplane section, then a vector bundle (or a torsion-free sheaf) W is called stable (or sometimes Gieseker stable) if
{\displaystyle {\frac {\chi (V(nH))}{{\hbox{rank}}(V)}}<{\frac {\chi (W(nH))}{{\hbox{rank}}(W)}}{\text{ for }}n{\text{ large}}}
for all proper non-zero subbundles (or subsheaves) V of W, where χ denotes the Euler characteristic of an algebraic vector bundle and the vector bundle V(nH) means the n-th twist of V by H. W is called semistable if the above holds with < replaced by ≤.
Slope stability
For bundles on curves the stability defined by slopes and by growth of Hilbert polynomial coincide. In higher dimensions, these two notions are different and have different advantages. Gieseker stability has an interpretation in terms of geometric invariant theory, while μ-stability has better properties for tensor products, pullbacks, etc.
Let X be a smooth projective variety of dimension n, H its hyperplane section. A slope of a vector bundle (or, more generally, a torsion-free coherent sheaf) E with respect to H is a rational number defined as
{\displaystyle \mu (E):={\frac {c_{1}(E)\cdot H^{n-1}}{\operatorname {rk} (E)}}}
where c1 is the first Chern class. The dependence on H is often omitted from the notation.
A torsion-free coherent sheaf E is μ-semistable if for any nonzero subsheaf F ⊆ E the slopes satisfy the inequality μ(F) ≤ μ(E). It's μ-stable if, in addition, for any nonzero subsheaf F ⊆ E of smaller rank the strict inequality μ(F) < μ(E) holds. This notion of stability may be called slope stability, μ-stability, occasionally Mumford stability or Takemoto stability.
For a vector bundle E the following chain of implications holds: E is μ-stable ⇒ E is stable ⇒ E is semistable ⇒ E is μ-semistable.
Harder-Narasimhan filtration
Let E be a vector bundle over a smooth projective curve X. Then there exists a unique filtration by subbundles
{\displaystyle 0=E_{0}\subset E_{1}\subset \ldots \subset E_{r+1}=E}
such that the associated graded components Fi := Ei+1/Ei are semistable vector bundles and the slopes decrease, μ(Fi) > μ(Fi+1). This filtration was introduced in Harder & Narasimhan (1975) and is called the Harder-Narasimhan filtration. Two vector bundles with isomorphic associated gradeds are called S-equivalent.
On higher-dimensional varieties the filtration also always exists and is unique, but the associated graded components may no longer be bundles. For Gieseker stability the inequalities between slopes should be replaced with inequalities between Hilbert polynomials.
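On {\displaystyle \mathbb {P} ^{1}} every vector bundle splits as a direct sum of line bundles (Grothendieck's theorem), and in that special case the Harder-Narasimhan graded pieces are just the isotypic summands {\displaystyle {\mathcal {O}}(a)^{m_{a}}} ordered by strictly decreasing slope. A small Python sketch of this special case (illustrative only):

```python
from collections import Counter

def hn_pieces(degrees):
    """For E = O(a_1) + ... + O(a_k) on P^1, return the Harder-Narasimhan
    graded pieces as (slope, multiplicity) pairs, slopes strictly decreasing."""
    counts = Counter(degrees)
    return [(a, counts[a]) for a in sorted(counts, reverse=True)]

print(hn_pieces([1, -1]))        # [(1, 1), (-1, 1)]
print(hn_pieces([0, 0]))         # [(0, 2)]  -- already semistable
print(hn_pieces([2, 2, 0, -3]))  # [(2, 2), (0, 1), (-3, 1)]
```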
Kobayashi–Hitchin correspondence
The Narasimhan–Seshadri theorem says that stable bundles on a projective nonsingular curve are the same as those that have projectively flat unitary irreducible connections. For bundles of degree 0, projectively flat connections are flat, and thus stable bundles of degree 0 correspond to irreducible unitary representations of the fundamental group.
Kobayashi and Hitchin conjectured an analogue of this in higher dimensions. It was proved for projective nonsingular surfaces by Donaldson (1985), who showed that in this case a vector bundle is stable if and only if it has an irreducible Hermitian–Einstein connection.
It's possible to generalize (μ-)stability to non-smooth projective schemes and more general coherent sheaves using the Hilbert polynomial. Let X be a projective scheme, d a natural number, E a coherent sheaf on X with dim Supp(E) = d. Write the Hilbert polynomial of E as {\displaystyle P_{E}(m)=\sum _{i=0}^{d}\alpha _{i}(E)\,m^{i}/i!}. Define the reduced Hilbert polynomial pE := PE/αd(E).
A coherent sheaf E is semistable if the following two conditions hold:[4]
E is pure of dimension d, i.e. all associated primes of E have dimension d;
for any proper nonzero subsheaf F ⊆ E the reduced Hilbert polynomials satisfy pF(m) ≤ pE(m) for large m.
A sheaf is called stable if the strict inequality pF(m) < pE(m) holds for large m.
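For a line bundle {\displaystyle {\mathcal {O}}(a)} on {\displaystyle \mathbb {P} ^{1}} the Hilbert polynomial is P(m) = m + a + 1, so α1 = 1 and the reduced Hilbert polynomial is P itself; comparing reduced Hilbert polynomials for large m then reduces to comparing the degrees a, i.e. the slopes, matching the fact that Gieseker and slope (semi)stability coincide on curves. A small illustrative sketch:

```python
def reduced_hilbert_p1(a, m):
    """Reduced Hilbert polynomial of O(a) on P^1: P(m) = m + a + 1 has
    leading coefficient alpha_1 = 1, so p(m) = P(m) itself."""
    return m + a + 1

# p_F(m) <= p_E(m) for large m  <=>  deg F <= deg E, i.e. the slope inequality
big_m = 10**6
print(reduced_hilbert_p1(-1, big_m) < reduced_hilbert_p1(1, big_m))  # True
```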
Let Cohd(X) be the full subcategory of coherent sheaves on X with support of dimension ≤ d. The slope of an object F in Cohd may be defined using the coefficients of the Hilbert polynomial as {\displaystyle {\hat {\mu }}_{d}(F)=\alpha _{d-1}(F)/\alpha _{d}(F)} if αd(F) ≠ 0, and 0 otherwise. The dependence of {\displaystyle {\hat {\mu }}_{d}} on d is usually omitted from the notation.
A coherent sheaf E with {\displaystyle \operatorname {dim} \,\operatorname {Supp} (E)=d} is called μ-semistable if the following two conditions hold:[5]
the torsion of E is in dimension ≤ d-2;
for any nonzero subobject F ⊆ E in the quotient category Cohd(X)/Cohd-1(X) we have {\displaystyle {\hat {\mu }}(F)\leq {\hat {\mu }}(E)}.
E is μ-stable if the strict inequality holds for all proper nonzero subobjects of E.
Note that Cohd is a Serre subcategory for any d, so the quotient category exists. A subobject in the quotient category in general doesn't come from a subsheaf, but for torsion-free sheaves the original definition and the general one for d = n are equivalent.
There are also other directions for generalizations, for example Bridgeland's stability conditions.
One may define stable principal bundles in analogy with stable vector bundles.
^ Note that {\displaystyle \Omega _{\mathbb {P} ^{1}}^{1}\cong {\mathcal {O}}(-2)} from the adjunction formula on the canonical sheaf.
^ Since there are isomorphisms {\displaystyle {\text{Ext}}^{1}({\mathcal {O}}(1),{\mathcal {O}}(-1))\cong {\text{Ext}}^{1}({\mathcal {O}},{\mathcal {O}}(-2))\cong H^{1}(\mathbb {P} ^{1},\omega _{\mathbb {P} ^{1}})}.
^ Faltings, Gerd. "Vector bundles on curves" (PDF). Archived (PDF) from the original on 4 March 2020.
^ Huybrechts, Daniel; Lehn, Manfred (1997). The Geometry of Moduli Spaces of Sheaves (PDF), Definition 1.2.4.
Atiyah, Michael Francis; Bott, Raoul (1983), "The Yang-Mills equations over Riemann surfaces", Philosophical Transactions of the Royal Society of London. Series A. Mathematical and Physical Sciences, 308 (1505): 523–615, doi:10.1098/rsta.1983.0017, ISSN 0080-4614, JSTOR 37156, MR 0702806
Donaldson, S. K. (1985), "Anti self-dual Yang-Mills connections over complex algebraic surfaces and stable vector bundles", Proceedings of the London Mathematical Society, Third Series, 50 (1): 1–26, doi:10.1112/plms/s3-50.1.1, ISSN 0024-6115, MR 0765366
Huybrechts, Daniel; Lehn, Manfred (2010), The Geometry of Moduli Spaces of Sheaves, Cambridge Mathematical Library (2nd ed.), Cambridge University Press, ISBN 978-0521134200
Mumford, David; Fogarty, J.; Kirwan, F. (1994), Geometric invariant theory, Ergebnisse der Mathematik und ihrer Grenzgebiete (2) [Results in Mathematics and Related Areas (2)], vol. 34 (3rd ed.), Berlin, New York: Springer-Verlag, ISBN 978-3-540-56963-3, MR 1304906 especially appendix 5C.
Narasimhan, M. S.; Seshadri, C. S. (1965), "Stable and unitary vector bundles on a compact Riemann surface", Annals of Mathematics, Second Series, 82 (3): 540–567, doi:10.2307/1970710, ISSN 0003-486X, JSTOR 1970710, MR 0184252
The performance of semi-rigid steel frame structure in progressive collapse | JVE Journals
Hengchao Chen1
1Zunyi Vocational and Technical College, Zunyi, Guizhou, 563000, China
Traditional welded joints in steel frame design can be treated as completely rigid. High-strength bolted end-plate joints have become popular in multi-story frames because they offer shorter assembly times, better seismic performance and good ductility. However, the behavior of this kind of structural model under progressive collapse conditions has not been analyzed, and this paper aims to fill that gap. ANSYS is applied to analyze the capacity of unilateral extended end-plate connections by numerical simulation, comparing them with rigid joints. The moment-rotation behavior of the semi-rigid joint is then carried into a numerical analysis in SAP2000. Separate frames with rigid joints and with semi-rigid joints are established, and the performance of each frame under progressive collapse is analyzed. The results show that the collapse resistance of the steel frame with semi-rigid joints is superior to that of the rigid frame structure: it has better ductility, which is conducive to the further development of the catenary stage.
Keywords: high strength bolt end-plate joints, semi-rigid joint, progressive collapse.
Progressive collapse refers to the process in which small, accident-induced local damage in a structure leaves the upper loads unbalanced, so that the damage spreads through the structure and eventually leads to widespread collapse. Progressive collapse often has serious social consequences. A large amount of related research has been carried out at home and abroad in recent years, with the purpose of enhancing the ability of structures to resist progressive collapse after local failure occurs [1].
At present, related studies mostly assume that beam-column connections are either fully rigid or ideally pinned, which is inconsistent with actual structures [2]. Connections in practical structures usually lie between rigid and pinned, and are known as semi-rigid joints.
2. The performance of semi-rigid joint
The stress distribution in high-strength bolted end-plate joints is complex because of the contact conditions between elements, the applied bolt pretension, the plastic development of the metal, and large-deformation effects. ANSYS is used for elastic-plastic finite element analysis of these joints, accounting for material nonlinearity, geometric nonlinearity and nonlinear factors such as contact status. This yields a more accurate picture of the stress and deformation of the plates, the formation and development of prying forces, and the distribution of tensile force in the high-strength bolts [3].
The end plate and bolts are modeled with SOLID185 hexahedral solid elements. CONTA174 contact elements and TARGE170 target elements are set up between the column flange and the end plate, the nuts and the column flange, the nuts and the end plate, and the bolt shanks and the hole walls.
The steel plates have a yield strength (
{\sigma }_{y}
) of 345 MPa, and the ultimate tensile strength (
{\sigma }_{u}
) is taken as 1.45 times the yield strength. The grade 10.9 high-strength bolts of 20 mm diameter have a yield strength of 940 MPa and a tensile strength of 1050 MPa.
The column section is H400×350×12×16 and the beam section is H400×300×12×16. The main dimensions of the model and the end plate are shown in Figs. 1 and 2. A solid model of a frame of the same size with rigid joints was also established. The final finite element model of the semi-rigid frame is shown in Fig. 3.
Fig. 1. The model of connection
Fig. 2. Structure of end-plate
Fig. 3. The model of finite element
The load-displacement curves of the semi-rigid and rigid frames are shown in Fig. 4.
From the diagram, the initial stiffness of the rigid joint is 9.8 % higher than that of the semi-rigid joint, and its ultimate bearing capacity is 7.1 % higher. Although the difference in ultimate bearing capacity is smaller than that in initial stiffness, the bearing capacity of the rigid frame is clearly greater than that of the semi-rigid frame. However, brittle fracture in the later stages of the analysis, for example from welding defects in the rigid joint, eventually leads to joint failure. The semi-rigid joint fails later than the rigid joint because of its better ductility, which favors the development of the catenary mechanism in the frame structure.
Fig. 4. Force-displacement curve of connection
5. The resistance of frame structures in progressive collapse. Material and methods
In order to determine the influence of semi-rigid joints on progressive collapse resistance, two frames were established, one with semi-rigid joints and one with rigid joints. Nonlinear dynamic analysis and nonlinear static analysis were applied and the results were compared.
According to the structural failure criterion in the United States UFC guidelines (2010) [4], progressive collapse is deemed to occur when the structure is destroyed or the beam-end rotation exceeds 0.2 rad. At the same time, following the GSA guidelines [5], the yield strength of the steel is multiplied by an amplification coefficient of 1.1 during the analysis to account for strain-rate effects.
The finite element analysis software SAP2000 is used to establish a steel frame with 4×4 bays and 5 stories; the height of each story is 3.6 m, and the column spacing in both the transverse and longitudinal directions is 9 m. The column section is 400 mm×350 mm×12 mm×16 mm, the beam section is 400 mm×300 mm×12 mm×16 mm, and all the steel is Q345. According to FEMA 356 [3], M3 plastic hinges are assigned at the ends of the beams, and P-M2-M3 plastic hinges at the ends of the columns. The self-weight of each component is included. According to the GSA [5], the load is G = DL+0.25LL (DL is the dead load, LL the live load). The frame model is shown in Fig. 5.
Fig. 5. Finite element analysis models
The alternate load path method is used to analyze progressive collapse. A structure with local initial damage is formed by removing one or several primary load-bearing components [6]. The remaining structure is then analyzed by the methods of mechanics, and its behavior during progressive collapse is simulated numerically. This paper only analyzes the case in which an outside column fails. The column removal time is taken as no more than 10 % of the vertical fundamental period of the remaining structure. Rayleigh damping is used as the structural damping,
\left[C\right]=\alpha \left[M\right]+\beta \left[K\right]
[7]. Its proportionality coefficients can be determined from the first two natural frequencies of the remaining structure:
\alpha =\frac{4\pi \xi {f}_{1}{f}_{2}}{{f}_{1}+{f}_{2}},
\beta =\frac{\xi }{\pi \left({f}_{1}+{f}_{2}\right)},
where [
M
] and [
K
] represent the mass and stiffness matrices respectively,
\alpha
and
\beta
the mass- and stiffness-proportional coefficients,
\xi
the modal damping ratio, taken as 0.02, and
{f}_{1}
and
{f}_{2}
the first two natural frequencies of the remaining structure.
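The two coefficient formulas above can be evaluated directly. The sketch below (with made-up frequencies, not values taken from the paper) computes α and β for [C] = α[M] + β[K]:

```python
from math import pi

def rayleigh_coefficients(f1, f2, xi=0.02):
    """Mass- and stiffness-proportional Rayleigh damping coefficients from
    the first two natural frequencies f1, f2 (Hz) and damping ratio xi."""
    alpha = 4 * pi * xi * f1 * f2 / (f1 + f2)
    beta = xi / (pi * (f1 + f2))
    return alpha, beta

# Hypothetical first two frequencies of the remaining structure
alpha, beta = rayleigh_coefficients(1.2, 3.5)
print(alpha, beta)
```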
6.1. Nonlinear static analysis results
The development of plastic hinges in the rigid and semi-rigid frames under pushdown analysis is shown in Fig. 6.
From the development and distribution of the plastic hinges, damage in both the rigid and the semi-rigid frame occurs only in the bays above the failed column, and no plastic hinges occur in the other beams. When the joint displacement of the rigid frame meets the failure criterion, its plastic hinges are still only in the LS (life safety) stage of the plastic hinge curve. By the time the semi-rigid frame meets the failure criterion, some plastic hinges have developed into the CP (collapse prevention) stage, and one plastic hinge is even close to the failure stage.
Based on the above results, it can be concluded that under collapse loading the failure pattern of the semi-rigid frame is more favorable than that of the rigid frame: there are more plastic hinges in the failed bay, and they are more fully developed. This means the ductility of the frame is higher and more energy is dissipated. To a certain extent this helps the overall structure consume more energy through plastic redistribution of internal forces during progressive collapse.
Fig. 6. Plastic hinge result of frames
a) Rigid frame
b) Semi-rigid frame
6.2. Dynamic analysis results
The dynamic response of structures during progressive collapse can be characterized by the vertical displacement of the failure joint and the inertial force of the upper structure [1].
The vertical displacement time history curves of the failure joint of the rigid and semi-rigid frames are shown in Fig. 7.
Fig. 7. The displacement-time curve of failure point
As shown in the figure, the maximum vertical displacement of the rigid frame is 75.48 mm, occurring at t = 0.07 s, while the maximum vertical displacement of the semi-rigid frame is 121.52 mm at t = 0.12 s. In addition, the displacement time history of the failure joint of the semi-rigid frame fluctuates before 1.3 s, after which it stabilizes at about 89.42 mm; the rigid frame stabilizes after only 0.5 s, finally settling at about 57.67 mm. The results show that the vertical displacement of the rigid frame is much smaller than that of the semi-rigid frame. At the same time, the damaged rigid structure reaches its maximum response faster and stabilizes more easily, which means its stiffness is much larger than that of the semi-rigid frame and its resistance to progressive collapse is better. However, the semi-rigid frame has better ductility.
The axial force of the right-side beam above the failure joint is shown in Fig. 8.
Fig. 8. The force-time curve of left beam
a) Semi-rigid frame
b) Rigid frame
The axial force of the right-side beam above the failure joint in the semi-rigid frame shows obvious vibration and increases greatly, finally entering a tensile state. Thus the beam-column substructure has entered the catenary stage, in which the beams on both sides resist progressive collapse through tensile (tie) action. At this point, if the upper load were increased further, structural failure would be likely. The axial force of the same beam in the rigid frame remains compressive, which shows that the structure resists collapse through the flexural beam mechanism and is still some distance from progressive collapse. From the above analysis, the semi-rigid frame has better ductility, and the catenary effect develops more fully under progressive collapse conditions; it consumes more energy, although its vulnerability is greater than that of the rigid frame.
Based on the finite element analyses of the rigid and semi-rigid frames, the following conclusions are obtained:
1) The failure joint displacement of the semi-rigid frame is the largest, as is its oscillation amplitude, which shows that it has better energy-dissipation capability. But because of its smaller stiffness, the failure joint displacement becomes too large for lack of effective support. The analysis of the internal forces in the beams shows that this kind of frame is more likely to enter the catenary stage under collapse conditions, which helps dissipate energy.
2) The rigid frame has higher stiffness. Beams with rigid joints can provide effective support for the structure above the failure joint, so the structure reaches a stable state more easily. But its deformation capacity is poorer and its energy-dissipation capability is lower than that of the semi-rigid frame; brittle failure often occurs under progressive collapse conditions.
3) The vulnerability of the semi-rigid frame is larger, so it should be used carefully in collapse-resistant design. Rigid and semi-rigid joints should be used together in progressive collapse protection.
Minimum Design Loads for Buildings and Other Structures. ASCE/SEI 7-05, American Society of Civil Engineers, USA, 2006, p. 207-208.
Hendrick A., Murray T. M. Column Web Compression Strength at End-Plate Connections. Research Report No. FSEL/AISC 83-01, Fears Structural Engineering Laboratory, School of Civil Engineering and Environmental Science, University of Oklahoma, Norman, 1983.
Prestandard and Commentary for the Seismic Rehabilitation of Buildings. FEMA 356.
Design of Buildings to Resist Progressive Collapse. UFC4-023-10.
Progressive Collapse Analysis and Design Guidelines for New Federal Office Buildings and Major Modernization Projects (GSA 2003).
Gu Xianglin. Design methods for buildings to resist progressive collapse. Structural Engineers, Vol. 25, Issue 5, 2009, p. 142-148.
Qian Jiaru, Hu Xiaobin. Dynamic effect analysis of progressive collapse of multi-story steel frames. Journal of Earthquake Engineering and Engineering Vibration, Vol. 28, Issue 2, 2008, p. 8-14.
Wheeler A. T., Clarke M. J. FE modeling of four-bolt tubular moment end-plate connections. Journal of Structural Engineering, Vol. 126, Issue 7, p. 816-822.
Li Guoqiang, Shi Wenlong. Design of Steel Frames with Semi-Rigid Connections. China Architecture and Building Press, Beijing, 2009.
Izzuddin B. A., Vlassis A. G., Nethercot D. A. Progressive collapse of multi-storey buildings due to sudden column loss. Part 1: simplified assessment framework. Engineering Structures, Vol. 30, Issue 5, 2008, p. 1308-1318.
Price floor using Linear Gaussian two-factor model - MATLAB floorbylg2f
FloorPrice = floorbylg2f(ZeroCurve,a,b,sigma,eta,rho,Strike,Maturity)
FloorPrice = floorbylg2f(___,Name,Value)
FloorPrice = floorbylg2f(ZeroCurve,a,b,sigma,eta,rho,Strike,Maturity) returns the floor price for a two-factor additive Gaussian interest-rate model.
FloorPrice = floorbylg2f(___,Name,Value) adds optional name-value pair arguments.
Use the optional name-value pair argument, Notional, to pass a schedule to compute the price for an amortizing floor.
Define the ZeroCurve, a, b, sigma, eta, and rho parameters to compute the floor price.
CurveDates = daysadd(Settle,360*ZeroTimes,1);
FloorMaturity = daysadd(Settle,360*[1:5 7 10 15 20 25 30],1);
Strike = [0.035 0.037 0.038 0.039 0.040 0.042 0.044 0.046 0.047 0.047 0.047]';
Price = floorbylg2f(irdc,a,b,sigma,eta,rho,Strike,FloorMaturity)
Price = 11×1
Define the ZeroCurve, a, b, sigma, eta, rho, and Notional parameters for the amortizing floor.
% Define ZeroCurve
CurveDates = daysadd(Settle,360*ZeroTimes);
% Define a, b, sigma, eta, and rho
% Define the amortizing floors
Notional = {{'15-Dec-2012' 100;'15-Dec-2017' 70;'15-Dec-2022' 40;'15-Dec-2037' 10}};
% Price the amortizing floors
Price = floorbylg2f(irdc,a,b,sigma,eta,rho,Strike,FloorMaturity,'Notional',Notional)
ZeroCurve — Zero curve for Linear Gaussian two-factor model
Zero curve for the Linear Gaussian two-factor model, specified using IRDataCurve or RateSpec.
a — Mean reversion for first factor for Linear Gaussian two-factor model
Mean reversion for the first factor for the Linear Gaussian two-factor model, specified as a scalar.
b — Mean reversion for second factor for Linear Gaussian two-factor model
Mean reversion for the second factor for the Linear Gaussian two-factor model, specified as a scalar.
sigma — Volatility for first factor for Linear Gaussian two-factor model
Volatility for the first factor for the Linear Gaussian two-factor model, specified as a scalar.
eta — Volatility for second factor for Linear Gaussian two-factor model
Volatility for the second factor for the Linear Gaussian two-factor model, specified as a scalar.
rho — Correlation of factors
Correlation of the factors, specified as a scalar.
Strike — Floor strike price
Floor strike price, specified as a nonnegative value using a NumFloors-by-1 vector of floor strike prices.
Maturity — Floor maturity date
serial date number | vector of serial date numbers | date character vector
Floor maturity date, specified using a NumFloors-by-1 vector of serial date numbers or date character vectors.
Data Types: single | double | char | cell
Example: Price = floorbylg2f(irdc,a,b,sigma,eta,rho,Strike,FloorMaturity,'Reset',1,'Notional',100)
Reset — Frequency of floor payments per year
2 (default) | positive integer from the set [1,2,3,4,6,12] | vector of positive integers from the set [1,2,3,4,6,12]
Frequency of floor payments per year, specified as the comma-separated pair consisting of 'Reset' and a positive integer from the set [1,2,3,4,6,12], or a NumFloors-by-1 vector of such integers.
Notional — Notional value of floor
Notional value of the floor, specified as a NINST-by-1 vector of notional principal amounts, or a NINST-by-1 cell array where each element is a NumDates-by-2 cell array whose first column is dates and whose second column is the associated principal amount. The date indicates the last day that the principal value is valid.
FloorPrice — Floor price
Floor price, returned as a scalar or a NumFloors-by-1 vector.
Each floorlet of the floor pays
\mathrm{max}\left(FloorRate-CurrentRate,0\right)
The following defines the two-factor additive Gaussian interest-rate model, given the ZeroCurve, a, b, sigma, eta, and rho parameters:
r\left(t\right)=x\left(t\right)+y\left(t\right)+\varphi \left(t\right)
dx\left(t\right)=-ax\left(t\right)dt+\sigma d{W}_{1}\left(t\right), x\left(0\right)=0
dy\left(t\right)=-by\left(t\right)dt+\eta d{W}_{2}\left(t\right), y\left(0\right)=0
d{W}_{1}\left(t\right)d{W}_{2}\left(t\right)=\rho dt
where \left({W}_{1},{W}_{2}\right) is a two-dimensional Brownian motion with correlation ρ and ϕ is a function chosen to match the initial zero curve.
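The factor dynamics above can be simulated directly. The sketch below is illustrative only; it is not how floorbylg2f prices floors, which uses the analytic formulas of Brigo and Mercurio. It applies Euler-Maruyama steps to the two correlated factors and omits the deterministic shift ϕ:

```python
import math
import random

def simulate_factors(a, b, sigma, eta, rho, T=1.0, n=1000, seed=0):
    """Euler-Maruyama simulation of dx = -a*x dt + sigma dW1 and
    dy = -b*y dt + eta dW2 with corr(dW1, dW2) = rho; returns x(T) + y(T)."""
    rng = random.Random(seed)
    dt = T / n
    x = y = 0.0
    for _ in range(n):
        z1 = rng.gauss(0.0, 1.0)
        # build a normal with correlation rho to z1
        z2 = rho * z1 + math.sqrt(1.0 - rho * rho) * rng.gauss(0.0, 1.0)
        x += -a * x * dt + sigma * math.sqrt(dt) * z1
        y += -b * y * dt + eta * math.sqrt(dt) * z2
    return x + y

# With zero volatilities both factors stay at their initial value 0
print(simulate_factors(0.5, 0.3, 0.0, 0.0, -0.7))  # 0.0
```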
[1] Brigo, D. and F. Mercurio, Interest Rate Models - Theory and Practice. Springer Finance, 2006.
capbylg2f | swaptionbylg2f | LinearGaussian2F
Electron Configurations | Brilliant Math & Science Wiki
Electron configuration of an atom tells us how the electrons are arranged in the various shells of the atom. It gives an idea of its valency, which decides how the atom will react with other atoms. The simplest system for determining the electron configuration is the
\text K,\text L,\text M,\text N
system, devised by Bohr.
According to this, the shells of an atom are named
\text K,\text L,\text M,\text N, . . . ,
etc., or
1, 2, 3, 4, . . . ,
etc., and the maximum number of electrons that a shell can accommodate is given by the formula
2n^2,
where
n=(\text{number of the shell})
. For example, the
3^\text{rd}
shell can accommodate
2\times 3^2=18
electrons. To become familiar with valency, read Octet Rule.
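The 2n² rule can be sketched in Python. Note that real configurations follow the Aufbau subshell order, so this naive shell-by-shell filling (an illustration of the K, L, M, N system only) is exact only for light atoms:

```python
def shell_capacity(n):
    """Maximum number of electrons in shell n (K=1, L=2, M=3, ...): 2n^2."""
    return 2 * n * n

def bohr_config(electrons):
    """Naive K, L, M, N filling by the 2n^2 rule (ignores subshell order)."""
    config, n = [], 1
    while electrons > 0:
        take = min(electrons, shell_capacity(n))
        config.append(take)
        electrons -= take
        n += 1
    return config

print([shell_capacity(n) for n in (1, 2, 3, 4)])  # [2, 8, 18, 32]
print(bohr_config(11))  # sodium: [2, 8, 1]
```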
Symmetry of Wave Functions
We have to construct the wave function for a system of identical particles so that it reflects the requirement that the particles are indistinguishable from each other. Mathematically, this means interchanging the particles occupying any pair of states should not change the probability density of the system. This simple statement has the enormous consequence of dividing all particles in nature into one of two classes.
An example with two non-interacting identical particles will illustrate the point. The probability density of the two-particle wave function must be identical to that of the wave function in which the two particles have been interchanged.
In fact, all particles with half integer spins, such as electrons, protons, neutrons, etc., are described by anti-symmetric wave functions and obey the exclusion principle. These particles are called fermions because they obey a statistical distribution law discovered by Fermi and Dirac. Particles with integer spins, such as photons, alpha particles, etc., are described by symmetric wave functions and do not obey the exclusion principle. These particles are called bosons because they follow Bose-Einstein statistics.
The exclusion principle states that two particles with half-integer spins cannot occupy the same quantum state at the same time. In other words, two electrons in the same orbital cannot have the same spin quantum number, or simply spin.
Half-integer spin: Spin is an intrinsic property of all elementary particles. Fermions, the particles that constitute ordinary matter, have half-integer spin.
For example, in a helium atom we have two electrons, and they do not have the same spin; i.e., a helium atom:
\begin{aligned} \text{would be like this: }\ce{_2He}:& \ \ \boxed{\uparrow\downarrow}\\ \text{and not like this: } \ce{_2He}:& \ \ \boxed{\uparrow\uparrow} \end{aligned}
Note: The spin quantum number has two values, \frac{1}{2} and -\frac{1}{2}, which means that the spin of an electron can be clockwise, represented as \uparrow, or anti-clockwise, represented as \downarrow.
Main Article: orbitals and quantum numbers.
The quantum numbers govern how the particles behave under certain environmental conditions, and they also describe how an electron behaves in a certain orbital. From Pauli's exclusion principle, we know that two particles cannot be present in the same quantum state, and thus there is a set of 4 quantum numbers which distinguish the quantum behavior of an electron. They are listed as follows:
1. principal quantum number:
The principal quantum number indicates the energy level at which the electron is present. It is denoted by the letter n, where n\in \mathbb N, the natural numbers. As the value of the principal quantum number increases, the energy level also increases, and thus the value of n indicates the shell (will be discussed soon) in which the electron is present. For instance,
\begin{array}{|c|c|c|c|c|} \hline \text{Value of }n&1 &2& 3 & \cdots\\ \hline \text{Designation of the Shell}& \text K &\text L & \text M & \cdots\\ \hline \end{array}
2. angular quantum number:
The angular momentum quantum number describes the shape of an orbital and tells us which subshells are present in the principal shell. It is denoted by the letter l and takes values from 0 to (n-1); that is, if the principal quantum number of an electron is n, then the possible values are l=0,1,2,3,\cdots,(n-1).
Find the possible values of the angular quantum number if the electron is present in the \text M energy level.
As the electron is in the \text M shell, the value of its principal quantum number is 3. Thus the possible angular quantum number values range from 0 to (3-1)=2, i.e. l=0,1,2.\ _\square
But what do these values represent? As said earlier, it represents the subshell (will be discussed soon) in which the electron is present.
\begin{array}{|c|c|c|c|c|c|} \hline \text{Value of }l&0 &1 & 2 & 3 & \cdots\\ \hline \text{Designation of the Subshell}& \text s &\text p & \text d & \text f & \cdots\\ \hline \end{array}
If an electron is present in the \text N shell, find the total number of possible angular quantum numbers in this state.
3. magnetic quantum number:
The magnetic quantum number tells us about the orbital that an electron occupies--it determines how many orbitals there are as well as their orientation within a subshell. It is denoted by m_l, and it describes the behavior of an electron under the influence of a magnetic field (like Earth's). We know that the movement of electric charge can generate a magnetic field, and under the influence of an external magnetic field the electrons tend to orient themselves in certain regions around the nucleus (called orbitals), which is why this quantum number gives the number of orbitals in a particular subshell. The values of the magnetic quantum number depend upon the angular quantum number l. For example, if the angular quantum number of an electron is l, then the magnetic quantum numbers range as follows: m_l= -l, (-l+1), \ldots, 0, \ldots, (l-1), l.
Hence there are 2l+1 possible values of m_l for a given l; i.e., there will be 2l+1 orbitals in that subshell.
Find all possible values of the magnetic quantum number for an electron present in the \text M shell.
As seen earlier, the principal quantum number of an electron present in the \text M shell is 3, and the possible values of l for n=3 are l=0,1,2. So, the values of m_l for n=3 will be as follows:
\begin{array}{|c|c|} \hline \text{Values of }l & \text{Values of }m_l\\ \hline l=0 & m_l = 0\\ \hline l=1 & m_l = -1, 0, 1\\ \hline l=2 & m_l = -2, -1, 0, 1, 2\\ \hline \end{array}
If an electron is present in the f orbital of the \text N shell, what are the total possible magnetic quantum numbers?
4. spin quantum number:
In atomic physics, the spin quantum number parameterizes the intrinsic angular momentum (or spin angular momentum, or simply spin) of a given particle. It is denoted by m_s or, rarely, as m_{m_l}. It was found that the electrons not only revolve around the nucleus but also spin around their own axes, and thus the spin quantum number has its own significance.
The spin of an electron can only be of two types, clockwise or counter-clockwise, and the values of the spins are denoted by \frac 12 when clockwise and -\frac 12 when counter-clockwise. Sometimes they are also denoted by arrows:
\begin{array}{|c|c|} \hline \text{clockwise} & \uparrow\\ \hline \text{counter-clockwise} & \downarrow\\ \hline \end{array}
According to quantum mechanics, each subshell of an atom corresponds to an energy level. In order of increasing energy, the subshells are as follows:
1s, 2s, 2p, 3s, 3p, 4s, 3d, 4p, 5s, 4d, 5p, 6s, 4f, 5d, 6p, 7s, 5f, 6d, 7p, \ldots.
The electronic configuration of the electrons in a particular subshell is denoted by pA^x, where p is the principal quantum number, A is the subshell designation (given by the angular quantum number), and x is the number of electrons in the subshell.
Image to help us remember the order in which the electrons are filled up
It also helps in writing the electronic configuration of any specific element. For example, \ce{_10Ne}: 1s^{2}2s^{2}2p^{6}.
This rule says that the electrons occupy the lowest possible energy level before proceeding to the higher level. The order of increasing energy is given by the n+l rule, where (n+l)=(\text{principal quantum number}) + (\text{azimuthal quantum number}).
The rule is based on the total number of nodes in the atomic orbital, n + ℓ, which is related to the energy. In the case of equal n + ℓ values, the orbital with a lower n value is filled first. The fact that most of the ground state configurations of neutral atoms fill orbitals following this n + ℓ, n pattern was obtained experimentally, by reference to the spectroscopic characteristics of the elements.
Pictorial representation of the Aufbau Principle
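The (n + ℓ) ordering together with the subshell capacity 2(2l+1) is enough to sketch a naive configuration builder. This is an illustrative toy (the names are mine), and it deliberately ignores the experimental exceptions such as Cr and Cu:

```python
SUBSHELL_LETTERS = "spdfghi"

def aufbau_order(max_n=7):
    # Sort subshells by (n + l), breaking ties by lower n, per the rule above.
    subshells = [(n, l) for n in range(1, max_n + 1) for l in range(n)]
    return sorted(subshells, key=lambda nl: (nl[0] + nl[1], nl[0]))

def electron_configuration(z):
    """Fill z electrons into subshells in Aufbau order (no exceptions)."""
    config = []
    for n, l in aufbau_order():
        if z <= 0:
            break
        filled = min(z, 2 * (2 * l + 1))  # subshell capacity is 2(2l+1)
        config.append(f"{n}{SUBSHELL_LETTERS[l]}{filled}")
        z -= filled
    return " ".join(config)

print(electron_configuration(10))  # 1s2 2s2 2p6
print(electron_configuration(8))   # 1s2 2s2 2p4
```

Sorting on (n + ℓ, n) reproduces the 1s, 2s, 2p, 3s, 3p, 4s, 3d, ... filling order listed earlier.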
This lays the cornerstone for the internal arrangement of electrons in an atom: it deals with the order of filling the electrons into the orbitals of the same subshell. When more than one orbital of equal energy is available (e.g. p_x, p_y, p_z), the electrons first occupy the orbitals singly, with parallel (same) spins. Only then does the pairing of electrons take place.
This happens because placing two electrons in the same orbital with opposite spins increases the repulsion between them, whereas placing the electrons in separate orbitals with parallel spins greatly minimizes that repulsion.
Let's have a look at the examples of nitrogen and oxygen:
If we follow Hund's rule for the nitrogen atom, its electronic configuration would be like this:
\ce{_7N}:\underbrace{\boxed{\uparrow\downarrow}}_{\text{1s}}\quad\underbrace{\boxed{\uparrow\downarrow}}_{\text{2s}}\quad\underbrace{\boxed{\uparrow\color{#FFFFFF}{\downarrow}}\boxed{\uparrow\color{#FFFFFF}{\downarrow}}\boxed{\uparrow\color{#FFFFFF}{\downarrow}}}_{\text{2p}} \ \ \ (\text{total spin of unpaired electrons}) = \dfrac 12+\dfrac 12+\dfrac12=1\dfrac12.
Note that the electrons are placed in orbitals with parallel spins and are not paired up before at least one electron occupies each orbital.
But we can't place the electrons with opposite spin (as in the case below) or start pairing up electrons before filling up each orbital with at least one electron (as in the next case):
\begin{aligned} \ce{_7N}:\underbrace{\boxed{\uparrow\downarrow}}_{\text{1s}}\quad\underbrace{\boxed{\uparrow\downarrow}}_{\text{2s}}\quad\underbrace{\boxed{\uparrow\color{#FFFFFF}{\downarrow}}\boxed{\color{#FFFFFF}{\uparrow}\downarrow}\boxed{\uparrow\color{#FFFFFF}{\downarrow}}}_{\text{2p}}& \ \ \ (\text{total spin of unpaired electrons}) = \dfrac 12-\dfrac 12+\dfrac12=\dfrac12.\\ &\text{or}\\ \ce{_7N}:\underbrace{\boxed{\uparrow\downarrow}}_{\text{1s}}\quad\underbrace{\boxed{\uparrow\downarrow}}_{\text{2s}}\quad\underbrace{\boxed{\uparrow\downarrow}\boxed{\uparrow\color{#FFFFFF}{\downarrow}}\boxed{\color{#FFFFFF}{\uparrow}\color{#FFFFFF}{\downarrow}}}_{\text{2p}}& \ \ \ (\text{total spin of unpaired electrons}) = \dfrac12. \end{aligned}
Observe that the total spin of the unpaired electrons is maximum in the correct configuration, which is why this rule is also called Hund's rule of maximum multiplicity, each unpaired electron contributing \frac12 to the total spin.
The electrons in the oxygen atom are arranged in the following way, which obeys Hund's rule. The orbitals of the p subshell are first filled with at least one electron each (with parallel spins), and only then is the last electron paired up:
\ce{_8O}:\underbrace{\boxed{\uparrow\downarrow}}_{\text{1s}}\quad\underbrace{\boxed{\uparrow\downarrow}}_{\text{2s}}\quad\underbrace{\boxed{\uparrow\downarrow}\boxed{\uparrow\color{#FFFFFF}{\downarrow}}\boxed{\uparrow\color{#FFFFFF}{\downarrow}}}_{\text{2p}}\ \ \ (\text{total spin of unpaired electrons}) =\dfrac12+\dfrac 12 = 1,
as opposed to this one where the pairing up has taken place in the wrong order, disobeying Hund's rule:
\ce{_8O}:\underbrace{\boxed{\uparrow\downarrow}}_{\text{1s}}\quad\underbrace{\boxed{\uparrow\downarrow}}_{\text{2s}}\quad\underbrace{\boxed{\uparrow\downarrow}\boxed{\uparrow\downarrow}\boxed{\color{#FFFFFF}{\downarrow}\color{#FFFFFF}{\downarrow}}}_{\text{2p}}\ \ \ (\text{total spin of unpaired electrons}) =0.
Again observe that the total spin of the unpaired electrons is maximum in the correct configuration.
Cite as: Electron Configurations. Brilliant.org. Retrieved from https://brilliant.org/wiki/electron-configurations/ |
R İnanç Baykur, Naoyuki Monden and Jeremy Van Horn-Morris
In this article, we study the maximal length of positive Dehn twist factorizations of surface mapping classes. In connection to fundamental questions regarding the uniform topology of symplectic 4–manifolds and Stein fillings of contact 3–manifolds coming from the topology of supporting Lefschetz pencils and open books, we completely determine which boundary multitwists admit arbitrarily long positive Dehn twist factorizations along nonseparating curves, and which mapping class groups contain elements admitting such factorizations. Moreover, for every pair of positive integers g and n, we tell whether or not there exist genus-g Lefschetz pencils with n base points, and if there are, what the maximal Euler characteristic is whenever it is bounded above. We observe that only symplectic 4–manifolds of general type can attain arbitrarily large topology regardless of the genus and the number of base points of Lefschetz pencils on them.
mapping class groups, Lefschetz fibrations, contact manifolds, symplectic manifolds
Primary: 20F65, 53D35, 57R17
Evaluate the following definite integral.
Jaya Legge 2020-12-14 Answered
{\int }_{1/8}^{1}\frac{dx}{x\sqrt{1+{x}^{2/3}}}
\text{Consider the integral:}\phantom{\rule{0ex}{0ex}}
{\int }_{1/8}^{1}\frac{dx}{x\sqrt{1+{x}^{2/3}}}\phantom{\rule{0ex}{0ex}}
\text{Apply u-substitution: }u=\sqrt{1+{x}^{2/3}}\phantom{\rule{0ex}{0ex}}
{\int }_{1/8}^{1}\frac{dx}{x\sqrt{1+{x}^{2/3}}}={\int }_{\frac{{5}^{1/2}}{2}}^{\sqrt{2}}\frac{3}{{u}^{2}-1}du\phantom{\rule{0ex}{0ex}}
=3\cdot {\int }_{\frac{{5}^{1/2}}{2}}^{\sqrt{2}}\frac{1}{{u}^{2}-1}du\phantom{\rule{0ex}{0ex}}
=3\cdot {\int }_{\frac{{5}^{1/2}}{2}}^{\sqrt{2}}\frac{1}{-\left(-{u}^{2}+1\right)}du\phantom{\rule{0ex}{0ex}}
=-3\cdot {\int }_{\frac{{5}^{1/2}}{2}}^{\sqrt{2}}\frac{1}{-{u}^{2}+1}du\phantom{\rule{0ex}{0ex}}
=3\left(-\left[\frac{\mathrm{ln}|u+1|}{2}-\frac{\mathrm{ln}|u-1|}{2}{\right]}_{\frac{{5}^{1/2}}{2}}^{\sqrt{2}}\right)\phantom{\rule{0ex}{0ex}}
=-3\left[\frac{1}{2}\left(\mathrm{ln}|u+1|-\mathrm{ln}|u-1|\right){\right]}_{\frac{{5}^{1/2}}{2}}^{\sqrt{2}}\phantom{\rule{0ex}{0ex}}
=-3\cdot \frac{\mathrm{ln}\left(\sqrt{2}+1\right)-\mathrm{ln}\left(\sqrt{2}-1\right)-\mathrm{ln}\left(\frac{\sqrt{5}}{2}+1\right)+\mathrm{ln}\left(\frac{\sqrt{5}}{2}-1\right)}{2}\phantom{\rule{0ex}{0ex}}\text{Answer in terms of logarithms:}\phantom{\rule{0ex}{0ex}}
{\int }_{1/8}^{1}\frac{dx}{x\sqrt{1+{x}^{2/3}}}=-3\cdot \frac{\mathrm{ln}\left(\sqrt{2}+1\right)-\mathrm{ln}\left(\sqrt{2}-1\right)-\mathrm{ln}\left(\frac{\sqrt{5}}{2}+1\right)+\mathrm{ln}\left(\frac{\sqrt{5}}{2}-1\right)}{2}
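As a sanity check, the closed-form answer can be compared against a direct numerical evaluation; here is a quick sketch using composite Simpson's rule (helper names are mine):

```python
import math

# Integrand of the definite integral solved above.
def f(x):
    return 1.0 / (x * math.sqrt(1.0 + x ** (2.0 / 3.0)))

def simpson(g, a, b, n=10_000):
    # Composite Simpson's rule; n must be even.
    h = (b - a) / n
    s = g(a) + g(b)
    s += 4 * sum(g(a + i * h) for i in range(1, n, 2))
    s += 2 * sum(g(a + i * h) for i in range(2, n, 2))
    return s * h / 3

# The closed-form answer derived above.
closed_form = -1.5 * (
    math.log(math.sqrt(2) + 1) - math.log(math.sqrt(2) - 1)
    - math.log(math.sqrt(5) / 2 + 1) + math.log(math.sqrt(5) / 2 - 1)
)
numeric = simpson(f, 1 / 8, 1)
print(closed_form, numeric)  # both ≈ 1.6868
```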
Shortest Path Algorithms | Brilliant Math & Science Wiki
Shortest path algorithms are a family of algorithms designed to solve the shortest path problem. The shortest path problem is something most people have some intuitive familiarity with: given two points, A and B, what is the shortest path between them? In computer science, however, the shortest path problem can take different forms and so different algorithms are needed to be able to solve them all.
For simplicity and generality, shortest path algorithms typically operate on some input graph, G. This graph is made up of a set of vertices, V, and edges, E, that connect them. If the edges have weights, the graph is called a weighted graph. Sometimes these edges are bidirectional and the graph is called undirected. Sometimes there can even be cycles in the graph. Each of these subtle differences is what makes one algorithm work better than another for certain graph types. An example of a graph is shown below.
An undirected, weighted graph
There are also different types of shortest path algorithms. Maybe you need to find the shortest path between points A and B, or maybe you need the shortest path between point A and all other points in the graph.
Shortest path algorithms have many applications. As noted earlier, mapping software like Google or Apple maps makes use of shortest path algorithms. They are also important for road network, operations, and logistics research. Shortest path algorithms are also very important for computer networks, like the Internet.
Any software that helps you choose a route uses some form of a shortest path algorithm. Google Maps, for instance, has you put in a starting point and an ending point and will solve the shortest path problem for you.
Types of Shortest Path Algorithms
There are many variants of graphs. The first property is the directionality of the edges. Edges can either be unidirectional or bidirectional. If they are unidirectional, the graph is called a directed graph. If they are bidirectional (meaning they go both ways), the graph is called an undirected graph. In the case where some edges are directed and others are not, each bidirectional edge should be swapped out for 2 directed edges that fulfill the same functionality. The graph is then fully directed.
The same graph as above, but directed
The second property of a graph has to do with the weights of the edges. Edges can have no weight, and in that case the graph is called unweighted. If edges do have weights, the graph is said to be weighted. There is an extra caveat here: graphs can be allowed to have negative weight edges. The inclusion of negative weight edges prohibits the use of some shortest path algorithms.
The third property of graphs that affects what algorithms can be used is the existence of cycles. A cycle is defined as any path p through a graph, G, that visits the same vertex, v, more than once. So, if a graph has any path that has a cycle in it, that graph is said to be cyclic. Acyclic graphs, graphs that have no cycles, allow more freedom in the use of algorithms.
Cyclic graph with cyclic path A -> E -> D -> B -> A
There are two main types of shortest path algorithms, single-source and all-pairs. Both types have algorithms that perform best in their own way. All-pairs algorithms take longer to run because of the added complexity. All shortest path algorithms return values that can be used to find the shortest path, even if those return values vary in type or form from algorithm to algorithm.
Single-source shortest path algorithms operate under the following principle: given a graph G, with vertices V, edges E with weight function w(u, v) = w_{u, v}, and a single source vertex, s, return the shortest paths from s to all other vertices in V.
If the goal of the algorithm is to find the shortest path between only two given vertices, s and t, then the algorithm can simply be stopped when that shortest path is found. Because there is no way to decide which vertices to "finish" first, all algorithms that solve for the shortest path between two given vertices have the same worst-case asymptotic complexity as single-source shortest path algorithms.
This paradigm also works for the single-destination shortest path problem. By reversing all of the edges in a graph, the single-destination problem can be reduced to the single-source problem. So, given a destination vertex, t, this algorithm will find the shortest paths starting at all other vertices and ending at t.
All-pairs shortest path algorithms follow this definition: given a graph G, with vertices V and edges E with weight function w(u, v) = w_{u, v}, return the shortest path from u to v for all pairs of vertices (u, v) in V.
The most common algorithm for the all-pairs problem is the Floyd-Warshall algorithm. This algorithm returns a matrix of values M, where each cell M_{i, j} is the distance of the shortest path from vertex i to vertex j. Path reconstruction is possible to find the actual path taken to achieve that shortest path, but it is not part of the fundamental algorithm.
The Bellman-Ford algorithm solves the single-source problem in the general case, where edges can have negative weights and the graph is directed. If the graph is undirected, it will have to be modified by including two edges in each direction to make it directed.
Bellman-Ford has the property that it can detect negative weight cycles reachable from the source, which would mean that no shortest path exists. If a negative weight cycle existed, a path could run infinitely on that cycle, decreasing the path cost to
- \infty
If there is no negative weight cycle, then Bellman-Ford returns the weight of the shortest path along with the path itself.
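The procedure above can be sketched in a few lines. This is an illustrative implementation (the edge-list representation and names are mine): relax every edge |V| - 1 times, then use one extra pass to detect a reachable negative cycle.

```python
def bellman_ford(num_vertices, edges, source):
    """edges: list of (u, v, weight). Returns distances, or None on a
    reachable negative weight cycle."""
    INF = float("inf")
    dist = [INF] * num_vertices
    dist[source] = 0
    # Relax all edges |V| - 1 times.
    for _ in range(num_vertices - 1):
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    # One more pass: any further improvement means a negative cycle.
    for u, v, w in edges:
        if dist[u] + w < dist[v]:
            return None
    return dist

edges = [(0, 1, 4), (0, 2, 5), (1, 2, -3), (2, 3, 4)]
print(bellman_ford(4, edges, 0))  # [0, 4, 1, 5]
```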
Dijkstra's algorithm builds on the idea of breadth-first search (which on its own is not a single-source shortest path algorithm for weighted graphs) to solve the single-source problem. It does place one constraint on the graph: there can be no negative weight edges. However, for this one constraint, Dijkstra greatly improves on the runtime of Bellman-Ford.
Dijkstra's algorithm is also sometimes used to solve the all-pairs shortest path problem by simply running it on all vertices in V. Again, this requires all edge weights to be positive.
For graphs that are directed acyclic graphs (DAGs), a very useful tool emerges for finding shortest paths. By performing a topological sort on the vertices in the graph, the shortest path problem becomes solvable in linear time.
A topological sort is an ordering of all the vertices such that for each edge (u, v) in E, u comes before v in the ordering. In a DAG, shortest paths are always well defined because even if there are negative weight edges, there can be no negative weight cycles.
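The DAG approach can be sketched as: compute a topological order (here via Kahn's algorithm), then relax each vertex's outgoing edges once, for O(|V| + |E|) total work. The structure and names below are mine:

```python
def dag_shortest_paths(graph, source):
    """graph: vertex -> list of (neighbor, weight); must be a DAG."""
    # Kahn's algorithm for a topological order.
    indeg = {u: 0 for u in graph}
    for u in graph:
        for v, _ in graph[u]:
            indeg[v] += 1
    order, queue = [], [u for u in graph if indeg[u] == 0]
    while queue:
        u = queue.pop()
        order.append(u)
        for v, _ in graph[u]:
            indeg[v] -= 1
            if indeg[v] == 0:
                queue.append(v)
    INF = float("inf")
    dist = {u: INF for u in graph}
    dist[source] = 0
    for u in order:  # relax each vertex's edges once, in topological order
        if dist[u] < INF:
            for v, w in graph[u]:
                dist[v] = min(dist[v], dist[u] + w)
    return dist

g = {"s": [("a", 1), ("b", 4)], "a": [("b", -2)], "b": []}
print(dag_shortest_paths(g, "s"))  # {'s': 0, 'a': 1, 'b': -1}
```

Note the negative edge is handled correctly without any cycle detection, because a DAG cannot contain a negative weight cycle.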
The Floyd-Warshall algorithm solves the all-pairs shortest path problem. It uses a dynamic programming approach to do so. Negative edge weights may be present for Floyd-Warshall, as long as there are no negative weight cycles.
Floyd-Warshall takes advantage of the following observation: the shortest path from A to C is either the shortest path from A to B plus the shortest path from B to C or it's the shortest path from A to C that's already been found. This may seem trivial, but it's what allows Floyd-Warshall to build shortest paths from smaller shortest paths, in the classic dynamic programming way.
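That observation becomes the triple loop at the heart of Floyd-Warshall; a minimal sketch on an adjacency matrix (the layout and names are mine):

```python
def floyd_warshall(dist):
    """dist: n x n adjacency matrix; float('inf') marks "no edge"."""
    n = len(dist)
    d = [row[:] for row in dist]  # copy so the input is not mutated
    for k in range(n):
        for i in range(n):
            for j in range(n):
                # Shortest i->j is either the current best,
                # or i->k plus k->j through the new intermediate vertex k.
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    return d

INF = float("inf")
dist = [[0, 3, INF],
        [INF, 0, 1],
        [2, INF, 0]]
print(floyd_warshall(dist))  # [[0, 3, 4], [3, 0, 1], [2, 5, 0]]
```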
While Floyd-Warshall works well for dense graphs (meaning many edges), Johnson's algorithm works best for sparse graphs (meaning few edges). In sparse graphs, Johnson's algorithm has a lower asymptotic running time compared to Floyd-Warshall.
Johnson's algorithm takes advantage of the concept of reweighting, and it uses Dijkstra's algorithm on many vertices to find the shortest path once it has finished reweighting the edges.
The runtimes of the shortest path algorithms are listed below.
Bellman-Ford: O(|V| \cdot |E|)
Dijkstra's (with list): O(|V|^2)
Topological sort (DAGs): O(|V| + |E|)
Floyd-Warshall: O(|V|^3)
Johnson's*: O(|E| \cdot |V| + |V|^2 \cdot \log_2(|V|))
*This runtime assumes that the implementation uses fibonacci heaps.
Oftentimes, the question of which algorithm to use is not left up to the individual; it is merely a function of what graph is being operated upon and which shortest path problem is being solved.
For graphs with negative weight edges, the single source shortest path problem needs Bellman-Ford to succeed. For dense graphs and the all-pairs problem, Floyd-Warshall should be used.
However, there are some subtle differences. For sparse graphs and the all-pairs problem, it might be obvious to use Johnson's algorithm. However, if there are no negative edge weights, then it is actually better to use Dijkstra's algorithm with binary heaps in the implementation. Running Dijkstra's from each vertex will yield a better result.
From a space complexity perspective, many of these algorithms are the same. In their most fundamental form, for example, Bellman-Ford and Dijkstra are the exact same because they use the same representation of a graph. However, when these algorithms are sped up using advanced data structures like Fibonacci or binary heaps, the space required to perform the algorithm increases. As is common with algorithms, space is often traded for speed.
These algorithms have been improved upon over time. Dijkstra's algorithm, for example, was initially implemented using a list, and had a runtime of O(|V|^2). However, when a binary heap is used, a runtime of O((|E|+|V|) \cdot \log_2(|V|)) has been achieved. When a Fibonacci heap is used, one implementation can achieve O(|E| + |V| \cdot \log_2(|V|)), while another can do O(|E| \cdot \log_2(\log_2(|C|))), where |C| is a bounded constant for edge weight.
Bellman-Ford has been implemented in O(|V|^2 \cdot \log_2(|V|)). This implementation can be efficient if used on the right kind of graph (sparse).
Cite as: Shortest Path Algorithms. Brilliant.org. Retrieved from https://brilliant.org/wiki/shortest-path-algorithms/ |
In ScientificErrorAnalysis, a structure that has numerical quantities with associated errors is called a quantity-with-error structure. Instances of such structures, which have a particular quantity and error, are called quantities-with-error.
There is a quantity-with-error structure native to ScientificErrorAnalysis, constructed using the Quantity constructor.
Quantity( 10., 1. );
\textcolor[rgb]{0,0,1}{\mathrm{Quantity}}\textcolor[rgb]{0,0,1}{}\left(\textcolor[rgb]{0,0,1}{10.}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{1.}\right)
In the above, the first argument is the central value and the second argument is the associated error, which can be specified in absolute, relative, or units in the least digit form. In the returned object, the error is in absolute form.
The minimal interpretation placed on a quantity-with-error by ScientificErrorAnalysis is that of an unknown value with central tendency, where the error value is some statistical measure of the spread of the distribution of particular values (as obtained from an experiment or trial, for example).
Many calculations of both error analysis and the ScientificErrorAnalysis package are strictly valid only for Gaussian distributions. In any particular application this condition may not be satisfied, and the interpretation of error values is the responsibility of the application.
What the ScientificErrorAnalysis package does not do is perform what is called interval arithmetic. The error value of an object in the ScientificErrorAnalysis package does not represent an interval in which possible values must be contained.
To extract the central value and error of a quantity-with-error, for example the above object, use evalf and GetError.
x := (1):
evalf( x ), GetError( x );
\textcolor[rgb]{0,0,1}{10.}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{1.}
The central feature of ScientificErrorAnalysis is the ability to combine quantities-with-error in a mathematical expression, that is, to propagate the errors through an expression. This is done using combine/errors.
y := Quantity( 20., 1. );
\textcolor[rgb]{0,0,1}{y}\textcolor[rgb]{0,0,1}{≔}\textcolor[rgb]{0,0,1}{\mathrm{Quantity}}\textcolor[rgb]{0,0,1}{}\left(\textcolor[rgb]{0,0,1}{20.}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{1.}\right)
combine( x*y, errors );
\textcolor[rgb]{0,0,1}{\mathrm{Quantity}}\textcolor[rgb]{0,0,1}{}\left(\textcolor[rgb]{0,0,1}{200.}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{22.36067977}\right)
The significance of this result is that, because x and y are uncertain quantities, the product has an uncertainty, as given by the error in the result. In the theory of error analysis, this is called error propagation. (Again, these errors do not represent closed intervals.)
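Outside Maple, the same first-order Gaussian propagation for a product can be sketched in a few lines (the function name is mine). For f = x*y with independent errors sx and sy, the propagated error is sqrt((y*sx)^2 + (x*sy)^2), which reproduces the 22.36 above:

```python
import math

def propagate_product(x, sx, y, sy):
    """First-order error propagation for f = x*y, independent errors."""
    return x * y, math.hypot(y * sx, x * sy)

value, err = propagate_product(10.0, 1.0, 20.0, 1.0)
print(value, err)  # 200.0 22.360679...
```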
In an error, usually only 1 or 2 digits are meaningful. To appropriately round a result, use ApplyRule with a rounding rule.
ApplyRule( (4), round[2] );
\textcolor[rgb]{0,0,1}{\mathrm{Quantity}}\textcolor[rgb]{0,0,1}{}\left(\textcolor[rgb]{0,0,1}{200.}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{22.}\right)
A rounding rule can be directly specified in combine/errors.
combine( x*y, errors, rule=round[2] );
\textcolor[rgb]{0,0,1}{\mathrm{Quantity}}\textcolor[rgb]{0,0,1}{}\left(\textcolor[rgb]{0,0,1}{200.}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{22.}\right)
Alternatively, the default rounding rule used by combine/errors can be changed.
UseRule( round[2] );
\textcolor[rgb]{0,0,1}{\mathrm{Quantity}}\textcolor[rgb]{0,0,1}{}\left(\textcolor[rgb]{0,0,1}{200.}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{22.}\right)
UseRule( digits );
Several rounding rules are available in ScientificErrorAnalysis, and more can be added by the user.
Errors can be propagated through any differentiable function.
combine( x*exp(0.1*y), errors );
\textcolor[rgb]{0,0,1}{\mathrm{Quantity}}\textcolor[rgb]{0,0,1}{}\left(\textcolor[rgb]{0,0,1}{73.89056099}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{10.44970335}\right)
Correlations can be defined between quantities-with-error.
SetCorrelation( x, y, 0.3 ):
combine( x*y, errors );
\textcolor[rgb]{0,0,1}{\mathrm{Quantity}}\textcolor[rgb]{0,0,1}{}\left(\textcolor[rgb]{0,0,1}{200.}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{24.89979920}\right)
See SetCorrelation and GetCorrelation for more information.
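In first-order propagation, the correlation adds a cross term to the variance of the product: var = (y*sx)^2 + (x*sy)^2 + 2*rho*x*y*sx*sy. A sketch (names mine) reproducing the 24.90 result above:

```python
import math

def propagate_product_corr(x, sx, y, sy, rho):
    """Error propagation for f = x*y with correlation rho between x and y."""
    var = (y * sx) ** 2 + (x * sy) ** 2 + 2 * rho * x * y * sx * sy
    return x * y, math.sqrt(var)

value, err = propagate_product_corr(10.0, 1.0, 20.0, 1.0, 0.3)
print(value, err)  # 200.0 24.899799...
```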
For more information on error propagation, see any text on error analysis for the physical sciences.
Other quantity-with-error structures are the Constant( ) and Element( ) objects of the ScientificConstants package. For example:
ScientificConstants:-Constant( G );
\textcolor[rgb]{0,0,1}{\mathrm{Constant}}\textcolor[rgb]{0,0,1}{}\left(\textcolor[rgb]{0,0,1}{G}\right)
combine( (10), errors );
\textcolor[rgb]{0,0,1}{\mathrm{Quantity}}\textcolor[rgb]{0,0,1}{}\left(\textcolor[rgb]{0,0,1}{6.67408}\textcolor[rgb]{0,0,1}{×}{\textcolor[rgb]{0,0,1}{10}}^{\textcolor[rgb]{0,0,1}{-11}}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{3.100000000}\textcolor[rgb]{0,0,1}{×}{\textcolor[rgb]{0,0,1}{10}}^{\textcolor[rgb]{0,0,1}{-15}}\right)
Quantity-with-error structures are defined using the AddStructure command.
Quantity-with-error structures can be defined so that their objects are defined as functions of other objects. In such cases, Variance, Covariance, and combine/errors use this functional dependence. For more information, see Variance, Covariance, and combine/errors.
The derived Constants of ScientificConstants are treated as quantities-with-error with functional dependence by ScientificErrorAnalysis. For example:
ScientificConstants:-GetConstant( m[e] );
\textcolor[rgb]{0,0,1}{\mathrm{electron_mass}}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{\mathrm{symbol}}\textcolor[rgb]{0,0,1}{=}{\textcolor[rgb]{0,0,1}{m}}_{\textcolor[rgb]{0,0,1}{e}}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{\mathrm{derive}}\textcolor[rgb]{0,0,1}{=}\frac{\textcolor[rgb]{0,0,1}{2}\textcolor[rgb]{0,0,1}{}{\textcolor[rgb]{0,0,1}{R}}_{\textcolor[rgb]{0,0,1}{\mathrm{\infty }}}\textcolor[rgb]{0,0,1}{}\textcolor[rgb]{0,0,1}{h}}{\textcolor[rgb]{0,0,1}{c}\textcolor[rgb]{0,0,1}{}{\textcolor[rgb]{0,0,1}{\mathrm{\alpha }}}^{\textcolor[rgb]{0,0,1}{2}}}
combine( ScientificConstants:-Constant( m[e] ), errors );
\textcolor[rgb]{0,0,1}{\mathrm{Quantity}}\textcolor[rgb]{0,0,1}{}\left(\textcolor[rgb]{0,0,1}{9.109383560}\textcolor[rgb]{0,0,1}{×}{\textcolor[rgb]{0,0,1}{10}}^{\textcolor[rgb]{0,0,1}{-31}}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{1.114295438}\textcolor[rgb]{0,0,1}{×}{\textcolor[rgb]{0,0,1}{10}}^{\textcolor[rgb]{0,0,1}{-38}}\right)
See ScientificErrorAnalysis and ScientificConstants for more information.
Some physical constants have their values determined to more than 10 significant figures. Hence, calculations involving these objects at the default setting of Digits may result in a loss of precision. For more precision, set Digits to a higher value, for example, 15. |
Killing tensor - formulasearchengine
A Killing tensor, named after Wilhelm Killing, is a symmetric tensor K, known in the theory of general relativity, satisfying
{\displaystyle \nabla _{(\alpha }K_{\beta \gamma )}=0\,}
where the parentheses on the indices refer to the symmetric part.
This is a generalization of a Killing vector. While Killing vectors are associated with continuous symmetries (more precisely, differentiable), and hence very common, the concept of Killing tensor arises much less frequently. The Kerr solution is the most famous example of a manifold possessing a Killing tensor.
Retrieved from "https://en.formulasearchengine.com/index.php?title=Killing_tensor&oldid=251516" |
Normal matrix - Wikipedia
In mathematics, a complex square matrix A is normal if it commutes with its conjugate transpose A*:
{\displaystyle A{\text{ normal}}\quad \iff \quad A^{*}A=AA^{*}}
The concept of normal matrices can be extended to normal operators on infinite dimensional normed spaces and to normal elements in C*-algebras. As in the matrix case, normality means commutativity is preserved, to the extent possible, in the noncommutative setting. This makes normal operators, and normal elements of C*-algebras, more amenable to analysis.
The spectral theorem states that a matrix is normal if and only if it is unitarily similar to a diagonal matrix, and therefore any matrix A satisfying the equation A*A = AA* is diagonalizable. The converse does not hold because diagonalizable matrices may have non-orthogonal eigenspaces.
The left and right singular vectors in the singular value decomposition of a normal matrix
{\displaystyle \mathbf {A} =\mathbf {U} {\boldsymbol {\Sigma }}\mathbf {V} ^{*}}
differ only in complex phase from each other and from the corresponding eigenvectors, since the phase must be factored out of the eigenvalues to form singular values.
Among complex matrices, all unitary, Hermitian, and skew-Hermitian matrices are normal, with all eigenvalues being unit modulus, real, and imaginary, respectively. Likewise, among real matrices, all orthogonal, symmetric, and skew-symmetric matrices are normal, with all eigenvalues being complex conjugate pairs on the unit circle, real, and imaginary, respectively. However, it is not the case that all normal matrices are either unitary or (skew-)Hermitian, as their eigenvalues can be any complex number, in general. For example,
{\displaystyle A={\begin{bmatrix}1&1&0\\0&1&1\\1&0&1\end{bmatrix}}}
is neither unitary, Hermitian, nor skew-Hermitian, because its eigenvalues are
{\displaystyle 2,(1\pm i{\sqrt {3}})/2}
; yet it is normal because
{\displaystyle AA^{*}={\begin{bmatrix}2&1&1\\1&2&1\\1&1&2\end{bmatrix}}=A^{*}A.}
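This example is easy to check numerically (a quick NumPy verification; A is real, so A* is just the transpose):

```python
import numpy as np

A = np.array([[1, 1, 0],
              [0, 1, 1],
              [1, 0, 1]], dtype=float)

# A is normal: A A* = A* A, and both equal the matrix shown above
assert np.allclose(A @ A.T, A.T @ A)
assert np.allclose(A @ A.T, np.array([[2, 1, 1], [1, 2, 1], [1, 1, 2]]))

# Its eigenvalues are 2 and (1 +/- i*sqrt(3))/2 -- not all real,
# not all imaginary, and not all of unit modulus
eig = np.sort_complex(np.linalg.eigvals(A))
print(eig)
```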
Proposition: A normal triangular matrix is diagonal.
Proof: Let A be any normal upper triangular matrix. Since
{\displaystyle (A^{*}A)_{ii}=(AA^{*})_{ii},}
using subscript notation, one can write the equivalent expression using instead the ith unit vector ({\displaystyle {\hat {\mathbf {e} }}_{i}}) to select the ith row and ith column:
{\displaystyle {\hat {\mathbf {e} }}_{i}^{\intercal }\left(A^{*}A\right){\hat {\mathbf {e} }}_{i}={\hat {\mathbf {e} }}_{i}^{\intercal }\left(AA^{*}\right){\hat {\mathbf {e} }}_{i}.}
{\displaystyle \left(A{\hat {\mathbf {e} }}_{i}\right)^{*}\left(A{\hat {\mathbf {e} }}_{i}\right)=\left(A^{*}{\hat {\mathbf {e} }}_{i}\right)^{*}\left(A^{*}{\hat {\mathbf {e} }}_{i}\right)}
is equivalent, and so is
{\displaystyle \left\|A{\hat {\mathbf {e} }}_{i}\right\|^{2}=\left\|A^{*}{\hat {\mathbf {e} }}_{i}\right\|^{2},}
which shows that the ith row must have the same norm as the ith column.
Consider i = 1. The first entry of row 1 and column 1 are the same, and the rest of column 1 is zero (because of triangularity). This implies the first row must be zero for entries 2 through n. Continuing this argument for row–column pairs 2 through n shows A is diagonal.◻
The concept of normality is important because normal matrices are precisely those to which the spectral theorem applies:
Proposition. A matrix A is normal if and only if there exists a diagonal matrix Λ and a unitary matrix U such that A = UΛU*.
The diagonal entries of Λ are the eigenvalues of A, and the columns of U are the eigenvectors of A. The matching eigenvalues in Λ come in the same order as the eigenvectors are ordered as columns of U.
Another way of stating the spectral theorem is to say that normal matrices are precisely those matrices that can be represented by a diagonal matrix with respect to a properly chosen orthonormal basis of Cn. Phrased differently: a matrix is normal if and only if its eigenspaces span Cn and are pairwise orthogonal with respect to the standard inner product of Cn.
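This restatement can be illustrated numerically: for a normal matrix with distinct eigenvalues, the eigenvector matrix returned by a standard eigensolver is (numerically) unitary, giving the unitary diagonalization A = UΛU*. A NumPy sketch using the normal example matrix from above:

```python
import numpy as np

A = np.array([[1, 1, 0],
              [0, 1, 1],
              [1, 0, 1]], dtype=float)  # the normal example from above

lam, V = np.linalg.eig(A)   # A = V diag(lam) V^{-1}

# Because A is normal with distinct eigenvalues, the eigenvectors are
# orthogonal; LAPACK normalizes them, so V is numerically unitary.
assert np.allclose(V.conj().T @ V, np.eye(3), atol=1e-10)

# Hence A = V diag(lam) V* is a unitary diagonalization of A.
assert np.allclose(V @ np.diag(lam) @ V.conj().T, A)
```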
The spectral theorem for normal matrices is a special case of the more general Schur decomposition, which holds for all square matrices. Let A be a square matrix. Then by Schur decomposition it is unitarily similar to an upper-triangular matrix, say, B. If A is normal, so is B. But then B must be diagonal, for, as noted above, a normal upper-triangular matrix is diagonal.
The spectral theorem permits the classification of normal matrices in terms of their spectra, for example:
Proposition. A normal matrix is unitary if and only if all of its eigenvalues (its spectrum) lie on the unit circle of the complex plane.
Proposition. A normal matrix is self-adjoint if and only if its spectrum is contained in {\displaystyle \mathbb {R} }. In other words: a normal matrix is Hermitian if and only if all its eigenvalues are real.
In general, the sum or product of two normal matrices need not be normal. However, the following holds:
Proposition. If A and B are normal with AB = BA, then both AB and A + B are also normal. Furthermore there exists a unitary matrix U such that UAU* and UBU* are diagonal matrices. In other words A and B are simultaneously diagonalizable.
In this special case, the columns of U* are eigenvectors of both A and B and form an orthonormal basis in Cn. This follows by combining the theorems that, over an algebraically closed field, commuting matrices are simultaneously triangularizable and a normal matrix is diagonalizable – the added result is that these can both be done simultaneously.
It is possible to give a fairly long list of equivalent definitions of a normal matrix. Let A be an n × n complex matrix. Then the following are equivalent:
1. A is normal.
2. A is diagonalizable by a unitary matrix.
3. There exists a set of eigenvectors of A which forms an orthonormal basis for Cn.
4. {\displaystyle \left\|A\mathbf {x} \right\|=\left\|A^{*}\mathbf {x} \right\|} for every x.
5. The Frobenius norm of A can be computed by the eigenvalues of A: {\textstyle \operatorname {tr} \left(A^{*}A\right)=\sum _{j}\left|\lambda _{j}\right|^{2}}.
6. The Hermitian part 1/2(A + A*) and skew-Hermitian part 1/2(A − A*) of A commute.
7. A* is a polynomial (of degree ≤ n − 1) in A.[a]
8. A* = AU for some unitary matrix U.[1]
9. U and P commute, where we have the polar decomposition A = UP with a unitary matrix U and some positive semidefinite matrix P.
10. A commutes with some normal matrix N with distinct eigenvalues.
11. σi = |λi| for all 1 ≤ i ≤ n, where A has singular values σ1 ≥ ⋯ ≥ σn and eigenvalues |λ1| ≥ ⋯ ≥ |λn|.[2]
12. {\displaystyle A=B+iC} for two commuting self-adjoint matrices B and C.
Some but not all of the above generalize to normal operators on infinite-dimensional Hilbert spaces. For example, a bounded operator satisfying (9) is only quasinormal.
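Several of these equivalences are easy to sanity-check numerically on the normal example matrix above: here the norm identity ‖Ax‖ = ‖A*x‖, the trace identity tr(A*A) = Σ|λj|², and the commuting Hermitian/skew-Hermitian parts:

```python
import numpy as np

A = np.array([[1, 1, 0],
              [0, 1, 1],
              [1, 0, 1]], dtype=float)
As = A.conj().T                      # A* (A is real, so just the transpose)

# ||A x|| = ||A* x|| for every x -- spot-check a few random vectors
rng = np.random.default_rng(1)
for _ in range(5):
    x = rng.standard_normal(3)
    assert np.isclose(np.linalg.norm(A @ x), np.linalg.norm(As @ x))

# tr(A* A) equals the sum of |lambda_j|^2
lam = np.linalg.eigvals(A)
assert np.isclose(np.trace(As @ A), np.sum(np.abs(lam) ** 2))

# the Hermitian and skew-Hermitian parts commute
H = (A + As) / 2
S = (A - As) / 2
assert np.allclose(H @ S, S @ H)
```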
Normal matrix analogy
It is occasionally useful (but sometimes misleading) to think of the relationships of special kinds of normal matrices as analogous to the relationships of the corresponding type of complex numbers of which their eigenvalues are composed. This is because any function of a non-defective matrix acts directly on each of its eigenvalues, and the conjugate transpose of its spectral decomposition {\displaystyle VDV^{*}} is {\displaystyle VD^{*}V^{*}}, where {\displaystyle D} is the diagonal matrix of eigenvalues. Likewise, if two normal matrices commute and are therefore simultaneously diagonalizable, any operation between these matrices also acts on each corresponding pair of eigenvalues.
The conjugate transpose is analogous to the complex conjugate.
Unitary matrices are analogous to complex numbers on the unit circle.
Hermitian matrices are analogous to real numbers.
Hermitian positive definite matrices are analogous to positive real numbers.
Skew Hermitian matrices are analogous to purely imaginary numbers.
Invertible matrices are analogous to non-zero complex numbers.
The inverse of a matrix has each eigenvalue inverted.
A uniform scaling matrix is analogous to a constant number.
In particular, the zero matrix is analogous to 0, and the identity matrix is analogous to 1.
An idempotent matrix is an orthogonal projection with each eigenvalue either 0 or 1.
A normal involution has eigenvalues {\displaystyle \pm 1}.
As a special case, the complex numbers may be embedded in the normal 2×2 real matrices by the mapping
{\displaystyle a+bi\mapsto {\begin{bmatrix}a&b\\-b&a\end{bmatrix}}=a\,{\begin{bmatrix}1&0\\0&1\end{bmatrix}}+b\,{\begin{bmatrix}0&1\\-1&0\end{bmatrix}}\,.}
This mapping preserves addition and multiplication. It is easy to check that this embedding respects all of the above analogies.
^ Proof. When {\displaystyle A} is normal, use Lagrange's interpolation formula to construct a polynomial {\displaystyle P} such that {\displaystyle {\overline {\lambda _{j}}}=P(\lambda _{j})}, where {\displaystyle \lambda _{j}} are the eigenvalues of {\displaystyle A}.
Horn, Roger Alan; Johnson, Charles Royal (1985). Matrix Analysis. Cambridge University Press. ISBN 978-0-521-38632-6.
Horn, Roger Alan; Johnson, Charles Royal (1991). Topics in Matrix Analysis. Cambridge University Press. ISBN 978-0-521-30587-7.
Retrieved from "https://en.wikipedia.org/w/index.php?title=Normal_matrix&oldid=1081581627" |
(1E,3E)-4-hydroxybuta-1,3-diene-1,2,4-tricarboxylate 1,2-hydro-lyase (2-hydroxy-4-oxobutane-1,2,4-tricarboxylate-forming) Wikipedia
4-oxalmesaconate hydratase
In enzymology, a 4-oxalmesaconate hydratase (EC 4.2.1.83) is an enzyme that catalyzes the chemical reaction
2-hydroxy-4-oxobutane-1,2,4-tricarboxylate {\displaystyle \rightleftharpoons } (E)-4-oxobut-1-ene-1,2,4-tricarboxylate + H2O
Hence, this enzyme has one substrate, 2-hydroxy-4-oxobutane-1,2,4-tricarboxylate, and two products, (E)-4-oxobut-1-ene-1,2,4-tricarboxylate and H2O.
This enzyme belongs to the family of lyases, specifically the hydro-lyases, which cleave carbon-oxygen bonds. The systematic name of this enzyme class is 2-hydroxy-4-oxobutane-1,2,4-tricarboxylate 2,3-hydro-lyase [(E)-4-oxobut-1-ene-1,2,4-tricarboxylate-forming]. Other names in common use include 4-carboxy-2-oxohexenedioate hydratase, 4-carboxy-2-oxobutane-1,2,4-tricarboxylate 2,3-hydro-lyase, oxalmesaconate hydratase, gamma-oxalmesaconate hydratase, 4-carboxy-2-oxohexenedioate hydratase, and 2-hydroxy-4-oxobutane-1,2,4-tricarboxylate 2,3-hydro-lyase. This enzyme participates in benzoate degradation via hydroxylation.
Le Chatelier's Principle | Brilliant Math & Science Wiki
We know that equilibrium is achieved in a reversible reaction when the rate of forward reaction becomes equal to the rate of backward reaction. But what happens when we disturb this equilibrium? This is where Le Chatelier's principle comes into play.
Le Chatelier's principle states that if a system in equilibrium is subjected to a change of concentration, temperature or pressure, the equilibrium shifts in a direction so as to undo the effect of the change imposed.
As we can see from the definition, a change in concentration (of the reactants/products), temperature, or pressure can shift the equilibrium of a reaction. However, adding a catalyst makes the reaction faster, but does not affect equilibrium. Now we will discuss how some factors affect equilibrium.
When a substance is added at equilibrium state, the reaction occurs in the direction that decreases the concentration of that substance. When a substance is removed at equilibrium state, the reaction occurs in the direction that increases the concentration of that substance.
As we add or remove reactant (or product), the ratio of concentrations becomes Q, which is called the reaction quotient. If Q<K, the equilibrium shifts in the forward direction; if Q>K, it shifts in the backward direction, where K is the equilibrium constant of the reaction.
This can be simply understood from the definition. Let us take an example:
\ce{N2}(g) +\ce{3H2}(g) \rightleftharpoons \ce{2NH3}(g).
Now if we remove some amount of reactant (\ce{N2}, \ce{H2}, or both), then we have disturbed the equilibrium and the concentration of the reactants decreases. To achieve equilibrium, the products will react to form reactants. Hence the reaction moves backwards.
Mathematically, let the equilibrium concentrations of \ce{N2}, \ce{H2}, and \ce{NH3} be c_1, c_2, and c_3, respectively. Then the equilibrium constant is
K= \frac{{c_3}^2}{(c_1)(c_2)^3}.
Let c'(<c_1) be the concentration of \ce{N2} after removal. Just after the removal, we have the reaction quotient
Q=\frac{{c_3}^2}{(c')(c_2)^3}.
Now clearly we have Q>K, and hence the reaction will proceed backwards and reactants will be formed.
Pressure can shift the chemical equilibrium of a reaction that involves gaseous molecules. If pressure is increased, then the equilibrium shifts in the direction that decreases the number of gas molecules. If pressure is decreased, then the equilibrium shifts in the direction that increases the number of gas molecules. For example, let's take a look at the following reaction:
\text{C}(s)+\text{H}_2\text{O}(g)\leftrightharpoons\text{CO}(g)+\text{H}_2(g).
Note that the forward reaction increases the number of gas molecules, and the reverse reaction decreases that. Thus if pressure is increased, then the reverse reaction will occur. In contrast if the pressure is decreased, then the equilibrium will shift forward.
However, the pressure we apply must have effect on the partial pressure of the reactants and/or products. If we increase the pressure by adding an irrelevant gas (e.g. Helium) under constant volume, then the equilibrium will not shift.
If a system in equilibrium consists of gases, then the concentration of all the components can be altered by changing the pressure.
When there is an increase in pressure, the equilibrium will shift towards the side of the reaction with fewer moles of gas.
When there is a decrease in pressure, the equilibrium will shift towards the side of the reaction with more moles of gas.
We know that in a gaseous system (at fixed volume and temperature) pressure is directly proportional to the number of moles, hence the above two points.
Pressure is inversely related to volume. Therefore, the effects of changes in pressure are opposite of the effects of changes in volume. Additionally, this does not apply to a change in the pressure in the system due to the addition of an inert gas.
N_2(g) +3H_2(g) \rightleftharpoons 2NH_3(g).
If we increase the pressure, then the reaction will proceed forward (fewer moles of gas), and if we decrease the pressure, then the reaction will proceed backwards (more moles of gas).
For solids whose volume increases on melting, e.g. \ce{Fe}, \ce{Cu}, \ce{Ag}, \ce{Au}, we have
\text{Solid (Lower volume) } \rightleftharpoons \text{Liquid (Higher volume)}.
In this case, the process of melting becomes difficult at high pressure, and thus the melting point becomes high.
For solids whose volume decreases on melting, e.g. \text{quartz, carborundum, ice, diamond}, we have
\text{Solid (Higher volume) } \rightleftharpoons \text{Liquid (Lower volume)}.
In this case, the process of melting becomes favorable at high pressure, and thus the melting point is lowered.
When solid substances are dissolved in water, either heat is evolved (exothermic) or heat is absorbed (endothermic).
For an endothermic solubility process, solubility increases with an increase in temperature.
For an exothermic solubility process, solubility decreases with an increase in temperature.
When a gas dissolves in liquid, there is a decrease in volume. Thus the increase of pressure will favor the dissolution of gas in liquid.
For endothermic and exothermic reactions, the chemical equilibrium will shift according to temperature. If the temperature is risen, then the reaction will occur in the endothermic direction. If the temperature is dropped, the reaction occurs in the exothermic direction. Take a look at the following reaction:
2\text{NO}_2(g)\leftrightharpoons\text{N}_2\text{O}_4(g),\qquad\Delta H=-54.8\text{ kJ}.
Observe that the forward reaction is exothermic and the reverse reaction is endothermic. Thus, if we raise the temperature at equilibrium, then the equilibrium will shift backward. In this case the equilibrium constant K becomes smaller. On the other hand, if we lower the temperature, the equilibrium constant K gets larger, and the equilibrium will shift forward.
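The temperature dependence of K described above is quantified by the standard van 't Hoff equation, ln(K2/K1) = −(ΔH/R)(1/T2 − 1/T1) (this equation is not stated in the text above; it is the usual quantitative form, assuming ΔH is roughly constant over the temperature range). For the exothermic dimerization with ΔH = −54.8 kJ/mol, heating shrinks K and cooling grows it:

```python
import math

R = 8.314      # gas constant, J/(mol*K)
dH = -54.8e3   # J/mol, exothermic: 2 NO2 <=> N2O4

def K_ratio(T1, T2):
    """K2/K1 from the van 't Hoff equation (Delta H assumed constant)."""
    return math.exp(-dH / R * (1 / T2 - 1 / T1))

# heating 298 K -> 350 K: K drops, equilibrium shifts to the endothermic side
assert K_ratio(298.0, 350.0) < 1
# cooling 298 K -> 250 K: K grows, equilibrium shifts to the exothermic side
assert K_ratio(298.0, 250.0) > 1
```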
Keep in mind that the only factor that alters the equilibrium constant K is temperature.
For reactions in which n_p=n_r (number of moles of product = number of moles of reactant), adding an inert gas at constant volume or at constant pressure has no effect on the equilibrium.
For reactions in which n_p \ne n_r, adding an inert gas at constant volume has no effect, BUT at constant pressure, the equilibrium shifts towards the side with the higher number of moles.
Applying Le Chatelier's principle, the favorable conditions for dissociation of
\ce{NH_3}
by Haber's process
\ce{N_2}(g)+3\ce{H_2}(g) \rightleftharpoons 2\ce{NH_3}, \quad \Delta H=-92.5kJ
\ce{N_2}
\ce{H_2}
addition of inert gas at constant pressure.
Think about why the above 4 points are true. (Remember we are talking about the dissociation of \ce{NH_3}.)
\text{N}_2(g)+2\text{O}_2(g)\leftrightharpoons2\text{NO}_2(g),\qquad\Delta H=+66\text{ kJ}.
Find all of the options, if any, that will increase the yield of nitrogen dioxide.
(a) Increasing the concentration of nitrogen.
(b) Increasing the temperature.
(c) Applying pressure.
(d) Adding 3 moles of Neon under constant volume.
(e) Adding a catalyst.
(a) If the concentration of nitrogen is increased, then the reaction will occur in the direction that decreases the concentration of nitrogen. Thus the equilibrium will shift forward, and increase the yield of \text{NO}_2.
(b) Since the forward reaction is endothermic, the equilibrium will shift forward when temperature is increased, increasing the yield of nitrogen dioxide. In this case, the equilibrium constant K becomes larger.
(d) Neon is a noble gas, which has nothing to do with the above reaction. Adding 3 moles of Neon will change the total pressure, but will not affect the partial pressures of the reactants and products.
(e) Adding a catalyst will accelerate the speed of the reaction, but will not shift the equilibrium.
Therefore our answer is (a), (b), and (c).
_\square
Cite as: Le Chatelier's Principle. Brilliant.org. Retrieved from https://brilliant.org/wiki/le-chateliers-principle/ |
ReiszWindow - Maple Help
Home : Support : Online Help : Science and Engineering : Signal Processing : Windowing Functions : ReiszWindow
multiply an array of samples by a Welch windowing function
multiply an array of samples by a Reisz windowing function
WelchWindow(A)
ReiszWindow(A)
The WelchWindow(A) command multiplies the Array A by the Welch windowing function and returns the result in an Array having the same length.
The ReiszWindow( A ) command is provided as an alias.
The Welch windowing function w\left(k\right) for an Array with N elements is
w\left(k\right)=1-{\left(\frac{2k}{n}-1\right)}^{2}
The SignalProcessing[WelchWindow] and SignalProcessing[ReiszWindow] commands are thread-safe as of Maple 18.
\mathrm{with}\left(\mathrm{SignalProcessing}\right):
N≔1024:
a≔\mathrm{GenerateUniform}\left(N,-1,1\right)
{\textcolor[rgb]{0,0,1}{\mathrm{_rtable}}}_{\textcolor[rgb]{0,0,1}{36893628302016037220}}
\mathrm{WelchWindow}\left(a\right)
{\textcolor[rgb]{0,0,1}{\mathrm{_rtable}}}_{\textcolor[rgb]{0,0,1}{36893628301873532924}}
c≔\mathrm{Array}\left(1..N,'\mathrm{datatype}'='\mathrm{float}'[8],'\mathrm{order}'='\mathrm{C_order}'\right):
\mathrm{WelchWindow}\left(\mathrm{Array}\left(1..N,'\mathrm{fill}'=1,'\mathrm{datatype}'='\mathrm{float}'[8],'\mathrm{order}'='\mathrm{C_order}'\right),'\mathrm{container}'=c\right)
{\textcolor[rgb]{0,0,1}{\mathrm{_rtable}}}_{\textcolor[rgb]{0,0,1}{36893628301873508348}}
u≔\mathrm{`~`}[\mathrm{log}]\left(\mathrm{FFT}\left(c\right)\right):
\mathbf{use}\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}\mathrm{plots}\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}\mathbf{in}\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}\mathrm{display}\left(\mathrm{Array}\left(\left[\mathrm{listplot}\left(\mathrm{ℜ}\left(u\right)\right),\mathrm{listplot}\left(\mathrm{ℑ}\left(u\right)\right)\right]\right)\right)\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}\mathbf{end use}
The SignalProcessing[WelchWindow] and SignalProcessing[ReiszWindow] commands were introduced in Maple 18. |
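For comparison outside Maple, the same window is easy to generate in NumPy. This sketch assumes the common endpoint-zero convention n = N − 1; Maple's exact indexing may differ, so treat the denominator as an assumption:

```python
import numpy as np

def welch_window(N):
    """w(k) = 1 - (2k/n - 1)^2 with n = N - 1 (assumed convention)."""
    k = np.arange(N)
    return 1.0 - (2.0 * k / (N - 1) - 1.0) ** 2

w = welch_window(5)
assert np.allclose(w, [0.0, 0.75, 1.0, 0.75, 0.0])

# apply it to a signal the way WelchWindow(A) multiplies an Array elementwise
a = np.ones(5)
assert np.allclose(a * w, w)
```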
Find all rational zeros of the polynomial, and write the polynomial in factored form. P(x)=4x^{3}-7x+3
P\left(x\right)=4{x}^{3}-7x+3
First, check for a rational zero of the cubic by substituting candidate values of x.
P\left(1\right)=4{\left(1\right)}^{3}-7\left(1\right)+3=4-7+3=0
So, x = 1 is the solution of given cubic equation.
Now divide 4{x}^{3}-7x+3 by x-1:
\frac{4{x}^{3}-7x+3}{x-1}=4{x}^{2}+4x-3
Now solve 4{x}^{2}+4x-3=0 by factoring:
4{x}^{2}-2x+6x-3=0
2x(2x-1)+3(2x-1)=0
(2x-1)=0 or (2x+3)=0
2x=1 or 2x=-3
x=\frac{1}{2}
x=-\frac{3}{2}
The zeros of the polynomial are x=1, x=\frac{1}{2}, and x=-\frac{3}{2}, so the factored form is P\left(x\right)=\left(x-1\right)\left(2x-1\right)\left(2x+3\right).
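The three zeros and the factorization can be confirmed with NumPy's polynomial routines:

```python
import numpy as np
from numpy.polynomial import polynomial as Poly

# P(x) = 4x^3 + 0x^2 - 7x + 3 (coefficients in descending order for np.roots)
roots = np.sort(np.roots([4, 0, -7, 3]))
assert np.allclose(roots, [-1.5, 0.5, 1.0])

# expanding (x - 1)(2x - 1)(2x + 3) recovers 4x^3 - 7x + 3
# (numpy.polynomial uses ascending coefficient order)
coeffs = Poly.polymul(Poly.polymul([-1, 1], [-1, 2]), [3, 2])
assert np.allclose(coeffs, [3, -7, 0, 4])
```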
I have the equation:
\frac{1}{\tau }{\int }_{0}^{\tau }A\mathrm{sin}\left(\mathrm{\Omega }t\right)\cdot A\mathrm{sin}\left(\mathrm{\Omega }\left(t-\lambda \right)\right)dt
for which the attempted solution is to convert the sine terms into complex natural exponents (engineering notation using j as imaginary unit) as
\frac{{A}^{2}}{\tau }{\int }_{0}^{\tau }\frac{{e}^{j\mathrm{\Omega }t}-{e}^{-j\mathrm{\Omega }t}}{2j}\cdot \frac{{e}^{j\mathrm{\Omega }\left(t-\lambda \right)}-{e}^{-j\mathrm{\Omega }\left(t-\lambda \right)}}{2j}dt
the next step in the solution moves the
\frac{1}{2j}
term outside of the integral to form
\frac{-{A}^{2}}{4\tau }{\int }_{0}^{\tau }\left({e}^{j\mathrm{\Omega }t}-{e}^{-j\mathrm{\Omega }t}\right)\cdot \left({e}^{j\mathrm{\Omega }\left(t-\lambda \right)}-{e}^{-j\mathrm{\Omega }\left(t-\lambda \right)}\right)dt
I'm struggling to understand how
\frac{1}{2j}\to \frac{-1}{4}
when being moved out of the integral.
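For what it's worth, the step follows from j² = −1: the two 1/(2j) factors multiply to 1/(2j)² = 1/(4j²) = −1/4, which is a constant and so can be pulled outside the integral. A one-line numeric check (Python writes the imaginary unit as 1j):

```python
# (1/(2j)) * (1/(2j)) = 1/(4 j^2) = -1/4
factor = (1 / (2 * 1j)) ** 2
assert abs(factor - (-0.25)) < 1e-12
```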
The problem is to prove that:
{\int }_{-\pi }^{\pi }{\int }_{-\pi }^{\pi }\left(a\mathrm{sin}\left(x\right)+b\mathrm{cos}\left(x\right)+c\mathrm{sin}\left(y\right)+d\mathrm{cos}\left(y\right)\right){e}^{-2\mathrm{cos}\left(y-x\right)+a\mathrm{cos}\left(x\right)-b\mathrm{sin}\left(x\right)+c\mathrm{cos}\left(y\right)-d\mathrm{sin}\left(y\right)}dxdy=0
I have an idea to represent a function under the integral like the odd function, but I can't.
Also, I can prove, that:
{\int }_{-\pi }^{\pi }\left(a\mathrm{cos}\left(x\right)+b\mathrm{sin}\left(x\right)\right){e}^{a\mathrm{sin}\left(x\right)-b\mathrm{cos}\left(x\right)}dx=0
Food - OSRS Wiki
There are several kinds of food, such as fish and meat, bread, cakes, pies, and pizza. Some foods, such as pies and pizzas, have two or more "bites" or "slices" meaning they can be eaten multiple times. There are also foods which can be grown with the Farming skill or found in some farm areas such as potatoes, cabbages, and onions. Lastly, there are holiday food items such as pumpkins and easter eggs. It is not advised to eat one of these holiday items due to their low availability and high price.
Common foods[edit | edit source]
Shrimp 3 1 1 Caught via small net fishing. 14.33 43 0
Cooked chicken 3 1 Raw chicken is obtained by killing chickens. 24.33 73 0
Cooked meat 3 1 Raw meat can be obtained by killing cows or giant rats. 21.33 64 0
Sardine 4 1 5 Caught via bait fishing. 3 9 0
Bread 5 1 Baked by players, see Cooking skill. 34 170 0
Herring 5 5 10 Caught via bait fishing. 1.6 8 0
Mackerel 6 10 16 Caught via big net fishing. 2 12 1
Choc-ice 6 N/A Can be bought from Rokuh in Nardah. Negates damage from desert heat. 31.17 187 1
Trout 7 15 20 Caught via fly fishing. 5.71 40 0
Cod 7 18 23 Caught via big net fishing. 0.86 6 1
Pike 8 20 25 Caught via bait fishing. 3.5 28 0
Roast beast meat 8 21 An iron spit must be used along with a fire to cook it. 3.25 26 1
Pineapple punch 9 8 See Gnome cooking for more information. 5.22 47 1
Salmon 9 25 30 Caught via fly fishing. 7.67 69 0
Tuna 10 30 35 Caught with a harpoon. 7.5 75 0
Redberry pie 5 x 2 10 Baked by players, see Cooking skill.
Heals 5 hitpoints per bite (2 bites total). 4.5 45 0
Jug of wine 11[2] 35 Upon consumption, Attack is temporarily decreased by a few points and leaves a jug behind. 0.18 2 0
Rainbow fish 11 35 38 Caught via fly fishing by using stripy feathers. 8.45 93 1
Stew 11 25 Once eaten, an empty bowl remains in the inventory. 6.27 69 0
Banana stew 11 N/A Once eaten, an empty bowl remains in the inventory. 1.82 20 1
Cake 4 × 3 40 Baked by players, see Cooking skill.
Can be stolen from East Ardougne, Keldagrim, and Kourend Castle.
Heals 4 hitpoints per bite (3 bites total). 17.33 208 0
Meat pie 6 × 2 20 Baked by players, see Cooking skill.
Lobster 12 40 40 Caught with a lobster pot. 13.75 165 0
Bass 13 43 46 Caught via big net fishing. 6.54 85 1
Plain pizza 7 × 2 35 Baked by players, see cooking skill.
Swordfish 14 45 50 Caught with a harpoon. 29 406 0
Potato with butter 14 39 Made by using pat of butter on a baked potato. 27.64 387 1
Apple pie 7 × 2 30 Made by using a cooking apple on a pie shell to make an uncooked apple pie and cooking it.
Heals 7 hitpoints per bite (2 bites total). 7.64 107 0
Chocolate cake 5 × 3 50 Made by adding chocolate dust to cake.
Heals 5 hitpoints per bite (3 bites total).
1/3 Slice can be thieved from baker's stalls. 16.67 250 0
Watermelon (1 to 5) × 3 47 Must be sliced with a knife before eating.
Heals for
{\displaystyle Hitpoints*{\frac {1}{20}}+1}
per slice. 1.6 – 8 24 1
Tangled toad's legs 15 40 See Gnome cooking for more information. 328.47 4,927 1
Chocolate bomb 15 42 see Gnome cooking for more information. 68.07 1,021 1
Potato with cheese 16 47 Made by using cheese on a potato with butter. 16.06 257 1
Meat pizza 8 × 2 45 Made by using cooked meat or chicken on a plain pizza.
Heals 8 hitpoints per bite (2 bites total). 25 400 0
Admiral pie 8 × 2 70 Heals 8 hitpoints per bite (2 bites total).
Boosts Fishing by 5. 9.25 148 1
Monkfish 16 62 62 Caught via small net fishing in Piscatoris after the completion of the Swan Song. 21.31 341 1
Anchovy pizza 9 × 2 55 Made by using anchovies on a plain pizza.
Cooked karambwan 18 30 65 A karambwan vessel and completion of Tai Bwo Wannai Trio must be done to fish and cook it properly.
This food is unique in that it can be eaten in the same tick as another piece of food, allowing for combo eating. 27.28 491 1
Curry 19 60 Commonly used for fighting the Chaos Elemental. Once eaten, an empty bowl remains in the inventory. 43 817 1
Ugthanki kebab 19 58 The player's character will utter a random phrase of contentment, such as "Lovely!", "Scrummy!", "Delicious!" or "Yum!" on eating. 43.16 820 1
Guthix rest 5 × 4 18 Requires partial completion of One Small Favour to make and consume.
Can boost 5 hitpoints above max. Restores 5% run energy and reduces poison/venom damage by 1.
It is not considered food as it is a potion made with the Herblore skill. 144 2,880 1
Dragonfruit pie 10 × 2 73 Heals 10 hitpoints per bite (2 bites total).
Boosts Fletching by 4. 35 700 1
Mushroom potato 20 64 Made by using mushroom & onion on a potato with butter. 44.95 899 1
Shark 20 80 76 Caught with a harpoon or traded for with minnows. 48.3 966 1
Sea turtle 21 82 79 Can only be obtained from the Fishing Trawler minigame, Tempoross, and drift net fishing. 48.9 1,027 1
Pineapple pizza 11 × 2 65 Made by using pineapple chunks or rings on a plain pizza.
Heals 11 hitpoints per bite (2 bites total). 30.32 667 1
Summer pie 11 × 2 95 Heals 11 hitpoints per bite (2 bites total).
Boosts Agility by 5, and restores 10% run energy. 29.45 648 1
Wild pie 11 × 2 85 Heals 11 hitpoints per bite (2 bites total).
Boosts Slayer by 5, and Ranged by 4. 30.05 661 1
Manta ray 22 91 81 Can only be obtained from the Fishing Trawler minigame, drift net fishing, Tempoross, Zulrah and Vorkath. 68.55 1,508 1
Tuna potato 22 68 Made by using tuna and corn on a potato with butter. 64.23 1,413 1
Dark crab 22 90 85 Caught by using a lobster pot with dark fishing bait.
Can only be fished in the Wilderness and the Resource Area. 66.86 1,471 1
Anglerfish 3 to 22 84 82 Caught by using a fishing rod with sandworms (requires 100% Port Piscarilius favour).
Healing dependent on the player's Hitpoints level and can boost hitpoints above one's maximum equal to the amount it heals. 93.05 – 682.33 2,047 1
Basket of strawberries (1 to 6) × 5 31 Contains 5 strawberries, each of which heals 1-6 Hitpoints (1 + 6% of hp).
Strawberry spawn at Shayziens' Wall in Great Kourend for no requirements. 41.77 – 250.6 1,253 1
Saradomin brew (3 to 16) × 4 81 Drunk in 4 sips, each healing for
{\displaystyle Hitpoints*{\frac {15}{100}}+2}
and can boost health above the maximum value.
Due to it lowering offensive stats, it is typically used with super restore potions.
It is not considered food as it is a potion made with the Herblore skill. 113.28 – 604.17 7,250 1
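The healing formulas quoted above can be tabulated in a few lines. This sketch assumes the game floors the percentage term, which is what makes the computed values match the listed ranges (3 to 16 for the brew, 1 to 5 for watermelon):

```python
def brew_heal_per_sip(hitpoints):
    """Saradomin brew: floor(Hitpoints * 15/100) + 2 per sip (floor assumed)."""
    return hitpoints * 15 // 100 + 2

def watermelon_heal_per_slice(hitpoints):
    """Watermelon: floor(Hitpoints * 1/20) + 1 per slice (floor assumed)."""
    return hitpoints // 20 + 1

# matches the ranges in the table above
assert brew_heal_per_sip(10) == 3 and brew_heal_per_sip(99) == 16
assert watermelon_heal_per_slice(10) == 1 and watermelon_heal_per_slice(99) == 5
```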
^ GP per heal on food that take multiple bites indicate the total hp healed, rather than per bite or dose.
^ Drains Attack by 2 points.
Wilderness-only foods[edit | edit source]
Blighted foods can only be consumed in the Wilderness.
Blighted karambwan 18 N/A Can only be eaten in the Wilderness.
Purchased from the Soul Wars Reward Shop or Justine's stuff for the Last Shopper Standing.
Blighted manta ray 22 N/A Can only be eaten in the Wilderness.
Purchased from the Soul Wars Reward Shop or Justine's stuff for the Last Shopper Standing. 49.95 1,099 1
Blighted anglerfish 3 to 22 N/A Can only be eaten in the Wilderness.
Healing dependent on the player's Hitpoints level and can boost hitpoints above one's maximum equal to the amount it heals. 64.5 – 473 1,419 1
Effectiveness: Ensure that the food you take is good enough for your next encounter. Tuna is cheap but it only heals 10 hitpoints, making it not the most effective against stronger opponents, whereas swordfish heal 14 hitpoints, a major difference. Also, if you know exactly what you're going to be facing, you should try to find food that heals for more than their max hit. As an example, a TzHaar-Ket has a max hit of 16. Therefore, sharks and monkfish are effective, swordfish are passable, and lobsters and below are much less effective.
Cost: Use food that you can afford. Sharks are often sold by players for around 966 coins, whereas monkfish are nearly as effective in many situations and typically sell for 341 coins, 80% of the health restoration for 35.3% of the cost. Weigh the cost of food against the possible profit of the venture, and avoid using expensive food unless it is going to make a significant difference to your chance of survival or task completion.
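The shark-vs-monkfish comparison is just a coins-per-hitpoint division; a quick check with the prices quoted above (GE prices fluctuate, so treat the numbers as a snapshot):

```python
foods = {
    # name: (hitpoints healed, typical GE price in coins, as quoted above)
    "shark":    (20, 966),
    "monkfish": (16, 341),
    "tuna":     (10, 75),
}

cost_per_hp = {name: price / heal for name, (heal, price) in foods.items()}
assert round(cost_per_hp["shark"], 1) == 48.3      # matches the table's 48.3
assert round(cost_per_hp["monkfish"], 2) == 21.31  # matches the table's 21.31

# monkfish give 16/20 = 80% of a shark's healing at about 35.3% of the price
assert round(341 / 966 * 100, 1) == 35.3
```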
"It heals some health" messages have been added to the game filter for more types of food.
Retrieved from ‘https://oldschool.runescape.wiki/w/Food?oldid=14246558’ |
The TradeStars platform - Whitepaper
The TradeStars platform can be considered a Decentralized Exchange (DEX) for Fractional NFTs, where the economic incentives for users to stake in the game are tied to real-life statistical data.
Leveraging our team's past experience in the Fantasy Sports and Gaming industries, we created a crypto-economic game based on the trading of virtual assets that represent real-life statistical performance.
Real-life statistics are tokenized through our Fractional NFT implementation, from which users can trade shares. We call these shares “Smart Tokens”.
The token supply of each Fractional NFT market is managed by a bonding curve that adjusts the share price seamlessly according to market supply and demand.
When Smart Tokens are purchased, the payment gets added to the reserve balance and new Smart Tokens are issued to the buyer. Since both the reserve balance and the supply are increasing, the purchase of a smart token will cause its price to increase. Similarly, when Smart Tokens are liquidated, they are removed from the supply, reserve tokens are transferred to the seller, and the token price decreases.
To ensure that price fluctuations of the reserve do not affect the market price of the smart tokens, a stablecoin is used as the common reserve token.
Tracking real-time and historic statistics for these NFTs adds another component to the formula, influencing the final price of their shares. Much like in a real-world stock exchange, hard data influences the perceived dividends for the stockholders (more on this later), helping determine the new price for the trading shares and creating an incentive to spot and buy early the assets that promise the most upside.
As new prices get validated, users will buy or sell to profit from their holdings, and these actions in turn set new prices for the traded assets.
Also, if a small fee accrues to the NFT owner on each purchase transaction, users are encouraged to hold ownership of the Fractional NFTs and to try to increase the transaction volume of their shares.
All this results in the creation of a hyper-liquid market around the real-life, tokenized asset represented by the Fractional NFT.
TradeStars' core focus is to enable sports fans around the globe to use the platform in a friendly and natural way. Here’s how the simplest use case for a user interacting with the platform works:
Once registered on the platform, the user can fund their account with any of the supported payment methods, or use an external web3-compatible wallet to fund it with any supported ERC20 token.
Users can then purchase or liquidate Smart Tokens of any of the unlocked Fractional NFT markets.
While holding Smart Tokens in their portfolio, and according to the scoring rules, users receive dividends in the platform's main token (TSX).
Users can stake TSX to unlock new Fractional NFT markets, participate in platform governance votes, and earn a percentage of the platform's transaction fees.
Fractional NFT markets
Fractional NFT markets are the main items on the TradeStars platform and can be compared to the liquidity pools seen on conventional decentralized exchanges.
A Fractional NFT market is composed of the real-life performance of a sports player, tokenized through the Fractionable NFT, and its circulating supply of shares or "Smart Tokens". It provides automated liquidity, managing its share supply and validating prices through a parameterized bonding curve.
Users can purchase and liquidate Smart Tokens at these markets and, because a common reserve token (TSX) serves as the medium of exchange, all of these tokens are interchangeable inside the TradeStars platform.
As defined earlier, these tokens are transferable, ERC-20-compatible tokens that are created and destroyed by the holding Fractionable NFT, providing automated liquidity.
Each of these tokens represents a fraction, or share, of the emitting Fractionable NFT, and users can trade, hold, purchase or liquidate these tokens at any time against the TradeStars smart contracts in exchange for the reserve token (more on this later).
Much like Bancor's implementation, we use a method based on a “Constant Reserve Ratio” (CRR) to set the relation between price and supply for these tokens. The CRR is set by TradeStars and can later be changed by TSX holders' voting decisions.
The CRR is used in price calculation, along with the Smart Token’s current supply and reserve balance, in the following way:
Price = \frac{Balance}{Supply * CRR}
A constant ratio is kept between the reserve token balance and the smart token’s market capitalization (supply * price). Dividing the market cap by the supply produces the price according to which the smart token can be purchased and liquidated through the smart contract.
The smart token’s price is denominated in the reserve token and readjusted by the smart contract on each creation or destruction operation, which increases or decreases the reserve balance and the smart token supply (and thus the price).
When smart tokens are purchased, the payment for the purchase is added to the reserve balance, and based on the calculated price, new smart tokens are issued to the buyer.
Due to the calculation above, a purchase of a smart token will cause its price to increase, since both the reserve balance and the supply are increasing, while the latter is multiplied by a fraction. Similarly, when smart tokens are liquidated, they are removed from the supply (destroyed), and based on the current price, reserve tokens are transferred to the liquidator. In this case, any liquidation will trigger a price decrease.
The following image shows a simplified scenario of how this mechanism works:
The actual price of a smart token is calculated as a function of the transaction amount.
R - Reserve Token Balance
S - Smart Token Supply
F - Constant Reserve Ratio (CRR)
T = Smart tokens received in exchange for E (reserve tokens), given R, S.
T = S((1 + \frac{E}{R})^F-1)
E = Reserve tokens received in exchange for T (smart tokens), given R, S.
E=R(1-\sqrt[F]{1-\frac{T}{S}})
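These formulas can be sketched directly in code. The following is a minimal illustration of the bonding-curve equations above, with made-up market numbers (this is not the platform's actual contract logic); note that because the curve is path-independent, liquidating the tokens just purchased returns exactly the reserve that was deposited:

```python
def purchase(E, R, S, F):
    """Smart tokens T received for depositing E reserve tokens,
    given reserve balance R, supply S and reserve ratio F (the CRR)."""
    return S * ((1 + E / R) ** F - 1)

def liquidate(T, R, S, F):
    """Reserve tokens E received for destroying T smart tokens."""
    return R * (1 - (1 - T / S) ** (1 / F))

def price(R, S, F):
    """Spot price: Price = Balance / (Supply * CRR)."""
    return R / (S * F)

# Hypothetical market: 10,000 reserve tokens backing 100,000 shares, CRR = 0.5
R, S, F = 10_000.0, 100_000.0, 0.5
p0 = price(R, S, F)                  # 0.2 reserve tokens per share

T = purchase(1_000.0, R, S, F)       # buy with 1,000 reserve tokens
R2, S2 = R + 1_000.0, S + T
p1 = price(R2, S2, F)                # price has risen with the purchase

E_back = liquidate(T, R2, S2, F)     # sell the same tokens straight back
# E_back equals the original 1,000 deposit (up to floating-point error)
```

With CRR = 1 the price stays constant; the lower the CRR, the more sharply the price responds to purchases and liquidations.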
Smart Tokens are first created by depositing an initial reserve and issuing the initial token supply.
Network Token and Smart Tokens exchangeability.
As all the Performance Smart Tokens use the same reserve token, they form a network of tokens. The common reserve token can be described as a network token which captures the combined value of the network of smart tokens which hold it in reserve.
The network token also functions as a “token for tokens”, rendering all the smart tokens in the network interchangeable.
Increased demand for any smart token in the network increases demand for the network token, since it is required to purchase these tokens and is held in their reserves, and the network token's price is directly related, through the CRR, to the value of the Smart Tokens. We will use the TradeStars TSX token as the common reserve token.
PCA based health indicator for remaining useful life prediction of wind turbine gearbox | JVE Journals
Hemanth Mithun Praveen1 , Divya Shah2 , Krishna Dutt Pandey3 , Vamsi I4 , Sabareesh G R5
1, 2, 3, 4, 5Department of Mechanical Engineering, BITS Pilani, Hyderabad Campus, Hyderabad, India
Copyright © 2019 Hemanth Mithun Praveen, et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Fault prognosis of wind turbine gearboxes has received considerable attention, as it predicts the remaining useful life and thereby allows maintenance strategies to be scheduled. However, studies on RUL prediction of wind turbine gearboxes are limited, because of the complexity of the gearbox, acute changes in operating conditions and the non-linear nature of the acquired vibration signals. In this study, a health indicator is constructed in order to predict the remaining useful life of the wind turbine gearbox. Run-to-fail experiments are performed on a laboratory-scale wind turbine gearbox with an overall gear ratio of 1:100. Vibration signals are acquired and decomposed through the continuous wavelet transform to obtain the wavelet coefficients. Various statistical features are computed from the wavelet coefficients, which together form a high-dimensional input feature set. Principal component analysis is performed to reduce the dimensionality, and principal components (PCs) are computed from the input feature set. PC1 is taken as the health indicator and subjected to further smoothening by the linear rectification technique. An exponential degradation model is fitted to the health indicator, and the model is able to predict the RUL of the gearbox with an error of 2.73 %.
Keywords: vibration analysis, fault prognosis, principal component analysis, wind turbine gearbox.
In recent years, the harnessing of wind through off-shore wind turbines has increased significantly. However, wind energy industries are experiencing longer downtimes, high maintenance costs and reduced reliability. As wind farms are located in remote and unmanned regions, it is very difficult to implement reactive and preventive maintenance strategies. In order to make wind energy competitive and economically viable, condition based maintenance is being employed, which allows the scheduling of maintenance and thereby reduces unexpected downtimes [1]. Condition monitoring (CM) systems consist of two sub-systems, namely fault diagnosis and fault prognosis. Fault diagnosis is the detection of a physical fault in a mechanical system and the classification of the fault type, whereas fault prognosis is concerned with predicting the remaining useful life (RUL) of the mechanical system [2]. Previous researchers have used vibration analysis to diagnose defects present in wind turbine gearboxes and further quantified the severity level using various machine learning algorithms [3, 4]. As the acquired vibration signatures are noisy and non-linear in nature, recent studies focus on signal processing algorithms such as the wavelet transform and empirical mode decomposition to extract fault-sensitive information from the acquired data [3, 5].
On the other hand, research investigations into the RUL prediction of wind turbine gearboxes are limited, because of the complexity of the gearbox, acute changes in the operating conditions and the non-linear nature of the acquired vibration signals [6]. Besides that, prediction of RUL rests on the assumption that mechanical component failure is preceded by a period of gradual deviation or smooth degradation of the system from its expected normal operating condition. RUL prediction can be done using two approaches: model-based and data-driven. The model-based approach uses finite element models to examine the overall stress distribution and predicts the RUL according to a damage propagation mechanism. However, this approach is far from reality, as it involves many assumptions, and it is quite challenging to model the complex gearbox and the non-linearities that arise from changes in loading [6]. The data-driven approach, in contrast, detects degradation and predicts the RUL of mechanical components from available run-to-failure data. Usually, a cumulative health indicator (HI) is modelled to indicate the degradation of the mechanical components, and these HI values are then correlated with the acquired data to predict the RUL of the component [7]. The majority of previous research has predicted the RUL of components of electrical and mechanical systems under stationary operating conditions; there is a dearth of literature on the RUL of wind turbine gearboxes, which are complex systems subjected to non-stationary loads. This investigation develops an exponential degradation model in order to predict the RUL of a wind turbine gearbox subjected to non-stationary operating conditions.
2.1. Experimental test-rig
A customized miniature wind turbine planetary gearbox with an overall gear ratio of 1:100 was designed and constructed (see Fig. 1). The gearbox consists of three stages, two planetary and one parallel. The gearbox was in perfect working order at the start of the experiment, with all components free from defects. The experiment was run for a total of 345 hours and stopped when the gearbox failed catastrophically. Data acquisition was automated: accelerometers were stud-mounted at appropriate points, the sample rate was set at 12 kHz, and the sensors were connected to a data acquisition device and then to the computer.
Fig. 1. Experimental setup- miniature wind turbine planetary gearbox
The vibration signals are acquired and further processed using the continuous wavelet transform (CWT). CWT provides a time-frequency representation of the original vibration signal through a mother wavelet [4]. CWT yields a series of wavelet coefficients of the same length as the original signal, expressing the similarity between the original signal and the mother wavelet function at a given scale. The Morlet wavelet was used as the mother wavelet in the present investigation when decomposing the vibration signal with CWT. Various statistical features are then computed from the CWT coefficients. These statistical features are used to construct the health indicator, which is extrapolated to predict the RUL of the wind turbine gearbox (see Fig. 2).
Fig. 2. Experimental methodology
3. Modeling of health indicator
The obtained statistical feature set is a high-dimensional data matrix containing noise as well as non-linear variability, and it requires a large amount of computational power to process. Besides that, the large feature set can lead to a smearing effect if not pre-processed and can also cause overfitting. It is therefore necessary to pre-process the feature dataset to establish an effective health indicator for the degradation model. Initially, the obtained feature set is normalized to reduce data redundancy, which improves the overall integrity of the dataset. Min-max normalization is used; it maps the entire feature set to the range 0 to 1, as given by Eq. (1):
Normalization= \frac{{a}_{i}-{a}_{min}}{{a}_{max}-{a}_{min}},
where ${a}_{i}$ is the $i$-th data point, ${a}_{min}$ is the minimum value of the data and ${a}_{max}$ is the maximum value of the data. The normalized high-dimensional feature set is then subjected to dimensionality reduction in order to reduce the computational time. Principal component analysis (PCA) is an unsupervised machine learning algorithm used for dimensionality reduction [8]. PCA performs an orthogonal linear transformation on the input data such that the direction of greatest variance (principal component 1) becomes the first coordinate, the direction of the next greatest variance (principal component 2) the second coordinate, and so on. The $n$-dimensional feature matrix $X$, with the feature values corresponding to a runtime of $n$ (in days) as row vectors and the $d$-dimensional feature vectors as column vectors, is supplied as the input matrix to PCA. PCA computes the principal components (eigenvectors) of the input matrix, which determine the directions of the new feature space, together with the eigenvalues, which correspond to their magnitudes. The defect-sensitive information of the original vibration signal is retained in the first few PCs. As the first PC (PC1) carries 90 % of the variability of the original signal, PC1 has been chosen as the health indicator (HI). Fig. 3 shows the trend of the raw and smoothened health indicator over a run time of 292 hours. Rather than exhibiting a regular, expected trend, the health indicator shows local variation with machine run time, which makes it difficult to draw any inference about the RUL; hence the HI trend needs to be smoothened. The linear rectification technique (LRT) is applied to remove the spurious local fluctuations, in the form of sudden peaks or unexpected valleys, from the HI trend [9]. Mathematically, LRT smoothening and the growth rate of the HI are given by Eqs. (2) and (3) respectively:
{h}_{i}=\begin{cases}{h}_{i}, & \forall \,{h}_{i-1}\le {h}_{i}\le \left(1+\eta \right){h}_{i-1},\\ {h}_{i-1}+\eta , & \forall \,{h}_{i}<{h}_{i-1}\ \vee \ {h}_{i}>\left(1+\eta \right){h}_{i-1},\end{cases}
\eta =\frac{1}{n}\left|\sum _{i=1}^{n}\left({h}_{i-1}-{h}_{i}\right)\right|,
where ${h}_{i}$ indicates the value of the HI at run time ${t}_{i}$ and $\eta$ indicates the growth rate within a certain window. LRT therefore produces a progressively increasing trend that corresponds to the degradation of the components.
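A minimal sketch of Eqs. (2) and (3) in plain Python (function and variable names, and the toy series, are ours, not the paper's; η is estimated here over the whole series rather than a moving window): any point falling outside the admissible band is replaced, so the output trend never decreases:

```python
def linear_rectification(h):
    """Linear rectification technique (LRT), per Eqs. (2)-(3):
    keep h[i] if h[i-1] <= h[i] <= (1 + eta) * h[i-1],
    otherwise replace it with h[i-1] + eta."""
    h = [float(v) for v in h]
    n = len(h)
    # Eq. (3): growth rate eta, computed over the whole series
    eta = abs(sum(a - b for a, b in zip(h[:-1], h[1:]))) / n
    out = h[:]
    for i in range(1, n):
        lower, upper = out[i - 1], (1 + eta) * out[i - 1]
        if not (lower <= out[i] <= upper):   # spurious peak or valley
            out[i] = out[i - 1] + eta
    return out

# A noisy, roughly increasing health indicator with a spurious
# valley (0.05) and peak (0.40)
hi_raw = [0.10, 0.12, 0.05, 0.14, 0.40, 0.16, 0.18, 0.21]
hi_smooth = linear_rectification(hi_raw)
```

Because every rejected point is replaced by the previous value plus η and accepted points already satisfy h[i] ≥ h[i-1], the rectified series is non-decreasing, matching the progressively increasing trend described above.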
Fig. 3. Evolution of health indicator a) raw health indicator b) smoothened health indicator
4. Remaining useful life prediction
The LRT-smoothened HI trend is constructed for the available acquired vibration data, and prospective HI values are predicted until the trend reaches the threshold value. An exponential degradation model is used to predict the RUL and is defined by Eq. (4):
f\left(t\right)=a*\mathrm{exp}\left(b*t\right).
Of the 292 hours of acquired vibration data, 90 % (about 262 hours) is used to construct the HI through the exponential fit. The confidence level for the fit is taken as 0.9, signifying that the predicted exponential fit contains the true population mean with 90 % confidence [10]. Fig. 4(a) shows the actual HI trend together with the HI trend predicted by the exponential degradation model. The predicted HI trend reaches the threshold value (0.5) at 284 hours, which is close to the actual run time of 292 hours, for an error of 2.73 %. To assess the reliability of the proposed method, another trial of RUL prediction is performed using 80 % of the data (about 233 hours) to build the HI through the exponential fit; here the predicted HI reaches the threshold at 269 hours, for an error of 7.19 % (see Fig. 4(b)). The proposed approach is therefore able to predict the RUL of the wind turbine gearbox, although it requires a considerable amount of historical data to train the exponential model.
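The fitting-and-extrapolation step can be sketched as follows. Eq. (4) is linear in log-space, so a least-squares line through log(HI) recovers a and b; solving a·exp(b·t) = threshold then gives the predicted end of life. The data and parameter values below are synthetic, chosen only to mimic the time scale of the experiment:

```python
import math

def fit_exponential(t, h):
    """Least-squares fit of f(t) = a * exp(b * t) via log-linearisation."""
    n = len(t)
    y = [math.log(v) for v in h]
    xbar, ybar = sum(t) / n, sum(y) / n
    b = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(t, y)) \
        / sum((xi - xbar) ** 2 for xi in t)
    a = math.exp(ybar - b * xbar)
    return a, b

def predicted_failure_time(a, b, threshold):
    """Solve a * exp(b * t) = threshold for t."""
    return math.log(threshold / a) / b

# Synthetic HI with a = 0.01, b = 0.0134 per hour: crosses the 0.5
# threshold near t = 292 h, mimicking the experiment's time scale.
t_train = list(range(0, 262, 2))          # first ~90 % of the run, in hours
hi_train = [0.01 * math.exp(0.0134 * ti) for ti in t_train]

a, b = fit_exponential(t_train, hi_train)
rul_estimate = predicted_failure_time(a, b, threshold=0.5)
```

On clean synthetic data the fit recovers the true parameters and predicts failure near 292 hours; on a real, noisy HI the prediction error grows as the training fraction shrinks, as the 90 % versus 80 % trials above illustrate.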
Fig. 4. Prediction of RUL: a) 90 % training health indicator, b) 80 % training health indicator
In this study, a novel health indicator (HI) was constructed in order to predict the remaining useful life of the wind turbine gearbox. Run-to-fail experiments were performed on a laboratory-scale wind turbine gearbox with an overall gear ratio of 1:100. Vibration signatures were acquired, and the continuous wavelet transform was applied to decompose the raw signals. Various statistical features were computed from the wavelet coefficients to form the input feature set. Principal component analysis was performed and principal components (PCs) were computed from the input feature set. PC1 was taken as the health indicator, and its trend was smoothened by the linear rectification technique. An exponential degradation model was fitted to the health indicator and was able to predict the RUL of the gearbox with an error of 2.73 %. The proposed approach is thus able to predict the RUL of the wind turbine gearbox, though it requires a considerable amount of historical data for training the exponential model. The scope for future work includes establishing an adaptive failure threshold in order to predict the remaining useful life more precisely.
This research work was funded by Department of Science and Technology (DST), Government of India through grant number YSS/2015/001945, which is gratefully acknowledged.
Yang W., Tavner P. J., Crabtree C. J., Feng Y., Qiu Y. Wind turbine condition monitoring: technical and commercial challenges. Wind Energy, Vol. 17, Issue 5, 2014, p. 673-693. [Publisher]
Rai A., Upadhyay S. H. A review on signal processing techniques utilized in the fault diagnosis of rolling element bearings. Tribology International, Vol. 96, 2016, p. 289-306. [Publisher]
Si X. S., Wang W., Hu C. H., Zhou D. H. Remaining useful life estimation–a review on the statistical data driven approaches. European Journal of Operational Research, Vol. 213, Issue 1, 2011, p. 1-14. [Publisher]
Mosallam A., Medjaher K., Zerhouni N. Data-driven prognostic method based on Bayesian approaches for direct remaining useful life prediction. Journal of Intelligent Manufacturing, Vol. 27, Issue 5, 2016, p. 1037-1048. [Publisher]
Ahmad W., Khan S. A., Islam M. M., Kim J. M. A reliable technique for remaining useful life estimation of rolling element bearings using dynamic regression models. Reliability Engineering and System Safety, Vol. 184, 2019, p. 67-76. [Publisher]
Sankararaman S., Goebel K. Why is the remaining useful life prediction uncertain. Annual Conference of the Prognostics and Health Management Society, 2013. [Search CrossRef]
A severely myopic patient has a far point of 10.00 cm
keche0b 2021-12-14 Answered
A severely myopic patient has a far point of 10.00 cm. By how many diopters should the power of his eye be reduced in laser vision correction to obtain normal distant vision for him?
Far point = 10 cm
The power of the normal eye is given by
P=\left(\frac{1}{f}\right)=\left(\frac{1}{u}\right)+\left(\frac{1}{v}\right)
u=\text{object distance}=\text{far point of a normal human eye is infinity}
v=\text{image distance}=\text{the image is formed on the retina which is about 2 cm from the lens}
⇒P=\left(\frac{1}{\mathrm{\infty }}\right)+\left(\frac{1}{0.02}\right)=50D
The power of the myopic eye is given by
P=\left(\frac{1}{f}\right)=\left(\frac{1}{u}\right)+\left(\frac{1}{v}\right)
u=\text{object distance}=\text{far point of the myopic eye}=10cm
v=\text{image distance}=\text{the image is formed on the retina which is about 2 cm from the lens}
⇒P=\left(\frac{1}{0.1}\right)+\left(\frac{1}{0.02}\right)=60D
Reduction in power
=60-50=10D
The power of the severely myopic eye, ${P}_{myopic}$, can be obtained from
{P}_{myopic}=\frac{1}{{d}_{o}}+\frac{1}{{d}_{i}}
The number of diopters to be removed in the correction, ${P}_{corrected}$, is
{P}_{corrected}={P}_{myopic}-{P}_{normal}
{P}_{myopic}=\frac{1}{10.00\times {10}^{-2}\,\text{m}}+\frac{1}{2.00\times {10}^{-2}\,\text{m}}=60.0\,\text{D}
The power of the normal eye is 50 D, thus the corrected power is
{P}_{corrected}=60\,\text{D}-50\,\text{D}=10\,\text{D}
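Both solutions reduce to the same arithmetic, which can be checked in a few lines:

```python
# Thin-lens power P = 1/d_o + 1/d_i, with distances in metres.
d_image = 0.02                       # lens-to-retina distance, about 2 cm

P_normal = 0.0 + 1 / d_image         # object at infinity: 1/inf = 0  -> 50 D
P_myopic = 1 / 0.10 + 1 / d_image    # far point at 10 cm             -> 60 D
reduction = P_myopic - P_normal      # power to remove in correction  -> 10 D
```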
Baseband - Wikipedia
Frequencies occupied by an unmodulated signal
Spectrum of a baseband signal, energy E per unit frequency as a function of frequency f. The total energy is the area under the curve.
In telecommunications and signal processing, baseband is the range of frequencies occupied by a signal that has not been modulated to higher frequencies.[1] Baseband signals typically originate from transducers, converting some other variable into an electrical signal. For example, the output of a microphone is a baseband signal that is an analog of the received audio. In conventional analog radio broadcasting the baseband audio signal is used, after processing, to modulate a separate RF carrier signal at a much higher frequency.
A baseband signal may have frequency components going all the way down to DC, or at least it will have a high fractional bandwidth. A modulated baseband signal is called a passband signal, which occupies a higher range of frequencies and has a much smaller fractional bandwidth.
Various uses
Baseband signal
A baseband signal or lowpass signal is a signal that can include frequencies that are very near zero, by comparison with its highest frequency (for example, a sound waveform can be considered as a baseband signal, whereas a radio signal or any other modulated signal is not).[2]
A baseband bandwidth is equal to the highest frequency of a signal or system, or an upper bound on such frequencies,[3] for example the upper cut-off frequency of a low-pass filter. By contrast, passband bandwidth is the difference between a highest frequency and a nonzero lowest frequency.
Baseband channel
A baseband channel or lowpass channel (or system, or network) is a communication channel that can transfer frequencies that are very near zero.[4] Examples are serial cables and local area networks (LANs), as opposed to passband channels such as radio frequency channels and passband filtered wires of the analog telephone network. Frequency division multiplexing (FDM) allows an analog telephone wire to carry a baseband telephone call, concurrently as one or several carrier-modulated telephone calls.
Digital baseband transmission
Digital baseband transmission, also known as line coding,[5] aims at transferring a digital bit stream over baseband channel, typically an unfiltered wire, contrary to passband transmission, also known as carrier-modulated transmission.[6] Passband transmission makes communication possible over a bandpass filtered channel, such as the telephone network local-loop or a band-limited wireless channel.[citation needed]
Baseband transmission in Ethernet
The word "BASE" in Ethernet physical layer standards, for example 10BASE5, 100BASE-TX and 1000BASE-SX, implies baseband digital transmission (i.e. that a line code and an unfiltered wire are used).[7][8]
Baseband processor
A baseband processor also known as BP or BBP is used to process the down-converted digital signal to retrieve essential data for the wireless digital system. The baseband processing block in GNSS receivers is usually responsible for providing observable data: code pseudo-ranges and carrier phase measurements, as well as navigation data.[citation needed]
Equivalent baseband signal
An equivalent baseband signal or equivalent lowpass signal is, in analog and digital modulation methods for (passband) signals with constant or varying carrier frequency (for example ASK, PSK, QAM, and FSK), a complex-valued representation of the modulated physical signal (the so-called passband signal or RF signal). The equivalent baseband signal is
Z(t)=I(t)+jQ(t)
where $I(t)$ is the in-phase signal, $Q(t)$ the quadrature phase signal, and $j$ the imaginary unit. In a digital modulation method, the $I(t)$ and $Q(t)$ signals of each modulation symbol are evident from the constellation diagram. The frequency spectrum of this signal includes negative as well as positive frequencies. The physical passband signal corresponds to
I(t)\cos(\omega t)-Q(t)\sin(\omega t)=\mathrm{Re}\{Z(t)e^{j\omega t}\}
where $\omega$ is the carrier angular frequency in rad/s.[9]
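The identity Re{Z(t)e^{jωt}} = I(t)cos(ωt) − Q(t)sin(ωt) can be verified numerically. A small self-contained sketch (the carrier frequency, sample rate and I/Q waveforms are arbitrary choices for illustration):

```python
import cmath
import math

f_c = 1_000.0                         # carrier frequency in Hz (arbitrary)
omega = 2 * math.pi * f_c             # carrier angular frequency, rad/s

def passband_direct(I, Q, t):
    """I(t)cos(wt) - Q(t)sin(wt), the physical passband signal."""
    return I * math.cos(omega * t) - Q * math.sin(omega * t)

def passband_from_envelope(I, Q, t):
    """Re{ Z(t) e^{jwt} } with Z(t) = I(t) + jQ(t)."""
    return (complex(I, Q) * cmath.exp(1j * omega * t)).real

# Arbitrary illustrative I/Q waveforms, sampled at 50 kHz
errs = []
for k in range(200):
    t = k / 50_000.0
    I = math.cos(2 * math.pi * 100 * t)
    Q = 0.5 * math.sin(2 * math.pi * 150 * t)
    errs.append(abs(passband_direct(I, Q, t) - passband_from_envelope(I, Q, t)))

max_err = max(errs)                   # the two forms agree to rounding error
```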
A signal at baseband is often used to modulate a higher frequency carrier signal in order that it may be transmitted via radio. Modulation results in shifting the signal up to much higher frequencies (radio frequencies, or RF) than it originally spanned. A key consequence of the usual double-sideband amplitude modulation (AM) is that the range of frequencies the signal spans (its spectral bandwidth) is doubled. Thus, the RF bandwidth of a signal (measured from the lowest frequency as opposed to 0 Hz) is twice its baseband bandwidth. Steps may be taken to reduce this effect, such as single-sideband modulation. Some transmission schemes such as frequency modulation use even more bandwidth.
The figure shows what happens with AM modulation:
Comparison of the equivalent baseband version of a signal and its AM-modulated (double-sideband) RF version, showing the typical doubling of the occupied bandwidth.
^ Jeff Rutenbeck, Tech Terms: What Every Telecommunications and Digital Media Professional Should Know, p. 24, CRC Press, 2012 ISBN 1136034501
^ Steven Alan Tretter (1995). Communication System Design Using Dsp Algorithms: With Laboratory Experiments for the TMS320C30. Springer. ISBN 0-306-45032-1.
^ Mischa Schwartz (1970). Information, Transmission, Modulation and Noise: A Unified Approach to Communication Systems. McGraw-Hill.
^ Chris C. Bissell and David A. Chapman (1992). Digital Signal Transmission. Cambridge University Press. ISBN 0-521-42557-3.
^ Mikael Gustavsson and J. Jacob Wikner (2000). CMOS Data Converters for Communications. Springer. ISBN 0-7923-7780-X.
^ Jan W. M. Bergmans (1996). Digital Baseband Transmission and Recording. Springer. ISBN 0-7923-9775-4.
^ "IEEE Get Program". standards.ieee.org. IEEE. Retrieved 29 March 2017.
^ Proakis, John G. Digital Communications, 4th edition. McGraw-Hill, 2001. p150
Retrieved from "https://en.wikipedia.org/w/index.php?title=Baseband&oldid=1088729196" |
Department of Statistics and Computer Science, Faculty of Science, University of Peradeniya, Peradeniya, Sri Lanka.
The aim of this study is to evaluate factory workers’ job satisfaction in a manufacturing plant in Sri Lanka. A sample of 180 employees was taken from a study population of 846, using stratified random sampling. The data collection tool was a questionnaire designed to cover all aspects of the job satisfaction of the factory workers in the manufacturing plant. The analysis was carried out using the statistical software SPSS version 20.0. The demographic variables associated with overall job satisfaction at the 5 % level of significance, together with the four factors identified by the factor analysis, were used in the model-building procedure. An ordinal regression model was developed that includes the factors affecting job satisfaction; the forward selection method was applied for variable selection. Statistically, “Work arrangements in the workplace” and “Family issues of the workers” were the factors most influencing job satisfaction, and “Gender”, “The distance employees travel” and “Mode of transportation of employees” were the demographic factors associated with the level of job satisfaction.
Najimuddin, N. and Abeysundara, S. (2019) Job Satisfaction of the Factory Workers at the Manufacturing Plant: A Case Study. Open Access Library Journal, 6, 1-11. doi: 10.4236/oalib.1105312.
n=\frac{\frac{p\left(1-p\right)}{V}}{1+\frac{\frac{p\left(1-p\right)}{V}-1}{N}},\qquad V={\left(\frac{d}{{Z}_{\alpha /2}}\right)}^{2}
n=\frac{\frac{0.5\left(1-0.5\right)}{{\left(0.06/1.96\right)}^{2}}}{1+\frac{\frac{0.5\left(1-0.5\right)}{{\left(0.06/1.96\right)}^{2}}-1}{846}}=202.996\approx 203
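The sample-size computation in the equation above can be reproduced directly (p = 0.5, d = 0.06, z = 1.96, N = 846; the helper function name is ours):

```python
def required_sample_size(p, d, z, N):
    """n = [p(1-p)/V] / (1 + (p(1-p)/V - 1) / N), with V = (d/z)**2."""
    V = (d / z) ** 2
    n0 = p * (1 - p) / V              # infinite-population sample size
    return n0 / (1 + (n0 - 1) / N)    # finite-population correction

n = required_sample_size(p=0.5, d=0.06, z=1.96, N=846)
# n is approximately 203 (the paper reports 202.996, rounded to 203)
```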
\text{log}\left[-\text{log}\left(1-{Q}_{ijkl}\right)\right]={\alpha }_{i}+{\beta }_{j}^{\text{gend}}+{\beta }_{k}^{\text{ditr}}+{\beta }_{l}^{\text{motr}}+{\beta }^{\text{wkar}}+{\beta }^{\text{fmis}}
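The complementary log-log link in the model above can be inverted to recover cumulative probabilities; a hedged sketch (the cutpoints and linear-predictor value below are invented for illustration, not estimates from the paper):

```python
import math

def cum_prob(alpha_i, eta):
    """Invert the complementary log-log link:
    log(-log(1 - Q)) = alpha_i + eta  =>  Q = 1 - exp(-exp(alpha_i + eta))."""
    return 1.0 - math.exp(-math.exp(alpha_i + eta))

# Hypothetical cutpoints for an ordinal satisfaction response and a
# hypothetical linear predictor eta = x'beta.
cutpoints = [-2.0, -0.5, 1.0]
eta = 0.3
Q = [cum_prob(a, eta) for a in cutpoints]
assert Q == sorted(Q)  # cumulative probabilities increase with the cutpoint
```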
[1] Locke, E.A. (1976) The Nature and Causes of Job Satisfaction. In: Dunnette, M.D., (Ed.), Handbook of Industrial and Organizational Psychology, Rand McNally, Chicago, 1297-1349.
[2] Hashim, R. and Mahmood, R. (2011) What Is the State of Job Satisfaction among Academic Staff at Malaysian Universities? Unitar E-Journal, 7, 15-26.
[3] Kaliski, B.S. (2007) Encyclopedia of Business and Finance. Second Edition, Thompson Gale, Detroit, 446.
[4] Rice, R.W., Near, J.P. and Hunt, R.J. (1979) Unique Variance in Job and Life Satisfaction Associated with Work Related and Extra Work-Related Variables. Human Relations, 32, 605-623. https://doi.org/10.1177/001872677903200706
[5] Kumari, N. (2011) Job Satisfaction of the Employees at the Workplace. European Journal of Business and Management, 3, 4.
[6] Moyes, G.D., Shao, L.P. and Newsome, M. (2008) Comparative Analysis of Employee Job Satisfaction in the Accounting Profession. Journal of Business & Economics Research, 6, 65-81. https://doi.org/10.19030/jber.v6i2.2392
[7] Spector, P.E. (1997) Job Satisfaction: Application, Assessment, Causes and Consequences. Sage Publications, Inc., Thousand Oaks, 3.
[8] Sweney, P.D. and McFarlin, D.B. (2005) Organizational Behavior, Solutions for Management. McGraw-Hill/Irwin, New York, 57.
[9] Scott, K.D. and Taylor, G.S. (2017) An Examination of Conflicting Findings on the Relationship between Job Satisfaction and Absenteeism: A Meta-Analysis. Academy of Management Journal, 28, 599-612.
[10] Sagie, A. (1998) Employee Absenteeism, Organizational Commitment, and Job Satisfaction: Another Look. Journal of Vocational Behavior, 52, 156-171.
[11] Javed, S. and Kamal, A. (2014) Job Satisfaction Factors of PTCL Employees. International Journal of Applied Research, 3, 166-174.
[12] Khatun, R. and Shamsuzzaman, M.D. (2015) Employee’s View on Job Satisfaction: A Study on Garment Industry (AKH Group), Bangladesh. International Journal of Research in Management & Business Studies, 2, 12.
[13] Naing, L., Winn, T. and Rusli, B. (2006) Practical Issues in Calculating the Sample Size for Prevalence Studies. Archives of Orofacial Sciences, 1, 9-14.
[14] Scandura, T.A. and Lankau, M.J. (1997) Relationships of Gender, Family Responsibility and Flexible Work Hours to Organizational Commitment and Job Satisfaction. Journal of Organizational Behavior, 18, 377-391. https://doi.org/10.1002/(SICI)1099-1379(199707)18:4<377::AID-JOB807>3.0.CO;2-1
[15] Clark, A.E. (1997) Job Satisfaction and Gender: Why Are Women So Happy at Work? Labour Economics, 4, 341-372.
[16] Donohue, S.M. and Heywood, J.S. (2004) Job Satisfaction and Gender: An Expanded Specification from the NLSY. International Journal of Manpower, 25, 211-238. https://doi.org/10.1108/01437720410536007
[17] Hair, J.F., Black, W.C., Babin, B.J. and Anderson, R.E. (2010) Multivariate Data Analysis. 7th Edition.
[18] Kaiser, H.F. (1974) An Index of Factor Simplicity. Psychometrika, 39, 31-36.
[19] Yay, M. and Akinci, E.D. (2009) Application of Ordinal Logistic Regression and Artificial Neural Networks in a Study of Student Satisfaction. Cypriot Journal of Educational Sciences, 4, 58-70. |
Clifford's theorem on special divisors - Knowpia
In mathematics, Clifford's theorem on special divisors is a result of William K. Clifford (1878) on algebraic curves, showing the constraints on special linear systems on a curve C.
A divisor on a Riemann surface C is a formal sum
{\displaystyle \textstyle D=\sum _{P}m_{P}P}
of points P on C with integer coefficients. One considers a divisor as a set of constraints on meromorphic functions in the function field of C, defining
{\displaystyle L(D)}
as the vector space of functions having poles only at points of D with positive coefficient, at most as bad as the coefficient indicates, and having zeros at points of D with negative coefficient, with at least that multiplicity. The dimension of
{\displaystyle L(D)}
is finite, and denoted
{\displaystyle \ell (D)}
. The linear system of divisors attached to D is the corresponding projective space of dimension
{\displaystyle \ell (D)-1}
The other significant invariant of D is its degree d, which is the sum of all its coefficients.
A divisor is called special if ℓ(K − D) > 0, where K is the canonical divisor.[1]
Clifford's theorem states that for an effective special divisor D, one has:
{\displaystyle 2(\ell (D)-1)\leq d}
and that equality holds only if D is zero or a canonical divisor, or if C is a hyperelliptic curve and D is linearly equivalent to an integral multiple of a hyperelliptic divisor.
The Clifford index of C is then defined as the minimum of
{\displaystyle d-2(\ell (D)-1)}
taken over all special divisors (except canonical and trivial), and Clifford's theorem states this is non-negative. It can be shown that the Clifford index for a generic curve of genus g is equal to the floor function
{\displaystyle \lfloor {\tfrac {g-1}{2}}\rfloor .}
The Clifford index measures how far the curve is from being hyperelliptic. It may be thought of as a refinement of the gonality: in many cases the Clifford index is equal to the gonality minus 2.[2]
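The generic-value formula above is easy to tabulate; a quick illustrative computation (assuming nothing beyond the floor formula just stated):

```python
def generic_clifford_index(g):
    """Clifford index of a generic curve of genus g: floor((g - 1) / 2)."""
    return (g - 1) // 2

# A generic curve of genus 7 has Clifford index 3, while genus 2 gives 0
# (consistent with every genus-2 curve being hyperelliptic).
print([generic_clifford_index(g) for g in range(2, 9)])  # [0, 1, 1, 2, 2, 3, 3]
```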
Green's conjecture
A conjecture of Mark Green states that the Clifford index for a curve over the complex numbers that is not hyperelliptic should be determined by the extent to which C as canonical curve has linear syzygies. In detail, one defines the invariant a(C) in terms of the minimal free resolution of the homogeneous coordinate ring of C in its canonical embedding, as the largest index i for which the graded Betti number βi, i + 2 is zero. Green and Robert Lazarsfeld showed that a(C) + 1 is a lower bound for the Clifford index, and Green's conjecture states that equality always holds. There are numerous partial results.[3]
Claire Voisin was awarded the Ruth Lyttle Satter Prize in Mathematics for her solution of the generic case of Green's conjecture in two papers.[4][5] The case of Green's conjecture for generic curves had attracted a huge amount of effort by algebraic geometers over twenty years before finally being laid to rest by Voisin.[6] The conjecture for arbitrary curves remains open.
^ Hartshorne p.296
^ Eisenbud (2005) p.178
^ Eisenbud (2005) pp. 183-4.
^ Green's canonical syzygy conjecture for generic curves of odd genus - Claire Voisin
^ Green’s generic syzygy conjecture for curves of even genus lying on a K3 surface - Claire Voisin
^ Satter Prize
Arbarello, Enrico; Cornalba, Maurizio; Griffiths, Phillip A.; Harris, Joe (1985). Geometry of Algebraic Curves Volume I. Grundlehren der mathematischen Wissenschaften 267. ISBN 0-387-90997-4.
Clifford, William K. (1878), "On the Classification of Loci", Philosophical Transactions of the Royal Society of London, The Royal Society, 169: 663–681, doi:10.1098/rstl.1878.0020, ISSN 0080-4614, JSTOR 109316
Eisenbud, David (2005). The Geometry of Syzygies. A second course in commutative algebra and algebraic geometry. Graduate Texts in Mathematics. Vol. 229. New York, NY: Springer-Verlag. ISBN 0-387-22215-4. Zbl 1066.14001.
Fulton, William (1974). Algebraic Curves. Mathematics Lecture Note Series. W.A. Benjamin. p. 212. ISBN 0-8053-3080-1.
Griffiths, Phillip A.; Harris, Joe (1994). Principles of Algebraic Geometry. Wiley Classics Library. Wiley Interscience. p. 251. ISBN 0-471-05059-8.
Hartshorne, Robin (1977). Algebraic Geometry. Graduate Texts in Mathematics. Vol. 52. ISBN 0-387-90244-9.
Iskovskikh, V.A. (2001) [1994], "Clifford theorem", Encyclopedia of Mathematics, EMS Press |
Explain whether the central limit theorem can be applied and assert that the sampling distributions of A and B are approximately normal, if the sample sizes of A and B are large.
Kyran Hudson 2021-03-01 Answered
In general, the central limit theorem applies only to the sample mean.
In this case, A and B are not sample means. Thus, the central limit theorem cannot be applied.
Therefore, one cannot assert that the sampling distributions of A and B are approximately normal, even when the sample sizes of A and B are large.
Which of the following are possible examples of sampling distributions? (Select all that apply.)
mean trout lengths based on samples of size 5
average SAT score of a sample of high school students
average male height based on samples of size 30
heights of college students at a sampled university
all mean trout lengths in a sampled lake
Which of the following is true about the sampling distribution of means?
A. Shape of the sampling distribution of means is always the same shape as the population distribution, no matter what the sample size is.
B. Sampling distributions of means are always nearly normal.
C. Sampling distributions of means get closer to normality as the sample size increases.
D. Sampling distribution of the mean is always right skewed since means cannot be smaller than 0.
Which of the following is true about sampling distributions?
-Shape of the sampling distribution is always the same shape as the population distribution, no matter what the sample size is.
-Sampling distributions are always nearly normal.
-Sampling distribution of the mean is always right skewed since means cannot be smaller than 0.
-Sampling distributions get closer to normality as the sample size increases.
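The key fact behind these questions — that the sampling distribution of the mean tightens and approaches normality as n grows — can be checked by simulation; a minimal sketch using a strongly skewed Exponential(1) population:

```python
import random
import statistics

random.seed(0)

def sample_means(n, reps=2000):
    """Means of `reps` samples of size n drawn from an Exponential(1) population."""
    return [statistics.fmean(random.expovariate(1.0) for _ in range(n))
            for _ in range(reps)]

# The spread of the sampling distribution shrinks like sigma / sqrt(n):
sd_small = statistics.stdev(sample_means(4))
sd_large = statistics.stdev(sample_means(64))
assert sd_large < sd_small  # larger samples give a tighter distribution of means
```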
Which of the following statements about the sampling distribution of the sample mean is incorrect?
(a) The standard deviation of the sampling distribution will decrease as the sample size increases.
(b) The standard deviation of the sampling distribution is a measure of the variability of the sample mean among repeated samples.
(c) The sample mean is an unbiased estimator of the population mean.
(d) The sampling distribution shows how the sample mean will vary in repeated samples.
(e) The sampling distribution shows how the sample was distributed around the sample mean.
Explain the statement ‘The main priority with sampling distributions is to get across the idea that estimates and other statistics change every time we do a new study’.
Young's modulus is a quantitative measure of stiffness of an elastic material. Suppose that for aluminum alloy sheets of a particular type, its mean value and standard deviation are 70 GPa and 1.6 GPa, respectively (values given in the article ''Influence of Material Properties Variability on Springback and Thinning in Sheet Stamping Processes: A Stochastic Analysis'' (Intl. J. of Advanced Manuf. Tech., 2010: 117–134)). If
\overline{X}
is the sample mean Young’s modulus for a random sample of
n=16
sheets, where is the sampling distribution of
\overline{X}
centered, and what is the standard deviation of the
\overline{X}
distribution?
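For this problem the center and spread follow directly from the standard rules E(X̄) = μ and SD(X̄) = σ/√n; a quick check with the stated values (μ = 70 GPa, σ = 1.6 GPa, n = 16):

```python
import math

mu, sigma, n = 70.0, 1.6, 16
center = mu                        # the sampling distribution of X-bar is centered at mu
std_error = sigma / math.sqrt(n)   # standard deviation of X-bar
print(center, std_error)  # 70.0 0.4
```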
Critical Thinking Let x be a random variable representing the amount of sleep each adult in New York City got last night. Consider a sampling distribution of sample means
\stackrel{―}{x}
What value will the standard deviation
{\sigma }_{\stackrel{―}{x}}
of the sampling distribution approach? |
To calculate: The vertices and foci of the conic section:
\frac{{x}^{2}}{9}+\frac{{y}^{2}}{4}=1
Nola Robson
Formula Used: Eccentricity
\left(e\right)=\sqrt{1-\frac{{b}^{2}}{{a}^{2}}}
Calculation: Compare the given equation with the equation of an ellipse
\frac{{x}^{2}}{{a}^{2}}+\frac{{y}^{2}}{{b}^{2}}=1
{a}^{2}=9\text{ }⇒\text{ }a=3\text{ }\text{ }\text{ }\text{ }\text{ }\text{ }{b}^{2}=4\text{ }⇒\text{ }b=2
e=\sqrt{1-\frac{{b}^{2}}{{a}^{2}}}=\sqrt{1-\frac{4}{9}}=\frac{\sqrt{5}}{3}
Vertices are:
\left(±\text{ }a,\text{ }0\right)\text{ }\text{and}\text{ }\left(0,\text{ }±\text{ }b\right)
Now, put the values of a and b to get the vertices of the conic section:
\left(±\text{ }a,\text{ }0\right)\text{ }\text{and}\text{ }\left(0,\text{ }±\text{ }b\right)=\left(±\text{ }3,\text{ }0\right)\text{ }\text{and}\text{ }\left(0,\text{ }±\text{ }2\right)
\text{Foci}=\left(\pm ae,0\right)=\left(\pm \sqrt{5},0\right)
Thus, the vertices and foci of the conic section are:
\left(\pm 3,0\right),\left(0,\pm 2\right)\text{ and }\left(\pm \sqrt{5},0\right).
Coordinate geometry and Trignometry.
Find the condition so that the line
px+qy=r
intersects the ellipse
\frac{{x}^{2}}{{a}^{2}}+\frac{{y}^{2}}{{b}^{2}}=1
in points whose eccentric angles differ by
\frac{\pi }{4}
Though I know how to solve it using parametric coordinates, I was wondering if there's an another approach which is less time consuming.
Compare and contrast the general form of the equations of the four conic sections we have studied.
To solve the given inequality and to write the solution set in interval notation.
Formula for analytical finding ellipse and circle intersection points if exist
I need a formula that will give me all points of random ellipse and circle intersection (ok, not fully random, the center of circle is laying on ellipse curve)
I need step by step solution (algorithm how to find it) if this is possible.
What can you say about the motion of an object with velocity vector perpendicular to position vector? Can you say anything about it at all?
I know that velocity is always perpendicular to the position vector for circular motion and at the endpoints of elliptical motion. Is there a general statement that can be made about the object's motion when the velocity is perpendicular to position?
We need to verify theorem 5 Focus-Directrix definition:
0\text{ }<\text{ }e\text{ }<\text{ }1\text{ }\text{and}\text{ }c=d\left(e\text{ }-\text{ }2\text{ }-\text{ }1\right) |
Frost that has crystallized on a shrub.
Crystallization is the (natural or artificial) process of formation of solid crystals from a homogeneous solution or melt, or more rarely directly from a gas. This process is often used as a technique to separate a solute from a liquid solution, bringing it into a pure crystalline phase.
Crystallization is a valuable process for both research and industrial applications. Some industries are set up for the mass production of crystals, such as the production of edible salt (in powder form), silicon wafers, and sucrose from sugar beet. In addition, chemists and biochemists use the pure crystals of substances to determine their molecular structures, with techniques such as X-ray crystallography and NMR spectroscopy.
For a solute to crystallize out of a solution, the solution must be supersaturated with the solute. This means that the solution has to contain more solute entities (atoms, molecules, or ions) dissolved than it would contain under the equilibrium conditions (of a saturated solution).
The crystallization process consists of two major steps: nucleation and crystal growth. In the nucleation step, the solute molecules dispersed in the solvent start to gather into clusters (on the nanometer scale). When these clusters become stable, they constitute the nuclei. However, when the clusters are not stable, they redissolve. Therefore, the clusters need to reach a critical size to become stable nuclei. The critical size is dictated by the prevailing conditions, such as temperature and supersaturation. It is at the stage of nucleation that the atoms or molecules arrange themselves in a particular periodic manner that defines the crystal structure.[1]
Crystal growth corresponds to growth of the nuclei that succeed in achieving critical cluster size. Nucleation and growth continue to occur simultaneously as long as the solution is supersaturated with the solute. The solution that remains after a crystallization process is called the mother liquor.
Supersaturation is the driving force of the crystallization process—the rates of nucleation and growth are driven by supersaturation within the solution. Depending upon the conditions, either nucleation or growth may predominate over the other, and as a result, crystals with different sizes and shapes are obtained. (The control of crystal size and shape constitutes one of the main challenges in industrial manufacturing, such as for pharmaceuticals.) Once the solution is no longer supersaturated, the solid-liquid system reaches equilibrium and crystallization is complete, unless the operating conditions are modified from equilibrium so that the solution becomes supersaturated again.
Many compounds can crystallize with different crystal structures, a phenomenon called polymorphism. Each crystal polymorph is a different thermodynamic solid state. Crystal polymorphs of the same compound exhibit different physical properties, such as dissolution rate, shape (angles between facets and facet growth rates), melting point, and so on. For this reason, polymorphism is of major importance in the industrial manufacture of crystalline products.
Snowflakes are a well-known example of crystals. Subtle differences in crystal growth conditions result in different geometries of snowflakes.
There are many examples of crystallization in nature, some of which are noted below.
Examples of crystallization on the geological time scale:
Formation of minerals, including gemstones.
Formation of stalactites and stalagmites.
Examples of crystallization on ordinary time scales:
Formation of snowflakes.
Crystallization of honey.
For artificial crystallization of a solute from solution, the conditions must be adjusted such that the solution becomes supersaturated with the solute. This can be achieved by various methods, such as:
cooling the solution;
evaporating part of the solvent;
adding a second solvent that reduces the solubility of the solute (technique known as anti-solvent or drown-out);
changing the pH of the solution; and
performing a chemical reaction.
Artificial crystallization includes two major groups of applications: crystal production and purification.
From the perspective of the materials industry:
To meet the demand for crystals that simulate natural crystals, there are methods that accelerate the rate of production and crystal perfection. They include ionic crystal production and covalent crystal production.
To produce tiny crystals, such as those in powder or even smaller sizes, methods include:
Mass-production by the chemical industry, such as salt-powder production.
Sample production of tiny crystals for the characterization of materials. Controlled recrystallization is an important method to produce unusual crystals that are needed to reveal the molecular structure and nuclear forces within molecules that form crystals. Many techniques, such as X-ray crystallography and NMR spectroscopy, are widely used in chemical and biochemical research to determine the structures of a wide variety of molecules, including inorganic compounds and biological macromolecules.
Examples of the mass production of crystalline materials include:
"Powder salt for food" industry.
Production of sucrose from sugar beet, where the sucrose is crystallized out of aqueous solution.
Well-formed crystals are expected to be pure because each molecule or ion must fit perfectly into the lattice as it leaves the solution. Impurities would normally not fit as well in the lattice, and thus remain in solution preferentially. Hence, molecular recognition is the principle of purification in crystallization. However, there are instances when impurities are incorporated into the lattice, thus decreasing the level of purity of the final crystalline product. Also, in some cases, the solvent may be incorporated into the lattice, forming a solvate. In some cases, the solvent may be 'trapped' in the liquid state within the crystal, forming what are known as inclusions.
Depending on the nature of the crystal system, crystals of a substance consist of only one enantiomer. Louis Pasteur discovered chirality when he was able to separate enantiomeric crystals from racemic tartaric acid.
Equipment used for industrial production
Several types of equipment are used for the production of crystals on an industrial scale. Some examples follow.[2]
1. Tank crystallizer: A hot, saturated solution is placed in an open tank and allowed to cool. Once an adequate level of crystallization is reached, the mother liquor is drained away and the crystals are removed.
2. Scraped surface crystallizer: The solution is placed in an open trough (with a semi-circular bottom) and allowed to cool with the help of a cooling jacket outside the trough. As crystals form on the inner walls of the trough, they are removed by the blades of a slow-speed agitator.
3. Forced circulating liquid evaporator-crystallizer: In this case, the solution is circulated through a heater, and then passed into the vapor space of a chamber where some of the solvent evaporates, leading to supersaturation of the remaining solution. Crystals are formed in another part of the equipment, through secondary nucleation.
Thermodynamics and kinetics of crystallization
Consider the case of molecules within a pure and perfect crystal that is heated by an external source. At some sharply defined temperature, the melting point, the molecules separate from their neighbors and the complicated architecture of the crystal collapses to that of a liquid. Textbook thermodynamics says that melting occurs because the system's gain in entropy (ΔS) by the spatial randomization of its molecules has overcome the loss of enthalpy (ΔH) due to breaking the crystal packing forces:
{\displaystyle T(S_{liquid}-S_{solid})>H_{liquid}-H_{solid}}
{\displaystyle G_{liquid}<G_{solid}}
where T is the temperature (in Kelvin) and G is Gibbs free energy.
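The melting condition above says the transition occurs where ΔG = ΔH − TΔS changes sign, i.e. at T_m = ΔH/ΔS; a hedged numeric sketch (the ΔH and ΔS values below are invented for illustration, not data for any real material):

```python
# Hypothetical enthalpy and entropy of fusion (per mole).
dH = 6000.0   # J/mol, enthalpy gained on melting (illustrative value)
dS = 22.0     # J/(mol*K), entropy gained on melting (illustrative value)

T_m = dH / dS  # temperature where G_liquid = G_solid

def dG(T):
    """Gibbs free energy change of melting at temperature T (kelvin);
    a negative value means the liquid phase is favored."""
    return dH - T * dS

assert dG(T_m - 50) > 0 > dG(T_m + 50)  # solid stable below T_m, liquid above
```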
Conversely, on cooling the melt, at the very same temperature (freezing point), the molecules may be expected to click back into the same crystalline form. The entropy decrease due to the ordering of molecules within the system is overcompensated by the thermal randomization of the surroundings, due to the release of the heat of fusion; the entropy of the universe increases.
But liquids that behave in this way on cooling are the exception rather than the rule. Despite the second principle of thermodynamics, crystallization usually occurs at lower temperatures (supercooling). This indicates that a crystal is more easily destroyed than it is formed. Similarly, it is usually easier to dissolve a perfect crystal in a solvent than to regrow a good crystal from solution. The nucleation and growth of a crystal are under kinetic, rather than thermodynamic, control.
Solvent recrystallisation
1-solvent recrystallization
Hot-filtration, 1-solvent recrystallization
2-solvent recrystallization, with evaporation
X-ray crystals
slow evaporation 1 solvent
slow gas diffusion 2 solvent
slow liquid diffusion
slow liquid diffusion - H Tube
↑ Note that "crystal structure" is a special term that refers to the relative arrangement of the atoms or molecules, not the macroscopic properties (size and shape) of the crystal, although the latter properties are a result of the internal crystal structure.
↑ Crystallization Cheresources.com. Retrieved May 13, 2008.
Geankoplis, C. J. Transport Processes and Separation Process Principles, 4th ed. Prentice-Hall Inc., 2003. ISBN 978-0131013674
Glynn, P. D., and E. J. Reardon. "Solid-solution aqueous-solution equilibria: thermodynamic theory and representation." Amer. J. Sci. 290 (1990): 164-201.
Jones, A. G. Crystallization Process Systems. Oxford: Butterworth-Heinemann, 2002. ISBN 978-0750655200
Mullin, J. W. Crystallization, 4th ed. Oxford: Lutterworth-Heinemann, 2001. ISBN 978-0750648332
Myerson, Allan S. Handbook of Industrial Crystallization, 2nd ed. Boston: Butterworth-Heinemann, 2002. ISBN 978-0750670128
Stanley, S. J. "Tomographic imaging during reactive precipitation: mixing with chemical reaction." Chemical Engineering Science 61(23) (2006): 7850-7863.
Crystallization Sigma-Aldrich.
representation learning | herr strathmann
June 29, 2019 April 19, 2020 ~ karlnapf ~ Leave a comment
\varphi(x)

\log p(x)=\sum_{i=1}^n \alpha_i k(z_i, x)

k(x,y)=\exp\left(-\Vert x-y\Vert ^2 / \sigma \right)

z_i

\sigma

\alpha_i
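The unnormalized log-density model above is straightforward to evaluate; a minimal sketch (the inducing points z_i, coefficients α_i and bandwidth σ below are made-up toy values, not fitted ones):

```python
import math

def gauss_kernel(x, y, sigma):
    """Gaussian kernel k(x, y) = exp(-||x - y||^2 / sigma), scalar inputs."""
    return math.exp(-((x - y) ** 2) / sigma)

def log_p(x, zs, alphas, sigma):
    """log p(x) = sum_i alpha_i * k(z_i, x) (unnormalized)."""
    return sum(a * gauss_kernel(z, x, sigma) for z, a in zip(zs, alphas))

# Toy values: one inducing point at 0 with unit weight.
zs, alphas, sigma = [0.0], [1.0], 1.0
print(log_p(0.0, zs, alphas, sigma))  # 1.0, since the kernel at zero distance is exp(0)
```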
June 7, 2018 June 28, 2019 ~ karlnapf ~ Leave a comment
I got mildly involved in a cool project with the ETHZ group, led by Vincent Fortuin and Matthias Hüser, along with Francesco Locatello, myself, and Gunnar Rätsch. The work is about building a variational autoencoder with a discrete (and thus interpretable) latent space that admits a topological neighbourhood structure through the use of a self-organising map. To represent latent dynamics (the lab is interested in time-series modelling), there is also a built-in Markov transition model. We just put a version on arXiv.
The joint probability distribution of the random variables X and Y is given below:
f\left(x,y\right)=\left\{\begin{array}{ll}cxy& 0<x<2,0<y<x\\ 0& \text{ other }\end{array}
a. Find the value of the constant c.
b.Calculate the covariance and the correlation of the X and Y random variables.
c. Calculate the expected value of the random variable
Z=2X-3Y+2
f\left(x,y\right)=\left\{\begin{array}{ll}cxy& 0<x<2,0<y<x\\ 0& \text{ other }\end{array}
a. value of constant c,
{\int }_{0}^{2}{\int }_{0}^{x}cxydydx=1
{\int }_{0}^{2}cx{\left[\frac{{y}^{2}}{2}\right]}_{0}^{x}dx=1
\frac{c}{2}{\int }_{0}^{2}x×{x}^{2}dx=1
\frac{c}{2}{\left[\frac{{x}^{4}}{4}\right]}_{0}^{2}=1
c=\frac{1}{2}
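The value c = 1/2 obtained above can be sanity-checked numerically; a minimal midpoint-rule sketch of the double integral of cxy over the region 0 < y < x < 2:

```python
def total_mass(c, steps=400):
    """Midpoint-rule approximation of the integral of c*x*y over 0 < y < x < 2."""
    h = 2.0 / steps
    total = 0.0
    for i in range(steps):
        x = (i + 0.5) * h        # midpoint in x
        hy = x / steps           # y-step for the inner integral over (0, x)
        for j in range(steps):
            y = (j + 0.5) * hy   # midpoint in y
            total += c * x * y * h * hy
    return total

assert abs(total_mass(0.5) - 1.0) < 1e-3  # the density integrates to 1 with c = 1/2
```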
we have given a joint pdf,
f\left(x,y\right)=\left\{\begin{array}{ll}cxy& 0<x<2,0<y<x\\ 0& \text{ other }\end{array}
marginal pdf of x,
{f}_{1}\left(x\right)={\int }_{0}^{x}f\left(x,y\right)dy
{f}_{1}\left(x\right)={\int }_{0}^{x}\frac{1}{2}xydy
{f}_{1}\left(x\right)=\frac{1}{2}x×{\left(\frac{{y}^{2}}{2}\right)}_{0}^{x}
{f}_{1}\left(x\right)=\left\{\begin{array}{ll}\frac{1}{4}{x}^{3}& 0<x<2\\ 0& \text{ otherwise }\end{array}
E\left(x\right)={\int }_{0}^{2}x×{f}_{1}\left(x\right)dx
E\left(x\right)={\int }_{0}^{2}x×\frac{{x}^{3}}{4}dx
E\left(x\right)=\frac{1}{4}{\left(\frac{{x}^{5}}{5}\right)}_{0}^{2}
E\left(x\right)=\frac{8}{5}
E\left({x}^{2}\right)={\int }_{0}^{2}{x}^{2}×{f}_{1}\left(x\right)dx
P\left(x\right)=-12{x}^{2}+2136x-41000
x=r\mathrm{cos}\theta \phantom{\rule{1em}{0ex}}\text{and}\phantom{\rule{1em}{0ex}}y=r\mathrm{sin}\theta
\underset{\left(x,y\right)\to \left(0,0\right)}{lim}\frac{{x}^{2}-{y}^{2}}{\sqrt{{x}^{2}+{y}^{2}}}
Prove mathematically (using variables, not numbers) that kx>kz for hydraulic conductivity - topic: Groundwater Hydrology
{K}_{z}=\frac{d}{\frac{{d}_{1}}{{K}_{1}}+\frac{{d}_{2}}{{K}_{2}}+\cdots +\frac{{d}_{n}}{{K}_{n}}}
{K}_{x}=\frac{{K}_{1}{d}_{1}+{K}_{2}{d}_{2}+\cdots +{K}_{n}{d}_{n}}{d}
State and prove the linearity property of the Laplace transform by using the definition of Laplace transform. Give an example by selecting different types of function, from, trigonometric, polynomial, exponential that shows the application of this property while solving the Laplace transform by using direct rules.Does such property hold for the inverse Laplace transform as well? Prove by giving a suitable example.
Write formulas for the indicated partial derivatives for the multivariable function.
f\left(x,y\right)=7{x}^{2}+9xy+4{y}^{3}
\frac{\partial f}{\partial x}
\frac{\partial f}{\partial y}
\frac{\partial f}{\partial x}{\mid }_{y=9}
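For f(x, y) = 7x² + 9xy + 4y³ the requested partials are ∂f/∂x = 14x + 9y and ∂f/∂y = 9x + 12y²; a finite-difference sketch verifying them at a sample point (the point x₀ = 2 is chosen arbitrarily; y₀ = 9 matches the evaluation asked for above):

```python
def f(x, y):
    return 7 * x**2 + 9 * x * y + 4 * y**3

def fx(x, y):  # analytic partial with respect to x
    return 14 * x + 9 * y

def fy(x, y):  # analytic partial with respect to y
    return 9 * x + 12 * y**2

h = 1e-6
x0, y0 = 2.0, 9.0
# Central differences should agree with the analytic partials.
assert abs((f(x0 + h, y0) - f(x0 - h, y0)) / (2 * h) - fx(x0, y0)) < 1e-4
assert abs((f(x0, y0 + h) - f(x0, y0 - h)) / (2 * h) - fy(x0, y0)) < 1e-4
```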
To evaluate: The terminal velocity from a graphical analysis.
Describe in general terms how to solve a system in three variables. |