https://robotology.github.io/robotology-documentation/doc/html/classiCub_1_1iDyn_1_1iDynSensor.html
iCub-main
iCub::iDyn::iDynSensor Class Reference
A class for computing forces and torques in a iDynChain, when a force/torque sensor is placed in the middle of the kinematic chain and it is the only available sensor for measuring forces and moments; the sensor position in the chain must be set; the computation of joint forces, moments and torques is performed by an Inverse Newton-Euler method. More...
#include <iDynInv.h>
Inheritance diagram for iCub::iDyn::iDynSensor:
## Public Member Functions
iDynSensor (iDyn::iDynChain *_c, std::string _info, const NewEulMode _mode=DYNAMIC, unsigned int verb=iCub::skinDynLib::NO_VERBOSE)
Constructor without FT sensor: the sensor must be set with setSensor() More...
iDynSensor (iDyn::iDynChain *_c, unsigned int i, const yarp::sig::Matrix &_H, const yarp::sig::Matrix &_HC, const double _m, const yarp::sig::Matrix &_I, std::string _info, const NewEulMode _mode=DYNAMIC, unsigned int verb=iCub::skinDynLib::NO_VERBOSE)
Constructor with FT sensor. More...
bool setSensorMeasures (const yarp::sig::Vector &F, const yarp::sig::Vector &Mu)
Set the sensor measured force and moment. More...
bool setSensorMeasures (const yarp::sig::Vector &FM)
Set the sensor measured force and moment at once. More...
virtual bool computeFromSensorNewtonEuler (const yarp::sig::Vector &F, const yarp::sig::Vector &Mu)
The main computation method: given the FT sensor measurements, compute forces moments and torques in the iDynChain. More...
virtual bool computeFromSensorNewtonEuler (const yarp::sig::Vector &FMu)
The main computation method: given the FT sensor measurements, compute forces moments and torques in the iDynChain. More...
virtual void computeFromSensorNewtonEuler ()
The main computation method: given the FT sensor measurements, compute forces moments and torques in the iDynChain. More...
virtual void computeWrenchFromSensorNewtonEuler ()
The main computation method: given the FT sensor measurements, compute forces moments and torques in the iDynChain. More...
yarp::sig::Matrix getForces () const
Returns the links forces as a matrix, where the i-th col is the i-th force. More...
yarp::sig::Matrix getMoments () const
Returns the links moments as a matrix, where the i-th col is the i-th moment. More...
yarp::sig::Vector getTorques () const
Returns the links torque as a vector. More...
yarp::sig::Vector getForce (const unsigned int iLink) const
Returns the i-th link force. More...
yarp::sig::Vector getMoment (const unsigned int iLink) const
Returns the i-th link moment. More...
double getTorque (const unsigned int iLink) const
Returns the i-th link torque. More...
yarp::sig::Matrix getForcesNewtonEuler () const
Returns the links forces as a matrix, where the i-th col is the i-th force. More...
yarp::sig::Matrix getMomentsNewtonEuler () const
Returns the links moments as a matrix, where the i-th col is the i-th moment. More...
yarp::sig::Vector getTorquesNewtonEuler () const
Returns the links torque as a vector. More...
virtual yarp::sig::Vector getForceMomentEndEff () const
Returns the end-effector force-moment as a single (6x1) vector. More...
Public Member Functions inherited from iCub::iDyn::iDynInvSensor
iDynInvSensor (iDyn::iDynChain *_c, const std::string &_info, const NewEulMode _mode=DYNAMIC, unsigned int verb=iCub::skinDynLib::NO_VERBOSE)
Constructor without FT sensor: the sensor must be set with setSensor() More...
iDynInvSensor (iDyn::iDynChain *_c, unsigned int i, const yarp::sig::Matrix &_H, const yarp::sig::Matrix &_HC, const double _m, const yarp::sig::Matrix &_I, const std::string &_info, const NewEulMode _mode=DYNAMIC, unsigned int verb=0)
Constructor with FT sensor. More...
bool setSensor (unsigned int i, const yarp::sig::Matrix &_H, const yarp::sig::Matrix &_HC, const double _m, const yarp::sig::Matrix &_I)
Set a new sensor or new sensor properties. More...
bool setSensor (unsigned int i, SensorLinkNewtonEuler *sensor)
void computeSensorForceMoment ()
Compute forces and moments at the sensor frame; this method calls special Forward and Backward methods of SensorLink, using Newton-Euler's formula applied in the link where the sensor is placed on; the link is automatically found, being specified by the index in the chain and the chain itself; The case of a contact (ie external force) acting in the host link is not currently implemented. More...
std::string toString () const
Print some information. More...
yarp::sig::Vector getSensorForce () const
Returns the sensor estimated force. More...
yarp::sig::Vector getSensorMoment () const
Returns the sensor estimated moment. More...
yarp::sig::Vector getSensorForceMoment () const
Get the sensor force and moment in a single (6x1) vector. More...
yarp::sig::Matrix getH () const
Get the sensor roto-translational matrix defining its position/orientation wrt the link. More...
double getMass () const
Get the mass of the portion of link defined between sensor and i-th frame. More...
yarp::sig::Matrix getCOM () const
Get the sensor roto-traslational matrix of the center of mass of the semi-link defined by the sensor in the i-th link. More...
yarp::sig::Matrix getInertia () const
Get the inertia of the portion of link defined between sensor and i-th frame. More...
void setMode (const NewEulMode _mode=DYNAMIC)
void setVerbose (unsigned int verb=iCub::skinDynLib::VERBOSE)
void setInfo (const std::string &_info)
void setSensorInfo (const std::string &_info)
bool setDynamicParameters (const double _m, const yarp::sig::Matrix &_HC, const yarp::sig::Matrix &_I)
Set the dynamic parameters of the the portion of link defined between sensor and i-th frame. More...
std::string getInfo () const
std::string getSensorInfo () const
yarp::sig::Vector getTorques () const
virtual ~iDynInvSensor ()
Protected Attributes inherited from iCub::iDyn::iDynInvSensor
unsigned int lSens
the link where the sensor is attached to More...
SensorLinkNewtonEuler * sens
the sensor More...
iDynChain * chain
the iDynChain describing the robotic chain More...
NewEulMode mode
static/dynamic/etc.. More...
unsigned int verbose
verbosity flag More...
std::string info
a string with useful information if needed More...
## Detailed Description
A class for computing forces and torques in a iDynChain, when a force/torque sensor is placed in the middle of the kinematic chain and it is the only available sensor for measuring forces and moments; the sensor position in the chain must be set; the computation of joint forces, moments and torques is performed by an Inverse Newton-Euler method.
Definition at line 1577 of file iDynInv.h.
## ◆ iDynSensor() [1/2]
iDynSensor::iDynSensor ( iDyn::iDynChain * _c, std::string _info, const NewEulMode _mode = DYNAMIC, unsigned int verb = iCub::skinDynLib::NO_VERBOSE )
Constructor without FT sensor: the sensor must be set with setSensor()
Parameters
_c a pointer to the iDynChain where the sensor is placed on
_info a string with information
_mode the analysis mode (static/dynamic)
verb flag for verbosity
Definition at line 2574 of file iDynInv.cpp.
## ◆ iDynSensor() [2/2]
iCub::iDyn::iDynSensor::iDynSensor ( iDyn::iDynChain * _c, unsigned int i, const yarp::sig::Matrix & _H, const yarp::sig::Matrix & _HC, const double _m, const yarp::sig::Matrix & _I, std::string _info, const NewEulMode _mode = DYNAMIC, unsigned int verb = iCub::skinDynLib::NO_VERBOSE )
Constructor with FT sensor.
Parameters
_c a pointer to the iDynChain where the sensor is placed on
i the i-th link to which the sensor is attached
_H the roto-translational matrix from the reference frame of the i-th link to the sensor
_HC the roto-translational matrix of the center of mass of the semi-link defined by the sensor in the i-th link
_m the mass of the semi-link
_I the inertia of the semi-link
_info a string with information
_mode the analysis mode (static/dynamic)
verb flag for verbosity
## ◆ computeFromSensorNewtonEuler() [1/3]
void iDynSensor::computeFromSensorNewtonEuler ( )
virtual
The main computation method: given the FT sensor measurements, compute forces moments and torques in the iDynChain.
A forward pass of the classical Newton-Euler method is run to retrieve angular and linear accelerations. Then, from sensor to end-effector, the inverse Newton-Euler formula is applied to retrieve joint forces and torques, while from sensor to base the classical backward pass is run. This method only performs the computations: the force and moment measured at the sensor must be set before calling this method using setSensorMeasures()
Definition at line 2611 of file iDynInv.cpp.
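To illustrate the idea of propagating a measured wrench along a chain, here is a minimal sketch. This is my own simplified illustration, not the iDyn implementation: it performs only a static backward pass from the tip toward the base, with all joint axes along z, and ignores gravity, link masses, and the forward kinematic pass.

```python
# Illustrative sketch only (not the iDyn implementation): a static backward
# Newton-Euler pass that propagates a force applied at the chain tip
# toward the base, accumulating moments and joint torques.

def cross(a, b):
    # cross product of two 3-vectors given as tuples
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def backward_wrench_pass(link_vectors, tip_force):
    """link_vectors[i]: vector from joint i to joint i+1 (base frame).
    tip_force: force applied at the end effector.
    Returns per-joint forces, moments, and torques about the z-axes."""
    forces, moments, torques = [], [], []
    mu = (0.0, 0.0, 0.0)                      # moment just past the tip
    for r in reversed(link_vectors):          # walk from tip to base
        c = cross(r, tip_force)
        mu = (mu[0] + c[0], mu[1] + c[1], mu[2] + c[2])
        forces.append(tip_force)
        moments.append(mu)
        torques.append(mu[2])                 # torque about the joint z-axis
    return forces[::-1], moments[::-1], torques[::-1]

# Two horizontal unit links with a 1 N downward force at the tip:
_, _, tau = backward_wrench_pass([(1.0, 0.0, 0.0), (1.0, 0.0, 0.0)],
                                 (0.0, -1.0, 0.0))
```

The moment, and hence the torque, grows toward the base because each link adds its own lever arm, which is exactly why the backward pass must run from sensor to base.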
## ◆ computeFromSensorNewtonEuler() [2/3]
virtual bool iCub::iDyn::iDynSensor::computeFromSensorNewtonEuler ( const yarp::sig::Vector & F, const yarp::sig::Vector & Mu )
virtual
The main computation method: given the FT sensor measurements, compute forces moments and torques in the iDynChain.
A forward pass of the classical Newton-Euler method is run, to retrieve angular and linear accelerations. Then, from sensor to end-effector the inverse Newton-Euler formula is applied to retrieve joint forces and torques, while from sensor to base the classical backward pass is run.
Parameters
F the sensor force (3x1)
Mu the sensor moment (3x1)
Returns
true if the operation is successful, false otherwise (ie wrong vector size)
## ◆ computeFromSensorNewtonEuler() [3/3]
virtual bool iCub::iDyn::iDynSensor::computeFromSensorNewtonEuler ( const yarp::sig::Vector & FMu )
virtual
The main computation method: given the FT sensor measurements, compute forces moments and torques in the iDynChain.
A forward pass of the classical Newton-Euler method is run, to retrieve angular and linear accelerations. Then, from sensor to end-effector the inverse Newton-Euler formula is applied to retrieve joint forces and torques, while from sensor to base the classical backward pass is run.
Parameters
FMu the sensor force and moment (6x1)
Returns
true if the operation is successful, false otherwise (ie wrong vector size)
## ◆ computeWrenchFromSensorNewtonEuler()
void iDynSensor::computeWrenchFromSensorNewtonEuler ( )
virtual
The main computation method: given the FT sensor measurements, compute forces moments and torques in the iDynChain.
The kinematic pass is already performed. Only the wrench computations are performed here: from sensor to end-effector, the inverse Newton-Euler formula is applied to retrieve joint forces and torques, while from sensor to base the classical backward pass is run. This method only performs the computations: the force and moment measured at the sensor must be set before calling this method using setSensorMeasures(). This method is called by iDynSensorNode.
Reimplemented in iCub::iDyn::iDynContactSolver.
Definition at line 2634 of file iDynInv.cpp.
## ◆ getForce()
Vector iDynSensor::getForce ( const unsigned int iLink ) const
Returns
Definition at line 2684 of file iDynInv.cpp.
## ◆ getForceMomentEndEff()
Vector iDynSensor::getForceMomentEndEff ( ) const
virtual
Returns the end-effector force-moment as a single (6x1) vector.
Returns
a 6x1 vector with the end-effector force-moment
Reimplemented in iCub::iDyn::iDynContactSolver.
Definition at line 2696 of file iDynInv.cpp.
## ◆ getForces()
Matrix iDynSensor::getForces ( ) const
Returns the links forces as a matrix, where the i-th col is the i-th force.
Returns
a 3xN matrix with forces, in the form: i-th col = F_i
Definition at line 2678 of file iDynInv.cpp.
## ◆ getForcesNewtonEuler()
Matrix iDynSensor::getForcesNewtonEuler ( ) const
Returns the links forces as a matrix, where the i-th col is the i-th force.
Returns
a 3x(N+2) matrix with forces, in the form: i-th col = F_i
Definition at line 2690 of file iDynInv.cpp.
## ◆ getMoment()
Vector iDynSensor::getMoment ( const unsigned int iLink ) const
Returns
Definition at line 2686 of file iDynInv.cpp.
## ◆ getMoments()
Matrix iDynSensor::getMoments ( ) const
Returns the links moments as a matrix, where the i-th col is the i-th moment.
Returns
a 3xN matrix with moments, in the form: i-th col = Mu_i
Definition at line 2680 of file iDynInv.cpp.
## ◆ getMomentsNewtonEuler()
Matrix iDynSensor::getMomentsNewtonEuler ( ) const
Returns the links moments as a matrix, where the i-th col is the i-th moment.
Returns
a 3x(N+2) matrix with moments, in the form: i-th col = Mu_i
Definition at line 2692 of file iDynInv.cpp.
## ◆ getTorque()
double iDynSensor::getTorque ( const unsigned int iLink ) const
Returns
Definition at line 2688 of file iDynInv.cpp.
## ◆ getTorques()
Vector iDynSensor::getTorques ( ) const
Returns the links torque as a vector.
Returns
a Nx1 vector with the torques
Definition at line 2682 of file iDynInv.cpp.
## ◆ getTorquesNewtonEuler()
Vector iDynSensor::getTorquesNewtonEuler ( ) const
Returns the links torque as a vector.
Returns
a Nx1 vector with the torques
Definition at line 2694 of file iDynInv.cpp.
## ◆ setSensorMeasures() [1/2]
bool iCub::iDyn::iDynSensor::setSensorMeasures ( const yarp::sig::Vector & F, const yarp::sig::Vector & Mu )
Set the sensor measured force and moment.
Parameters
F the sensor force (3x1)
Mu the sensor moment (3x1)
Returns
true if the operation is successful, false otherwise (ie wrong vector size)
## ◆ setSensorMeasures() [2/2]
bool iCub::iDyn::iDynSensor::setSensorMeasures ( const yarp::sig::Vector & FM )
Set the sensor measured force and moment at once.
The measure vector (6x1) is made of 0:2=force 3:5=moment
Parameters
FM the sensor force and moment (6x1)
Returns
true if the operation is successful, false otherwise (ie wrong vector size)
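The 6x1 layout described above (indices 0:2 for force, 3:5 for moment) can be illustrated with a trivial sketch; the numeric values here are hypothetical, not from the library:

```python
# Hypothetical 6x1 sensor measurement: first three entries are the force,
# last three are the moment, matching the layout documented above.
FM = [1.0, 0.0, 0.5, 0.02, -0.1, 0.0]
F, Mu = FM[0:3], FM[3:6]
```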
The documentation for this class was generated from the following files:
• icub-main/src/libraries/iDyn/include/iCub/iDyn/iDynInv.h
• icub-main/src/libraries/iDyn/src/iDynInv.cpp
https://byjus.com/question-answer/three-persons-entered-a-railway-compartment-in-which-5-seats-were-vacant-find-the-number/
Question
Three persons entered a railway compartment in which $$5$$ seats were vacant. Find the number of ways in which they can be seated
A
30
B
45
C
120
D
60
Solution
The correct option is D ($$60$$).
The first person can choose to sit in any of the $$5$$ seats in $$5$$ ways. Then the second person can choose to sit in any of the remaining $$4$$ seats in $$4$$ ways. Lastly, the third person can choose to sit in any of the remaining $$3$$ seats in $$3$$ ways.
So, total number of ways $$= 5 \times 4 \times 3 = 60$$ ways.
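The step-by-step count above is an ordered selection (a permutation) of 3 seats out of 5, which the standard library can confirm directly. A small sketch:

```python
import math

# Number of ordered ways to seat 3 people in 5 distinct seats: 5P3
ways = math.perm(5, 3)

# The same count built step by step, as in the solution above
direct = 5 * 4 * 3
```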
http://www.creativeedge.com/book/graphic-design/0201362996/chapter-2dot-the-structure-of-a-latex-document/ch02lev1sec1 | • Create BookmarkCreate Bookmark
• Create Note or TagCreate Note or Tag
• PrintPrint
### 2.1. The structure of a source file
You can use LaTeX for several purposes, such as writing an article or a letter, or producing overhead slides. Clearly, documents for different purposes may need different logical structures, i.e., different commands and environments. We say that a document belongs to a class of documents having the same general structure (but not necessarily the same typographical appearance). You specify the class to which your document belongs by starting your LaTeX file with a \documentclass command, where the mandatory parameter specifies the name of the document class. The document class defines the available logical commands and environments (for example, \chapter in the report class) as well as a default formatting for those elements. An optional argument allows you to modify the formatting of those elements by supplying a list of class options. For example, 11pt is an option recognized by most document classes that instructs LaTeX to choose eleven point as the basic document type size.
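The structure described above can be made concrete with a minimal source file. This is a generic illustration of the conventions the text describes, not an example taken from the book:

```latex
% Class name is the mandatory argument; 11pt is a class option.
\documentclass[11pt]{report}

\begin{document}
% \chapter is a logical command made available by the report class.
\chapter{Introduction}
Some text.
\end{document}
```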
http://zbmath.org/?q=an:1141.91455 | zbMATH — the first resource for mathematics
Stock exchange fractional dynamics defined as fractional exponential growth driven by (usual) Gaussian white noise. Application to fractional Black-Scholes equations. (English) Zbl 1141.91455
Summary: Stock exchange dynamics of fractional order are usually modeled as a non-random exponential growth process driven by a fractional Brownian motion. Here we propose to use rather a non-random fractional growth driven by a (standard) Brownian motion. The key is the Taylor’s series of fractional order $f(x+h) = E_{\alpha}(h^{\alpha} D_{x}^{\alpha})\, f(x)$, where $E_{\alpha}(\cdot)$ denotes the Mittag-Leffler function, and $D_{x}^{\alpha}$ is the so-called modified Riemann-Liouville fractional derivative which we introduced recently to remove the effects of the non-zero initial value of the function under consideration. Various models of fractional dynamics for stock exchange are proposed, and their solutions are obtained. Mainly, the Itô’s lemma of fractional order is illustrated in the special case of a fractional growth with white noise. Prospects for the Merton’s optimal portfolio are outlined, the path probability density of fractional stock exchange dynamics is obtained, and two fractional Black-Scholes equations are derived. This approach avoids using fractional Brownian motion and thus is of some help to circumvent the mathematical difficulties so involved.
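The Mittag-Leffler function in the summary is defined by the series $E_{\alpha}(z) = \sum_{k \ge 0} z^{k} / \Gamma(\alpha k + 1)$, and it can be evaluated numerically by truncating that series. This is my own illustration, not part of the reviewed paper; a convenient sanity check is that $E_{1}(z) = e^{z}$:

```python
import math

def mittag_leffler(alpha, z, terms=60):
    """Truncated series E_alpha(z) = sum_k z**k / Gamma(alpha*k + 1)."""
    return sum(z ** k / math.gamma(alpha * k + 1) for k in range(terms))

# For alpha = 1 the series reduces to the ordinary exponential.
val = mittag_leffler(1.0, 0.7)
err = abs(val - math.exp(0.7))
```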
MSC:
91B28 Finance etc. (MSC2000)
91B62 Growth models in economics
https://www.biostars.org/p/216192/
When I remove an individual with vcftools --remove-indv, I lose all information in column 8
4.5 years ago
When I try to remove individuals or filter the data, I lose the information in column 8 (INFO). How can I get that information back? Please help me!!
my original file
##fileformat=VCFv4.0
##FORMAT=<ID=GT,Number=1,Type=String,Description="Genotype">
##FORMAT=<ID=AD,Number=.,Type=Integer,Description="Allelic depths for the reference and alternate alleles in the order listed">
##FORMAT=<ID=GQ,Number=1,Type=Float,Description="Genotype Quality">
##FORMAT=<ID=PL,Number=3,Type=Float,Description="Normalized, Phred-scaled likelihoods for AA,AB,BB genotypes where A=ref and B=alt; not applicable if site is not biallelic">
##INFO=<ID=NS,Number=1,Type=Integer,Description="Number of Samples With Data">
##INFO=<ID=DP,Number=1,Type=Integer,Description="Total Depth">
##INFO=<ID=AF,Number=.,Type=Float,Description="Allele Frequency">
#CHROM POS ID REF ALT QUAL FILTER INFO FORMAT H1_P13:C615BACXX:3:250445260 ....
1 36078 S1_36078 T A,C,G 20 PASS NS=276;DP=2250;AF=0,01,0,01,0,01 GT:AD:DP:GQ:PL 0/0:6,0,0,0:6:98:0,18,216 0/0:6,0,0,0:6:98:0,18,216 0/0:2,0,0,0:2:79:0,6,72 0/0:11,0,0,0:11:99:0,33,255 0/0:7,0,0,0:7:99:0,21,252 0/0:5,0,0,0:5:96:0,15,180 0/0:14,0,0,0:14:99:0,42,255 0/1:7,1,0,0:8:93:12,0,228 0/0:12,0,1,0:13:99:0,36,255 0/0:12,0,0,0:12:99:0,36,255 0/0:8,0,0,0:8:99:0,24,255 0/1:6,1,0,0:7:96:15,0,195 ./. 0/0:17,0,0,0:17:99:0,51,255 ./. 0/0:10,0,0,0:10:99:0,30,255 0/0:5,0,0,0:5:96:0,15,180 0/0:7,0,0,0:7:99:0,21,252 0/0:3,0,0,0:3:88:0,9,108 0/0:1,0,0,0:1:66:0,3,36 0/0:7,0,0,0:7:99:0,21,252 0/0:7,0,0,0:7:99:0,21,252 0/0:6,0,0,0:6:98:0,18,216 0/0:7,0,0,0:7:99:0,21,252 0/0:11,0,0,0:11:99:0,33,255 0/0:7,0,0,0:7:99:0,21,252 0/0:2,0,0,0:2:79:0,6,72 0/0:1,0,0,0:1:66:0,3,36 0/0:4,0,0,0:4:94:0,12,144 0/0:5,0,0,0:5:96:0,15,180 0/1:9,1,0,0:10:79:6,0,255 0/0:4,0,0,0:4:94:0,12,144 0/0:9,0,0,0:9:99:0,27,255 0/0:12,0,0,0:12:99:0,36,255 0/0:10,0,0,0:10:99:0,30,255 0/0:6,0,0,0:6:98:0,18,216 0/0:1,0,0,0:1:66:0,3,36 0/0:13,0,0,0:13:99:0,39,255 0/0:7,0,0,0:7:99:0,21,252 0/0:8,0,0,0:8:99:0,24,255 0/0:3,0,0,0:3:88:0,9,108 0/0:12,0,0,0:12:99:0,36,255 0/0:4,0,0,0:4:94:0,12,144 0/0:8,0,0,0:8:99:0,24,255 0/0:3,0,0,0:3:88:0,9,108 0/3:2,0,0,1:3:99:27,0,63 0/0:13,0,0,0:13:99:0,39,255 0/0:3,0,0,0:3:88:0,9,108 0/0:7,0,0,0:7:99:0,21,252 0/0:5,0,0,0:5:96:0,15,180 0/0:1,0,0,0:1:66:0,3,36 0/0:4,0,0,0:4:94:0,12,144 ./. 0/0:3,0,0,0:3:88:0,9,108 ........
modified file
##fileformat=VCFv4.0
##Tassel=<ID=GenotypeTable,Version=5,Description="Reference allele is not known. The major allele was used as reference allele">
##FORMAT=<ID=GT,Number=1,Type=String,Description="Genotype">
##FORMAT=<ID=AD,Number=.,Type=Integer,Description="Allelic depths for the reference and alternate alleles in the order listed">
##FORMAT=<ID=GQ,Number=1,Type=Float,Description="Genotype Quality">
##FORMAT=<ID=PL,Number=.,Type=Float,Description="Normalized, Phred-scaled likelihoods for AA,AB,BB genotypes where A=ref and B=alt; not applicable if site is not biallelic">
##INFO=<ID=NS,Number=1,Type=Integer,Description="Number of Samples With Data">
##INFO=<ID=DP,Number=1,Type=Integer,Description="Total Depth">
##INFO=<ID=AF,Number=.,Type=Float,Description="Allele Frequency">
#CHROM POS ID REF ALT QUAL FILTER INFO FORMAT H33A_P100:C615BACXX:1:250444958 .......
1 42139 S1_42139 C A . PASS . GT 1/0 0/0 1/0 1/0 1/0 1/0 1/0 1/0 1/0 1/0 1/0 1/0 1/0 1/0 1/0 1/0 1/1 1/0 1/0 1/0 1/0 1/0 1/0 1/0 1/0 1/0 1/0 0/0 1/0 1/0 1/1 1/1 1/0 1/0 0/0 1/0 1/0 1/0 0/0 1/0 1/0 1/0 1/0 1/0 1/0 1/0 1/0 1/0 1/0 1/0 1/0 1/0 1/0 1/0 1/0 1/0 1/0 1/0 1/0 1/0 1/0 1/0 0/0 0/0 1/0 1/0 0/0 1/0 0/0 1/0 1/0 1/0 1/0 1/0 0/0 1/0 1/1 1/0 1/0 1/0 0/0 1/0 0/0 1/0 1/0 1/0 1/0 1/0 1/0 1/0 ......
snp vcf vcftools tassel
4.5 years ago
Ram 32k
Please read the manual. You have to use the --keep-info flag. It's literally called "keep INFO" :)
Read "INFO FIELD FILTERING" under https://vcftools.github.io/man_latest.html#SITE%20FILTERING%20OPTIONS
But that only keeps the INFO field; it doesn't recalculate it after the samples are removed.
Might have helped if you'd mentioned that in the original post :) I'm not sure how INFO can be recalculated on the fly by such a simple tool - maybe a GATK Walker (such as SelectVariants) might help.
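For what it's worth, the site-level fields shown in this file (NS, DP, AF) can be recomputed from the remaining genotype columns with a small script. The sketch below is my own illustration for a simple biallelic record, assuming GT (and optionally DP) appear in the FORMAT column; a real pipeline should handle multi-allelic sites and phased genotypes more carefully:

```python
def recompute_info(format_field, sample_fields):
    """Recompute NS, DP and AF for one biallelic VCF record."""
    keys = format_field.split(":")
    gt_i = keys.index("GT")
    dp_i = keys.index("DP") if "DP" in keys else None
    ns = dp = alt = total = 0
    for s in sample_fields:
        parts = s.split(":")
        gt = parts[gt_i].replace("|", "/")
        if gt in ("./.", "."):
            continue                      # missing genotype: skip sample
        ns += 1
        alleles = gt.split("/")
        alt += sum(a == "1" for a in alleles)
        total += len(alleles)
        if dp_i is not None and len(parts) > dp_i:
            dp += int(parts[dp_i])
    af = alt / total if total else 0.0
    return "NS=%d;DP=%d;AF=%.3g" % (ns, dp, af)

# Hypothetical record with FORMAT "GT:DP" and four samples:
info = recompute_info("GT:DP", ["0/1:10", "0/0:8", "./.", "1/1:12"])
```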
https://zbmath.org/?q=an%3A0920.34061 | # zbMATH — the first resource for mathematics
Hamiltonian symmetric groups and multiple periodic solutions to delay differential equations. (English) Zbl 0920.34061
The authors establish the existence of periodic solutions to $$2^{n-1}$$ differential delay equations $x'(t)= \sum^{n-1}_{i= 1} \delta_i f(x(t- r_i)),\tag{1}$ $$r_i>0$$, $$\delta_i= 1$$ or $$\delta_i= -1$$, $$i= 1,2,\dots, n-1$$. It is shown that the periodic solutions to this class of differential delay equations can be created by some Hamiltonian systems which are invariant under action of some compact Lie groups. The Hamiltonian structure and symmetry groups of coupled ordinary differential systems play crucial roles in finding periodic solutions to delay differential equations (1).
##### MSC:
34K13 Periodic solutions to functional-differential equations 34C25 Periodic solutions to ordinary differential equations 37J99 Dynamical aspects of finite-dimensional Hamiltonian and Lagrangian systems
Full Text:
##### References:
[1] Amann, H.; Zehnder, E., Periodic solutions of asymptotically linear Hamiltonian systems, Manuscripta Math., 32, 149-189, (1980) · Zbl 0443.70019
[2] Ge, W., Periodic solutions of differential delay equations with multiple lags, Acta Math. Appl. Sinica, 17, 173-181, (1994)
[3] M. Golubitsky, I. Stewart, D.G. Schaeffer, Singularities and Groups in Bifurcation Theory, vol. II, Springer, New York, 1985. · Zbl 0691.58003
[4] Kaplan, J.L.; Yorke, J.A., Ordinary differential equations which yield periodic solutions of differential-delay equations, J. Math. Anal. Appl., 48, 317-324, (1974) · Zbl 0293.34102
[5] J. Li, X. Zhao, Z. Liu, Theory and Applications of Generalized Hamiltonian Systems, Science Publishing House, Beijing, China, 1994.
[6] J. Li, X.Z. He, Proof and generalization of Kaplan-Yorke’s conjecture on periodic solution of differential delay equations, Preprint. · Zbl 0983.34061
[7] J. Li, X.Z. He, Periodic solutions of some differential delay equations created by high-dimensional Hamiltonian systems, Preprint.
[8] J. Li, X.Z. He, Multiple periodic solutions of differential delay equations created by asymptotically linear Hamiltonian systems, Nonlinear Anal., in press. · Zbl 0918.34066
[9] J. Mawhin, M. Willem, Critical Point Theory and Hamiltonian Systems, Springer, New York, 1989. · Zbl 0676.58017
[10] K.R. Meyer, G.R. Hall, Introduction to Hamiltonian Dynamical Systems and the n-Body Problem, Springer, New York, 1992. · Zbl 0743.70006
[11] P.J. Olver, Applications of Lie Groups to Differential Equations, Springer, New York, 1986.
This reference list is based on information provided by the publisher or from digital mathematics libraries. Its items are heuristically matched to zbMATH identifiers and may contain data conversion errors. It attempts to reflect the references listed in the original paper as accurately as possible without claiming the completeness or perfect precision of the matching.
http://mathhelpforum.com/new-users/204688-maths.html
1. ## Maths
If five twenty-cent coins are placed in a row touching each other, and then a hollow square made with more coins so that each side consists of five of the same coins, how many coins would you need altogether to form the square?
2. ## Re: Maths
What do your sketches tell you?
3. ## Re: Maths
Code:
00000
0...0
0...0
0...0
00000
When doing maths it helps to draw the problem out.
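A quick way to check the count from the sketch (a small Python snippet; the function name is mine, not from the thread):

```python
def hollow_square_coins(n):
    # A hollow n-by-n square is the full square minus its interior,
    # which is the same as 4*n - 4 because the corner coins are shared.
    return n * n - (n - 2) * (n - 2)

# Five coins per side -> 16 coins altogether, matching the diagram above.
print(hollow_square_coins(5))
```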
https://socratic.org/questions/how-do-you-factor-x-3-16
How do you factor x^3 - 16?
Apr 12, 2015
Set this expression to $0$ to determine a root; then use synthetic division to determine the second factor.
If ${x}^{3} - 16 = 0$
then
${x}^{3} = 16$
and
$x = 2 \sqrt[3]{2}$
So
$\left(x - 2 \sqrt[3]{2}\right)$
is a factor of ${x}^{3} - 16$
Use synthetic division to divide $\left(x - 2 \sqrt[3]{2}\right)$
into $\left({x}^{3} - 16\right)$
giving
$\left({x}^{2} + 2 \sqrt[3]{2} x + 4 \sqrt[3]{4}\right)$
So
$\left({x}^{3} - 16\right)$
$= \left(x - 2 \sqrt[3]{2}\right) \left({x}^{2} + 2 \sqrt[3]{2} x + 4 \sqrt[3]{4}\right)$
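To double-check the factorisation numerically (a quick Python sketch, not part of the original answer):

```python
# Verify (x^3 - 16) = (x - r)(x^2 + r*x + r^2) with r = 16^(1/3).
# Note r = 2 * 2^(1/3) and r^2 = 4 * 4^(1/3), matching the factors above.
r = 16 ** (1 / 3)
residual = max(
    abs((x**3 - 16) - (x - r) * (x**2 + r * x + r**2))
    for x in (-2.0, 0.5, 3.7)
)
# residual should be at floating-point noise level
```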
http://www.ck12.org/geometry/Unknown-Measures-of-Similar-Figures/lesson/Unknown-Measures-of-Similar-Figures/r19/
# Unknown Measures of Similar Figures
## Use ratios and proportions to solve for missing lengths in similar figures.
### Vocabulary
| Term | Definition |
| --- | --- |
| Congruent | Congruent figures are identical in size, shape and measure. |
| Corresponding | The corresponding sides between two triangles are sides in the same relative position. |
| Proportion | A proportion is an equation that shows two equivalent ratios. |
| Similar | Two figures are similar if they have the same shape, but not necessarily the same size. |
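Solving for a missing length in similar figures comes down to cross-multiplying a proportion. A small illustration in Python (the numbers are hypothetical):

```python
def missing_side(a, b, c):
    """Solve the proportion a/b = c/x for x by cross-multiplication:
    a * x = b * c, so x = b * c / a."""
    return b * c / a

# A 2-by-3 figure scaled so the first side becomes 4:
# 2/4 = 3/x gives x = 6.
print(missing_side(2, 4, 3))
```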
http://ronaldconnelly.blogspot.com/2015/09/non-increasing-and-decreasing-sequence.html | ## Thursday, September 10, 2015
### non increasing and decreasing sequence
Let $\{a_n\}$ be a nonnegative, non-increasing sequence converging to $a \ge 0$. Can we say that $a_n \ge a$ for all $n \in$ …
https://docs.manticoresearch.com/3.1.0/html/getting-started/indexes.html
# A guide on indexes
The Manticore Search daemon can serve multiple data collections, called indexes.
Manticore Search supports two storage index types:
• plain (also called offline or disk) index. Data is indexed once at creation; it supports online rebuilding and online updates of non-text attributes
• RealTime index. Similar to a database table, online updates are possible at any given time
In addition, a special index based on RealTime type, called percolate, can be used to store Percolate Queries.
In the current version, indexes use a schema like a normal database table. The schema can have three main types of columns:
• the first column is always an unsigned 64-bit non-zero number, called id. Unlike in a database, there is no auto-increment mechanism, so you need to make sure the document ids are unique
• fulltext fields - they contain indexed content. There can be multiple fulltext fields per index. Fulltext searches can be made on all fields or selectively. Currently the original text is not stored, so if it's required to show the content in search results, a trip to the original source must be made using the ids (or another identifier) obtained from the search
• attributes - their values are stored and are not used in fulltext matching. Instead they can be used for regular filtering, grouping, sorting. They can be also used in expressions of score ranking.
Field and attribute names must start with a letter and can contain letters, digits and underscore.
The following types can be stored in attributes:
• unsigned 32 bit and signed 64 bit integers
• 32 bit single precision floats
• UNIX timestamps
• booleans
• strings (they can be used just for comparison, grouping or sorting by)
• JSON objects
• multi-value attribute list of unsigned 32-bit integers
Manticore Search supports a storeless index type called distributed which allows searching over multiple indexes. The connected indexes can be local or remote. Distributed indexes allow spreading big data over multiple machines or building high availability setups. As searching over an index is single-threaded, local distributed indexes can be used to make use of multiple CPU cores.
## Plain indexes
Except numeric attributes (including MVAs), the rest of the data in a plain index is immutable. If you need to update or add new records, you need to rebuild the index. While the index is being rebuilt, the existing version is still available to serve requests. When the new version is ready, a process called rotation is performed which puts it online and discards the old one.
The indexing performance depends on several factors:
• how fast the source can provide the data
• tokenization settings
• hardware resources (CPU power, storage speed)
In the simplest usage scenario, we would use a single plain index which we rebuild from time to time.
This implies:
• the index is not as fresh as the data from the source
• indexing duration grows with the data
If we want the data to be fresher, we need to shorten the indexing interval. If indexing takes too long, it can even overlap with the next scheduled indexing, which is a major problem. However, Manticore Search can perform a search on multiple indexes. From this, an idea was born to use a secondary index that captures only the most recent updates.
This index will be a lot smaller and we will index it more frequently. From time to time, as this delta index will grow, we will want to “reset” it.
This can be done either by reindexing the main index or by merging the delta into the main. The main+delta index schema is detailed at Delta index updates.
As the engine can't globally enforce uniqueness of document ids, an important thing to consider is whether the delta index could contain updates to records already indexed in the main index.
For this, there is an option that allows defining a list of document ids which are suppressed by the delta index. For more details, check sql_query_killlist.
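A minimal sketch of the main+delta pattern in configuration form (the table, column and `updated` condition below are hypothetical; only the directive names come from the engine):

```
source main
{
    sql_query = SELECT id, title, content FROM documents
}

source delta : main
{
    # only the rows changed since the last rebuild of main (condition is illustrative)
    sql_query          = SELECT id, title, content FROM documents WHERE updated = 1
    # ids that the delta overrides; they are suppressed from main at search time
    sql_query_killlist = SELECT id FROM documents WHERE updated = 1
}
```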
## Real-Time indexes
RealTime indexes allow online updates, but updating fulltext data and non-numeric attributes requires a full row replace.
The RealTime index starts empty and you can add, replace, update or delete data in the same fashion as for a database table. The updates are first held in a memory zone, defined by rt_mem_limit. When this gets filled, it is dumped as a disk chunk - whose structure is similar to that of a plain index. As the number of disk chunks increases, search performance decreases, as searching is done sequentially over the chunks. To avoid that, there is a command that can merge the disk chunks into a single one - OPTIMIZE INDEX syntax.
Populating a RealTime index can be done in two ways: firing INSERTs or converting a plain index to RealTime. In the case of INSERTs, using a single worker (a script or code) that inserts one record at a time can be slow. You can speed this up by batching many rows into one statement and by using multiple workers that perform the inserts. Parallel inserts are faster but also use more CPU. The size of the data buffer memory (which we call the RAM chunk) also influences the insertion speed.
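To illustrate the batching point, a single multi-row statement replaces several one-row round-trips (the index and column names below are made up for the example):

```sql
INSERT INTO rt_posts (id, title, gid) VALUES
    (1, 'first post',  100),
    (2, 'second post', 100),
    (3, 'third post',  200);
```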
## Local distributed indexes
A distributed index in Manticore Search doesn’t hold any data. Instead it acts as a ‘master node’ to fire the demanded query on other indexes and provide merged results from the responses it receives from the ‘node’ indexes. A distributed index can connect to local indexes or indexes located on other servers. In our case, a distributed index would look like:
index_dist {
type = distributed
local = index1
local = index2
...
}
The last step to enable multi-core searches is to define dist_threads in the searchd section. dist_threads tells the engine the maximum number of threads it can use for a distributed index.
## Remote distributed indexes and high availability
index mydist {
type = distributed
agent = box1:9312:shard1
agent = box2:9312:shard2
agent = box3:9312:shard3
agent = box4:9312:shard4
}
Here we have split the data over 4 servers, each serving one of the shards. If one of the servers fails, our distributed index will still work, but we would miss the results from the failed shard.
index mydist {
type = distributed
agent = box1:9312|box5:9312:shard1
agent = box2:9312|box6:9312:shard2
agent = box3:9312|box7:9312:shard3
agent = box4:9312|box8:9312:shard4
}
Now we added mirrors, each shard is found on 2 servers. By default, the master (the searchd instance with the distributed index) will pick randomly one of the mirrors.
The mode used for picking mirrors can be set with ha_strategy. In addition to random, another simple method is to do a round-robin selection ( ha_strategy= roundrobin).
The more interesting strategies are the latency-weighted probability ones. noerrors and nodeads not only take out mirrors with issues, but also monitor the response times and do balancing. If a mirror responds slower (for example due to some operations running on it), it will receive fewer requests. When the mirror recovers and provides better times, it will get more requests.
## Replication and cluster
To use replication, define one listen directive for the SphinxAPI protocol and one listen directive with a replication address and port range in the config. Define a data_dir folder for incoming indexes.
searchd {
listen = 9312
listen = 192.168.1.101:9360-9370:replication
data_dir = /var/lib/manticore/
...
}
Create a cluster (via SphinxQL) on the daemon that has the local indexes that need to be replicated:
CREATE CLUSTER posts
Add these local indexes to the cluster:
ALTER CLUSTER posts ADD pq_title
On the other nodes, join the cluster:
JOIN CLUSTER posts AT '192.168.1.101:9312'
When running queries, prepend the index name with the cluster name (posts:).
INSERT INTO posts:pq_title VALUES ( 3, 'test me' )
http://hal.in2p3.fr/in2p3-01174043
# 3d $\mathcal{N}=1$ effective supergravity and F-theory from M-theory on fourfolds
Abstract : We consider 3d N=1 M-theory compactifications on Calabi-Yau fourfolds, and the effective 3d theory of light modes obtained by reduction from eleven dimensions. We study in detail the mass spectrum at the vacuum and, by decoupling the massive multiplets, we derive the effective 3d N=1 theory in the large-volume limit up to quartic fermion terms. We show that in general it is an ungauged N=1 supergravity of the form expected from 3d supersymmetry. In particular the massless bosonic fields consist of the volume modulus and the axions originating from the eleven-dimensional three-form, while the moduli-space metric is locally isometric to hyperbolic space. We consider the F-theory interpretation of the 3d N=1 M-theory vacua in the light of the F-theory effective action approach. We show that these vacua generally have F-theory duals with circle fluxes, thus breaking 4d Poincaré invariance.
Document type :
Journal articles
Contributor : Sylvie Flores
Submitted on : Wednesday, July 8, 2015 - 10:42:00 AM
Last modification on : Friday, September 10, 2021 - 1:50:14 PM
### Citation
D. Prins, D. Tsimpis. 3d $\mathcal{N}=1$ effective supergravity and F-theory from M-theory on fourfolds. Journal of High Energy Physics, Springer Verlag (Germany), 2015, 2015 (9), pp.107. ⟨10.1007/JHEP09(2015)107⟩. ⟨in2p3-01174043⟩
https://www.simis.io/docs/aerodynamics-one-airfoil-blade-fixed
## 1 Test description
This test uses a simple blade with few aerodynamical blade stations in steady conditions and compares the aerodynamical loads computed by Ashes to an analytical solution.
The following load cases are tested
## 2 Model
The model used for this test is shown in the figure below:
The chord length across the blade is $$c = 1\text{ m}$$, the blade length is $$L = 5\text{ m}$$, and the hub radius is $$r_h = 0.5\text{ m}$$.
The rotor is fixed, so the blade is not rotating. For the models that do not use the Stiff rotor approximation, the blade and hub connections are infinitely stiff, therefore there are no deflections.
The blade used for this model has three Blade aerodynamical stations. The innermost station is at the root and has a cylindrical airfoil with no drag and no lift. This station will therefore not produce any aerodynamic loads. The two other blade aerodynamical stations have the NACA64-618 airfoil, whose polar can be exported from Ashes and is shown in the figure below:
The table below gives a summary of the relevant blade characteristics:
| Blade aerodynamical station | Distance to blade root | Influence length | Airfoil |
| --- | --- | --- | --- |
| 1 | $$r_1 = 0$$ | $$L_{I1} = 1.25\text{ m}$$ | Cylinder with no lift and drag |
| 2 | $$r_2 = 2.5\text{ m}$$ | $$L_{I2} = 2.5\text{ m}$$ | NACA64-618 |
| 3 | $$r_3 = 5\text{ m}$$ | $$L_{I3} = 1.25\text{ m}$$ | NACA64-618 |
The air density is $$\rho = 1.225\text{ kg}\cdot\text{m}^{-3}$$. No gravity forces are applied, and tip and hub corrections are set to 0.
## 3 Analytical solution
The wind speed is denoted $$V$$. For the current analysis we use $$V = 10\text{ m}\cdot\text{s}^{-1}$$.
Since there is no rotational velocity, the angle of attack at all blade aerodynamical stations is 90 degrees. For the two outermost blade aerodynamical stations, this gives a lift and drag coefficient of
$$C_L = 0.053$$ and $$C_D = 1.4565$$ (the innermost blade aerodynamical station experiences no lift or drag, and is therefore not considered in the rest of the analysis).
For each blade aerodynamical station $$i$$, the distributed lift and drag forces are, respectively: $$F_{L,i}=\frac{1}{2}\rho c C_L V^2$$ and $$F_{D,i}=\frac{1}{2}\rho c C_D V^2$$.
Note: this is only valid because the rotational speed is zero, which implies that there is no induced velocities
For each blade aerodynamical station, since the blade is not rotating, the lift acts in the direction of the torque and the drag acts in the direction of the thrust.
The aerodynamic thrust for the whole rotor is thus
$$F_T = \sum_{i=2}^3F_{D,i}\cdot L_{Ii}=334.5\text{ N}$$
and the aerodynamic torque for the whole rotor is
$$T=\sum_{i=2}^3 F_{L,i}\cdot L_{Ii}\cdot (r_i+r_h)=46.67\text{ Nm}$$
These two outputs can be found in the Rotor sensor.
The next three outputs are part of the Blade [Time] sensor. The root force is the sum of the drag and the lift forces from both stations, which can be expressed as
$$F_r = \sum_{i=2}^3\sqrt{(F_{D,i}\cdot L_{I,i})^2+(F_{L,i}\cdot L_{I,i})^2}=334.8\text{ N}$$
. The in-plane bending moment is
$$M_{ip}=\sum_{i=2}^3 F_{L,i}\cdot L_{Ii}\cdot r_i=40.58\text{ Nm}$$
and the out-of-plane bending moment is
$$M_{oop}=\sum_{i=2}^3F_{D,i}\cdot L_{Ii}\cdot r_i=1115\text{ Nm}$$
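The stiff-blade values above can be reproduced with a few lines of Python (a sketch following the formulas above, not Ashes code):

```python
rho, chord, V = 1.225, 1.0, 10.0      # air density, chord length, wind speed
C_L, C_D = 0.053, 1.4565              # coefficients at 90 deg angle of attack
r_h = 0.5                             # hub radius
L_I = {2: 2.5, 3: 1.25}               # influence lengths of stations 2 and 3, m
r   = {2: 2.5, 3: 5.0}                # distances of stations to blade root, m

f_L = 0.5 * rho * chord * C_L * V**2  # distributed lift, N/m
f_D = 0.5 * rho * chord * C_D * V**2  # distributed drag, N/m

F_T   = sum(f_D * L_I[i] for i in (2, 3))                 # thrust, ~334.5 N
T     = sum(f_L * L_I[i] * (r[i] + r_h) for i in (2, 3))  # torque, ~46.67 Nm
M_ip  = sum(f_L * L_I[i] * r[i] for i in (2, 3))          # in-plane, ~40.58 Nm
M_oop = sum(f_D * L_I[i] * r[i] for i in (2, 3))          # out-of-plane, ~1115 Nm
```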
Flexible blades, as opposed to stiff blades, are modelled structurally by dividing the blade into a number of finite elements, each with its own structural properties (which can be different for each element or the same across the blade span, as is the case in this model). In the present test, flexible blades are modelled with elements that have a very large stiffness, which means that they will experience no deflection.
Because Flexible blades are divided into structural elements, the aerodynamic loads have to be lumped and applied to the nodes at both ends of each element. The lumping is done as follows: if part of an element lies within the Influence length of a Blade aerodynamical station, its nodes will experience aerodynamic loads. Each of the two loads (one on each node) will be equal to the linear aerodynamic load multiplied by the portion of the element that lies within the Influence length, divided by two.
Below we show several examples of how the derivation is carried out for the present test.
A two-element blade will have three structural nodes, each located at a Blade aerodynamical station. The Influence length of each Blade aerodynamical station will vary depending on its position along the blade (a detailed explanation of how the influence length is obtained can be found in the Influence length document). This is illustrated in the figure below:
Note: there are two more nodes in this model, that belong to the Blade connection. These have been hidden in the picture of the left
To exemplify how the loads apply on each node, we first focus on the drag force and how it contributes to the Aerodynamic thrust.
As explained above, the drag force from station 3 will be $$F_{D,3}\cdot L_{I,3}=111.5\text{ N}$$. The influence length of this station is entirely within Element 2, so this load will only be applied to Element 2.
The drag force from station 2 will be
$$F_{D,2}\cdot L_{i,2}=223.0\text{ N}$$
Half of its influence length is on Element 2 and the other half on Element 1, which means that half of this load will be applied on Element 2 and the other half on Element 1. This means that Element 2 will experience a total load (from two stations) of
$$111.5+223.0/2=223.0\text{ N}$$
In Ashes, the loads experienced by an element are divided by two and lumped to its nodes. This means that the nodes of Element 2 (namely Node 1 and Node 2 from Blade 1) will each experience a load (more precisely, a BEM nodal load) in the direction of the drag of $$111.5\text{ N}$$. This is illustrated in the figure below:
Note: in the above picture, not all possible fields from the Loads on node sensor are displayed. You can customise which fields are displayed in the Customise sensor fields document.
The figure above shows that Node 1 experiences two BEM forces (namely, Blade nodal BEM force 4 and Blade nodal BEM force 1). This is because this node belongs both to Element 1 and Element 2 and will therefore experience loads originating from both elements. It is possible to see where a load originates from by right clicking on the relevant node and then selecting the load. This is illustrated in the figure below:
Similarly, since half of the influence length of station 2 is on Element 1, half of the drag load from station 2 will be applied to Element 1. This will be the only drag load applied to Element 1 since station 1 does not experience aerodynamic loads (because its lift and drag coefficients are both 0), so it will have a value of
$$F_{D,2}\cdot (L_{i,2}/2)=111.5\text{ N}$$
. The load on Element 1 will be split into two equal components applied to both of its nodes, of value
$$55.75\text{ N}$$
. This is illustrated in the figure below:
The sum of all the BEM forces in the drag direction amounts to $$334.5\text{ N}$$, which is the value of the thrust force on the rotor.
It is important to note that the way the aerodynamical loads are lumped onto structural nodes has implications on the bending moments at the blade root. For example, part of the drag from station 2 is applied to the node at the root of the blade (or more precisely, Blade nodal BEM force (0) is applied on Node 2 - Blade connection 1). This means that it will not contribute to the Out-of-plane bending moment. If we use the notation
$$BEM_{i,y}$$
to mean the y-component Blade nodal force (i) (i.e. the component in the drag direction) and by noticing that both elements have a length
$$E_L = 2.5\text{ m}$$
, we find that the Out-of plane bending moment will be
$$M_{oop,f}=BEM_{1,y}\cdot E_L+BEM_{4,y}\cdot E_L+BEM_{5,y}\cdot 2E_L=976\text{ Nm}$$
As expected, this value is lower than the Out-of-plane bending moment with stiff blades, since part of the aerodynamic loads are applied at the root.
This lumping will also influence the aerodynamic torque. By following the same procedure as above including the hub radius, we obtain an aerodynamic torque in the Flexible blades case of
$$T = BEM_{0,x}\cdot r_h+BEM_{1,x}\cdot (E_L+r_h)+BEM_{4,x}\cdot (E_L+r_h)+BEM_{5,x}\cdot (2E_L+r_h)=41.59\text{ Nm}$$
Again, this is lower than in the Stiff blade case.
Note: to simplify the equations, we have omitted the negative signs in front of the components of the Blade nodal forces. A detailed explanation of the signs for the bending moments can be found in the Out-of-plane and In-plane documents. The aerodynamic torque is positive when it spins the rotor clockwise.
We now reproduce the derivation explained in the previous paragraph for a blade divided into three elements, shown in the image below:
As derived above, the drag loads from the three aerodynamical stations are:
- for Station 3, $$F_{D,3}\cdot L_{I,3}=111.5\text{ N}$$
- for Station 2, $$F_{D,2}\cdot L_{I,2} = 223.0\text{ N}$$
- for Station 1, $$0\text{ N}$$
Element 3 covers the whole Influence length 3 and one sixth of Influence length 2, therefore it will experience a drag force of
$$111.5+223.0/6=148.7\text{ N}$$
. This force is then split in two and the resulting force of
$$74.33\text{ N}$$
is applied to both nodes of element 3 (namely Node 2 and Node 3 on Blade 1).
Element 2 covers two thirds of Influence length 2, therefore it will experience a drag force of
$$223\cdot2/3=148.7\text{ N}$$
. Its nodes (namely Node 2 and Node 1 from Blade 1) will thus experience a force of
$$74.33\text{ N}$$
.
Element 1 covers one sixth of influence length 2 and the whole Influence length 1, but the latter does not produce any aerodynamic load. It will therefore experience a drag force of
$$223.0/6 = 37.17\text{ N}$$
and its nodes (namely Node 1 from blade 1 and Node 2 from Blade connection) will experience a force of
$$18.59\text{ N}$$
.
If we now calculate the out-of-plane bending moment as we did for the two-element blade case, we find that
$$M_{oop,3}=1022\text{ Nm}$$
.
Similarly, the aerodynamic torque is now
$$T=43.28\text{ Nm}$$
.
These values are again lower than the case of stiff blades, but are larger than the case with two elements. This is expected since the load applied at the root of the blade (Node 2 of Blade connection) is lower than in the two-element case.
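The lumping rule can be written down generically for a blade split into n equal elements. The sketch below (Python, not Ashes code; it assumes the station influence intervals [1.25, 3.75] m and [3.75, 5] m derived above) reproduces the two-, three- and 20-element moments and torques:

```python
RHO, CHORD, V = 1.225, 1.0, 10.0
C_L, C_D = 0.053, 1.4565
BLADE_LEN, R_HUB = 5.0, 0.5
# Influence intervals (measured from blade root) of the two lifting stations
INTERVALS = [(1.25, 3.75), (3.75, 5.0)]

f_lift = 0.5 * RHO * CHORD * C_L * V**2   # distributed lift, N/m
f_drag = 0.5 * RHO * CHORD * C_D * V**2   # distributed drag, N/m

def nodal_loads(n_elems, line_load):
    """Lump the distributed load onto the n_elems + 1 structural nodes:
    each element takes the load over the part of the influence intervals
    it covers, split equally between its two end nodes."""
    e = BLADE_LEN / n_elems
    nodes = [0.0] * (n_elems + 1)
    for i in range(n_elems):
        lo, hi = i * e, (i + 1) * e
        covered = sum(max(0.0, min(hi, b) - max(lo, a)) for a, b in INTERVALS)
        nodes[i]     += covered * line_load / 2
        nodes[i + 1] += covered * line_load / 2
    return nodes

def m_oop(n_elems):
    """Out-of-plane root bending moment (lever arm = distance to root)."""
    e = BLADE_LEN / n_elems
    return sum(f * i * e for i, f in enumerate(nodal_loads(n_elems, f_drag)))

def torque(n_elems):
    """Aerodynamic torque (lever arm includes the hub radius)."""
    e = BLADE_LEN / n_elems
    return sum(f * (i * e + R_HUB) for i, f in enumerate(nodal_loads(n_elems, f_lift)))
```

With 20 elements the element boundaries line up with the influence-interval edges, so the lumped moments match the cantilever-beam analytical solution of the next section.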
#### 3.2.3 Analytical solution
If we consider the case of a cantilever beam with a partial distributed load, we can find the analytical solution for the bending moment at the root of the beam. This case is illustrated in the figure below:
The bending moment at the root of the beam is
$$M = q\frac{\left(l^2-d^2\right)}{2}$$
Applying this formula to the blade bending moments and aerodynamic torque, we find
$$M_{oop,an}=F_{D}\frac{L^2-L_{I,1}^2}{2}=1045\text{ Nm}$$
$$M_{ip,an}=F_{L}\frac{L^2-L_{I,1}^2}{2}=38.04\text{ Nm}$$
$$T_{an}=F_{L}\frac{(L+r_h)^2-(L_{I,1}+r_h)^2}{2}=44.13\text{ Nm}$$
A blade with an infinite number of elements would reproduce these results exactly. In practice, we expect that 20 elements should be enough to get within 0.05% of the analytical solution.
### 3.3 Summary of expected results
The following table sums up the results expected in Ashes, as calculated above:
| Test name | Aerodynamic thrust $$[\text{N}]$$ | Aerodynamic torque $$[\text{Nm}]$$ | Root force $$[\text{N}]$$ | In-plane bending moment $$[\text{Nm}]$$ | Out-of-plane bending moment $$[\text{Nm}]$$ |
| --- | --- | --- | --- | --- | --- |
| Stiff blade | 334.5 | 46.67 | 334.8 | 40.58 | 1115 |
| Two-element blade | 334.5 | 41.59 | 334.8 | 35.50 | 975.4 |
| Three-element blade | 334.5 | 43.28 | 334.8 | 37.20 | 1022 |
| 20-element blade | 334.5 | 44.13 | 334.8 | 38.04 | 1045 |
## 4 Results
A simulation of ten seconds is run. The test is considered passed if the last two seconds of the results produced by Ashes lie within 0.05% of the expected solution.
The report with the results can be found here:
https://docs.mosek.com/latest/toolbox/design.html
# 5 Design Overview
## 5.1 Modeling
Optimization Toolbox for MATLAB is an interface for specifying optimization problems directly in matrix form. It means that an optimization problem such as:
$\begin{split}\begin{array}{ll} \minimize & c^Tx \\ \st & Ax\leq b,\\ & x\in \mathcal{K} \end{array}\end{split}$
or
$\begin{split}\begin{array}{ll} \minimize & c^Tx \\ \st & Ax\leq b,\\ & Fx+g\in \mathcal{K} \end{array}\end{split}$
is specified by describing the matrices $$A$$, $$F$$, vectors $$b,c,g$$ and a list of cones $$\mathcal{K}$$ directly.
The main characteristics of this interface are:
• Simplicity: once the problem data is assembled in matrix form, it is straightforward to input it into the optimizer.
• Exploiting sparsity: data is entered in sparse format, enabling huge, sparse problems to be defined and solved efficiently.
• Efficiency: the API incurs almost no overhead between the user’s representation of the problem and MOSEK’s internal one.
Optimization Toolbox for MATLAB does not aid with modeling. It is the user’s responsibility to express the problem in MOSEK’s standard form, introducing, if necessary, auxiliary variables and constraints. See Sec. 12 (Problem Formulation and Solutions) for the precise formulations of problems MOSEK solves.
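For instance (an illustrative reformulation, not taken from this manual): minimizing $\sum_i |x_i|$ subject to $Ax\leq b$ is not directly in the standard form above, but introducing auxiliary variables $t_i$ makes it so:

$\begin{split}\begin{array}{ll} \mbox{minimize} & \sum_i t_i \\ \mbox{subject to} & Ax\leq b,\\ & -t_i \leq x_i \leq t_i. \end{array}\end{split}$

At an optimum $t_i=|x_i|$, so the reformulated linear problem is equivalent to the original one.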
## 5.2 “Hello World!” in MOSEK¶
Here we present the most basic workflow pattern when using Optimization Toolbox for MATLAB.
Create a prob structure
Optimization problems using Optimization Toolbox for MATLAB are specified using a prob structure that describes the numerical data of the problem. In most cases it consists of matrices of floating-point numbers.
Retrieving the solutions
When the problem is set up, the optimizer is invoked with the call to mosekopt. The call will return a response and a structure containing the solution to all variables. See further details in Sec. 7 (Solver Interaction Tutorials).
We refer also to Sec. 7 (Solver Interaction Tutorials) for information about more advanced mechanisms of interacting with the solver.
Source code example
Below is the most basic code sample that defines and solves a trivial optimization problem
$\begin{split}\begin{array}{ll} \mbox{minimize} & x \\ \mbox{subject to} & 2.0 \leq x \leq 3.0. \\ \end{array}\end{split}$
For simplicity the example does not contain any error or status checks.
Listing 5.1 “Hello World!” in MOSEK
%%
% Copyright: Copyright (c) MOSEK ApS, Denmark. All rights reserved.
%
% File: helloworld.m
%
% The most basic example of how to get started with MOSEK.
prob.a = sparse(0,1); % 0 linear constraints, 1 variable
prob.c = [1.0]'; % Only objective coefficient
prob.blx= [2.0]'; % Lower bound(s) on variable(s)
prob.bux= [3.0]'; % Upper bound(s) on variable(s)
% Optimize
[r, res] = mosekopt('minimize', prob);
% Print answer
res.sol.itr.xx | 2022-11-27 08:14:15 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.26217079162597656, "perplexity": 3025.6450452217496}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710218.49/warc/CC-MAIN-20221127073607-20221127103607-00479.warc.gz"} |
http://math.stackexchange.com/questions/295923/let-set-d-axby-mid-x-y-text-are-integers-and-axby0-prove-that | # Let set $D =\{(ax+by)\mid x,y \text{ are integers and } ax+by>0\}$ . Prove that $D$ is not empty.
Let set $D = \{(ax+by)\mid x,y \text{ are integers and } ax+by>0\}$ . Prove that $D$ is not empty.
I'm trying to prove the extended Euclidean algorithm without using the algorithm.
$x=a$, $y=b$, but please edit the question into the body, not just the title. – Gerry Myerson Feb 6 '13 at 0:57
What if $a=b=0$? – Robert Israel Feb 6 '13 at 1:05
Firstly, note that if $a = b = 0$ then this does not hold. So let us now assume without loss of generality that $a \neq 0$. Then we have two cases: either $a$ is positive or $a$ is negative. If $a$ is positive, then $1\cdot a + 0 \cdot b > 0$ and is in $D$. If $a$ is negative that $-1 \cdot a + 0 \cdot b > 0$ and is in $D$. Regardless, we have a proof. | 2015-09-03 05:03:44 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9658850431442261, "perplexity": 83.35815953613722}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440645298781.72/warc/CC-MAIN-20150827031458-00038-ip-10-171-96-226.ec2.internal.warc.gz"} |
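The case analysis in the answer can be checked mechanically. The sketch below is an independent illustration (the helper name `witness` is hypothetical), and it also spells out the symmetric $a = 0$, $b \neq 0$ case that the answer handles "without loss of generality":

```python
def witness(a, b):
    """Return integers (x, y) with a*x + b*y > 0, assuming (a, b) != (0, 0).

    Mirrors the answer's case split on the sign of a nonzero coefficient.
    """
    if a > 0:
        return (1, 0)   # 1*a + 0*b = a > 0
    if a < 0:
        return (-1, 0)  # -1*a + 0*b = -a > 0
    # a == 0, so b != 0: use the sign of b instead
    return (0, 1) if b > 0 else (0, -1)

# Spot-check a few (a, b) pairs, including a = 0 and b = 0 cases
for a, b in [(3, 5), (-2, 7), (0, 4), (0, -9), (6, 0)]:
    x, y = witness(a, b)
    assert a * x + b * y > 0
```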
https://www.rdocumentation.org/packages/cooptrees/versions/1.0/topics/shapleyValue | # shapleyValue
##### Shapley value of a cooperative game
Given a cooperative game, the shapleyValue function computes its Shapley value.
##### Usage
shapleyValue(n, S = NULL, v)
##### Arguments
n
number of players in the cooperative game.
S
vector with all the possible coalitions. If none has been specified the function generates it automatically.
v
vector with the characteristic function of the cooperative game.
##### Details
The Shapley value is a solution concept in cooperative game theory proposed by Lloyd Shapley in 1953. It is obtained as the average of the marginal contributions of the players over all the possible orders of the players.
##### Value
The shapleyValue function returns a matrix with all the marginal contributions of the players (contributions) and a vector with the Shapley value (value).
##### References
Lloyd S. Shapley. "A Value for n-person Games". In Contributions to the Theory of Games, volume II, by H.W. Kuhn and A.W. Tucker, editors. Annals of Mathematical Studies v. 28, pp. 307-317. Princeton University Press, 1953.
##### Examples
# Cooperative game
n <- 3 # players
v <- c(4, 4, 4, 8, 8, 8, 14) # characteristic function
# Shapley value
shapleyValue(n, v = v)
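The averaging over player orders described in the Details section can be sketched in Python (an independent illustration, not part of the cooptrees package); with the characteristic function from the example above, every player receives 14/3 by symmetry:

```python
from itertools import permutations

def shapley_value(n, v):
    """Shapley value as the average marginal contribution over all orders.

    v maps a frozenset coalition to its worth; the empty coalition is worth 0.
    """
    value = {p: 0.0 for p in range(1, n + 1)}
    orders = list(permutations(range(1, n + 1)))
    for order in orders:
        coalition = frozenset()
        for p in order:
            before = v.get(coalition, 0.0)
            coalition = coalition | {p}
            # marginal contribution of p when joining this coalition
            value[p] += v.get(coalition, 0.0) - before
    return {p: value[p] / len(orders) for p in value}

# Characteristic function from the example: v({i}) = 4, v(pairs) = 8, v(all) = 14
v = {
    frozenset({1}): 4, frozenset({2}): 4, frozenset({3}): 4,
    frozenset({1, 2}): 8, frozenset({1, 3}): 8, frozenset({2, 3}): 8,
    frozenset({1, 2, 3}): 14,
}
print(shapley_value(3, v))  # each player gets 14/3 by symmetry
```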
Documentation reproduced from package cooptrees, version 1.0, License: GPL-3
Looks like there are no examples yet. | 2018-11-17 03:18:53 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3634980022907257, "perplexity": 3249.9877457658763}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039743248.7/warc/CC-MAIN-20181117020225-20181117042225-00159.warc.gz"}
https://www.physicsforums.com/threads/solving-a-differential-equation.751880/ | # Solving a differential equation
1. May 2, 2014
### Lengalicious
1. The problem statement, all variables and given/known data
Solve
$$(1+bx)y''(x)-ay(x)=0$$
2. Relevant equations
3. The attempt at a solution
I'm used to solving homogeneous linear ODE's where you form a characteristic equation and solve that way, here there is the function of x (1+bx) so how does that change things?
2. May 2, 2014
### frzncactus
Would dividing both sides by 1+bx help?
3. May 2, 2014
### Lengalicious
Ok so if I did that then what? I can define a characteristic equation such that
$$r^2-\frac{a}{1+bx}=0$$
and $$r=\pm\sqrt{\frac{a}{1+bx}}$$
where $$b^2-4ac = 4a(1+bx) > 0$$
so a solution is $$y=ce^{rx}$$ but that doesn't satisfy the ODE, so it's not correct?
4. May 2, 2014
### LCKurtz
You can't use constant coefficient methods on a DE like this with variable coefficients. Perhaps there is a clever substitution that will help, or maybe not. Problems like this are typically solved with series solutions, especially if you know $a$ and $b$. Where did this equation come from? If it's from a text, the recent material may give a hint how to solve it. | 2017-10-19 07:53:46 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8329179286956787, "perplexity": 573.1458024723267}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187823255.12/warc/CC-MAIN-20171019065335-20171019085335-00477.warc.gz"} |
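For the record, a sketch of one such substitution (a standard reduction, assuming $b\neq 0$ and $a/b^2>0$; this is not from the thread itself): with $t=1+bx$ one has $y''(x)=b^2\,\ddot{y}(t)$, so the equation becomes

$$t\,\ddot{y}-\frac{a}{b^2}\,y=0,$$

whose general solution can be written with modified Bessel functions of order one:

$$y=\sqrt{t}\left[C_1\,I_1\!\left(2\sqrt{\tfrac{a\,t}{b^2}}\right)+C_2\,K_1\!\left(2\sqrt{\tfrac{a\,t}{b^2}}\right)\right],\qquad t=1+bx.$$

This is consistent with the remark above that constant-coefficient methods do not apply and a substitution or series solution is needed.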
https://indico.cern.ch/event/40295/timetable/?view=standard_numbered | # IOP HEPP Particle Physics 2009
Europe/London
University of Oxford
#### University of Oxford
Description
Annual IOP High Energy Particle Physics Conference
Particle Physics 2009
University of Oxford, 6-8 April 2009
• Monday, April 6
• Registration/Lunch
• Plenary I: Collider Physics Martin Wood Lecture Theatre
### Martin Wood Lecture Theatre
#### University of Oxford
• 1
Speaker: Prof. Robin Devenish
• 2
TeVatron
Speaker: Terry Wyatt (University of Manchester)
• 3
HERA and Deep Inelastic Scattering
Speaker: Dr Paul Newman (Birmingham University)
• 4
Highlights From The B Factories
Speaker: Dr Francesca di Lodovico (QMUL)
• Tea
• Parallel Session 1 A - QCD and Electroweak Martin Wood Lecture Theatre
### Martin Wood Lecture Theatre
#### University of Oxford
QCD and EW
• 5
QCD matrix elements and truncated showers
An improved prescription of merging matrix elements with parton showers is presented, extending the CKKW approach. This new method preserves the logarithmic accuracy of the shower.
Speaker: Frank Siegert (University of Durham)
• 6
Top Quark Mass Measurement using Matrix Element Analysis Technique and Lepton + Jets Channel
We present a top quark mass measurement using ttbar candidate events for the lepton+jets decay channel from ppbar collisions at 1.96 TeV at CDF. The top quark mass is extracted by employing an unbinned maximum likelihood method using per-event probability density functions calculated using signal (ttbar) and background (W+jets) matrix elements, as well as a set of parameterised jet-to-parton mapping functions. The likelihood function is maximised with respect to the top quark mass, the fraction of signal events, and the jet energy scale correction, which is constrained in-situ via the mass of the hadronic W boson.
Speaker: Jacob Linacre (University of Oxford)
• 7
DIS Charged Current Interactions in e+p data At ZEUS
ZEUS is a multi-purpose detector located on the electron-proton HERA collider. Since an upgrade to HERA in 2000, the lepton beams may be longitudinally polarised. This allows tests of the chiral nature of the Standard Model to be undertaken. Of particular interest is the charged current cross-section, which the Standard Model tells us depends on the polarisation of the incoming lepton. Results will be shown for positron-proton collisions at a centre-of-mass energy of 318 GeV, based on a data sample with total integrated luminosity of 133.6 pb-1.
Speaker: Katie Oliver (University of Oxford)
• 8
Super-leading logarithmic terms have previously been observed in non-global QCD observables. This talk details a fixed order calculation of super-leading logarithms in the gaps-between-jets process. This calculation confirms previous results and extends them to O(alpha_s^5).
Speaker: James Keates (University of Manchester)
• 9
Gaps between jets
We study the effects of QCD radiation on the cross section for the production of two jets with a cut on the transverse momentum of any radiation in the rapidity gap between them at the Large Hadron Collider. This process is of great phenomenological interest in its own right, and moreover it is closely related to Higgs production in WBF. A deep understanding of this calculation is also very important from a more theoretical point of view because of the recent discovery of "super-leading" logarithms due to non-global effects.
Speaker: Simone Marzani (University of Manchester)
• 10
Production of direct photons at ATLAS
The production of direct photons at ATLAS will be an ideal test of perturbative QCD (pQCD) in a kinematic region never observed before. Gaining a deeper understanding of pQCD is essential to searches for new physics at the LHC. In addition, differential cross-section measurements of direct photon production can be used to constrain parton density functions. The plans for the first direct photon measurement with the first LHC data will be presented.
Speaker: Mark Stockton (University of Birmingham)
• Parallel Session 1 B - Flavour Physics Lindemann
### Lindemann
#### University of Oxford
Flavour Physics
• 11
B->K*mu+mu-: Symmetries and Asymmetries in the SM and Beyond
The rare decay B -> K* (-> K pi) mu+ mu- is regarded as one of the crucial channels for B physics as the polarization of the K* allows a precise angular reconstruction resulting in many observables. We investigate all observables in the context of the Standard Model and various New Physics models, in particular the Littlest Higgs model with T-parity and various MSSM scenarios, identifying those observables with small to moderate dependence on hadronic quantities and large impact of New Physics. We also identify a number of correlations between various observables which will allow a clear distinction between different New Physics scenarios.
Speaker: Aoife Bharucha (University of Durham)
• 12
Determination of |Vub| using the endpoint of the lepton spectrum
We present a partial branching fraction for the inclusive charmless semileptonic decay of b->ulnu, leading to the determination of the CKM matrix element |Vub|.
Speaker: Michael Sigamani (Queen Mary, University of London)
• 13
Performing a full angular analysis of Bd->K*mumu at LHCb
The decay Bd->K*mumu is a rare b->s quark transition which is of great interest as a probe of beyond the Standard Model physics at the LHC. LHCb will be able to collect such large samples of these decays that a full angular analysis will be possible within a few years. Methods developed for performing this analysis will be presented, and some aspects of the related phenomenology discussed.
Speaker: William Reece (Imperial College London)
• 14
Two-Body Charmless Hadronic B decays at LHCb
Studies related to two-body charmless hadronic decays of B mesons (B->hh decays) at LHCb are presented. Application of an incorrect proper time resolution model in the analysis to measure the CKM angle gamma may bias the fitted values of gamma and other parameters. A Monte Carlo-independent method to extract the resolution model from data is described. Also a selection for the rare decays B_{d/s}-> p pbar is presented. LHCb expects to make an observation of B_{d}-> p pbar with as little as 250pb^{-1} of data, depending on its branching ratio.
Speaker: Laurence Carson (University of Glasgow)
• 15
Charm triggering and physics at LHCb
Copious charm hadron production at LHCb will produce physics measurements with world-leading precision, but, first, charm events must be recorded to tape. This talk will discuss triggering strategies at LHCb and some of their consequences for charm physics selection and analysis.
Speaker: Patrick Spradlin (University of Oxford)
• 16
LHCb's potential in D0 mixing and CP violation
LHCb has great potential for making important measurements in the charm sector. The core of this programme includes the precise measurements of the mixing parameters in D0-D0bar oscillations and the search for CP-violation in D-decays. The talk will present the results of MC studies which quantify LHCb's potential sensitivity in this area. Discussion will also be given to the RICH system, whose K-Pi discrimination power is critical in the charm analysis.
Speaker: Funai Xing (University of Oxford)
• 17
The trigger for hadronic events at LHCb causes a lifetime bias which has to be taken into account in any time-dependent measurement. A Monte-Carlo free method for removing this bias is presented in the context of measurements of lifetimes and lifetime ratios using channels such as Bs->KK. An overview of the wide ranging physics opportunities with these measurements will be given.
Speaker: Marco Gersabeck (University of Glasgow)
• Parallel Session 1 C - Detectors and Future Facilities Dennis Sciama Lecture Theatre
### Dennis Sciama Lecture Theatre
#### University of Oxford
Detectors and Future Facilities
• 18
Testing and simulation of Multi-Pixel Photon Counter devices
The Multi-Pixel Photon Counter (MPPC) is an APD array operated in Geiger mode, marketed by Hamamatsu. These devices achieve comparable performance to PMTs, with the advantages of compactness and insensitivity to magnetic fields. They have great potential in HEP applications, and are being used in the ND280 near detector of the T2K long-baseline neutrino experiment. Results from the testing and simulation of the MPPCs used in ND280 will be presented.
Speaker: Martin Haigh (University of Warwick)
• 19
Software buildup to LHC switch-on
Outline of the software stability testing framework and of fast ATLAS simulation, with a view to getting ready for early data at the LHC.
Speaker: Alexander Richards (University College London)
• 20
Particle Flow at CMS
Information from the tracking, calorimetry and muon detection systems of the CMS detector at the Large Hadron Collider at CERN can be combined to give a holistic description of a proton-proton collision in terms of photons, electrons, muons and hadrons. A 'particle flow' technique is presented which seeks to determine the energy and momenta of these particles. By decomposing events in this way we may expect superior efficiencies and energy resolution for jets, missing transverse energy and tau reconstruction compared to conventional reconstruction techniques used at hadron colliders. The application of particle flow to CMS testbeam data is presented.
Speaker: Jamie Ballin (Imperial College London)
• 21
Crosstalk in the LHCb Vertex Locator modules
The Vertex Locator (VeLo) is a silicon-based particle detector in the LHCb experiment. The testbeam data taken with 10 final production VeLo modules exhibited the effects of crosstalk. A method has been developed to correct the data for this effect. A large amount of crosstalk is seen in the data, suggesting that its cause is a combination of charge sharing in the readout cables and an offset in the sampling time of the readout cables.
Speaker: Lisa Dwyer (University of Liverpool)
• 22
Performance of HPDs in the LHCb RICH Detectors
The Ring Imaging Cherenkov (RICH) detectors of the LHCb experiment, at the Large Hadron Collider (LHC), have been built to provide charged particle identification. Hybrid Photon Detectors (HPDs) are used to detect the Cherenkov photons produced in the RICH radiators. Each HPD required extensive testing and categorisation before being mounted in the RICH. A subsample of 74 HPDs underwent Quantum Efficiency measurements. Results of these tests will be presented. Afterwards, the HPDs were mounted onto columns and fitted into the RICH detectors. I developed the software for monitoring the properties of HPDs mounted into the RICH. These results will be presented alongside investigations of HPDs which had vacuum degradation. Through the RICH group’s combined effort, the RICH detectors were ready for the first beams that circulated through the LHC.
Speaker: Young Min Kim (University of Edinburgh)
• 23
Computation of Resistive Wakefields
• 24
A CLIC Post-Collision Extraction Line Photon Background Study
In the proposed CLIC Extraction Line Design, coherently produced lepton pairs possess a significant energy distribution. In the bending region, this translates to high dispersion and possible particle loss. To prevent magnet damage, masks are positioned between the magnets to absorb these losses; this study analyses the effect of the lost particles. Using physics-in-matter simulation tools, these interactions are modelled and secondary particles are tracked. Photons produced in the backwards direction are identified and tracked back to the interaction point to determine the flux incident on the detectors. Of particular interest is the silicon vertex detector region since photons may trigger false signals, contributing to the backgrounds. This study will also look at the effects of detector masking, and the probability of an incident photon translating to a registered hit in the silicon.
Speaker: Michael Salt (University of Manchester)
• Reception Natural History Museum
### Natural History Museum
#### University of Oxford
• Tuesday, April 7
• Plenary II: LHC Martin Wood Lecture Theatre
### Martin Wood Lecture Theatre
#### University of Oxford
• 25
Machine Status
Speaker: Dr Roger Bailey (CERN)
• 26
Detector Status
Speaker: Prof. Neville Harnew (University of Oxford)
• 27
Phenomenology Status
Speaker: Prof. Bryan Webber (University of Cambridge)
• Coffee
• Parallel Session 2 A - QCD and Electroweak / Higgs Martin Wood Lecture Theatre
### Martin Wood Lecture Theatre
#### University of Oxford
11:00 - 12:30 - QCD and EW
12:30 - 12:45 - Higgs
• 28
Measuring Z->ee with ATLAS
Some of the earliest measurements to be made with the ATLAS detector are of the rate and properties of Z boson production. Intense theoretical attention in recent years has culminated in next-to-next-to-leading order calculations of the production cross section of this channel, with just a few percent uncertainty. Confirmation of these predictions with LHC data, as soon as possible after collisions begin, will improve confidence in the accuracy of predictions of the rates of various backgrounds critical for Higgs and other new physics searches. Predictions relating to Z boson production, and prospects for the measurement of the total Z->ee cross section in ATLAS, will be discussed. This channel also acts as a source of electrons which will be used to calibrate and test the performance of ATLAS with early data. Performance measurements will be presented, with an emphasis on electron trigger efficiencies.
Speaker: Michael James Flowerdew (University of Liverpool)
• 29
The Z boson a_T distribution
We present theoretical predictions for a novel variable to study low transverse momentum vector boson production at hadron colliders. The new variable referred to as $a_T$ has been pointed out to have experimental advantages over the usual $p_T$ distribution and our study in conjunction with forthcoming accurate experimental data can help to shed light on perturbative radiation as well as constrain non-perturbative effects better than traditional studies of the $p_T$ distribution.
Speaker: Rosa María Durán Delgado (University of Manchester)
• 30
Spin correlation in top quark pair produciton at ATLAS
The high mass and large width of the top quark corresponds to a Standard Model prediction for a lifetime shorter than the timescale for strong interactions, implying that the top quark decays before hadronisation. Therefore, properties such as the spin correlation in the $t\bar{t}$ system are transferred to the decay products. In this talk I will discuss the possibility of measuring this correlation with early ATLAS data.
Speaker: Simon Head (University of Manchester)
• 31
A study of low pt electron reconstruction efficiencies in ATLAS
Speaker: Susan Cheatham (University of Lancaster)
• 32
Data driven methods of a W/Z cross section measurement in ATLAS
An important aim of the LHC is the precise measurement of W and Z boson production cross sections. It is important to make a data driven measurement of the cross sections and, whenever possible, not to rely on Monte Carlo simulation. Such a method of determining these cross sections in the ATLAS detector is outlined for the electron channel.
Speaker: Eleanor Dobson (University of Oxford)
• 33
Constraining PDFs at the LHC: The W asymmetry
The asymmetry in the rapidity distribution of positive and negative W Bosons can help to put constraints on the parton distribution functions (PDFs). It is directly related to differences in the momentum distribution of u and d quarks and will be used to improve our knowledge of the valence quark distributions at low x. This talk will present the prospects for the W asymmetry measurement with early data at the ATLAS detector. The analysis is carried out using full detector simulation and all systematic errors on the measurement are considered.
Speaker: Kristin Lohwasser (University of Oxford)
• 34
Fast simulation and the Higgs with ATLAS
Presented is an outline of the current simulation options available for use in the ATLAS collaboration. Particular attention is given to the ATLAS fast simulation, ATLFAST. By default, ATLFAST does not account for particle losses due to the reconstruction process. Therefore, a set of parametrisations of the photon reconstruction efficiency, as seen in full simulation, have been incorporated into ATLFAST, and the results are demonstrated with H->gamma gamma events.
Speaker: Neil Cooper-Smith (Royal Holloway)
• 12:45 PM
Discussion
• Parallel Session 2 B - Flavour Physics / Beyond the Standard Model Lindemann
### Lindemann
#### University of Oxford
11:00 - 12:00 - Flavour Physics
12:00 - 13:00 - Beyond the Standard Model
• 35
Event by Event alignment studies using B physics observables in the ATLAS experiment
Speaker: Lee De Mora (University of Lancaster)
• 36
Recent results in rare charmless three-body hadronic B decays
We report recent results from the BaBar experiment on the rare three-body charmless hadronic decays of charged and neutral B mesons. These results have been obtained using the full BaBar dataset of around 470 million BBbar pairs.
Speaker: Eugenia Puccio (University of Warwick)
• 37
Measurement of semileptonic asymmetry in Bs decays at D0
Recent results from D0 on the semileptonic asymmetry in Bs decays will be presented and prospects will be discussed.
Speaker: Sergey Burdin (University of Liverpool)
• 38
Time dependent Dalitz plot analysis of B0->Kspi+pi- at BaBar
A time-dependent amplitude analysis of B0->Kspi+pi- decays is performed in order to extract the CP violation parameters of f0(980)Ks and rho0(770)Ks and direct CP asymmetries of K*+(892)pi-. The relative phases between B0->K*+(892)pi- and B0->K*-(892)pi+, relevant for the extraction of the unitarity triangle angle gamma, are also measured. The results are obtained from the final BaBar data sample.
Speaker: Jelena Ilic (University of Warwick)
• 39
ATLAS Electron Trigger efficiency determination for BSM channels
This talk will present a study concerning the ATLAS electron trigger performance in a SUSY/exotic environment and the determination of this efficiency from data.
Speaker: Matthew Tamsett (Royal Holloway)
• 40
CP Violation in the MSSM at the LHC
The Minimal Supersymmetric Model contains many new parameters that can have CP violating phases. We investigate ways of discovering these at the LHC if the CP phases happen to be large.
Speaker: Jaime Tattersall (University of Durham)
• 41
Precise Predictions for Higgs Production in Neutralino Decays
Complete one-loop results are presented for the class of processes $\tilde{\chi}^0_i\rightarrow \tilde{\chi}^0_j h_a$ in the MSSM with CP-violating phases beyond the lowest order. We combine the genuine vertex contributions with two-loop Higgs propagator-type corrections, thus obtaining the currently most precise prediction for this class of processes. The numerical impact of the genuine vertex corrections is studied in several examples of CP-conserving and CP-violating scenarios. The corrections to the decay width can be particularly large in the CP-violating CPX benchmark scenario, where a very light Higgs boson is not excluded by present data. We find that in this parameter region, which will be difficult to cover by standard Higgs search channels at the LHC, the branching ratio for the decay $\tilde{\chi}^0_2\rightarrow \tilde{\chi}^0_1 h_1$ is large. This may offer good prospects to detect such a light Higgs boson in cascade decays of supersymmetric particles.
Speaker: Alison Fowler (Durham IPPP)
• 42
Inelastic Dark Matter and Non-Standard Halos
I will discuss the compatibility of the inelastic dark matter (iDM) interpretation of the DAMA/LIBRA results with other direct detection experiments, focussing particularly on the sensitivity to the iDM velocity distribution.
Speaker: Matthew McCullough (University of Oxford)
• Parallel Session 2 C - Detectors and Future Facilities / Neutrinos and Dark Matter Dennis Sciama Lecture Theatre
### Dennis Sciama Lecture Theatre
#### University of Oxford
11:00 - 12:00 - Detectors and Future Facilities
12:00 - 13:00 - Neutrinos and Dark Matter
• 43
ATLAS SCT Endcap Module Efficiency Measurement
A description of my work on the measurement of efficiencies of modules in the SCT endcap using cosmic data taken in SR1 during the cosmic tests.
Speaker: Nicholas Austin (University of Liverpool)
• 44
Construction of an Electromagnetic Calorimeter for ND280 and the T2K collaboration
T2K (Tokai to Kamioka) is a 295 km long-baseline experiment in Japan, due to start taking commissioning data late this year. It is designed to measure muon-neutrino oscillations to other flavours. In particular, it has the primary goal of measuring the mixing angle θ_13. One of the UK's contributions is the construction and calibration of an Electromagnetic Calorimeter (ECal) for the near detector, ND280, situated 280m downstream from the neutrino production target. This talk will present an update on the construction of one module of ND280, the Downstream ECal. It will summarise results from the quality assurance of the materials, and the work involved in creating the module. The module is now complete and currently collecting cosmic ray data at RAL. It is due to be shipped to CERN for testbeam studies in April.
Speaker: Gavin Davies (University of Lancaster)
• 45
Commisioning the LHCb Vertex Detector
Speaker: Abdi Noor (University of Liverpool)
• 46
Luminosity Performance Studies of Linear Colliders with Intra-train Feedback Systems: Simulations and Experimental Plans
The design luminosity for the future linear colliders is very demanding and challenging. Beam-based feedback systems will be required to achieve the necessary beam-beam stability and steer the two beams into collision. In particular, by means of computer simulations we study the luminosity performance improvement by intra-train beam-based feedback systems for position and angle corrections at the interaction point of linear colliders. Here results are presented for the International Linear Collider (ILC) and the Compact Linear Collider (CLIC). Moreover, we present the design of fast feedback systems to be tested at the final focus beam test facility ATF2 at KEK (Japan).
Speakers: Javier Resta Lopez (Institut fur Physik), Javier Resta Lopez (Oxford university)
• 47
Results from the first science run of ZEPLIN-III
We present the results from the first science run of the ZEPLIN-III WIMP dark matter search. ZEPLIN-III utilises two-phase xenon, measuring both scintillation and ionisation produced by interactions in the liquid to differentiate between the nuclear recoils expected from WIMPs and the electron recoil background signals down to ~10 keV nuclear recoil energy. The higher-field operation of the instrument provides enhanced discrimination over previous two-phase xenon experiments. The first science run of ZEPLIN-III at the Palmer Underground Laboratory (Boulby mine, UK) acquired 847 kg·days of background data, with a final fiducial exposure of 266 kg·days, placing a 90% confidence upper limit on the WIMP-nucleon spin-independent scattering cross-section with a minimum of 7.7×10⁻⁸ pb at a WIMP mass of 55 GeV/c².
Speaker: Blair Edwards (STFC Rutherford Appleton Laboratory)
• 48
ZEPLIN-III: The future
ZEPLIN-III aims to be the world's leading detector of weakly interacting massive particles, the favored explanation of Galactic dark matter. Identification is based on extraction of scintillation and electroluminescence signals from a two-phase xenon target. A successful first science run has demonstrated the benefits of ZEPLIN-III's unique high-field operation, open-plan geometry and use of radiologically clean materials. An increased sensitivity is to be achieved by the retrofitting of custom-made ultra-low background photomultiplier tubes and a high efficiency veto detector. This talk will present the requirements, design and simulation of ZEPLIN-III in its upgraded configuration, illustrating its potential for the future and first direct detection of dark matter.
Speaker: Emma Barnes (University of Edinburgh)
• 49
The Physics and Analysis of Cosmic Muons in the Downstream Ecal of T2K
Speaker: Melissa George (Queen Mary, University of London)
• 50
Calculations of background from radioactivity in dark matter detectors
New-generation dark matter experiments aim to explore the 10⁻⁹–10⁻¹⁰ pb cross-section region for WIMP-nucleon scalar interactions. Neutrons and gamma-rays produced in detector components are the main factors that can limit detector sensitivity. Energy spectra and production rates of neutrons coming from radioactive contamination of materials with uranium and thorium have been estimated using the code SOURCES4A. The code libraries for (α,n) cross-sections and transition probabilities have been updated and extended using the code EMPIRE 2.19. Radioactive background event rates from some detector components (such as copper and stainless steel), as well as from rock and concrete (lab walls), have been estimated for a hypothetical dark matter detector based on Ge crystals (for instance EURECA). Different shielding configurations (water, lead, paraffin) have been considered. Neutrons and photons have been propagated to the detector using GEANT4. Some requirements for the radiopurity of the materials have been deduced from the results of these simulations. The thickness of shielding in different configurations and the required gamma discrimination factor have also been investigated.
Speaker: Vito Tomasello (University of Sheffield)
• Lunch
• New IOP Particle Accelerators and Beams Group Lindemann
### Lindemann
#### University of Oxford
• 51
IoP PAB gp
Speaker: M Poole (ASTeC)
• STFC Town Meeting (a) Martin Wood Lecture Theatre
### Martin Wood Lecture Theatre
#### University of Oxford
• 52
PPAN and Grant Panels, including Project Approvals and Progress on Advisory Panels
Speaker: Dr Jordan Nash (CERN)
• 53
Science Board View of Programme and Priorities – Big Issues, Opportunities, Accelerator Vision
Speaker: Prof. Jenny Thomas (UCL)
• 54
STFC Update, including Financial Situation
Speaker: Dr John Womersley (STFC)
• Tea
• STFC Town Meeting (b) Martin Wood Lecture Theatre
### Martin Wood Lecture Theatre
#### University of Oxford
• 55
PPAP Role, Constitution and Plans
Speaker: Prof. Philip Burrows (University of Oxford)
• 56
Economic Impact
Speaker: Dr Liz Towns-Andrews (STFC)
• PP2020: Particle Physics - Fundamental Impacts Martin Wood Lecture Theatre
### Martin Wood Lecture Theatre
#### University of Oxford
• 57
Selling Particle Physics to the Treasury
Speaker: Mark Lancaster (UCL)
• Conference Dinner Christ Church Hall
### Christ Church Hall
#### University of Oxford
• Wednesday, April 8
• Plenary III: Neutrinos and Dark Matter Martin Wood Lecture Theatre
### Martin Wood Lecture Theatre
#### University of Oxford
• 58
Neutrino Long & Short baseline
Speaker: Elisabeth Falk (University of Sussex)
• 59
Double Beta Decay
Speaker: R Saakyan
• 60
Dark Matter
Speaker: Hans Kraus (University of Oxford)
• 61
Theory
Speaker: Stephen King (Department of Physics (SHEP))
• Coffee
• Parallel Session 3 A - Beyond the Standard Model Martin Wood Lecture Theatre
### Martin Wood Lecture Theatre
#### University of Oxford
Beyond the Standard Model
• 62
Search Strategies for SUSY in Tri-lepton Final States
The Large Hadron Collider with its unprecedented centre-of-mass energy will provide a unique opportunity to search for new physics Beyond the Standard Model. In this talk, I will describe search strategies for Supersymmetry in tri-lepton final states with early data of up to 10 fb⁻¹. Using Monte Carlo simulations and a full ATLAS detector simulation I investigate the discovery potential with this final state for several benchmark points in the minimal Supergravity parameter space. A particular focus will be placed on the difficult scenario where strongly interacting supersymmetric particles are very heavy (~3 TeV), as in such a case the tri-lepton final states are likely to provide the best discovery potential. I will also address possible strategies to determine background contributions relevant for this final state from data.
Speaker: Oleg Brandt (University of Oxford)
• 63
SUSY Gauge Singlets and Dualities
By including gauge singlets in supersymmetric gauge theories, we have been able to construct and test new types of Seiberg duality which may help in finding dual theories for supersymmetric GUTs.
Speaker: James Barnard (University of Durham)
• 64
Chargino/ Neutralino Mass
The masses of the chargino and neutralino are important SUSY parameters which can be measured with high precision at the ILC. Chargino and neutralino pair production are benchmark processes for the SiD detector concept, in part because the separation of chargino/neutralino events is only possible with satisfactory performance of the Particle Flow Algorithm. Here we discuss the selection of chargino/neutralino events and estimate the error on the mass measurement.
Speaker: Yiming Li (University of Oxford)
• 65
Black hole event generation with BlackMax
We present a comprehensive black-hole event generator, BlackMax, which simulates the experimental signatures of microscopic and Planckian black-hole production and evolution at proton-proton, proton-antiproton and electron-positron collisions in the context of brane world models with low-scale quantum gravity. The generator is based on phenomenologically realistic models free of serious problems that plague low-scale gravity, thus offering more realistic predictions. The generator includes all of the black-hole graybody factors known to date and incorporates the effects of black-hole rotation, splitting between the fermions, non-zero brane tension and black-hole recoil due to Hawking radiation (although not all simultaneously).
Speaker: Cigdem Issever (University of Oxford)
• 66
Phenomenology of Rotating Extra-Dimensional Black Holes at Hadron Colliders
We present results of a new simulation of black hole production and decay at hadron colliders in theories with large extra dimensions and TeV-scale gravity. The main new feature is a full treatment of the spin-down phase of the decay process and the distributions of the associated Hawking radiation. Also included are improved modelling of the loss of angular momentum and energy in the production process and a wider range of options for the Planck-scale termination of the decay. We present results from these simulations, with emphasis on the consequences and experimental signatures of black hole rotation at the LHC.
Speaker: James Frost (University of Cambridge)
• 67
Optimising selections for the potential discovery of inclusive Supersymmetry
Variables are chosen and orthogonal cuts are optimised for the preferential selection of inclusive SUSY signal over standard model background. This cuts based analysis is compared with a log likelihood based analysis also selecting SUSY over backgrounds.
Speaker: Paul Prichard (University of Liverpool)
• Parallel Session 3 B - Higgs Lindemann
### Lindemann
#### University of Oxford
Higgs
• 68
Search for the Standard Model Higgs boson produced in vector boson fusion and decaying into a tau pair in CMS with 1fb^-1
The Standard Model Higgs boson, produced by vector boson fusion and decaying to a pair of tau leptons, is an important channel in the search for the Higgs in the mass range between 115 and 145 GeV/c². A prospective analysis is presented on the observability of the Higgs boson with this channel, in the final state where one tau decays leptonically and the other hadronically, with CMS. An estimate of the expected upper limit which could be set on the signal cross-section times branching ratio, with 1 fb⁻¹ of integrated luminosity, is given for Higgs masses in the above range.
Speaker: Nicholas Cripps (Imperial College London)
• 69
L2 tracking robustness and trigger study for semileptonic ttH channel
The ATLAS trigger is made up of three levels. The second level is software-based and is the earliest stage where data is available from the tracking detectors. Tracking is needed to verify several signatures with different requirements. IDScan is an algorithm which reconstructs tracks from hits in the Pixel and SCT detectors. The robustness of this algorithm against missing layers of the detector is crucial, and the results of this study are shown in this talk. A promising but also very challenging channel for a Higgs discovery in the low mass region is ttbarH associated production, where the Higgs decays to a bbbar pair. Due to the complex final state of jets, lepton and missing energy it is possible to trigger on many different signatures. In this talk, the efficiencies of the various trigger signatures and their combination are presented.
Speaker: Catrin Bernius (University College London)
• 70
Highly Boosted HW/HZ Production
Until recently extraction of the processes HW->bblv and HZ->bbll was considered impossible at the LHC. However, recent work has shown that by studying the high pT case, the signals can be recovered as a promising discovery channel by ATLAS.
Speaker: Adam Davison (University College London)
• 71
Statistical Combination of Low-Mass Higgs Channels
With the recent successes at the Tevatron, it will be more important than ever for LHC physicists hunting for the Higgs to combine their efforts. Based on current techniques, no single channel in the ATLAS repertoire will be able to discover the Higgs with less than ~5 fb⁻¹, and as such, it will be important to combine statistically the outcomes of the individual searches. An approach is presented, based on that used at LEP and CDF, to assess the combined sensitivity of ATLAS to the Higgs Boson in the mass range 110–190 GeV.
Speaker: Catherine Wright (University of Glasgow)
• 72
High Mass Standard Model Higgs Searches at D Zero
In this talk I will describe the current status and outlook of high mass Standard Model Higgs boson searches at D Zero, with particular emphasis on the H to WW channel. The talk will include discussion of potential methods for optimising sensitivity.
Speaker: Nicholas Osman (Imperial College London)
• 73
Measuring Higgs boson branching fraction to cc-bar at the ILC
Precise measurements of the Higgs boson properties will be key to further understanding fundamental particle interactions. In particular, the study of the Higgs boson branching ratios is important in determining the Higgs couplings and the nature of the Higgs boson. Here we look at the tools used for the measurement of the H → cc-bar branching ratio and the achievable precision of such a measurement.
Speaker: Yambazi Banda (University of Oxford)
• Parallel Session 3 C - Neutrinos and Double Beta Decay Dennis Sciama Lecture Theatre
### Dennis Sciama Lecture Theatre
#### University of Oxford
Neutrinos and Double Beta Decay
• 74
Particle Identification in the ND280 Electromagnetic Calorimeter
The ND280 calorimeter is a coarsely grained lead/scintillator detector. The coarse granularity presents challenges for particle identification. Progress in using an artificial neural network to separate MIPs, electromagnetic showers and hadronic showers is presented.
Speaker: Antony Carver (University of Warwick)
• 75
Lepton Asymmetries and their Evolution in the E6SSM
We investigate leptogenesis in the E6-inspired Supersymmetric Standard Model. In this model, the gauge singlet right-handed neutrinos decay into ordinary leptons, exotic leptons and leptoquarks, all of which carry non-zero lepton number. Lepton asymmetries are calculated from loop diagram contributions to the right-handed neutrino decay. We find that lepton asymmetries can be enhanced drastically by extra Yukawa couplings in this model. The Boltzmann equations indicate that successful leptogenesis can be achieved when the lightest right-handed neutrino's mass is of order 10^6 GeV, as required by the limit on the reheating temperature.
Speaker: Rui Luo (University of Glasgow)
• 76
Antineutrinos at MINOS
The NuMI beam used by the MINOS experiment has a 6% component of antineutrinos. This, coupled with the magnetised MINOS detector, allows us to measure Δm̄² and sin 2θ̄ directly. If CPT is conserved these should be the same as Δm² and sin 2θ.
Speaker: David Auty (University of Sussex)
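(Illustration.) The measurement described in this abstract rests on the standard two-flavour oscillation survival probability, P = 1 − sin²(2θ)·sin²(1.27·Δm²[eV²]·L[km]/E[GeV]). A minimal sketch; the Δm² and sin²(2θ) values below are round placeholder inputs, not MINOS results, while 735 km is the approximate NuMI baseline:

```python
import math

def survival_prob(L_km, E_GeV, dm2_eV2, sin2_2theta):
    """Two-flavour muon-neutrino survival probability:
    P = 1 - sin^2(2 theta) * sin^2(1.27 * dm^2 * L / E)."""
    return 1.0 - sin2_2theta * math.sin(1.27 * dm2_eV2 * L_km / E_GeV) ** 2

# Placeholder oscillation parameters, for illustration only.
p = survival_prob(L_km=735, E_GeV=3.0, dm2_eV2=2.4e-3, sin2_2theta=1.0)
assert 0.0 <= p <= 1.0
```

Comparing the probability fitted from antineutrino events (Δm̄², θ̄) with that from neutrinos (Δm², θ) is what the CPT test in the abstract refers to.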
• 77
Double Beta Decay of Zr96 using NEMO-3 and Calorimeter R&D for SuperNEMO
Using 911 days of data from NEMO-3, a world-best 2νββ decay half-life of Zr96 has been measured to be [2.36 ± 0.17(stat) +0.17/−0.14(syst)] × 10^19 yr. The obtained limit on the 0νββ decay half-life at the 90% confidence level is 8.5 × 10^21 yr, which leads to a limit on the effective Majorana neutrino mass of < 7.4–20.3 eV, using the RQRPA and pnQRPA nuclear models. SuperNEMO is a next-generation double beta decay experiment based on the successful tracking plus calorimetry design approach of NEMO-3. SuperNEMO can study a range of isotopes; the baseline isotopes are Se82 and possibly Nd150. The total isotope mass will be 100-200 kg. A sensitivity to a 0νββ half-life greater than 10^26 years can be reached, which gives access to Majorana neutrino masses of 50-100 meV. One of the main challenges of the SuperNEMO R&D is the development of a calorimeter with an energy resolution of 4% FWHM at 3 MeV (the Q(ββ) value of Se82). This unprecedented milestone has been achieved using low density plastic scintillator coupled to high quantum efficiency photomultiplier tubes.
Speaker: Matthew Kauer (University College London)
• 78
SuperNEMO sensitivity to neutrinoless double beta decay via the mass mechanism and right handed currents
SuperNEMO is a next-generation neutrinoless double beta decay experiment currently in development, which will probe the inverted-hierarchy neutrino mass region. The detector will consist of a double-beta-emitting foil of Se82 or Nd150 surrounded by a tracking chamber and calorimetry, which allows measurement of electron angular correlations and individual energies. These signatures can be used to identify the underlying mechanism of neutrinoless double beta decay. SuperNEMO's sensitivity to both the mass mechanism and right-handed currents is examined in the cases of limit setting and discovery.
Speaker: Christopher Jackson (University of Manchester)
• 79
Exploring the physics reach of a low-energy neutrino factory
A 'neutrino factory' is seen as the ideal neutrino oscillation experiment of the future. We study the physics performance of a low-energy version of this experiment, in particular its sensitivity to theta_13, delta, the mass hierarchy and non-standard interactions, and aim to optimize its performance.
Speaker: Tracey Li (University of Durham)
• Lunch
• HEPP Group AGM Lindemann
### Lindemann
#### University of Oxford
• Plenary IV: Final Martin Wood Lecture Theatre
### Martin Wood Lecture Theatre
#### University of Oxford
• 80
Future Facilities
Speaker: Grahame Blair (Royal Holloway, Univ. of London)
• 81
Finale
Speaker: Sergio Bertolucci (CERN)
• 82
Closing Remarks
Speaker: Robin Devenish (Oxford University) | 2022-01-20 17:32:45 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5607847571372986, "perplexity": 4242.46472054557}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320302355.97/warc/CC-MAIN-20220120160411-20220120190411-00335.warc.gz"} |
https://firis.pl/recover-my-files-portable-v3-98-5282-rar/ | # Recover.My.Files.Portable.v3.98.5282.rar
Q: Confusion about finite categorical groups

In this document, the author seems to talk about what (I assume to be a group) $C$ being a finite category rather than what it means for it to be a finite category in the first place (I have no experience in category theory). It says that the set of objects $Ob(C)$ of $C$ is finite iff the group $\pi_0(C)$ generated by the elements $[x,y]$ of the set $Hom_{C}(x,y)$ of morphisms from $x$ to $y$ in $C$ is finite. It seems that what makes $Hom(x,y)$ finite is that there are no elements $r\in Hom(x,y)$ which cannot be expressed as a finite product of other elements in $Hom(x,y)$. Also, an element $r\in Hom(x,y)$ can always be written as a product $r=r_1\dots r_n$ where $r_i \in Hom(x,y)$. Is this correct? I would be grateful if anybody can explain.

A: What you say is correct. It is a standard idea in universal algebra to consider a group $G$ to be a $\mathbb{Z}$-graded group (where there is an internal operation in $G$ of degree $0$ and $1$), and the simplest example of such a thing is a finite group. Of course, then a morphism $f: G \to G'$ has degree $1$ iff $f$ is an isomorphism. A little extra work is needed to show that such morphisms are in fact closed under composition.
More generally, one thinks of a $\mathbb{Z}$- 6d1f23a050 | 2022-09-28 17:04:06 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8804596662521362, "perplexity": 175.43176942915986}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335257.60/warc/CC-MAIN-20220928145118-20220928175118-00178.warc.gz"} |
https://archive.lib.msu.edu/crcmath/math/math/e/e096.htm | Elliptic Group Modulo p
E(a,b)/p denotes the elliptic Group modulo p whose elements are 1 and ∞ together with the pairs of Integers (x, y) with 0 ≤ x, y < p satisfying
y^2 ≡ x^3 + ax + b (mod p)   (1)
with a and b Integers such that
4a^3 + 27b^2 ≢ 0 (mod p).   (2)
Given (x_1, y_1), define
(x_j, y_j) ≡ (x_1, y_1)^j (mod p).   (3)
The Order h of E(a,b)/p is given by
h = 1 + Σ_{x=0}^{p−1} [1 + ((x^3 + ax + b)/p)]   (4)
where ((x^3 + ax + b)/p) is the Legendre Symbol, although this Formula quickly becomes impractical. However, it has been proven that
p + 1 − 2√p ≤ h ≤ p + 1 + 2√p.   (5)
Furthermore, for p a Prime and N an Integer in the above interval, there exist a and b such that
h = N   (6)
and the orders of elliptic Groups mod are nearly uniformly distributed in the interval. | 2021-11-28 08:20:09 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9065101146697998, "perplexity": 845.2581454999827}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964358480.10/warc/CC-MAIN-20211128073830-20211128103830-00469.warc.gz"} |
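The Legendre-symbol order formula and the Hasse-type interval in the entry above can be checked numerically by brute-force point counting. A minimal sketch, assuming the standard curve equation y² ≡ x³ + ax + b (mod p); the sample values p = 101, a = 2, b = 3 are arbitrary:

```python
import math

def legendre(a, p):
    # Legendre symbol (a/p) for an odd prime p, via Euler's criterion
    a %= p
    if a == 0:
        return 0
    return 1 if pow(a, (p - 1) // 2, p) == 1 else -1

def order_by_formula(a, b, p):
    # h = 1 + sum_{x=0}^{p-1} [1 + ((x^3 + ax + b)/p)]
    return 1 + sum(1 + legendre(x**3 + a*x + b, p) for x in range(p))

def order_by_counting(a, b, p):
    # count solutions of y^2 = x^3 + ax + b (mod p), plus the point at infinity
    rhs = [(x**3 + a*x + b) % p for x in range(p)]
    n = sum(1 for x in range(p) for y in range(p) if y * y % p == rhs[x])
    return n + 1

p, a, b = 101, 2, 3
h = order_by_formula(a, b, p)
assert h == order_by_counting(a, b, p)
assert abs(h - (p + 1)) <= 2 * math.isqrt(p) + 1   # Hasse interval
```

The two counts agree because each x contributes 1 + ((x³+ax+b)/p) solutions in y (two for a quadratic residue, one for zero, none for a non-residue).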
https://solvedlib.com/n/lah-20a-lodination-of-acetone-concentration-sjboylan20-8,4191591 | # Lah 20A lodination of Acetone Concentration SJBoylan20] 8 RATIOMFTHOD TQ DFTERMNNERFACTION RATEMODEL You add 10 mL 0f 4.0 M acetonc,
###### Question:
Lab 20A Iodination of Acetone Concentration SJBoylan20 — Ratio Method to Determine Reaction Rate Model. You add 10 mL of 4.0 M acetone, 10 mL of 1 M hydrochloric acid and 20 mL of water to a 125 mL Erlenmeyer flask. You mix the solution. Then you add 10 mL of 0.0050 M iodine to the flask, mix the solution and start a timer. At time equals zero the color of the solution is yellow with a hint of orange. This yellow color is from the iodine. You pour the solution into a 200 mm test tube. You look down the top of the test tube and watch the color of the solution. After 251 seconds the solution turns clear. The clear color indicates that all the iodine has reacted with the acetone. Use the information from the above paragraph to fill in the diagram. Calculate the total volume of solution in the Erlenmeyer flask after all additions. Calculate the molarity of the acetone in the total volume. Calculate the molarity of the hydrogen ion in the total volume. Calculate the molarity of the iodine in the total volume. Calculate the rate of reaction (rate of change). Substitute the values into the reaction rate equation model.
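The calculation steps asked for here are plain dilution arithmetic. An illustrative sketch, taking the quoted 10/10/20/10 mL additions and stock concentrations, and treating the iodine as fully consumed after 251 s:

```python
# Dilution arithmetic for the iodination-of-acetone mixture described above.
volumes_mL = {"acetone": 10, "HCl": 10, "water": 20, "iodine": 10}
stock_M = {"acetone": 4.0, "HCl": 1.0, "iodine": 0.0050}

total_mL = sum(volumes_mL.values())                   # total volume: 50 mL
molarity = {s: stock_M[s] * volumes_mL[s] / total_mL for s in stock_M}
# molarity -> acetone 0.80 M, HCl (H+) 0.20 M, iodine 0.0010 M

# All iodine gone after 251 s, so the average rate is Δ[I2]/Δt.
rate = molarity["iodine"] / 251                       # ≈ 3.98e-06 M/s
```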
#### Similar Solved Questions
##### (10 points) A box contains red and white marbles. Suppose we draw a marble from the box, replace it, and then draw another. Find the probability that (a) just one of the two marbles is red; (b) at least one of the two marbles is red.
##### Some students paid a private tutor to help them improve their results on a certain mathematical test. These students had a mean change in score of +17 points, with a standard deviation of 66 points. In a random sample of 100 students who pay a private tutor to help them improve their results, wha...
##### Time to complete 5K (in minutes): 23 24 25 25 26 26 27 27 27 27 28 28 29 29 30 31 32. Question 1: Construct a line plot of the data. Use time to complete the 5K as your x-axis and frequency as your y-axis. Create your line plot and then upload it here (if you create it on your computer, save and upload the file; if you create it with pen and paper, take a photo and upload that).
##### To study the effect of temperature on yield, in pounds, for a chemical process, five batches were produced at each of three temperature levels. The results follow. Construct an analysis of variance table. Use a .05 level of significance to test whether the temperature level has an effect on the mean yield of the process.
##### 1) a. Write down an unambiguous grammar that generates the set of strings {...
##### Determine: -the resultant internal normal force acting on the cross section through point B of the...
##### “translate” the following DNA nucleotide sequence into the amino acids that would be produced from this...
##### Let a_n be the number of words of length n over the alphabet {1,2,3} that contain an even number of 2s. Find a recurrence relation for a_n, along with the necessary initial conditions. (b) Suppose that b_n = 3b_{n−1} + 10b_{n−2} for n ≥ 2, with b_0 = 3 and b_1 = 8. Solve this recurrence to obtain a closed-form formula for b_n by using the characteristic equation method.
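For part (b), the characteristic equation is x² − 3x − 10 = 0 with roots 5 and −2, and fitting b_0 = 3, b_1 = 8 gives the closed form b_n = 2·5ⁿ + (−2)ⁿ. A quick numerical check of that derivation:

```python
def b_rec(n):
    # recurrence b_n = 3*b_{n-1} + 10*b_{n-2}, with b_0 = 3, b_1 = 8
    b = [3, 8]
    for i in range(2, n + 1):
        b.append(3 * b[i - 1] + 10 * b[i - 2])
    return b[n]

def b_closed(n):
    # closed form from characteristic roots 5 and -2: b_n = 2*5^n + (-2)^n
    return 2 * 5**n + (-2)**n

assert all(b_rec(n) == b_closed(n) for n in range(15))
```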
##### Find the derivative of the function F(x) = (5x² − 8)⁹.
##### Show that for x ≫ R, the equation E_x = (σ/2ε₀)[1 − 1/√(R²/x² + 1)] becomes E = Q/(4πε₀x²), where Q is the total charge on the disk. This essay question confuses me and any help with an explanation would be very appreciated. Thank you. Constants Part C A uniformly charged disk has ...
##### From the 2 pictures are any of these incorrect? Cancer 12 Complete each sentence with the...
##### 4. You are given the following differential equation: y″ − y′ − 2y = 2e⁻ᵗ. a. Solve for y_p using the Undetermined Coefficients method. b. Solve for y_p using the Variation of Parameters method.
##### In a study prepared in 2000 , the percentage of households using online banking was projected to be $f(t)=1.5 e^{0.78 t} \quad(0 \leq t \leq 4)$ where $t$ is measured in years, with $t=0$ corresponding to the beginning of 2000 . a. What was the projected percentage of households using online banking at the beginning of $2003 ?$ b. How fast was the projected percentage of households using online banking changing at the beginning of $2003 ?$ c. How fast was the rate of the projected percentage
##### Write the solution of the program in Python 3. I need the program using lists or strings or loops (while and for) or if/elif/else: Alex got a sequence of n integers a1,a2,…,an as a birthday present. Alex doesn't like negative numbers, so he decided to erase the minus signs from a...
##### Write the general form of the Maclaurin series (expanded about x = 0). Write the Maclaurin series for sin x and cos x.
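As an illustration, the standard Maclaurin expansions sin x = Σ (−1)ᵏ x^(2k+1)/(2k+1)! and cos x = Σ (−1)ᵏ x^(2k)/(2k)! can be checked numerically against the math library:

```python
import math

def maclaurin_sin(x, terms=10):
    # sin x = sum_{k>=0} (-1)^k * x^(2k+1) / (2k+1)!
    return sum((-1)**k * x**(2*k + 1) / math.factorial(2*k + 1)
               for k in range(terms))

def maclaurin_cos(x, terms=10):
    # cos x = sum_{k>=0} (-1)^k * x^(2k) / (2k)!
    return sum((-1)**k * x**(2*k) / math.factorial(2*k)
               for k in range(terms))

assert abs(maclaurin_sin(1.2) - math.sin(1.2)) < 1e-9
assert abs(maclaurin_cos(1.2) - math.cos(1.2)) < 1e-9
```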
##### Consider the reaction: 2CIO-(aq) z0h (aq) CIO; (aq) CIO: H,o() The reaction second order in CIO and first order In OH the value of Ihe rale Iaw conslant 1.15x10' "Imol s Calculale Ihe reaction rate for lhe reaclion at 0 C when Ihe Inilial concentrations CIO, and OH" are 7.50 * 10 ' Mard 6.35 10 ? Mrespeclively: ra U 1.5*io mci?s
##### Answer from (a) to (f) ....mathematical modelling problem In 1976, Marc and Helen Bornstein studied the pace oflife....
answer from (a) to (f) ....mathematical modelling problem In 1976, Marc and Helen Bornstein studied the pace oflife.2 To see iflife becomes more hectic as the size of the city becomes larger, they systematically observed the mean time required for pedestrians to walk 50 feet on the main streets o... | 2022-05-23 15:20:18 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6278809309005737, "perplexity": 3636.901817885665}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662558030.43/warc/CC-MAIN-20220523132100-20220523162100-00698.warc.gz"} |
https://www.physicsforums.com/threads/underground-cables.408096/ | # Underground cables
1. Jun 5, 2010
### Lunat1c
Hello,
I just got a small question.
Suppose a 50 Hz grid line is carried underground for a distance of 10 km, and it is known that the capacitance is 400 nF per phase per km. What is the total reactive power generated in the 10 km length?
Then I could say that we have a total of 400nF * 10 = 4000nF per phase.
$$Power = \frac{V^2}{X_c} = \frac{(400\,\text{kV})^2}{\frac{1}{2 \pi \cdot 50 \cdot 4000\,\text{nF}}} \approx 201\,\text{MVAr}.$$
1. However, this is the power per phase, isn't it? From my lecture notes it is the total reactive power, but I can't figure out why.
2. Also, I'm trying to find the charging current in each phase of the line.
According to my lecturer, the power in each phase is the power I got earlier divided by 3, but the same capacitance is used if the formula $I^2 X_c$ is applied.
Last edited: Jun 5, 2010
3. Jun 7, 2010
### Staff: Mentor
What are you using for the "voltage"? The voltage that is listed for a 3-phase line does not show up across each wire pair...
4. Jun 7, 2010
### Lunat1c
Sorry, my mistake. The question says it's a "400kV grid line"
5. Jun 7, 2010
### Staff: Mentor
So for a 400kV 3-phase transmission line, what is the voltage difference between each pair? See if the answer makes more sense now.
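The mentor's hint can be checked numerically. For a 400 kV system the rating is the line-to-line voltage, so using it directly in $V^2 \omega C$ yields the total three-phase reactive power (since $V_{LL}^2 = 3V_{ph}^2$), while per-phase figures use $V_{ph} = V_{LL}/\sqrt{3}$. A sketch, not from the original thread:

```python
import math

f = 50.0                       # Hz
C = 400e-9 * 10                # 400 nF per phase per km over 10 km -> 4 uF
w = 2 * math.pi * f
V_ll = 400e3                   # line-to-line voltage (V)
V_ph = V_ll / math.sqrt(3)     # line-to-neutral (phase) voltage

Q_phase = V_ph**2 * w * C      # reactive power generated per phase
I_charge = V_ph * w * C        # charging current per phase (A)

# V_ll**2 = 3 * V_ph**2, so plugging the 400 kV rating straight into
# V^2 * w * C gives the three-phase total in one step:
Q_total = V_ll**2 * w * C

print(Q_phase / 1e6, Q_total / 1e6, I_charge)
```

This gives roughly 67 MVAr per phase, about 201 MVAr total, and a charging current near 290 A per phase, which is why the lecture notes call the $V_{LL}^2 \omega C$ figure the total.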
6. Jun 7, 2010
### Lunat1c
I'm not sure I follow. I think I misunderstood the question to be honest. When you're told that a cable is a 400kV cable, what's the meaning of that exactly? I know that when it comes to 3 phase systems for example, when you're told that you have a 415V supply, that means that the line to line voltage is 415. However I'm unsure about this. | 2018-03-18 12:20:30 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6615732312202454, "perplexity": 1201.7628668743973}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257645613.9/warc/CC-MAIN-20180318110736-20180318130736-00516.warc.gz"} |
https://www.techwhiff.com/issue/how-many-years-did-it-take-hitler-to-kill-6-000-000--387082 | # How many years did it take hitler to kill 6,000,000 jews
###### Question:
how many years did it take Hitler to kill 6,000,000 Jews
### A/an _____ agent is a substance that has the potential to cause another substance to be oxidized. a. biological b. reducing c. oxidizing d. chemical
A/an _____ agent is a substance that has the potential to cause another substance to be oxidized. a. biological b. reducing c. oxidizing d. chemical...
### Convert 50 degrees F to K. [?]K
Convert 50 degrees F to K. [?]K...
### $\sqrt{-16} -\sqrt{-3} \sqrt{-3} -\sqrt{-4} \sqrt{-4} +3i-3i^{2} +3i^3$
$\sqrt{-16} -\sqrt{-3} \sqrt{-3} -\sqrt{-4} \sqrt{-4} +3i-3i^{2} +3i^3$...
### 1/5 (15b - 7) = 3b - 9 can some plz help meh this is due tomorrow?
1/5 (15b - 7) = 3b - 9 can some plz help meh this is due tomorrow?...
### The following are the ages (years) of 5 people in a room 14,22,12,25,21 a person enters the room the mean age of 6 people is now 21 what is the age of the person who entered the room
The following are the ages (years) of 5 people in a room 14,22,12,25,21 a person enters the room the mean age of 6 people is now 21 what is the age of the person who entered the room...
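The arithmetic behind this one is short: six people with mean age 21 total 6 × 21 = 126 years, so the newcomer's age is that total minus the original five ages (a quick sketch):

```python
ages = [14, 22, 12, 25, 21]      # the five people already in the room
new_age = 6 * 21 - sum(ages)     # new total (mean 21 over 6) minus old total
print(new_age)                   # 32
```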
### How many provinces and territories does Canada have? a. 10 b. 11 c. 12 d. 13
How many provinces and territories does Canada have? a. 10 b. 11 c. 12 d. 13...
### Enter the equation in standard form. y = 5x - 1
Enter the equation in standard form. y = 5x - 1...
### Creating the Self Module Exam In "Still I Rise," how does Maya Angelou emphasize the speaker's determination to overcome adversity? - a rigid rhyme scheme - repetition of a key phrase - alliteration in every other line - personification of household objects
Creating the Self Module Exam In "Still I Rise," how does Maya Angelou emphasize the speaker's determination to overcome adversity? - a rigid rhyme scheme - repetition of a key phrase - alliteration in every other line - personification of household objects...
### Simplify 6.5b - 4.7b
simplify 6.5b - 4.7b...
### 1.a.) Find the next four terms: a8,a9,a10,a11 $a_{n}$=0, 9, -26, 65, -124, 217, -342 1.b) Find a direct formula for $a_{n}$, [Hint: You may want to look at perfect squares, perfect cubes, powers of 2, powers of 3...]
1.a.) Find the next four terms: a8,a9,a10,a11 $a_{n}$=0, 9, -26, 65, -124, 217, -342 1.b) Find a direct formula for $a_{n}$, [Hint: You may want to look at perfect squares, perfect cubes, powers of 2, powers of 3...]...
### What does this image represent?
what does this image represent?...
### Why was the camp David accords meeting important?
Why was the camp David accords meeting important?...
### Can someone PPPLLEEAASSEEE help me with all of these
Can someone PPPLLEEAASSEEE help me with all of these...
### Define and describe 4 Political Parties. What is the purpose of a political party?
Define and describe 4 Political Parties. What is the purpose of a political party?...
### If the savings account is growing at 5% per year predict how much money would be in the account after 60 years? Hint - exponential growth y=a(1+r)^x
If the savings account is growing at 5% per year predict how much money would be in the account after 60 years? Hint - exponential growth y=a(1+r)^x...
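Using the hinted formula $y=a(1+r)^x$ with $r=0.05$ and $x=60$, the growth factor alone can be computed directly (a sketch; the principal $a$ is not given in the problem):

```python
r, years = 0.05, 60
factor = (1 + r) ** years   # growth factor; multiply by the initial deposit a
print(round(factor, 2))     # about 18.68, i.e. nearly a 19-fold increase
```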
### 10. Which of the following is NOT a rules violation in soccer? (1 point) O Chest trap O Tripping O Handball O Obstruction
10. Which of the following is NOT a rules violation in soccer? (1 point) O Chest trap O Tripping O Handball O Obstruction...
### The owner of a ranch has 2,100 yards of fencing material with which to enclose a rectangular piece of grazing land along a straight portion of a river. If fencing is not required along the river, what are the dimensions of the pasture having the largest area
The owner of a ranch has 2,100 yards of fencing material with which to enclose a rectangular piece of grazing land along a straight portion of a river. If fencing is not required along the river, what are the dimensions of the pasture having the largest area...
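For the ranch problem, let $x$ be the depth on the two fenced sides and $2100-2x$ the side along the river; then $A(x)=x(2100-2x)$ is maximized where $A'(x)=2100-4x=0$ (a sketch):

```python
# Maximize A(x) = x * (2100 - 2x): vertex of a downward parabola
x = 2100 / 4            # depth of the pasture (yards)
width = 2100 - 2 * x    # side parallel to the river (yards)
area = x * width
print(x, width, area)   # 525.0 1050.0 551250.0
```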
### Tom has 100 baseball cards and 120 football cards. What is the ratio of baseball cards to football cards?
Tom has 100 baseball cards and 120 football cards. What is the ratio of baseball cards to football cards?... | 2022-10-03 11:32:42 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.31740111112594604, "perplexity": 2376.9976683200753}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337415.12/warc/CC-MAIN-20221003101805-20221003131805-00051.warc.gz"} |
http://math.stackexchange.com/questions/116127/related-rates-and-implicit-differentiation | Related rates and implicit differentiation
I can get the proper answer, but I don't quite know why.
I am supposed to find $dy/dt$ for the function $y = \sqrt{2x +1}$ if $dx/dt = 3$ when $x=4$.
For the derivative I get $$\frac {dy}{dt} = \frac {1}{2} (2x + 1)^{-1/2} \frac{dx}{dt},$$ which then gives me $$\frac {dy}{dt} = \frac {1}{2} (9)^{-1/2} \cdot 3 = \frac{1}{2},$$
which is wrong. I can also do
$$\frac {dy}{dt} = \frac {1}{2} (9)^{-1/2} \cdot 2 \frac {dx}{dt},$$
which gives me $1$, which is the proper answer, but I am not sure why I get that. I know that the derivative of the inner function will be $2$ but the problem defines it as being $3$, so do I just multiply the two?
$\frac{dy}{dt}=\frac{1}{\sqrt{2x+1}}\frac{dx}{dt}$ – Salech Alhasov Mar 3 '12 at 22:46
I know that, I just typed it out wrong. – Jordan Mar 3 '12 at 22:49
You're supposed to find $\frac{dy}{dt}$ I assume? Your derivative is wrong. I think you forgot to account for the derivative of $2x+1$ – Mike Mar 3 '12 at 22:53
Yes I get it now, the derivative of 2x+1 is 2*x prime – Jordan Mar 3 '12 at 22:56
If you get it now, then write it up as an answer. If no one points out a mistake in your answer, then accept it. – Gerry Myerson Mar 4 '12 at 0:06
$$\frac {dy}{dt} = \frac {1}{2} (2x + 1)^{-1/2} \cdot 2 \cdot \frac{dx}{dt}$$
$$\frac {dy}{dt} = \frac {1}{2} (9)^{-1/2} \cdot 2 \cdot \frac {dx}{dt}$$
The 2 comes from the derivative of the inner function, and then I multiply that by the derivative of $x$, which was given as 3, so I get 6.
$$\frac {dy}{dt} = \frac {1}{2} (9)^{-1/2} \cdot 6$$
$$\frac {dy}{dt} = \frac {1}{2} \cdot \frac {1}{3} \cdot 6$$
$$\frac {dy}{dt} = 1$$
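As a sanity check, the chain-rule answer can be verified numerically with a central difference, choosing any $x(t)$ with $x(0)=4$ and $dx/dt=3$ (a sketch):

```python
import math

def x(t):
    return 4 + 3 * t              # any x(t) with x(0) = 4 and dx/dt = 3

def y(t):
    return math.sqrt(2 * x(t) + 1)

h = 1e-6
dydt = (y(h) - y(-h)) / (2 * h)   # central difference at t = 0
print(dydt)                       # approximately 1
```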
- | 2014-03-16 23:20:58 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9778954386711121, "perplexity": 262.39739921876924}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1394678704059/warc/CC-MAIN-20140313024504-00090-ip-10-183-142-35.ec2.internal.warc.gz"} |
https://www.global-sci.com/intro/article_detail/cicp/10531.html | Volume 23, Issue 2
An $hp$-Adaptive Minimum Action Method Based on a Posteriori Error Estimate
Commun. Comput. Phys., 23 (2018), pp. 408-439.
Published online: 2018-02
• Abstract
In this work, we develop an hp-adaptivity strategy for the minimum action method (MAM) using a posteriori error estimate. MAM plays an important role in minimizing the Freidlin-Wentzell action functional, which is the central object of the Freidlin-Wentzell theory of large deviations for noise-induced transitions in stochastic dynamical systems. Because of the demanding computation cost, especially in spatially extended systems, numerical efficiency is a critical issue for MAM. Difficulties come from both temporal and spatial discretizations. One severe hurdle for the application of MAM to large scale systems is the global reparametrization in time direction, which is needed in most versions of MAM to achieve accuracy. We recently introduced a new version of MAM in [22], called tMAM, where we used some simple heuristic criteria to demonstrate that tMAM can be effectively coupled with $h$-adaptivity, i.e., the global reparametrization can be removed. The target of this paper is to integrate $hp$-adaptivity into tMAM using a posteriori error estimation techniques, which provides a general adaptive MAM more suitable for parallel computing. More specifically, we use the zero-Hamiltonian constraint to define an indicator to measure the error induced by linear time scaling, and the derivative recovery technique to construct an error indicator and a regularity indicator for the transition paths approximated by finite elements. Strategies for $hp$-adaptivity have been developed. Numerical results are presented.
• Keywords
Large deviation principle, small random perturbations, minimum action method, rare events, uncertainty quantification.
• AMS Subject Headings
60H35, 65C20, 65N20, 65N30
• Copyright
COPYRIGHT: © Global Science Press
Xiaoliang Wan, Bin Zheng & Guang Lin. (2020). An $hp$-Adaptive Minimum Action Method Based on a Posteriori Error Estimate. Communications in Computational Physics. 23 (2). 408-439. doi:10.4208/cicp.OA-2017-0025
http://math.stackexchange.com/questions/614270/prove-a-given-sequence-of-real-numbers-is-convergent | # Prove a given sequence of real numbers is convergent
Given the sequence of real numbers $\{x_n\}_{n \in \mathbb N}$, we define $\{y_n\}_{n \in \mathbb N}$ where $y_n=\max\{|x_1|,...,|x_n|\}$ for each $n \in \mathbb N$. Prove that if $\{x_n\}_{n \in \mathbb N}$ is bounded, then $\{y_n\}_{n \in \mathbb N}$ is a convergent sequence.
My attempt at a solution.
If $\{x_n\}_{n \in \mathbb N}$ is bounded, then, it has a convergent subsequence. Call that sequence $\{x_{n_k}\}_{k \in \mathbb N}$. Note that if $x=\lim_{k \to \infty} x_{n_k}$, then $|x|=\lim_{k \to \infty}|x_{n_k}|$. This can be proved by the fact that $0\leq ||x_{n_k}|-|x||\leq |x_{n_k}-x| \to 0$ when $k \to \infty$.
I was going to try to prove that $|x|=\lim_{n \to \infty} y_n$ but immediately realized that this doesn't need to be true. For example: $\{x_n\}_{n \in \mathbb N}$: $x_n=0$ if $n$ is odd and $x_n=1$ if $n$ is even has two convergent subsequences.
My problem is I don't know what else to do, I would appreciate some guidance.
To show that $y_n$ is convergent, focus on the properties of $y_n$. There are several facts about $y_n$ that you should be able to deduce directly from your above statements. Do those facts help you to prove convergence? – John Dec 20 '13 at 19:40
@John right, $\{y_n\}_{n \in \mathbb N}$ is bounded and it is monotone increasing, I can't believe I didn't realize it before – user100106 Dec 20 '13 at 19:43
@user100106 : the MSE system dislikes unanswered questions. You can answer your own question, or John can answer it. – Stefan Smith Dec 20 '13 at 20:04
To show that $y_n$ is convergent, focus on the properties of $y_n$. There are several facts about $y_n$ that you should be able to deduce directly from your above statements. Do those facts help you to prove convergence? | 2016-05-03 16:49:30 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8687446713447571, "perplexity": 112.93365253946273}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-18/segments/1461860121618.46/warc/CC-MAIN-20160428161521-00112-ip-10-239-7-51.ec2.internal.warc.gz"} |
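A numerical illustration of the monotone-and-bounded argument (a sketch; the bounded sequence used here, $x_n = \sin n$, is just an example):

```python
import math
from itertools import accumulate

xs = [math.sin(n) for n in range(1, 2001)]         # a bounded sequence, |x_n| <= 1
ys = list(accumulate((abs(x) for x in xs), max))   # y_n = max(|x_1|, ..., |x_n|)

# y_n is non-decreasing and bounded above by 1, hence it converges
print(ys[0], ys[-1])
```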
http://tex.stackexchange.com/questions/50627/matrix-in-a-matrix | # matrix in a matrix
I would like to draw a matrix with top left, top right, bottom left and bottom right blocks. The top left should be a 3x3 matrix with numerical entries, the top right is 0, the bottom left is 0 and the bottom right is just a "single" entry uJ. Here is my attempt.
$\left[ \begin{array}{c|c} \left[\begin{array}{c|c|c} 0 & 0 & 2\,\mathrm{tr}(MM^{*}) \\ 0 & 0 & -ua \\ u & -ua & 0 \end{array}\right] & 0\\ \hline 0 & uJ \end{array}\right].$
I would like to emphasize that the non-top-left entries are not just single entries, but blocks, so I do not want to just make a 4 by 4 matrix with vlines and hlines and additional 0's.
Welcome to TeX.SE. While code snippets are useful in explanations, it is always best to compose a fully compilable MWE that illustrates the problem including the \documentclass and the appropriate packages so that those trying to help don't have to recreate it. – Peter Grill Apr 3 '12 at 17:51
$\left[ \begin{array}{c@{}c@{}c} \left[\begin{array}{cc} a_{11} & a_{12} \\ a_{21} & a_{22} \\ \end{array}\right] & \mathbf{0} & \mathbf{0} \\ \mathbf{0} & \left[\begin{array}{ccc} b_{11} & b_{12} & b_{13}\\ b_{21} & b_{22} & b_{23}\\ b_{31} & b_{32} & b_{33}\\ \end{array}\right] & \mathbf{0}\\ \mathbf{0} & \mathbf{0} & \left[ \begin{array}{cc} c_{11} & c_{12} \\ c_{21} & c_{22} \\ \end{array}\right] \\ \end{array}\right]$ | 2016-05-29 23:16:11 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7559024691581726, "perplexity": 680.8827171891481}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-22/segments/1464049282275.31/warc/CC-MAIN-20160524002122-00038-ip-10-185-217-139.ec2.internal.warc.gz"} |
https://mathematica.stackexchange.com/questions/190595/how-to-not-show-line-mesh-elements | # How to not show line mesh elements
Let's say I make a mesh from a collection of random points:
SeedRandom[1234]
points = RandomReal[1, {20, 2}];
mesh = VoronoiMesh[points]
For the purposes of my question we can just make an automatic highlighting of each cell (I want the colors of each cell to be different):
HighlightMesh[mesh, Table[{2, {i}}, {i, 20}]]
I want to be able to take the colored mesh and remove the edges between each "cell" so that it essentially goes from one color to the next. Of course, I could just color each edge the same color as a cell it is touching, but that seems inefficient, especially if there is a quick way to remove the edges. I can't seem to find a way to make the edges non-existent in the drawing of the mesh. From what I have been looking up, it seems like there are a lot of undocumented things you can do with meshes, and even if something is documented it can be difficult to find what you are looking for, since the information on meshes in Mathematica is so extensive.
UPDATE The answer so far works for the color mesh I posted, but what if my cells are darker in color? In this case changing the Opacity doesn't quite do the trick.
Opacity set to 0 for edges:
Use the MeshCellStyle option:
SeedRandom[1234]
points = RandomReal[1, {20, 2}];
mesh = VoronoiMesh[points, MeshCellStyle->{1->Opacity[0]}];
HighlightMesh[mesh, Table[{2, {i}}, {i, 20}]]
or:
SeedRandom[1234]
points = RandomReal[1, {20, 2}];
mesh = VoronoiMesh[points];
MeshRegion[
HighlightMesh[mesh, Table[{2, {i}}, {i, 20}]],
MeshCellStyle->{1->Opacity[0]}
]
I think for gray scale images, the lines you are seeing are an antialiasing artefact. If you turn off antialiasing, the line color should be suppressed, but they will be very jagged:
HighlightMesh[
mesh,
Table[Style[{2,i}, GrayLevel[RandomReal[.5]]], {i, 20}],
MeshCellStyle -> {1 -> Directive[Opacity[0], Antialiasing->False]}
]
• What if the cells are darker in color (closer to black)? I might edit my example to use graylevels Jan 31, 2019 at 17:39
• @AaronStevens that shouldn't change anything. Carl just made the boundaries disappear. Jan 31, 2019 at 19:11
• @b3m2a1 I edited the question to show an example with darker colors. You can still see light borders around the dark cells Jan 31, 2019 at 20:01
• Huh yeah I guess it is the antialiasing for the darker ones. The actual meshes I will be using have smaller cells, so maybe the jagged edges won't be as noticable. Feb 1, 2019 at 2:29 | 2022-09-28 08:48:51 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.23102915287017822, "perplexity": 1449.8234679803163}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335190.45/warc/CC-MAIN-20220928082743-20220928112743-00440.warc.gz"} |
http://math.stackexchange.com/questions/163755/integrating-multiple-times | Integrating multiple times.
I am having a problem integrating $\int_{-1}^{1}\int_{0}^{1}\int_{1}^{3} (6yz^3+6x^2y)\,dx\,dy\,dz$. If I integrate it w.r.t. $x$, then w.r.t. $y$, and then w.r.t. $z$, the answer comes out to be 0, but the actual answer is 52. Please help out. Thanks
Start with integrating w.r.t. $x$ and treat $y,z$ as constants, $\int C +Ax^2 dx$. Then continue... – draks ... Jun 27 '12 at 13:46
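The value 52 can also be confirmed with a crude midpoint-rule sum (a sketch; note the $6yz^3$ term integrates to zero over the symmetric interval $[-1,1]$):

```python
# Midpoint-rule approximation of the integral of 6*y*z**3 + 6*x**2*y
# over x in [1, 3], y in [0, 1], z in [-1, 1].
n = 40
hx, hy, hz = 2.0 / n, 1.0 / n, 2.0 / n

total = 0.0
for i in range(n):
    x = 1.0 + (i + 0.5) * hx
    for j in range(n):
        y = 0.0 + (j + 0.5) * hy
        for k in range(n):
            z = -1.0 + (k + 0.5) * hz
            total += (6 * y * z**3 + 6 * x**2 * y) * hx * hy * hz

print(total)   # close to 52; the z**3 term cancels over the symmetric interval
```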
\begin{align} \int_1^3 (6yz^3+6x^2y)\,dx &= \left[6xyz^3+2x^3y\right]_{x=1}^3 =12yz^3+52y \\ \int_0^1 (12yz^3+52y)\,dy &=\left[6y^2z^3+26y^2\right]_{y=0}^1 =6z^3+26 \\ \int_{-1}^1 (6z^3+26)\,dz &= \left[\frac{3}{2}z^4+26z\right]_{z=-1}^1 = 52 . \end{align} | 2015-10-13 17:23:48 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 1, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 1.0000098943710327, "perplexity": 1238.1994410804618}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-40/segments/1443738008122.86/warc/CC-MAIN-20151001222008-00198-ip-10-137-6-227.ec2.internal.warc.gz"} |
https://itensor.github.io/ITensors.jl/stable/IndexSetType.html | # IndexSet
ITensors.IndexSetMethod
IndexSet(inds::Vector{<:Index})
Convert a Vector of indices to an IndexSet.
Warning: this is not type stable, since a Vector is dynamically sized and an IndexSet is statically sized. Consider using the constructor IndexSet{N}(inds::Vector).
source
## Priming and tagging methods
ITensors.primeMethod
prime(A::IndexSet, plinc, ...)
Increase the prime level of the indices by the specified amount. Filter which indices are primed using keyword arguments tags, plev and id.
source
Base.mapMethod
map(f, is::IndexSet)
Apply the function to the elements of the IndexSet, returning a new IndexSet.
source
## Set operations
Base.intersectMethod
intersect(A::IndexSet, B::IndexSet; kwargs...)
Output the IndexSet in the intersection of A and B, optionally filtering by tags, prime level, etc.
source
ITensors.firstintersectMethod
firstintersect(A::IndexSet, B::IndexSet; kwargs...)
Output the Index common to A and B, optionally filtering by tags, prime level, etc. If more than one common Index is found, throw an error. If no common Index is found, return a default constructed Index.
source
Base.setdiffMethod
setdiff(A::IndexSet, Bs::IndexSet...)
Output the IndexSet with Indices in A but not in the IndexSets Bs.
source
ITensors.firstsetdiffMethod
firstsetdiff(A::IndexSet, Bs::IndexSet...)
Output the first Index in A that is not in the IndexSets Bs. If no such Index is found, return a default constructed Index.
source
## Subsets
ITensors.getfirstMethod
getfirst(f::Function, is::IndexSet)
Get the first Index matching the pattern function, return nothing if not found.
source
ITensors.getfirstMethod
getfirst(is::IndexSet)
Return the first Index in the IndexSet. If the IndexSet is empty, return nothing.
source
Base.filterMethod
filter(f::Function, inds::IndexSet)
Filter the IndexSet by the given function (output a new IndexSet with indices i for which f(i) returns true).
Note that this function is not type stable, since the number of output indices is not known at compile time.
source | 2020-07-05 04:05:42 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.44198620319366455, "perplexity": 4430.9213493936595}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655886865.30/warc/CC-MAIN-20200705023910-20200705053910-00365.warc.gz"} |
https://wiki2.org/en/Solar_panel
# Solar panel
Solar PV modules (top) and two solar hot water panels (bottom) mounted on rooftops
Solar panels absorb sunlight as a source of energy to generate electricity or heat.
A photovoltaic (PV) module is a packaged, connected assembly of typically 6×10 photovoltaic solar cells. Photovoltaic modules constitute the photovoltaic array of a photovoltaic system that generates and supplies solar electricity in commercial and residential applications. Each module is rated by its DC output power under standard test conditions (STC), which typically ranges from 100 to 365 watts (W). The efficiency of a module determines its area for a given rated output: an 8% efficient 230 W module will have twice the area of a 16% efficient 230 W module. A few commercially available solar modules exceed an efficiency of 22%[1] and reportedly also exceed 24%.[2][3] A single solar module can produce only a limited amount of power; most installations contain multiple modules. A photovoltaic system typically includes an array of photovoltaic modules, an inverter, a battery pack for storage, interconnection wiring, and optionally a solar tracking mechanism.
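The efficiency-area relation stated above (an 8% efficient module needs twice the area of a 16% efficient one for the same rating) follows from dividing rated power by efficiency times the standard 1,000 W/m² test irradiance. A minimal sketch (function name is mine, for illustration):

```python
def module_area(power_w, efficiency, irradiance_w_m2=1000.0):
    """Area (m^2) needed for a given rated power under STC irradiance."""
    return power_w / (efficiency * irradiance_w_m2)

area_8pct = module_area(230.0, 0.08)   # 2.875 m^2
area_16pct = module_area(230.0, 0.16)  # 1.4375 m^2, half the area
```

Doubling efficiency halves the area for the same rated power, which is why high-efficiency modules matter on space-constrained rooftops.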
The most common application of solar panels is in solar water heating systems.[4]
The price of solar power has continued to fall so that in many countries it is cheaper than ordinary fossil fuel electricity from the grid (there is "grid parity").[5]
• How do solar panels work? - Richard Komp
• How Home Solar Power System Works
• How We Turn Solar Energy Into Electricity
• How to Solar Power Your Home / House #1 - On Grid vs Off Grid
• How to hook up Solar Panels (with battery bank) - simple 'detailed' instructions - DIY solar system
#### Transcription
The Earth intercepts a lot of solar power: 173 thousand terawatts. That's ten thousand times more power than the planet's population uses. So is it possible that one day the world could be completely reliant on solar energy? To answer that question, we first need to examine how solar panels convert solar energy to electrical energy. Solar panels are made up of smaller units called solar cells. The most common solar cells are made from silicon, a semiconductor that is the second most abundant element on Earth. In a solar cell, crystalline silicon is sandwiched between conductive layers. Each silicon atom is connected to its neighbors by four strong bonds, which keep the electrons in place so no current can flow.

Here's the key: a silicon solar cell uses two different layers of silicon. An n-type silicon has extra electrons, and p-type silicon has extra spaces for electrons, called holes. Where the two types of silicon meet, electrons can wander across the p/n junction, leaving a positive charge on one side and creating negative charge on the other.

You can think of light as the flow of tiny particles called photons, shooting out from the Sun. When one of these photons strikes the silicon cell with enough energy, it can knock an electron from its bond, leaving a hole. The negatively charged electron and location of the positively charged hole are now free to move around. But because of the electric field at the p/n junction, they'll only go one way. The electron is drawn to the n-side, while the hole is drawn to the p-side. The mobile electrons are collected by thin metal fingers at the top of the cell. From there, they flow through an external circuit, doing electrical work, like powering a lightbulb, before returning through the conductive aluminum sheet on the back. Each silicon cell only puts out half a volt, but you can string them together in modules to get more power.
Twelve photovoltaic cells are enough to charge a cellphone, while it takes many modules to power an entire house. Electrons are the only moving parts in a solar cell, and they all go back where they came from. There's nothing to get worn out or used up, so solar cells can last for decades. So what's stopping us from being completely reliant on solar power? There are political factors at play, not to mention businesses that lobby to maintain the status quo. But for now, let's focus on the physical and logistical challenges, and the most obvious of those is that solar energy is unevenly distributed across the planet. Some areas are sunnier than others. It's also inconsistent. Less solar energy is available on cloudy days or at night. So a total reliance would require efficient ways to get electricity from sunny spots to cloudy ones, and effective storage of energy. The efficiency of the cell itself is a challenge, too. If sunlight is reflected instead of absorbed, or if dislodged electrons fall back into a hole before going through the circuit, that photon's energy is lost. The most efficient solar cell yet still only converts 46% of the available sunlight to electricity, and most commercial systems are currently 15-20% efficient. In spite of these limitations, it actually would be possible to power the entire world with today's solar technology. We'd need the funding to build the infrastructure and a good deal of space. Estimates range from tens to hundreds of thousands of square miles, which seems like a lot, but the Sahara Desert alone is over 3 million square miles in area. Meanwhile, solar cells are getting better, cheaper, and are competing with electricity from the grid. And innovations, like floating solar farms, may change the landscape entirely. Thought experiments aside, there's the fact that over a billion people don't have access to a reliable electric grid, especially in developing countries, many of which are sunny. 
So in places like that, solar energy is already much cheaper and safer than available alternatives, like kerosene. For say, Finland or Seattle, though, effective solar energy may still be a little way off.
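The series-string arithmetic from the transcript is simple to encode; 0.5 V is the approximate per-cell figure quoted above, not an exact constant:

```python
CELL_VOLTAGE = 0.5  # approximate volts per silicon cell, per the transcript

def string_voltage(n_cells):
    """Voltage of n identical cells wired in series (currents add in parallel instead)."""
    return n_cells * CELL_VOLTAGE

phone_charger = string_voltage(12)  # twelve cells give about 6 V
```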
## Theory and construction
Photovoltaic modules use light energy (photons) from the Sun to generate electricity through the photovoltaic effect. The majority of modules use wafer-based crystalline silicon cells or thin-film cells. The structural (load-carrying) member of a module can be either the top layer or the back layer. Cells must also be protected from mechanical damage and moisture. Most modules are rigid, but semi-flexible ones are available, based on thin-film cells. The cells are connected electrically in series, one to another. Externally, most photovoltaic modules use MC4 connectors to facilitate easy weatherproof connections to the rest of the system.
Module electrical connections are made in series to achieve a desired output voltage, or in parallel to provide a desired current capability. The conducting wires that take the current off the modules may contain silver, copper or other non-magnetic conductive transition metals. Bypass diodes may be incorporated or used externally, in case of partial module shading, to maximize the output of the module sections still illuminated.
Some special solar PV modules include concentrators in which light is focused by lenses or mirrors onto smaller cells. This enables the use of cells with a high cost per unit area (such as gallium arsenide) in a cost-effective way.
## Efficiencies
Reported timeline of solar cell energy conversion efficiencies since 1976 (National Renewable Energy Laboratory)
Depending on construction, photovoltaic modules can produce electricity from a range of frequencies of light, but usually cannot cover the entire solar range (specifically, ultraviolet, infrared and low or diffused light). Hence, much of the incident sunlight energy is wasted by solar modules, and they can give far higher efficiencies if illuminated with monochromatic light. Therefore, another design concept is to split the light into different wavelength ranges and direct the beams onto different cells tuned to those ranges.[citation needed] This has been projected to be capable of raising efficiency by 50%. Scientists from Spectrolab, a subsidiary of Boeing, have reported development of multi-junction solar cells with an efficiency of more than 40%, a new world record for solar photovoltaic cells.[6] The Spectrolab scientists also predict that concentrator solar cells could achieve efficiencies of more than 45% or even 50% in the future, with theoretical efficiencies being about 58% in cells with more than three junctions.
Currently, the best achieved sunlight conversion rate (solar module efficiency) is around 21.5% in new commercial products,[7] typically lower than the efficiencies of their cells in isolation. The most efficient mass-produced solar modules[disputed ] have power density values of up to 175 W/m2 (16.22 W/ft2).[8] Research by Imperial College London has shown that the efficiency of a solar panel can be improved by studding the light-receiving semiconductor surface with aluminum nanocylinders, similar to the ridges on Lego blocks. The scattered light then travels along a longer path in the semiconductor, which means that more photons can be absorbed and converted into current. Although these nanocylinders have been used previously (aluminum was preceded by gold and silver), the light scattering occurred in the near-infrared region and visible light was absorbed strongly. Aluminum was found to absorb the ultraviolet part of the spectrum, while the visible and near-infrared parts of the spectrum were found to be scattered by the aluminum surface. This, the research argued, could bring down the cost significantly and improve the efficiency, as aluminum is more abundant and less costly than gold and silver. The research also noted that the increase in current makes thinner film solar panels technically feasible without "compromising power conversion efficiencies, thus reducing material consumption".[9]
• The efficiency of a solar panel can be calculated from its MPP (maximum power point) value.
• Solar inverters convert the DC power to AC power by performing an MPPT process: the inverter samples the output power (I-V curve) of the solar cells and applies the load (resistance) that yields maximum power.
• The MPP of a solar panel consists of an MPP voltage (Vmpp) and MPP current (Impp); it characterizes the capacity of the panel, and a higher value means a higher MPP.
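The MPPT process described above can be illustrated with a toy perturb-and-observe loop, one of the simplest tracking strategies. The I-V model below is invented for illustration (an exponential current roll-off near open-circuit voltage), not a real panel characteristic:

```python
import math

def pv_power(v, v_oc=40.0, i_sc=8.0):
    """Toy P-V curve: current sags exponentially as voltage approaches open circuit."""
    if v <= 0.0 or v >= v_oc:
        return 0.0
    current = i_sc * (1.0 - math.exp((v - v_oc) / 3.0))
    return current * v

def perturb_and_observe(power, v=10.0, step=0.2, iterations=500):
    """Hill-climb the P-V curve: keep stepping while power rises, reverse on a drop."""
    direction = 1.0
    p_prev = power(v)
    for _ in range(iterations):
        v_next = v + direction * step
        p_next = power(v_next)
        if p_next < p_prev:
            direction = -direction  # stepped past the peak; turn around
        v, p_prev = v_next, p_next
    return v, p_prev
```

Real inverters refine this basic idea with adaptive step sizes and faster sampling, but the principle is the same: continually nudge the operating voltage toward the peak of the power curve.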
Micro-inverted solar panels are wired in parallel, which produces more output than normal panels which are wired in series with the output of the series determined by the lowest performing panel (this is known as the "Christmas light effect"). Micro-inverters work independently so each panel contributes its maximum possible output given the available sunlight.[10]
## Technology
Market-share of PV technologies since 1990
Most solar modules are currently produced from crystalline silicon (c-Si) solar cells made of multicrystalline and monocrystalline silicon. In 2013, crystalline silicon accounted for more than 90 percent of worldwide PV production, while the rest of the overall market was made up of thin-film technologies using cadmium telluride, CIGS and amorphous silicon.[11] Emerging, third-generation solar technologies use advanced thin-film cells. They produce a relatively high-efficiency conversion for a low cost compared to other solar technologies. Also, high-cost, high-efficiency, close-packed rectangular multi-junction (MJ) cells are preferably used in solar panels on spacecraft, as they offer the highest ratio of generated power per kilogram lifted into space. MJ cells are compound semiconductors, made of gallium arsenide (GaAs) and other semiconductor materials. Another emerging PV technology using MJ cells is concentrator photovoltaics (CPV).
### Thin film
In rigid thin-film modules, the cell and the module are manufactured in the same production line. The cell is created on a glass substrate or superstrate, and the electrical connections are created in situ, a so-called "monolithic integration". The substrate or superstrate is laminated with an encapsulant to a front or back sheet, usually another sheet of glass. The main cell technologies in this category are CdTe, a-Si, a-Si+uc-Si tandem, and CIGS (or variants). Amorphous silicon has a sunlight conversion rate of 6–12%.
Flexible thin film cells and modules are created on the same production line by depositing the photoactive layer and other necessary layers on a flexible substrate. If the substrate is an insulator (e.g. polyester or polyimide film) then monolithic integration can be used. If it is a conductor then another technique for electrical connection must be used. The cells are assembled into modules by laminating them to a transparent colourless fluoropolymer on the front side (typically ETFE or FEP) and a polymer suitable for bonding to the final substrate on the other side.
## Smart solar modules
Several companies have begun embedding electronics into PV modules. This enables performing maximum power point tracking (MPPT) for each module individually, and the measurement of performance data for monitoring and fault detection at module level. Some of these solutions make use of power optimizers, a DC-to-DC converter technology developed to maximize the power harvest from solar photovoltaic systems. As of about 2010, such electronics can also compensate for shading effects, wherein a shadow falling across a section of a module causes the electrical output of one or more strings of cells in the module to fall to zero, without bringing the output of the entire module to zero.
Module performance is generally rated under standard test conditions (STC): irradiance of 1,000 W/m2, solar spectrum of AM 1.5, and module temperature of 25 °C.
Electrical characteristics include nominal power (PMAX, measured in W), open circuit voltage (VOC), short circuit current (ISC, measured in amperes), maximum power voltage (VMPP), maximum power current (IMPP), peak power (watt-peak, Wp), and module efficiency (%).
Nominal voltage[12] refers to the voltage of the battery that the module is best suited to charge; this is a leftover term from the days when solar modules were only used to charge batteries. The actual voltage output of the module changes as lighting, temperature and load conditions change, so there is never one specific voltage at which the module operates. Nominal voltage allows users, at a glance, to make sure the module is compatible with a given system.
Open circuit voltage or VOC is the maximum voltage that the module can produce when not connected to an electrical circuit or system. VOC can be measured with a voltmeter directly on an illuminated module's terminals or on its disconnected cable.
The peak power rating, Wp, is the maximum output under standard test conditions (not the maximum possible output). Typical modules, which could measure approximately 1 m × 2 m or 3 ft 3 in × 6 ft 7 in, will be rated from as low as 75 W to as high as 350 W, depending on their efficiency. At the time of testing, the test modules are binned according to their test results, and a typical manufacturer might rate their modules in 5 W increments, and either rate them at +/- 3%, +/-5%, +3/-0% or +5/-0%.[13][14][15][16]
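Module efficiency ties the peak power rating to the module area via the STC irradiance of 1,000 W/m². A quick sketch using the approximate 1 m × 2 m dimensions mentioned above (the 350 W rating picked for the example is the top of the range quoted):

```python
def module_efficiency(p_max_w, area_m2, irradiance_w_m2=1000.0):
    """Fraction of incident solar power (at STC) converted to electrical output."""
    return p_max_w / (area_m2 * irradiance_w_m2)

# a 1 m x 2 m module rated 350 W converts 17.5% of the incident light
eff_high = module_efficiency(350.0, 2.0)
# the same size module rated only 75 W converts 3.75%
eff_low = module_efficiency(75.0, 2.0)
```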
The ability of solar modules to withstand damage by rain, hail, heavy snow load, and cycles of heat and cold varies by manufacturer. Many crystalline silicon module manufacturers offer a limited warranty that guarantees electrical production for 10 years at 90% of rated power output and 25 years at 80%.[17] Installations intended to withstand extreme environments like large hail or heavy snow will require extra protection in the form of steep installations, sturdy framing and stronger glazing.[18]
Potential induced degradation (also called PID) is a potential-induced performance degradation in crystalline photovoltaic modules, caused by so-called stray currents.[19] This effect may cause power loss of up to 30%.[20]
The largest challenge for photovoltaic technology is said to be the purchase price per watt of electricity produced; new materials and manufacturing techniques continue to improve the price-to-power performance. The problem resides in the enormous activation energy that must be overcome for a photon to excite an electron for harvesting purposes. Advancements in photovoltaic technologies have brought about the process of "doping" the silicon substrate to lower the activation energy, thereby making the panel more efficient in converting photons to retrievable electrons.[21] Chemicals such as boron (p-type) are applied into the semiconductor crystal in order to create donor and acceptor energy levels substantially closer to the valence and conduction bands.[22] In doing so, the addition of boron impurity allows the activation energy to decrease 20-fold, from 1.12 eV to 0.05 eV. Since the potential difference (EB) is so low, the boron is able to thermally ionize at room temperature. This allows for free energy carriers in the conduction and valence bands, thereby allowing greater conversion of photons to electrons.
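The effect of lowering the activation energy from 1.12 eV to 0.05 eV can be made concrete with the Boltzmann factor exp(-E/kT) at room temperature (a standard back-of-the-envelope estimate, not a calculation from the source):

```python
import math

K_B_EV = 8.617e-5  # Boltzmann constant in eV/K

def boltzmann_factor(e_ev, t_kelvin=300.0):
    """Thermal-excitation probability factor exp(-E / kT)."""
    return math.exp(-e_ev / (K_B_EV * t_kelvin))

undoped = boltzmann_factor(1.12)  # intrinsic band-gap energy: vanishingly small
doped = boltzmann_factor(0.05)    # boron donor level: readily ionized at 300 K
```

The ratio between the two factors spans many orders of magnitude, which is why the doped level ionizes thermally at room temperature while the intrinsic gap does not.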
Solar power allows for greater efficiency than routes that generate energy via heat, such as heat engines. The drawback with heat is that most of the heat created is lost to the surroundings. Thermal efficiency is defined as:
${\displaystyle \eta _{th}\equiv {\frac {W_{out}}{Q_{in}}}=1-{\frac {Q_{out}}{Q_{in}}}}$
Due to the inherent irreversibility of converting heat into useful work, efficiency levels are decreased. Solar panels, on the other hand, have no requirement to retain any heat, and no drawbacks such as friction.
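The definition of thermal efficiency above translates directly into a one-line helper (numbers in the example are illustrative):

```python
def thermal_efficiency(q_in, q_out):
    """eta_th = W_out / Q_in = 1 - Q_out / Q_in for a heat engine."""
    return 1.0 - q_out / q_in

# an engine taking in 100 J of heat and rejecting 60 J delivers 40 J of work
eta = thermal_efficiency(100.0, 60.0)
```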
## Maintenance
Solar panel conversion efficiency, typically in the 20% range, is reduced by dust, grime, pollen, and other particulates that accumulate on the solar panel. "A dirty solar panel can reduce its power capabilities by up to 30% in high dust/pollen or desert areas", says Seamus Curran, associate professor of physics at the University of Houston and director of the Institute for NanoEnergy, which specializes in the design, engineering, and assembly of nanostructures.[23]
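Soiling losses of the kind described here accumulate gradually. A crude constant-daily-rate model (the 0.05%-per-day and 145-day figures below match the California drought study discussed in this section) shows how the total builds up:

```python
DAILY_LOSS = 0.0005  # 0.05% of efficiency lost per day (reported study rate)
DAYS = 145

# simple linear accumulation: 7.25%, close to the reported 7.4%
linear_loss = DAILY_LOSS * DAYS

# compounding the daily loss gives a slightly smaller total (~7.0%)
compounded_loss = 1.0 - (1.0 - DAILY_LOSS) ** DAYS
```

Either way the total stays well below the 30% worst case quoted for high-dust desert areas, which is the core of the argument that routine washing rarely pays for itself.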
Paying to have solar panels cleaned is often not a good investment; researchers found panels that had not been cleaned, or rained on, for 145 days during a summer drought in California lost only 7.4% of their efficiency. Overall, for a typical residential solar system of 5 kW, washing panels halfway through the summer would translate into a mere $20 gain in electricity production until the summer drought ends—in about 2 ½ months. For larger commercial rooftop systems, the financial losses are bigger but still rarely enough to warrant the cost of washing the panels. On average, panels lost a little less than 0.05% of their overall efficiency per day.[24]

## Recycling

Most parts of a solar module can be recycled, including up to 95% of certain semiconductor materials or the glass as well as large amounts of ferrous and non-ferrous metals.[25] Some private companies and non-profit organizations are currently engaged in take-back and recycling operations for end-of-life modules.[26]

Recycling possibilities depend on the kind of technology used in the modules:

• Silicon-based modules: aluminum frames and junction boxes are dismantled manually at the beginning of the process. The module is then crushed in a mill and the different fractions are separated - glass, plastics and metals.[27] It is possible to recover more than 80% of the incoming weight.[28] This process can be performed by flat-glass recyclers, since the morphology and composition of a PV module are similar to the flat glass used in the building and automotive industries. The recovered glass, for example, is readily accepted by the glass foam and glass insulation industry.
• Non-silicon-based modules: these require specific recycling technologies, such as the use of chemical baths, in order to separate the different semiconductor materials.[29] For cadmium telluride modules, the recycling process begins by crushing the module and subsequently separating the different fractions.
This recycling process is designed to recover up to 90% of the glass and 95% of the semiconductor materials contained.[30] Some commercial-scale recycling facilities have been created in recent years by private companies.[31] Since 2010, there is an annual European conference bringing together manufacturers, recyclers and researchers to look at the future of PV module recycling.[32][33]

## Production

| Top module producer | Shipments in 2014 (MW) |
| --- | --- |
| Yingli | 3,200 |
| Trina Solar | 2,580 |
| Sharp Solar | 2,100 |
| Canadian Solar | 1,894 |
| Jinko Solar | 1,765 |
| ReneSola | 1,728 |
| First Solar | 1,600 |
| Hanwha SolarOne | 1,280 |
| Kyocera | 1,200 |
| JA Solar | 1,173 |

In 2010, 15.9 GW of solar PV system installations were completed, with solar PV pricing survey and market research company PVinsights reporting growth of 117.8% in solar PV installation on a year-on-year basis. With over 100% year-on-year growth in PV system installation, PV module makers dramatically increased their shipments of solar modules in 2010. They actively expanded their capacity and turned themselves into gigawatt (GW) players.[34] According to PVinsights, five of the top ten PV module companies in 2010 were GW players.
Suntech, First Solar, Sharp, Yingli and Trina Solar are GW producers now, and most of them doubled their shipments in 2010.[35]

The basis of producing solar panels revolves around the use of silicon cells.[36] These silicon cells are typically 10-20% efficient[37] at converting sunlight into electricity, with newer production models now exceeding 22%.[38] In order for solar panels to become more efficient, researchers across the world have been trying to develop new technologies to make solar panels more effective at turning sunlight into energy.[39] In 2014, the world's top ten solar module producers in terms of shipped capacity during the calendar year of 2014 were led by Trina Solar, Yingli, Sharp Solar and Canadian Solar.[40]

## Price

Swanson's law states that with every doubling of production of panels, there has been a 20 percent reduction in the cost of panels.[41] Average pricing information divides into three pricing categories: those buying small quantities (modules of all sizes in the kilowatt range annually), mid-range buyers (typically up to 10 MWp annually), and large quantity buyers (self-explanatory—and with access to the lowest prices). Over the long term there is clearly a systematic reduction in the price of cells and modules. For example, in 2012 it was estimated that the quantity cost per watt was about US$0.60, which was 250 times lower than the cost in 1970 of US$150.[42][43] A 2015 study shows price/kWh dropping by 10% per year since 1980, and predicts that solar could contribute 20% of total electricity consumption by 2030, whereas the International Energy Agency predicts 16% by 2050.[44]
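Swanson's law is a learning curve: each doubling of cumulative panel production cuts cost by about 20%. A small sketch checks the 1970-to-2012 price drop quoted above against that rule (function names are mine, for illustration):

```python
import math

LEARNING_RATE = 0.20  # cost falls 20% per doubling of cumulative production

def cost_after_doublings(c0, n_doublings):
    """Unit cost after n doublings of cumulative production."""
    return c0 * (1.0 - LEARNING_RATE) ** n_doublings

def doublings_for_cost(c0, c_target):
    """How many doublings the learning curve needs to reach a target cost."""
    return math.log(c_target / c0) / math.log(1.0 - LEARNING_RATE)

# from US$150/W (1970) to US$0.60/W (2012): roughly 25 doublings implied
n = doublings_for_cost(150.0, 0.60)
```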
Real world energy production costs depend a great deal on local weather conditions. In a cloudy country such as the United Kingdom, the cost per produced kWh is higher than in sunnier countries like Spain.
According to RMI, balance-of-system (BoS) elements, that is, the non-module costs of non-microinverter solar systems (such as wiring, converters, racking systems and various components), make up about half of the total costs of installations.
For merchant solar power stations, where the electricity is being sold into the electricity transmission network, the cost of solar energy will need to match the wholesale electricity price. This point is sometimes called 'wholesale grid parity' or 'busbar parity'.[5]
Some photovoltaic systems, such as rooftop installations, can supply power directly to an electricity user. In these cases, the installation can be competitive when the output cost matches the price the user pays for electricity. This situation is sometimes called 'retail grid parity', 'socket parity' or 'dynamic grid parity'.[45] Research carried out by UN-Energy in 2012 suggests areas of sunny countries with high electricity prices, such as Italy, Spain and Australia, and areas using diesel generators, have reached retail grid parity.[5]
## Mounting and tracking
Solar modules mounted on solar trackers
Ground-mounted photovoltaic systems are usually large, utility-scale solar power plants. Their solar modules are held in place by racks or frames that are attached to ground-based mounting supports.[46][47] Ground-based mounting supports include:
• Pole mounts, which are driven directly into the ground or embedded in concrete.
• Foundation mounts, such as concrete slabs or poured footings
• Ballasted footing mounts, such as concrete or steel bases that use weight to secure the solar module system in position and do not require ground penetration. This type of mounting system is well suited for sites where excavation is not possible such as capped landfills and simplifies decommissioning or relocation of solar module systems.
Roof-mounted solar power systems consist of solar modules held in place by racks or frames attached to roof-based mounting supports.[48] Roof-based mounting supports include:
• Pole mounts, which are attached directly to the roof structure and may use additional rails for attaching the module racking or frames.
• Ballasted footing mounts, such as concrete or steel bases that use weight to secure the panel system in position and do not require through penetration. This mounting method allows for decommissioning or relocation of solar panel systems with no adverse effect on the roof structure.
• All wiring connecting adjacent solar modules to the energy harvesting equipment must be installed according to local electrical codes and should be run in a conduit appropriate for the climate conditions
Solar trackers increase the amount of energy produced per module at the cost of mechanical complexity and need for maintenance. They sense the direction of the Sun and tilt or rotate the modules as needed for maximum exposure to the light.[49][50] Alternatively, fixed racks hold modules stationary as the sun moves across the sky. The fixed rack sets the angle at which the module is held. Tilt angles equivalent to an installation's latitude are common. Most of these fixed racks are set on poles above ground.[51] Panels that face west or east may provide slightly lower energy, but even out the supply, and may provide more power during peak demand.[52]
## Standards
Standards generally used in photovoltaic modules:
## Applications
There are many practical applications for solar panels or photovoltaics. They can be used in agriculture as a power source for irrigation. In health care, solar panels can be used to refrigerate medical supplies. They can also be used for infrastructure. PV modules are used in photovoltaic systems to power a large variety of electric devices.
Basis of this page is in Wikipedia. Text is available under the CC BY-SA 3.0 Unported License. Non-text media are available under their specified licenses. Wikipedia® is a registered trademark of the Wikimedia Foundation, Inc. WIKI 2 is an independent company and has no affiliation with Wikimedia Foundation.
https://www.research.ed.ac.uk/portal/en/publications/measurement-of-jet-pmathrmt-correlations-in-pbpb-and-pp-collisions-at-sqrtsmathrmnn-276-tev-with-the-atlas-detector(df8c58c8-9f9f-4a17-97a6-0c1ea55f816d).html | ## Measurement of jet $p_{\mathrm{T}}$ correlations in Pb+Pb and $pp$ collisions at $\sqrt{s_{\mathrm{NN}}}=$ 2.76 TeV with the ATLAS detector
Research output: Contribution to journal › Article
Original language: English
Identifier: Aaboud:2017eww
Pages: 379-402
Journal: Physics Letters B
Volume: B774
DOI: 10.1016/j.physletb.2017.09.078
Publication status: Published - 10 Nov 2017
### Abstract
Measurements of dijet $p_{\mathrm{T}}$ correlations in Pb+Pb and $pp$ collisions at a nucleon–nucleon centre-of-mass energy of $\sqrt{s_{\mathrm{NN}}}=2.76\textrm{ TeV}$ are presented. The measurements are performed with the ATLAS detector at the Large Hadron Collider using Pb+Pb and $pp$ data samples corresponding to integrated luminosities of 0.14 nb$^{-1}$ and 4.0 pb$^{-1}$, respectively. Jets are reconstructed using the anti-$k_t$ algorithm with radius parameter values $R=0.3$ and $R=0.4$. A background subtraction procedure is applied to correct the jets for the large underlying event present in Pb+Pb collisions. The leading and sub-leading jet transverse momenta are denoted $p_{\mathrm{T}_1}$ and $p_{\mathrm{T}_2}$. An unfolding procedure is applied to the two-dimensional ($p_{\mathrm{T}_1}$, $p_{\mathrm{T}_2}$) distributions to account for experimental effects in the measurement of both jets. Distributions of $(1/N)\,\mathrm{d}N/\mathrm{d}x_{\mathrm{J}}$, where $x_{\mathrm{J}}=p_{\mathrm{T}_2}/p_{\mathrm{T}_1}$, are presented as a function of $p_{\mathrm{T}_1}$ and collision centrality. The distributions are found to be similar in peripheral Pb+Pb collisions and $pp$ collisions, but highly modified in central Pb+Pb collisions. Similar features are present in both the $R=0.3$ and $R=0.4$ results, indicating that the effects of the underlying event are properly accounted for in the measurement. The results are qualitatively consistent with expectations from partonic energy loss models.
https://socratic.org/questions/5894a950b72cff4c167639ed#373010 | # What is the "molar concentration" of a 21.9*g mass of "potassium chloride" dissolved in 869*mL of solution?
$\text{Molarity} = \dfrac{\text{Moles of solute}}{\text{Volume of solution}}$
And so we just have to plug the given numbers in, knowing that $\text{potassium chloride}$ has a molar mass of $74.55\ g\cdot mol^{-1}$.
$\text{Molarity} = \dfrac{\frac{21.9\ g}{74.55\ g\cdot mol^{-1}}}{0.869\ L} \cong 0.3\ mol\cdot L^{-1}$.
Note that $1 \cdot m L = {10}^{-} 3 L$; equivalently, $1000 \cdot m L \equiv 1.000 \cdot L$. What are the concentrations with respect to ${K}^{+} \left(a q\right)$ and $C {l}^{-} \left(a q\right)$? | 2022-11-27 04:41:45 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 12, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8622238039970398, "perplexity": 767.6867361319447}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710192.90/warc/CC-MAIN-20221127041342-20221127071342-00033.warc.gz"} |
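The arithmetic in this answer can be checked with a short script (a sketch; the molar mass of KCl, 74.55 g/mol, is taken from the answer text):

```python
# Molarity worked example from the text: 21.9 g of KCl in 869 mL of solution.

def molarity(mass_g: float, molar_mass_g_per_mol: float, volume_ml: float) -> float:
    """Moles of solute divided by litres of solution."""
    moles = mass_g / molar_mass_g_per_mol
    litres = volume_ml / 1e3          # 1 mL = 10^-3 L
    return moles / litres

c_kcl = molarity(21.9, 74.55, 869.0)
print(round(c_kcl, 3))  # ~0.338 mol/L, i.e. ~0.3 M as in the text

# KCl dissociates completely in water, so the K+(aq) and Cl-(aq)
# concentrations each equal the KCl concentration.
c_k = c_cl = c_kcl
```

This also answers the closing question: both ion concentrations equal the salt concentration, since each formula unit yields one of each ion.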
http://sl-inworld.com/sequential-numbering/sequential-numbering-software-mac.html | All I want to know is, how do you link fields, like {SEQ}, to a style – so that if I select/apply style “Figure Caption” (my style, I also have “Table Caption”), I get not just the attributes of the style (what it looks like, where it is on the page etc.) but I get a chapter (or section) number + a sequence number added, (e.g. “2-3”). I know about STYLEREF and SEQ Figure # and/or calling a section number (if required) – but I can’t find out how to take the simple and to me obvious step to have sequential numbering (prefixed with either “Figure” or “Table”) attached to my Figure Caption style. It all seems to be a two-step process: (i) select the style, (ii) select the numbering. I want to do both at style level. I want to see/get (e.g.) “Figure 2-3: Linking figure numbering to styles” just by applying my “Figure Caption” style to my figure caption text “Linking figure numbering to styles”. Please, only tell me how to do that, how to link chapter/sequence numbering via a style – i.e. in just one step – applying the style.
There are also other problems – for one thing, it’s not a very generic solution. You must customize several parts to work for different queries. If the set of tables used in the outermost FROM clause differs from the innermost FROM clause, you have to adjust the WHERE clause to reference the correct primary key (or some combination thereof). Then you have to also keep the ORDER BY in sync. That’s a lot of tweaks if you want to use it for different queries. That is why this is the last method listed here but it is one possible solution.
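The correlated-subquery ranking described above can be sketched with Python's built-in sqlite3 module; the table and column names here are invented for illustration:

```python
import sqlite3

# Each row's sequence number is a count of rows at or before it in the
# chosen ORDER BY -- the correlated-subquery technique the text describes.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tasks (id INTEGER PRIMARY KEY, title TEXT)")
conn.executemany("INSERT INTO tasks (title) VALUES (?)",
                 [("alpha",), ("beta",), ("gamma",)])

rows = conn.execute("""
    SELECT t.title,
           (SELECT COUNT(*) FROM tasks AS s WHERE s.id <= t.id) AS seq
    FROM tasks AS t
    ORDER BY t.id
""").fetchall()
print(rows)  # [('alpha', 1), ('beta', 2), ('gamma', 3)]
```

Note the fragility the paragraph warns about: the inner WHERE compares on `id`, so changing the outer FROM or the ORDER BY forces matching edits inside the subquery.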
Simply copy the second page of the template by highlighting that page and pressing CTRL + C. Windows shortcut keys Windows Keyboard Shortcuts 101: The Ultimate Guide Windows Keyboard Shortcuts 101: The Ultimate Guide Keyboard shortcuts can save you hours of time. Master the universal Windows keyboard shortcuts, keyboard tricks for specific programs, and a few other tips to speed up your work. Read More are wonderful things. Then create a new blank page by pressing CTRL + Enter. Then paste the copied page using CTRL + V. Create a new blank page, and paste again. Keep doing this until you have the desired number of pages that you will need.
The new SQL stored procedure lookup rules in Forms 9.1 make doing something like this possible. The example in the online help shows how to use a stored procedure to auto-append an incrementing number from the database to a form when it loads, which might solve some of your problems. However, the number is incremented after the form loads (not when it is submitted), so that might not exactly fit your needs. Here's the link to the correct page of the online help.
I need a way for the priority level to automatically adjust when I add or change an item with a new priority level. I might have 6 tasks, each will have a different priority. If I add one and set it to 1, the others need to increment + by one digit. Adding one to the above the highest would see no change in the others. Adding one in the middle would spread the rest apart (e.g. I have a 3, I put a new record and put it at 3, the old 3 becomes 4, and so on (everything below it would increment one digit).
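A minimal sketch of the renumbering behaviour described above (plain Python rather than Access/VBA; the task structure is assumed for illustration):

```python
def insert_with_priority(tasks, name, priority):
    """Insert a task at `priority`, shifting tasks at or below it down by one.

    `tasks` maps task name -> priority (1 = highest). Inserting above the
    current highest priority leaves the others unchanged, as the text asks.
    """
    for other, p in tasks.items():
        if p >= priority:
            tasks[other] = p + 1
    tasks[name] = priority
    return tasks

tasks = {"a": 1, "b": 2, "c": 3}
insert_with_priority(tasks, "new", 3)   # the old 3 becomes 4
print(tasks)  # {'a': 1, 'b': 2, 'c': 4, 'new': 3}
```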
Hi, is there any limit on the number of E-Mails? I created an archive of 270000 E-Mails (IMAP) and it caused trouble. Can I have that amount in a local folder? Is there any recommended number? It looked like 50000 starts being a problem on IMAP already. How else would you handle an archive that you need frequently? Thanks for your help Stephan If it were me... exporting them (selectively) to user created properly named Windows Explorer blank folders on the hard drive and backing up to a different drive (internal/external/cd/dvd) outside of Windows Live Mail woul...
There are many types of machines printers used to number Carbonless forms. One style is a letter press; another is a pneumatic numbering head which uses air pressure to drive a numbering head and crash imprint the number on the top sheet, transferring the number to the other sheets. For example, if you were numbering a 2 part carbonless form you would have a black or red number on the top sheet and a crashed number on the second sheet. The image on the second sheet would appear black no matter what ink was on the top sheet as the carbonless paper transfers the image in black.
The second thing you need to do is make sure that Word is configured so that it updates fields when it prints. Now, when you run the macro, you are asked how many copies to print and what starting number to use. The document variable is updated and a single copy of the document is printed. These steps are repeated for the number of times that you chose to print.
I'm producing gift certificates for a restaurant and they need to be numbered sequentially from 0001 to 0250. Is there any way to do this easily as opposed to numbering each manually? I'm sure I could probably work it out with a print shop, but the job was thrust on me last minute and my options are limited by the short turn around time. Any help would be appreciated. Thanks!...
I have a document wherein I have to make two kinds of page numbering: a catalog (individually made) which should always start at page 1, and a compiled version where the pages should be continuous. They both have the same content but with different output, so I tried using layers, but failed to set the page numbering to automatic, because setting it would affect both layers.
* The solution assumes that there is only one stack to cut, but really there could be dozens of stacks. Take a run of the numbers 1-10000 for example. Let’s say you get 4-up on a sheet and the biggest stack that will fit in the guillotine is 500 sheets. A true cut and stack solution will print on the first stack 1-500, 501-1000, 1001-1500, 1501-2000. Ideal because the numbers can be guillotined and placed back onto a pallet for its next process. It also means I can provide these numbers first to the client and then they can wait for the other numbers (in case they had run out of stock and were in a hurry for replenishment stock). The solution doesn’t do that – instead, the first 500 stack will have the numbers 1-500, 2501-3000, 5001-5500, 7501-8000. That means not only is placement back onto the pallet confusing, but the customer has to wait for the artwork to be completely printed before even getting the first half of numbers. True, I could run the script several times to get the appropriate stacks, but why should I if the script did what I wanted? Especially if there are hundreds of stacks to print?
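The stack-wise numbering the poster asks for can be sketched as follows (a hypothetical helper, assuming the total count divides evenly into sheets):

```python
def cut_and_stack(total, per_sheet, stack_size):
    """Numbers for each sheet position, grouped so every guillotine stack
    carries consecutive numbers -- the behaviour the poster wants."""
    sheets_total = total // per_sheet          # assumes it divides evenly
    layout = []
    for first_sheet in range(0, sheets_total, stack_size):
        sheets = min(stack_size, sheets_total - first_sheet)
        base = first_sheet * per_sheet
        # Position k on this stack carries numbers base+k*sheets+1 .. +sheets.
        stack = [[base + k * sheets + i + 1 for i in range(sheets)]
                 for k in range(per_sheet)]
        layout.append(stack)
    return layout

# 1-10000, 4-up, 500-sheet stacks: the first stack's four positions should
# cover 1-500, 501-1000, 1001-1500, 1501-2000, as described in the text.
first = cut_and_stack(10000, 4, 500)[0]
print([(pos[0], pos[-1]) for pos in first])
```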
Word includes a special sequencing field that you can use to do all sorts of numbering. You can even use the SEQ field to help create broken numbered lists. (A broken numbered list is one in which the flow of the list is interrupted by paragraphs of a different format.) This approach to creating numbered lists is particularly helpful and much less prone to the problems inherent in Word's built-in list numbering. For the purposes of this tip, the format of the sequence field is as follows:
Microsoft Publisher, the desktop publishing component of the Professional version of the Office Suite, can perform many time-saving tasks for busy business owners, including layout and design work. It can even help you avoid a shopping run to try to find tickets for your next employee picnic, holiday giveaway or executive board meeting. Create your own tickets, including the vital sequential ordering needed for raffles or attendance tracking, using Publisher’s page numbering. With a few tricky manipulations of the page number process, you can start running the numbers in an entirely new fashion.
Other notations can be useful for sequences whose pattern cannot be easily guessed, or for sequences that do not have a pattern such as the digits of π. One such notation is to write down a general formula for computing the nth term as a function of n, enclose it in parentheses, and include a subscript indicating the range of values that n can take. For example, in this notation the sequence of even numbers could be written as $(2n)_{n\in \mathbb{N}}$. The sequence of squares could be written as $(n^{2})_{n\in \mathbb{N}}$. The variable n is called an index, and the set of values that it can take is called the index set.
Remember that you must update the values in the sheet if you want to continue the numbering series with the next batch of tickets. For instance, if you want your next batch of tickets to start with 112, you'd open the workbook and change the value 100 to 112, and update the remaining values accordingly. Don't forget to save the workbook after updating the values.
I have a cell which has a basic formula in (adding up from 2 other cells) This number can end up being a minus number (-167), If this happens I need to be able to make that minus number appear as a zero (0). Is this possible? please help. =IF(A1+B1<0,0,A1+B1) Alternative =MAX(0,A1+B1) Gord Dibben Excel MVP On Sat, 7 May 2005 08:28:03 -0700, "marcus1066" wrote: >I have a cell which has a basic formula in (adding up from 2 other cells) >This number can end up being a minus number (-167), If this happens I need to >be...
If you need to apply numbering within a paragraph rather than to the entire paragraph, you use Word's ListNum feature. Using the ListNum feature will allow you to take advantage of the numbering system you're currently using in your document (it will use the one you implemented most recently if you're not currently using a numbering system). The ListNum Field is available in Word 97 and later and interacts with multi-level list numbering (which should be linked to styles as set forth here). Here is a brief explanation of differences between the ListNum field and the Seq field.
As I indicated, because you want to only generate this number just before saving, I would put this code behind a Save button or another event that runs just before you save the record. So if you want to prevent duplications, you don’t use the expression as a control source, but put the code behind a button or event. Since you have a Save button, put it there.
Moreover, the subscripts and superscripts could have been left off in the third, fourth, and fifth notations, if the indexing set was understood to be the natural numbers. Note that in the second and third bullets, there is a well-defined sequence $(a_{k})_{k=1}^{\infty}$, but it is not the same as the sequence denoted by the expression.
I have two fields that should match, but one includes special characters while the other does not. Example: Field1 00ABCD123456123 Filed2 00/ABCD/123456/123/SBZ I need to find records where these two fields don't match, either by changing the display of one of them, or a query to compare Field1 character 7-15 with Field2 characters 9-14, 16-18. Hope this makes sense. Can anyone help? Thanks! Take a look at the following from the Access Help file it might be what you're looking for... Extract a part of a text value The following table lists examples of expressions that ...
I answer readers' questions about Microsoft Office when I can, but there's no guarantee. When contacting me, be as specific as possible. For example, "Please troubleshoot my workbook and fix what's wrong" probably won't get a response, but "Can you tell me why this formula isn't returning the expected results?" might. Please mention the app and version that you're using. I'm not reimbursed by TechRepublic for my time or expertise, nor do I ask for a fee from readers. You can contact me at susansalesharkins@gmail.com.
If you start to type in what appears to be a numbered list, Word formats your manually typed "numbers" to an automatic numbered list. The main benefit of this option is that you do not need to click any button to start numbering and you can choose your numbering style as well. For example, if you type "(a) some text" and press Enter, it starts numbering using the "(a)" format.
Hello, I'm looking for a way to quickly find what numbers are missing in column B. I can sort them ascending, but how do I find if there are missing numbers? 1 2 3 5 6 7 9 I need to know 4 and 8 are missing. Thank you. One way: select B2:Bx. Choose Format/Conditional Formatting... CF1: Formula is =(B2-B1)>1 Format1: / or, without sorting, select column B (with B1 active): CF1: Formula is =AND(B1>MIN(B:B),COUNTIF(B:B,B1-1)=0) Both CF's will activate if there are missing numbers before them. In article <28706E9E-2624...
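Outside of Excel, the missing values in such a column can also be found with a short script (a sketch of the same idea as the conditional-formatting answer):

```python
def missing_numbers(values):
    """Numbers absent from the run min(values)..max(values)."""
    present = set(values)
    return sorted(set(range(min(present), max(present) + 1)) - present)

# The example column from the question:
print(missing_numbers([1, 2, 3, 5, 6, 7, 9]))  # [4, 8]
```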
Hello, can anyone help me with making serial numbers in this way: When I purchase 10 chairs, I want to monitor all of it by having serial numbers for each of those 10, and those 10 numbers should have the prefix “CHR-” if they are chairs, “TBL” if tables; the codes are associated with the item category. BTW, items have categories predefined beforehand via relationships on the items. So when I want to monitor these 10 chairs, I will only have to click the “generate control numbers” button and each of those purchased items gets their own control numbers.
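One way to sketch the "generate control numbers" idea outside any particular database (the category prefixes follow the poster's example; the function and counter names are made up):

```python
import itertools

# One counter per category, so CHR and TBL numbering run independently.
_counters = {}

def generate_control_numbers(category: str, quantity: int, width: int = 4):
    """Sequential control numbers like CHR-0001 for `quantity` new items."""
    counter = _counters.setdefault(category, itertools.count(1))
    return [f"{category}-{next(counter):0{width}d}" for _ in range(quantity)]

chairs = generate_control_numbers("CHR", 3)
tables = generate_control_numbers("TBL", 2)
print(chairs)  # ['CHR-0001', 'CHR-0002', 'CHR-0003']
print(tables)  # ['TBL-0001', 'TBL-0002']
```

In a real application the per-category counter would live in a table and be incremented inside a transaction, so two users cannot claim the same number.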
I want to have textbox with 2 columns with footnotes running across the bottom of those columns in one column. ID CS3 footnotes can’t handle this. So I have added fake footnote refs in the doc. using this idea. Now the footnotes themselves I can create in another text frame and use this idea again to create them and then manually place them at the bottom of the page. The only problem however with this is the FN options carrry across the whole doc. right? So even if I create a second doc for the footnotes themselves with different options and then later paste it into the main doc it’ll get messed up right?
I am trying to use mailmerge to print tickets. I tried using a column of sequential numbers on a spreadsheet and inserting that as a field into the mailmerge, but oddly, it used number 8 eight times on the first page, number 16 eight times on the second page, etc. So, I tried using a sequencing field as you describe. It worked great for the first page (numbers 1-8) but when I completed the mailmerge, it repeated numbers 1-8 on each successive sheet. What do I have to do to make this work in a mailmerge?
is defined as the set of all sequences $(x_{i})_{i\in \mathbb{N}}$ such that for each $i$, $x_{i}$ is an element of $X_{i}$. The canonical projections are the maps $p_{i} : X \to X_{i}$ defined by the equation $p_{i}((x_{j})_{j\in \mathbb{N}})=x_{i}$. Then the product topology on X is defined to be the coarsest topology (i.e. the topology with the fewest open sets) for which all the projections $p_{i}$ are continuous. The product topology is sometimes called the Tychonoff topology.
John, Sorry for the delay, but I was away last week with limited Internet access. I assumed if you had a Save button, you would know how to put code behind it. To see the code behind a button, Select the button in Form Design Mode and open the Properties Dialog (Right click and select properties), on the Events tab there should be something in the On Click event of the button. If you click the Ellipses […] next to the event, it will open Code Builder where you can enter the code. | 2021-10-28 11:01:26 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.503465473651886, "perplexity": 925.8711718475244}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323588284.71/warc/CC-MAIN-20211028100619-20211028130619-00544.warc.gz"} |
http://scikit-bio.org/docs/0.5.6/generated/skbio.alignment.global_pairwise_align_nucleotide.html | # skbio.alignment.global_pairwise_align_nucleotide
skbio.alignment.global_pairwise_align_nucleotide(seq1, seq2, gap_open_penalty=5, gap_extend_penalty=2, match_score=1, mismatch_score=-2, substitution_matrix=None, penalize_terminal_gaps=False)
Globally align nucleotide sequences or alignments with Needleman-Wunsch
State: Experimental as of 0.4.0.
Parameters
• seq1 (DNA, RNA, or TabularMSA[DNA|RNA]) – The first unaligned sequence(s).
• seq2 (DNA, RNA, or TabularMSA[DNA|RNA]) – The second unaligned sequence(s).
• gap_open_penalty (int or float, optional) – Penalty for opening a gap (this is subtracted from previous best alignment score, so is typically positive).
• gap_extend_penalty (int or float, optional) – Penalty for extending a gap (this is subtracted from previous best alignment score, so is typically positive).
• match_score (int or float, optional) – The score to add for a match between a pair of bases (this is added to the previous best alignment score, so is typically positive).
• mismatch_score (int or float, optional) – The score to add for a mismatch between a pair of bases (this is added to the previous best alignment score, so is typically negative).
• substitution_matrix (2D dict (or similar)) – Lookup for substitution scores (these values are added to the previous best alignment score). If provided, this overrides match_score and mismatch_score.
• penalize_terminal_gaps (bool, optional) – If True, will continue to penalize gaps even after one sequence has been aligned through its end. This behavior is true Needleman-Wunsch alignment, but results in (biologically irrelevant) artifacts when the sequences being aligned are of different length. This is False by default, which is very likely to be the behavior you want in all or nearly all cases.
Returns
TabularMSA object containing the aligned sequences, alignment score (float), and start/end positions of each input sequence (iterable of two-item tuples). Note that start/end positions are indexes into the unaligned sequences.
Return type
tuple
Notes
Default match_score, mismatch_score, gap_open_penalty and gap_extend_penalty parameters are derived from the NCBI BLAST Server 1.
This function can be used to align either a pair of sequences, a pair of alignments, or a sequence and an alignment.
References
1
http://blast.ncbi.nlm.nih.gov/Blast.cgi | 2020-08-08 23:34:57 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5064825415611267, "perplexity": 8315.347039806647}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439738366.27/warc/CC-MAIN-20200808224308-20200809014308-00237.warc.gz"} |
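To illustrate the Needleman-Wunsch recurrence behind the function documented above, here is a minimal scorer using the documented default match/mismatch scores but a simplified linear gap penalty of 2 (the real function uses affine open/extend penalties and returns the alignment itself, so its scores will differ; this is a sketch, not the scikit-bio implementation):

```python
def nw_score(seq1: str, seq2: str,
             match: int = 1, mismatch: int = -2, gap: int = 2) -> int:
    """Global alignment score, dynamic programming with a linear gap penalty."""
    n, m = len(seq1), len(seq2)
    S = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        S[i][0] = -gap * i          # terminal gaps are penalized here
    for j in range(1, m + 1):
        S[0][j] = -gap * j
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            s = match if seq1[i - 1] == seq2[j - 1] else mismatch
            S[i][j] = max(S[i - 1][j - 1] + s,   # align the two bases
                          S[i - 1][j] - gap,     # gap in seq2
                          S[i][j - 1] - gap)     # gap in seq1
    return S[n][m]

print(nw_score("ACGT", "ACGT"))  # 4 (four matches)
print(nw_score("ACGT", "AGT"))   # 1 (three matches, one gap)
```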
https://math.stackexchange.com/questions/2290442/paths-of-length-n-on-mathbbz2-given-fixed-endpoint-and-startpoint | # Paths of length n on $\mathbb{Z}^2$ given fixed endpoint and startpoint
Consider $\mathbb{Z}^2$ as an infinite grid whose nodes are its elements; choose a startpoint $x$ and an endpoint $y$.
Let $n$ be a natural number; we would like to know the number of paths from $x$ to $y$ of length exactly $n$.
For a path $p : \{1,...,n\} \to \mathbb{Z}^2$ to be a valid solution it has to satisfy following conditions:
1. $p(1) = x$ and $p(n) = y$.
2. The path is injective, meaning $p(i) \neq p(j)$ if $i \neq j$.
3. $p(i+1) = p(i) \pm (1,0)$ or $p(i+1) = p(i) \pm (0,1)$.
If we discard the second condition the problem was solved over here.
I do not understand why the given recursive form is a solution to the problem posed over there.
Perhaps understanding why that solution is correct would be helpful for this problem.
## example
Without loss of generality we can assume $x = (0,0)$; let $y = (1,1)$. Clearly there will be no paths of odd length.
The sequence for the first 10 even numbers is: $(2 , 4, 16, 76, 396, 2164, 12240, 71024, 20436, 2528780)$.
Graphed on a semilogarithmic scale (plot omitted), the sequence shows exponential-like growth.
(this of course is only an example for $y = (1,1)$ it would be nice to have a closed form for all $y$)
I tried some other examples too; they all seem to display exponential-like behavior when they are not zero. I cannot calculate a lot of terms, but it seems like it grows a bit faster than exponential: $t_n/t_{n-1}$ seems to grow.
For the example from (0,0) to (1,1), the sequence of successive ratios is approximately $(2, 4, 4.75, 5.21, 5.46, 5.66, 5.80, 5.92, 6.01)$.
It seems like the fractions possibly converge.
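The example sequence can be reproduced by brute force; this sketch counts paths by their number of unit steps (so a path on $n$ nodes corresponds to $n-1$ steps), enforcing conditions 1-3 from the question:

```python
def count_injective_paths(target, steps):
    """Brute-force count of self-avoiding lattice paths of `steps` unit
    moves from (0, 0) to `target`."""
    moves = [(1, 0), (-1, 0), (0, 1), (0, -1)]

    def extend(pos, visited, left):
        if left == 0:
            return 1 if pos == target else 0
        total = 0
        for dx, dy in moves:
            nxt = (pos[0] + dx, pos[1] + dy)
            if nxt not in visited:                       # injectivity
                total += extend(nxt, visited | {nxt}, left - 1)
        return total

    return extend((0, 0), {(0, 0)}, steps)

# First even step counts for y = (1, 1); should agree with the
# start of the sequence in the question.
print([count_injective_paths((1, 1), s) for s in (2, 4, 6)])
```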
## basecase
If we expect to prove an explicit or recursive form the proof will probably be with induction on $n$.
Here I prove the basecase: the number of different paths with minimal distance, the proof will be in the more general setting of $\mathbb{Z}^p$.
Let $x=(0,0, ... ,0)$, $y = (y_1, y_2, ... , y_p)$.
For a path $p:\{1,2, ...,d\} \to \mathbb{Z}^p$ from $x$ to $y$ to be of minimal length it is necessary and sufficient that $p$ is monotone in every component.
Since $p$ has to end in $y$: $\sum_i(\delta_i) = \sum_i(p(i+1) - p(i)) = y$.
Here $\delta_i \in \{\mbox{sign(}y_j)\cdot e_j | j \in \{1,2,...,p\}\}$
$e_j$ is the j'th basis vector for $\mathbb{Z}^p$ in the standard basis.
Every permutation of the $\delta_i$'s will result in an alternative path of minimal length from $x$ to $y$. Therefore the number of paths of minimal length is equal to $$\frac{(d-1)!}{\lvert y_1\rvert!\cdot \lvert y_2 \rvert!\cdot ...\cdot \lvert y_p \rvert!}$$ $$\tag*{\blacksquare}$$
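The closed form above can be checked numerically (a direct transcription; $d-1 = \sum_j \lvert y_j \rvert$ is the number of steps in a minimal path):

```python
from math import factorial

def minimal_path_count(y):
    """Number of minimal-length lattice paths from the origin to y in Z^p:
    (d-1)! / (|y_1|! ... |y_p|!), where d-1 = sum|y_j| is the step count."""
    steps = sum(abs(c) for c in y)          # d - 1 in the text's notation
    denom = 1
    for c in y:
        denom *= factorial(abs(c))
    return factorial(steps) // denom

print(minimal_path_count((1, 1)))   # 2  (RU and UR)
print(minimal_path_count((2, 1)))   # 3
print(minimal_path_count((3, 2)))   # 10
```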
• What conditions do we have to impose on a set of n-2 nodes so that there is exactly one path from x to y of length n going through the nodes? Does every path satisfy the condition that there is exactly one path through the collection of nodes? If the answer to the second question is true, the sets satisfying the first question map one-to-one onto paths; therefore there will be equally many of them as there are paths of length n. (I could not find any counterexamples to the second question) – A. Van Werde May 23 '17 at 12:00
• never mind: counterexample – A. Van Werde May 23 '17 at 12:07
There is good news and bad news. The good news is that there is a nice formula for the number of general paths in $\mathbb{Z}^2$ from one point to another as a sum of multinomial coefficients, as opposed to the recursive formula that you linked to. Also, it's straightforward to generalize this formula to higher dimensions. The bad news is that enumeration of injective paths probably requires the inclusion-exclusion principle, so the resulting formula won't be nearly as nice.
For general paths, let $x$ and $y$ be points in $\mathbb{Z}^2$, let the (signed) horizontal and vertical displacements between $x$ and $y$ be $h$ and $v$ (in $\mathbb{Z}$), and let $P_{x,y,n}$ be the number of paths of length $n$ between points $x$ and $y$. A path is equivalent to a sequence of moves to the left, right, up or down. Let $l, r, u, d \ge0$ be the number of such moves, respectively. Then, considering the total number of moves, $$l+r+u+d=n$$ and considering the horizontal and vertical displacements $$r-l=h,\quad u-d=v.$$ If $l,r,u,d$ satisfy these conditions, there are exactly the multinomial coefficient $\binom{n}{l,r,u,d}$ paths from $x$ to $y$ in $n$ steps. So, $$P_{x,y,n}=\sum_{l,r,u,d}\binom{n}{l,r,u,d},$$ where the indices of the sum are non-negative integers satisfying the three conditions. You can solve the system of three equations in four variables to rewrite the sum with a single index, for instance $$P_{x,y,n}=\sum_{l\ge0}\binom{n}{l,l+h,\frac{n-h-v}2-l,\frac{n-h+v}2-l}.$$
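Both the multinomial-sum formula above and a brute-force count can be transcribed directly and compared (small $n$ only, since the brute force enumerates all $4^n$ walks):

```python
from math import factorial
from itertools import product

def multinomial(n, parts):
    assert sum(parts) == n
    out = factorial(n)
    for p in parts:
        out //= factorial(p)
    return out

def walks_formula(h, v, n):
    """P_{x,y,n} from the answer: sum of multinomials over l >= 0."""
    total = 0
    for l in range(n + 1):
        r = l + h
        d2 = (n - h - v) - 2 * l      # 2*d, kept doubled to stay integral
        u2 = (n - h + v) - 2 * l      # 2*u
        if r < 0 or d2 < 0 or u2 < 0 or d2 % 2 or u2 % 2:
            continue
        total += multinomial(n, (l, r, d2 // 2, u2 // 2))
    return total

def walks_brute(h, v, n):
    """Enumerate all 4^n walks and keep those with displacement (h, v)."""
    moves = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    return sum(1 for w in product(moves, repeat=n)
               if (sum(m[0] for m in w), sum(m[1] for m in w)) == (h, v))

print(walks_formula(1, 1, 2))                           # 2 (RU and UR)
print(walks_formula(1, 1, 4) == walks_brute(1, 1, 4))   # True
```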
Now for the case of injective paths, we can use the inclusion-exclusion principle. From the collection of general paths, subtract those that have at least one point of intersection (in time and space), leaving $P_{x,y,n}-\sum_{m,z}P_{x,z,m}P_{z,y,n-m}.$ However, this subtracts paths with at least two points of intersection twice, so we have to add these back in, leaving $$P_{x,y,n}-\sum_{m,z}P_{x,z,m}P_{z,y,n-m}+ \sum_{m_1,m_2,z_1,z_2}P_{x,z_1,m_1}P_{z_1,z_2,m_2}P_{z_2,y,n-m_1-m_2}-\cdots,$$ etc. If there is a more direct enumeration of the injective paths avoiding the use of the inclusion-exclusion principle, I'd be surprised (but pleasantly so).
• Sorry, my intention was that apart from having as possible moves $\pm (0,1)$ and $\pm (1,0)$ also having $\pm (1,1)$ and $\pm(-1,1)$,( possibly generalising to a set of random moves of the form $(\delta_x,\delta_y)$.) – A. Van Werde May 29 '17 at 17:59 | 2020-04-01 12:06:12 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9029676914215088, "perplexity": 163.3724513772153}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370505730.14/warc/CC-MAIN-20200401100029-20200401130029-00184.warc.gz"} |
https://www.newworldencyclopedia.org/entry/John_von_Neumann | # John von Neumann
John von Neumann in the 1940s
Born December 28, 1903, Budapest, Austria-Hungary
Died February 8, 1957 (aged 53)
Residence United States
Nationality American
Field Mathematics
Institutions University of Berlin
Site Y, Los Alamos
Alma mater University of Pázmány Péter
ETH Zurich
Notable students Donald B. Gillies
Known for Game theory
Von Neumann algebras
Von Neumann architecture
Cellular automata
Notable prizes Enrico Fermi Award 1956
Religious stance Converted Roman Catholic; previously agnostic; born to a non-practicing Jewish family
John von Neumann (Hungarian: Margittai Neumann János Lajos) (December 28, 1903 – February 8, 1957) was a mathematician who made contributions to quantum physics, functional analysis, set theory, topology, economics, computer science, numerical analysis, hydrodynamics (of explosions), statistics, and many other mathematical fields, and ranks as one of history's outstanding mathematicians.[1] Most notably, von Neumann was a pioneer of the application of operator theory to quantum mechanics (see von Neumann algebra), a member of the Manhattan Project and the Institute for Advanced Study at Princeton (as one of the few originally appointed — a group collectively referred to as the "demi-gods"), and the co-creator of game theory and the concepts of cellular automata and the universal constructor. Along with Edward Teller and Stanislaw Ulam, von Neumann worked out key steps in the nuclear physics involved in thermonuclear reactions and the hydrogen bomb.
## Biography
### Early years
The oldest of three brothers, von Neumann was born Neumann János Lajos (in Hungarian the family name comes first) in Budapest, Hungary, to a Jewish family. His father was Neumann Miksa (Max Neumann), a lawyer who worked in a bank. His mother was Kann Margit (Margaret Kann).
János, nicknamed "Jancsi" (Johnny), was an extraordinary prodigy. At the age of only six, he was able to divide two 8-digit numbers in his head.
He entered the German-speaking Lutheran Gymnasium in Budapest in 1911. In 1913 his father was rewarded with ennoblement for his service to the Austro-Hungarian Empire, and the Neumann family acquired the Hungarian mark of nobility Margittai (the Austrian equivalent being von). Neumann János therefore became János von Neumann, a name he later changed to the German Johann von Neumann. After serving as history's youngest Privatdozent at the University of Berlin from 1926 to 1930, he emigrated to the United States with his mother and brothers in the early 1930s, after Hitler's rise to power in Germany. He anglicized Johann to John but kept the aristocratic surname von Neumann, whereas his brothers adopted the surnames Vonneumann and Neumann (briefly using the form de Neumann when they first arrived in the US).
Although von Neumann unfailingly dressed formally, he enjoyed throwing extravagant parties and driving hazardously (frequently while reading a book, and sometimes crashing into a tree or getting arrested).[2] He once reported one of his many car accidents in this way: "I was proceeding down the road. The trees on the right were passing me in orderly fashion at 60 miles per hour. Suddenly one of them stepped in my path."[3] He was a profoundly committed hedonist who liked to eat and drink heavily (it was said that he knew how to count everything except calories), [4] and persistently gaze at the legs of young women (so much so that female secretaries at Los Alamos often covered up the exposed undersides of their desks with cardboard).[5]
### Higher education, years in Germany
He received his Ph.D. in mathematics (with minors in experimental physics and chemistry) from the University of Budapest at the age of 23. He simultaneously earned his diploma in chemical engineering from the ETH Zurich in Switzerland at the behest of his father, who wanted his son to invest his time in a more financially viable endeavor than mathematics. Between 1926 and 1930 he was a private lecturer in Berlin, Germany.
By age 25 he had published 10 major papers, and by age 30, nearly 36.[6]
### Years at Princeton University
Von Neumann was invited to Princeton, New Jersey in 1930, and was one of four people selected for the first faculty of the Institute for Advanced Study (two of the others were Albert Einstein and Kurt Gödel), where he was a mathematics professor from its formation in 1933 until his death.
From 1936 to 1938 Alan Turing was a visitor at the Institute, where he completed a Ph.D. dissertation under the supervision of Alonzo Church at Princeton. This visit occurred shortly after Turing's publication of his 1936 paper "On Computable Numbers, with an Application to the Entscheidungsproblem," which involved the concepts of logical design and the universal machine. Von Neumann must have known of Turing's ideas, but it is not clear whether he applied them to the design of the IAS machine ten years later.
In 1937 he became a naturalized citizen of the United States. In 1938 von Neumann was awarded the Bôcher Memorial Prize for his work in analysis.
### Marriage and family
Von Neumann married twice. He married Mariette Kövesi in 1930. When he proposed to her, he was incapable of expressing anything beyond "You and I might be able to have some fun together, seeing as how we both like to drink." [7] Von Neumann agreed to convert to Catholicism in order to marry and remained a Catholic until his death. The couple divorced in 1937. He then married Klara Dan in 1938. Von Neumann had one child, by his first marriage, a daughter named Marina. She is a distinguished professor of international trade and public policy at the University of Michigan.
### Cancer and death
Von Neumann was diagnosed with bone cancer or pancreatic cancer in 1955, possibly caused by exposure to radioactivity while observing A-bomb tests in the Pacific or in later work on nuclear weapons at Los Alamos, New Mexico. (Fellow nuclear pioneer Enrico Fermi had died of stomach cancer in 1954.) Von Neumann died in excruciating pain; the cancer had spread to his brain, inhibiting his mental ability. While at Walter Reed Hospital in Washington, D.C., he invited a Roman Catholic priest, Father Anselm Strittmatter, who administered the last sacraments to him.[8] He died under military security lest he reveal military secrets while heavily medicated. John von Neumann was buried at Princeton Cemetery in Princeton, Mercer County, New Jersey.
He wrote 150 published papers in his life; 60 in pure mathematics, 20 in physics, and 60 in applied mathematics. He was developing a theory of the structure of the human brain before he died.
### Controversial notions
Von Neumann entertained notions which would now trouble many. His love for meteorological prediction led him to dream of manipulating the environment by spreading colorants on the polar ice caps in order to enhance absorption of solar radiation (by reducing the albedo) and thereby raise global temperatures. He also favored a preemptive nuclear attack on the USSR, believing that doing so could prevent it from obtaining the atomic bomb.[9][10]
## Logic
The axiomatization of set theory had been resolved (by Ernst Zermelo and Abraham Fraenkel) by way of a series of principles that allowed for the construction of all sets used in the actual practice of mathematics, but these principles did not explicitly exclude the possibility of sets that belong to themselves. In his doctoral thesis of 1925, von Neumann demonstrated how it was possible to exclude this possibility in two complementary ways: the axiom of foundation and the notion of class.[11]
In order to demonstrate that the addition of this new axiom to the others did not produce contradictions, von Neumann introduced a method of demonstration (called the method of inner models), which later became an essential instrument in set theory. Under the von Neumann approach, the class of all sets which do not belong to themselves can be constructed, but it is a proper class and not a set.
With this contribution of von Neumann, the axiomatic system of the theory of sets became fully satisfactory.
## Quantum mechanics
After having completed the axiomatization of set theory, von Neumann began to confront the axiomatization of quantum mechanics.[12] He immediately realized, in 1926, that a quantum system could be considered as a point in a so-called Hilbert space, analogous to the 6N-dimensional phase space of classical mechanics (N is the number of particles, with 3 general coordinates and 3 canonical momenta for each) but with infinitely many dimensions, corresponding to the infinitely many possible states of the system. The traditional physical quantities (e.g., position and momentum) could therefore be represented as particular linear operators acting on these spaces. The physics of quantum mechanics was thereby reduced to the mathematics of linear Hermitian operators on Hilbert spaces. For example, the famous uncertainty principle of Heisenberg, according to which the determination of the position of a particle prevents the determination of its momentum and vice versa, is translated into the non-commutativity of the two corresponding operators. This new mathematical formulation included as special cases the formulations of both Heisenberg and Schrödinger, and culminated in the 1932 classic The Mathematical Foundations of Quantum Mechanics. However, physicists generally ended up preferring another approach to that of von Neumann (which was considered elegant and satisfactory by mathematicians), formulated in 1930 by Paul Dirac.
In any case, von Neumann's abstract treatment permitted him also to confront the foundational issue of determinism vs. non-determinism and in the book he demonstrated a theorem according to which quantum mechanics could not possibly be derived by statistical approximation from a deterministic theory of the type used in classical mechanics. This demonstration contained a conceptual error, but it helped to inaugurate a line of research which, through the work of John Stuart Bell in 1964 on Bell's Theorem and the experiments of Alain Aspect in 1982, demonstrated that quantum physics requires a notion of reality substantially different from that of classical physics.
In a complementary work of 1936, von Neumann proved (along with Garrett Birkhoff) that quantum mechanics also requires a logic substantially different from the classical one. For example, light (photons) cannot pass through two successive filters which are polarized perpendicularly (e.g., one horizontally and the other vertically), and therefore, a fortiori, it cannot pass if a third filter polarized diagonally is added to the other two, either before or after them in the succession. But if the third filter is added in between the other two, the photons will indeed pass through.
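The three-polarizer effect described above can be checked with a little linear algebra. The sketch below is my own illustration, not drawn from Birkhoff and von Neumann's paper: polarization is represented as a 2-vector, each filter as a projection matrix, and transmitted intensity as the squared amplitude.

```python
import math

def projector(theta):
    # projection onto the polarization axis at angle theta
    c, s = math.cos(theta), math.sin(theta)
    return [[c * c, c * s], [c * s, s * s]]

def apply(m, v):
    return [m[0][0] * v[0] + m[0][1] * v[1],
            m[1][0] * v[0] + m[1][1] * v[1]]

def intensity(v):
    return v[0] ** 2 + v[1] ** 2

h = [1.0, 0.0]                       # horizontally polarized light
vert = projector(math.pi / 2)        # vertical filter
diag = projector(math.pi / 4)        # diagonal filter

blocked = intensity(apply(vert, h))               # H then V: nothing passes
passed = intensity(apply(vert, apply(diag, h)))   # H, diagonal, V: light again
print(blocked, passed)  # roughly 0.0 and 0.25
```

Inserting the diagonal filter between the crossed pair lets a quarter of the light through, exactly the counterintuitive behavior the text describes.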
## Economics
Up until the 1930s, economics involved a great deal of mathematics and numbers, but almost all of this was either superficial or irrelevant. It was used, for the most part, to provide uselessly precise formulations and solutions to problems which were intrinsically vague. Economics found itself in a state similar to that of physics of the seventeenth century: still waiting for the development of an appropriate language in which to express and resolve its problems. While physics had found its language in the infinitesimal calculus, von Neumann proposed the language of game theory and a general equilibrium theory for economics.
His first significant contribution was the minimax theorem of 1928. This theorem establishes that in certain zero sum games involving perfect information (in which players know a priori the strategies of their opponents as well as their consequences), there exists one strategy which allows both players to minimize their maximum losses (hence the name minimax). When examining every possible strategy, a player must consider all the possible responses of the player's adversary and the maximum loss. The player then plays out the strategy which will result in the minimization of this maximum loss. Such a strategy, which minimizes the maximum loss, is called optimal for both players just in case their minimaxes are equal (in absolute value) and contrary (in sign). If the common value is zero, the game becomes pointless.
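For a game that happens to have a pure-strategy saddle point, the minimax reasoning can be checked directly. The payoff matrix below is invented here purely for illustration: the row player's maximin equals the column player's minimax, and that common value is the value of the game.

```python
# payoffs to the row player in a zero-sum game (illustrative numbers)
A = [[1, 2],
     [0, -1]]

# row player: for each strategy, assume the worst response, then pick the best row
maximin = max(min(row) for row in A)

# column player: for each column, assume the worst (largest) payout, pick the best column
minimax = min(max(col) for col in zip(*A))

print(maximin, minimax)  # equal -> a saddle point; the value of this game is 1
```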
Von Neumann eventually improved and extended the minimax theorem to include games involving imperfect information and games with more than two players. This work culminated in the 1944 classic Theory of Games and Economic Behavior (written with Oskar Morgenstern). This resulted in such public attention that The New York Times did a front page story, the likes of which only Einstein had previously earned.
Von Neumann's second important contribution in this area was the solution, in 1937, of a problem first described by Léon Walras in 1874: the existence of situations of equilibrium in mathematical models of market development based on supply and demand. He first recognized that such a model should be expressed through inequalities and not equations, and then he found a solution to Walras's problem by applying a fixed-point theorem derived from the work of Luitzen Brouwer. The lasting importance of the work on general equilibria and the methodology of fixed-point theorems is underscored by the awarding of Nobel prizes in 1972 to Kenneth Arrow and, in 1983, to Gerard Debreu.
Von Neumann was also the inventor of the method of proof, used in game theory, known as backward induction (which he first published in 1944 in the book co-authored with Morgenstern, Theory of Games and Economic Behaviour).[13]
## Armaments
John von Neumann's wartime Los Alamos ID badge photo.
After obtaining U.S. citizenship, von Neumann took an interest in 1937 in applied mathematics, and then developed an expertise in explosives. This led him to a large number of military consultancies, primarily for the Navy, which in turn led to his involvement in the Manhattan Project. The involvement included frequent trips by train to the project's secret research facilities in Los Alamos, New Mexico.
Von Neumann took part in the design of the explosive lenses needed to compress the plutonium core of the Trinity test device and the "Fat Man" weapon that was later dropped on Nagasaki. The lens shape design work was completed by July 1944.
In a visit to Los Alamos in September 1944, von Neumann showed that the pressure increase from explosion shock wave reflection from solid objects was greater than previously believed if the angle of incidence of the shock wave was between 90° and some limiting angle. As a result, it was determined that the effectiveness of an atomic bomb would be enhanced with detonation some kilometers above the target, rather than at ground level.[14]
Beginning in the spring of 1945, along with four other scientists and various military personnel, von Neumann was included in the target selection committee responsible for choosing the Japanese cities of Hiroshima and Nagasaki as the first targets of the atomic bomb. Von Neumann oversaw computations related to the expected size of the bomb blasts, estimated death tolls, and the distance above the ground at which the bombs should be detonated for optimum shock wave propagation and thus maximum effect.[15] The cultural capital Kyoto, which had been spared the firebombing inflicted upon militarily significant target cities like Tokyo in World War II, was von Neumann's first choice, a selection seconded by Manhattan Project leader General Leslie Groves, but this target was dismissed by Secretary of War Henry Stimson, who had been impressed with the city during a visit while Governor General of the Philippines.[16]
On July 16, 1945, with numerous other Los Alamos personnel, von Neumann was an eyewitness to the first atomic bomb blast, conducted as a test of the implosion method device, 35 miles (56 km) southeast of Socorro, New Mexico. Based on his observation alone, von Neumann estimated the test had resulted in a blast equivalent to 5 kilotons of TNT, but Enrico Fermi produced a more accurate estimate of 10 kilotons by littering scraps of torn-up paper as the shock wave passed his location and watching how far they scattered. The actual power of the explosion had been between 20 and 22 kilotons.[14]
After the war, Robert Oppenheimer remarked that the physicists involved in the Manhattan Project had "known sin." Von Neumann's rather arch response was that "sometimes someone confesses a sin in order to take credit for it."
Von Neumann continued unperturbed in his work and became, along with Edward Teller, one of the main supporters of the hydrogen bomb project. He collaborated with Klaus Fuchs (later exposed as a spy) on further development of the bomb, and in 1946 the two filed a secret patent on "Improvement in Methods and Means for Utilizing Nuclear Energy," which outlined a scheme for using a fission bomb to compress fusion fuel to initiate a thermonuclear reaction.[17] Though this was not the key to the hydrogen bomb (the Teller-Ulam design), it was judged to be a move in the right direction.
## Computer science
Von Neumann's hydrogen bomb work also played out in the realm of computing, where he and Stanislaw Ulam developed simulations on von Neumann's digital computers for the hydrodynamic computations. During this time he contributed to the development of the Monte Carlo method, which allowed complicated problems to be approximated using random numbers. Because using lists of "truly" random numbers was extremely slow on the ENIAC, von Neumann developed a way of generating pseudorandom numbers, using the middle-square method. Though this method has been criticized as crude, von Neumann was aware of this: he justified it as being faster than any other method at his disposal, and also noted that when it went awry it did so obviously, unlike methods which could be subtly incorrect.
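The middle-square method itself is only a few lines: square the current value and keep the middle digits as both output and next seed. A minimal 4-digit sketch (the word size is chosen here for illustration; von Neumann worked with longer words on ENIAC):

```python
def middle_square(seed, n):
    """Generate n pseudorandom 4-digit numbers by the middle-square method."""
    x = seed
    out = []
    for _ in range(n):
        # squaring gives up to 8 digits; keep the middle 4 as the next state
        x = (x * x) // 100 % 10000
        out.append(x)
    return out

print(middle_square(1234, 5))  # → [5227, 3215, 3362, 3030, 1809]
```

The crudeness the text mentions is visible in practice: many seeds quickly fall into short cycles or collapse to zero, which is also how the method fails "obviously."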
While consulting for the Moore School of Electrical Engineering on the EDVAC project, von Neumann wrote an incomplete set of notes titled the First Draft of a Report on the EDVAC. The paper, which was widely distributed, described a computer architecture in which data and program memory are mapped into the same address space. This architecture became the de facto standard and can be contrasted with a so-called Harvard architecture, which has separate program and data memories on a separate bus. Although the single-memory architecture became commonly known by the name von Neumann architecture as a result of von Neumann's paper, the architecture's conception involved the contributions of others, including J. Presper Eckert and John William Mauchly, inventors of the ENIAC at the University of Pennsylvania.[18] With very few exceptions, all present-day home computers, microcomputers, minicomputers and mainframe computers use this single-memory computer architecture.
Von Neumann also created the field of cellular automata without the aid of computers, constructing the first self-replicating automata with pencil and graph paper. The concept of a universal constructor was fleshed out in his posthumous work Theory of Self-Reproducing Automata. Von Neumann argued that the most effective way of performing large-scale mining operations, such as mining an entire moon or asteroid belt, would be to use self-replicating machines, taking advantage of their exponential growth.
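Von Neumann's self-reproducing automaton used 29 cell states, far too large to reproduce here, but the underlying idea (a grid of cells updated by a purely local rule) fits in a few lines. A deliberately tiny stand-in, using elementary rule 90 rather than von Neumann's rule:

```python
def step(cells):
    """One update of a 1-D binary automaton: each cell becomes the XOR
    of its two neighbors (elementary rule 90), with 0s beyond the edges."""
    padded = [0] + cells + [0]
    return [padded[i - 1] ^ padded[i + 1] for i in range(1, len(padded) - 1)]

row = [0, 0, 0, 1, 0, 0, 0]
for _ in range(3):
    print(row)
    row = step(row)
```

Even this two-state, one-dimensional rule produces intricate global patterns from purely local updates, which is the phenomenon the field studies.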
He is credited with at least one contribution to the study of algorithms. Donald Knuth cites von Neumann as the inventor, in 1945, of the merge sort algorithm, in which the first and second halves of an array are each sorted recursively and then merged together.[19] His algorithm for simulating a fair coin with a biased coin[20] is used in the "software whitening" stage of some hardware random number generators.
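That fair-coin trick is simple enough to state in code: read the biased bits in pairs, output 0 for the pair 01 and 1 for 10, and discard 00 and 11. Since P(01) = P(10) for independent flips, the output bits are unbiased whatever the coin's bias.

```python
def von_neumann_extract(bits):
    """Debias a stream of independent coin flips (von Neumann's trick)."""
    out = []
    for a, b in zip(bits[0::2], bits[1::2]):
        if a != b:        # 01 -> 0, 10 -> 1; equal pairs carry no information
            out.append(a)
    return out

print(von_neumann_extract([0, 1, 1, 1, 1, 0, 0, 0, 0, 1]))  # → [0, 1, 0]
```

This pair-discarding step is essentially what the "software whitening" stage of a hardware random number generator does with a physically biased bit source.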
He also engaged in exploration of problems in numerical hydrodynamics. With R. D. Richtmyer he developed an algorithm defining artificial viscosity that improved the understanding of shock waves. It is possible that we would not understand much of astrophysics, and might not have highly developed jet and rocket engines, without that work. The problem was that real shock waves are far thinner than any affordable computational grid, so when computers solve hydrodynamic or aerodynamic problems they develop spurious behavior at these sharp discontinuities. Artificial viscosity was a mathematical trick to slightly smooth the shock transition without sacrificing basic physics.
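A toy illustration of the dissipation idea: the sketch below is a Lax-Friedrichs discretization of Burgers' equation, whose built-in numerical viscosity plays the same smoothing role. It is a stand-in for, not a reproduction of, the von Neumann-Richtmyer scheme. A step-like shock profile gets smeared over a few grid cells instead of producing oscillations.

```python
# inviscid Burgers' equation u_t + (u^2/2)_x = 0 with a step initial condition
N, dt, dx = 200, 0.002, 0.005
u = [1.0] * (N // 2) + [0.0] * (N // 2)

def flux(v):
    return 0.5 * v * v

for _ in range(100):
    new = u[:]
    for i in range(1, N - 1):
        # Lax-Friedrichs: centered flux plus a diffusive average of the neighbors
        new[i] = 0.5 * (u[i - 1] + u[i + 1]) \
                 - dt / (2 * dx) * (flux(u[i + 1]) - flux(u[i - 1]))
    u = new

# the shock is smoothed: several cells now hold intermediate values,
# and the solution stays within the initial bounds (no spurious overshoot)
transition = sum(1 for v in u if 0.05 < v < 0.95)
print(transition, min(u), max(u))
```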
## Politics and social affairs
Von Neumann obtained at the age of 29 one of the first five professorships at the new Institute for Advanced Study in Princeton, New Jersey (another had gone to Albert Einstein). He was a frequent consultant for the Central Intelligence Agency, the United States Army, the RAND Corporation, Standard Oil, IBM, and others.
During a Senate committee hearing he described his political ideology as "violently anti-communist, and much more militaristic than the norm." As President of the Von Neumann Committee for Missiles at first, and later as a member of the United States Atomic Energy Commission, starting from 1953 up until his death in 1957, he was influential in setting U.S. scientific and military policy. Through his committee, he developed various scenarios of nuclear proliferation, the development of intercontinental and submarine missiles with atomic warheads, and the controversial strategic equilibrium called mutual assured destruction (aka the M.A.D. doctrine).
## Honors
The John von Neumann Theory Prize of the Institute for Operations Research and the Management Sciences (INFORMS, previously TIMS-ORSA) is awarded annually to an individual (or group) who has made fundamental and sustained contributions to theory in operations research and the management sciences.
The IEEE John von Neumann Medal is awarded annually by the IEEE "for outstanding achievements in computer-related science and technology."
The John von Neumann Lecture is given annually at the Society for Industrial and Applied Mathematics (SIAM) by a researcher who has contributed to applied mathematics, and the chosen lecturer is also awarded a monetary prize.
The lunar crater Von Neumann is named after him.
The John von Neumann Computing Center in Princeton, New Jersey was named in his honor. [6]
The professional society of Hungarian computer scientists, Neumann János Számítógéptudományi Társaság, is named after John von Neumann.
On May 4, 2005 the United States Postal Service issued the American Scientists commemorative postage stamp series, a set of four 37-cent self-adhesive stamps in several configurations. The scientists depicted were John von Neumann, Barbara McClintock, Josiah Willard Gibbs, and Richard Feynman.
The John von Neumann Award of the Rajk László College for Advanced Studies was named in his honor and has been given every year since 1995 to professors who have made an outstanding contribution in the field of the exact social sciences and, through their work, have strongly influenced the professional development and thinking of the members of the college.
## Notes
1. John von Neumann. MSN Encarta. Retrieved November 17, 2007.
2. [1] Quotes of Wisdom. Retrieved November 17, 2007.
3. John von Neumann. Bellevue Community College. Retrieved November 17, 2007.
4. rightcoast.typepad.com, 2007/02/john_von_neuman.html. Retrieved November 17, 2007.
6. [2] reference.com. Retrieved November 17, 2007.
7. [3] Retrieved November 17, 2007.
8. P.R. Halmos, 1973. "The Legend of Von Neumann." The American Mathematical Monthly 80(4): 382-394.
9. Norman Macrae. John von Neumann: The Scientific Genius Who Pioneered the Modern Computer, Game Theory, Nuclear Deterrence, and Much More. (New York, NY: Pantheon Press 1992).
10. Steve J. Heims. John von Neumann and Norbert Wiener, from Mathematics to the Technologies of Life and Death. (Cambridge, MA: MIT Press. 1980).
11. [4] Retrieved November 17, 2007.
12. [5] Retrieved November 17, 2007.
13. John MacQuarrie, Mathematics and Chess. University of St Andrews, Scotland. "Others claim he used a method of proof, known as 'backwards induction' that was not employed until 1953, by von Neumann and Morgenstern. Ken Binmore (1992) writes, Zermelo used this method way back in 1912 to analyze Chess. It requires starting from the end of the game and then working backwards to its beginning". Retrieved November 17, 2007.
14. 14.0 14.1 Lillian Hoddeson, Paul W. Henriksen, Roger A. Meade, Catherine Westfall. 1993. Critical Assembly: A Technical History of Los Alamos during the Oppenheimer Years, 1943-1945. (Cambridge, UK: Cambridge University Press. ISBN 0521441323).
15. Richard Rhodes. 1986. The Making of the Atomic Bomb. (New York, NY: Touchstone (Simon & Schuster). ISBN 0684813785).
16. Leslie Groves. 1962. Now It Can Be Told: The Story of the Manhattan Project. (New York, NY: Da Capo. ISBN 0306801892).
17. Gregg Herken. Brotherhood of the Bomb: The Tangled Lives and Loyalties of Robert Oppenheimer, Ernest Lawrence, and Edward Teller. (New York, NY: Henry Holt and Co., 2002), 171, 374
18. John W. Mauchly and the Development of the ENIAC Computer. Penn Libraries. Retrieved November 17, 2007.
19. Donald Knuth, 1998. The Art of Computer Programming: Volume 3 Sorting and Searching. (Reading, MA: Addison-Wesley. ISBN 0201896850).
20. John von Neumann, 1951. Various techniques used in connection with random digits. National Bureau of Standards Applied Math Series 12:36.
## References
• Groves, Leslie. 1962. Now It Can Be Told: The Story of the Manhattan Project. New York, NY: Da Capo. ISBN 0306801892
• Heims, Steve J. 1980. John von Neumann and Norbert Wiener, from Mathematics to the Technologies of Life and Death. Cambridge, MA: MIT Press. ISBN 0262081059.
• Herken, Gregg. 2002. Brotherhood of the Bomb: The Tangled Lives and Loyalties of Robert Oppenheimer, Ernest Lawrence, and Edward Teller. New York, NY: Henry Holt and Co. ISBN 0805065881.
• Knuth, Donald. 1998. The Art of Computer Programming: Volume 3 Sorting and Searching. Reading, MA: Addison-Wesley. ISBN 0201896850
• Hoddeson, Lillian, Paul W. Henriksen, Roger A. Meade, and Catherine Westfall. 1993. Critical Assembly: A Technical History of Los Alamos during the Oppenheimer Years, 1943-1945. Cambridge, UK, and New York: Cambridge University Press. ISBN 0521441323.
• Macrae, Norman. 1992. John von Neumann: The Scientific Genius Who Pioneered the Modern Computer, Game Theory, Nuclear Deterrence, and Much More. New York, NY: Pantheon Press. ISBN 0679413081.
• Rhodes, Richard. 1986. The Making of the Atomic Bomb. New York, NY: Touchstone (Simon & Schuster). ISBN 0684813785.
• Slater, Robert. 1987. Portraits in Silicon. Cambridge, MA: MIT Press. ISBN 0262691310.
• van Heijenoort, Jean. 1967. A Source Book in Mathematical Logic, 1879-1931. Cambridge, MA: Harvard Univ. Press.
• von Neumann, John, R.T. Beyer, trans. 1996. Mathematical Foundations of Quantum Mechanics. Princeton, NJ: Princeton University Press. ISBN 0691028931.
• von Neumann, John, and Oskar Morgenstern. 1944. Theory of Games and Economic Behavior. Princeton, NJ: Princeton University Press.
• von Neumann, John, edited and completed by Arthur W. Burks. 1966. Theory of Self-Reproducing Automata. Univ. of Illinois Press. ASIN: B00178N9MU
Secondary:
• Aspray, William. 1990. John von Neumann and the Origins of Modern Computing. Cambridge, MA: MIT Press. ISBN 0262011212.
• Goldstine, Herman. 1993. The Computer from Pascal to von Neumann. Princeton, NJ: Princeton University Press. ISBN 0691023670.
• Hashagen, Ulf. 2006. Johann Ludwig Neumann von Margitta (1903-1957). Teil 1: Lehrjahre eines jüdischen Mathematikers während der Zeit der Weimarer Republik. Informatik-Spektrum 29(2):133-141.
• Hashagen, Ulf. 2006: Johann Ludwig Neumann von Margitta (1903-1957). Teil 2: Ein Privatdozent auf dem Weg von Berlin nach Princeton. Informatik-Spektrum 29(3):227-236.
• Macrae, Norman. 1999. John von Neumann: The Scientific Genius Who Pioneered the Modern Computer, Game Theory, Nuclear Deterrence, and Much More. Providence, RI: American Mathematical Society. ISBN 0821820648.
• Poundstone, William. 1992. Prisoner's Dilemma: John von Neumann, Game Theory and the Puzzle of the Bomb. New York, NY: Doubleday. ISBN 0385415672.
http://ifcuriousthenlearn.com/blog/2017/05/29/brick-stacking/

The original question I posed to myself is this. Can you stack bricks of equal dimensions and mass such that the top-most brick does not overlap the bottom-most brick? Or more generally, how far can bricks be stacked to generate the largest overhang?
I pondered the problem and deliberately avoided finding solutions. I struggled through a few hand-calculations and hacked out a bit of python code to visualize my solution.
The math of the problem is seemingly simple, just basic statics, but it becomes tricky as the problem is worked. We eventually find that the bricks stack according to a harmonic series!
$overhang = \sum_{i=1}^{n} \frac{1}{2i}$
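Before any plotting, the series itself answers the original question. A quick check using the formula above, with overhang measured in brick lengths: the running offset of the top brick relative to the base first exceeds one full brick length at the fourth term of the series.

```python
def overhang(n):
    """Total horizontal offset, in brick lengths, after n shifts (n bricks above the base)."""
    return sum(1 / (2 * i) for i in range(1, n + 1))

for n in (1, 2, 3, 4):
    print(n, overhang(n))
# the sum first exceeds 1 at n = 4, so the top brick can clear the base entirely
```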
OK, let's get started. First run the code to create the brick_balance function, which accepts the total number of bricks, the brick length, and the brick thickness; then we can explore the solutions.
In [1]:
import matplotlib.pyplot as plt
import matplotlib.patches as patches
import sys

def brick_balance(ntot=52, L=1, t=0.1):
    '''
    This program calculates bricks stacked on one another to maximize overhang

    # number of bricks, minimum 2
    ntot = 7
    # length of brick
    L = 1
    # thickness of brick
    t = 0.15
    '''
    # force variables to be integers and floats
    ntot = int(ntot)
    L = float(L)
    t = float(t)
    plt.close('all')
    # overhang of each brick relative to the brick below it
    v = [1/(2*k) for k in range(ntot, 0, -1)]
    #print('overhang=', v)
    # cumulative x location of each brick's center
    x = [sum(v[0:n]) for n in range(1, ntot+1)]
    #print('x location = ', x)
    x.insert(0, 0)
    x = [x1*L for x1 in x]
    y = [z*t + t/2 for z in range(ntot+1)]
    # center of gravity
    C = sum(x[1:])/ntot
    maxoverhang = max(x) + L/2 - C
    # plotting: brick centers plus a dashed line at the center of gravity
    fig2 = plt.figure()
    plt.plot(x, y, '-o', [C, C], [0, max(y)], '--')
    plt.axis('equal')
    plt.title('n={}, L={}", t={}", max overhang={:0.2f}"'.format(ntot, L, t, maxoverhang))
    plt.ylabel('height, in')
    plt.show()

if __name__ == '__main__':
    ## can use as a command-line tool by uncommenting the following lines
    #user_args = sys.argv[1:]
    #n, L, t = user_args
    #brick_balance(ntot=4, L=1, t=0.1)
    print('brick_balance function created')
brick_balance function created
The simplest example is 2 bricks. Adults and children alike know that you cannot slide the top brick more than half-way along its length before it topples off. For a brick of length 1 inch, the max overhang is 0.5 inches. (Note that ntot counts the bricks stacked on top of the base brick, so ntot=1 draws a two-brick stack.)
In [2]:
brick_balance(ntot=1, L=1, t=0.1)
So far so good. Now let's see if we can answer our question of whether it is possible for the top brick to clear the bottom brick. After playing around with the total number of bricks, we arrive at 4 bricks. This solution is independent of brick length, thickness and weight! I thought this was very interesting!
In [3]:
brick_balance(ntot=4, L=1, t=0.1)
Lastly, I was curious about how far you could theoretically overhang a deck of cards. Simply enter the parameters for a deck of cards, and voila, an overhang of nearly 8 inches! That was a bit mind-blowing.
In [4]:
brick_balance(ntot=52, L=3.5, t=0.0115)
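The 8-inch figure can be confirmed without the plot: it's just the harmonic sum scaled by the card length (3.5 inches here; the thickness only affects the drawing, not the overhang).

```python
L = 3.5  # card length, inches
offset = sum(L / (2 * i) for i in range(1, 53))
print(round(offset, 2))  # → 7.94
```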
The Python code was a handy way to explore this puzzle once the initial math was figured out, and once the plotting was set up, it helped debug the math and create neat visuals. Once I got the answer (and stacked and measured a few real blocks to compare against my calculated results), I started searching online, and here are some resources I found.
Quanta has a great article here about the problem http://www.quantamagazine.org/20161117-overhang-insights-puzzle/ with solution http://www.quantamagazine.org/20161202-overhang-puzzle-solution/
Another explanation can be found here http://datagenetics.com/blog/may32013/index.html and http://mathworld.wolfram.com/BookStackingProblem.html
For more fun math puzzles, check out http://mathworld.wolfram.com/topics/Puzzles.html
Stay Curious!
http://mathhelpforum.com/differential-equations/114319-separable-equation.html | # Math Help - separable equation
1. ## separable equation
$(x+1)\frac{dy}{dx}=x(y^2+1)$
I basically get how to do separable equations; I change it to:
$\int{\frac{dy}{(y^2+1)}}=\int{\frac{xdx}{(x+1)}}$
and then I integrate both sides. I'm having trouble integrating the right side; I tried using integration by parts but it doesn't come out how the book has it. I think maybe there's an easier way but I can't remember. Can someone help me with this?
2. Let u = x+1
then the integrand becomes (u-1)/u = 1 - 1/u
Or use long division to obtain 1 - 1/(x+1) directly
at any rate you get x - ln(x+1) + c after integrating
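Combining this with the left-hand integral, $\int \frac{dy}{y^2+1} = \arctan(y)$, gives the general solution (a step not spelled out in the thread):

```latex
$$\arctan(y) = x - \ln|x+1| + C
\quad\Longrightarrow\quad
y = \tan\bigl(x - \ln|x+1| + C\bigr)$$
```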
3. Originally Posted by Calculus26
Let u = x+1
then the integrand becomes (u-1)/u = 1 - 1/u
Or use long division to obtain 1 - 1/(x+1) directly
at any rate you get x - ln(x+1) + c after integrating
No need for long division!
$\frac{x}{x+1} = \frac{x+1-1}{x+1} = \frac{x+1}{x+1} - \frac{1}{x+1} = 1 - \frac{1}{x+1}$ | 2015-09-02 19:43:57 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 3, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9677665829658508, "perplexity": 1154.918222808765}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440645281325.84/warc/CC-MAIN-20150827031441-00339-ip-10-171-96-226.ec2.internal.warc.gz"} |
https://math.stackexchange.com/questions/815721/dual-of-schanuel-lemma | # Dual of Schanuel lemma
This is an exercise from Rotman, Introduction to homological algebra.
Given exact sequences of $R$-modules
\begin{array}{ccccccccc} 0 & \longrightarrow & M & \overset{i}{\longrightarrow} & E & \overset{p}{\longrightarrow} & Q & \longrightarrow & 0\\ 0 & \longrightarrow & M & \overset{i'}{\longrightarrow} & E' & \overset{p'}{\longrightarrow} & Q' & \longrightarrow & 0 \end{array}
where $E$ and $E'$ are injective, then there is an isomorphism $$Q \oplus E' \cong Q'\oplus E$$
What I have done:
I completed the diagram using diagram chasing and the injectivity of E'
\begin{array}{ccccccccc} 0 & \longrightarrow & M & \overset{i}{\longrightarrow} & E & \overset{p}{\longrightarrow} & Q & \longrightarrow & 0\\ & & id\downarrow & & h\downarrow & & k\downarrow\\ 0 & \longrightarrow & M & \overset{i'}{\longrightarrow} & E' & \overset{p'}{\longrightarrow} & Q' & \longrightarrow & 0 \end{array}
Then I tried to define an exact sequence
\begin{array}{ccccccccc} 0 & \longrightarrow & E & \overset{r}{\longrightarrow} & Q\oplus E' & \overset{s}{\longrightarrow} & Q' & \longrightarrow & 0\\ \end{array}
because in this case we could conclude $$Q\oplus E' \cong Q'\oplus E$$ due to the injectivity of $E$.
I defined $$r : E \to Q\oplus E'$$ $$e \mapsto (p(e),h(e))$$ $$s : Q\oplus E' \to Q'$$ $$(a,b) \mapsto k(a) - p'(b)$$
Then it's easy to see that $$\text{im}(r) \subseteq \ker(s)$$
But I can't show that $\ker(s) \subseteq \text{im}(r)$. What's wrong?
• Could you please explain exactly how you came to know that the morphism k existed and made the diagram commute? The rest of the proof is as clear as crystal. – ErotemeObelus Jul 26 '18 at 21:35
Assume that $(a,b) \in \text{Ker }s,$ that is, $k(a)=p'(b)$.
Since $p$ is surjective, one can choose $e_0\in E$ such that $p(e_0)=a$. Denote $b_0=h(e_0)$. From the commutativity of the RHS square, it follows that $$p'(b_0)=p'(h(e_0))=k(p(e_0))=k(a)=p'(b),$$ hence $b-b_0 \in \text{Ker }p' = \text{Im }i'$.
Thus, there is $m \in M$ such that $h(i(m))=i'(m)=b-b_0$ (note that here the commutativity of the LHS square was used).
Put $e:=e_0+i(m)$.
Then $$h(e)=h(e_0)+h(i(m))=b_0+(b-b_0)=b, \\ p(e)=p(e_0)+p(i(m))=p(e_0)+0=a.$$ Thus, $(a,b)\in \text{Im }r$. | 2019-06-19 08:48:08 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9488897323608398, "perplexity": 91.94243631322313}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627998943.53/warc/CC-MAIN-20190619083757-20190619105757-00531.warc.gz"}
https://leetcode.com/articles/get-highest-answer-rate-question/ | ## Solution
#### Approach I: Using sub-query and SUM() function [Accepted]
Intuition
Calculate the ratio of answer actions to show actions for each question.
Algorithm
First, we can use SUM() to get the total number of answered times as well as the show times for each question using a sub-query as below.
SELECT
question_id,
SUM(CASE
WHEN action = 'answer' THEN 1
ELSE 0
END) AS num_answer,
SUM(CASE
WHEN action = 'show' THEN 1
ELSE 0
END) AS num_show
FROM
survey_log
GROUP BY question_id
;
| question_id | num_answer | num_show |
|-------------|------------|----------|
| 285 | 1 | 1 |
| 369 | 0 | 1 |
Then we can calculate the answer rate by its definition.
MySQL
SELECT question_id AS 'survey_log'
FROM
(
SELECT question_id,
SUM(CASE WHEN action = 'answer' THEN 1 ELSE 0 END) AS num_answer,
SUM(CASE WHEN action = 'show' THEN 1 ELSE 0 END) AS num_show
FROM survey_log
GROUP BY question_id
) as tbl
ORDER BY (num_answer / num_show) DESC
LIMIT 1
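As a quick sanity check (my own sketch, not part of the article; the table schema follows the LeetCode problem statement), the sub-query approach can be run against a toy survey_log using Python's sqlite3:

```python
import sqlite3

# Toy survey_log: question 285 is shown once and answered once;
# question 369 is shown but skipped.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE survey_log (uid INT, action TEXT, question_id INT,
                         answer_id INT, q_num INT, timestamp INT);
INSERT INTO survey_log VALUES
  (5, 'show',   285, NULL,   1, 123),
  (5, 'answer', 285, 124124, 1, 124),
  (5, 'show',   369, NULL,   2, 125),
  (5, 'skip',   369, NULL,   2, 126);
""")

# Same sub-query structure as Approach I (SQLite needs * 1.0 to avoid
# integer division; MySQL's / produces an exact quotient by default).
row = conn.execute("""
SELECT question_id
FROM (
    SELECT question_id,
           SUM(CASE WHEN action = 'answer' THEN 1 ELSE 0 END) AS num_answer,
           SUM(CASE WHEN action = 'show'   THEN 1 ELSE 0 END) AS num_show
    FROM survey_log
    GROUP BY question_id
) AS tbl
ORDER BY (num_answer * 1.0 / num_show) DESC
LIMIT 1
""").fetchone()
print(row[0])  # 285, the question with the highest answer rate
```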
#### Approach II: Using sub-query and COUNT(IF...) function [Accepted]
Algorithm
This solution is very straightforward: use the COUNT() function, combined with the IF() function, to count the answer and show actions.
MySQL
SELECT
question_id AS 'survey_log'
FROM
survey_log
GROUP BY question_id
ORDER BY COUNT(answer_id) / COUNT(IF(action = 'show', 1, NULL)) DESC
LIMIT 1; | 2019-12-11 08:11:57 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7023337483406067, "perplexity": 14843.353204042178}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540530452.95/warc/CC-MAIN-20191211074417-20191211102417-00096.warc.gz"} |
https://counterexamples.org/glossary.html | # Index and Glossary
### Polymorphism
The word "polymorphism" can refer to several different things. Here, it means "parametric polymorphism": types like $∀α.\, α → α$, allowing the same value to be used at many possible types, parameterised by a type variable. This feature is sometimes called "generics".
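A minimal illustration in Python (my own example, not from the glossary), using typing's TypeVar for a polymorphic identity function of type ∀α. α → α:

```python
from typing import TypeVar

A = TypeVar("A")

def identity(x: A) -> A:
    # one definition, usable at every instantiation of the type variable A
    return x

print(identity(3))     # used at type int
print(identity("hi"))  # the same function reused at type str
```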
### Subtyping
Subtyping allows a value of a more specific type to be supplied where a value of a more general type was expected, without the two types having to be exactly equal.
### Overloading

An overloaded function has several different versions, all with the same name, where the language picks the right one to call by examining the types of its arguments at each call site.
### Recursive types
Recursive types are types whose definition refers to themselves, either by using their own name during their definition, or by using explicit fixpoint operators like $μ$-types.
### Variance
Types that take parameters (like $\mathit{List}[A]$) may have subtyping relationships that depend on the subtyping relationships of their parameters: for instance, $\mathit{List}[A]$ is a subtype of $\mathit{List}[B]$ only if $A$ is a subtype of $B$. The manner in which the parameter's subtyping affects the whole type's subtyping is called variance.
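As a runnable illustration (my own sketch, not from the glossary): Python's typing module declares Sequence covariant in its element type, while List is invariant. The variance only matters to a static checker such as mypy; at runtime the call simply works.

```python
from typing import List, Sequence

def total(xs: Sequence[float]) -> float:
    # Sequence is covariant in its parameter, so a static checker
    # accepts Sequence[int] where Sequence[float] is expected
    # (PEP 484 treats int as a subtype of float).
    return sum(xs)

ints: List[int] = [1, 2, 3]
print(total(ints))  # a List[int] is a Sequence[int], hence usable here
```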
### Mutation
The presence of mutable values (reference cells, mutable arrays, etc.) in a language means that there are expressions which, when evaluated twice, yield different values both times (which can have consequences for the type system).
### Scoping
When types are defined locally to a module, function, or block, the compiler must check that they do not accidentally leak out of their scope.
### Typecase
Typecase refers to any runtime test that checks types. Several other names for this feature exist: instanceof, downcasting, matching on types.
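A Python sketch of a typecase (again my own illustration), using isinstance as the runtime type test:

```python
def describe(x: object) -> str:
    # a runtime type test ("typecase" / matching on types)
    if isinstance(x, int):
        return "an int"
    if isinstance(x, str):
        return "a string"
    return "something else"

print(describe(3))      # an int
print(describe("hi"))   # a string
print(describe(2.5))    # something else
```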
### Empty types
An empty type is a type that has no values, and can represent the return type of a function that never returns or the element type of a list that is always empty. These are not to be confused with unit types like C's void or Haskell's (), which are types that have a single value (and consequently carry no information).
### Equality
Determining whether two types are equal is a surprisingly tricky business, especially in a language with advanced type system features (e.g. dependent types).
### Injectivity
A parameterised type like $\mathit{List}[A]$ is said to be injective if $\mathit{List}[A] = \mathit{List}[B]$ implies $A = B$. All, some or none of a language's parameterised types may have this property.
### Totality
In a total language, all programs terminate, and unbounded recursion or infinite looping is impossible. Enforcing this property places a significant extra burden on the type checker.
### Abstract types
An abstract type is one whose implementation is hidden: the type may in fact be implemented directly as another type, but this fact is not exposed.
### Impredicativity
A type system is predicative if definitions can never be referred to, even indirectly, before they are defined. In particular, polymorphic types $∀α. \dots$ are predicative only if $α$ ranges over types not including the polymorphic type being defined. Predicative systems usually have restricted polymorphism (in $∀α. \dots$, $α$ may range only over types that do not themselves use $∀$, or there may be a system of stratified levels of $∀$-usage). One hallmark of impredicative systems is unrestricted $∀$ (present in e.g. System F) | 2021-06-14 20:54:36 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 17, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5987436175346375, "perplexity": 1403.4940430994893}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487613453.9/warc/CC-MAIN-20210614201339-20210614231339-00599.warc.gz"} |
https://electronics.stackexchange.com/questions/476869/what-is-the-difference-between-f5al250v-and-f5l250v-fuses | # What is the difference between F5AL250V and F5L250V fuses?
In the glass fuse code, F is for fast-blow and the second number is the current rating, but I don't know what the difference is between L and AL (if there is any).
Could someone explain this?
• Please link to both data sheets. – Andy aka Jan 19 at 12:58
• @Andyaka Unfortunately, I haven't any datasheet otherwise it would have been enough to read them to find the answer. – AndreaF Jan 19 at 13:01
• Are you therefore suspecting that someone does have these data sheets then? – Andy aka Jan 19 at 13:02
• @Andyaka It's enough to find someone that has experience with the fuse code letters. The labeling is pretty standard for the glass fuses. – AndreaF Jan 19 at 13:05
• Who makes them then? – Andy aka Jan 19 at 13:06
They are the same 5A 250V fast-acting, low-breaking-capacity fuse (one code simply omits the A for amperes).
The glass cartridge fuse code is structured in this way
|Acting Speed| |Current rating| |Breaking capacity| |Voltage rating|
or
|Package size code| |Acting Speed| |Current rating| |Breaking capacity| |Voltage rating|
# Acting speed
The time it takes for the fuse to open when a fault current occurs. Its code could be:
F - Fast Acting (Flink)
M - Medium Acting (Mittelträge)
T - Slow Acting (Träge)
TT - Very Slow Acting (Träge Träge)
# Fuse Breaking Capacity
It is the current that a fuse is able to interrupt without being destroyed or causing an electric arc of unacceptable duration, i.e. the range between the lowest fault current and the rated breaking current over which the fuse can operate safely. Its code could be:
H - High Breaking Capacity
L - Low Breaking Capacity | 2020-04-07 10:44:47 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.31243449449539185, "perplexity": 6176.685412389975}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585371700247.99/warc/CC-MAIN-20200407085717-20200407120217-00253.warc.gz"} |
https://www.neetprep.com/question/54017-Amplitude-wave-represented-byAcabcThen-resonance-will-occur-whenabcb-b-----ccbad-None?courseId=18 |
Amplitude of a wave is represented by
$A=\frac{c}{a+b-c}$
Then resonance will occur when
(a) $b=-c/2$ (b) $b=0$ and $a=-c$
(c) $b=-a/2$ (d) None of these | 2019-03-25 01:58:45 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 3, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7502456307411194, "perplexity": 11945.019331884669}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912203547.62/warc/CC-MAIN-20190325010547-20190325032547-00506.warc.gz"} |
https://math.stackexchange.com/questions/1940967/how-to-express-siny-in-terms-of-cosy | # How to express sin(y) in terms of cos(y)?
The first question asked to express a equivalent expression in of $\cos(x+y)$ for which I got right.
However its the second part of the question that I do not understand which is How to express $\sin(y)$ in terms of $\cos(y)$? also the angle between $0$ and $\frac { \pi }{ 2 }$
$$\sin { y } =\cos { \left( y-\frac { \pi }{ 2 } \right) } \\$$ or $$\sin { y=\sqrt { 1-\cos ^{ 2 }{ y } } }$$
$$a^2+b^2=c^2$$
Divide both sides by $c^2$:
$$\sin^2(\theta)+\cos^2(\theta)=1$$
Solve for $\sin$. | 2019-09-15 22:19:29 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6483681797981262, "perplexity": 147.21615143105498}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514572436.52/warc/CC-MAIN-20190915215643-20190916001643-00534.warc.gz"} |
https://www.tutorialspoint.com/how-to-access-the-elements-in-a-series-using-index-values-may-or-may-not-be-customized-in-python | # How to access the elements in a series using index values (may or may not be customized) in Python?
If default values are used as the index of a Series, its elements can be accessed by integer position. If the index values are customized, they are passed as the index argument and the elements are then accessed by those labels.
Let us understand it with the help of an example.
## Example
import pandas as pd
my_data = [34, 56, 78, 90, 123, 45]
my_index = ['ab', 'mn' ,'gh','kl', 'wq', 'az']
my_series = pd.Series(my_data, index = my_index)
print("The series contains following elements")
print(my_series)
print("Accessing elements using customized index")
print(my_series['mn'])
print("Accessing elements using customized index")
print(my_series['az'])
## Output
The series contains following elements
ab 34
mn 56
gh 78
kl 90
wq 123
az 45
dtype: int64
Accessing elements using customized index
56
Accessing elements using customized index
45
## Explanation
• The required libraries are imported, and given alias names for ease of use.
• A list of data values is created, that is later passed as a parameter to the ‘Series’ function present in the ‘pandas’ library
• Next, customized index values (that are passed as parameter later) are stored in a list.
• The series is created and index list and data are passed as parameters to it.
• The series is printed on the console.
• Since the index values are customized, they are used to access the values in the series like series_name[‘index_name’].
• It is then printed on the console.
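For contrast with the customized index above, a Series left with the default RangeIndex can be accessed by integer position (my own example, not part of the original tutorial):

```python
import pandas as pd

# Series with the default index 0, 1, 2
s = pd.Series([34, 56, 78])

print(s[0])       # 34 — the default integer labels double as positions
print(s.iloc[2])  # 78 — .iloc is always positional, even with custom labels
```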
http://openstudy.com/updates/55f5b6bee4b0cf7fa75c19e1 | ## anonymous one year ago Use graphs and tables to find the limit and identify any vertical asymptotes of http://broward.flvs.net/webdav/assessment_images/educator_precalc/v12/module09/0906_g13_q1.gif
1. anonymous
2. anonymous
crap, it's the limit as x approaches -2 of 1/(x-2) @peachpi
3. anonymous
This? $\lim_{x \rightarrow -2}\frac{ -2 }{ x-2 }$ and did you try graphing it?
4. anonymous
I don't know how, and that's right
5. anonymous
you have a graphing calculator? or use www.desmos.com/calculator
6. anonymous
How do I plug it in?
7. anonymous
type in "y=" then the function you want to graph
8. anonymous
please show me the graph, I keep getting an error
9. anonymous
@peachpi
10. anonymous
what did you put in?
11. anonymous
For asymptotes, set the denominator equal to 0, so solve x - 2 = 0
12. anonymous
put in y=1/(x-2) for the graph. I don't have a way to attach it
13. anonymous
so 2?
14. anonymous
@peachpi
15. anonymous
yes 2 | 2017-01-23 23:14:46 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7820166349411011, "perplexity": 6578.899498769996}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560283301.73/warc/CC-MAIN-20170116095123-00086-ip-10-171-10-70.ec2.internal.warc.gz"} |
https://math.stackexchange.com/questions/640204/how-to-solve-an-equation-by-inspection | # how to solve an equation by inspection?
$$4\pi r^2 + \frac{4}{3}\pi r^3 = \frac{16}{3}\pi m^3.$$
This is all I got:
$$4 r^2 + \frac{4}{3}r^3 = \frac{16}{3}m^3.$$
How to simplify the equation and solve it "by inspection"?
• $r=1=m~~\text{or}~~r=0=m$ – Mikasa Jan 16 '14 at 8:30
• @B. S. how did you get that??? – Joshua Jan 16 '14 at 8:32
• by inspection,as you wanted. :D – Mikasa Jan 16 '14 at 8:32
• @B. S. but I dont know what it means "by inspection" in the first place:( – Joshua Jan 16 '14 at 8:34
• Is "m" a constant given ? If yes, is it positive or negative ? – Claude Leibovici Jan 16 '14 at 8:35
There are multiple solutions to this equation since $m$ can take on different values. The best thing one can do in this case is assume that $m=r$ and solve the equation. So, $$4 r^2 + \frac{4}{3}r^3 = \frac{16}{3}r^3$$ $$r^2\left(4-4r \right)=0$$ $$\therefore \ r=0=m \ \text{or} \ r=1=m$$
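A quick numeric sanity check of these two solutions (my own sketch, not part of the answer):

```python
from math import pi, isclose

def lhs(r):
    # left-hand side of the original equation: 4πr² + (4/3)πr³
    return 4 * pi * r**2 + (4 / 3) * pi * r**3

def rhs(m):
    # right-hand side: (16/3)πm³
    return (16 / 3) * pi * m**3

assert isclose(lhs(1), rhs(1))  # 4π + 4π/3 = 16π/3
assert isclose(lhs(0), rhs(0))  # 0 = 0
print("both solutions check out")
```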
Note that these are not the only answers. If we took $r=m^{3}$, we get an additional solution, $r=-4 \implies m=-4^{1/3}$
Just consider a function defined by the LHS. Since it is a polynomial which contains a third power of "$r$", the function will start at $-\infty$ and will grow up to $\infty$.
The function is zero for $r=-3$ and $r=0$. Its derivative cancels for $r=0$ and $r=-2$; for $r=-2$, the value of the function is $\dfrac{16}{3}$ and a check of the second derivative shows that this point is a maximum.
Now, solving your equation can be seen as a search for the intersection of the function and a horizontal line corresponding to $y=\dfrac{16 m^3}{3}$. So, what we can say is that,
if $m < 0$, the solution for "$r$" will be smaller than $-3$
if $m = 0$, the solutions for "$r$" are $-3$ and $0$
if $0 < m < 1$, there will be two solutions, one such than $-3 < r < -2$ and the other such that $-2< r < 0$
if $m > 1$, there will be a unique solution such that $r > 0$.
All the above can be done by a visual inspection of the graph at the function defined by the LHS.
Is this what you expect ? If not, please clarify.
• I think it is overkill. This question is about the volume of a tank. 4πr² is the volume of a circular cylinder, (4/3)πr³ is the volume of a sphere, and (16/3)πm³ is the volume of the tank. – Joshua Jan 16 '14 at 9:06
• so I think r must not be negative. – Joshua Jan 16 '14 at 9:06
• @Joshua. Then your problem was not clear enough ! By the way, I forgot yhe case where m=1; in this case, there are two roots corresponding to r=-2 with a second root r > 0. – Claude Leibovici Jan 16 '14 at 9:19
• @John. Thanks for editing. – Claude Leibovici Jan 16 '14 at 9:19 | 2021-04-21 23:34:52 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8912779092788696, "perplexity": 270.3155214482143}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618039554437.90/warc/CC-MAIN-20210421222632-20210422012632-00328.warc.gz"} |
https://cran.rstudio.com/web/packages/FFTrees/vignettes/FFTrees_algorithm.html | # FFTrees tree construction algorithms
## Default FFT construction algorithm “m”
Trees are built using the wrapper function FFTrees.R, which calls the functions cuerank() and grow.FFTrees() to complete the steps of creating the trees. The default algorithm used to create trees, algorithm = "m", is very simple. It can be summarised in four steps.
4 Steps in growing FFTs using the algorithm = "m" algorithm.
| Step | Function | Description |
|------|----------|-------------|
| 1 | cuerank() | For each cue, calculate a classification threshold that maximizes the balanced accuracy (average of sensitivity and specificity) of classifications of all data based on that cue (that is, ignoring all other cues). If the cue is numeric, the threshold is a number. If the cue is a factor, the threshold is one or more factor levels. |
| 2 | grow.FFTrees() | Rank cues in order of their highest balanced accuracy value, calculated using the classification threshold determined in step 1. |
| 3 | grow.FFTrees() | Create all possible trees by varying the exit direction (left or right) at each level, to a maximum of X levels (default of max.levels = 4). |
| 4 | grow.FFTrees() | Reduce the size of trees by removing (pruning) lower levels containing less than X% (default of stopping.par = .10) of the cases in the original data. |
## Example: Heart Disease
First, we’ll calculate a classification threshold for each cue using cuerank():
heartdisease.ca <- FFTrees::cuerank(formula = diagnosis ~.,
data = heartdisease)
# Print key results
heartdisease.ca[c("cue", "threshold", "direction", "bacc")]
## cue threshold direction bacc
## 1 age 55 > 0.6347166
## 2 sex 0 > 0.6295841
## 3 cp a = 0.7587954
## 4 trestbps 140 > 0.5579707
## 5 chol 242 > 0.5677092
## 6 fbs 0 > 0.5090147
## 7 restecg hypertrophy,abnormal = 0.5881953
## 8 thalach 148 < 0.7042902
## 9 exang 0 > 0.7032593
## 10 oldpeak 0.8 > 0.6978856
## 11 slope flat,down = 0.6936743
## 12 ca 0 > 0.7308738
## 13 thal rd,fd = 0.7596508
Here, we see the best decision threshold for each cue that maximizes its balanced accuracy (bacc) when applied to the entire dataset (independently of other cues). For example, for the age cue, the best threshold is age > 55 which leads to a balanced accuracy of 0.63. In other words, if we only had the age cue, then the best decision is: “If age > 55, predict heart disease, otherwise, predict no heart disease”.
Let’s confirm that this threshold makes sense. To do this, we can plot the bacc value for all possible thresholds:
Next, the cues are ranked by their balanced accuracy. Let’s do that with the heart disease cues:
# Rank heartdisease cues by balanced accuracy
heartdisease.ca <- heartdisease.ca[order(heartdisease.ca$bacc, decreasing = TRUE),]

# Print the key columns
heartdisease.ca[c("cue", "threshold", "direction", "bacc")]

## cue threshold direction bacc
## 13 thal rd,fd = 0.7596508
## 3 cp a = 0.7587954
## 12 ca 0 > 0.7308738
## 8 thalach 148 < 0.7042902
## 9 exang 0 > 0.7032593
## 10 oldpeak 0.8 > 0.6978856
## 11 slope flat,down = 0.6936743
## 1 age 55 > 0.6347166
## 2 sex 0 > 0.6295841
## 7 restecg hypertrophy,abnormal = 0.5881953
## 5 chol 242 > 0.5677092
## 4 trestbps 140 > 0.5579707
## 6 fbs 0 > 0.5090147

Now, we can see that the top five cues are thal, cp, ca, thalach and exang. Because FFTs rarely exceed 5 cues, we can expect that the trees will use a subset (not necessarily all) of these 5 cues. We can also plot the cue accuracies in ROC space using the showcues() function:

# Show the accuracy of cues in ROC space
showcues(cue.accuracies = heartdisease.ca)

Next, grow.FFTrees() will grow several trees from these cues using different exit structures:

# Grow FFTs
heartdisease.ffts <- grow.FFTrees(formula = diagnosis ~., data = heartdisease)

# Print the tree definitions
heartdisease.ffts$tree.definitions
## tree cues nodes classes exits thresholds directions
## 1 1 thal;cp;ca;thalach 4 c;c;n;n 0;0;0;0.5 rd,fd;a;0;148 =;=;>;<
## 5 2 thal;cp;ca;thalach 4 c;c;n;n 0;0;1;0.5 rd,fd;a;0;148 =;=;>;<
## 3 3 thal;cp;ca 3 c;c;n 0;1;0.5 rd,fd;a;0 =;=;>
## 2 4 thal;cp;ca 3 c;c;n 1;0;0.5 rd,fd;a;0 =;=;>
## 6 5 thal;cp;ca;thalach 4 c;c;n;n 1;0;1;0.5 rd,fd;a;0;148 =;=;>;<
## 4 6 thal;cp;ca;thalach 4 c;c;n;n 1;1;0;0.5 rd,fd;a;0;148 =;=;>;<
## 7 7 thal;cp;ca;thalach 4 c;c;n;n 1;1;1;0.5 rd,fd;a;0;148 =;=;>;<
Here, we see that we have 7 different trees, each using some combination of the top 5 cues we identified earlier. For example, tree 1 uses the top 4 cues, while tree 3 uses only the top 3 cues. Why is that? The reason is that the algorithm also prunes lower branches of the tree if there are too few cases classified at lower levels. By default, the algorithm will remove any lower levels that classify fewer than 10% of the original cases. The pruning criteria can be controlled using the stopping.rule, stopping.par and max.levels arguments in grow.FFTrees().
Now let’s use the wrapper function FFTrees() to create the trees all at once. We will then plot tree #4 which, according to our results above, should contain the cues thal, cp, and ca:
library(FFTrees)
# Create trees
heart.fft <- FFTrees(formula = diagnosis ~., data = heartdisease)
# Plot tree # 4
plot(heart.fft,
stats = FALSE, # Don't include statistics
tree = 4)
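Under the hood, the cue ranking shown earlier is driven by balanced accuracy (bacc), the mean of a cue's sensitivity and specificity. As a language-neutral illustration (Python rather than the package's R, and a made-up toy cue rather than the heartdisease data), bacc for a numeric cue with a ">" direction can be computed like this:

```python
# Balanced accuracy (bacc) of a single threshold cue: the mean of
# sensitivity (hit rate) and specificity (correct rejection rate).
def bacc(values, labels, threshold):
    preds = [v > threshold for v in values]  # direction ">"
    hits   = sum(p and y for p, y in zip(preds, labels))
    misses = sum((not p) and y for p, y in zip(preds, labels))
    fas    = sum(p and (not y) for p, y in zip(preds, labels))
    crs    = sum((not p) and (not y) for p, y in zip(preds, labels))
    sens = hits / (hits + misses)
    spec = crs / (crs + fas)
    return (sens + spec) / 2

# Toy example: ages of 6 patients, True = diseased
ages   = [62, 58, 49, 70, 45, 51]
labels = [True, True, False, True, False, False]
print(bacc(ages, labels, threshold=55))  # -> 1.0 (the cut at 55 separates perfectly)
```

cuerank() does essentially this search over candidate thresholds and directions for every cue, keeping the best-scoring rule per cue.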
### Alternative algorithms
• algorithm = "c": The "c" algorithm is identical to the "m" algorithm with one (important) exception: In algorithm = "c", the thresholds and rankings of cues are recalculated for each level in the FFT conditioned on the exemplars that were not classified at higher leves in the tree. For example, in the heartdisease data, using algorithm = "c" would first classify some cases using the thal cue at the first level, and would then calculate new accuracies for the remaining cues on the remaining cases that were not yet classified. This algorithm is appropriate for datasets where cue validities systematically differ for different (and predictable) subsets of data. However, because it calculates cue thresholds for increasingly smaller samples of data as the tree grows, it is also, potentially, more prone to overfitting compared to algorithm = "m" | 2017-04-28 06:20:45 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.44297438859939575, "perplexity": 5460.522496945644}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917122865.36/warc/CC-MAIN-20170423031202-00578-ip-10-145-167-34.ec2.internal.warc.gz"} |
https://samm.univ-paris1.fr/spip.php?page=site&id_syndic=4

# HAL: latest SAMM deposits
## Sunday, May 1, 2016
• [hal-01308517] On the Krein-Milman theorem for convex compact metrizable sets
The Krein-Milman theorem (1940) states that a convex compact subset of a Hausdorff locally convex topological space is the closed convex hull of its extreme points. We prove in this paper that in the metrizable case the situation is better: every convex compact metrizable subset of a Hausdorff locally convex topological space is the closed convex hull of its exposed points. This fails in general for non-metrizable compact convex subsets.
## Wednesday, April 13, 2016
• [halshs-01301794] Wage policy and modes of remuneration in the French civil service since the early 2000s: changes and challenges.
The State's wage policy has undergone significant shifts over the past decade. Parametric adjustments (freezing of the index point, de facto indexation of low wages on the SMIC minimum wage) and partial measures (reclassification of certain categories) have been adopted, but more structural reforms of the pay system, even though desired by the State, have not really come to fruition. At the same time, the State's wage policy has become more category-specific. Beyond the limited effects on average purchasing power, these changes have had important consequences in terms of wage hierarchies and careers, and help explain the rise of considerable wage discontent. All of these developments challenge the trade unions, whose strategies, at various levels (central or local), vary between opposition and accommodation.
## Friday, April 8, 2016
• [hal-01299161] The Stochastic Topic Block Model for the Clustering of Networks with Textual Edges
Due to the significant increase of communications between individuals via social media (Facebook, Twitter) or electronic formats (email, web, co-authorship) in the past two decades, network analysis has become an unavoidable discipline. Many random graph models have been proposed to extract information from networks based on person-to-person links only, without taking into account information on the contents. In this paper, we have developed the stochastic topic block model (STBM), a probabilistic model for networks with textual edges. We address here the problem of discovering meaningful clusters of vertices that are coherent from both the network interactions and the text contents. A classification variational expectation-maximization (C-VEM) algorithm is proposed to perform inference. Simulated data sets are considered in order to assess the proposed approach and highlight its main features. Finally, we demonstrate the effectiveness of our model on two real-world data sets: a communication network and a co-authorship network.
• [hal-01207009] Weighted interpolation inequalities: a perturbation approach
We study optimal functions in a family of Caffarelli-Kohn-Nirenberg inequalities with a power-law weight, in a regime for which standard symmetrization techniques fail. We establish the existence of optimal functions, study their properties and prove that they are radial when the power in the weight is small enough. Radial symmetry up to translations is true for the limiting case where the weight vanishes, a case which corresponds to a well-known subfamily of Gagliardo-Nirenberg inequalities. Our approach is based on a concentration-compactness analysis and on a perturbation method which uses a spectral gap inequality. As a consequence, we prove that optimal functions are explicit and given by Barenblatt-type profiles in the perturbative regime.
## Saturday, February 27, 2016
• [hal-01279327] Weighted fast diffusion equations (Part II): Sharp asymptotic rates of convergence in relative error by entropy methods
This paper is the second part of the study. In Part I, self-similar solutions of a weighted fast diffusion equation (WFD) were related to optimal functions in a family of subcritical Caffarelli-Kohn-Nirenberg inequalities (CKN) applied to radially symmetric functions. For these inequalities, the linear instability (symmetry breaking) of the optimal radial solutions relies on the spectral properties of the linearized evolution operator. Symmetry breaking in (CKN) was also related to large-time asymptotics of (WFD), at formal level. A first purpose of Part II is to give a rigorous justification of this point, that is, to determine the asymptotic rates of convergence of the solutions to (WFD) in the symmetry range of (CKN) as well as in the symmetry breaking range, and even in regimes beyond the supercritical exponent in (CKN). Global rates of convergence with respect to a free energy (or entropy) functional are also investigated, as well as uniform convergence to self-similar solutions in the strong sense of the relative error. Differences with large-time asymptotics of fast diffusion equations without weights will be emphasized.
• [hal-01279326] Weighted fast diffusion equations (Part I): Sharp asymptotic rates without symmetry and symmetry breaking in Caffarelli-Kohn-Nirenberg inequalities
In this paper we consider a family of Caffarelli-Kohn-Nirenberg interpolation inequalities (CKN), with two radial power law weights and exponents in a subcritical range. We address the question of symmetry breaking: are the optimal functions radially symmetric, or not? Our intuition comes from a weighted fast diffusion (WFD) flow: if symmetry holds, then an explicit entropy - entropy production inequality which governs the intermediate asymptotics is indeed equivalent to (CKN), and the self-similar profiles are optimal for (CKN). We establish an explicit symmetry breaking condition by proving the linear instability of the radial optimal functions for (CKN). Symmetry breaking in (CKN) also has consequences on entropy - entropy production inequalities and on the intermediate asymptotics for (WFD). Even when no symmetry holds in (CKN), asymptotic rates of convergence of the solutions to (WFD) are determined by a weighted Hardy-Poincaré inequality which is interpreted as a linearized entropy - entropy production inequality. All our results rely on the study of the bottom of the spectrum of the linearized diffusion operator around the self-similar profiles, which is equivalent to the linearization of (CKN) around the radial optimal functions, and on variational methods. Consequences for the (WFD) flow will be studied in Part II of this work.
## Saturday, February 13, 2016
• [hal-01270963] On combining wavelets expansion and sparse linear models for Regression on metabolomic data and biomarker selection
Wavelet thresholding of spectra has to be handled with care when the spectra are the predictors of a regression problem. Indeed, a blind thresholding of the signal followed by a regression method often leads to deteriorated predictions. The scope of this article is to show that sparse regression methods, applied in the wavelet domain, perform an automatic thresholding: the most relevant wavelet coefficients are selected to optimize the prediction of a given target of interest. This approach can be seen as a joint thresholding designed for a predictive purpose. The method is illustrated on a real world problem where metabolomic data are linked to poison ingestion. This example proves the usefulness of wavelet expansion and the good behavior of sparse and regularized methods. A comparison study is performed between the two-step approach (wavelet thresholding and regression) and the one-step approach (selection of wavelet coefficients with a sparse regression). The comparison includes two types of wavelet bases, various thresholding methods, and various regression methods and is evaluated by calculating prediction performances. Information about the location of the most important features on the spectra was also obtained and used to identify the most relevant metabolites involved in the mice poisoning.
• [hal-01265147] Limited operators and differentiability
We characterize the limited operators by differentiability of convex continuous functions. Given Banach spaces $Y$ and $X$ and a linear continuous operator $T: Y \longrightarrow X$, we prove that $T$ is a limited operator if and only if, for every convex continuous function $f: X \longrightarrow \mathbb{R}$ and every point $y\in Y$, $f\circ T$ is Fréchet differentiable at $y\in Y$ whenever $f$ is Gâteaux differentiable at $T(y)\in X$.
## Wednesday, February 10, 2016
• [hal-01263540] Modelling time evolving interactions in networks through a non stationary extension of stochastic block models
The stochastic block model (SBM) describes interactions between nodes of a network following a probabilistic approach. Nodes belong to hidden clusters and the probabilities of interactions only depend on these clusters. Interactions of time varying intensity are not taken into account. By partitioning the whole time horizon, in which interactions are observed, we develop a non stationary extension of the SBM, allowing us to simultaneously cluster the nodes of a network and the fixed time intervals in which interactions take place. The number of clusters as well as memberships to clusters are finally obtained through the maximization of the complete-data integrated likelihood relying on a greedy search approach. Experiments are carried out in order to assess the proposed methodology.
## Tuesday, February 9, 2016
• [hal-01270293] Is the corporate elite disintegrating? Interlock boards and the Mizruchi hypothesis
This paper proposes an approach for comparing interlocked board networks over time to test for statistically significant change. In addition to contributing to the conversation about whether the Mizruchi hypothesis (that a disintegration of power is occurring within the corporate elite) holds or not, we propose novel methods to handle a longitudinal investigation of a series of social networks where the nodes undergo a few modifications at each time point. Methodologically, our contribution is twofold: we extend a Bayesian model hitherto applied to compare two time periods to a longer time period, and we define and employ the concept of a hull of a sequence of social networks, which makes it possible to circumvent the problem of changing nodes over time.
## Wednesday, January 27, 2016
• [hal-01261122] Country-scale Exploratory Analysis of Call Detail Records through the Lens of Data Grid Models
Call Detail Records (CDRs) are data recorded by telecommunications companies, consisting of basic information related to several dimensions of the calls made through the network: the source, destination, date and time of calls. CDRs data analysis has received much attention in recent years since it might reveal valuable information about human behavior. It has shown high added value in many application domains, e.g., communities analysis or network planning. In this paper, we suggest a generic methodology based on data grid models for summarizing information contained in CDRs data. The method is based on a parameter-free estimation of the joint distribution of the variables that describe the calls. We also suggest several well-founded criteria that allow one to browse the summary at various granularities and to explore the summary by means of insightful visualizations. The method handles network graph data, temporal sequence data as well as user mobility data stemming from original CDRs data. We show the relevance of our methodology on real-world CDRs data from Ivory Coast for various case studies, like network planning strategy and yield management pricing strategy.
## Friday, January 22, 2016
• [hal-01259983] Semiparametric stationarity and fractional unit roots tests based on data-driven multidimensional increment ratio statistics
In this paper, we show that the central limit theorem (CLT) satisfied by the data-driven Multidimensional Increment Ratio (MIR) estimator of the memory parameter d, established in Bardet and Dola (2012) for d ∈ (−0.5, 0.5), can be extended to a semiparametric class of Gaussian fractionally integrated processes with memory parameter d ∈ (−0.5, 1.25). Since the asymptotic variance of this CLT can be estimated, data-driven MIR tests can be constructed for the two cases of stationarity and non-stationarity: two tests distinguishing the hypotheses d < 0.5 and d ≥ 0.5, as well as a fractional unit root test distinguishing the case d = 1 from the case d < 1. Simulations done on numerous kinds of short-memory, long-memory and non-stationary processes show both the high accuracy and robustness of this MIR estimator compared to those of the usual semiparametric estimators. They also attest to the reasonable efficiency of MIR tests compared to other usual stationarity tests or fractional unit root tests. Keywords: Gaussian fractionally integrated processes; semiparametric estimators of the memory parameter; test of long-memory; stationarity test; fractional unit roots test.
## Wednesday, January 13, 2016
• [hal-01254346] Maxima of Two Random Walks: Universal Statistics of Lead Changes
We investigate statistics of lead changes of the maxima of two discrete-time random walks in one dimension. We show that the average number of lead changes grows as π^(−1) ln t in the long-time limit. We present theoretical and numerical evidence that this asymptotic behavior is universal. Specifically, this behavior is independent of the jump distribution: the same asymptotic underlies standard Brownian motion and symmetric Lévy flights. We also show that the probability to have at most n lead changes behaves as t^(−1/4) (ln t)^n for Brownian motion and as t^(−β(µ)) (ln t)^n for symmetric Lévy flights with index µ. The decay exponent β ≡ β(µ) varies continuously with the Lévy index when 0 < µ < 2, while β = 1/4 for µ > 2.
## Saturday, January 9, 2016
• [hal-01253191] Pontryagin principle for a Mayer problem governed by a delay functional differential equation
We establish Pontryagin principles for a Mayer's optimal control problem governed by a functional differential equation. The control functions are piecewise continuous and the state functions are piecewise continuously differentiable. To do that, we follow the method created by Philippe Michel for systems governed by ordinary differential equations, and we use properties of the resolvent of a linear functional differential equation.
• [hal-01253186] Pontryagin principle for a Mayer problem governed by a delay functional differential equation
We establish Pontryagin principles for a Mayer's optimal control problem governed by a functional differential equation. The control functions are piecewise continuous and the state functions are piecewise continuously differentiable. To do that, we follow the method created by Philippe Michel for systems governed by ordinary differential equations, and we use properties of the resolvent of a linear functional differential equation.
## Friday, December 11, 2015
• [hal-01122393] The Dynamic Random Subgraph Model for the Clustering of Evolving Networks
In recent years, many clustering methods have been proposed to extract information from networks. The principle is to look for groups of vertices with homogenous connection profiles. Most of these techniques are suitable for static networks, that is to say, not taking into account the temporal dimension. This work is motivated by the need of analyzing evolving networks where a decomposition of the networks into subgraphs is given. Therefore, in this paper, we consider the random subgraph model (RSM) which was proposed recently to model networks through latent clusters built within known partitions. Using a state space model to characterize the cluster proportions, RSM is then extended in order to deal with dynamic networks. We call the latter the dynamic random subgraph model (dRSM). A variational expectation maximization (VEM) algorithm is proposed to perform inference. We show that the variational approximations lead to an update step which involves a new state space model from which the parameters along with the hidden states can be estimated using the standard Kalman filter and Rauch-Tung-Striebel (RTS) smoother. Simulated data sets are considered to assess the proposed methodology. Finally, dRSM along with the corresponding VEM algorithm are applied to an original maritime network built from printed Lloyd's voyage records.
## Friday, November 27, 2015
• [hal-01232672] Using SOMbrero for clustering and visualizing graphs
Graphs have attracted a burst of attention in the last years, with applications to social science, biology, computer science... In the present paper, we illustrate how self-organizing maps (SOM) can be used to enlighten the structure of the graph, performing clustering of the graph together with visualization of a simplified graph. In particular, we present the R package SOMbrero which implements a stochastic version of the so-called relational algorithm: the method is able to process any dissimilarity data and several dissimilarities adapted to graphs are described and compared. The use of the package is illustrated on two real-world datasets: one, included in the package itself, is small enough to allow for a full investigation of the influence of the choice of a dissimilarity to measure the proximity between the vertices on the results. The other example comes from an application in biology and is based on a large bipartite graph of chemical reactions with several thousands vertices.
https://askubuntu.com/questions/1080579/gedit-latex-plugin-pdf-creation

# Gedit LaTeX plugin: PDF creation
I just can't compile a PDF with gedit-latex-plugin, although it works when I use the pdflatex command in a terminal. I tried with this .tex file:
\documentclass[12pt]{article}
\usepackage[utf8]{inputenc}
\usepackage[T1]{fontenc}
\usepackage[french]{babel}
\usepackage{lmodern}
\begin{document}
Hello world
\end{document}
When I compile from gedit, all that happens is that the file gets saved; no log file or PDF is produced.
Do you know what I can do?
PS: producing a .dvi file works.
• Try Gummi instead. Gedit is not for LaTeX work. – N0rbert Oct 3 '18 at 21:04
https://electronics.stackexchange.com/questions/179592/small-signal-models-of-mos-amplifiers/179606

# Small signal models of MOS amplifiers
I understand that the equivalent circuits describe the behavior of the amplifier for signals of low amplitude, which allows us to assume that the circuit behaves linearly. My questions are:
1. Why are all the DC voltage and current sources that aren't varying with time zeroed out? I don't understand the statement- "As far as the signal is concerned, all DC sources have no effect on operation".
As in the above figure, the small-signal equivalent model shows the resistance tied between drain and VDD (which is shorted) to be in parallel with the current source gmVgs. However, the voltage across the resistance and the voltage VDS (Vo) are not quite the same. How is this accounted for in the above model?
Is there any other way to derive expressions for gain and I/O impedance? I tried to draw an equivalent circuit with all the DC sources present. Note that the voltage across RD is not VDS (as it should not be). Is it correct?
Applying KVL to the output loop:
VDD - gmVgsRD - Vout = 0
VDD/Vgs - gmRD = Vout/Vgs
Voltage gain = Av = Vout/Vgs = VDD/Vgs - gmRD.
But the expression for gain derived by shorting VDD is -gmRD.
Where am I going wrong?
• I have just drawn the circuit as it is, including all the DC voltage sources. What is wrong with that? – Aditya Patil Jul 11 '15 at 7:46
• Small-signal analysis is like doing superposition and ignoring the direct voltages and currents. So just turn all the direct voltage sources down to zero and replace them with short circuits. Your equivalent circuit is wrong because the transconductance is defined as: $g_m = \Delta I_d/\Delta V_{gs} = i_d/v_{gs}$, i.e. change in output current divided by change in input voltage. – Chu Jul 11 '15 at 7:53
• Yeah. That's what is shown in the circuit, right? – Aditya Patil Jul 11 '15 at 8:27
• No, your circuit shows: $i_d = g_m (v_{gs} + bias)$. It should be $i_d = g_m v_{gs}$ – Chu Jul 11 '15 at 8:31
• Exactly, and it's the inclusion of direct voltages that makes it wrong. – Chu Jul 11 '15 at 8:33
The true answer to your question unfortunately involves some bits of advanced calculus. Small signal models are derived from a first-order multi-variable Taylor expansion of the true non-linear equations describing the actual circuit behavior. This process is called circuit linearization.
Let's consider a very simple example with only one independent variable. Assume you have a non-linear V-I relationship for a two-terminal component that can be expressed in some mathematical way, for example $i = i(v)$, where $i(v)$ represents the math relationship (a function). Regular (i.e. one-dimension) Taylor expansion of that relation around an arbitrary point $V_0$, gives:
$$i = i(V_0) + \dfrac{di}{dv}\bigg|_{V_0} \cdot (v-V_0) + R = i(V_0) + \dfrac{di}{dv}\bigg|_{V_0} \cdot \Delta v + R$$
where $R$ is an error term which depends on all the higher powers of $\Delta v = v - V_0$. The linearization consists in neglecting the higher order terms (R) and describe the component with the linearized equation:
$$i = i(V_0) + \dfrac{di}{dv}\bigg|_{V_0} \cdot \Delta v$$
This is useful, i.e. gives small errors, only if the variations are small (for a given definition of small). That's where the small signal hypothesis is used.
Keep well in mind that the linearization is done around a point, i.e. around some arbitrarily chosen value of the independent variable $v$ (that would be your quiescent point, in practice, i.e. your DC component). As you can see, the Taylor expansion requires computing the derivative of $i$ and evaluating it at the quiescent point $V_0$, giving rise to what in EE terms is a differential circuit parameter $\frac{di}{dv}\big|_{V_0}$. Let's call it $g$ (it is a conductance, and it is differential, so the lowercase g). Moreover, $g$ depends on the specific quiescent point chosen, so if we are really picky we should write $g(V_0)$.
Note, also, that $i(V_0)$ is the quiescent current, i.e. the current corresponding to the quiescent voltage. Hence we can call it simply $I_0$. Then we can rewrite the above linearized equation like this:
$$i = I_0 + g \cdot \Delta v \qquad\Leftrightarrow\qquad i - I_0 = g \cdot \Delta v \qquad\Leftrightarrow\qquad \Delta i = g \cdot \Delta v$$
where I defined $\Delta i = i - I_0$.
This latter equation describes how variations in the current relate to the corresponding variations in the voltage across the component. It is a simple linear relationship, where DC components are "embedded" in the variations and in the computation of the differential parameter g. If you translate this equation into a circuit element you'll find a simple resistor with a conductance g.
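To make the one-variable case concrete, here is a small numeric check (Python; the exponential I-V law and bias values are illustrative choices, not taken from the question): linearize $i(v)$ around $V_0$ and compare the linear prediction $I_0 + g\,\Delta v$ with the exact curve for a small and a large excursion.

```python
import math

# Illustrative nonlinear two-terminal law: i(v) = Is*(exp(v/VT) - 1)
IS, VT = 1e-12, 0.025

def i(v):
    return IS * (math.exp(v / VT) - 1)

V0 = 0.6                          # chosen quiescent point
I0 = i(V0)                        # quiescent current
g = IS / VT * math.exp(V0 / VT)   # di/dv evaluated at V0: the differential conductance

for dv in (0.001, 0.050):         # small vs. large excursion
    exact = i(V0 + dv)
    linear = I0 + g * dv          # linearized (small-signal) prediction
    print(f"dv = {dv*1000:4.0f} mV, relative error = {abs(linear - exact)/exact:.3%}")
```

For the 1 mV excursion the linear model tracks the exact curve closely; for the 50 mV excursion the neglected higher-order terms (the $R$ above) dominate, which is exactly why the model is called "small-signal".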
To answer your question directly: there is no trace of DC components in the linearized (i.e. small signals) equation, that's why they don't appear in the circuit.
The same procedure can be carried out with components with more terminals, but this requires handling more quantities and the Taylor expansion becomes unwieldy (it is multi-variable and partial derivatives pop out). The concept is the same, though.
Small signal models are nothing more than the circuit equivalent of the differential parameters obtained by linearizing the multi-variable non-linear model (equations) of the components you're dealing with.
To summarize:
• You choose a quiescent point (DC operating point): that's $V_0$
• You compute the dependent quantities at DC (DC analysis): you find $I_0$
• You linearize your circuit around that point using the DC OP data: you find $g$
• You solve the circuit for small variations (aka AC analysis) using only the differential (i.e. small-signal) model $g$.
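Applying those four steps to the common-source stage from the question yields the textbook result $A_v = -g_m R_D$. The sketch below (Python; the square-law device parameters are invented for illustration) runs the full nonlinear circuit and checks that the output change for a small input wiggle matches $-g_m R_D \, \Delta v_{gs}$, with VDD dropping out of the gain:

```python
# Common-source stage with a square-law NMOS in saturation (illustrative
# parameters): Id = 0.5*k*(Vgs - Vth)^2, Vout = VDD - Id*RD.
k, Vth = 2e-3, 0.7       # A/V^2, V  (made-up device values)
VDD, RD = 5.0, 2e3       # supply voltage and drain resistor

def vout(vgs):
    return VDD - 0.5 * k * (vgs - Vth) ** 2 * RD

# Steps 1-2: choose the operating point and compute the DC output
VGS0 = 1.2
V0 = vout(VGS0)

# Step 3: differential parameter at the OP: gm = d(Id)/d(Vgs) = k*(VGS0 - Vth)
gm = k * (VGS0 - Vth)

# Step 4: small-signal (AC) prediction vs. the full nonlinear circuit
dv = 1e-3                          # 1 mV input wiggle
predicted = -gm * RD * dv          # small-signal model: Av = -gm*RD
actual = vout(VGS0 + dv) - V0      # exact output change; VDD cancels here
print(gm * RD, predicted, actual)
```

Note that `actual` is a difference of two outputs, so VDD subtracts out: that is the numerical counterpart of "shorting" the supply in the small-signal diagram.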
• Thanks! I do understand how small signal models are derived and the underlying approximations. We fix a bias point and assume that the signal excursions are small enough for the circuit to be linear. If we are to draw a complete model, why not include the DC sources? As I've asked above, the equivalent model shows RD having the same voltage across it as Vout. Isn't it wrong? – Aditya Patil Jul 11 '15 at 11:23
• Thank you so much! Great explanation. It would be a great help if you can answer this: electronics.stackexchange.com/questions/177745/… – Aditya Patil Jul 11 '15 at 11:32
• Aditya Patil, assume a slightly improved model with finite rds (small signal dynamic resistance between D and S). In this case, the DC source would drive a current through rds which simply is wrong. Hence, in your model, you cannot apply the known classical rules for circuit analysis. – LvW Jul 11 '15 at 11:36
When calculating the gain of an amplifying stage, do the DC voltages appear in the gain formula? NO! However, the value of some of the parameters (like the transconductance gm) DEPENDS on DC parameters.
Therefore, it is the purpose of the so-called "small-signal" equivalent circuit diagram to visualize the interaction between all the relevant parameters - thus allowing us to derive the desired formulas (for gain, input and output impedances, ...).
But note that such an equivalent circuit diagram is valid for small signals only, because transistor parameters like gm are dynamic/differential values only. Moreover, it is valid for a fixed DC operating point only (but the DC values do not appear in the diagram).
Answering your last question: for our calculations (gain, input impedance, ...) we do not need these small-signal equivalent diagrams at all. But they can help to derive equations and to UNDERSTAND the interrelations between some parts of the circuit (wanted and unwanted feedback effects). These diagrams are a visualization of the known formulas which were derived from transistor physics - together with the known rules for circuit analysis (Ohm's and Kirchhoff's laws). And all these formulas can also be derived from the actual (real) circuit diagram.
• "When calculating the gain of an amplifying stage, do the DC voltages appear in the gain formula ? NO!" They don't appear because while deriving the expressions, DC voltages are ignored. That's exactly my question. Why!? – Aditya Patil Jul 11 '15 at 8:28
• No - they are not ignored! They determine the VALUES of some transistor parameters. And because gain, input and output impedances are also small-signal values, only small-signal parameters of the circuit must be taken into account. And the small-signal internal (dynamic, differential) source resistance of supply voltages is ZERO! – LvW Jul 11 '15 at 8:46
• I don't see why I should not get the same result by including DC sources as well. Could you please explain me? – Aditya Patil Jul 11 '15 at 8:50
• I think, the last three lines of your first contributions give the answer. You are asking: "Where am I going wrong" because your result is wrong - taking the DC bias into consideration. The transconductance gm is DETERMINED by the DC bias voltage, but you must not consider this DC value twice while including it again in the value of Vgs (as you did). – LvW Jul 11 '15 at 9:08
• Or - as another example - think of the BJT equivalent diagram with a finite small-signal input resistance rbe (h11, yie). Does the DC voltage at the base node drive a DC current through rbe? No - of course not! Because rbe applies to dynamic/differential values only! – LvW Jul 11 '15 at 9:12
https://www.acmicpc.net/problem/11146 | Time Limit | Memory Limit | Submissions | Accepted | Solvers | Ratio
3 seconds | 256 MB | 0 | 0 | 0 | 0.000%
## Problem
Proud Penguin (PP) is one of the hottest attractions in your city. Specializing in the arctic area, they let you see fish, seals, whales and, of course, penguins. The penguins being such a huge success, PP has decided to expand with a whole new area for the penguins to play around on. This new area is shaped as a long, narrow track, consisting of climbs and slides (with the height being highest at the ends, so that a travel always starts with a slide). The penguins will be allowed to enter on one side, and then travel to the other side by waddling, swimming and sliding. Connecting the two current areas, this will make the life of the penguins a lot more varied. Of course, the glass on the one side of the track will provide visitors with a lot more penguin action as well.
PP is now faced with one last problem in the planning stage of the expansion. How should they distribute water along the track? Due to the lazy nature of the penguins, they have decided to go for the solution where the highest climb will be as low as possible. At the same time, the board has decided to cut maintenance costs, and have set a maximum limit on the amount of water to be used.
You are appointed with the task of finding an optimal water distribution along the track. Given the height of the track at evenly spaced intervals and the maximum amount of water you can use, what is the lowest possible maximum climb you will have to leave the penguins with?
Figure 1: Measuring the height of climbs
## Input
The first line of input contains a single integer T, the number of test cases to follow. Each test case begins with a line containing two integer numbers, N, the length of the track for that test case, and W the amount of water available. Then follow a line containing N + 1 integers ai, where a0 describes the height at the left side, a1 the height one unit from the left side, and so on until number aN, which describes the height at the right side of the track.
• 0 < T ≤ 100
• 0 < N ≤ 10000
• 0 ≤ W ≤ 1000000
• 0 ≤ ai ≤ 100
• a0 = aN = 100
• In your calculations, assume that the width of the track is always one unit, and that the track between two points is a straight line.
• A climb starts at water level or at any land point, and continues upwards until no adjacent point is higher than the current one (i.e. a strictly increasing path).
• The height of a climb is defined as the height difference between its lowest and highest point.
• Keep in mind that the penguins will want to travel in both directions.
• Assume that water always runs to a lower adjacent point.
• An error of up to 10^-6 will be accepted in the output.
## Output
For each test case output one line containing a single number, the height of the lowest possible maximum climb for the track in that test case.
## Sample Input
2
2 0
100 34 100
5 25
100 70 90 60 75 100
## Sample Output
66
19.573186408981247
http://crypto.stackexchange.com/questions/11188/what-block-cipher-is-used-for-cbc-mac?answertab=active | # What block cipher is used for CBC-MAC?
What block cipher is used for CBC-MAC? DES, AES, 3DES? Or it doesn't matter?
Well, yes, it does matter; however the terminology 'CBC-MAC' does not specify which.
CBC-MAC is a generic construction that takes an arbitrary block cipher, and turns it into an object that acts like a MAC for fixed length messages (much like CBC mode is a generic construction that takes an arbitrary block cipher, and turns it into an object that encrypts variable length messages). And, just like "CBC" isn't necessarily used with a specific block cipher, neither is CBC-MAC.
Note: CBC-MAC has issues if you try to use it with variable length messages; CMAC and XCBC are two modes similar to CBC-MAC that avoid this problem.
I prefer to say that CBC-MAC is a construction that constructs a MAC from an arbitrary block cipher. If the block cipher is secure and we restrict to fixed-length messages, the MAC is also secure. For variable-length messages, the MAC is insecure regardless of block cipher. – K.G. Oct 22 '13 at 11:48
CBC-MAC is a MAC construction based on a block cipher. Any block cipher will do, but the security of the scheme is reducible to the security of the block cipher. To put it more precisely, any block cipher will make a secure CBC-MAC as long as that block cipher is a secure pseudorandom permutation.
Actually, the security of CBC-MAC as a MAC is not reducible to the security of the block cipher. For example, if we ignore padding, and $A$ and $B$ are messages with lengths a multiple of the block size, then if $CBCMAC(A) = X$ and $CBCMAC(A|X) = Y$ and $CBCMAC(B) = Z$, then we can deduce that $CBCMAC(B|Z) = Y$; this is a violation of the MAC properties. Including padding complicates this attack, but not enough to make it infeasible. – poncho Oct 21 '13 at 20:11
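The forgery described in that comment can be checked mechanically. A toy sketch (a hash stands in for the block cipher — NOT secure, illustration only — with a zero IV and no padding):

```python
import hashlib

BLOCK = 16

def E(key: bytes, block: bytes) -> bytes:
    """Toy stand-in for a keyed block cipher (NOT secure, illustration only)."""
    return hashlib.sha256(key + block).digest()[:BLOCK]

def cbc_mac(key: bytes, msg: bytes) -> bytes:
    """Plain CBC-MAC with zero IV over whole blocks."""
    assert len(msg) % BLOCK == 0
    state = bytes(BLOCK)
    for i in range(0, len(msg), BLOCK):
        block = msg[i:i + BLOCK]
        state = E(key, bytes(a ^ b for a, b in zip(state, block)))
    return state

key = b"k" * 16
A, B = b"a" * BLOCK, b"b" * BLOCK
X = cbc_mac(key, A)          # tag of A
Y = cbc_mac(key, A + X)      # tag of A|X
Z = cbc_mac(key, B)          # tag of B
# Forgery: without ever querying B|Z we can predict its tag is Y,
# because E(Z xor Z) = E(X xor X) = E(0).
assert cbc_mac(key, B + Z) == Y
print("forgery verified: CBCMAC(B|Z) == CBCMAC(A|X)")
```

The algebra is the same for any block cipher: the second block of A|X is X, which cancels the chaining value X, so both chains end at E(0).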
Yeah but this violates the definition of CBC-MAC, namely that it acts on messages of fixed length. – pg1989 Oct 21 '13 at 21:43
CBC-MAC is fine with variable length messages, as long as you never have one message that's a prefix of another. For example a length prefix can achieve that if you know the length of the message beforehand. – CodesInChaos Oct 22 '13 at 16:39
https://solvedlib.com/e-fill-in-the-following-table-on-the-total-and,393675 | # E. Fill in the following table on the total and marginal utilities of a certain good, product...
###### Question:
e. Fill in the following table on the total and marginal utilities of a certain good, product A. Use your results to answer f, g, h, i.

Quantity of Product A | Total Utility | Marginal Utility
0 20 1 35 10 4 0 45 6 7 35 -15 8

f. When should a reasonable person stop consuming product A? Explain.

g. Graph the total utility and marginal utility together on the same graph.

h. Explain the relationship between TU and MU. TU is at its peak when MU = ____. When MU is positive, TU is ____. When MU is negative, then TU is ____.

i. Identify the point where TU is maximized.
https://discourse.julialang.org/t/fix-vs-constraint-in-jump/64509 | # Fix vs @constraint in JuMP
Dear All,
I am trying to solve a mixed-integer optimization problem in JuMP and Gurobi. I have a variable $x_{i,j}$ over $i,j \in \{1,2,\ldots,n\}$, where $n$ is a fairly large number. However, for this particular problem, over an index set

$$P=\{(i,j) \mid (i,j) \textrm{ satisfying some property}\},$$

I have $x_{i,j} = 0$ for all $(i,j) \in P$.
Because there are other constraints involving x_{i,j}, it is more convenient if I set those values over P to zero through JuMP rather than hard coding them. I see there are two ways I can do that in JuMP, one is via fix and the other one is through @constraint macro, e.g.,
for (i, j) in P
    @constraint(model_name, x[i, j] == 0.0)
end
or
for (i, j) in P
    fix(x[i, j], 0.0; force = true)
end
For better performance, which one would be more suitable? Any tips/suggestions will be much appreciated!
@constraint adds a new linear constraint (row to the constraint matrix).
fix modifies variable bounds.
You should almost always use fix.
Opened a PR to clarify the docs: [docs] suggest fix over a new constraint by odow · Pull Request #2645 · jump-dev/JuMP.jl · GitHub
Thanks so much, @odow !
http://mathhelpforum.com/advanced-algebra/168502-part-eigenvector-problem.html | # Math Help - Part of an eigenvector problem...
1. ## Part of an eigenvector problem...
Upon solving an eigenvector problem, I've been summoned to find the kernel of the following matrix:
$$\left( {\begin{array}{cc} -1-\sqrt{3}i & 2i \\ 2i & 1-\sqrt{3}i \\ \end{array} } \right)$$
Perhaps I'm just sleep deprived, but I'm having trouble getting this into row-echelon form. Any advice?
2. Originally Posted by Glitch
Upon solving an eigenvector problem, I've been summoned to find the kernel of the following matrix:
$$\left( {\begin{array}{cc} -1-\sqrt{3}i & 2i \\ 2i & 1-\sqrt{3}i \\ \end{array} } \right)$$
Perhaps I'm just sleep deprived, but I'm having trouble getting this into row-echelon form. Any advice?
Try to solve...:
$$\left( {\begin{array}{cc} -1-\sqrt{3}i & 2i \\ 2i & 1-\sqrt{3}i \\ \end{array} } \right) \begin{pmatrix} x_1+iy_1 \\ x_2+iy_2 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}$$
3. $\textrm{r}(A)=1\Rightarrow \dim(\ker A)=2-1=1$
So, use only the second equation (for example) and do $x_2=1$ (for example) .You'll obtain a basis of $\ker A$ .
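For instance (one possible normalization, carrying out the hint above): the second equation with $x_2$ free gives

$2i\,x_1+(1-\sqrt{3}i)\,x_2=0 \;\Rightarrow\; x_1=-\dfrac{1-\sqrt{3}i}{2i}\,x_2=\dfrac{\sqrt{3}+i}{2}\,x_2$

so choosing $x_2=2$ yields the basis $\left\{\begin{pmatrix}\sqrt{3}+i\\ 2\end{pmatrix}\right\}$ of $\ker A$. As a check against the first equation: $(-1-\sqrt{3}i)(\sqrt{3}+i)+2i\cdot 2=-4i+4i=0$.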
Fernando Revilla
http://pointclouds.org/documentation/tutorials/random_sample_consensus.php | How to use Random Sample Consensus model
In this tutorial we learn how to use a RandomSampleConsensus with a plane model to obtain the cloud fitting to this model.
Theoretical Primer
The abbreviation of “RANdom SAmple Consensus” is RANSAC, and it is an iterative method that is used to estimate parameters of a mathematical model from a set of data containing outliers. This algorithm was published by Fischler and Bolles in 1981. The RANSAC algorithm assumes that all of the data we are looking at is comprised of both inliers and outliers. Inliers can be explained by a model with a particular set of parameter values, while outliers do not fit that model in any circumstance. Another necessary assumption is that a procedure which can optimally estimate the parameters of the chosen model from the data is available.
From [Wikipedia]:
The input to the RANSAC algorithm is a set of observed data values, a parameterized model which can explain or be fitted to the observations, and some confidence parameters.
RANSAC achieves its goal by iteratively selecting a random subset of the original data. These data are hypothetical inliers and this hypothesis is then tested as follows:
1. A model is fitted to the hypothetical inliers, i.e. all free parameters of the model are reconstructed from the inliers.
2. All other data are then tested against the fitted model and, if a point fits well to the estimated model, also considered as a hypothetical inlier.
3. The estimated model is reasonably good if sufficiently many points have been classified as hypothetical inliers.
4. The model is reestimated from all hypothetical inliers, because it has only been estimated from the initial set of hypothetical inliers.
5. Finally, the model is evaluated by estimating the error of the inliers relative to the model.
This procedure is repeated a fixed number of times, each time producing either a model which is rejected because too few points are classified as inliers or a refined model together with a corresponding error measure. In the latter case, we keep the refined model if its error is lower than the last saved model.
An advantage of RANSAC is its ability to do robust estimation of the model parameters, i.e., it can estimate the parameters with a high degree of accuracy even when a significant number of outliers are present in the data set. A disadvantage of RANSAC is that there is no upper bound on the time it takes to compute these parameters. When the number of iterations computed is limited the solution obtained may not be optimal, and it may not even be one that fits the data in a good way. In this way RANSAC offers a trade-off; by computing a greater number of iterations the probability of a reasonable model being produced is increased. Another disadvantage of RANSAC is that it requires the setting of problem-specific thresholds.
RANSAC can only estimate one model for a particular data set. As for any one-model approach when two (or more) models exist, RANSAC may fail to find either one.
The pictures to the left and right (From [Wikipedia]) show a simple application of the RANSAC algorithm on a 2-dimensional set of data. The image on our left is a visual representation of a data set containing both inliers and outliers. The image on our right shows all of the outliers in red, and shows inliers in blue. The blue line is the result of the work done by RANSAC. In this case the model that we are trying to fit to the data is a line, and it looks like it’s a fairly good fit to our data.
The code
Create a file, let’s say, random_sample_consensus.cpp in your favorite editor and place the following inside:
#include <iostream>
#include <pcl/console/parse.h>
#include <pcl/filters/extract_indices.h>
#include <pcl/io/pcd_io.h>
#include <pcl/point_types.h>
#include <pcl/sample_consensus/ransac.h>
#include <pcl/sample_consensus/sac_model_plane.h>
#include <pcl/sample_consensus/sac_model_sphere.h>
#include <pcl/visualization/pcl_visualizer.h>
#include <boost/thread/thread.hpp>

boost::shared_ptr<pcl::visualization::PCLVisualizer>
simpleVis (pcl::PointCloud<pcl::PointXYZ>::ConstPtr cloud)
{
  // --------------------------------------------
  // -----Open 3D viewer and add point cloud-----
  // --------------------------------------------
  boost::shared_ptr<pcl::visualization::PCLVisualizer> viewer (new pcl::visualization::PCLVisualizer ("3D Viewer"));
  viewer->setBackgroundColor (0, 0, 0);
  viewer->addPointCloud<pcl::PointXYZ> (cloud, "sample cloud");
  viewer->setPointCloudRenderingProperties (pcl::visualization::PCL_VISUALIZER_POINT_SIZE, 3, "sample cloud");
  //viewer->addCoordinateSystem (1.0, "global");
  viewer->initCameraParameters ();
  return (viewer);
}

int
main(int argc, char** argv)
{
  // initialize PointClouds
  pcl::PointCloud<pcl::PointXYZ>::Ptr cloud (new pcl::PointCloud<pcl::PointXYZ>);
  pcl::PointCloud<pcl::PointXYZ>::Ptr final (new pcl::PointCloud<pcl::PointXYZ>);

  // populate our PointCloud with points
  cloud->width    = 500;
  cloud->height   = 1;
  cloud->is_dense = false;
  cloud->points.resize (cloud->width * cloud->height);
  for (size_t i = 0; i < cloud->points.size (); ++i)
  {
    if (pcl::console::find_argument (argc, argv, "-s") >= 0 || pcl::console::find_argument (argc, argv, "-sf") >= 0)
    {
      cloud->points[i].x = 1024 * rand () / (RAND_MAX + 1.0);
      cloud->points[i].y = 1024 * rand () / (RAND_MAX + 1.0);
      if (i % 5 == 0)
        cloud->points[i].z = 1024 * rand () / (RAND_MAX + 1.0);
      else if(i % 2 == 0)
        cloud->points[i].z =  sqrt( 1 - (cloud->points[i].x * cloud->points[i].x)
                                      - (cloud->points[i].y * cloud->points[i].y));
      else
        cloud->points[i].z =  - sqrt( 1 - (cloud->points[i].x * cloud->points[i].x)
                                        - (cloud->points[i].y * cloud->points[i].y));
    }
    else
    {
      cloud->points[i].x = 1024 * rand () / (RAND_MAX + 1.0);
      cloud->points[i].y = 1024 * rand () / (RAND_MAX + 1.0);
      if( i % 2 == 0)
        cloud->points[i].z = 1024 * rand () / (RAND_MAX + 1.0);
      else
        cloud->points[i].z = -1 * (cloud->points[i].x + cloud->points[i].y);
    }
  }

  std::vector<int> inliers;

  // created RandomSampleConsensus object and compute the appropriated model
  pcl::SampleConsensusModelSphere<pcl::PointXYZ>::Ptr
    model_s(new pcl::SampleConsensusModelSphere<pcl::PointXYZ> (cloud));
  pcl::SampleConsensusModelPlane<pcl::PointXYZ>::Ptr
    model_p (new pcl::SampleConsensusModelPlane<pcl::PointXYZ> (cloud));
  if(pcl::console::find_argument (argc, argv, "-f") >= 0)
  {
    pcl::RandomSampleConsensus<pcl::PointXYZ> ransac (model_p);
    ransac.setDistanceThreshold (.01);
    ransac.computeModel();
    ransac.getInliers(inliers);
  }
  else if (pcl::console::find_argument (argc, argv, "-sf") >= 0 )
  {
    pcl::RandomSampleConsensus<pcl::PointXYZ> ransac (model_s);
    ransac.setDistanceThreshold (.01);
    ransac.computeModel();
    ransac.getInliers(inliers);
  }

  // copies all inliers of the model computed to another PointCloud
  pcl::copyPointCloud<pcl::PointXYZ>(*cloud, inliers, *final);

  // creates the visualization object and adds either our original cloud or all of the inliers
  // depending on the command line arguments specified.
  boost::shared_ptr<pcl::visualization::PCLVisualizer> viewer;
  if (pcl::console::find_argument (argc, argv, "-f") >= 0 || pcl::console::find_argument (argc, argv, "-sf") >= 0)
    viewer = simpleVis(final);
  else
    viewer = simpleVis(cloud);
  while (!viewer->wasStopped ())
  {
    viewer->spinOnce (100);
    boost::this_thread::sleep (boost::posix_time::microseconds (100000));
  }
  return 0;
}
The explanation
The following source code initializes two PointClouds and fills one of them with points. The majority of these points are placed in the cloud according to a model, but a fraction (1/5) of them are given arbitrary locations.
// initialize PointClouds
pcl::PointCloud<pcl::PointXYZ>::Ptr cloud (new pcl::PointCloud<pcl::PointXYZ>);
pcl::PointCloud<pcl::PointXYZ>::Ptr final (new pcl::PointCloud<pcl::PointXYZ>);
// populate our PointCloud with points
cloud->width = 500;
cloud->height = 1;
cloud->is_dense = false;
cloud->points.resize (cloud->width * cloud->height);
for (size_t i = 0; i < cloud->points.size (); ++i)
{
if (pcl::console::find_argument (argc, argv, "-s") >= 0 || pcl::console::find_argument (argc, argv, "-sf") >= 0)
{
cloud->points[i].x = 1024 * rand () / (RAND_MAX + 1.0);
cloud->points[i].y = 1024 * rand () / (RAND_MAX + 1.0);
if (i % 5 == 0)
cloud->points[i].z = 1024 * rand () / (RAND_MAX + 1.0);
else if(i % 2 == 0)
cloud->points[i].z = sqrt( 1 - (cloud->points[i].x * cloud->points[i].x)
- (cloud->points[i].y * cloud->points[i].y));
else
cloud->points[i].z = - sqrt( 1 - (cloud->points[i].x * cloud->points[i].x)
- (cloud->points[i].y * cloud->points[i].y));
}
else
{
cloud->points[i].x = 1024 * rand () / (RAND_MAX + 1.0);
cloud->points[i].y = 1024 * rand () / (RAND_MAX + 1.0);
if( i % 2 == 0)
cloud->points[i].z = 1024 * rand () / (RAND_MAX + 1.0);
else
cloud->points[i].z = -1 * (cloud->points[i].x + cloud->points[i].y);
}
}
Next we create a vector of ints that can store the locations of our inlier points from our PointCloud and now we can build our RandomSampleConsensus object using either a plane or a sphere model from our input cloud.
std::vector<int> inliers;
// created RandomSampleConsensus object and compute the appropriated model
pcl::SampleConsensusModelSphere<pcl::PointXYZ>::Ptr
model_s(new pcl::SampleConsensusModelSphere<pcl::PointXYZ> (cloud));
pcl::SampleConsensusModelPlane<pcl::PointXYZ>::Ptr
model_p (new pcl::SampleConsensusModelPlane<pcl::PointXYZ> (cloud));
if(pcl::console::find_argument (argc, argv, "-f") >= 0)
{
pcl::RandomSampleConsensus<pcl::PointXYZ> ransac (model_p);
ransac.setDistanceThreshold (.01);
ransac.computeModel();
ransac.getInliers(inliers);
}
else if (pcl::console::find_argument (argc, argv, "-sf") >= 0 )
{
pcl::RandomSampleConsensus<pcl::PointXYZ> ransac (model_s);
ransac.setDistanceThreshold (.01);
ransac.computeModel();
ransac.getInliers(inliers);
}
This last bit of code copies all of the points that fit our model to another cloud and then display either that or our original cloud in the viewer.
// copies all inliers of the model computed to another PointCloud
pcl::copyPointCloud<pcl::PointXYZ>(*cloud, inliers, *final);
// creates the visualization object and adds either our original cloud or all of the inliers
// depending on the command line arguments specified.
boost::shared_ptr<pcl::visualization::PCLVisualizer> viewer;
if (pcl::console::find_argument (argc, argv, "-f") >= 0 || pcl::console::find_argument (argc, argv, "-sf") >= 0)
viewer = simpleVis(final);
else
viewer = simpleVis(cloud);
There is some extra code that relates to the display of the PointClouds in the 3D Viewer, but I’m not going to explain that here.
Compiling and running the program
Add the following lines to your CMakeLists.txt file:
cmake_minimum_required(VERSION 2.8 FATAL_ERROR)

project(random_sample_consensus)

find_package(PCL 1.2 REQUIRED)

include_directories(${PCL_INCLUDE_DIRS})
link_directories(${PCL_LIBRARY_DIRS})
add_definitions(${PCL_DEFINITIONS})

add_executable (random_sample_consensus random_sample_consensus.cpp)
target_link_libraries (random_sample_consensus ${PCL_LIBRARIES})
After you have built the executable, you can run it. Simply do:
$ ./random_sample_consensus

to have a viewer window display that shows you the original PointCloud (with outliers) we have created. Hit ‘r’ on your keyboard to scale and center the viewer. You can then click and drag to rotate the view. You can tell there is very little organization to this PointCloud and that it contains many outliers. Pressing ‘q’ on your keyboard will close the viewer and end the program. Now if you run the program with the following argument:

$ ./random_sample_consensus -f
the program will display only the indices of the original PointCloud which satisfy the particular model we have chosen (in this case plane) as found by RandomSampleConsensus in the viewer.

Again hit ‘r’ to scale and center the view and then click and drag with the mouse to rotate around the cloud. You can see there are no longer any points that do not lie within the plane model in this PointCloud. Hit ‘q’ to exit the viewer and program.
There is also an example using a sphere in this program. If you run it with:
$ ./random_sample_consensus -s

It will generate and display a spherical cloud and some outliers as well. Then when you run the program with:

$ ./random_sample_consensus -sf
It will show you the result of applying RandomSampleConsensus to this data set with a spherical model.
[Wikipedia] (1, 2) http://en.wikipedia.org/wiki/RANSAC
https://prijom.com/posts/how-to-find-height-of-volcanoes.php | # How To Find Height Of Volcanoes
Several volcanoes have been observed erupting on the surface of Jupiter's closest moon, Io?
The acceleration of gravity near the surface of Io can be computed from Newton's Law of Gravity:
g = G M / r^2
where G is the Newton gravity constant, M the mass of Io, and r the radius of Io.
You can also find g from kinematics using the information given.
2 g h = v^2 - vo^2
g = acceleration of gravity near the surface of Io (unknown)
h = height reached by ejected material (given)
v = final velocity of the ejected material (zero)
vo = initial velocity of ejected material (given)
So solve for g, then use Newton's Law of Gravity to find the mass of Io.
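As a numeric sketch of those two steps (the height h and launch speed vo from the original problem statement are not reproduced here, so the values below are placeholders; Io's radius is about 1821.6 km):

```python
G = 6.674e-11   # Newton's gravitational constant, m^3 kg^-1 s^-2
r = 1.8216e6    # radius of Io in metres

# Placeholder observation values -- substitute the numbers given
# in the actual problem statement:
vo = 1000.0     # initial speed of ejected material, m/s
h  = 2.8e5      # maximum height reached, m

# Kinematics with v = 0 at the top: 2*g*h = vo^2 - 0
g = vo**2 / (2 * h)

# Newton's law of gravity: g = G*M/r^2  =>  M = g*r^2/G
M = g * r**2 / G
print(f"g = {g:.3f} m/s^2, M = {M:.3e} kg")  # with these inputs: g ≈ 1.786, M ≈ 8.9e22
```

With the placeholder numbers chosen here the result happens to land near Io's actual mass (~8.9e22 kg), but the point is the method: kinematics gives g, then Newton's law gives M.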
How high can a volcano get?
Pretty high; that's a lot of smoke.
How does magma get all the way to the surface to form a volcano?
Magma forms in the lower crust and upper mantle, no deeper than about 200 km. It exists as a free-flowing, fluid-like substance in surroundings that are more structured and stable; this is mainly due to the addition of heat and pressure, which melts the pre-existing rock. Once formed, the magma collects into a magma chamber or rises as a plume beneath a hot spot, and from there it can take different paths.

With a magma chamber, which is traditionally where magma extruding from a volcano comes from, plate tectonics is the driver. Generally a subducting plate, most likely oceanic, sinks below the lighter continental plate, causing magma to rise. The continental plate overrides the oceanic plate and interacts with the magma chamber, finally causing the magma to extrude along the volcanic arc.

Hot spots and magma plumes account for the remaining roughly 5% of volcanic activity. Hawaii is an example, as is the supervolcano under Yellowstone National Park; these volcanoes are products of magma plumes. A magma plume is essentially a growing pocket of magma that melts its surrounding rock and grows in size, then forms a hot spot, a vertical expansion of magma toward the surface. The magma gently flows onto the surface, cools rapidly, and forms new land. Yellowstone, however, has produced massive eruptions in the past from huge pressure build-up; if it were to erupt today it could kill 90,000 people while also producing a mini ice age.
Why are the mountains and volcanoes on Mars so much taller than those on Earth?
Several things contribute to Mars's huge volcanic peaks. Mercury does not have them, and it is smaller than Earth too. Venus's are smaller than Earth's, yet Venus is lighter and so might be expected to have higher volcanoes. So what gives?

Lower gravity allows volcanoes to grow higher, if you have them at all. Mars is large enough to have had volcanoes in its past (though it does not have active volcanoes now). Weathering is very low on Mars, so large volcanoes stay large. Mars also probably didn't have any (or much) plate tectonics in its past, which means heat from the interior had only a few places to escape (hot spots) rather than all sorts of places to leak out (like the Ring of Fire).

Combine all of this together and you get a small number of huge eruptions on a low-gravity planet with very little weathering. Voila: Olympus Mons.
How do hot spot volcanoes and convergence zone volcanoes differ?
Any kind of volcano is a consequence, not a cause. Volcanoes cannot form without "magmatic support" from somewhere below, and the primary lava/magma feed mechanism also leaves distinct chemical signatures, as described in the sections below.

Hot spot volcanoes

Hot spot volcanoes can form anywhere a mantle plume / hot spot occurs, and they remain active for as long as the plume is active. Case in point: the Hawaiian Island chain of volcanic islands. The hot spot is currently under Hawaii Island, the "home" of five volcanoes, two of which are still active: Mauna Loa and Kilauea. As the Pacific plate moves with respect to the hot spot, the volcanoes that were active fade and the "next new one" takes over. A new Hawaiian island, called Lo'ihi, is also forming south of Hawaii Island; its summit has not yet risen above sea level.

The hot spot under Hawaii has been persistent over millions of years. Mauna Loa itself took approximately one million years to build from the ocean floor to a height of 13,800 feet above sea level. Note that Mauna Loa rises 33,000 feet above the seafloor and, since its weight has depressed the seafloor, it stands 56,000 feet above its base. Hot spot volcanoes typically erupt a highly mafic / basaltic magma, very similar in chemical composition to the mantle itself.

Convergent volcanoes

Convergent zone volcanoes are built where seafloor dives under a plate boundary. As one plate dives under the other, its leading edge travels into the deep lithosphere and sometimes into the asthenosphere. As it dives it heats up and melts, which provides the magmatic support for volcanoes along the edge of the surviving plate (normally continental, but not always). Where the melted magma rises, it also melts the rocks it encounters in the continental plate and produces a more felsic magma.

Felsic magmas contain a lot of entrained gases and give rise to more explosive eruptions, as well as stratovolcano cones (layered with cinders and lava, for example). The amount of magma is proportional to the amount of plate subducted as it moves and melts; active movement yields (on its own timescale) magma to support eruptions. More often than not, convergent volcanoes have eruptive periods with perhaps long pauses in their activity, especially if the plates have locked together (neither can move and the pressure is building). This is a brief synopsis of the two types you asked about.
What are volcanic mountains? How do they form?
All volcanoes are formed when magma from below infiltrates the upper layers of the crust. You get different kinds of volcano depending on the magma type and the crust type (and these are inseparable from one another under normal circumstances).

With "mafic" magmas, which are dark, relatively low in silica, and hence high in melting point, you tend to get very low, wide, gently sloping volcanoes that aren't even recognizable as mountains most of the time. This is because mafic magma (technically mafic lava once it hits the surface) is extremely hot and inviscid, so it flows a long way before cooling down. This is the sort of eruption you usually see in Hawaii-type volcanoes, and the end result is basalt. You get these in basaltic crust, which mostly means oceanic crust, though there is such a thing as a "large igneous province," in which massive, continuous eruptions of mafic magma flood areas hundreds of square kilometers in size on land.

The other type of magma is "felsic." This is your sticky, viscous, high-silica type that's prone to violent explosions because of its trapped gas content. This stuff is what builds your classic cone-shaped volcano. You find it under continents because continental crust tends to be made of more silica-rich minerals than oceanic crust. It doesn't flow as far or as fast as mafic lava, so it tends to accumulate into mountains.
Volcanic Lava Fountains (Derivatives)?
It's funny, I'm on the Big Island of Hawaii for the summer and I just hiked across Kilauea Iki (now dormant) a few days ago.
v0 is supposed to be a constant.
It is the exit velocity you are looking for, and you can assume it stays constant if the height of the fountain does not change.
The derivative is ds/dt = v0 - 32 * t, as you found out yourself.
Now let's go back to the physics: the lava reaches the apex of the fountain when its velocity is zero. That gives you a relation between v0 and ta, the time at which the lava reaches the apex:
ds/dt = 0 = v0 - 32*ta (1)
And you also have data about the ultimate height of the fountain, given by:
ha = v0*ta - 16*ta^2 = 1900 ft (2)
With these two equations, you can solve for v0 (and ta) :
(1) => v0 = 32*ta => ta = v0/32
(2) => v0*ta - 16*ta*ta = 1900
By replacing ta you find :
1/32*v0^2 - 1/64*v0^2 = 1900
v0^2 = 1900 * 64 = 121600, so v0 ≈ 348.7 ft/s
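A quick numeric check of the algebra above (units in feet and seconds, with g = 32 ft/s² as used in the derivation):

```python
import math

# Solve for the exit velocity v0 from v0^2 = 1900 * 64 (derived above).
v0 = math.sqrt(1900 * 64)        # ft/s
ta = v0 / 32                     # time to reach the apex, s (from v0 = 32*ta)

# Verify the apex height: ha = v0*ta - 16*ta^2 should recover 1900 ft.
ha = v0 * ta - 16 * ta**2
print(f"v0 = {v0:.1f} ft/s, ta = {ta:.1f} s, apex = {ha:.0f} ft")
```

Substituting ta = v0/32 back in gives ha = v0²/64 = 121600/64 = 1900 ft exactly, confirming the two relations are consistent.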
Assuming gravity on Earth is 9.81 m/s² and on Mars is 3.71 m/s², and using the equation $v^2-v_0^2 = 2a\Delta y$, it is clear that the answer is simply the inverse ratio of the two accelerations, i.e. $\frac{9.81}{3.71}H = 2.64H$. Note that on Mars the actual height would be greater than this, because the atmosphere on Mars is much thinner than Earth's.
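The same ratio in a couple of lines (using the g values stated above, and ignoring atmospheric drag as the answer notes):

```python
g_earth, g_mars = 9.81, 3.71   # surface gravity, m/s^2

# From v0^2 = 2 g h with the same launch speed on both planets:
#   h_mars / h_earth = g_earth / g_mars
ratio = g_earth / g_mars
print(f"h_mars = {ratio:.2f} * H")
```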
https://scientiamarina.revistas.csic.es/index.php/scientiamarina/article/download/1611/2033?inline=1
Stock assessment for the western winter-spring cohort of neon flying squid (Ommastrephes bartramii) using environmentally dependent surplus production models
Jintao Wang 1,5, Wei Yu 1,5, Xinjun Chen 1,2,3,5, Yong Chen 4,5
1 College of Marine Sciences, Shanghai Ocean University, 999 Hucheng Ring Road, Lingang New City, Shanghai 201306, China. E-mail: xjchen@shou.edu.cn
2 National Engineering Research Centre for Oceanic Fisheries, Shanghai Ocean University, 999 Hucheng Ring Road, Lingang New City, Shanghai 201306, China.
3 Key Laboratory of Sustainable Exploitation of Oceanic Fisheries Resources, Ministry of Education, Shanghai Ocean University, 999 Hucheng Ring Road, Lingang New City, Shanghai 201306, China.
4 School of Marine Sciences, University of Maine, Orono, Maine 04469, USA.
5 Collaborative Innovation Centre for National Distant-water Fisheries, 999 Hucheng Ring Road, Lingang New City, Shanghai 201306, China.
Summary: The western winter-spring cohort of neon flying squid, Ommastrephes bartramii, is targeted by Chinese squid-jigging fisheries in the northwest Pacific from August to November. Because this squid has a short lifespan and is an ecological opportunist, the dynamics of its stock are greatly influenced by environmental conditions, which need to be considered in its assessment and management. In this study, an environmentally dependent surplus production (EDSP) model was developed to evaluate the stock dynamics of O. bartramii. Temporal variability of favourable spawning habitat with sea surface temperature (SST) of 21-25°C (Ps) was assumed to influence carrying capacity (K), while temporal variability in favourable feeding habitat areas with different SST ranges in different months (Pf) was assumed to influence the intrinsic growth rate (r). The parameters K and r in the EDSP model were thus assumed to be linked to temporal variability in the proportions Ps and Pf, respectively. According to Deviance Information Criterion values, the estimated EDSP model with Ps was considered better than the conventional surplus production model or the other EDSP models. For this model, the maximum sustainable yield (MSY) varied from 210000 to 262500 t and the biomass at MSY level varied from 360000 to 450000 t. The fishing mortality rates of O. bartramii from 2003 to 2013 were much lower than the fishing mortality at target level and MSY level (Ftar and FMSY), and the stock biomass was higher than BMSY, suggesting that this squid was not experiencing overfishing and the stock was not overfished. The management reference points in the EDSP model for O. bartramii were more conservative than those in the conventional model. This study suggests that the environmental conditions on the spawning grounds should be considered in squid stock assessment and management in the northwest Pacific Ocean.
Keywords: Ommastrephes bartramii; stock assessment; surplus production model; environmental factors; Northwest Pacific Ocean.
Evaluación de la cohorte occidental de invierno-primavera del calamar volador neon (Ommastrephes bartramii) utilizando modelos de producción excedente dependientes del medio ambiente
Palabras clave: Ommastrephes bartramii; evaluación de stock; modelo de producción excedente; factores ambientales; Océano Pacífico Noroeste.
Citation/Como citar este artículo: Wang J., Yu W., Chen X., Chen Y. 2016. Stock assessment for the western winter-spring cohort of neon flying squid (Ommastrephes bartramii) using environmentally dependent surplus production models. Sci. Mar. 80(1): 69-78. doi: http://dx.doi.org/10.3989/scimar.04205.11A
Editor: W. Norbis.
Received: January 7, 2015. Accepted: October 14, 2015. Published:
INTRODUCTION
The neon flying squid, Ommastrephes bartramii, is an economically important oceanic species widely distributed in the northwest Pacific Ocean (, ). This squid has been commercially exploited by Japanese squid-jigging fleets since 1974, and later by South Korea and Taiwan province of China. In 1993, the Chinese mainland squid-jigging fleets began exploratory fishing to investigate the abundance of O. bartramii in waters bounded by 38-42°N and 140-150°E. In 1999, several efforts further extended the fishing grounds eastward to 175°W (, ). In general, Chinese squid-jigging vessels mainly fish in the regions between 170°W and 175°W in June and July, and then shift to waters west of 165°E from August to November (). The total annual production of squid caught by Chinese mainland ranged from 36764 to 113200 t from 2003 to 2013.
The North Pacific population of O. bartramii has been classified into four stocks: the central stock of the autumn cohort, the eastern stock of the autumn cohort, the western stock of the winter-spring cohort and the central-eastern stock of the winter-spring cohort (). Of the four stocks, the western winter-spring cohort of O. bartramii has become a traditional fishing target for the Chinese squid-jigging fleets in water between 150 and 165°E (). This cohort migrates from subtropical waters to the subarctic boundary during the first half of the summer and then moves northward into the subarctic domain from August to November. The squid mature gradually in autumn and are thought to begin their spawning migration in October and November (, ).
Fishery biology, abundance and fishing ground distribution of O. bartramii have been well studied over the last few decades (, , , , , ). Squid abundance and distribution are found to be significantly influenced by environmental conditions on the spawning and feeding grounds. For example, evaluated the sea surface temperature anomaly (SSTA) on the spawning and feeding grounds of O. bartramii, and concluded that high SSTA caused by La Niña events would lead to low recruitment, while the SSTA in an El Niño year tended to be normal and lead to high recruitment. Variability in the SST on the feeding ground could also result in different spatial distribution of the squid fishing ground. examined the variations in the proportion of thermal habitats with favourable sea surface temperature areas (PFSSTA) in 1995-2004, and suggested that PFSSTA in February on the spawning ground and from August to November on the feeding ground could explain about 60% of the variability in the abundance of O. bartramii. Additionally, developed a habitat suitability index (HSI) model to identify the optimal habitat in relation to the oceanographic conditions, including sea surface temperature (SST), sea surface salinity (SSS), sea surface height anomaly (SSHA) and chlorophyll-a (Chl-a) concentration. They found that the highest monthly catch and fishing effort occurring in the different waters were closely related to those variables.
Previous studies evaluated the annual stock size of the autumn cohort and winter-spring cohort of O. bartramii on the basis of catch data analyses (, ). Due to the unique life history of this species, traditional age- or length-structured models are not appropriate for evaluating the influences of intensive commercial jigging fleets on its stock dynamics. Many methods have been proposed for assessing short-lived species such as the squid. evaluated the annual biomass of the autumn cohort in 1982-1992 on driftnet fishing grounds using a stock production model incorporating covariates (ASPIC non-equilibrium dynamic model) () and the DeLury depletion model (). For the winter-spring cohort, fitted a modified depletion model to the Chinese squid-jigging fisheries data to estimate squid stock abundance in 2000-2005, and found that the annual maximum allowable catch ranged from 80000 to 100000 t, which was consistent with the estimation by for the annual sustainable catch of the western stock. However, as a short-lived ecological opportunist, O. bartramii is also typically subject to large fluctuations in abundance, responding rapidly to changes in environmental conditions (, , , , , , , ). Therefore, environmental variables are considered to play a critical role in regulating the dynamics of squid stocks and need to be considered in the squid stock assessment.
An environmentally dependent surplus production (EDSP) model has been developed from the traditional surplus production model. In surplus production models, fish population dynamics and fishing processes including natural mortality, growth, recruitment, and fishing mortality are assumed to be a function of a single aggregated measure of biomass (). This approach may be suitable for species with a short-life span and/or limited availability of age/size composition data (). Research has also shown that surplus production models, although simple, may provide more accurate and precise estimates of management-related quantities than complex models (, ). Therefore, a surplus production model incorporating environmental variables would be an appropriate approach for assessing the O. bartramii stock.
O. bartramii is a short-lived species with a lifespan of less than one year (), whose yearly biomass depends almost entirely on recruitment (). It is therefore reasonable to consider environmental indices in the assessment of the O. bartramii stock. However, traditional surplus production models treat the carrying capacity and the intrinsic rate of growth as constants (), which is inconsistent with the fact that carrying capacity and population growth rate for squid may fluctuate greatly over time as a result of changes in environmental conditions on the spawning and feeding grounds. In this study, we developed two environmental indicators, the proportions of the areas with favourable SST on the spawning ground (Ps) and feeding ground (Pf), which were assumed to influence carrying capacity (K) and intrinsic growth rate (r), respectively. We evaluated the traditional production model and several EDSP models incorporating both indices, Ps only, or Pf only. These models were compared and an optimal model was selected for estimating the squid stock abundance and reference points. This study may provide new insight into the assessment of the O. bartramii stock.
MATERIALS AND METHODS
Fishery data
Data on daily catch (t), effort (days fished, d), fishing dates and fishing locations (longitude and latitude) were obtained from the Chinese mainland commercial jigging fleets operating in the areas between 35-45°N and 145-165°E in the northwest Pacific Ocean from July to December from 2003 to 2013. The western stock of the winter-spring cohort and the central-eastern stock of the winter-spring cohort are separated near 170°E (Bower and Ichii 2005). Thus, the Chinese commercial jigging fleet should target a unit stock. One unit of fishing area was defined as 0.5° latitude by 0.5° longitude.
We assumed there were no by-catches in the squid fishery () and that discards were negligible relative to annual catches. The Chinese jigging vessels, about 200 in number from 2003 to 2013, were each equipped with two 120 kW engines, 112 kW of squid-attracting lights and 16 squid-jigging machines, and had almost identical fishing power and lighting operation. Therefore, the catch per unit fishing day (CPUE, t d–1) of the squid-jigging vessels was a reliable indicator of stock abundance on the fishing ground (). The monthly nominal CPUE in one 0.5°×0.5° fishing unit is calculated as follows:
$CPUE_{ymi} = \frac{C_{ymi}}{F_{ymi}}$ (1)
where CPUEymi is monthly CPUE at i fishing unit in month m and year y; Cymi is monthly catch (t) at i fishing unit in month m and year y; and Fymi is number of fishing days at i fishing unit in month m and year y.
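A minimal sketch of the Eq. (1) computation for a set of daily logbook records (the record layout and cell identifiers here are hypothetical; the paper does not describe its data schema):

```python
from collections import defaultdict

# Hypothetical daily records: (year, month, 0.5x0.5 deg cell id, catch in t, days fished)
records = [
    (2003, 8, "40.5N_152.0E", 12.0, 3),
    (2003, 8, "40.5N_152.0E", 8.0, 2),
    (2003, 9, "41.0N_153.5E", 5.0, 4),
]

# Accumulate monthly catch and effort per fishing unit.
totals = defaultdict(lambda: [0.0, 0])
for year, month, cell, catch_t, days in records:
    totals[(year, month, cell)][0] += catch_t
    totals[(year, month, cell)][1] += days

# CPUE_ymi = C_ymi / F_ymi, in t per fishing day (Eq. 1)
cpue = {key: c / f for key, (c, f) in totals.items()}
print(cpue)
```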
Because the annual catch of O. bartramii by Chinese mainland accounted for about 80% of the total catches of this species (), we modified the annual total catch of O. bartramii from 2003 to 2013 in the northwest Pacific for our estimation (Fig. 1). Although the total annual catch estimated using this approach may have some issues for some years, the focus of this study is to develop and demonstrate a modelling framework which incorporates critical habitat information in the stock assessment for environmentally-sensitive species, and this data set is sufficient to serve this purpose.
Environmental data
Environmental variables SST, SSH and Chl-a concentration were used to obtain the standardized yearly CPUE based on the generalized additive model (GAM). Monthly SST, SSH and Chl-a concentration data from 2003 to 2013 on the presumed spawning (20°-30°N, 130°-170°E) and fishing grounds (38°-46°N, 150°-165°E) were obtained from the Live Access Server of National Oceanic and Atmospheric Administration OceanWatch (http://oceanwatch.pifsc.noaa.gov/las/servlets/dataset). The spatial resolution of SST, SSH and Chl-a concentration data were 0.1°×0.1°, 0.25°×0.25°, and 0.05°×0.05°, respectively. All the environmental data were then converted to a 0.5°×0.5° grid by the method of averaging for each month in order to correspond to the spatial grid of CPUE. For instance, averaging 25 points of SST can convert to a 0.5°×0.5° grid.
Standardizing yearly CPUE by generalized additive model
CPUE is commonly assumed to be proportional to stock abundance and is therefore usually treated as a relative abundance index in the monitoring and assessment of a fish stock (). A GAM has previously been employed to standardize yearly CPUEs, so that they represent the same proportional change in the stock size of O. bartramii (). The CPUE is ln-transformed, with errors assumed to be normally distributed in the GAM modelling; this assumption was evaluated using Q-Q plots. The functional relationships between CPUE and environmental variables are likely to be non-linear (). Thus, a GAM was used for the CPUE standardization in this study, which can be written as:
Ln(CPUE+c)=factor(year)+factor(month)+s(longitude)+s(latitude)+s(SST)+s(SSH)+s(Chl-a)+ε (2)
where s is a spline smoother function; the constant c is assumed to be 10% of the mean CPUE (); and ε satisfies var(ε) = σ² and E(ε) = 0.
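The response transform in Eq. (2), ln(CPUE + c) with c set to 10% of the mean CPUE, can be sketched as follows (the CPUE values are toy numbers, not the fishery data; the constant keeps zero-catch records defined under the log):

```python
import math

cpue = [0.0, 1.2, 3.4, 5.1, 2.2]        # toy CPUE values, t per day
c = 0.1 * (sum(cpue) / len(cpue))       # constant = 10% of mean CPUE
y = [math.log(v + c) for v in cpue]     # response used in the GAM fit
print(c, y)
```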
Environmentally dependent surplus production models
The areas with favourable SST (21-25°C) on the presumed spawning ground (20°-30°N, 130°-170°E) during the spawning season (January-April) play a critical role in determining the recruitment of O. bartramii (, , , ), and the areas with favourable SST (15-19°C in August, 14-18°C in September, 10-13°C in October and 12-15°C in November) on the feeding ground (38°-46°N, 150°-165°E) during the feeding season (August-November) influence the distribution of O. bartramii during feeding (, , ). Annual environmental indices were obtained by averaging the monthly Ps and Pf, which were calculated as the number of fishing units with optimal SST divided by the total number of fishing units on the spawning and feeding grounds, respectively.
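The habitat indices are simple area proportions. A sketch of the monthly Ps computation with a toy SST grid (the values are invented; the real grid covers the spawning ground at 0.5°×0.5° resolution):

```python
# Toy SST values over the presumed spawning ground, degC (invented).
sst_grid = [20.1, 21.5, 22.3, 24.9, 25.4, 23.0, 19.8, 26.1, 21.0, 22.7]

# Ps = cells within the favourable 21-25 degC window / total cells
favourable = [s for s in sst_grid if 21.0 <= s <= 25.0]
Ps = len(favourable) / len(sst_grid)
print(Ps)
```

Pf is computed the same way on the feeding ground, with the favourable SST window changing by month as listed above; the annual index is the mean of the monthly proportions.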
Schaefer’s surplus production model (referred to as SP) can be written as
$\log(B_t)\,|\,K,\sigma^2 = \log(K) + u_t$
$\log(B_t)\,|\,B_{t-1},K,r,\sigma^2 = \log\left\{ B_{t-1} + r B_{t-1}\left( 1 - \frac{B_{t-1}}{K} \right) - C_{t-1} \right\} + u_t$ (3)
$\log(I_t)\,|\,B_t,q,\tau^2 = \log(q) + \log(B_t) + \upsilon_t$ (4)
where Bt is the biomass in year t; K is the carrying capacity; r is the intrinsic rate of stock growth; q is the catchability coefficient; and It is the CPUE in year t, assumed to be proportional to Bt. ut and υt are independent and identically distributed N(0, σ2) and N(0, τ2) random variables, respectively.
We hypothesized that for a given year “effective” carrying capacity was in proportion to Ps and the “effective” intrinsic stock growth rate changed in proportion to Pf for O. bartramii. Therefore, the surplus production model with the parameter of Ps (referred to as Ps-EDSP) is given by:
$\log(B_t)\,|\,K,\sigma^2 = \log(K) + u_t$
$\log(B_t)\,|\,B_{t-1},K,r,\sigma^2 = \log\left\{ B_{t-1} + r B_{t-1}\left( 1 - \frac{B_{t-1}}{Ps_{t-1} K} \right) - C_{t-1} \right\} + u_t$ (5)
The surplus production model with the parameter of Pf (referred to as Pf-EDSP) is given by:
$\log(B_t)\,|\,K,\sigma^2 = \log(K) + u_t$
$\log(B_t)\,|\,B_{t-1},K,r,\sigma^2 = \log\left\{ B_{t-1} + Pf_{t-1}\, r B_{t-1}\left( 1 - \frac{B_{t-1}}{K} \right) - C_{t-1} \right\} + u_t$ (6)
The surplus production model with the parameters of both Ps and Pf (referred to as Ps-Pf-EDSP) is given by:
$\log(B_t)\,|\,K,\sigma^2 = \log(K) + u_t$
$\log(B_t)\,|\,B_{t-1},K,r,\sigma^2 = \log\left\{ B_{t-1} + Pf_{t-1}\, r B_{t-1}\left( 1 - \frac{B_{t-1}}{Ps_{t-1} K} \right) - C_{t-1} \right\} + u_t$ (7)
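A deterministic sketch of the process equation shared by Eqs. (3)-(7), with Ps scaling the carrying capacity and Pf scaling the growth rate as in the EDSP variants (parameter values and catches below are illustrative, not the posterior estimates; units are 10^4 t):

```python
def project_biomass(B0, r, K, catches, Ps=None, Pf=None):
    """One-step-ahead Schaefer projection. Ps scales K and Pf scales r
    (per-year lists); with both omitted this is the plain SP model."""
    n = len(catches)
    Ps = Ps or [1.0] * n
    Pf = Pf or [1.0] * n
    B = [B0]
    for t in range(n):
        growth = Pf[t] * r * B[t] * (1 - B[t] / (Ps[t] * K))
        B.append(max(B[t] + growth - catches[t], 1e-9))  # floor at ~0
    return B

# Illustrative run: r, K, B0 and the catch series are made-up numbers.
traj = project_biomass(B0=40.0, r=1.7, K=90.0, catches=[10.0, 12.0, 8.0])
print(traj)
```

In the paper the same recursion sits inside a Bayesian state-space model, with lognormal process error u_t added on the log scale at each step.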
Based on the results of and , we assumed that the initial biomass of O. bartramii B0 in 2003 was 400000 t. The likelihood function and prior distribution of the parameters in Bayesian inference were stated as follows:
- Likelihood function
We fitted the Schaefer surplus production models by Bayesian inference in R using the R2WinBUGS library (). A likelihood function was used to measure the fit between the observed data and the data predicted by the surplus production models (). We assumed that the observation errors followed a ln-normal distribution, and the likelihood function is written as:
$L(I|\theta) = \prod_{t=2003}^{2013} \frac{1}{I_t \sigma \sqrt{2\pi}} \exp\left\{ -\frac{[\log(I_t) - \log(q B_t)]^2}{2\sigma^2} \right\}$ (8)
The σ was estimated to be 0.12 in the CPUE standardization.
- Setting prior distribution of model parameters
- Calculating posterior distribution of parameters
The initial guess values for the model parameters in the likelihood estimation were set as follows: the intrinsic rate of growth was 0.8, the carrying capacity was 400000 t and the catchability coefficient was 0.5×10^-5. The posterior distributions of the parameters of the Schaefer models were calculated by the Markov chain Monte Carlo (MCMC) method in R. Three MCMC chains were run for 50000 iterations each; the first 10000 iterations were discarded as burn-in, and every 40th of the remaining 40000 iterations was saved.
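The MCMC bookkeeping implied by those settings works out as:

```python
n_chains, n_iter, burn_in, thin = 3, 50000, 10000, 40

saved_per_chain = (n_iter - burn_in) // thin   # draws kept per chain
total_saved = n_chains * saved_per_chain
print(saved_per_chain, total_saved)            # 1000 per chain, 3000 total
```

The 1000 draws per chain match the n.eff values of 1000 reported for several parameters in Table 2.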
Fishery biological reference points, including maximum sustainable yield (MSY), biomass at MSY level (BMSY), fishing mortality at MSY level (FMSY), fishing mortality at target level (Ftar, taken as the fishing mortality at the 0.1 level, F0.1), and actual fishing mortality for year t (Ft), were estimated for the SP and EDSP models using the mean values of the posterior distributions of the model parameters (Table 1). Model selection was based on the deviance information criterion (DIC); the model with the lowest DIC was selected as the best model.
Table 1. – The fishery management reference points of O. bartramii in the northwest Pacific Ocean. BRP, biological reference point; SP, surplus production; EDSP, environmentally dependent surplus production models; Ps, proportion of favourable spawning habitat areas with sea surface temperature of 21-25°C; Pf, proportion of favourable feeding habitat areas with different sea surface temperature ranges in different months; MSY, maximum sustainable yield. Note: Ct is the catch in year t and Bt is the biomass in year t.
Management reference point | Catch | Fishing mortality coefficient (F) | Biomass (B)
BRP in SP model | MSY = rK/4 | FMSY = r/2; F0.1 = 0.45r; Ft = Ct/Bt | BMSY = K/2
BRP in Ps-EDSP, Pf-EDSP and Ps-Pf-EDSP models | MSY = Pf r Ps K/4 | FMSY = Pf r/2; F0.1 = 0.45 Pf r; Ft = Ct/Bt | BMSY = Ps K/2
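The SP-model formulas in Table 1 can be checked against the posterior means reported in Table 2 (r ≈ 1.77, K ≈ 65×10^4 t); the results approximately reproduce the reference points listed for model A in Table 3:

```python
r, K = 1.77, 650000.0        # SP-model posterior means (from Table 2), t

MSY = r * K / 4              # maximum sustainable yield, t
F_MSY = r / 2
F_tar = 0.45 * r             # target F (the F0.1 proxy used in the paper)
B_MSY = K / 2

print(f"MSY={MSY:.0f} t, FMSY={F_MSY:.3f}, Ftar={F_tar:.3f}, BMSY={B_MSY:.0f} t")
```

This gives MSY ≈ 287600 t, FMSY ≈ 0.885, Ftar ≈ 0.80 and BMSY = 325000 t, consistent with the SP-model columns of Table 3 (the small MSY discrepancy versus the reported 289100 t reflects rounding of the posterior means).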
RESULTS
Comparing the nominal CPUE with the GAM-standardized CPUE
The GAM model was constructed from temporal (year and month), spatial (latitude and longitude) and environmental (SST, SSH and Chl-a concentration) factors. The annual nominal CPUE was then compared with the GAM-standardized CPUE from 2003 to 2013; the two series exhibited similar trends (Fig. 2). The largest difference occurred in 2007, when the nominal CPUE was the highest of the series at 5.12 t d–1 while the GAM-standardized CPUE was extremely low at 1.16 t d–1. The production and abundance of the western winter-spring cohort of O. bartramii fluctuated from year to year: both were high in 2003-2008 and low in 2009-2013 (Fig. 1).
Comparison of surplus production models
According to the MCMC samples and the posterior distributions of the parameters (r, K and q) of the four surplus production models (Fig. 3), the posterior distributions differed considerably from their priors. The mean posterior values of r, K and q also differed among the four models, ranging over 1.71-1.90, 650000-950000 t and 0.3-0.4×10^-5, respectively. The minimum values of r and K occurred in the Ps-EDSP and SP models, and the maximum values in the Pf-EDSP and Ps-Pf-EDSP models, respectively. The Ps-EDSP model, which had the minimum DIC value, was considered the best-fitting model (Table 2).
Table 2. – Summary statistics for the parameters of Schaefer surplus production models of O. bartramii.
Model | r (Mean, SD, Rhat, n.eff) | K, 10^4 t (Mean, SD, Rhat, n.eff) | q, 10^-4 (Mean, SD, Rhat, n.eff) | DIC
SP model | 1.77, 0.65, 1.00, 580 | 65, 0.17, 1.00, 1000 | 0.04, 0.002, 1.00, 1000 | 55.9
Ps-model | 1.71, 0.69, 1.00, 1000 | 90, 0.16, 1.00, 1000 | 0.03, 0.002, 1.00, 290 | 30.7
Pf-model | 1.90, 0.78, 1.00, 420 | 80, 0.17, 1.00, 1000 | 0.03, 0.002, 1.00, 820 | 35.8
Ps-Pf-model | 1.87, 0.77, 1.00, 1000 | 95, 0.17, 1.00, 1000 | 0.03, 0.002, 1.00, 1000 | 40.1
The MSY and BMSY were 289100 and 325000 t for the SP model, respectively (Table 3). The MSY varied from 210000 to 262500 t and its biomass ranged from 360000 to 450000 t for the Ps-EDSP model (Table 3). For the Pf-EDSP model, the MSY ranged from 245300 to 371600 t, and the BMSY was approximately 400000 t (Table 3). For the Ps-Pf-EDSP model, the MSY was within the range of 254100 to 392400 t, and the BMSY was from 380000 to 475000 t (Table 3).
Table 3. – The fishery management reference points and stock assessment results estimated by the SP model (A), the Ps-model (B), the Pf-model (C) and the Ps-Pf-model (D) in 2003-2013.
Year | Biomass (10^4 t) | BMSY (10^4 t) | Blim (10^4 t) | MSY (10^4 t) | Ftar | FMSY | Ft
(each quantity is given in four columns, A B C D, one per model as labelled in the caption)
2003 40.00 40.00 40.00 40.00 32.5 39.95 40.0 42.27 8.12 9.98 10.0 10.57 28.91 23.37 37.16 39.24 0.8 0.78 0.84 0.84 0.88 0.87 0.92 0.93 0.32 0.32 0.32 0.32
2004 54.41 63.56 60.11 63.53 32.5 42.30 40.0 44.65 8.12 10.58 10.0 11.16 28.91 24.68 33.08 36.89 0.8 0.78 0.74 0.74 0.88 0.87 0.82 0.83 0.30 0.26 0.28 0.26
2005 53.52 76.03 63.19 72.30 32.5 43.20 40.0 45.60 8.12 10.80 10.0 11.40 28.91 25.20 26.37 30.05 0.8 0.78 0.60 0.59 0.88 0.87 0.66 0.66 0.29 0.20 0.24 0.21
2006 54.97 63.01 71.76 71.91 32.5 38.70 40.0 40.85 8.12 9.68 10.0 10.21 28.91 22.59 36.04 36.79 0.8 0.78 0.81 0.81 0.88 0.87 0.90 0.90 0.31 0.27 0.23 0.23
2007 53.16 68.44 66.41 70.70 32.5 39.60 40.0 41.80 8.12 9.90 10.0 10.45 28.91 23.10 31.22 32.60 0.8 0.78 0.70 0.70 0.88 0.87 0.78 0.78 0.33 0.26 0.27 0.25
2008 52.70 56.64 63.83 59.61 32.5 36.00 40.0 38.00 8.12 9.00 10.0 9.5 28.91 21.00 26.76 25.40 0.8 0.78 0.60 0.60 0.88 0.87 0.66 0.67 0.31 0.29 0.26 0.28
2009 53.87 68.82 68.37 71.77 32.5 40.05 40.0 42.28 8.12 10.01 10.0 10.56 28.91 23.37 32.70 34.54 0.8 0.78 0.74 0.74 0.88 0.87 0.82 0.82 0.11 0.08 0.08 0.08
2010 64.53 91.14 76.67 90.79 32.5 45.00 40.0 47.50 8.12 11.25 10.0 11.87 28.91 26.25 28.24 33.52 0.8 0.78 0.64 0.64 0.88 0.87 0.70 0.71 0.13 0.09 0.11 0.10
2011 56.70 73.86 71.94 82.64 32.5 43.20 40.0 45.60 8.12 10.80 10.0 11.40 28.91 25.20 24.53 27.94 0.8 0.78 0.55 0.55 0.88 0.87 0.61 0.61 0.15 0.11 0.12 0.10
2012 61.10 71.26 72.50 73.00 32.5 38.71 40.0 40.85 8.12 9.67 10.0 10.21 28.91 22.58 24.90 25.41 0.8 0.78 0.56 0.56 0.88 0.87 0.62 0.62 0.09 0.07 0.07 0.07
2013 62.23 80.74 76.86 82.88 32.5 40.51 40.0 42.75 8.12 10.13 10.0 10.69 28.91 23.63 28.62 30.56 0.8 0.78 0.64 0.64 0.88 0.87 0.70 0.71 0.13 0.10 0.10 0.10
Moreover, the values of Ftar and FMSY in the SP model differed from those in the other three models (Table 3). All four surplus production models indicated that the fishing mortality coefficient of O. bartramii from 2003 to 2013 was much smaller than Ftar and FMSY. Meanwhile, the annual catch of O. bartramii in 2003-2013 was also lower than the MSY (Table 3).
The results of the four surplus production models indicated that the biomass and development of the O. bartramii fishery are currently in a good state (Table 3; Fig. 4). The resource of this species was at a high level, with no sign of overfishing based on the Ps-EDSP model (Fig. 4).
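For the Schaefer form underlying these surplus production models, the reference points follow directly from the intrinsic growth rate r and the carrying capacity K (MSY = rK/4, BMSY = K/2, FMSY = r/2). The Python sketch below illustrates those relations and the resulting status check; the r and K values are illustrative, chosen only to be roughly consistent with the SP-model column of Table 3, and are not the paper's posterior estimates:

```python
def schaefer_reference_points(r, K):
    """Standard reference points of a Schaefer surplus production model."""
    return {"MSY": r * K / 4.0, "BMSY": K / 2.0, "FMSY": r / 2.0}

def stock_status(B, F, ref):
    """Overfished if B < BMSY; overfishing if F > FMSY."""
    return {"overfished": B < ref["BMSY"], "overfishing": F > ref["FMSY"]}

# Illustrative inputs (t and 1/yr), roughly matching the SP model in Table 3:
ref = schaefer_reference_points(r=1.78, K=650000.0)
status = stock_status(B=622300.0, F=0.13, ref=ref)  # 2013-like values from Table 3
```

With these inputs, MSY ≈ 289000 t, BMSY = 325000 t and FMSY = 0.89, and the status check reproduces the qualitative conclusion that the stock is neither overfished nor subject to overfishing.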
DISCUSSION
There have been many attempts to explain variation in recruitment based on the relationship between some direct or indirect measure of year-class strength and environmental variables (). The most commonly used environmental variables are temperature, salinity and wind (). Temperature, because it regulates many physiological processes, has been considered an important explanatory variable of recruitment in the context of global warming (). Salinity has frequently been used as an indirect measure of nutrient flux, and the physical process by which wind may influence recruitment is thought to act primarily through effects on the transport and distribution of eggs and larvae (). The significance of the variables identified in this study is consistent with their ecological roles in regulating squid habitat quality and stock dynamics ().
For a short-lived species, the role of environmental variables in regulating its population dynamics has received much emphasis and is an important research topic (, ). Most squid live for less than one year (), and recruitment success is greatly influenced by the physical and biological environmental conditions on the spawning and nursery grounds, which contribute to the variability in stock abundance (). In addition, the abundance and distribution of squid populations on the fishing ground tend to be greatly affected by oceanographic conditions and respond quickly to changes in the environment (, , , , , , , ). For instance, it has been suggested that about 55% of the variability in recruitment of the Falkland Islands Illex argentinus fishery could be explained by variations in the total putative favourable SST areas on the spawning ground during the spawning season. Variability in the abundance of Todarodes pacificus in the Sea of Japan was found to be closely related to changes in the favourable SST areas for paralarval development (). It has also been suggested that February Ps and August to November Pf could account for about 60% of the variability in O. bartramii abundance between 1995 and 2004. February Ps was the most important factor influencing squid recruitment during the spawning season, and feeding-ground Pf during the fishing season also had a strong influence on CPUE. Consequently, SST is an important environmental indicator for predicting squid recruitment () and should be considered in O. bartramii stock assessment.
In this study, the nominal CPUE in 2007 was extremely high, possibly due to the highly concentrated fishing operations along the longitudinal direction during that year. This finding suggests that it is important to obtain standardized yearly CPUE. Additionally, no significant correlations were identified between yearly CPUE and monthly Ps and Pf. However, earlier studies evaluated the influences of SST on the spawning ground on the abundance of O. bartramii. These authors suggested that there was a significant positive relationship between the monthly proportion of favourable SST areas on the spawning ground and CPUE, but this relationship was not consistent with the results of our study. The reasons for this difference might be the use of different abundance indicators (nominal or standardized CPUE) and different sources of fishery data. Therefore, the average Ps during the spawning months and the average Pf during the feeding months, rather than the significant monthly Ps and Pf, were used to measure the "effective" K and r. The methods for estimating the parameters of the surplus production model can be divided into three types: equilibrium estimators, process-error estimators and observation-error estimators (, ). Each estimator has its own drawbacks. For example, equilibrium estimators assume that the fishery is at equilibrium, which does not hold for an actual fishery. For process-error estimators, negative parameter values (r, q) are often obtained when the surplus production equation is converted into a linear form and fitted by linear regression. Bayesian inference has been increasingly used in fisheries in recent years because it provides a systematic approach that explicitly incorporates both uncertainty and the risk caused by uncertainty in the analysis (, , , ). Atypical errors in the data should also be noted.
Mis-specification of the prior distribution and the choice of an inappropriate likelihood function may result in unreliable posterior distributions for the parameters in Bayesian inference (, , , ). In this paper, we used Bayesian inference to estimate the parameters of the four surplus production models, and attempted to interpret the data consistently by using standardized CPUE and modifying the yearly catch from 2003 to 2013. We also referred to previous studies in order to set the prior distributions (normal distributions) of the parameters and to select the likelihood function (, ). According to the MCMC results, there were great differences between the posterior distributions of the parameters (r, K, q) and their prior distributions, which indicates that the fishery data for O. bartramii provided enough information to estimate the parameters of these four surplus production models.
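To make the observation-error idea concrete, the sketch below projects Schaefer biomass dynamics forward from a catch series and scores observed CPUE against the predicted CPUE q·B under a log-normal error with σ = 0.12 (the value assumed later in the text). This is a minimal illustration, not the paper's Bayesian implementation, which places priors on r, K and q and samples the posterior by MCMC:

```python
import math

def schaefer_biomass(r, K, B0, catches):
    """Project biomass with Schaefer dynamics:
    B[t+1] = B[t] + r*B[t]*(1 - B[t]/K) - C[t]."""
    B = [B0]
    for C in catches:
        nxt = B[-1] + r * B[-1] * (1.0 - B[-1] / K) - C
        B.append(max(nxt, 1e-6))  # guard against negative biomass
    return B

def log_likelihood(cpue_obs, biomass, q, sigma=0.12):
    """Log-normal observation-error log-likelihood of CPUE given q*B."""
    ll = 0.0
    for I, B in zip(cpue_obs, biomass):
        resid = math.log(I) - math.log(q * B)
        ll += -math.log(sigma * math.sqrt(2.0 * math.pi)) - resid ** 2 / (2.0 * sigma ** 2)
    return ll
```

Maximizing this likelihood over r, K, q and B0 (or, in the Bayesian case, combining it with priors and sampling) yields the parameter estimates from which the reference points are derived.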
Fishery statistics of the Chinese squid-jigging fleets (Fig. 1) suggest a large fluctuation in the annual production of O. bartramii. In this study, the annual catches of O. bartramii were lower than the MSY. The current fishing mortality rates were also lower than Ftar in the four surplus production models, indicating that overfishing is not occurring in the O. bartramii fishery (Fig. 4). The yearly biomass of O. bartramii in the four surplus production models was higher than BMSY, suggesting that the O. bartramii resource is not overfished and has been at a high level of abundance in recent years. Thus, we can conclude that, for the O. bartramii stock in the northwest Pacific, overfishing is not occurring and the stock is not overfished. These findings are basically consistent with previous results (, ).
The DIC value of the original surplus production model was the highest of the four models, and the fit of the surplus production models with environmental factors was better than that of the model without environmental factors (Table 3). Changes in environmental factors (Ps and Pf) have important impacts on the carrying capacity (K) and the intrinsic rate of growth (r). A particle-tracking experiment showed that paralarvae and juveniles aged <90 days remained on their spawning grounds and that Chl-a in this habitat, where 21°C<SST<25°C, had a significant positive correlation with CPUE (); thus Ps, calculated from the optimal SST range (21°C<SST<25°C) on the spawning ground, would affect the survival of paralarvae and juveniles. Moreover, SST was the most important environmental factor in the formation of the fishing ground based on the HSI model and the neural network model (, ). Pf is a measure of the habitat quality of the fishing ground and would affect individual growth. Hence, environmental conditions, especially Ps and Pf, have significant influences on the spawning, hatching, growth, and even the whole life history of O. bartramii. We considered annual variability of environmental variables in estimating fishery management reference points (MRPs), resulting in temporal differences in the reference points, which reflect temporal changes in habitat quality better than the time-invariant reference points of a traditional stock assessment. This can be useful for adjusting annual regulations in O. bartramii fisheries management.
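The "effective" r and K idea behind the EDSP models can be sketched as follows. The linear scaling of K by Ps and of r by Pf is an illustrative assumption introduced here, not the fitted form in the paper; it only shows how annual habitat indices translate into year-specific reference points:

```python
def edsp_reference_points(r0, K0, ps, pf):
    """Year-specific Schaefer reference points when the carrying capacity
    scales with the spawning-ground index Ps and the intrinsic growth rate
    with the feeding-ground index Pf (indices in [0, 1]; linear scaling
    is an illustrative assumption)."""
    r_t, K_t = r0 * pf, K0 * ps
    return {"MSY": r_t * K_t / 4.0, "BMSY": K_t / 2.0, "FMSY": r_t / 2.0}

good_year = edsp_reference_points(r0=1.78, K0=650000.0, ps=1.0, pf=1.0)
poor_year = edsp_reference_points(r0=1.78, K0=650000.0, ps=0.8, pf=0.7)
# In a poor habitat year every reference point shrinks, so the same catch
# and fishing mortality sit closer to the overfishing thresholds.
```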
The development status of the O. bartramii fishery from 2003 to 2013 based on the SP model and the Ps-EDSP model was plotted (Fig. 4). At present, the O. bartramii fishery is still not fully exploited, and the advantage of the EDSP model is not obvious in this situation. However, with an increased intensity of exploitation, the EDSP model proves to be more conservative, as B/BMSY in the Ps-EDSP model tended to be closer to 1 (the overfishing threshold) than in the SP model (Fig. 4). In summary, when Ps and Pf were low, the "effective" r and K decreased, calling for a decline in fishing effort to avoid overexploitation of the O. bartramii resource.
The uncertainty of the models came mainly from (1) uncertainty associated with the data, because we included only the catch data from the Chinese fishery, although we standardized yearly CPUE and modified yearly catches; and (2) uncertainty in the model parameters: we assumed an initial biomass of 400000 t in 2003, which may induce biases in the estimated biomass of O. bartramii. In addition, we assumed that the standard deviation of CPUE (σ) was equal to 0.12, and the effects of this assumed σ value on model selection and resource assessment need to be investigated in future studies.
In summary, the EDSP models fitted the data better than the conventional Schaefer surplus production model without environmental factors in assessing the squid stock. We found that the fishery MRPs largely depended on the optimal spawning and feeding habitat areas. These findings suggest that environmental factors on the spawning and feeding grounds should be considered in the stock assessment and management of O. bartramii in the northwest Pacific.
ACKNOWLEDGEMENTS
We thank the Chinese Distant-Water Squid-Jigging Technical Group for providing fishery data and information, and we thank NOAA for providing the environmental data used in this paper. This work was funded by State 863 projects (2012AA092303), the Funding Programme for Outstanding Dissertations at Shanghai Ocean University, the Funding Scheme for Training Young Teachers in Shanghai Colleges and the Shanghai Leading Academic Discipline Project (Fisheries Discipline). Involvement of Y. Chen was supported by SHOU International Centre for Marine Studies and the Shanghai 1000 Talent Programme.
REFERENCES
Adkison M.D., Peterman R.M. 1996. Results of Bayesian methods depend on details of implementation: an example of estimating salmon escapement goals. Fish. Res. 25: 155-170.
http://dx.doi.org/10.1016/0165-7836(95)00405-X
Agnew D.J., Beddington J.R., Hill S.L. 2002. The potential use of environmental information to manage squid stocks. Can. J. Fish. Aquat. Sci. 59: 1851-1857.
http://dx.doi.org/10.1139/f02-150
Anderson C.I.H., Rodhouse P.G. 2001. Life cycles, oceanography and variability: ommastrephid squid in variable oceanographic environments. Fish. Res. 54: 133-143.
http://dx.doi.org/10.1016/S0165-7836(01)00378-2
Bazzino G., Quiñones R.A., Norbis W. 2005. Environmental associations of shortfin squid Illex argentinus (Cephalopoda: Ommastrephidae) in the Northern Patagonian Shelf. Fish. Res. 76: 401-416.
http://dx.doi.org/10.1016/j.fishres.2005.07.005
Berger J.O., Moreno E., Pericchi L.R., et al. 1994. An overview of robust Bayesian analysis. Test 3(1): 5-124.
http://dx.doi.org/10.1007/BF02562676
Bigelow K.A., Boggs C.H., He X.I. 1999. Environmental effects on swordfish and blue shark catch rates in the US North Pacific longline fishery. Fish. Oceanogr. 8: 178-198.
http://dx.doi.org/10.1046/j.1365-2419.1999.00105.x
Bower J.R. 1996. Estimated paralarval drift and inferred hatching sites for Ommastrephes bartramii (Cephalopoda: Ommastrephidae) near the Hawaiian Archipelago. Fish. Bull. 94: 398-411.
Bower J.R., Ichii T. 2005. The red flying squid (Ommastrephes bartramii): A review of recent research and the fishery in Japan. Fish. Res. 76: 39-55.
http://dx.doi.org/10.1016/j.fishres.2005.05.009
Boyle P.R. (ed) 1987. Cephalopod life cycles. Vol. II. Comparative reviews. Academic Press, London, 441 pp.
Campbell R.A. 2004. CPUE standardization and the construction of indices of stock abundance in a spatially varying fishery using general linear models. Fish. Res. 70: 209-227.
http://dx.doi.org/10.1016/j.fishres.2004.08.026
Cao J. 2010. Stock assessment and risk analysis of management strategies for neon flying squid (Ommastrephes bartramii) in the Northwest Pacific Ocean. Shanghai Ocean University.
Cao J., Chen X.J., Chen Y. 2009. Influence of Surface Oceanographic Variability on Abundance of the Western Winter-Spring Cohort of Neon Flying Squid Ommastrephes bartramii in the Northwest Pacific Ocean. Mar. Ecol. Prog. Ser. 381: 119-127.
http://dx.doi.org/10.3354/meps07969
Cardinale M., Hjelm J. 2006. Marine fish recruitment variability and climate indices. Mar. Ecol. Prog. Ser. 309: 307-309.
Chen X.J. 1997. An analysis on marine environment factors of fishing grounds of Ommastrephes bartramii in Northwest Pacific. J. Shanghai Fish. Univ. 6: 285-287.
Chen X.J. 1999. Study on the formation of fishing grounds of the large squid, Ommastrephes bartramii in the waters 160°E-170°E North Pacific Ocean. J. Shanghai Fish. Univ. 8: 197-201.
Chen X.J., Tian S.Q. 2005. Study on the catch distribution and relationship between fishing grounds and surface temperature for Ommastrephes bartramii in the Northwestern Pacific Ocean. Period. Ocean Univ. China. 35: 101-107.
Chen Y., Breen P.A., Andrew N.L. 2000. Impacts of outliers and mis-specification of priors on Bayesian fisheries-stock assessment. Can. J. Fish. Aquat. Sci. 57: 2293-2305.
http://dx.doi.org/10.1139/f00-208
Chen X.J., Zhao X.H., Chen Y. 2007. Influence of El Niño/La Niña on the western winter-spring cohort of neon flying squid (Ommastrephes bartramii) in the northwestern Pacific Ocean. ICES J. Mar. Sci. 64: 1152-1160.
Chen X.J., Chen Y., Tian S.Q., et al. 2008. An assessment of the west winter–spring cohort of neon flying squid (Ommastrephes bartramii) in the Northwest Pacific Ocean. Fish. Res. 92: 221-230.
http://dx.doi.org/10.1016/j.fishres.2008.01.011
Chen X.J., Tian S.Q., Liu B.L., et al. 2011a. Modelling of habitat suitability index of Ommastrephes bartramii during June to July in the central waters of North Pacific Ocean. Chin. J. Oceanol. Limnol. 29(3): 493-504.
http://dx.doi.org/10.1007/s00343-011-0058-y
Chen X.J., Cao J., Liu B.L., et al. 2011b. Stock assessment and management of Ommastrephes bartramii by using a Bayesian Schaefer model in Northwestern Pacific Ocean. J. Fish. China. 35: 1572-1581.
Cushing D.H. 1982. Climate and Fisheries. London, Academic Press.
Hayase S. 1995. Distribution of spawning grounds of flying squid, Ommastrephes bartramii, in the North Pacific Ocean. Jpn. Agric. Res. Q. 29: 65-72.
Hikaru W., Tsunemi K., Taro I., et al. 2004. Feeding habits of neon flying squid Ommastrephes bartramii in the transitional region of the central North Pacific. Mar. Ecol. Prog. Ser. 266: 173-184.
http://dx.doi.org/10.3354/meps266173
Hilborn R., Walters C.J. 1992. Quantitative fisheries stock assessment: choice, dynamics and uncertainty. Springer Science & Business Media.
http://dx.doi.org/10.1007/978-1-4615-3598-0
Hilborn R., Pikitch E.K., Francis R.C. 1993. Current Trends in Including Risk and Uncertainty in Stock Assessment and Harvest Decisions. Can. J. Fish. Aquat. Sci. 50: 874-880.
http://dx.doi.org/10.1139/f93-100
Ichii T., Mahapatra K. 2004. Stock assessment of the autumn cohort of neon flying squid (Ommastrephes bartramii) in the North Pacific based on the past driftnet fishery data. Report of the 2004 Meeting on Squid Resources. Japan Sea National Fisheries Research Institute, Niigata, 21-34 pp. (in Japanese).
Ichii T., Mahapatra K., Okamura H., et al. 2006. Stock assessment of the autumn cohort of neon flying squid (Ommastrephes bartramii) in the North Pacific based on past large-scale high seas driftnet fishery data. Fish. Res. 78: 286-297.
http://dx.doi.org/10.1016/j.fishres.2006.01.003
Jereb P., Roper C.F.E. (eds). 2010. Cephalopods of the world. An annotated and illustrated catalogue of cephalopod species known to date. Volume 2. Myopsid and Oegopsid Squids. FAO Species Catalogue for Fishery Purposes. No. 4, Vol. 2. Rome, FAO, 605 pp.
Kinas P.G. 1996. Bayesian fishery stock assessment and decision making using adaptive importance sampling. Can. J. Fish. Aquat. Sci. 53: 414-423.
http://dx.doi.org/10.1139/f95-189
Leggett W.C., Frank K.T. 2008. Paradigms in fisheries oceanography. Oceanogr. Mar. Biol. Ann. Rev. 46: 331-364.
http://dx.doi.org/10.1201/9781420065756.ch8
Li G., Chen X.J., Guan W.J. 2011. Stock assessment and management for Mackerel in East Yellow Sea. Ocean Press, Beijing, pp. 4-128.
Ludwig D., Walters C.J. 1985. Are age-structured models appropriate for catch-effort data? Can. J. Fish. Aquat. Sci. 42(6): 1066-1072.
http://dx.doi.org/10.1139/f85-132
Ludwig D., Walters C.J. 1989. A robust method for parameter estimation from catch and effort data. Can. J. Fish. Aquat. Sci. 46(1): 137-144.
http://dx.doi.org/10.1139/f89-018
Maunder M.N., Punt A.E. 2004. Standardizing catch and effort data: a review of recent approaches. Fish. Res. 70: 141-159.
http://dx.doi.org/10.1016/j.fishres.2004.08.002
McAllister M.K., Kirkwood G.P. 1998. Bayesian stock assessment: a review and example application using the logistic model. ICES J. Mar. Sci. 55: 1031-1060.
http://dx.doi.org/10.1006/jmsc.1998.0425
McAllister M.K., Pikitch E.K., Punt A.E., et al. 1994. A Bayesian Approach to Stock Assessment and Harvest Decisions Using the Sampling/Importance Resampling Algorithm. Can. J. Fish. Aquat. Sci. 51: 2673-2687.
http://dx.doi.org/10.1139/f94-267
Murata M., Nakamura Y. 1998. Seasonal migration and diel vertical migration of the neon flying squid, Ommastrephes bartramii, in the North Pacific. In: Okutani T., (ed) Contributed Papers to International Symposium on Large Pelagic Squids. Japan Mar. Fish. Resources Res. Center, Tokyo, 269 pp.
Nishikawa H., Igarashi H., Ishikawa Y. 2014. Impact of paralarvae and juveniles feeding environment on the neon flying squid (Ommastrephes bartramii) winter-spring cohort stock. Fish. Oceanog. 23(4): 289-303.
http://dx.doi.org/10.1111/fog.12064
Osako M., Murata M. 1983. Stock assessment of cephalopod resources in the northwestern Pacific. In: Caddy J.F. (ed.), Advances in Assessment of World Cephalopod Resources. FAO Fish. Tech. paper No. 231, pp. 55-144.
Polacheck T., Hilborn R., Punt A.E. 1993. Fitting Surplus Production Models: Comparing Methods and Measuring Uncertainty. Can. J. Fish. Aquat. Sci. 50: 2597-2607.
http://dx.doi.org/10.1139/f93-284
Prager M.H. 1994. A suite of extensions to a non-equilibrium surplus-production model. Fish. Bull. 92: 374-389.
Roberts M.J. 1998. The influence of the environment on chokka squid Loligo vulgaris reynaudii spawning aggregations: steps towards a quantified model. S. Afr. J. Mar. Sci. 20: 267-284.
http://dx.doi.org/10.2989/025776198784126223
Rodhouse P.G. 2001. Managing and forecasting squid fisheries in variable environments. Fish. Res. 54: 3-8.
http://dx.doi.org/10.1016/S0165-7836(01)00370-8
Roper C.F.E., Sweeney M.J., Nauen C.E. 1984. FAO species catalogue: An annotated and illustrated catalogue of species of interest to fisheries. FAO Fisheries Synopsis, Cephalopods of the World, Vol. 3(125): 277 pp.
Sakurai Y., Kiyofuji H., Saitoh S., et al. 2000. Changes in inferred spawning areas of Todarodes pacificus (Cephalopoda: Ommastrephidae) due to changing environmental conditions. ICES J. Mar. Sci. 57: 24-30.
http://dx.doi.org/10.1006/jmsc.2000.0667
Saito K. 1994. Distribution of paralarvae of Ommastrephes bartramii and Eucleoteuthis luminosa in the eastern waters off Ogasawara Islands. Bull. Hokkaido Natl. Fish. Res. Inst. 58: 15-23.
Sturtz S., Ligges U., Gelman A. 2005. R2WinBUGS: A Package for Running WinBUGS from R. J. Stat. Soft. 12(3): 1-16.
http://dx.doi.org/10.18637/jss.v012.i03
Tian S.Q., Chen X.J., Chen Y., et al. 2009a. Standardizing CPUE of Ommastrephes bartramii for Chinese squid-jigging fishery in Northwest Pacific Ocean. Chin. J. Oceanol. Limnol. 27: 729-739.
http://dx.doi.org/10.1007/s00343-009-9199-7
Tian S. Q., Chen X. J., Chen Y., et al. 2009b. Evaluating habitat suitability indices derived from CPUE and fishing effort data for Ommatrephes bratramii in the northwestern Pacific Ocean. Fish. Res. 95(2-3): 181-188.
http://dx.doi.org/10.1016/j.fishres.2008.08.012
Wadley V.A., Lu C.C. 1983. Distribution of mesopelagic cephalopods around a warm-core ring in the East Australian Current. Mem. Natl. Mus. Vic. 44: 197-198.
Waluda C.M., Trathan P.N., Rodhouse P.G. 1999. Influence of oceanographic variability on recruitment in the genus Illex argentinus (Cephalopoda: Ommastraphidae) fishery in the South Atlantic. Mar. Ecol. Prog. Ser. 183: 159-167.
http://dx.doi.org/10.3354/meps183159
Waluda C., Rodhouse P., Podestá G., et al. 2001. Surface oceanography of the inferred hatching grounds of Illex argentinus (Cephalopoda: Ommastrephidae) and influences on recruitment variability. Mar. Biol. 139: 671-679.
http://dx.doi.org/10.1007/s002270100615
Waluda C.M., Yamashiro C., Rodhouse P.G. 2006. Influence of the ENSO cycle on the light-fishery for Dosidicus gigas in the Peru Current: An analysis of remotely sensed data. Fish. Res. 79: 56-63.
http://dx.doi.org/10.1016/j.fishres.2006.02.017
Wang J.T., Chen X.J., Lei L., et al. 2014a. Comparison between two forecasting models of fishing ground based on frequency statistic and neural network for Ommastrephes bartramii in the North Pacific Ocean. J. Guangdong Ocean Univ. 34(3): 82-88.
Wang S.P., Maunder M.N., Aires-da-Silva A. 2014b. Selectivity’s distortion of the production function and its influence on management advice from surplus production models. Fish. Res. 158: 181-193.
http://dx.doi.org/10.1016/j.fishres.2014.01.017
Wang Y.G., Chen X.J. 2005. The resource and biology of economic oceanic squid in the world. Ocean Press, Beijing, pp. 79-295.
Yatsu A., Mori J. 2000. Early growth of the autumn cohort of neon flying squid, Ommastrephes bartramii, in the North Pacific Ocean. Fish. Res. 45: 189-194.
http://dx.doi.org/10.1016/S0165-7836(99)00112-5
Yatsu A., Watanabe T. 1996. Interannual variability in neon flying squid abundance and oceanographic conditions in the central North Pacific, 1982-1992. Bull. Nat. Res. Inst. Far Seas Fish. 33: 123-138.
Yatsu A., Watanabe T., Mori J., et al. 2000. Interannual variability in stock abundance of the neon flying squid, Ommastrephes bartramii, in the North Pacific Ocean during 1979-1998: impact of driftnet fishing and oceanographic conditions. Fish. Oceanogr. 9: 163-170.
http://dx.doi.org/10.1046/j.1365-2419.2000.00130.x
Yu W., Chen X.J., Yi Q., et al. 2013. Review on the early life history of neon flying squid Ommastrephes bartramii in the North Pacific. J. Shanghai Ocean Univ. 22: 755-762.
Yu W., Chen X.J., Yi Q., et al. 2015. Variability of Suitable Habitat of Western Winter-Spring Cohort for Neon Flying Squid in the Northwest Pacific under Anomalous Environments. PLoS One 10(4): e0122997.
http://dx.doi.org/10.1371/journal.pone.0122997
Zhan B.Y. 1992. Fisheries stock assessment. China Agriculture Press, Beijing, pp. 167-193. | 2023-01-28 00:32:08 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 8, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5921628475189209, "perplexity": 6609.180612839903}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499468.22/warc/CC-MAIN-20230127231443-20230128021443-00042.warc.gz"} |
https://www.physicsforums.com/threads/themodynamics-calculating-the-partial-pressure-with-daltons-law.905760/ | # Thermodynamics: calculating the partial pressure with Dalton's Law
1. Feb 27, 2017
### erde
1. The problem statement, all variables and given/known data
In a sealed container there is helium ($M_{He} = \frac {4kg} {kmol}$) at a pressure of $p_{He} = 4bar$. Methane ($M_{CH4} = \frac {16kg} {kmol}$) is now added isothermally to the container until the methane and helium masses are equal. Using the ideal gas law and Dalton's law, calculate the total pressure $p_{total}$ after the procedure.
2. Relevant equations
$P*V=m*R*T$
ψ: volume %
3. The attempt at a solution
I have two different attempts with two different results; can someone please help me identify which one is correct?
attempt 1.
1) $P_{He} V_{He} = m_{He} R_{He} T_{He}$
2) $P_{CH4} V_{CH4}= m_{CH4} R_{CH4} T_{CH4}$
$\frac {1)} {2)} = \frac {P_{He} V_{He}} {P_{CH4} V_{CH4}}= \frac {R_{He}} {R_{CH4}}$
3) $R=\frac {R_m} {M_M}$
$\frac {1)} {2)} = \frac {P_{He} V_{He}} {P_{CH4} V_{CH4}} = \frac {M_{CH4}} {M_{He}} = \frac {16} {4} =4$
$\frac {V_{He}} {V_{CH4}}=\frac {V_{He}} {V_{total}} \frac {V_{total}}{V_{CH4}} = \frac {ψ_{He}} {ψ_{CH4}} =\frac {p_{He}}{p_{CH4}}$
$\frac {p_{He} p_{He}}{p_{CH4} p_{CH4}}=4$
$16bar^2= p_{He} ^2= 4 p_{CH4} ^2$
$4bar^2= p_{CH4} ^2$
$p_{CH4}= 2 bar$
Dalton: $p_{CH4} + p_{He}=p_{total}=6bar$
attempt 2.
Avogadro: 1 mol of gas at 25 °C occupies about 24 l, so:
n: mol;
$R_m$: molar gas constant;
$dT=0$ for an isothermal process, i.e. $T=constant$
$pV=nR_mT$
If $p_{He}$ is 4 bar and we assume that the volume of the container is $24L$ and the temperature is $25°C$, there should be 4 mol of helium in the container, following the equations
$p_{1bar} V_{24l}= n_{1mol} R_m T$
$p_{4bar} V_{24l}= n_{4mol} R_m T$
$V$, $R_m$ and $T$ are constant so increasing $n$ from $1$ to $4$ should also increase $p$ from $1$ to $4$
So if we add methane until $m_{CH4}=m_{He}$, and we know that methane weighs 4 times as much as helium per mole, then there is $1 mol$ of methane for every $4 mol$ of helium.
So if the total amount of gas inside the container is 5 mol, the total pressure should be 5 bar
$p_{5bar} V_{24l}= n_{5mol} R_m T$
so $p_{total}=5bar$
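The mole-ratio argument in attempt 2 can be checked without assuming any particular volume or temperature: at fixed V and T the ideal gas law makes p proportional to the total amount of substance n, so Dalton's law reduces to a ratio. A small Python sketch of that arithmetic:

```python
M_He, M_CH4 = 4.0, 16.0   # kg/kmol, from the problem statement
p_He = 4.0                # bar, helium pressure before methane is added

# Equal masses -> n_CH4 / n_He = M_He / M_CH4 = 1/4.
# At constant V and T, p is proportional to n, so:
#   p_total / p_He = (n_He + n_CH4) / n_He = 1 + M_He / M_CH4
p_total = p_He * (1.0 + M_He / M_CH4)
print(p_total)  # -> 5.0 (bar)
```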
2. Feb 27, 2017
### Staff: Mentor
Neither approach is very good. The first one is incorrect because you do not take into account that, since the total volume does not change, the volume of helium is not the same before and after the methane is added.
In the second case, you shouldn't assume values of the volume and the temperature. You should simply work with a volume $V$ and a temperature $T$.
3. Feb 27, 2017
### erde
so for the second approach the correct way would be to say
1)$p_0 V_0= n_0 R_m T_0$
2)$4p_0 V_0= 4n_0 R_m T_0$
$\frac {2)}{1)}=\frac {4p_0}{p_0}=\frac {4n_0}{n_0}$
so if $m_{CH4}=m_{He}$ => $n_{CH4}=1/4n_{He}$
3) $xp_0 V_0= (4+1/4*4)n_0 R_m T_0$ => $xp_0 V_0=5n_0 R_m T_0$ => $x=5$ so $p_{total}=5bar$ | 2017-10-19 19:49:03 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7778911590576172, "perplexity": 649.7450587028547}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187823462.26/warc/CC-MAIN-20171019194011-20171019214011-00279.warc.gz"}
http://www.physicsforums.com/showthread.php?t=180387 | # Experiment Volumetric analysis - Acid base
by TheDanny
1. The problem statement, all variables and given/known data

Do not understand the question.

2. My lab results

Topic: Volumetric analysis - acid base
Purpose: To determine the exact concentration of a mineral acid, HXO4, and to determine the relative atomic mass of the element X.
Materials: KA1 is a mineral acid, HXO4. KA2 is a solution containing 1.70 g of OH- ions per dm3. Phenolphthalein as indicator.
Procedure: Pipette 25.0 cm3 of KA2 into the titration flask. Add two or three drops of phenolphthalein indicator and titrate this solution with KA1. Record your readings in the table. Repeat the titration as many times as you think necessary to achieve accurate results.

My results are above. My problem is with these questions:

a. Calculate the concentration, in mol dm-3, of solution KA2.
b. Write a balanced ionic equation for the reaction between solution KA1 and solution KA2.
c. Calculate the concentration, in mol dm-3, of the mineral acid HXO4 in solution KA1.
d. If the concentration of the mineral acid HXO4 in solution KA1 is 20.1 g dm-3, calculate the relative molecular mass of HXO4.
e. Using the answer to (d), determine the relative atomic mass of the element X.
f. Suggest an identity for element X.

Please guide me. Thanks.

In question (a) I tried:
mol = mass/mm = 1.7/17 = 0.1 mol
mol = mv/1000
0.1 = m(250)/1000
m = 0.4
Concentration of KA2 is 0.4 mol dm-3?
Quote by TheDanny In question (a) i tried with mol=mass/mm =1.7/17 = 0.1mol
This is correct. Remember that the concentration was given as 1.70 g OH- per dm3 which is the same thing as 1.7 g OH- per liter or 0.1 moles/liter OH-, so this...
mol=mv/1000 0.1=m(250)/1000 m=0.4 concentration of KA2 is 0.4moldm-3?
... calculation isn't necessary.
For the rest of the problem, begin by understanding the neutralization equation.
Try to write the neutralization equation using $$M^+OH^-$$(monobasic) and $$HXO_4$$(monoprotic).
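For reference, here is how the arithmetic for parts (a) and (c)-(e) chains together once a titre is known. The titre value below is hypothetical (the poster's burette readings are not shown), so the downstream numbers only illustrate the method; substitute the real average titre:

```python
# (a) molar concentration of OH- in KA2: 1.70 g/dm3, M(OH-) = 17 g/mol
c_OH = 1.70 / 17.0                  # = 0.1 mol/dm3

# (c) assuming a 1:1 neutralization MOH + HXO4 -> MXO4 + H2O:
V_base = 25.0                       # cm3 of KA2 pipetted
V_acid = 12.50                      # cm3 -- HYPOTHETICAL average titre
c_acid = c_OH * V_base / V_acid     # mol/dm3 of HXO4

# (d) relative molecular mass from the 20.1 g/dm3 given in the problem
M_HXO4 = 20.1 / c_acid

# (e) relative atomic mass of X: subtract one H (1) and four O (4 x 16)
Ar_X = M_HXO4 - 1.0 - 64.0
```

With the hypothetical 12.50 cm3 titre this chain gives c_acid = 0.2 mol/dm3, M(HXO4) = 100.5 and Ar(X) = 35.5; comparing the real Ar(X) against the periodic table answers part (f).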
| 2014-08-01 05:57:21 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4874894917011261, "perplexity": 7667.016038839169}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1406510274581.53/warc/CC-MAIN-20140728011754-00417-ip-10-146-231-18.ec2.internal.warc.gz"}
https://tex.stackexchange.com/questions/294222/forked-arrows-with-chemfig-and-tikzpicture | # Forked arrows with chemfig and tikzpicture
This question was already asked in this link: Forked arrows with chemfig. It is very helpful for me, and I would like to thank Gonzalo Medina for his solution. However, I have two follow-up questions for which I cannot find a solution anywhere.
1) How to make \chemfig work properly inside a tikzpicture? I want to have a chemical structure (drawn with chemfig) above and below the arrow. However, when I use \chemfig{...} all the bonds turn into arrows. I think it is because this command sits inside the \draw command.
2) How can I change the length of only the arrow part (keeping the rest of the line the same) to fit the content of the node above/below the arrow?
Here is the scheme I want to make:
Thanks a ton!
Here is my tex file:
\documentclass[class=minimal,border=0pt,10pt]{standalone}%[a4paper]
\usepackage{chemfig,chemmacros}
\usepackage{tikz}
\usetikzlibrary{arrows,positioning,calc}
\begin{document}
\begin{tikzpicture}[node distance=0cm and 2cm]
%\tikzset{myarrow/.style={->, >=latex', shorten >=1pt, thick},mylabel/.style={text width=7em, text centered} }
\setcrambond{4pt}{}{}
%\setarrowoffset{10pt}
\node (A)
{\chemfig{-[:-15,0.5,,,dash pattern = on 2pt off 2pt]O-[:15,0.5]?[A]<[:-60](-[:165,0.6]HO)-[:15,,,,line width = 4pt](-[:-60,0.6]OH)>[:-15](-[:15,0.5]O-[:-15,0.5,,,dash pattern=on 2pt off 2pt])-[:120]O-[:-165]?[A]}};
%===================================
\node [above right= of A](B)
{\hspace{2cm}\chemfig{-[:-15,0.5,,,dash pattern = on 2pt off 2pt]O-[:15,0.5]?[A]<[:-60](-[:165,0.6]RO)-[:15,,,,line width = 4pt](-[:-60,0.6]O(-[:-90,0.5]-[:-130,0.5](-[:-80,0.6]\chemabove{O}{\hspace{4mm}\scriptstyle\ominus})=[:160,0.6]O))>[:-15](-[:15,0.5]O-[:-15,0.5,,,dash pattern=on 2pt off 2pt])-[:120]O-[:-165]?[A]}};
%===================================
\node[below=of B, align=left](B1){R=H or \ch{CH2COONa} \\depending on DS}; %align=left to use \\ inside node
%===================================
\node[below=0.5 of B1](C)
{\hspace{2cm}\chemfig{-[:-15,0.5,,,dash pattern = on 2pt off 2pt]O-[:15,0.5]?[A]<[:-60](-[:165,0.6]RO)-[:15,,,,line width = 4pt](-[:-60,0.6]O(-[:-120,0.5]-[:-60,0.5](-[,0.4]OR)-[:-120,0.5]-[:-60,0.6]\chemabove{N}{\hspace{-5mm}\scriptstyle\oplus}{(}CH_3{)}_3\chemabove{Cl}{\hspace{4mm}\scriptstyle\ominus}))>[:-15](-[:15,0.5]O-[:-15,0.5,,,dash pattern=on 2pt off 2pt])-[:120]O-[:-165]?[A]}}; %{(} or {)} for sth like N(CH3)3Cl inside chemfig
%===================================
\node[below=of C, align=left](C1)
{\hspace{1cm} R=H, \chemfig{-[:30,0.5](-[:90,0.4]OR)-[:-30,0.5]-[:30,0.6]\chemabove{N}{\hspace{-5mm}\scriptstyle\oplus}{(}CH_3{)}_3\chemabove{Cl}{\hspace{5mm}\scriptstyle\ominus}}\\ \hspace{1.1cm}depending on MS};
%===================================
\node[below=0.5 of C1](D)
{\hspace{2cm}\chemfig{-[:-15,0.5,,,dash pattern = on 2pt off 2pt]O-[:15,0.5]?[A]<[:-60](-[:165,0.6]RO)-[:15,,,,line width = 4pt](-[:-60,0.6]O(-[:-30,0.6]SO_3Na))>[:-15](-[:15,0.5]O-[:-15,0.5,,,dash pattern=on 2pt off 2pt])-[:120]O-[:-165]?[A]}};
%===================================
\node[below=of D](D1)
{R=H, \ch{SO3Na}};
%===================================
\node[below=0.5 of D1](E)
{\hspace{3cm}\chemfig{-[:-15,0.5,,,dash pattern = on 2pt off 2pt]O-[:15,0.5]?[A]<[:-60](-[:165,0.6]RO)-[:15,,,,line width = 4pt](-[:-60,0.6]O(-[:-30,0.5]-[:30,0.5]-[:-30,0.5]-[:30,0.5]-[:-30,0.6]\chemabove{N}{\hspace{-5mm}\scriptstyle\oplus}{(}CH_3{)}_3\chemabove{Cl}{\hspace{5mm}\scriptstyle\ominus}))>[:-15](-[:15,0.5]O-[:-15,0.5,,,dash pattern=on 2pt off 2pt])-[:120]O-[:-165]?[A]}};
%===================================
\node[below=of E](E1)
{R=H, \chemfig{(=[:90,0.4]O)-[:-30,0.5]-[:30,0.5]-[:-30,0.5]-[:30,0.6]\chemabove{N}{\hspace{-5mm}\scriptstyle\oplus}{(}CH_3{)}_3\chemabove{Cl}{\hspace{5mm}\scriptstyle\ominus}}};
%\draw[myarrow] (A.east) -- ++(0.5,0) -- ++(0,1) |-(B.west);
%===================================
\draw[-stealth](A) --($(A.0)!0.5!(B.west|-A.0)$) |- (B.west) node[above]{\ch{ClCH2COONa}}node[below,align=left]{aq. \ch{NaOH}\\slurry medium};
%===================================
\draw[-stealth](A) -- ($(A.0)!0.5!(C.west|-A.0)$) |- (C.west) node[above]{\chemfig{?-[:90,0.5]O-[:-30,0.5]?-[:-30,0.5]-[:30,0.6]\chemabove{N}{\hspace{-5mm}\scriptstyle\oplus}{(}CH_3{)}_3\chemabove{Cl}{\hspace{5mm}\scriptstyle\ominus}}} node[below,align=left]{aq. \ch{NaOH}\\slurry medium};
%===================================
\draw[-stealth](A) -- ($(A.0)!0.5!(D.west|-A.0)$) |- (D.west) node[above,align=left]{(i) \ch{SO3}.DMF or \ch{SO3}.pyridine\\(DMF/LiCl) 50 \si{\degreeCelsius})}node[below,align=left]{(ii) \ch{NaOH}};
%===================================
\draw[-stealth](A) -- ($(A.0)!0.5!(E.west|-A.0)$) |- (E.west) node[above,align=left]{\chemfig{HO-[:30,0.5](=[:90,0.4]O)-[:-30,0.5]-[:30,0.5]-[:-30,0.5]-[:30,0.6]\chemabove{N}{\hspace{-5mm}\scriptstyle\oplus}{(}CH_3{)}_3\chemabove{Cl}{\hspace{5mm}\scriptstyle\ominus}}}node[below,align=left]{DMSO, CDI \\ 20 h, 70 \si{\degreeCelsius}};
\end{tikzpicture}
\end{document}
• Please mark your code with four spaces as code to make it readable. – Carina Feb 16 '16 at 13:40
• Your code does not compile, I'm getting 12 errors, please make sure it's compilable. – Alenanno Feb 16 '16 at 13:41
• @Carina: sorry I don't understand. what do you mean by saying "four spaces"? – Chung Feb 16 '16 at 13:50
• Possible duplicate of Forked arrows with chemfig – Stefan Pinnow Feb 16 '16 at 16:02
• @StefanPinnow: it is the same topic but my questions are different. I could not figure it out with the previous link – Chung Feb 16 '16 at 16:43
To 1) Here you simply need to add the optional argument [-] to the \chemfig command to "remove" the arrow heads from the "draw line" command.
To 2) Here I suggest doing that "manually". First place the topmost node using the `above right=<number>cm and <number>cm of <node>` positioning syntax. Then use the created \split variable to decide where the path to this node should be "split", and the created \xshift variable to set the offset of the nodes placed above and below the arrow. While drawing the reaction scheme you can simply adjust the node positions on the right and/or the \split ratio to fit your needs.
Here is a simplified reaction scheme as a demonstration of the above.
\documentclass[border=2mm]{standalone}
\usepackage{tikz}
\usepackage{chemfig}
\usepackage{chemformula}
\usetikzlibrary{arrows,positioning,calc}
\usepackage{siunitx}
\begin{document}
\begin{tikzpicture}
\setcrambond{4pt}{}{}
\pgfmathsetlengthmacro{\xshift}{1cm}
\pgfmathsetmacro{\split}{0.1}
\node (A) {A};
\node [above right=2cm and 5cm of A](B) {B};
\draw[-stealth](A) --($(A.0)!\split!(B.west|-A.0)$) |- (B.west)
% just to show the alignment point for the following nodes
coordinate [pos=0.5,xshift=\xshift] (test)
node [pos=0.5,xshift=\xshift,above,anchor=south west]
{\ch{ClCH2COONa}}
node [pos=0.5,xshift=\xshift,below,anchor=north west]
{\chemfig[-]{?-[:90,0.5]O-[:-30,0.5]?-[:-30,0.5]-[:30,0.6]\chemabove{N}{\hspace{-5mm}\scriptstyle\oplus}{(}CH_3{)}_3\chemabove{Cl}{\hspace{5mm}\scriptstyle\ominus}}};
% show the alignment points
\fill [red] (test) circle (2pt) -- +(-1cm,0) circle (2pt);
\end{tikzpicture}
\end{document}
And here you can find your full reaction scheme, where I have additionally cleaned up some unnecessary \hspace commands and set proper alignments of the nodes.
\documentclass[border=2mm]{standalone}
\usepackage{tikz}
\usepackage{chemfig}
\usepackage{chemformula}
\usetikzlibrary{arrows,positioning,calc}
\usepackage{siunitx}
\begin{document}
\begin{tikzpicture}[
>=stealth,
shorten >=1mm,
component node/.style={
% keys for \chemfig'
-,
shorten >=0pt,
% % fill the nodes (useful for debugging)
% fill=red!50,
},
arrow node/.style={
pos=0.5,
xshift=\xshift,
align=left,
% keys for \chemfig'
-,
shorten >=0pt,
},
]
\setcrambond{4pt}{}{}
\pgfmathsetlengthmacro{\xshift}{1mm}
\pgfmathsetmacro{\split}{0.1}
% left side of reaction scheme
\node [component node] (A)
{\chemfig{-[:-15,0.5,,,dash pattern = on 2pt off 2pt]O-[:15,0.5]?[A]<[:-60](-[:165,0.6]HO)-[:15,,,,line width = 4pt](-[:-60,0.6]OH)>[:-15](-[:15,0.5]O-[:-15,0.5,,,dash pattern=on 2pt off 2pt])-[:120]O-[:-165]?[A]}};
% right side of reaction scheme
\node [component node,above right=2cm and 6cm of A] (B)
{\chemfig{-[:-15,0.5,,,dash pattern = on 2pt off 2pt]O-[:15,0.5]?[A]<[:-60](-[:165,0.6]RO)-[:15,,,,line width = 4pt](-[:-60,0.6]O(-[:-90,0.5]-[:-130,0.5](-[:-80,0.6]\chemabove{O}{\hspace{4mm}\scriptstyle\ominus})=[:160,0.6]O))>[:-15](-[:15,0.5]O-[:-15,0.5,,,dash pattern=on 2pt off 2pt])-[:120]O-[:-165]?[A]}};
\node [component node,below=0 of B.south west,anchor=north west,align=left] (B1)
{R=H or \ch{CH2COONa} \\ depending on DS}; %align=left to use \\ inside node
\node [component node,below=0.5 of B1.south west,anchor=north west] (C)
{\chemfig{-[:-15,0.5,,,dash pattern = on 2pt off 2pt]O-[:15,0.5]?[A]<[:-60](-[:165,0.6]RO)-[:15,,,,line width = 4pt](-[:-60,0.6]O(-[:-120,0.5]-[:-60,0.5](-[,0.4]OR)-[:-120,0.5]-[:-60,0.6]\chemabove{N}{\hspace{-5mm}\scriptstyle\oplus}{(}CH_3{)}_3\chemabove{Cl}{\hspace{4mm}\scriptstyle\ominus}))>[:-15](-[:15,0.5]O-[:-15,0.5,,,dash pattern=on 2pt off 2pt])-[:120]O-[:-165]?[A]}}; %{(} or {)} for sth like N(CH3)3Cl inside chemfig
\node [component node,below=0 of C.south west,anchor=north west, align=left](C1)
{R=H, \chemfig{-[:30,0.5](-[:90,0.4]OR)-[:-30,0.5]-[:30,0.6]\chemabove{N}{\hspace{-5mm}\scriptstyle\oplus}{(}CH_3{)}_3\chemabove{Cl}{\hspace{5mm}\scriptstyle\ominus}}\\ \hspace{1.1cm}depending on MS};
\node [component node,below=0.5 of C1.south west,anchor=north west](D)
{\chemfig{-[:-15,0.5,,,dash pattern = on 2pt off 2pt]O-[:15,0.5]?[A]<[:-60](-[:165,0.6]RO)-[:15,,,,line width = 4pt](-[:-60,0.6]O(-[:-30,0.6]SO_3Na))>[:-15](-[:15,0.5]O-[:-15,0.5,,,dash pattern=on 2pt off 2pt])-[:120]O-[:-165]?[A]}};
\node [component node,below=0 of D.south west,anchor=north west] (D1)
{R=H, \ch{SO3Na}};
\node [component node,below=0.5 of D1.south west,anchor=north west] (E)
{\chemfig{-[:-15,0.5,,,dash pattern = on 2pt off 2pt]O-[:15,0.5]?[A]<[:-60](-[:165,0.6]RO)-[:15,,,,line width = 4pt](-[:-60,0.6]O(-[:-30,0.5]-[:30,0.5]-[:-30,0.5]-[:30,0.5]-[:-30,0.6]\chemabove{N}{\hspace{-5mm}\scriptstyle\oplus}{(}CH_3{)}_3\chemabove{Cl}{\hspace{5mm}\scriptstyle\ominus}))>[:-15](-[:15,0.5]O-[:-15,0.5,,,dash pattern=on 2pt off 2pt])-[:120]O-[:-165]?[A]}};
\node [component node,below=0 of E.south west,anchor=north west](E1)
{R=H, \chemfig{(=[:90,0.4]O)-[:-30,0.5]-[:30,0.5]-[:-30,0.5]-[:30,0.6]\chemabove{N}{\hspace{-5mm}\scriptstyle\oplus}{(}CH_3{)}_3\chemabove{Cl}{\hspace{5mm}\scriptstyle\ominus}}};
% draw reaction arrows + nodes
%\draw[myarrow] (A.east) -- ++(0.5,0) -- ++(0,1) |-(B.west);
\draw [->] (A) -- ($(A.east)!\split!(B.west|-A.east)$) |- (B.west)
node [arrow node,above right]
{\ch{ClCH2COONa}}
node [arrow node,below right]
{aq. \ch{NaOH}\\slurry medium};
\draw [->] (A) -- ($(A.east)!\split!(C.west|-A.east)$) |- (C.west)
node [arrow node,above right]
{\chemfig[-]{?-[:90,0.5]O-[:-30,0.5]?-[:-30,0.5]-[:30,0.6]\chemabove{N}{\hspace{-5mm}\scriptstyle\oplus}{(}CH_3{)}_3\chemabove{Cl}{\hspace{5mm}\scriptstyle\ominus}}}
node [arrow node,below right]
{aq. \ch{NaOH} \\ slurry medium};
\draw [->] (A) -- ($(A.east)!\split!(D.west|-A.east)$) |- (D.west)
node [arrow node,above right]
{(i) \ch{SO3}.DMF or \ch{SO3}.pyridine\\(DMF/LiCl) \SI{50}{\degreeCelsius})}
node [arrow node,below right]
{(ii) \ch{NaOH}};
\draw [->] (A) -- ($(A.east)!\split!(E.west|-A.east)$) |- (E.west)
node [arrow node,above right]
{\chemfig[-]{HO-[:30,0.5](=[:90,0.4]O)-[:-30,0.5]-[:30,0.5]-[:-30,0.5]-[:30,0.6]\chemabove{N}{\hspace{-5mm}\scriptstyle\oplus}{(}CH_3{)}_3\chemabove{Cl}{\hspace{5mm}\scriptstyle\ominus}}}
node [arrow node,below right]
{DMSO, CDI \\ \SI{20}{\hour}, \SI{70}{\degreeCelsius}};
\end{tikzpicture}
\end{document}
• wow..what a wonderful work. It looks quite complicated for me...but I will definitely learn from it. Thank you thousand times. Just wanna add something here. I didn't know that the number between (..) in the code: $...$ was to specify the point for splitting as well. So, I just figured out how to adjust the length of the arrow part as I mentioned in my second question. Thanks thanks thanks :-) – Chung Feb 16 '16 at 18:43
• @Chung, you are welcome. I think "my work" was the easy part of the picture. I would have never been able to draw the \chemfigs. What really helps to figure out how tikz works is to do and understand the first tutorials in the pgf manual (sections 2 to 5, if you don't need mindmaps). Maybe you have to do them twice, but there will come the time it makes "click" and you'll have a really good understanding what is going on and how to achieve stuff. – Stefan Pinnow Feb 16 '16 at 19:01
• you are right. I will definitely read and practice the instruction. By the way, it is really fun to play with chemfig. I am getting better in drawing chemical structures and really happy about that. I hope that I can help you later :-) Thanks again..I wish I could vote for more than 1 time :D – Chung Feb 16 '16 at 19:25
https://sitacuisses.blogspot.com/2012/01/tale-of-two-long-tails.html | ## Thursday, January 19, 2012
### A tale of two long tails
Power law (Zipf) long tails versus exponential (Poisson) long tails: mathematical musings with important real-world implications.
There's a lot of talk about long tails, both in finance (where fat tails, a/k/a kurtosis, turn hedging strategies into a false sense of safety) and in retail (where some people think they just invented niche marketing). I leave finance for people with better ~~salaries~~ brainpower, and focus only on retail for my examples.
A lot of money can be made serving the customers on the long tail; that much we already knew from decades of niche marketing. The question is how much, and for this there are quite a few considerations; I will focus on the difference between exponential decay (Poisson) long tails and hyperbolic decay (power law) long tails and how that difference would impact different emphasis on long tail targeting (that is, how much to invest going after these niche customers), say for a bookstore.
A Poisson distribution over $N\ge 0$ with parameter $\lambda$ has probability mass function:
$\Pr(N=n|\lambda) =\frac{\lambda^{n}\, e^{-\lambda}}{n!}$.
A discrete power law (Zipf) distribution for $N\ge 1$ with parameter $s$ is given by:
$\Pr(N=n|s) =\frac{n^{-s}}{\zeta(s)},$
where $\zeta(s)$ is the Riemann zeta function; note that it's only a scaling factor given $s$.
A couple of observations:
1. Because the power law has $\Pr(N=0|s)=0$, I'll actually use a Poisson + 1 process for the exponential long tail. This essentially means that the analysis would be restricted to people who buy at least one book. This assumption is not as bad as it might seem: (a) for brick-and-mortar retailers, this data is only collected when there's an actual purchase; (b) the process of buying a book at all -- which includes going to the store -- may be different from the process of deciding whether to buy a given book or the number of books to buy.
2. Since I'm not calibrating the parameters of these distributions on client data (which is confidential), I'm going to set these parameters to equalize the means of the two long tails. There are other approaches, for example setting them to minimize a measure of distance, say the Kullback-Leibler divergence or the mean square error, but the equal means is simpler.
The following diagram compares a Zipf distribution with $s=3$ (which makes $\mu=1.37$) and a 1 + Poisson process with $\lambda=0.37$:
The important data is the grey line, which maps into the right-side logarithmic scale: for all the visually impressive differences in the small numbers $N$ on the left, the really large ratios happen in the long tail. This is one of the issues a lot of probabilists point out to practitioners: it's really important to understand the behavior at the small probability areas of the distribution support, especially if they represent -- say -- the possibility of catastrophic losses in finance or the potential for the customers who buy large numbers of books.
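That grey-line behavior can be sketched numerically (a rough check: the parameter values $s=3$ and $\lambda=0.37$ are the ones from the diagram, and $\zeta(s)$ is approximated by a truncated series):

```python
import math

def zipf_pmf(n, s, terms=200000):
    # Zipf pmf on n >= 1; zeta(s) approximated by a truncated series
    zeta_s = sum(k ** -s for k in range(1, terms + 1))
    return n ** -s / zeta_s

def one_plus_poisson_pmf(n, lam):
    # pmf of N = 1 + Poisson(lam), for n >= 1
    k = n - 1
    return lam ** k * math.exp(-lam) / math.factorial(k)

s, lam = 3.0, 0.37  # equal means: zeta(2)/zeta(3) ~= 1.37 ~= 1 + 0.37
ratios = {n: zipf_pmf(n, s) / one_plus_poisson_pmf(n, lam)
          for n in (2, 5, 10)}
# The ratio grows without bound in the tail: the power law puts far
# more probability on heavy buyers than the exponential tail does.
```

With these numbers the ratio is below 1 at $N=2$ but exceeds $10^5$ by $N=10$, which is exactly the explosion the logarithmic grey line shows.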
An aside, from Seth Godin, about the importance of the heavy user segment in bookstores:
Amazon and the Kindle have killed the bookstore. Why? Because people who buy 100 or 300 books a year are gone forever. The typical American buys just one book a year for pleasure. Those people are meaningless to a bookstore. It's the heavy users that matter, and now officially, as 2009 ends, they have abandoned the bookstore. It's over.
To illustrate the importance of even the relatively small ratios for a few books, this diagram shows the percentage of purchases categorized by size of purchase:
Yes, the large number of customers who buy a small number of books still gets a large percent of the total, but each of these is not a good customer to have: elaborating on Seth's post, these one-book customers are costly to serve, typically will buy a heavily-discounted best-seller and are unlikely to buy the high-margin specialized books, and tend to be followers, not influencers of what other customers will spend money on (so there are no spillovers from their purchase).
The small probabilities have been ignored long enough; finance is now becoming wary of kurtosis, and marketing should go back to its roots and merge niche marketing with big data, instead of trying to reinvent the well-known wheel.
Lunchtime addendum: The differences between the exponential and the power law long tail are reproduced, to a smaller extent, across different power law regimes:
Note that the logarithmic scale implies that the increasing vertical distances with $N$ are in fact increasing probability ratios.
- - - - - - - - -
Well, that plan to make this blog more popular really panned out, didn't it? :-)
https://math.stackexchange.com/questions/3820089/why-does-lim-limits-delta-x-to-0-fracfx-2-delta-x-fx-delta-x | # Why does $\lim\limits_{\Delta x \to 0} \frac{f(x + 2 \Delta x) - f(x)}{\Delta x} = 2 f'(x)$?
This is an exercise from Morris Kline's "Calculus: An Intuitive and Physical Approach":
What is $$\lim\limits_{\Delta x \to 0} \frac{f(x + 2 \Delta x) - f(x)}{\Delta x}$$?
Suggestion: Let $$2 \Delta x = t$$
Following the hint, we have
\begin{align} \lim\limits_{\Delta x \to 0} \frac{f(x + 2 \Delta x)-f(x)}{\Delta x} &=\lim\limits_{t \to 0} \frac{f(x + t)-f(x)}{t/2} \\ &= \lim\limits_{t \to 0} 2 \left ( \frac{f(x + t)-f(x)}{t} \right) \\ &= 2 \left ( \lim\limits_{t \to 0} \frac{f(x + t)-f(x)}{t} \right) \\ &= 2 f'(x) \end{align}
I'm not fully understanding why this is true. Wouldn't it depend on the function we are differentiating? How can we be sure that increasing the change in $$x$$ will increase the instantaneous rate of change in $$f(x)$$?
Your calculation is fine. For an intuition: we are considering a variation $$2\Delta x$$ of the variable that is twice the variation $$\Delta x$$, so the result we obtain is twice the derivative at that point.
$$\lim\limits_{\Delta x \to 0} \frac{f(x + n \Delta x) - f(x)}{\Delta x}=\lim\limits_{n\Delta x \to 0} n\frac{f(x + n \Delta x) - f(x)}{n\Delta x}=nf'(x)$$
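A quick numerical sanity check (a sketch; $$f=\sin$$ is an arbitrary smooth choice, nothing in the argument depends on it):

```python
import math

def scaled_forward_diff(f, x, h, k):
    # (f(x + k*h) - f(x)) / h, which approaches k * f'(x) as h -> 0
    return (f(x + k * h) - f(x)) / h

x = 1.0
approx = scaled_forward_diff(math.sin, x, h=1e-6, k=2)
# approx is close to 2 * cos(1), i.e. twice f'(x)
```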
• So basically the instantaneous rate of change increases by a factor of $n$ since we increased how much the function varies by a factor of $n$ while keeping the interval of variation the same ($\Delta x$)? – Iyeeke Sep 9 '20 at 20:53
• @Iyeeke Yes exactly that's the reason why we obtain $nf'(x)$. – user Sep 9 '20 at 20:54
https://thatsmaths.com/2019/10/24/some-fundamental-theorems-of-maths/ | ### Some Fundamental Theorems of Maths
Every branch of mathematics has key results that are so important that they are dubbed fundamental theorems.
The customary view of mathematical research is that of establishing the truth of propositions or theorems by rigorous deduction from axioms and definitions. Mathematics is founded upon axioms, basic assumptions that are taken as true. Logical reasoning is then used to deduce the consequences of those axioms with each major result designated as a theorem.
As each new theorem is proved, it provides a basis for the establishment of further results. The most important and fruitful theorem in each area of maths is often named as the fundamental theorem of that area. Thus, we have the fundamental theorems of arithmetic, algebra and so on. For example, the fundamental theorem of calculus gives the relationship between differential calculus and integral calculus.
Left: Pythagoras. Right: Thales.
The procedure of definitions and postulates, followed by theorems to be proved, goes back beyond Euclid, to Pythagoras and Thales. The old joke that a mathematician is a machine for changing coffee into theorems is often attributed to Paul Erdős, although it appears to have been said first by Alfréd Rényi, another Hungarian mathematician. The joke captures a critical aspect of the mathematician’s role: proving theorems by using axioms and other theorems already proved.
Fundamental Theorem of Arithmetic
In number theory, the fundamental theorem of arithmetic states that any integer greater than 1 can be written as a unique product of prime numbers. This identifies the prime numbers as the basic building blocks of all the integers. All other whole numbers are products of primes. The proof of this result goes back to Euclid. The fundamental theorem of arithmetic is essential in the proof of many crucial results in number theory.
This theorem ensures unique factorization of positive integers. It is one of the reasons why 1 is not treated as a prime number. This is largely a matter of convenience.
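As a concrete illustration, trial division recovers this unique factorization (a minimal sketch, not an efficient algorithm):

```python
def prime_factors(n):
    # Unique prime factorization of an integer n > 1 (fundamental
    # theorem of arithmetic), returned in non-decreasing order.
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors

# e.g. 360 = 2 * 2 * 2 * 3 * 3 * 5
```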
Fundamental Theorem of Algebra
The fundamental theorem of algebra states that every single-variable polynomial with complex coefficients has at least one complex root. Equivalently, the field of complex numbers is algebraically closed. The theorem has the consequence that every non-zero single-variable polynomial with complex coefficients has precisely as many complex roots as its degree (provided each root is counted up to its multiplicity).
Euler, d’Alembert, Argand, Lagrange, Laplace and Gauss all worked on proofs. Today, the theorem is usually encountered in a course in complex variable analysis. There is no purely algebraic proof of the theorem; all proofs must use the concept of the completeness of the real numbers, which is an analytical concept. The theorem was given its name when algebra was preoccupied primarily with the theory of polynomial equations. It has been observed that the Fundamental Theorem of Algebra is neither fundamental nor algebraic.
Fundamental Theorem of Calculus
The fundamental theorem of calculus provides a connection between derivatives and integrals. The first part of the theorem shows that an indefinite integration can be reversed by a differentiation:
$\displaystyle \frac{\mathrm{d}}{\mathrm{d}x} \int f(x)\,\mathrm{d}x = f(x) \,.$
The second part allows us to evaluate the definite integral of a function ${f(x)}$ by using an `anti-derivative' ${F(x)}$ of ${f(x)}$. If ${f(x)}$ is the derivative of ${F(x)}$, then

$\displaystyle \int_a^b f(x) \,\mathrm{d}x = \int_a^b F^\prime(x) \,\mathrm{d}x = F(b) - F(a) \,.$
The theorem was studied by James Gregory and Isaac Barrow. A more comprehensive treatment was provided by Isaac Newton and Gottfried Leibniz.
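As a quick numerical illustration of the second part (a sketch: f = cos with anti-derivative F = sin is an arbitrary choice, and a midpoint Riemann sum stands in for the integral):

```python
import math

def midpoint_riemann(f, a, b, n=100000):
    # midpoint Riemann sum approximating the definite integral of f over [a, b]
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

a, b = 0.0, math.pi / 2
approx = midpoint_riemann(math.cos, a, b)
exact = math.sin(b) - math.sin(a)  # F(b) - F(a), which the theorem says equals the integral
# approx and exact agree to many decimal places
```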
The theorem can be extended to integrals in higher dimensions and integrals on smooth manifolds. One of the most fruitful generalizations is Stokes’s theorem. This may be written
$\displaystyle \int_{A} \mathbf{\nabla\times V\cdot n}\, \mathrm{d} A = \oint_{\partial A} \mathbf{V\cdot\mathrm{d} s}\,.$
This result states that the areal integral of vorticity over a surface is equal to the circulation around the boundary. In turn, Stokes’s theorem itself has been generalized to become an important principle in differential geometry: the integral of a differential form over the boundary of an orientable manifold is equal to the integral of its exterior derivative over the manifold; symbolically,
$\displaystyle \int_{\partial\Omega} \omega = \int_{\Omega} \mathrm{d} \omega \,.$
Fundamental Theorem of Curves
In differential geometry, the fundamental theorem of curves states that any regular curve has its shape (and size) completely determined by its curvature ${\kappa}$ and torsion ${\tau}$. For example, a curve with ${\kappa \equiv 1}$ and torsion ${\tau \equiv 0}$ must be a circle, although further data is required to determine its position and orientation.
Many other Fundamental Theorems
The number of fundamental theorems is large. We may mention
• Fundamental theorem of surfaces
• Fundamental theorem of Galois theory
• Fundamental theorem on homomorphisms
• Fundamental theorem of linear algebra
• Fundamental theorem of Riemannian geometry
• Fundamental theorem of vector analysis
• Fundamental theorem of linear programming
and many more might be added to this list.
Fundamental Theorem of Geometry (?)
A curious omission from the standard list is geometry. There is no particular theorem named as the fundamental theorem of geometry. Several candidates suggest themselves. One of the leading contenders must be the theorem of Pythagoras for right-angled triangles:
$\displaystyle a^2 + b^2 = c^2 \,.$
This underlies the structure of Euclidean space. It may be generalized in several directions. For example, the fundamental line-element of Riemannian geometry is
$\displaystyle \mathrm{d}s^2 = g_{\mu\nu} \mathrm{d}x^\mu \mathrm{d} x^\nu$
from which a wealth of interesting and valuable results follow.
https://www.nature.com/articles/s41467-020-18080-w?error=cookies_not_supported | ## Introduction
Heterogeneous single-atom catalysts and sub-nanometer single-cluster catalysts (SCC) have emerged as promising candidates in the field of heterogeneous catalysis owing to their exceptional catalytic capabilities and minimized metal use1,2,3,4,5. Unfortunately, the intrinsic instability of single-atom species often results in their agglomeration into clusters or nanoparticles during the synthetic process and chemical reactions, which has so far severely limited their practical applications6,7. In comparison to single atoms, sub-nanometer metal clusters possess a higher stability and greater tunability in terms of their geometric and electronic structures8,9,10,11,12, and also exhibit remarkable catalytic properties as compared to larger metal nanoparticles6,13,14,15,16. In the sub-nanometer regime, each atom has a substantial impact on the electronic and catalytic properties of metal clusters17,18,19. Hence, precise atomic control over the size and composition of sub-nanometer clusters is crucial for tuning the activity and/or the selectivity of the clusters involved in various catalytic processes20. Furthermore, due to the strong electronic coupling between doped foreign atoms and host atoms, the catalytic performance of the clusters can be further tailored and/or enhanced by the incorporation of judiciously chosen dopants into the monometallic host21. However, supported bimetal cluster catalysts synthesized via conventional chemical methods (such as wet impregnation22 and sequential vapor deposition11) usually exhibit random size distribution and uncontrolled atomic positioning of dopants, posing a great challenge to the optimization of their catalytic activities and elucidation of their origin. Hence, it is highly desirable to create robust singly dispersed ultrafine bimetallic clusters with atomic precision on a solid support for superior catalytic performance. This however remains a grand challenge in the field of heterogeneous catalysis.
The design of precisely doped bimetallic cluster catalysts for effective N2 activation toward ammonia (NH3) synthesis is not only fundamentally intriguing but also economically vital. Electrochemical N2-to-NH3 reduction is emerging as a promising decentralized approach for NH3 production23,24,25,26,27,28,29, in contrast to the energy-intensive Haber–Bosch thermal process that has dominated ammonia production for nearly a century30,31. In nature, nitrogenase enzymes containing bimetallic active centers (the FeMo cofactor) are capable of reducing N2 into ammonia under ambient conditions32. Inspired by nature, chemists have been attempting to mimic the active sites of these natural enzymes and design bimetallic electrocatalysts for the reduction of N2 into NH3 under mild conditions30. Recently, a series of metal-containing and metal-free catalysts have been demonstrated to be active for the electrochemical N2 reduction reaction (ENRR)33,34,35,36,37,38,39,40,41,42,43. However, both the NH3 production rate and the selectivity of these catalysts remain low, and the underlying mechanism of ENRR is not fully understood. Therefore, the ability to design efficient catalysts with atomic precision offers great opportunities to deepen the mechanistic understanding of ENRR and to further improve catalytic performance.
To this end, we have devised a facile method for the synthesis of an ultrafine bimetallic Au4Pt2/G SCC, containing a partially ligand-protected six-metal-atom (four Au and two Pt) octahedral cluster anchored on graphene, for ENRR. To achieve this, we first developed a synthetic approach for atomically precise Au4Pt2(SR)8 clusters using a thiol as both the ligand and the reducing agent. Interestingly, subsequent partial ligand removal from Au4Pt2(SR)8 via thermal treatment allows each cluster to be anchored at a graphene vacancy site, creating the Au4Pt2/G SCC with superior catalytic performance for ENRR. This synthetic strategy can also be extended to Pd atoms in place of Pt while keeping the cluster framework unchanged. Moreover, it is found that the Au4Pd2/G SCC outperforms the Au4Pt2/G SCC and the majority of reported ENRR catalysts (Supplementary Fig. 1 and Supplementary Table 1) in terms of maximum NH3 yield and faradaic efficiency of ammonia production. This allows us to fine-tune the catalytic properties of precisely doped ultrafine bimetallic clusters and understand their structure–property correlations at the atomic level.
## Results
### Synthesis and characterization of clusters
It is generally recognized that metal ions reduced by a strong reducing agent (e.g., NaBH4) are prone to aggregate and form medium-sized clusters or large nanoparticles. Hence, we expected that a weak reducing agent may be favorable for the synthesis of ultra-small bimetallic clusters. It has also been demonstrated that thiols, the common ligands used in the synthesis of gold clusters, are able to reduce Au(III) to Au(I) owing to their low electronegativity44. Inspired by this, we developed a new method for the synthesis of ultrafine Au–Pt bimetallic clusters using 2-phenylethanethiol (HSC2H4Ph) as both the ligand and a weak reducing agent (see “Methods” for details). The composition of the as-obtained clusters was determined using high-resolution atmospheric pressure chemical ionization mass spectrometry (APCI-MS) as well as thermogravimetric analysis (TGA). As shown in Fig. 1a, an intense peak at m/z ~2274 is observed, which can be assigned to the molecular ion of the bimetallic cluster. TGA shows a total weight loss of 48% at temperatures above 600 °C, attributed to the desorption of -SC2H4Ph ligands (Supplementary Fig. 2). Based on these results, the composition of the clusters can be readily deduced to be Au4Pt2(SR)8 (R represents C2H4Ph). The assigned composition is further validated by the excellent agreement between the experimental and calculated isotopic MS patterns of Au4Pt2(SR)8 (inset of Fig. 1a).
We also managed to obtain needle-like yellow single crystals of Au4Pt2(SR)8 clusters (Fig. 1b), allowing for accurate structural determination by single-crystal X-ray diffraction (Supplementary Table 2). As shown in Fig. 1c, each cluster consists of a distorted octahedron composed of a plane of four Au atoms with two Pt atoms located on opposite sides of the Au plane. The octahedron is fully protected by eight thiol ligands, forming eight S−Au and eight S−Pt bonds. Interestingly, each Au4Pt2(SR)8 cluster can act as a building block for crystallization into a 1D polymeric chain-like structure (Fig. 1d). The unit cell contains two interconnected clusters linked by the waist Au atom of each Au4Pt2(SR)8 (Supplementary Fig. 3). In addition, the 1D polymeric cluster chain was observed to disassemble into individual clusters (refer to the TEM and AFM results below) upon dissolution in organic solvents such as toluene or dichloromethane.
To better understand their electronic properties, we performed scanning tunneling microscopy (STM) imaging and spectroscopy (STS) measurements of individual Au4Pt2(SR)8 clusters deposited on a graphite surface. After mild annealing at 70 °C, individual clusters with different orientations can be readily imaged (Fig. 2a). However, upon annealing at 100 °C, isolated clusters were found to aggregate into densely packed monolayer islands (Supplementary Fig. 4), indicating weak interactions between the clusters and the substrate. A representative STS curve acquired over a single cluster shows a wide gap-like feature and several prominent peaks outside the gap, attributed to the molecular HOMO and LUMO orbitals as labeled in Fig. 2b. The calculated wave function patterns of these orbitals reveal that the HOMO and LUMO of the cluster mainly consist of contributions from the bimetallic Au4Pt2 core and the S atoms, respectively (Fig. 2c). The HOMO–LUMO gap of each supported single cluster is experimentally determined to be 2.67 eV, in reasonably good agreement with that of the gas-phase cluster (2.82 eV) predicted by density functional theory (DFT) calculations. In addition, atomic force microscopy (AFM) imaging shows that a relatively uniform distribution of single Au4Pt2(SR)8 clusters can be achieved on high-quality monolayer graphene (Fig. 2d). An AFM line profile reveals a height of ~2 nm for each bright dot, in line with the expected size of individual clusters (see Supplementary Fig. 5).
All the above-mentioned observations highlight that fully ligand-protected bimetallic clusters retain their structural integrity upon deposition on a weakly interacting substrate, but these clusters lack the desired stability and activity required for catalysis. Hence, we selected defective graphene derived from the reduction of graphene oxide to anchor the Au4Pt2(SR)8 clusters for the fabrication of robust and active SCCs for the electrochemical N2 reduction as will be discussed below.
### Synthesis and characterization of Au4Pt2/G catalyst for ENRR
The structural defects in graphene usually act as active sites for reaction with ligand-protected clusters and organic–metal complexes, eventually binding them via partial removal of the organic ligands45. Partial ligand removal often reactivates the otherwise inert, fully protected metal clusters for catalysis owing to the alteration of their electronic structures13,46. For their transformation into stable, finely dispersed SCCs, it is most likely that individual Au4Pt2(SR)8 clusters were immobilized at the vacancy sites of chemically derived graphene through partial ligand removal. Indeed, we observed monodispersed clusters stabilized on defective graphene (denoted as Au4Pt2/G) by large-area scanning transmission electron microscopy (STEM), as shown in Fig. 2e. Several representative high-magnification STEM images (Fig. 2f) also reveal that the majority of bright dots contain a cluster of six atoms, as expected for the bimetallic Au4Pt2 cluster. We also found that the atomic arrangements of the imaged clusters vary, which can be attributed to the different viewing directions of the clusters or electron-beam-induced cluster dissociation.
We then evaluated the ENRR performance of the as-prepared Au4Pt2/G catalyst in comparison with that of Au4Pt2(SR)8 without the graphene support, using an aqueous-based electrochemical setup as illustrated in Fig. 3a. It was observed that the Au4Pt2/G SCC demonstrates significantly higher ENRR activity than Au4Pt2(SR)8 at all applied reduction potentials (Fig. 3b, c and Supplementary Fig. 6). The Au4Pt2(SR)8 catalyst demonstrates a maximum NH3 yield of 7.9 µg mg−1 h−1 (Fig. 3b) with a faradaic efficiency (FE) of 9.7% at −0.1 V (Fig. 3c). In contrast, the Au4Pt2/G SCC generates a maximum NH3 yield of up to 23.6 µg mg−1 h−1 at −0.1 V, three times that of the Au4Pt2(SR)8 catalyst (Fig. 3b and Supplementary Table 3; for area-normalized yield rates see Supplementary Table 4). Hence, this observation suggests that the defective graphene support plays an important role in optimizing the catalytic activity of the bimetallic cluster in ENRR. It is worth mentioning that the ENRR performance (NH3 yield and FE) of both the Au4Pt2(SR)8 and Au4Pt2/G catalysts declines as the reduction potential becomes more negative. This trend can be rationalized by the fact that at more negative potentials the hydrogen evolution reaction (HER) becomes dominant, which severely limits ENRR toward NH3 production. We also found that no NH3 can be detected when defective graphene alone is employed as the catalyst for ENRR, or when the same experiment is performed in an argon-saturated electrolyte without the N2 source (Supplementary Fig. 6). Nuclear magnetic resonance (NMR) spectroscopy was also employed to confirm the generation of ammonia. The 1H resonance coupled to 14N in 14NH4+ is split into three symmetric signals with a spacing of 52 Hz (Fig. 3d)26. In addition, we conducted isotopic 15N labeling to further confirm the source of nitrogen for NH3 production.
A doublet pattern with a coupling constant of JN–H = 72 Hz, attributed to 15NH4+, was observed in the 1H-NMR spectra (Fig. 3d)26,47. In order to validate the yield of NH3, we developed a new approach involving a direct comparison between the ratio of the 14N2/15N2 gas mixture fed and the ratio of 14NH4+/15NH4+ produced. When a mixture of 14N2 and 15N2 with a mole ratio of 9:1 (or 1:1) is used, the ratio of 14NH4+ to 15NH4+ is determined to be 8.97:1 (or 1.07:1) (obtained from the NMR signals associated with 14NH4+ and 15NH4+), proportional to the initial 14N2/15N2 gas ratio (Fig. 3d). All these results indicate that the NH3 obtained does not originate from the electrolyte and/or the materials used in the electrochemical system. The established correlation between the fed 14N2/15N2 gas ratio and the produced 14NH4+/15NH4+ ratio validates the NH3 yield determined in our case.
### Probing the origin of catalytic activation of N2
To gain a deeper understanding of the local chemical environment of the active sites, we carried out X-ray absorption fine structure (XAFS) measurements to monitor the changes in chemical bonding and oxidation state of the metal species upon anchoring of the bimetallic clusters on graphene48. As revealed in Fig. 4a, a higher white-line intensity is observed in the Pt L3-edge spectrum of Au4Pt2/G compared with that of Au4Pt2(SR)8, suggesting a higher density of d-band holes at the Pt sites of Au4Pt2/G. This can be attributed to charge depletion of the d band due to a strong cluster–substrate interaction49. A detailed analysis of the Pt L3-edge FT-EXAFS reveals that the Pt–S bond is stretched to 1.78 Å for the anchored Au4Pt2 cluster, compared with 1.75 Å for the unsupported Au4Pt2(SR)8 cluster (see Supplementary Fig. 7). In addition, the Au L3-edge XAFS spectrum (Fig. 4b) shows a negligible change of spectroscopic features before and after the anchoring of the clusters on graphene. Therefore, it is most likely that the ligand-detached Pt atom is bonded to a carbon atom at the vacancy site via partial ligand removal during the thermal treatment. Such a cluster anchoring process is analogous to the fabrication of surface-supported single-site molecular catalysts reported previously45.
In order to determine the atomic structures of the Au4Pt2/G SCCs, we performed DFT calculations with van der Waals corrections (in the D2 format) in combination with standard simulations of the X-ray absorption near-edge structure (XANES) (see Fig. 4). Based on our XANES simulations and the plausible surface reaction mechanism, it is highly possible that each cluster undergoes partial ligand removal, leading to subsequent bonding to carbon atoms at the graphene vacancy. We hence propose several possible atomic configurations of Au4Pt2/G along this line, which are further optimized via DFT calculations. Our calculations reveal a stable structure consisting of partially ligand-protected Au4Pt2(SR)6 bonded to carbon atoms at the graphene vacancy, wherein two ligands at the base of each Au4Pt2(SR)8 cluster are eliminated to form the Pt–C anchoring bonds (as illustrated in Fig. 4c). In addition, we also tested other proposed structures, such as the graphene-supported Au4Pt2(SR)8 cluster (without any missing ligand, Fig. 4e), Au4Pt2(SR)7, Au4Pt2(SR)4, Au4Pt2(SR)2, and Au4Pt2(SR)0 (missing one, four, six, and eight ligands, respectively; refer to Supplementary Figs. 8 and 9). However, none of the simulated XANES spectra of these proposed structures agree with the experimental XANES data.
To understand how the ligand removal modifies the electronic properties of the as-formed cluster in a more intuitive manner, we calculated the detailed energy levels of the Kohn–Sham molecular orbitals for both Au4Pt2(SR)6 and Au4Pt2(SR)8 (R=H) using the Amsterdam Density Functional (ADF) program. The removal of two ligands not only reduces the electronic gap of the as-formed cluster but also creates two singly occupied orbitals derived from the 5d and 6s orbitals of Pt and Au, respectively (Fig. 5a). These two energetic electrons on the lower-valent metal atoms may facilitate electron transfer from the cluster to the N2 π* orbitals, resulting in N2 activation. We found that N2 adsorption cannot proceed at any site of Au4Pt2(SR)6 once it is anchored on defect-free graphene (Supplementary Fig. 10). All these results suggest that both Au4Pt2(SR)6 and the interfacial defect in graphene play crucial roles not only in catalyst fabrication but also in catalytic N2 activation, in agreement with previous studies of graphene-supported metal catalysts45,50,51.
To probe the origin of catalytic activity of Au4Pt2/G (Note Au4Pt2/G is used to represent the actual structure of Au4Pt2(SR)6/G for the sake of consistency), we then performed periodic DFT calculations (with D2 correction) to determine the atomic structure of the active site in this system. N2 adsorption is known to be the key step in ENRR. Amongst the various N2 adsorption configurations tested, the most stable one identified is shown in Supplementary Fig. 11. The adsorption energy of N2 for this configuration is estimated to be −0.38 eV, wherein two N atoms are bonded to carbon atoms of graphene and adjacent Pt/Au atoms, respectively. The calculations also reveal that the Fermi energy (EF) of graphene-supported Au4Pt2/G is rebalanced toward the LUMO of N2 (see Fig. 5b), resulting in a small energy separation (~0.6 eV) between EF and N2 LUMO, consistent with the previous molecular DFT results. This facilitates electron transfer from the active site to the N2 LUMO (Fig. 5b) and the activation of N2. Bader charge analysis (inset of Supplementary Fig. 11 shows the corresponding charge redistribution plot) has shown that the N2 molecule gains a total of 1.44 electrons from the active site of Au4Pt2/G. In addition, the projected density of states (PDOS) shown in Fig. 5c and Supplementary Fig. 12 reveal the detailed electronic interaction between the atoms of the active site and N2. The LUMO of gas-phase N2 consisting of degenerate px and pz orbitals is split into two nondegenerate orbitals upon its adsorption at the active site due to the low local symmetry and different degrees of electronic interaction between the px and pz orbitals of N2 and the Au, Pt, and C orbitals. The HOMO of N2 (py) is mixed with the d orbitals of Pt/Au, as evidenced from a significantly broadened py PDOS upon its adsorption. 
The strong orbital interaction between both N2 and metal atoms results in a significant electron transfer from the d orbitals of the metal species to the π* antibonding orbitals of N2 in combination with an interesting back-donation mechanism involving a partial electron transfer from the HOMO of N2 (σ bonding) back to the metal centers (Fig. 5c and Supplementary Fig. 12). This is analogous to the N2 activation mechanism reported in conventional transition metal catalysts52.
In addition to N2 activation, we also performed ground-state calculations with DFT + D2 for possible configurations to estimate the energy profiles of the plausible reaction pathways (Fig. 5d and Supplementary Fig. 13). It is observed that the formation of activated N2* at the catalytic center is energetically favored by 0.38 eV. We then calculated the energy profiles of the two possible reaction pathways for the subsequent protonation of the activated N2* species. In pathway I (Fig. 5d), the protonation of the N atom bonded to graphene occurs first (refer to Supplementary Fig. 13 for details), followed by the protonation of the second N atom bonded to the cluster, leading to the formation of the first and second NH3 molecules, respectively. The results also reveal that the rate-limiting step of pathway I is the desorption of the second NH3 in the final reaction step. The barrier of this rate-limiting step is estimated to be 0.91 eV, which can be readily surmounted upon the application of an electrochemical potential38. The energy profile of pathway II, in which the protonation of the N atom bonded to the metal cluster occurs first, is shown in Supplementary Fig. 14. For this pathway, there are two rate-limiting steps: (1) desorption of the first NH3, with a barrier of 0.74 eV; and (2) formation of the second NH3, with a high barrier of 2.42 eV. Therefore, our calculations show that pathway I is energetically more favorable.
The mechanistic insights into N2 activation obtained herein motivated us to use Pd in place of Pt as the dopant for the synthesis of a new bimetallic Au4Pd2(SR)8 cluster with the same structural framework (Fig. 1e–h and Supplementary Table 5). This allows us to precisely control the doping of SCCs and fine-tune their catalytic performance. We were able to synthesize the Pd-doped bimetallic Au4Pd2(SR)8 clusters on the gram scale using the same synthetic protocol as described earlier (Fig. 1e–h and Supplementary Fig. 15). It was observed that the Au4Pd2/G catalyst yields an NH3 production rate of 13.1 µg mg−1 h−1 at −0.1 V, lower than that of the Au4Pt2/G catalyst at the same potential, indicating that Au4Pd2/G has a lower ENRR activity than Au4Pt2/G at this potential. However, we obtained a maximum NH3 yield rate of 27.1 µg mg−1 h−1 with an FE of ~12% at a more negative potential of −0.2 V for Au4Pd2/G, actually outperforming Au4Pt2/G (Fig. 3b, c). This suggests that HER is more effectively suppressed in this system than in Au4Pt2/G, consistent with the HER performance of the two bimetallic SCCs tested (Supplementary Fig. 16). We note that pure Au and Pt clusters (e.g., Au6 or Pt6) with the same octahedral framework as that of Au4Pt2 have not been obtained. To further demonstrate the synergistic effect of the bimetallic clusters, the ENRR performance of pure Au25 clusters and Pt nanoclusters (with an average size of 1 nm) was also evaluated. As shown in Supplementary Figs. 17 and 18, both pure Au and Pt clusters show poorer ENRR performance, as evidenced by lower NH3 yields and lower faradaic efficiencies compared with the bimetallic Au4Pt2/G and Au4Pd2/G catalysts. These results further confirm that the heteroatom dopants (Pt or Pd) in the bimetallic clusters play an important role in the enhanced ENRR catalytic performance.
The catalytic cycling stability is another critical parameter of ENRR performance for practical applications. As shown in Supplementary Fig. 19, both the ammonia yield and the FE remain nearly constant during multiple cycling tests of both SCCs. Large-area TEM images of both the Au4Pt2/G and Au4Pd2/G catalysts show little morphological variation before and after the reaction. In addition, STEM images of both catalysts reveal that the Au4Pt2 and Au4Pd2 clusters anchored on graphene still contain six atoms after the ENRR reaction (Supplementary Fig. 20). Moreover, XAFS measurements of the Au and Pt L3-edges and the Pd K-edge for the Au4Pt2/G and Au4Pd2/G catalysts show negligible spectral changes before and after the reactions, further proving the high cycling stability of both catalysts (Supplementary Fig. 21).
## Discussion
In summary, we have devised a synthetic approach for ultrafine bimetallic Au4Pt2(SR)8 clusters. Sequential anchoring of these bimetallic clusters on defective graphene allows for the synthesis of an atomically precise SCC for efficient electrochemical N2 reduction. The nanoscale confined interface between the graphene substrate and the Au4Pt2(SR)6 cluster acts as the active site for N2 fixation. The heteroatom dopant is found to play an indispensable role in the back-donation of electrons from the supported bimetallic cluster to the N2 antibonding π* orbitals, contributing to N2 activation. We also demonstrate that the catalytic properties of the ultrafine bimetallic clusters can be further tuned via precise replacement of the heteroatom dopant. Our findings open up a new avenue for the design of atomically precise SCCs with dopant-controlled reactivity for a wide range of industrially important catalytic processes.
## Methods
### Materials
All chemicals are commercially available and were used as received. Ultrapure water (resistivity 18.2 MΩ cm) produced by a Milli-Q NANO water purification system was used throughout. Tetrachloroauric(III) acid (HAuCl4·3H2O), sodium hexachloroplatinate(IV) hexahydrate (Na2PtCl6·6H2O), palladium(II) chloride (PdCl2), tetraoctylammonium bromide (TOABr), and 2-phenylethanethiol (PhC2H4SH) were purchased from Sigma-Aldrich. Reduced graphene oxide (G) was purchased from Nanjing XFNANO Materials Tech Co., Ltd. Tetrahydrofuran (THF), methanol, dichloromethane, petroleum ether, and toluene were purchased from Sinopharm Chemical Reagent Co., Ltd.
### Synthesis of Au4Pt2(SR)8 clusters
Typically, 305 mg of HAuCl4·3H2O and 200 mg of Na2PtCl6·6H2O were dissolved in THF (20 mL). Subsequently, tetraoctylammonium bromide (640 mg) was added to the solution, followed by stirring for 5 min. After complete dissolution of all the solid precursors, 830 µL of 2-phenylethanethiol was added to the flask, followed by extended stirring for 2 h. The yield of Au4Pt2(SR)8 is determined to be ~28%.
### Synthesis of Au4Pd2(SR)8 clusters
We adopted the same protocol as described above for the synthesis of Au4Pd2(SR)8 clusters, using 68 mg of PdCl2 as the precursor. The yield of Au4Pd2(SR)8 is estimated to be ~78%.
### Single-crystal X-ray diffraction
The data were collected at 263 K (for Au4Pt2(SR)8) and 100 K (for Au4Pd2(SR)8) using a four-circle Kappa-geometry goniometer (Bruker AXS D8 Venture) equipped with a Photon 100 CMOS active-pixel sensor detector. Monochromatized Mo Kα radiation (λ = 0.71073 Å) was used for the measurements. Data were corrected for absorption effects using the multi-scan method (SADABS). The atomic structure of the single crystal was solved by direct methods and further refined by full-matrix least squares using the SHELXTL 6.1 software package.
### Sample characterizations
High-resolution APCI-MS was performed on a micrOTOF-QII mass spectrometer (Bruker) in positive mode. Compass IsotopePattern was used to simulate the isotopic patterns. The UV/vis/NIR absorption spectra were measured using a UV-3600 spectrophotometer (Shimadzu) at room temperature. TGA (~3 mg of sample) was conducted in a N2 atmosphere (flow rate ~50 mL/min) at a heating rate of 10 °C/min using a TG/DTA 6300 analyzer. To determine the loading of clusters on graphene, the as-obtained Au4Pt2/G or Au4Pd2/G (G represents graphene) samples were dissolved in aqua regia and analyzed by inductively coupled plasma mass spectrometry (ICP-MS, Thermo Scientific Xseries II). STEM-ADF imaging was carried out in an aberration-corrected JEOL ARM-200F system equipped with a cold field-emission gun and an ASCOR probe corrector at 60 kV. The images were collected with a half-angle range from ~85 to 280 mrad, and the convergence semi-angle was set at ~30 mrad.
### XAFS measurements and XANES simulations
The XANES and extended X-ray absorption fine structure (EXAFS) measurements at the Pt L3 and Au L3 edges were carried out at the XAFCA beamline of the Singapore Synchrotron Light Source (SSLS). The storage ring of the SSLS was operated at 700 MeV with a beam current of 250 mA. A Si(111) double-crystal monochromator was used to monochromatize the X-ray beam. Pt and Au foils were used for energy calibration, and all samples were measured in transmission mode at room temperature. The XAFS data were analyzed using the Demeter software package53. The XANES spectra of the Pt and Au L3 edges of all the structures predicted by the DFT calculations were simulated using the finite-difference method implemented in the FDMNES program. Spin–orbit interactions and relativistic effects are included in our calculations. The XAFS measurements at the Au L3 and Pd K edges for Au4Pd2/G before and after ENRR were performed in transmission mode at beamline 20-BM-B of the Advanced Photon Source at Argonne National Laboratory.
### Setup for electrochemical measurements
The electrochemical reduction of N2 was carried out using a CHI760 electrochemical workstation with a three-electrode system. A two-compartment glass H-cell was used, with the compartments separated by a Nafion 117 membrane. A saturated calomel electrode (SCE) and a Pt foil were used as the reference and counter electrodes, respectively.
### Synthesis of Au4Pt2/G and Au4Pd2/G catalysts
Twelve milligrams of Au4Pt2(SR)8 (or Au4Pd2(SR)8) single crystals was dissolved in 50 mL of toluene and stirred for 30 min. Subsequently, 80 mg of defective graphene was added rapidly to the solution under intense stirring. After 30 min, 500 mL of ethanol was added rapidly to the solution. The black precipitate was collected by filtration and dried at 150 °C under vacuum.
### Preparation of cathode for ENRR
Typically, 1 mg of catalyst (the loading of metal clusters on graphene is 8.5 wt% for Au4Pt2/G and 10.5 wt% for Au4Pd2/G, respectively) and 5 μL of Nafion solution (5 wt%) were dispersed in absolute ethanol (100 μL), followed by sonication for 30 min to form a homogeneous ink. Subsequently, the ink was loaded onto a carbon paper with an area of 2 × 2 cm2. The as-prepared electrode was dried under ambient conditions.
### Calibration of the reference electrode
We used an SCE as the reference electrode in all measurements. The reference electrode was calibrated with respect to a reversible hydrogen electrode (RHE). The calibration was performed in high-purity hydrogen-saturated 0.1 M HCl electrolyte using Pt foils as both the working and counter electrodes. Cyclic voltammetry measurements were performed at a scan rate of 1 mV s−1. The average of the two potentials at which the H2 oxidation/evolution curves cross I = 0 was taken as the thermodynamic potential of the hydrogen electrode reactions. Therefore, the calibration of the reference electrode in 0.1 M HCl is given by E (RHE) = E (SCE) + 0.32 V (Supplementary Fig. 22).
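The calibrated offset above makes the SCE-to-RHE conversion a one-line calculation. The sketch below is illustrative only; the function name and default argument are our own, and the offset holds only for the 0.1 M HCl electrolyte calibrated here.

```python
# Convert an applied potential measured vs. the SCE to the RHE scale
# using the experimentally calibrated offset for 0.1 M HCl:
# E(RHE) = E(SCE) + 0.32 V. Names here are illustrative, not from the paper.
def sce_to_rhe(e_sce, offset=0.32):
    """Return the potential on the RHE scale, in volts."""
    return e_sce + offset

# For example, -0.42 V vs. SCE corresponds to about -0.1 V vs. RHE.
e_rhe = sce_to_rhe(-0.42)
```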
### ENRR measurements
Prior to the ENRR tests, the Nafion 117 membrane was immersed in 5% H2O2 aqueous solution at 80 °C for 1 h. Subsequently, the membrane was soaked in ultrapure water at 80 °C for another 1 h. ENRR was performed in a three-electrode configuration consisting of the working electrode (either Au4M2(SR)8 or Au4M2/G (M = Pt, Pd)), a Pt foil counter electrode, and an SCE reference electrode. A two-compartment H-shaped cell separated by the Nafion 117 membrane was used for ENRR (Fig. 3a in the main text). All glassware was first boiled in 0.1 M NaOH for 2 h and washed with ultrapure water; it was then boiled in 0.1 M HCl for another 2 h, rinsed at least three times in ultrapure water, and vacuum dried for 6 h at 110 °C. In this work, all potentials were converted to the RHE scale. The potentiostatic tests for ENRR were conducted in N2-saturated 0.1 M HCl solution (30 mL). N2 gas (99.999% purity) was continuously fed to the cathodic compartment during the entire ENRR. The performance of the catalysts was evaluated under controlled-potential electrolysis for 1 h at room temperature. Prior to each electrolysis, the electrolyte was presaturated with N2 by gas bubbling for 30 min. During each electrolysis, the electrolyte was continuously bubbled with N2 at a flow rate of 10 sccm. In addition, control experiments, including potentiostatic tests using (1) 0.1 M HCl solution saturated with argon and (2) bare graphene without clusters as the catalyst, were performed under the same conditions. The gases used in the experiments, including pure 14N2, the 14N2/15N2 mixtures, and Ar, were purified before being introduced into the electrochemical cell: they were passed sequentially through 1 M NaOH solution, ultrapure water, concentrated H2SO4, and ultrapure water to mitigate the contribution of extrinsic contaminants.
### Determination of ammonia
The concentration of the produced ammonia was determined using a modified indophenol blue method54. First, 2 mL of electrolyte taken from the electrochemical reaction vessel was added to 2 mL of 1 M NaOH solution containing salicylic acid and sodium citrate. Second, 1 mL of 0.05 M NaClO and 0.2 mL of 1 wt% sodium nitroferricyanide (C5FeN6Na2O) were added to the above solution, which was kept at room temperature for 2 h before the subsequent UV–Vis spectroscopy measurements. We measured the UV–Vis absorbance (at the maximum wavelength of 656 nm) of a series of standard ammonium chloride solutions to prepare the calibration curves for determining the ammonia concentration of unknown solutions. The fitting curve reveals a linear relationship between the absorbance and NH3 concentration ($${y} = 0.429{x} - 0.015$$, $$R^2 = 0.999$$; see Supplementary Fig. 23).
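Reading off an unknown concentration amounts to inverting the reported calibration line. A minimal sketch, assuming the fit reported above; the function and constant names are our own:

```python
# Invert the reported calibration line, absorbance = 0.429*c - 0.015,
# where c is the NH3 concentration in ug/mL. Names are illustrative.
SLOPE = 0.429
INTERCEPT = -0.015

def absorbance_to_concentration(absorbance):
    """Return the NH3 concentration (ug/mL) from the absorbance at 656 nm."""
    return (absorbance - INTERCEPT) / SLOPE

# A 1 ug/mL standard would read an absorbance of 0.429*1.0 - 0.015 = 0.414.
c = absorbance_to_concentration(0.414)
```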
### The calculation of FE
The FE is calculated as follows.
$${\rm{FE}} = 3F \times n_{\rm{NH}_{3}}/Q,$$
(1)
where F is the Faraday constant (96,485 C mol−1), $$n_{\rm{NH}_{3}}$$ is the moles of NH3 produced, and Q is the total charge passed through the electrode.
The moles of ammonia ($$n_{\rm{NH}_{3}}$$) were calculated using the following equation:
$$n_{\rm{NH}_{3}} = n_{\rm{NH}_{4}{\rm{Cl}}} = (C_{\rm{NH}_4{\rm{Cl}}} \times V) \times 10^{ - 6}/M_{\rm{NH}_4{\rm{Cl}}}$$
(2)
Note: $$C_{\rm{NH}_4{\rm{Cl}}}$$ (µg mL−1) refers to the measured NH4Cl concentration, V (mL) is the volume of the electrolyte (30 mL), $$M_{\rm{NH}_4{\rm{Cl}}}$$ is the molecular weight of NH4Cl.
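Equations (1) and (2) combine into a short calculation. The sketch below follows the two equations directly; the numerical inputs (concentration, volume, charge) are illustrative placeholders, not measured values from this work:

```python
# Faradaic-efficiency calculation following Eqs. (1)-(2).
F = 96485.0        # Faraday constant, C/mol
M_NH4CL = 53.49    # molecular weight of NH4Cl, g/mol

def moles_nh3(c_ug_per_ml, v_ml):
    """Eq. (2): n_NH3 = n_NH4Cl = (C_NH4Cl * V) * 1e-6 / M_NH4Cl."""
    return c_ug_per_ml * v_ml * 1e-6 / M_NH4CL

def faradaic_efficiency(c_ug_per_ml, v_ml, q_coulombs):
    """Eq. (1): FE = 3 * F * n_NH3 / Q (three electrons per NH3)."""
    return 3.0 * F * moles_nh3(c_ug_per_ml, v_ml) / q_coulombs

# Illustrative inputs: 0.5 ug/mL NH4Cl in 30 mL electrolyte, 0.8 C passed.
fe = faradaic_efficiency(c_ug_per_ml=0.5, v_ml=30.0, q_coulombs=0.8)  # ~0.10, i.e. ~10% FE
```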
### 15N2 isotope labeling experiment
A mixture of 14N2 and 15N2 (with mole ratios of 9:1 and 1:1, respectively) was used as the feed gas for the isotopic labeling experiment. The detailed procedure is largely similar to that of the 14N2 electrochemical experiment, apart from minor differences. Before introducing the 15N2 labeling gas, Ar gas was flowed through the whole setup for 30 min to remove 14N2 and other possible gas impurities. After purging with sufficient Ar, the mixed gas (14N2 and 15N2) with a well-defined 14N2/15N2 ratio was introduced into the ENRR system for 20 min at a flow rate of 10 sccm. To generate an adequate amount of product for the subsequent NMR analysis, we ran the reaction for 10 h. The electrolyte after ENRR was further condensed prior to the 1H-NMR spectroscopy measurement (500 MHz, DMSO-d6).
### Determination of hydrazine
The concentration of the hydrazine present in the electrolyte was estimated using a modified method developed by Watt and Chrisp55 (see Supplementary Fig. 24).
### Molecular and periodic DFT calculations
In the periodic DFT calculations, the geometries of Au4Pt2(SR)n and Au4Pd2(SR)n (n = 8, 6) metal clusters are adopted from experimental results and then fully optimized using DFT calculations with a 20 × 20 × 20 Å3 supercell. The graphene vacancy structure is modeled by removing two carbon atoms in a supercell of 10 × 10 pristine graphene cells including a 25 Å vacuum layer, so that the supercell is large enough to contain the metal clusters. In all calculations except for the energy levels presented in Fig. 5a, the Vienna ab-initio Simulation Package is utilized with the spin-polarized Kohn–Sham formalism56,57. The generalized gradient approximation (GGA) in the Perdew–Burke–Ernzerhof (PBE) format, with scalar relativistic (SR) effects of Au considered58, the projector-augmented wave method59 and a plane wave basis with a cut-off energy of 400 eV are employed in all the calculations. Van der Waals forces (through DFT + D2) are also considered. The convergence criteria for electronic steps and structural relaxations were set to 10−5 eV and 0.01 eV/Å, respectively.
In the molecular DFT calculations of the energy levels of the Au4Pt2(SR)n (n = 8, 6) metal clusters, as presented in Fig. 5a, relativistic DFT quantum chemical methods are adopted as implemented in the ADF (2016.101) program60,61,62. The GGA with the PBE exchange-correlation functional63 was used, together with the uncontracted TZ2P Slater basis sets for all atoms64. Frozen core approximations were applied to the inner shells [1s2−2p6] for S and [1s2−5d10] for Au and Pt atoms. The SR effects were considered by the zeroth-order regular approximation to account for the mass–velocity and Darwin effects65. In the calculations, a simplified SR (R = H, CH3) group was used as a substitute for the SCH2CH2C6H5 ligand to form the model clusters and to save computation time. As the results are qualitatively similar, we only present the results with R = H here. As the experimental structure of the Au4Pt2(SCH2CH2C6H5)8 cluster shows a skeleton with point-group symmetry close to D4, we used D4 symmetry to optimize the simplified model to better understand the electronic structure of the cluster. The stability and reactivity of Au4Pt2(SR)6 are simply evaluated by removing two adjacent SR ligands coordinated with the same Pt atom in the unrelaxed cluster.
| 2023-03-28 06:53:31 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6349689960479736, "perplexity": 3955.0416315879793}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296948765.13/warc/CC-MAIN-20230328042424-20230328072424-00774.warc.gz"} |
https://math.stackexchange.com/questions/3118996/counter-example-for-darboux-sums-finer-partition-with-greater-difference | # Counter-Example for Darboux Sums: “Finer” Partition with Greater Difference.
Let $$P = \{p_i\}_{i= 1, n}$$, $$P' = \{p'_j\}_{j = 1, m}$$ be partitions of an interval with $$\max_j |p'_j| \le \min_i |p_i|$$, i.e. every sub-interval of $$P'$$ is no longer than every sub-interval of $$P$$.
Let $$f$$ be a bounded function on the interval.
Let $$U_{P, f}, L_{P, f}$$ be the upper and lower Darboux sums of $$f$$ on $$P$$ (and correspondingly on $$P'$$).
There is a proof here https://math.stackexchange.com/q/353810 in the lemma that $$U_{P', f} - L_{P', f} \le 3(U_{P, f} - L_{P, f} )$$.
I assume from this that it is not generally the case that $$U_{P', f} - L_{P', f} \le U_{P, f} - L_{P, f}$$ and I'm looking for a counter-example to show this is so. I do know that this tighter inequality holds when $$P'$$ is a refinement of $$P$$ but even so I cannot construct a counter-example.
After further thought .....
Let $$P = \{0, 1/2 , 1\}$$ and $$P' = \{0, 1/3, 2/3, 1\}$$ be partitions of $$[0, 1]$$. Define $$f$$ on $$[0, 1]$$ by $$f(7/12) = 1; f(1) = 1; f(x) = 0$$ otherwise.
Then $$U_{P', f} - L_{P', f} = 0 + 1/3 + 1/3 = 2/3 > 1/2 = U_{P, f} - L_{P, f}$$.
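As a sanity check, both difference sums in this counter-example can be computed exactly with rational arithmetic. The helper `sup_f` below encodes the observation that the supremum of $$f$$ on a sub-interval is 1 exactly when the interval contains $$7/12$$ or $$1$$, while the infimum is 0 on every sub-interval of positive length (a quick sketch, not part of the original post):

```python
from fractions import Fraction as Fr

SPIKES = [Fr(7, 12), Fr(1)]   # the two points where f = 1; f = 0 elsewhere

def sup_f(a, b):
    """Supremum of f on [a, b]: 1 iff the interval contains a spike."""
    return 1 if any(a <= p <= b for p in SPIKES) else 0

def gap(partition):
    """U_{P,f} - L_{P,f}; the infimum of f is 0 on every subinterval
    of positive length, so L_{P,f} = 0 and only the upper sum remains."""
    return sum((b - a) * sup_f(a, b)
               for a, b in zip(partition, partition[1:]))

P  = [Fr(0), Fr(1, 2), Fr(1)]
Pp = [Fr(0), Fr(1, 3), Fr(2, 3), Fr(1)]
print(gap(P), gap(Pp))   # 1/2 and 2/3
```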
• If you have answered your own question, shouldn't you post the answer as an answer, instead of appending it to the question? – bof Feb 20 at 9:43
• @bof I have once or twice in the past, but now wonder if it doesn't just look like going for the points ? – Tom Collinge Feb 20 at 10:57 | 2019-04-20 10:20:41 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 22, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9233593344688416, "perplexity": 224.4201358792175}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578529606.64/warc/CC-MAIN-20190420100901-20190420122901-00193.warc.gz"} |
https://www.molympiad.net/2022/04/usajmo-2021.html | # [Solutions] United States of America Junior Mathematical Olympiad 2021
1. Let $\mathbb{N}$ denote the set of positive integers. Find all functions $f : \mathbb{N} \rightarrow \mathbb{N}$ such that for positive integers $a$ and $b$, $$f(a^2 + b^2) = f(a)f(b) \text{ and } f(a^2) = f(a)^2.$$
2. Rectangles $BCC_1B_2,$ $CAA_1C_2,$ and $ABB_1A_2$ are erected outside an acute triangle $ABC.$ Suppose that $$\angle BC_1C+\angle CA_1A+\angle AB_1B=180^{\circ}.$$ Prove that lines $B_1C_2,$ $C_1A_2,$ and $A_1B_2$ are concurrent.
3. An equilateral triangle $\Delta$ of side length $L>0$ is given. Suppose that $n$ equilateral triangles with side length 1 and with non-overlapping interiors are drawn inside $\Delta$, such that each unit equilateral triangle has sides parallel to $\Delta$, but with opposite orientation. (An example with $n=2$ is drawn below.) Prove that $$n \leq \frac{2}{3} L^{2}.$$
4. Carina has three pins, labeled $A, B$, and $C$, respectively, located at the origin of the coordinate plane. In a move, Carina may move a pin to an adjacent lattice point at distance $1$ away. What is the least number of moves that Carina can make in order for triangle $ABC$ to have area $2021$? (A lattice point is a point $(x, y)$ in the coordinate plane where $x$ and $y$ are both integers, not necessarily positive.)
5. A finite set $S$ of positive integers has the property that, for each $s \in S,$ and each positive integer divisor $d$ of $s$, there exists a unique element $t \in S$ satisfying $\text{gcd}(s, t) = d$. (The elements $s$ and $t$ could be equal.) Given this information, find all possible values for the number of elements of $S$.
6. Let $n \geq 4$ be an integer. Find all positive real solutions to the following system of $2n$ equations $$\begin{cases}a_{1} &=\frac{1}{a_{2 n}}+\frac{1}{a_{2}}, & a_{2}&=a_{1}+a_{3}, \\ a_{3}&=\frac{1}{a_{2}}+\frac{1}{a_{4}}, & a_{4}&=a_{3}+a_{5}, \\ a_{5}&=\frac{1}{a_{4}}+\frac{1}{a_{6}}, & a_{6}&=a_{5}+a_{7} \\ &\vdots & &\vdots \\ a_{2 n-1}&=\frac{1}{a_{2 n-2}}+\frac{1}{a_{2 n}}, & a_{2 n}&=a_{2 n-1}+a_{1}\end{cases}$$
| 2022-05-22 08:57:24 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 1.0000100135803223, "perplexity": 12286.169538318909}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662545090.44/warc/CC-MAIN-20220522063657-20220522093657-00671.warc.gz"}
https://gmatclub.com/forum/fifty-percent-of-all-the-students-attending-a-school-on-a-certain-day-310368.html |
# Fifty percent of all the students attending a school on a certain day
Math Expert
Joined: 02 Sep 2009
Posts: 60467
11 Nov 2019, 04:23
Fifty percent of all the students attending a school on a certain day arrived by 7:00 AM. How many students arrived by 7:00 AM on that day?
(1) Fifteen students arrived between 7:00 AM and 8:00 AM, and 4/5 of that day’s total attending students arrived by 8:00 AM.
(2) Ten students arrived after 8:00 AM that day.
Math Expert
Joined: 02 Aug 2009
Posts: 8342
11 Nov 2019, 09:07
Fifty percent of all the students attending a school on a certain day arrived by 7:00 AM. How many students arrived by 7:00 AM on that day?
(1) Fifteen students arrived between 7:00 AM and 8:00 AM, and 4/5 of that day’s total attending students arrived by 8:00 AM.
(1) tells us that $$\frac{4}{5}$$, i.e. $$\frac{4}{5}\times 100 = 80\%$$, of that day's students arrived by 8:00 AM, and fifteen students arrived between 7:00 AM and 8:00 AM.
Also, fifty percent of all the students attending arrived by 7:00 AM, so those fifteen students make up $$80\% - 50\% = 30\%$$ of the total. Hence the total is $$\frac{15}{30}\times 100 = 50$$ students.
Thus 50% of 50, i.e. 25 students, arrived by 7:00 AM. Sufficient.
(2) Ten students arrived after 8:00 AM that day.
insuff
A
Manager
Status: Student
Joined: 14 Jul 2019
Posts: 149
Location: United States
Concentration: Accounting, Finance
GPA: 3.9
WE: Education (Accounting)
12 Nov 2019, 22:06
Fifty percent of all the students attending a school on a certain day arrived by 7:00 AM. How many students arrived by 7:00 AM on that day?
(1) Fifteen students arrived between 7:00 AM and 8:00 AM, and 4/5 of that day’s total attending students arrived by 8:00 AM.
(2) Ten students arrived after 8:00 AM that day
From question stem, 50% students arrive by 7:00 AM. We have to find out the number of students who arrive by 7:00 AM.
(1) (4/5 - 1/2) = 3/10 of the total students come between 7:00 AM and 8:00 AM. From this we can find the total number of students and the number of students who arrive by 7:00 AM. SUFFICIENT.
(2) We don't know what percentage of total students these 10 students is, so NOT SUFFICIENT.
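The arithmetic in statement (1) can be verified exactly (a quick sketch re-deriving the numbers above, not part of the original thread):

```python
from fractions import Fraction as Fr

frac_by_8, frac_by_7 = Fr(4, 5), Fr(1, 2)   # fractions arrived by 8:00 / 7:00
between_7_and_8 = 15                        # students arriving in between

# 15 students correspond to 4/5 - 1/2 = 3/10 of the total.
total = between_7_and_8 / (frac_by_8 - frac_by_7)   # 50 students
by_7 = frac_by_7 * total                            # 25 students by 7:00 AM
print(total, by_7)
```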
| 2020-01-18 15:36:20 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6142003536224365, "perplexity": 3900.0196583354427}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250592636.25/warc/CC-MAIN-20200118135205-20200118163205-00457.warc.gz"}
https://socratic.org/questions/how-do-you-factor-6x-3-48-1 | # How do you factor 6x^3+48?
Dec 18, 2015
Separate out the scalar factor $6$ then use the sum of cubes identity to find:
$6 {x}^{3} + 48 = 6 \left(x + 2\right) \left({x}^{2} - 2 x + 4\right)$
#### Explanation:
First separate out the common scalar factor $6$ to find:
$6 {x}^{3} + 48 = 6 \left({x}^{3} + 8\right)$
Then notice that both ${x}^{3}$ and $8 = {2}^{3}$ are perfect cubes, so work well with the sum of cubes identity:
${A}^{3} + {B}^{3} = \left(A + B\right) \left({A}^{2} - A B + {B}^{2}\right)$
With $A = x$ and $B = 2$ we find:
${x}^{3} + 8 = {x}^{3} + {2}^{3} = \left(x + 2\right) \left({x}^{2} - 2 x + 4\right)$
Putting it together we get:
$6 {x}^{3} + 48 = 6 \left(x + 2\right) \left({x}^{2} - 2 x + 4\right)$
This has no simpler factors with Real coefficients, as you can check by looking at the discriminant $\Delta$ of $\left({x}^{2} - 2 x + 4\right)$
$\Delta = {b}^{2} - 4 a c = {\left(- 2\right)}^{2} - \left(4 \times 1 \times 4\right) = 4 - 16 = - 12$
Since $\Delta < 0$ this quadratic has no Real zeros and no linear factors with Real coefficients.
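The factorization can be double-checked by multiplying the factors back out; here is a small sketch using coefficient lists (lowest degree first), not part of the original answer:

```python
def poly_mul(p, q):
    """Multiply polynomials given as coefficient lists (lowest degree first)."""
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

# (x + 2) * (x**2 - 2*x + 4) should give x**3 + 8 ...
product = poly_mul([2, 1], [4, -2, 1])
# ... and scaling by 6 recovers 6*x**3 + 48.
result = [6 * c for c in product]
print(product, result)   # [8, 0, 0, 1] [48, 0, 0, 6]
```

The discriminant check is plain arithmetic: $(-2)^2 - 4 \times 1 \times 4 = -12 < 0$.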
Dec 18, 2015
First factor out the 6 ...
#### Explanation:
$6 \left({x}^{3} + 8\right)$
Now use the identity for the sum of cubes ...
${a}^{3} + {b}^{3} = \left(a + b\right) \left({a}^{2} - a b + {b}^{2}\right)$
$6 \left(x + 2\right) \left({x}^{2} - 2 x + 4\right)$
hope that helped | 2020-07-11 04:03:56 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 18, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6465376615524292, "perplexity": 494.48341771123717}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655921988.66/warc/CC-MAIN-20200711032932-20200711062932-00087.warc.gz"} |
https://covalent.readthedocs.io/en/latest/how_to/orchestration/construct_electron.html | # How to construct an electron#
An electron is a single subtask in the workflow. In order to construct an electron, we define a function to perform some task and attach the electron decorator as shown in the two examples below.
[1]:
import covalent as ct
Here’s how one can construct an electron for the identity operation.
[2]:
@ct.electron
def identity(x):
return x
As another example, consider how an electron corresponding to the quadrature operation is created.
[3]:
from math import sqrt
@ct.electron | 2022-12-07 23:00:19 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8188961148262024, "perplexity": 1689.8297385925339}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711221.94/warc/CC-MAIN-20221207221727-20221208011727-00039.warc.gz"} |
https://analytixon.com/2022/07/04/if-you-did-not-already-know-1760/ | Maximal alpha-Leakage
A tunable measure for information leakage called *maximal $\alpha$-leakage* is introduced. This measure quantifies the maximal gain of an adversary in refining a tilted version of its prior belief of any (potentially random) function of a dataset conditioning on a disclosed dataset. The choice of $\alpha$ determines the specific adversarial action ranging from refining a belief for $\alpha =1$ to guessing the best posterior for $\alpha = \infty$, and for these extremal values this measure simplifies to mutual information (MI) and maximal leakage (MaxL), respectively. For all other $\alpha$ this measure is shown to be the Arimoto channel capacity. Several properties of this measure are proven including: (i) quasi-convexity in the mapping between the original and disclosed datasets; (ii) data processing inequalities; and (iii) a composition property. …
Correlation Congruence for Knowledge Distillation (CCKD)
Most teacher-student frameworks based on knowledge distillation (KD) depend on a strong congruent constraint on instance level. However, they usually ignore the correlation between multiple instances, which is also valuable for knowledge transfer. In this work, we propose a new framework named correlation congruence for knowledge distillation (CCKD), which transfers not only the instance-level information, but also the correlation between instances. Furthermore, a generalized kernel method based on Taylor series expansion is proposed to better capture the correlation between instances. Empirical experiments and ablation studies on image classification tasks (including CIFAR-100, ImageNet-1K) and metric learning tasks (including ReID and Face Recognition) show that the proposed CCKD substantially outperforms the original KD and achieves state-of-the-art accuracy compared with other SOTA KD-based methods. The CCKD can be easily deployed in the majority of the teacher-student framework such as KD and hint-based learning methods. …
ConvCSNet
Compressive sensing (CS), aiming to reconstruct an image/signal from a small set of random measurements has attracted considerable attentions in recent years. Due to the high dimensionality of images, previous CS methods mainly work on image blocks to avoid the huge requirements of memory and computation, i.e., image blocks are measured with Gaussian random matrices, and the whole images are recovered from the reconstructed image blocks. Though efficient, such methods suffer from serious blocking artifacts. In this paper, we propose a convolutional CS framework that senses the whole image using a set of convolutional filters. Instead of reconstructing individual blocks, the whole image is reconstructed from the linear convolutional measurements. Specifically, the convolutional CS is implemented based on a convolutional neural network (CNN), which performs both the convolutional CS and nonlinear reconstruction. Through end-to-end training, the sensing filters and the reconstruction network can be jointly optimized. To facilitate the design of the CS reconstruction network, a novel two-branch CNN inspired from a sparsity-based CS reconstruction model is developed. Experimental results show that the proposed method substantially outperforms previous state-of-the-art CS methods in term of both PSNR and visual quality. …
Hierarchical Attention-Based Temporal Convolutional Network (HA-TCN)
Myotonia, which refers to delayed muscle relaxation after contraction, is the main symptom of myotonic dystrophy patients. We propose a hierarchical attention-based temporal convolutional network (HA-TCN) for myotonic dystrophy diagnosis from handgrip time series data, and introduce mechanisms that enable model explainability. We compare the performance of the HA-TCN model against that of benchmark TCN models, LSTM models with and without attention mechanisms, and SVM approaches with handcrafted features. In terms of classification accuracy and F1 score, we found all deep learning models have similar levels of performance, and they all outperform SVM. Further, the HA-TCN model outperforms its TCN counterpart with regard to computational efficiency regardless of network depth, and in terms of performance particularly when the number of hidden layers is small. Lastly, HA-TCN models can consistently identify relevant time series segments in the relaxation phase of the handgrip time series, and exhibit increased robustness to noise when compared to attention-based LSTM models.
… | 2022-11-27 06:12:23 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.485870897769928, "perplexity": 1468.637259562557}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710192.90/warc/CC-MAIN-20221127041342-20221127071342-00575.warc.gz"} |
http://math.sns.it/seminar/625/ | # The Gauss-Green theorem in stratified groups
## Giovanni Eugenio Comi (Scuola Normale Superiore)
created by comi on 28 Feb 2018
modified on 02 May 2018
27 feb 2018 -- 14:30 [open in google calendar]
Dipartimento di Matematica e Informatica, Ferrara.
Abstract.
The Gauss--Green formulas are of significant relevance in many areas of mathematical analysis and mathematical physics. This motivated several investigations to extend such formulas to more general classes of integration domains and weakly differentiable vector fields. In the Euclidean setting it has been shown by Silhavy (2005) and Chen, Torres and Ziemer (2009) that Gauss-Green formulas hold for sets of finite perimeter and $L^{\infty}$-divergence measure fields, i.e., essentially bounded vector fields whose distributional divergence is a Radon measure. We extend these results to the context of stratified groups. In particular, we prove the existence of generalized normal traces on the reduced boundary of sets of locally finite h-perimeter without requiring De Giorgi's rectifiability theorem to hold. This is a joint work with V. Magnani.
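For reference, the classical Euclidean statement that these results generalize reads as follows (a standard formulation, not quoted from the talk): for a set of finite perimeter $E \subset \mathbb{R}^n$ and a smooth bounded vector field $F$,

$$\int_E \operatorname{div} F \, dx = \int_{\partial^* E} F \cdot \nu_E \, d\mathcal{H}^{n-1},$$

where $\partial^* E$ is the reduced boundary of $E$ and $\nu_E$ its measure-theoretic outer normal. For $L^{\infty}$-divergence measure fields, the left-hand side is interpreted through the measure $\operatorname{div} F$ and $F \cdot \nu_E$ is replaced by a generalized normal trace.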
| 2018-06-22 18:24:43 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8852356672286987, "perplexity": 1130.827527000169}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267864776.82/warc/CC-MAIN-20180622182027-20180622202027-00557.warc.gz"}
https://www.physicsforums.com/threads/linear-algebra-subspace-proof.670931/ | # Linear algebra subspace proof
1. Feb 10, 2013
### Mdhiggenz
1. The problem statement, all variables and given/known data
Prove that if S is a subspace of R1, then either S={0} or S=R1.
Trying to come up with a proof I dissected each statement, I know that in order for S to be
a subspace the zero vector must lie within the subset. So I know S={0} is true. I then
checked an arbitrary vector x1 which lies in R1 to make sure it
was closed under scalar multiplication, and addition, and that checked out as well.
Not sure if I am on the right track.
Thanks
2. Relevant equations
3. The attempt at a solution
2. Feb 10, 2013
### jbunniii
You need to specify what field of scalars you are using. The statement is true if the field is $\mathbb{R}$, false if it is, for example, $\mathbb{Q}$.
Assuming you are using $\mathbb{R}$ for the field of scalars, use the fact that a subspace must be closed under scalar multiplication.
3. Feb 10, 2013
### Karnage1993
Your ideas are right, but it's not really a proof unless you do everything step-by-step.
For example, for the first possible set: $S = \{0\}$, show that all three properties of a subspace are satisfied.
4. Feb 10, 2013
### jbunniii
Why is that necessary? The problem statement tells you that $S$ is a subspace. All you need to do is show that if it's not $\{0\}$ then it must be all of $\mathbb{R}^1$.
So: if $S$ is not $\{0\}$, then $S$ contains a nonzero element, say $s \neq 0$. Since $S$ is a subspace, it is closed under scalar multiplication. Thus it must contain $\alpha s$ for every $\alpha \in F$ where $F$ is the scalar field (presumably $\mathbb{R}$). Therefore...?
5. Feb 11, 2013
s=r1? | 2017-12-18 15:03:35 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7839826941490173, "perplexity": 309.88180205184034}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948617816.91/warc/CC-MAIN-20171218141805-20171218163805-00115.warc.gz"} |
https://bayesforest.com/2016/01/25/standard-error-derivaiton/ | Standard Error of the Mean – Derivation
The standard error (SE) is an amazingly useful statistical device for defining confidence intervals. In layman terms standard error is measure of how far a sample statistic is from it’s true value. This post will go through the process of deriving the SE of the mean.
$SE = \frac{\sigma}{\sqrt{n}}$
I have always wanted to dig deeper into where the $\sqrt{n}$ comes from.
The mathematical derivation is pretty straightforward. The first step is to note that the mean $\bar{X}$ is the sample mean of $n$ independent draws from the population.
$\bar{X} = \frac{\displaystyle\sum_{i=1}^n{X_i}}{n}$
We know that the variance is equal to the expected value for the square difference from the mean.
$Var(X) = E[(X - E[X])^2] = \sigma^2$
Replacing $X$ with $\bar{X}$ yields.
$Var(\bar{X}) = E[(\bar{X} - E[\bar{X}])^2]$
$Var(\bar{X}) = E[(\frac{\sum_{i=1}^n{X_i}}{n} - E[\frac{\sum_{i=1}^n{X_i}}{n}])^2]$
$Var(\bar{X}) = \frac{1}{n^2}E[(\sum_{i=1}^n{X_i} - E[\sum_{i=1}^n{X_i}])^2]$
$Var(\bar{X}) = \frac{1}{n^2}E[(\sum_{i=1}^n({X_i} - E[X_i]))^2]$
When the square is expanded, the cross terms $E[(X_i - E[X_i])(X_j - E[X_j])]$ for $i \neq j$ vanish because the samples are independent, so
$Var(\bar{X}) = \frac{1}{n^2}\sum_{i=1}^n E[({X_i} - E[X_i])^2]$
$Var(\bar{X}) = \frac{\sum(\sigma^2)}{n^2}$
since $\sum_{i=1}^n{\sigma^2} = \sigma^2 +\sigma^2 + ... + \sigma^2 = n\sigma^2$
$Var(\bar{X}) = \frac{\sigma^2}{n}$
$\sqrt{Var(\bar{X})} = \frac{\sigma}{\sqrt{n}}$
And there we have it.
Most commonly, the standard error is calculated from a sample of size $n$ for a sample mean $\bar{x}$, with the sample standard deviation $s$ standing in for $\sigma$.
For example the SE for a 95% confidence interval (alpha of 5%) from a normal distribution would be.
$\text{lower limit} = \bar{X} - 1.96 \times SE$
$\text{upper limit} = \bar{X} + 1.96 \times SE$
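A quick Monte Carlo sketch makes the $\sigma/\sqrt{n}$ scaling concrete (the sample size and number of trials below are arbitrary choices):

```python
import random
from math import sqrt

random.seed(0)
sigma, n, trials = 2.0, 25, 20_000

# Draw `trials` samples of size n; the spread of their means is the SE.
means = [sum(random.gauss(0.0, sigma) for _ in range(n)) / n
         for _ in range(trials)]
grand_mean = sum(means) / trials
empirical_se = sqrt(sum((m - grand_mean) ** 2 for m in means) / trials)
theoretical_se = sigma / sqrt(n)   # 2 / sqrt(25) = 0.4
print(empirical_se, theoretical_se)
```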
I found this a helpful exercise in gaining confidence in the formulas I am using. The key was to understand that the SE is looking at the sample mean and not an individual sample value. | 2019-01-16 13:57:15 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 20, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7406970858573914, "perplexity": 325.8428885011445}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547583657510.42/warc/CC-MAIN-20190116134421-20190116160421-00464.warc.gz"}
https://www.natureof3laws.co.in/blog/page/5/ | ## Mutual inductance – definition, formula, units, and dimensions
Hmmm… Mutual inductance has more self-respect than self-inductance because it has a…
## Self-inductance | definition, formula, units, and dimensions
Self-inductance is usually just called inductance. From the knowledge of inductance, we…
## Inductance – definition, formula, units, and dimensions
Inductance is just another version of Lenz’s law. If anyone knows what… | 2023-02-08 00:43:06 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9695291519165039, "perplexity": 8806.365336733692}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500664.85/warc/CC-MAIN-20230207233330-20230208023330-00772.warc.gz"} |
https://www.physicsforums.com/threads/analytic-solution-of-this-ordinary-differentiale-equation.283610/ | # Analytic solution of this ordinary differentiale equation
1. Jan 8, 2009
### bsodmike
1. The problem statement, all variables and given/known data
$$y'=4e^{0.8x}-0.5y$$
This question was obtained from a textbook, where it is used as an example of the application of Heun's method of ODE integration. The authors state that it has a 'simple' analytic solution of,
$$y=\dfrac{4}{1.3}(e^{0.8x}-e^{-0.5x})+2e^{-0.5x}$$
2. Relevant equations
It is a linear first-order ODE of the form $$y'+p(x)y=q(x)$$.
3. The attempt at a solution
Attempted to use an integrating factor. [worked steps were posted as images; not recovered]
Any ideas ?!?
Last edited: Jan 8, 2009
2. Jan 8, 2009
### bsodmike
It is also given, y(0) = 2. i.e. at x=0, y=2.
3. Jan 8, 2009
### HallsofIvy
Staff Emeritus
Yes, that is correct.
Then since
$$y(x)= \frac{4}{1.3}e^{.8t}+ Ce^{-.5t}$$
$$y(0)= \frac{4}{1.3}+ C= 2$$
so
$$C= 2- \frac{4}{1.3}$$
That is,
$$y(x)= \frac{4}{1.3}e^{.8t}+ \left(2- \frac{4}{1.3}\right)e^{-.5t}$$
$$y(x)= \frac{4}{1.3}\left(e^{.8t}- e^{-.5t}\right)+ 2e^{-.5t}$$
exactly as given.
4. Jan 8, 2009
### Thaakisfox
This differential equation is quite simple, so it's better if we see what's behind this integrating factor. It's basically Lagrange's method, this "fits" to your hand a bit more:
So we have:
$$y'+p(x)y=q(x)$$
As we know, the general solution of the DE can be obtained by adding the general solution $$Y$$ of the homogeneous part and a particular solution $$y_0$$ of the entire DE.
First of all let's solve the homogeneous part, that is:
$$Y'+p(x)Y=0 \Longrightarrow \frac{dY}{Y}=-p(x)dx \Longrightarrow Y=C\exp\left(-\int^x p(x')dx'\right)$$
Now we only have to find a particular solution of the entire DE. Here is the trick: let's consider the constant in the homogeneous solution to be some function of the free variable, that is: $$C=C(x)$$
so we have: $$y_0=C(x)\exp\left(-\int^x p(x')dx'\right)$$
Now plug this back into the DE to get the function C(x); then you have the particular solution, add it to the homogeneous solution, and you have the general solution:
$$y=y_0+Y$$
So basically the integrating factor is built from a solution of the homogeneous part of the DE.
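Worked out for this thread's equation for completeness: here $$p(x)=0.5$$ and $$q(x)=4e^{0.8x}$$, so the method gives
$$Y=Ce^{-0.5x}, \qquad y_0=C(x)e^{-0.5x}$$
$$y_0'+0.5y_0=C'(x)e^{-0.5x}=4e^{0.8x} \Longrightarrow C'(x)=4e^{1.3x} \Longrightarrow C(x)=\frac{4}{1.3}e^{1.3x}$$
$$y=y_0+Y=\frac{4}{1.3}e^{0.8x}+Ce^{-0.5x}$$
which is the same general solution as the integrating-factor approach.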
5. Jan 8, 2009
### bsodmike
Thanks HallsOfIvy!!
I was trying to evaluate the following,
$$y(x)= \frac{4}{1.3}e^{.8t}+ Ce^{-.5t}$$
at y(0), i.e. x=0 (in your example, t=0), where all the exp() factors equal 1; and tried to substitute back C=-14/13
$$y(x)= \frac{4}{1.3}e^{.8t} - \dfrac{14}{13}e^{-.5t}$$
d'oh!!!! Cheers :) :)
Last edited: Jan 8, 2009
6. Jan 8, 2009
### bsodmike
Thanks Thaakisfox! | 2017-02-22 18:25:16 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.86387699842453, "perplexity": 1108.5737935482064}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171004.85/warc/CC-MAIN-20170219104611-00114-ip-10-171-10-108.ec2.internal.warc.gz"} |
https://cs-syd.eu/posts/2016-04-09-typesafe-polyvariadic-functions-in-haskell | # Type safe Polyvariadic functions in Haskell
Date 2016-04-09
Polyvariadic functions can be useful to improve the user experience when writing an EDSL in Haskell. Instead of making your users write doSomething ["With", "these", "arguments"], you could have them write doSomething "With" "these" "arguments".
Your intuition will probably tell you that making a polyvariadic function will be unsafe, but Haskell allows you to make it safe. Of course it is not as easy as in Python, but then again, polyvariadic functions in Python will never be type safe.
### Background: Currying
The concept of currying needs to be very clear before we can dive into polyvariadic functions.
A function with two arguments: f :: a -> b -> c really only has one argument. The -> operator is right-associative, so we can rewrite this type signature as f :: a -> (b -> c). This means that f takes a value x :: a and gives you a function f x :: b -> c.
When we write down functions with 'multiple arguments', the parentheses are usually omitted, but in reality f x y z is parsed as ((f x) y) z.
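A minimal, standalone illustration of this (the function names here are my own, not from any library): partially applying a 'three-argument' function yields a function of the remaining arguments.

```haskell
-- add3 'takes three arguments', i.e. it takes one Int and
-- returns a function of the remaining two.
add3 :: Int -> Int -> Int -> Int
add3 x y z = x + y + z

-- Partial application: supply only the first argument.
addFrom5 :: Int -> Int -> Int
addFrom5 = add3 5
```

In GHCi, addFrom5 2 3 and ((add3 5) 2) 3 both evaluate to the same thing as add3 5 2 3.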
### The problem of polyvariadic functions
What we want to build is a function of type a -> (a -> (a -> (... -> b ))) where the length of this chain of -> operators is determined by how many arguments are given to the function.
Meanwhile, we would like to implement this function as if it were of type [a] -> b.
Keep in mind that, with what I am about to show you, it is also possible to implement more complicated polyvariadic functions (for example with different types as arguments).
### A first draft of a solution
The trick that we will employ to make this happen will involve a typeclass. Let's say we would like to build the polyvariadic equivalent of a function sumOf :: [Int] -> Int. We would then write a typeclass SumArgs as follows:
```haskell
class SumArgs a where
  sumArgs :: [Int] -> a
```
Instances of this class, 'know how to make a value of their type from a list of integers'.
Next, we instantiate Int -> r where SumArgs r. (We will need {-# LANGUAGE FlexibleInstances #-} and {-# LANGUAGE FlexibleContexts #-}.)
```haskell
instance (SumArgs r) => SumArgs (Int -> r) where
  sumArgs is i = sumArgs (i:is)
```
This may look like black magic at first. I will explain how this works right now. Why it is useful should become clear later.
Note that the sumArgs function in the second instance is of type SumArgs r => [Int] -> (Int -> r). That is, given a list of integers is :: [Int] and another integer i :: Int, we have to build a value of type r where r has a sumArgs instance. r, having a sumArgs instance, can be built using sumArgs :: SumArgs r => [Int] -> r if we have a list of integers. We will build this list by adding i :: Int to the existing list is :: [Int].
Now we are ready to add the functionality that will use sumOf :: [Int] -> Int in a polyvariadic context. We will do this with another instance:
```haskell
instance SumArgs Int where
  sumArgs = sumOf
```
This instance explains how to build the result (of type Int) from a list of Int arguments.
We could now use sumArgs as let f = 5 :: Int in sumArgs [] f f :: Int and we would already have a polyvariadic function, but the empty list there is still ugly.
The final piece of the puzzle is to abbreviate this:
```haskell
sumOf' :: SumArgs args => args
sumOf' = sumArgs []
```
There we have it, a type safe polyvariadic function in Haskell.
```
*> let f = 5 :: Int
*> let g = 4 :: Int
*> sumOf' f g :: Int
9
*> sumOf' f g f f g :: Int
23
```
### Under the hood of the black magic
If you did not already get how this works, here is an example.
sumOf' f g is declared to be of type Int, which means we can write it as sumArgs [] f g :: Int.
If we add the parentheses in the right places, we get ((sumArgs []) f) g :: Int. Now we will examine what Haskell's type inference does:
• Peeling off one layer, Haskell figures out that (sumArgs []) f must be of some type c -> Int where g is of that type c.
• Peeling off the next layer, Haskell figures out that sumArgs [] must be of some type b -> c -> Int where f is of that type b.
• It knows that sumArgs is of type [Int] -> a.
• It identifies a with b -> c -> Int to come to the conclusion that sumArgs is of type [Int] -> b -> c -> Int where b and c are the types of f and g, respectively.
• Because it knows that f and g are both of type Int, it specializes the type of sumArgs to [Int] -> Int -> Int -> Int, which is exactly what we want.
The reason that sumArgs :: [Int] -> Int -> Int -> Int is a valid type signature, is because Int -> Int -> Int has an instance of SumArgs. Int -> Int -> Int has an instance of SumArgs because Int -> Int does. Finally, Int -> Int has an instance of SumArgs because Int does.
### Backward compatibility
I motivated polyvariadic functions by arguing that they look nice in EDSLs. If you are interested in improving the user experience of your EDSL, chances are that you have already implemented it. This means that there is probably already a bunch of code that uses sumOf :: [Int] -> Int directly. Now, there is a way to change the type of sumOf to SumArgs args => args without breaking any of your users' code and it is by adding one more instance to the above code:
```haskell
instance (SumArgs r) => SumArgs ([Int] -> r) where
  sumArgs is n = sumArgs (n ++ is)
```
Now the old code will still work as expected:
```
*> sumOf' [f, g] :: Int
9
```
As a nice bonus, the following now also works:
```
*> sumOf' f g [f, g] :: Int
18
```
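For reference, here are the pieces assembled into one self-contained module that can be loaded in GHCi. One assumption: the post only gives the type of sumOf ([Int] -> Int), so its body here is plain summation.

```haskell
{-# LANGUAGE FlexibleInstances #-}
{-# LANGUAGE FlexibleContexts #-}

-- Assumed implementation of the underlying list-based function;
-- the post only specifies its type.
sumOf :: [Int] -> Int
sumOf = sum

-- Instances 'know how to make a value of their type
-- from a list of integers'.
class SumArgs a where
  sumArgs :: [Int] -> a

-- Base case: collapse the accumulated arguments into the result.
instance SumArgs Int where
  sumArgs = sumOf

-- Inductive case: accept one more Int argument.
instance (SumArgs r) => SumArgs (Int -> r) where
  sumArgs is i = sumArgs (i : is)

-- Backward compatibility: also accept whole lists of Ints.
instance (SumArgs r) => SumArgs ([Int] -> r) where
  sumArgs is ns = sumArgs (ns ++ is)

-- Entry point: start accumulating from the empty list.
sumOf' :: SumArgs args => args
sumOf' = sumArgs []
```

Loaded in GHCi, sumOf' (5 :: Int) (4 :: Int) :: Int gives 9, and sumOf' (5 :: Int) (4 :: Int) [5, 4] :: Int gives 18, matching the sessions above.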
You don't want to finish life knowing you've never been an additional agenda item. - Zuber Anwar | 2021-05-09 07:39:38 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.23551686108112335, "perplexity": 1602.8696549760439}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243988961.17/warc/CC-MAIN-20210509062621-20210509092621-00129.warc.gz"} |
http://www.chermontdebritto.adv.br/mp-vidhan-itrt/aa5a17-statistics-final-exam-answers |
[Scrambled fragments of statistics practice-exam questions; the page text is shuffled beyond recovery.]
Designed to test your understanding of statistical procedures statistics final exam answers s = 6 100 births at that hospital test be... A scientist was studying the effects of a z-score equal to - 0.70 < -0.36 ) of %... See the notation ( E x ) = c x 2 is a [ { }. Is a one-tailed test to qualify for membership is having an IQ least! 6 randomly selected young college graduates is about$ 58,500 think will the! Feet 11.1 inches tall grades on each screen the absolute value of z is %... Hypothesis if the researcher uses a smaller alpha level in meetings or classes situations, define the parameter write. 235 73 138 138 71 229 153 73 a \alpha ) exam, the mean learned in class null will! Test are normally distributed with mean 61 and standard deviation of 10 statistics final exam answers and briefly describe the distribution! This study, each cat was sho... family incomes have a preliminary estimate the... According to the practice of statistics … Business statistics final exam dealing with this?! Dialing machine is expected to reach a live person 15 % of the sfu Statgen Working group within the of. Of 62 % for P.... a sociologist wishes to determine the representativeness a. Villages to provide assistance during a cholera epidemic includes some of the following data nurse the. Temperatures has x=98.20 f degrees -2.00, critical value x standard error population proportion at the below... = 0.025 statistics final exam answers df = 17 reviewing the training programs in the body is on the midterm would expected! As the gender of the choice that best completes the statement or answers the question is marked with an (! ( two 6 's ) with one throw using 2 dice is: a: 3-1. Is worth $4000, another is worth$ 1500, and interpret in Sullivan 's statistics, 4th.... Iq of each rating as well as the [ { Blank } ] statistics considers subjective probability estimates [! When would a researcher surveyed college students to study from certain high schools second exam the... 
Being randomly selected students took a test to a z score = -1.50 record the number who cons... %., from the class list using random sampling are randomly selected young college graduates 2. Researcher decided to use the table shows the frequency of each person interviewed and IQ. Will receive a significant grade reduction alcohol brewery firm claims that the number of 3s will! She is able to return 15 % of the next 100 births at that hospital USA,... Known that the demand will be rolled is: a company owns 17 copiers have been.... Graph to use a calculator and a non-programmable calculator statistics final exam answers it so important for a region. Players are receiving scholarships, and 2s=? 60, 67 right left. Calls to Business people for which n = 55 information below, compute the of. Managers at an automobile manufacturing plant would like to estimate the derivative f ' ( 1.1 ) within.
◂ Voltar | 2021-04-10 23:02:26 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5059763193130493, "perplexity": 974.6652222205579}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038059348.9/warc/CC-MAIN-20210410210053-20210411000053-00353.warc.gz"} |
https://testbook.com/question-answer/iffracxy-fraca-2a-2--6035e26d333724fb765d490d | # If $$\frac{x}{y} = \frac{{a + 2}}{{a - 2}}$$, then $$\frac{{{x^2} - {y^2}}}{{{x^2} + {y^2}}} =$$?
This question was previously asked in
Official Paper 2: Tripura TET 2019 Paper 2 (Maths & Science)
1. $$\frac{{2a}}{{{a^2} + 2}}$$
2. $$\frac{{4a}}{{{a^2} + 4}}$$
3. 1
4. None of the above
Option 2 : $$\frac{{4a}}{{{a^2} + 4}}$$
150 Questions 150 Marks 150 Mins
## Detailed Solution
Given:
$$\frac{x}{y} = \frac{{a + 2}}{{a - 2}}$$
Formula used:
$$(a + b)^2 = a^2 + 2ab + b^2$$

$$(a - b)^2 = a^2 - 2ab + b^2$$
Calculation:
Divide both the numerator and the denominator of $$\frac{{{x^2} - {y^2}}}{{{x^2} + {y^2}}}$$ by $$y^2$$:
⇒ $$\frac{\frac{x^2 - y^2}{y^2}}{\frac{x^2 + y^2}{y^2}} = \frac{\frac{x^2}{y^2} - 1}{\frac{x^2}{y^2} + 1}$$
⇒ $$\frac{(x/y)^2 - 1}{(x/y)^2 + 1} = \frac{\left(\frac{a + 2}{a - 2}\right)^2 - 1}{\left(\frac{a + 2}{a - 2}\right)^2 + 1}$$
⇒ $$\frac{\frac{(a^2 + 4 + 4a) - (a^2 + 4 - 4a)}{a^2 + 4 - 4a}}{\frac{(a^2 + 4 + 4a) + (a^2 + 4 - 4a)}{a^2 + 4 - 4a}}$$
⇒ $$\frac{8a}{2a^2 + 8}$$
⇒ $$\frac{4a}{a^2 + 4}$$
∴ The required value is 4a/(a2 + 4) | 2021-09-19 03:05:45 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7406761050224304, "perplexity": 10609.229529905606}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780056656.6/warc/CC-MAIN-20210919005057-20210919035057-00468.warc.gz"} |
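As a quick sanity check, the identity can be verified numerically with exact rational arithmetic (an illustrative snippet, not part of the original solution):

```python
from fractions import Fraction

def lhs(a):
    # pick x = a + 2 and y = a - 2, so x/y = (a + 2)/(a - 2);
    # the expression (x^2 - y^2)/(x^2 + y^2) depends only on that ratio
    x, y = Fraction(a + 2), Fraction(a - 2)
    return (x**2 - y**2) / (x**2 + y**2)

def rhs(a):
    return Fraction(4 * a, a**2 + 4)

# spot-check the identity for several values of a (a != 2)
for a in range(3, 10):
    assert lhs(a) == rhs(a)
print("identity holds for sampled values")
```

Using `Fraction` avoids floating-point rounding, so equality of the two sides is checked exactly.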
http://icpc.njust.edu.cn/Problem/Hdu/5533/ | # Dancing Stars on Me
Time Limit: 2000/1000 MS (Java/Others)
Memory Limit: 262144/262144 K (Java/Others)
## Description
The sky was brushed clean by the wind and the stars were cold in a black sky. What a wonderful night. You observed that, sometimes the stars can form a regular polygon in the sky if we connect them properly. You want to record these moments by your smart camera. Of course, you cannot stay awake all night for capturing. So you decide to write a program running on the smart camera to check whether the stars can form a regular polygon and capture these moments automatically.
Formally, a regular polygon is a convex polygon whose angles are all equal and all its sides have the same length. The area of a regular polygon must be nonzero. We say the stars can form a regular polygon if they are exactly the vertices of some regular polygon. To simplify the problem, we project the sky to a two-dimensional plane here, and you just need to check whether the stars can form a regular polygon in this plane.
## Input
The first line contains an integer $T$ indicating the total number of test cases. Each test case begins with an integer $n$, denoting the number of stars in the sky. The following $n$ lines each contain $2$ integers $x_i, y_i$, describing the coordinates of the $n$ stars.
$1 \le T \le 300$
$3 \le n \le 100$
$-10000 \le x_i, y_i \le 10000$
All coordinates are distinct.
## Output
For each test case, please output "YES" if the stars can form a regular polygon. Otherwise, output "NO" (both without quotes).
## Sample Input
3
3
0 0
1 1
1 0
4
0 0
0 1
1 0
1 1
5
0 0
0 1
0 2
2 2
2 0
## Sample Output
NO
YES
NO
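A useful lattice-geometry fact narrows this problem considerably: the only regular polygon whose vertices can all have integer coordinates is the square. The sketch below (an illustration of that observation, not an official judge solution) therefore answers YES exactly when the four points form a square:

```python
from itertools import combinations

def is_regular(points):
    """True iff the integer points are exactly the vertices of a regular polygon.

    Every regular polygon with all-lattice vertices is a square, so this
    reduces to: exactly four points forming a square with nonzero area.
    """
    if len(points) != 4:
        return False
    # Six squared pairwise distances: a square has four equal sides and
    # two equal diagonals, with diagonal^2 == 2 * side^2.
    d = sorted((ax - bx) ** 2 + (ay - by) ** 2
               for (ax, ay), (bx, by) in combinations(points, 2))
    return d[0] > 0 and d[0] == d[3] and d[4] == d[5] and d[4] == 2 * d[0]

# the three sample cases from the statement
print("YES" if is_regular([(0, 0), (1, 1), (1, 0)]) else "NO")                  # NO
print("YES" if is_regular([(0, 0), (0, 1), (1, 0), (1, 1)]) else "NO")          # YES
print("YES" if is_regular([(0, 0), (0, 1), (0, 2), (2, 2), (2, 0)]) else "NO")  # NO
```

Working with squared integer distances keeps the whole check in exact arithmetic, so no floating-point tolerance is needed.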
## Source
2015 ACM/ICPC Asia Regional Changchun, Replay Contest (thanks to Northeast Normal University) | 2019-08-19 10:36:57 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3052016496658325, "perplexity": 612.5281949162742}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027314721.74/warc/CC-MAIN-20190819093231-20190819115231-00049.warc.gz"}
https://mithatkonar.com/wiki/doku.php/python/about_python/about_python_iv | # EDA
Any language that supports object orientation will let you create objects, either by instantiating them from classes or by building them from prototypes. Class-based object orientation becomes significantly more powerful when you are able to use inheritance and polymorphism. It is also a great convenience if you can use multiple constructors. Python permits all these.
### Inheritance
In the real world, we often create hierarchies of things. For example, a vehicle is “a machine that is used to carry people or goods from one place to another”.1) Based on this general concept, we may define a category of two-wheel vehicles that includes bicycles and motorcycles, a category called car that includes sedans, coupes, and convertibles, a truck category that includes buses, vans, tractors, and so on.
In these kinds of hierarchies, we typically start with a general class of things at the root of the tree, and all the other classes of things are more specialized versions of the general class of things.
This is the essence of inheritance in object oriented programming. For example, to solve a particular programming problem, we might define a Person class, and based on that definition we might then define an Employee class, and then based on that Employee class we might define Manager, Staff, and Hourly employee classes.
Or, we might define a general Shape class. Then based on the Shape class we might define Rectangle and Ellipse classes. Then based on the Rectangle class we might define a Square class (a square is rectangle with equal height and width), and based on the Ellipse class we might define a Circle (a circle is an ellipse with zero eccentricity).
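The Shape hierarchy just described can be sketched in a few lines (an illustrative example, not code from the original page):

```python
import math

class Shape:
    """General base class; concrete shapes override area()."""
    def area(self):
        raise NotImplementedError

class Rectangle(Shape):
    def __init__(self, width, height):
        self.width = width
        self.height = height
    def area(self):
        return self.width * self.height

class Square(Rectangle):
    """A square is a rectangle with equal height and width."""
    def __init__(self, side):
        Rectangle.__init__(self, side, side)

class Ellipse(Shape):
    def __init__(self, semi_major, semi_minor):
        self.a = semi_major
        self.b = semi_minor
    def area(self):
        return math.pi * self.a * self.b

class Circle(Ellipse):
    """A circle is an ellipse with zero eccentricity."""
    def __init__(self, radius):
        Ellipse.__init__(self, radius, radius)

print(Square(3).area())  # 9
print(Circle(1).area())  # 3.141592653589793
```

Notice how `Square` and `Circle` add only a constructor; `area()` is inherited unchanged from their parents.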
#### Generalization and specialization
In both of the above cases, the classes closest to the root of the tree are more general than the classes toward the bottom. A Square is a special kind of Rectangle, and a Rectangle is a special kind of Shape. An Hourly employee is a special kind of Employee, and an Employee is a special kind of Person. For this reason, we often say that inheritance defines "is a" relationships.
#### Class hierarchies
We call trees of classes like the above class hierarchies. The class at the root of the tree is a base class, and classes that inherit from a base class are derived classes.2)
#### Inheritance and code reuse
One of the advantages of implementing classes with inheritance is code reuse. Almost all class-based object oriented languages let you define derived classes without having to re-write the base class code. The idea is that you write the base class code once. Then when you write the derived classes, you add only whatever new code is required for the derived class or new definitions for old code that must be overridden.
Since you do not need to rewrite the code that is common to both base and derived classes, you end up writing less code. But even more important, when you fix a bug in the base class, it automatically propagates to the derived classes.
#### Inheritance and polymorphism
Another advantage of using inheritance is that with statically typed languages (such as C++ and Java), it facilitates polymorphism (discussed below). Dynamically typed languages (like Python) behave polymorphically almost by definition.
### Inheritance in Python
To demonstrate the use of inheritance in Python, we are going to create a specialized version of ClickerCounter called ClickUpDown.
In addition to the two buttons found on a ClickerCounter (for incrementing and resetting the count), a ClickUpDown object will add another that decrements the count. In terms of a Python model, a ClickUpDown is identical to a ClickerCounter except that it has an additional method: clickdown.
Here is the base class:
# base class definition
class ClickerCounter():
    def __init__(self):
        self.count = 0

    # accessor for count
    def get_count(self):
        return self.count

    # click the counter
    def click(self):
        self.count = self.count + 1

    # reset the count
    def reset(self):
        self.count = 0
And here is the derived class:
# derived class definition
class ClickUpDown(ClickerCounter):
    # click down the counter
    def clickdown(self):
        self.count = self.count - 1
That's it!
Using the new class:
b = ClickUpDown()    # instantiate a ClickUpDown object
b.click()            # 1
b.click()            # 2
b.click()            # 3
b.clickdown()        # should be 2
print(b.get_count())
a = ClickerCounter() # instantiate a ClickerCounter object
a.click()            # 1
a.click()            # 2
a.click()            # 3
a.clickdown()        # can't do that! raises AttributeError
print(a.get_count())
### Polymorphism
In statically typed programming languages, the behavior of an object bound to a variable might be:
a) the behavior associated with the class of the variable used to reference the object, or
b) the behavior associated with the class of the object to which the variable is bound.
We call b) above polymorphism. In other words, when a language behaves polymorphically, the behavior of objects is based on the class to which the object belongs—not to the class of the variable by which you are accessing the object.
### Polymorphism in Python
Because Python is dynamically typed, Python is polymorphic without any extra effort on the part of the programmer. A variable carries no fixed type of its own; whenever it is bound to an object, behavior is determined by that object's class.
a = ClickUpDown() # a now references a ClickUpDown object
a = ClickerCounter() # a now references a ClickerCounter object
In fact, it's hard to make Python behave non-polymorphically.
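A small self-contained demonstration (using minimal stand-ins for the counter classes defined earlier): rebinding a name to a different object changes which behavior is available, because behavior lives on the object, not on the variable:

```python
class ClickerCounter:
    """Minimal stand-in for the counter class defined earlier."""
    def __init__(self):
        self.count = 0
    def click(self):
        self.count += 1

class ClickUpDown(ClickerCounter):
    def clickdown(self):
        self.count -= 1

a = ClickUpDown()     # a references a ClickUpDown object
a.click()
a.clickdown()         # allowed: behavior comes from the object's class
print(hasattr(a, "clickdown"))  # True

a = ClickerCounter()  # a now references a ClickerCounter object
print(hasattr(a, "clickdown"))  # False: same name, different behavior
```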
### Parameterized constructor example
Let's now add a click_limit feature to our ClickerCounter. When the user clicks past the click_limit, the count will automatically reset to zero. By default, the click_limit will be 100,000. Let us also let the user set the click_limit when she or he instantiates a ClickerCounter.
To do this, we will need to add an instance variable to store the click_limit, add some logic to the click method, and add a parameterized constructor:
class ClickerCounter():
    # parameterized constructor
    def __init__(self, click_limit = 100000):
        self.count = 0
        self.click_limit = click_limit

    # accessor for count
    def get_count(self):
        return self.count

    # click the counter
    def click(self):
        if self.count < self.click_limit:
            self.count = self.count + 1
        else:
            self.reset()

    # reset the count
    def reset(self):
        self.count = 0
To use it:
a = ClickerCounter(3) # make a clicker-counter that counts up to 3
a.click()
a.click()
a.click()
print(a.get_count()) # should be 3
a.click() # should go from 3 to 0
print(a.get_count()) # should be 0 | 2021-09-21 11:30:11 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3434717655181885, "perplexity": 2195.3426190056357}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057202.68/warc/CC-MAIN-20210921101319-20210921131319-00555.warc.gz"}
http://www.hudsonworkshops.com/midvn/radial-velocity-method-equation-5f6083 | Radial velocity methods alone may only reveal a lower bound on a planet's mass, since a large planet orbiting at a very high angle to the line of sight will perturb its star radially as much as a much smaller planet with an orbital plane on the line of sight. The prognostic equation for radial velocity scanned from a Doppler radar, called the radial velocity equation, has been used in various simplified forms as a dynamic constraint for analyzing and assimilating radial velocity observations in space and time dimensions (Xu et al. 1994, 1995, 2001a,b; Xu and Qiu 1995; Qiu and Xu 1996). [Figure: sodium-absorption demonstration for the radial velocity method. The glass rod contains NaCl; when highly heated, it produces Na light that absorbs the light from the Na spectral lamp, whereas the torch light looks unchanged.] Radial velocity is the velocity component along the radius between observer and target. The radial velocity method is one of the principal techniques used in the search for exoplanets; it is also known as Doppler spectroscopy. Just as a star causes a planet to move in an orbit around it, so a planet causes its host star to move in a small counter-orbit, resulting in a tiny additional, regularly varying component to the star's motion. The mass of the planet can then be found from the calculated velocity of the planet: $$M_{\mathrm{PL}} = \frac{M_{\mathrm{star}} V_{\mathrm{star}}}{V_{\mathrm{PL}}}$$ where $$V_{\mathrm{PL}}$$ is the velocity of the planet. [Figure: fig3_fy_pop_en_radialVelocityMethod_191001]
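The planet-mass relation $$M_{\mathrm{PL}} = M_{\mathrm{star}} V_{\mathrm{star}} / V_{\mathrm{PL}}$$ quoted on this page can be exercised numerically. A hedged sketch; the Sun/Jupiter figures below are rough illustrative values, not taken from this page:

```python
def planet_mass(m_star, v_star, v_planet):
    """M_PL = M_star * V_star / V_PL, from momentum balance about the barycenter."""
    return m_star * v_star / v_planet

# Rough Sun/Jupiter illustration (approximate round numbers): the Sun's
# reflex speed due to Jupiter is about 12.7 m/s, while Jupiter's orbital
# speed is about 13,070 m/s.
M_SUN = 1.989e30  # kg
m_jup = planet_mass(M_SUN, 12.7, 13_070.0)
print(f"{m_jup:.3e} kg")  # roughly 1.9e27 kg, about one Jupiter mass
```

The same function works in any consistent unit system, since only the ratio of the two speeds matters.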
In astronomy, the point is usually taken to be the observer on Earth, so the radial velocity then denotes the speed with which the object moves away from the Earth (or approaches it, for a negative radial velocity). Â_z¦1Y P«bdJãÂï,t+[æzð ú{ªßxZõå366äÙ"_ÄÅéÚ®RĹRÉݲòÁÑÊä+Wæ¤ñ)²9\¨!âµrHÈ6éKHUt¶®Ó Base velocity is just the ground-relative radial velocity that is directly measured by the doppler radar. FINDING PLANETS USING THE RADIAL VELOCITY METHOD TIME 4.2 DAYS Light from an object moving away from us is redder. The proposed method will solve at each interior node six integral equations in order to obtain the velocities u1, u2, stresses Ï11, Ï12, Ï22 and pressure p.The integral equation for velocity components is given by (15). REDSHIFT Light from an object moving towards us is bluer. $\displaystyle\text{Angular velocity} = \frac{ \text{Transversal velocity} }{ \text{Distance} }$ Pe⦠Produced by the School of Physics and Astronomy. It relies on the fact that objects with a large mass can bend light around them. Michael Endl, in Encyclopedia of the Solar System (Third Edition), 2014. In this paper, Doppler radar radial velocity and reflectivity are simultaneously assimilated into a weather research and forecasting (WRF) model by a proper orthogonalâdecompositionâbased ensemble, threeâdimensional variational assimilation method (referred to as PODEn3DVar), which therefore forms the PODEn3DVarâbased radar assimilation system (referred to as WRFâPODEn3DVar). Once the flow leaves the rotor its angular momentum must be conserved in the absence of ⦠[5], In many binary stars, the orbital motion usually causes radial velocity variations of several kilometers per second (km/s). The method consists of obtaining the equation related to the domain with an iterative process. ?o¿ÄPbõÅ¿¼¤ÐSÙ~b?n§=÷fï[|f¾ÉEiÓ£írѶJkØuä
l{¹\S÷ìôíÊ¥é3g¨6ô(J5ߦ0YGïõ-è°×fÕ°ð«m×rìúÜ:ñ9hFrºù³(¼49$?§F®>! It is expressed in radians. For example, if you have an angular velocity at 6.283 rad/sec, then you are orbiting a full circle every second (since 6.283 = 2 * PI). The starâs velocity Our proposal is to solve the latter by Newtonâs methods on func- Other articles where Radial velocity is discussed: Milky Way Galaxy: Solar motion calculations from radial velocities: For objects beyond the immediate neighbourhood of the Sun, initially it is necessary to choose a standard of rest (the reference frame) from which the solar motion is to be calculated. When both are used, slightly better retrieval results were obtained (Xu and Qiu, 1995). Radial velocity method is limited by how long we have monitored a given star (longest radial velocity are 15 years. Astronomers measure Doppler shifts in the star's spectral features, which track the line-of/sight gravitational accelerations of a star caused by the planets orbiting it. This is usually done by selecting a particular kind of star or⦠[2] By contrast, astrometric radial velocity is determined by astrometric observations (for example, a secular change in the annual parallax).[2][3][4]. In astronomy, radial velocity is often measured to the first order of approximation by Doppler spectroscopy. Radial Velocity Method. From the instrumental perspective, velocities are measured relative to the telescope's motion. The radial velocity method to detect exoplanets is based on the detection of variations in the velocity of the central star, due to the changing direction of the gravitational pull from an (unseen) exoplanet as it orbits the star. Comparing the two methods for detection of exoplanets that depend on the host star's wobble. 1994, 1995, 2001a,b; Xu and Qiu 1995; Qiu and Xu 1996). The radial velocity profile is then obtained. The method may be applied to flows with a swirl number up to about Sw=0.25. Na! 
Astronomers, using the radial velocity technique, measure the line-of-sight component of the space velocity vector of a star (hence the term "radial"). The quantity obtained by this method may be called the barycentric radial-velocity measure or spectroscopic radial velocity. The radial velocity (aka Doppler spectroscopy) method relies on measurements of a star's "wobble" to determine the presence of one or more planets around it. It uses the fact that if a star has a planet (or planets) around it, it is not strictly correct to say that the planet orbits the star. Instead, the planet and the star orbit their common center of mass, and the star moves, ever so slightly, in a small circle or ellipse, responding to the gravitational tug of its smaller companion.
In relation to a direction of observation, a motion vector can be broken down into two components: the motion along that radial, either directly toward or away from the observer (called radial speed), and the motion perpendicular to that radial (called tangential speed). The radial velocity of an object with respect to a given point is the rate of change of the distance between the object and the point. A positive radial velocity indicates the distance between the objects is or was increasing; a negative radial velocity indicates the distance between the source and observer is or was decreasing.
Doppler shift is the change in the frequency of a wave for an observer who is moving relative to the source of the wave; the principle is named after Christian Doppler, who first proposed it in 1842. Light from an object with a substantial relative radial velocity at emission will be subject to the Doppler effect, so the frequency of the light decreases for objects that were receding (redshift) and increases for objects that were approaching (blueshift). When the star moves towards us, its spectrum is blueshifted, while it is redshifted when it moves away from us. Radial velocity methods therefore look for periodic Doppler shifts in the star's spectral lines as it moves about the center of mass: by regularly looking at the spectrum of a star, and so measuring its velocity, it can be determined if it moves periodically due to the influence of an exoplanet companion. As the spectra of such stars vary due to the Doppler effect, they are called spectroscopic binaries. William Huggins ventured in 1868 to estimate the radial velocity of Sirius with respect to the Sun, based on the observed red shift of the star's light.
The radial velocity of a star or other luminous distant object can be measured accurately by taking a high-resolution spectrum and comparing the measured wavelengths of known spectral lines to wavelengths from laboratory measurements.[1] However, due to relativistic and cosmological effects over the great distances that light typically travels to reach the observer from an astronomical object, this measure cannot be accurately transformed to a geometric radial velocity without additional assumptions about the object and the space between it and the observer. From the instrumental perspective, velocities are measured relative to the telescope's motion, so an important first step of the data reduction is to remove its contributions, e.g. contributions of 230 km/s from the motion around the Galactic center and, in the case of spectroscopic measurements, corrections of the order of ±20 cm/s with respect to the telescope.
Radial velocity observations provide information about the minimum mass of a planet, assuming the stellar mass is known. To constrain the actual mass of an exoplanet, the orbital inclination has to be measured, for example by fitting an analytical transit light curve to the data using the transit equation. It has been suggested that planets with high eccentricities calculated by this method may in fact be two-planet systems of circular or near-circular resonant orbit.[6][7] The radial velocity method is limited (1) by how accurately we can measure velocity (it cannot currently find planets smaller than Saturn) and (2) by how long we have monitored a given star; the longest radial velocity surveys are about 15 years, so long-period planets are hard to detect. While both the radial velocity and transit methods rely on detecting variations in light from the star, a completely different method, gravitational microlensing, uses the effect of gravity on light; it was predicted by Albert Einstein in his general theory of relativity.
Fragments from unrelated sources mixed into this extraction include: a new calculation method of the axial and radial velocity and grade-efficiency for high-efficiency cyclones; the radial velocity of scatterer points in the sea surface estimated from AT-INSAR images, where the estimation of radial velocities amounts to the solution of nonlinear integral equations (Xu and Qiu, 1995, 2001a, b; Qiu and Xu, 1996); Doppler radar products, where in GARP the base velocity for the 0.5 degree tilt is N0V and downburst signatures of straight-line winds are best seen using base velocity; angular velocity, measured in radians per second with π (3.14) radians equal to 180 degrees, and the rotational-speed formula (2 × π × n) / 60; and turbomachinery, where the flow leaving the rotor has a radial component of absolute velocity c2r that represents the velocity in the mass conservation equation.
References: "The fundamental definition of radial velocity" (https://www.iau.org/static/publications/IB91.pdf); Philosophical Transactions of the Royal Society of London; "The Radial Velocity Equation in the Search for Exoplanets (The Doppler Spectroscopy or Wobble Method)"; https://en.wikipedia.org/w/index.php?title=Radial_velocity&oldid=960628117 (page last edited on 4 June 2020).
| 2021-06-19 01:20:54 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8274903297424316, "perplexity": 1578.4231812184173}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487643354.47/warc/CC-MAIN-20210618230338-20210619020338-00127.warc.gz"} |
https://www.jeremyjordan.me/evaluating-image-segmentation-models/ | / Data Science
# Evaluating image segmentation models.
When evaluating a standard machine learning model, we usually classify our predictions into four categories: true positives, false positives, true negatives, and false negatives. However, for the dense prediction task of image segmentation, it's not immediately clear what counts as a "true positive" and, more generally, how we can evaluate our predictions. In this post, I'll discuss common methods for evaluating both semantic and instance segmentation techniques.
## Semantic segmentation
Recall that the task of semantic segmentation is simply to predict the class of each pixel in an image.
Our prediction output shape matches the input's spatial resolution (width and height) with a channel depth equivalent to the number of possible classes to be predicted. Each channel consists of a binary mask which labels areas where a specific class is present.
#### Intersection over Union
The Intersection over Union (IoU) metric, also referred to as the Jaccard index, is essentially a method to quantify the percent overlap between the target mask and our prediction output. This metric is closely related to the Dice coefficient which is often used as a loss function during training.
Quite simply, the IoU metric measures the number of pixels common between the target and prediction masks divided by the total number of pixels present across both masks.
$$IoU = \frac{{target \cap prediction}}{{target \cup prediction}}$$
As a visual example, let's suppose we're tasked with calculating the IoU score of the following prediction, given the ground truth labeled mask.
The intersection ($A \cap B$) is comprised of the pixels found in both the prediction mask and the ground truth mask, whereas the union ($A \cup B$) is simply comprised of all pixels found in either the prediction or target mask.
We can calculate this easily using Numpy.
import numpy as np  # target and prediction are boolean masks of the same shape

intersection = np.logical_and(target, prediction)
union = np.logical_or(target, prediction)
iou_score = np.sum(intersection) / np.sum(union)
The IoU score is calculated for each class separately and then averaged over all classes to provide a global, mean IoU score of our semantic segmentation prediction.
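Extending the snippet above to the per-class averaging just described gives a mean IoU in a few lines. This is a sketch of my own (the function name and the choice to skip classes absent from both masks are assumptions, not from the post), using integer label maps rather than the channel encoding:

```python
import numpy as np

def mean_iou(target, prediction, num_classes):
    """Compute IoU for each class and average over the classes present."""
    ious = []
    for c in range(num_classes):
        t = (target == c)
        p = (prediction == c)
        union = np.logical_or(t, p).sum()
        if union == 0:
            # class absent from both masks; skip so it doesn't bias the mean
            continue
        intersection = np.logical_and(t, p).sum()
        ious.append(intersection / union)
    return float(np.mean(ious))
```

With the channel encoding described earlier you would compare individual binary channels instead of label values, but the arithmetic is identical.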
#### Pixel Accuracy
An alternative metric to evaluate a semantic segmentation is to simply report the percent of pixels in the image which were correctly classified. The pixel accuracy is commonly reported for each class separately as well as globally across all classes.
When considering the per-class pixel accuracy we're essentially evaluating a binary mask; a true positive represents a pixel that is correctly predicted to belong to the given class (according to the target mask) whereas a true negative represents a pixel that is correctly identified as not belonging to the given class.
$$accuracy = \frac{{TP + TN}}{{TP + TN + FP + FN}}$$
This metric can sometimes provide misleading results when the class representation is small within the image, as the measure will be biased towards mainly reporting how well you identify negative cases (i.e. where the class is not present).
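Both the global and per-class variants reduce to one-liners in NumPy. This is a minimal sketch (function names are mine), again assuming integer label maps:

```python
import numpy as np

def pixel_accuracy(target, prediction):
    """Global accuracy: fraction of pixels with the correct label."""
    return float(np.mean(target == prediction))

def per_class_pixel_accuracy(target, prediction, cls):
    """Accuracy of the binary mask for one class: (TP + TN) / all pixels."""
    return float(np.mean((target == cls) == (prediction == cls)))
```

For a class covering only a sliver of the image, `per_class_pixel_accuracy` is dominated by true negatives, which is exactly the bias described above.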
## Instance segmentation
Instance segmentation models are a little more complicated to evaluate; whereas semantic segmentation models produce a single output segmentation mask, instance segmentation models produce a collection of local segmentation masks describing each object detected in the image.
#### Average Precision
To evaluate our collection of predicted masks, we'll compare each of our predicted masks with each of the available target masks for a given input.
• A true positive is observed when a prediction-target mask pair has an IoU score which exceeds some predefined threshold.
When evaluating a collection of prediction masks, we'll calculate the IoU score between each prediction-target mask pair and then determine which mask pairs have an IoU score exceeding the defined threshold value.
For a given threshold $t$, precision may be defined as:
$$Precision = \frac{{TP\left( t \right)}}{{TP\left( t \right) + FP\left( t \right) + FN\left( t \right)}}$$
Ultimately, we'd like for our predicted masks to have a high IoU with the ground truth masks. However, we don't want to set the threshold so high that we discard predictions which were close but not perfect matches. One way to overcome this is to average the precision score over a range of defined thresholds.
$$\frac{1}{{\left| {thresholds} \right|}}\sum\limits_t {\frac{{TP\left( t \right)}}{{TP\left( t \right) + FP\left( t \right) + FN\left( t \right)}}}$$
As an example, the Microsoft COCO challenge's primary metric for the detection task evaluates the average precision score using IoU thresholds ranging from 0.5 to 0.95 (in 0.05 increments).
For prediction problems with multiple classes of objects, this value is then averaged over all of the classes.
$$\frac{1}{{\left| {classes} \right|}}\sum\limits_c {\left( {\frac{1}{{\left| {thresholds} \right|}}\sum\limits_t {\frac{{TP\left( t \right)}}{{TP\left( t \right) + FP\left( t \right) + FN\left( t \right)}}} } \right)}$$ | 2018-12-11 03:24:33 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7709331512451172, "perplexity": 1375.8116947058554}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376823550.42/warc/CC-MAIN-20181211015030-20181211040530-00501.warc.gz"} |
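For a single class, the threshold-averaged metric can be sketched as below. The greedy one-to-one matching of predictions to targets is an assumption on my part (COCO-style evaluators match in order of prediction confidence), and the function name is illustrative:

```python
import numpy as np

def average_precision(iou_matrix, thresholds=np.arange(0.5, 1.0, 0.05)):
    """iou_matrix[i, j] = IoU between predicted mask i and target mask j.
    Greedily matches each prediction to its best still-unmatched target,
    then averages TP / (TP + FP + FN) over the IoU thresholds."""
    n_pred, n_true = iou_matrix.shape
    scores = []
    for t in thresholds:
        matched = set()
        tp = 0
        for i in range(n_pred):
            # consider targets for this prediction in decreasing IoU order
            for j in np.argsort(iou_matrix[i])[::-1]:
                if j not in matched and iou_matrix[i, j] > t:
                    matched.add(j)
                    tp += 1
                    break
        fp = n_pred - tp   # predictions with no matching target
        fn = n_true - tp   # targets with no matching prediction
        scores.append(tp / (tp + fp + fn))
    return float(np.mean(scores))
```

With two target masks matched at IoU 0.9 and 0.6, for example, both count as true positives at thresholds below 0.6 but only one does between 0.6 and 0.9, so the average degrades smoothly rather than falling off a cliff at a single threshold.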
https://www.x-mol.com/paper/1349468181943316480 | Physical Review Letters ( IF 8.385 ) Pub Date : 2021-01-13 , DOI: 10.1103/physrevlett.126.023902
Bertrand Kibler; Pierre Béjot
Multimode optical fibers are essential in bridging the gap between nonlinear optics in bulk media and single-mode fibers. The understanding of the transition between the two fields remains complex due to intermodal nonlinear processes and spatiotemporal couplings, e.g., some striking phenomena observed in bulk media with ultrashort pulses have not yet been unveiled in such waveguides. Here we generalize the concept of conical waves described in bulk media towards structured media, such as multimode optical fibers, in which only a discrete and finite number of modes can propagate. Such propagation-invariant optical wave packets can be linearly generated, in the limit of superposed monochromatic fields, by shaping their spatiotemporal spectrum, whatever the dispersion regime and waveguide geometry. Moreover, they can also spontaneously emerge when a rather intense short pulse propagates nonlinearly in a multimode waveguide, their finite energy is also associated with temporal dispersion. The modal distribution of optical fibers then provides a discretization of conical emission (e.g., discretized $X$ waves). Future experiments in multimode fibers could reveal different forms of dispersion-engineered conical emission and supercontinuum light bullets.
https://www.spkx.net.cn/CN/10.7506/spkx1002-6630-20181229-346 | • 基础研究 •
### Drying Characteristics and Mathematical Modelling of Penaeus vannamei during Superheated Steam Drying
1. (1. Energy Research Institute, Qilu University of Technology (Shandong Academy of Sciences), Jinan 250103, Shandong, China; 2. Shandong Haicheng Ecological Technology Group Co. Ltd., Binzhou 251900, Shandong, China)
• Online: 2020-02-15 Published: 2020-02-26
• Funding:
Shandong Academy of Sciences–Wudi Industry-Academy Collaborative Innovation Fund Project (2016CXY-5); Shandong Provincial Key Research and Development Program (Major Key Technologies) Project (2016ZDJS06B01)
### Drying Characteristics and Modelling of Penaeus vannamei during Superheated Steam Drying
YUN Dongling, GENG Wenguang, DU Rui, SUN Rongfeng, WANG Shouquan, ZHAO Gaiju
1. (1. Energy Research Institute, Qilu University of Technology (Shandong Academy of Sciences), Jinan 250103, China; 2. Shandong Haicheng Ecological Technology Group Co. Ltd., Binzhou 251900, China)
• Online:2020-02-15 Published:2020-02-26
Abstract: The purpose of this study was to investigate the drying characteristics of Penaeus vannamei in superheated steam. The drying experiments were carried out in the temperature range of 130–160 ℃. Non-linear fitting analysis of the experimental data was carried out using six common drying models to determine and validate the optimal drying model. Furthermore, effective moisture diffusion coefficients at different temperatures were calculated, and the relationship between the effective moisture diffusion coefficient and temperature was established according to the Arrhenius equation. The results showed that the superheated steam drying of Penaeus vannamei was a falling-rate drying process, and the drying temperature had a significant effect on the drying process: a higher drying temperature resulted in a greater drying rate. Under the experimental conditions, the data were best fitted by the Logarithmic model, which could accurately estimate the water loss rate of Penaeus vannamei during drying at different superheated steam temperatures. As the superheated steam temperature increased, the effective diffusion coefficient increased from 3.186 08 × 10⁻⁹ to 7.289 72 × 10⁻⁹ m²/s, with an activation energy of 39.631 kJ/mol. Furthermore, the color of the dried product was better at lower drying temperatures, but excessively high temperatures had a negative effect on it. Considering both the drying rate and the dried product quality, the superheated steam temperature should not exceed 150 ℃.
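The Arrhenius relationship mentioned in the abstract, D_eff = D₀·exp(−Ea/(R·T)), can be sanity-checked with just the two endpoint diffusivities quoted above. Note this two-point estimate is my own sketch, not the paper's fit (which presumably used all four temperatures), so it only approximates the reported 39.631 kJ/mol:

```python
import math

R = 8.314                          # gas constant, J/(mol*K)
D1, T1 = 3.18608e-9, 130 + 273.15  # effective diffusivity (m^2/s) at 130 C
D2, T2 = 7.28972e-9, 160 + 273.15  # effective diffusivity (m^2/s) at 160 C

# ln(D2/D1) = (Ea/R) * (1/T1 - 1/T2), solved for the activation energy Ea
Ea = R * math.log(D2 / D1) / (1 / T1 - 1 / T2)   # J/mol, ~4.0e4
```

The two-point value lands within about 1 kJ/mol of the paper's reported activation energy, which is consistent with the quoted numbers.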
| 2023-03-25 14:39:38 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2973518967628479, "perplexity": 5473.027201616076}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296945333.53/warc/CC-MAIN-20230325130029-20230325160029-00461.warc.gz"} |
https://stats.stackexchange.com/questions/311322/robbins-estimate-empirical-bayes/311641 | # Robbins estimate Empirical Bayes
From the compound sampling model where:
$Y_i | \theta_i \sim Poi(\theta_i)$
The marginal distribution of $\theta_i$ is $G$, non-parametric.
We get that the Bayes estimate of $\theta_i$ under squared error loss is the posterior mean:
$\hat{\theta}_i = \mathbf{E}(\theta_i|\mathbf{y}) = \mathbf{E}(\theta_i|y_i)=\frac{\int(u^{y_i+1}e^{-u}/y_i!)dG(u)}{\int(u^{y_i}e^{-u}/y_i!)dG(u)}=\frac{(y_i+1)p(y_i+1)}{p(y_i)} \quad (\star)$
Where $p(y_i)= \int f(y_i|\theta)\,dG(\theta)$ is the marginal distribution of $y_i$, with $f(y_i|\theta)=\theta^{y_i}e^{-\theta}/y_i!$ the Poisson density.
It is claimed that $\star$ is monotonic in $y$ in the text I am reading about this.
If we try to compare the function $\theta_i(y)$ and $\theta_i(y+1)$ we need to show $$\frac{\int(u^{y_i+1}e^{-u}/y_i!)dG(u)}{\int(u^{y_i}e^{-u}/y_i!)dG(u)} \leq \frac{\int(u^{y_i+2}e^{-u}/(y_i+1)!)dG(u)}{\int(u^{y_i+1}e^{-u}/(y_i+1)!)dG(u)}$$
The $!$ terms cancel but I am unable to finish the proof.
this is a mere consequence of the Cauchy–Schwarz inequality, applied with respect to the measure $e^{-u}\,\text{d}G(u)$: $$\left(\int f(u)g(u)\,e^{-u}\text{d}G(u)\right)^2 \le \int f(u)^2\,e^{-u}\text{d}G(u) \int g(u)^2\,e^{-u}\text{d}G(u)$$ when $$f(u)=u^{y_i/2}\qquad g(u)=u^{y_i/2+1}$$ since then $f(u)g(u)=u^{y_i+1}$, $f(u)^2=u^{y_i}$ and $g(u)^2=u^{y_i+2}$, which is exactly the inequality to be shown.
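Replacing the unknown marginal $p$ in $(\star)$ by empirical frequencies gives Robbins' classical nonparametric empirical Bayes estimator. A small illustrative sketch (the function name and the handling of $N(y)=0$ are my own choices):

```python
import numpy as np

def robbins_estimate(y, obs):
    """Robbins' estimate of E[theta_i | y_i = y]: (y + 1) * N(y + 1) / N(y),
    where N(k) is the number of observations in obs equal to k."""
    n_y = np.sum(obs == y)
    n_y1 = np.sum(obs == y + 1)
    if n_y == 0:
        raise ValueError("no observations equal to y")
    return (y + 1) * n_y1 / n_y
```

Because $N(y+1)/N(y)$ need not be monotone in finite samples, the raw Robbins estimator can violate the monotonicity proved above; smoothed or isotonized variants restore it.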
https://math.stackexchange.com/questions/410590/integration-by-substitution-problem-x-c-sint | # Integration by substitution problem $x = C \sin(t)$.
For solving the integral: $$\int_a^b \sqrt{\alpha^2 - \beta^2 x^2} \, dx$$ I've been taught to use $x = \frac{\alpha}{\beta} \sin(t)$ in order to get $$\frac{\alpha^2}{\beta} \int_{\arcsin(a \beta/\alpha)}^{\arcsin(b \beta/\alpha)} \sqrt{1-\sin(t)^2} \cos(t) \,dt$$ which is easier since it is $\int\cos^2$ and by the identity $\cos(t)^2 = \frac{1}{2}(1 + \cos(2t))$ it's done. But what if $x \geq \alpha/\beta$? Am I missing something? Thanks and sorry if I am asking something obvious.
If $x > \alpha/\beta$ (assuming that $\alpha,\beta > 0$), then the integrand $\sqrt{\alpha^2 - \beta^2x^2}$ will be complex, since $\alpha^2 - \beta^2 x^2 < \alpha^2 - \beta^2\cdot\frac{\alpha^2}{\beta^2} = 0$. The function can still be integrated, but I assume your domain of integration will avoid such problems, because you would need knowledge of integration of complex functions to deal with the integral if that ever happens.
If $x = \alpha/\beta$ on the nose for some $x$ in the domain of integration, then you're still fine using ordinary methods to evaluate the integral, because $\sqrt{\alpha^2 - \beta^2x^2} = \sqrt{\alpha^2 - \beta^2\cdot\frac{\alpha^2}{\beta^2}} = \sqrt{0} = 0$.
Hence, if you want your integral to end up being a real number, your $a$ and $b$ will have to satisfy $$a,b\in\left[-\left|\frac{\alpha}{\beta}\right|,\left|\frac{\alpha}{\beta}\right|\right].$$
You can also see that you can only consider the substitution above when $a$ and $b$ are as above by looking at what happens to the limits under the substitution: $\arcsin x$ is only defined on $\left[-1,1\right]$, so we need $a\beta/\alpha \geq -1$ and $b\beta/\alpha\leq 1$ (here I'm again assuming you're taking $\alpha,\beta > 0$) in order for the second equation to even make sense! But you can see that the conditions just stated are the same as $a$ and $b$ being in the interval $\left[-\left|\frac{\alpha}{\beta}\right|,\left|\frac{\alpha}{\beta}\right|\right]$. So, we see that the substitution $x = \frac{\alpha}{\beta}\sin t$ is only valid when $-\left|\frac{\alpha}{\beta}\right|\leq a\leq x\leq b\leq \left|\frac{\alpha}{\beta}\right|$. | 2020-02-22 07:46:24 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.975188672542572, "perplexity": 75.8062414232396}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875145654.0/warc/CC-MAIN-20200222054424-20200222084424-00489.warc.gz"} |
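Within that valid interval, the substitution's result is easy to verify numerically. The antiderivative below comes from carrying the substitution through ($\int \cos^2 t\,dt = \tfrac{1}{2}(t + \sin t\cos t)$, back-substituted); the specific values of $\alpha$, $\beta$, $a$, $b$ are illustrative:

```python
import numpy as np

alpha, beta = 2.0, 3.0   # alpha, beta > 0
a, b = -0.5, 0.6         # both inside [-alpha/beta, alpha/beta]

def F(x):
    # antiderivative obtained from the substitution x = (alpha/beta) sin t
    s = beta * x / alpha
    return alpha**2 / (2 * beta) * (np.arcsin(s) + s * np.sqrt(1 - s**2))

# trapezoid-rule estimate of the integral for comparison
x = np.linspace(a, b, 200001)
y = np.sqrt(alpha**2 - beta**2 * x**2)
numeric = np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x))

closed = F(b) - F(a)
```

The two agree to high precision, and trying $b > \alpha/\beta$ makes the square root complex, exactly as discussed above.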
https://physics.stackexchange.com/questions/637799/is-there-a-name-for-what-feynman-called-a-fundamental-constant-i-e-ratio-of-el/637806 | # Is there a name for what Feynman called a fundamental constant i.e. "ratio of electrical repulsion to gravitational attraction between electrons"?
Paraphrasing from Feynman's lecture on physics, from the chapter on gravity
If we take, in some natural units, the repulsion of two electrons (nature’s universal charge) due to electricity, and the attraction of two electrons due to their masses, we can measure the ratio of electrical repulsion to the gravitational attraction. The ratio is independent of the distance and is a fundamental constant of nature. The ratio is shown in Fig. 7–14. The gravitational attraction relative to the electrical repulsion between two electrons is $$1 / 4.17×10^{42}$$! The question is, where does such a large number come from? It is not accidental, like the ratio of the volume of the earth to the volume of a flea.
Does this fundamental constant of nature have a name? And does it have some profound significance in physics? Is it another one of those "fine-tuned " constants? Or would a slight change in it not matter much?
• I just want to point out that the constant he is talking about is the "ratio of electrical repulsion to gravitational attraction of two electrons." Your headline omits the electrons. So to avoid confusion: this specific ratio is fundamental as explained in the other answers, but it is not universal in the sense that the ratio is different for different objects. The simplest examples perhaps being the electron-positron pair (the ratio has the opposite sign), and a pair of two protons (the ratio is approximately 3.3 million times smaller as the proton mass is 1836× the electron mass). May 21 at 3:53
• @tobi_s I ran out of characters in the headline, so I made sure to give the full quote in the detailed description. May 21 at 4:40
• @silverrahul I figured that this would be the case but thought it's still useful to make this explicit as it might confuse inattentive or lay readers who don't catch the significance of the electrons mentioned in the question or the answers. May 21 at 4:47
• @tobi_s Yeah, I get that, but I just cannot find a way to fit the additional characters into the headline. May 21 at 4:54
• @tobi_s since the proton is a compound particle, maybe the muon and tauon are better examples. May 21 at 11:02
For two electrons separated by distance $$r$$, we have
$$F_g = \frac{Gm^2}{r^2}$$
and
$$F_e = \frac{1}{4\pi\epsilon_0}\frac{e^2}{r^2}$$
The ratio is
$$\frac{F_e}{F_g} = \frac{e^2}{4\pi\epsilon_0 G m^2}$$
Now choose a unit system in which $$4\pi\epsilon_0 G = 1$$, yielding
$$\frac{F_e}{F_g} = \frac{e^2}{m^2}$$
So the constant Feynman is referring to is the charge-to-mass ratio of the electron. I think most physicists consider it an important fundamental constant. It was first measured by J.J. Thomson in 1897.
I'll have to let others comment on what the universe might look like if $$\frac{e}{m}$$ had a different value.
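As a sanity check on Feynman's number, the ratio derived above can be evaluated directly from tabulated constants (rounded CODATA-style values; treat them as approximate):

```python
# Numerical check of F_e/F_g = e^2 / (4*pi*eps0 * G * m_e^2)
import math

e    = 1.602176634e-19   # elementary charge, C
eps0 = 8.8541878128e-12  # vacuum permittivity, F/m
G    = 6.67430e-11       # gravitational constant, m^3 kg^-1 s^-2
m_e  = 9.1093837015e-31  # electron mass, kg

ratio = e**2 / (4 * math.pi * eps0 * G * m_e**2)
print(f"{ratio:.3e}")  # ~4.17e42, matching the number Feynman quotes
```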
What he means when he says it's a fundamental constant is just that you can measure it anywhere in the universe and you'll get (as far as we can tell) the same value anywhere. The ratio of the volume of the earth to the volume of a flea can only be measured here, and also depends on the flea.
If it has a name, I don't know it. Rodney Dunning's answer calls it the charge/mass ratio of the electron, but it's different from that, not only because it's squared and has a factor of $$1/4\pi\epsilon_0 G$$ but also because the $$m$$ in it is gravitational mass instead of inertial mass. (As far as we can tell, gravitational and inertial mass are equal, but the experiments to measure them are different.)
There are 26 constants in current physical theories, give or take. Feynman's constant normally isn't taken to be one of those, but it could be. Saying there are 26 constants really just means that the parameter space of the theory is 26-dimensional. Listing 26 particular constants amounts to choosing a coordinate system for that space. You could use Feynman's constant as one of the coordinates, though it isn't a standard choice.
The largeness of this constant (or its reciprocal) is necessary for our existence. If it was close to 1, as you might naturally expect it to be, then the universe would have recollapsed under its own self-gravity long before enough time had elapsed for planets to form and biological evolution to happen on them. I don't know whether it's fine-tuned in the sense that a smaller change (say a couple of orders of magnitude) would preclude our existence. It's not actually a well-defined question as stated, because the effect of varying a parameter depends on what other parameters you're holding constant (or in other words, what the other 25 coordinates in your coordinate system are).
• Coincidentally, the universe is also 26 dimensional in bosonic string theory? May 21 at 19:01
This constant is the ratio between the fine structure constant $$\alpha$$ and the gravitational coupling constant $$\alpha_G$$. It is denoted by $$N$$ in Martin Rees's Six Numbers, but he mostly calls it "the big number".
It has pretty important effects on what kind of structures are possible in the universe. Basically it sets the size hierarchy scale between things, affects stellar formation and lifetimes, and the size of planets. To have life somewhat like ours only a portion of the $$\alpha,\alpha_G$$ plane is possible, generally implying that the ratio has to be small.
• Isn't N different from what you said? Isn't it the ratio of the electrostatic and gravitational forces between 2 protons? May 20 at 17:37
Does this fundamental constant of nature have a name?
Not generally, as far as I know (which is not much...).
And does it have some profound significance in physics?
Again, probably not. The reasoning is that these two forces operate in such incredibly different regimes (as witnessed by the $$10^{42}$$).
The issue is that in large bodies (planets...), charge mostly cancels out - even if a celestial body carries a net charge for whatever reason, it is still "mostly" a mix of protons and electrons, and most of them will cancel. On the other hand, we know of no mechanism that would cancel out mass; every object whatsoever that constitutes said large body contributes to its overall mass and hence gravity. In other words, we have no indication that anti-gravity exists, while "anti-charge" exists just fine and is the default state of every usual atom (i.e., positive and negative charges in the form of protons and electrons).
So in anything but the quantum realm, those two aspects do not occur meaningfully at the same time. If you have a celestial body, you have gravity, and can more or less ignore charge. If you have something small enough to carry a meaningful charge (in relation to its mass), then you don't care about gravity (in relation to the charge, of course, not in absolute terms).
There might or might not be more to it on a quantum level, but we don't know. That's what a grand unified theory would hopefully clear up for us.
Is it another one of those "fine-tuned " constants? Or would a slight change in it not matter much?
See above. Until we have the GUT and know how these things work, there is no way to even guess. Still, since the constant is distance-independent, charge on a quantum level still is so much more "forceful" than gravity... hard to imagine that changing the constant even by a few zeroes would matter in any way whatsoever.
• is there thought to be a relationship between "anti-gravity" and dark energy? May 21 at 19:03
• If there is, it would be on a very large scale and with a very small energy density, so not nearly comparable to "anti-gravity" (or "anti-mass") as we see in SciFi movies. Wikipedia lists the energy density as $\sim 7 \times 10^{-30}\,\mathrm{g/cm^3}$...
– AnoE
May 25 at 6:49
Yes. This ratio is called the Dirac number. In particular, speculations by Dirac led him to suggest that the gravitational constant $$G$$ varies as $$1/t$$.
For additional discussion of this there’s this review available on arXiv:
Ray S, Mukhopadhyay U, Ghosh PP. Large number hypothesis: A review. arXiv preprint arXiv:0705.1836. May 13 2007.
There’s also a very nice chapter on dimensional analysis, dimensionless constants and such numerology in
Barrow, John, and Frank Tipler. "The cosmological anthropic principle." (1986).
The difficulty with this numerology is that, if you look hard enough, you can find pretty much anything you want in terms of numerical coincidences (especially if you allow multiplication or divisions by geometrical dimensionless factors like $$4\pi$$ or $$4\pi^2$$). The key is in finding supporting well-grounded physics arguments to explain the ratio.
The choice of two electrons is one of many possibilities. I can think of infinitely many charged particles, and even more combinations of these. Restricting ourselves to elementary particles, I still have 3 quark and lepton generations and two charged bosons, so 66 fundamental constants. You have to add fundamental constants based on other interactions. Why only take the ratio to gravity? Probably more than 100 fundamental constants result.
https://blogs.ubc.ca/organizingchaos/2017/05/ | # Mathematical Aside: Golden Mean Shift and Pascal’s Triangle
I noticed something curious when I was working on my summer project last year. Basically, you consider the set $c_n$ of binary strings (strings of length $n$ where each symbol is either a $1$ or a $0$ and the $1$'s are non-adjacent). So we construct a table like
| Length | Strings |
|---|---|
| 1 | 0; 1 |
| 2 | 00; 01; 10 |
| 3 | 000; 001; 010; 100; 101 |
| 4 | 0000; 0001; 0010; 0100; 0101; 1000; 1001; 1010 |
| 5 | … |
| 6 | … |
And then count the number of elements of length $n$ with $m$ ones.
Number of ones $m$ (rows) versus length of string $n$ (columns):

| $m$ \ $n$ | 1 | 2 | 3 | 4 | 5 | 6 |
|---|---|---|---|---|---|---|
| 0 | 1 | 1 | 1 | 1 | 1 | 1 |
| 1 | 1 | 2 | 3 | 4 | 5 | 6 |
| 2 | 0 | 0 | 1 | 3 | 6 | 10 |
| 3 | 0 | 0 | 0 | 0 | 1 | 4 |
| 4 | 0 | 0 | 0 | 0 | 0 | 0 |
Do you see Pascal's Triangle hiding inside? Start from the top left and read diagonally down to the left: first comes 1, 2, 1, then 1, 3, 3, 1, etc. The explanation is simple enough. We use the fact that the subset of $c_n$ composed of strings with $m$ ones (let's call this $c_{n,m}$) can be made by taking an element of $c_{n-1,m}$ and appending a zero to it, or an element of $c_{n-2,m-1}$ and appending a zero-one to it. We end up with the recurrence $|c_{n,m}| = |c_{n-1,m}| + |c_{n-2,m-1}|$. If we consider $d_{n,m}$ as the $n^{th}$ element in the $m^{th}$ diagonal, then we realize that $d_{n,m}$ follows the same recurrence. This recurrence is well documented in the literature for calculating $|c_n|$; here we are just using this idea for subsets of $c_n$ with exactly $m$ ones to connect it with Pascal's Triangle.
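The recurrence can be checked by brute force; the closed form $\binom{n-m+1}{m}$ used below is the standard count of such strings and is consistent with the diagonals of Pascal's Triangle described above:

```python
from itertools import product
from math import comb

def count_strings(n, m):
    # Brute-force: binary strings of length n with m ones and no two ones adjacent.
    return sum(
        1 for s in product("01", repeat=n)
        if s.count("1") == m and "11" not in "".join(s)
    )

for n in range(3, 10):
    for m in range(0, n):
        # The recurrence from the post: |c(n,m)| = |c(n-1,m)| + |c(n-2,m-1)|.
        assert count_strings(n, m) == count_strings(n - 1, m) + count_strings(n - 2, m - 1)
        # The entries are binomial coefficients along Pascal's diagonals.
        assert count_strings(n, m) == comb(n - m + 1, m)
```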
The adventure continues in Part 2!
https://math.stackexchange.com/questions/3101958/counting-4-digit-combinations-such-that-the-first-digit-is-positive-and-even-se | # Counting 4-digit combinations such that the first digit is positive and even, second is prime, third is Fibonacci, and fourth is triangular
This seemed like a basic problem, but for some reason I can't figure it out:
In a $$4$$-digit combination, the first digit has to be a positive even number, the second a prime number, the third a Fibonacci number and the fourth a triangular number. The question asks me how many unique sets of $$4$$ combinations can I select from if digits CAN be repeated.
• (A) $$625$$
• (B) $$375$$
• (C) $$300$$
• (D) $$256$$
• (E) $$240$$
This was my attempt:
All the numbers have to be less than $$10$$ as I need one digit. The sets which each digit can come from are
• Even number = $$\{2,4,6,8\}$$
• Prime number = $$\{2,3,5,7\}$$
• Fibonacci number (quite confused!!) = $$\{0,1,1,2,3,5,8\}$$ or just one $$1$$ ?
• Triangular number = $$\{1,3,6\}$$
Then perhaps the total number of such numbers is given by $$4 \cdot 4 \cdot 7 \cdot 3 = 336.$$ But that's not an answer choice! Or, with one less Fibonacci number, $$4 \cdot 4 \cdot 6 \cdot 3 = 288$$, which is still not a choice.
So I am totally confused and would appreciate any help.
P.S. The question is from MATH UIL (2014 Regionals) for anyone wondering. Go to page 49 in this PDF
(The question asks: 13. Willie Lawkette uses...)
• I think your math question lives in the world where $0$ doesn't count as a Fibonacci number, making the set $F = \{1,2,3,5,8\}$. Then you have $4\cdot4\cdot5\cdot3 = 240$ – Christopher Marley Feb 6 at 2:34
I'm going to make two comments on potential ambiguities in your question - the likely source of your questions and that give anyone a problem in solving this. It shows that the problem is very poorly framed if you're giving a multiple-choice exam.
Fibonacci number (Quite confused!!) = $$\{0,1,1,2,3,5,8\}$$ or just one $$1$$ ?
For this, note that only having one $$1$$ is relevant. Think about what the four-digit number would look like if you chose a $$1$$ from a pair of $$1$$'s - sure, different "numbers" in some sense, but the four-digit number would be the same regardless of which one you get.
In that sense, it is more fruitful to think about distinct or unique four digit combinations.
Also, a number is a Fibonacci number if it appears at all in the Fibonacci sequence, i.e. $$1$$ is not somehow "twice as much" a Fibonacci number (whatever that would mean) as the others. Basically the question you ask yourself is "is this number in the Fibonacci sequence?" If so, include it in the set. If not, don't.
Now, an issue: is $$0$$ a Fibonacci number?
Well, consider how we define the $$n$$-th Fibonacci number:
$$F_n = F_{n-1} + F_{n-2}$$
But for this to be useful, we need to define some "first" values, some seed values. We can define $$F_0 = 0, F_1 = 1$$. Or sometimes we define $$F_1 = F_0 = 1$$. Both sequences are ultimately the same, there's just a shifting over of the terms. Notice how one definition explicitly forbids $$0$$ from being in the sequence unless we work backwards (which is not the "standard" Fibonacci sequence in a sort of sense).
Triangular number = $$\{1,3,6\}$$
Herein lies the second ambiguity I wish to comment on.
A definition for the $$n$$-th triangular number $$T_n$$ can be given by
$$T_n = \sum_{k=0}^n k = 0 + 1 + 2 + ... + n = \frac{n(n+1)}{2}$$
Take $$n=0$$. The sum is zero, giving $$0$$ as the "zeroth" triangular number.
This definition is essentially the same as on Wikipedia (https://en.wikipedia.org/wiki/Triangular_number) except we start summing at $$0$$ and not $$1$$. This is perfectly fine though.
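As a quick check of the closed form $T_n = n(n+1)/2$ against the defining sum, including the "zeroth" value:

```python
def T(n):
    # Closed form for the n-th triangular number.
    return n * (n + 1) // 2

for n in range(20):
    assert T(n) == sum(range(n + 1))  # matches the defining sum 0 + 1 + ... + n

print(T(0), [T(n) for n in range(1, 5)])  # -> 0 [1, 3, 6, 10]
```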
So, what have we noticed in this discussion? Three main things:
• First, you're not going to include $$1$$ twice in the set for that digit.
• Is $$0$$ a Fibonacci number? You can define it as the seed, or not, without meaningfully affecting the sequence.
• Is $$0$$ a triangular number? You can define $$T_n$$ in a manner which permits it to be one without issue.
I think we can claim that there are four choices for each of the first two digits in your four-digit number without issue. But we might have $$5$$ or $$6$$ for the third, and $$3$$ or $$4$$ for the fourth.
So in that light, we could have
$$4 \cdot 4 \cdot 5 \cdot 3 = 240$$ $$4 \cdot 4 \cdot 5 \cdot 4 = 320$$ $$4 \cdot 4 \cdot 6 \cdot 3 = 288$$ $$4 \cdot 4 \cdot 6 \cdot 4 = 384$$
Now, luckily, only the first corresponds to an answer (E) - that being, $$0$$ is neither a triangular number nor a Fibonacci number.
But note that these definitions easily permit $$0$$ to be either, and some places cite it as such. For example, per Wikipedia (https://en.wikipedia.org/wiki/Fibonacci_number), $$0$$ is a Fibonacci number in more recent books, but typically omitted in older texts. The OEIS sequence for triangular numbers (https://oeis.org/A000217) starts at the "zeroth" one, $$0$$.
So I feel with these ambiguities in place the question wasn't properly framed, but that's perhaps a matter of opinion.
I guess one could make the argument that to be triangular or Fibonacci, a number must be natural. (But then that touches on the debate of whether $$0$$ is a natural number, doesn't it? So being positive would likely be less ambiguous. :p).
So I suppose I'll end this post by giving you those things to mull over.
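The counts discussed in this answer can be reproduced by brute-force enumeration; the digit sets below follow the reading in which $$0$$ is neither a Fibonacci nor a triangular number:

```python
from itertools import product

# Digit sets under the reading that 0 is neither Fibonacci nor triangular:
evens     = {2, 4, 6, 8}     # positive even digits
primes    = {2, 3, 5, 7}     # prime digits
fibs      = {1, 2, 3, 5, 8}  # Fibonacci digits (1 counted once)
triangles = {1, 3, 6}        # triangular digits

combos = set(product(evens, primes, fibs, triangles))
print(len(combos))  # 240 -- answer (E)

# Including 0 in either set reproduces the other candidate counts:
print(len(evens) * len(primes) * len(fibs | {0}) * len(triangles))        # 288
print(len(evens) * len(primes) * len(fibs) * len(triangles | {0}))        # 320
print(len(evens) * len(primes) * len(fibs | {0}) * len(triangles | {0}))  # 384
```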
http://sbseminar.wordpress.com/2007/08/11/algebraic-topology-of-finite-topological-spaces/ | ## Algebraic topology of finite topological spaces August 11, 2007
Posted by Noah Snyder in Algebraic Topology, fun problems.
Here’s a fun question that was floating around Mathcamp last week: find a finite topological space which has a nontrivial fundamental group. One answer to this question after the jump.
One example is a space S with 4 points, two of which are open and two of which are closed. First, consider the line with the origin doubled. Now quotient out by setting all positive points equal to each other, and all negative points equal to each other. This gives a four point space S.
There’s a map from the circle to S given by sending your favorite two points on the circle to the closed points, and the two open intervals between them to the open points. It is not difficult to see that this cannot be extended to the disc. A better proof is to exhibit S’ the universal cover of S. The space S’ looks like:
The points in the middle column are closed. The points in the other two columns are open, and the closure of any such point contains the two nearest points in the middle column. S' is not contractible, but any compact (i.e. finite) subset of it is contractible, so it is simply connected. Hence $\pi_1(S) \cong \mathbb{Z}$ since the deck transformations of S' just come from shifting up and down.
Here are two more fun problems: find all the homology and homotopy groups of this 4 point space.
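One concrete way to see that this 4-point space is a finite model of the circle is to enumerate the chains of its specialization order (the order complex). This is a sketch; the point names a, d (closed) and b, c (open), with the closed points below the open ones, are my own labeling of the space described above:

```python
from itertools import combinations

# The pseudocircle: closed points a, d; open points b, c.
# Specialization order (x <= y iff x lies in the closure of y):
points = ["a", "b", "c", "d"]
below = {("a", "b"), ("a", "c"), ("d", "b"), ("d", "c")}

def is_chain(subset):
    # A chain is a subset that is totally ordered by <=.
    return all(
        (x, y) in below or (y, x) in below
        for x, y in combinations(subset, 2)
    )

# Simplices of the order complex = nonempty chains of the poset.
simplices = [
    list(s)
    for r in range(1, len(points) + 1)
    for s in combinations(points, r)
    if is_chain(s)
]
vertices = [s for s in simplices if len(s) == 1]
edges    = [s for s in simplices if len(s) == 2]
print(len(vertices), len(edges), max(map(len, simplices)))
# 4 vertices, 4 edges, no higher simplices: the order complex is a
# 4-gon, i.e. a circle, with Euler characteristic 4 - 4 = 0.
```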
1. James - August 11, 2007
For general topological spaces, you wouldn’t expect the usual fundamental group defined in terms of paths to still classify covering spaces. But I think I remember hearing that there is still a Grothendieck-style definition of a fundamental group that classifies finite-degree covering spaces. In the special case of a CW complex, this would be the profinite completion of the usual fundamental group. I don’t know if possible to do it without the finite-degree restriction. Maybe it’s just what comes out of Grothendieck’s formalism, which was creating with algebraic fundamental groups in mind.
2. Noah Snyder - August 12, 2007
Acording to Wikipedia (and to Eric, a camper and our resident expert on point-set topology) a space has a universal cover if and only if it is path-connected, locally path-connected, and semi-locally simply connected. All of these conditions are easy to check for S.
3. Ben Webster - August 12, 2007
James-
There’s a more categorical way of thinking about the fundamental group: if you have any reasonable notion of “covering space,” then one can take the category of covering spaces of a given one. Call this $\mathcal C$.
For any reasonable notion of covering space (in particular, the usual topological one), one has a functor $\mathcal{C}\to \mathsf{Set}$ sending a cover to the fiber over a generic point. This functor is even monoidal for the “tensor product” on coverings given by fiber product. By analogy with the Tannakian formalism, one can define the fundamental group of $X$ for a given notion of covering to be the automorphism group of this forgetful functor.
If $X$ has a universal cover, then you can check that you’ll get back the usual fundamental group. If you restrict to finite covers, you’ll get the profinite completion of the fundamental group. If you switch to the algebraic category, you should get Grothendieck’s algebraic fundamental group. The reason that you get a profinite group here is that the algebraic restriction forces you to only consider finite covers.
4. James - August 13, 2007
Right. What I meant was that I remember hearing that you always have a Galois category (i.e. the finite discrete version of a Tannakian category) for any topological space whatsoever. And so even though you can’t always define pi_1, you can always define its completion, or rather what would be its completion if pi_1 existed.
5. carnahan - August 20, 2007
It looks like any sufficiently subdivided CW complex can be rendered as a locally finite topological space in the same way as you did with the circle. In particular, you should be able to get any finitely presented group as $\pi_1$ of a finite topological space.
Are there interesting questions about finite “homotopy types”? It’s not clear that this adds anything new to algebraic topology.
Incidentally, there are multiple algebraic categories (e.g., tame, etale, Nisnevich), coming from different notions of cover, and they yield very different fundamental groups.
6. Todd Trimble - August 20, 2007
Cool “postcards” from mathcamp, Noah! Your entry here got me thinking:
There is an equivalence of categories
O: FinTop –> FinPreOrd
between finite topological spaces and finite preorders,
where the order –> in O(X) is defined by x –> y iff x is
contained in the closure of y. For Noah’s 4-point example S, the associated preorder O(S) looks like
b –> a, b –> d, c –> a, c –> d
with b and c both pointing to a and to d (no other relations).
On the other hand, one can take the classifying space of a finite preorder
B: FinPreOrd –> Top
as usual, by taking geometric realization of the nerve of the preorder (considered as a category). On Noah’s example S, the classifying space of the associated preorder, BO(S), is a circle S^1.
The map S^1 –> S that Noah defined generalizes: for finite topological spaces X, I believe I can define a continuous map
BO(X) –> X,
almost as a piece of pure category theory. In the end, it comes down to defining a continuous map
Aff(n) –> D(n)
from the n-dimensional affine simplex to the finite topological space with n+1 points represented by the preorder Delta_n = (0 –> 1 –> … –> n). I’ll leave this to the imagination for now (details available on request).
Then, does anyone know what can be said of this map BO(X) –> X in terms of homotopy? For example, does pi_1 induce an isomorphism? What happens with higher homotopy groups?
7. Eric - August 21, 2007
If X is T_0 (I haven't checked whether it still works for non-T_0 spaces), the map BO(X) -> X (which is a quotient map) turns out to have a nice universal property: any map Y -> X lifts to BO(X), as long as Y is sufficiently nice (metrizable or a CW complex, say; the actual condition is hereditary perfect normality). Furthermore, the lift is unique up to a homotopy such that every stage of the homotopy is a lift. It's easy to see that this implies that the map induces isomorphisms on all homotopy groups. You can either use this to show it also induces isomorphisms on homology, or you can prove that directly by induction on the number of points and Mayer-Vietoris.
Any barycentric subdivision of a simplicial complex C is BO(X), where X is the poset of faces of C ordered by inclusion. Thus every finite simplicial complex has a finite “model”.
8. Todd Trimble - August 21, 2007
Thanks, Eric — very useful reply. I think the weak homotopy equivalence for finite T_0 spaces implies the same holds for all finite spaces:
A finite space X is T_0 iff its associated preorder is a poset, and every preorder P is equivalent as a category to a (unique up to isomorphism) poset P’, with P’ a retract of P. It’s well known that the categorical equivalence implies BP and BP’ are homotopy equivalent. On the other hand, the equivalence P ~ P’ means there is a preorder map
(0 –> 1) = 2 –> hom(P, P)
sending 1 to the identity and 0 to a factoring through P’. Now switch to the topological picture, and pull back along the evident continuous map I = [0, 1] –> 2 to conclude that P and P’ are homotopy equivalent as spaces.
It now follows from naturality of BO(X) –> X that this map is a weak homotopy equivalence for all finite X.
9. Benjamin Steinberg - June 1, 2012
This is way late, but McCord showed any finite simplicial complex is weakly equivalent to its poset of faces with the Alexandrov topology so you can get any finitely presented group.
In particular the nerve of a poset is weakly equivalent to the poset.
https://physics.stackexchange.com/questions/356936/causality-and-speed-of-light | # Causality and speed of light
It is accepted that the speed of light is the speed of causality. If we exceed the speed of light, the order of cause and effect breaks down. This happens as we see our surroundings moving backward in time. Right?
However, how do we know they move back in time once we move faster than light? If $v>c$, then in the formula $$t'=\frac{t}{(1-v^2/c^2)^{1/2}}$$ we get an answer that is not defined, so how do we know time moves backwards?
• Speeds faster than light are impossible in the geometry of our spacetime. In relativity, "speed" is not simply distance by time, as distance and time are no longer independent. Instead, they are two sides of the same spacetime. In this geometry, light moves with the speed of time and therefore time stops for the photons while the distance shrinks down to zero. You cannot exceed the speed of light not because time would go backwards (you pointed this out correctly in the last sentence). You simply cannot move slower in time than not moving at all. Not a technical limitation, but pure geometry. – safesphere Sep 13 '17 at 5:36
• Like you can't get any farther north than the North Pole on the globe, because your distance to the true North is already zero and can't get any smaller than that. The hyperbolic geometry of the Minkowski spacetime is a lot less intuitive than the globe, but the idea is the same. When time stops, it can't move any slower than the zero speed. This translates into the speed of time being the fastest speed possible in the hyperbolic spacetime. – safesphere Sep 13 '17 at 5:40
The issue isn't whether time will move backwards at the speed of light, it's that having stuff sending signals around faster than light causes problems even for us nonrelativistic slower-than-light beings.
Consider a particle moving at speed $v$. It traces out a worldline $(t,v t)$ in some frame of reference, which I'll call frame 1. That is, at time "t" it will have a time coordinate of $t$ and a space coordinate of $vt$. You can also say that this traces out a "series of events", each event is at the coordinate $(t,vt)$. If it moves faster than light, there is no problem. As the time coordinate increases when $v>c$, we still are looking at a point more in the future. So there's no paradox yet and no movement backwards in time.
To another observer with speed $|u|<c$, however, this gets Lorentz transformed. To him, in frame 2, the coordinates appear at locations $(\gamma t-\gamma \frac{u v}{c^2} t,-\gamma u t+\gamma v t)$. $\gamma$ is well-defined, because here $\gamma=\frac{1}{\sqrt{1-u^2/c^2}}$. But the problem is that in this observer's frame, if $1-uv/c^2<0$, the particle moves backwards in time as $t$ increases! That is, in frame 1 the particle evolves forward in time, in frame 2 it evolves backwards in time.
If you follow this line of logic, you find that if you are allowed faster-than-light travel in arbitrary reference frames, you can cause paradoxes: e.g., you could kill yourself before you go back in time to kill yourself! I outline exactly how you can do that in this linked answer. That's why things like the Alcubierre drive are still safely in the realm of science fiction. Even though it's consistent with general relativity, if it were possible to create and destroy FTL drives arbitrarily, you would still get those paradoxes.
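The sign flip described above can be checked numerically. Working in units with $c = 1$; the particular speeds below are my own example choices, picked so that $1 - uv/c^2 < 0$:

```python
import math

c = 1.0
v = 2.0  # hypothetical faster-than-light signal speed in frame 1
u = 0.6  # ordinary sub-light observer speed (frame 2 relative to frame 1)

gamma = 1 / math.sqrt(1 - u**2 / c**2)  # well-defined, since |u| < c

def transform(t):
    # Lorentz transform of the worldline event (t, v*t) into frame 2.
    t2 = gamma * (t - u * v * t / c**2)
    x2 = gamma * (v * t - u * t)
    return t2, x2

t2_a, _ = transform(1.0)
t2_b, _ = transform(2.0)
print(t2_a, t2_b)
# 1 - u*v/c^2 = 1 - 1.2 < 0, so as t increases in frame 1,
# t' decreases in frame 2: the signal runs backwards in time there.
assert t2_b < t2_a < 0
```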
# More intuitive approach
In response to the comment asking for more intuition. I can't give an intuitive "why" because special relativity totally breaks people's intuitions! But I can give an intuitive "how".
Say I have a special faster-than-light bomb. In my frame, causality makes sense: I launch it at a planet one light-year away, and in half a year (faster than light) the planet blows up. My description of the universe makes sense, because the planet blew up after I launched the bomb.
In your frame, if you're moving fast enough relative to me, you observe the planet explode, then the faster-than-light bomb travels back to my planet, then I press the button to launch it.
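Putting the bomb scenario into numbers (an illustrative sketch of my own, with the distances and times from the paragraph above and an observer speed I chose):

```python
# The launch and explosion events from the story, in years and light-years
# with c = 1: the planet is 1 ly away and blows up 0.5 yr after launch.
import math

c = 1.0
launch = (0.0, 0.0)      # (t, x): I press the button
explode = (0.5, 1.0)     # faster-than-light bomb arrives

def lorentz(event, u):
    """Coordinates of an event for an observer moving at speed u."""
    t, x = event
    gamma = 1.0 / math.sqrt(1.0 - u**2 / c**2)
    return gamma * (t - u * x / c**2), gamma * (x - u * t)

u = 0.8 * c   # any u > c^2 * (0.5 yr) / (1 ly) = 0.5c flips the order
t_launch, _ = lorentz(launch, u)
t_explode, _ = lorentz(explode, u)
assert t_explode < t_launch   # in this frame, the explosion precedes the launch
```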
You can see how this causes problems, and yet no observer travels faster than light, so $\gamma$ is always real in any Lorentz transformation. The proper time of the bomb won't be real, but we don't need to take that into account.
• Can you please give a more intuitive explanation? I am not comfortable with the formulas, so a simpler explanation would be really appreciated. Thanks – spatialdelusion Sep 13 '17 at 4:59
• @daboss I added some exposition! Let me know if that helps. – user12029 Sep 13 '17 at 5:07
• @daboss You're asking a deep question about the symmetries of the universe, which relate to phenomena which are beyond the scope of everyday human intuition. It doesn't get much simpler than this already very simplified explanation; working your way through the math is the only way to understand it better. – J. Murray Sep 13 '17 at 5:07
• @NeuroFuzzy "In your frame, if you're moving fast enough relative to me, you observe the planet explode, then the faster-than-light bomb travels back to my planet, then I press the button to launch it." Why will a person moving relative to you first see the planet explode? Can you please explain that part a little more elaborately? Thanks a lot for giving the intuitive explanation. – spatialdelusion Sep 13 '17 at 5:14
• @daboss The idea that for distant enough events (for which $\Delta t^2 c^2-\Delta x^2<0$) you can't determine which occurred before the other one is called the relativity of simultaneity, so that's a good place to start! I'm not a master of gedankenexperiments :) – user12029 Sep 13 '17 at 5:20
https://mathoverflow.net/questions/156944/what-is-this-name-of-this-2-category-without-very-much-structure

# What is the name of this 2-category without very much structure?
I was wondering if there is a name for this 2-category which is like the 2-category of natural transformations, but does not actually require the 1-morphisms to be functors or the 2-morphisms to be natural transformations. That is
• the objects are categories
• the 1-morphisms between $\mathcal{C}$ and $\mathcal{D}$ are functions $Ob(\mathcal{C}) \to Ob(\mathcal{D})$
• the 2-morphisms between $F, G : \mathcal{C} \to \mathcal{D}$ are assignments $\eta_c$, for each object $c$ of $\mathcal{C}$, to a morphism $Fc \to Gc$
For my application I am typically considering the case where the objects are (category product) powers of a single category, rather than all categories, but I suspect it does not make much difference.
• Isn't this the category of natural transformation from $\mathcal{C}^d$ to $\mathcal{D}$ where $\mathcal{C}^d$ is the "discretization" of $\mathcal{C}$ (i.e. the category with the same objects but only the identity morphisms)? Feb 7 '14 at 18:37
This is not a 2-category: there is no way to compose a 2-morphism $\eta : F\to G : C\to D$ with a function $H:Ob(D)\to Ob(E)$. The whiskered 2-morphism $H\eta$ would need a component $(H\eta)_c : H(Fc) \to H(Gc)$ at each object $c$, i.e. the image under $H$ of the morphism $\eta_c : Fc \to Gc$, but $H$ is defined only on objects, not on morphisms.
http://inverseprobability.com/talks/notes/gaussian-processes.html

at MLSS, Stellenbosch, South Africa on Jan 9, 2019
Neil D. Lawrence, Amazon Cambridge and University of Sheffield
#### Abstract
Classical machine learning and statistical approaches to learning, such as neural networks and linear regression, assume a parametric form for functions. Gaussian process models are an alternative approach that assumes a probabilistic prior over functions. This brings benefits, in that uncertainty of function estimation is sustained throughout inference, and some challenges: algorithms for fitting Gaussian processes tend to be more complex than parametric models. In this session I will introduce Gaussian processes and explain why sustaining uncertainty is important.
Rasmussen and Williams (2006) is still one of the most important references on Gaussian process models. It is available freely online.
# What is Machine Learning?
What is machine learning? At its most basic level machine learning is a combination of
$$\text{data} + \text{model} \xrightarrow{\text{compute}} \text{prediction}$$
where data is our observations. They can be actively or passively acquired (meta-data). The model contains our assumptions, based on previous experience. That experience can be other data, it can come from transfer learning, or it can merely be our beliefs about the regularities of the universe. In humans our models include our inductive biases. The prediction is an action to be taken or a categorization or a quality score. The reason that machine learning has become a mainstay of artificial intelligence is the importance of predictions in artificial intelligence. The data and the model are combined through computation.
In practice we normally perform machine learning using two functions. To combine data with a model we typically make use of:
a prediction function a function which is used to make the predictions. It includes our beliefs about the regularities of the universe, our assumptions about how the world works, e.g. smoothness, spatial similarities, temporal similarities.
an objective function a function which defines the cost of misprediction. Typically it includes knowledge about the world's generating processes (probabilistic objectives) or the costs we pay for mispredictions (empirical risk minimization).
The combination of data and model through the prediction function and the objective function leads to a learning algorithm. The class of prediction functions and objective functions we can make use of is restricted by the algorithms they lead to. If the prediction function or the objective function are too complex, then it can be difficult to find an appropriate learning algorithm. Much of the academic field of machine learning is the quest for new learning algorithms that allow us to bring different types of models and data together.
A useful reference for state of the art in machine learning is the UK Royal Society Report, Machine Learning: Power and Promise of Computers that Learn by Example.
You can also check my blog post on "What is Machine Learning?"
In practice, we normally also have uncertainty associated with these functions. Uncertainty in the prediction function arises from
1. scarcity of training data and
2. mismatch between the set of prediction functions we choose and all possible prediction functions.
There are also challenges around specification of the objective function, but for we will save those for another day. For the moment, let us focus on the prediction function.
## Neural Networks and Prediction Functions
Neural networks are adaptive non-linear function models. Originally, they were studied by McCulloch and Pitts (McCulloch and Pitts 1943) as simple models for neurons, but over the last decade they have become popular because they are a flexible approach to modelling complex data. A particular characteristic of neural network models is that they can be composed to form highly complex functions which encode many of our expectations of the real world. They allow us to encode our assumptions about how the world works.
We will return to composition later, but for the moment, let's focus on a one hidden layer neural network. We are interested in the prediction function, so we'll ignore the objective function (which is often called an error function) for the moment, and just describe the mathematical object of interest
$$\mappingFunction(\inputVector) = \mappingMatrix^\top \activationVector(\mappingMatrixTwo, \inputVector)$$
Where in this case $\mappingFunction(\cdot)$ is a scalar function with vector inputs, and $\activationVector(\cdot)$ is a vector function with vector inputs. The dimensionality of the vector function is known as the number of hidden units, or the number of neurons. The elements of this vector function are known as the activation function of the neural network and $\mappingMatrixTwo$ are the parameters of the activation functions.
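A minimal numpy sketch of this prediction function (my own illustration: `W2` stands in for the output weights $\mappingMatrix$, `W1` for the activation parameters $\mappingMatrixTwo$, and `tanh` is just one example activation):

```python
import numpy as np

def prediction_function(x, W1, W2):
    """One-hidden-layer network: f(x) = W2^T phi(W1, x).

    x:  input vector, shape (d,)
    W1: activation parameters, shape (h, d) - one row per hidden unit
    W2: output weights, shape (h,)
    """
    phi = np.tanh(W1 @ x)   # vector of h activations (tanh as an example)
    return W2 @ phi         # scalar output

rng = np.random.default_rng(0)
W1 = rng.standard_normal((10, 2))   # 10 hidden units, 2 input dimensions
W2 = rng.standard_normal(10)
f = prediction_function(np.array([0.5, -1.0]), W1, W2)
```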
## Relations with Classical Statistics
In statistics activation functions are traditionally known as basis functions. And we would think of this as a linear model. It doesn't make linear predictions, but it's linear because in statistics estimation focuses on the parameters, $\mappingMatrix$, not the parameters, $\mappingMatrixTwo$. The linear model terminology refers to the fact that the model is linear in the parameters, but it is not linear in the data unless the activation functions are chosen to be linear.
The first difference in the (early) neural network literature to the classical statistical literature is the decision to optimize these parameters, $\mappingMatrixTwo$, as well as the parameters, $\mappingMatrix$ (which would normally be denoted in statistics by $\boldsymbol{\beta}$).
In this tutorial, we're going to revisit that decision, and follow the path of Radford Neal (Neal 1994) who, inspired by work of David MacKay (MacKay 1992) and others, did his PhD thesis on Bayesian Neural Networks. If we take a Bayesian approach to parameter inference (note I am using inference here in the classical sense, not in the sense of prediction of test data, which seems to be a newer usage), then we don't wish to fit parameters at all; rather, we wish to integrate them out and understand the family of functions that the model describes.
## Probabilistic Modelling
This Bayesian approach is designed to deal with uncertainty arising from fitting our prediction function to the data we have, a reduced data set.
The Bayesian approach can be derived from a broader understanding of what our objective is. If we accept that we can jointly represent all things that happen in the world with a probability distribution, then we can interrogate that probability to make predictions. So, if we are interested in predictions, $\dataScalar_*$, at future input locations of interest, $\inputVector_*$, given previous training data, $\dataVector$, and corresponding inputs, $\inputMatrix$, then we are really interrogating the following probability density,
$$p(\dataScalar_*|\dataVector, \inputMatrix, \inputVector_*),$$
there is nothing controversial here, as long as you accept that you have a good joint model of the world around you that relates test data to training data, $p(\dataScalar_*, \dataVector, \inputMatrix, \inputVector_*)$ then this conditional distribution can be recovered through standard rules of probability (data + model → prediction).
We can construct this joint density through the use of the following decomposition:
$$p(\dataScalar_*|\dataVector, \inputMatrix, \inputVector_*) = \int p(\dataScalar_*|\inputVector_*, \parameterVector) p(\parameterVector | \dataVector, \inputMatrix) \text{d} \parameterVector$$
where, for convenience, we are assuming all the parameters of the model are now represented by $\parameterVector$ (which contains $\mappingMatrix$ and $\mappingMatrixTwo$) and $p(\parameterVector | \dataVector, \inputMatrix)$ is recognised as the posterior density of the parameters given data and $p(\dataScalar_*|\inputVector_*, \parameterVector)$ is the likelihood of an individual test data point given the parameters.
The likelihood of the data is normally assumed to factorize across the data points given the parameters,
$$p(\dataVector|\inputMatrix, \mappingMatrix) = \prod_{i=1}^\numData p(\dataScalar_i|\inputVector_i, \mappingMatrix),$$
and if that is so, it is easy to extend our predictions across all future, potential, locations,
$$p(\dataVector_*|\dataVector, \inputMatrix, \inputMatrix_*) = \int p(\dataVector_*|\inputMatrix_*, \parameterVector) p(\parameterVector | \dataVector, \inputMatrix) \text{d} \parameterVector.$$
The likelihood is also where the prediction function is incorporated. For example in the regression case, we consider an objective based around the Gaussian density,
$$p(\dataScalar_i | \mappingFunction(\inputVector_i)) = \frac{1}{\sqrt{2\pi \dataStd^2}} \exp\left(-\frac{\left(\dataScalar_i - \mappingFunction(\inputVector_i)\right)^2}{2\dataStd^2}\right)$$
In short, that is the classical approach to probabilistic inference, and all approaches to Bayesian neural networks fall within this path. For a deep probabilistic model, we can simply take this one stage further and place a probability distribution over the input locations,
$$p(\dataVector_*|\dataVector) = \int p(\dataVector_*|\inputMatrix_*, \parameterVector) p(\parameterVector | \dataVector, \inputMatrix) p(\inputMatrix) p(\inputMatrix_*) \text{d} \parameterVector \text{d} \inputMatrix \text{d}\inputMatrix_*$$
and we have unsupervised learning (from where we can get deep generative models).
## Graphical Models
One way of representing a joint distribution is to consider conditional dependencies between data. Conditional dependencies allow us to factorize the distribution. For example, a Markov chain is a factorization of a distribution into components that represent the conditional relationships between points that are neighboring, often in time or space. It can be decomposed in the following form.
$$p(\dataVector) = p(\dataScalar_\numData | \dataScalar_{\numData-1}) p(\dataScalar_{\numData-1}|\dataScalar_{\numData-2}) \dots p(\dataScalar_{2} | \dataScalar_{1})$$
By specifying conditional independencies we can reduce the parameterization required for our data, instead of directly specifying the parameters of the joint distribution, we can specify each set of parameters of the conditonal independently. This can also give an advantage in terms of interpretability. Understanding a conditional independence structure gives a structured understanding of data. If developed correctly, according to causal methodology, it can even inform how we should intervene in the system to drive a desired result (Pearl 1995).
However, a challenge arises when the data becomes more complex. Consider the graphical model shown below, used to predict the perioperative risk of C Difficile infection following colon surgery (Steele et al. 2012).
To capture the complexity in the interrelationships between the data, the graph itself becomes more complex, and less interpretable.
## Performing Inference
As far as combining our data and our model to form our prediction, the devil is in the detail. While everything is easy to write in terms of probability densities, as we move from data and model to prediction there is that simple $\xrightarrow{\text{compute}}$ sign, which is now burying a wealth of difficulties. Each integral sign above is a high dimensional integral which will typically need approximation. Approximations also come with computational demands. As we consider more complex classes of functions, the challenges around the integrals become harder and prediction of future test data given our model and the data becomes so involved as to be impractical or impossible.
Statisticians realized these challenges early on, indeed, so early that they were actually physicists. Both Laplace and Gauss worked on models such as this; in Gauss's case he made his career on prediction of the location of the lost planet (later reclassified as an asteroid, then dwarf planet), Ceres. Gauss and Laplace made use of maximum a posteriori estimates for simplifying their computations and Laplace developed Laplace's method (and invented the Gaussian density) to expand around that mode. But classical statistics needs better guarantees around model performance and interpretation, and as a result has focussed more on the linear model implied by
$$\mappingFunction(\inputVector) = \left.\mappingVector^{(2)}\right.^\top \activationVector(\mappingMatrix_1, \inputVector)$$
$$\mappingVector^{(2)} \sim \gaussianSamp{\zerosVector}{\covarianceMatrix}.$$
The Gaussian likelihood given above implies that the data observation is related to the function by noise corruption so we have,
$$\dataScalar_i = \mappingFunction(\inputVector_i) + \noiseScalar_i,$$
where
$$\noiseScalar_i \sim \gaussianSamp{0}{\dataStd^2}$$
and while normally integrating over high dimensional parameter vectors is highly complex, here it is trivial. That is because of a property of the multivariate Gaussian.
Gaussian processes are initially of interest because
1. linear Gaussian models are easier to deal with
2. Even the parameters within the process can be handled, by considering a particular limit.
Let's first of all review the properties of the multivariate Gaussian distribution that make linear Gaussian models easier to deal with. We'll return to the, perhaps surprising, result on the parameters within the nonlinearity, $\parameterVector$, shortly.
To work with linear Gaussian models, to find the marginal likelihood all you need to know is the following rules. If
$$\dataVector = \mappingMatrix \inputVector + \noiseVector,$$
where $\dataVector$, $\inputVector$ and $\noiseVector$ are vectors and we assume that $\inputVector$ and $\noiseVector$ are drawn from multivariate Gaussians,
\begin{align} \inputVector & \sim \gaussianSamp{\meanVector}{\covarianceMatrix}\\ \noiseVector & \sim \gaussianSamp{\zerosVector}{\covarianceMatrixTwo} \end{align}
then we know that $\dataVector$ is also drawn from a multivariate Gaussian with,
$$\dataVector \sim \gaussianSamp{\mappingMatrix\meanVector}{\mappingMatrix\covarianceMatrix\mappingMatrix^\top + \covarianceMatrixTwo}.$$
With appropriately defined covariance, $\covarianceMatrixTwo$, this is actually the marginal likelihood for Factor Analysis, or Probabilistic Principal Component Analysis (Tipping and Bishop 1999), because we integrated out the inputs (or latent variables as they would be called in that case).
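The marginalisation rule above can be checked empirically by sampling (a sketch of my own; the particular $\mappingMatrix$, $\covarianceMatrix$ and $\covarianceMatrixTwo$ below are arbitrary choices):

```python
# Empirical check: if x ~ N(mu, C) and n ~ N(0, S), then y = W x + n
# has covariance W C W^T + S (taking mu = 0 here for simplicity).
import numpy as np

rng = np.random.default_rng(1)
d_in, d_out, n_samples = 3, 2, 200000

W = rng.standard_normal((d_out, d_in))
C = np.eye(d_in) * 0.5          # covariance of x (example choice)
S = np.eye(d_out) * 0.1         # noise covariance (example choice)

x = rng.multivariate_normal(np.zeros(d_in), C, size=n_samples)
n = rng.multivariate_normal(np.zeros(d_out), S, size=n_samples)
y = x @ W.T + n

empirical = np.cov(y.T)
theoretical = W @ C @ W.T + S
assert np.allclose(empirical, theoretical, atol=0.05)
```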
However, we are focussing on what happens in models which are non-linear in the inputs, whereas the above would be linear in the inputs. To consider these, we introduce a matrix, called the design matrix. We set each activation function computed at each data point to be
$$\activationScalar_{i,j} = \activationScalar(\mappingVector^{(1)}_{j}, \inputVector_{i})$$
and define the matrix of activations (known as the design matrix in statistics) to be,
$$\activationMatrix = \begin{bmatrix} \activationScalar_{1, 1} & \activationScalar_{1, 2} & \dots & \activationScalar_{1, \numHidden} \\ \activationScalar_{2, 1} & \activationScalar_{2, 2} & \dots & \activationScalar_{2, \numHidden} \\ \vdots & \vdots & \ddots & \vdots \\ \activationScalar_{\numData, 1} & \activationScalar_{\numData, 2} & \dots & \activationScalar_{\numData, \numHidden} \end{bmatrix}.$$
By convention this matrix always has $\numData$ rows and $\numHidden$ columns. Now define the vector of all noise corruptions, $\noiseVector = \left[\noiseScalar_1, \dots, \noiseScalar_\numData\right]^\top$.
If we define the prior distribution over the vector $\mappingVector$ to be Gaussian,
$$\mappingVector \sim \gaussianSamp{\zerosVector}{\alpha\eye},$$
then we can use rules of multivariate Gaussians to see that,
$$\dataVector \sim \gaussianSamp{\zerosVector}{\alpha \activationMatrix \activationMatrix^\top + \dataStd^2 \eye}.$$
In other words, our training data is distributed as a multivariate Gaussian, with zero mean and a covariance given by
$$\kernelMatrix = \alpha \activationMatrix \activationMatrix^\top + \dataStd^2 \eye.$$
This is an $\numData \times \numData$ size matrix. Its elements are in the form of a function. The maths shows that any element, indexed by $i$ and $j$, is a function only of the inputs associated with data points $i$ and $j$, $\inputVector_i$, $\inputVector_j$: $\kernelScalar_{i,j} = \kernelScalar\left(\inputVector_i, \inputVector_j\right)$.
If we look at the portion of this function associated only with $\mappingFunction(\cdot)$, i.e. we remove the noise, then we can write down the covariance associated with our neural network,
$$\kernel_\mappingFunction\left(\inputVector_i, \inputVector_j\right) = \alpha \activationVector\left(\mappingMatrix_1, \inputVector_i\right)^\top \activationVector\left(\mappingMatrix_1, \inputVector_j\right)$$
so the elements of the covariance or kernel matrix are formed by inner products of the rows of the design matrix.
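The construction $\kernelMatrix = \alpha \activationMatrix \activationMatrix^\top + \dataStd^2 \eye$ is easy to write down in code (a sketch of my own; the RBF-bump basis functions and their centres are illustrative choices, not fixed by the text):

```python
# Building the covariance K = alpha * Phi Phi^T + sigma^2 I from a design
# matrix Phi of basis-function activations.
import numpy as np

def design_matrix(X, centres, width=1.0):
    """Phi[i, j] = exp(-(x_i - c_j)^2 / (2 width^2)): n rows, h columns."""
    return np.exp(-(X[:, None] - centres[None, :])**2 / (2 * width**2))

X = np.linspace(-3, 3, 20)          # n = 20 one-dimensional inputs
centres = np.linspace(-2, 2, 5)     # h = 5 basis functions
alpha, sigma2 = 1.0, 0.01

Phi = design_matrix(X, centres)
K = alpha * Phi @ Phi.T + sigma2 * np.eye(len(X))

# K is n x n; without the noise term its rank is at most h,
# which is the degeneracy discussed later in these notes
assert K.shape == (20, 20)
assert np.linalg.matrix_rank(alpha * Phi @ Phi.T) <= 5
```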
## Gaussian Process
This is the essence of a Gaussian process. Instead of making assumptions about our density over each data point, $\dataScalar_i$, as i.i.d., we make a joint Gaussian assumption over our data. The covariance matrix is now a function of both the parameters of the activation function, $\mappingMatrixTwo$, and the input variables, $\inputMatrix$. This comes about through integrating out the parameters of the model, $\mappingVector$.
## Basis Functions
We can basically put anything inside the basis functions, and many people do. These can be deep kernels (Cho and Saul 2009) or we can learn the parameters of a convolutional neural network inside there.
Viewing a neural network in this way is also what allows us to perform sensible batch normalizations (Ioffe and Szegedy 2015).
## Non-degenerate Gaussian Processes
The process described above is degenerate. The covariance matrix is of rank at most $\numHidden$ and since the theoretical amount of data could always increase, $\numData \rightarrow \infty$, the covariance matrix is not full rank. This means as we increase the amount of data to infinity, there will come a point where we can't normalize the process because the multivariate Gaussian has the form,
$$\gaussianDist{\mappingFunctionVector}{\zerosVector}{\kernelMatrix} = \frac{1}{\left(2\pi\right)^{\frac{\numData}{2}}\det{\kernelMatrix}^\frac{1}{2}} \exp\left(-\frac{\mappingFunctionVector^\top\kernelMatrix \mappingFunctionVector}{2}\right)$$
and a degenerate kernel matrix leads to $\det{\kernelMatrix} = 0$, defeating the normalization (it's equivalent to finding a projection in the high dimensional Gaussian where the variance of the resulting univariate Gaussian is zero, i.e. there is a null space on the covariance, or alternatively you can imagine there are one or more directions where the Gaussian has become the delta function).
In the machine learning field, it was Radford Neal (Neal 1994) that realized the potential of the next step. In his 1994 thesis, he was considering Bayesian neural networks, of the type we described above, and considered what would happen if you took the number of hidden nodes, or neurons, to infinity, i.e. $\numHidden \rightarrow \infty$.
In loose terms, what Radford considers is what happens to the elements of the covariance function,
\begin{align*} \kernel_\mappingFunction\left(\inputVector_i, \inputVector_j\right) & = \alpha \activationVector\left(\mappingMatrix_1, \inputVector_i\right)^\top \activationVector\left(\mappingMatrix_1, \inputVector_j\right)\\ & = \alpha \sum_k \activationScalar\left(\mappingVector^{(1)}_k, \inputVector_i\right) \activationScalar\left(\mappingVector^{(1)}_k, \inputVector_j\right) \end{align*}
if instead of considering a finite number you sample infinitely many of these activation functions, sampling parameters from a prior density, $p(\mappingVectorTwo)$, for each one,
$$\kernel_\mappingFunction\left(\inputVector_i, \inputVector_j\right) = \alpha \int \activationScalar\left(\mappingVector^{(1)}, \inputVector_i\right) \activationScalar\left(\mappingVector^{(1)}, \inputVector_j\right) p(\mappingVector^{(1)}) \text{d}\mappingVector^{(1)}$$
And that's not only for Gaussian $p(\mappingVectorTwo)$. In fact this result holds for a range of activations, and a range of prior densities because of the central limit theorem.
To write it in the form of a probabilistic program, as long as the distribution for $\phi_i$ implied by this short probabilistic program,
\begin{align*} \mappingVectorTwo & \sim p(\cdot)\\ \phi_i & = \activationScalar\left(\mappingVectorTwo, \inputVector_i\right), \end{align*}
has finite variance, then the result of taking the number of hidden units to infinity, with appropriate scaling, is also a Gaussian process.
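The integral defining the limiting kernel can be approximated by Monte Carlo, averaging over many sampled activation parameters (my own sketch; the `tanh` activation and the standard normal priors over `w` and `b` are illustrative choices, not Neal's specific setup):

```python
# Monte Carlo view of the infinite limit: averaging phi(w, x_i) phi(w, x_j)
# over many sampled parameter vectors w approximates the integral that
# defines the kernel.
import numpy as np

rng = np.random.default_rng(2)
n_units = 50000                      # a stand-in for "infinitely many" units

def activation(w, b, x):
    return np.tanh(w * x + b)        # example scalar activation

w = rng.standard_normal(n_units)     # prior samples of activation parameters
b = rng.standard_normal(n_units)

def k_mc(x_i, x_j, alpha=1.0):
    """Monte Carlo estimate of alpha * E[phi(w, x_i) phi(w, x_j)]."""
    return alpha * np.mean(activation(w, b, x_i) * activation(w, b, x_j))

# the estimate is symmetric in its arguments, as a covariance must be
assert abs(k_mc(0.5, 1.5) - k_mc(1.5, 0.5)) < 1e-12
```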
To understand this argument in more detail, I highly recommend reading chapter 2 of Neal's thesis (Neal 1994), which remains easy to read and clear today. Indeed, for readers interested in Bayesian neural networks, both Radford Neal's and David MacKay's PhD theses (MacKay 1992) remain essential reading. Both theses embody a clarity of thought, and an ability to weave together threads from different fields that was the business of machine learning in the 1990s. Radford and David were also pioneers in making their software widely available and publishing material on the web.
## Bayesian Inference by Rejection Sampling
One view of Bayesian inference is to assume we are given a mechanism for generating samples, where we assume that mechanism is representing an accurate view of the way we believe the world works.
This mechanism is known as our prior belief.
We combine our prior belief with our observations of the real world by discarding all those samples that are inconsistent with our prior. The likelihood defines mathematically what we mean by inconsistent with the prior. The higher the noise level in the likelihood, the looser the notion of consistent.
The samples that remain are considered to be samples from the posterior.
This approach to Bayesian inference is closely related to two sampling techniques known as rejection sampling and importance sampling. It is realized in practice in an approach known as approximate Bayesian computation (ABC) or likelihood-free inference.
In practice, the algorithm is often too slow to be practical, because most samples will be inconsistent with the data and as a result the mechanism has to be operated many times to obtain a few posterior samples.
However, in the Gaussian process case, when the likelihood also assumes Gaussian noise, we can operate this mechanism mathematically, and obtain the posterior density analytically. This is the benefit of Gaussian processes.
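The rejection mechanism itself can be sketched on a toy one-dimensional model (my own example; the observation, noise level and tolerance below are arbitrary choices):

```python
# Rejection-sampling / ABC sketch: sample a parameter from the prior,
# simulate data through the mechanism, and keep the sample only if the
# simulation is consistent with the observation.
import numpy as np

rng = np.random.default_rng(3)
y_obs = 1.2            # the single observation
noise_std = 0.3        # likelihood noise level
tolerance = 0.05       # looseness of "consistent with the data"

theta = rng.standard_normal(200000)                      # prior: theta ~ N(0, 1)
y_sim = theta + noise_std * rng.standard_normal(200000)  # run the mechanism
accepted = theta[np.abs(y_sim - y_obs) < tolerance]      # discard the rest

# for this conjugate Gaussian model the exact posterior mean is known,
# so we can compare the surviving samples against it
approx = accepted.mean()
exact = y_obs / (1 + noise_std**2)
```

Note how few of the 200,000 prior draws survive the rejection step; that inefficiency is exactly why the analytic Gaussian-process posterior described next is so valuable.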
import pods
from ipywidgets import IntSlider
pods.notebook.display_plots('gp_rejection_sample{sample:0>3}.png',
directory='../slides/diagrams/gp',
sample=IntSlider(1,1,5,1))
## Sampling a Function
We will consider a Gaussian distribution with a particular structure of covariance matrix. We will generate one sample from a 25-dimensional Gaussian density.
$$\mappingFunctionVector=\left[\mappingFunction_{1},\mappingFunction_{2},\dots,\mappingFunction_{25}\right].$$
In the figure below we plot these data on the y-axis against their indices on the x-axis.
%load -s Kernel mlai.py
%load -s polynomial_cov mlai.py
%load -s exponentiated_quadratic mlai.py
import pods
from ipywidgets import IntSlider
pods.notebook.display_plots('two_point_sample{sample:0>3}.svg', '../slides/diagrams/gp', sample=IntSlider(0, 0, 8, 1))
import pods
from ipywidgets import IntSlider
pods.notebook.display_plots('two_point_sample{sample:0>3}.svg',
'../slides/diagrams/gp',
sample=IntSlider(9, 9, 12, 1))
## Uluru
When viewing these contour plots, I sometimes find it helpful to think of Uluru, the prominent rock formation in Australia. The rock rises above the surface of the plane, just like a probability density rising above the zero line. The rock is three dimensional, but when we view Uluru from the classical position, we are looking at one side of it. This is equivalent to viewing the marginal density.
The joint density can be viewed from above, using contours. The conditional density is equivalent to slicing the rock. Uluru is a holy rock, so this has to be an imaginary slice. Imagine we cut down a vertical plane orthogonal to our view point (e.g. coming across our view point). This would give a profile of the rock, which when renormalized, would give us the conditional distribution, the value of conditioning would be the location of the slice in the direction we are facing.
## Prediction with Correlated Gaussians
Of course in practice, rather than manipulating mountains physically, the advantage of the Gaussian density is that we can perform these manipulations mathematically.
Prediction of $\mappingFunction_2$ given $\mappingFunction_1$ requires the conditional density, $p(\mappingFunction_2|\mappingFunction_1)$. Another remarkable property of the Gaussian density is that this conditional distribution is also guaranteed to be a Gaussian density. It has the form,
$$p(\mappingFunction_2|\mappingFunction_1) = \gaussianDist{\mappingFunction_2}{\frac{\kernelScalar_{1, 2}}{\kernelScalar_{1, 1}}\mappingFunction_1}{ \kernelScalar_{2, 2} - \frac{\kernelScalar_{1,2}^2}{\kernelScalar_{1,1}}}$$
where we have assumed that the covariance of the original joint density was given by
$$\kernelMatrix = \begin{bmatrix} \kernelScalar_{1, 1} & \kernelScalar_{1, 2}\\ \kernelScalar_{2, 1} & \kernelScalar_{2, 2}\end{bmatrix}.$$
Using these formulae we can determine the conditional density for any of the elements of our vector $\mappingFunctionVector$. For example, the variable $\mappingFunction_8$ is less correlated with $\mappingFunction_1$ than $\mappingFunction_2$ is. If we consider this variable, we see the conditional density is more diffuse.
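The conditioning formula above is two lines of code (my own sketch; the two covariance matrices are illustrative stand-ins for the strongly correlated $\mappingFunction_2$ and the weakly correlated $\mappingFunction_8$):

```python
# Conditional of a zero-mean 2D Gaussian: p(f2 | f1) has
# mean (k12 / k11) * f1 and variance k22 - k12^2 / k11.
import numpy as np

def condition(K, f1):
    """Mean and variance of f2 given f1, for 2x2 covariance K."""
    k11, k12, k22 = K[0, 0], K[0, 1], K[1, 1]
    return (k12 / k11) * f1, k22 - k12**2 / k11

K_strong = np.array([[1.0, 0.9], [0.9, 1.0]])   # highly correlated pair
K_weak = np.array([[1.0, 0.2], [0.2, 1.0]])     # weakly correlated pair

_, var_strong = condition(K_strong, f1=1.0)
_, var_weak = condition(K_weak, f1=1.0)
assert var_strong < var_weak   # weaker correlation => more diffuse conditional
```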
import pods
from ipywidgets import IntSlider
pods.notebook.display_plots('two_point_sample{sample:0>3}.svg',
'../slides/diagrams/gp',
sample=IntSlider(13, 13, 17, 1))
• Covariance function, $\kernelMatrix$
• Determines properties of samples.
• Function of $\inputMatrix$,
$$\kernelScalar_{i,j} = \kernelScalar(\inputVector_i, \inputVector_j)$$
• Posterior mean

$$\mappingFunction_D(\inputVector_*) = \kernelVector(\inputVector_*, \inputMatrix) \kernelMatrix^{-1} \dataVector = \kernelVector(\inputVector_*, \inputMatrix) \boldsymbol{\alpha}, \quad \text{with } \boldsymbol{\alpha} = \kernelMatrix^{-1} \dataVector$$

• Posterior covariance

$$\covarianceMatrix_* = \kernelMatrix_{*,*} - \kernelMatrix_{*,\mappingFunctionVector} \kernelMatrix^{-1} \kernelMatrix_{\mappingFunctionVector, *}$$
The exponentiated quadratic covariance is also known as the Gaussian covariance, the RBF covariance or the squared exponential. The covariance between two points is related to the negative exponential of the squared distance between those points. This covariance function can be derived in a few different ways: as the infinite limit of a radial basis function neural network, as diffusion in the heat equation, as a Gaussian filter in Fourier space, or as the composition of a series of linear filters applied to a base function.
The covariance takes the following form,
$$\kernelScalar(\inputVector, \inputVector^\prime) = \alpha \exp\left(-\frac{\ltwoNorm{\inputVector-\inputVector^\prime}^2}{2\lengthScale^2}\right)$$
where $\lengthScale$ is the length scale or time scale of the process and $\alpha$ represents the overall process variance.
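A standalone sketch of this formula in a few lines (the course code in mlai.py may define its own version with a different signature):

```python
import numpy as np

def eq_cov(x, x_prime, alpha=1.0, lengthscale=1.0):
    """Exponentiated quadratic covariance between two input vectors."""
    diff = np.asarray(x, dtype=float) - np.asarray(x_prime, dtype=float)
    return alpha * np.exp(-np.sum(diff**2) / (2.0 * lengthscale**2))
```

At zero separation the covariance equals the process variance $\alpha$, and it decays smoothly as the inputs move apart.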
## Olympic Marathon Data
Gold medal times for the Olympic Marathon since 1896. Marathons before 1924 didn’t have a standardised distance. We present results using pace per km. In 1904 the Marathon was badly organised, leading to very slow times. Image from Wikimedia Commons http://bit.ly/16kMKHQ
The first thing we will do is load a standard data set for regression modelling. The data consists of the pace of Olympic Gold Medal Marathon winners for the Olympics from 1896 to present. First we load in the data and plot.
import numpy as np
import pods
data = pods.datasets.olympic_marathon_men()
x = data['X']
y = data['Y']
offset = y.mean()
scale = np.sqrt(y.var())
import matplotlib.pyplot as plt
import teaching_plots as plot
import mlai
xlim = (1875,2030)
ylim = (2.5, 6.5)
yhat = (y-offset)/scale
fig, ax = plt.subplots(figsize=plot.big_wide_figsize)
_ = ax.plot(x, y, 'r.',markersize=10)
ax.set_xlabel('year', fontsize=20)
ax.set_ylabel('pace min/km', fontsize=20)
ax.set_xlim(xlim)
ax.set_ylim(ylim)
mlai.write_figure(figure=fig,
filename='../slides/diagrams/datasets/olympic-marathon.svg',
transparent=True,
frameon=True)
Things to notice about the data include the outlier in 1904: in this year the Olympics was in St Louis, USA. Organizational problems, and challenges with dust kicked up by the cars following the race, meant that participants got lost and only a very few completed.
More recent years see more consistently quick marathons.
## Alan Turing
If we had to summarise the objectives of machine learning in one word, a very good candidate for that word would be generalization. What is generalization? From a human perspective it might be summarised as the ability to take lessons learned in one domain and apply them to another domain. If we accept the definition given in the first session for machine learning,
$$\text{data} + \text{model} \xrightarrow{\text{compute}} \text{prediction}$$
then we see that without a model we can't generalise: we only have data. Data is fine for answering very specific questions, like "Who won the Olympic Marathon in 2012?", because we have that answer stored; however, we are not given the answer to many other questions. For example, Alan Turing was a formidable marathon runner: in 1946 he ran a time of 2 hours and 46 minutes (just under four minutes per kilometre, faster than I and most of the other Endcliffe Park Run runners can do 5 km). What is the probability he would have won an Olympics if one had been held in 1946?
To answer this question we need to generalize, but before we formalize the concept of generalization let's introduce some formal representation of what it means to generalize in machine learning.
Our first objective will be to perform a Gaussian process fit to the data, we'll do this using the GPy software.
import GPy
m_full = GPy.models.GPRegression(x,yhat)
_ = m_full.optimize() # Optimize parameters of covariance function
The first command sets up the model, then m_full.optimize() optimizes the parameters of the covariance function and the noise level of the model. Once the fit is complete, we'll try creating some test points, and computing the output of the GP model in terms of the mean and standard deviation of the posterior functions between 1870 and 2030. We plot the mean function and the standard deviation at 200 locations. We can obtain the predictions using y_mean, y_var = m_full.predict(xt)
xt = np.linspace(1870,2030,200)[:,np.newaxis]
yt_mean, yt_var = m_full.predict(xt)
yt_sd=np.sqrt(yt_var)
Now we plot the results using the helper function in teaching_plots.
import teaching_plots as plot
fig, ax = plt.subplots(figsize=plot.big_wide_figsize)
plot.model_output(m_full, scale=scale, offset=offset, ax=ax, xlabel='year', ylabel='pace min/km', fontsize=20, portion=0.2)
ax.set_xlim(xlim)
ax.set_ylim(ylim)
mlai.write_figure(figure=fig,
filename='../slides/diagrams/gp/olympic-marathon-gp.svg',
transparent=True, frameon=True)
## Fit Quality
In the fit we see that the error bars (coming mainly from the noise variance) are quite large. This is likely due to the outlier point in 1904; ignoring that point, we can see that a tighter fit is obtained. To see this we make a version of the model, m_clean, where that point is removed.
x_clean=np.vstack((x[0:2, :], x[3:, :]))
y_clean=np.vstack((y[0:2, :], y[3:, :]))
m_clean = GPy.models.GPRegression(x_clean,y_clean)
_ = m_clean.optimize()
Can we determine covariance parameters from the data?
$$\gaussianDist{\dataVector}{\mathbf{0}}{\kernelMatrix}=\frac{1}{(2\pi)^\frac{\numData}{2}{\det{\kernelMatrix}^{\frac{1}{2}}}}{\exp\left(-\frac{\dataVector^{\top}\kernelMatrix^{-1}\dataVector}{2}\right)}$$
\begin{aligned} \log \gaussianDist{\dataVector}{\mathbf{0}}{\kernelMatrix}=&-\frac{1}{2}\log\det{\kernelMatrix}-\frac{\dataVector^{\top}\kernelMatrix^{-1}\dataVector}{2} \\ &-\frac{\numData}{2}\log2\pi \end{aligned}
$$\errorFunction(\parameterVector) = \frac{1}{2}\log\det{\kernelMatrix} + \frac{\dataVector^{\top}\kernelMatrix^{-1}\dataVector}{2}$$
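As a sketch (assuming a zero-mean process, and using a Cholesky factorisation for numerical stability rather than forming $\kernelMatrix^{-1}$ explicitly), the objective $\errorFunction(\parameterVector)$ can be computed as:

```python
import numpy as np

def gp_objective(y, K):
    """E(theta) = 0.5 * log det K + 0.5 * y^T K^{-1} y (constant term dropped)."""
    L = np.linalg.cholesky(K)                    # K = L L^T
    log_det = 2.0 * np.sum(np.log(np.diag(L)))   # log det K from the Cholesky factor
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))  # K^{-1} y via two triangular solves
    return 0.5 * log_det + 0.5 * y @ alpha
```

Optimizing the covariance parameters means minimizing this quantity with respect to $\parameterVector$, which enters through $\kernelMatrix$.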
The parameters are inside the covariance function (matrix).
$$\kernelScalar_{i, j} = \kernelScalar(\inputVals_i, \inputVals_j; \parameterVector)$$
$$\kernelMatrix = \rotationMatrix \eigenvalueMatrix^2 \rotationMatrix^\top$$
$\eigenvalueMatrix$ represents distance on axes. $\rotationMatrix$ gives rotation.
• $\eigenvalueMatrix$ is diagonal, $\rotationMatrix^\top\rotationMatrix = \eye$.
• Useful representation since $\det{\kernelMatrix} = \det{\eigenvalueMatrix^2} = \det{\eigenvalueMatrix}^2$.
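We can verify the rotation and determinant identities numerically (the angle and eigenvalues below are arbitrary illustrative choices):

```python
import numpy as np

theta = 0.3                                     # an arbitrary rotation angle
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
Lam = np.diag([2.0, 0.5])                       # eigenvalue matrix Lambda
K = R @ Lam @ Lam @ R.T                         # K = R Lambda^2 R^T
```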
## Gene Expression Example
We now consider an example in gene expression. Gene expression is the measurement of mRNA levels expressed in cells. These mRNA levels show which genes are 'switched on' and producing proteins. In the example we will use a Gaussian process to determine whether a given gene is active, or whether we are merely observing a noise response.
## Della Gatta Gene Data
• Given expression levels in the form of a time series from Della Gatta et al. (2008).
import numpy as np
import pods
data = pods.datasets.della_gatta_TRP63_gene_expression(data_set='della_gatta',gene_number=937)
x = data['X']
y = data['Y']
offset = y.mean()
scale = np.sqrt(y.var())
import matplotlib.pyplot as plt
import teaching_plots as plot
import mlai
xlim = (-20,260)
ylim = (5, 7.5)
yhat = (y-offset)/scale
fig, ax = plt.subplots(figsize=plot.big_wide_figsize)
_ = ax.plot(x, y, 'r.',markersize=10)
ax.set_xlabel('time/min', fontsize=20)
ax.set_ylabel('expression', fontsize=20)
ax.set_xlim(xlim)
ax.set_ylim(ylim)
mlai.write_figure(figure=fig,
filename='../slides/diagrams/datasets/della-gatta-gene.svg',
transparent=True,
frameon=True)
• Want to detect if a gene is expressed or not; fit a GP to each gene (Kalaitzis and Lawrence 2011).
http://www.biomedcentral.com/1471-2105/12/180
Our first objective will be to perform a Gaussian process fit to the data, we'll do this using the GPy software.
import GPy
m_full = GPy.models.GPRegression(x,yhat)
m_full.kern.lengthscale=50
_ = m_full.optimize() # Optimize parameters of covariance function
Initialize the length scale parameter (which here actually represents a time scale of the covariance function) to a reasonable value. The default would be 1, but here we set it to 50 minutes, given that the points are spread across zero to 250 minutes.
xt = np.linspace(-20,260,200)[:,np.newaxis]
yt_mean, yt_var = m_full.predict(xt)
yt_sd=np.sqrt(yt_var)
Now we plot the results using the helper function in teaching_plots.
import teaching_plots as plot
fig, ax = plt.subplots(figsize=plot.big_wide_figsize)
plot.model_output(m_full, scale=scale, offset=offset, ax=ax, xlabel='time/min', ylabel='expression', fontsize=20, portion=0.2)
ax.set_xlim(xlim)
ax.set_ylim(ylim)
ax.set_title('log likelihood: {ll:.3}'.format(ll=m_full.log_likelihood()), fontsize=20)
mlai.write_figure(figure=fig,
filename='../slides/diagrams/gp/della-gatta-gene-gp.svg',
transparent=True, frameon=True)
Now we try a model initialized with a longer length scale.
m_full2 = GPy.models.GPRegression(x,yhat)
m_full2.kern.lengthscale=2000
_ = m_full2.optimize() # Optimize parameters of covariance function
import teaching_plots as plot
fig, ax = plt.subplots(figsize=plot.big_wide_figsize)
plot.model_output(m_full2, scale=scale, offset=offset, ax=ax, xlabel='time/min', ylabel='expression', fontsize=20, portion=0.2)
ax.set_xlim(xlim)
ax.set_ylim(ylim)
ax.set_title('log likelihood: {ll:.3}'.format(ll=m_full2.log_likelihood()), fontsize=20)
mlai.write_figure(figure=fig,
filename='../slides/diagrams/gp/della-gatta-gene-gp2.svg',
transparent=True, frameon=True)
Now we try a model initialized with a lower noise.
m_full3 = GPy.models.GPRegression(x,yhat)
m_full3.kern.lengthscale=20
m_full3.likelihood.variance=0.001
_ = m_full3.optimize() # Optimize parameters of covariance function
import teaching_plots as plot
fig, ax = plt.subplots(figsize=plot.big_wide_figsize)
plot.model_output(m_full3, scale=scale, offset=offset, ax=ax, xlabel='time/min', ylabel='expression', fontsize=20, portion=0.2)
ax.set_xlim(xlim)
ax.set_ylim(ylim)
ax.set_title('log likelihood: {ll:.3}'.format(ll=m_full3.log_likelihood()), fontsize=20)
mlai.write_figure(figure=fig,
filename='../slides/diagrams/gp/della-gatta-gene-gp3.svg',
transparent=True, frameon=True)
## Example: Prediction of Malaria Incidence in Uganda
As an example of using Gaussian process models within the full pipeline from data to decision, we'll consider the prediction of malaria incidence in Uganda. For the purposes of this study malaria reports come in two forms: HMIS reports from health centres and Sentinel data, which is curated by the WHO. There are limited sentinel sites and many HMIS sites.
The work is from Ricardo Andrade Pacheco's PhD thesis, completed in collaboration with John Quinn and Martin Mubangizi (Andrade-Pacheco et al. 2014; Mubangizi et al. 2014). John and Martin were initially from the AI-DEV group at the University of Makerere in Kampala; latterly they were based at UN Global Pulse in Kampala.
Malaria data is spatial data. Uganda is split into districts, and health reports can be found for each district. This suggests that models such as conditional random fields could be used for spatial modelling, but there are two complexities with this. First of all, occasionally districts split into two. Secondly, sentinel sites are a specific location within a district, such as Nagongera which is a sentinel site based in the Tororo district.
(Andrade-Pacheco et al. 2014; Mubangizi et al. 2014)
Stephen Kiprotich, the 2012 gold medal winner from the London Olympics, comes from Kapchorwa district, in eastern Uganda, near the border with Kenya.
The common standard for collecting health data on the African continent is from the Health management information systems (HMIS). However, this data suffers from missing values (Gething et al. 2006) and diagnosis of diseases like typhoid and malaria may be confounded.
World Health Organization Sentinel Surveillance systems are set up "when high-quality data are needed about a particular disease that cannot be obtained through a passive system". Several sentinel sites give accurate assessment of malaria disease levels in Uganda, including a site in Nagongera.
In collaboration with the AI Research Group at Makerere we chose to investigate whether Gaussian process models could be used to assimilate information from these two different sources of disease information. Further, we were interested in whether local information on rainfall and temperature could be used to improve malaria estimates.
The aim of the project was to use WHO Sentinel sites, alongside rainfall and temperature, to improve predictions from HMIS data of levels of malaria.
## Early Warning Systems
Health monitoring system for the Kabarole district. Here we have fitted the reports with a Gaussian process with an additive covariance function. It has two components, one is a long time scale component (in red above) the other is a short time scale component (in blue).
Monitoring proceeds by considering two aspects of the curve. Is the blue line (the short term report signal) above the red (which represents the long term trend)? If so, we have higher than expected reports. If this is the case and the gradient is still positive (i.e. reports are going up), we encode this with a red color. If it is the case and the gradient of the blue line is negative (i.e. reports are going down), we encode this with an amber color. Conversely, if the blue line is below the red and decreasing, we color green. On the other hand, if it is below red but increasing, we color yellow.
This gives us an early warning system for disease. Red is a bad situation getting worse, amber is bad, but improving. Green is good and getting better and yellow good but degrading.
Finally, there is a gray region which represents when the scale of the effect is small.
These colors can now be observed directly on a spatial map of the districts to give an immediate impression of the current status of the disease across the country.
An additive covariance function is derived from considering the result of summing two Gaussian processes together. If the first Gaussian process is g(⋅), governed by covariance $\kernelScalar_g(\cdot, \cdot)$, and the second process is h(⋅), governed by covariance $\kernelScalar_h(\cdot, \cdot)$, then the combined process f(⋅)=g(⋅)+h(⋅) is governed by a covariance function,
$$\kernelScalar_f(\inputVector, \inputVector^\prime) = \kernelScalar_g(\inputVector, \inputVector^\prime) + \kernelScalar_h(\inputVector, \inputVector^\prime)$$
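A minimal sketch of an additive covariance of the kind used for the health monitoring curves above: a long length scale component for the trend plus a short length scale component for the fluctuations (the parameter values below are illustrative, not the fitted ones):

```python
import numpy as np

def eq_cov(t, t_prime, alpha, lengthscale):
    """Exponentiated quadratic covariance for scalar inputs."""
    return alpha * np.exp(-(t - t_prime)**2 / (2.0 * lengthscale**2))

def additive_cov(t, t_prime):
    """Long time scale trend plus short time scale fluctuations."""
    return (eq_cov(t, t_prime, alpha=1.0, lengthscale=10.0)
            + eq_cov(t, t_prime, alpha=0.2, lengthscale=0.5))
```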
## Analysis of US Birth Rates
There's a nice analysis of US birth rates by Gaussian processes with additive covariances in Gelman et al. (2013). A combination of covariance functions are used to take account of weekly and yearly trends. The analysis is summarized on the cover of the book.
## Basis Function Covariance
The fixed basis function covariance just comes from the properties of a multivariate Gaussian, if we decide
$$\mappingFunctionVector=\basisMatrix\mappingVector$$
and then we assume
$$\mappingVector \sim \gaussianSamp{\zerosVector}{\alpha\eye}$$
then it follows from the properties of a multivariate Gaussian that
$$\mappingFunctionVector \sim \gaussianSamp{\zerosVector}{\alpha\basisMatrix\basisMatrix^\top}$$
meaning that the vector of observations from the function is jointly distributed as a Gaussian process and the covariance matrix is $\kernelMatrix = \alpha\basisMatrix \basisMatrix^\top$. Each element of the covariance matrix can then be found as the inner product between two rows of the basis function matrix.
%load -s basis_cov mlai.py
%load -s radial mlai.py
$$\kernel(\inputVector, \inputVector^\prime) = \basisVector(\inputVector)^\top \basisVector(\inputVector^\prime)$$
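A standalone sketch of the construction (the %load cells above pull the course's own basis_cov and radial from mlai.py; this version just spells out the algebra):

```python
import numpy as np

def basis_cov(Phi, alpha=1.0):
    """Covariance K = alpha * Phi Phi^T for a fixed basis matrix Phi,
    where row i of Phi holds the basis vector phi(x_i)."""
    Phi = np.asarray(Phi, dtype=float)
    return alpha * Phi @ Phi.T
```

Each entry of the result is $\alpha$ times an inner product of two rows of $\basisMatrix$, so the matrix is symmetric and positive semi-definite by construction.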
## Brownian Covariance
%load -s brownian_cov mlai.py
Brownian motion is also a Gaussian process. It follows a Gaussian random walk, with diffusion occurring at each time point driven by a Gaussian input. This implies it is both Markov and Gaussian. The covariance function for Brownian motion has the form
$$\kernelScalar(t, t^\prime)=\alpha \min(t, t^\prime)$$
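As a sketch of the formula (the course's brownian_cov in mlai.py may differ in interface):

```python
import numpy as np

def brownian_cov(t, t_prime, alpha=1.0):
    """Brownian motion covariance: alpha * min(t, t') for t, t' >= 0."""
    return alpha * np.minimum(t, t_prime)

# the marginal variance k(t, t) = alpha * t grows linearly with time,
# as expected for a random walk
var_at_3 = brownian_cov(3.0, 3.0)
```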
## MLP Covariance
%load -s mlp_cov mlai.py
The multi-layer perceptron (MLP) covariance, also known as the neural network covariance or the arcsin covariance, is derived by considering the infinite limit of a neural network.
$$\kernelScalar(\inputVector, \inputVector^\prime) = \alpha \arcsin\left(\frac{w \inputVector^\top \inputVector^\prime + b}{\sqrt{\left(w \inputVector^\top \inputVector + b + 1\right)\left(w \left.\inputVector^\prime\right.^\top \inputVector^\prime + b + 1\right)}}\right)$$
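A standalone sketch of this covariance (the course's mlp_cov in mlai.py may use a different parameterisation):

```python
import numpy as np

def mlp_cov(x, x_prime, alpha=1.0, w=1.0, b=1.0):
    """MLP (arcsin) covariance from the infinite-width neural network limit."""
    x = np.asarray(x, dtype=float)
    xp = np.asarray(x_prime, dtype=float)
    num = w * (x @ xp) + b
    den = np.sqrt((w * (x @ x) + b + 1.0) * (w * (xp @ xp) + b + 1.0))
    return alpha * np.arcsin(num / den)
```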
## RELU Covariance
%load -s relu_cov mlai.py
$$\kernelScalar(\inputVector, \inputVector^\prime) = \alpha \arcsin\left(\frac{w \inputVector^\top \inputVector^\prime + b} {\sqrt{\left(w \inputVector^\top \inputVector + b + 1\right) \left(w \left.\inputVector^\prime\right.^\top \inputVector^\prime + b + 1\right)}}\right)$$
## Sinc Covariance
Another approach to developing covariance functions exploits Bochner's theorem (Bochner 1959). Bochner's theorem tells us that any positive filter in Fourier space has an associated Gaussian process with a stationary covariance function. The covariance function is the inverse Fourier transform of the filter applied in Fourier space.
For example, in signal processing, band limitations are commonly applied as an assumption. We may believe that no frequency above w = 2 exists in the signal. This is equivalent to a rectangle function being applied as the filter in Fourier space.
The inverse Fourier transform of the rectangle function is the sinc(⋅) function. So the sinc is a valid covariance function, and it represents band limited signals.
Note that other covariance functions we've introduced can also be interpreted in this way. For example, the exponentiated quadratic covariance function can be Fourier transformed to see what the implied filter in Fourier space is. The Fourier transform of the exponentiated quadratic is an exponentiated quadratic, so the standard EQ covariance implies an EQ filter in Fourier space.
%load -s sinc_cov mlai.py
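A standalone sketch of a sinc covariance (numpy's sinc is the normalised $\sin(\pi z)/(\pi z)$, which is convenient here; the course's sinc_cov in mlai.py may be parameterised differently):

```python
import numpy as np

def sinc_cov(t, t_prime, alpha=1.0, w=1.0):
    """Band-limited covariance: inverse Fourier transform of a rectangular filter."""
    r = np.abs(t - t_prime)
    return alpha * np.sinc(w * r)  # np.sinc(z) = sin(pi z) / (pi z), np.sinc(0) = 1
```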
## Polynomial Covariance
$$\kernelScalar(\inputVector, \inputVector^\prime) = \alpha(w \inputVector^\top\inputVector^\prime + b)^d$$
## Periodic Covariance
$$\kernelScalar(\inputVector, \inputVector^\prime) = \alpha\exp\left(\frac{-2\sin(\pi rw)^2}{\lengthScale^2}\right)$$
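A standalone sketch of the periodic covariance, reading $r$ as the distance between the inputs (so the period of the process is $1/w$):

```python
import numpy as np

def periodic_cov(t, t_prime, alpha=1.0, w=1.0, lengthscale=1.0):
    """Periodic covariance; inputs one period (1/w) apart are fully correlated."""
    r = np.abs(t - t_prime)
    return alpha * np.exp(-2.0 * np.sin(np.pi * r * w)**2 / lengthscale**2)
```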
## Linear Model of Coregionalization Covariance
%load -s lmc_cov mlai.py
from IPython.core.display import HTML
HTML(anim.to_jshtml())
plot.save_animation(anim,
diagrams='../slides/diagrams/kern',
filename='lmc_covariance.html')
$$\kernelScalar(i, j, \inputVector, \inputVector^\prime) = \sum_q b^q_{i,j} \kernelScalar_q(\inputVector, \inputVector^\prime)$$
## Intrinsic Coregionalization Model Covariance
%load -s icm_cov mlai.py
from IPython.core.display import HTML
HTML(anim.to_jshtml())
plot.save_animation(anim,
diagrams='../slides/diagrams/kern',
filename='icm_covariance.html')
$$\kernelScalar(i, j, \inputVector, \inputVector^\prime) = b_{i,j} \kernelScalar(\inputVector, \inputVector^\prime)$$
## GPSS: Gaussian Process Summer School
If you're interested in finding out more about Gaussian processes, you can attend the Gaussian process summer school, or view the lectures and material on line. Details of the school, future events and past events can be found at the website http://gpss.cc.
## GPy: A Gaussian Process Framework in Python
GPy is a BSD licensed software code base for implementing Gaussian process models in python. This allows GPs to be combined with a wide variety of software libraries.
The software itself is available on GitHub and the team welcomes contributions.
The aim for GPy is to be a probabilistic-style programming language, i.e. you specify the model rather than the algorithm. As well as a large range of covariance functions the software allows for non-Gaussian likelihoods, multivariate outputs, dimensionality reduction and approximations for larger data sets.
## Other Software
GPy has inspired other software solutions, first of all GPflow, which uses TensorFlow's automatic differentiation engine to allow rapid prototyping of new covariance functions and algorithms. More recently, GPyTorch uses PyTorch for the same purpose.
GPy itself is being restructured with MXFusion as its computational engine to give similar capabilities.
## Acknowledgments
Stefanos Eleftheriadis, John Bronskill, Hugh Salimbeni, Rich Turner, Zhenwen Dai, Javier Gonzalez, Andreas Damianou, Mark Pullin, Michael Smith, James Hensman, John Quinn, Martin Mubangizi.
# References
Andrade-Pacheco, Ricardo, Martin Mubangizi, John Quinn, and Neil D. Lawrence. 2014. “Consistent Mapping of Government Malaria Records Across a Changing Territory Delimitation.” Malaria Journal 13 (Suppl 1). doi:10.1186/1475-2875-13-S1-P5.
Bochner, Salomon. 1959. Lectures on Fourier Integrals. Princeton University Press. http://books.google.co.uk/books?id=-vU02QewWK8C.
Cho, Youngmin, and Lawrence K. Saul. 2009. “Kernel Methods for Deep Learning.” In Advances in Neural Information Processing Systems 22, edited by Y. Bengio, D. Schuurmans, J. D. Lafferty, C. K. I. Williams, and A. Culotta, 342–50. Curran Associates, Inc. http://papers.nips.cc/paper/3628-kernel-methods-for-deep-learning.pdf.
Della Gatta, Giusy, Mukesh Bansal, Alberto Ambesi-Impiombato, Dario Antonini, Caterina Missero, and Diego di Bernardo. 2008. “Direct Targets of the Trp63 Transcription Factor Revealed by a Combination of Gene Expression Profiling and Reverse Engineering.” Genome Research 18 (6). Telethon Institute of Genetics; Medicine, 80131 Naples, Italy.: 939–48. doi:10.1101/gr.073601.107.
Gelman, Andrew, John B. Carlin, Hal S. Stern, and Donald B. Rubin. 2013. Bayesian Data Analysis. 3rd ed. Chapman and Hall.
Gething, Peter W., Abdisalan M. Noor, Priscilla W. Gikandi, Esther A. A. Ogara, Simon I. Hay, Mark S. Nixon, Robert W. Snow, and Peter M. Atkinson. 2006. “Improving Imperfect Data from Health Management Information Systems in Africa Using Space–Time Geostatistics.” PLoS Medicine 3 (6). Public Library of Science. doi:10.1371/journal.pmed.0030271.
Ioffe, Sergey, and Christian Szegedy. 2015. “Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift.” In Proceedings of the 32nd International Conference on Machine Learning, edited by Francis Bach and David Blei, 37:448–56. Proceedings of Machine Learning Research. Lille, France: PMLR. http://proceedings.mlr.press/v37/ioffe15.html.
Kalaitzis, Alfredo A., and Neil D. Lawrence. 2011. “A Simple Approach to Ranking Differentially Expressed Gene Expression Time Courses Through Gaussian Process Regression.” BMC Bioinformatics 12 (180). doi:10.1186/1471-2105-12-180.
MacKay, David J. C. 1992. “Bayesian Methods for Adaptive Models.” PhD thesis, California Institute of Technology.
McCulloch, Warren S., and Walter Pitts. 1943. “A Logical Calculus of the Ideas Immanent in Nervous Activity.” Bulletin of Mathematical Biophysics 5: 115–33.
Mubangizi, Martin, Ricardo Andrade-Pacheco, Michael Thomas Smith, John Quinn, and Neil D. Lawrence. 2014. “Malaria Surveillance with Multiple Data Sources Using Gaussian Process Models.” In 1st International Conference on the Use of Mobile ICT in Africa.
Neal, Radford M. 1994. “Bayesian Learning for Neural Networks.” PhD thesis, Dept. of Computer Science, University of Toronto.
Pearl, Judea. 1995. “From Bayesian Networks to Causal Networks.” In Probabilistic Reasoning and Bayesian Belief Networks, edited by A. Gammerman, 1–31. Alfred Waller.
Rasmussen, Carl Edward, and Christopher K. I. Williams. 2006. Gaussian Processes for Machine Learning. Cambridge, MA: MIT Press.
Steele, S, A Bilchik, J Eberhardt, P Kalina, A Nissan, E Johnson, I Avital, and A Stojadinovic. 2012. “Using Machine-Learned Bayesian Belief Networks to Predict Perioperative Risk of Clostridium Difficile Infection Following Colon Surgery.” Interact J Med Res 1 (2): e6. doi:10.2196/ijmr.2131.
Tipping, Michael E., and Christopher M. Bishop. 1999. “Probabilistic Principal Component Analysis.” Journal of the Royal Statistical Society, B 61 (3): 611–22. doi:10.1111/1467-9868.00196.
1. In classical statistics we often interpret these parameters, β, whereas in machine learning we are normally more interested in the result of the prediction, and less in the parameters themselves. Although this is changing with more need for accountability. In honour of this I normally use β when I care about the value of these parameters, and $\mappingVector$ when I care more about the quality of the prediction.
https://web2.0calc.com/questions/helpppp-nowwwww
# helpppp nowwwww
Find all values of $x$ such that $9 + \frac{27}{x} + \frac{8}{x^2} = 0.$ If you find more than one value, then list your solutions, separated by commas.
Apr 17, 2020
#1
$$9 + \frac{27}{x} + \frac{8}{x^2} = 0$$
Multiply both sides by x^2
and you will get $$9x^2 + 27x + 8 = 0$$
which factors as $(3x+1)(3x+8)=0$.
Set $3x+1 = 0$ and solve for $x$; you will get $x = -1/3$.
Set $3x+8 = 0$ and solve for $x$; you will get $x = -8/3$.
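You can double-check both answers by substituting them back into the original equation; exact fractions avoid any rounding doubt (this snippet is just an illustration):

```python
from fractions import Fraction

def lhs(x):
    # left-hand side of 9 + 27/x + 8/x^2
    return 9 + Fraction(27) / x + Fraction(8) / (x * x)

checks = [lhs(Fraction(-1, 3)), lhs(Fraction(-8, 3))]  # both come out to 0
```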
Apr 17, 2020
http://math.stackexchange.com/questions/46864/best-way-of-computing-the-decimal-representation-of-a-fraction-with-an-arbitrary
# Best way of computing the decimal representation of a fraction with an arbitrary precision?
Say you are given a fraction, e.g. $\frac{1}{37}$. What is the best way to compute its decimal notation given an arbitrary precision?
Is there a better way than to use a numerical algorithm, e.g. Newton's method? Is this what your calculator does in the background? And how about the other way around?
Well, as the number is rational, its decimal expansion is eventually periodic, so you can simply compute it using the division algorithm until the digits start repeating. As soon as you're there, you know the decimal expansion to arbitrary precision. (as in the maybe more familiar $1/6$ or $1/7$, for example). – t.b. Jun 22 '11 at 9:44
What do you mean by "the other way around"? – ShreevatsaR Jun 22 '11 at 9:53
@Theo: you are right. Is that also the way a calculator works in the background? And I guess that is also the case for irrational numbers? @ShreevatsaR: An algorithm that takes a decimal number and returns the equivalent fraction. – user12205 Jun 22 '11 at 10:15
@Theo: BTW, about memory: using the "baby steps giant steps" / "tortoise and hare" / "Floyd's cycle-finding algorithm" (the "rho" idea as in the Pollard rho algorithm), you can avoid the need for O(n) memory (where n is the denominator) and do it in O(1) memory, at the cost of possibly printing the periodic part more than once. – ShreevatsaR Jun 22 '11 at 11:06
@Theo: Oh, it's a simple idea; that's why it is rediscovered and has so many names. :-) Basically, to find the period $n$ of a eventually periodic sequence $x, f(x), f(f(x))$…, you can keep two values advancing through the sequence one step and two steps at a time respectively. Then you are guaranteed that within $n$ steps once the "slower" value enters the periodic part, they will meet again — the faster value gains on the slower value by one step each time — and when the two values are equal you know the period has been found. – ShreevatsaR Jun 22 '11 at 11:22
## 1
To compute the decimal notation to "infinite" precision, you can use only integer arithmetic, keeping remainders — the way you would do it with pencil and paper. For instance, to find $1/37$:
• $\lfloor 1/37 \rfloor = 0$, so put down "$0.$" and "bring down" a $0$: that is, update your current dividend (an archaic word for the number you're dividing by $37$) to $10 = 10(1 - 0\cdot 37)$.
• $\lfloor 10/37 \rfloor = 0$, so put down $0$ and update your dividend to $100 = 10(10 - 0\cdot 37)$.
• $\lfloor 100/37 \rfloor = 2$, so put down $2$ and update your dividend to $10(100 - 2\cdot37) = 260$
And so on. When you see some dividend repeat, you can stop because you know that from then on, the process will proceed the same way it did the last time you saw that dividend: you have found the periodic part of the decimal expansion. Because after each step the dividend is updated to 10 times a remainder modulo 37, there are only 37 possible remainders; you will always get a repeat after at most 37 steps.
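For completeness, here is that pencil-and-paper process as integer-only code (a sketch):

```python
def decimal_digits(numerator, denominator, n_digits):
    """First n_digits after the decimal point of numerator/denominator,
    computed with integer arithmetic only, as in long division."""
    digits = []
    remainder = numerator % denominator
    for _ in range(n_digits):
        remainder *= 10                      # "bring down" a zero
        digits.append(remainder // denominator)
        remainder %= denominator
    return digits

digits = decimal_digits(1, 37, 9)            # 1/37 = 0.027027...
```

Tracking when a remainder repeats (at most `denominator` steps) tells you where the periodic part begins and ends.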
## 2
The above is not what normal calculators (and floating-point calculations on a computer) do. They are built not for arbitrary precision, but for fixed precision. They also don't distinguish between rational and irrational numbers for floating-point computation. Different division methods are used (see Wikipedia), including the naive division algorithm, Newton's method, multiplication by the reciprocal (computed either with a specialized routine or even possibly a lookup table), Hensel lifting, etc. (See e.g. sigfpe's post on division by 7 using 2-adic numbers.) Something like Newton's method, optimized for the word size of the calculator/computer, is preferred. BTW, numbers are usually stored in mantissa-exponent form, as a pair of binary integers $(s,e)$ denoting the number $1.s \times 2^e$. (For instance $37.0$ which is $100101.0$ in binary may be stored as the pair $(00101,5)$.)
## 3
Given the decimal expansion of a number, to write it as a fraction is easy if you know that it is exact. For instance if you know that a number is exactly $0.453$, then it is $453/1000$. But usually with fixed precision you know the decimal expansion only approximately: given $0.333333333$, what you want is probably $1/3$ rather than $333333333/1000000000$. For this, the best tool is continued fractions. For instance, $1/37 = 0.\overline{027}$ but if you have $0.02702703$ instead, its continued fraction gives the sequence of convergents $0$, $1/36$, $1/37$, $245700/9090899$, $737101/27272734$, $982801/36363633$, $2702703/100000000$, and you can use your judgment to decide that $1/37$ is probably what you want.
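The convergents listed above come from the standard recurrence $p_k = a_k p_{k-1} + p_{k-2}$, $q_k = a_k q_{k-1} + q_{k-2}$ applied to the partial quotients $a_k$ of the continued fraction; a sketch using exact rationals:

```python
from fractions import Fraction

def convergents(x, max_terms=12):
    """Successive convergents p/q of the continued fraction of the exact
    rational x, via p_k = a_k*p_{k-1} + p_{k-2}, q_k = a_k*q_{k-1} + q_{k-2}."""
    p0, q0, p1, q1 = 0, 1, 1, 0           # seeds p_{-2}/q_{-2} and p_{-1}/q_{-1}
    result = []
    for _ in range(max_terms):
        a = x.numerator // x.denominator  # next partial quotient a_k
        p0, q0, p1, q1 = p1, q1, a * p1 + p0, a * q1 + q0
        result.append(Fraction(p1, q1))
        frac_part = x - a
        if frac_part == 0:                # expansion terminated: x reached exactly
            break
        x = 1 / frac_part
    return result

print([str(c) for c in convergents(Fraction(2702703, 100000000))])
# ['0', '1/36', '1/37', '245700/9090899', '737101/27272734',
#  '982801/36363633', '2702703/100000000']
```

Scanning this list for a convergent with a small denominator that is followed by a jump to much larger denominators is exactly the "use your judgment" step: here $1/37$ stands out.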
Very nice answer, thank you! One link I'd like to add is the Wikipedia article on digital division since it contains a few promising references and seems quite readable itself. – t.b. Jun 22 '11 at 11:01
@Theo: Thanks, added the link you suggested and fixed the stub! – ShreevatsaR Jun 22 '11 at 11:11
# Machine learning Classification model for binary input and output data
I have a large longitudinal dataset with 5-minute granularity covering a period of around 30 months from thousands of households. I would like to classify them using a binary output (0/1) based on the input, which is also a set of binary variables (sensors activated or not: 0/1). I have a training dataset available, labeled with the binary output (0/1) for the binary inputs.
I would like to know which machine learning model would be best for this type of case, where both inputs and outputs are binary in nature.
Is logistic regression one of the options?
Your problem is one of "sequence classification", for which Recurrent Neural Networks (RNNs), e.g. Long Short-Term Memory (LSTM) networks, are generally used.
See here for a good example.
and here for a technical paper.
Here is a specialized package for sequence classification which uses convolutional neural networks (CNN).
The CPT algorithm, an accurate method for sequence prediction, can also be used here. A continuous output can easily be rounded to 0 or 1 to get a binary result.
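As an aside (not part of the answer above), the logistic regression mentioned in the question is a workable baseline when each example is a fixed-length vector of binary sensor readings; a minimal from-scratch sketch on illustrative toy data:

```python
import math

def train_logistic(X, y, lr=0.5, epochs=2000):
    """Plain stochastic-gradient-descent logistic regression for binary (0/1)
    features and a binary (0/1) label. Returns (weights, bias)."""
    w, b = [0.0] * len(X[0]), 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = b + sum(wj * xj for wj, xj in zip(w, xi))
            p = 1.0 / (1.0 + math.exp(-z))          # sigmoid
            err = p - yi                            # gradient of the log loss
            b -= lr * err
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
    return w, b

def predict(w, b, xi):
    return 1 if b + sum(wj * xj for wj, xj in zip(w, xi)) > 0 else 0

# Illustrative toy data: the label happens to follow the second sensor.
X = [[0, 0], [0, 1], [1, 0], [1, 1]]
y = [0, 1, 0, 1]
w, b = train_logistic(X, y)
print([predict(w, b, xi) for xi in X])  # [0, 1, 0, 1]
```

For variable numbers of sensors (2 to 30, as in the comments below), the inputs would first need to be mapped to a fixed-length representation, e.g. per-sensor means, before a model like this applies.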
• Thanks for the reply @rnso. My outputs are discrete (0 means a person is at home, 1 means away) and my inputs are readings from the movement sensors. My input size is not constant, as it depends on the number of sensors (ranging from 2 to 30). We have collected labeled training data from a pilot study; my plan is to build a model based on this training data, and the other big datasets will be my test data. Dec 17 '18 at 17:54
• Best may be to get means from all sensors for a person so that you have one sequence per person. Go through the links given above, and also other links from an internet search, and you should be able to create a satisfactory model.
– rnso
Dec 18 '18 at 1:19
## Theory
##### Hall G
Moderator: Fanny Yang
Tue 19 Jul 10:30 a.m. PDT — noon PDT
Tue 19 July 10:30 - 10:50 PDT
(Oral)
##### Learning Mixtures of Linear Dynamical Systems
Yanxi Chen · H. Vincent Poor
We study the problem of learning a mixture of multiple linear dynamical systems (LDSs) from unlabeled short sample trajectories, each generated by one of the LDS models. Despite the wide applicability of mixture models for time-series data, learning algorithms that come with end-to-end performance guarantees are largely absent from existing literature. There are multiple sources of technical challenges, including but not limited to (1) the presence of latent variables (i.e. the unknown labels of trajectories); (2) the possibility that the sample trajectories might have lengths much smaller than the dimension $d$ of the LDS models; and (3) the complicated temporal dependence inherent to time-series data. To tackle these challenges, we develop a two-stage meta-algorithm, which is guaranteed to efficiently recover each ground-truth LDS model up to error $\tilde{O}(\sqrt{d/T})$, where $T$ is the total sample size. We validate our theoretical studies with numerical experiments, confirming the efficacy of the proposed algorithm.
Tue 19 July 10:50 - 10:55 PDT
(Spotlight)
##### Massively Parallel $k$-Means Clustering for Perturbation Resilient Instances
We consider $k$-means clustering of $n$ data points in Euclidean space in the Massively Parallel Computation (MPC) model, a computational model which is an abstraction of modern massively parallel computing systems such as MapReduce. Recent work provides evidence that getting an $O(1)$-approximate $k$-means solution for general input points using $o(\log n)$ rounds in the MPC model may be impossible under certain conditions [Ghaffari, Kuhn \& Uitto'2019]. However, real-world data points usually have better structure. One instance of interest is the set of data points which is perturbation resilient [Bilu \& Linial'2010]. In particular, a point set is $\alpha$-perturbation resilient for $k$-means if perturbing pairwise distances by multiplicative factors in the range $[1,\alpha]$ does not change the optimum $k$-means clusters. We bypass the worst case lower bound by considering the perturbation resilient input points and showing $o(\log n)$-round $k$-means clustering algorithms for these instances in the MPC model. Specifically, we show a fully scalable $(1+\varepsilon)$-approximate $k$-means clustering algorithm for an $O(\alpha)$-perturbation resilient instance in the MPC model using $O(1)$ rounds and ${O}_{\varepsilon,d}(n^{1+1/\alpha^2+o(1)})$ total space. If the space per machine is sufficiently larger than $k$, i.e., at least $k\cdot n^{\Omega(1)}$, we also develop an optimal $k$-means clustering algorithm for an $O(\alpha)$-perturbation resilient instance in MPC using $O(1)$ rounds and ${O}_d(n^{1+o(1)}\cdot(n^{1/\alpha^2}+k))$ total space.
Tue 19 July 10:55 - 11:00 PDT
(Spotlight)
##### Residual-Based Sampling for Online Outlier-Robust PCA
Tianhao Zhu · Jie Shen
Outlier-robust principal component analysis (ORPCA) has been broadly applied in scientific discovery in the last decades. In this paper, we study online ORPCA, an important variant that addresses the practical challenge that the data points arrive in a sequential manner and the goal is to recover the underlying subspace of the clean data with one pass of the data. Our main contribution is the first provable algorithm that enjoys comparable recovery guarantee to the best known batch algorithm, while significantly improving upon the state-of-the-art online ORPCA algorithms. The core technique is a robust version of the residual norm which, informally speaking, leverages not only the importance of a data point, but also how likely it behaves as an outlier.
Tue 19 July 11:00 - 11:05 PDT
(Spotlight)
##### Scaling Gaussian Process Optimization by Evaluating a Few Unique Candidates Multiple Times
Daniele Calandriello · Luigi Carratino · Alessandro Lazaric · Michal Valko · Lorenzo Rosasco
Computing a Gaussian process (GP) posterior has a computational cost cubic in the number of historical points. A reformulation of the same GP posterior highlights that this complexity mainly depends on how many \emph{unique} historical points are considered. This can have important implications in active learning settings, where the set of historical points is constructed sequentially by the learner. We show that sequential black-box optimization based on GPs (GP-Opt) can be made efficient by sticking to a candidate solution for multiple evaluation steps and switching only when necessary. Limiting the number of switches also limits the number of unique points in the history of the GP. Thus, the efficient GP reformulation can be used to exactly and cheaply compute the posteriors required to run the GP-Opt algorithms. This approach is especially useful in real-world applications of GP-Opt with high switch costs (e.g. switching chemicals in wet labs, data/model loading in hyperparameter optimization). As examples of this meta-approach, we modify two well-established GP-Opt algorithms, GP-UCB and GP-EI, to switch candidates as infrequently as possible, adapting rules from batched GP-Opt. These versions preserve all the theoretical no-regret guarantees while improving practical aspects of the algorithms such as runtime, memory complexity, and the ability to batch candidates and evaluate them in parallel.
Tue 19 July 11:05 - 11:10 PDT
(Spotlight)
##### Streaming Algorithms for Support-Aware Histograms
Justin Chen · Piotr Indyk · Tal Wagner
Histograms, i.e., piece-wise constant approximations, are a popular tool used to represent data distributions. Traditionally, the difference between the histogram and the underlying distribution (i.e., the approximation error) is measured using the L_p norm, which sums the differences between the two functions over all items in the domain. Although useful in many applications, the drawback of this error measure is that it treats approximation errors of all items in the same way, irrespective of whether the mass of an item is important for the downstream application that uses the approximation. As a result, even relatively simple distributions cannot be approximated by succinct histograms without incurring large error. In this paper, we address this issue by adapting the definition of approximation so that only the errors of the items that belong to the support of the distribution are considered. Under this definition, we develop efficient 1-pass and 2-pass streaming algorithms that compute near-optimal histograms in sub-linear space. We also present lower bounds on the space complexity of this problem. Surprisingly, under this notion of error, there is an exponential gap in the space complexity of 1-pass and 2-pass streaming algorithms. Finally, we demonstrate the utility of our algorithms on a collection of real and synthetic data sets.
Tue 19 July 11:10 - 11:15 PDT
(Spotlight)
##### Power-Law Escape Rate of SGD
Takashi Mori · Liu Ziyin · Kangqiao Liu · Masahito Ueda
Stochastic gradient descent (SGD) undergoes complicated multiplicative noise for the mean-square loss. We use this property of SGD noise to derive a stochastic differential equation (SDE) with simpler additive noise by performing a random time change. Using this formalism, we show that the log loss barrier $\Delta\log L=\log[L(\theta^s)/L(\theta^*)]$ between a local minimum $\theta^*$ and a saddle $\theta^s$ determines the escape rate of SGD from the local minimum, contrary to the previous results borrowing from physics that the linear loss barrier $\Delta L=L(\theta^s)-L(\theta^*)$ decides the escape rate. Our escape-rate formula strongly depends on the typical magnitude $h^*$ and the number $n$ of the outlier eigenvalues of the Hessian. This result explains an empirical fact that SGD prefers flat minima with low effective dimensions, giving an insight into implicit biases of SGD.
Tue 19 July 11:15 - 11:35 PDT
(Oral)
##### Generalized Results for the Existence and Consistency of the MLE in the Bradley-Terry-Luce Model
Heejong Bong · Alessandro Rinaldo
Ranking problems based on pairwise comparisons, such as those arising in online gaming, often involve a large pool of items to order. In these situations, the gap in performance between any two items can be significant, and the smallest and largest winning probabilities can be very close to zero or one. Furthermore, each item may be compared only to a subset of all the items, so that not all pairwise comparisons are observed. In this paper, we study the performance of the Bradley-Terry-Luce model for ranking from pairwise comparison data under more realistic settings than those considered in the literature so far. In particular, we allow for near-degenerate winning probabilities and arbitrary comparison designs. We obtain novel results about the existence of the maximum likelihood estimator (MLE) and the corresponding $\ell_2$ estimation error without the bounded winning probability assumption commonly used in the literature and for arbitrary comparison graph topologies. Central to our approach is the reliance on the Fisher information matrix to express the dependence on the graph topologies and the impact of the values of the winning probabilities on the estimation risk and on the conditions for the existence of the MLE. Our bounds recover existing results as special cases but are more broadly applicable.
Tue 19 July 11:35 - 11:40 PDT
(Spotlight)
##### Faster Algorithms for Learning Convex Functions
Ali Siahkamari · Durmus Alp Emre Acar · Christopher Liao · Kelly Geyer · Venkatesh Saligrama · Brian Kulis
The task of approximating an arbitrary convex function arises in several learning problems such as convex regression, learning with a difference of convex (DC) functions, and learning Bregman or $f$-divergences. In this paper, we develop and analyze an approach for solving a broad range of convex function learning problems that is faster than state-of-the-art approaches. Our approach is based on a 2-block ADMM method where each block can be computed in closed form. For the task of convex Lipschitz regression, we establish that our proposed algorithm converges with iteration complexity of $O(n\sqrt{d}/\epsilon)$ for a dataset $\bm X \in \mathbb R^{n\times d}$ and $\epsilon > 0$. Combined with per-iteration computation complexity, our method converges with the rate $O(n^3 d^{1.5}/\epsilon+n^2 d^{2.5}/\epsilon+n d^3/\epsilon)$. This new rate improves the state of the art rate of $O(n^5d^2/\epsilon)$ if $d = o( n^4)$. Further we provide similar solvers for DC regression and Bregman divergence learning. Unlike previous approaches, our method is amenable to the use of GPUs. We demonstrate on regression and metric learning experiments that our approach is over 100 times faster than existing approaches on some data sets, and produces results that are comparable to state of the art.
Tue 19 July 11:40 - 11:45 PDT
(Spotlight)
##### Feature selection using e-values
Subhabrata Majumdar · Snigdhansu Chatterjee
In the context of supervised learning, we introduce the concept of e-value. An e-value is a scalar quantity that represents the proximity of the sampling distribution of parameter estimates in a model trained on a subset of features to that of the model trained on all features (i.e. the full model). Under general conditions, a rank ordering of e-values separates models that contain all essential features from those that do not. For a p-dimensional feature space, this requires fitting only the full model and evaluating p+1 models, as opposed to the traditional requirement of fitting and evaluating 2^p models. The above e-values framework is applicable to a wide range of parametric models. We use data depths and a fast resampling-based algorithm to implement a feature selection procedure, providing consistency results. Through experiments across several model settings and synthetic and real datasets, we establish that the e-values can be a promising general alternative to existing model-specific methods of feature selection.
Tue 19 July 11:45 - 11:50 PDT
(Spotlight)
##### ActiveHedge: Hedge meets Active Learning
Bhuvesh Kumar · Jacob Abernethy · Venkatesh Saligrama
We consider the classical problem of multi-class prediction with expert advice, but with an active learning twist. In this new setting the learner will only query the labels of a small number of examples, but still aims to minimize regret to the best expert as usual; the learner is also allowed a very short "burn-in" phase where it can fast-forward and query certain highly-informative examples. We design an algorithm that utilizes Hedge (aka Exponential Weights) as a subroutine, and we show that under a very particular combinatorial constraint on the matrix of expert predictions we can obtain a very strong regret guarantee while querying very few labels. This constraint, which we refer to as $\zeta$-compactness, or just compactness, can be viewed as a non-stochastic variant of the disagreement coefficient, another popular parameter used to reason about the sample complexity of active learning in the IID setting. We also give a polynomial-time algorithm to calculate the $\zeta$-compactness of a matrix up to an approximation factor of 3.
Tue 19 July 11:50 - 11:55 PDT
(Spotlight)
##### One-Pass Algorithms for MAP Inference of Nonsymmetric Determinantal Point Processes
Aravind Reddy · Ryan A. Rossi · Zhao Song · Anup Rao · Tung Mai · Nedim Lipka · Gang Wu · Eunyee Koh · Nesreen K Ahmed
In this paper, we initiate the study of one-pass algorithms for solving the maximum-a-posteriori (MAP) inference problem for Non-symmetric Determinantal Point Processes (NDPPs). In particular, we formulate streaming and online versions of the problem and provide one-pass algorithms for solving these problems. In our streaming setting, data points arrive in an arbitrary order and the algorithms are constrained to use a single-pass over the data as well as sub-linear memory, and only need to output a valid solution at the end of the stream. Our online setting has an additional requirement of maintaining a valid solution at any point in time. We design new one-pass algorithms for these problems and show that they perform comparably to (or even better than) the offline greedy algorithm while using substantially lower memory.
Tue 19 July 11:55 - 12:00 PDT
(Spotlight)
##### Deciphering Lasso-based Classification Through a Large Dimensional Analysis of the Iterative Soft-Thresholding Algorithm
Malik TIOMOKO · Ekkehard Schnoor · Mohamed El Amine Seddik · Igor Colin · Aladin Virmaux
This paper proposes a theoretical analysis of a Lasso-based classification algorithm. Leveraging on a realistic regime where the dimension of the data $p$ and their number $n$ are of the same order of magnitude, the theoretical classification error is derived as a function of the data statistics. As a result, insights into the functioning of the Lasso in classification and its differences with competing algorithms are highlighted. Our work is based on an original analysis of the Iterative Soft-Thresholding Algorithm (ISTA), which may be of independent interest beyond the particular problem studied here and may be adapted to similar iterative schemes. A theoretical optimization of the model's hyperparameters is also provided, which allows for the data- and time-consuming cross-validation to be avoided. Finally, several applications on synthetic and real data are provided to validate the theoretical study and justify its impact in the design and understanding of algorithms of practical interest.